Runtime class
Class to include a runtime object in a task.
A runtime object provides an interface for interacting with the scheduling system from within a task (i.e., the parent task of this runtime). It enables operations such as spawning asynchronous tasks, executing tasks cooperatively, and implementing recursive parallelism. The runtime guarantees an implicit join at the end of its scope, so all spawned tasks will finish before the parent runtime task continues to its successors.
```cpp
tf::Executor executor(num_threads);
tf::Taskflow taskflow;

std::atomic<size_t> counter(0);

tf::Task A = taskflow.emplace([&](tf::Runtime& rt){
  // spawn 1000 asynchronous tasks from this runtime task
  for(size_t i=0; i<1000; i++) {
    rt.silent_async([&](){ counter.fetch_add(1, std::memory_order_relaxed); });
  }
  // implicit synchronization at the end of the runtime scope
});

tf::Task B = taskflow.emplace([&](){
  REQUIRE(counter.load(std::memory_order_relaxed) == 1000);
});

A.precede(B);
executor.run(taskflow).wait();
```
A runtime object is associated with the worker and the executor that runs its parent task.
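As a brief illustration of this association, the minimal sketch below queries both handles from inside a runtime task. It assumes tf::Worker::id() and tf::Executor::num_workers() from the executor API; the assertions only check that the handles refer back to the running executor and one of its workers.

```cpp
tf::Executor executor(4);
tf::Taskflow taskflow;

taskflow.emplace([&](tf::Runtime& rt){
  // the runtime refers back to the executor that runs its parent task ...
  assert(&(rt.executor()) == &executor);
  // ... and to the worker (one of the executor's threads) that is executing it
  assert(rt.worker().id() < executor.num_workers());
});

executor.run(taskflow).wait();
```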
Public functions
- auto executor() -> Executor&
- obtains the running executor
- auto worker() -> Worker&
- acquire a reference to the underlying worker
- void schedule(Task task)
- schedules an active task immediately to the worker's queue
- template<typename F> auto async(F&& f) -> auto
- runs the given callable asynchronously
- template<typename P, typename F> auto async(P&& params, F&& f) -> auto
- runs the given callable asynchronously
- template<typename F> void silent_async(F&& f)
- runs the given function asynchronously without returning any future object
- template<typename P, typename F> void silent_async(P&& params, F&& f)
- runs the given function asynchronously without returning any future object
- template<typename F, typename... Tasks, std::enable_if_t<all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr> auto silent_dependent_async(F&& func, Tasks&&... tasks) -> tf::AsyncTask
- runs the given function asynchronously when the given predecessors finish
- template<typename P, typename F, typename... Tasks, std::enable_if_t<is_task_params_v<P> && all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr> auto silent_dependent_async(P&& params, F&& func, Tasks&&... tasks) -> tf::AsyncTask
- runs the given function asynchronously when the given predecessors finish
- template<typename F, typename I, std::enable_if_t<!std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr> auto silent_dependent_async(F&& func, I first, I last) -> tf::AsyncTask
- runs the given function asynchronously when the given range of predecessors finish
- template<typename P, typename F, typename I, std::enable_if_t<is_task_params_v<P> && !std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr> auto silent_dependent_async(P&& params, F&& func, I first, I last) -> tf::AsyncTask
- runs the given function asynchronously when the given range of predecessors finish
- template<typename F, typename... Tasks, std::enable_if_t<all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr> auto dependent_async(F&& func, Tasks&&... tasks) -> auto
- runs the given function asynchronously when the given predecessors finish
- template<typename P, typename F, typename... Tasks, std::enable_if_t<is_task_params_v<P> && all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr> auto dependent_async(P&& params, F&& func, Tasks&&... tasks) -> auto
- runs the given function asynchronously when the given predecessors finish
- template<typename F, typename I, std::enable_if_t<!std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr> auto dependent_async(F&& func, I first, I last) -> auto
- runs the given function asynchronously when the given range of predecessors finish
- template<typename P, typename F, typename I, std::enable_if_t<is_task_params_v<P> && !std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr> auto dependent_async(P&& params, F&& func, I first, I last) -> auto
- runs the given function asynchronously when the given range of predecessors finish
- void corun()
- corun all tasks spawned by this runtime with other workers
- void corun_all()
- equivalent to tf::Runtime::corun; kept as an alias for legacy purposes
- auto is_cancelled() -> bool
- verifies if the parent task has been cancelled
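A long-running runtime task can poll is_cancelled to stop early when its run has been cancelled. The following is a minimal sketch of this pattern, assuming cancellation is requested through tf::Future::cancel() on the handle returned by tf::Executor::run:

```cpp
tf::Executor executor;
tf::Taskflow taskflow;

taskflow.emplace([](tf::Runtime& rt){
  // keep working until the parent task is cancelled
  while(!rt.is_cancelled()) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));  // simulate work
  }
});

tf::Future<void> fu = executor.run(taskflow);
fu.cancel();  // request cancellation of this run
fu.get();     // wait for the run to finish
```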
Function documentation
Executor& tf::Runtime::executor()
obtains the running executor
The running executor of a runtime task is the executor that runs the parent taskflow of that runtime task.
```cpp
tf::Executor executor;
tf::Taskflow taskflow;

taskflow.emplace([&](tf::Runtime& rt){
  assert(&(rt.executor()) == &executor);
});

executor.run(taskflow).wait();
```
void tf::Runtime::schedule(Task task)
schedules an active task immediately to the worker's queue
| Parameters | |
|---|---|
| task | the given active task to schedule immediately |
This member function immediately schedules an active task to the task queue of the worker that is running this runtime task. An active task is a task in a running taskflow; it may or may not be executing at the time of the call. Consider the following example:
```cpp
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
```
The executor first runs the condition task A, which returns 0 to direct the scheduler to its first successor, the runtime task B. During its execution, B directly schedules task C without going through the normal taskflow graph scheduling process. At this moment, task C is active because its parent taskflow is running. When the taskflow finishes, the output contains both B and C.
template<typename F>
auto tf::Runtime::async(F&& f)
runs the given callable asynchronously
| Template parameters | |
|---|---|
| F | callable type |
| Parameters | |
| f | callable object |
This method creates an asynchronous task that executes the given function with the specified arguments. Unlike tf::Runtime::silent_async, it returns a future object that you can use to retrieve the result or synchronize with the completion of the task.
```cpp
std::atomic<int> counter(0);

taskflow.emplace([&](tf::Runtime& rt){
  auto fu1 = rt.async([&](){ counter++; });
  auto fu2 = rt.async([&](){ counter++; });
  fu1.get();
  fu2.get();
  assert(counter == 2);

  // spawn 100 asynchronous tasks from the worker of the runtime
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }

  // explicitly wait for the 100 asynchronous tasks to finish
  rt.corun();
  assert(counter == 102);

  // do something else afterwards ...
});
```
template<typename P, typename F>
auto tf::Runtime::async(P&& params, F&& f)
runs the given callable asynchronously
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| Parameters | |
| params | task parameters |
| f | callable |
Similar to tf::Runtime::async, but takes task parameters (e.g., a task name) before the callable.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  auto future = rt.async("my task", [](){});
  future.get();
});
```
template<typename F>
void tf::Runtime::silent_async(F&& f)
runs the given function asynchronously without returning any future object
| Template parameters | |
|---|---|
| F | callable type |
| Parameters | |
| f | callable |
This function is more efficient than tf::Runtime::async and is preferable when you do not need a future object to retrieve the result or synchronize the execution.
```cpp
std::atomic<int> counter(0);

taskflow.emplace([&](tf::Runtime& rt){
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 100);
});
```
This member function is thread-safe.
template<typename P, typename F>
void tf::Runtime::silent_async(P&& params, F&& f)
runs the given function asynchronously without returning any future object
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| Parameters | |
| params | task parameters |
| f | callable |
Similar to tf::Runtime::silent_async, but takes task parameters (e.g., a task name) before the callable.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  rt.silent_async("my task", [](){});
});
```
template<typename F, typename... Tasks, std::enable_if_t<all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr>
tf::AsyncTask tf::Runtime::silent_dependent_async(F&& func, Tasks&&... tasks)
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| Tasks | task types convertible to tf::AsyncTask |
| Parameters | |
| func | callable object |
| tasks | asynchronous tasks on which this execution depends |
| Returns | a tf::AsyncTask handle |
This member function is more efficient than tf::Runtime::dependent_async because it does not return a future object. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async([](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async([](){ printf("B\n"); });
  rt.silent_dependent_async([](){ printf("C runs after A and B\n"); }, A, B);
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.wait_for_all();
```
template<typename P, typename F, typename... Tasks, std::enable_if_t<is_task_params_v<P> && all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr>
tf::AsyncTask tf::Runtime::silent_dependent_async(P&& params, F&& func, Tasks&&... tasks)
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| Tasks | task types convertible to tf::AsyncTask |
| Parameters | |
| params | task parameters |
| func | callable object |
| tasks | asynchronous tasks on which this execution depends |
| Returns | a tf::AsyncTask handle |
This member function is more efficient than tf::Runtime::dependent_async because it does not return a future object. The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async("A", [](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async("B", [](){ printf("B\n"); });
  rt.silent_dependent_async(
    "C", [](){ printf("C runs after A and B\n"); }, A, B
  );
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.wait_for_all();
```
This member function is thread-safe.
template<typename F, typename I, std::enable_if_t<!std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr>
tf::AsyncTask tf::Runtime::silent_dependent_async(F&& func, I first, I last)
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| I | iterator type |
| Parameters | |
| func | callable object |
| first | iterator to the beginning (inclusive) |
| last | iterator to the end (exclusive) |
| Returns | a tf::AsyncTask handle |
This member function is more efficient than tf::Runtime::dependent_async because it does not return a future object. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async([](){ printf("A\n"); }),
    rt.silent_dependent_async([](){ printf("B\n"); })
  };
  rt.silent_dependent_async(
    [](){ printf("C runs after A and B\n"); }, array.begin(), array.end()
  );
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.wait_for_all();
```
template<typename P, typename F, typename I, std::enable_if_t<is_task_params_v<P> && !std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr>
tf::AsyncTask tf::Runtime::silent_dependent_async(P&& params, F&& func, I first, I last)
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| I | iterator type |
| Parameters | |
| params | task parameters |
| func | callable object |
| first | iterator to the beginning (inclusive) |
| last | iterator to the end (exclusive) |
| Returns | a tf::AsyncTask handle |
This member function is more efficient than tf::Runtime::dependent_async because it does not return a future object. The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async("A", [](){ printf("A\n"); }),
    rt.silent_dependent_async("B", [](){ printf("B\n"); })
  };
  rt.silent_dependent_async(
    "C", [](){ printf("C runs after A and B\n"); }, array.begin(), array.end()
  );
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.run(taskflow).wait();
```
template<typename F, typename... Tasks, std::enable_if_t<all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr>
auto tf::Runtime::dependent_async(F&& func, Tasks&&... tasks)
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| Tasks | task types convertible to tf::AsyncTask |
| Parameters | |
| func | callable object |
| tasks | asynchronous tasks on which this execution depends |
| Returns | a pair of a tf::AsyncTask handle and a std::future that will hold the result of the execution |
The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually holds the result of its execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async([](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async([](){ printf("B\n"); });
  auto [C, fuC] = rt.dependent_async(
    [](){ printf("C runs after A and B\n"); return 1; }, A, B
  );
  fuC.get();  // C finishes, which in turn means both A and B finish
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.run(taskflow).wait();
```
You can mix the use of tf::AsyncTask handles returned by tf::Runtime::dependent_async and tf::Runtime::silent_dependent_async when specifying task dependencies.
template<typename P, typename F, typename... Tasks, std::enable_if_t<is_task_params_v<P> && all_same_v<AsyncTask, std::decay_t<Tasks>...>, void>* = nullptr>
auto tf::Runtime::dependent_async(P&& params, F&& func, Tasks&&... tasks)
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| Tasks | task types convertible to tf::AsyncTask |
| Parameters | |
| params | task parameters |
| func | callable object |
| tasks | asynchronous tasks on which this execution depends |
| Returns | a pair of a tf::AsyncTask handle and a std::future that will hold the result of the execution |
The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually holds the result of its execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async("A", [](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async("B", [](){ printf("B\n"); });
  auto [C, fuC] = rt.dependent_async(
    "C", [](){ printf("C runs after A and B\n"); return 1; }, A, B
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B finish
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.run(taskflow).wait();
```
You can mix the use of tf::AsyncTask handles returned by tf::Runtime::dependent_async and tf::Runtime::silent_dependent_async when specifying task dependencies.
template<typename F, typename I, std::enable_if_t<!std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr>
auto tf::Runtime::dependent_async(F&& func, I first, I last)
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| I | iterator type |
| Parameters | |
| func | callable object |
| first | iterator to the beginning (inclusive) |
| last | iterator to the end (exclusive) |
| Returns | a pair of a tf::AsyncTask handle and a std::future that will hold the result of the execution |
The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually holds the result of its execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async([](){ printf("A\n"); }),
    rt.silent_dependent_async([](){ printf("B\n"); })
  };
  auto [C, fuC] = rt.dependent_async(
    [](){ printf("C runs after A and B\n"); return 1; },
    array.begin(), array.end()
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B finish
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.run(taskflow).wait();
```
You can mix the use of tf::AsyncTask handles returned by tf::Runtime::dependent_async and tf::Runtime::silent_dependent_async when specifying task dependencies.
template<typename P, typename F, typename I, std::enable_if_t<is_task_params_v<P> && !std::is_same_v<std::decay_t<I>, AsyncTask>, void>* = nullptr>
auto tf::Runtime::dependent_async(P&& params, F&& func, I first, I last)
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| P | task parameters type |
| F | callable type |
| I | iterator type |
| Parameters | |
| params | task parameters |
| func | callable object |
| first | iterator to the beginning (inclusive) |
| last | iterator to the end (exclusive) |
| Returns | a pair of a tf::AsyncTask handle and a std::future that will hold the result of the execution |
The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually holds the result of its execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async("A", [](){ printf("A\n"); }),
    rt.silent_dependent_async("B", [](){ printf("B\n"); })
  };
  auto [C, fuC] = rt.dependent_async(
    "C", [](){ printf("C runs after A and B\n"); return 1; },
    array.begin(), array.end()
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B finish
  // implicit synchronization of all tasks at the end of runtime's scope
});

executor.run(taskflow).wait();
```
You can mix the use of tf::AsyncTask handles returned by tf::Runtime::dependent_async and tf::Runtime::silent_dependent_async when specifying task dependencies.
void tf::Runtime::corun()
corun all tasks spawned by this runtime with other workers
Coruns all tasks spawned by this runtime cooperatively with other workers in the same executor until all of these tasks finish. Under cooperative execution, the calling worker is not blocked while waiting; it continues participating in the work-stealing loop, executing available tasks alongside other workers.
```cpp
std::atomic<size_t> counter{0};

taskflow.emplace([&](tf::Runtime& rt){
  // spawn 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 100);

  // spawn another 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 200);
});
```
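Coruning is also what makes the recursive parallelism mentioned at the top of this page practical: each level of a divide-and-conquer computation spawns one branch asynchronously, computes the other branch itself, and coruns to wait for the spawned work. The naive Fibonacci below is a minimal sketch of this pattern; it assumes the callable given to tf::Runtime::silent_async may take a tf::Runtime& parameter (available in recent Taskflow releases) so that the spawned branch can itself recurse in parallel.

```cpp
// naive recursive Fibonacci; the tf::Runtime&-taking async callable is an assumption
size_t fibonacci(size_t N, tf::Runtime& rt) {
  if(N < 2) {
    return N;
  }
  size_t res1, res2;
  // spawn the (N-1) branch asynchronously; it recurses with its own runtime
  rt.silent_async([N, &res1](tf::Runtime& rt1){ res1 = fibonacci(N-1, rt1); });
  // compute the (N-2) branch on the calling worker
  res2 = fibonacci(N-2, rt);
  // cooperatively wait for the spawned branch (and its descendants) to finish
  rt.corun();
  return res1 + res2;
}

// usage
tf::Executor executor;
tf::Taskflow taskflow;
size_t result = 0;
taskflow.emplace([&](tf::Runtime& rt){ result = fibonacci(7, rt); });
executor.run(taskflow).wait();
assert(result == 13);  // fib(7) == 13
```

Because each level coruns rather than blocks, deep recursion does not exhaust the executor's worker threads.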