Runtime Tasking
Taskflow allows you to interact with the scheduling runtime by taking a runtime object as an argument of a task. This is mostly useful for designing recursive parallel algorithms that require dynamic tasking on the fly.
Create a Runtime Object
Taskflow allows users to define a runtime task that takes a referenced tf::Runtime object as its argument. A tf::Runtime object provides a set of methods to interact with the underlying scheduling runtime, such as scheduling a task or co-running a task graph. The example below creates a runtime task that forcefully schedules a conditioned task which would never run under the graph's dependency constraints:
```cpp
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
```
When the condition task A completes and returns 0, the scheduler moves on to task B. Under normal circumstances, tasks C and D will not run because their conditional dependencies never happen. This constraint can be broken by forcefully scheduling C and/or D via a runtime object of a task that resides in the same graph. Here, task B calls tf::Runtime::schedule to run task C, even though the weak dependency between A and C will never happen based on the graph structure itself. As a result, we will see both B and C in the output:
```
B    # B leverages a runtime object to schedule C out of its dependency constraint
C
```
Acquire the Running Executor
You can acquire a reference to the running executor using tf::Runtime::executor. The executor associated with a runtime object is the executor that runs the parent task of that runtime object.
```cpp
tf::Executor executor;
tf::Taskflow taskflow;
taskflow.emplace([&](tf::Runtime& rt){
  assert(&(rt.executor()) == &executor);
});
executor.run(taskflow).wait();
```
Run a Task Graph Asynchronously
A tf::Runtime object can run a custom task graph using tf::Runtime::corun. The calling worker does not block while waiting for the graph to finish; instead, it continues executing tasks from its work-stealing loop until the given graph completes:
```cpp
// create a custom graph
tf::Taskflow graph;
graph.emplace([](){ std::cout << "independent task 1\n"; });
graph.emplace([](){ std::cout << "independent task 2\n"; });

taskflow.emplace([&](tf::Runtime& rt){
  // coruns the graph without blocking the calling worker of this runtime
  rt.corun(graph);
});
executor.run_n(taskflow, 10000);
Although tf::Executor is thread-safe and allows you to submit a taskflow from within a task, blocking the calling worker on the result can introduce deadlock. In the example below, the executor has only two workers; if both workers block inside executor.run(tf).wait(), no worker remains to make progress on the submitted taskflows:
```cpp
tf::Executor executor(2);
tf::Taskflow taskflow;
std::array<tf::Taskflow, 1000> others;

for(size_t n=0; n<1000; n++) {
  for(size_t i=0; i<500; i++) {
    others[n].emplace([&](){});
  }
  taskflow.emplace([&executor, &tf=others[n]](){
    // blocking the worker can introduce deadlock where
    // all workers are waiting for their taskflows to finish
    executor.run(tf).wait();
  });
}
executor.run(taskflow).wait();
```
Using tf::Runtime::corun avoids this deadlock, because the calling worker does not block on the result but co-runs the given taskflow through its work-stealing loop until completion:
```cpp
tf::Executor executor(2);
tf::Taskflow taskflow;
std::array<tf::Taskflow, 1000> others;

for(size_t n=0; n<1000; n++) {
  for(size_t i=0; i<500; i++) {
    others[n].emplace([&](){});
  }
  taskflow.emplace([&tf=others[n]](tf::Runtime& rt){
    // the caller worker will not block on wait but corun these
    // taskflows through its work-stealing loop
    rt.corun(tf);
  });
}
executor.run(taskflow).wait();
```
Run a Task Asynchronously
One of the most powerful features of tf::Runtime is its ability to launch asynchronous tasks on the fly using tf::Runtime::silent_async. The example below computes Fibonacci numbers through recursive asynchronous tasking:
```cpp
#include <taskflow/taskflow.hpp>

size_t fibonacci(size_t N, tf::Runtime& rt) {

  if(N < 2) return N;

  size_t res1, res2;
  rt.silent_async([N, &res1](tf::Runtime& rt1){ res1 = fibonacci(N-1, rt1); });

  // tail optimization for the right child
  res2 = fibonacci(N-2, rt);

  // use corun to avoid blocking the worker from waiting the two children tasks
  // to finish
  rt.corun();

  return res1 + res2;
}

int main() {

  tf::Executor executor;

  size_t N = 5, res;
  executor.silent_async([N, &res](tf::Runtime& rt){ res = fibonacci(N, rt); });
  executor.wait_for_all();

  std::cout << N << "-th Fibonacci number is " << res << '\n';

  return 0;
}
```
The figure below shows the execution diagram, where the suffix *_1 represents the left child spawned by its parent runtime.
For more details, please refer to Asynchronous Tasking and Fibonacci Number.