tf::Runtime class
class to include a runtime object in a task
A runtime object allows users to interact with the scheduling runtime inside a task, such as scheduling an active task, spawning a subflow, and so on.
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
A runtime object is associated with the worker and the executor that runs the task.
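As a minimal sketch of this association (assuming tf::Worker::id and tf::Executor::num_workers, as provided by recent Taskflow releases), a runtime task can inspect both the executor and the worker it runs on:

tf::Executor executor(4);
tf::Taskflow taskflow;
taskflow.emplace([&](tf::Runtime& rt){
  // the runtime task belongs to the executor that runs its parent taskflow
  assert(&(rt.executor()) == &executor);
  // the runtime task is carried out by one of that executor's workers
  assert(rt.worker().id() < executor.num_workers());
});
executor.run(taskflow).wait();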
Public functions
- auto executor() -> Executor&
  - obtains the running executor
- auto worker() -> Worker&
  - acquires a reference to the underlying worker
- void schedule(Task task)
  - schedules an active task immediately to the worker's queue
- template<typename F> auto async(F&& f)
  - runs the given callable asynchronously
- template<typename P, typename F> auto async(P&& params, F&& f)
  - runs the given callable asynchronously
- template<typename F> void silent_async(F&& f)
  - runs the given function asynchronously without returning any future object
- template<typename P, typename F> void silent_async(P&& params, F&& f)
  - runs the given function asynchronously without returning any future object
- template<typename T> void corun(T&& target)
  - co-runs the given target and waits until it completes
- void corun_all()
  - coruns all asynchronous tasks spawned by this runtime with other workers
Function documentation
Executor& tf::Runtime::executor()
obtains the running executor
The running executor of a runtime task is the executor that runs the parent taskflow of that runtime task.
tf::Executor executor;
tf::Taskflow taskflow;
taskflow.emplace([&](tf::Runtime& rt){
  assert(&(rt.executor()) == &executor);
});
executor.run(taskflow).wait();
void tf::Runtime::schedule(Task task)
schedules an active task immediately to the worker's queue
| Parameters | |
| --- | --- |
| task | the given active task to schedule immediately |
This member function immediately schedules an active task to the task queue of the associated worker in the runtime task. An active task is a task in a running taskflow. The task may or may not be running, and scheduling that task will immediately put the task into the task queue of the worker that is running the runtime task. Consider the following example:
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
The executor will first run the condition task A, which returns 0 to inform the scheduler to go to the runtime task B. During the execution of B, it directly schedules task C without going through the normal taskflow graph scheduling process. At this moment, task C is active because its parent taskflow is running. When the taskflow finishes, we will see both B and C in the output.
template<typename F>
auto tf::Runtime::async(F&& f)
runs the given callable asynchronously
| Template parameters | |
| --- | --- |
| F | callable type |

| Parameters | |
| --- | --- |
| f | callable object |
The method creates an asynchronous task to launch the given function on the given arguments. The difference to tf::Executor::async is that the created asynchronous task pertains to the runtime object; applications can explicitly issue tf::Runtime::corun_all to wait for all asynchronous tasks spawned by the runtime to finish. For example:
std::atomic<int> counter(0);
taskflow.emplace([&](tf::Runtime& rt){
  auto fu1 = rt.async([&](){ counter++; });
  auto fu2 = rt.async([&](){ counter++; });
  fu1.get();
  fu2.get();
  assert(counter == 2);

  // spawn 100 asynchronous tasks from the worker of the runtime
  for(int i=0; i<100; i++) {
    rt.async([&](){ counter++; });
  }

  // wait for the 100 asynchronous tasks to finish
  rt.corun_all();
  assert(counter == 102);
});
This method is thread-safe and can be called by multiple workers that hold a reference to the runtime. For example, the code below spawns 100 tasks from the worker of a runtime, and each of the 100 tasks spawns another task that will be run by another worker.
std::atomic<int> counter(0);
taskflow.emplace([&](tf::Runtime& rt){
  // the worker of the runtime spawns 100 tasks, each spawning another task
  // that will be run by another worker
  for(int i=0; i<100; i++) {
    rt.async([&](){
      counter++;
      rt.async([&](){ counter++; });  // counter must be captured by reference
    });
  }
  // wait for the 200 asynchronous tasks to finish
  rt.corun_all();
  assert(counter == 200);
});
template<typename P, typename F>
auto tf::Runtime::async(P&& params, F&& f)
runs the given callable asynchronously
| Template parameters | |
| --- | --- |
| P | task parameters type |
| F | callable type |

| Parameters | |
| --- | --- |
| params | task parameters |
| f | callable |
taskflow.emplace([&](tf::Runtime& rt){
  auto future = rt.async("my task", [](){});
  future.get();
});
template<typename F>
void tf::Runtime::silent_async(F&& f)
runs the given function asynchronously without returning any future object
| Template parameters | |
| --- | --- |
| F | callable type |

| Parameters | |
| --- | --- |
| f | callable |
This member function is more efficient than tf::Runtime::async and is encouraged when the result of the asynchronous task is not needed. For example:
std::atomic<int> counter(0);
taskflow.emplace([&](tf::Runtime& rt){
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun_all();
  assert(counter == 100);
});
This member function is thread-safe.
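Similar to tf::Runtime::async, tasks spawned through tf::Runtime::silent_async may themselves spawn more tasks through the same runtime. The following sketch mirrors the async example above and assumes the same thread-safety guarantee applies to silent_async:

std::atomic<int> counter(0);
taskflow.emplace([&](tf::Runtime& rt){
  // each of the 100 spawned tasks spawns another task through the runtime,
  // possibly from a different worker
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){
      counter++;
      rt.silent_async([&](){ counter++; });
    });
  }
  // wait for all 200 asynchronous tasks to finish
  rt.corun_all();
  assert(counter == 200);
});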
template<typename P, typename F>
void tf::Runtime::silent_async(P&& params, F&& f)
runs the given function asynchronously without returning any future object
| Template parameters | |
| --- | --- |
| P | task parameters type |
| F | callable type |

| Parameters | |
| --- | --- |
| params | task parameters |
| f | callable |
taskflow.emplace([&](tf::Runtime& rt){
  rt.silent_async("my task", [](){});
  rt.corun_all();
});
template<typename T>
void tf::Runtime::corun(T&& target)
co-runs the given target and waits until it completes
A corunnable target must have tf::Graph& T::graph() defined.
// co-run a taskflow and wait until all tasks complete
tf::Taskflow taskflow1, taskflow2;
taskflow1.emplace([](){ std::cout << "running taskflow1\n"; });
taskflow2.emplace([&](tf::Runtime& rt){
  std::cout << "running taskflow2\n";
  rt.corun(taskflow1);
});
executor.run(taskflow2).wait();
Although tf::Runtime::corun blocks until the given target completes, the caller thread (worker) is not blocked (e.g., sleeping or holding any lock). Instead, the caller thread joins the work-stealing loop of the executor and returns when all tasks in the target have completed.
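As an illustrative sketch (not a definitive pattern), co-running a taskflow that contains several independent tasks lets the calling worker help execute those tasks through work stealing rather than sit idle:

tf::Executor executor(2);
tf::Taskflow inner, outer;

// an inner taskflow with several independent tasks
for(int i=0; i<8; i++) {
  inner.emplace([i](){ std::cout << "inner task " << i << '\n'; });
}

outer.emplace([&](tf::Runtime& rt){
  // the calling worker joins the executor's work-stealing loop and
  // helps run the tasks of inner until all of them complete
  rt.corun(inner);
});

executor.run(outer).wait();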
void tf::Runtime::corun_all()
corun all asynchronous tasks spawned by this runtime with other workers
Coruns all asynchronous tasks (tf::Runtime::async, tf::Runtime::silent_async) spawned by this runtime with other workers until all of them finish. For example:
std::atomic<size_t> counter{0};
taskflow.emplace([&](tf::Runtime& rt){
  // spawn 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun_all();
  assert(counter == 100);

  // spawn another 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun_all();
  assert(counter == 200);
});