Taskflow allows a task to interact directly with the scheduling runtime by taking a tf::Runtime object as its argument. Through this handle, a task can spawn new work dynamically, synchronise with sub-tasks cooperatively, and implement recursive parallel algorithms — capabilities that are not possible with ordinary static tasks. We recommend reading Asynchronous Tasking and Asynchronous Tasking with Dependencies before this page.
An ordinary Taskflow task is a pure callable with no connection to the scheduler that runs it. A runtime task breaks this boundary by accepting a tf::Runtime& parameter, giving the task a live handle to the executor. Any callable that takes tf::Runtime& as its argument is automatically recognised as a runtime task by Taskflow:
The real power of a runtime task lies in its ability to spawn and synchronise sub-tasks on the fly, described in the sections below.
tf::Runtime::async and tf::Runtime::silent_async launch unordered async tasks from within a running runtime task. Both calls are thread-safe and submit the new tasks immediately to the executor's work-stealing scheduler.
A key property of runtime-spawned tasks is implicit synchronisation: all tasks spawned from a tf::Runtime are guaranteed to finish before the runtime task itself completes and control passes to the next task in the graph. You do not need to manually join them — the runtime handles this automatically at the end of its scope.
The example below spawns 1000 async tasks from runtime task A. Task B runs after A in the static graph. Thanks to the implicit join, B is guaranteed to observe counter == 1000 with no additional synchronisation required:
tf::Runtime::dependent_async and tf::Runtime::silent_dependent_async let you build a dynamic task graph with explicit dependency edges from within a runtime task. This mirrors the executor-level API in Asynchronous Tasking with Dependencies, with the same implicit synchronisation guarantee: all dependent-async tasks spawned from the runtime are joined before the runtime task completes.
The example below builds a sequential chain of 1001 dependent-async tasks inside a single runtime task. Each task asserts a specific value of counter, which is enforced by the dependency edges:
tf::Runtime::corun allows a runtime task to explicitly wait for all its currently spawned sub-tasks to finish at any point during execution, not just at the end of its scope. Unlike a blocking wait, corun does not suspend the calling worker. Instead, the worker remains active in the executor's work-stealing loop, picking up and executing available tasks while waiting — keeping all threads productive.
A particularly important advantage of this cooperative model is that corun preserves the call stack of the invoking runtime task. The runtime task stays live on the worker while corun executes; when all sub-tasks finish, execution resumes exactly where it left off with all local variables and state intact. This makes it possible to implement recursive parallel algorithms where each level of recursion spawns sub-tasks and then waits cooperatively, building up a tree of live runtime contexts without ever blocking a thread.
The example below implements parallel Fibonacci using recursive runtime tasks. At each level, the left child is spawned as an async runtime task while the right child is computed inline. rt.corun() then waits cooperatively for the left child, resuming with the local variables res1 and res2 exactly as they were:
The figure below shows the execution diagram for fibonacci(4). The suffix _1 denotes the left child spawned by its parent runtime: