We implement a speculative execution pattern using condition tasks to route a batch of queries through a fast approximate path or a slow exact path based on a runtime probe. This example demonstrates how condition tasks express performance-driven branching — not just correctness-driven control flow — and how to correctly wire the convergence of two exclusive branches into a single downstream task without deadlock.
Many real-world computations have two valid execution strategies for the same result: a fast path that succeeds most of the time with low cost, and a slow fallback that is always correct but expensive. Classic examples include:
In each case the routing decision is made by a cheap probe whose cost is negligible compared to the fallback. The structure is always: probe → fast path or slow path → merge result. This is exactly the branching structure condition tasks are designed to express.
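Stripped of any task library, that structure is just a small higher-order function: a cheap probe selects one of two strategies that compute the same result. A minimal sketch (the name `route` and its signature are illustrative, not from any library):

```cpp
#include <cassert>
#include <utility>

// probe() selects the strategy: when it returns true, the fast path's
// answer is already exact; otherwise the slow exact path runs.
// Both paths must agree on the result type.
template <class Probe, class Fast, class Slow>
auto route(Probe&& probe, Fast&& fast, Slow&& slow) {
    return std::forward<Probe>(probe)() ? std::forward<Fast>(fast)()
                                        : std::forward<Slow>(slow)();
}
```

The task-graph version of this section spreads the same three roles (probe, two strategies) across condition tasks so the scheduler, not a branch instruction, does the routing.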
We process a batch of N string keys against a set stored in a hash table. Before querying the hash table, each key is probed against a Bloom filter — a space-efficient probabilistic data structure that can definitively rule out absent keys in O(k) hash operations, where k is the number of hash functions.
The two execution paths per key are:

- Fast path: the Bloom filter reports the key as not present, so the chain records "absent" immediately and skips the hash table.
- Slow path: the Bloom filter reports the key as probably present, so the chain performs the full hash-table lookup.
The Bloom filter has no false negatives: if it says not present, the key is guaranteed absent. It may have false positives: a key the filter calls probably present may still be absent, in which case the hash-table lookup returns not found after doing the full work. The fast-path short circuit is therefore exact in the negative case and conservative (but correct) in the positive case.
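A minimal Bloom filter with these properties can be built from two base hashes combined by double hashing (index_i = h1 + i*h2), a standard construction that preserves the no-false-negative guarantee. The class shape and sizes below are illustrative, not the text's exact implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Minimal Bloom filter: k indices per key derived from two base hashes
// via double hashing. false => definitely absent; true => probably present.
class BloomFilter {
public:
    BloomFilter(std::size_t bits, std::size_t k) : bits_(bits, false), k_(k) {}

    void insert(const std::string& key) {
        for (std::size_t i = 0; i < k_; ++i) bits_[index(key, i)] = true;
    }

    bool probably_contains(const std::string& key) const {
        for (std::size_t i = 0; i < k_; ++i)
            if (!bits_[index(key, i)]) return false;  // ruled out exactly
        return true;                                  // maybe a false positive
    }

private:
    std::size_t index(const std::string& key, std::size_t i) const {
        std::uint64_t h1 = std::hash<std::string>{}(key);
        // second base hash from a salted copy; forced odd for a full cycle
        std::uint64_t h2 = std::hash<std::string>{}(key + "#") | 1;
        return static_cast<std::size_t>((h1 + i * h2) % bits_.size());
    }

    std::vector<bool> bits_;
    std::size_t k_;
};
```

Inserted keys always come back `true` (no false negatives); absent keys come back `false` except for the occasional false positive, which the slow path then resolves correctly.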
For a single key the task graph is a diamond: a condition task (speculate) probes the Bloom filter and routes to either fast_handler or slow_fallback, both of which must then converge on a single merge point before the result is recorded.
A naive first attempt makes fast_handler and slow_fallback regular tasks and feeds both into merge via strong edges:
This graph deadlocks. Task merge has two strong dependencies — one from fast_handler and one from slow_fallback — but only the branch selected by speculate will actually execute. The other branch is never scheduled, so merge's strong dependency counter drops from 2 to 1 and never reaches 0. merge waits forever. This is Pitfall 2 from Avoid Common Pitfalls applied to the switch pattern: just as both the switch task and each case task must be condition tasks in a switch control flow (see Implement Switch Control Flow), here both the routing task and the two branch tasks must be condition tasks so that their outgoing edges to merge are weak, bypassing the strong dependency counter entirely.
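The stuck counter can be made concrete with a toy model of the scheduler's reference counting (a deliberate simplification for illustration, not Taskflow's actual implementation): strong edges decrement a counter, a task runs only when its counter reaches zero, and the condition task fires exactly one branch.

```cpp
#include <cassert>

// Toy model of the naive graph: merge starts with a strong counter of 2
// (one strong edge from each branch), but speculate schedules exactly one
// branch, so only that branch's completion ever decrements the counter.
int naive_merge_counter_after_run(bool take_fast_path) {
    int merge_strong = 2;            // edges from fast_handler and slow_fallback
    bool fast_ran = take_fast_path;  // exactly one branch executes...
    bool slow_ran = !take_fast_path;
    if (fast_ran) --merge_strong;    // ...so exactly one decrement happens
    if (slow_ran) --merge_strong;
    return merge_strong;             // stuck at 1 > 0: merge is never enqueued
}
```

Whichever branch is taken, the counter ends at 1, so merge never becomes runnable.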
The correct per-item graph is:
Every node that has an outgoing edge to merge is a condition task. merge itself is a condition task that always returns 0 to route to done, making the merge→done edge weak. The dependency counts for the single-item graph are:
| Task | Strong | Weak | Total |
|---|---|---|---|
| prepare | 0 | 0 | 0 |
| speculate | 1 | 0 | 1 |
| fast_handler | 0 | 1 | 1 |
| slow_fallback | 0 | 1 | 1 |
| merge | 0 | 2 | 2 |
| done | 0 | 1 | 1 |
merge has two weak incoming edges but only one fires per execution, so it is enqueued exactly once. done has one weak incoming edge from merge and is also enqueued exactly once.
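The enqueue-exactly-once behaviour can be traced with a toy model of weak-edge scheduling (again a simplification, not Taskflow itself): a weak edge enqueues its target directly when it fires, with no counter involved, and each condition task fires exactly one outgoing edge.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy trace of the corrected single-item graph. Returns how many times each
// task is enqueued during one execution.
std::map<std::string, int> run_corrected_chain(bool fast_path) {
    std::map<std::string, int> enqueued;
    enqueued["prepare"] += 1;    // source task: zero total dependencies
    enqueued["speculate"] += 1;  // strong counter 1 -> 0 when prepare completes
    const std::string branch = fast_path ? "fast_handler" : "slow_fallback";
    enqueued[branch] += 1;       // speculate's single fired weak edge
    enqueued["merge"] += 1;      // the taken branch's weak edge (only one fires)
    enqueued["done"] += 1;       // merge always returns 0, firing merge -> done
    return enqueued;
}
```

Either routing decision enqueues merge and done exactly once, and the untaken branch not at all.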
For a batch of N keys, each key gets its own independent (prepare, speculate, fast_handler, slow_fallback, merge, done) chain. The N chains have no mutual dependencies and execute fully in parallel across available workers. A final collect task gathers results; it waits for all N done tasks via strong dependencies, so it is enqueued exactly once after every chain has completed.
Because done is reached via a chain of weak edges (merge→done is weak), done itself has zero strong dependencies. It is a regular task, so its outgoing edge to collect is strong. This gives collect exactly N strong dependencies, one per chain, and it runs once all keys have been processed.
We implement a minimal Bloom filter and a hash-table backing store. For each key in the batch we build the six-task chain and wire it into the taskflow. Results are recorded into a per-key array so that collect can aggregate without any synchronization.
A sample run with the keys above produces output such as:
The fast and slow paths for different keys interleave freely because the N chains are fully independent. The collect task appears last, after all done tasks have completed.
spec_wrong arises because fast_handler and slow_fallback are regular tasks, making their edges to merge strong. merge then waits for two strong dependencies that can never both be satisfied. The fix is for every task that has an outgoing edge to a shared convergence point to be a condition task, so those edges are weak and bypass the strong dependency counter. This is the same rule as for switch control flow (see Implement Switch Control Flow): whenever a condition task fans out to k branches that all converge on the same downstream task, all k branch tasks must also be condition tasks.

merge is a pass-through condition task: it performs no computation and exists solely as a controlled convergence point. It has two incoming weak edges (exactly one fires) and one outgoing weak edge to done. Its return value is always 0, the index of its only successor. This pattern, a no-op condition task used purely to merge exclusive branches, is the condition-task analogue of the auxiliary join task used for the same purpose with regular tasks in Incremental Build Graph.

done bridges the condition-task chain to the strong-dependency world. collect needs strong incoming edges so that it is enqueued exactly once after all N chains have completed. done is a regular task reached via a weak edge from merge, so its outgoing edge to collect is strong. It serves as the re-entry point from the condition-task world (all weak edges, activated by routing) back to the regular-task world (strong edges, activated by reference counting). Without done, collect would need N weak incoming edges from the merge tasks and would be enqueued up to N times simultaneously, a task race analogous to Pitfall 2 from Avoid Common Pitfalls.

Finally, the untaken branch costs nothing. When speculate returns 0, the scheduler enqueues fast_handler immediately and never touches slow_fallback: it is never dequeued, never invoked, and its slot in the worker queue is never occupied.
On a workload where 90% of keys are absent and the Bloom filter has a low false-positive rate, 90% of the per-key chains skip the hash-table lookup entirely, and those workers are freed to pick up the next available chain sooner. This throughput gain is automatic: it requires no explicit thread management, no condition variables, and no external job queue.

A production implementation would size the filter with k = ln(2) * m/n hash functions, where m is the bit-array size and n is the number of inserted elements, to minimise the false-positive rate for a given memory budget. The graph structure and condition task wiring are identical regardless of k; only the probe function changes.
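The sizing rule k = ln(2) * m/n, together with the standard false-positive estimate p = (1 - e^(-kn/m))^k, can be checked numerically. The function names below are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Optimal number of hash functions for m bits and n inserted keys:
// k* = ln(2) * m / n (rounded to the nearest integer, at least 1).
int optimal_k(double m, double n) {
    int k = static_cast<int>(std::lround(std::log(2.0) * m / n));
    return k < 1 ? 1 : k;
}

// Standard false-positive estimate p = (1 - exp(-k*n/m))^k.
double false_positive_rate(double m, double n, int k) {
    return std::pow(1.0 - std::exp(-static_cast<double>(k) * n / m), k);
}
```

At 10 bits per key, for example, the rule gives k = 7 and a false-positive rate under 1%, which is the regime where most absent keys take the fast path.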