tf::PartitionerBase< C > Class Template Reference

class to derive a partitioner for scheduling parallel algorithms

#include <taskflow/algorithm/partitioner.hpp>

Public Types

using closure_wrapper_type = C
 the closure type
 

Public Member Functions

 PartitionerBase ()=default
 default constructor
 
 PartitionerBase (size_t chunk_size)
 construct a partitioner with the given chunk size
 
 PartitionerBase (size_t chunk_size, C &&closure_wrapper)
 construct a partitioner with the given chunk size and closure wrapper
 
size_t chunk_size () const
 query the chunk size of this partitioner
 
void chunk_size (size_t cz)
 update the chunk size of this partitioner
 
const C & closure_wrapper () const
 acquire an immutable access to the closure wrapper object
 
C & closure_wrapper ()
 acquire a mutable access to the closure wrapper object
 
template<typename F>
void closure_wrapper (F &&fn)
 modify the closure wrapper object
 
template<typename F>
TF_FORCE_INLINE decltype(auto) operator() (F &&callable)
 wraps the given callable with the associated closure wrapper
 

Static Public Attributes

static constexpr bool is_default_wrapper_v = std::is_same_v<C, DefaultClosureWrapper>
 indicating if the given closure wrapper is a default wrapper (i.e., empty)
 

Detailed Description

template<typename C = DefaultClosureWrapper>
class tf::PartitionerBase< C >

class to derive a partitioner for scheduling parallel algorithms

Template Parameters
C	closure wrapper type

The class provides base methods to derive a partitioner that can be used to schedule parallel iterations (e.g., tf::Taskflow::for_each).

A partitioner defines the scheduling method for running parallel algorithms, such as tf::Taskflow::for_each, tf::Taskflow::reduce, and so on. By default, Taskflow provides several partitioners, including tf::StaticPartitioner, tf::DynamicPartitioner, and tf::GuidedPartitioner.

Depending on the application, the partitioning algorithm can significantly impact performance. For example, if a parallel-iteration workload performs a regular amount of work per iteration, tf::StaticPartitioner can deliver the best performance. On the other hand, if the work per iteration is irregular and unbalanced, tf::GuidedPartitioner or tf::DynamicPartitioner can outperform tf::StaticPartitioner. In most situations, tf::GuidedPartitioner delivers decent performance and is therefore used as the default partitioner.

Attention
Giving a partition size of 0 lets the Taskflow runtime automatically determine the partition size for the given partitioner.

In addition to partition size, the application can specify a closure wrapper for a partitioner. A closure wrapper allows the application to wrap a partitioned task (i.e., closure) with a custom function object that performs additional tasks. For example:

tf::Executor executor;
tf::Taskflow taskflow;
taskflow.for_each_index(0, 100, 1,
  [](int i){
    printf("%d\n", i);
  },
  tf::StaticPartitioner(0, [](auto&& closure){
    // do something before invoking the partitioned task
    // ...
    // invoke the partitioned task
    closure();
    // do something else after invoking the partitioned task
    // ...
  })
);
executor.run(taskflow).wait();
Attention
The default closure wrapper (tf::DefaultClosureWrapper) does nothing but invoke the partitioned task (closure).

The documentation for this class was generated from the following file:

taskflow/algorithm/partitioner.hpp