Compile Taskflow with CUDA

Install CUDA Compiler

To compile Taskflow with CUDA code, you need the CUDA compiler, nvcc, which ships with the CUDA Toolkit. Please visit the official CUDA Toolkit download page to install it.
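
After installing the toolkit, you can verify that nvcc is available and check its version:

~$ nvcc --version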

Compile Source Code Directly

Taskflow's GPU programming interface for CUDA is tf::cudaFlow. Consider the following simple.cu program that launches a single kernel function to output a message:

#include <taskflow/taskflow.hpp>
#include <taskflow/cudaflow.hpp>  
#include <taskflow/cuda/for_each.hpp>

int main(int argc, const char** argv) {

  tf::Executor executor;
  tf::Taskflow taskflow;

  tf::Task task1 = taskflow.emplace([](){}).name("cpu task");
  tf::Task task2 = taskflow.emplace([](){
    // create a cudaFlow of a single-threaded task
    tf::cudaFlow cf;
    cf.single_task([] __device__ () { printf("hello cudaFlow!\n"); });
    
    // launch the cudaflow through a stream
    tf::cudaStream stream;
    cf.run(stream);
    stream.synchronize();
  }).name("gpu task");

  task1.precede(task2);

  executor.run(taskflow).wait();
  return 0;
}

The easiest way to compile Taskflow with CUDA code (e.g., cudaFlow, kernels) is to use nvcc:

~$ nvcc -std=c++17 -I path/to/taskflow/ --extended-lambda simple.cu -o simple
~$ ./simple
hello cudaFlow!
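
Besides tf::cudaFlow::single_task, tf::cudaFlow also provides a kernel method that launches your own kernels, taking the grid dimensions, block dimensions, shared-memory size, the kernel function, and its arguments. The following is a minimal sketch (the kernel add_one, the array data, and the size N are made up for illustration); it compiles the same way as simple.cu:

#include <taskflow/taskflow.hpp>
#include <taskflow/cudaflow.hpp>

// a toy kernel that increments each element of an array
__global__ void add_one(int* data, size_t N) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if(i < N) {
    data[i] += 1;
  }
}

int main() {

  const size_t N = 1024;

  // unified memory keeps this sketch short; cudaMalloc plus explicit
  // copies works as well
  int* data;
  cudaMallocManaged(&data, N*sizeof(int));

  tf::cudaFlow cf;
  // launch add_one with ceil(N/256) blocks of 256 threads each
  // and no shared memory
  cf.kernel((N+255)/256, 256, 0, add_one, data, N);

  // launch the cudaflow through a stream
  tf::cudaStream stream;
  cf.run(stream);
  stream.synchronize();

  cudaFree(data);
  return 0;
}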

Compile Source Code Separately

Large GPU applications often compile a program into separate objects and link them together to form an executable or a library. You can compile your CPU code and GPU code separately in the same way, using nvcc for the GPU code and another compiler (such as g++ or clang++) for the CPU code. Consider the following example that defines two tasks in two separate source files (main.cpp and cudaflow.cpp):

// main.cpp
#include <taskflow/taskflow.hpp>
#include <iostream>  // for std::cout

tf::Task make_cudaflow(tf::Taskflow& taskflow);  // create a cudaFlow task

int main() {

  tf::Executor executor;
  tf::Taskflow taskflow;

  tf::Task task1 = taskflow.emplace([](){ std::cout << "main.cpp!\n"; })
                           .name("cpu task");
  tf::Task task2 = make_cudaflow(taskflow);

  task1.precede(task2);

  executor.run(taskflow).wait();

  return 0;
}

// cudaflow.cpp
#include <taskflow/taskflow.hpp>
#include <taskflow/cudaflow.hpp>

tf::Task make_cudaflow(tf::Taskflow& taskflow) {
  return taskflow.emplace([](){
    // create a cudaFlow of a single-threaded task
    tf::cudaFlow cf;
    cf.single_task([] __device__ () { printf("cudaflow.cpp!\n"); });
    
    // launch the cudaflow through a stream
    tf::cudaStream stream;
    cf.run(stream);
    stream.synchronize();
  }).name("gpu task");
}

Compile each source file to an object file, using g++ for the CPU code and nvcc for the GPU code:

~$ g++ -std=c++17 -I path/to/taskflow -c main.cpp -o main.o
~$ nvcc -std=c++17 --extended-lambda -x cu -I path/to/taskflow \
        -dc cudaflow.cpp -o cudaflow.o
~$ ls
# now we have the two compiled .o objects, main.o and cudaflow.o
main.o cudaflow.o 

The --extended-lambda option tells nvcc to generate GPU code for lambdas annotated with __device__. The -x cu option tells nvcc to treat the input files as .cu files containing both CPU and GPU code; by default, nvcc treats .cpp files as CPU-only code. This option is required here for nvcc to generate device code, and it is also a handy way to avoid renaming source files in larger projects. The -dc option tells nvcc to generate relocatable device code for later linking.
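
For instance, if you name the source file cudaflow.cu instead, nvcc recognizes it as CUDA source automatically and you can drop -x cu:

~$ nvcc -std=c++17 --extended-lambda -I path/to/taskflow \
        -dc cudaflow.cu -o cudaflow.o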

You may also need to specify the target architecture with the -arch option so that nvcc generates code for a compatible SM architecture. For instance, the following command requires device code linking to have compute capability 7.5 or later:

~$ nvcc -std=c++17 --extended-lambda -x cu -arch=sm_75 -I path/to/taskflow \
        -dc cudaflow.cpp -o cudaflow.o
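
If your program must run on several GPU generations, nvcc also accepts multiple -gencode options to embed code for each target architecture in the same object (compute capabilities 7.5 and 8.0 below are just examples):

~$ nvcc -std=c++17 --extended-lambda -x cu -I path/to/taskflow \
        -gencode arch=compute_75,code=sm_75 \
        -gencode arch=compute_80,code=sm_80 \
        -dc cudaflow.cpp -o cudaflow.o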

Link Objects Using nvcc

Linking compiled object files with nvcc is nothing special: just replace your normal host compiler with nvcc, and it takes care of all the necessary steps:

~$ nvcc main.o cudaflow.o -o main

# run the main program 
~$ ./main
main.cpp!
cudaflow.cpp!

Link Objects Using Different Linkers

You can use a compiler other than nvcc for the final link step. However, since your CPU compiler does not know how to link CUDA device code, you have to add a step to your build that has nvcc link the CUDA device code first, using the option -dlink:

~$ nvcc -o gpuCode.o -dlink main.o cudaflow.o

This step links all the device object code and places it into gpuCode.o.

To complete the link to an executable, you can use, for example, ld or g++.

# replace /usr/local/cuda/lib64 with your own CUDA library installation path
~$ g++ -pthread gpuCode.o main.o cudaflow.o \
   -L /usr/local/cuda/lib64/ -lcudart -o main

# run the main program
~$ ./main
main.cpp!
cudaflow.cpp!

We give g++ all of the objects again because it needs the CPU object code, which is not in gpuCode.o. The device code stored in the original objects, main.o and cudaflow.o, does not conflict with the code in gpuCode.o: g++ ignores device code because it does not know how to link it, and the device code in gpuCode.o is already linked and ready to use.