1. Introduction
__syncthreads() only synchronizes threads within a single block, whereas Cooperative Groups can synchronize across an entire grid and even across multiple GPU devices.
The relevant API is cudaLaunchCooperativeKernel, declared as:
template < class T >
__host__ cudaError_t cudaLaunchCooperativeKernel ( const T* func, dim3 gridDim, dim3 blockDim, void** args, size_t sharedMem = 0, cudaStream_t stream = 0 ) [inline]
Launches a device function.
Parameters
func
- Device function symbol
gridDim
- Grid dimensions
blockDim
- Block dimensions
args
- Arguments
sharedMem
- Shared memory (defaults to 0)
stream
- Stream identifier (defaults to NULL)
cudaLaunchCooperativeKernel launches the kernel func on a grid of gridDim (gridDim.x × gridDim.y × gridDim.z) blocks, each containing blockDim (blockDim.x × blockDim.y × blockDim.z) threads. [Currently each block can hold at most 1024 threads. Note also that a cooperative launch requires all blocks of the grid to be resident on the device at the same time, so the grid size is bounded by the number of SMs times the maximum resident blocks per SM, not by the ordinary grid-dimension limits.] The device running the kernel must report a non-zero value for the attribute cudaDevAttrCooperativeLaunch.
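To make the usage concrete, here is a minimal, hedged sketch of a cooperative launch (not compiled here; it assumes a single device that reports cudaDevAttrCooperativeLaunch, and uses cooperative_groups::this_grid() for the grid-wide barrier):

```cuda
#include <cstdio>
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void kernel(int *data) {
    cg::grid_group grid = cg::this_grid();
    data[grid.thread_rank()] = 1;   // phase 1: every thread in the grid writes
    grid.sync();                    // grid-wide barrier (needs a cooperative launch)
    // phase 2: now safe to read what other blocks wrote
}

int main() {
    int supported = 0;
    cudaDeviceGetAttribute(&supported, cudaDevAttrCooperativeLaunch, 0);
    if (!supported) { printf("cooperative launch unsupported\n"); return 1; }

    int *data;
    dim3 grid(4), block(256);       // must fit entirely on the device at once
    cudaMalloc(&data, grid.x * block.x * sizeof(int));
    void *args[] = { &data };
    // Template overload from the signature above; sharedMem/stream default to 0.
    cudaLaunchCooperativeKernel(kernel, grid, block, args);
    cudaDeviceSynchronize();
    cudaFree(data);
    return 0;
}
```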
2. Multi-block thread synchronization
With cooperative groups, the kernel must be launched through cudaLaunchCooperativeKernel. A simpler alternative is to build the synchronization yourself with atomics on a bitfield, for example:
// A global variable with 64 bits can track up to 64 blocks
// (gridDim.x < 64 here, because of the SollMask shift below);
// use an array if you need to track more blocks.
// unsigned long long matches the 64-bit atomic overloads; zero it
// from the host (e.g. cudaMemset) before each launch, since resetting
// it inside the kernel races with blocks that finish early.
__device__ unsigned long long CompleteMask;
__global__ void doWork() {
    //do lots of work
    const auto SollMask = (1ULL << gridDim.x) - 1; // 1ULL: gridDim.x may exceed 31
    __syncthreads(); // the whole block must finish before signalling
    if (threadIdx.x == 0) {
        // atomicOr returns the old value; spin until every block's bit is set.
        // This deadlocks unless all blocks are resident at once.
        while (atomicOr(&CompleteMask, 1ULL << blockIdx.x) != SollMask) { /*do nothing*/ }
    }
    __syncthreads(); // the rest of the block waits for thread 0
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        printf("Print a single line for the entire process\n");
    }
}
The code above can synchronize at most 64 blocks. To track more, use an array of mask words together with a super-mask (or an atomicAdd counter) to record which words have filled up, for example:
// Two 32-bit words track up to 64 blocks; grow the array
// (and the super-mask) for more. As before, zero both masks
// from the host before each launch.
__device__ unsigned CompleteMask[2];
__device__ unsigned CompleteSuperMask;
__global__ void doWork() {
    //do lots of work
    const auto Words = (gridDim.x + 31) / 32;     // mask words actually in use
    const auto LastBits = gridDim.x % 32;         // bits used in the last word
    unsigned SollMask[2];
    SollMask[0] = (Words > 1 || LastBits == 0) ? 0xFFFFFFFFu : (1u << LastBits) - 1;
    SollMask[1] = (LastBits == 0) ? 0xFFFFFFFFu : (1u << LastBits) - 1;
    const auto SollSuperMask = (1u << Words) - 1; // one bit per word in use
    const auto b = blockIdx.x / 32;
    __syncthreads(); // the whole block must finish before signalling
    if (threadIdx.x == 0) {
        // Fill this block's word, wait for it; then fill and wait on the super-mask.
        while (atomicOr(&CompleteMask[b], 1u << (blockIdx.x % 32)) != SollMask[b]) { /*do nothing*/ }
        while (atomicOr(&CompleteSuperMask, 1u << b) != SollSuperMask) { /*do nothing*/ }
    }
    __syncthreads(); // the rest of the block waits for thread 0
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        printf("Print a single line for the entire process\n");
    }
}