Efficiency Strategies
1. Formula
1.1 Maximize compute intensity:
Compute intensity = Math / Memory, where Math is the amount of arithmetic a thread performs and Memory is the amount of memory traffic per thread.
1. Maximize the amount of computation per thread
2. Minimize the memory access cost per thread
- each thread reads as little data as possible
- each thread reads data as fast as possible -> 1. local (register) memory > shared memory >> global memory; 2. coalesce global memory accesses
1.2 Coalesced global memory access
- Good: threads access contiguous memory addresses
- Acceptable: threads access addresses separated by a fixed stride
- Bad: threads access random, scattered addresses
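A minimal sketch of the two extremes (hypothetical kernel names, not from the notes):

```cuda
// Coalesced: adjacent threads read adjacent addresses,
// so a warp's 32 reads combine into few memory transactions.
__global__ void copy_coalesced(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];                    // thread k reads in[k]: contiguous
}

// Strided: adjacent threads read addresses `stride` apart,
// so a warp's reads scatter into many transactions.
__global__ void copy_strided(const float *in, float *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * stride < n) out[i] = in[i * stride];  // thread k reads in[k*stride]
}
```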
1.3 Avoid thread divergence
Thread divergence: threads in the same warp execute different code paths, forcing the warp to run each path serially.
Examples that cause divergence:
- conditional branches inside a kernel
- loops whose trip count differs between threads
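A sketch of the conditional-branch case (hypothetical kernels), contrasted with a warp-uniform branch that avoids divergence:

```cuda
// Divergent: even and odd threads within the same warp take
// different branches, so the warp executes both paths serially.
__global__ void divergent(float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        x[i] = x[i] * 2.0f;
    else
        x[i] = x[i] + 1.0f;
}

// Uniform: the condition is constant across each warp
// (assumes blockDim.x is a multiple of warpSize), so no divergence.
__global__ void uniform(float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ((i / warpSize) % 2 == 0)   // same branch for all 32 threads of a warp
        x[i] = x[i] * 2.0f;
    else
        x[i] = x[i] + 1.0f;
}
```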
2. Kernel Launch
2.1 Device parameters of this machine
lcx@lcx:/usr/local/cuda-10.1/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 2 CUDA Capable device(s)
Device 0: "GeForce GTX 980"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4044 MBytes (4240179200 bytes)
(16) Multiprocessors, (128) CUDA Cores/MP: 2048 CUDA Cores
GPU Max Clock rate: 1216 MHz (1.22 GHz)
Memory Clock rate: 3505 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 41 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Device 1: "GeForce GTX 980"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4041 MBytes (4237033472 bytes)
(16) Multiprocessors, (128) CUDA Cores/MP: 2048 CUDA Cores
GPU Max Clock rate: 1216 MHz (1.22 GHz)
Memory Clock rate: 3505 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 42 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from GeForce GTX 980 (GPU0) -> GeForce GTX 980 (GPU1) : Yes
> Peer access from GeForce GTX 980 (GPU1) -> GeForce GTX 980 (GPU0) : Yes
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 2
Result = PASS
2.2 How to query the device parameters
On Linux, after CUDA is installed, run make in /usr/local/cuda-10.1/samples/1_Utilities/deviceQuery, then execute ./deviceQuery.
Important parameters:
- Maximum number of threads per multiprocessor: 2048
- Maximum number of threads per block: 1024
- Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Note: in a kernel launch, the chosen number of threads per block, number of blocks, etc. must not exceed these device limits; otherwise the launch may fail or performance will suffer.
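The same limits can also be read at runtime with the CUDA runtime API (a minimal host-side sketch):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0
    printf("threads per SM:    %d\n", prop.maxThreadsPerMultiProcessor);
    printf("threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("block dims:        (%d, %d, %d)\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    return 0;
}
```

This is useful for choosing launch configurations that stay within the limits on any device.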
Kernel launch -- 1D mode
A launch configuration is essentially a hierarchical model: a grid of blocks and blocks of threads, each of which may be 1D, 2D, or 3D; the hardware ultimately executes the threads in warps.
For example, GPU<<<3,8>>>(); is a 1D configuration: 3 blocks of 8 threads each.
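For such a 1D launch, each thread computes its global index from the built-in variables (a sketch; the kernel name follows the example above):

```cuda
__global__ void GPU(int *out) {
    // for <<<3,8>>>: blockIdx.x is 0..2, threadIdx.x is 0..7,
    // so idx covers 0..23 -- one unique index per thread
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    out[idx] = idx;
}
```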
A 2D configuration: dim3 dimGrid(5,3); dim3 dimBlock(4,2);
launched as GPU<<<dimGrid,dimBlock>>>();
The built-in variables for this configuration:
gridDim.x = 5, a fixed value
gridDim.y = 3, a fixed value
blockDim.x = 4, a fixed value
blockDim.y = 2, a fixed value
blockIdx.x varies (0~4)
blockIdx.y varies (0~2)
threadIdx.x varies (0~3)
threadIdx.y varies (0~1)
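With these values, a common way to flatten the 2D coordinates into one global thread index (a sketch, using the dimensions above):

```cuda
__global__ void GPU(int *out) {
    // global (x, y) coordinates of this thread
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // 0..19 (5 blocks * 4 threads)
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // 0..5  (3 blocks * 2 threads)
    int width = gridDim.x * blockDim.x;              // 20
    out[y * width + x] = y * width + x;              // row-major flattening
}
```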
Summary:
Kernel launch -- 2D mode
Kernel launch -- 3D mode
Kernel function qualifiers
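A minimal sketch of the three CUDA function qualifiers (hypothetical function names):

```cuda
// __device__: callable from device code only, runs on the GPU
__device__ float square(float x) { return x * x; }

// __host__ __device__: compiled for both CPU and GPU
__host__ __device__ float twice(float x) { return 2.0f * x; }

// __global__: a kernel -- launched from the host, runs on the device
__global__ void apply(float *v) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    v[i] = square(twice(v[i]));
}
```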
3. Memory types in CUDA code
3.1 Local variables
Variables defined inside a kernel function; each thread gets its own private copy (scalars usually live in registers).
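A sketch (hypothetical kernel name):

```cuda
__global__ void local_demo(float *out) {
    // `i` and `tmp` are local variables: private to each thread,
    // typically held in registers
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float tmp = i * 2.0f;
    out[i] = tmp;
}
```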
3.2 Global memory
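Global memory is either allocated on the host with cudaMalloc or declared at file scope with __device__; it is visible to all threads of all blocks. A typical pattern (hypothetical names):

```cuda
__device__ float d_scale = 2.0f;          // file-scope device (global-memory) variable

__global__ void scale(float *d_arr, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_arr[i] *= d_scale;       // d_arr points into global memory
}

// Host side: allocate, copy in, launch, copy out.
// float *d_arr; cudaMalloc(&d_arr, n * sizeof(float));
// cudaMemcpy(d_arr, h_arr, n * sizeof(float), cudaMemcpyHostToDevice);
// scale<<<(n + 255) / 256, 256>>>(d_arr, n);
// cudaMemcpy(h_arr, d_arr, n * sizeof(float), cudaMemcpyDeviceToHost);
// cudaFree(d_arr);
```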
3.3 Shared memory
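Shared memory is declared with __shared__ inside a kernel and is shared by the threads of one block; it also shows why __syncthreads() (covered below) is needed. A hypothetical block-wide reversal:

```cuda
__global__ void reverse_block(float *d) {
    __shared__ float s[256];          // one copy per block; assumes blockDim.x <= 256
    int t = threadIdx.x;
    s[t] = d[t];                      // each thread loads one element
    __syncthreads();                  // wait until the whole block has loaded
    d[t] = s[blockDim.x - 1 - t];     // safely read a value another thread wrote
}
```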
4. Synchronization in CUDA
Problem solved by atomic operations: when many threads need to read or write the same memory location at the same time, guarantee that only one thread operates on it at a time.
4.1 Properties of atomic operations
- only certain operations are supported (add, subtract, min, XOR, etc.; no modulo or exponentiation) and only certain data types (mainly integers)
- the execution order is undefined
- used carelessly, they are very slow (conflicting atomics are serialized internally)
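A classic use case is a histogram, sketched below (hypothetical names). Without atomicAdd, concurrent `bins[b]++` from many threads would lose updates:

```cuda
__global__ void histogram(const int *data, int *bins, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1);  // only one thread at a time updates a bin;
                                       // conflicting updates are serialized
}
```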
4.2 Synchronization functions
__syncthreads()
Barrier within a thread block: guarantees every thread in the block has reached the same point before any thread proceeds.
__threadfence()
After a thread calls __threadfence(), all of its preceding accesses to global or shared memory are complete, and the results are visible to every thread in the grid.
__threadfence_block()
After a thread calls __threadfence_block(), all of its preceding accesses to global or shared memory are complete, and the results are visible to every thread in the block.
The key role of these two fence functions is to promptly make other threads aware that results in global or shared memory have finished being read or written.
4.3 CPU/GPU synchronization
- cudaStreamSynchronize() / cudaEventSynchronize()
- Host code can call cudaThreadSynchronize() (deprecated in newer CUDA; use cudaDeviceSynchronize()) to synchronize the CPU with the GPU
- Kernel launches return control to the host asynchronously; this function blocks until all device-side threads have finished.
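A host-side sketch of the asynchronous-launch pattern (hypothetical names; uses the modern cudaDeviceSynchronize()):

```cuda
// The launch returns immediately -- the CPU does not wait for the kernel.
kernel<<<blocks, threads>>>(d_data);

// Block the CPU until all previously launched device work has finished.
cudaDeviceSynchronize();

// Now it is safe to read the results.
// (cudaMemcpy would also implicitly synchronize with the default stream.)
cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
```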