CUDA Programming Guide Reading Notes, Part 3

4. CUDA C Programming Interface

Continued from the previous post.

4.3 CUDA C Runtime

4.3.3 Shared Memory

Shared memory is one of the most important storage areas on a CUDA device: used well, it lets a program exploit much more of the device's potential and can improve performance dramatically. So what are the characteristics of shared memory?

1. Shared memory is integrated on the GPU processor chip (on-chip), so compared with global memory and local memory, which live in the off-chip device memory, it offers much higher bandwidth; as a rule of thumb, shared memory bandwidth is roughly 7 to 10 times that of global memory.

2. Shared memory is small. According to NVIDIA's documentation, devices of compute capability 1.x have 16 KB of shared memory per streaming multiprocessor (SM), while devices of compute capability 2.x, 3.0, and 3.5 have 48 KB. Shared memory is therefore a scarce resource.

3. Shared memory is physically divided into modules called banks, and the device can access multiple banks at the same time. If one shared memory access touches n addresses that happen to fall in n distinct banks, all n addresses are serviced in a single access cycle. Devices of compute capability 1.x divide shared memory evenly into 16 banks; devices of compute capability 2.x, 3.0, and 3.5 use 32. Successive 32-bit words are assigned to successive banks, and each bank can deliver 32 bits every two clock cycles. (The access pattern in the sketch after this list is conflict-free for exactly this reason.)

4. Shared memory can be allocated either statically or dynamically, as the sketch below shows.
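
A minimal sketch (not from the original post) of both allocation styles, using a small array-reversal kernel. The kernel names and the fixed size 64 are illustrative assumptions; d is a device array of n <= 64 floats, and the block is launched with n threads. Note that consecutive threads touch consecutive 32-bit words, so these accesses are also free of bank conflicts:

__global__ void StaticReverse(float *d, int n) {
    // Static allocation: the size must be a compile-time constant.
    __shared__ float s[64];
    int t = threadIdx.x;
    s[t] = d[t];
    __syncthreads();          // all elements loaded before anyone reads
    d[t] = s[n - t - 1];
}

__global__ void DynamicReverse(float *d, int n) {
    // Dynamic allocation: the size is supplied at launch time as the
    // third execution-configuration parameter.
    extern __shared__ float s[];
    int t = threadIdx.x;
    s[t] = d[t];
    __syncthreads();
    d[t] = s[n - t - 1];
}

// Launch examples (d_d is a device pointer, n <= 64):
// StaticReverse<<<1, n>>>(d_d, n);
// DynamicReverse<<<1, n, n * sizeof(float)>>>(d_d, n);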

These characteristics show that shared memory is, in effect, a cache that the programmer controls directly. Below, we use matrix multiplication to illustrate how to use shared memory effectively.

First, we implement matrix multiplication C = A x B in the most straightforward way: read each row of A and each column of B and compute the elements of C in turn. [Figure: schematic of the naive matrix multiplication]

The main CUDA C code for this version follows:

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
typedef struct {
    int width;
    int height;
    float *elements;
} Matrix;
 
// Thread block size
#define BLOCK_SIZE 16
 
// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);
 
// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C) {
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size, cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size, cudaMemcpyHostToDevice);
 
    // Allocate C in device memory
    Matrix d_C;
    d_C.width = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);
 
    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);
 
    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size, cudaMemcpyDeviceToHost);
 
    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}
 
// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C) {
    // Each thread computes one element of C
    // by accumulating results into Cvalue
    float Cvalue = 0;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    for (int e = 0; e < A.width; ++e)
        Cvalue += A.elements[row * A.width + e] * B.elements[e * B.width + col];
    C.elements[row * C.width + col] = Cvalue;
}
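
One detail worth noting before moving on: the host code above ignores the return values of cudaMalloc and cudaMemcpy. A minimal error-checking sketch is shown below; the CHECK_CUDA macro is my own naming for illustration, not part of the CUDA runtime:

#include <stdio.h>
#include <stdlib.h>

// Wrap any CUDA runtime call; abort with file/line and the runtime's own
// error string if the call does not return cudaSuccess.
#define CHECK_CUDA(call)                                              \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error '%s' at %s:%d\n",             \
                    cudaGetErrorString(err_), __FILE__, __LINE__);    \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

// Usage inside MatMul, e.g.:
// CHECK_CUDA(cudaMalloc(&d_A.elements, size));
// CHECK_CUDA(cudaMemcpy(d_A.elements, A.elements, size, cudaMemcpyHostToDevice));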

As the code shows, to compute any one element of C the program fetches an entire row of A and an entire column of B from global memory. Over the whole computation, matrix A is therefore read B.width times and matrix B is read A.height times; for two 1024 x 1024 matrices, each input matrix is fetched from global memory 1024 times.
Now let us implement matrix multiplication with shared memory. Suppose C is partitioned into square sub-matrices Csub of size block_size x block_size, with one thread block responsible for computing each Csub and each thread in the block computing exactly one element of it. Under this partition, every result sub-matrix Csub is the product of two rectangular sub-matrices: the sub-matrix Asub of A (width A.width, height block_size) that has the same row indices as Csub, and the sub-matrix Bsub of B (width block_size, height B.height) that has the same column indices as Csub.

To match the device's compute resources, Asub and Bsub are in turn split into as many square sub-matrices of dimension block_size as necessary, and Csub is computed as the sum of the products of these square pairs. Each product is performed by first copying the two square sub-matrices from global memory to shared memory, with each thread loading exactly one element of each; then each thread computes one element of the product, accumulating the partial results in a register; finally, once all pairs have been processed, the accumulated value is written back to global memory.

With this divide-and-conquer strategy, shared memory is put to good use: matrix A is now read from global memory only B.width / block_size times and matrix B only A.height / block_size times. With block_size = 16 and the 1024 x 1024 matrices from before, that is 64 reads of each input instead of 1024, a substantial saving of global memory bandwidth. [Figure: schematic of the blocked matrix multiplication]

To make the indexing efficient, we add a stride member to the Matrix type, and __device__ functions are used to get and set the elements of a sub-matrix. The optimized code follows:


// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Thread block size (must be defined before GetSubMatrix below, which uses it)
#define BLOCK_SIZE 16
 
// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col) {
    return A.elements[row * A.stride + col];
}
 
// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col, float value) {
    A.elements[row * A.stride + col] = value;
}
 
// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col) {
    Matrix Asub;
    Asub.width = BLOCK_SIZE;
    Asub.height = BLOCK_SIZE;
    Asub.stride = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row + BLOCK_SIZE * col];
    return Asub;
}
 
// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);
 
// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C) {
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = d_A.stride = A.width;
    d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size, cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = d_B.stride = B.width;
    d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size, cudaMemcpyHostToDevice);
 
    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width;
    d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);
 
    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);
 
    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size, cudaMemcpyDeviceToHost);
 
    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}
 
// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C) {
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;
 
    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);
 
    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;
 
    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;
 
    // Loop over all the sub-matrices of A and B that are required to compute Csub
    // Multiply each pair of sub-matrices together and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {
        // Get sub-matrix Asub of A
        Matrix Asub = GetSubMatrix(A, blockRow, m);
        
        // Get sub-matrix Bsub of B
        Matrix Bsub = GetSubMatrix(B, m, blockCol);
 
        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];
 
        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);
 
        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();
 
        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];
 
        // Synchronize to make sure that the preceding computation is done before
        // loading two new sub-matrices of A and B in the next iteration
        __syncthreads();
    }
 
    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}
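
For completeness, here is a minimal host-side driver for the code above. It is not part of the guide's listing; the helper NewMatrix and the 64 x 64 sizes are illustrative assumptions (MatMul requires dimensions that are multiples of BLOCK_SIZE):

#include <stdio.h>
#include <stdlib.h>

// Hypothetical helper: allocate a host matrix filled with a constant value.
Matrix NewMatrix(int width, int height, float value) {
    Matrix m;
    m.width = m.stride = width;
    m.height = height;
    m.elements = (float*)malloc(width * height * sizeof(float));
    for (int i = 0; i < width * height; ++i)
        m.elements[i] = value;
    return m;
}

int main(void) {
    // 64 is a multiple of BLOCK_SIZE (16), as MatMul requires.
    Matrix A = NewMatrix(64, 64, 1.0f);
    Matrix B = NewMatrix(64, 64, 2.0f);
    Matrix C = NewMatrix(64, 64, 0.0f);

    MatMul(A, B, C);

    // Each element of C is a 64-term dot product: 64 * 1.0 * 2.0 = 128.
    printf("C[0] = %f (expected 128)\n", C.elements[0]);

    free(A.elements);
    free(B.elements);
    free(C.elements);
    return 0;
}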

Original post: https://blog.csdn.net/csgxy123/article/details/10018531
