2022 CUDA Summer Training Camp, Day 4 Practice

Recap of the previous days:

2022 CUDA Summer Training Camp, Day 1 Practice: https://zhanghui-china.blog.csdn.net/article/details/125711442
2022 CUDA Summer Training Camp, Day 2 Practice: https://zhanghui-china.blog.csdn.net/article/details/125711767
2022 CUDA Summer Training Camp, Day 3 Practice: https://zhanghui-china.blog.csdn.net/article/details/125711914

Today is Day 4, and the topics are unified memory, atomic operations, and more.

(I) Unified Memory

From the matrix multiplication code of the past few days, you can see that writing a proper CUDA program means allocating HOST memory (malloc or cudaMallocHost), allocating DEVICE memory (cudaMalloc), copying the HOST data over to the DEVICE (cudaMemcpy), launching the GPU kernel, copying the kernel's results back to the HOST (cudaMemcpy), and finally releasing all of those allocations (free, cudaFreeHost, cudaFree).

All of that is boilerplate, but it is still tedious.

So, starting with CUDA 6.0, the clever folks at NVIDIA introduced a concept called Unified Memory, which hides the distinction between GPU memory and CPU memory at the coding level:

It is a single memory address space accessible from any CPU or GPU in the system. It lets an application allocate data that can be read or written by code running on either CPUs or GPUs.

The concrete usage looks like this:

The original malloc on the CPU becomes cudaMallocManaged, and the allocated address can be used directly by the GPU kernel (the qsort in the figure). Remember how the old code had to cudaMallocHost/malloc first and then cudaMemcpy? None of that is needed any more.

Besides the cudaMallocManaged function used above, a variable can also be marked as unified memory with the __managed__ qualifier. (It may also need the __device__ qualifier in front of it so that kernels can use it.)

When using unified memory, call cudaDeviceSynchronize() to make sure the CPU and the GPU are synchronized.

Unified memory does not explicitly distinguish between HOST and DEVICE memory; it simplifies the code and makes it more general.

Unified memory can only be allocated from the HOST side.
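
To make this concrete, here is a minimal sketch of the unified-memory pattern (my own toy example, not from the course material; the kernel, names and sizes are made up):

#include <stdio.h>

// Toy kernel: add 1 to every element.
__global__ void add_one(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 1 << 20;
    int *data = NULL;

    // One allocation, visible to both CPU and GPU; no cudaMalloc/cudaMemcpy needed.
    cudaMallocManaged((void **)&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;     // initialize on the host

    add_one<<<(n + 255) / 256, 256>>>(data, n);  // the kernel uses the same pointer
    cudaDeviceSynchronize();                     // wait before the CPU reads the result

    printf("data[100] = %d\n", data[100]);       // expect 101
    cudaFree(data);                              // managed memory is still freed with cudaFree
    return 0;
}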

A few misconceptions are worth clearing up here:

(1) Zhang Xiaobai used to think that only devices like the NVIDIA Jetson Orin, where video memory and system memory are physically one, had any notion of unified memory. That is not the case: any GPU whose SM architecture is 3.0 or above (Kepler and later) can be programmed with unified memory. Logically any GPU card or device can use it, but in practice only devices where the two memories really are one (such as the Jetson AGX Orin) get the full benefit. (A quick way to check support from code is sketched after the link below.)

(2) For the matrix multiplication code, unified memory is essentially an equivalent way of accessing Global Memory, while shared memory is a speed optimization inside an SM. The two are independent: code that uses unified memory can use shared memory at the same time.

(3) With the __managed__ qualifier or cudaMallocManaged, the code indeed no longer needs cudaMalloc or cudaMemcpy calls. But under the hood the system still decides, depending on the situation, whether it needs to allocate GPU memory and copy data back and forth between HOST and DEVICE.

For example, on Zhang Xiaobai's NVIDIA Quadro P1000 laptop, HOST memory lives in the system RAM (about 64 GB) and DEVICE memory lives on the GPU card (about 4 GB). Running the code in that environment, the system still performs the HOST allocation, the DEVICE allocation, and the HOST-DEVICE copies.

But on Zhang Xiaobai's newly purchased, remarkable NVIDIA AGX Orin, HOST memory is DEVICE memory (about 32 GB). The two are not just "unified memory" but, in Zhang Xiaobai's own coinage, "the same memory": the ARM CPU and the NVIDIA GPU share one physical memory. For details, see:

NVIDIA GTC 2022: Demystifying Unified Memory on Jetson https://zhuanlan.zhihu.com/p/486130961
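
As an aside, here is a hedged sketch of my own for checking whether a device supports managed memory at all; the field names come from the CUDA runtime's cudaDeviceProp structure:

#include <stdio.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("Device: %s (SM %d.%d)\n", prop.name, prop.major, prop.minor);
    printf("managedMemory:           %d\n", prop.managedMemory);
    printf("concurrentManagedAccess: %d\n", prop.concurrentManagedAccess);
    // On a discrete card like the Quadro P1000, managedMemory is typically 1,
    // but the data still migrates over PCIe; on Jetson-class devices the CPU
    // and GPU share the same physical DRAM, which is where unified memory shines.
    return 0;
}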

The biggest benefit of "the same memory" is that, of the three typical actions shown in the slide below, actions 1 and 3 can be dropped entirely:

So the typical code changes from the pattern on the left to the pattern on the right:

(1) Declare variables: only the unified memory variables need to be declared, which also saves space.

(2) HOST->DEVICE copy: step eliminated.

(3) Run the kernel: same as before.

(4) DEVICE->HOST copy: step eliminated.

(5) Explicit synchronization: the one step unified memory adds compared with the old approach, the CPU has to wait for the GPU to finish.

Note: the pictures (and code) above come from the article linked above. Tutor Huan pointed out that the code there appears to be Python, which goes to show that PyCUDA is also an inseparable part of CUDA programming. Zhang Xiaobai still has a lot to learn!

So how is unified memory actually implemented? The approach is shown in the figure below:

CUDA adds a unified memory system on top of the existing memory pools, through which developers can directly access any system memory or video memory resource.

When CUDA finds that the GPU needs to access data that was defined and initialized on the HOST side, it performs the data copy automatically, so the access is still limited by PCIe bandwidth and latency.
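
If you do have a discrete card, one common way to soften that PCIe cost is to migrate the managed pages before the kernel runs. A small sketch of my own (assuming a Pascal-or-newer GPU and CUDA 8.0+, where cudaMemPrefetchAsync is available):

#include <stdio.h>

__global__ void scale(int *p, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] *= 2;
}

int main()
{
    const int n = 1 << 22;
    int dev = 0;
    cudaGetDevice(&dev);

    int *u = NULL;
    cudaMallocManaged((void **)&u, n * sizeof(int));
    for (int i = 0; i < n; ++i) u[i] = i;                         // pages are populated on the host

    cudaMemPrefetchAsync(u, n * sizeof(int), dev, 0);             // hint: move the pages to the GPU now
    scale<<<(n + 255) / 256, 256>>>(u, n);
    cudaMemPrefetchAsync(u, n * sizeof(int), cudaCpuDeviceId, 0); // and back, before the CPU reads them
    cudaDeviceSynchronize();

    printf("u[3] = %d\n", u[3]);                                  // expect 6
    cudaFree(u);
    return 0;
}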

In that situation we can see how the code and the run-time variables change before and after:

Note: the pictures (and code) above come from teacher He's slides. This time the code is actually C++, as the sharp-eyed teacher Huan again spotted. It involves a deep copy (something Zhang Xiaobai does not yet understand); unified memory removes exactly that deep-copy complexity.
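
For what it's worth, here is my own small illustration of the deep-copy problem the slide is getting at (not the PPT code). A struct that contains a pointer cannot simply be cudaMemcpy'd to the device, because the embedded pointer would still point at host memory; the buffer it points to would have to be copied separately and the pointer patched up. With managed memory, one pointer is valid on both sides and the whole issue goes away:

#include <stdio.h>

struct Vector {
    int  len;
    int *data;    // the embedded pointer is the troublesome part
};

__global__ void fill(Vector *v, int value)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < v->len) v->data[i] = value;
}

int main()
{
    Vector *v = NULL;
    cudaMallocManaged((void **)&v, sizeof(Vector));               // the struct itself
    v->len = 1024;
    cudaMallocManaged((void **)&v->data, v->len * sizeof(int));   // and the buffer it points to

    fill<<<(v->len + 255) / 256, 256>>>(v, 7);                    // the kernel dereferences v->data directly
    cudaDeviceSynchronize();

    printf("v->data[0] = %d\n", v->data[0]);                      // expect 7
    cudaFree(v->data);
    cudaFree(v);
    return 0;
}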

All right, that is probably enough of the concepts. Time for some hands-on work:

Let's take yesterday's matrix multiplication code (including the shared-memory optimization) and see how it can be reworked.

The original code looks like this:

matrix_mul.cuh

#pragma once
#include <stdio.h>

#define CHECK(call)                                   \
do                                                    \
{                                                     \
    const cudaError_t error_code = call;              \
    if (error_code != cudaSuccess)                    \
    {                                                 \
        printf("CUDA Error:\n");                      \
        printf("    File:       %s\n", __FILE__);     \
        printf("    Line:       %d\n", __LINE__);     \
        printf("    Error code: %d\n", error_code);   \
        printf("    Error text: %s\n",                \
            cudaGetErrorString(error_code));          \
        exit(1);                                      \
    }                                                 \
} while (0)

matrix_mul_old.cu

#include <stdio.h>
#include <math.h>
#include "error.cuh"
#include "matrix_mul.cuh"

#define BLOCK_SIZE 32

__global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k)
{ 
    int row = blockIdx.y * blockDim.y + threadIdx.y; 
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int sum = 0;
    if( col < k && row < m) 
    {
        for(int i = 0; i < n; i++) 
        {
            sum += a[row * n + i] * b[i * k + col];
        }
        c[row * k + col] = sum;
    }
} 


__global__ void gpu_matrix_mult_shared(int *d_a, int *d_b, int *d_result, int m, int n, int k) 
{
    __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE];

    int row = blockIdx.y * BLOCK_SIZE + threadIdx.y;
    int col = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    int tmp = 0;
    int idx;

    for (int sub = 0; sub < gridDim.x; ++sub) 
    {
        idx = row * n + sub * BLOCK_SIZE + threadIdx.x;
        tile_a[threadIdx.y][threadIdx.x] = row<n && (sub * BLOCK_SIZE + threadIdx.x)<n? d_a[idx]:0;
        idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col;
        tile_b[threadIdx.y][threadIdx.x] = col<n && (sub * BLOCK_SIZE + threadIdx.y)<n? d_b[idx]:0;
        
        __syncthreads();
        for (int k = 0; k < BLOCK_SIZE; ++k) 
        {
            tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x];
        }
        __syncthreads();
    }
    if(row < n && col < n)
    {
        d_result[row * n + col] = tmp;
    }
}

void cpu_matrix_mult(int *h_a, int *h_b, int *h_result, int m, int n, int k) {
    for (int i = 0; i < m; ++i) 
    {
        for (int j = 0; j < k; ++j) 
        {
            int tmp = 0.0;
            for (int h = 0; h < n; ++h) 
            {
                tmp += h_a[i * n + h] * h_b[h * k + j];
            }
            h_result[i * k + j] = tmp;
        }
    }
}

int main(int argc, char const *argv[])
{
    int m=100;
    int n=100;
    int k=100;
    
    //声明Event
    cudaEvent_t start, stop, stop2, stop3 , stop4 ;
    
    //创建Event
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));
    CHECK(cudaEventCreate(&stop2));

    int *h_a, *h_b, *h_c, *h_cc;
    CHECK(cudaMallocHost((void **) &h_a, sizeof(int)*m*n));
    CHECK(cudaMallocHost((void **) &h_b, sizeof(int)*n*k));
    CHECK(cudaMallocHost((void **) &h_c, sizeof(int)*m*k));
    CHECK(cudaMallocHost((void **) &h_cc, sizeof(int)*m*k));

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            h_a[i * n + j] = rand() % 1024;
        }
    }

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < k; ++j) {
            h_b[i * k + j] = rand() % 1024;
        }
    }

    int *d_a, *d_b, *d_c;
    CHECK(cudaMalloc((void **) &d_a, sizeof(int)*m*n));
    CHECK(cudaMalloc((void **) &d_b, sizeof(int)*n*k));
    CHECK(cudaMalloc((void **) &d_c, sizeof(int)*m*k));

    // copy matrix A and B from host to device memory
    CHECK(cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice));
    CHECK(cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice));

    unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE;
    unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE;
    dim3 dimGrid(grid_cols, grid_rows);
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
   
    //开始start Event
    cudaEventRecord(start);
    //非阻塞模式
    cudaEventQuery(start);

    //gpu_matrix_mult<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);   
    
    gpu_matrix_mult_shared<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);  

    //开始stop Event
    cudaEventRecord(stop);
    //由于要等待核函数执行完毕,所以选择阻塞模式
    cudaEventSynchronize(stop);
    
    //计算时间 stop-start
    float elapsed_time;
    CHECK(cudaEventElapsedTime(&elapsed_time, start, stop));
    printf("start-》stop:Time = %g ms.\n", elapsed_time);

    
    CHECK(cudaMemcpy(h_c, d_c, (sizeof(int)*m*k), cudaMemcpyDeviceToHost));
    //cudaThreadSynchronize();

    //开始stop2 Event
    CHECK(cudaEventRecord(stop2));
    //非阻塞模式
    //CHECK(cudaEventSynchronize(stop2));
    cudaEventQuery(stop2);

    //计算时间 stop-stop2
    float elapsed_time2;
    cudaEventElapsedTime(&elapsed_time2, stop, stop2);
    printf("stop-》stop2:Time = %g ms.\n", elapsed_time2);

    //销毁Event
    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaEventDestroy(stop2));

    //CPU函数计算
    cpu_matrix_mult(h_a, h_b, h_cc, m, n, k);

    int ok = 1;
    for (int i = 0; i < m; ++i)
    {
        for (int j = 0; j < k; ++j)
        {
            if(fabs(h_cc[i*k + j] - h_c[i*k + j])>(1.0e-10))
            {
                
                ok = 0;
            }
        }
    }

    if(ok)
    {
        printf("Pass!!!\n");
    }
    else
    {
        printf("Error!!!\n");
    }

    // free memory
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
    cudaFreeHost(h_a);
    cudaFreeHost(h_b);
    cudaFreeHost(h_c);
    return 0;
}

Run it first:

No problems there.

Now let's analyze it:

The code above uses four HOST buffers, h_a, h_b, h_c and h_cc, plus three DEVICE buffers, d_a, d_b and d_c.

Here a, b and c correspond to each other, while cc exists only to hold the CPU result. We could simply switch h_cc to plain malloc'd memory, but to keep things looking tidy we might as well turn all four HOST buffers into unified memory.

Note: as a certain celebrity once said, "people may not think much of it, but I look good." (Source: Joyce, "The Goddess")

Let's name the unified memory buffers u_a, u_b, u_c and u_cc!

Let the hacking begin:

Rename h_a->u_a, h_b->u_b, h_c->u_c and h_cc->u_cc throughout the code, and adjust the other variables accordingly.

matrix_mul.cu

#include <stdio.h>
#include <math.h>
#include "error.cuh"
#include "matrix_mul.cuh"

#define BLOCK_SIZE 32

__managed__ int u_a[100*100];
__managed__ int u_b[100*100];
__managed__ int u_c[100*100];
__managed__ int u_cc[100*100];


__global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k)
{ 
    int row = blockIdx.y * blockDim.y + threadIdx.y; 
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int sum = 0;
    if( col < k && row < m) 
    {
        for(int i = 0; i < n; i++) 
        {
            sum += u_a[row * n + i] * u_b[i * k + col];
        }
        u_c[row * k + col] = sum;
    }
} 


__global__ void gpu_matrix_mult_shared(int *u_a, int *u_b, int *u_result, int m, int n, int k) 
{
    __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE];

    int row = blockIdx.y * BLOCK_SIZE + threadIdx.y;
    int col = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    int tmp = 0;
    int idx;

    for (int sub = 0; sub < gridDim.x; ++sub) 
    {
        idx = row * n + sub * BLOCK_SIZE + threadIdx.x;
        tile_a[threadIdx.y][threadIdx.x] = row<n && (sub * BLOCK_SIZE + threadIdx.x)<n? u_a[idx]:0;
        idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col;
        tile_b[threadIdx.y][threadIdx.x] = col<n && (sub * BLOCK_SIZE + threadIdx.y)<n? u_b[idx]:0;
        
        __syncthreads();
        for (int k = 0; k < BLOCK_SIZE; ++k) 
        {
            tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x];
        }
        __syncthreads();
    }
    if(row < n && col < n)
    {
        u_result[row * n + col] = tmp;
    }
}

void cpu_matrix_mult(int *u_a, int *u_b, int *u_result, int m, int n, int k) {
    for (int i = 0; i < m; ++i) 
    {
        for (int j = 0; j < k; ++j) 
        {
            int tmp = 0.0;
            for (int h = 0; h < n; ++h) 
            {
                tmp += u_a[i * n + h] * u_b[h * k + j];
            }
            u_result[i * k + j] = tmp;
        }
    }
}

int main(int argc, char const *argv[])
{
    int m=100;
    int n=100;
    int k=100;
    
    //声明Event
    cudaEvent_t start, stop, stop2, stop3 , stop4 ;
    
    //创建Event
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));
    CHECK(cudaEventCreate(&stop2));

    //int *h_a, *h_b, *h_c, *h_cc;
    //CHECK(cudaMallocHost((void **) &h_a, sizeof(int)*m*n));
    //CHECK(cudaMallocHost((void **) &h_b, sizeof(int)*n*k));
    //CHECK(cudaMallocHost((void **) &h_c, sizeof(int)*m*k));
    //CHECK(cudaMallocHost((void **) &h_cc, sizeof(int)*m*k));

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            u_a[i * n + j] = rand() % 1024;
        }
    }

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < k; ++j) {
            u_b[i * k + j] = rand() % 1024;
        }
    }

    //int *d_a, *d_b, *d_c;
    //CHECK(cudaMalloc((void **) &d_a, sizeof(int)*m*n));
    //CHECK(cudaMalloc((void **) &d_b, sizeof(int)*n*k));
    //CHECK(cudaMalloc((void **) &d_c, sizeof(int)*m*k));

    // copy matrix A and B from host to device memory
    //CHECK(cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice));
    //CHECK(cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice));

    unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE;
    unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE;
    dim3 dimGrid(grid_cols, grid_rows);
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
   
    //开始start Event
    cudaEventRecord(start);
    //非阻塞模式
    cudaEventQuery(start);

    //gpu_matrix_mult<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);   
    
    gpu_matrix_mult_shared<<<dimGrid, dimBlock>>>(u_a, u_b, u_c, m, n, k);  

    //开始stop Event
    cudaEventRecord(stop);
    //由于要等待核函数执行完毕,所以选择阻塞模式
    cudaEventSynchronize(stop);
    
    //计算时间 stop-start
    float elapsed_time;
    CHECK(cudaEventElapsedTime(&elapsed_time, start, stop));
    printf("start-》stop:Time = %g ms.\n", elapsed_time);

    
    //CHECK(cudaMemcpy(h_c, d_c, (sizeof(int)*m*k), cudaMemcpyDeviceToHost));
    //cudaThreadSynchronize();

    //开始stop2 Event
    CHECK(cudaEventRecord(stop2));
    //非阻塞模式
    //CHECK(cudaEventSynchronize(stop2));
    cudaEventQuery(stop2);

    //计算时间 stop-stop2
    float elapsed_time2;
    cudaEventElapsedTime(&elapsed_time2, stop, stop2);
    printf("stop-》stop2:Time = %g ms.\n", elapsed_time2);

    //销毁Event
    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaEventDestroy(stop2));


    //CPU函数计算
    cpu_matrix_mult(u_a, u_b, u_cc, m, n, k);

    int ok = 1;
    for (int i = 0; i < m; ++i)
    {
        for (int j = 0; j < k; ++j)
        {
            if(fabs(u_cc[i*k + j] - u_c[i*k + j])>(1.0e-10))
            {
                
                ok = 0;
            }
        }
    }

    if(ok)
    {
        printf("Pass!!!\n");
    }
    else
    {
        printf("Error!!!\n");
    }
   

    // free memory
    //cudaFree(d_a);
    //cudaFree(d_b);
    //cudaFree(d_c);
    //cudaFreeHost(h_a);
    //cudaFreeHost(h_b);
    //cudaFreeHost(h_c);
    return 0;
}

Run it:

Hmm, the speed barely changed.

Check the performance:

Could it be that a 100x100 matrix is simply too small to show any difference?

Let's switch to multiplying two 1000x1000 matrices and see (at that size, of course, the CPU computation part has to be commented out).

Changed to 1000:

#include <stdio.h>
#include <math.h>
#include "error.cuh"
#include "matrix_mul.cuh"

#define BLOCK_SIZE 32

__global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k)
{ 
    int row = blockIdx.y * blockDim.y + threadIdx.y; 
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int sum = 0;
    if( col < k && row < m) 
    {
        for(int i = 0; i < n; i++) 
        {
            sum += a[row * n + i] * b[i * k + col];
        }
        c[row * k + col] = sum;
    }
} 


__global__ void gpu_matrix_mult_shared(int *d_a, int *d_b, int *d_result, int m, int n, int k) 
{
    __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE];

    int row = blockIdx.y * BLOCK_SIZE + threadIdx.y;
    int col = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    int tmp = 0;
    int idx;

    for (int sub = 0; sub < gridDim.x; ++sub) 
    {
        idx = row * n + sub * BLOCK_SIZE + threadIdx.x;
        tile_a[threadIdx.y][threadIdx.x] = row<n && (sub * BLOCK_SIZE + threadIdx.x)<n? d_a[idx]:0;
        idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col;
        tile_b[threadIdx.y][threadIdx.x] = col<n && (sub * BLOCK_SIZE + threadIdx.y)<n? d_b[idx]:0;
        
        __syncthreads();
        for (int k = 0; k < BLOCK_SIZE; ++k) 
        {
            tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x];
        }
        __syncthreads();
    }
    if(row < n && col < n)
    {
        d_result[row * n + col] = tmp;
    }
}

void cpu_matrix_mult(int *h_a, int *h_b, int *h_result, int m, int n, int k) {
    for (int i = 0; i < m; ++i) 
    {
        for (int j = 0; j < k; ++j) 
        {
            int tmp = 0.0;
            for (int h = 0; h < n; ++h) 
            {
                tmp += h_a[i * n + h] * h_b[h * k + j];
            }
            h_result[i * k + j] = tmp;
        }
    }
}

int main(int argc, char const *argv[])
{
    int m=1000;
    int n=1000;
    int k=1000;
    
    //声明Event
    cudaEvent_t start, stop, stop2, stop3 , stop4 ;
    
    //创建Event
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));
    CHECK(cudaEventCreate(&stop2));

    int *h_a, *h_b, *h_c, *h_cc;
    CHECK(cudaMallocHost((void **) &h_a, sizeof(int)*m*n));
    CHECK(cudaMallocHost((void **) &h_b, sizeof(int)*n*k));
    CHECK(cudaMallocHost((void **) &h_c, sizeof(int)*m*k));
    CHECK(cudaMallocHost((void **) &h_cc, sizeof(int)*m*k));

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            h_a[i * n + j] = rand() % 1024;
        }
    }

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < k; ++j) {
            h_b[i * k + j] = rand() % 1024;
        }
    }

    int *d_a, *d_b, *d_c;
    CHECK(cudaMalloc((void **) &d_a, sizeof(int)*m*n));
    CHECK(cudaMalloc((void **) &d_b, sizeof(int)*n*k));
    CHECK(cudaMalloc((void **) &d_c, sizeof(int)*m*k));

    // copy matrix A and B from host to device memory
    CHECK(cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice));
    CHECK(cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice));

    unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE;
    unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE;
    dim3 dimGrid(grid_cols, grid_rows);
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
   
    //开始start Event
    cudaEventRecord(start);
    //非阻塞模式
    cudaEventQuery(start);

    //gpu_matrix_mult<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);   
    
    gpu_matrix_mult_shared<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);  

    //开始stop Event
    cudaEventRecord(stop);
    //由于要等待核函数执行完毕,所以选择阻塞模式
    cudaEventSynchronize(stop);
    
    //计算时间 stop-start
    float elapsed_time;
    CHECK(cudaEventElapsedTime(&elapsed_time, start, stop));
    printf("start-》stop:Time = %g ms.\n", elapsed_time);

    
    CHECK(cudaMemcpy(h_c, d_c, (sizeof(int)*m*k), cudaMemcpyDeviceToHost));
    //cudaThreadSynchronize();

    //开始stop2 Event
    CHECK(cudaEventRecord(stop2));
    //非阻塞模式
    //CHECK(cudaEventSynchronize(stop2));
    cudaEventQuery(stop2);

    //计算时间 stop-stop2
    float elapsed_time2;
    cudaEventElapsedTime(&elapsed_time2, stop, stop2);
    printf("stop-》stop2:Time = %g ms.\n", elapsed_time2);

    //销毁Event
    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaEventDestroy(stop2));

    //CPU函数计算
    /*
    cpu_matrix_mult(h_a, h_b, h_cc, m, n, k);

    int ok = 1;
    for (int i = 0; i < m; ++i)
    {
        for (int j = 0; j < k; ++j)
        {
            if(fabs(h_cc[i*k + j] - h_c[i*k + j])>(1.0e-10))
            {
                
                ok = 0;
            }
        }
    }

    if(ok)
    {
        printf("Pass!!!\n");
    }
    else
    {
        printf("Error!!!\n");
    }
    */
    
 
    // free memory
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
    cudaFreeHost(h_a);
    cudaFreeHost(h_b);
    cudaFreeHost(h_c);
    return 0;
}

Run it again:

The result is 246 ms.

And the unified memory version?

Modified as follows:

#include <stdio.h>
#include <math.h>
#include "error.cuh"
#include "matrix_mul.cuh"

#define BLOCK_SIZE 32

__managed__ int u_a[1000*1000];
__managed__ int u_b[1000*1000];
__managed__ int u_c[1000*1000];
__managed__ int u_cc[1000*1000];


__global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k)
{ 
    int row = blockIdx.y * blockDim.y + threadIdx.y; 
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int sum = 0;
    if( col < k && row < m) 
    {
        for(int i = 0; i < n; i++) 
        {
            sum += u_a[row * n + i] * u_b[i * k + col];
        }
        u_c[row * k + col] = sum;
    }
} 


__global__ void gpu_matrix_mult_shared(int *u_a, int *u_b, int *u_result, int m, int n, int k) 
{
    __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE];

    int row = blockIdx.y * BLOCK_SIZE + threadIdx.y;
    int col = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    int tmp = 0;
    int idx;

    for (int sub = 0; sub < gridDim.x; ++sub) 
    {
        idx = row * n + sub * BLOCK_SIZE + threadIdx.x;
        tile_a[threadIdx.y][threadIdx.x] = row<n && (sub * BLOCK_SIZE + threadIdx.x)<n? u_a[idx]:0;
        idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col;
        tile_b[threadIdx.y][threadIdx.x] = col<n && (sub * BLOCK_SIZE + threadIdx.y)<n? u_b[idx]:0;
        
        __syncthreads();
        for (int k = 0; k < BLOCK_SIZE; ++k) 
        {
            tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x];
        }
        __syncthreads();
    }
    if(row < n && col < n)
    {
        u_result[row * n + col] = tmp;
    }
}

void cpu_matrix_mult(int *u_a, int *u_b, int *u_result, int m, int n, int k) {
    for (int i = 0; i < m; ++i) 
    {
        for (int j = 0; j < k; ++j) 
        {
            int tmp = 0.0;
            for (int h = 0; h < n; ++h) 
            {
                tmp += u_a[i * n + h] * u_b[h * k + j];
            }
            u_result[i * k + j] = tmp;
        }
    }
}

int main(int argc, char const *argv[])
{
    int m=1000;
    int n=1000;
    int k=1000;
    
    //声明Event
    cudaEvent_t start, stop, stop2, stop3 , stop4 ;
    
    //创建Event
    CHECK(cudaEventCreate(&start));
    CHECK(cudaEventCreate(&stop));
    CHECK(cudaEventCreate(&stop2));

    //int *h_a, *h_b, *h_c, *h_cc;
    //CHECK(cudaMallocHost((void **) &h_a, sizeof(int)*m*n));
    //CHECK(cudaMallocHost((void **) &h_b, sizeof(int)*n*k));
    //CHECK(cudaMallocHost((void **) &h_c, sizeof(int)*m*k));
    //CHECK(cudaMallocHost((void **) &h_cc, sizeof(int)*m*k));

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            u_a[i * n + j] = rand() % 1024;
        }
    }

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < k; ++j) {
            u_b[i * k + j] = rand() % 1024;
        }
    }

    //int *d_a, *d_b, *d_c;
    //CHECK(cudaMalloc((void **) &d_a, sizeof(int)*m*n));
    //CHECK(cudaMalloc((void **) &d_b, sizeof(int)*n*k));
    //CHECK(cudaMalloc((void **) &d_c, sizeof(int)*m*k));

    // copy matrix A and B from host to device memory
    //CHECK(cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice));
    //CHECK(cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice));

    unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE;
    unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE;
    dim3 dimGrid(grid_cols, grid_rows);
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
   
    //开始start Event
    cudaEventRecord(start);
    //非阻塞模式
    cudaEventQuery(start);

    //gpu_matrix_mult<<<dimGrid, dimBlock>>>(d_a, d_b, d_c, m, n, k);   
    
    gpu_matrix_mult_shared<<<dimGrid, dimBlock>>>(u_a, u_b, u_c, m, n, k);  

    //开始stop Event
    cudaEventRecord(stop);
    //由于要等待核函数执行完毕,所以选择阻塞模式
    cudaEventSynchronize(stop);
    
    //计算时间 stop-start
    float elapsed_time;
    CHECK(cudaEventElapsedTime(&elapsed_time, start, stop));
    printf("start-》stop:Time = %g ms.\n", elapsed_time);

    
    //CHECK(cudaMemcpy(h_c, d_c, (sizeof(int)*m*k), cudaMemcpyDeviceToHost));
    //cudaThreadSynchronize();

    //开始stop2 Event
    CHECK(cudaEventRecord(stop2));
    //非阻塞模式
    //CHECK(cudaEventSynchronize(stop2));
    cudaEventQuery(stop2);

    //计算时间 stop-stop2
    float elapsed_time2;
    cudaEventElapsedTime(&elapsed_time2, stop, stop2);
    printf("stop-》stop2:Time = %g ms.\n", elapsed_time2);

    //销毁Event
    CHECK(cudaEventDestroy(start));
    CHECK(cudaEventDestroy(stop));
    CHECK(cudaEventDestroy(stop2));


    //CPU函数计算
    /*
    cpu_matrix_mult(u_a, u_b, u_cc, m, n, k);

    int ok = 1;
    for (int i = 0; i < m; ++i)
    {
        for (int j = 0; j < k; ++j)
        {
            if(fabs(u_cc[i*k + j] - u_c[i*k + j])>(1.0e-10))
            {
                
                ok = 0;
            }
        }
    }

    if(ok)
    {
        printf("Pass!!!\n");
    }
    else
    {
        printf("Error!!!\n");
    }
    */
    
    // free memory
    //cudaFree(d_a);
    //cudaFree(d_b);
    //cudaFree(d_c);
    //cudaFreeHost(h_a);
    //cudaFreeHost(h_b);
    //cudaFreeHost(h_c);
    return 0;
}

Zhang Xiaobai then noticed something odd:

Running the unified memory version several times on the Nano, the time is different on every run; sometimes faster than before, sometimes slower.

Zhang Xiaobai ran the pre-modification code again:

It also drifts around a bit...

What is going on? Zhang Xiaobai was completely baffled.

Note: teacher Huan pointed out that Jetson devices need some warm-up time; the clock frequencies only ramp up after the device has warmed up. You can run some other program first to get the GPU going, then run this code, and the inconsistent execution times may well disappear. Besides, we are only talking about a few seconds of work: the device has barely warmed up before it cools down again, like a sprinter who stops after 10 meters. How could his times be any good?

Better to learn from Liu Genghong (and his wife) and exercise every day, right, Nano?
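
Following that advice, a simple warm-up helper could look like this (my own sketch, not part of the course code): launch a throwaway kernel a few times and synchronize before recording the first cudaEvent, so the clocks have ramped up by the time the timed section starts. On Jetson, running `sudo jetson_clocks` beforehand pins the clocks to their maximum and removes most of the remaining variance.

__global__ void warmup_kernel() {}   // does nothing; we only want the launches

void warmup(int iterations)
{
    for (int i = 0; i < iterations; ++i)
        warmup_kernel<<<1, 32>>>();
    cudaDeviceSynchronize();         // make sure the warm-up work has actually executed
}

// usage: call warmup(10); right before cudaEventRecord(start);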

(II) Atomic Operations

CUDA atomic operations act on Global Memory or Shared Memory.

Why introduce atomic operations at all? From the earlier days of the camp we know:

  1. Shared Memory can be accessed (read and written) by all threads of the same block.

  2. Global Memory is essentially the video memory and can be accessed (read and written) by all threads.

So with either kind of memory, it is entirely possible for multiple threads to read and write the same memory location at the same time.

Suppose two threads are each performing a read-modify-write sequence. If the two sequences interleave, the result is garbage. For example, say a memory location holds the value 10 and two "add one" threads A and B both read it at the same time; each adds 1, A writes back 11, and B also writes back 11. The final value is 11, when the correct result is clearly 12.

The only fix is to bundle read-modify-write into a single logical operation that cannot be split and executes as if in sequence, succeeding in one go. Only then is every update to the variable guaranteed to produce the correct result.
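
Here is a small sketch of my own showing exactly that race and its atomic fix (the kernel names are made up, not from the course code):

#include <stdio.h>

__managed__ int counter_racy   = 0;
__managed__ int counter_atomic = 0;

__global__ void add_racy(int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) counter_racy = counter_racy + 1;   // read-modify-write: threads overwrite each other
}

__global__ void add_atomic(int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(&counter_atomic, 1);     // the three steps become one indivisible operation
}

int main()
{
    const int n = 1 << 20;
    add_racy<<<(n + 255) / 256, 256>>>(n);
    add_atomic<<<(n + 255) / 256, 256>>>(n);
    cudaDeviceSynchronize();

    printf("racy:   %d (almost certainly less than %d)\n", counter_racy, n);
    printf("atomic: %d (exactly %d)\n", counter_atomic, n);
    return 0;
}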

The commonly used atomic functions are as follows:
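
(The slide with the function list is not reproduced here; going by the CUDA Toolkit documentation, the commonly used ones include atomicAdd, atomicSub, atomicExch, atomicMin, atomicMax, atomicInc, atomicDec, atomicCAS, atomicAnd, atomicOr and atomicXor.)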

Most of these functions return the value the variable held before the atomic operation.

The atomic functions are overloaded for different data types and precisions. Take atomicAdd as an example:
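
Again going by the CUDA Toolkit documentation rather than the original slide, the main atomicAdd overloads are:

int                atomicAdd(int *address, int val);
unsigned int       atomicAdd(unsigned int *address, unsigned int val);
unsigned long long atomicAdd(unsigned long long *address, unsigned long long val);
float              atomicAdd(float *address, float val);
double             atomicAdd(double *address, double val);   // requires compute capability 6.0 or higher

Each returns the old value that *address held before the addition.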

Time to practice! (Honestly, perhaps because Zhang Xiaobai cannot think of much more to write.)

(a) Exercise 1: sum an integer array of 10 million elements

For summing all the elements of a vector, lecturer He provided a framework and explained its principle in the slides. It looks fairly involved; he illustrated the process with an example that sums just 32 values:

The code is as follows:

sum.cu

#include<stdio.h>
#include<stdint.h>
#include<time.h>     //for time()
#include<stdlib.h>   //for srand()/rand()
#include<sys/time.h> //for gettimeofday()/struct timeval
#include"error.cuh"

#define N 10000000
#define BLOCK_SIZE 256
#define BLOCKS ((N + BLOCK_SIZE - 1) / BLOCK_SIZE) 


__managed__ int source[N];               //input data
__managed__ int final_result[1] = {0};   //scalar output

__global__ void _sum_gpu(int *input, int count, int *output)
{
    __shared__ int sum_per_block[BLOCK_SIZE];

    int temp = 0;
    for (int idx = threadIdx.x + blockDim.x * blockIdx.x;
         idx < count;
	 idx += gridDim.x * blockDim.x
	)
    {
        temp += input[idx];
    }

    sum_per_block[threadIdx.x] = temp;  //the per-thread partial sum is temp!
    __syncthreads();

    //**********shared memory summation stage***********
    for (int length = BLOCK_SIZE / 2; length >= 1; length /= 2)
    {
        int double_kill = -1;
	if (threadIdx.x < length)
	{
	    double_kill = sum_per_block[threadIdx.x] + sum_per_block[threadIdx.x + length];
	}
	__syncthreads();  //why we need two __syncthreads() here, and,
	
	if (threadIdx.x < length)
	{
	    sum_per_block[threadIdx.x] = double_kill;
	}
	__syncthreads();  //....here ?
	
    } //the per-block partial sum is sum_per_block[0]

    if (blockDim.x * blockIdx.x < count) //in case that our users are naughty
    {
        //the final reduction performed by atomicAdd()
        if (threadIdx.x == 0) atomicAdd(output, sum_per_block[0]);
    }
}

int _sum_cpu(int *ptr, int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
    {
        sum += ptr[i];
    }
    return sum;
}

void _init(int *ptr, int count)
{
    uint32_t seed = (uint32_t)time(NULL); //make huan happy
    srand(seed);  //reseeding the random generator

    //filling the buffer with random data
    for (int i = 0; i < count; i++) ptr[i] = rand();
}

double get_time()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((double)tv.tv_usec * 0.000001 + tv.tv_sec);
}

int main()
{
    //**********************************
    fprintf(stderr, "filling the buffer with %d elements...\n", N);
    _init(source, N);

    //**********************************
    //Now we are going to kick start your kernel.
    cudaDeviceSynchronize(); //steady! ready! go!
    
    fprintf(stderr, "Running on GPU...\n");
    
double t0 = get_time();
    _sum_gpu<<<BLOCKS, BLOCK_SIZE>>>(source, N, final_result);
    CHECK(cudaGetLastError());  //checking for launch failures
    CHECK(cudaDeviceSynchronize()); //checking for run-time failurs
double t1 = get_time();

    int A = final_result[0];
    fprintf(stderr, "GPU sum: %u\n", A);


    //**********************************
    //Now we are going to exercise your CPU...
    fprintf(stderr, "Running on CPU...\n");

double t2 = get_time();
    int B = _sum_cpu(source, N);
double t3 = get_time();
    fprintf(stderr, "CPU sum: %u\n", B);

    //******The last judgement**********
    if (A == B)
    {
        fprintf(stderr, "Test Passed!\n");
    }
    else
    {
        fprintf(stderr, "Test failed!\n");
	exit(-1);
    }
    
    //****and some timing details*******
    fprintf(stderr, "GPU time %.3f ms\n", (t1 - t0) * 1000.0);
    fprintf(stderr, "CPU time %.3f ms\n", (t3 - t2) * 1000.0);

    return 0;
}	
	

Since the principle is a little involved, here is how Zhang Xiaobai thinks about it:

The code above is really a framework for letting the GPU sweep over all the elements, following a divide-and-conquer idea:

The threads of a block each take care of several data points, and those points are reduced to a single scalar, so every block ends up with one scalar result. But there are many blocks, and the array/vector formed by those per-block results still has to be reduced a second time down to one final scalar.

So there are two reduction steps: the first is a parallel folding (halving) reduction in shared memory, and the second simply uses the atomic function atomicAdd. After both steps we are left with a single value.
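
This is also a good place to answer the question left in the kernel comments about why two __syncthreads() calls are needed in each round of the folding loop. The same loop again, with my own explanatory comments added:

for (int length = BLOCK_SIZE / 2; length >= 1; length /= 2)
{
    int double_kill = -1;
    if (threadIdx.x < length)
        double_kill = sum_per_block[threadIdx.x] + sum_per_block[threadIdx.x + length];

    __syncthreads();  // barrier 1: every thread has finished READING the upper half
                      // before any thread overwrites the lower half below

    if (threadIdx.x < length)
        sum_per_block[threadIdx.x] = double_kill;

    __syncthreads();  // barrier 2: all WRITES of this round are visible before the
                      // next, shorter round starts reading sum_per_block again
}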

Let's run it and see:

The CPU and GPU sums agree, which shows that this sweep-all-elements framework works.

Now look at the performance:

(b) Exercise 2: find the maximum and minimum of an integer array of 10 million elements

Building on the analysis in Exercise 1, the approach for this problem is:

Again two reduction steps: the first a parallel folding (halving) reduction, the second using the atomic operations atomicMax and atomicMin. After both steps we are left with the single maximum and minimum values.

So, stacking the building blocks, we turn one sum into one max plus one min. The code changes as follows:

min_or_max.cu

#include<stdio.h>
#include<stdint.h>
#include<time.h>     //for time()
#include<stdlib.h>   //for srand()/rand()
#include<sys/time.h> //for gettimeofday()/struct timeval
#include"error.cuh"

#define N 10000000
#define BLOCK_SIZE 256
#define BLOCKS ((N + BLOCK_SIZE - 1) / BLOCK_SIZE) 


__managed__ int source[N];               //input data
//__managed__ int final_result[2] = {INT_MIN,INT_MAX};   //scalar output
__managed__ int final_result_max = INT_MIN;   //scalar output
__managed__ int final_result_min = INT_MAX;   //scalar output

__global__ void _sum_min_or_max(int *input, int count, int *max_output, int *min_output)
{
    __shared__ int max_per_block[BLOCK_SIZE];
    __shared__ int min_per_block[BLOCK_SIZE];

    int max_temp = 0;
    int min_temp = 0;
    for (int idx = threadIdx.x + blockDim.x * blockIdx.x;
         idx < count;
         idx += gridDim.x * blockDim.x
	)
    {
        //temp += input[idx];
        max_temp = (input[idx] > max_temp) ? input[idx] :max_temp;
        min_temp = (input[idx] < min_temp) ? input[idx] :min_temp;
    }

    max_per_block[threadIdx.x] = max_temp;  //the per-thread partial max is temp!
    min_per_block[threadIdx.x] = min_temp;  //the per-thread partial max is temp!
    
    __syncthreads();

    //**********shared memory summation stage***********
    for (int length = BLOCK_SIZE / 2; length >= 1; length /= 2)
    {
        int max_double_kill = -1;
        int min_double_kill = -1;
        
        if (threadIdx.x < length)
        {
            max_double_kill = (max_per_block[threadIdx.x] > max_per_block[threadIdx.x + length]) ? max_per_block[threadIdx.x] : max_per_block[threadIdx.x + length];
            min_double_kill = (min_per_block[threadIdx.x] < min_per_block[threadIdx.x + length]) ? min_per_block[threadIdx.x] : min_per_block[threadIdx.x + length];
        }
        __syncthreads();  //why we need two __syncthreads() here, and,
	
        if (threadIdx.x < length)
        {
            max_per_block[threadIdx.x] = max_double_kill;
            min_per_block[threadIdx.x] = min_double_kill;
        }
        __syncthreads();  //....here ?
	
    } //the per-block partial sum is sum_per_block[0]

    if (blockDim.x * blockIdx.x < count) //in case that our users are naughty
    {
        //the final reduction performed by atomicAdd()
        //if (threadIdx.x == 0) atomicAdd(output, max_per_block[0]);
        if (threadIdx.x == 0) atomicMax(max_output, max_per_block[0]);
        if (threadIdx.x == 0) atomicMin(min_output, min_per_block[0]);
    }
}

int _max_min_cpu(int *ptr, int count, int *max1, int *min1)
{
    int max = INT_MIN;
    int min = INT_MAX;
    
    for (int i = 0; i < count; i++)
    {
        //sum += ptr[i];
        max = (ptr[i] > max)? ptr[i]:max;
        min = (ptr[i] < min)? ptr[i]:min;
        
    }
    
    //printf(" CPU max = %d\n", max);
    //printf(" CPU min = %d\n", min);
    
    *max1 = max;
    *min1 = min;
    
    return 0;
}



void _init(int *ptr, int count)
{
    uint32_t seed = (uint32_t)time(NULL); //make huan happy
    //srand(seed);  //reseeding the random generator

    //filling the buffer with random data
    for (int i = 0; i < count; i++) 
    {
        //ptr[i] = rand() % 100000000;
        ptr[i] = rand() ;
        if (i % 2 == 0) ptr[i] = 0 - ptr[i] ;
    }
      
   
}

double get_time()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((double)tv.tv_usec * 0.000001 + tv.tv_sec);
}

int main()
{
    //**********************************
    fprintf(stderr, "filling the buffer with %d elements...\n", N);
    _init(source, N);

    //**********************************
    //Now we are going to kick start your kernel.
    cudaDeviceSynchronize(); //steady! ready! go!
    
    fprintf(stderr, "Running on GPU...\n");
    
double t0 = get_time();

    _sum_min_or_max<<<BLOCKS, BLOCK_SIZE>>>(source, N, &final_result_max, &final_result_min);
    
    CHECK(cudaGetLastError());  //checking for launch failures
    CHECK(cudaDeviceSynchronize()); //checking for run-time failures
    
double t1 = get_time();

    //int A = final_result[0];
    fprintf(stderr, " GPU max: %d\n", final_result_max);
    fprintf(stderr, " GPU min: %d\n", final_result_min);

    //**********************************
    //Now we are going to exercise your CPU...
    fprintf(stderr, "Running on CPU...\n");

double t2 = get_time();

    int cpu_max=0;
    int cpu_min=0;

    int B = _max_min_cpu(source, N, &cpu_max, &cpu_min);
    printf(" CPU max = %d\n", cpu_max);
    printf(" CPU min = %d\n", cpu_min);
    
double t3 = get_time();

    //fprintf(stderr, "CPU sum: %u\n", B);

    //******The last judgement**********
    if ( final_result_max == cpu_max   &&  final_result_min == cpu_min  )
    {
        fprintf(stderr, "Test Passed!\n");
    }
    else
    {
        fprintf(stderr, "Test failed!\n");
	exit(-1);
    }
    
    //****and some timing details*******
    fprintf(stderr, "GPU time %.3f ms\n", (t1 - t0) * 1000.0);
    fprintf(stderr, "CPU time %.3f ms\n", (t3 - t2) * 1000.0);

    return 0;
}	
	

A few points worth highlighting:

(1) When initializing the maximum variable final_result_max, give it the smallest possible value, INT_MIN; when initializing the minimum variable final_result_min, give it the largest possible value, INT_MAX. That way each is guaranteed to lose the comparison and be replaced by real data. Swap the two by accident and everything falls apart; feel free to try it.

(2) When generating the 10 million random numbers, Zhang Xiaobai took teacher He's advice and made every other number negative, so the minimum does not trivially come out as 0 every time.

Compile and run:

The CPU and GPU results agree. See? Simple enough.

In the code above Zhang Xiaobai was lazy and used two managed variables to hold the results. Looking ahead at the homework, there is still a problem of "finding the top 10 largest values among 10 million numbers", for which an array feels like a better fit and might allow a seamless upgrade. So Zhang Xiaobai made the following changes:

#include<stdio.h>
#include<stdint.h>
#include<time.h>     //for time()
#include<stdlib.h>   //for srand()/rand()
#include<sys/time.h> //for gettimeofday()/struct timeval
#include"error.cuh"

#define N 10000000
#define BLOCK_SIZE 256
#define BLOCKS ((N + BLOCK_SIZE - 1) / BLOCK_SIZE) 


__managed__ int source[N];               //input data
__managed__ int final_result[2] = {INT_MIN,INT_MAX};   //scalar output
//__managed__ int final_result_max = INT_MIN;   //scalar output
//__managed__ int final_result_min = INT_MAX;   //scalar output

//__global__ void _sum_min_or_max(int *input, int count, int *max_output, int *min_output)
__global__ void _sum_min_or_max(int *input, int count,int *output)
{
    __shared__ int max_per_block[BLOCK_SIZE];
    __shared__ int min_per_block[BLOCK_SIZE];

    int max_temp = 0;
    int min_temp = 0;
    for (int idx = threadIdx.x + blockDim.x * blockIdx.x;
         idx < count;
         idx += gridDim.x * blockDim.x
	)
    {
        //temp += input[idx];
        max_temp = (input[idx] > max_temp) ? input[idx] :max_temp;
        min_temp = (input[idx] < min_temp) ? input[idx] :min_temp;
    }

    max_per_block[threadIdx.x] = max_temp;  //the per-thread partial max is temp!
    min_per_block[threadIdx.x] = min_temp;  //the per-thread partial max is temp!
    
    __syncthreads();

    //**********shared memory summation stage***********
    for (int length = BLOCK_SIZE / 2; length >= 1; length /= 2)
    {
        int max_double_kill = -1;
        int min_double_kill = -1;
        
        if (threadIdx.x < length)
        {
            max_double_kill = (max_per_block[threadIdx.x] > max_per_block[threadIdx.x + length]) ? max_per_block[threadIdx.x] : max_per_block[threadIdx.x + length];
            min_double_kill = (min_per_block[threadIdx.x] < min_per_block[threadIdx.x + length]) ? min_per_block[threadIdx.x] : min_per_block[threadIdx.x + length];
        }
        __syncthreads();  //why we need two __syncthreads() here, and,
	
        if (threadIdx.x < length)
        {
            max_per_block[threadIdx.x] = max_double_kill;
            min_per_block[threadIdx.x] = min_double_kill;
        }
        __syncthreads();  //....here ?
	
    } //the per-block partial sum is sum_per_block[0]

    if (blockDim.x * blockIdx.x < count) //in case that our users are naughty
    {
        //the final reduction performed by atomicAdd()
        //if (threadIdx.x == 0) atomicAdd(output, max_per_block[0]);
        //if (threadIdx.x == 0) atomicMax(max_output, max_per_block[0]);
        //if (threadIdx.x == 0) atomicMin(min_output, min_per_block[0]);
        if (threadIdx.x == 0) atomicMax(&output[0], max_per_block[0]);
        if (threadIdx.x == 0) atomicMin(&output[1], min_per_block[0]);
    }
}

int _max_min_cpu(int *ptr, int count, int *max1, int *min1)
{
    int max = INT_MIN;
    int min = INT_MAX;
    
    for (int i = 0; i < count; i++)
    {
        //sum += ptr[i];
        max = (ptr[i] > max)? ptr[i]:max;
        min = (ptr[i] < min)? ptr[i]:min;
        
    }
    
    //printf(" CPU max = %d\n", max);
    //printf(" CPU min = %d\n", min);
    
    *max1 = max;
    *min1 = min;
    
    return 0;
}



void _init(int *ptr, int count)
{
    uint32_t seed = (uint32_t)time(NULL); //make huan happy
    srand(seed);  //reseeding the random generator

    //filling the buffer with random data
    for (int i = 0; i < count; i++) 
    {
        //ptr[i] = rand() % 100000000;
        ptr[i] = rand() ;
        if (i % 2 == 0) ptr[i] = 0 - ptr[i] ;
    }
      
   
}

double get_time()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((double)tv.tv_usec * 0.000001 + tv.tv_sec);
}

int main()
{
    //**********************************
    fprintf(stderr, "filling the buffer with %d elements...\n", N);
    _init(source, N);

    //**********************************
    //Now we are going to kick start your kernel.
    cudaDeviceSynchronize(); //steady! ready! go!
    
    fprintf(stderr, "Running on GPU...\n");
    
double t0 = get_time();

    //_sum_min_or_max<<<BLOCKS, BLOCK_SIZE>>>(source, N, &final_result_max, &final_result_min);
    _sum_min_or_max<<<BLOCKS, BLOCK_SIZE>>>(source, N,final_result);
    
    CHECK(cudaGetLastError());  //checking for launch failures
    CHECK(cudaDeviceSynchronize()); //checking for run-time failures
    
double t1 = get_time();

    //int A = final_result[0];
    //fprintf(stderr, " GPU max: %d\n", final_result_max);
    //fprintf(stderr, " GPU min: %d\n", final_result_min);
    fprintf(stderr, " GPU max: %d\n", final_result[0]);
    fprintf(stderr, " GPU min: %d\n", final_result[1]);

    //**********************************
    //Now we are going to exercise your CPU...
    fprintf(stderr, "Running on CPU...\n");

double t2 = get_time();

    int cpu_max=0;
    int cpu_min=0;

    int B = _max_min_cpu(source, N, &cpu_max, &cpu_min);
    printf(" CPU max = %d\n", cpu_max);
    printf(" CPU min = %d\n", cpu_min);
    
double t3 = get_time();

    //fprintf(stderr, "CPU sum: %u\n", B);

    //******The last judgement**********
    //if ( final_result_max == cpu_max   &&  final_result_min == cpu_min  )
    if ( final_result[0] == cpu_max   &&  final_result[1] == cpu_min  )
    {
        fprintf(stderr, "Test Passed!\n");
    }
    else
    {
        fprintf(stderr, "Test failed!\n");
	exit(-1);
    }
    
    //****and some timing details*******
    fprintf(stderr, "GPU time %.3f ms\n", (t1 - t0) * 1000.0);
    fprintf(stderr, "CPU time %.3f ms\n", (t3 - t2) * 1000.0);

    return 0;
}	
	

Changes were made in the following places:

The definition:

__managed__ int final_result[2] = {INT_MIN,INT_MAX};   //scalar output
 

The kernel declaration:

__global__ void _sum_min_or_max(int *input, int count,int *output)

The kernel body:

if (threadIdx.x == 0) atomicMax(&output[0], max_per_block[0]);         
if (threadIdx.x == 0) atomicMin(&output[1], min_per_block[0]); 

And the kernel launch:

_sum_min_or_max<<<BLOCKS, BLOCK_SIZE>>>(source, N,final_result);

Those are the only places that changed.

Compile and run:

(Running on the Quadro P1000)

(Running on the Nano)

It runs fine, but the GPU time (81 ms) appears slower than the CPU time (22 ms). On the Nano the comparison is GPU 154 ms versus CPU 126 ms, so in these results the GPU really has no speed advantage. Why is that?

Workloads fall into categories such as memory-bound and compute-bound. Whether it is the sum or the max/min, these are memory-bound computations. Unless you have a discrete GPU and prefetch or transfer the data into video memory ahead of time, the GPU gains no advantage: either the managed data migrates automatically on demand, or (on Jetson) the GPU enjoys exactly the same memory bandwidth as the CPU.
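
A rough back-of-the-envelope estimate (my own, so take the numbers as illustrative): summing N = 10,000,000 ints means reading 40 MB while performing about 10^7 integer additions, i.e. one add for every 4 bytes loaded. At that arithmetic intensity any processor spends essentially all of its time waiting on memory, so the contest comes down to whose effective memory path is faster; on Jetson the CPU and GPU share the same DRAM, and on a discrete card the managed data first has to migrate across PCIe, which is exactly why the GPU shows no advantage here.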

So, to what kind of f(a,b) operation would the computation have to be generalized for the GPU to show a clear advantage: so that even on a Jetson, where the CPU and GPU have the same memory bandwidth, or even when forced through the slow PCIe path, the GPU still computes far faster than the CPU?

That question is left for you to ponder! Word has it that reading teacher Fan Zheyong's little red book, "CUDA Programming: Basics and Practice", will point you toward the answer~~

(To be continued)

BTW: teacher Huanhuan made a huge contribution to this article, but any mistakes are entirely the author's own.
