Improving Reduction Summation on the GPU Using CUDA

   We should never be satisfied with a program that merely runs correctly. The reduction summation program described in the previous blog post needs to be optimized.

1. Make the best use of the hardware, and don't forget the CPU!

   During the second stage of the reduction summation, the amount of data left to process has been greatly reduced by the time the second kernel runs: it equals the number of threads per block. Given the architectural differences between the two devices, a CPU is designed to run a small number of potentially quite complex tasks, while a GPU is designed to run a large number of potentially quite simple tasks. When only a small amount of data remains, don't forget the CPU, which can sum it much faster than the GPU.

   We can therefore delete the second kernel, copy each block's partial sum back to the host, and add them up on the CPU:

// Copy only the per-block partial sums back to the host
cudaMemcpy(a, dev_a, BlockPerGrid * sizeof(int), cudaMemcpyDeviceToHost);

// Finish the reduction on the CPU
int c = 0;
for (int i = 0; i < BlockPerGrid; i++)
{
    c += a[i];
}
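   For completeness, here is a minimal sketch of the per-block kernel that could produce those partial sums. The names (reduce_per_block, dev_in, cache) are assumptions for illustration, and it assumes the block size is a power of two; the actual kernel from the previous post may differ.

// Hypothetical per-block reduction kernel: each block writes one partial sum.
__global__ void reduce_per_block(const int *dev_in, int *dev_a, int n)
{
    extern __shared__ int cache[];          // one int per thread, sized at launch
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    cache[tid] = (i < n) ? dev_in[i] : 0;   // out-of-range threads contribute zero
    __syncthreads();

    // Tree reduction within the block
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1)
    {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        dev_a[blockIdx.x] = cache[0];       // one partial sum per block
}

   Launched as reduce_per_block<<<BlockPerGrid, threadsPerBlock, threadsPerBlock * sizeof(int)>>>(dev_in, dev_a, N), each block leaves one partial sum in dev_a for the CPU loop above to finish.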

2. The appropriate number of threads per block: more is not always better

   If there are too few threads, the GPU cannot hide memory latency: it relies on having plenty of resident threads to switch to while others wait on memory. So we should not choose too few threads per block.

   However, the thread count cuts both ways once the kernel contains synchronization points. The number of threads per block is not a case of the more, the better.

   The time to execute a given block is undefined, and a block cannot be retired from an SM until it has completed its entire execution. Sometimes all the other warps sit idle, waiting for a single warp to complete, which leaves the SM idle as well.

   It follows that the larger the thread block, the greater the potential for waiting on a slow warp to catch up. As a general rule, a value of 256 gets you 100% utilization across all levels of the hardware, so aim for either 192 or 256. Alternatively, look up the utilization table for your device and select the smallest number of threads that gives the highest device utilization.
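   For instance, a launch configuration along these lines is typical (this reuses the hypothetical reduce_per_block kernel sketched in section 1; N is the element count):

const int threadsPerBlock = 256;  // a solid default across hardware generations
const int BlockPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;  // round up to cover all N elements
reduce_per_block<<<BlockPerGrid, threadsPerBlock, threadsPerBlock * sizeof(int)>>>(dev_in, dev_a, N);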

3. Not too many branches

   The hardware can fetch only a single instruction stream per warp, so when a branch appears, the threads that fail the condition stall, and device utilization drops. However, the scheduler actually issues instructions per half warp, not per full warp. We can therefore arrange for the divergence to fall on a half-warp (16-thread) boundary, which lets it execute both sides of the branch condition:

if ((thread_idx % 32) < 16)
{
    // first half warp: one side of the branch
}
else
{
    // second half warp: the other side of the branch
}

   However, this only works when the data is contiguous in memory. Sometimes we can pad the array with trailing zeros, as the previous post mentioned, out to a length that is a multiple of 32. That helps keep the number of divergent branches to a minimum.
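   A minimal host-side sketch of that padding step, assuming the raw input is an int array raw_data of length n (padded_length and raw_data are made-up names for illustration):

#include <stdlib.h>
#include <string.h>

// Round n up to the next multiple of 32 (one full warp).
// The helper name padded_length is illustrative, not from the original post.
static int padded_length(int n)
{
    return (n + 31) & ~31;
}

int padded = padded_length(n);
int *host_in = (int *)calloc(padded, sizeof(int));  // calloc zero-fills the tail
memcpy(host_in, raw_data, n * sizeof(int));
// The trailing zeros contribute nothing to the sum, so every warp processes
// a full 32 elements and no warp straddles the end of the data.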

Reposted from: https://www.cnblogs.com/little-hobbit/p/4488958.html
