A detailed explanation of CUDA threads, blocks, and grids

CUDA wiki reference:
https://en.wikipedia.org/wiki/CUDA#Limitations
As the title suggests, this material loses something in translation, so it is kept here in the original English.

Scroll to 'The important part' below if you are already familiar with threads, blocks, and warps:

(I have no experience with the Kepler architecture, yet. Some of these numbers may be different if not using Fermi.)

A few terms need to be defined before the next section. The following terms relate to logical threads (logical as in software constructs):

thread – a single thread of execution.
block – a group of multiple threads that execute the same kernel.
grid – a group of blocks.
The following terms relate to the physical (physical as in hardware architecture dependent) threads:

core – a single compute core, one core runs exactly one instruction at a time.
warp – a group of threads that execute in parallel on the hardware, a warp consists of 32 threads on current generation CUDA hardware.
Kernels are executed by one or more Streaming Multiprocessors (SM). A typical mid-to-high-end GeForce card from the Fermi family (GeForce 400 and GeForce 500 series) has 8-16 SMs on a single GPU [Fermi whitepaper]. Each SM consists of 32 CUDA Cores (cores). Threads are scheduled for execution by the warp schedulers; each SM has two warp scheduler units. The smallest unit that a warp scheduler can schedule is called a warp, which consists of 32 threads on all CUDA hardware released so far at the time of writing. Each of the two warp schedulers selects a warp and issues one instruction from it, so up to two warps can be in flight on each SM at a time.

Threads in CUDA are much more lightweight than CPU threads, and context switches are cheaper. All threads of a warp execute the same instruction, or have to wait while the other threads in the warp execute it. This is called Single Instruction Multiple Thread (SIMT) and is similar to traditional CPU Single Instruction Multiple Data (SIMD) instruction sets such as SSE, AVX, NEON, AltiVec etc. This has consequences when using conditional statements, as described further down.

To allow for problems which demand more than 32 threads, CUDA threads are arranged into logical groups called blocks and grids, with sizes defined by the software developer. A block is a 3-dimensional collection of threads; each thread in the block has its own individual 3-dimensional identification number, allowing the developer to distinguish between the threads in kernel code. Threads within a single block can share data through shared memory, which reduces the load on global memory. Shared memory has much lower latency than global memory but is a limited resource: on Fermi the user can choose (per SM) between 16 kB shared memory and 48 kB L1 cache, or 48 kB shared memory and 16 kB L1 cache.

Several blocks of threads in turn can be grouped into a grid. Grids are 3-dimensional arrays of blocks. The maximum block size is tied to the available hardware resources while the grids can be of (almost) arbitrary size. Blocks within a grid can only share data through global memory, which is the on-GPU memory which has the highest latency.

A Fermi GPU can have 48 warps (1536 threads) active at once per SM, given that the threads use little enough local and shared memory to fit all at the same time. Context switches between threads are fast since registers are allocated to the threads and hence there is no need for saving and restoring registers and shared memory between thread switches. The result is that it is actually desired to over- allocate the hardware since it will hide memory stalls inside the kernels by letting the warp schedulers switch the currently active warp whenever a stall occurs.

The important part
The thread warp is a hardware group of threads that execute on the same Streaming Multiprocessor (SM). The threads of a warp can be thought of as sharing a common program counter, hence all threads must execute the same line of program code. If the code has branching statements such as if … then … else, the warp must first execute the threads that enter the first block while the other threads of the warp wait; next, the threads that enter the next block execute while the other threads wait, and so on. Because of this behaviour, conditional statements should be avoided in GPU code if possible. When threads of a warp follow different lines of execution, they are known as divergent threads. While conditional blocks should be kept to a minimum inside CUDA kernels, it is sometimes possible to reorder statements so that all threads of the same warp follow only a single path of execution in an if … then … else block, mitigating this limitation.

The while and for statements are also branching statements, so this issue is not limited to if.

The following covers SPs and warps:
For the GTX 970 there are 13 Streaming Multiprocessors (SM) with 128 Cuda Cores each. Cuda Cores are also called Stream Processors (SP).

You can define grids, which map blocks to the GPU.

You can define blocks, which map threads to Stream Processors (the 128 CUDA Cores per SM).

One warp is always formed by 32 threads, and all threads of a warp are executed simultaneously.

To use the full power of a GPU you need many more threads per SM than the SM has SPs. For each Compute Capability there is a certain number of threads which can reside in one SM at a time. All blocks you define are queued and wait for an SM to have the resources (number of SPs free); then the block is loaded onto it and the SM starts executing its warps. Since one warp has only 32 threads and an SM has, for example, 128 SPs, an SM can execute 4 warps at a given time. The catch is that if a thread does a memory access, it blocks until its memory request is satisfied. In numbers: an arithmetic operation on the SP has a latency of 18-22 cycles, while a non-cached global memory access can take up to 300-400 cycles. This means that if the threads of one warp are waiting for data, only a subset of the 128 SPs would be working. Therefore the scheduler switches to executing another warp if one is available, and if that warp blocks it executes the next, and so on. This concept is called latency hiding. The number of warps and the block size determine the occupancy (how many warps the SM can choose from to execute). If the occupancy is high, it is less likely that there is no work for the SPs.
