# Writing Cache-friendly Code

In the previous essay, Exhibiting Good Locality in Your Programs, we presented two functions, sumarrayrows and sumarraycols. There we saw that sumarrayrows has a stride-1 reference pattern (it visits each element of the array sequentially), whereas sumarraycols has a stride-N reference pattern (it visits every Nth element of the contiguous array). In this essay, we will show how to quantify the idea of locality in terms of cache hits and cache misses.
In general, if a cache has a block size of B bytes, then a stride-k reference pattern (where k is expressed in words) results in an average of min(1, (wordsize × k) / B) misses per loop iteration. This is minimized for k = 1.
Take sumarrayrows for example:
```c
int sumarrayrows(int a[M][N])
{
    int sum = 0;
    for (int i = 0; i != M; ++i)
        for (int j = 0; j != N; ++j)
            sum += a[i][j];
    return sum;
}
```
Since C stores arrays in row-major order, the inner loop of this function has a desirable stride-1 access pattern. Suppose that a is block aligned, words are 4 bytes, cache blocks are 4 words, and the cache is initially empty (a cold cache). Then the references to the array a result in the following pattern of hits and misses:

| Reference | a[0][0] | a[0][1] | a[0][2] | a[0][3] | a[0][4] | a[0][5] | a[0][6] | a[0][7] | ... |
|-----------|---------|---------|---------|---------|---------|---------|---------|---------|-----|
| Result    | miss    | hit     | hit     | hit     | miss    | hit     | hit     | hit     | ... |

In this example, the reference to a[0][0] misses, and the corresponding block, which contains a[0][0] through a[0][3], is loaded into the cache from memory. Thus, the next three references are all hits. The reference to a[0][4] causes another miss as a new block is loaded into the cache, the next three references are hits, and so on. In general, three out of four references will hit, which is the best we can do in this case with a cold cache.
But consider what happens if we make the seemingly innocuous change of permuting the loops, as in sumarraycols:
```c
int sumarraycols(int a[M][N])
{
    int sum = 0;
    for (int j = 0; j != N; ++j)
        for (int i = 0; i != M; ++i)
            sum += a[i][j];
    return sum;
}
```
In this case, we scan the array column by column instead of row by row. If we are lucky and the entire array fits in the cache, then we will enjoy the same miss rate of 1/4. However, if the array is larger than the cache (the more likely case), then each and every access of a[i][j] will miss!

Higher miss rates can have a significant impact on running time. For example, on our desktop machine, sumarrayrows runs twice as fast as sumarraycols. To summarize, the two functions illustrate two important points about writing cache-friendly code:
1. Repeated references to local variables are good because the compiler can cache them in the register file (temporal locality).
2. Stride-1 reference patterns are good because caches at all levels of the memory hierarchy store data as contiguous blocks (spatial locality).