Reading Summary
DaDianNao: A Machine-Learning Supercomputer – MICRO 2014
Abstract:
We show that, on a subset of the largest known neural network layers, it
is possible to achieve a speedup of 450.65x over a GPU, and
reduce the energy by 150.31x on average for a 64-chip system.
III. THE GPU OPTION
1) Their (area) cost is high, because of both the number of hardware operators and the need to remain reasonably general-purpose.
2) The total execution time remains large (up to 18.03 seconds for the largest layer, CLASS1).
3) The GPU energy efficiency is moderate.
IV. THE ACCELERATOR OPTION
3 mm² at 65 nm, 0.98 GHz
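As a rough sanity check, the abstract's average 64-chip speedup can be combined with the GPU runtime quoted in Section III to estimate the accelerator's runtime on the largest layer. This assumes the average speedup applies to CLASS1 specifically, which is an assumption; the paper reports per-layer results separately.

```python
# Back-of-envelope estimate (assumption: the 64-chip *average* speedup
# from the abstract applies to the CLASS1 layer from Section III).
gpu_time_s = 18.03    # GPU time for the largest layer, CLASS1 (Sec. III)
speedup = 450.65      # average 64-chip speedup over the GPU (Abstract)

accel_time_ms = gpu_time_s / speedup * 1e3
print(f"implied accelerator time: {accel_time_ms:.1f} ms")  # ~40 ms
```

A seconds-scale GPU layer collapsing to tens of milliseconds is consistent with the paper's framing of the 64-chip system as supercomputer-class for these workloads.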