- Title: Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation Aware Neural Architecture Search
- Year: 2019
- Venue: DAC (IEEE/ACM Design Automation Conference)
- Institution: Jingtong Hu's group, University of Pittsburgh
1 Abbreviations & References
- NAS: Neural Architecture Search
- TS: timing specifications
- IFM: input feature map
- OFM: output feature map
- PS: processing system
- PL: programmable logic
- Maximizing CNN Accelerator Efficiency Through Resource Partitioning (Shen et al., ISCA 2017)
- Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks (Zhang et al., FPGA 2015)
2 Abstract & Introduction & Background
The motivation: current NAS is extremely compute-intensive, and existing search methods do not take hardware latency into account.
Main contributions:
- FNAS, an FPGA-implementation-aware neural architecture search framework
- A graph-based model for analyzing FPGA inference latency
- A scheduling strategy for mapping the network across multiple FPGAs
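To make the latency-analysis contribution concrete, here is a minimal sketch of how a per-layer FPGA latency estimate can be computed, in the roofline-model spirit of the Zhang et al. paper referenced above. All function names, unroll factors (`tm`, `tn`), and layer shapes below are illustrative assumptions, not values from the FNAS paper.

```python
# Hypothetical per-layer latency estimate for a loop-unrolled conv accelerator.
# Assumption: each cycle the PL performs tm * tn multiply-accumulates
# (tm = output-channel unroll factor, tn = input-channel unroll factor).

def conv_latency_cycles(ofm_h, ofm_w, ofm_c, ifm_c, k, tm=64, tn=16):
    """Compute-bound cycle count for one k x k conv layer."""
    macs = ofm_h * ofm_w * ofm_c * ifm_c * k * k
    return macs // (tm * tn)

def network_latency_ms(layers, freq_mhz=200):
    # Assumption: layers run sequentially on a single FPGA at freq_mhz.
    cycles = sum(conv_latency_cycles(**layer) for layer in layers)
    return cycles / (freq_mhz * 1e3)  # cycles / (MHz * 1e3) -> milliseconds

# Illustrative two-layer network (shapes are made up).
layers = [
    dict(ofm_h=56, ofm_w=56, ofm_c=64, ifm_c=64, k=3),
    dict(ofm_h=28, ofm_w=28, ofm_c=128, ifm_c=64, k=3),
]
print(round(network_latency_ms(layers), 3))  # -> 0.847
```

A NAS controller can call such an estimator on every candidate architecture and reject those violating the timing specification (TS), which is the kind of hardware feedback FNAS adds to the search loop.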
Two directions for searching network architectures