- 【Energy Efficiency & Low Power】AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning
- 【Processing In/Near Memory】TRiM: Enhancing Processor-Memory Interfaces with Scalable Tensor Reduction in Memory
- 【Accelerators】PointAcc: Efficient Point Cloud Accelerator
- 【Accelerators】Equinox: Training (for Free) on a Custom Inference Accelerator
- 【Accelerators】EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference
- 【Accelerators】RecPipe: Co-Designing Models and Hardware to Jointly Optimize Recommendation Quality and Performance
- 【Accelerators】Shift-BNN: Highly-Efficient Probabilistic Bayesian Neural Network Training via Memory-Friendly Pattern Retrieving
- 【Sparse Processing】Distilling Bit-Level Sparsity Parallelism for General Purpose Deep Learning Acceleration
- 【Sparse Processing】Sanger: A Co-Design Framework for Enabling Sparse Attention using Reconfigurable Architecture
- 【Sparse Processing】ESCALATE: Boosting the Efficiency of Sparse CNN Accelerator with Kernel Decomposition
- 【Virtual Memory & Prefetching】Pythia: A Customizable Hardware Prefetching Framework Using Online Reinforcement Learning
Selected papers from MICRO'21 (of interest)
First published 2022-02-17 13:09:50