CVPR 2025: Roundup of Papers on Diffusion Model Acceleration


CVPR 2025 (CCF rank A), a top-tier international conference in computer vision, will be held from June 11 to 15, 2025 in Tennessee, USA.

https://cvpr.thecvf.com/Conferences/2025/AcceptedPapers 

This post collects 29 CVPR 2025 papers on diffusion model acceleration, organized along three dimensions: sampling, model, and features.

# Feature-based

1. [Prune] Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration

Haipeng Fang · Sheng Tang · Juan Cao · Enshuo Zhang · Fan Tang · Tong-Yee Lee

https://cvpr.thecvf.com/virtual/2025/poster/35120

https://github.com/ICTMCG/SDTM
 

2. [Prune] Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers

Haoran You · Connelly Barnes · Yuqian Zhou · Yan Kang · Zhenbang Du · Wei Zhou · Lingzhi Zhang · Yotam Nitzan · Xiaoyang Liu · Zhe Lin · Eli Shechtman · Sohrab Amirghodsi · Yingyan (Celine) Lin

https://arxiv.org/pdf/2412.16822
 

3. [Prune] TinyFusion: Diffusion Transformers Learned Shallow

Gongfan Fang · Kunjun Li · Xinyin Ma · Xinchao Wang

https://arxiv.org/pdf/2412.01199
 

4. [Prune] Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget

Vikash Sehwag · Xianghao Kong · Jingtao Li · Michael Spranger · Lingjuan Lyu

https://arxiv.org/pdf/2407.15811
 

5. [Cache] DreamCache: Finetuning-Free Lightweight Personalized Image Generation via Feature Caching

Emanuele Aiello · Umberto Michieli · Diego Valsesia · Mete Ozay · Enrico Magli

https://arxiv.org/abs/2411.17786
 

6. [Cache] Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model

Feng Liu · Shiwei Zhang · Xiaofeng Wang · Yujie Wei · Haonan Qiu · Yuzhong Zhao · Yingya Zhang · Qixiang Ye · Fang Wan

https://arxiv.org/pdf/2411.19108
 

7. [Cache] BlockDance: Reuse Structurally Similar Spatio-Temporal Features to Accelerate Diffusion Transformers

Hui Zhang · Tingwei Gao · Jie Shao · Zuxuan Wu

https://arxiv.org/pdf/2503.15927
 

8. [Cache] Accelerating Diffusion Transformer via Increment-Calibrated Caching with Channel-Aware Singular Value Decomposition

Zhiyuan Chen · Keyi Li · Yifan Jia · Le Ye · Yufei Ma

https://cvpr.thecvf.com/virtual/2025/poster/34798
 

9. [Quantization] Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning

Maosen Zhao · Pengtao Chen · Chong Yu · Yan Wen · Xudong Tan · Tao Chen

https://cvpr.thecvf.com/virtual/2025/poster/34497
 

10. [Quantization] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers

Lei Chen · Yuan Meng · Chen Tang · Xinzhu Ma · Jingyan Jiang · Xin Wang · Zhi Wang · Wenwu Zhu

https://arxiv.org/pdf/2406.17343
 

11. [Quantization] Quantization without Tears

Minghao Fu · Hao Yu · Jie Shao · Junjie Zhou · Ke Zhu · Jianxin Wu

https://arxiv.org/pdf/2411.13918
 

12. [Quantization] PassionSR: Post-Training Quantization with Adaptive Scale in One-Step Diffusion based Image Super-Resolution

Zhu Li Bo · Jianze Li · Haotong Qin · Wenbo Li · Yulun Zhang · Yong Guo · Xiaokang Yang

https://arxiv.org/pdf/2411.17106
 

13. [Cache + Quantization] CacheQuant: Comprehensively Accelerated Diffusion Models

Xuewen Liu · Zhikai Li · Qingyi Gu

https://arxiv.org/pdf/2503.01323
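
To make the feature-caching idea behind several entries above concrete, here is a minimal, generic sketch (not the algorithm of any specific paper): a per-block cache that reuses a transformer block's output on the current denoising step when its input has changed little since the step at which it was cached. `BlockCache`, `rel_tol`, `blocks`, and `embed` are hypothetical names introduced only for illustration.

```python
# Minimal sketch of cross-timestep feature caching for a diffusion transformer.
# Illustrative only; real methods use more sophisticated reuse criteria.
import torch


class BlockCache:
    def __init__(self, rel_tol: float = 0.05):
        self.rel_tol = rel_tol   # reuse threshold on relative input change
        self.inputs = {}         # block_id -> cached input tensor
        self.outputs = {}        # block_id -> cached output tensor

    def __call__(self, block_id: int, x: torch.Tensor, block_fn):
        cached_x = self.inputs.get(block_id)
        if cached_x is not None and cached_x.shape == x.shape:
            rel_change = (x - cached_x).norm() / (cached_x.norm() + 1e-8)
            if rel_change < self.rel_tol:
                # Input barely moved since the cached step: skip the block.
                return self.outputs[block_id]
        y = block_fn(x)          # recompute and refresh the cache
        self.inputs[block_id] = x.detach()
        self.outputs[block_id] = y.detach()
        return y


# Usage inside a denoising loop (hypothetical `blocks` list and `embed` function):
# cache = BlockCache(rel_tol=0.05)
# for t in timesteps:
#     h = embed(x_t, t)
#     for i, blk in enumerate(blocks):
#         h = cache(i, h, blk)
```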

# Sample-based
 

14. [Sampler] RayFlow: Instance-Aware Diffusion Acceleration via Adaptive Flow Trajectories

Huiyang Shao · Xin Xia · Yuhong Yang · Ren Yuxi · XING WANG · Xuefeng Xiao

https://arxiv.org/pdf/2503.07699

 

15. [Schedule] Schedule On the Fly: Diffusion Time Prediction for Faster and Better Image Generation

Zilyu Ye · Zhiyang Chen · Tiancheng Li · Zemin Huang · Weijian Luo · Guo-Jun Qi

https://arxiv.org/abs/2410.19250
 

16. [Schedule] A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training

Kai Wang · Mingjia Shi · YuKun Zhou · Zekai Li · Xiaojiang Peng · Zhihang Yuan · Yuzhang Shang · Hanwang Zhang · Yang You

https://arxiv.org/pdf/2405.17403

 

17. [Schedule] Scaling Inference Time Compute for Diffusion Models

Nanye Ma · Shangyuan Tong · Haolin Jia · Hexiang Hu · Yu-Chuan Su · Mingda Zhang · Xuan Yang · Yandong Li · Tommi Jaakkola · Xuhui Jia · Saining Xie

https://arxiv.org/pdf/2501.09732

 

18. [Schedule] RaSS: Improving Denoising Diffusion Samplers with Reinforced Active Sampling Scheduler

Xin Ding · Lei Yu · Xin Li · Zhijun Tu · Hanting Chen · Jie Hu · Zhibo Chen
 

19. [Schedule] Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training

Myunsoo Kim · Donghyeon Ki · Seong-Woong Shim · Byung-Jun Lee

https://arxiv.org/pdf/2411.09998
 

20. [Distillation] NitroFusion: High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training

Dar-Yen Chen · Hmrishav Bandyopadhyay · Kai Zou · Yi-Zhe Song

https://arxiv.org/pdf/2412.02030
 

21. [Parallel Sampling] PCM: Picard Consistency Model for Fast Parallel Sampling of Diffusion Models

Junhyuk So · Jiwoong Shin · Chaeyeon Jang · Eunhyeok Park

https://arxiv.org/abs/2503.19731
 

22. [Distillation] Autoregressive Distillation of Diffusion Transformers

Yeongmin Kim · Sotiris Anagnostidis · Yuming Du · Edgar Schoenfeld · Jonas Kohler · Markos Georgopoulos · Albert Pumarola · Ali Thabet · Artsiom Sanakoyeu

https://cvpr.thecvf.com/virtual/2025/poster/35166
 

23. [Distillation] Acc3D: Accelerating Single Image to 3D Diffusion Models via Edge Consistency Guided Score Distillation

Kendong Liu · Zhiyu Zhu · Hui Liu · Junhui Hou

https://arxiv.org/pdf/2503.15975
 

24. [Distillation] Random Conditioning for Diffusion Model Compression with Distillation 

Dohyun Kim · Sehwan Park · GeonHee Han · Seung Wook Kim · Paul Hongsuck Seo

https://cvpr.thecvf.com/virtual/2025/poster/33271

 

25. [Distillation] Optimizing for the Shortest Path in Denoising Diffusion Model

Ping Chen · Xingpeng Zhang · Zhaoxiang Liu · Huan Hu · Xiang Liu · Kai Wang · Min Wang · Yanlin Qian · Shiguo Lian

https://arxiv.org/pdf/2503.03265
 

26. [Distillation] OSV: One Step is Enough for High-Quality Image to Video Generation

Xiaofeng Mao · Zhengkai Jiang · Fu-Yun Wang · Jiangning Zhang · Hao Chen · Mingmin Chi · Yabiao Wang · Wenhan Luo

https://arxiv.org/pdf/2409.11367
 

27. [Distillation] TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution

Linwei Dong · Qingnan Fan · Yihong Guo · Zhonghao Wang · Qi Zhang · Jinwei Chen · Yawei Luo · Changqing Zou

https://arxiv.org/pdf/2411.18263
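
Many of the schedule-oriented entries above adapt which timesteps are visited rather than how each step is computed. As a point of reference, here is a minimal sketch of a fixed non-uniform schedule that selects a few sampling timesteps from a 1000-step training grid, spaced quadratically so that steps concentrate near t = 0 where fine details are refined. The papers above learn or adapt such schedules instead of hard-coding them; `quadratic_schedule` and its parameters are illustrative assumptions.

```python
# Minimal sketch of a fixed non-uniform (quadratic) sampling schedule.
import numpy as np


def quadratic_schedule(num_train_steps: int = 1000, num_sample_steps: int = 20):
    # u in [0, 1]; squaring u biases the selected timesteps toward small t.
    u = np.linspace(0.0, 1.0, num_sample_steps)
    t = (u ** 2) * (num_train_steps - 1)
    # Deduplicate and return in descending order (high noise -> low noise).
    return np.unique(t.round().astype(int))[::-1]


print(quadratic_schedule())  # descending array of ints, denser near t = 0
```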

# Model-based
 

28. [Architecture] Taming High-Resolution Text-to-Image Models for Mobile Devices with Efficient Architectures and Training

Jierun Chen · Dongting Hu · Xijie Huang · Huseyin Coskun · Arpit Sahni · Aarush Gupta · Anujraaj Goyal · Dishani Lahiri · Rajesh Singh · Yerlan Idelbayev · Junli Cao · Yanyu Li · Kwang-Ting Cheng · Mingming Gong · S.-H. Gary Chan · Sergey Tulyakov · Anil Kag · Yanwu Xu · Jian Ren

https://arxiv.org/abs/2412.09619

29. [Linear Attention] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention

Lianghui Zhu · Zilong Huang · Bencheng Liao · Jun Hao Liew · Hanshu Yan · Jiashi Feng · Xinggang Wang

https://arxiv.org/pdf/2405.18428
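
For context on the linear-attention direction represented by DiG, below is a minimal sketch of linearized attention, which replaces the O(N²) softmax attention map with kernel feature maps so the cost scales linearly in sequence length. DiG additionally uses gating, which is omitted here; the function name and the elu-based feature map are illustrative choices, not taken from the paper.

```python
# Minimal sketch of linear attention (kernelized, no gating).
import torch
import torch.nn.functional as F


def linear_attention(q, k, v, eps: float = 1e-6):
    # q, k, v: (batch, seq_len, dim). Feature map phi(x) = elu(x) + 1 keeps values positive.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)            # (dim, dim) summary, O(N * d^2)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)   # O(N * d^2) instead of O(N^2 * d)


x = torch.randn(2, 256, 64)
out = linear_attention(x, x, x)
print(out.shape)  # torch.Size([2, 256, 64])
```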
