CVPR 2024 | Low-Level Vision Paper Collection (Super-Resolution, Image Restoration, Deraining, Dehazing, Deblurring, Denoising, etc.), with Paper Links, Open-Source Code, and Analysis [Continuously Updated]

This post compiles CVPR 2024 papers on low-level vision tasks, covering super-resolution, image deraining, dehazing, deblurring, denoising, image restoration, image enhancement, and more, highlighting the latest technical advances and methods.

CVPR 2024 | Low-Level Vision Paper Collection

Awesome-CVPR2024-Low-Level-Vision

This is a compilation of this year's CVPR papers and code on low-level vision, including super-resolution, image deraining, image dehazing, deblurring, denoising, image restoration, image enhancement, image demoiréing, image inpainting, image quality assessment, frame interpolation, image/video compression, and other tasks, as listed below.

Stars, forks, and PRs are welcome~
The GitHub repository Awesome-CVPR2024-Low-Level-Vision is updated first; stars are welcome~
Zhihu: https://zhuanlan.zhihu.com/p/684196283
Please credit the source when referencing or reposting.

CVPR 2024 website: https://cvpr.thecvf.com/Conferences/2024

CVPR 2024 accepted papers list: https://cvpr.thecvf.com/Conferences/2024/AcceptedPapers

CVPR 2024 open-access proceedings: https://openaccess.thecvf.com/CVPR2024

Conference dates: June 17-21, 2024

Acceptance notification: February 27, 2024

Contents

1. Super-Resolution

AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution

  • Paper: https://arxiv.org/abs/2404.03296
  • Code: https://github.com/Cheeun/AdaBM

A Dynamic Kernel Prior Model for Unsupervised Blind Image Super-Resolution

  • Paper: https://arxiv.org/abs/2404.15620
  • Code: https://github.com/XYLGroup/DKP

APISR: Anime Production Inspired Real-World Anime Super-Resolution

  • Paper: https://arxiv.org/abs/2403.01598
  • Code: https://github.com/Kiteretsu77/APISR

Arbitrary-Scale Image Generation and Upsampling using Latent Diffusion Model and Implicit Neural Decoder

  • Paper: https://arxiv.org/abs/2403.10255v1
  • Code: https://github.com/zhenshij/arbitrary-scale-diffusion

Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss

  • Paper: https://arxiv.org/abs/2404.01692
  • Code: https://github.com/JaehaKim97/SR4IR

Bilateral Event Mining and Complementary for Event Stream Super-Resolution

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Bilateral_Event_Mining_and_Complementary_for_Event_Stream_Super-Resolution_CVPR_2024_paper.html
  • Code: https://github.com/Lqm26/BMCNet-ESR

Boosting Flow-based Generative Super-Resolution Models via Learned Prior

  • Paper: https://arxiv.org/abs/2403.10988
  • Code: https://github.com/liyuantsao/FlowSR-LP

Building Bridges across Spatial and Temporal Resolutions: Reference-Based Super-Resolution via Change Priors and Conditional Diffusion Model

  • Paper: https://arxiv.org/abs/2403.17460
  • Code: https://github.com/dongrunmin/RefDiff

CAMixerSR: Only Details Need More “Attention”

  • Paper: https://arxiv.org/abs/2402.19289
  • Code: https://github.com/icandle/CAMixerSR

CFAT: Unleashing Triangular Windows for Image Super-resolution

  • Paper: https://arxiv.org/abs/2403.16143
  • Code: https://github.com/rayabhisek123/CFAT

Continuous Optical Zooming: A Benchmark for Arbitrary-Scale Image Super-Resolution in Real World

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Fu_Continuous_Optical_Zooming_A_Benchmark_for_Arbitrary-Scale_Image_Super-Resolution_in_CVPR_2024_paper.html
  • Code: https://github.com/pf0607/COZ

CoSeR: Bridging Image and Language for Cognitive Super-Resolution

  • Paper: https://arxiv.org/abs/2311.16512
  • Code: https://github.com/VINHYU/CoSeR

CDFormer: When Degradation Prediction Embraces Diffusion Model for Blind Image Super-Resolution

  • Paper: https://arxiv.org/abs/2405.07648
  • Code: https://github.com/I2-Multimedia-Lab/CDFormer

CycleINR: Cycle Implicit Neural Representation for Arbitrary-Scale Volumetric Super-Resolution of Medical Data

  • Paper: https://arxiv.org/abs/2404.04878
  • Code:

Diffusion-based Blind Text Image Super-Resolution

  • Paper: https://arxiv.org/abs/2312.08886
  • Code: https://github.com/YuzheZhang-1999/DiffTSR

DiSR-NeRF: Diffusion-Guided View-Consistent Super-Resolution NeRF

  • Paper: https://arxiv.org/abs/2404.00874
  • Code:

Image Processing GNN: Breaking Rigidity in Super-Resolution

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Tian_Image_Processing_GNN_Breaking_Rigidity_in_Super-Resolution_CVPR_2024_paper.html
  • Code: https://github.com/huawei-noah/Efficient-Computing/tree/master/LowLevel/IPG

Latent Modulated Function for Computational Optimal Continuous Image Representation

  • Paper: https://arxiv.org/abs/2404.16451
  • Code: https://github.com/HeZongyao/LMF

Learning Coupled Dictionaries from Unpaired Data for Image Super-Resolution

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Learning_Coupled_Dictionaries_from_Unpaired_Data_for_Image_Super-Resolution_CVPR_2024_paper.html
  • Code:

Learning Large-Factor EM Image Super-Resolution with Generative Priors

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Shou_Learning_Large-Factor_EM_Image_Super-Resolution_with_Generative_Priors_CVPR_2024_paper.html
  • Code: https://github.com/jtshou/GPEMSR

Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning

  • Paper: https://arxiv.org/abs/2403.02601
  • Code: https://github.com/haoyuc/LWay

Navigating Beyond Dropout: An Intriguing Solution towards Generalizable Image Super-Resolution

  • Paper: https://arxiv.org/abs/2402.18929v2
  • Code: https://github.com/Dreamzz5/Simple-Align

Neural Super-Resolution for Real-time Rendering with Radiance Demodulation

  • Paper: https://arxiv.org/abs/2308.06699
  • Code: https://github.com/Riga2/NSRD

Rethinking Diffusion Model for Multi-Contrast MRI Super-Resolution

  • Paper: https://arxiv.org/abs/2404.04785
  • Code: https://github.com/GuangYuanKK/DiffMSR

SeD: Semantic-Aware Discriminator for Image Super-Resolution

  • Paper: https://arxiv.org/abs/2402.19387
  • Code: https://github.com/lbc12345/SeD

SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution

  • Paper: https://arxiv.org/abs/2311.16518
  • Code: https://github.com/cswry/SeeSR

Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution

  • Paper: https://arxiv.org/abs/2403.16643
  • Code: https://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion

SinSR: Diffusion-Based Image Super-Resolution in a Single Step

  • Paper: https://github.com/wyf0912/SinSR/blob/main/main.pdf
  • Code: https://github.com/wyf0912/SinSR

Super-Resolution Reconstruction from Bayer-Pattern Spike Streams

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Dong_Super-Resolution_Reconstruction_from_Bayer-Pattern_Spike_Streams_CVPR_2024_paper.html
  • Code: https://github.com/csycdong/CSCSR

Text-guided Explorable Image Super-resolution

  • Paper: https://arxiv.org/abs/2403.01124
  • Code:

Training Generative Image Super-Resolution Models by Wavelet-Domain Losses Enables Better Control of Artifacts

  • Paper: https://arxiv.org/abs/2402.19215
  • Code: https://github.com/mandalinadagi/wgsr

Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary

  • Paper: https://arxiv.org/abs/2401.08209
  • Code: https://github.com/LabShuHangGU/Adaptive-Token-Dictionary

Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer

  • Paper: https://arxiv.org/abs/2303.17783
  • Code:

Universal Robustness via Median Randomized Smoothing for Real-World Super-Resolution

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Chaouai_Universal_Robustness_via_Median_Randomized_Smoothing_for_Real-World_Super-Resolution_CVPR_2024_paper.html
  • Code:

Video Super-Resolution

Enhancing Video Super-Resolution via Implicit Resampling-based Alignment

  • Paper: https://github.com/kai422/IART/blob/main/arxiv.pdf
  • Code: https://github.com/kai422/IART

FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

  • Paper: https://arxiv.org/abs/2401.03707
  • Code: https://github.com/KAIST-VICLab/FMA-Net

Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution

  • Paper: https://arxiv.org/abs/2403.17000
  • Code:

Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution

  • Paper: https://arxiv.org/abs/2312.06640
  • Code: https://github.com/sczhou/Upscale-A-Video

Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention

  • Paper: https://arxiv.org/abs/2401.06312
  • Code: https://github.com/LabShuHangGU/MIA-VSR

2. Image Deraining

Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining

  • Paper: https://arxiv.org/abs/2404.01547
  • Code: https://github.com/cschenxiang/NeRD-Rain

3. Image Dehazing

A Semi-supervised Nighttime Dehazing Baseline with Spatial-Frequency Aware and Realistic Brightness Constraint

  • Paper: https://arxiv.org/abs/2403.18548
  • Code: https://github.com/Xiaofeng-life/SFSNiD

Depth Information Assisted Collaborative Mutual Promotion Network for Single Image Dehazing

  • Paper: https://arxiv.org/abs/2403.01105
  • Code:

ODCR: Orthogonal Decoupling Contrastive Regularization for Unpaired Image Dehazing

  • Paper: https://arxiv.org/abs/2404.17825v1
  • Code:

Video Dehazing

Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Fan_Driving-Video_Dehazing_with_Non-Aligned_Regularization_for_Safety_Assistance_CVPR_2024_paper.html
  • Code:

4. Deblurring

A Unified Framework for Microscopy Defocus Deblur with Multi-Pyramid Transformer and Contrastive Learning

  • Paper: https://arxiv.org/abs/2403.02611
  • Code: https://github.com/PieceZhang/MPT-CataBlur

AdaRevD: Adaptive Patch Exiting Reversible Decoder Pushes the Limit of Image Deblurring

  • Paper: https://github.com/INVOKERer/AdaRevD/blob/master/AdaRevD.pdf
  • Code: https://github.com/INVOKERer/AdaRevD

Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains

  • Paper: https://arxiv.org/abs/2403.16205
  • Code: https://github.com/VinAIResearch/Blur2Blur

Fourier Priors-Guided Diffusion for Zero-Shot Joint Low-Light Enhancement and Deblurring

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Lv_Fourier_Priors-Guided_Diffusion_for_Zero-Shot_Joint_Low-Light_Enhancement_and_Deblurring_CVPR_2024_paper.html
  • Code:

ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation

  • Paper: https://arxiv.org/abs/2312.10998
  • Code: https://github.com/plusgood-steven/ID-Blau

LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network

  • Paper: https://arxiv.org/abs/2307.09815
  • Code: https://github.com/noxsine/LDP

Mitigating Motion Blur in Neural Radiance Fields with Events and Frames

  • Paper: https://rpg.ifi.uzh.ch/docs/CVPR24_Cannici.pdf
  • Code: https://github.com/uzh-rpg/EvDeblurNeRF

Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring

  • Paper: https://arxiv.org/abs/2404.13153
  • Code: https://github.com/ChengxuLiu/MISCFilter

Motion Blur Decomposition with Cross-shutter Guidance

  • Paper: https://arxiv.org/abs/2404.01120
  • Code: https://github.com/jixiang2016/dualBR

Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization

  • Paper: https://arxiv.org/abs/2404.12168
  • Code:

Spike-guided Motion Deblurring with Unknown Modal Spatiotemporal Alignment

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Spike-guided_Motion_Deblurring_with_Unknown_Modal_Spatiotemporal_Alignment_CVPR_2024_paper.html
  • Code: https://github.com/Leozhangjiyuan/UaSDN

Unsupervised Blind Image Deblurring Based on Self-Enhancement

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Unsupervised_Blind_Image_Deblurring_Based_on_Self-Enhancement_CVPR_2024_paper.html
  • Code:

Video Deblurring

Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Blur-aware_Spatio-temporal_Sparse_Transformer_for_Video_Deblurring_CVPR_2024_paper.html
  • Code: https://github.com/huicongzhang/BSSTNet

EVS-assisted Joint Deblurring Rolling-Shutter Correction and Video Frame Interpolation through Sensor Inverse Modeling

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_EVS-assisted_Joint_Deblurring_Rolling-Shutter_Correction_and_Video_Frame_Interpolation_through_CVPR_2024_paper.html
  • Code:

Frequency-aware Event-based Video Deblurring for Real-World Motion Blur

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Frequency-aware_Event-based_Video_Deblurring_for_Real-World_Motion_Blur_CVPR_2024_paper.html
  • Code:

Latency Correction for Event-guided Deblurring and Frame Interpolation

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Latency_Correction_for_Event-guided_Deblurring_and_Frame_Interpolation_CVPR_2024_paper.html
  • Code:

5. Denoising

LAN: Learning to Adapt Noise for Image Denoising

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Kim_LAN_Learning_to_Adapt_Noise_for_Image_Denoising_CVPR_2024_paper.html
  • Code:

LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising

  • Paper: https://arxiv.org/abs/2405.19718
  • Code:

Robust Image Denoising through Adversarial Frequency Mixup

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Ryou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.html
  • Code: https://github.com/dhryougit/AFM

Real-World Mobile Image Denoising Dataset with Efficient Baselines

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Flepp_Real-World_Mobile_Image_Denoising_Dataset_with_Efficient_Baselines_CVPR_2024_paper.html
  • Code:

SeNM-VAE: Semi-Supervised Noise Modeling with Hierarchical Variational Autoencoder

  • Paper: https://arxiv.org/abs/2403.17502
  • Code: https://github.com/zhengdharia/SeNM-VAE

Transfer CLIP for Generalizable Image Denoising

  • Paper: https://arxiv.org/abs/2403.15132
  • Code:

Unmixing Diffusion for Self-Supervised Hyperspectral Image Denoising

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zeng_Unmixing_Diffusion_for_Self-Supervised_Hyperspectral_Image_Denoising_CVPR_2024_paper.html
  • Code:

ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_Joint_Denoising_and_Adaptive_Enhancement_for_Low-Light_CVPR_2024_paper.html
  • Code: https://github.com/Doyle59217/ZeroIG

6. Image Restoration

Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Adapt_or_Perish_Adaptive_Sparse_Transformer_with_Attentive_Feature_Refinement_CVPR_2024_paper.html
  • Code: https://github.com/joshyZhou/AST

Boosting Image Restoration via Priors from Pre-trained Models

  • Paper: https://arxiv.org/abs/2403.06793
  • Code:

CoDe: An Explicit Content Decoupling Framework for Image Restoration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Gu_CoDe_An_Explicit_Content_Decoupling_Framework_for_Image_Restoration_CVPR_2024_paper.html
  • Code:

Deep Equilibrium Diffusion Restoration with Parallel Sampling

  • Paper: https://arxiv.org/abs/2311.11600
  • Code: https://github.com/caojiezhang/DeqIR

Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks

  • Paper: https://arxiv.org/abs/2403.00644
  • Code: https://github.com/yuhaoliu7456/Diff-Plugin

Distilling Semantic Priors from SAM to Efficient Image Restoration Models

  • Paper: https://arxiv.org/abs/2403.16368
  • Code:

DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks

  • Paper: https://arxiv.org/abs/2405.04408
  • Code: https://github.com/ZZZHANG-jx/DocRes

HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models

  • Paper: https://arxiv.org/abs/2402.15865
  • Code: https://github.com/LiPang/HIRDiff

Image Restoration by Denoising Diffusion Models With Iteratively Preconditioned Guidance

  • Paper: https://arxiv.org/abs/2312.16519
  • Code: https://github.com/tirer-lab/DDPG

Improving Image Restoration through Removing Degradations in Textual Representations

  • Paper: https://arxiv.org/abs/2312.17334
  • Code: https://github.com/mrluin/TextualDegRemoval

Learning Degradation-unaware Representation with Prior-based Latent Transformations for Blind Face Restoration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Learning_Degradation-unaware_Representation_with_Prior-based_Latent_Transformations_for_Blind_Face_CVPR_2024_paper.html
  • Code:

Learning Diffusion Texture Priors for Image Restoration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Learning_Diffusion_Texture_Priors_for_Image_Restoration_CVPR_2024_paper.html
  • Code:

Look-Up Table Compression for Efficient Image Restoration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Li_Look-Up_Table_Compression_for_Efficient_Image_Restoration_CVPR_2024_paper.html
  • Code:

Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration

  • Paper: https://arxiv.org/abs/2312.02918
  • Code:

PFStorer: Personalized Face Restoration and Super-Resolution

  • Paper: https://arxiv.org/abs/2403.08436
  • Code:

Restoration by Generation with Constrained Priors

  • Paper: https://arxiv.org/abs/2312.17161
  • Code:

Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild

  • Paper: https://arxiv.org/abs/2401.13627
  • Code: https://github.com/Fanghua-Yu/SUPIR

Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model

  • Paper: https://arxiv.org/abs/2403.11157
  • Code: https://github.com/iSEE-Laboratory/DiffUIR

Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence

  • Paper: https://arxiv.org/abs/2404.13605
  • Code: https://github.com/Riponcs/Turb-Seg-Res

WaveFace: Authentic Face Restoration with Efficient Frequency Recovery

  • Paper: https://arxiv.org/abs/2403.12760
  • Code:

Wavelet-based Fourier Information Interaction with Frequency Diffusion Adjustment for Underwater Image Restoration

  • Paper: https://arxiv.org/abs/2311.16845
  • Code: https://github.com/zhihefang/wf-diff

7. Image Enhancement

Color Shift Estimation-and-Correction for Image Enhancement

  • Paper: https://drive.google.com/file/d/1jZB2rW_I2WLTE5yNA4IZq9wb5p4NNOCR/view
  • Code: https://github.com/yiyulics/CSEC

Empowering Resampling Operation for Ultra-High-Definition Image Enhancement with Model-Aware Guidance

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Yu_Empowering_Resampling_Operation_for_Ultra-High-Definition_Image_Enhancement_with_Model-Aware_Guidance_CVPR_2024_paper.html
  • Code: https://github.com/YPatrickW/LMAR

Fourier Priors-Guided Diffusion for Zero-Shot Joint Low-Light Enhancement and Deblurring

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Lv_Fourier_Priors-Guided_Diffusion_for_Zero-Shot_Joint_Low-Light_Enhancement_and_Deblurring_CVPR_2024_paper.html
  • Code:

FlowIE: Efficient Image Enhancement via Rectified Flow

  • Paper: https://arxiv.org/abs/2406.00508
  • Code: https://github.com/EternalEvan/FlowIE

Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving

  • Paper: https://arxiv.org/abs/2404.04804
  • Code: https://github.com/jinlong17/LightDiff

Robust Depth Enhancement via Polarization Prompt Fusion Tuning

  • Paper: https://arxiv.org/abs/2404.04318
  • Code: https://github.com/lastbasket/Polarization-Prompt-Fusion-Tuning

Specularity Factorization for Low Light Enhancement

  • Paper: https://arxiv.org/abs/2404.01998
  • Code:

Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach

  • Paper: https://arxiv.org/abs/2404.00834
  • Code: https://github.com/EthanLiang99/EvLight

ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_Joint_Denoising_and_Adaptive_Enhancement_for_Low-Light_CVPR_2024_paper.html
  • Code: https://github.com/Doyle59217/ZeroIG

Zero-Reference Low-Light Enhancement via Physical Quadruple Priors

  • Paper: https://arxiv.org/abs/2403.12933
  • Code: https://github.com/daooshee/QuadPrior

Video Enhancement

Binarized Low-light Raw Video Enhancement

  • Paper: https://arxiv.org/abs/2403.19944
  • Code: https://github.com/zhanggengchen/BRVE

UVEB: A Large-scale Benchmark and Baseline Towards Real-World Underwater Video Enhancement

  • Paper: https://arxiv.org/abs/2404.14542
  • Code: https://github.com/yzbouc/UVEB

8. Image Inpainting

Amodal Completion via Progressive Mixed Context Diffusion

  • Paper: https://arxiv.org/abs/2312.15540
  • Code: https://github.com/k8xu/amodal

Brush2Prompt: Contextual Prompt Generator for Object Inpainting

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Chiu_Brush2Prompt_Contextual_Prompt_Generator_for_Object_Inpainting_CVPR_2024_paper.html
  • Code:

Don’t Look into the Dark: Latent Codes for Pluralistic Image Inpainting

  • Paper: https://arxiv.org/abs/2403.18186
  • Code:

Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting

  • Paper: https://arxiv.org/abs/2403.19898
  • Code: https://github.com/htyjers/StrDiffusion

Video Inpainting

AVID: Any-Length Video Inpainting with Diffusion Model

  • Paper: https://arxiv.org/abs/2312.03816
  • Code: https://github.com/zhang-zx/AVID

Towards Language-Driven Video Inpainting via Multimodal Large Language Models

  • Paper: https://arxiv.org/abs/2401.10226
  • Code: https://github.com/jianzongwu/Language-Driven-Video-Inpainting

9. HDR Imaging

CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment

  • Paper: https://arxiv.org/abs/2404.01123
  • Code: https://github.com/hmin970922/CLIPtone/

Deep Video Inverse Tone Mapping Based on Temporal Clues

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Deep_Video_Inverse_Tone_Mapping_Based_on_Temporal_Clues_CVPR_2024_paper.html
  • Code: https://github.com/ye3why/VITM-TC

Generating Content for HDR Deghosting from Frequency View

  • Paper: https://arxiv.org/abs/2404.00849
  • Code:

HDRFlow: Real-Time HDR Video Reconstruction with Large Motions

  • Paper: https://arxiv.org/abs/2403.03447
  • Code: https://github.com/OpenImagingLab/HDRFlow

Perceptual Assessment and Optimization of HDR Image Rendering

  • Paper: https://arxiv.org/abs/2310.12877v4
  • Code: https://github.com/cpb68/HDRQA/

Towards HDR and HFR Video from Rolling-Mixed-Bit Spikings

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Chang_Towards_HDR_and_HFR_Video_from_Rolling-Mixed-Bit_Spikings_CVPR_2024_paper.html
  • Code:

Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark Dataset and A Two-Stage Alignment Network

  • Paper: https://arxiv.org/abs/2405.00244
  • Code: https://github.com/yungsyu99/Real-HDRV

Zero-Shot Structure-Preserving Diffusion Model for High Dynamic Range Tone Mapping

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Zero-Shot_Structure-Preserving_Diffusion_Model_for_High_Dynamic_Range_Tone_Mapping_CVPR_2024_paper.html
  • Code:

10. Image Quality Assessment

Blind Image Quality Assessment Based on Geometric Order Learning

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Blind_Image_Quality_Assessment_Based_on_Geometric_Order_Learning_CVPR_2024_paper.html
  • Code: https://github.com/nhshin-mcl/QCN

Boosting Image Quality Assessment through Efficient Transformer Adaptation with Local Feature Enhancement

  • Paper: https://arxiv.org/abs/2308.12001
  • Code:

Bridging the Synthetic-to-Authentic Gap: Distortion-Guided Unsupervised Domain Adaptation for Blind Image Quality Assessment

  • Paper: https://arxiv.org/abs/2405.04167
  • Code:

CLIB-FIQA: Face Image Quality Assessment with Confidence Calibration

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Ou_CLIB-FIQA_Face_Image_Quality_Assessment_with_Confidence_Calibration_CVPR_2024_paper.html
  • Code:

Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment

  • Paper: https://arxiv.org/abs/2403.10066
  • Code:

Deep Generative Model based Rate-Distortion for Image Downscaling Assessment

  • Paper: https://arxiv.org/abs/2403.15139
  • Code: https://github.com/Byronliang8/IDA-RD

Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization

  • Paper: https://arxiv.org/abs/2403.11397
  • Code: https://github.com/YangiD/DefenseIQA-NT

DSL-FIQA: Assessing Facial Image Quality via Dual-Set Degradation Learning and Landmark-Guided Transformer

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Chen_DSL-FIQA_Assessing_Facial_Image_Quality_via_Dual-Set_Degradation_Learning_and_CVPR_2024_paper.html
  • Code:

EvalCrafter: Benchmarking and Evaluating Large Video Generation Models

  • Paper: https://arxiv.org/abs/2310.11440
  • Code: https://github.com/evalcrafter/EvalCrafter

FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment

  • Paper: https://arxiv.org/abs/2405.06887
  • Code: https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024

KVQ: Kwai Video Quality Assessment for Short-form Videos

  • Paper: https://arxiv.org/abs/2402.07220
  • Code: https://github.com/lixinustc/KVQ-Challenge-CVPR-NTIRE2024

Learned Scanpaths Aid Blind Panoramic Video Quality Assessment

  • Paper: https://arxiv.org/abs/2404.00252
  • Code: https://github.com/kalofan/AutoScanpathQA

Modular Blind Video Quality Assessment

  • Paper: https://arxiv.org/abs/2402.19276
  • Code: https://github.com/winwinwenwen77/ModularBVQA

On the Content Bias in Fréchet Video Distance

  • Paper: https://arxiv.org/abs/2404.12391
  • Code: https://github.com/songweige/content-debiased-fvd

PTM-VQA: Efficient Video Quality Assessment Leveraging Diverse PreTrained Models from the Wild

  • Paper: https://arxiv.org/abs/2405.17765
  • Code:

Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

  • Paper: https://arxiv.org/abs/2311.06783
  • Code: https://github.com/Q-Future/Q-Instruct

11. Frame Interpolation

Data-Efficient Unsupervised Interpolation Without Any Intermediate Frame for 4D Medical Images

  • Paper: https://arxiv.org/abs/2404.01464
  • Code: https://github.com/jungeun122333/UVI-Net

IQ-VFI: Implicit Quadratic Motion Estimation for Video Frame Interpolation

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Hu_IQ-VFI_Implicit_Quadratic_Motion_Estimation_for_Video_Frame_Interpolation_CVPR_2024_paper.html
  • Code:

Perceptual-Oriented Video Frame Interpolation Via Asymmetric Synergistic Blending

  • Paper: https://arxiv.org/abs/2404.06692
  • Code:

Sparse Global Matching for Video Frame Interpolation with Large Motion

  • Paper: https://arxiv.org/abs/2404.06913
  • Code: https://github.com/MCG-NJU/SGM-VFI

SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame Interpolation

  • Paper: https://arxiv.org/abs/2308.16876
  • Code: https://github.com/neu-vi/SportsSloMo

TTA-EVF: Test-Time Adaptation for Event-based Video Frame Interpolation via Reliable Pixel and Sample Estimation

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Cho_TTA-EVF_Test-Time_Adaptation_for_Event-based_Video_Frame_Interpolation_via_Reliable_CVPR_2024_paper.html
  • Code: https://github.com/Chohoonhee/TTA-EVF

Video Frame Interpolation via Direct Synthesis with the Event-based Reference

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Video_Frame_Interpolation_via_Direct_Synthesis_with_the_Event-based_Reference_CVPR_2024_paper.html
  • Code:

Video Interpolation with Diffusion Models

  • Paper: https://arxiv.org/abs/2404.01203
  • Code:

12. Video/Image Compression

C3: High-performance and low-complexity neural compression from a single image or video

  • Paper: https://arxiv.org/abs/2312.02753
  • Code: https://github.com/google-deepmind/c3_neural_compression

Generative Latent Coding for Ultra-Low Bitrate Image Compression

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Jia_Generative_Latent_Coding_for_Ultra-Low_Bitrate_Image_Compression_CVPR_2024_paper.html
  • Code:

Laplacian-guided Entropy Model in Neural Codec with Blur-dissipated Synthesis

  • Paper: https://arxiv.org/abs/2403.16258
  • Code:

Learned Lossless Image Compression based on Bit Plane Slicing

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Learned_Lossless_Image_Compression_based_on_Bit_Plane_Slicing_CVPR_2024_paper.html
  • Code: https://github.com/ZZ022/ArIB-BPS

Towards Backward-Compatible Continual Learning of Image Compression

  • Paper: https://arxiv.org/abs/2402.18862
  • Code: https://gitlab.com/viper-purdue/continual-compression

Video Compression

Task-Aware Encoder Control for Deep Video Compression

  • Paper: https://arxiv.org/abs/2404.04848
  • Code:

Low-Latency Neural Stereo Streaming

  • Paper: https://arxiv.org/abs/2403.17879
  • Code:

Neural Video Compression with Feature Modulation

  • Paper: https://arxiv.org/abs/2402.17414
  • Code: https://github.com/microsoft/DCVC

13. Compressed Image Quality Enhancement

CPGA: Coding Priors-Guided Aggregation Network for Compressed Video Quality Enhancement

  • Paper: https://arxiv.org/abs/2403.10362
  • Code:

Enhancing Quality of Compressed Images by Mitigating Enhancement Bias Towards Compression Domain

  • Paper: https://arxiv.org/abs/2402.17200
  • Code:

14. Image Reflection Removal

Language-guided Image Reflection Separation

  • Paper: https://arxiv.org/abs/2402.11874
  • Code:

Revisiting Single Image Reflection Removal in the Wild

  • Paper: https://arxiv.org/abs/2311.17320
  • Code: https://github.com/zhuyr97/Reflection_RemoVal_CVPR2024

15. Image Shadow Removal

HomoFormer: Homogenized Transformer for Image Shadow Removal

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_HomoFormer_Homogenized_Transformer_for_Image_Shadow_Removal_CVPR_2024_paper.html
  • Code: https://github.com/jiexiaou/HomoFormer

16. Image Colorization

Automatic Controllable Colorization by Imagination

  • Paper: https://arxiv.org/abs/2404.05661
  • Code: https://github.com/xy-cong/imagine-colorization

Generative Quanta Color Imaging

  • Paper: https://arxiv.org/abs/2403.19066
  • Code:

Learning Inclusion Matching for Animation Paint Bucket Colorization

  • Paper: https://arxiv.org/abs/2403.18342
  • Code: https://github.com/ykdai/BasicPBC

17. Image Harmonization

Relightful Harmonization: Lighting-aware Portrait Background Replacement

  • Paper: https://arxiv.org/abs/2312.06886
  • Code:

Video Harmonization with Triplet Spatio-Temporal Variation Patterns

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Guo_Video_Harmonization_with_Triplet_Spatio-Temporal_Variation_Patterns_CVPR_2024_paper.html
  • Code: https://github.com/zhenglab/VideoTripletTransformer

18. Video Stabilization

3D Multi-frame Fusion for Video Stabilization

  • Paper: https://arxiv.org/abs/2404.12887
  • Code:

Harnessing Meta-Learning for Improving Full-Frame Video Stabilization

  • Paper: https://arxiv.org/abs/2403.03662
  • Code: https://github.com/MKashifAli/MetaVideoStab

19. Image Fusion

Equivariant Multi-Modality Image Fusion

  • Paper: https://arxiv.org/abs/2305.11443
  • Code: https://github.com/Zhaozixiang1228/MMIF-EMMA

MRFS: Mutually Reinforcing Image Fusion and Segmentation

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_MRFS_Mutually_Reinforcing_Image_Fusion_and_Segmentation_CVPR_2024_paper.html
  • Code: https://github.com/HaoZhang1018/MRFS

Neural Spline Fields for Burst Image Fusion and Layer Separation

  • Paper: https://arxiv.org/abs/2312.14235
  • Code: https://github.com/princeton-computational-imaging/NSF

Probing Synergistic High-Order Interaction in Infrared and Visible Image Fusion

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_Probing_Synergistic_High-Order_Interaction_in_Infrared_and_Visible_Image_Fusion_CVPR_2024_paper.html
  • Code:

Revisiting Spatial-Frequency Information Integration from a Hierarchical Perspective for Panchromatic and Multi-Spectral Image Fusion

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_Probing_Synergistic_High-Order_Interaction_in_Infrared_and_Visible_Image_Fusion_CVPR_2024_paper.html
  • Code:

Text-IF: Leveraging Semantic Text Guidance for Degradation-Aware and Interactive Image Fusion

  • Paper: https://arxiv.org/abs/2403.16387
  • Code: https://github.com/XunpengYi/Text-IF

Task-Customized Mixture of Adapters for General Image Fusion

  • Paper: https://arxiv.org/abs/2403.12494
  • Code: https://github.com/YangSun22/TC-MoA

20. Other Tasks

Close Imitation of Expert Retouching for Black-and-White Photography

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Close_Imitation_of_Expert_Retouching_for_Black-and-White_Photography_CVPR_2024_paper.html
  • Code: https://github.com/seunghyuns98/Decolorization

Content-Adaptive Non-Local Convolution for Remote Sensing Pansharpening

  • Paper: https://arxiv.org/abs/2404.07543
  • Code: https://github.com/Duanyll/CANConv

DiffSCI: Zero-Shot Snapshot Compressive Imaging via Iterative Spectral Diffusion Model

  • Paper: https://arxiv.org/abs/2311.11417
  • Code: https://github.com/PAN083/DiffSCI

Dual Prior Unfolding for Snapshot Compressive Imaging

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dual_Prior_Unfolding_for_Snapshot_Compressive_Imaging_CVPR_2024_paper.html
  • Code: https://github.com/ZhangJC-2k/DPU

Dual-Camera Smooth Zoom on Mobile Phones

  • Paper: https://arxiv.org/abs/2404.04908
  • Code: https://github.com/ZcsrenlongZ/ZoomGS

Dual-scale Transformer for Large-scale Single-Pixel Imaging

  • Paper: https://arxiv.org/abs/2404.05001
  • Code: https://github.com/Gang-Qu/HATNet-SPI

Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal

  • Paper: https://arxiv.org/abs/2403.07684
  • Code: https://github.com/scott-yjyang/DiffTTA

Language-driven All-in-one Adverse Weather Removal

  • Paper: https://arxiv.org/abs/2312.01381
  • Code:

Learning to Remove Wrinkled Transparent Film with Polarized Prior

  • Paper: https://arxiv.org/abs/2403.04368
  • Code: https://github.com/jqtangust/FilmRemoval

Misalignment-Robust Frequency Distribution Loss for Image Transformation

  • Paper: https://arxiv.org/abs/2402.18192
  • Code: https://github.com/eezkni/FDL

On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation

  • Paper: https://arxiv.org/abs/2404.08540
  • Code: https://github.com/agneet42/lang_depth

ParamISP: Learned Forward and Inverse ISPs using Camera Parameters

  • Paper: https://arxiv.org/abs/2312.13313
  • Code: https://github.com/woo525/ParamISP

RecDiffusion: Rectangling for Image Stitching with Diffusion Models

  • Paper: https://arxiv.org/abs/2402.18192
  • Code: https://github.com/lhaippp/RecDiffusion

Residual Denoising Diffusion Models

  • Paper: https://arxiv.org/abs/2308.13712
  • Code: https://github.com/nachifur/RDDM

Real-Time Exposure Correction via Collaborative Transformations and Adaptive Sampling

  • Paper: https://arxiv.org/abs/2404.11884
  • Code: https://github.com/HUST-IAL/CoTF

SCINeRF: Neural Radiance Fields from a Snapshot Compressive Image

  • Paper: https://arxiv.org/abs/2403.20018
  • Code: https://github.com/WU-CVGL/SCINeRF

Seeing Motion at Nighttime with an Event Camera

  • Paper: https://arxiv.org/abs/2404.11884
  • Code: https://github.com/Liu-haoyue/NER-Net

Shadow Generation for Composite Image Using Diffusion Model

  • Paper: https://arxiv.org/abs/2403.15234
  • Code: https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2

Improving Spectral Snapshot Reconstruction with Spectral-Spatial Rectification

  • Paper: https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Improving_Spectral_Snapshot_Reconstruction_with_Spectral-Spatial_Rectification_CVPR_2024_paper.html
  • Code: https://github.com/ZhangJC-2k/SSR

Continuously updated~

