Low-Light Image Enhancement: A Curated Collection of Algorithms and Resources

Resources for Low-Light Image Enhancement

https://github.com/dawnlh/low-light-image-enhancement-resources

-------------------------------------------------------------

Papers

TIP 2021

Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement

Wenhan Yang; Wenjing Wang; Haofeng Huang; Shiqi Wang; Jiaying Liu

Abstract

Due to the absence of a desirable objective for low-light image enhancement, previous data-driven methods may produce undesirable enhanced results, including amplified noise, degraded contrast, and biased colors. In this work, inspired by Retinex theory, we design an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specified constraints for single-image low-light enhancement. A Sparse Gradient Minimization sub-Network (SGM-Net) is constructed to remove low-amplitude structures and preserve major edge information, which facilitates extracting paired illumination maps of low/normal-light images. After the learned decomposition, two sub-networks (Enhance-Net and Restore-Net) are utilized to predict the enhanced illumination and reflectance maps, respectively, which helps stretch the contrast of the illumination map and remove intensive noise in the reflectance map. The effects of all these configured constraints, including the signal structure regularization and losses, combine reciprocally, leading to good reconstruction results in overall visual quality. The evaluation on both synthetic and real images, particularly those containing intensive noise, compression artifacts, and their interleaved artifacts, shows the effectiveness of our novel models, which significantly outperform the state-of-the-art methods.
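
A minimal sketch (not the authors' code) of the three-stage flow the abstract describes, where sgm_net, enhance_net, and restore_net are hypothetical stand-ins for the learned sub-networks:

```python
import numpy as np

def retinex_pipeline(low: np.ndarray, sgm_net, enhance_net, restore_net) -> np.ndarray:
    """low: HxWx3 float image in [0, 1]; the three nets are learned sub-networks."""
    eps = 1e-6
    # SGM-Net: sparse-gradient smoothing keeps major edges and drops
    # low-amplitude structures, yielding a cleaner illumination estimate.
    illumination = sgm_net(low)                  # HxWx1, piece-wise smooth
    reflectance = low / (illumination + eps)     # Retinex assumption: I = R * L
    # Layer-specific mappings: stretch the contrast of L, denoise R.
    illumination_hat = enhance_net(illumination)
    reflectance_hat = restore_net(reflectance)
    return np.clip(reflectance_hat * illumination_hat, 0.0, 1.0)
```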

 

 

TIP 2021

EnlightenGAN: Deep Light Enhancement Without Paired Supervision

 

Yifan Jiang; Xinyu Gong; Ding Liu; Yu Cheng; Chen Fang; Xiaohui Shen; Jianchao Yang; Pan Zhou; Zhangyang Wang

Abstract

Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning using ground truth data, we propose to regularize the unpaired training using the information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and the attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods under a variety of metrics in terms of visual quality and subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. Our codes and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
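
Among the benchmarked innovations, the attention mechanism weights dark regions more strongly. A rough numpy sketch of that self-regularizing idea, assuming the per-pixel max over RGB as the illumination proxy (an approximation for illustration, not necessarily the paper's exact definition):

```python
import numpy as np

def attention_map(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 float image in [0, 1]; returns an HxWx1 attention map."""
    illum = rgb.max(axis=2, keepdims=True)              # illumination proxy
    illum = (illum - illum.min()) / (illum.max() - illum.min() + 1e-6)
    return 1.0 - illum                                  # darker pixels -> more attention
```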

 

CVPR 2020

From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement

Wenhan Yang, Shiqi Wang, Yuming Fang, Yue Wang, Jiaying Liu

Abstract

Under-exposure introduces a series of visual degradations, i.e., decreased visibility, intensive noise, and biased color. To address these problems, we propose a novel semi-supervised learning approach for low-light image enhancement. A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhanced normal-light image with paired low/normal-light images, and then obtain an improved one by recomposing the given bands via another learnable linear transformation based on perceptual quality-driven adversarial learning with unpaired data. The architecture is powerful and flexible enough to be trained with both paired and unpaired data. On one hand, the proposed network is well designed to extract a series of coarse-to-fine band representations, whose estimations are mutually beneficial in a recursive process. On the other hand, the extracted band representation of the enhanced image in the first stage of DRBN (recursive band learning) bridges the gap between the restoration knowledge of paired data and the perceptual quality preference of real high-quality images. Its second stage (band recomposition) learns to recompose the band representation towards fitting the perceptual properties of high-quality images via adversarial learning. With the help of this two-stage design, our approach generates enhanced results with well-reconstructed details and visually promising contrast and color distributions. Extensive evaluations demonstrate the superiority of our DRBN.

Resources

Figure 2. The framework of the proposed Deep Recursive Band Network (DRBN), which consists of two stages: recursive band learning and band recomposition. (1) In the first stage, a coarse-to-fine band representation is learned and different band signals are inferred jointly in a recursive process. The enhanced result from the last recurrence is used as the guidance of the next recurrence, and each later recurrence is only responsible for learning the residue in the feature and image domains at different scales. (2) In the second stage, the band representation is recomposed to improve the perceptual quality of the enhanced low-light image via perceptual quality-guided adversarial learning.
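
A schematic (hypothetical) rendering of the two stages, where band_net and recompose stand in for the learned sub-networks; the point is the control flow: each recurrence predicts a residual refinement, and stage two recomposes the accumulated bands:

```python
import numpy as np

def drbn_sketch(low: np.ndarray, band_net, recompose, recurrences: int = 3) -> np.ndarray:
    """band_net and recompose are stand-ins for the learned stages."""
    estimate = low
    bands = []
    for _ in range(recurrences):
        residual = band_net(low, estimate)   # coarse-to-fine band residual
        estimate = estimate + residual       # guides the next recurrence
        bands.append(residual)
    # Stage 2: learnable linear recomposition, tuned by adversarial feedback.
    return recompose(bands)
```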

 

IJCV 2020

Benchmarking Low-Light Image Enhancement and Beyond

Jiaying Liu; Dejia Xu; Wenhan Yang; Minhao Fan

Abstract

In this paper, we present a systematic review and evaluation of existing single-image low-light enhancement algorithms. Besides the commonly used low-level vision oriented evaluations, we additionally consider measuring machine vision performance in the low-light condition via a face detection task to explore the potential of joint optimization of high-level and low-level vision enhancement. To this end, we first propose a large-scale low-light image dataset serving both low/high-level vision, with diversified scenes and contents as well as complex degradation in real scenarios, called Vision Enhancement in the LOw-Light condition (VE-LOL). Beyond paired low/normal-light images without annotations, we additionally include analysis resources related to humans, i.e., face images in the low-light condition with annotated face bounding boxes. Then, efforts are made on benchmarking from the perspective of both human and machine vision. A rich variety of criteria is used for the low-level vision evaluation, including full-reference, no-reference, and semantic similarity metrics. We also measure the effects of low-light enhancement on face detection in the low-light condition. State-of-the-art face detection methods are used in the evaluation. Furthermore, with the rich material of VE-LOL, we explore the novel problem of joint low-light enhancement and face detection. We develop an enhanced face detector to apply low-light enhancement and face detection jointly. The features extracted by the enhancement module are fed to the successive layer at the same resolution of the detection module. Thus, these features are intertwined together to unitedly learn useful information across the two phases, i.e., enhancement and detection. Experiments on VE-LOL provide a comparison of state-of-the-art low-light enhancement algorithms, point out their limitations, and suggest promising future directions. Our dataset has supported the Track “Face Detection in Low Light Conditions” of the CVPR UG2+ Challenge (2019–2020) (http://cvpr2020.ug2challenge.org/).
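
For the full-reference part of such low-level evaluation, a small helper along these lines (a sketch using scikit-image; the uint8 input format is an assumption) computes the common PSNR/SSIM pair:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray) -> dict:
    """Both images: HxWx3 uint8 arrays of identical size."""
    return {
        "psnr": peak_signal_noise_ratio(reference, enhanced, data_range=255),
        "ssim": structural_similarity(reference, enhanced,
                                      channel_axis=2, data_range=255),
    }
```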

Resources

Fig. 15 The proposed Enhancement and Detection Twins Network (EDTNet) for joint low-light enhancement and face detection. The features extracted by the enhancement module are fed into the same level of the detection module. Thus, these features are intertwined and unitedly learn useful information across the two phases for face detection in low-light conditions. HCC Enhancement enables exploiting both paired and unpaired data, while dual-path fusion helps utilize information at both the original and enhanced exposure levels.

ACM MM 2020

Integrating Semantic Segmentation and Retinex Model for Low-Light Image Enhancement

Minhao Fan, Wenjing Wang, Wenhan Yang, and Jiaying Liu

Abstract

The Retinex model is widely adopted in various low-light image enhancement tasks. The basic idea of the Retinex theory is to decompose images into reflectance and illumination. The ill-posed decomposition is usually handled by hand-crafted constraints and priors. With the recently emerging deep-learning based approaches as tools, in this paper, we integrate the idea of Retinex decomposition and semantic information awareness. Based on the observation that various objects and backgrounds have different material, reflection, and perspective attributes, regions of a single low-light image may require different adjustment and enhancement regarding contrast, illumination, and noise. We propose an enhancement pipeline with three parts that effectively utilize the semantic layer information. Specifically, we extract the segmentation, reflectance, and illumination layers, and concurrently enhance every separate region, i.e., sky, ground, and objects for outdoor scenes. Extensive experiments on both synthetic data and real-world images demonstrate the superiority of our method over current state-of-the-art low-light enhancement algorithms. Our code will be made publicly available at: https://mm20-semanticreti.github.io/.

Resources

 

Figure 2: The architecture of the proposed semantic-aware Retinex-based low-light enhancement network, including three components: Information Extraction, Reflectance Restoration, and Illumination Adjustment. We first estimate semantic segmentation, reflectance, and illumination from the input underexposed image. Then, we enhance reflectance with the help of semantic information, and use the reconstructed reflectance to adjust the illumination. The final result is generated by fusing both reflectance and illumination.
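
To make the region-wise idea concrete, here is a deliberately simplified sketch: each segmented class receives its own tone adjustment. The per-class gamma curve and its values are illustrative only and stand in for the paper's learned, semantics-guided restoration:

```python
import numpy as np

def regionwise_enhance(img: np.ndarray, seg: np.ndarray, gammas=None) -> np.ndarray:
    """img: HxWx3 in [0, 1]; seg: HxW integer class map (e.g. 0=sky, 1=ground, 2=object)."""
    gammas = gammas or {0: 0.9, 1: 0.6, 2: 0.5}   # illustrative per-class curves
    out = img.copy()
    for cls, gamma in gammas.items():
        mask = seg == cls
        out[mask] = img[mask] ** gamma            # smaller gamma -> stronger brightening
    return np.clip(out, 0.0, 1.0)
```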

 

 

TIP 2020

Lightening Network for Low-Light Image Enhancement

Li-Wen Wang; Zhi-Song Liu; Wan-Chi Siu; Daniel P. K. Lun

Abstract

Low-light image enhancement is a challenging task that has attracted considerable attention. Pictures taken in low-light conditions often have poor visual quality. To address the problem, we regard low-light enhancement as a residual learning problem, i.e., estimating the residual between low- and normal-light images. In this paper, we propose a novel Deep Lightening Network (DLN) that benefits from the recent development of Convolutional Neural Networks (CNNs). The proposed DLN consists of several Lightening Back-Projection (LBP) blocks. The LBPs perform lightening and darkening processes iteratively to learn the residual for normal-light estimation. To effectively utilize the local and global features, we also propose a Feature Aggregation (FA) block that adaptively fuses the results of different LBPs. We evaluate the proposed method on different datasets. Numerical results show that our proposed DLN approach outperforms other methods under both objective and subjective metrics.
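
A toy, non-learned analogue of the back-projection loop: lighten, project back to the low-light domain, and use the projection error to refine the estimate. The gamma curves below are stand-ins for the learned LBP blocks:

```python
import numpy as np

def lbp_sketch(low: np.ndarray, iters: int = 3) -> np.ndarray:
    """low: HxWx3 float image in [0, 1]."""
    lighten = lambda x: np.clip(x, 1e-6, 1.0) ** 0.5   # stand-in lightening block
    darken = lambda x: np.clip(x, 0.0, 1.0) ** 2.0     # stand-in darkening block
    est = lighten(low)                                 # initial normal-light guess
    for _ in range(iters):
        err = low - darken(est)                        # back-projection residue
        est = np.clip(est + err, 0.0, 1.0)             # correct the estimate
    return est
```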

 

TIP 2020

STAR: A Structure and Texture Aware Retinex Model

Jun Xu; Yingkun Hou; Dongwei Ren; Li Liu; Fan Zhu; Mengyang Yu; Haoqian Wang; Ling Shao

Abstract

Retinex theory is developed mainly to decompose an image into the illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives emerge in the smooth illumination. In this paper, we utilize exponentiated local derivatives (with an exponent γ) of an observed image to generate its structure map and texture map. The structure map is produced by amplifying the derivatives with γ>1, while the texture map is generated by shrinking them with γ<1. To this end, we design exponential filters for the local derivatives and demonstrate their capability of extracting accurate structure and texture maps under different choices of the exponent γ. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model by an alternating optimization algorithm. Each sub-problem is transformed into a vectorized least squares regression with closed-form solutions. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.
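
The exponentiated-derivative idea is easy to reproduce in a few lines. In this sketch the exponent values are arbitrary examples; STAR itself plugs such maps into a regularized decomposition rather than using them directly:

```python
import numpy as np

def structure_texture_maps(gray: np.ndarray, gamma_s: float = 1.5, gamma_t: float = 0.5):
    """gray: HxW image in [0, 1]; returns (structure, texture) derivative maps."""
    gx = np.abs(np.gradient(gray, axis=1))
    gy = np.abs(np.gradient(gray, axis=0))
    mag = gx + gy                 # local derivative magnitude
    structure = mag ** gamma_s    # gamma > 1 amplifies salient edges
    texture = mag ** gamma_t      # gamma < 1 boosts small-amplitude detail
    return structure, texture
```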


TIP 2020

LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model

Xutong Ren; Wenhan Yang; Wen-Huang Cheng; Jiaying Liu

Abstract

Noise causes unpleasant visual effects in low-light image/video enhancement. In this paper, we aim to make the enhancement model and method aware of noise throughout the whole process. To deal with heavy noise, which is not handled by previous methods, we introduce a robust low-light enhancement approach aimed at enhancing low-light images/videos well while jointly suppressing intensive noise. Our method is based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into a Retinex decomposition process to suppress noise in the reflectance map. Our method estimates a piece-wise smoothed illumination and a noise-suppressed reflectance sequentially, avoiding the residual noise in the illumination and reflectance maps that alternative decomposition methods usually leave behind. After obtaining the estimated illumination and reflectance, we adjust the illumination layer and generate our enhancement result. Furthermore, we apply LR3M to video low-light enhancement. We consider inter-frame coherence of illumination maps and find similar patches through reflectance maps of successive frames to form the low-rank prior, making use of temporal correspondence. Our method performs well for a wide variety of images and videos and achieves better quality in both enhancing and denoising, compared with the state-of-the-art methods.
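
The core of the low-rank prior can be illustrated with singular value soft-thresholding on a matrix of similar patches: noise spreads across small singular values and is suppressed, while the shared structure survives. A minimal sketch (the threshold tau is a free parameter here, not the paper's setting):

```python
import numpy as np

def svt(patch_matrix: np.ndarray, tau: float) -> np.ndarray:
    """patch_matrix: (pixels_per_patch, n_similar_patches) stack of similar patches."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return (u * s) @ vt               # low-rank, noise-suppressed reconstruction
```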


TIP 2019

Low-Light Image Enhancement via a Deep Hybrid Network

Wenqi Ren; Sifei Liu; Lin Ma; Qianqian Xu; Xiangyu Xu; Xiaochun Cao; Junping Du; Ming-Hsuan Yang

Abstract

Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.
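
A compressed, hypothetical PyTorch sketch of the two-stream layout: an encoder-decoder content stream plus an edge stream, fused at the end. The real edge stream is a spatially variant RNN; a plain convolution stack stands in for it here:

```python
import torch
import torch.nn as nn

class HybridSketch(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        # Content stream: encoder-decoder for the global content.
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, 1, 1))
        # Edge stream: a conv stack standing in for the spatially variant RNN.
        self.edge = nn.Sequential(
            nn.Conv2d(3, ch, 3, 1, 1), nn.ReLU(), nn.Conv2d(ch, 3, 3, 1, 1))
        self.fuse = nn.Conv2d(6, 3, 1)   # merge the two streams

    def forward(self, x):                # x: Nx3xHxW, H and W even
        content = self.decoder(self.encoder(x))
        edges = self.edge(x)
        return self.fuse(torch.cat([content, edges], dim=1))
```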


TIP 2019

Low-Light Image Enhancement via the Absorption Light Scattering Model

Yun-Fei Wang; He-Ming Liu; Zhao-Wang Fu

Abstract

Low light often leads to poor image visibility, which can easily affect the performance of computer vision algorithms. First, this paper proposes the absorption light scattering model (ALSM), which can be used to reasonably explain the absorbed light imaging process for low-light images. In addition, the absorbing light scattering image obtained via ALSM under a sufficient and uniform illumination can reproduce hidden outlines and details from the low-light image. Then, we identify that the minimum channel of ALSM obtained above exhibits high local similarity. This similarity can be constrained by superpixels, which effectively prevent the use of gradient operations at the edges so that the noise is not amplified quickly during enhancement. Finally, by analyzing the monotonicity between the scene reflection and the atmospheric light or transmittance in ALSM, a new low-light image enhancement method is identified. We replace atmospheric light with inverted atmospheric light to reduce the contribution of atmospheric light in the imaging results. Moreover, a soft jointed mean-standard-deviation (MSD) mechanism is proposed that directly acts on the patches represented by the superpixels. The MSD can obtain a smaller transmittance than that obtained by the minimum strategy, and it can be automatically adjusted according to the information of the image. The experiments on challenging low-light images are conducted to reveal the advantages of our method compared with other powerful techniques.
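
A structural sketch of the recovery step as the abstract outlines it: a scattering-style model I = J·t + A·(1 − t) solved for the scene J, with the atmospheric light replaced by its inverse and a patch-wise transmittance driven by mean and standard deviation. Every constant below is illustrative, not the paper's:

```python
import numpy as np

def alsm_recover(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """img: HxWx3 in [0, 1]."""
    a = 1.0 - img.reshape(-1, 3).max(axis=0)   # inverted atmospheric light
    h, w, _ = img.shape
    t = np.empty((h, w, 1))
    for i in range(0, h, patch):               # soft mean-std (MSD-like) transmittance
        for j in range(0, w, patch):
            blk = img[i:i + patch, j:j + patch]
            t[i:i + patch, j:j + patch] = np.clip(blk.mean() - blk.std(), 0.1, 1.0)
    return np.clip((img - a) / t + a, 0.0, 1.0)
```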


TIP 2019

LECARM: Low-Light Image Enhancement Using the Camera Response Model

Yurui Ren; Zhenqiang Ying; Thomas H. Li; Ge Li

Abstract

Low-light image enhancement algorithms can improve the visual quality of low-light images and support the extraction of valuable information for some computer vision techniques. However, existing techniques inevitably introduce color and lightness distortions when enhancing the images. To lower the distortions, we propose a novel enhancement framework using the response characteristics of cameras. First, we discuss how to determine a reasonable camera response model and its parameters. Then, we use illumination estimation techniques to estimate the exposure ratio for each pixel. Finally, the selected camera response model is used to adjust each pixel to the desired exposure according to the estimated exposure ratio map. Experiments show that our method can obtain enhancement results with fewer color and lightness distortions compared with several state-of-the-art methods.
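
A sketch of the adjustment step, assuming the beta-gamma response model from the authors' related work as the camera response function (the constants a and b are that model's published fit) and a deliberately crude max-RGB illumination estimate with no refinement:

```python
import numpy as np

def lecarm_sketch(img: np.ndarray, a: float = -0.3293, b: float = 1.1258) -> np.ndarray:
    """img: HxWx3 in [0, 1]; a, b: assumed beta-gamma response model parameters."""
    t = np.clip(img.max(axis=2, keepdims=True), 0.01, 1.0)  # illumination map
    k = 1.0 / t                                             # per-pixel exposure ratio
    gamma = k ** a                                          # brightness transform:
    beta = np.exp((1.0 - gamma) * b)                        # BTF(P, k) = beta * P^gamma
    return np.clip(beta * img ** gamma, 0.0, 1.0)
```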


TIP 2018

Structure-Revealing Low-Light Image Enhancement via Robust Retinex Model

Mading Li; Jiaying Liu; Wenhan Yang; Xiaoyan Sun; Zongming Guo

Abstract

Low-light image enhancement methods based on the classic Retinex model attempt to manipulate the estimated illumination and project it back to the corresponding reflectance. However, the model does not consider the noise, which inevitably exists in images captured in low-light conditions. In this paper, we propose the robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, to improve the performance of enhancing low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use the l1 norm to constrain the piece-wise smoothness of the illumination, adopt a fidelity term for the gradients of the reflectance to reveal the structure details in low-light images, and make the first attempt to estimate a noise map out of the robust Retinex model. To effectively solve the optimization problem, we provide an augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method can be generalized to handle a series of similar problems, such as image enhancement for underwater or remote sensing imagery and for hazy or dusty conditions.
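
Written out, the optimization the abstract describes takes roughly the following form (notation assumed for illustration: I is the observation, R the reflectance, L the illumination, N the noise map, and G a target gradient map for the reflectance fidelity term):

```latex
% Objective consistent with the abstract's description (notation assumed).
\min_{R,\,L,\,N}\;
      \|I - R \circ L - N\|_F^2          % robust Retinex data term
  + \alpha \|\nabla L\|_1                % piece-wise smooth illumination
  + \beta  \|\nabla R - G\|_F^2          % structure-revealing reflectance
  + \delta \|N\|_F^2                     % noise map regularization
```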

 

TIP 2016

LIME: Low-Light Image Enhancement via Illumination Map Estimation

Xiaojie Guo; Yu Li; Haibin Ling

Abstract

When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, producing the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
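
A minimal LIME-style sketch following the abstract: the initial illumination map is the per-pixel max over R, G, and B, a Gaussian blur stands in for the paper's structure-prior refinement, and the enhancement divides by the gamma-corrected map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lime_sketch(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """img: HxWx3 in [0, 1]."""
    t = img.max(axis=2)                   # initial illumination: max of R, G, B
    t = gaussian_filter(t, sigma=3)       # crude stand-in for the structure-prior refinement
    t = np.clip(t, 0.02, 1.0) ** gamma    # gamma-adjusted final map
    return np.clip(img / t[..., None], 0.0, 1.0)
```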
