Huang Y, Liao X, Liang J, Shi B, Xu Y, Le Callet P.
Detail-Preserving Diffusion Models for Low-Light Image Enhancement. IEEE Transactions on Circuits and Systems for Video Technology. 2025;35:3396–3409.
Abstract: Existing diffusion models for low-light image enhancement typically incrementally remove noise introduced during the forward diffusion process using a denoising loss, with the process being conditioned on input low-light images. While these models demonstrate remarkable abilities in generating realistic high-frequency details, they often struggle to restore fine details that are faithful to the input. To address this, we present a novel detail-preserving diffusion model for realistic and faithful low-light image enhancement. Our approach integrates a size-agnostic diffusion process with a reverse process reconstruction loss, significantly enhancing the fidelity of enhanced images to their low-light counterparts and enabling more accurate recovery of fine details. To ensure the preservation of region- and content-aware details, we employ an efficient noise estimation network with a simplified channel-spatial attention mechanism. Additionally, we propose a multiscale ensemble scheme to maintain detail fidelity across diverse illumination regions. Comprehensive experiments on eight benchmark datasets demonstrate that our method achieves state-of-the-art results compared to over twenty existing methods in terms of both perceptual quality (LPIPS) and distortion metrics (PSNR and SSIM). The code is available at: https://github.com/CSYanH/DePDiff.
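The abstract mentions a simplified channel-spatial attention mechanism inside the noise estimation network. As a rough illustration of that general family of modules, the PyTorch sketch below reweights features first along channels and then along spatial positions; the layer sizes, pooling choices, and names are assumptions for illustration, not the authors' implementation (see the linked repository for the real code).

```python
# A minimal sketch of a channel-spatial attention block of the kind the
# abstract describes; reduction ratio, kernel size, and pooling statistics
# are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Reweights features along channels, then along spatial positions."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one 2D map from pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                      # channel reweighting
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)             # spatial reweighting

feats = torch.randn(1, 64, 32, 32)
out = ChannelSpatialAttention(64)(feats)    # same shape as the input
```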
Huang Y, Liao X, Liang J, Quan Y, Shi B, Xu Y.
Zero-Shot Low-Light Image Enhancement via Latent Diffusion Models. In:
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI); 2025.
Abstract: Low-light image enhancement (LLIE) aims to improve the visibility and signal-to-noise ratio of images captured under poor lighting conditions. Despite impressive improvements, deep learning-based LLIE approaches require extensive training data, which is often difficult and costly to obtain. In this paper, we propose a zero-shot LLIE framework that, for the first time, leverages pre-trained latent diffusion models, which act as powerful priors for recovering latent images from low-light inputs. Our approach introduces several components to alleviate the inherent challenges of utilizing pre-trained latent diffusion models: modeling the degradation process in an image-adaptive manner, penalizing latents outside the manifold of natural images, and balancing the strength of the guidance from the given low-light image during the denoising process. Experimental results demonstrate that our framework outperforms existing methods, achieving superior performance across various datasets.
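To make the guidance-balancing idea concrete, here is a minimal, heavily hedged sketch of one guided reverse-diffusion step in latent space: the pre-trained prior proposes a clean latent, and the gradient of a data-fidelity term pulls the sample toward consistency with the low-light observation. The `denoiser`, `decoder`, and `degrade` callables, the toy update rule, and `step_size` are all placeholder assumptions, not the paper's actual components.

```python
# A sketch of guidance during reverse diffusion, under assumed interfaces.
import torch

def guided_reverse_step(z_t, t, denoiser, decoder, degrade, y_low, step_size):
    """One reverse step with data-fidelity guidance (schematic only).

    z_t:       current latent sample
    denoiser:  predicts a clean latent from (z_t, t)      [assumed prior]
    decoder:   maps latents to images                     [assumed VAE]
    degrade:   image-adaptive low-light degradation model [assumed]
    y_low:     observed low-light image
    step_size: guidance strength, balanced against the prior
    """
    z_t = z_t.detach().requires_grad_(True)
    z0_hat = denoiser(z_t, t)                    # prior's clean-latent estimate
    # Data-fidelity term: the degraded reconstruction should match y_low.
    loss = ((degrade(decoder(z0_hat)) - y_low) ** 2).mean()
    grad = torch.autograd.grad(loss, z_t)[0]
    # Toy ancestral update toward z0_hat (noise schedule omitted), then guide.
    z_prev = z0_hat + 0.5 * (z_t - z0_hat)
    return (z_prev - step_size * grad).detach()

# Toy demo with identity stand-ins for the prior and degradation model.
z = torch.randn(1, 4, 8, 8)
y = torch.rand(1, 4, 8, 8)
z = guided_reverse_step(z, t=10,
                        denoiser=lambda z, t: z,
                        decoder=lambda z: z,
                        degrade=lambda x: 0.2 * x,
                        y_low=y, step_size=0.1)
```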
Quan Y, Wan X, Tang Z, Liang J, Ji H.
Multi-Focus Image Fusion via Explicit Defocus Blur Modelling. In:
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI); 2025.
Abstract: Multi-focus image fusion (MFIF) is a critical technique for enhancing depth of field in photography, producing an all-in-focus image from multiple images captured at different focal lengths. While deep learning has shown promise in MFIF, most existing methods ignore the physical model of defocus blurring in their neural architecture design, limiting their interpretability and generalization. This paper presents a novel framework that integrates explicit defocus blur modeling into the MFIF process, leading to enhanced interpretability and performance. Leveraging an atom-based, spatially-varying, parameterized defocus blurring model, our approach first computes pixel-wise defocus descriptors and initial focused images from the multi-focus source images in a scale-recurrent fashion, from which soft decision maps are estimated. Afterward, image fusion is performed using masks constructed from the decision maps, with separate treatment of pixels that are likely defocused in all source images or near the boundaries between defocused and focused regions. Model training uses a fusion loss and a cross-scale defocus estimation loss. Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach.
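The final fusion step, combining source images through masks built from soft decision maps, reduces to a per-pixel convex blend. A minimal NumPy sketch under that reading follows; the decision-map estimation (the paper's core contribution) and the special handling of always-defocused and boundary pixels are assumed given, and all names are illustrative.

```python
# A minimal sketch of soft-decision-map fusion; the map itself is assumed
# to come from the (not shown) defocus-estimation stage.
import numpy as np

def fuse(img_near: np.ndarray, img_far: np.ndarray,
         decision: np.ndarray) -> np.ndarray:
    """Blend two registered source images with a per-pixel soft map.

    decision lies in [0, 1]: 1 takes the pixel from img_near, 0 from
    img_far. Intermediate values soften transitions near the boundaries
    between focused and defocused regions, as the abstract describes.
    """
    d = np.clip(decision, 0.0, 1.0)[..., None]   # broadcast over channels
    return d * img_near + (1.0 - d) * img_far

h, w = 64, 64
near = np.random.rand(h, w, 3)    # focused on the foreground
far = np.random.rand(h, w, 3)     # focused on the background
mask = np.random.rand(h, w)       # stand-in for an estimated decision map
fused = fuse(near, far, mask)
```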