Huang, Yan; Liao, Xiaoshan; Liang, Jinxiu; Shi, Boxin; Xu, Yong; Le Callet, Patrick. "Detail-Preserving Diffusion Models for Low-Light Image Enhancement." IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 4, pp. 3396–3409, April 2025.

Abstract: Existing diffusion models for low-light image enhancement typically remove, step by step, the noise introduced during the forward diffusion process using a denoising loss, with the process conditioned on the input low-light image. While these models demonstrate remarkable ability to generate realistic high-frequency details, they often struggle to restore fine details that are faithful to the input. To address this, we present a novel detail-preserving diffusion model for realistic and faithful low-light image enhancement. Our approach integrates a size-agnostic diffusion process with a reverse-process reconstruction loss, significantly enhancing the fidelity of enhanced images to their low-light counterparts and enabling more accurate recovery of fine details. To ensure the preservation of region- and content-aware details, we employ an efficient noise estimation network with a simplified channel-spatial attention mechanism. Additionally, we propose a multiscale ensemble scheme to maintain detail fidelity across diverse illumination regions. Comprehensive experiments on eight benchmark datasets demonstrate that our method achieves state-of-the-art results compared to over twenty existing methods in terms of both perceptual quality (LPIPS) and distortion metrics (PSNR and SSIM). The code is available at: https://github.com/CSYanH/DePDiff.
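
For orientation, the abstract refers to conditioning a denoising diffusion process on the low-light input and training with a noise-prediction (denoising) loss. The sketch below is only a generic illustration of such a conditional diffusion training step in PyTorch; the network name `eps_net`, its call signature, and the schedule tensor are hypothetical placeholders and do not reflect the authors' DePDiff implementation, which is available at the repository linked above.

```python
# Generic sketch of a low-light-conditioned denoising-diffusion training step.
# All names and signatures are illustrative assumptions, not the DePDiff code.
import torch
import torch.nn.functional as F

def diffusion_training_step(eps_net, x_low, x_normal, alphas_cumprod):
    """One training step of a conditional diffusion model.

    eps_net        -- noise-estimation network taking (noisy target, condition, timestep)
    x_low          -- batch of low-light images (the condition), shape (B, C, H, W)
    x_normal       -- corresponding normal-light targets, shape (B, C, H, W)
    alphas_cumprod -- 1-D tensor of cumulative noise-schedule products, length T
    """
    B = x_normal.shape[0]
    T = alphas_cumprod.shape[0]

    # Sample a random diffusion timestep for each image in the batch.
    t = torch.randint(0, T, (B,), device=x_normal.device)

    # Forward diffusion: corrupt the normal-light target with Gaussian noise.
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)
    noise = torch.randn_like(x_normal)
    x_t = a_bar.sqrt() * x_normal + (1.0 - a_bar).sqrt() * noise

    # The network predicts the injected noise, conditioned on the low-light input.
    eps_pred = eps_net(x_t, x_low, t)

    # Standard denoising loss: mean squared error between true and predicted noise.
    return F.mse_loss(eps_pred, noise)
```

The paper's contributions (size-agnostic diffusion, reverse-process reconstruction loss, simplified channel-spatial attention, multiscale ensemble) build on top of this basic objective; consult the linked repository for the actual implementation.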