cjwbw / night-enhancement

Unsupervised Night Image Enhancement

  • Public
  • 41.3K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.022 to run on Replicate, or 45 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 99 seconds. The predict time for this model varies significantly based on the inputs.
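
For programmatic use, the model can also be called through the Replicate Python client. The snippet below is a minimal sketch; the input field name "image" is an assumption, so check the model's API schema on Replicate for the exact parameter names.

import replicate

# Requires the REPLICATE_API_TOKEN environment variable to be set.
# The "image" input name is assumed; consult the model's API schema on
# Replicate for the exact parameters. Depending on your client version,
# you may need to pin a specific model version hash after the model name.
output = replicate.run(
    "cjwbw/night-enhancement",
    input={"image": open("night_photo.jpg", "rb")},
)
print(output)  # URL(s) pointing to the enhanced image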

Readme

This is a Cog implementation of https://github.com/jinyeying/night-enhancement
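
Because the model is packaged with Cog, the repository exposes it through a predictor class. The sketch below only illustrates the general shape of such a predictor; the parameter names and setup logic are assumptions, not the repository's actual predict.py.

from cog import BasePredictor, Input, Path
import shutil

class Predictor(BasePredictor):
    def setup(self):
        # The real predictor loads the decomposition and light-effects
        # suppression networks here; omitted in this sketch.
        pass

    def predict(self, image: Path = Input(description="Input night image")) -> Path:
        # The real predictor runs the enhancement networks on the input and
        # writes the enhanced result. This sketch just copies the input so
        # the example stays runnable.
        out_path = Path("/tmp") / image.name
        shutil.copy(str(image), str(out_path))
        return out_path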

Official implementation of the following paper.

Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression. European Conference on Computer Vision (ECCV 2022)

Yeying Jin, Wenhan Yang and Robby T. Tan

Abstract

Night images suffer not only from low light, but also from uneven distributions of light. Most existing night visibility enhancement methods focus mainly on enhancing low-light regions. This inevitably leads to over-enhancement and saturation in bright regions, such as those regions affected by light effects (glare, floodlight, etc.). To address this problem, we need to suppress the light effects in bright regions while, at the same time, boosting the intensity of dark regions. With this idea in mind, we introduce an unsupervised method that integrates a layer decomposition network and a light-effects suppression network. Given a single night image as input, our decomposition network learns to decompose shading, reflectance and light-effects layers, guided by unsupervised layer-specific prior losses. Our light-effects suppression network further suppresses the light effects and, at the same time, enhances the illumination in dark regions. This light-effects suppression network exploits the estimated light-effects layer as the guidance to focus on the light-effects regions. To recover the background details and reduce hallucination/artefacts, we propose structure and high-frequency consistency losses. Our quantitative and qualitative evaluations on real images show that our method outperforms state-of-the-art methods in suppressing night light effects and boosting the intensity of dark regions.
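
As a rough illustration of the structure-consistency idea described above (not the paper's exact formulation), one way to keep background detail is to penalise differences between the spatial gradients of the enhanced output and those of the input, for example:

import torch

def spatial_gradients(img: torch.Tensor):
    # Finite-difference gradients along width and height for (N, C, H, W) tensors.
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def structure_consistency_loss(output: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    # Encourage the light-effects-suppressed output to keep the same edge
    # structure as the reference image. This is an illustrative L1 gradient
    # loss, not the exact loss used in the paper.
    odx, ody = spatial_gradients(output)
    rdx, rdy = spatial_gradients(reference)
    return (odx - rdx).abs().mean() + (ody - rdy).abs().mean()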

Low-Light Enhancement

  1. LOL-Real Results

  2. LOL-test Results

Citations

If this work is useful for your research, please cite our paper.

@article{jin2022unsupervised,
  title={Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression},
  author={Jin, Yeying and Yang, Wenhan and Tan, Robby T},
  journal={arXiv preprint arXiv:2207.10564},
  year={2022}
}

If light-effects data is useful for your research, please cite our paper.

@inproceedings{sharma2021nighttime,
    title={Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects},
    author={Sharma, Aashish and Tan, Robby T},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={11977--11986},
    year={2021}
}

If GTA5 nighttime fog data is useful for your research, please cite our paper.

@inproceedings{yan2020nighttime,
    title={Nighttime defogging using high-low frequency decomposition and grayscale-color networks},
    author={Yan, Wending and Tan, Robby T and Dai, Dengxin},
    booktitle={European Conference on Computer Vision},
    pages={473--488},
    year={2020},
    organization={Springer}
}