vis-opt-group / sci

Low-Light Image Enhancement


Toward Fast, Flexible, and Robust Low-Light Image Enhancement, CVPR 2022 (Oral)

Requirements

  • Python 3.7
  • PyTorch 1.8.0
  • CUDA 11.1
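
The environment above can be set up with pip. The wheel index and package versions below are an assumption based on PyTorch's archived install matrix for 1.8.0 with CUDA 11.1; adjust for your platform:

```shell
# Install PyTorch 1.8.0 built against CUDA 11.1 (wheel index assumed from
# PyTorch's historical install instructions; torchvision 0.9.0 is the
# matching release for torch 1.8.0).
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
```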

Pretrained models

The weights folder contains three models; the main difference between them is the training data:

  • easy.pt: trained mainly on the MIT dataset
  • medium.pt: trained mainly on the LOL and LSRW datasets
  • difficult.pt: trained mainly on the DarkFace dataset

To retrain a new model, set the path of your dataset in train.py and run train.py. The final model will be saved to the weights folder, and intermediate visualization results will be saved to the results folder.
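
As a small illustration of how the three checkpoints might be selected at inference time, here is a hypothetical helper (the function and the difficulty labels are not part of the repository, only the three weight files are):

```python
from pathlib import Path

# Map a rough scene-difficulty label to the corresponding pretrained
# checkpoint shipped in the weights folder (labels are an assumption
# for illustration; the .pt filenames come from the repository).
WEIGHTS = {
    "easy": "weights/easy.pt",            # trained mainly on the MIT dataset
    "medium": "weights/medium.pt",        # trained mainly on LOL and LSRW
    "difficult": "weights/difficult.pt",  # trained mainly on DarkFace
}

def weight_path(difficulty: str) -> Path:
    """Return the checkpoint path for a given difficulty label."""
    if difficulty not in WEIGHTS:
        raise ValueError(f"unknown difficulty: {difficulty!r}")
    return Path(WEIGHTS[difficulty])
```

The returned path would then be passed to whatever loading routine the code base uses (e.g. a torch.load call) when running inference.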

Test steps

Citation

```bibtex
@inproceedings{ma2022toward,
  title={Toward Fast, Flexible, and Robust Low-Light Image Enhancement},
  author={Ma, Long and Ma, Tengyu and Liu, Risheng and Fan, Xin and Luo, Zhongxuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5637--5646},
  year={2022}
}
```