vis-opt-group / sci

Low-Light Image Enhancement


Run time and cost

This model costs approximately $0.00022 to run on Replicate, or about 4,545 runs per $1, though this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 1 second, though prediction time varies significantly depending on the inputs.
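
As a sketch of how you might call the hosted model from Python, the snippet below uses the official replicate client. The input field name ("image") and resolving to the latest model version are assumptions; check the model's API tab for the exact input schema and version.

```python
# Sketch: calling the hosted model with the Replicate Python client.
# Assumes REPLICATE_API_TOKEN is set in the environment and that the model
# accepts an "image" input; confirm against the model's API schema.
import replicate

output = replicate.run(
    "vis-opt-group/sci",  # a specific version hash can be appended after ":"
    input={"image": open("low_light.png", "rb")},  # assumed input name
)
print(output)  # typically a URL to the enhanced image
```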

Readme

Toward Fast, Flexible, and Robust Low-Light Image Enhancement, CVPR 2022 (Oral)

Requirements

  • python3.7
  • pytorch==1.8.0
  • cuda11.1
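
If you set up a local environment, a quick sanity check of the pins above can be run from Python; this is just a verification sketch, not part of the repository.

```python
# Sanity check: confirm the local environment matches the pinned versions above.
import sys
import torch

print("python :", sys.version.split()[0])   # expect 3.7.x
print("pytorch:", torch.__version__)        # expect 1.8.0
print("cuda   :", torch.version.cuda)       # expect 11.1
print("gpu ok :", torch.cuda.is_available())
```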

Pretrained models

The weights folder contains three pretrained models; the main difference between them is the training data:

  • easy.pt is trained mainly on the MIT dataset
  • medium.pt is trained mainly on the LOL and LSRW datasets
  • difficult.pt is trained mainly on the DarkFace dataset

To retrain a new model, set the path of your dataset in train.py and run train.py. The final model is saved to the weights folder, and the intermediate visualization results are saved to the results folder.
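
As a rough illustration of how one of these checkpoints might be used outside train.py, the sketch below loads a weight file with plain PyTorch. The checkpoint format and any model class are assumptions not taken from this repository, so adapt it to the classes the repository actually defines.

```python
# Sketch: inspecting one of the pretrained checkpoints with plain PyTorch.
# The file may store either a state_dict or a pickled nn.Module; the
# repository's model class is not reproduced here, so treat this as a
# starting point rather than a drop-in inference script.
import torch

ckpt = torch.load("weights/medium.pt", map_location="cpu")

if isinstance(ckpt, dict):
    # A state_dict: construct the repository's model class and call
    # model.load_state_dict(ckpt) on it before inference.
    print("state_dict with", len(ckpt), "tensors")
else:
    # A pickled nn.Module: it can be put into eval mode and used directly.
    ckpt.eval()
    print(ckpt)
```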

Test steps

Citation

```bibtex
@inproceedings{ma2022toward,
  title={Toward Fast, Flexible, and Robust Low-Light Image Enhancement},
  author={Ma, Long and Ma, Tengyu and Liu, Risheng and Fan, Xin and Luo, Zhongxuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5637--5646},
  year={2022}
}
```