arielreplicate / tres_iqa

Assess the quality of an image

  • Public
  • 146.9K runs
  • GitHub
  • Paper
  • License



Run time and cost

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 1 second.



Given an input image, this model predicts the quality of that image. Quality can be defined as how distortion-free an image is, where sources of distortion can include noise, blurring, and compression artifacts. Note that a lower score indicates a higher quality image!
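For running this model programmatically, a minimal sketch using the Replicate Python client might look like the following. The input field name `input_image` is an assumption based on typical Replicate model schemas, not confirmed from this model's schema; a valid `REPLICATE_API_TOKEN` must be set in the environment.

```python
import os


def score_image(image_path: str) -> float:
    """Send an image to the hosted model and return its quality score.

    Assumes the Replicate Python client is installed (`pip install replicate`)
    and REPLICATE_API_TOKEN is set. The field name "input_image" is a guess;
    check the model's API tab for the exact input schema.
    """
    import replicate  # imported lazily so this sketch loads without the client

    with open(image_path, "rb") as f:
        output = replicate.run("arielreplicate/tres_iqa", input={"input_image": f})
    # Remember: a LOWER score indicates a HIGHER quality image.
    return float(output)


if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    print(score_image("photo.jpg"))
```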


This is an implementation of No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency (see the paper and video linked above).

This is a model for image quality assessment (IQA), a task in which machine learning models are trained to predict the quality of an image in a manner that is consistent with human quality raters. No-Reference Image Quality Assessment (NR-IQA) means assessing image quality without a "clean" reference image to compare against, i.e., predicting a score from a single image input. This model is an NR-IQA model.
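To make the full-reference vs. no-reference distinction concrete, here is a small sketch of a classic full-reference metric, PSNR, which requires the clean image that an NR-IQA model does without. This is an illustrative contrast only, not part of the TReS model itself.

```python
import numpy as np


def psnr(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Full-reference quality: compares a distorted image to its clean original."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(255.0 ** 2 / mse)


rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
noisy = np.clip(clean.astype(int) + rng.normal(0, 25, clean.shape), 0, 255).astype(np.uint8)

# PSNR needs both images; an NR-IQA model sees only `noisy`.
print(psnr(clean, noisy))
```

Note that PSNR rises with quality, whereas this model's score falls with quality, so the two are not directly comparable.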

The demo uses a model trained on the LIVE dataset downloaded from here.


This code borrows elements from HyperIQA and DETR.


If you find this work useful for your research, please cite our paper:

  @inproceedings{golestaneh2022no,
    title={No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency},
    author={Golestaneh, S Alireza and Dadsetan, Saba and Kitani, Kris M},
    booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
    year={2022}
  }

If you have any questions about our work, please do not hesitate to contact us.