andreasjansson / dreamsim

Image similarity metric that compares a reference image to a set of images.


Run time and cost

This model costs approximately $0.075 to run on Replicate, or 13 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
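As a minimal sketch, the model can be called from Python with the `replicate` client. The version hash is a placeholder and the input field names (`reference`, `images`) are assumptions; check the model's API schema on Replicate for the actual inputs.

```python
# pip install replicate; set REPLICATE_API_TOKEN in your environment.
import replicate

output = replicate.run(
    "andreasjansson/dreamsim:<version-hash>",  # placeholder version hash
    input={
        "reference": open("reference.png", "rb"),  # assumed field name
        "images": open("candidates.zip", "rb"),    # assumed field name
    },
)
print(output)
```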

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 54 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Current metrics for perceptual image similarity operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level differences in layout, pose, semantic content, etc. Models that use image-level embeddings such as DINO and CLIP capture high-level and semantic judgements, but may not be aligned with human perception of more fine-grained attributes.

DreamSim is a new metric for perceptual image similarity that bridges the gap between “low-level” metrics (e.g. LPIPS, PSNR, SSIM) and “high-level” measures (e.g. CLIP). Our model was trained by concatenating CLIP, OpenCLIP, and DINO embeddings, then fine-tuning on human perceptual judgements. We gathered these judgements on a dataset of ~20k image triplets generated by diffusion models. Our model achieves better alignment with human similarity judgements than existing metrics, and can be used for downstream applications such as image retrieval.
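To illustrate the image-retrieval use case, here is a sketch using the `dreamsim` Python package's documented embedding/distance interface (`pip install dreamsim`); exact signatures may differ across versions, and the file names are illustrative. It ranks a set of candidate images by their DreamSim distance to a reference image:

```python
# pip install dreamsim torch
import torch
from PIL import Image
from dreamsim import dreamsim

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = dreamsim(pretrained=True, device=device)

def load(path):
    # preprocess returns a tensor ready to feed to the model
    return preprocess(Image.open(path)).to(device)

reference = load("reference.png")             # illustrative file names
candidates = ["cat1.png", "cat2.png", "cat3.png"]

# Lower distance = more perceptually similar to the reference.
distances = [(p, model(reference, load(p)).item()) for p in candidates]
for path, d in sorted(distances, key=lambda x: x[1]):
    print(f"{path}: {d:.4f}")
```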