zsxkib / molmo-7b

allenai/Molmo-7B-D-0924: answers questions about images and generates captions

  • Public
  • 98.8K runs
  • L40S
  • GitHub
  • Weights
  • Paper
  • License

Input

  • image (file, required): Input image
  • string (required): Text prompt or question about the image
  • integer (minimum: 1, maximum: 100): Number of highest-probability vocabulary tokens to keep for top-k filtering. Default: 50
  • number (minimum: 0, maximum: 1): Cumulative probability for top-p filtering. Default: 1
  • integer (minimum: 1, maximum: 1000): Maximum number of new tokens to generate. Default: 200
  • number (minimum: 0.1, maximum: 2): Randomness in token selection (higher values increase randomness). Default: 1
  • number (minimum: 0.1, maximum: 2): Exponential penalty on output length (values < 1.0 encourage shorter outputs, values > 1.0 encourage longer outputs). Default: 1
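
As a sketch, a call through Replicate's Python client might look like the snippet below. The input field names (image, text, top_k, top_p, max_new_tokens, temperature, length_penalty) are assumptions inferred from the schema descriptions above; check the model's API tab for the exact names, and pin a specific model version for production use.

```python
import replicate

# Field names are assumptions inferred from the input schema above;
# verify them against the model's API tab on Replicate.
output = replicate.run(
    "zsxkib/molmo-7b",
    input={
        "image": open("dog_on_bench.jpg", "rb"),   # required: input image
        "text": "Describe this image in detail.",  # required: prompt
        "top_k": 50,            # 1-100, default 50
        "top_p": 1.0,           # 0-1, default 1
        "max_new_tokens": 200,  # 1-1000, default 200
        "temperature": 1.0,     # 0.1-2, default 1
        "length_penalty": 1.0,  # 0.1-2, default 1
    },
)
print(output)
```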

Output

I see a charming scene featuring a large, fluffy white dog sitting on a wooden bench in the middle of a field. The dog appears to be a poodle mix, with curly fur covering its entire body. It's sitting upright with its front paws hanging over the edge of the bench, looking directly at the camera with a happy expression. The dog's tongue is hanging out, and its eyes are dark and expressive. The bench is made of dark brown wood with a lattice design on the backrest. It's positioned on a patch of brown, dead grass, which suggests it might be late fall or early winter. In the background, there's a vast field filled with tall, brown grass, and further back, I can see a line of bare trees against a light blue sky. The overall atmosphere is peaceful and serene. The dog seems to be enjoying its time outdoors, perhaps waiting for its owner or just taking in the scenery. The contrast between the white, fluffy dog and the brown

Run time and cost

This model costs approximately $0.011 to run on Replicate, or 90 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 12 seconds. The predict time for this model varies significantly based on the inputs.
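
Replicate models are packaged with Cog, which serves an HTTP prediction endpoint when the container runs locally. The sketch below queries such a container; the Docker image tag is listed on the model page, and the port and input field names here are assumptions.

```python
import base64
import requests

# Assumes the model's Docker image is already running locally and serving
# Cog's HTTP API on port 5000; field names mirror the input schema above.
with open("dog_on_bench.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": data_uri, "text": "What breed is this dog?"}},
)
resp.raise_for_status()
print(resp.json()["output"])
```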

Readme

Molmo 7B-D

Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs. Molmo achieves state-of-the-art performance among multimodal models of a similar size while being fully open-source. You can find all models in the Molmo family here. Learn more about the Molmo family in our announcement blog post.

Molmo 7B-D is based on Qwen2-7B and uses OpenAI CLIP as its vision backbone. On both academic benchmarks and human evaluation, it sits comfortably between GPT-4V and GPT-4o. It powers the Molmo demo at molmo.allenai.org.
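
If you prefer to run the checkpoint yourself, the Hugging Face model card for allenai/Molmo-7B-D-0924 documents a transformers-based usage pattern along these lines. The processor.process and generate_from_batch helpers come from the model's remote code; treat this as a sketch and verify against the card.

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

# Load the processor and model; trust_remote_code pulls in Molmo's custom code.
processor = AutoProcessor.from_pretrained(
    "allenai/Molmo-7B-D-0924", trust_remote_code=True,
    torch_dtype="auto", device_map="auto",
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924", trust_remote_code=True,
    torch_dtype="auto", device_map="auto",
)

# Preprocess one image plus a text prompt into model inputs.
image = Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}  # batch of 1

# Generate up to 200 new tokens, stopping at the end-of-text marker.
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens.
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```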

This checkpoint is a preview of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

Sign up here to be the first to know when artifacts are released.

Quick links:

  • 💬 Demo
  • 📂 All Models
  • 📃 Paper
  • 🎥 Blog with Videos

Evaluations

Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating
Molmo 72B | 81.2 | 1077
Molmo 7B-D (this model) | 77.3 | 1056
Molmo 7B-O | 74.6 | 1051
MolmoE 1B | 68.6 | 1032
GPT-4o | 78.5 | 1079
GPT-4V | 71.1 | 1041
Gemini 1.5 Pro | 78.3 | 1074
Gemini 1.5 Flash | 75.1 | 1054
Claude 3.5 Sonnet | 76.7 | 1069
Claude 3 Opus | 66.4 | 971
Claude 3 Haiku | 65.3 | 999
Qwen VL2 72B | 79.4 | 1037
Qwen VL2 7B | 73.7 | 1025
Intern VL2 LLAMA 76B | 77.1 | 1018
Intern VL2 8B | 69.4 | 953
Pixtral 12B | 69.5 | 1016
Phi3.5-Vision 4B | 59.7 | 982
PaliGemma 3B | 50.0 | 937
LLAVA OneVision 72B | 76.6 | 1051
LLAVA OneVision 7B | 72.0 | 1024
Cambrian-1 34B | 66.8 | 953
Cambrian-1 8B | 63.4 | 952
xGen-MM-Interleave 4B | 59.5 | 979
LLAVA-1.5 13B | 43.9 | 960
LLAVA-1.5 7B | 40.7 | 951

Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).

FAQs

Molmo doesn’t work great with transparent images!

We received reports that Molmo models might struggle with transparent images. For the time being, we recommend adding a white or dark background to your images before passing them to the model.
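
One way to do this with Pillow (an illustrative helper, not part of Molmo) is to composite the image onto an opaque background before uploading:

```python
from PIL import Image

def flatten_alpha(path: str, background=(255, 255, 255)) -> Image.Image:
    """Composite a transparent image onto a solid background color."""
    img = Image.open(path).convert("RGBA")
    canvas = Image.new("RGBA", img.size, background + (255,))
    return Image.alpha_composite(canvas, img).convert("RGB")

# Flatten onto white and save as JPEG, which has no alpha channel.
flatten_alpha("diagram.png").save("diagram_flat.jpg", quality=95)
```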

License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use. For more information, please see our Responsible Use Guidelines.