(Research only) Moondream1 is a vision language model that performs on par with models twice its size
Salesforce/xgen-7b-8k-base
SigLIP proposes to replace the loss function used in CLIP with a simple pairwise sigmoid loss
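As a point of reference, the pairwise sigmoid objective from the SigLIP paper can be sketched in the paper's notation, where x_i and y_j are normalized image and text embeddings, t is a learned temperature, b a learned bias, and z_ij is 1 for matching image-text pairs and -1 otherwise:

L = -\frac{1}{|B|} \sum_{i=1}^{|B|} \sum_{j=1}^{|B|} \log \sigma\big( z_{ij} \, (t \, x_i \cdot y_j + b) \big)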
sdxs-512-0.9, trained using score distillation and feature matching, can generate high-resolution images from text prompts in real time
GoogleAI: Style Aligned Image Generation via Shared Attention
Stability AI's FreeWilly2
DiffusionLight: Light Probes by Painting a Chrome Ball
A 2.7B-parameter open-source chat model trained with Glaive's synthetic data generation platform
BGE-M3, the first embedding model to support multiple retrieval modes as well as multilingual and multi-granularity retrieval.
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Shiba Stable Diffusion model
Mistral-7B-v0.1 fine-tuned for chat with the Dolphin dataset (an open-source implementation of Microsoft's Orca)
Ollama QwQ 32B
The Segmind-Vega model is a distilled version of SDXL, offering a 70% reduction in size and a 100% speedup
FLUX.1-Dev LoRA Training by Huggingface Diffusers
CUDA implementation of an rgb2grayscale function
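RGB-to-grayscale conversion is a per-pixel weighted sum, so the kernel amounts to one thread per output pixel. The block below is a minimal sketch assuming interleaved 8-bit RGB input and the common ITU-R BT.601 luma weights; all names are illustrative, not taken from the listed implementation.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// One thread per output pixel; input is interleaved 8-bit R,G,B.
// Weights assume ITU-R BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B.
__global__ void rgb2grayscale(const uint8_t* rgb, uint8_t* gray,
                              int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;           // linear index of the output pixel
    const uint8_t* px = rgb + 3 * idx; // R, G, B for this pixel
    float luma = 0.299f * px[0] + 0.587f * px[1] + 0.114f * px[2];
    gray[idx] = static_cast<uint8_t>(luma + 0.5f); // round to nearest
}

// Example launch: 16x16 thread blocks tiling the image.
void launch_rgb2grayscale(const uint8_t* d_rgb, uint8_t* d_gray,
                          int width, int height) {
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    rgb2grayscale<<<grid, block>>>(d_rgb, d_gray, width, height);
}
```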
Hermes 2 Pro is trained on an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house
The InternLM2.5 release open-sources a 7-billion-parameter base model and a chat model tailored for practical scenarios.
Huggingface Diffusers: SDv1.4/1.5/2.0/2.1 finetuner
Applies a black-and-white dotted effect to a video