Stable Diffusion 3.5 is here

Posted October 22, 2024

We're excited to announce that Stable Diffusion 3.5, the latest and most powerful text-to-image model from Stability AI, is now available on Replicate. It brings significant improvements in image quality, better prompt understanding, and support for a wide range of artistic styles.

Stable Diffusion 3.5 comes in three variants:

  • Stable Diffusion 3.5 Large: the most powerful model in the family, built for image quality and prompt adherence.
  • Stable Diffusion 3.5 Large Turbo: a distilled version of Large that generates images in just 4 steps.
  • Stable Diffusion 3.5 Medium: a 2.5 billion parameter model that runs in the cloud or on consumer hardware.

You can generate images using Stable Diffusion 3.5 right away. Try this in Python:

import replicate
 
output = replicate.run(
    "stability-ai/stable-diffusion-3.5-large",
    input={"prompt": "A watercolor painting of a futuristic city skyline at dawn"}
)
# run() returns a list of file outputs for this model
print(output[0].url)
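The returned objects are file outputs you can read directly, so you can save the image to disk without a separate download step. A minimal sketch, assuming the model returns a list of file-like outputs and that `REPLICATE_API_TOKEN` is set in your environment:

```python
import os

# Only call the API when a token is available (assumption: the model
# returns a list of file-like outputs that support .read()).
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    output = replicate.run(
        "stability-ai/stable-diffusion-3.5-large",
        input={"prompt": "A watercolor painting of a futuristic city skyline at dawn"},
    )
    # Write the raw image bytes to a local file
    with open("skyline.webp", "wb") as f:
        f.write(output[0].read())
```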

Or use JavaScript:

import Replicate from "replicate";
 
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
 
const [output] = await replicate.run("stability-ai/stable-diffusion-3.5-large", {
  input: {
    prompt: "A watercolor painting of a futuristic city skyline at dawn",
  },
});
console.log(output.url());

You can also experiment with Stable Diffusion 3.5 directly in your browser.

What's new in Stable Diffusion 3.5 Large

  • Enhanced image quality: Generates higher-resolution images with finer details, resulting in more photorealistic and visually appealing outputs.
  • Greater output variety: Thanks to query-key normalization, the model produces a broader range of outputs from the same prompt. For example, entering "a human" with different seeds will yield diverse genders, ethnicities, and appearances.
  • Improved prompt adherence: Better understanding and representation of complex prompts, allowing for more accurate and detailed images based on your descriptions.
  • Versatile styles: Capable of generating images in diverse styles like watercolor, pixel art, 3D renders, line art, and more.
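The seed behavior described above can be sketched with the API: generate the same prompt several times with different seeds and compare the results. This assumes the model accepts a `seed` input (check the model's input schema) and only calls the API when a token is present:

```python
import os

# Sketch: same prompt, different seeds, to sample the model's output variety.
# The "seed" input name is an assumption; verify it against the model schema.
prompt = "a human"
inputs = [{"prompt": prompt, "seed": seed} for seed in (11, 22, 33)]

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    for inp in inputs:
        output = replicate.run("stability-ai/stable-diffusion-3.5-large", input=inp)
        print(inp["seed"], output[0].url)
```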

Also introducing Stable Diffusion 3.5 Large Turbo

Stability AI has also released Stable Diffusion 3.5 Large Turbo. This distilled version generates high-quality images in just 4 steps, offering faster inference and reduced costs—ideal for applications where speed and efficiency are crucial.

Here's how to use it in Python:

import replicate
 
output = replicate.run(
    "stability-ai/stable-diffusion-3.5-large-turbo",
    input={"prompt": "A pixel art dragon in a mystical forest"}
)
# run() returns a list of file outputs for this model
print(output[0].url)
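To see the speed difference for yourself, you could time the Turbo model against the base Large model. A minimal sketch, assuming a list-of-files output and an API token in the environment; without a token the helper simply returns None:

```python
import os
import time

def timed_run(model, prompt):
    """Time one generation call; returns (seconds, url), or None without a token."""
    if not os.environ.get("REPLICATE_API_TOKEN"):
        return None
    import replicate
    start = time.perf_counter()
    output = replicate.run(model, input={"prompt": prompt})
    return time.perf_counter() - start, output[0].url

for model in ("stability-ai/stable-diffusion-3.5-large",
              "stability-ai/stable-diffusion-3.5-large-turbo"):
    result = timed_run(model, "A pixel art dragon in a mystical forest")
    if result:
        print(f"{model}: {result[0]:.1f}s -> {result[1]}")
```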

Stable Diffusion 3.5 Medium can be run on consumer hardware

Stable Diffusion 3.5 Medium is a 2.5 billion parameter model that can be run in the cloud as well as on consumer hardware.

To run it on the Replicate API with Python:

import replicate
 
output = replicate.run(
    "stability-ai/stable-diffusion-3.5-medium",
    input={"prompt": "A baroque-style painting of aliens attending a royal ball"}
)
# run() returns a list of file outputs for this model
print(output[0].url)

The weights are available on Hugging Face if you'd like to run the model on a local GPU: https://huggingface.co/stabilityai/stable-diffusion-3.5-medium
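If you do want to run it locally, here's a minimal sketch using the Hugging Face diffusers library. The pipeline class, dtype, and step count are assumptions based on typical diffusers usage; check the model card for the recommended settings. The helper degrades gracefully when torch, diffusers, or a CUDA GPU is unavailable:

```python
def generate_locally(prompt, steps=28):
    """Run Stable Diffusion 3.5 Medium on a local GPU; returns a PIL image,
    or None when torch/diffusers/CUDA are unavailable."""
    try:
        import torch
        # Assumption: the SD3 pipeline class also serves the 3.5 weights
        from diffusers import StableDiffusion3Pipeline
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-medium",
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    return pipe(prompt, num_inference_steps=steps).images[0]

image = generate_locally("A baroque-style painting of aliens attending a royal ball")
if image is not None:
    image.save("aliens.png")
```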

Pricing

Stable Diffusion 3.5 models are priced per image.

Visit our pricing page for more details.

Licensing

Stable Diffusion 3.5 models are available under the Stability AI Community License. Here's what you need to know:

  • Non-commercial use: Free for non-commercial projects and research.
  • Commercial use: Free for commercial use if your company’s annual revenue is less than $1 million.
  • Ownership of outputs: You retain ownership of the images you generate.

Please refer to the full license terms for more information.

Looking ahead: fine-tuning support

We're working on bringing fine-tuning capabilities for Stable Diffusion 3.5 models to Replicate. This will allow you to customize the models further for your specific use cases. Stay tuned for updates!

Join the community

We can't wait to see what you create with Stable Diffusion 3.5. Share your creations and connect with other developers on our Discord server. Follow us on X for the latest updates.