
rinnakk/japanese-stable-diffusion

Japanese-specific latent text-to-image diffusion model
2.1K runs

Run time and cost

Predictions run on Nvidia T4 GPU hardware. Predictions typically complete within 103 seconds. The predict time for this model varies significantly based on the inputs.

Japanese Stable Diffusion

img

Japanese Stable Diffusion is a Japanese-specific latent text-to-image diffusion model.

This model was trained by using a powerful text-to-image model, Stable Diffusion.
Many thanks to CompVis, Stability AI and LAION for public release.

Model Details

Why Japanese Stable Diffusion?

Stable Diffusion is a very powerful text-to-image model, not only in terms of quality but also in terms of computational cost.
Because Stable Diffusion was trained on an English dataset, non-English users have to either translate their prompts into English or use them directly in their own language.
Surprisingly, Stable Diffusion can sometimes generate nice images even from non-English prompts.
So, why do we need a language-specific Japanese Stable Diffusion?

Because we want a model that understands our culture, identity, and unique expressions such as slang.
For example, a famous piece of Japanglish is "salaryman", which means a businessman, typically pictured wearing a suit.
Stable Diffusion cannot interpret such uniquely Japanese words correctly because Japanese was not its target language.

So, we made a language-specific version of Stable Diffusion!
Japanese Stable Diffusion offers the following advantages over the original Stable Diffusion:
- Generates Japanese-style images
- Understands Japanglish
- Understands uniquely Japanese onomatopoeia
- Understands Japanese proper nouns

img

caption: "サラリーマン 油絵", which means "salaryman, oil painting"

Training

Japanese Stable Diffusion was trained starting from Stable Diffusion and has the same architecture and the same number of parameters.
However, it is not simply Stable Diffusion fine-tuned on Japanese datasets, because Stable Diffusion was trained on an English dataset and the CLIP tokenizer is primarily for English.
To build a Japanese-specific model based on Stable Diffusion, we used a two-stage training procedure inspired by PITI.

  1. Train a Japanese-specific text encoder with our Japanese tokenizer from scratch with the latent diffusion model fixed. This stage is expected to map Japanese captions to Stable Diffusion's latent space.
  2. Fine-tune the text encoder and the latent diffusion model jointly. This stage is expected to push generations further toward Japanese-style images.
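The two stages above can be sketched in PyTorch as follows. This is a minimal illustration with placeholder modules, not the actual rinna training code: the real model pairs Stable Diffusion's latent diffusion model (UNet + VAE) with a new Japanese text encoder, and all names and sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real components.
latent_diffusion = nn.Linear(8, 8)  # stands in for the frozen UNet/VAE
text_encoder = nn.Linear(8, 8)      # stands in for the Japanese text encoder

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Enable or disable gradient updates for every parameter of a module."""
    for p in module.parameters():
        p.requires_grad = trainable

# Stage 1: train the Japanese text encoder from scratch while the latent
# diffusion model stays fixed, so Japanese captions are mapped into Stable
# Diffusion's existing latent space.
set_trainable(latent_diffusion, False)
set_trainable(text_encoder, True)
stage1_frozen = all(not p.requires_grad for p in latent_diffusion.parameters())
optimizer = torch.optim.AdamW(
    (p for p in text_encoder.parameters() if p.requires_grad), lr=1e-4
)

# Stage 2: unfreeze everything and fine-tune the text encoder and the
# latent diffusion model jointly.
set_trainable(latent_diffusion, True)
optimizer = torch.optim.AdamW(
    list(text_encoder.parameters()) + list(latent_diffusion.parameters()),
    lr=1e-5,
)
```

Freezing via `requires_grad` is the standard PyTorch idiom for staged training; the optimizer in each stage only receives the parameters that should update.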

We used the following dataset for training the model:

  • Approximately 100 million images with Japanese captions, including the Japanese subset of LAION-5B.

Citation

@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
@misc{japanese_stable_diffusion,
    author    = {Shing, Makoto and Sawada, Kei},
    title     = {Japanese Stable Diffusion},
    howpublished = {\url{https://github.com/rinnakk/japanese-stable-diffusion}},
    month     = {September},
    year      = {2022},
}

License

The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.