hvision-nku / storydiffusion

Consistent Self-Attention for Long-Range Image and Video Generation

  • Public
  • 68.4K runs
  • L40S
  • GitHub
  • Paper
  • License

Input

string

Choose a model

Default: "Unstable"

file

Reference image for the character

string
Shift + Return to add a new line

General description of the character. If ref_image above is provided, make sure the class word you want to customize is followed by the trigger word 'img', e.g. 'man img', 'woman img', or 'girl img'

Default: "a man, wearing black suit"

string
Shift + Return to add a new line

Describe things you do not want to see in the output

Default: "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs"

string
Shift + Return to add a new line

Comic description. Each frame is separated by a new line. Only the first 10 prompts are used, for demo speed. When NOT using ref_image, comic_description supports: (1) Typesetting style and captioning. By default, each prompt is used as the caption for its image. To change the caption, add a '#' to the line; only the part after the '#' is used as the caption. (2) The [NC] flag indicates that no characters should appear in the generated scene image. To use it, prepend '[NC]' to the beginning of the line.

Default: "at home, read new paper #at home, The newspaper says there is a treasure house in the forest.\non the road, near the forest\n[NC] The car on the road, near the forest #He drives to the forest in search of treasure.\n[NC]A tiger appeared in the forest, at night \nvery frightened, open mouth, in the forest, at night\nrunning very fast, in the forest, at night\n[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!\nin the house filled with treasure, laughing, at night #He is overjoyed inside the house."
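The comic_description format above can be illustrated with a short sketch in Python. The frame texts here are abbreviated from the default prompt; the parsing loop only demonstrates how the '#' caption separator and the '[NC]' flag are interpreted, it is not the model's actual parser.

```python
# Build a multi-line comic_description: one frame per line.
frames = [
    "at home, read new paper #At home, the newspaper says there is a treasure house in the forest.",
    "on the road, near the forest",
    "[NC] The car on the road, near the forest #He drives to the forest in search of treasure.",
]
comic_description = "\n".join(frames)

# Each line is one frame; text after '#' becomes the caption,
# and a leading '[NC]' means no characters in that frame.
for line in comic_description.split("\n"):
    no_characters = line.startswith("[NC]")
    prompt, _, caption = line.partition("#")
    print(no_characters, prompt.strip(), "| caption:", caption.strip())
```

Frames without a '#' (like the second line) fall back to using the prompt itself as the caption.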

string

Style template

Default: "Japanese Anime"

string

Select the comic style for the combined comic

Default: "Classic Comic Style"

integer
(minimum: 15, maximum: 50)

Style strength of Ref Image (%), only used if ref_image is provided

Default: 20

integer

Width of output image

Default: 768

integer

Height of output image

Default: 768

integer
(minimum: 20, maximum: 50)

Number of sample steps

Default: 25

number
(minimum: 0.1, maximum: 10)

Scale for classifier-free guidance

Default: 5

integer

Random seed. Leave blank to randomize the seed

number
(minimum: 0, maximum: 1)

The degree of Paired Attention at 32 x 32 self-attention layers

Default: 0.5

number
(minimum: 0, maximum: 1)

The degree of Paired Attention at 64 x 64 self-attention layers

Default: 0.5

integer

Number of ID images among the total images. This should not exceed the total number of line-separated prompts

Default: 3

string

Format of the output images

Default: "webp"

integer
(minimum: 0, maximum: 100)

Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality

Default: 80
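The parameters above can be assembled into an input payload for the Replicate Python client. This is a minimal sketch: only `ref_image` and `comic_description` appear verbatim as names in the parameter list above, so the other key names here are assumptions and may differ from the model's actual schema.

```python
# Hypothetical input payload mirroring the parameter list above.
# Key names other than comic_description are guesses; check the
# model's API schema on Replicate for the real ones.
inputs = {
    "comic_description": "at home, read new paper\non the road, near the forest",
    "negative_prompt": "bad anatomy, bad hands, missing fingers",
    "style_name": "Japanese Anime",   # assumed key name
    "width": 768,
    "height": 768,
    "num_steps": 25,                  # assumed key name
    "guidance_scale": 5,
    "output_format": "webp",          # assumed key name
}

# Uncomment to run (requires the replicate package and REPLICATE_API_TOKEN):
# import replicate
# output = replicate.run("hvision-nku/storydiffusion", input=inputs)
```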

Output

comic

comic

individual_images


Run time and cost

This model costs approximately $0.079 to run on Replicate, or 12 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 82 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Demo Video

https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/d5b80f8f-09b0-48cd-8b10-daff46d422af

🌠 Key Features:

StoryDiffusion can create a magic story by generating consistent images and videos. Our work has two main parts:

1. Consistent self-attention for character-consistent image generation over long-range sequences. It is hot-pluggable and compatible with all SD1.5- and SDXL-based image diffusion models. The current implementation requires at least 3 text prompts for the consistent self-attention module; we recommend 5-6 prompts for better layout arrangement.
2. A motion predictor for long-range video generation, which predicts motion between condition images in a compressed image semantic space, enabling larger motion prediction.
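The core idea of consistent self-attention, sharing identity features across the frames of a story, can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: it omits the Q/K/V projection matrices and multi-head structure, and simply augments each frame's keys/values with tokens sampled from the other frames in the batch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistent_self_attention(frames, sample_ratio=0.5, rng=None):
    """Simplified consistent self-attention sketch.

    frames: array of shape (B, N, D) -- B frames of a story, each with
    N token features of dimension D. For every frame, the keys/values
    are augmented with tokens randomly sampled from the OTHER frames,
    so attention can share character features across the whole batch.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    B, N, D = frames.shape
    outputs = np.empty_like(frames)
    for i in range(B):
        q = frames[i]                                        # (N, D) queries
        others = frames[np.arange(B) != i].reshape(-1, D)    # tokens from other frames
        n_sample = int(sample_ratio * len(others))
        extra = others[rng.choice(len(others), n_sample, replace=False)]
        kv = np.concatenate([frames[i], extra], axis=0)      # (N + S, D) keys/values
        attn = softmax(q @ kv.T / np.sqrt(D))                # (N, N + S) weights
        outputs[i] = attn @ kv
    return outputs
```

Because the sampled tokens come from sibling frames, each frame's output mixes in the same character features, which is what keeps the generated character consistent across a long prompt sequence.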

Disclaimer

This project strives to impact the domain of AI-driven image and video generation positively. Users are granted the freedom to create images and videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.

BibTeX

If you find StoryDiffusion useful for your research and applications, please cite using this BibTeX:

@article{Zhou2024storydiffusion,
  title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
  author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
  year={2024}
}