andreasjansson / tile-morph

Create tileable animations with seamless transitions (Updated 2 years, 4 months ago)

  • Public
  • 529.3K runs
  • GitHub
  • License

Input

prompt_start
*string

Prompt to start the animation with

prompt_end
*string

Prompt to end the animation with. You can include multiple prompts by separating them with | (the 'pipe' character)

integer

Width of output video

Default: 512

integer

Height of output video

Default: 512

num_interpolation_steps
integer
(minimum: 0, maximum: 1000)

Number of steps to interpolate between animation frames

Default: 20

integer
(minimum: 1, maximum: 5000)

Number of denoising steps

Default: 50

num_animation_frames
integer
(minimum: 2, maximum: 50)

Number of frames to animate

Default: 10

number
(minimum: 1, maximum: 20)

Scale for classifier-free guidance

Default: 7.5

frames_per_second
integer
(minimum: 1, maximum: 60)

Frames per second in output video

Default: 20

boolean

Whether to display intermediate outputs during generation

Default: false

seed_start
integer

Random seed for first prompt. Leave blank to randomize the seed

seed_end
integer

Random seed for last prompt. Leave blank to randomize the seed

Output


This output was created using a different version of the model, andreasjansson/tile-morph:a819625e.

Run time and cost

This model runs on Nvidia A100 (80GB) GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

TileMorph

TileMorph creates a tileable animation between two Stable Diffusion prompts. It uses the circular padding trick to generate images that wrap around the edges.
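The idea behind circular padding can be sketched with a toy example. Here NumPy's wrap-mode padding stands in for the padding applied inside the model's convolutions (the array is made up for illustration):

```python
import numpy as np

# A tiny stand-in for an image: a 4x4 grid of values.
image = np.arange(16).reshape(4, 4)

# Circular padding: the border is filled by wrapping values around from
# the opposite edge, so a convolution over the padded array treats the
# left/right and top/bottom borders as adjacent. This is what makes the
# generated texture tile seamlessly.
padded = np.pad(image, pad_width=1, mode="wrap")

print(padded.shape)  # (6, 6)
```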

The animation effect is achieved by interpolating both in CLIP embedding space and latent space.
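As a rough illustration of what interpolating between two embeddings means, here is plain linear interpolation over made-up vectors (the model's actual scheme may differ, e.g. it could use spherical interpolation):

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation between vectors a and b at t in [0, 1]."""
    return (1.0 - t) * a + t * b

# Hypothetical stand-ins for two CLIP text embeddings.
emb_start = np.zeros(4)
emb_end = np.ones(4)

# One embedding per animation frame, evenly spaced between the prompts.
frames = [lerp(emb_start, emb_end, t) for t in np.linspace(0.0, 1.0, 5)]
print(frames[2])  # the midpoint: [0.5 0.5 0.5 0.5]
```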

  • The number of CLIP interpolation steps is controlled by the num_animation_frames input. Each “animation frame” runs a full Stable Diffusion inference, which makes it slow but interesting.
  • The number of latent space interpolation steps between animation frames is controlled by the num_interpolation_steps input. Each interpolation step only runs a VAE inference, which is fast but less interesting. You can trade off interestingness against prediction time by tweaking num_animation_frames and num_interpolation_steps.
  • num_animation_frames * num_interpolation_steps = number of output frames
  • num_animation_frames * num_interpolation_steps / frames_per_second = output video length in seconds
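Plugging the defaults from the input list above (10 animation frames, 20 interpolation steps, 20 fps) into those two formulas gives a quick sanity check (the helper name is mine, not part of the model):

```python
def video_stats(num_animation_frames, num_interpolation_steps, frames_per_second):
    """Total output frames and video length implied by the formulas above."""
    total_frames = num_animation_frames * num_interpolation_steps
    seconds = total_frames / frames_per_second
    return total_frames, seconds

# Defaults: 10 animation frames x 20 interpolation steps = 200 frames,
# and 200 frames / 20 fps = 10 seconds of video.
print(video_stats(10, 20, 20))  # (200, 10.0)
```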

This model supports seamless transitions between consecutive generations: set prompt_end and seed_end of video number n to the same values as prompt_start and seed_start of video number n + 1.
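A minimal sketch of that chaining, as a helper that builds one input dict per consecutive pair of prompts (the function name and the sequential-seed scheme are mine; only the prompt_start/prompt_end/seed_start/seed_end keys come from the model's inputs):

```python
def chained_inputs(prompts, base_seed):
    """Build inputs for len(prompts) - 1 videos, where each video's end
    prompt and seed match the next video's start prompt and seed, so the
    rendered videos can be concatenated without a visible seam."""
    seeds = [base_seed + i for i in range(len(prompts))]
    return [
        {
            "prompt_start": prompts[i],
            "prompt_end": prompts[i + 1],
            "seed_start": seeds[i],
            "seed_end": seeds[i + 1],
        }
        for i in range(len(prompts) - 1)
    ]

# Three prompts -> two videos that transition seamlessly into each other.
inputs = chained_inputs(["a forest", "a desert", "an ocean"], base_seed=42)
print(inputs[0]["prompt_end"], inputs[1]["prompt_start"])  # a desert a desert
```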