anotherjesse / zeroscope-v2-xl

Zeroscope V2 XL & 576w

  • Public
  • 286.9K runs
  • A100 (80GB)
  • GitHub
  • Paper
  • License

Input

  • string: Input prompt. Default: "An astronaut riding a horse"
  • string: Negative prompt
  • file: URL of the initial video (optional)
  • number: Strength of init_video. Default: 0.5
  • integer: Number of frames for the output video. Default: 24
  • integer (minimum: 1, maximum: 500): Number of denoising steps. Default: 50
  • integer (minimum: 256): Width of the output video. Default: 576
  • integer (minimum: 256): Height of the output video. Default: 320
  • number (minimum: 1, maximum: 100): Guidance scale. Default: 7.5
  • integer: fps for the output video. Default: 8
  • string: Model to use. Default: "xl"
  • integer (minimum: 1): Batch size. Default: 1
  • boolean: Remove watermark. Default: false
  • integer: Random seed. Leave blank to randomize the seed
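
As a rough illustration, here is how these inputs might be passed to the model with Replicate's Python client. This is a sketch, not the canonical call: it assumes the replicate package is installed and REPLICATE_API_TOKEN is set, and the input keys are inferred from the field descriptions above rather than taken from the model's published API schema.

```python
import replicate

# Input keys below are assumed from the field descriptions above; check the
# model's API schema for the canonical names. You may also need to pin a
# specific version, e.g. "anotherjesse/zeroscope-v2-xl:<version-hash>".
output = replicate.run(
    "anotherjesse/zeroscope-v2-xl",
    input={
        "prompt": "An astronaut riding a horse",
        "negative_prompt": "blurry, low quality",
        "num_frames": 24,
        "width": 576,
        "height": 320,
        "guidance_scale": 7.5,
        "fps": 8,
        "model": "xl",
        "remove_watermark": False,
    },
)
print(output)  # typically one or more URLs pointing at the generated video
```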


Run time and cost

This model costs approximately $0.13 to run on Replicate, or 7 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 96 seconds. The predict time for this model varies significantly based on the inputs.
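
If you run the model on your own machine with Docker, the container exposes an HTTP prediction endpoint in the usual Cog style. The sketch below assumes a container for this model is already running with GPU access and listening on localhost:5000; the port, endpoint, and input keys are conventional Cog assumptions, not details stated on this page.

```python
import requests

# Assumes a locally running container for this model (started from Replicate's
# Docker image with GPU access) serving the Cog prediction API on port 5000.
resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "prompt": "An astronaut riding a horse",
            "num_frames": 24,  # input keys assumed from the fields above
            "fps": 8,
        }
    },
    timeout=600,  # video generation can take a couple of minutes
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"], prediction.get("output"))
```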

Readme

A watermark-free Modelscope-based video model optimized for producing high-quality 16:9 compositions and smooth video output. It was trained on 9,923 clips and 29,769 tagged frames at 24 frames and 576x320 resolution. zeroscope_v2_576w is specifically designed for upscaling with zeroscope_v2_XL using vid2vid.
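
To illustrate the two-stage workflow the readme describes (a 576x320 base render fed back through the XL weights via vid2vid), here is a hedged sketch using Replicate's Python client. The "model" values, input keys, init strength, and 1024x576 upscale target are assumptions for illustration, not taken from this page.

```python
import replicate

MODEL = "anotherjesse/zeroscope-v2-xl"  # pin a version hash in practice
PROMPT = "An astronaut riding a horse"

# Stage 1: base render with the 576w weights at 576x320 (model value assumed).
base = replicate.run(
    MODEL,
    input={"prompt": PROMPT, "model": "576w", "width": 576, "height": 320},
)
base_url = base[0] if isinstance(base, list) else base  # assumed output shape

# Stage 2: vid2vid upscale with the XL weights, using the base render as the
# initial video. The target resolution and init strength are assumptions.
upscaled = replicate.run(
    MODEL,
    input={
        "prompt": PROMPT,
        "model": "xl",
        "init_video": base_url,
        "init_weight": 0.2,
        "width": 1024,
        "height": 576,
    },
)
print(upscaled)
```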

By cerspense

Thanks to camenduru, kabachuha, ExponentialML, dotsimulate, VANYA, polyware, tin2tin

📖 Check out the Replicate guide to Zeroscope