fofr / lcm-video2video

Fast video2video with a latent consistency model

  • Public
  • 2.4K runs
  • L40S

Input

string

Prompt for video2video

Default: "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

file (required)

Video to split into frames

integer
(minimum: 1)

Number of frames to extract per second of video, when not exporting all frames

Default: 8
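
For example, a 10-second clip at the default of 8 frames per second yields 80 frames to process.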

boolean

Get every frame of the video. Ignores fps. Slow for large videos.

Default: false

integer
(minimum: 1)

Maximum width of the video. Maintains aspect ratio.

Default: 512
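
For example, a 1920×1080 source capped at the default width of 512 is resized to 512×288.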

number
(minimum: 0, maximum: 1)

A value of 1.0 corresponds to full destruction of the information in each video frame

Default: 0.2
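
This is the usual img2img "strength" semantics: low values keep each frame close to the source, while 1.0 re-noises it completely. A minimal sketch of how such a strength value typically scales the step schedule in diffusers-style pipelines (an illustration under that assumption, not this model's actual code):

    def get_timesteps(num_inference_steps: int, strength: float):
        # Steps that will actually run; strength = 1.0 re-noises the frame fully.
        init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
        # Index into the schedule where denoising starts.
        t_start = max(num_inference_steps - init_timestep, 0)
        return t_start, init_timestep

    # e.g. 8 steps at strength 0.5 -> start at step 4, run 4 denoising steps;
    # low values like the 0.2 default keep frames close to the input video.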

integer
(minimum: 1, maximum: 50)

Number of denoising steps per frame. 1 to 8 steps are recommended.

Default: 4

string

ControlNet to use

Default: "none"

number
(minimum: 0.1, maximum: 4)

ControlNet conditioning scale

Default: 2

number
(minimum: 0, maximum: 1)

ControlNet start: the fraction of the denoising process at which ControlNet guidance begins

Default: 0

number
(minimum: 0, maximum: 1)

ControlNet end: the fraction of the denoising process at which ControlNet guidance ends

Default: 1

number
(minimum: 1, maximum: 255)

Canny low threshold

Default: 100

number
(minimum: 1, maximum: 255)

Canny high threshold

Default: 200
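
When the canny ControlNet is selected, these two values are the hysteresis thresholds of a Canny edge detector. A hedged sketch with OpenCV, whose Canny implementation uses the same two-threshold convention (the model's preprocessing is assumed to be similar; the filenames are hypothetical):

    import cv2

    frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
    # Gradients below 100 are discarded, above 200 are strong edges; values
    # in between survive only if connected to a strong edge (hysteresis).
    edges = cv2.Canny(frame, threshold1=100, threshold2=200)
    cv2.imwrite("edges_0001.png", edges)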

number
(minimum: 1, maximum: 20)

Scale for classifier-free guidance

Default: 8
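
Classifier-free guidance runs the denoiser with and without the prompt and extrapolates between the two predictions. The standard update, shown as a sketch (variable names are illustrative, not this model's identifiers):

    def apply_cfg(noise_uncond, noise_text, guidance_scale: float):
        # guidance_scale = 1 disables guidance; larger values push each
        # frame harder toward the prompt.
        return noise_uncond + guidance_scale * (noise_text - noise_uncond)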

integer

Random seed. Leave blank to randomize the seed.

boolean

Return a tar file with all the frames alongside the video

Default: false
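
A typical call through the Replicate Python client looks roughly like the sketch below. The input keys are guesses based on the field descriptions above, not confirmed against this model's schema; check the model's API page for the exact names.

    import replicate

    output = replicate.run(
        "fofr/lcm-video2video",
        input={
            "prompt": "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
            "video": open("input.mp4", "rb"),  # assumed key for the input video
            "fps": 8,                          # frames extracted per second
            "prompt_strength": 0.2,            # assumed key for the strength value
            "num_inference_steps": 4,          # denoising steps per frame
            "guidance_scale": 8,
        },
    )
    print(output)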

Run time and cost

This model costs approximately $0.13 to run on Replicate, or 7 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 137 seconds. The predict time for this model varies significantly based on the inputs.
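
As a rough check: at Replicate's published L40S rate (about $0.000975 per second at the time of writing), a typical 137-second prediction works out to 137 × $0.000975 ≈ $0.13, consistent with the estimate above.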

Readme

This model doesn't have a readme.