fofr / video-morpher

Generate a video that morphs between subjects, with an optional style

  • Public
  • 14.4K runs
  • L40S
  • GitHub
  • License

Input

string

The prompt has a small effect, but most of the video is driven by the subject images

Default: ""

string

What you do not want to see in the video

Default: ""

string

The aspect ratio of the video

Default: "2:3"

string

Determines whether you produce a quick experimental video or an upscaled, interpolated one (small ~20s, medium ~60s, upscaled ~2min, upscaled-and-interpolated ~4min)

Default: "medium"

*file
subject_image_1

The first subject of the video

*file
subject_image_2

The second subject of the video

*file
subject_image_3

The third subject of the video

*file
subject_image_4

The fourth subject of the video

file
style_image

Apply the style from this image to the whole video

number
(minimum: 0, maximum: 2)

How strongly the style is applied

Default: 1

string

The checkpoint to use for the model

Default: "realistic"

boolean

Use geometric circles to guide the generation

Default: true

integer

Set a seed for reproducibility. Random by default.
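
The inputs above map directly onto an API call. Below is a minimal sketch using the Replicate Python client; the subject_image_* and style_image names come from the listing, while the remaining keys (prompt, negative_prompt, aspect_ratio, mode, style_strength, checkpoint, seed) are assumed names, since this listing shows their types and descriptions but not their exact field names.

    # Minimal sketch: run fofr/video-morpher via the Replicate Python client.
    # Key names other than subject_image_* and style_image are assumptions.
    import replicate

    output = replicate.run(
        "fofr/video-morpher",
        input={
            "prompt": "",                    # small effect; the subject images drive the video
            "negative_prompt": "",           # what you do not want to see
            "aspect_ratio": "2:3",
            "mode": "medium",                # small | medium | upscaled | upscaled-and-interpolated
            "subject_image_1": open("subject1.png", "rb"),
            "subject_image_2": open("subject2.png", "rb"),
            "subject_image_3": open("subject3.png", "rb"),
            "subject_image_4": open("subject4.png", "rb"),
            "style_image": open("style.png", "rb"),  # optional style reference
            "style_strength": 1,             # 0 to 2
            "checkpoint": "realistic",
            "seed": 42,                      # omit for a random seed
        },
    )
    print(output)  # link(s) to the generated video
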

Output

The generated morph video.

Run time and cost

This model costs approximately $0.12 per run on Replicate (about 8 runs per $1), though this varies depending on your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 128 seconds, though predict time varies significantly with the inputs.
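
If you run the model locally with Docker, the container exposes Cog's HTTP prediction API. Below is a minimal sketch, assuming a container for this model is already running on port 5000 (for example via docker run with the image reference given on the model page) and that file inputs are passed as URLs; the placeholder URLs and the mode/seed key names are illustrative assumptions.

    # Minimal sketch: call a locally running copy through Cog's HTTP API.
    # Assumes the container is already started and listening on localhost:5000.
    import requests

    resp = requests.post(
        "http://localhost:5000/predictions",
        json={
            "input": {
                "mode": "small",  # quickest setting for a local smoke test (assumed key name)
                "subject_image_1": "https://example.com/subject1.png",  # placeholder URLs
                "subject_image_2": "https://example.com/subject2.png",
                "subject_image_3": "https://example.com/subject3.png",
                "subject_image_4": "https://example.com/subject4.png",
            }
        },
        timeout=600,  # allow several minutes for slower modes
    )
    resp.raise_for_status()
    print(resp.json().get("output"))  # path or URL of the generated video
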