Readme
This model doesn't have a readme.
Run the following commands to download the model and run it in your local environment. The first starts the model server with Docker; the second sends a prediction request:
docker run -d -p 5000:5000 --gpus=all r8.im/fofr/sdxl-lcm-video2video@sha256:b960f1c3a124d3e254035c8849cf136fd3fc72a711ddde152f93d83b477bd6da
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "fps": 8,
      "prompt": "An astronaut riding a rainbow unicorn",
      "max_width": 512,
      "lora_scale": 0.6,
      "controlnet_1": "none",
      "controlnet_2": "none",
      "controlnet_3": "none",
      "return_frames": false,
      "guidance_scale": 1.1,
      "negative_prompt": "",
      "prompt_strength": 0.5,
      "controlnet_1_end": 1,
      "controlnet_2_end": 1,
      "controlnet_3_end": 1,
      "controlnet_1_start": 0,
      "controlnet_2_start": 0,
      "controlnet_3_start": 0,
      "extract_all_frames": false,
      "num_inference_steps": 4,
      "controlnet_1_conditioning_scale": 0.75,
      "controlnet_2_conditioning_scale": 0.75,
      "controlnet_3_conditioning_scale": 0.75
    }
  }' \
  http://localhost:5000/predictions
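If you prefer to call the local endpoint from code, here is a minimal Python sketch using the requests library. It assumes the container started above is already running and listening on localhost:5000, and it sends a subset of the inputs shown in the curl example; adjust the payload to match your needs.

import requests

# Subset of the inputs from the curl example above.
payload = {
    "input": {
        "fps": 8,
        "prompt": "An astronaut riding a rainbow unicorn",
        "max_width": 512,
        "lora_scale": 0.6,
        "guidance_scale": 1.1,
        "prompt_strength": 0.5,
        "num_inference_steps": 4,
    }
}

# The request blocks until the prediction finishes, so allow a generous timeout.
response = requests.post("http://localhost:5000/predictions", json=payload, timeout=600)
response.raise_for_status()

prediction = response.json()
print(prediction)  # the response typically includes the prediction status and output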
To learn more, take a look at the Cog documentation.
This model costs approximately $0.11 to run on Replicate, or 9 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 112 seconds. The predict time for this model varies significantly based on the inputs.