Readme
This model doesn't have a readme.
Run the following to download the model and run it in your local environment. The first command starts the model in a Docker container; the second sends a prediction request to it:
docker run -d -p 5000:5000 --gpus=all r8.im/jerakst/belio@sha256:6a0a43312d062a3263e69546b33db904bc9ea72991238ca863c10a48cef13286
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "model": "dev",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  http://localhost:5000/predictions
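If you prefer to call the endpoint from Python instead of curl, a minimal sketch looks like the following. It builds the same request body shown above; the `prompt` field and the `build_prediction_request` helper name are assumptions for illustration (the curl example does not include a prompt), and the actual input schema is defined by the model itself.

```python
import json


def build_prediction_request(prompt: str) -> dict:
    """Assemble a request body mirroring the curl example above."""
    return {
        "input": {
            # "prompt" is a hypothetical addition; the curl example omits it.
            "prompt": prompt,
            "model": "dev",
            "go_fast": False,
            "lora_scale": 1,
            "megapixels": "1",
            "num_outputs": 1,
            "aspect_ratio": "1:1",
            "output_format": "webp",
            "guidance_scale": 3,
            "output_quality": 80,
            "prompt_strength": 0.8,
            "extra_lora_scale": 1,
            "num_inference_steps": 28,
        }
    }


if __name__ == "__main__":
    body = build_prediction_request("an example prompt")
    print(json.dumps(body, indent=2))
    # To send the request against the running container (needs the
    # third-party `requests` package):
    # import requests
    # resp = requests.post("http://localhost:5000/predictions", json=body)
    # print(resp.json())
```

The POST call is left commented out so the snippet runs without a container; uncomment it once the Docker image from the previous step is up on port 5000.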
To learn more, take a look at the Cog documentation.
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.