Readme
This model doesn't have a readme.
Run the following to download the model and start it in your local environment, then send it a prediction request:
docker run -d -p 5000:5000 --gpus=all r8.im/rial-cenia/chala@sha256:ba69d30d41485bfa1bd06a9b2a7b57ee41896ccb272f6628002109512ea432d9
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "model": "dev",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  http://localhost:5000/predictions
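The same request can be made from Python instead of curl. Below is a minimal sketch using only the standard library; it assumes the container started by the docker command above is listening on localhost:5000, and the parameter names simply mirror the curl example.

```python
import json
import urllib.request

# Input payload mirroring the curl example above; parameter names and
# values are copied from that command.
payload = {
    "input": {
        "model": "dev",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28,
    }
}

def predict(url="http://localhost:5000/predictions"):
    """POST the payload to the local Cog container and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once the container is up, calling `predict()` returns the prediction result as a Python dict.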
To learn more, take a look at the Cog documentation.
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.