Readme
This model doesn't have a readme.
Run the following to download the model and start it in your local environment, then make a prediction:
docker run -d -p 5000:5000 --gpus=all r8.im/trin1140/kouzai_01@sha256:fd3d56aa8a578bad71ae150bd1197c4b542d7be1579086befc44d3064d9c3e4d
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "An astronaut riding a rainbow unicorn",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 1,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  http://localhost:5000/predictions
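The response is JSON. Assuming it follows Cog's usual prediction shape, with an "output" array of base64 data URIs for generated images (an assumption; this model's page doesn't document the response), a minimal sketch for saving the first image looks like this (jq and base64 required):

# Sketch: run a prediction and save the first output image.
# Assumes the response looks like {"output": ["data:image/png;base64,..."], ...};
# adjust the jq path and MIME prefix if your response differs.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "prompt": "An astronaut riding a rainbow unicorn" } }' \
  http://localhost:5000/predictions \
  | jq -r '.output[0]' \
  | sed 's|^data:image/png;base64,||' \
  | base64 --decode > output.png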
To learn more, take a look at the Cog documentation.
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.