callmejz-ai/doodle
Doodles trained on black line drawings, fashion illustrations, and wire sculptures. Simple images for complex intellectuals, luxury brands, B2B marketing, SaaS, and more.
- Public
- 234 runs
- L40S
- SDXL fine-tune
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: 4dhywgsacdrgm0cjrp1s333qkc
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: flower
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "flower", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024, prompt: "flower",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024, "prompt": "flower",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
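The Python call above returns a list with one URL per generated image (matching the "output" field shown under Output below). Here is a minimal sketch for saving those images locally with just the standard library; it assumes the entries are URL strings (or stringify to URLs), and the filename pattern is only an example.

import urllib.request

# `output` is the list returned by replicate.run(...) above; each entry points
# at a generated PNG on replicate.delivery.
for i, url in enumerate(output):
    urllib.request.urlretrieve(str(url), f"doodle-{i}.png")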
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024, "prompt": "flower",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
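If you omit the Prefer: wait header, the API returns immediately and you poll the prediction until it reaches a terminal status, using the urls.get endpoint shown in the Output block below. A rough polling sketch in Python, assuming REPLICATE_API_TOKEN is set in the environment:

import json
import os
import time
import urllib.request

def get_prediction(get_url):
    # GET the prediction's urls.get endpoint with the same bearer token.
    req = urllib.request.Request(
        get_url,
        headers={"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for(get_url, poll_seconds=2):
    # "succeeded", "failed" and "canceled" are terminal prediction states.
    while True:
        prediction = get_prediction(get_url)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(poll_seconds)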
Output
{ "completed_at": "2024-10-25T21:09:49.387587Z", "created_at": "2024-10-25T21:09:20.611000Z", "data_removed": false, "error": null, "id": "4dhywgsacdrgm0cjrp1s333qkc", "input": { "width": 1024, "height": 1024, "prompt": "flower", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 7047\nEnsuring enough disk space...\nFree disk space: 1439622897664\nDownloading weights: https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:09:27Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/b9e40f01def7cc54 url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:09:32Z | INFO | [ Complete ] dest=/src/weights-cache/b9e40f01def7cc54 size=\"186 MB\" total_elapsed=4.907s url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\nb''\nDownloaded weights in 5.0415003299713135 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: flower\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. This will be done automatically when using `LoraLoaderMixin.load_lora_weights`\ndeprecate(\n 2%|▏ | 1/50 [00:00<00:11, 4.20it/s]\n 4%|▍ | 2/50 [00:00<00:11, 4.19it/s]\n 6%|▌ | 3/50 [00:00<00:11, 4.17it/s]\n 8%|▊ | 4/50 [00:00<00:11, 4.16it/s]\n 10%|█ | 5/50 [00:01<00:10, 4.16it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.16it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.16it/s]\n 16%|█▌ | 8/50 [00:01<00:10, 4.15it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.15it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.15it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.15it/s]\n 24%|██▍ | 12/50 [00:02<00:09, 4.16it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.16it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.15it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.15it/s]\n 32%|███▏ | 16/50 [00:03<00:08, 4.16it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.15it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.15it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.15it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.15it/s]\n 42%|████▏ | 21/50 [00:05<00:06, 4.15it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.15it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.15it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.15it/s]\n 50%|█████ | 25/50 [00:06<00:06, 4.15it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.15it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.15it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.15it/s]\n 58%|█████▊ | 29/50 [00:06<00:05, 4.15it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.15it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.15it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.14it/s]\n 66%|██████▌ | 33/50 [00:07<00:04, 4.14it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.15it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.15it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.14it/s]\n 74%|███████▍ | 37/50 [00:08<00:03, 4.14it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.15it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.15it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.15it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.15it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.14it/s]\n 86%|████████▌ | 43/50 
[00:10<00:01, 4.14it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.14it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.15it/s]\n 92%|█████████▏| 46/50 [00:11<00:00, 4.14it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.14it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.14it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.14it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.14it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.15it/s]", "metrics": { "predict_time": 21.646749119, "total_time": 28.776587 }, "output": [ "https://replicate.delivery/pbxt/mwnCo7UxFypqBZTaNPfsfVbamEAzHvNGgsukppxa1ZycsbqTA/out-0.png" ], "started_at": "2024-10-25T21:09:27.740838Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/4dhywgsacdrgm0cjrp1s333qkc", "cancel": "https://api.replicate.com/v1/predictions/4dhywgsacdrgm0cjrp1s333qkc/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: bgynyn1j9nrgj0cjrp18bszcbc
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: bird
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "bird", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024, prompt: "bird",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024, "prompt": "bird",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
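Every example on this page reuses the same settings and only changes the prompt, so it can be convenient to factor the shared input out and loop over prompts. A sketch, assuming the replicate client is configured as above; the prompt list is only illustrative.

import replicate

BASE_INPUT = {
    "width": 1024, "height": 1024, "refine": "no_refiner",
    "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1,
    "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8,
    "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50,
}

for prompt in ["bird", "flower", "eyelashes"]:
    output = replicate.run(
        "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
        input={**BASE_INPUT, "prompt": prompt},
    )
    print(prompt, output)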
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024, "prompt": "bird",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-10-25T21:08:34.069747Z", "created_at": "2024-10-25T21:08:17.101000Z", "data_removed": false, "error": null, "id": "bgynyn1j9nrgj0cjrp18bszcbc", "input": { "width": 1024, "height": 1024, "prompt": "bird", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 18260\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: bird\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:00<00:11, 4.24it/s]\n 4%|▍ | 2/50 [00:00<00:11, 4.22it/s]\n 6%|▌ | 3/50 [00:00<00:11, 4.20it/s]\n 8%|▊ | 4/50 [00:00<00:10, 4.20it/s]\n 10%|█ | 5/50 [00:01<00:10, 4.19it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.19it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.19it/s]\n 16%|█▌ | 8/50 [00:01<00:10, 4.19it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.19it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.19it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.19it/s]\n 24%|██▍ | 12/50 [00:02<00:09, 4.19it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.19it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.19it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.19it/s]\n 32%|███▏ | 16/50 [00:03<00:08, 4.19it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.19it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.19it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.19it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.19it/s]\n 42%|████▏ | 21/50 [00:05<00:06, 4.18it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.18it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.18it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.18it/s]\n 50%|█████ | 25/50 [00:05<00:05, 4.18it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.18it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.18it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.18it/s]\n 58%|█████▊ | 29/50 [00:06<00:05, 4.18it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.18it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.18it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.18it/s]\n 66%|██████▌ | 33/50 [00:07<00:04, 4.18it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.18it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.18it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.18it/s]\n 74%|███████▍ | 37/50 [00:08<00:03, 4.18it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.18it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.18it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.18it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.17it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.17it/s]\n 86%|████████▌ | 43/50 [00:10<00:01, 4.18it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.18it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.18it/s]\n 92%|█████████▏| 46/50 [00:10<00:00, 4.18it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.18it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.18it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.18it/s]\n100%|██████████| 50/50 [00:11<00:00, 4.18it/s]\n100%|██████████| 50/50 [00:11<00:00, 4.18it/s]", "metrics": { "predict_time": 16.517675131, "total_time": 16.968747 }, "output": [ "https://replicate.delivery/pbxt/vuQr9Kn707YpHdLjgypFZVs1oYfnBwIpSnfDvf5PwAIgW3UnA/out-0.png" ], "started_at": "2024-10-25T21:08:17.552071Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/bgynyn1j9nrgj0cjrp18bszcbc", "cancel": "https://api.replicate.com/v1/predictions/bgynyn1j9nrgj0cjrp18bszcbc/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: ypj1v5fctdrgm0cjrnsatjnn5r
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: bird in the style of a black wirey line drawing
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "bird in the style of a black wirey line drawing", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024,
      prompt: "bird in the style of a black wirey line drawing",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024,
        "prompt": "bird in the style of a black wirey line drawing",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024,
      "prompt": "bird in the style of a black wirey line drawing",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
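The curl request above can also be issued from Python with only the standard library. This sketch mirrors the same headers and body (assumption: REPLICATE_API_TOKEN is set in the environment):

import json
import os
import urllib.request

body = json.dumps({
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
        "width": 1024, "height": 1024,
        "prompt": "bird in the style of a black wirey line drawing",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50,
    },
}).encode()

req = urllib.request.Request(
    "https://api.replicate.com/v1/predictions",
    data=body,
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction finishes, as in the curl example
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    prediction = json.load(resp)
print(prediction["output"])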
Output
{ "completed_at": "2024-10-25T20:53:24.881542Z", "created_at": "2024-10-25T20:51:36.275000Z", "data_removed": false, "error": null, "id": "ypj1v5fctdrgm0cjrnsatjnn5r", "input": { "width": 1024, "height": 1024, "prompt": "bird in the style of a black wirey line drawing", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 3083\nEnsuring enough disk space...\nFree disk space: 1767044743168\nDownloading weights: https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T20:53:03Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/b9e40f01def7cc54 url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T20:53:09Z | INFO | [ Complete ] dest=/src/weights-cache/b9e40f01def7cc54 size=\"186 MB\" total_elapsed=5.723s url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\nb''\nDownloaded weights in 5.900073766708374 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: bird in the style of a black wirey line drawing\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. 
This will be done automatically when using `LoraLoaderMixin.load_lora_weights`\ndeprecate(\n 2%|▏ | 1/50 [00:00<00:21, 2.28it/s]\n 4%|▍ | 2/50 [00:00<00:15, 3.12it/s]\n 6%|▌ | 3/50 [00:00<00:13, 3.54it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.78it/s]\n 10%|█ | 5/50 [00:01<00:11, 3.91it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.00it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.06it/s]\n 16%|█▌ | 8/50 [00:02<00:10, 4.10it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.13it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.15it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.16it/s]\n 24%|██▍ | 12/50 [00:03<00:09, 4.17it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.17it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.18it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.18it/s]\n 32%|███▏ | 16/50 [00:04<00:08, 4.18it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.18it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.19it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.19it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.18it/s]\n 42%|████▏ | 21/50 [00:05<00:06, 4.18it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.18it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.18it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.18it/s]\n 50%|█████ | 25/50 [00:06<00:05, 4.18it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.18it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.18it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.19it/s]\n 58%|█████▊ | 29/50 [00:07<00:05, 4.19it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.19it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.20it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.20it/s]\n 66%|██████▌ | 33/50 [00:08<00:04, 4.21it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.21it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.21it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.21it/s]\n 74%|███████▍ | 37/50 [00:09<00:03, 4.21it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.21it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.21it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.21it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.21it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.21it/s]\n 86%|████████▌ | 43/50 [00:10<00:01, 4.21it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.21it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.21it/s]\n 92%|█████████▏| 46/50 [00:11<00:00, 4.21it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.21it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.21it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.21it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.21it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.13it/s]", "metrics": { "predict_time": 21.584100705, "total_time": 108.606542 }, "output": [ "https://replicate.delivery/pbxt/seQFiW5PY4VCBanzZaS8GlSMi7ar4aUaeaOI8NN0bUjDdbqTA/out-0.png" ], "started_at": "2024-10-25T20:53:03.297441Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/ypj1v5fctdrgm0cjrnsatjnn5r", "cancel": "https://api.replicate.com/v1/predictions/ypj1v5fctdrgm0cjrnsatjnn5r/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: xkmd9pgyj5rgg0cjrp89whbztw
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: eye with butterfly drawn around it
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "eye with butterfly drawn around it", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024, prompt: "eye with butterfly drawn around it",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024,
        "prompt": "eye with butterfly drawn around it",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
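lora_scale controls how strongly the doodle LoRA weights are applied on top of the SDXL base (the logs below show the model loading a UNet LoRA). A quick way to tune it is to sweep a few values and compare; this sketch keeps every other parameter as in the example above, and the 0.4–0.8 range is only a starting point, not a recommendation from the model author.

import replicate

for lora_scale in (0.4, 0.6, 0.8):
    output = replicate.run(
        "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
        input={
            "width": 1024, "height": 1024,
            "prompt": "eye with butterfly drawn around it",
            "refine": "no_refiner", "scheduler": "K_EULER",
            "lora_scale": lora_scale, "num_outputs": 1, "guidance_scale": 7.5,
            "apply_watermark": True, "high_noise_frac": 0.8,
            "negative_prompt": "", "prompt_strength": 0.8,
            "num_inference_steps": 50,
        },
    )
    print(lora_scale, output)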
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024,
      "prompt": "eye with butterfly drawn around it",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-10-25T21:23:52.080704Z", "created_at": "2024-10-25T21:23:29.553000Z", "data_removed": false, "error": null, "id": "xkmd9pgyj5rgg0cjrp89whbztw", "input": { "width": 1024, "height": 1024, "prompt": "eye with butterfly drawn around it", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 42414\nEnsuring enough disk space...\nFree disk space: 1476043132928\nDownloading weights: https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:23:35Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/b9e40f01def7cc54 url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:23:37Z | INFO | [ Complete ] dest=/src/weights-cache/b9e40f01def7cc54 size=\"186 MB\" total_elapsed=1.475s url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\nb''\nDownloaded weights in 1.6270411014556885 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: eye with butterfly drawn around it\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. 
This will be done automatically when using `LoraLoaderMixin.load_lora_weights`\ndeprecate(\n 2%|▏ | 1/50 [00:00<00:21, 2.28it/s]\n 4%|▍ | 2/50 [00:00<00:15, 3.13it/s]\n 6%|▌ | 3/50 [00:00<00:13, 3.54it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.77it/s]\n 10%|█ | 5/50 [00:01<00:11, 3.92it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.00it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.06it/s]\n 16%|█▌ | 8/50 [00:02<00:10, 4.10it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.13it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.15it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.17it/s]\n 24%|██▍ | 12/50 [00:03<00:09, 4.17it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.18it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.18it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.18it/s]\n 32%|███▏ | 16/50 [00:04<00:08, 4.18it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.19it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.19it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.18it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.18it/s]\n 42%|████▏ | 21/50 [00:05<00:06, 4.18it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.18it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.18it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.18it/s]\n 50%|█████ | 25/50 [00:06<00:05, 4.18it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.18it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.19it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.20it/s]\n 58%|█████▊ | 29/50 [00:07<00:04, 4.20it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.20it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.20it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.20it/s]\n 66%|██████▌ | 33/50 [00:08<00:04, 4.20it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.21it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.21it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.21it/s]\n 74%|███████▍ | 37/50 [00:09<00:03, 4.21it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.21it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.21it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.21it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.21it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.21it/s]\n 86%|████████▌ | 43/50 [00:10<00:01, 4.21it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.20it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.21it/s]\n 92%|█████████▏| 46/50 [00:11<00:00, 4.21it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.21it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.20it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.20it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.20it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.13it/s]", "metrics": { "predict_time": 16.39715504, "total_time": 22.527704 }, "output": [ "https://replicate.delivery/pbxt/6W7DJgcVOC69Il2CcptS6euGDouKZeyS1Li3e2Sznm1Pz3UnA/out-0.png" ], "started_at": "2024-10-25T21:23:35.683549Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/xkmd9pgyj5rgg0cjrp89whbztw", "cancel": "https://api.replicate.com/v1/predictions/xkmd9pgyj5rgg0cjrp89whbztw/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: qmkxeavmehrgp0cjrp4vdgaqww
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: eyelashes
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "eyelashes", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024, prompt: "eyelashes",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024, "prompt": "eyelashes",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
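Each prediction in these examples takes roughly 16–22 seconds of predict time (see the metrics in the Output blocks), so when you need several doodles it can help to run the calls concurrently. A sketch using a thread pool around the same replicate.run call; the prompt list is only illustrative.

from concurrent.futures import ThreadPoolExecutor
import replicate

MODEL = "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8"

def run_doodle(prompt):
    # Same settings as the example above; only the prompt changes.
    return replicate.run(MODEL, input={
        "width": 1024, "height": 1024, "prompt": prompt,
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50,
    })

prompts = ["eyelashes", "eye with butterfly drawn around it", "flower"]
with ThreadPoolExecutor(max_workers=3) as pool:
    for prompt, output in zip(prompts, pool.map(run_doodle, prompts)):
        print(prompt, output)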
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024, "prompt": "eyelashes",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-10-25T21:16:29.577534Z", "created_at": "2024-10-25T21:16:12.788000Z", "data_removed": false, "error": null, "id": "qmkxeavmehrgp0cjrp4vdgaqww", "input": { "width": 1024, "height": 1024, "prompt": "eyelashes", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 5059\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: eyelashes\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:00<00:11, 4.27it/s]\n 4%|▍ | 2/50 [00:00<00:11, 4.24it/s]\n 6%|▌ | 3/50 [00:00<00:11, 4.23it/s]\n 8%|▊ | 4/50 [00:00<00:10, 4.21it/s]\n 10%|█ | 5/50 [00:01<00:10, 4.21it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.21it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.21it/s]\n 16%|█▌ | 8/50 [00:01<00:09, 4.21it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.20it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.20it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.20it/s]\n 24%|██▍ | 12/50 [00:02<00:09, 4.20it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.20it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.20it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.20it/s]\n 32%|███▏ | 16/50 [00:03<00:08, 4.19it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.19it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.19it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.19it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.19it/s]\n 42%|████▏ | 21/50 [00:04<00:06, 4.19it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.19it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.19it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.19it/s]\n 50%|█████ | 25/50 [00:05<00:05, 4.19it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.19it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.20it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.20it/s]\n 58%|█████▊ | 29/50 [00:06<00:04, 4.20it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.20it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.20it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.20it/s]\n 66%|██████▌ | 33/50 [00:07<00:04, 4.19it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.20it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.19it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.20it/s]\n 74%|███████▍ | 37/50 [00:08<00:03, 4.19it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.19it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.19it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.19it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.19it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.19it/s]\n 86%|████████▌ | 43/50 [00:10<00:01, 4.19it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.19it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.19it/s]\n 92%|█████████▏| 46/50 [00:10<00:00, 4.19it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.19it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.19it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.18it/s]\n100%|██████████| 50/50 [00:11<00:00, 4.19it/s]\n100%|██████████| 50/50 [00:11<00:00, 4.20it/s]", "metrics": { "predict_time": 16.6164095, "total_time": 16.789534 }, "output": [ "https://replicate.delivery/pbxt/2bXfYNOlzm3dSSalnegTOp3RIqrDQrM5vU7KQhk5dLSsybqTA/out-0.png" ], "started_at": "2024-10-25T21:16:12.961124Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/qmkxeavmehrgp0cjrp4vdgaqww", "cancel": "https://api.replicate.com/v1/predictions/qmkxeavmehrgp0cjrp4vdgaqww/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
Prediction
callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8
- ID: bjaefgwja9rgp0cjrp7by3nccg
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: eye with fish drawn around it
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "eye with fish drawn around it", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
  {
    input: {
      width: 1024, height: 1024, prompt: "eye with fish drawn around it",
      refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6,
      num_outputs: 1, guidance_scale: 7.5, apply_watermark: true,
      high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "callmejz-ai/doodle:b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    input={
        "width": 1024, "height": 1024,
        "prompt": "eye with fish drawn around it",
        "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
        "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True,
        "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run callmejz-ai/doodle using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8",
    "input": {
      "width": 1024, "height": 1024, "prompt": "eye with fish drawn around it",
      "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6,
      "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true,
      "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-10-25T21:22:53.943767Z", "created_at": "2024-10-25T21:21:48.114000Z", "data_removed": false, "error": null, "id": "bjaefgwja9rgp0cjrp7by3nccg", "input": { "width": 1024, "height": 1024, "prompt": "eye with fish drawn around it", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 53081\nEnsuring enough disk space...\nFree disk space: 1665123356672\nDownloading weights: https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:22:34Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/b9e40f01def7cc54 url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\n2024-10-25T21:22:37Z | INFO | [ Complete ] dest=/src/weights-cache/b9e40f01def7cc54 size=\"186 MB\" total_elapsed=3.634s url=https://replicate.delivery/pbxt/WZoBJcam9jK2OtB0Loes20TaJNuR8C87mhikU4gLnbTU7M1JA/trained_model.tar\nb''\nDownloaded weights in 3.747882604598999 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: eye with fish drawn around it\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. 
This will be done automatically when using `LoraLoaderMixin.load_lora_weights`\ndeprecate(\n 2%|▏ | 1/50 [00:00<00:20, 2.36it/s]\n 4%|▍ | 2/50 [00:00<00:15, 3.19it/s]\n 6%|▌ | 3/50 [00:00<00:13, 3.58it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.80it/s]\n 10%|█ | 5/50 [00:01<00:11, 3.92it/s]\n 12%|█▏ | 6/50 [00:01<00:10, 4.01it/s]\n 14%|█▍ | 7/50 [00:01<00:10, 4.08it/s]\n 16%|█▌ | 8/50 [00:02<00:10, 4.12it/s]\n 18%|█▊ | 9/50 [00:02<00:09, 4.15it/s]\n 20%|██ | 10/50 [00:02<00:09, 4.17it/s]\n 22%|██▏ | 11/50 [00:02<00:09, 4.18it/s]\n 24%|██▍ | 12/50 [00:03<00:09, 4.19it/s]\n 26%|██▌ | 13/50 [00:03<00:08, 4.20it/s]\n 28%|██▊ | 14/50 [00:03<00:08, 4.20it/s]\n 30%|███ | 15/50 [00:03<00:08, 4.20it/s]\n 32%|███▏ | 16/50 [00:03<00:08, 4.20it/s]\n 34%|███▍ | 17/50 [00:04<00:07, 4.20it/s]\n 36%|███▌ | 18/50 [00:04<00:07, 4.21it/s]\n 38%|███▊ | 19/50 [00:04<00:07, 4.20it/s]\n 40%|████ | 20/50 [00:04<00:07, 4.21it/s]\n 42%|████▏ | 21/50 [00:05<00:06, 4.20it/s]\n 44%|████▍ | 22/50 [00:05<00:06, 4.20it/s]\n 46%|████▌ | 23/50 [00:05<00:06, 4.21it/s]\n 48%|████▊ | 24/50 [00:05<00:06, 4.20it/s]\n 50%|█████ | 25/50 [00:06<00:05, 4.20it/s]\n 52%|█████▏ | 26/50 [00:06<00:05, 4.20it/s]\n 54%|█████▍ | 27/50 [00:06<00:05, 4.20it/s]\n 56%|█████▌ | 28/50 [00:06<00:05, 4.20it/s]\n 58%|█████▊ | 29/50 [00:07<00:04, 4.20it/s]\n 60%|██████ | 30/50 [00:07<00:04, 4.20it/s]\n 62%|██████▏ | 31/50 [00:07<00:04, 4.20it/s]\n 64%|██████▍ | 32/50 [00:07<00:04, 4.20it/s]\n 66%|██████▌ | 33/50 [00:08<00:04, 4.20it/s]\n 68%|██████▊ | 34/50 [00:08<00:03, 4.20it/s]\n 70%|███████ | 35/50 [00:08<00:03, 4.20it/s]\n 72%|███████▏ | 36/50 [00:08<00:03, 4.19it/s]\n 74%|███████▍ | 37/50 [00:08<00:03, 4.20it/s]\n 76%|███████▌ | 38/50 [00:09<00:02, 4.19it/s]\n 78%|███████▊ | 39/50 [00:09<00:02, 4.19it/s]\n 80%|████████ | 40/50 [00:09<00:02, 4.20it/s]\n 82%|████████▏ | 41/50 [00:09<00:02, 4.20it/s]\n 84%|████████▍ | 42/50 [00:10<00:01, 4.20it/s]\n 86%|████████▌ | 43/50 [00:10<00:01, 4.19it/s]\n 88%|████████▊ | 44/50 [00:10<00:01, 4.19it/s]\n 90%|█████████ | 45/50 [00:10<00:01, 4.19it/s]\n 92%|█████████▏| 46/50 [00:11<00:00, 4.18it/s]\n 94%|█████████▍| 47/50 [00:11<00:00, 4.18it/s]\n 96%|█████████▌| 48/50 [00:11<00:00, 4.18it/s]\n 98%|█████████▊| 49/50 [00:11<00:00, 4.18it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.19it/s]\n100%|██████████| 50/50 [00:12<00:00, 4.13it/s]", "metrics": { "predict_time": 19.72665727, "total_time": 65.829767 }, "output": [ "https://replicate.delivery/pbxt/s2OhVUdLSwaOFZfiswvGw1fSUDYlNF7Oc769ZeeeJNeALeN1JA/out-0.png" ], "started_at": "2024-10-25T21:22:34.217110Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/bjaefgwja9rgp0cjrp7by3nccg", "cancel": "https://api.replicate.com/v1/predictions/bjaefgwja9rgp0cjrp7by3nccg/cancel" }, "version": "b9e155a586824e58f5a5193d65b0992ae5b6e5ef7420c1a967638922c4e103a8" }
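If you save a response like the one above to a file (prediction.json is just a hypothetical name), the useful fields are straightforward to pull out; a small sketch:

import json

with open("prediction.json") as f:
    prediction = json.load(f)

print("status:      ", prediction["status"])
print("image URL:   ", prediction["output"][0])
print("predict time:", prediction["metrics"]["predict_time"], "seconds")
# For this run, the seed appears on the first line of the logs.
print("first log line:", prediction["logs"].splitlines()[0])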
Want to make some of these yourself?
Run this model