lucataco/flux-watercolor
A Flux LoRA trained on watercolor style photos
- Public
- 4.7K runs
- H100
Prediction
lucataco/flux-watercolor:846d1eb3
ID: vz9xg32mn1rm20chatatqhx7jg · Status: Succeeded · Source: Web · Hardware: H100
Input:
- model: dev
- prompt: a boat in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a boat in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:

npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383",
  {
    input: {
      model: "dev",
      prompt: "a boat in the style of TOK",
      lora_scale: 1,
      num_outputs: 1,
      aspect_ratio: "1:1",
      output_format: "webp",
      guidance_scale: 3.5,
      output_quality: 80,
      num_inference_steps: 28
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:

import replicate
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383",
    input={
        "model": "dev",
        "prompt": "a boat in the style of TOK",
        "lora_scale": 1,
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3.5,
        "output_quality": 80,
        "num_inference_steps": 28
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
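For this model, `replicate.run` returns a list of output URLs pointing at replicate.delivery. As a minimal sketch (the helper name is my own, not part of the Replicate client), you can derive local filenames from those URLs before downloading them:

```python
import os
from urllib.parse import urlparse

def output_filenames(urls):
    # Use the last path component of each delivery URL as the local filename.
    return [os.path.basename(urlparse(u).path) for u in urls]

# Example URL taken from the prediction output shown below on this page.
urls = [
    "https://replicate.delivery/yhqm/z7f2OBcvga07dCoJ4FeRGZCbE5PvipLhogPhEeU7BazIg5lmA/out-0.webp",
]
print(output_filenames(urls))  # ['out-0.webp']
```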
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383",
    "input": {
      "model": "dev",
      "prompt": "a boat in the style of TOK",
      "lora_scale": 1,
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3.5,
      "output_quality": 80,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
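With the Prefer: wait header the API holds the connection until the prediction finishes; without it, you poll the prediction's "get" URL until its status becomes terminal. A small sketch of that stopping condition (the helper is my own; "succeeded" appears in the responses on this page, while "failed" and "canceled" are assumptions based on the API's prediction lifecycle):

```python
# Statuses at which polling should stop.
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_done(prediction: dict) -> bool:
    # Keep re-fetching the prediction's "get" URL while this returns False.
    return prediction.get("status") in TERMINAL_STATUSES

print(is_done({"status": "processing"}))  # False
print(is_done({"status": "succeeded"}))   # True
```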
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383 \
  -i 'model="dev"' \
  -i 'prompt="a boat in the style of TOK"' \
  -i 'lora_scale=1' \
  -i 'num_outputs=1' \
  -i 'aspect_ratio="1:1"' \
  -i 'output_format="webp"' \
  -i 'guidance_scale=3.5' \
  -i 'output_quality=80' \
  -i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "model": "dev",
      "prompt": "a boat in the style of TOK",
      "lora_scale": 1,
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3.5,
      "output_quality": 80,
      "num_inference_steps": 28
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
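The local endpoint accepts the same input object as the hosted API. As a hedged sketch (the function name and the keyword-override pattern are my own; the default values are taken from the inputs on this page), you could build that request body programmatically:

```python
def build_local_payload(prompt, **overrides):
    """Build the JSON body for POST http://localhost:5000/predictions."""
    inputs = {
        "model": "dev",
        "lora_scale": 1,
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3.5,
        "output_quality": 80,
        "num_inference_steps": 28,
    }
    inputs.update(overrides)  # e.g. num_inference_steps=20
    inputs["prompt"] = prompt
    return {"input": inputs}

payload = build_local_payload("a boat in the style of TOK", num_inference_steps=20)
print(payload["input"]["num_inference_steps"])  # 20
```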
Output
{ "completed_at": "2024-08-15T15:08:53.145820Z", "created_at": "2024-08-15T15:08:32.808000Z", "data_removed": false, "error": null, "id": "vz9xg32mn1rm20chatatqhx7jg", "input": { "model": "dev", "prompt": "a boat in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 10800\nPrompt: a boat in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nEnsuring enough disk space...\nFree disk space: 9722244792320\nDownloading weights: https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\n2024-08-15T15:08:34Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/cf81144cd0f2e7f0 url=https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\n2024-08-15T15:08:36Z | INFO | [ Complete ] dest=/src/weights-cache/cf81144cd0f2e7f0 size=\"172 MB\" total_elapsed=1.448s url=https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nb''\nDownloaded weights in 1.4735283851623535 seconds\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.65it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.22it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.95it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.82it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.76it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.73it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.71it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.69it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.68it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.67it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.67it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.66it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.65it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.66it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.65it/s]\n 57%|█████▋ | 16/28 
[00:04<00:03, 3.65it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.65it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.66it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.66it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.65it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.64it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.65it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.65it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.65it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.65it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.65it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.65it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.68it/s]", "metrics": { "predict_time": 18.325301507, "total_time": 20.33782 }, "output": [ "https://replicate.delivery/yhqm/z7f2OBcvga07dCoJ4FeRGZCbE5PvipLhogPhEeU7BazIg5lmA/out-0.webp" ], "started_at": "2024-08-15T15:08:34.820518Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/vz9xg32mn1rm20chatatqhx7jg", "cancel": "https://api.replicate.com/v1/predictions/vz9xg32mn1rm20chatatqhx7jg/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
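The metrics in the response can be cross-checked against its timestamps. A small sketch (the helper name is my own; the timestamps are taken from the prediction object above):

```python
from datetime import datetime

def duration_seconds(started_at: str, completed_at: str) -> float:
    # Timestamps use the ISO-8601 format seen in the prediction object.
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    start = datetime.strptime(started_at, fmt)
    end = datetime.strptime(completed_at, fmt)
    return (end - start).total_seconds()

# Matches "predict_time": 18.325301507 from the response above (to rounding).
print(duration_seconds("2024-08-15T15:08:34.820518Z", "2024-08-15T15:08:53.145820Z"))
```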
Prediction
lucataco/flux-watercolor:846d1eb3
ID: dbaqgyj2bsrm00chatbbb2jzjw · Status: Succeeded · Source: Web · Hardware: H100
Input:
- model: dev
- prompt: a woman in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a woman in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Output
{ "completed_at": "2024-08-15T15:09:49.369186Z", "created_at": "2024-08-15T15:09:33.662000Z", "data_removed": false, "error": null, "id": "dbaqgyj2bsrm00chatbbb2jzjw", "input": { "model": "dev", "prompt": "a woman in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 35429\nPrompt: a woman in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.68it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.21it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.94it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.82it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.76it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.71it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.69it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.68it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.67it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.66it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.65it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.65it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.65it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.64it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.64it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.64it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.64it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.63it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.63it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.64it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.64it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.64it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.64it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.64it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.64it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.64it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]", "metrics": { "predict_time": 
15.662710888, "total_time": 15.707186 }, "output": [ "https://replicate.delivery/yhqm/Dfkpe5EBvMnynUDBKfF3R636OgzA7iZ8SeKc4iBerDfYPMv0E/out-0.webp" ], "started_at": "2024-08-15T15:09:33.706475Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/dbaqgyj2bsrm00chatbbb2jzjw", "cancel": "https://api.replicate.com/v1/predictions/dbaqgyj2bsrm00chatbbb2jzjw/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Prediction
lucataco/flux-watercolor:846d1eb3
ID: 7qg8jp10jdrm00chatbs28wy78 · Status: Succeeded · Source: Web · Hardware: H100
Input:
- model: dev
- prompt: a car in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a car in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Output
{ "completed_at": "2024-08-15T15:11:04.084935Z", "created_at": "2024-08-15T15:10:30.547000Z", "data_removed": false, "error": null, "id": "7qg8jp10jdrm00chatbs28wy78", "input": { "model": "dev", "prompt": "a car in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 16840\nPrompt: a car in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.65it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.19it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.93it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.82it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.75it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.71it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.69it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.68it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.66it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.65it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.65it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.65it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.65it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.64it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.64it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.64it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.64it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.64it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.64it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.64it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.64it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.63it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.64it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.64it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.63it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.63it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]", "metrics": { "predict_time": 
16.161111966, "total_time": 33.537935 }, "output": [ "https://replicate.delivery/yhqm/c9Bh6p0VuSoJKtNseie38Kl9piM5Fmnw49AyxfuI7e3cIzLNB/out-0.webp" ], "started_at": "2024-08-15T15:10:47.923823Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/7qg8jp10jdrm00chatbs28wy78", "cancel": "https://api.replicate.com/v1/predictions/7qg8jp10jdrm00chatbs28wy78/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Prediction
lucataco/flux-watercolor:846d1eb3
ID: hnvh7fpv6srm60chatbvt7mw8c · Status: Succeeded · Source: Web · Hardware: H100
Input:
- model: dev
- prompt: a house in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a house in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Output
{ "completed_at": "2024-08-15T15:11:37.880091Z", "created_at": "2024-08-15T15:11:18.326000Z", "data_removed": false, "error": null, "id": "hnvh7fpv6srm60chatbvt7mw8c", "input": { "model": "dev", "prompt": "a house in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 62276\nPrompt: a house in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.65it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.18it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.92it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.82it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.76it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.72it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.69it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.68it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.67it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.66it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.65it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.65it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.65it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.65it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.64it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.64it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.65it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.64it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.64it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.64it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.64it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.63it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.64it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.64it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.64it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.64it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]", "metrics": { "predict_time": 
17.32802582, "total_time": 19.554091 }, "output": [ "https://replicate.delivery/yhqm/LOAdr8EWDd6cKFbLZ0BNbee6qjusgdN2efjlro4M9cOmKzLNB/out-0.webp" ], "started_at": "2024-08-15T15:11:20.552065Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/hnvh7fpv6srm60chatbvt7mw8c", "cancel": "https://api.replicate.com/v1/predictions/hnvh7fpv6srm60chatbvt7mw8c/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Generated in 17.3 seconds
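In the metrics block above, predict_time covers only model execution, while total_time also includes queue and setup overhead. A minimal sketch of reading those fields (the dict below simply echoes the values from the JSON response above):

```python
# Compute scheduling/setup overhead from a prediction's metrics.
# Values copied from the prediction JSON shown above.
prediction = {
    "status": "succeeded",
    "metrics": {"predict_time": 17.32802582, "total_time": 19.554091},
}

metrics = prediction["metrics"]
overhead = metrics["total_time"] - metrics["predict_time"]
print(f"GPU time: {metrics['predict_time']:.2f}s, overhead: {overhead:.2f}s")
# For this run the overhead works out to about 2.23 seconds.
```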
Prediction
lucataco/flux-watercolor:846d1eb3IDy9y371savdrm60chatcsdrvtncStatusSucceededSourceWebHardwareH100Total durationCreatedInput
- model: dev
- prompt: a girl holding a baloon in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a girl holding a baloon in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", { input: { model: "dev", prompt: "a girl holding a baloon in the style of TOK", lora_scale: 1, num_outputs: 1, aspect_ratio: "1:1", output_format: "webp", guidance_scale: 3.5, output_quality: 80, num_inference_steps: 28 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", input={ "model": "dev", "prompt": "a girl holding a baloon in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
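replicate.run returns a list of output URLs, one per num_outputs. A sketch of saving them to disk using only the standard library; filename_for and save_outputs are illustrative helpers, not part of the Replicate client:

```python
import os
import urllib.parse
import urllib.request


def filename_for(url: str, index: int) -> str:
    """Derive a local filename like out-0.webp from an output URL."""
    path = urllib.parse.urlparse(url).path
    ext = os.path.splitext(path)[1] or ".webp"
    return f"out-{index}{ext}"


def save_outputs(urls: list[str]) -> list[str]:
    """Download each output URL to the current directory."""
    saved = []
    for i, url in enumerate(urls):
        name = filename_for(url, i)
        urllib.request.urlretrieve(url, name)
        saved.append(name)
    return saved


# Usage, after output = replicate.run(...):
# save_outputs(output)
```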
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", "input": { "model": "dev", "prompt": "a girl holding a baloon in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
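The Prefer: wait header asks the API to hold the connection until the prediction finishes, but long runs can still come back in a non-terminal state, in which case you fetch the prediction's urls.get endpoint until it settles. A sketch under the documented status values (starting, processing, succeeded, failed, canceled); poll is an illustrative helper:

```python
import json
import os
import time
import urllib.request

# Statuses after which a prediction no longer changes.
TERMINAL = {"succeeded", "failed", "canceled"}


def is_terminal(status: str) -> bool:
    """True once a prediction has reached a final state."""
    return status in TERMINAL


def poll(get_url: str, interval: float = 1.0) -> dict:
    """Fetch a prediction from its urls.get endpoint until it is terminal."""
    token = os.environ["REPLICATE_API_TOKEN"]
    while True:
        req = urllib.request.Request(
            get_url, headers={"Authorization": f"Bearer {token}"}
        )
        with urllib.request.urlopen(req) as resp:
            prediction = json.load(resp)
        if is_terminal(prediction["status"]):
            return prediction
        time.sleep(interval)
```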
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383 \ -i 'model="dev"' \ -i 'prompt="a girl holding a baloon in the style of TOK"' \ -i 'lora_scale=1' \ -i 'num_outputs=1' \ -i 'aspect_ratio="1:1"' \ -i 'output_format="webp"' \ -i 'guidance_scale=3.5' \ -i 'output_quality=80' \ -i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "model": "dev", "prompt": "a girl holding a baloon in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
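When Cog serves the model locally as above, file outputs are typically returned inline as base64 data URIs rather than hosted URLs (this depends on the Cog version and how the model declares its output type). A sketch of decoding one to disk; save_data_uri is an illustrative helper:

```python
import base64


def save_data_uri(uri: str, path: str) -> int:
    """Decode a data: URI (e.g. data:image/webp;base64,...) to a file.

    Returns the number of bytes written.
    """
    header, b64 = uri.split(",", 1)
    if not header.startswith("data:"):
        raise ValueError("not a data URI")
    data = base64.b64decode(b64)
    with open(path, "wb") as f:
        f.write(data)
    return len(data)


# Usage with the local endpoint's JSON response:
# for i, uri in enumerate(response["output"]):
#     save_data_uri(uri, f"out-{i}.webp")
```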
Output
{ "completed_at": "2024-08-15T15:12:59.759313Z", "created_at": "2024-08-15T15:12:44.251000Z", "data_removed": false, "error": null, "id": "y9y371savdrm60chatcsdrvtnc", "input": { "model": "dev", "prompt": "a girl holding a baloon in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 28468\nPrompt: a girl holding a baloon in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.65it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.20it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.93it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.81it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.75it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.71it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.69it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.68it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.67it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.66it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.66it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.65it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.64it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.64it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.64it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.64it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.64it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.63it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.64it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.64it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.63it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.63it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.64it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.64it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.63it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.63it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]", 
"metrics": { "predict_time": 15.463723257, "total_time": 15.508313 }, "output": [ "https://replicate.delivery/yhqm/9gKX8SxSwMbbCddDDT3RaTevPKn0DeUxMk8NzaUDPBY7z8STA/out-0.webp" ], "started_at": "2024-08-15T15:12:44.295589Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/y9y371savdrm60chatcsdrvtnc", "cancel": "https://api.replicate.com/v1/predictions/y9y371savdrm60chatcsdrvtnc/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Generated in 15.5 seconds
Prediction
lucataco/flux-watercolor:846d1eb3
ID: jwge9etp61rm00chatdb4v3by0 · Status: Succeeded · Source: Web · Hardware: H100
Input
- model: dev
- prompt: a squirrel and a bird in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a squirrel and a bird in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", { input: { model: "dev", prompt: "a squirrel and a bird in the style of TOK", lora_scale: 1, num_outputs: 1, aspect_ratio: "1:1", output_format: "webp", guidance_scale: 3.5, output_quality: 80, num_inference_steps: 28 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", input={ "model": "dev", "prompt": "a squirrel and a bird in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", "input": { "model": "dev", "prompt": "a squirrel and a bird in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383 \ -i 'model="dev"' \ -i 'prompt="a squirrel and a bird in the style of TOK"' \ -i 'lora_scale=1' \ -i 'num_outputs=1' \ -i 'aspect_ratio="1:1"' \ -i 'output_format="webp"' \ -i 'guidance_scale=3.5' \ -i 'output_quality=80' \ -i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "model": "dev", "prompt": "a squirrel and a bird in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2024-08-15T15:14:16.656029Z", "created_at": "2024-08-15T15:14:00.880000Z", "data_removed": false, "error": null, "id": "jwge9etp61rm00chatdb4v3by0", "input": { "model": "dev", "prompt": "a squirrel and a bird in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 40405\nPrompt: a squirrel and a bird in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.66it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.21it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.94it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.81it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.74it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.71it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.69it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.67it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.66it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.65it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.65it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.64it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.64it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.64it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.64it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.63it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.64it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.64it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.64it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.63it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.63it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.64it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.64it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.63it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.63it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.64it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.64it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.63it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]", 
"metrics": { "predict_time": 15.732377468, "total_time": 15.776029 }, "output": [ "https://replicate.delivery/yhqm/VOTinXz6gJbGMx54oqpv3Lfy2w9xm0jrHACAVIeOV4cI18STA/out-0.webp" ], "started_at": "2024-08-15T15:14:00.923652Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/jwge9etp61rm00chatdb4v3by0", "cancel": "https://api.replicate.com/v1/predictions/jwge9etp61rm00chatdb4v3by0/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Generated in 15.7 seconds
Prediction
lucataco/flux-watercolor:846d1eb3
ID: ws6dp744chrm40chate8xe8jxm · Status: Succeeded · Source: Web · Hardware: H100
Input
- model: dev
- prompt: a cat in a hat, in the style of TOK
- lora_scale: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3.5
- output_quality: 80
- num_inference_steps: 28
{ "model": "dev", "prompt": "a cat in a hat, in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", { input: { model: "dev", prompt: "a cat in a hat, in the style of TOK", lora_scale: 1, num_outputs: 1, aspect_ratio: "1:1", output_format: "webp", guidance_scale: 3.5, output_quality: 80, num_inference_steps: 28 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "lucataco/flux-watercolor:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", input={ "model": "dev", "prompt": "a cat in a hat, in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run lucataco/flux-watercolor using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383", "input": { "model": "dev", "prompt": "a cat in a hat, in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383 \ -i 'model="dev"' \ -i 'prompt="a cat in a hat, in the style of TOK"' \ -i 'lora_scale=1' \ -i 'num_outputs=1' \ -i 'aspect_ratio="1:1"' \ -i 'output_format="webp"' \ -i 'guidance_scale=3.5' \ -i 'output_quality=80' \ -i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/lucataco/flux-watercolor@sha256:846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "model": "dev", "prompt": "a cat in a hat, in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2024-08-15T15:16:39.790003Z", "created_at": "2024-08-15T15:16:23.780000Z", "data_removed": false, "error": null, "id": "ws6dp744chrm40chate8xe8jxm", "input": { "model": "dev", "prompt": "a cat in a hat, in the style of TOK", "lora_scale": 1, "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3.5, "output_quality": 80, "num_inference_steps": 28 }, "logs": "Using seed: 24084\nPrompt: a cat in a hat, in the style of TOK\ntxt2img mode\nUsing dev model\nLoading LoRA weights from https://replicate.delivery/yhqm/sC4uO2EfV9xFGi4YrH1LH3GenLrMwVeS7YdqbVPEeU6z9yLNB/trained_model.tar\nLoRA weights loaded successfully\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.69it/s]\n 7%|▋ | 2/28 [00:00<00:06, 4.23it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.94it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.83it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.78it/s]\n 21%|██▏ | 6/28 [00:01<00:05, 3.73it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.71it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.70it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.69it/s]\n 36%|███▌ | 10/28 [00:02<00:04, 3.68it/s]\n 39%|███▉ | 11/28 [00:02<00:04, 3.67it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.67it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.67it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.67it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.66it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.67it/s]\n 61%|██████ | 17/28 [00:04<00:02, 3.67it/s]\n 64%|██████▍ | 18/28 [00:04<00:02, 3.67it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.66it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.67it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.67it/s]\n 79%|███████▊ | 22/28 [00:05<00:01, 3.66it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.66it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.66it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.67it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.66it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.66it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.67it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.69it/s]", "metrics": { 
"predict_time": 15.965973349, "total_time": 16.010003 }, "output": [ "https://replicate.delivery/yhqm/Dl0Cdv5pS3JqPRXq4SyEvSSdpBwnoCBcG5rxUGtf0tfX38STA/out-0.webp" ], "started_at": "2024-08-15T15:16:23.824030Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/ws6dp744chrm40chate8xe8jxm", "cancel": "https://api.replicate.com/v1/predictions/ws6dp744chrm40chate8xe8jxm/cancel" }, "version": "846d1eb37059ed2ed268ff8dd4aa1531487fcdc3425a7a44c2a0a10723ef8383" }
Generated in 16.0 seconds
Want to make some of these yourself?
Run this model