Readme
This model doesn't have a readme.
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run gileslerockeur/kamil using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"gileslerockeur/kamil:8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86",
{
input: {
width: 1024,
height: 1024,
prompt: "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
refine: "no_refiner",
scheduler: "K_EULER",
lora_scale: 0.6,
num_outputs: 1,
guidance_scale: 7.5,
apply_watermark: true,
high_noise_frac: 0.8,
negative_prompt: "",
prompt_strength: 0.8,
num_inference_steps: 50
}
}
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run gileslerockeur/kamil using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"gileslerockeur/kamil:8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86",
input={
"width": 1024,
"height": 1024,
"prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
"refine": "no_refiner",
"scheduler": "K_EULER",
"lora_scale": 0.6,
"num_outputs": 1,
"guidance_scale": 7.5,
"apply_watermark": True,
"high_noise_frac": 0.8,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 50
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
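Depending on your client version, the returned output may be a list of URL strings (as in the example prediction below) or file-like objects. The following is a minimal sketch, not part of the original example, that saves each output image to disk, assuming the output is a list of URLs:
import urllib.request

import replicate

output = replicate.run(
    "gileslerockeur/kamil:8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86",
    input={
        "prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
        "width": 1024,
        "height": 1024,
    },
)

# Assumption: output is a list of image URLs, as in the example prediction shown below.
# Newer client versions may return file-like objects instead; adjust accordingly.
for i, url in enumerate(output):
    urllib.request.urlretrieve(str(url), f"out-{i}.png")
    print(f"saved out-{i}.png")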
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run gileslerockeur/kamil using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86",
"input": {
"width": 1024,
"height": 1024,
"prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
"refine": "no_refiner",
"scheduler": "K_EULER",
"lora_scale": 0.6,
"num_outputs": 1,
"guidance_scale": 7.5,
"apply_watermark": true,
"high_noise_frac": 0.8,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 50
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
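The example above uses the Prefer: wait header so the request blocks until the prediction finishes. Without it, the API responds immediately with a prediction object whose urls.get endpoint you poll until the status is terminal. Here is a minimal Python sketch of that flow, assuming the requests package is installed and REPLICATE_API_TOKEN is set:
import os
import time

import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction; without "Prefer: wait" this returns right away.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86",
        "input": {"prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors"},
    },
).json()

# Poll the prediction's "get" URL until it reaches a terminal status.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))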
Install Cog with Homebrew:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run gileslerockeur/kamil using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/gileslerockeur/kamil@sha256:8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86 \
-i 'width=1024' \
-i 'height=1024' \
-i 'prompt="A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors"' \
-i 'refine="no_refiner"' \
-i 'scheduler="K_EULER"' \
-i 'lora_scale=0.6' \
-i 'num_outputs=1' \
-i 'guidance_scale=7.5' \
-i 'apply_watermark=true' \
-i 'high_noise_frac=0.8' \
-i 'negative_prompt=""' \
-i 'prompt_strength=0.8' \
-i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Pull and run gileslerockeur/kamil using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/gileslerockeur/kamil@sha256:8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86
Then send a prediction request to the running container:
curl -s -X POST \
-H "Content-Type: application/json" \
-d $'{
"input": {
"width": 1024,
"height": 1024,
"prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
"refine": "no_refiner",
"scheduler": "K_EULER",
"lora_scale": 0.6,
"num_outputs": 1,
"guidance_scale": 7.5,
"apply_watermark": true,
"high_noise_frac": 0.8,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 50
}
}' \
http://localhost:5000/predictions
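You can call the same local endpoint from Python. A minimal sketch, assuming the container above is listening on localhost:5000 and that the local server returns file outputs as base64 data URIs (if yours returns plain URLs, download them instead):
import base64

import requests

resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
            "width": 1024,
            "height": 1024,
        }
    },
)
resp.raise_for_status()
result = resp.json()

# Assumption: file outputs come back as data URIs like "data:image/png;base64,...".
outputs = result["output"] if isinstance(result["output"], list) else [result["output"]]
for i, out in enumerate(outputs):
    if out.startswith("data:"):
        _, b64 = out.split(",", 1)
        with open(f"out-{i}.png", "wb") as f:
            f.write(base64.b64decode(b64))
    else:
        print(out)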
Example prediction (API response):
{
"completed_at": "2024-02-28T20:14:36.430559Z",
"created_at": "2024-02-28T20:13:42.230880Z",
"data_removed": false,
"error": null,
"id": "ukipia3b2qwv2ockes3muftzw4",
"input": {
"width": 1024,
"height": 1024,
"prompt": "A HD portrait picture of TOK at the beach, mexican sombero hat, vivid colors",
"refine": "no_refiner",
"scheduler": "K_EULER",
"lora_scale": 0.6,
"num_outputs": 1,
"guidance_scale": 7.5,
"apply_watermark": true,
"high_noise_frac": 0.8,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 50
},
"logs": "Using seed: 58012\nEnsuring enough disk space...\nFree disk space: 2109683073024\nDownloading weights: https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar\n2024-02-28T20:14:13Z | INFO | [ Initiating ] dest=/src/weights-cache/936a01005cede4be minimum_chunk_size=150M url=https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar\n2024-02-28T20:14:19Z | INFO | [ Complete ] dest=/src/weights-cache/936a01005cede4be size=\"186 MB\" total_elapsed=5.755s url=https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar\nb''\nDownloaded weights in 5.875920057296753 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A HD portrait picture of <s0><s1> at the beach, mexican sombero hat, vivid colors\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n 2%|▏ | 1/50 [00:00<00:41, 1.17it/s]\n 4%|▍ | 2/50 [00:01<00:24, 1.96it/s]\n 6%|▌ | 3/50 [00:01<00:18, 2.49it/s]\n 8%|▊ | 4/50 [00:01<00:16, 2.85it/s]\n 10%|█ | 5/50 [00:01<00:14, 3.10it/s]\n 12%|█▏ | 6/50 [00:02<00:13, 3.27it/s]\n 14%|█▍ | 7/50 [00:02<00:12, 3.39it/s]\n 16%|█▌ | 8/50 [00:02<00:12, 3.47it/s]\n 18%|█▊ | 9/50 [00:03<00:11, 3.53it/s]\n 20%|██ | 10/50 [00:03<00:11, 3.57it/s]\n 22%|██▏ | 11/50 [00:03<00:10, 3.60it/s]\n 24%|██▍ | 12/50 [00:03<00:10, 3.62it/s]\n 26%|██▌ | 13/50 [00:04<00:10, 3.63it/s]\n 28%|██▊ | 14/50 [00:04<00:09, 3.64it/s]\n 30%|███ | 15/50 [00:04<00:09, 3.64it/s]\n 32%|███▏ | 16/50 [00:04<00:09, 3.64it/s]\n 34%|███▍ | 17/50 [00:05<00:09, 3.65it/s]\n 36%|███▌ | 18/50 [00:05<00:08, 3.65it/s]\n 38%|███▊ | 19/50 [00:05<00:08, 3.65it/s]\n 40%|████ | 20/50 [00:06<00:08, 3.65it/s]\n 42%|████▏ | 21/50 [00:06<00:07, 3.65it/s]\n 44%|████▍ | 22/50 [00:06<00:07, 3.66it/s]\n 46%|████▌ | 23/50 [00:06<00:07, 3.65it/s]\n 48%|████▊ | 24/50 [00:07<00:07, 3.65it/s]\n 50%|█████ | 25/50 [00:07<00:06, 3.65it/s]\n 52%|█████▏ | 26/50 [00:07<00:06, 3.65it/s]\n 54%|█████▍ | 27/50 [00:07<00:06, 3.65it/s]\n 56%|█████▌ | 28/50 [00:08<00:06, 3.65it/s]\n 58%|█████▊ | 29/50 [00:08<00:05, 3.65it/s]\n 60%|██████ | 30/50 [00:08<00:05, 3.66it/s]\n 62%|██████▏ | 31/50 [00:09<00:05, 3.67it/s]\n 64%|██████▍ | 32/50 [00:09<00:04, 3.67it/s]\n 66%|██████▌ | 33/50 [00:09<00:04, 3.67it/s]\n 68%|██████▊ | 34/50 [00:09<00:04, 3.67it/s]\n 70%|███████ | 35/50 [00:10<00:04, 3.67it/s]\n 72%|███████▏ | 36/50 [00:10<00:03, 3.65it/s]\n 74%|███████▍ | 37/50 [00:10<00:03, 3.65it/s]\n 76%|███████▌ | 38/50 [00:10<00:03, 3.65it/s]\n 78%|███████▊ | 39/50 [00:11<00:03, 3.66it/s]\n 80%|████████ | 40/50 [00:11<00:02, 3.66it/s]\n 82%|████████▏ | 41/50 [00:11<00:02, 3.67it/s]\n 84%|████████▍ | 42/50 [00:12<00:02, 3.67it/s]\n 86%|████████▌ | 43/50 [00:12<00:01, 3.67it/s]\n 88%|████████▊ | 44/50 [00:12<00:01, 3.67it/s]\n 90%|█████████ | 45/50 [00:12<00:01, 3.67it/s]\n 92%|█████████▏| 46/50 [00:13<00:01, 3.67it/s]\n 94%|█████████▍| 47/50 [00:13<00:00, 3.67it/s]\n 96%|█████████▌| 48/50 [00:13<00:00, 3.67it/s]\n 98%|█████████▊| 49/50 [00:13<00:00, 3.66it/s]\n100%|██████████| 50/50 [00:14<00:00, 3.66it/s]\n100%|██████████| 50/50 [00:14<00:00, 3.51it/s]",
"metrics": {
"predict_time": 23.244437,
"total_time": 54.199679
},
"output": [
"https://replicate.delivery/pbxt/ma1xFE7yzM7IGtAfxpzKvvyfZMta8eWHt4mNUZbkPpzZxo2kA/out-0.png"
],
"started_at": "2024-02-28T20:14:13.186122Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/ukipia3b2qwv2ockes3muftzw4",
"cancel": "https://api.replicate.com/v1/predictions/ukipia3b2qwv2ockes3muftzw4/cancel"
},
"version": "8f796abbd45e701e2b4521c93724b56ab8388ce8e07ddd1e0689d5909aa47e86"
}
Prediction logs:
Using seed: 58012
Ensuring enough disk space...
Free disk space: 2109683073024
Downloading weights: https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar
2024-02-28T20:14:13Z | INFO | [ Initiating ] dest=/src/weights-cache/936a01005cede4be minimum_chunk_size=150M url=https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar
2024-02-28T20:14:19Z | INFO | [ Complete ] dest=/src/weights-cache/936a01005cede4be size="186 MB" total_elapsed=5.755s url=https://replicate.delivery/pbxt/P5BCUkt3ObbnAxf7S2jcoNDoXDaeCkSdVptfTqR2NTn4hm2kA/trained_model.tar
b''
Downloaded weights in 5.875920057296753 seconds
Loading fine-tuned model
Does not have Unet. assume we are using LoRA
Loading Unet LoRA
Prompt: A HD portrait picture of <s0><s1> at the beach, mexican sombero hat, vivid colors
txt2img mode
0%| | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)
return F.conv2d(input, weight, bias, self.stride,
2%|▏ | 1/50 [00:00<00:41, 1.17it/s]
4%|▍ | 2/50 [00:01<00:24, 1.96it/s]
6%|▌ | 3/50 [00:01<00:18, 2.49it/s]
8%|▊ | 4/50 [00:01<00:16, 2.85it/s]
10%|█ | 5/50 [00:01<00:14, 3.10it/s]
12%|█▏ | 6/50 [00:02<00:13, 3.27it/s]
14%|█▍ | 7/50 [00:02<00:12, 3.39it/s]
16%|█▌ | 8/50 [00:02<00:12, 3.47it/s]
18%|█▊ | 9/50 [00:03<00:11, 3.53it/s]
20%|██ | 10/50 [00:03<00:11, 3.57it/s]
22%|██▏ | 11/50 [00:03<00:10, 3.60it/s]
24%|██▍ | 12/50 [00:03<00:10, 3.62it/s]
26%|██▌ | 13/50 [00:04<00:10, 3.63it/s]
28%|██▊ | 14/50 [00:04<00:09, 3.64it/s]
30%|███ | 15/50 [00:04<00:09, 3.64it/s]
32%|███▏ | 16/50 [00:04<00:09, 3.64it/s]
34%|███▍ | 17/50 [00:05<00:09, 3.65it/s]
36%|███▌ | 18/50 [00:05<00:08, 3.65it/s]
38%|███▊ | 19/50 [00:05<00:08, 3.65it/s]
40%|████ | 20/50 [00:06<00:08, 3.65it/s]
42%|████▏ | 21/50 [00:06<00:07, 3.65it/s]
44%|████▍ | 22/50 [00:06<00:07, 3.66it/s]
46%|████▌ | 23/50 [00:06<00:07, 3.65it/s]
48%|████▊ | 24/50 [00:07<00:07, 3.65it/s]
50%|█████ | 25/50 [00:07<00:06, 3.65it/s]
52%|█████▏ | 26/50 [00:07<00:06, 3.65it/s]
54%|█████▍ | 27/50 [00:07<00:06, 3.65it/s]
56%|█████▌ | 28/50 [00:08<00:06, 3.65it/s]
58%|█████▊ | 29/50 [00:08<00:05, 3.65it/s]
60%|██████ | 30/50 [00:08<00:05, 3.66it/s]
62%|██████▏ | 31/50 [00:09<00:05, 3.67it/s]
64%|██████▍ | 32/50 [00:09<00:04, 3.67it/s]
66%|██████▌ | 33/50 [00:09<00:04, 3.67it/s]
68%|██████▊ | 34/50 [00:09<00:04, 3.67it/s]
70%|███████ | 35/50 [00:10<00:04, 3.67it/s]
72%|███████▏ | 36/50 [00:10<00:03, 3.65it/s]
74%|███████▍ | 37/50 [00:10<00:03, 3.65it/s]
76%|███████▌ | 38/50 [00:10<00:03, 3.65it/s]
78%|███████▊ | 39/50 [00:11<00:03, 3.66it/s]
80%|████████ | 40/50 [00:11<00:02, 3.66it/s]
82%|████████▏ | 41/50 [00:11<00:02, 3.67it/s]
84%|████████▍ | 42/50 [00:12<00:02, 3.67it/s]
86%|████████▌ | 43/50 [00:12<00:01, 3.67it/s]
88%|████████▊ | 44/50 [00:12<00:01, 3.67it/s]
90%|█████████ | 45/50 [00:12<00:01, 3.67it/s]
92%|█████████▏| 46/50 [00:13<00:01, 3.67it/s]
94%|█████████▍| 47/50 [00:13<00:00, 3.67it/s]
96%|█████████▌| 48/50 [00:13<00:00, 3.67it/s]
98%|█████████▊| 49/50 [00:13<00:00, 3.66it/s]
100%|██████████| 50/50 [00:14<00:00, 3.66it/s]
100%|██████████| 50/50 [00:14<00:00, 3.51it/s]
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model is warm. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.