Readme
This model doesn't have a readme.
Run this model in Node.js. First, install Replicate’s client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run anotherjesse/amy-tattoo-test-1 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "anotherjesse/amy-tattoo-test-1:0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "A TOK tattoo drawing style of california poppies on mount tam",
      refine: "no_refiner",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 2,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 25
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install Replicate’s client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run anotherjesse/amy-tattoo-test-1 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "anotherjesse/amy-tattoo-test-1:0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "A TOK tattoo drawing style of california poppies on mount tam",
        "refine": "no_refiner",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 2,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 25
    }
)
print(output)
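For this model, replicate.run returns a list of image URLs (two of them, given "num_outputs": 2). Below is a minimal sketch for saving those files to disk, assuming that list shape; local_name and save_outputs are illustrative helpers written here, not part of the Replicate client library:

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_name(url: str) -> str:
    # Derive a local filename from the URL path, e.g. ".../out-0.png" -> "out-0.png"
    return os.path.basename(urlparse(url).path)

def save_outputs(output, dest="."):
    # output: the list of delivery URLs returned by replicate.run()
    paths = []
    for url in output:
        path = os.path.join(dest, local_name(url))
        urlretrieve(url, path)  # download each generated image
        paths.append(path)
    return paths
```

Calling save_outputs(output) after the run above would write out-0.png and out-1.png into the current directory.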
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run anotherjesse/amy-tattoo-test-1 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc",
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "A TOK tattoo drawing style of california poppies on mount tam",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 2,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 25
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
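The Prefer: wait header asks the API to hold the connection open until the prediction finishes; if the request returns before that, the prediction can be polled via its urls.get endpoint until the status is terminal. A minimal polling sketch follows; the authenticated HTTP GET is injected as a fetch callable (an assumption made here so the loop stands alone without auth details):

```python
import time

# Statuses after which a prediction will not change again
TERMINAL = {"succeeded", "failed", "canceled"}

def wait_for_prediction(prediction, fetch, poll_interval=1.0):
    # prediction: parsed JSON from the create call above
    # fetch: callable taking a URL and returning the parsed prediction JSON
    #        (hypothetical injection point; in practice an authenticated GET)
    while prediction["status"] not in TERMINAL:
        time.sleep(poll_interval)
        prediction = fetch(prediction["urls"]["get"])
    return prediction
```

Injecting the GET keeps the retry logic testable and lets the same loop work with any HTTP client.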
Install Cog, Replicate’s open-source tool for packaging machine learning models:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/anotherjesse/amy-tattoo-test-1@sha256:0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc \
-i 'width=1024' \
-i 'height=1024' \
-i 'prompt="A TOK tattoo drawing style of california poppies on mount tam"' \
-i 'refine="no_refiner"' \
-i 'scheduler="K_EULER"' \
-i 'lora_scale=0.6' \
-i 'num_outputs=2' \
-i 'guidance_scale=7.5' \
-i 'apply_watermark=true' \
-i 'high_noise_frac=0.8' \
-i 'negative_prompt=""' \
-i 'prompt_strength=0.8' \
-i 'num_inference_steps=25'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as a local HTTP prediction server with Docker:
docker run -d -p 5000:5000 --gpus=all r8.im/anotherjesse/amy-tattoo-test-1@sha256:0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "A TOK tattoo drawing style of california poppies on mount tam",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 2,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 25
    }
  }' \
  http://localhost:5000/predictions
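The local server accepts the same JSON shape: POST a body of the form {"input": {...}} to /predictions. A small sketch of building that request with Python's standard library; build_request is an illustrative helper written here, and the endpoint default assumes the port mapping from the docker run command above:

```python
import json
from urllib.request import Request, urlopen

def build_request(inputs, endpoint="http://localhost:5000/predictions"):
    # Assemble the JSON body the local prediction server expects: {"input": {...}}
    body = json.dumps({"input": inputs}).encode("utf-8")
    return Request(endpoint, data=body,
                   headers={"Content-Type": "application/json"})

# Example (requires the container to be running):
# resp = json.load(urlopen(build_request({"prompt": "A TOK tattoo drawing style of california poppies on mount tam"})))
```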
To learn more, take a look at the Cog documentation.
{
"completed_at": "2024-07-03T21:24:43.030280Z",
"created_at": "2024-07-03T21:24:16.282000Z",
"data_removed": false,
"error": null,
"id": "60vz0tpx39rgg0cgf9xrq5szjg",
"input": {
"width": 1024,
"height": 1024,
"prompt": "A TOK tattoo drawing style of california poppies on mount tam",
"refine": "no_refiner",
"scheduler": "K_EULER",
"lora_scale": 0.6,
"num_outputs": 2,
"guidance_scale": 7.5,
"apply_watermark": true,
"high_noise_frac": 0.8,
"negative_prompt": "",
"prompt_strength": 0.8,
"num_inference_steps": 25
},
"logs": "Using seed: 39467\nEnsuring enough disk space...\nFree disk space: 1653744140288\nDownloading weights: https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar\n2024-07-03T21:24:18Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/05a120e6d4f7a196 url=https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar\n2024-07-03T21:24:24Z | INFO | [ Complete ] dest=/src/weights-cache/05a120e6d4f7a196 size=\"186 MB\" total_elapsed=6.250s url=https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar\nb''\nDownloaded weights in 6.396426200866699 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A <s0><s1> tattoo drawing style of california poppies on mount tam\ntxt2img mode\n 0%| | 0/25 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. 
This will be done automatically when using `LoraLoaderMixin.load_lora_weights`\ndeprecate(\n 4%|▍ | 1/25 [00:00<00:11, 2.12it/s]\n 8%|▊ | 2/25 [00:00<00:10, 2.16it/s]\n 12%|█▏ | 3/25 [00:01<00:10, 2.17it/s]\n 16%|█▌ | 4/25 [00:01<00:09, 2.18it/s]\n 20%|██ | 5/25 [00:02<00:09, 2.18it/s]\n 24%|██▍ | 6/25 [00:02<00:08, 2.18it/s]\n 28%|██▊ | 7/25 [00:03<00:08, 2.18it/s]\n 32%|███▏ | 8/25 [00:03<00:07, 2.18it/s]\n 36%|███▌ | 9/25 [00:04<00:07, 2.18it/s]\n 40%|████ | 10/25 [00:04<00:06, 2.18it/s]\n 44%|████▍ | 11/25 [00:05<00:06, 2.18it/s]\n 48%|████▊ | 12/25 [00:05<00:05, 2.18it/s]\n 52%|█████▏ | 13/25 [00:05<00:05, 2.18it/s]\n 56%|█████▌ | 14/25 [00:06<00:05, 2.18it/s]\n 60%|██████ | 15/25 [00:06<00:04, 2.17it/s]\n 64%|██████▍ | 16/25 [00:07<00:04, 2.17it/s]\n 68%|██████▊ | 17/25 [00:07<00:03, 2.17it/s]\n 72%|███████▏ | 18/25 [00:08<00:03, 2.17it/s]\n 76%|███████▌ | 19/25 [00:08<00:02, 2.17it/s]\n 80%|████████ | 20/25 [00:09<00:02, 2.17it/s]\n 84%|████████▍ | 21/25 [00:09<00:01, 2.17it/s]\n 88%|████████▊ | 22/25 [00:10<00:01, 2.17it/s]\n 92%|█████████▏| 23/25 [00:10<00:00, 2.17it/s]\n 96%|█████████▌| 24/25 [00:11<00:00, 2.17it/s]\n100%|██████████| 25/25 [00:11<00:00, 2.17it/s]\n100%|██████████| 25/25 [00:11<00:00, 2.17it/s]",
"metrics": {
"predict_time": 24.525892586,
"total_time": 26.74828
},
"output": [
"https://replicate.delivery/pbxt/egU7WeQQVwkGLUd97eImqxzgCLYDkD6BdG6yyuDs7Oel5cTMB/out-0.png",
"https://replicate.delivery/pbxt/hW3CK6qw6iK4DxlxfDGg96jOMVCY89CMBdufmTXqqa3aO3ETA/out-1.png"
],
"started_at": "2024-07-03T21:24:18.504388Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/60vz0tpx39rgg0cgf9xrq5szjg",
"cancel": "https://api.replicate.com/v1/predictions/60vz0tpx39rgg0cgf9xrq5szjg/cancel"
},
"version": "0e345aaf74965ae98fb83ca9c65376ee42da5c7837d88c648ddc5d3cba1f35dc"
}
Using seed: 39467
Ensuring enough disk space...
Free disk space: 1653744140288
Downloading weights: https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar
2024-07-03T21:24:18Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/05a120e6d4f7a196 url=https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar
2024-07-03T21:24:24Z | INFO | [ Complete ] dest=/src/weights-cache/05a120e6d4f7a196 size="186 MB" total_elapsed=6.250s url=https://replicate.delivery/pbxt/aZJJmwPed5UpbynzcEnx4Fr5EegYWlfrTYwQGbGrakuTLuJmA/trained_model.tar
b''
Downloaded weights in 6.396426200866699 seconds
Loading fine-tuned model
Does not have Unet. assume we are using LoRA
Loading Unet LoRA
Prompt: A <s0><s1> tattoo drawing style of california poppies on mount tam
txt2img mode
0%| | 0/25 [00:00<?, ?it/s]/usr/local/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. This will be done automatically when using `LoraLoaderMixin.load_lora_weights`
deprecate(
4%|▍ | 1/25 [00:00<00:11, 2.12it/s]
8%|▊ | 2/25 [00:00<00:10, 2.16it/s]
12%|█▏ | 3/25 [00:01<00:10, 2.17it/s]
16%|█▌ | 4/25 [00:01<00:09, 2.18it/s]
20%|██ | 5/25 [00:02<00:09, 2.18it/s]
24%|██▍ | 6/25 [00:02<00:08, 2.18it/s]
28%|██▊ | 7/25 [00:03<00:08, 2.18it/s]
32%|███▏ | 8/25 [00:03<00:07, 2.18it/s]
36%|███▌ | 9/25 [00:04<00:07, 2.18it/s]
40%|████ | 10/25 [00:04<00:06, 2.18it/s]
44%|████▍ | 11/25 [00:05<00:06, 2.18it/s]
48%|████▊ | 12/25 [00:05<00:05, 2.18it/s]
52%|█████▏ | 13/25 [00:05<00:05, 2.18it/s]
56%|█████▌ | 14/25 [00:06<00:05, 2.18it/s]
60%|██████ | 15/25 [00:06<00:04, 2.17it/s]
64%|██████▍ | 16/25 [00:07<00:04, 2.17it/s]
68%|██████▊ | 17/25 [00:07<00:03, 2.17it/s]
72%|███████▏ | 18/25 [00:08<00:03, 2.17it/s]
76%|███████▌ | 19/25 [00:08<00:02, 2.17it/s]
80%|████████ | 20/25 [00:09<00:02, 2.17it/s]
84%|████████▍ | 21/25 [00:09<00:01, 2.17it/s]
88%|████████▊ | 22/25 [00:10<00:01, 2.17it/s]
92%|█████████▏| 23/25 [00:10<00:00, 2.17it/s]
96%|█████████▌| 24/25 [00:11<00:00, 2.17it/s]
100%|██████████| 25/25 [00:11<00:00, 2.17it/s]
100%|██████████| 25/25 [00:11<00:00, 2.17it/s]
This model costs approximately $0.012 to run on Replicate, or 83 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
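The runs-per-dollar figure is simply the reciprocal of the approximate per-run cost:

```python
cost_per_run = 0.012                # approximate dollars per run
runs_per_dollar = 1 / cost_per_run  # about 83 whole runs per $1
```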
This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 13 seconds.
This model is warm. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.