Readme
I have enjoyed watching "How to Train Your Dragon", and I was specifically a big fan of Toothless: a unique yet special dragon. So I thought about generating more images of him.
Run this model in Node.js with one line of code:

npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
{
input: {
model: "dev",
prompt: "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
go_fast: false,
lora_scale: 1,
megapixels: "1",
num_outputs: 1,
aspect_ratio: "1:1",
output_format: "webp",
guidance_scale: 3,
output_quality: 80,
prompt_strength: 0.8,
extra_lora_scale: 1,
num_inference_steps: 28
}
}
);
// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (requires: import { writeFile } from "node:fs/promises"):
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
import replicate
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
input={
"model": "dev",
"prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
"go_fast": False,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "webp",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
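Depending on the client version, `replicate.run` may return `FileOutput` objects or plain URL strings. Assuming the URL-string form, here is a minimal sketch for saving the generated images locally; the `output_filename` and `save_outputs` helpers are my own, not part of the `replicate` library:

```python
from urllib.request import urlretrieve


def output_filename(index: int, output_format: str = "webp") -> str:
    """Build a local filename like 'out-0.webp' for the i-th output image."""
    return f"out-{index}.{output_format}"


def save_outputs(urls: list[str], output_format: str = "webp") -> list[str]:
    """Download each output URL into the working directory; return local paths."""
    paths = []
    for i, url in enumerate(urls):
        path = output_filename(i, output_format)
        urlretrieve(url, path)  # fetches the generated image over HTTP
        paths.append(path)
    return paths
```

With num_outputs set to 1 and output_format "webp" as in the example above, this would write a single out-0.webp file.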
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
"input": {
"model": "dev",
"prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
"go_fast": false,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "webp",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Example prediction response:
{
"completed_at": "2024-12-23T23:47:56.098951Z",
"created_at": "2024-12-23T23:47:42.786000Z",
"data_removed": false,
"error": null,
"id": "kscwcx2w89rma0ckyqntkahhwc",
"input": {
"model": "dev",
"prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
"go_fast": false,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "webp",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
},
"logs": "2024-12-23 23:47:47.109 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:47:47.109 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 91%|█████████ | 277/304 [00:00<00:00, 2767.90it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2637.31it/s]\n2024-12-23 23:47:47.225 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.12s\nfree=29018639048704\nDownloading weights\n2024-12-23T23:47:47Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpha3b3zqp/weights url=https://replicate.delivery/xezq/mJVAUebMGjUJXyo5vO1Df4kIs1cUNfIwkxf4K5FAiTH3ip3PB/trained_model.tar\n2024-12-23T23:47:49Z | INFO | [ Complete ] dest=/tmp/tmpha3b3zqp/weights size=\"172 MB\" total_elapsed=2.580s url=https://replicate.delivery/xezq/mJVAUebMGjUJXyo5vO1Df4kIs1cUNfIwkxf4K5FAiTH3ip3PB/trained_model.tar\nDownloaded weights in 2.61s\n2024-12-23 23:47:49.832 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/c40175f5dedf2296\n2024-12-23 23:47:49.903 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded\n2024-12-23 23:47:49.903 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:47:49.903 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 91%|█████████▏| 278/304 [00:00<00:00, 2776.35it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2639.29it/s]\n2024-12-23 23:47:50.018 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s\nUsing seed: 59870\n0it [00:00, ?it/s]\n1it [00:00, 8.38it/s]\n2it [00:00, 5.86it/s]\n3it [00:00, 5.35it/s]\n4it [00:00, 5.14it/s]\n5it [00:00, 5.03it/s]\n6it [00:01, 4.94it/s]\n7it [00:01, 4.90it/s]\n8it [00:01, 4.88it/s]\n9it [00:01, 4.87it/s]\n10it [00:01, 4.85it/s]\n11it [00:02, 4.84it/s]\n12it [00:02, 4.84it/s]\n13it [00:02, 4.83it/s]\n14it 
[00:02, 4.83it/s]\n15it [00:03, 4.82it/s]\n16it [00:03, 4.82it/s]\n17it [00:03, 4.82it/s]\n18it [00:03, 4.83it/s]\n19it [00:03, 4.82it/s]\n20it [00:04, 4.82it/s]\n21it [00:04, 4.82it/s]\n22it [00:04, 4.82it/s]\n23it [00:04, 4.82it/s]\n24it [00:04, 4.83it/s]\n25it [00:05, 4.83it/s]\n26it [00:05, 4.83it/s]\n27it [00:05, 4.82it/s]\n28it [00:05, 4.83it/s]\n28it [00:05, 4.90it/s]\nTotal safe images: 1 out of 1",
"metrics": {
"predict_time": 8.989515004,
"total_time": 13.312951
},
"output": [
"https://replicate.delivery/xezq/kiznYbGHcc4eGSJX7Iw8nNVslm6eUXDjT26nZbDnXQJsi69TA/out-0.webp"
],
"started_at": "2024-12-23T23:47:47.109436Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/bcwr-w6d4fqpd6cedg7doatx77hfxkpkr2xhpeaebblfbji7jizfkyoxq",
"get": "https://api.replicate.com/v1/predictions/kscwcx2w89rma0ckyqntkahhwc",
"cancel": "https://api.replicate.com/v1/predictions/kscwcx2w89rma0ckyqntkahhwc/cancel"
},
"version": "6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033"
}
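The metrics block distinguishes model execution time (predict_time) from wall-clock time (total_time); the gap is queueing and setup overhead. A small sketch that pulls the useful fields out of a prediction response like the one above; the summarize_prediction helper is illustrative, not a Replicate API:

```python
def summarize_prediction(pred: dict) -> dict:
    """Extract status, output URLs, and timing from a prediction response."""
    metrics = pred.get("metrics", {})
    predict = metrics.get("predict_time")
    total = metrics.get("total_time")
    return {
        "status": pred["status"],
        "outputs": pred.get("output") or [],
        "predict_time": predict,
        # Time spent queued or booting rather than generating:
        "overhead": round(total - predict, 3) if predict and total else None,
    }
```

For the response above this reports roughly 4.3 seconds of overhead on top of about 9 seconds of generation.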
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model is warm. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.
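When the model is cold, a prediction can sit in the "starting" status for a while before it runs. A hedged polling sketch under that assumption; the function names are illustrative, and get_status stands in for a GET request to the prediction's "get" URL shown in the response above:

```python
import time

# Terminal prediction statuses; anything else means "keep waiting".
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}


def is_done(status: str) -> bool:
    """Return True once the prediction has reached a terminal status."""
    return status in TERMINAL_STATUSES


def poll_until_done(get_status, interval: float = 1.0, max_wait: float = 300.0) -> str:
    """Call get_status() until a terminal status or until max_wait seconds elapse."""
    waited = 0.0
    status = get_status()
    while not is_done(status) and waited < max_wait:
        time.sleep(interval)
        waited += interval
        status = get_status()
    return status
```

The client libraries handle this loop for you; a manual loop like this is only needed when calling the HTTP API directly without the Prefer: wait header.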