rinatkurmaev/flux-dev-lora-tatra-t3:25b2bc71
Input
Run this model in Node.js with one line of code:
npm install replicate
First, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
import fs from "node:fs";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run rinatkurmaev/flux-dev-lora-tatra-t3 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"rinatkurmaev/flux-dev-lora-tatra-t3:25b2bc71bd11c42a9e7ce2f91995ded93cc12f1f23d2cbb7fff77f3da1bb65c4",
{
input: {
seed: 256,
model: "dev",
width: 1024,
height: 1024,
prompt: "A tram model tatra t3 on a city street. The tram is green and red.\nThe snowy mountains are on the background",
go_fast: true,
lora_scale: 1,
megapixels: "1",
num_outputs: 1,
aspect_ratio: "1:1",
output_format: "png",
guidance_scale: 3,
output_quality: 80,
prompt_strength: 0.8,
extra_lora_scale: 1,
num_inference_steps: 28
}
}
);
// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"
// To write the file to disk:
await fs.promises.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python with one line of code:
pip install replicate
First, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run rinatkurmaev/flux-dev-lora-tatra-t3 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"rinatkurmaev/flux-dev-lora-tatra-t3:25b2bc71bd11c42a9e7ce2f91995ded93cc12f1f23d2cbb7fff77f3da1bb65c4",
input={
"seed": 256,
"model": "dev",
"width": 1024,
"height": 1024,
"prompt": "A tram model tatra t3 on a city street. The tram is green and red.\nThe snowy mountains are on the background",
"go_fast": True,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "png",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
)
# To access the file URL:
print(output[0].url())
#=> "http://example.com"
# To write the file to disk:
with open("my-image.png", "wb") as file:
file.write(output[0].read())
To learn more, take a look at the guide on getting started with Python.
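The snippet above writes a single file, but this model accepts `num_outputs` greater than 1, in which case `replicate.run` returns a list with one file object per image. A small helper like the following can save them all; note that `save_outputs` and its `prefix` parameter are illustrative names, not part of the Replicate client, and only assume each output exposes a `.read()` method as in the example above:

```python
def save_outputs(outputs, prefix="out"):
    """Write each file-like output to disk and return the paths written.

    `outputs` is expected to be a list of objects with a .read() method,
    like the list returned by replicate.run() for this model.
    """
    paths = []
    for i, item in enumerate(outputs):
        path = f"{prefix}-{i}.png"
        with open(path, "wb") as f:
            f.write(item.read())  # stream the bytes straight to disk
        paths.append(path)
    return paths
```

For a single image this is equivalent to the `with open(...)` block above; for several it numbers the files `out-0.png`, `out-1.png`, and so on.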
First, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run rinatkurmaev/flux-dev-lora-tatra-t3 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "rinatkurmaev/flux-dev-lora-tatra-t3:25b2bc71bd11c42a9e7ce2f91995ded93cc12f1f23d2cbb7fff77f3da1bb65c4",
"input": {
"seed": 256,
"model": "dev",
"width": 1024,
"height": 1024,
"prompt": "A tram model tatra t3 on a city street. The tram is green and red.\\nThe snowy mountains are on the background",
"go_fast": true,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "png",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
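The `Prefer: wait` header asks the API to hold the connection until the prediction finishes; if it times out first, the response still includes a `urls` object (visible in the example output below) that you can poll. As a sketch, the helpers below recreate those URLs and the auth header from the prediction ID; the function names are hypothetical, but the URL shapes match the `urls.get` and `urls.cancel` fields the API returns:

```python
import os

API_ROOT = "https://api.replicate.com/v1"

def prediction_urls(prediction_id):
    """Rebuild the `urls` block the API returns for a prediction."""
    base = f"{API_ROOT}/predictions/{prediction_id}"
    return {"get": base, "cancel": f"{base}/cancel"}

def auth_headers(token=None):
    """Bearer-auth headers; falls back to the REPLICATE_API_TOKEN env var."""
    token = token or os.environ.get("REPLICATE_API_TOKEN", "")
    return {"Authorization": f"Bearer {token}"}
```

To poll, GET the `get` URL with these headers until the response's `status` field is `succeeded`, `failed`, or `canceled`.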
Output
{
"completed_at": "2025-02-07T22:27:42.481446Z",
"created_at": "2025-02-07T22:27:35.874000Z",
"data_removed": false,
"error": null,
"id": "ng0q524b89rm80cmwa397n1sdm",
"input": {
"seed": 256,
"model": "dev",
"width": 1024,
"height": 1024,
"prompt": "A tram model tatra t3 on a city street. The tram is green and red.\nThe snowy mountains are on the background",
"go_fast": true,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "png",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
},
"logs": "2025-02-07 22:27:35.906 | INFO | fp8.lora_loading:restore_clones:592 - Unloaded 304 layers\n2025-02-07 22:27:35.907 | SUCCESS | fp8.lora_loading:unload_loras:563 - LoRAs unloaded in 0.024s\nfree=28471593234432\nDownloading weights\n2025-02-07T22:27:35Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpv57dbkor/weights url=https://replicate.delivery/xezq/5wg4NzY5IOIuMZ4bhf8BTSUhf8hq4vj2AWECTkhWETqANDNUA/trained_model.tar\n2025-02-07T22:27:38Z | INFO | [ Complete ] dest=/tmp/tmpv57dbkor/weights size=\"215 MB\" total_elapsed=3.046s url=https://replicate.delivery/xezq/5wg4NzY5IOIuMZ4bhf8BTSUhf8hq4vj2AWECTkhWETqANDNUA/trained_model.tar\nDownloaded weights in 3.07s\n2025-02-07 22:27:38.981 | INFO | fp8.lora_loading:convert_lora_weights:502 - Loading LoRA weights for /src/weights-cache/265dc36a7a34f2ad\n2025-02-07 22:27:39.060 | INFO | fp8.lora_loading:convert_lora_weights:523 - LoRA weights loaded\n2025-02-07 22:27:39.060 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:602 - Extracting keys\n2025-02-07 22:27:39.061 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:609 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 39%|███▉ | 118/304 [00:00<00:00, 1177.22it/s]\nApplying LoRA: 78%|███████▊ | 236/304 [00:00<00:00, 981.72it/s] \nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 961.15it/s]\n2025-02-07 22:27:39.377 | INFO | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:661 - Loading LoRA in fp8\n2025-02-07 22:27:39.377 | SUCCESS | fp8.lora_loading:load_lora:542 - LoRA applied in 0.4s\nrunning quantized prediction\nUsing seed: 256\n 0%| | 0/28 [00:00<?, ?it/s]\n 7%|▋ | 2/28 [00:00<00:01, 17.58it/s]\n 14%|█▍ | 4/28 [00:00<00:01, 12.77it/s]\n 21%|██▏ | 6/28 [00:00<00:01, 11.74it/s]\n 29%|██▊ | 8/28 [00:00<00:01, 11.30it/s]\n 36%|███▌ | 10/28 [00:00<00:01, 10.98it/s]\n 43%|████▎ | 12/28 [00:01<00:01, 10.70it/s]\n 50%|█████ | 14/28 [00:01<00:01, 10.68it/s]\n 57%|█████▋ | 16/28 [00:01<00:01, 10.69it/s]\n 64%|██████▍ | 18/28 [00:01<00:00, 10.66it/s]\n 71%|███████▏ | 20/28 [00:01<00:00, 10.61it/s]\n 79%|███████▊ | 22/28 [00:02<00:00, 10.51it/s]\n 86%|████████▌ | 24/28 [00:02<00:00, 10.52it/s]\n 93%|█████████▎| 26/28 [00:02<00:00, 10.58it/s]\n100%|██████████| 28/28 [00:02<00:00, 10.60it/s]\n100%|██████████| 28/28 [00:02<00:00, 10.88it/s]\nTotal safe images: 1 out of 1",
"metrics": {
"predict_time": 6.596790403,
"total_time": 6.607446
},
"output": [
"https://replicate.delivery/xezq/fMpYkC7rRoyGJKQ7LeoEpXIdewfbpZXI6evntmR28HpxbdohC/out-0.png"
],
"started_at": "2025-02-07T22:27:35.884656Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/bcwr-j6uvwyb5hw75c2je6apotpamhhjt5mpg3wbaw2ztvkvuqixgloca",
"get": "https://api.replicate.com/v1/predictions/ng0q524b89rm80cmwa397n1sdm",
"cancel": "https://api.replicate.com/v1/predictions/ng0q524b89rm80cmwa397n1sdm/cancel"
},
"version": "25b2bc71bd11c42a9e7ce2f91995ded93cc12f1f23d2cbb7fff77f3da1bb65c4"
}
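As a quick sanity check on the metrics above, `predict_time` should equal the gap between `started_at` and `completed_at`. A standard-library sketch (the `elapsed_seconds` helper is illustrative, not part of any Replicate SDK):

```python
from datetime import datetime

def elapsed_seconds(started_at, completed_at):
    """Seconds between two RFC 3339 timestamps like those in the prediction JSON."""
    def parse(s):
        # fromisoformat in older Pythons doesn't accept a trailing "Z"
        return datetime.fromisoformat(s.replace("Z", "+00:00"))
    return (parse(completed_at) - parse(started_at)).total_seconds()

delta = elapsed_seconds("2025-02-07T22:27:35.884656Z", "2025-02-07T22:27:42.481446Z")
# close to the reported predict_time of 6.596790403 s
```

`total_time` is slightly larger because it also counts the queueing time between `created_at` and `started_at`.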