m0hc3n / toothless-images-generator
I have enjoyed watching "How to Train Your Dragon", and I was especially a big fan of Toothless, a unique yet special dragon, so I thought about generating more images of him (or it? I don't really know).
- Public
- 17 runs
- H100
Prediction
m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033
- ID: kscwcx2w89rma0ckyqntkahhwc
- Status: Succeeded
- Source: Web
- Hardware: H100
Input
- model: dev
- prompt: imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks
- go_fast: false
- lora_scale: 1
- megapixels: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3
- output_quality: 80
- prompt_strength: 0.8
- extra_lora_scale: 1
- num_inference_steps: 28
{ "model": "dev", "prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
  {
    input: {
      model: "dev",
      prompt: "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
      go_fast: false,
      lora_scale: 1,
      megapixels: "1",
      num_outputs: 1,
      aspect_ratio: "1:1",
      output_format: "webp",
      guidance_scale: 3,
      output_quality: 80,
      prompt_strength: 0.8,
      extra_lora_scale: 1,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
    input={
        "model": "dev",
        "prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
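The snippet above only prints the result. For this model, replicate.run returns a list with one item per generated image (num_outputs is 1 here); depending on the client version, each item is either a plain URL string or a file-like output object. The following is a minimal sketch, not part of the original page, for saving the image to disk while continuing from the output variable above; the non-string branch assumes the item exposes a read() method, which recent versions of the replicate client do.

import urllib.request

# `output` is the list returned by replicate.run(...) above.
for i, item in enumerate(output):
    path = f"toothless-{i}.webp"  # output_format above is "webp"
    if isinstance(item, str):
        # Older client versions return the delivery URL as a string.
        urllib.request.urlretrieve(item, path)
    else:
        # Newer client versions return file-like objects; read the bytes directly.
        with open(path, "wb") as f:
            f.write(item.read())
    print("saved", path)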
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
    "input": {
      "model": "dev",
      "prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
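The curl call above sends the Prefer: wait header, which asks the API to hold the request open until the prediction finishes (or a timeout is hit). If you would rather poll, the same request can be made and then re-fetched via the prediction's "get" URL until it settles. This is a hedged sketch using only the Python standard library; only a few of the inputs are passed here, and the rest can be added exactly as in the form above.

import json
import os
import time
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"
HEADERS = {
    "Authorization": "Bearer " + os.environ["REPLICATE_API_TOKEN"],
    "Content-Type": "application/json",
}

body = json.dumps({
    "version": "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
    "input": {
        "prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks",
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "num_inference_steps": 28,
    },
}).encode()

# Create the prediction. Without "Prefer: wait" the response comes back
# immediately, usually still in the "starting" state.
req = urllib.request.Request(API_URL, data=body, headers=HEADERS, method="POST")
with urllib.request.urlopen(req) as resp:
    prediction = json.load(resp)

# Poll the prediction's "get" URL (see the "urls" object in the Output below)
# until it reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    poll = urllib.request.Request(prediction["urls"]["get"], headers=HEADERS)
    with urllib.request.urlopen(poll) as resp:
        prediction = json.load(resp)

print(prediction["status"], prediction.get("output"))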
Output
{ "completed_at": "2024-12-23T23:47:56.098951Z", "created_at": "2024-12-23T23:47:42.786000Z", "data_removed": false, "error": null, "id": "kscwcx2w89rma0ckyqntkahhwc", "input": { "model": "dev", "prompt": "imagine TOOTHLESS as a software engineer discussing with other dragons about some topics related to their work while each one of them hold its laptop. They are standing in an office having sofas and desks", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }, "logs": "2024-12-23 23:47:47.109 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:47:47.109 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 91%|█████████ | 277/304 [00:00<00:00, 2767.90it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2637.31it/s]\n2024-12-23 23:47:47.225 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.12s\nfree=29018639048704\nDownloading weights\n2024-12-23T23:47:47Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpha3b3zqp/weights url=https://replicate.delivery/xezq/mJVAUebMGjUJXyo5vO1Df4kIs1cUNfIwkxf4K5FAiTH3ip3PB/trained_model.tar\n2024-12-23T23:47:49Z | INFO | [ Complete ] dest=/tmp/tmpha3b3zqp/weights size=\"172 MB\" total_elapsed=2.580s url=https://replicate.delivery/xezq/mJVAUebMGjUJXyo5vO1Df4kIs1cUNfIwkxf4K5FAiTH3ip3PB/trained_model.tar\nDownloaded weights in 2.61s\n2024-12-23 23:47:49.832 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/c40175f5dedf2296\n2024-12-23 23:47:49.903 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded\n2024-12-23 23:47:49.903 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:47:49.903 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 91%|█████████▏| 278/304 [00:00<00:00, 2776.35it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2639.29it/s]\n2024-12-23 23:47:50.018 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s\nUsing seed: 59870\n0it [00:00, ?it/s]\n1it [00:00, 8.38it/s]\n2it [00:00, 5.86it/s]\n3it [00:00, 5.35it/s]\n4it [00:00, 5.14it/s]\n5it [00:00, 5.03it/s]\n6it [00:01, 4.94it/s]\n7it [00:01, 4.90it/s]\n8it [00:01, 4.88it/s]\n9it [00:01, 4.87it/s]\n10it [00:01, 4.85it/s]\n11it [00:02, 4.84it/s]\n12it [00:02, 4.84it/s]\n13it [00:02, 4.83it/s]\n14it [00:02, 4.83it/s]\n15it [00:03, 4.82it/s]\n16it [00:03, 4.82it/s]\n17it [00:03, 4.82it/s]\n18it [00:03, 4.83it/s]\n19it [00:03, 4.82it/s]\n20it [00:04, 4.82it/s]\n21it [00:04, 4.82it/s]\n22it [00:04, 4.82it/s]\n23it [00:04, 4.82it/s]\n24it [00:04, 4.83it/s]\n25it [00:05, 4.83it/s]\n26it [00:05, 4.83it/s]\n27it [00:05, 4.82it/s]\n28it [00:05, 4.83it/s]\n28it [00:05, 4.90it/s]\nTotal safe images: 1 out of 1", "metrics": { "predict_time": 8.989515004, "total_time": 13.312951 }, "output": [ "https://replicate.delivery/xezq/kiznYbGHcc4eGSJX7Iw8nNVslm6eUXDjT26nZbDnXQJsi69TA/out-0.webp" ], "started_at": "2024-12-23T23:47:47.109436Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-w6d4fqpd6cedg7doatx77hfxkpkr2xhpeaebblfbji7jizfkyoxq", "get": "https://api.replicate.com/v1/predictions/kscwcx2w89rma0ckyqntkahhwc", "cancel": 
"https://api.replicate.com/v1/predictions/kscwcx2w89rma0ckyqntkahhwc/cancel" }, "version": "6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033" }
Prediction
m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033
- ID: dcz8n18971rmc0ckyqrr045ec4
- Status: Succeeded
- Source: Web
- Hardware: H100
Input
- model: dev
- prompt: imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)
- go_fast: false
- lora_scale: 1
- megapixels: 1
- num_outputs: 1
- aspect_ratio: 1:1
- output_format: webp
- guidance_scale: 3
- output_quality: 80
- prompt_strength: 0.8
- extra_lora_scale: 1
- num_inference_steps: 28
{ "model": "dev", "prompt": "imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
  {
    input: {
      model: "dev",
      prompt: "imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)",
      go_fast: false,
      lora_scale: 1,
      megapixels: "1",
      num_outputs: 1,
      aspect_ratio: "1:1",
      output_format: "webp",
      guidance_scale: 3,
      output_quality: 80,
      prompt_strength: 0.8,
      extra_lora_scale: 1,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
    input={
        "model": "dev",
        "prompt": "imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run m0hc3n/toothless-images-generator using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "m0hc3n/toothless-images-generator:6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033",
    "input": {
      "model": "dev",
      "prompt": "imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-12-23T23:54:07.608191Z", "created_at": "2024-12-23T23:53:54.744000Z", "data_removed": false, "error": null, "id": "dcz8n18971rmc0ckyqrr045ec4", "input": { "model": "dev", "prompt": "imagine TOOTHLESS as a web developer, he is staying at his desk and coding on his laptop while checking the Figma file on another monitor. He Has a fancy developer setup on his desk. The image shows a backshoot of this scene (i.e. it shows this scene from behind the back of TOOTHLESS showing only his head)", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }, "logs": "2024-12-23 23:54:00.936 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:54:00.936 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 87%|████████▋ | 265/304 [00:00<00:00, 2628.68it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2601.56it/s]\n2024-12-23 23:54:01.053 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:54:01.054 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 64%|██████▍ | 195/304 [00:00<00:00, 1934.31it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 1871.59it/s]\n2024-12-23 23:54:01.217 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.28s\n2024-12-23 23:54:01.219 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/c40175f5dedf2296\n2024-12-23 23:54:01.332 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded\n2024-12-23 23:54:01.333 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2024-12-23 23:54:01.333 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 87%|████████▋ | 265/304 [00:00<00:00, 2632.87it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2604.75it/s]\n2024-12-23 23:54:01.450 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.23s\nUsing seed: 54529\n0it [00:00, ?it/s]\n1it [00:00, 8.27it/s]\n2it [00:00, 5.78it/s]\n3it [00:00, 5.27it/s]\n4it [00:00, 5.07it/s]\n5it [00:00, 4.94it/s]\n6it [00:01, 4.87it/s]\n7it [00:01, 4.84it/s]\n8it [00:01, 4.81it/s]\n9it [00:01, 4.80it/s]\n10it [00:02, 4.78it/s]\n11it [00:02, 4.76it/s]\n12it [00:02, 4.76it/s]\n13it [00:02, 4.75it/s]\n14it [00:02, 4.75it/s]\n15it [00:03, 4.74it/s]\n16it [00:03, 4.74it/s]\n17it [00:03, 4.74it/s]\n18it [00:03, 4.76it/s]\n19it [00:03, 4.76it/s]\n20it [00:04, 4.76it/s]\n21it [00:04, 4.75it/s]\n22it [00:04, 4.74it/s]\n23it [00:04, 4.75it/s]\n24it [00:04, 4.75it/s]\n25it [00:05, 4.75it/s]\n26it [00:05, 4.74it/s]\n27it [00:05, 4.74it/s]\n28it [00:05, 4.74it/s]\n28it [00:05, 4.82it/s]\nTotal safe images: 1 out of 1", "metrics": { "predict_time": 6.670765322, "total_time": 12.864191 }, "output": [ "https://replicate.delivery/xezq/uaPUHK1rQS5XNtRqyg9iN2xxXVtjMsD2JscxeueDHelfhq3PB/out-0.webp" ], "started_at": "2024-12-23T23:54:00.937426Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-mpt3gswv2hqvedazuoks7ajuabnwhdjofvqbxki24ku7khzfzi6a", "get": "https://api.replicate.com/v1/predictions/dcz8n18971rmc0ckyqrr045ec4", "cancel": 
"https://api.replicate.com/v1/predictions/dcz8n18971rmc0ckyqrr045ec4/cancel" }, "version": "6f0ed86f9b5b70a4adde5f4a58d94715737b35b0aa617df272a225c737476033" }
Want to make some of these yourself?
Run this model