swk23 / youngquigon
- Public
- 67 runs
- H100
Prediction
swk23/youngquigon:d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0
- ID: vv2tb026exrme0cm9p2934v9em
- Status: Succeeded
- Source: Web
- Hardware: H100
- Total duration: 7.65 seconds
- Created: 2025-01-10T00:01:42Z

Input
- image: https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png
- model: dev
- prompt: TOK he stands tall with a green lightsaber
- go_fast: false
- lora_scale: 1
- megapixels: 1
- num_outputs: 1
- aspect_ratio: 21:9
- output_format: jpg
- guidance_scale: 3
- output_quality: 80
- prompt_strength: 0.8
- extra_lora_scale: 1
- num_inference_steps: 28
{ "image": "https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png", "model": "dev", "prompt": "TOK he stands tall with a green lightsaber", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "21:9", "output_format": "jpg", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
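The client reads your API token from the REPLICATE_API_TOKEN environment variable referenced above, so set it in your shell before running any of the examples (the value shown is a placeholder, not a real token):

export REPLICATE_API_TOKEN=<paste-your-token-here>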
Run swk23/youngquigon using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "swk23/youngquigon:d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0",
  {
    input: {
      image: "https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png",
      model: "dev",
      prompt: "TOK he stands tall with a green lightsaber",
      go_fast: false,
      lora_scale: 1,
      megapixels: "1",
      num_outputs: 1,
      aspect_ratio: "21:9",
      output_format: "jpg",
      guidance_scale: 3,
      output_quality: 80,
      prompt_strength: 0.8,
      extra_lora_scale: 1,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
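replicate.run waits for the prediction to finish before returning. If you would rather create the prediction and wait on it explicitly (for example, to record its ID first), the client also exposes a predictions API. A minimal sketch, assuming the same client instance as above and an input shortened to just the prompt:

// Minimal sketch: create a prediction, then block until it reaches a terminal state.
const prediction = await replicate.predictions.create({
  version: "d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0",
  input: {
    prompt: "TOK he stands tall with a green lightsaber",
  },
});

// Wait for completion, then read status and output URLs from the finished prediction.
const completed = await replicate.wait(prediction);
console.log(completed.status); //=> "succeeded"
console.log(completed.output); //=> array of output image URLs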
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run swk23/youngquigon using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "swk23/youngquigon:d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0",
    input={
        "image": "https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png",
        "model": "dev",
        "prompt": "TOK he stands tall with a green lightsaber",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "21:9",
        "output_format": "jpg",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28,
    },
)
print(output)
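The model returns a list with one entry per generated image. A minimal sketch for saving the first image to disk, assuming a recent Python client where each entry is a FileOutput object exposing read() (older client versions return plain URL strings that you would download instead):

# Minimal sketch: write the first output image to a local file.
# Assumes `output` is the list returned by replicate.run(...) above and that
# each item is a FileOutput object with a read() method (replicate >= 1.0).
with open("out-0.jpg", "wb") as f:
    f.write(output[0].read())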
To learn more, take a look at the guide on getting started with Python.
Run swk23/youngquigon using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "swk23/youngquigon:d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0",
    "input": {
      "image": "https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png",
      "model": "dev",
      "prompt": "TOK he stands tall with a green lightsaber",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "21:9",
      "output_format": "jpg",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
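The Prefer: wait header asks the API to hold the connection open until the prediction completes (or the wait window elapses). If the prediction is still processing when the response comes back, you can poll it by ID via the urls.get endpoint included in the response, for example:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/predictions/vv2tb026exrme0cm9p2934v9em

The same response also includes a urls.cancel endpoint you can POST to if you need to abort a running prediction.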
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-10T00:01:49.657273Z", "created_at": "2025-01-10T00:01:42.007000Z", "data_removed": false, "error": null, "id": "vv2tb026exrme0cm9p2934v9em", "input": { "image": "https://replicate.delivery/pbxt/MIIwN0ZYcfVXg5kcBmjzZh3kOAMXOkPLjeDU8OTset184kpX/Screenshot%20%28805%29.png", "model": "dev", "prompt": "TOK he stands tall with a green lightsaber", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "21:9", "output_format": "jpg", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }, "logs": "2025-01-10 00:01:42.496 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2025-01-10 00:01:42.497 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 92%|█████████▏| 281/304 [00:00<00:00, 2807.83it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2585.69it/s]\n2025-01-10 00:01:42.614 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.12s\nfree=29271458500608\nDownloading weights\n2025-01-10T00:01:42Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpg_jda5ty/weights url=https://replicate.delivery/xezq/338g1RKEXM4OO12MecTbjcCj45UKqmA2If0MtJ8l1c4YPNDUA/trained_model.tar\n2025-01-10T00:01:44Z | INFO | [ Complete ] dest=/tmp/tmpg_jda5ty/weights size=\"172 MB\" total_elapsed=1.689s url=https://replicate.delivery/xezq/338g1RKEXM4OO12MecTbjcCj45UKqmA2If0MtJ8l1c4YPNDUA/trained_model.tar\nDownloaded weights in 1.71s\n2025-01-10 00:01:44.328 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/e8e32a74966c5a3f\n2025-01-10 00:01:44.399 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded\n2025-01-10 00:01:44.399 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2025-01-10 00:01:44.399 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 92%|█████████▏| 281/304 [00:00<00:00, 2808.71it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2586.89it/s]\n2025-01-10 00:01:44.517 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s\nUsing seed: 65087\nImage detected - setting to img2img mode\nInput image size: 1920x817\nScaling image down to 1440x612\nInput image size set to: 1440x608\n0it [00:00, ?it/s]\n1it [00:00, 9.85it/s]\n2it [00:00, 6.99it/s]\n3it [00:00, 6.38it/s]\n4it [00:00, 6.14it/s]\n5it [00:00, 6.00it/s]\n6it [00:00, 5.91it/s]\n7it [00:01, 5.86it/s]\n8it [00:01, 5.84it/s]\n9it [00:01, 5.83it/s]\n10it [00:01, 5.82it/s]\n11it [00:01, 5.81it/s]\n12it [00:02, 5.81it/s]\n13it [00:02, 5.80it/s]\n14it [00:02, 5.81it/s]\n15it [00:02, 5.81it/s]\n16it [00:02, 5.80it/s]\n17it [00:02, 5.79it/s]\n18it [00:03, 5.79it/s]\n19it [00:03, 5.79it/s]\n20it [00:03, 5.79it/s]\n21it [00:03, 5.79it/s]\n22it [00:03, 5.79it/s]\n23it [00:03, 5.79it/s]\n23it [00:03, 5.89it/s]\nTotal safe images: 1 out of 1", "metrics": { "predict_time": 7.324535834, "total_time": 7.650273 }, "output": [ "https://replicate.delivery/xezq/0Frj0fFHbvUaYK2CecVt7iTjWp37L72v8O2ljVqkXP4tVhDUA/out-0.jpg" ], "started_at": "2025-01-10T00:01:42.332737Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-mkvempgnev7t6u7hytldieb4u6u6di46cqkzs3ozmpai5zh3rxqq", "get": "https://api.replicate.com/v1/predictions/vv2tb026exrme0cm9p2934v9em", "cancel": 
"https://api.replicate.com/v1/predictions/vv2tb026exrme0cm9p2934v9em/cancel" }, "version": "d096fdaa46b96d40fcacd840d0123be6ca857f566b5e45e5addf3a78afa29ea0" }
Generated in 7.3 seconds