lightweight-ai/model2
flux dev
- Public
- 93.2K runs
- A100 (80GB)
Prediction
lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a
ID: y6s0a0mp9xrj40ck5zfsgg3m74
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- loras: [ "Realism", "Karina" ]
- width: 720
- height: 1320
- prompt: medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>
- lora_scales: [ 1 ]
- num_outputs: 1
- output_format: png
- output_quality: 100
- prompt_strength: 0.8
- num_inference_steps: 28
{ "loras": [ "Realism", "Karina" ], "width": 720, "height": 1320, "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
  {
    input: {
      loras: ["Realism", "Karina"],
      width: 720,
      height: 1320,
      prompt: "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
      lora_scales: [1],
      num_outputs: 1,
      output_format: "png",
      output_quality: 100,
      prompt_strength: 0.8,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    input={
        "loras": ["Realism", "Karina"],
        "width": 720,
        "height": 1320,
        "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
        "lora_scales": [1],
        "num_outputs": 1,
        "output_format": "png",
        "output_quality": 100,
        "prompt_strength": 0.8,
        "num_inference_steps": 28,
    },
)
print(output)
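Depending on the version of the Python client, the items in output may be plain URL strings or file-like FileOutput objects. A minimal sketch for saving the first generated image to disk, assuming each item is a URL string (adapt if your client returns FileOutput objects):

import urllib.request

# Assumption: output[0] is a URL string pointing at the generated PNG
# (newer client versions may return FileOutput objects instead).
url = output[0]
urllib.request.urlretrieve(url, "out-0.png")
print(f"saved out-0.png from {url}")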
To learn more, take a look at the guide on getting started with Python.
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    "input": {
      "loras": ["Realism","Karina"],
      "width": 720,
      "height": 1320,
      "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
      "lora_scales": [1],
      "num_outputs": 1,
      "output_format": "png",
      "output_quality": 100,
      "prompt_strength": 0.8,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
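The "Prefer: wait" header asks the API to hold the connection open until the prediction finishes; if you omit it, or the prediction outlives the wait window, the create call returns immediately and you poll the prediction's "get" URL (shown under urls in the output below) until status reaches a terminal state. A minimal polling sketch with the requests package; the prediction ID here is illustrative:

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# Illustrative ID; use the "id" field returned when the prediction is created.
prediction_id = "y6s0a0mp9xrj40ck5zfsgg3m74"

while True:
    prediction = requests.get(
        f"https://api.replicate.com/v1/predictions/{prediction_id}",
        headers=headers,
    ).json()
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(2)  # poll every couple of seconds

print(prediction["status"], prediction.get("output"))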
Output
{ "completed_at": "2024-11-15T12:50:16.113111Z", "created_at": "2024-11-15T12:50:02.447000Z", "data_removed": false, "error": null, "id": "y6s0a0mp9xrj40ck5zfsgg3m74", "input": { "loras": [ "Realism", "Karina" ], "width": 720, "height": 1320, "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 28 }, "logs": "Using seed: 22810\nPrompt: medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>\ntxt2img mode\nThe following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['a : 1 >,. < lora : makinaflux _ karina _ v 2. 0 : 1 >']\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:12, 2.12it/s]\n 7%|▋ | 2/28 [00:00<00:10, 2.44it/s]\n 11%|█ | 3/28 [00:01<00:10, 2.29it/s]\n 14%|█▍ | 4/28 [00:01<00:10, 2.22it/s]\n 18%|█▊ | 5/28 [00:02<00:10, 2.19it/s]\n 21%|██▏ | 6/28 [00:02<00:10, 2.17it/s]\n 25%|██▌ | 7/28 [00:03<00:09, 2.16it/s]\n 29%|██▊ | 8/28 [00:03<00:09, 2.15it/s]\n 32%|███▏ | 9/28 [00:04<00:08, 2.14it/s]\n 36%|███▌ | 10/28 [00:04<00:08, 2.14it/s]\n 39%|███▉ | 11/28 [00:05<00:07, 2.13it/s]\n 43%|████▎ | 12/28 [00:05<00:07, 2.13it/s]\n 46%|████▋ | 13/28 [00:06<00:07, 2.13it/s]\n 50%|█████ | 14/28 [00:06<00:06, 2.13it/s]\n 54%|█████▎ | 15/28 [00:06<00:06, 2.13it/s]\n 57%|█████▋ | 16/28 [00:07<00:05, 2.13it/s]\n 61%|██████ | 17/28 [00:07<00:05, 2.13it/s]\n 64%|██████▍ | 18/28 [00:08<00:04, 2.13it/s]\n 68%|██████▊ | 19/28 [00:08<00:04, 2.13it/s]\n 71%|███████▏ | 20/28 [00:09<00:03, 2.13it/s]\n 75%|███████▌ | 21/28 [00:09<00:03, 2.13it/s]\n 79%|███████▊ | 22/28 [00:10<00:02, 2.13it/s]\n 82%|████████▏ | 23/28 [00:10<00:02, 2.13it/s]\n 86%|████████▌ | 24/28 [00:11<00:01, 2.13it/s]\n 89%|████████▉ | 25/28 [00:11<00:01, 2.13it/s]\n 93%|█████████▎| 26/28 [00:12<00:00, 2.13it/s]\n 96%|█████████▋| 27/28 [00:12<00:00, 2.13it/s]\n100%|██████████| 28/28 [00:13<00:00, 2.13it/s]\n100%|██████████| 28/28 [00:13<00:00, 2.15it/s]", "metrics": { "predict_time": 13.655777527, "total_time": 13.666111 }, "output": [ "https://replicate.delivery/yhqm/bbmja4xeQxQvTyKr2sNPX2MsSW7TIVMdeb2aoMFrV9HIWPxTA/out-0.png" ], "started_at": "2024-11-15T12:50:02.457334Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/qoxq-edcc3wwzvedqlom6qrrk4mdiyhjkpsdzsz6uoaxcbwryxbsmvbtq", "get": "https://api.replicate.com/v1/predictions/y6s0a0mp9xrj40ck5zfsgg3m74", "cancel": "https://api.replicate.com/v1/predictions/y6s0a0mp9xrj40ck5zfsgg3m74/cancel" }, "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a" }
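Note the warning in the logs above: the CLIP text encoder only sees the first 77 tokens of the prompt, so the trailing <lora:...> tags were truncated. A rough pre-flight check of prompt length, assuming the publicly available openai/clip-vit-large-patch14 tokenizer from the transformers package approximates this model's tokenizer:

from transformers import CLIPTokenizer

# Assumption: the standard CLIP-L tokenizer approximates this model's text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "medium full shot of beautiful korean girl with long wavy hair ..."  # your prompt here
n_tokens = len(tokenizer(prompt)["input_ids"])
limit = tokenizer.model_max_length  # 77 for CLIP
if n_tokens > limit:
    print(f"prompt is {n_tokens} tokens; everything past token {limit} will be truncated")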
Prediction
lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a
ID: ekhffs1bnhrj60ck5zmtdcvdy8
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- seed: 49553
- loras: [ "Realism", "IU" ]
- width: 720
- height: 1320
- prompt: (medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>
- lora_scales: [ 1 ]
- num_outputs: 1
- output_format: png
- output_quality: 100
- prompt_strength: 0.8
- num_inference_steps: 28
{ "seed": 49553, "loras": [ "Realism", "IU" ], "width": 720, "height": 1320, "prompt": "(medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
  {
    input: {
      seed: 49553,
      loras: ["Realism", "IU"],
      width: 720,
      height: 1320,
      prompt: "(medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>",
      lora_scales: [1],
      num_outputs: 1,
      output_format: "png",
      output_quality: 100,
      prompt_strength: 0.8,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    input={
        "seed": 49553,
        "loras": ["Realism", "IU"],
        "width": 720,
        "height": 1320,
        "prompt": "(medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>",
        "lora_scales": [1],
        "num_outputs": 1,
        "output_format": "png",
        "output_quality": 100,
        "prompt_strength": 0.8,
        "num_inference_steps": 28,
    },
)
print(output)
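If you would rather not block for the full generation time, the Python client can create the prediction and let you poll it yourself instead of using replicate.run. A minimal sketch with the predictions API, using the same version hash as above and an abridged input:

import time
import replicate

# Create the prediction without waiting for it to finish.
prediction = replicate.predictions.create(
    version="1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    input={
        "seed": 49553,
        "loras": ["Realism", "IU"],
        "width": 720,
        "height": 1320,
        "prompt": "(medium full shot), beautiful korean girl, ...",  # abridged; use the full prompt above
        "num_inference_steps": 28,
    },
)

# Poll until the prediction reaches a terminal state.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.status, prediction.output)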
To learn more, take a look at the guide on getting started with Python.
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    "input": {
      "seed": 49553,
      "loras": ["Realism","IU"],
      "width": 720,
      "height": 1320,
      "prompt": "(medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>",
      "lora_scales": [1],
      "num_outputs": 1,
      "output_format": "png",
      "output_quality": 100,
      "prompt_strength": 0.8,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-11-15T13:00:45.342448Z", "created_at": "2024-11-15T13:00:30.508000Z", "data_removed": false, "error": null, "id": "ekhffs1bnhrj60ck5zmtdcvdy8", "input": { "seed": 49553, "loras": [ "Realism", "IU" ], "width": 720, "height": 1320, "prompt": "(medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 28 }, "logs": "Using seed: 49553\nPrompt: (medium full shot), beautiful korean girl, bokeh lights background, studio lighting, wearing bohemian dress with floral pattern, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz, <lora:flux_realism_lora:1>, <lora:makinaflux_iu_v1.0:1>\ntxt2img mode\napply loras\nDownloading LoRA weights from - safetensor URL: https://sg-model-store.s3.ap-northeast-2.amazonaws.com/System/Lora/realism/Converted+for+Comfyul/flux_realism_lora.safetensors\nUnsuppored keys for ai-toolkit: dict_keys(['diffusion_model.double_blocks.0.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.0.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.0.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.0.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.0.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.0.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.0.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.0.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.1.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.1.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.1.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.1.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.1.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.1.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.1.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.1.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.10.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.10.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.10.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.10.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.10.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.10.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.10.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.10.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.11.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.11.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.11.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.11.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.11.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.11.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.11.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.11.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.12.img_attn.proj.lora_down.weight', 
'diffusion_model.double_blocks.12.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.12.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.12.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.12.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.12.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.12.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.12.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.13.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.13.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.13.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.13.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.13.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.13.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.13.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.13.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.14.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.14.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.14.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.14.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.14.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.14.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.14.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.14.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.15.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.15.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.15.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.15.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.15.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.15.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.15.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.15.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.16.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.16.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.16.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.16.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.16.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.16.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.16.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.16.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.17.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.17.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.17.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.17.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.17.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.17.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.17.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.17.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.18.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.18.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.18.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.18.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.18.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.18.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.18.txt_attn.qkv.lora_down.weight', 
'diffusion_model.double_blocks.18.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.2.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.2.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.2.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.2.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.2.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.2.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.2.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.2.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.3.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.3.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.3.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.3.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.3.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.3.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.3.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.3.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.4.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.4.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.4.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.4.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.4.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.4.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.4.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.4.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.5.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.5.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.5.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.5.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.5.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.5.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.5.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.5.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.6.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.6.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.6.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.6.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.6.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.6.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.6.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.6.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.7.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.7.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.7.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.7.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.7.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.7.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.7.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.7.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.8.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.8.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.8.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.8.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.8.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.8.txt_attn.proj.lora_up.weight', 
'diffusion_model.double_blocks.8.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.8.txt_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.9.img_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.9.img_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.9.img_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.9.img_attn.qkv.lora_up.weight', 'diffusion_model.double_blocks.9.txt_attn.proj.lora_down.weight', 'diffusion_model.double_blocks.9.txt_attn.proj.lora_up.weight', 'diffusion_model.double_blocks.9.txt_attn.qkv.lora_down.weight', 'diffusion_model.double_blocks.9.txt_attn.qkv.lora_up.weight'])\nLoading LoRA took: 0.00 seconds\nDownloading LoRA weights from - safetensor URL: https://sg-model-store.s3.ap-northeast-2.amazonaws.com/System/Lora/IU+Makina/v1.0/makinaflux_iu_v1.0.safetensors\nLoading LoRA took: 1.11 seconds\nThe following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['1 >, < lora : makinaflux _ iu _ v 1. 0 : 1 >']\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:12, 2.13it/s]\n 7%|▋ | 2/28 [00:00<00:10, 2.44it/s]\n 11%|█ | 3/28 [00:01<00:10, 2.29it/s]\n 14%|█▍ | 4/28 [00:01<00:10, 2.23it/s]\n 18%|█▊ | 5/28 [00:02<00:10, 2.19it/s]\n 21%|██▏ | 6/28 [00:02<00:10, 2.17it/s]\n 25%|██▌ | 7/28 [00:03<00:09, 2.16it/s]\n 29%|██▊ | 8/28 [00:03<00:09, 2.15it/s]\n 32%|███▏ | 9/28 [00:04<00:08, 2.14it/s]\n 36%|███▌ | 10/28 [00:04<00:08, 2.14it/s]\n 39%|███▉ | 11/28 [00:05<00:07, 2.14it/s]\n 43%|████▎ | 12/28 [00:05<00:07, 2.14it/s]\n 46%|████▋ | 13/28 [00:05<00:07, 2.13it/s]\n 50%|█████ | 14/28 [00:06<00:06, 2.13it/s]\n 54%|█████▎ | 15/28 [00:06<00:06, 2.13it/s]\n 57%|█████▋ | 16/28 [00:07<00:05, 2.13it/s]\n 61%|██████ | 17/28 [00:07<00:05, 2.13it/s]\n 64%|██████▍ | 18/28 [00:08<00:04, 2.13it/s]\n 68%|██████▊ | 19/28 [00:08<00:04, 2.13it/s]\n 71%|███████▏ | 20/28 [00:09<00:03, 2.13it/s]\n 75%|███████▌ | 21/28 [00:09<00:03, 2.13it/s]\n 79%|███████▊ | 22/28 [00:10<00:02, 2.13it/s]\n 82%|████████▏ | 23/28 [00:10<00:02, 2.13it/s]\n 86%|████████▌ | 24/28 [00:11<00:01, 2.13it/s]\n 89%|████████▉ | 25/28 [00:11<00:01, 2.13it/s]\n 93%|█████████▎| 26/28 [00:12<00:00, 2.13it/s]\n 96%|█████████▋| 27/28 [00:12<00:00, 2.13it/s]\n100%|██████████| 28/28 [00:13<00:00, 2.13it/s]\n100%|██████████| 28/28 [00:13<00:00, 2.15it/s]", "metrics": { "predict_time": 14.825822153, "total_time": 14.834448 }, "output": [ "https://replicate.delivery/yhqm/fMkCCT9e3AlQP0D2ofY62pXuKaTJvyKXi9QBUcZUSx97f9EPB/out-0.png" ], "started_at": "2024-11-15T13:00:30.516626Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/qoxq-2kcpkzxseefn7u2u4uskgwscrbz3hs5jnvb74xdyugvkruw3pkda", "get": "https://api.replicate.com/v1/predictions/ekhffs1bnhrj60ck5zmtdcvdy8", "cancel": "https://api.replicate.com/v1/predictions/ekhffs1bnhrj60ck5zmtdcvdy8/cancel" }, "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a" }
Prediction
lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a
ID: rygd6nkhtnrj00ck5zv9812ypw
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- seed: 46274
- loras: [ "Realism" ]
- width: 1024
- height: 1024
- prompt: 8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty.
- lora_scales: [ 1 ]
- num_outputs: 1
- output_format: png
- output_quality: 100
- prompt_strength: 0.8
- num_inference_steps: 30
{ "seed": 46274, "loras": [ "Realism" ], "width": 1024, "height": 1024, "prompt": "8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty. ", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 30 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
  {
    input: {
      seed: 46274,
      loras: ["Realism"],
      width: 1024,
      height: 1024,
      prompt: "8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty. ",
      lora_scales: [1],
      num_outputs: 1,
      output_format: "png",
      output_quality: 100,
      prompt_strength: 0.8,
      num_inference_steps: 30
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lightweight-ai/model2:1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    input={
        "seed": 46274,
        "loras": ["Realism"],
        "width": 1024,
        "height": 1024,
        "prompt": "8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty. ",
        "lora_scales": [1],
        "num_outputs": 1,
        "output_format": "png",
        "output_quality": 100,
        "prompt_strength": 0.8,
        "num_inference_steps": 30,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
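A quick sanity check on the result: the returned PNG should match the requested 1024 x 1024 size. A small sketch using Pillow, assuming output[0] from the Python example above is a URL string:

import io
import urllib.request

from PIL import Image

# Assumption: output[0] is a URL string for the generated PNG.
data = urllib.request.urlopen(output[0]).read()
image = Image.open(io.BytesIO(data))

# Compare against the width/height passed in the input above.
assert image.size == (1024, 1024), f"unexpected size: {image.size}"
print("output size:", image.size)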
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a",
    "input": {
      "seed": 46274,
      "loras": ["Realism"],
      "width": 1024,
      "height": 1024,
      "prompt": "8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty. ",
      "lora_scales": [1],
      "num_outputs": 1,
      "output_format": "png",
      "output_quality": 100,
      "prompt_strength": 0.8,
      "num_inference_steps": 30
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-11-15T13:15:14.742486Z", "created_at": "2024-11-15T13:15:00.437000Z", "data_removed": false, "error": null, "id": "rygd6nkhtnrj00ck5zv9812ypw", "input": { "seed": 46274, "loras": [ "Realism" ], "width": 1024, "height": 1024, "prompt": "8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty. ", "lora_scales": [ 1 ], "num_outputs": 1, "output_format": "png", "output_quality": 100, "prompt_strength": 0.8, "num_inference_steps": 30 }, "logs": "Using seed: 46274\nPrompt: 8K UHD resolution cinematic shot of a stunningly beautiful woman with flawless skin, captivating emerald green eyes, and a radiant smile that illuminates her face. She is adorned in an elegant flowing gown, her hair cascading down her back in soft waves, captured in perfect lighting against a neutral backdrop, emphasizing her natural beauty.\ntxt2img mode\n 0%| | 0/30 [00:00<?, ?it/s]\n 3%|▎ | 1/30 [00:00<00:13, 2.19it/s]\n 7%|▋ | 2/30 [00:00<00:10, 2.74it/s]\n 10%|█ | 3/30 [00:01<00:10, 2.46it/s]\n 13%|█▎ | 4/30 [00:01<00:11, 2.34it/s]\n 17%|█▋ | 5/30 [00:02<00:10, 2.28it/s]\n 20%|██ | 6/30 [00:02<00:10, 2.25it/s]\n 23%|██▎ | 7/30 [00:03<00:10, 2.22it/s]\n 27%|██▋ | 8/30 [00:03<00:09, 2.21it/s]\n 30%|███ | 9/30 [00:03<00:09, 2.20it/s]\n 33%|███▎ | 10/30 [00:04<00:09, 2.20it/s]\n 37%|███▋ | 11/30 [00:04<00:08, 2.19it/s]\n 40%|████ | 12/30 [00:05<00:08, 2.19it/s]\n 43%|████▎ | 13/30 [00:05<00:07, 2.19it/s]\n 47%|████▋ | 14/30 [00:06<00:07, 2.18it/s]\n 50%|█████ | 15/30 [00:06<00:06, 2.18it/s]\n 53%|█████▎ | 16/30 [00:07<00:06, 2.18it/s]\n 57%|█████▋ | 17/30 [00:07<00:05, 2.18it/s]\n 60%|██████ | 18/30 [00:08<00:05, 2.18it/s]\n 63%|██████▎ | 19/30 [00:08<00:05, 2.18it/s]\n 67%|██████▋ | 20/30 [00:09<00:04, 2.18it/s]\n 70%|███████ | 21/30 [00:09<00:04, 2.18it/s]\n 73%|███████▎ | 22/30 [00:09<00:03, 2.18it/s]\n 77%|███████▋ | 23/30 [00:10<00:03, 2.18it/s]\n 80%|████████ | 24/30 [00:10<00:02, 2.18it/s]\n 83%|████████▎ | 25/30 [00:11<00:02, 2.18it/s]\n 87%|████████▋ | 26/30 [00:11<00:01, 2.18it/s]\n 90%|█████████ | 27/30 [00:12<00:01, 2.18it/s]\n 93%|█████████▎| 28/30 [00:12<00:00, 2.18it/s]\n 97%|█████████▋| 29/30 [00:13<00:00, 2.18it/s]\n100%|██████████| 30/30 [00:13<00:00, 2.18it/s]\n100%|██████████| 30/30 [00:13<00:00, 2.21it/s]", "metrics": { "predict_time": 14.297243711, "total_time": 14.305486 }, "output": [ "https://replicate.delivery/yhqm/28nuMNl3PIrRLtftfXl81EfuJ4HwgifsCLJbL500ba6I2eJeE/out-0.png" ], "started_at": "2024-11-15T13:15:00.445243Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/qoxq-nip3glv6z5cudpegam4u52m6cbkhh2sfqky4tpeoicrnzelas4la", "get": "https://api.replicate.com/v1/predictions/rygd6nkhtnrj00ck5zv9812ypw", "cancel": "https://api.replicate.com/v1/predictions/rygd6nkhtnrj00ck5zv9812ypw/cancel" }, "version": "1246c90e8d3af59d466924a1221ec6cd7183b5e88659be2ccdc106a979b3f69a" }
Prediction
lightweight-ai/model2:5f4aa3a2edeed089ff512d2ca2807172a5b0de0babb0b4ca025b1cffab4893fb
ID: h3jm03mgd1rj60cm9r9vfy008c
Status: Succeeded
Source: Web
Hardware: A100 (80GB)
Input
- loras: []
- width: 720
- height: 1320
- prompt: medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>
- inpaint: false
- controlnet: false
- lora_scales: [ 0 ]
- num_outputs: 1
- control_type: canny
- low_threshold: 100
- output_format: png
- guidance_scale: 3.5
- high_threshold: 200
- output_quality: 100
- prompt_strength: 0.8
- control_strength: 0.2
- num_inference_steps: 28
{ "loras": [], "width": 720, "height": 1320, "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>", "inpaint": false, "controlnet": false, "lora_scales": [ 0 ], "num_outputs": 1, "control_type": "canny", "low_threshold": 100, "output_format": "png", "guidance_scale": 3.5, "high_threshold": 200, "output_quality": 100, "prompt_strength": 0.8, "control_strength": 0.2, "num_inference_steps": 28 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import { writeFile } from "node:fs/promises";

const output = await replicate.run(
  "lightweight-ai/model2:5f4aa3a2edeed089ff512d2ca2807172a5b0de0babb0b4ca025b1cffab4893fb",
  {
    input: {
      loras: [],
      width: 720,
      height: 1320,
      prompt: "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
      inpaint: false,
      controlnet: false,
      lora_scales: [0],
      num_outputs: 1,
      control_type: "canny",
      low_threshold: 100,
      output_format: "png",
      guidance_scale: 3.5,
      high_threshold: 200,
      output_quality: 100,
      prompt_strength: 0.8,
      control_strength: 0.2,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "lightweight-ai/model2:5f4aa3a2edeed089ff512d2ca2807172a5b0de0babb0b4ca025b1cffab4893fb",
    input={
        "loras": [],
        "width": 720,
        "height": 1320,
        "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
        "inpaint": False,
        "controlnet": False,
        "lora_scales": [0],
        "num_outputs": 1,
        "control_type": "canny",
        "low_threshold": 100,
        "output_format": "png",
        "guidance_scale": 3.5,
        "high_threshold": 200,
        "output_quality": 100,
        "prompt_strength": 0.8,
        "control_strength": 0.2,
        "num_inference_steps": 28,
    },
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run lightweight-ai/model2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "5f4aa3a2edeed089ff512d2ca2807172a5b0de0babb0b4ca025b1cffab4893fb",
    "input": {
      "loras": [],
      "width": 720,
      "height": 1320,
      "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>",
      "inpaint": false,
      "controlnet": false,
      "lora_scales": [0],
      "num_outputs": 1,
      "control_type": "canny",
      "low_threshold": 100,
      "output_format": "png",
      "guidance_scale": 3.5,
      "high_threshold": 200,
      "output_quality": 100,
      "prompt_strength": 0.8,
      "control_strength": 0.2,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-01-10T02:40:07.854758Z", "created_at": "2025-01-10T02:38:12.584000Z", "data_removed": false, "error": null, "id": "h3jm03mgd1rj60cm9r9vfy008c", "input": { "loras": [], "width": 720, "height": 1320, "prompt": "medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>", "inpaint": false, "controlnet": false, "lora_scales": [ 0 ], "num_outputs": 1, "control_type": "canny", "low_threshold": 100, "output_format": "png", "guidance_scale": 3.5, "high_threshold": 200, "output_quality": 100, "prompt_strength": 0.8, "control_strength": 0.2, "num_inference_steps": 28 }, "logs": "Using seed: 54738\nPrompt: medium full shot of beautiful korean girl with long wavy hair , wearing black white sleeveless dress, gray background, studio lighting, necklace, dslr, soft lighting, high quality, film grain, light reflections, blood vessels, pale skin, skin pores,blood vessels in sclera, detailed skin, beauty spots, skin fuzz,<lora:flux_realism_lora:1>, . <lora:makinaflux_karina_v2.0:1>\nToken indices sequence length is longer than the specified maximum sequence length for this model (100 > 77). Running this sequence through the model will result in indexing errors\nThe following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['a : 1 >,. < lora : makinaflux _ karina _ v 2. 0 : 1 >']\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:12, 2.14it/s]\n 7%|▋ | 2/28 [00:00<00:09, 2.79it/s]\n 11%|█ | 3/28 [00:01<00:09, 2.58it/s]\n 14%|█▍ | 4/28 [00:01<00:09, 2.49it/s]\n 18%|█▊ | 5/28 [00:02<00:09, 2.45it/s]\n 21%|██▏ | 6/28 [00:02<00:09, 2.42it/s]\n 25%|██▌ | 7/28 [00:02<00:08, 2.40it/s]\n 29%|██▊ | 8/28 [00:03<00:08, 2.39it/s]\n 32%|███▏ | 9/28 [00:03<00:07, 2.38it/s]\n 36%|███▌ | 10/28 [00:04<00:07, 2.38it/s]\n 39%|███▉ | 11/28 [00:04<00:07, 2.37it/s]\n 43%|████▎ | 12/28 [00:04<00:06, 2.37it/s]\n 46%|████▋ | 13/28 [00:05<00:06, 2.37it/s]\n 50%|█████ | 14/28 [00:05<00:05, 2.37it/s]\n 54%|█████▎ | 15/28 [00:06<00:05, 2.37it/s]\n 57%|█████▋ | 16/28 [00:06<00:05, 2.37it/s]\n 61%|██████ | 17/28 [00:07<00:04, 2.37it/s]\n 64%|██████▍ | 18/28 [00:07<00:04, 2.36it/s]\n 68%|██████▊ | 19/28 [00:07<00:03, 2.36it/s]\n 71%|███████▏ | 20/28 [00:08<00:03, 2.36it/s]\n 75%|███████▌ | 21/28 [00:08<00:02, 2.36it/s]\n 79%|███████▊ | 22/28 [00:09<00:02, 2.36it/s]\n 82%|████████▏ | 23/28 [00:09<00:02, 2.36it/s]\n 86%|████████▌ | 24/28 [00:10<00:01, 2.36it/s]\n 89%|████████▉ | 25/28 [00:10<00:01, 2.36it/s]\n 93%|█████████▎| 26/28 [00:10<00:00, 2.36it/s]\n 96%|█████████▋| 27/28 [00:11<00:00, 2.36it/s]\n100%|██████████| 28/28 [00:11<00:00, 2.36it/s]\n100%|██████████| 28/28 [00:11<00:00, 2.38it/s]", "metrics": { "predict_time": 13.089554248, "total_time": 115.270758 }, "output": [ "https://replicate.delivery/yhqm/XHyRTzD4Te2mCyCTwzKUj9jOULLVbaycT3rxAueE9STHqjDUA/out-0.png" ], "started_at": "2025-01-10T02:39:54.765204Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/qoxq-tyr7fc6wzzbmrg4rxu33i2nn23aqwntem3pl2v6y6qeafufc7gkq", "get": "https://api.replicate.com/v1/predictions/h3jm03mgd1rj60cm9r9vfy008c", "cancel": "https://api.replicate.com/v1/predictions/h3jm03mgd1rj60cm9r9vfy008c/cancel" }, "version": 
"5f4aa3a2edeed089ff512d2ca2807172a5b0de0babb0b4ca025b1cffab4893fb" }