vectorspacelab/omnigen
OmniGen: Unified Image Generation
Prediction
vectorspacelab/omnigen:af66691a · ID: 3svw7yqzgdrgp0cjygj9rzp2w4 · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: a photo of an astronaut riding a horse on mars
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: false
{ "width": 1024, "height": 1024, "prompt": "a photo of an astronaut riding a horse on mars", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { width: 1024, height: 1024, prompt: "a photo of an astronaut riding a horse on mars", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: false } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "width": 1024, "height": 1024, "prompt": "a photo of an astronaut riding a horse on mars", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": False } ) print(output)
To learn more, take a look at the guide on getting started with Python.
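Depending on the client version, the value returned by replicate.run may be a plain URL string (as in the example output below) or a FileOutput object. A minimal sketch, assuming a URL string is returned, that saves the generated image to disk:

import urllib.request

import replicate

output = replicate.run(
    "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b",
    input={
        "prompt": "a photo of an astronaut riding a horse on mars",
        "width": 1024,
        "height": 1024,
    },
)

# Download the result; if your client returns a FileOutput object instead of a URL,
# write its bytes directly, e.g. open("out.png", "wb").write(output.read()).
urllib.request.urlretrieve(str(output), "out.png")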
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "width": 1024, "height": 1024, "prompt": "a photo of an astronaut riding a horse on mars", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
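The same request can also be issued from Python with the requests library; this is just a translation of the cURL call above (the Prefer: wait header asks the API to hold the connection open until the prediction finishes):

import os

import requests

resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction completes (or times out)
    },
    json={
        "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b",
        "input": {
            "prompt": "a photo of an astronaut riding a horse on mars",
            "width": 1024,
            "height": 1024,
            "guidance_scale": 2.5,
            "inference_steps": 50,
        },
    },
)
resp.raise_for_status()
print(resp.json()["output"])  # URL of the generated image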
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="a photo of an astronaut riding a horse on mars"' \
  -i 'offload_model=false' \
  -i 'guidance_scale=2.5' \
  -i 'inference_steps=50' \
  -i 'img_guidance_scale=1.6' \
  -i 'separate_cfg_infer=true' \
  -i 'max_input_image_size=1024' \
  -i 'use_input_image_size_as_output=false'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "a photo of an astronaut riding a horse on mars", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:33:36.846987Z", "created_at": "2024-11-03T22:28:38.403000Z", "data_removed": false, "error": null, "id": "3svw7yqzgdrgp0cjygj9rzp2w4", "input": { "width": 1024, "height": 1024, "prompt": "a photo of an astronaut riding a horse on mars", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }, "logs": "Using seed: 46534\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:01<01:01, 1.26s/it]\n 4%|▍ | 2/50 [00:02<00:57, 1.20s/it]\n 6%|▌ | 3/50 [00:03<00:55, 1.19s/it]\n 8%|▊ | 4/50 [00:04<00:54, 1.18s/it]\n 10%|█ | 5/50 [00:05<00:52, 1.18s/it]\n 12%|█▏ | 6/50 [00:07<00:51, 1.17s/it]\n 14%|█▍ | 7/50 [00:08<00:50, 1.17s/it]\n 16%|█▌ | 8/50 [00:09<00:49, 1.17s/it]\n 18%|█▊ | 9/50 [00:10<00:48, 1.17s/it]\n 20%|██ | 10/50 [00:11<00:46, 1.17s/it]\n 22%|██▏ | 11/50 [00:12<00:45, 1.17s/it]\n 24%|██▍ | 12/50 [00:14<00:44, 1.17s/it]\n 26%|██▌ | 13/50 [00:15<00:43, 1.17s/it]\n 28%|██▊ | 14/50 [00:16<00:42, 1.17s/it]\n 30%|███ | 15/50 [00:17<00:40, 1.17s/it]\n 32%|███▏ | 16/50 [00:18<00:39, 1.17s/it]\n 34%|███▍ | 17/50 [00:19<00:38, 1.17s/it]\n 36%|███▌ | 18/50 [00:21<00:37, 1.17s/it]\n 38%|███▊ | 19/50 [00:22<00:36, 1.17s/it]\n 40%|████ | 20/50 [00:23<00:35, 1.17s/it]\n 42%|████▏ | 21/50 [00:24<00:33, 1.17s/it]\n 44%|████▍ | 22/50 [00:25<00:32, 1.17s/it]\n 46%|████▌ | 23/50 [00:26<00:31, 1.17s/it]\n 48%|████▊ | 24/50 [00:28<00:30, 1.17s/it]\n 50%|█████ | 25/50 [00:29<00:29, 1.17s/it]\n 52%|█████▏ | 26/50 [00:30<00:28, 1.17s/it]\n 54%|█████▍ | 27/50 [00:31<00:27, 1.17s/it]\n 56%|█████▌ | 28/50 [00:32<00:25, 1.17s/it]\n 58%|█████▊ | 29/50 [00:34<00:24, 1.17s/it]\n 60%|██████ | 30/50 [00:35<00:23, 1.18s/it]\n 62%|██████▏ | 31/50 [00:36<00:22, 1.18s/it]\n 64%|██████▍ | 32/50 [00:37<00:21, 1.18s/it]\n 66%|██████▌ | 33/50 [00:38<00:19, 1.18s/it]\n 68%|██████▊ | 34/50 [00:39<00:18, 1.18s/it]\n 70%|███████ | 35/50 [00:41<00:17, 1.18s/it]\n 72%|███████▏ | 36/50 [00:42<00:16, 1.18s/it]\n 74%|███████▍ | 37/50 [00:43<00:15, 1.18s/it]\n 76%|███████▌ | 38/50 [00:44<00:14, 1.18s/it]\n 78%|███████▊ | 39/50 [00:45<00:12, 1.18s/it]\n 80%|████████ | 40/50 [00:46<00:11, 1.18s/it]\n 82%|████████▏ | 41/50 [00:48<00:10, 1.18s/it]\n 84%|████████▍ | 42/50 [00:49<00:09, 1.18s/it]\n 86%|████████▌ | 43/50 [00:50<00:08, 1.18s/it]\n 88%|████████▊ | 44/50 [00:51<00:07, 1.18s/it]\n 90%|█████████ | 45/50 [00:52<00:05, 1.18s/it]\n 92%|█████████▏| 46/50 [00:54<00:04, 1.18s/it]\n 94%|█████████▍| 47/50 [00:55<00:03, 1.18s/it]\n 96%|█████████▌| 48/50 [00:56<00:02, 1.18s/it]\n 98%|█████████▊| 49/50 [00:57<00:01, 1.18s/it]\n100%|██████████| 50/50 [00:58<00:00, 1.18s/it]\n100%|██████████| 50/50 [00:58<00:00, 1.18s/it]", "metrics": { "predict_time": 63.182460724, "total_time": 298.443987 }, "output": "https://replicate.delivery/pbxt/QZ92oqfpib0GcCy4iN910UTxqSP3RE7DlQ1hDtkSE0rfwatTA/out.png", "started_at": "2024-11-03T22:32:33.664526Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/3svw7yqzgdrgp0cjygj9rzp2w4", "cancel": "https://api.replicate.com/v1/predictions/3svw7yqzgdrgp0cjygj9rzp2w4/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
Generated in 63.2 seconds
Prediction
vectorspacelab/omnigen:af66691a · ID: pfs0tqbgp1rgm0cjygjvm2ykpg · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A curly-haired man in a red shirt is drinking tea.
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: false
{ "width": 1024, "height": 1024, "prompt": "A curly-haired man in a red shirt is drinking tea.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { width: 1024, height: 1024, prompt: "A curly-haired man in a red shirt is drinking tea.", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: false } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "width": 1024, "height": 1024, "prompt": "A curly-haired man in a red shirt is drinking tea.", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": False } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "width": 1024, "height": 1024, "prompt": "A curly-haired man in a red shirt is drinking tea.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
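If you create the prediction without the Prefer: wait header, the response includes a urls.get endpoint (visible in the output JSON below) that can be polled until the prediction reaches a terminal state. A minimal polling sketch with requests:

import os
import time

import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# Create the prediction asynchronously (no "Prefer: wait" header).
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b",
        "input": {"prompt": "A curly-haired man in a red shirt is drinking tea."},
    },
).json()

# Poll the "get" URL returned by the API until the prediction finishes.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))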
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A curly-haired man in a red shirt is drinking tea."' \
  -i 'offload_model=false' \
  -i 'guidance_scale=2.5' \
  -i 'inference_steps=50' \
  -i 'img_guidance_scale=1.6' \
  -i 'separate_cfg_infer=true' \
  -i 'max_input_image_size=1024' \
  -i 'use_input_image_size_as_output=false'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "A curly-haired man in a red shirt is drinking tea.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:33:33.233388Z", "created_at": "2024-11-03T22:29:07.376000Z", "data_removed": false, "error": null, "id": "pfs0tqbgp1rgm0cjygjvm2ykpg", "input": { "width": 1024, "height": 1024, "prompt": "A curly-haired man in a red shirt is drinking tea.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }, "logs": "Using seed: 51026\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:01<01:03, 1.29s/it]\n 4%|▍ | 2/50 [00:02<00:58, 1.21s/it]\n 6%|▌ | 3/50 [00:03<00:55, 1.19s/it]\n 8%|▊ | 4/50 [00:04<00:54, 1.18s/it]\n 10%|█ | 5/50 [00:05<00:52, 1.18s/it]\n 12%|█▏ | 6/50 [00:07<00:51, 1.17s/it]\n 14%|█▍ | 7/50 [00:08<00:50, 1.17s/it]\n 16%|█▌ | 8/50 [00:09<00:49, 1.17s/it]\n 18%|█▊ | 9/50 [00:10<00:48, 1.17s/it]\n 20%|██ | 10/50 [00:11<00:46, 1.17s/it]\n 22%|██▏ | 11/50 [00:13<00:46, 1.18s/it]\n 24%|██▍ | 12/50 [00:14<00:44, 1.18s/it]\n 26%|██▌ | 13/50 [00:15<00:43, 1.18s/it]\n 28%|██▊ | 14/50 [00:16<00:42, 1.18s/it]\n 30%|███ | 15/50 [00:17<00:41, 1.18s/it]\n 32%|███▏ | 16/50 [00:18<00:39, 1.17s/it]\n 34%|███▍ | 17/50 [00:20<00:38, 1.17s/it]\n 36%|███▌ | 18/50 [00:21<00:37, 1.17s/it]\n 38%|███▊ | 19/50 [00:22<00:36, 1.17s/it]\n 40%|████ | 20/50 [00:23<00:35, 1.17s/it]\n 42%|████▏ | 21/50 [00:24<00:33, 1.17s/it]\n 44%|████▍ | 22/50 [00:25<00:32, 1.17s/it]\n 46%|████▌ | 23/50 [00:27<00:31, 1.17s/it]\n 48%|████▊ | 24/50 [00:28<00:30, 1.17s/it]\n 50%|█████ | 25/50 [00:29<00:29, 1.17s/it]\n 52%|█████▏ | 26/50 [00:30<00:28, 1.17s/it]\n 54%|█████▍ | 27/50 [00:31<00:26, 1.17s/it]\n 56%|█████▌ | 28/50 [00:32<00:25, 1.17s/it]\n 58%|█████▊ | 29/50 [00:34<00:24, 1.17s/it]\n 60%|██████ | 30/50 [00:35<00:23, 1.17s/it]\n 62%|██████▏ | 31/50 [00:36<00:22, 1.17s/it]\n 64%|██████▍ | 32/50 [00:37<00:21, 1.17s/it]\n 66%|██████▌ | 33/50 [00:38<00:19, 1.18s/it]\n 68%|██████▊ | 34/50 [00:39<00:18, 1.18s/it]\n 70%|███████ | 35/50 [00:41<00:17, 1.18s/it]\n 72%|███████▏ | 36/50 [00:42<00:16, 1.18s/it]\n 74%|███████▍ | 37/50 [00:43<00:15, 1.18s/it]\n 76%|███████▌ | 38/50 [00:44<00:14, 1.18s/it]\n 78%|███████▊ | 39/50 [00:45<00:12, 1.18s/it]\n 80%|████████ | 40/50 [00:47<00:11, 1.19s/it]\n 82%|████████▏ | 41/50 [00:48<00:10, 1.19s/it]\n 84%|████████▍ | 42/50 [00:49<00:09, 1.19s/it]\n 86%|████████▌ | 43/50 [00:50<00:08, 1.19s/it]\n 88%|████████▊ | 44/50 [00:51<00:07, 1.19s/it]\n 90%|█████████ | 45/50 [00:53<00:05, 1.19s/it]\n 92%|█████████▏| 46/50 [00:54<00:04, 1.18s/it]\n 94%|█████████▍| 47/50 [00:55<00:03, 1.18s/it]\n 96%|█████████▌| 48/50 [00:56<00:02, 1.18s/it]\n 98%|█████████▊| 49/50 [00:57<00:01, 1.18s/it]\n100%|██████████| 50/50 [00:58<00:00, 1.18s/it]\n100%|██████████| 50/50 [00:58<00:00, 1.18s/it]", "metrics": { "predict_time": 62.667601229, "total_time": 265.857388 }, "output": "https://replicate.delivery/pbxt/mrS1sDyh4Z6BPJ0eCGkniIP6NSqE1WN3ti3YGNZkxtUewatTA/out.png", "started_at": "2024-11-03T22:32:30.565787Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/pfs0tqbgp1rgm0cjygjvm2ykpg", "cancel": "https://api.replicate.com/v1/predictions/pfs0tqbgp1rgm0cjygjvm2ykpg/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
Generated in 62.7 seconds
Prediction
vectorspacelab/omnigen:af66691a · ID: sgsb0shqrxrgm0cjygpsebv3mw · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- img1: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg
- img2: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg
- width: 1024
- height: 1024
- prompt: The flower <img><|image_1|><\/img> is placed in the vase which is in the middle of <img><|image_2|><\/img> on a wooden table of a living room
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: false
{ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", "width": 1024, "height": 1024, "prompt": "The flower <img><|image_1|><\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\/img> on a wooden table of a living room", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { img1: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", img2: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", width: 1024, height: 1024, prompt: "The flower <img><|image_1|><\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\/img> on a wooden table of a living room", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: false } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", "width": 1024, "height": 1024, "prompt": "The flower <img><|image_1|><\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\/img> on a wooden table of a living room", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": False } ) print(output)
To learn more, take a look at the guide on getting started with Python.
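The example above passes img1 and img2 as hosted URLs. The Replicate Python client generally also accepts open file handles for file inputs, so a sketch along these lines should work for images on disk (rose.jpg and vase.jpg are placeholder local paths; exact upload behavior depends on your client version):

import replicate

# <|image_1|> in the prompt refers to img1, <|image_2|> refers to img2.
prompt = (
    "The flower <img><|image_1|></img> is placed in the vase which is in the "
    "middle of <img><|image_2|></img> on a wooden table of a living room"
)

with open("rose.jpg", "rb") as img1, open("vase.jpg", "rb") as img2:
    output = replicate.run(
        "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b",
        input={
            "img1": img1,  # local files instead of the hosted URLs used above
            "img2": img2,
            "prompt": prompt,
            "width": 1024,
            "height": 1024,
            "guidance_scale": 2.5,
            "img_guidance_scale": 1.6,
        },
    )

print(output)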
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", "width": 1024, "height": 1024, "prompt": "The flower <img><|image_1|><\\\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\\\/img> on a wooden table of a living room", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \
  -i 'img1="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg"' \
  -i 'img2="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg"' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i $'prompt="The flower <img><|image_1|><\\\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\\\/img> on a wooden table of a living room"' \
  -i 'offload_model=false' \
  -i 'guidance_scale=2.5' \
  -i 'inference_steps=50' \
  -i 'img_guidance_scale=1.6' \
  -i 'separate_cfg_infer=true' \
  -i 'max_input_image_size=1024' \
  -i 'use_input_image_size_as_output=false'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", "width": 1024, "height": 1024, "prompt": "The flower <img><|image_1|><\\\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\\\/img> on a wooden table of a living room", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:41:35.146269Z", "created_at": "2024-11-03T22:37:37.095000Z", "data_removed": false, "error": null, "id": "sgsb0shqrxrgm0cjygpsebv3mw", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/1f6249e8bb52cfc61be3595778b68f873ffaa04c26e2c107df9fce503892976d/rose.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/67cbd1b8e20b17b208c0838eb839d1ddeab037fabdf8839fe7698270d1fc9e0b/vase.jpg", "width": 1024, "height": 1024, "prompt": "The flower <img><|image_1|><\\/img> is placed in the vase which is in the middle of <img><|image_2|><\\/img> on a wooden table of a living room", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }, "logs": "Using seed: 65037\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:05<04:15, 5.22s/it]\n 4%|▍ | 2/50 [00:07<02:53, 3.61s/it]\n 6%|▌ | 3/50 [00:10<02:23, 3.06s/it]\n 8%|▊ | 4/50 [00:12<02:09, 2.81s/it]\n 10%|█ | 5/50 [00:14<01:59, 2.67s/it]\n 12%|█▏ | 6/50 [00:17<01:53, 2.58s/it]\n 14%|█▍ | 7/50 [00:19<01:48, 2.52s/it]\n 16%|█▌ | 8/50 [00:22<01:44, 2.48s/it]\n 18%|█▊ | 9/50 [00:24<01:40, 2.46s/it]\n 20%|██ | 10/50 [00:26<01:37, 2.44s/it]\n 22%|██▏ | 11/50 [00:29<01:34, 2.43s/it]\n 24%|██▍ | 12/50 [00:31<01:32, 2.43s/it]\n 26%|██▌ | 13/50 [00:34<01:29, 2.42s/it]\n 28%|██▊ | 14/50 [00:36<01:27, 2.42s/it]\n 30%|███ | 15/50 [00:39<01:24, 2.42s/it]\n 32%|███▏ | 16/50 [00:41<01:22, 2.42s/it]\n 34%|███▍ | 17/50 [00:43<01:19, 2.42s/it]\n 36%|███▌ | 18/50 [00:46<01:17, 2.42s/it]\n 38%|███▊ | 19/50 [00:48<01:15, 2.42s/it]\n 40%|████ | 20/50 [00:51<01:12, 2.42s/it]\n 42%|████▏ | 21/50 [00:53<01:10, 2.42s/it]\n 44%|████▍ | 22/50 [00:55<01:07, 2.42s/it]\n 46%|████▌ | 23/50 [00:58<01:05, 2.42s/it]\n 48%|████▊ | 24/50 [01:00<01:03, 2.42s/it]\n 50%|█████ | 25/50 [01:03<01:00, 2.42s/it]\n 52%|█████▏ | 26/50 [01:05<00:58, 2.43s/it]\n 54%|█████▍ | 27/50 [01:08<00:55, 2.43s/it]\n 56%|█████▌ | 28/50 [01:10<00:53, 2.43s/it]\n 58%|█████▊ | 29/50 [01:12<00:51, 2.43s/it]\n 60%|██████ | 30/50 [01:15<00:48, 2.43s/it]\n 62%|██████▏ | 31/50 [01:17<00:46, 2.43s/it]\n 64%|██████▍ | 32/50 [01:20<00:43, 2.43s/it]\n 66%|██████▌ | 33/50 [01:22<00:41, 2.43s/it]\n 68%|██████▊ | 34/50 [01:25<00:38, 2.43s/it]\n 70%|███████ | 35/50 [01:27<00:36, 2.43s/it]\n 72%|███████▏ | 36/50 [01:30<00:34, 2.43s/it]\n 74%|███████▍ | 37/50 [01:32<00:31, 2.43s/it]\n 76%|███████▌ | 38/50 [01:34<00:29, 2.43s/it]\n 78%|███████▊ | 39/50 [01:37<00:26, 2.43s/it]\n 80%|████████ | 40/50 [01:39<00:24, 2.43s/it]\n 82%|████████▏ | 41/50 [01:42<00:21, 2.43s/it]\n 84%|████████▍ | 42/50 [01:44<00:19, 2.43s/it]\n 86%|████████▌ | 43/50 [01:47<00:17, 2.43s/it]\n 88%|████████▊ | 44/50 [01:49<00:14, 2.43s/it]\n 90%|█████████ | 45/50 [01:51<00:12, 2.44s/it]\n 92%|█████████▏| 46/50 [01:54<00:09, 2.44s/it]\n 94%|█████████▍| 47/50 [01:56<00:07, 2.44s/it]\n 96%|█████████▌| 48/50 [01:59<00:04, 2.43s/it]\n 98%|█████████▊| 49/50 [02:01<00:02, 2.43s/it]\n100%|██████████| 50/50 [02:04<00:00, 2.43s/it]\n100%|██████████| 50/50 [02:04<00:00, 2.48s/it]", "metrics": { "predict_time": 128.587871826, "total_time": 238.051269 }, "output": "https://replicate.delivery/pbxt/5R7G6AbVoXICNxkUeHCWQRCEXmv8yk06Aej9CNFfhEF8w1anA/out.png", "started_at": "2024-11-03T22:39:26.558397Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/sgsb0shqrxrgm0cjygpsebv3mw", "cancel": 
"https://api.replicate.com/v1/predictions/sgsb0shqrxrgm0cjygpsebv3mw/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
Generated in 128.6 seconds
Prediction
vectorspacelab/omnigen:af66691a · ID: cgh2hgqy39rgp0cjygrakdwh1m · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- img1: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg
- img2: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg
- width: 1024
- height: 1024
- prompt: A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: false
{ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", "width": 1024, "height": 1024, "prompt": "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { img1: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", img2: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", width: 1024, height: 1024, prompt: "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: false } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", "width": 1024, "height": 1024, "prompt": "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": False } ) print(output)
To learn more, take a look at the guide on getting started with Python.
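For multi-image prompts like this one, which can take a couple of minutes, you may prefer to create the prediction in the background and poll it instead of blocking in replicate.run. A sketch using the client's predictions API (assuming predictions.create and reload as in current versions of the Python client):

import time

import replicate

prediction = replicate.predictions.create(
    version="af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b",
    input={
        "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg",
        "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg",
        "prompt": (
            "A man and a short-haired woman with a wrinkled face are standing in "
            "front of a bookshelf in a library. The man is the man in the middle of "
            "<img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>"
        ),
    },
)

# Poll until the prediction reaches a terminal state.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(3)
    prediction.reload()

print(prediction.status, prediction.output)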
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", "width": 1024, "height": 1024, "prompt": "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \
  -i 'img1="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg"' \
  -i 'img2="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg"' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>"' \
  -i 'offload_model=false' \
  -i 'guidance_scale=2.5' \
  -i 'inference_steps=50' \
  -i 'img_guidance_scale=1.6' \
  -i 'separate_cfg_infer=true' \
  -i 'max_input_image_size=1024' \
  -i 'use_input_image_size_as_output=false'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", "width": 1024, "height": 1024, "prompt": "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false } }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:46:05.666799Z", "created_at": "2024-11-03T22:41:44.474000Z", "data_removed": false, "error": null, "id": "cgh2hgqy39rgp0cjygrakdwh1m", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/0bf3ea49bb711bd01c604ad5d92f0a979c1aa77a0178f5aa8bd6f630fbe91b5b/1.jpg", "img2": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/9b4a1dda348cb4c67cfb8452b170c842d836ea931066a5ded7132ab4a71a6f3c/2.jpg", "width": 1024, "height": 1024, "prompt": "A man and a short-haired woman with a wrinkled face are standing in front of a bookshelf in a library. The man is the man in the middle of <img><|image_1|></img>, and the woman is oldest woman in <img><|image_2|></img>", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": false }, "logs": "Using seed: 30978\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:08<07:13, 8.84s/it]\n 4%|▍ | 2/50 [00:11<04:18, 5.39s/it]\n 6%|▌ | 3/50 [00:14<03:18, 4.22s/it]\n 8%|▊ | 4/50 [00:17<02:48, 3.67s/it]\n 10%|█ | 5/50 [00:20<02:31, 3.37s/it]\n 12%|█▏ | 6/50 [00:23<02:20, 3.19s/it]\n 14%|█▍ | 7/50 [00:25<02:12, 3.07s/it]\n 16%|█▌ | 8/50 [00:28<02:05, 3.00s/it]\n 18%|█▊ | 9/50 [00:31<02:00, 2.95s/it]\n 20%|██ | 10/50 [00:34<01:56, 2.91s/it]\n 22%|██▏ | 11/50 [00:37<01:52, 2.89s/it]\n 24%|██▍ | 12/50 [00:40<01:49, 2.87s/it]\n 26%|██▌ | 13/50 [00:42<01:45, 2.86s/it]\n 28%|██▊ | 14/50 [00:45<01:42, 2.86s/it]\n 30%|███ | 15/50 [00:48<01:39, 2.85s/it]\n 32%|███▏ | 16/50 [00:51<01:36, 2.85s/it]\n 34%|███▍ | 17/50 [00:54<01:33, 2.84s/it]\n 36%|███▌ | 18/50 [00:57<01:30, 2.84s/it]\n 38%|███▊ | 19/50 [01:00<01:28, 2.84s/it]\n 40%|████ | 20/50 [01:02<01:25, 2.84s/it]\n 42%|████▏ | 21/50 [01:05<01:22, 2.84s/it]\n 44%|████▍ | 22/50 [01:08<01:19, 2.84s/it]\n 46%|████▌ | 23/50 [01:11<01:16, 2.84s/it]\n 48%|████▊ | 24/50 [01:14<01:13, 2.84s/it]\n 50%|█████ | 25/50 [01:17<01:10, 2.84s/it]\n 52%|█████▏ | 26/50 [01:19<01:08, 2.84s/it]\n 54%|█████▍ | 27/50 [01:22<01:05, 2.84s/it]\n 56%|█████▌ | 28/50 [01:25<01:02, 2.84s/it]\n 58%|█████▊ | 29/50 [01:28<00:59, 2.84s/it]\n 60%|██████ | 30/50 [01:31<00:56, 2.83s/it]\n 62%|██████▏ | 31/50 [01:34<00:53, 2.84s/it]\n 64%|██████▍ | 32/50 [01:36<00:51, 2.84s/it]\n 66%|██████▌ | 33/50 [01:39<00:48, 2.84s/it]\n 68%|██████▊ | 34/50 [01:42<00:45, 2.84s/it]\n 70%|███████ | 35/50 [01:45<00:42, 2.84s/it]\n 72%|███████▏ | 36/50 [01:48<00:39, 2.84s/it]\n 74%|███████▍ | 37/50 [01:51<00:36, 2.84s/it]\n 76%|███████▌ | 38/50 [01:53<00:34, 2.84s/it]\n 78%|███████▊ | 39/50 [01:56<00:31, 2.84s/it]\n 80%|████████ | 40/50 [01:59<00:28, 2.84s/it]\n 82%|████████▏ | 41/50 [02:02<00:25, 2.84s/it]\n 84%|████████▍ | 42/50 [02:05<00:22, 2.84s/it]\n 86%|████████▌ | 43/50 [02:08<00:19, 2.84s/it]\n 88%|████████▊ | 44/50 [02:10<00:17, 2.84s/it]\n 90%|█████████ | 45/50 [02:13<00:14, 2.83s/it]\n 92%|█████████▏| 46/50 [02:16<00:11, 2.84s/it]\n 94%|█████████▍| 47/50 [02:19<00:08, 2.84s/it]\n 96%|█████████▌| 48/50 [02:22<00:05, 2.83s/it]\n 98%|█████████▊| 49/50 [02:25<00:02, 2.84s/it]\n100%|██████████| 50/50 [02:27<00:00, 2.84s/it]\n100%|██████████| 50/50 [02:27<00:00, 2.96s/it]", "metrics": { "predict_time": 152.117271227, "total_time": 261.192799 }, "output": "https://replicate.delivery/pbxt/eBhtjH1O9xUtTKoGjVwjzFzE48aCdFea3E5fVrSWumdb51anA/out.png", "started_at": "2024-11-03T22:43:33.549528Z", "status": "succeeded", "urls": { "get": 
"https://api.replicate.com/v1/predictions/cgh2hgqy39rgp0cjygrakdwh1m", "cancel": "https://api.replicate.com/v1/predictions/cgh2hgqy39rgp0cjygrakdwh1m/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
Generated in 152.1 seconds
Prediction
vectorspacelab/omnigen:af66691a · ID: xn6ygbqh4xrgp0cjygv80rdjv8 · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- img1: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png
- width: 1024
- height: 1024
- prompt: Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: true
{ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", "width": 1024, "height": 1024, "prompt": "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { img1: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", width: 1024, height: 1024, prompt: "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: true } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", "width": 1024, "height": 1024, "prompt": "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": True } ) print(output)
To learn more, take a look at the guide on getting started with Python.
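Note that this example sets use_input_image_size_as_output to true, so the output is produced at the resolution of img1 and the width/height values are effectively ignored. A small, hypothetical convenience wrapper around this pattern (the function name and defaults are illustrative, not part of the model's API):

import replicate

MODEL = "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b"

def generate_from_image(image_url, instruction, steps=50):
    """Condition OmniGen on one input image plus a text instruction.

    The image is referenced in the prompt as <img><|image_1|></img>, and
    use_input_image_size_as_output keeps the output at the input resolution.
    """
    return replicate.run(
        MODEL,
        input={
            "img1": image_url,
            "prompt": f"<img><|image_1|></img> {instruction}",
            "inference_steps": steps,
            "guidance_scale": 2.5,
            "img_guidance_scale": 1.6,
            "use_input_image_size_as_output": True,
        },
    )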
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", "width": 1024, "height": 1024, "prompt": "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \
  -i 'img1="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png"' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him."' \
  -i 'offload_model=false' \
  -i 'guidance_scale=2.5' \
  -i 'inference_steps=50' \
  -i 'img_guidance_scale=1.6' \
  -i 'separate_cfg_infer=true' \
  -i 'max_input_image_size=1024' \
  -i 'use_input_image_size_as_output=true'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", "width": 1024, "height": 1024, "prompt": "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true } }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:52:01.150500Z", "created_at": "2024-11-03T22:48:14.375000Z", "data_removed": false, "error": null, "id": "xn6ygbqh4xrgp0cjygv80rdjv8", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/100b06b1fb7c8fb362406f0101d3b76dc6c088ed3dfaf25fc7231a8ae1ea0340/skeletal.png", "width": 1024, "height": 1024, "prompt": "Generate a new photo using the following picture and text as conditions: <img><|image_1|><img> A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true }, "logs": "Using seed: 64852\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:04<03:31, 4.31s/it]\n 4%|▍ | 2/50 [00:06<02:30, 3.14s/it]\n 6%|▌ | 3/50 [00:08<02:08, 2.74s/it]\n 8%|▊ | 4/50 [00:11<01:57, 2.55s/it]\n 10%|█ | 5/50 [00:13<01:49, 2.44s/it]\n 12%|█▏ | 6/50 [00:15<01:44, 2.38s/it]\n 14%|█▍ | 7/50 [00:17<01:40, 2.34s/it]\n 16%|█▌ | 8/50 [00:20<01:37, 2.31s/it]\n 18%|█▊ | 9/50 [00:22<01:34, 2.29s/it]\n 20%|██ | 10/50 [00:24<01:31, 2.28s/it]\n 22%|██▏ | 11/50 [00:26<01:28, 2.28s/it]\n 24%|██▍ | 12/50 [00:29<01:26, 2.27s/it]\n 26%|██▌ | 13/50 [00:31<01:24, 2.27s/it]\n 28%|██▊ | 14/50 [00:33<01:21, 2.27s/it]\n 30%|███ | 15/50 [00:36<01:19, 2.27s/it]\n 32%|███▏ | 16/50 [00:38<01:17, 2.27s/it]\n 34%|███▍ | 17/50 [00:40<01:14, 2.27s/it]\n 36%|███▌ | 18/50 [00:42<01:12, 2.27s/it]\n 38%|███▊ | 19/50 [00:45<01:10, 2.27s/it]\n 40%|████ | 20/50 [00:47<01:08, 2.27s/it]\n 42%|████▏ | 21/50 [00:49<01:05, 2.27s/it]\n 44%|████▍ | 22/50 [00:51<01:03, 2.28s/it]\n 46%|████▌ | 23/50 [00:54<01:01, 2.28s/it]\n 48%|████▊ | 24/50 [00:56<00:59, 2.28s/it]\n 50%|█████ | 25/50 [00:58<00:57, 2.28s/it]\n 52%|█████▏ | 26/50 [01:01<00:54, 2.28s/it]\n 54%|█████▍ | 27/50 [01:03<00:52, 2.28s/it]\n 56%|█████▌ | 28/50 [01:05<00:50, 2.28s/it]\n 58%|█████▊ | 29/50 [01:07<00:47, 2.28s/it]\n 60%|██████ | 30/50 [01:10<00:45, 2.28s/it]\n 62%|██████▏ | 31/50 [01:12<00:43, 2.28s/it]\n 64%|██████▍ | 32/50 [01:14<00:41, 2.28s/it]\n 66%|██████▌ | 33/50 [01:17<00:38, 2.28s/it]\n 68%|██████▊ | 34/50 [01:19<00:36, 2.28s/it]\n 70%|███████ | 35/50 [01:21<00:34, 2.28s/it]\n 72%|███████▏ | 36/50 [01:23<00:31, 2.28s/it]\n 74%|███████▍ | 37/50 [01:26<00:29, 2.28s/it]\n 76%|███████▌ | 38/50 [01:28<00:27, 2.28s/it]\n 78%|███████▊ | 39/50 [01:30<00:25, 2.28s/it]\n 80%|████████ | 40/50 [01:33<00:22, 2.29s/it]\n 82%|████████▏ | 41/50 [01:35<00:20, 2.29s/it]\n 84%|████████▍ | 42/50 [01:37<00:18, 2.29s/it]\n 86%|████████▌ | 43/50 [01:39<00:16, 2.29s/it]\n 88%|████████▊ | 44/50 [01:42<00:13, 2.29s/it]\n 90%|█████████ | 45/50 [01:44<00:11, 2.29s/it]\n 92%|█████████▏| 46/50 [01:46<00:09, 2.29s/it]\n 94%|█████████▍| 47/50 [01:49<00:06, 2.29s/it]\n 96%|█████████▌| 48/50 [01:51<00:04, 2.29s/it]\n 98%|█████████▊| 49/50 [01:53<00:02, 2.29s/it]\n100%|██████████| 50/50 [01:55<00:00, 2.29s/it]\n100%|██████████| 50/50 [01:55<00:00, 2.32s/it]", "metrics": { "predict_time": 121.539274185, "total_time": 226.7755 }, "output": "https://replicate.delivery/pbxt/F1Av6GG4i76iH9r58KBeP5LrWQkmMI8B7pCiddFWe72PCbtTA/out.png", "started_at": "2024-11-03T22:49:59.611225Z", "status": "succeeded", "urls": { "get": 
"https://api.replicate.com/v1/predictions/xn6ygbqh4xrgp0cjygv80rdjv8", "cancel": "https://api.replicate.com/v1/predictions/xn6ygbqh4xrgp0cjygv80rdjv8/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
Prediction
vectorspacelab/omnigen:af66691a
ID: 8ahxgzxswnrgg0cjygvat764fc · Status: Succeeded · Source: Web · Hardware: A40 (Large) · Created by @chenxwh
Input
- img1: https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png
- width: 1024
- height: 1024
- prompt: <img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.
- offload_model: false
- guidance_scale: 2.5
- inference_steps: 50
- img_guidance_scale: 1.6
- separate_cfg_infer: true
- max_input_image_size: 1024
- use_input_image_size_as_output: true
{ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", "width": 1024, "height": 1024, "prompt": "<img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", { input: { img1: "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", width: 1024, height: 1024, prompt: "<img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.", offload_model: false, guidance_scale: 2.5, inference_steps: 50, img_guidance_scale: 1.6, separate_cfg_infer: true, max_input_image_size: 1024, use_input_image_size_as_output: true } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "vectorspacelab/omnigen:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", input={ "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", "width": 1024, "height": 1024, "prompt": "<img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.", "offload_model": False, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": True, "max_input_image_size": 1024, "use_input_image_size_as_output": True } ) print(output)
To learn more, take a look at the guide on getting started with Python.
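To keep the generated image locally, the URL returned by the run can be downloaded with the standard library. A minimal sketch, assuming the output resolves to a single URL string like the out.png links in the example responses on this page:

import urllib.request

# Example output URL taken from one of the predictions shown on this page;
# in practice this would be the value returned by replicate.run(...) above
# (converted to a string if the client returns a file-like object).
image_url = "https://replicate.delivery/pbxt/Yqaueh60432MPqWU6pUdatZe7mHOvXrjGKek8K6rLIeoKs1OB/out.png"
urllib.request.urlretrieve(image_url, "out.png")
print("saved out.png")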
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run vectorspacelab/omnigen using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", "width": 1024, "height": 1024, "prompt": "<img><|image_1|><img> Remove the woman\'s earrings. Replace the mug with a clear glass filled with sparkling iced cola.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
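The Prefer: wait header asks the API to hold the request open until the prediction finishes, but long runs can still come back while the prediction is processing. In that case the prediction can be polled through the urls.get link included in every response; a minimal Python sketch using only the standard library (the prediction ID below is one of the examples on this page):

import json
import os
import time
import urllib.request

def get_prediction(get_url: str) -> dict:
    # Fetch the current state of a prediction from its `urls.get` endpoint.
    req = urllib.request.Request(
        get_url,
        headers={"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# `get_url` comes from the `urls.get` field of the creation response.
get_url = "https://api.replicate.com/v1/predictions/8ahxgzxswnrgg0cjygvat764fc"

prediction = get_prediction(get_url)
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)  # poll every couple of seconds
    prediction = get_prediction(get_url)

print(prediction["status"], prediction.get("output"))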
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run vectorspacelab/omnigen using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b \ -i 'img1="https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png"' \ -i 'width=1024' \ -i 'height=1024' \ -i $'prompt="<img><|image_1|><img> Remove the woman\'s earrings. Replace the mug with a clear glass filled with sparkling iced cola."' \ -i 'offload_model=false' \ -i 'guidance_scale=2.5' \ -i 'inference_steps=50' \ -i 'img_guidance_scale=1.6' \ -i 'separate_cfg_infer=true' \ -i 'max_input_image_size=1024' \ -i 'use_input_image_size_as_output=true'
To learn more, take a look at the Cog documentation.
Pull and run vectorspacelab/omnigen using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/vectorspacelab/omnigen@sha256:af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", "width": 1024, "height": 1024, "prompt": "<img><|image_1|><img> Remove the woman\'s earrings. Replace the mug with a clear glass filled with sparkling iced cola.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true } }' \ http://localhost:5000/predictions
Output
{ "completed_at": "2024-11-03T22:52:27.127618Z", "created_at": "2024-11-03T22:48:00.229000Z", "data_removed": false, "error": null, "id": "8ahxgzxswnrgg0cjygvat764fc", "input": { "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png", "width": 1024, "height": 1024, "prompt": "<img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.", "offload_model": false, "guidance_scale": 2.5, "inference_steps": 50, "img_guidance_scale": 1.6, "separate_cfg_infer": true, "max_input_image_size": 1024, "use_input_image_size_as_output": true }, "logs": "Using seed: 5102\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:05<04:05, 5.01s/it]\n 4%|▍ | 2/50 [00:07<02:47, 3.49s/it]\n 6%|▌ | 3/50 [00:09<02:19, 2.96s/it]\n 8%|▊ | 4/50 [00:12<02:04, 2.71s/it]\n 10%|█ | 5/50 [00:14<01:55, 2.57s/it]\n 12%|█▏ | 6/50 [00:16<01:49, 2.48s/it]\n 14%|█▍ | 7/50 [00:19<01:44, 2.43s/it]\n 16%|█▌ | 8/50 [00:21<01:40, 2.40s/it]\n 18%|█▊ | 9/50 [00:23<01:37, 2.38s/it]\n 20%|██ | 10/50 [00:26<01:34, 2.36s/it]\n 22%|██▏ | 11/50 [00:28<01:31, 2.36s/it]\n 24%|██▍ | 12/50 [00:30<01:29, 2.36s/it]\n 26%|██▌ | 13/50 [00:33<01:27, 2.37s/it]\n 28%|██▊ | 14/50 [00:35<01:25, 2.37s/it]\n 30%|███ | 15/50 [00:37<01:22, 2.36s/it]\n 32%|███▏ | 16/50 [00:40<01:20, 2.36s/it]\n 34%|███▍ | 17/50 [00:42<01:17, 2.35s/it]\n 36%|███▌ | 18/50 [00:44<01:15, 2.35s/it]\n 38%|███▊ | 19/50 [00:47<01:12, 2.34s/it]\n 40%|████ | 20/50 [00:49<01:09, 2.33s/it]\n 42%|████▏ | 21/50 [00:51<01:07, 2.33s/it]\n 44%|████▍ | 22/50 [00:54<01:05, 2.32s/it]\n 46%|████▌ | 23/50 [00:56<01:02, 2.32s/it]\n 48%|████▊ | 24/50 [00:58<01:00, 2.32s/it]\n 50%|█████ | 25/50 [01:01<00:57, 2.32s/it]\n 52%|█████▏ | 26/50 [01:03<00:55, 2.32s/it]\n 54%|█████▍ | 27/50 [01:05<00:53, 2.32s/it]\n 56%|█████▌ | 28/50 [01:08<00:50, 2.32s/it]\n 58%|█████▊ | 29/50 [01:10<00:48, 2.32s/it]\n 60%|██████ | 30/50 [01:12<00:46, 2.32s/it]\n 62%|██████▏ | 31/50 [01:14<00:43, 2.32s/it]\n 64%|██████▍ | 32/50 [01:17<00:41, 2.32s/it]\n 66%|██████▌ | 33/50 [01:19<00:39, 2.32s/it]\n 68%|██████▊ | 34/50 [01:21<00:37, 2.32s/it]\n 70%|███████ | 35/50 [01:24<00:34, 2.32s/it]\n 72%|███████▏ | 36/50 [01:26<00:32, 2.32s/it]\n 74%|███████▍ | 37/50 [01:28<00:30, 2.32s/it]\n 76%|███████▌ | 38/50 [01:31<00:27, 2.32s/it]\n 78%|███████▊ | 39/50 [01:33<00:25, 2.33s/it]\n 80%|████████ | 40/50 [01:35<00:23, 2.34s/it]\n 82%|████████▏ | 41/50 [01:38<00:21, 2.34s/it]\n 84%|████████▍ | 42/50 [01:40<00:18, 2.34s/it]\n 86%|████████▌ | 43/50 [01:42<00:16, 2.34s/it]\n 88%|████████▊ | 44/50 [01:45<00:14, 2.36s/it]\n 90%|█████████ | 45/50 [01:47<00:11, 2.38s/it]\n 92%|█████████▏| 46/50 [01:50<00:09, 2.40s/it]\n 94%|█████████▍| 47/50 [01:52<00:07, 2.42s/it]\n 96%|█████████▌| 48/50 [01:55<00:04, 2.44s/it]\n 98%|█████████▊| 49/50 [01:57<00:02, 2.44s/it]\n100%|██████████| 50/50 [01:59<00:00, 2.42s/it]\n100%|██████████| 50/50 [01:59<00:00, 2.40s/it]", "metrics": { "predict_time": 125.613674649, "total_time": 266.898618 }, "output": "https://replicate.delivery/pbxt/Yqaueh60432MPqWU6pUdatZe7mHOvXrjGKek8K6rLIeoKs1OB/out.png", "started_at": "2024-11-03T22:50:21.513944Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/8ahxgzxswnrgg0cjygvat764fc", "cancel": "https://api.replicate.com/v1/predictions/8ahxgzxswnrgg0cjygvat764fc/cancel" }, "version": "af66691a8952a0ce21b26e840835ad1efe176af159e10169ec5df6916338863b" }
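The same local endpoint used in the curl command above can also be called from Python. A minimal sketch, assuming the container started with the docker run command is listening on port 5000; depending on how the container is configured, the output field may be a hosted URL (as shown above) or an inline data URI:

import json
import urllib.request

payload = {
    "input": {
        "img1": "https://shitao-omnigen.hf.space/gradio_api/file=/tmp/gradio/23e02261d0d5dd416702fc5645b5192af904ddd5d83e6953913720c625b40306/t2i_woman_with_book.png",
        "prompt": "<img><|image_1|><img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola.",
        "width": 1024,
        "height": 1024,
        "guidance_scale": 2.5,
        "img_guidance_scale": 1.6,
        "inference_steps": 50,
        "separate_cfg_infer": True,
        "max_input_image_size": 1024,
        "use_input_image_size_as_output": True,
    }
}

req = urllib.request.Request(
    "http://localhost:5000/predictions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    prediction = json.load(resp)

# The response mirrors the prediction JSON shown above.
print(prediction.get("status"), prediction.get("output"))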
Want to make some of these yourself?
Run this model