fofr/style-transfer
Transfer the style of one image to another
Prediction
fofr/style-transfer:4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75
ID: 91dqgwy815rgg0cey2zbwynkgc
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- model: animated
- width: 1024
- height: 1024
- prompt: A sports car, dynamic, motion
- output_format: webp
- output_quality: 80
- negative_prompt: (empty)
- number_of_images: 1
{ "model": "animated", "width": 1024, "height": 1024, "prompt": "A sports car, dynamic, motion", "style_image": "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png", "output_format": "webp", "output_quality": 80, "negative_prompt": "", "number_of_images": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/style-transfer:4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75",
  {
    input: {
      model: "animated",
      width: 1024,
      height: 1024,
      prompt: "A sports car, dynamic, motion",
      style_image: "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png",
      output_format: "webp",
      output_quality: 80,
      negative_prompt: "",
      number_of_images: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
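If you would rather not block on replicate.run, the Node.js client also exposes a lower-level predictions API. A minimal sketch of creating a prediction and waiting on it separately, reusing the client configured above; treat the method names as illustrative and check your client version's documentation:

// Create the prediction without waiting for the model to finish.
const prediction = await replicate.predictions.create({
  version: "4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75",
  input: {
    model: "animated",
    prompt: "A sports car, dynamic, motion",
    style_image: "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png",
  },
});

// Wait for the prediction to reach a terminal state, then log the output URLs.
const completed = await replicate.wait(prediction);
console.log(completed.status, completed.output);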
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/style-transfer:4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75",
    input={
        "model": "animated",
        "width": 1024,
        "height": 1024,
        "prompt": "A sports car, dynamic, motion",
        "style_image": "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "",
        "number_of_images": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
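The output is a list of image URLs (see the Output section below). A minimal sketch for saving each image with the standard library, assuming the items are plain URL strings; newer client versions may return file-like objects instead, in which case write item.read() to disk:

import urllib.request

# "output" is the list returned by replicate.run above
for index, url in enumerate(output):
    filename = f"style-transfer-{index}.webp"
    urllib.request.urlretrieve(url, filename)  # download the image into the working directory
    print(f"Saved {filename}")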
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75",
    "input": {
      "model": "animated",
      "width": 1024,
      "height": 1024,
      "prompt": "A sports car, dynamic, motion",
      "style_image": "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png",
      "output_format": "webp",
      "output_quality": 80,
      "negative_prompt": "",
      "number_of_images": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
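The Prefer: wait header holds the request open until the prediction completes. If you drop it, the API responds immediately with a prediction record, and you can poll its "get" URL until the status is succeeded or failed. A minimal sketch, using the prediction ID from this example:

# Replace the ID with the "id" field from your create response.
curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/predictions/91dqgwy815rgg0cey2zbwynkgc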
Output
{ "completed_at": "2024-04-18T10:27:05.590240Z", "created_at": "2024-04-18T10:26:58.697000Z", "data_removed": false, "error": null, "id": "91dqgwy815rgg0cey2zbwynkgc", "input": { "model": "animated", "width": 1024, "height": 1024, "prompt": "A sports car, dynamic, motion", "style_image": "https://replicate.delivery/pbxt/KlgY7WKOOSyNAEKvx2KGfeHZeNlNvJfFKIYNkJ8r9bgUEyPu/tshirt_01829_.png", "output_format": "webp", "output_quality": 80, "negative_prompt": "", "number_of_images": 1 }, "logs": "Random seed set to: 4188587148\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\n✅ ip-adapter-plus_sdxl_vit-h.safetensors\n✅ starlightXLAnimated_v3.safetensors\n✅ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n 0%| | 0/20 [00:00<?, ?it/s]\n 5%|▌ | 1/20 [00:00<00:03, 5.08it/s]\n 10%|█ | 2/20 [00:00<00:03, 4.99it/s]\n 15%|█▌ | 3/20 [00:00<00:03, 5.00it/s]\n 20%|██ | 4/20 [00:00<00:03, 5.01it/s]\n 25%|██▌ | 5/20 [00:00<00:02, 5.01it/s]\n 30%|███ | 6/20 [00:01<00:02, 5.00it/s]\n 35%|███▌ | 7/20 [00:01<00:02, 5.01it/s]\n 40%|████ | 8/20 [00:01<00:02, 5.01it/s]\n 45%|████▌ | 9/20 [00:01<00:02, 5.01it/s]\n 50%|█████ | 10/20 [00:01<00:02, 5.00it/s]\n 55%|█████▌ | 11/20 [00:02<00:01, 4.98it/s]\n 60%|██████ | 12/20 [00:02<00:01, 4.99it/s]\n 65%|██████▌ | 13/20 [00:02<00:01, 5.00it/s]\n 70%|███████ | 14/20 [00:02<00:01, 5.01it/s]\n 75%|███████▌ | 15/20 [00:02<00:00, 5.01it/s]\n 80%|████████ | 16/20 [00:03<00:00, 5.01it/s]\n 85%|████████▌ | 17/20 [00:03<00:00, 5.00it/s]\n 90%|█████████ | 18/20 [00:03<00:00, 5.00it/s]\n 95%|█████████▌| 19/20 [00:03<00:00, 5.01it/s]\n100%|██████████| 20/20 [00:03<00:00, 5.03it/s]\n100%|██████████| 20/20 [00:03<00:00, 5.01it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 9, title: Save Image, class type: SaveImage\nPrompt executed in 4.41 seconds\noutputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png", "metrics": { "predict_time": 6.808624, "total_time": 6.89324 }, "output": [ "https://replicate.delivery/pbxt/lmceUmoXO30AQK5jWMug2UfD60TB8P29cREmR9GPmTd4dqrSA/ComfyUI_00001_.webp" ], "started_at": "2024-04-18T10:26:58.781616Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/91dqgwy815rgg0cey2zbwynkgc", "cancel": "https://api.replicate.com/v1/predictions/91dqgwy815rgg0cey2zbwynkgc/cancel" }, "version": "4f8304d9f7742fdacc13bf618f0984040cc6d765e0f7f94133db835a1893ff75" }
Prediction
fofr/style-transfer:b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8
ID: k95qy9gahdrgp0cexq38bkrrsr
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- width: 1024
- height: 1024
- prompt: An astronaut riding a unicorn
- negative_prompt: (empty)
- number_of_images: 1
- optimise_output_images: true
- optimise_output_images_quality: 80
{ "width": 1024, "height": 1024, "prompt": "An astronaut riding a unicorn", "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg", "negative_prompt": "", "number_of_images": 1, "optimise_output_images": true, "optimise_output_images_quality": 80 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/style-transfer:b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "An astronaut riding a unicorn",
      style_image: "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
      negative_prompt: "",
      number_of_images: 1,
      optimise_output_images: true,
      optimise_output_images_quality: 80
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/style-transfer:b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "An astronaut riding a unicorn",
        "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
        "negative_prompt": "",
        "number_of_images": 1,
        "optimise_output_images": True,
        "optimise_output_images_quality": 80
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
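The style_image input does not have to be a hosted URL: the Python client can upload a local file when you pass an open file handle. A minimal sketch, assuming a local copy of the style image named van-gogh.jpeg sits in the working directory:

import replicate

# The client uploads the file handle for inputs that expect a file or URL.
with open("van-gogh.jpeg", "rb") as style_file:
    output = replicate.run(
        "fofr/style-transfer:b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8",
        input={
            "prompt": "An astronaut riding a unicorn",
            "style_image": style_file,
            "number_of_images": 1,
        },
    )
print(output)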
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8",
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "An astronaut riding a unicorn",
      "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
      "negative_prompt": "",
      "number_of_images": 1,
      "optimise_output_images": true,
      "optimise_output_images_quality": 80
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
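Rather than holding the connection open with Prefer: wait, the predictions API can also call a webhook when the run finishes. A minimal sketch, assuming https://example.com/replicate-webhook is an HTTPS endpoint you control; double-check the current HTTP API reference for the exact webhook fields:

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "version": "b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8",
    "input": {
      "prompt": "An astronaut riding a unicorn",
      "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg"
    },
    "webhook": "https://example.com/replicate-webhook",
    "webhook_events_filter": ["completed"]
  }' \
  https://api.replicate.com/v1/predictions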
Output
{ "completed_at": "2024-04-17T20:37:25.866217Z", "created_at": "2024-04-17T20:36:02.827000Z", "data_removed": false, "error": null, "id": "k95qy9gahdrgp0cexq38bkrrsr", "input": { "width": 1024, "height": 1024, "prompt": "An astronaut riding a unicorn", "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg", "negative_prompt": "", "number_of_images": 1, "optimise_output_images": true, "optimise_output_images_quality": 80 }, "logs": "Random seed set to: 246154933\nChecking inputs\n✅ /tmp/inputs/image.png\n====================================\nRunning workflow\ngot prompt\nExecuting node 2, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nadm 2816\nUsing pytorch attention in VAE\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\nUsing pytorch attention in VAE\nmissing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}\nleft over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])\nloaded straight to GPU\nRequested to load SDXL\nLoading 1 new model\nExecuting node 1, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors\u001b[0m\nExecuting node 5, title: Load Image, class type: LoadImage\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 4, title: IPAdapter, class type: IPAdapter\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SDXLClipModel\nLoading 1 new model\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 10, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 3, title: KSampler, class type: KSampler\nRequested to load SDXL\nLoading 1 new model\nunload clone 2\n 0%| | 0/4 [00:00<?, ?it/s]/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.\nwarnings.warn(f\"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.\")\n 25%|██▌ | 1/4 [00:00<00:01, 1.67it/s]\n 50%|█████ | 2/4 [00:01<00:01, 1.95it/s]\n 75%|███████▌ | 3/4 [00:01<00:00, 2.14it/s]\n100%|██████████| 4/4 [00:01<00:00, 2.78it/s]\n100%|██████████| 4/4 [00:01<00:00, 2.41it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 9, title: Save Image, class type: SaveImage\nPrompt executed in 7.07 seconds\noutputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png", "metrics": { "predict_time": 9.354159, "total_time": 83.039217 }, "output": [ "https://replicate.delivery/pbxt/wmtBOf7pSlzHF6RBbeG5YpIXEYkRlGcoTpnOMi2Fqg9EUeWlA/ComfyUI_00001_.webp" ], "started_at": "2024-04-17T20:37:16.512058Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/k95qy9gahdrgp0cexq38bkrrsr", "cancel": 
"https://api.replicate.com/v1/predictions/k95qy9gahdrgp0cexq38bkrrsr/cancel" }, "version": "b85ce11e8afb98f6ee0a0237caf38345eb28be9ebb081aee66751716d5dbeca8" }
Prediction
fofr/style-transfer:8e579174a98cd09caca7e7a99fa2aaf4eaef16daf2003a3862c1af05c1c531c8
ID: r2r06vfv09rgp0cexrrrcbdtbc
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- width: 1024
- height: 1024
- prompt: An abstract black and white astronaut riding a unicorn, flat
- output_format: webp
- output_quality: 80
- negative_prompt: 3d
- number_of_images: 2
{ "width": 1024, "height": 1024, "prompt": "An abstract black and white astronaut riding a unicorn, flat", "style_image": "https://replicate.delivery/pbxt/KlVeVv3ttJmu35cFzssGNQiXvs6RGNfmwzud9kmwr2bUokkq/black-and-white-checkerboard-pattern-printable_268921.png", "output_format": "webp", "output_quality": 80, "negative_prompt": "3d", "number_of_images": 2 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/style-transfer:8e579174a98cd09caca7e7a99fa2aaf4eaef16daf2003a3862c1af05c1c531c8",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "An abstract black and white astronaut riding a unicorn, flat",
      style_image: "https://replicate.delivery/pbxt/KlVeVv3ttJmu35cFzssGNQiXvs6RGNfmwzud9kmwr2bUokkq/black-and-white-checkerboard-pattern-printable_268921.png",
      output_format: "webp",
      output_quality: 80,
      negative_prompt: "3d",
      number_of_images: 2
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/style-transfer:8e579174a98cd09caca7e7a99fa2aaf4eaef16daf2003a3862c1af05c1c531c8",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "An abstract black and white astronaut riding a unicorn, flat",
        "style_image": "https://replicate.delivery/pbxt/KlVeVv3ttJmu35cFzssGNQiXvs6RGNfmwzud9kmwr2bUokkq/black-and-white-checkerboard-pattern-printable_268921.png",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "3d",
        "number_of_images": 2
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
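With number_of_images set to 2, the list returned by replicate.run contains two items. In recent versions of the Python client each item is a file-like object; a minimal sketch for writing them out under that assumption (older client versions return plain URL strings, which you can download as shown in the first example above):

# "output" is the list returned by replicate.run above
for index, item in enumerate(output):
    with open(f"astronaut-{index}.webp", "wb") as f:
        f.write(item.read())  # file-like outputs expose the image bytes via .read()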
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "8e579174a98cd09caca7e7a99fa2aaf4eaef16daf2003a3862c1af05c1c531c8",
    "input": {
      "width": 1024,
      "height": 1024,
      "prompt": "An abstract black and white astronaut riding a unicorn, flat",
      "style_image": "https://replicate.delivery/pbxt/KlVeVv3ttJmu35cFzssGNQiXvs6RGNfmwzud9kmwr2bUokkq/black-and-white-checkerboard-pattern-printable_268921.png",
      "output_format": "webp",
      "output_quality": 80,
      "negative_prompt": "3d",
      "number_of_images": 2
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-04-17T22:34:02.235853Z", "created_at": "2024-04-17T22:33:56.738000Z", "data_removed": false, "error": null, "id": "r2r06vfv09rgp0cexrrrcbdtbc", "input": { "width": 1024, "height": 1024, "prompt": "An abstract black and white astronaut riding a unicorn, flat", "style_image": "https://replicate.delivery/pbxt/KlVeVv3ttJmu35cFzssGNQiXvs6RGNfmwzud9kmwr2bUokkq/black-and-white-checkerboard-pattern-printable_268921.png", "output_format": "webp", "output_quality": 80, "negative_prompt": "3d", "number_of_images": 2 }, "logs": "Random seed set to: 1038558568\nRunning workflow\ngot prompt\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n 0%| | 0/4 [00:00<?, ?it/s]\n 25%|██▌ | 1/4 [00:00<00:02, 1.27it/s]\n 50%|█████ | 2/4 [00:01<00:01, 1.22it/s]\n 75%|███████▌ | 3/4 [00:02<00:00, 1.26it/s]\n100%|██████████| 4/4 [00:02<00:00, 1.61it/s]\n100%|██████████| 4/4 [00:02<00:00, 1.46it/s]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 9, title: Save Image, class type: SaveImage\nPrompt executed in 3.50 seconds\noutputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}, {'filename': 'ComfyUI_00002_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png\nComfyUI_00002_.png", "metrics": { "predict_time": 5.457723, "total_time": 5.497853 }, "output": [ "https://replicate.delivery/pbxt/UKRKeGbTDczlbCuJoza6BjHXBb19oQfdUoSRxWxPSCxZBgrSA/ComfyUI_00001_.webp", "https://replicate.delivery/pbxt/vK81zNB12ZZdO9GIVhiSZkeot2iMk0eg3jPJFab7uc1ZBgrSA/ComfyUI_00002_.webp" ], "started_at": "2024-04-17T22:33:56.778130Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/r2r06vfv09rgp0cexrrrcbdtbc", "cancel": "https://api.replicate.com/v1/predictions/r2r06vfv09rgp0cexrrrcbdtbc/cancel" }, "version": "8e579174a98cd09caca7e7a99fa2aaf4eaef16daf2003a3862c1af05c1c531c8" }
Prediction
fofr/style-transfer:f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0
Input
- model: fast
- width: 1024
- height: 1024
- prompt: An astronaut riding a unicorn
- output_format: webp
- output_quality: 80
- negative_prompt: (empty)
- number_of_images: 1
- structure_depth_strength: 1
- structure_denoising_strength: 0.65
{ "model": "fast", "width": 1024, "height": 1024, "prompt": "An astronaut riding a unicorn", "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg", "output_format": "webp", "output_quality": 80, "negative_prompt": "", "number_of_images": 1, "structure_depth_strength": 1, "structure_denoising_strength": 0.65 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/style-transfer:f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0",
  {
    input: {
      model: "fast",
      width: 1024,
      height: 1024,
      prompt: "An astronaut riding a unicorn",
      style_image: "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
      output_format: "webp",
      output_quality: 80,
      negative_prompt: "",
      number_of_images: 1,
      structure_depth_strength: 1,
      structure_denoising_strength: 0.65
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/style-transfer:f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0",
    input={
        "model": "fast",
        "width": 1024,
        "height": 1024,
        "prompt": "An astronaut riding a unicorn",
        "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
        "output_format": "webp",
        "output_quality": 80,
        "negative_prompt": "",
        "number_of_images": 1,
        "structure_depth_strength": 1,
        "structure_denoising_strength": 0.65
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
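structure_denoising_strength appears to control how far the sampler may drift from the structure guidance, with lower values presumably staying closer to it; verify the exact behaviour against the model's schema. A minimal sketch for comparing a few settings in one go, reusing the version and inputs above:

import replicate

version = "fofr/style-transfer:f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0"
base_input = {
    "model": "fast",
    "prompt": "An astronaut riding a unicorn",
    "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
    "structure_depth_strength": 1,
}

# Run the model once per denoising strength and print the resulting output URLs.
for strength in (0.45, 0.65, 0.85):
    output = replicate.run(version, input={**base_input, "structure_denoising_strength": strength})
    print(strength, output)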
Run fofr/style-transfer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0",
    "input": {
      "model": "fast",
      "width": 1024,
      "height": 1024,
      "prompt": "An astronaut riding a unicorn",
      "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg",
      "output_format": "webp",
      "output_quality": 80,
      "negative_prompt": "",
      "number_of_images": 1,
      "structure_depth_strength": 1,
      "structure_denoising_strength": 0.65
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2024-11-28T11:15:29.481256Z", "created_at": "2024-11-28T11:15:23.572000Z", "data_removed": false, "error": null, "id": "t71z6mcv6hrma0cke9xrtyzm1w", "input": { "model": "fast", "width": 1024, "height": 1024, "prompt": "An astronaut riding a unicorn", "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg", "output_format": "webp", "output_quality": 80, "negative_prompt": "", "number_of_images": 1, "structure_depth_strength": 1, "structure_denoising_strength": 0.65 }, "logs": "Random seed set to: 1640868803\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\n✅ ip-adapter-plus_sdxl_vit-h.safetensors\n✅ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\n✅ dreamshaperXL_lightningDPMSDE.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 2, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load SDXL\nLoading 1 new model\nExecuting node 1, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors\u001b[0m\nExecuting node 5, title: Load Image, class type: LoadImage\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 4, title: IPAdapter, class type: IPAdapter\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SDXLClipModel\nLoading 1 new model\nExecuting node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 10, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 3, title: KSampler, class type: KSampler\nRequested to load SDXL\nLoading 1 new model\n 0%| | 0/4 [00:00<?, ?it/s]/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.\nwarnings.warn(f\"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.\")\n 25%|██▌ | 1/4 [00:00<00:01, 2.99it/s]\n 50%|█████ | 2/4 [00:00<00:00, 3.64it/s]\n 75%|███████▌ | 3/4 [00:00<00:00, 3.94it/s]\n100%|██████████| 4/4 [00:00<00:00, 5.05it/s]\n100%|██████████| 4/4 [00:00<00:00, 4.40it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 9, title: Save Image, class type: SaveImage\nPrompt executed in 5.15 seconds\noutputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nComfyUI_00001_.png", "metrics": { "predict_time": 5.8975421279999996, "total_time": 5.909256 }, "output": [ "https://replicate.delivery/xezq/OgPfUK3cTWSYF6ZKr30ItkZClxuuf3INMygNv48O47wRLg1TA/ComfyUI_00001_.webp" ], "started_at": "2024-11-28T11:15:23.583714Z", "status": "succeeded", "urls": { "stream": 
"https://stream.replicate.com/v1/files/bcwr-ylw2cq4qmhgzi5wukon7sd7n2euktdfpoled6mp6tqvfknt5xnqq", "get": "https://api.replicate.com/v1/predictions/t71z6mcv6hrma0cke9xrtyzm1w", "cancel": "https://api.replicate.com/v1/predictions/t71z6mcv6hrma0cke9xrtyzm1w/cancel" }, "version": "f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0" }