paper11667/clipstyler
Image Style Transfer with Text Condition
Prediction
paper11667/clipstyler:f2d6b24e
ID: 2dthoruewrg5vjugeeb7xpiusq · Status: Succeeded · Source: Web

Input
{ "text": "mosaic", "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg", "iterations": 200 }
Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "mosaic",
      image: "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg",
      iterations: 200
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "mosaic",
        "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg",
        "iterations": 200
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
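For this model, the output is a list of objects, each carrying a file URL (see the Output records on this page). A minimal sketch of turning that list into distinct local filenames; the helper name local_name and the numbering scheme are illustrative, and the sample list is copied from the first prediction below:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def local_name(entry, index):
    """Derive a unique local filename from one output entry.

    All of clipstyler's intermediate results are named out.png, so an
    index prefix keeps them from overwriting each other.
    """
    path = PurePosixPath(urlparse(entry["file"]).path)
    return f"{index:02d}_{path.name}"

output = [
    {"file": "https://replicate.delivery/mgxm/ab52826d-4f8f-49ad-a979-a5ee9811d44d/out.png"},
    {"file": "https://replicate.delivery/mgxm/991b197b-05c0-4677-897d-404db6d72d5c/out.png"},
]
names = [local_name(entry, i) for i, entry in enumerate(output)]
print(names)  # ['00_out.png', '01_out.png']
```

From here, each URL could be fetched and written to its derived filename with any HTTP client.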
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "mosaic",
      "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg",
      "iterations": 200
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
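The Prefer: wait header asks the API to hold the connection until the prediction finishes; without it, you get back a prediction record immediately and poll its urls.get link until the status is terminal. A small sketch of that decision step, using field names from the prediction JSON shown on this page; the terminal-status set and the helper name next_action are assumptions, and the HTTP call itself is left out:

```python
import json

# Statuses after which polling can stop (assumed terminal set).
TERMINAL = {"succeeded", "failed", "canceled"}

def next_action(record):
    """Return ("done", output) for a finished prediction, else ("poll", get_url)."""
    if record["status"] in TERMINAL:
        return "done", record.get("output")
    return "poll", record["urls"]["get"]

# Trimmed-down record mirroring the Output JSON below.
record = json.loads("""
{"status": "succeeded",
 "output": [{"file": "https://replicate.delivery/mgxm/ab52826d-4f8f-49ad-a979-a5ee9811d44d/out.png"}],
 "urls": {"get": "https://api.replicate.com/v1/predictions/2dthoruewrg5vjugeeb7xpiusq"}}
""")
print(next_action(record)[0])  # done
```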
Install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="mosaic"' \
  -i 'image="https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg"' \
  -i 'iterations=200'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "mosaic",
      "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg",
      "iterations": 200
    }
  }' \
  http://localhost:5000/predictions
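The same local request can be assembled programmatically. A sketch that builds (but does not send) the request the curl command issues against the local Cog container; localhost:5000 is the port mapped in the docker run command above:

```python
import json
import urllib.request

# Payload matching the local /predictions endpoint's expected shape.
payload = {
    "input": {
        "text": "mosaic",
        "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg",
        "iterations": 200,
    }
}

req = urllib.request.Request(
    "http://localhost:5000/predictions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method, req.full_url)  # POST http://localhost:5000/predictions
# urllib.request.urlopen(req) would submit it once the container is running.
```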
Output
{ "completed_at": "2021-12-05T15:06:49.363516Z", "created_at": "2021-12-05T15:05:16.882070Z", "data_removed": false, "error": null, "id": "2dthoruewrg5vjugeeb7xpiusq", "input": { "text": "mosaic", "image": "https://replicate.delivery/mgxm/6e632d27-d0d3-4310-95bf-dd941dbb415e/destinations-london-banner-mobile-1024x553.jpeg", "iterations": 200 }, "logs": "After 0 iterations\nTotal loss: 10951.251953125\nContent loss: 14.45604133605957\npatch loss: 0.92236328125\ndir loss: 0.95654296875\nTV loss: 0.5960552096366882\nAfter 20 iterations\nTotal loss: 8567.00390625\nContent loss: 6.422166347503662\npatch loss: 0.81201171875\ndir loss: 0.58984375\nTV loss: 0.6782550811767578\nAfter 40 iterations\nTotal loss: 7456.77734375\nContent loss: 4.979325294494629\npatch loss: 0.7197265625\ndir loss: 0.46630859375\nTV loss: 0.7536323666572571\nAfter 60 iterations\nTotal loss: 5633.82666015625\nContent loss: 4.275139331817627\npatch loss: 0.53173828125\ndir loss: 0.41552734375\nTV loss: 0.8055071830749512\nAfter 80 iterations\nTotal loss: 4716.60498046875\nContent loss: 3.7552905082702637\npatch loss: 0.444091796875\ndir loss: 0.31298828125\nTV loss: 0.8117271661758423\nAfter 100 iterations\nTotal loss: 3482.844970703125\nContent loss: 3.0841939449310303\npatch loss: 0.321044921875\ndir loss: 0.2587890625\nTV loss: 0.8408981561660767\nAfter 120 iterations\nTotal loss: 3466.21240234375\nContent loss: 2.606598377227783\npatch loss: 0.329345703125\ndir loss: 0.220703125\nTV loss: 0.8477326035499573\nAfter 140 iterations\nTotal loss: 3402.08837890625\nContent loss: 2.3590710163116455\npatch loss: 0.325439453125\ndir loss: 0.23876953125\nTV loss: 0.8526958227157593\nAfter 160 iterations\nTotal loss: 2670.8544921875\nContent loss: 2.2865982055664062\npatch loss: 0.248046875\ndir loss: 0.18994140625\nTV loss: 0.8646764159202576\nAfter 180 iterations\nTotal loss: 2765.137939453125\nContent loss: 2.134284019470215\npatch loss: 0.260498046875\ndir loss: 0.2001953125\nTV loss: 
0.8703643083572388\nAfter 200 iterations\nTotal loss: 2597.854736328125\nContent loss: 2.0453529357910156\npatch loss: 0.244873046875\ndir loss: 0.17236328125\nTV loss: 0.864228367805481", "metrics": { "predict_time": 91.925876, "total_time": 92.481446 }, "output": [ { "file": "https://replicate.delivery/mgxm/ab52826d-4f8f-49ad-a979-a5ee9811d44d/out.png" }, { "file": "https://replicate.delivery/mgxm/991b197b-05c0-4677-897d-404db6d72d5c/out.png" }, { "file": "https://replicate.delivery/mgxm/6468d5dc-e3bd-4f22-ba3b-250a315886d9/out.png" }, { "file": "https://replicate.delivery/mgxm/26df871f-992c-41d5-8939-1a1c8cfc66a3/out.png" }, { "file": "https://replicate.delivery/mgxm/266a5793-6bd8-46d9-823a-8a226772ff73/out.png" }, { "file": "https://replicate.delivery/mgxm/9dde688a-cc9e-4219-910d-29a4e8885f53/out.png" }, { "file": "https://replicate.delivery/mgxm/5c5b4b2d-3384-431c-afe2-f67ca0de00a8/out.png" }, { "file": "https://replicate.delivery/mgxm/06f5033f-5d9b-4830-83ee-0fe1c5b31456/out.png" }, { "file": "https://replicate.delivery/mgxm/e806fbfd-d899-45c1-8482-dd97f45eb4d4/out.png" }, { "file": "https://replicate.delivery/mgxm/3cebb68d-2a26-4e12-bd4d-89ac8e661c0a/out.png" }, { "file": "https://replicate.delivery/mgxm/58c3d06c-6269-4e64-a26f-269ee7113cbe/out.png" } ], "started_at": "2021-12-05T15:05:17.437640Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/2dthoruewrg5vjugeeb7xpiusq", "cancel": "https://api.replicate.com/v1/predictions/2dthoruewrg5vjugeeb7xpiusq/cancel" }, "version": "847f902b2e3b7d34dc7e500ec7b03eb574a28610fdddf87d527e26e7c4286016" }
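The prediction logs above can be reduced to a loss curve to confirm the style optimization is converging. A sketch using a truncated copy of the logged values, with regexes matched to the "After N iterations / Total loss: …" format:

```python
import re

# Truncated copy of the logged values from the 200-iteration run above.
LOG = """After 0 iterations
Total loss: 10951.251953125
After 100 iterations
Total loss: 3482.844970703125
After 200 iterations
Total loss: 2597.854736328125"""

def loss_curve(text):
    """Extract (iteration, total loss) pairs from a clipstyler log."""
    iters = [int(m) for m in re.findall(r"After (\d+) iterations", text)]
    losses = [float(m) for m in re.findall(r"Total loss: ([\d.]+)", text)]
    return list(zip(iters, losses))

curve = loss_curve(LOG)
print(curve[-1])  # (200, 2597.854736328125)
```

On this sampled subset the total loss falls monotonically; the full log shows small bumps (e.g. around iteration 180), which is normal for this kind of per-image optimization.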
Prediction
paper11667/clipstyler:f2d6b24e
ID: lkdgdhgdxffglnarzddhftbg2u · Status: Succeeded · Source: Web

Input
{ "text": "Oil painting", "image": "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg", "iterations": "100" }
Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "Oil painting",
      image: "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg",
      iterations: "100"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "Oil painting",
        "image": "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg",
        "iterations": "100"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "Oil painting",
      "image": "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg",
      "iterations": "100"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="Oil painting"' \
  -i 'image="https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg"' \
  -i 'iterations="100"'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "Oil painting",
      "image": "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg",
      "iterations": "100"
    }
  }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2021-12-07T04:42:17.677779Z", "created_at": "2021-12-07T04:41:32.477407Z", "data_removed": false, "error": null, "id": "lkdgdhgdxffglnarzddhftbg2u", "input": { "text": "Oil painting", "image": "https://replicate.delivery/mgxm/470ec29f-8082-4f39-ac6d-d43d87746609/bair.jpeg", "iterations": "100" }, "logs": "/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\n warnings.warn(\"Detected call of `lr_scheduler.step()` before `optimizer.step()`. \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py:876: UserWarning: Argument fill/fillcolor is not supported for Tensor input. Fill value is zero\n warnings.warn(\"Argument fill/fillcolor is not supported for Tensor input. 
Fill value is zero\")\nAfter 0 iterations\nTotal loss: 10358.33203125\nContent loss: 6.562626838684082\npatch loss: 0.984375\ndir loss: 1.03515625\nTV loss: 0.43746981024742126\nAfter 20 iterations\nTotal loss: 8604.3720703125\nContent loss: 3.967160940170288\npatch loss: 0.85302734375\ndir loss: 0.666015625\nTV loss: 0.2975453734397888\nAfter 40 iterations\nTotal loss: 8188.57177734375\nContent loss: 3.2917633056640625\npatch loss: 0.82470703125\ndir loss: 0.541015625\nTV loss: 0.3072921633720398\nAfter 60 iterations\nTotal loss: 7892.453125\nContent loss: 2.9949536323547363\npatch loss: 0.7998046875\ndir loss: 0.48583984375\nTV loss: 0.3348064124584198\nAfter 80 iterations\nTotal loss: 7761.1201171875\nContent loss: 2.860962390899658\npatch loss: 0.791015625\ndir loss: 0.42333984375\nTV loss: 0.3505210876464844\nAfter 100 iterations\nTotal loss: 7593.8984375\nContent loss: 2.6860525608062744\npatch loss: 0.77734375\ndir loss: 0.38916015625\nTV loss: 0.3655411899089813", "metrics": { "predict_time": 44.539845, "total_time": 45.200372 }, "output": [ { "file": "https://replicate.delivery/mgxm/1a07ba63-84e7-40cf-9d0e-2c4fedbfab29/out.png" }, { "file": "https://replicate.delivery/mgxm/3b78fdfb-41c1-478a-8c3d-acf571d3f144/out.png" }, { "file": "https://replicate.delivery/mgxm/1db21bc6-3341-407d-960c-c86ec8da72af/out.png" }, { "file": "https://replicate.delivery/mgxm/ac329a81-68ae-4355-8ac0-cf5eeb00527d/out.png" }, { "file": "https://replicate.delivery/mgxm/59fc5f5c-f39e-466c-b7c8-25387b613baa/out.png" } ], "started_at": "2021-12-07T04:41:33.137934Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/lkdgdhgdxffglnarzddhftbg2u", "cancel": "https://api.replicate.com/v1/predictions/lkdgdhgdxffglnarzddhftbg2u/cancel" }, "version": "8618820dc559b8dbf460ed11379af18d695eb607ea684f7a79a7b8b8b470ebec" }
Prediction
paper11667/clipstyler:f2d6b24e

Input
{ "text": "sketch with crayon", "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg", "iterations": "100" }
Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "sketch with crayon",
      image: "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      iterations: "100"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "sketch with crayon",
        "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
        "iterations": "100"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "sketch with crayon",
      "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      "iterations": "100"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="sketch with crayon"' \
  -i 'image="https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg"' \
  -i 'iterations="100"'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "sketch with crayon",
      "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      "iterations": "100"
    }
  }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2021-12-05T15:00:38.792184Z", "created_at": "2021-12-05T14:59:30.323839Z", "data_removed": false, "error": null, "id": "qckvikgch5go7mhnqymnheheki", "input": { "text": "sketch with crayon", "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg", "iterations": "100" }, "logs": "/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\n warnings.warn(\"Detected call of `lr_scheduler.step()` before `optimizer.step()`. \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py:876: UserWarning: Argument fill/fillcolor is not supported for Tensor input. Fill value is zero\n warnings.warn(\"Argument fill/fillcolor is not supported for Tensor input. 
Fill value is zero\")\nAfter 0 iterations\nTotal loss: 9802.1005859375\nContent loss: 5.383391380310059\npatch loss: 0.9482421875\ndir loss: 0.91650390625\nTV loss: 0.3422693610191345\nAfter 20 iterations\nTotal loss: 8772.9951171875\nContent loss: 3.439800500869751\npatch loss: 0.87939453125\ndir loss: 0.681640625\nTV loss: 0.2753937542438507\nAfter 40 iterations\nTotal loss: 8309.794921875\nContent loss: 2.8816094398498535\npatch loss: 0.84228515625\ndir loss: 0.5947265625\nTV loss: 0.30363643169403076\nAfter 60 iterations\nTotal loss: 8068.50048828125\nContent loss: 2.774502754211426\npatch loss: 0.8193359375\ndir loss: 0.5517578125\nTV loss: 0.3253156840801239\nAfter 80 iterations\nTotal loss: 7695.32177734375\nContent loss: 2.5266687870025635\npatch loss: 0.78466796875\ndir loss: 0.50390625\nTV loss: 0.321286678314209\nAfter 100 iterations\nTotal loss: 7482.86083984375\nContent loss: 2.315229892730713\npatch loss: 0.7685546875\ndir loss: 0.4384765625\nTV loss: 0.32614612579345703", "metrics": { "predict_time": 42.086328, "total_time": 68.468345 }, "output": [ { "file": "https://replicate.delivery/mgxm/bdcd9c0c-09df-4ea7-8ad8-0efa3fa48eda/out.png" }, { "file": "https://replicate.delivery/mgxm/ea95e311-13b1-41e3-bfe1-0d2c9e6dfc5d/out.png" }, { "file": "https://replicate.delivery/mgxm/d4e354f6-e22b-4b70-abc8-63e92b0de85d/out.png" }, { "file": "https://replicate.delivery/mgxm/9f274601-c587-4d54-b841-f7f3af3f9636/out.png" }, { "file": "https://replicate.delivery/mgxm/59d6234b-3662-429d-93ba-637d849f1383/out.png" } ], "started_at": "2021-12-05T14:59:56.705856Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/qckvikgch5go7mhnqymnheheki", "cancel": "https://api.replicate.com/v1/predictions/qckvikgch5go7mhnqymnheheki/cancel" }, "version": "847f902b2e3b7d34dc7e500ec7b03eb574a28610fdddf87d527e26e7c4286016" }
Prediction
paper11667/clipstyler:f2d6b24e
ID: eeesssxwxnbsncc5v2a36hrgvy · Status: Succeeded · Source: Web

Input
{ "text": "green crystal", "image": "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg", "iterations": 200 }
Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "green crystal",
      image: "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg",
      iterations: 200
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "green crystal",
        "image": "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg",
        "iterations": 200
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "green crystal",
      "image": "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg",
      "iterations": 200
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="green crystal"' \
  -i 'image="https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg"' \
  -i 'iterations=200'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "green crystal",
      "image": "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg",
      "iterations": 200
    }
  }' \
  http://localhost:5000/predictions
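The local Cog container exposes the same predictions endpoint, so you can also call it from Python. A sketch using only the standard library (assumes the container from the docker run command above is listening on port 5000; the helper names are ours):

```python
# Sketch: POST an input payload to a locally running Cog model server.
import json
import urllib.request

def build_payload(inputs):
    """Wrap the model inputs in the {"input": ...} envelope Cog expects."""
    return json.dumps({"input": inputs}).encode()

def predict_local(inputs, host="http://localhost:5000"):
    """Send a prediction request to the local server and parse the reply."""
    req = urllib.request.Request(
        f"{host}/predictions",
        data=build_payload(inputs),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```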
Output
{ "completed_at": "2021-12-05T15:09:10.893873Z", "created_at": "2021-12-05T15:07:35.867625Z", "data_removed": false, "error": null, "id": "eeesssxwxnbsncc5v2a36hrgvy", "input": { "text": "green crystal", "image": "https://replicate.delivery/mgxm/40458c64-8b02-48eb-a3ab-77827967e044/face2.jpeg", "iterations": 200 }, "logs": "After 0 iterations\nTotal loss: 10208.4990234375\nContent loss: 6.972418785095215\npatch loss: 0.95947265625\ndir loss: 1.0595703125\nTV loss: 0.6359344124794006\nAfter 20 iterations\nTotal loss: 7959.51708984375\nContent loss: 4.503644943237305\npatch loss: 0.77001953125\ndir loss: 0.703125\nTV loss: 0.4700694680213928\nAfter 40 iterations\nTotal loss: 7513.33056640625\nContent loss: 3.164238452911377\npatch loss: 0.75146484375\ndir loss: 0.54833984375\nTV loss: 0.44487759470939636\nAfter 60 iterations\nTotal loss: 7343.89990234375\nContent loss: 2.6621737480163574\npatch loss: 0.7451171875\ndir loss: 0.47216796875\nTV loss: 0.4487256407737732\nAfter 80 iterations\nTotal loss: 7238.70166015625\nContent loss: 2.2532944679260254\npatch loss: 0.74365234375\ndir loss: 0.41650390625\nTV loss: 0.45729830861091614\nAfter 100 iterations\nTotal loss: 7170.22216796875\nContent loss: 2.085845470428467\npatch loss: 0.740234375\ndir loss: 0.3857421875\nTV loss: 0.4702468812465668\nAfter 120 iterations\nTotal loss: 7073.27392578125\nContent loss: 1.83698308467865\npatch loss: 0.736328125\ndir loss: 0.33837890625\nTV loss: 0.47658607363700867\nAfter 140 iterations\nTotal loss: 7063.70556640625\nContent loss: 1.7406328916549683\npatch loss: 0.73779296875\ndir loss: 0.32421875\nTV loss: 0.4857983887195587\nAfter 160 iterations\nTotal loss: 7047.20654296875\nContent loss: 1.719779133796692\npatch loss: 0.7353515625\ndir loss: 0.33740234375\nTV loss: 0.48977938294410706\nAfter 180 iterations\nTotal loss: 6999.2861328125\nContent loss: 1.636114239692688\npatch loss: 0.7333984375\ndir loss: 0.306640625\nTV loss: 0.4940940737724304\nAfter 200 iterations\nTotal loss: 
6977.45849609375\nContent loss: 1.5955630540847778\npatch loss: 0.73193359375\ndir loss: 0.29931640625\nTV loss: 0.49903395771980286", "metrics": { "predict_time": 94.45057, "total_time": 95.026248 }, "output": [ { "file": "https://replicate.delivery/mgxm/726e40d3-1c1c-47ac-b7f9-5f0bf048e18a/out.png" }, { "file": "https://replicate.delivery/mgxm/6464f40c-0af7-4956-8852-9821c1beb879/out.png" }, { "file": "https://replicate.delivery/mgxm/a21afa3c-18e7-45e5-998e-90fbecfbbf6f/out.png" }, { "file": "https://replicate.delivery/mgxm/bde8aab4-9559-45b4-b613-6114274ae18b/out.png" }, { "file": "https://replicate.delivery/mgxm/48cb1c4d-9e00-469e-a8d9-7b9bd9d336ab/out.png" }, { "file": "https://replicate.delivery/mgxm/4a9e59d5-3238-4a2a-94bb-3bcaf7fd44c2/out.png" }, { "file": "https://replicate.delivery/mgxm/ef14773f-5ca6-4055-89d1-60679ed9aede/out.png" }, { "file": "https://replicate.delivery/mgxm/cdc93acf-5ab4-49f8-9e5e-a222e5881096/out.png" }, { "file": "https://replicate.delivery/mgxm/de6f5591-521b-4a50-9c11-13e92caa5f8f/out.png" }, { "file": "https://replicate.delivery/mgxm/b3223021-95c5-466a-a39a-e09a2181f39a/out.png" } ], "started_at": "2021-12-05T15:07:36.443303Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/eeesssxwxnbsncc5v2a36hrgvy", "cancel": "https://api.replicate.com/v1/predictions/eeesssxwxnbsncc5v2a36hrgvy/cancel" }, "version": "847f902b2e3b7d34dc7e500ec7b03eb574a28610fdddf87d527e26e7c4286016" }
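The logged total loss is a weighted sum of the four logged components. With the default weights we believe the CLIPstyler training script uses (lambda_patch = 9000, lambda_dir = 500, lambda_c = 150, lambda_tv = 2e-3; check your copy of the repo, as these may differ across versions), the 200-iteration values in the log above recombine to the logged total within rounding:

```python
# Hedged check: recombine the logged loss components with what we believe
# are CLIPstyler's default loss weights. Component values are taken from
# the 200-iteration log entry above; the small residual gap is consistent
# with the components being logged at reduced (fp16-like) precision.
LAMBDA_C, LAMBDA_PATCH, LAMBDA_DIR, LAMBDA_TV = 150, 9000, 500, 2e-3

def total_loss(content, patch, direction, tv):
    return (LAMBDA_C * content + LAMBDA_PATCH * patch
            + LAMBDA_DIR * direction + LAMBDA_TV * tv)

recombined = total_loss(1.5955630540847778, 0.73193359375,
                        0.29931640625, 0.49903395771980286)
# recombined is ~6976.4, close to the logged total of 6977.458...
```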
Prediction
paper11667/clipstyler:f2d6b24e
ID: k2wqn3cd3rcnpgrsfih45k3lqq
Status: Succeeded
Source: Web
Hardware: –
Input:
{ "text": "white wool", "image": "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg", "iterations": 200 }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "white wool",
      image: "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg",
      iterations: 200
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "white wool",
        "image": "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg",
        "iterations": 200
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "white wool",
      "image": "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg",
      "iterations": 200
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="white wool"' \
  -i 'image="https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg"' \
  -i 'iterations=200'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "white wool",
      "image": "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg",
      "iterations": 200
    }
  }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2021-12-05T15:10:54.691347Z", "created_at": "2021-12-05T15:09:22.147490Z", "data_removed": false, "error": null, "id": "k2wqn3cd3rcnpgrsfih45k3lqq", "input": { "text": "white wool", "image": "https://replicate.delivery/mgxm/5c209a45-3cc3-42df-abb6-9afe97f88990/boat.jpg", "iterations": 200 }, "logs": "After 0 iterations\nTotal loss: 10853.8896484375\nContent loss: 7.111127853393555\npatch loss: 1.029296875\ndir loss: 1.044921875\nTV loss: 0.7209429144859314\nAfter 20 iterations\nTotal loss: 8929.1240234375\nContent loss: 4.3252763748168945\npatch loss: 0.8759765625\ndir loss: 0.79150390625\nTV loss: 0.5834510326385498\nAfter 40 iterations\nTotal loss: 7946.7412109375\nContent loss: 3.7231082916259766\npatch loss: 0.783203125\ndir loss: 0.6796875\nTV loss: 0.5251058340072632\nAfter 60 iterations\nTotal loss: 7643.06982421875\nContent loss: 3.213907241821289\npatch loss: 0.76123046875\ndir loss: 0.6171875\nTV loss: 0.48408231139183044\nAfter 80 iterations\nTotal loss: 7480.6875\nContent loss: 2.8429744243621826\npatch loss: 0.751953125\ndir loss: 0.5712890625\nTV loss: 0.4911484718322754\nAfter 100 iterations\nTotal loss: 7339.08642578125\nContent loss: 2.647282123565674\npatch loss: 0.7412109375\ndir loss: 0.5390625\nTV loss: 0.4940810799598694\nAfter 120 iterations\nTotal loss: 7218.22119140625\nContent loss: 2.318110227584839\npatch loss: 0.7353515625\ndir loss: 0.5\nTV loss: 0.5046759843826294\nAfter 140 iterations\nTotal loss: 7015.40283203125\nContent loss: 2.206007957458496\npatch loss: 0.7158203125\ndir loss: 0.47998046875\nTV loss: 0.5016579031944275\nAfter 160 iterations\nTotal loss: 7164.42578125\nContent loss: 2.1319518089294434\npatch loss: 0.73486328125\ndir loss: 0.46435546875\nTV loss: 0.5076705813407898\nAfter 180 iterations\nTotal loss: 6762.08935546875\nContent loss: 2.078019142150879\npatch loss: 0.69140625\ndir loss: 0.45166015625\nTV loss: 0.5111395120620728\nAfter 200 iterations\nTotal loss: 7038.11083984375\nContent loss: 
1.9989449977874756\npatch loss: 0.724609375\ndir loss: 0.435546875\nTV loss: 0.519220232963562", "metrics": { "predict_time": 91.996955, "total_time": 92.543857 }, "output": [ { "file": "https://replicate.delivery/mgxm/69e3a051-eb52-451b-95b4-c49e6a021e68/out.png" }, { "file": "https://replicate.delivery/mgxm/b95f78c4-5e2c-444e-aa05-cd24eef597f3/out.png" }, { "file": "https://replicate.delivery/mgxm/2374e8d9-ba6b-4c9f-98b6-ff8e7d73614d/out.png" }, { "file": "https://replicate.delivery/mgxm/b06d8ceb-2adb-4787-924e-4d51c89c45cc/out.png" }, { "file": "https://replicate.delivery/mgxm/e22c02d4-231e-461c-aeb7-185de243d0e8/out.png" }, { "file": "https://replicate.delivery/mgxm/b3f8c7f4-fbe4-4d2a-a762-5573ca44aaf0/out.png" }, { "file": "https://replicate.delivery/mgxm/fb4c9e2e-33c7-48fe-a689-0b9c797f8794/out.png" }, { "file": "https://replicate.delivery/mgxm/b9e6712c-b19f-4a11-acfd-26024b046677/out.png" }, { "file": "https://replicate.delivery/mgxm/b454a607-5315-4f6c-97d2-0a09da2ca99b/out.png" }, { "file": "https://replicate.delivery/mgxm/2419aeb7-076c-4593-9a2e-b6433713febb/out.png" } ], "started_at": "2021-12-05T15:09:22.694392Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/k2wqn3cd3rcnpgrsfih45k3lqq", "cancel": "https://api.replicate.com/v1/predictions/k2wqn3cd3rcnpgrsfih45k3lqq/cancel" }, "version": "847f902b2e3b7d34dc7e500ec7b03eb574a28610fdddf87d527e26e7c4286016" }
Prediction
paper11667/clipstyler:f2d6b24e
ID: klfiw47665byzf6phxhnrqjpye
Status: Succeeded
Source: Web
Hardware: –
Input:
{ "text": "Flame face", "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg", "iterations": "100" }
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
  {
    input: {
      text: "Flame face",
      image: "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      iterations: "100"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "paper11667/clipstyler:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    input={
        "text": "Flame face",
        "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
        "iterations": "100"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run paper11667/clipstyler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1",
    "input": {
      "text": "Flame face",
      "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      "iterations": "100"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Pull and run paper11667/clipstyler using Cog (this will download the full model and run it in your local environment):
cog predict r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1 \
  -i 'text="Flame face"' \
  -i 'image="https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg"' \
  -i 'iterations="100"'
To learn more, take a look at the Cog documentation.
Pull and run paper11667/clipstyler using Docker (this will download the full model and run it in your local environment):
docker run -d -p 5000:5000 --gpus=all r8.im/paper11667/clipstyler@sha256:f2d6b24e6002f25f77ae89c2b0a5987daa6d0bf751b858b94b8416e8542434d1
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "text": "Flame face",
      "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg",
      "iterations": "100"
    }
  }' \
  http://localhost:5000/predictions
Output
{ "completed_at": "2021-12-07T04:23:38.473534Z", "created_at": "2021-12-07T04:21:39.407682Z", "data_removed": false, "error": null, "id": "klfiw47665byzf6phxhnrqjpye", "input": { "text": "Flame face", "image": "https://replicate.delivery/mgxm/e4500aa0-f71b-42ff-a540-aadb44c8d1b2/face.jpg", "iterations": "100" }, "logs": "/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\n warnings.warn(\"Detected call of `lr_scheduler.step()` before `optimizer.step()`. \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.\n warnings.warn(\"Default upsampling behavior when mode={} is changed \"\n/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py:876: UserWarning: Argument fill/fillcolor is not supported for Tensor input. Fill value is zero\n warnings.warn(\"Argument fill/fillcolor is not supported for Tensor input. 
Fill value is zero\")\nAfter 0 iterations\nTotal loss: 10293.490234375\nContent loss: 6.156279563903809\npatch loss: 0.98876953125\ndir loss: 0.947265625\nTV loss: 0.29833361506462097\nAfter 20 iterations\nTotal loss: 8733.28125\nContent loss: 3.4643611907958984\npatch loss: 0.87060546875\ndir loss: 0.7548828125\nTV loss: 0.12711727619171143\nAfter 40 iterations\nTotal loss: 8304.1875\nContent loss: 2.5820446014404297\npatch loss: 0.841796875\ndir loss: 0.681640625\nTV loss: 0.1308639943599701\nAfter 60 iterations\nTotal loss: 8117.97314453125\nContent loss: 2.207214117050171\npatch loss: 0.83203125\ndir loss: 0.59765625\nTV loss: 0.14130297303199768\nAfter 80 iterations\nTotal loss: 7969.0439453125\nContent loss: 2.0092592239379883\npatch loss: 0.8232421875\ndir loss: 0.51904296875\nTV loss: 0.1552249789237976\nAfter 100 iterations\nTotal loss: 7850.5068359375\nContent loss: 1.947278380393982\npatch loss: 0.81298828125\ndir loss: 0.484375\nTV loss: 0.16510261595249176", "metrics": { "predict_time": 43.586527, "total_time": 119.065852 }, "output": [ { "file": "https://replicate.delivery/mgxm/05287d20-bdf3-413b-a203-8e1f6de10bda/out.png" }, { "file": "https://replicate.delivery/mgxm/1d310fb4-0bee-462f-9370-6254d41c219a/out.png" }, { "file": "https://replicate.delivery/mgxm/e8249b58-d0b2-438e-93cc-c5337b2f91e7/out.png" }, { "file": "https://replicate.delivery/mgxm/74c73465-ae02-4f82-9b52-625d2f5ea597/out.png" }, { "file": "https://replicate.delivery/mgxm/f2ff583c-db8b-4acc-b9cd-9ae2d543cc3f/out.png" } ], "started_at": "2021-12-07T04:22:54.887007Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/klfiw47665byzf6phxhnrqjpye", "cancel": "https://api.replicate.com/v1/predictions/klfiw47665byzf6phxhnrqjpye/cancel" }, "version": "8618820dc559b8dbf460ed11379af18d695eb607ea684f7a79a7b8b8b470ebec" }
Want to make some of these yourself?
Run this model