eladrich/pixel2style2pixel
A StyleGAN Encoder for Image-to-Image Translation
Prediction
eladrich/pixel2style2pixel:919ed2f7
Input
{ "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg", "model": "toonify" }
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
  {
    input: {
      image: "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      model: "toonify"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    input={
        "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
        "model": "toonify"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
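For this model version the output is a list of generated image files (see the example output below, where each entry carries a "file" URL). Here is a minimal sketch of saving the first result to disk; it assumes the output is a list of URL strings or URL-bearing dicts, so adjust the key access to whatever shape your client version actually returns:

import urllib.request

import replicate

output = replicate.run(
    "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    input={
        "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
        "model": "toonify",
    },
)

# Assumption: output is a list whose first entry is either a URL string
# or a dict with a "file" key, as in the example prediction output below.
first = output[0]
url = first["file"] if isinstance(first, dict) else str(first)

urllib.request.urlretrieve(url, "out.png")  # save the generated image locally
print("saved out.png from", url)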
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    "input": {
      "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      "model": "toonify"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
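If you would rather not block on the Prefer: wait header, you can create the prediction and then poll its urls.get endpoint until it reaches a terminal status. Below is a minimal Python sketch of that flow using only the standard library; the fields it reads (status, urls.get, output) match the example prediction object shown later in this page:

import json
import os
import time
import urllib.request

TOKEN = os.environ["REPLICATE_API_TOKEN"]
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

body = json.dumps({
    "version": "919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    "input": {
        "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
        "model": "toonify",
    },
}).encode()

# Create the prediction (no "Prefer: wait", so the response returns immediately).
req = urllib.request.Request(
    "https://api.replicate.com/v1/predictions", data=body, headers=HEADERS, method="POST"
)
prediction = json.load(urllib.request.urlopen(req))

# Poll the prediction's "get" URL until it finishes.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    poll = urllib.request.Request(prediction["urls"]["get"], headers=HEADERS)
    prediction = json.load(urllib.request.urlopen(poll))

print(prediction["status"], prediction.get("output"))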
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4 \
  -i 'image="https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg"' \
  -i 'model="toonify"'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      "model": "toonify"
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
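Once the container is running, the same request can be made from Python instead of cURL. A minimal sketch, assuming the local Cog server mirrors the hosted API and wraps results in a response with status and output fields (the output shape is taken from the example prediction below):

import json
import urllib.request

body = json.dumps({
    "input": {
        "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
        "model": "toonify",
    },
}).encode()

req = urllib.request.Request(
    "http://localhost:5000/predictions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
response = json.load(urllib.request.urlopen(req))

# Assumption: output is a list of {"file": <url-or-data-uri>} entries,
# as in the hosted example output shown below.
print(response.get("status"), response.get("output"))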
Output
{ "completed_at": "2021-09-14T22:01:27.122378Z", "created_at": "2021-09-14T22:01:17.248756Z", "data_removed": false, "error": null, "id": "epfjo3erfnfizagwhoqnv4a3hq", "input": { "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg", "model": "toonify" }, "logs": "Namespace(batch_size=8, board_interval=50, checkpoint_path='pretrained_models/psp_ffhq_toonify.pt', dataset_type='ffhq_encode', device='cuda:0', encoder_type='GradualStyleEncoder', exp_dir='', id_lambda=1.0, image_interval=100, input_nc=3, l2_lambda=1.0, l2_lambda_crop=0, label_nc=0, learn_in_w=False, learning_rate=0.0001, lpips_lambda=0.8, lpips_lambda_crop=0, max_steps='10000', optim_name='ranger', output_size=1024, resize_factors=None, save_interval=1000, start_from_latent_avg=True, stylegan_weights='', test_batch_size=8, test_workers=8, train_decoder=False, val_interval=1000, w_norm_lambda=0.025, workers=8)\nLoading pSp from checkpoint: pretrained_models/psp_ffhq_toonify.pt\nModel successfully loaded!\nAligned image has shape: (256, 256)", "metrics": { "total_time": 9.873622 }, "output": [ { "file": "https://replicate.delivery/mgxm/ad94def7-0808-4f65-ab1f-c3e157df82cd/out.png" } ], "started_at": "2021-11-30T17:26:23.902218Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/epfjo3erfnfizagwhoqnv4a3hq", "cancel": "https://api.replicate.com/v1/predictions/epfjo3erfnfizagwhoqnv4a3hq/cancel" }, "version": "a9e25b9995d98612afc75a3dd98446976aec34251843b36a7b34d7e272d170cf" }
Prediction
eladrich/pixel2style2pixel:919ed2f7
Input
{ "image": "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg", "model": "celebs_super_resolution" }
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
  {
    input: {
      image: "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg",
      model: "celebs_super_resolution"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    input={
        "image": "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg",
        "model": "celebs_super_resolution"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
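Instead of a hosted URL, the Python client can also take an open file handle for file inputs, which is handy when the low-resolution image is already on disk. A minimal sketch, assuming a local file named low_res.jpg (the filename is just an example):

import replicate

# Assumption: "low_res.jpg" is a local face photo you want to upscale.
with open("low_res.jpg", "rb") as image_file:
    output = replicate.run(
        "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
        input={
            "image": image_file,
            "model": "celebs_super_resolution",
        },
    )

print(output)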
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    "input": {
      "image": "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg",
      "model": "celebs_super_resolution"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4 \
  -i 'image="https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg"' \
  -i 'model="celebs_super_resolution"'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg",
      "model": "celebs_super_resolution"
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2021-09-14T22:02:37.838206Z", "created_at": "2021-09-14T22:02:28.877862Z", "data_removed": false, "error": null, "id": "dbjtr7ioyrekxihjfp64ozxyau", "input": { "image": "https://replicate.delivery/mgxm/baef54d7-4c0f-4afb-9dc1-4b9151ffb062/input_img.jpg", "model": "celebs_super_resolution" }, "logs": "Namespace(batch_size=8, board_interval=50, checkpoint_path='pretrained_models/psp_celebs_super_resolution.pt', contrastive_lambda=0, dataset_type='celebs_super_resolution', device='cuda:0', encoder_type='GradualStyleEncoder', exp_dir='', id_lambda=0.1, image_interval=100, input_nc=3, l2_lambda=1.0, l2_lambda_crop=0, label_nc=0, learn_in_w=False, learning_rate=0.0001, lpips_lambda=0.8, lpips_lambda_crop=0, max_steps=100000, optim_name='ranger', output_size=1024, resize_factors='1,2,4,8,16,32', save_interval=10000, start_from_latent_avg=True, stylegan_weights='', test_batch_size=8, test_workers=8, train_decoder=False, val_interval=5000, w_norm_lambda=0.005, workers=8)\nLoading pSp from checkpoint: pretrained_models/psp_celebs_super_resolution.pt\nModel successfully loaded!\nAligned image has shape: (256, 256)", "metrics": { "total_time": 8.960344 }, "output": [ { "file": "https://replicate.delivery/mgxm/0fd79570-1c6b-4654-a44b-d10f31d6f91d/out.png" } ], "started_at": "2021-12-04T03:07:18.459805Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/dbjtr7ioyrekxihjfp64ozxyau", "cancel": "https://api.replicate.com/v1/predictions/dbjtr7ioyrekxihjfp64ozxyau/cancel" }, "version": "a9e25b9995d98612afc75a3dd98446976aec34251843b36a7b34d7e272d170cf" }
Prediction
eladrich/pixel2style2pixel:919ed2f7
ID sbs2vb2ocfastdct3vb6j5klqe · Status Succeeded · Source Web
Input
{ "image": "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg", "model": "celebs_sketch_to_face" }
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
  {
    input: {
      image: "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg",
      model: "celebs_sketch_to_face"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    input={
        "image": "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg",
        "model": "celebs_sketch_to_face"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    "input": {
      "image": "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg",
      "model": "celebs_sketch_to_face"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4 \
  -i 'image="https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg"' \
  -i 'model="celebs_sketch_to_face"'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg",
      "model": "celebs_sketch_to_face"
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
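The local server has no upload helper, so one way to send a sketch that only exists on your machine is to encode it as a base64 data URI, which Cog file inputs generally accept. A minimal sketch under that assumption, using a hypothetical local file named my_sketch.jpg:

import base64
import json
import urllib.request

# Assumption: "my_sketch.jpg" is a local hand-drawn sketch; the name is illustrative.
with open("my_sketch.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

body = json.dumps({
    "input": {"image": data_uri, "model": "celebs_sketch_to_face"},
}).encode()

req = urllib.request.Request(
    "http://localhost:5000/predictions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.load(urllib.request.urlopen(req)).get("output"))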
Output
{ "completed_at": "2021-09-14T22:03:10.606944Z", "created_at": "2021-09-14T22:03:03.850708Z", "data_removed": false, "error": null, "id": "sbs2vb2ocfastdct3vb6j5klqe", "input": { "image": "https://replicate.delivery/mgxm/b7d8e726-d982-412a-92ee-c9966c94d210/input_sketch.jpg", "model": "celebs_sketch_to_face" }, "logs": "Namespace(batch_size=8, board_interval=50, checkpoint_path='pretrained_models/psp_celebs_sketch_to_face.pt', dataset_type='celebs_sketch_to_face', device='cuda:0', encoder_type='GradualStyleEncoder', exp_dir='', id_lambda=0.0, image_interval=100, input_nc=1, l2_lambda=1.0, l2_lambda_crop=0, label_nc=1, learn_in_w=False, learning_rate=0.0001, lpips_lambda=0.8, lpips_lambda_crop=0, max_steps=80000, optim_name='ranger', output_size=1024, resize_factors=None, save_interval=10000, start_from_latent_avg=True, stylegan_weights='', test_batch_size=8, test_workers=8, train_decoder=False, val_interval=5000, w_norm_lambda=0.005, workers=8)\nLoading pSp from checkpoint: pretrained_models/psp_celebs_sketch_to_face.pt\nModel successfully loaded!", "metrics": { "total_time": 6.756236 }, "output": [ { "file": "https://replicate.delivery/mgxm/2209a3ab-507e-4759-bfbe-9332d9b31c71/out.png" } ], "started_at": "2021-11-30T20:56:19.362776Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/sbs2vb2ocfastdct3vb6j5klqe", "cancel": "https://api.replicate.com/v1/predictions/sbs2vb2ocfastdct3vb6j5klqe/cancel" }, "version": "a9e25b9995d98612afc75a3dd98446976aec34251843b36a7b34d7e272d170cf" }
Prediction
eladrich/pixel2style2pixel:919ed2f7
ID 4j4seh7d2jhflom3egej7ldbk4 · Status Succeeded · Source Web
Input
{ "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg", "model": "ffhq_frontalize" }
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
  {
    input: {
      image: "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      model: "ffhq_frontalize"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    input={
        "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
        "model": "ffhq_frontalize"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
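The same input image can be run through each of the pretrained heads shown on this page (toonify, celebs_super_resolution, celebs_sketch_to_face, ffhq_frontalize) simply by changing the model input. A minimal sketch comparing the face-photo modes side by side; the sketch-to-face head expects a line drawing rather than a photo, so it is left out of this loop:

import replicate

IMAGE = "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg"

# Modes that take an aligned face photo as input; "celebs_sketch_to_face"
# expects a line drawing instead, so it is skipped in this comparison.
for mode in ("toonify", "celebs_super_resolution", "ffhq_frontalize"):
    output = replicate.run(
        "eladrich/pixel2style2pixel:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
        input={"image": IMAGE, "model": mode},
    )
    print(mode, output)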
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run eladrich/pixel2style2pixel using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4",
    "input": {
      "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      "model": "ffhq_frontalize"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4 \
  -i 'image="https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg"' \
  -i 'model="ffhq_frontalize"'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/eladrich/pixel2style2pixel@sha256:919ed2f7b6c5c24f3a53207842b61b6eba515136bd7bb9ffa75e01e970609cc4
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg",
      "model": "ffhq_frontalize"
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2021-09-14T22:10:55.685835Z", "created_at": "2021-09-14T22:10:47.388254Z", "data_removed": false, "error": null, "id": "4j4seh7d2jhflom3egej7ldbk4", "input": { "image": "https://replicate.delivery/mgxm/7935db96-ae91-440f-8c75-b94bd6315d79/input_img.jpg", "model": "ffhq_frontalize" }, "logs": "Namespace(batch_size=8, board_interval=50, checkpoint_path='pretrained_models/psp_ffhq_frontalization.pt', dataset_type='ffhq_frontalize', device='cuda:0', encoder_type='GradualStyleEncoder', exp_dir='', id_lambda=1.0, image_interval=100, input_nc=3, l2_lambda=0.001, l2_lambda_crop=0.01, label_nc=0, learn_in_w=False, learning_rate=0.0001, lpips_lambda=0.08, lpips_lambda_crop=0.8, max_steps=80000, optim_name='ranger', output_size=1024, resize_factors=None, save_interval=5000, start_from_latent_avg=True, stylegan_weights='', test_batch_size=8, test_workers=2, train_decoder=False, val_interval=2500, w_norm_lambda=0.005, workers=8)\nLoading pSp from checkpoint: pretrained_models/psp_ffhq_frontalization.pt\nModel successfully loaded!\nAligned image has shape: (256, 256)", "metrics": { "total_time": 8.297581 }, "output": [ { "file": "https://replicate.delivery/mgxm/281550ae-e8ee-4288-b0bd-26cde320a151/out.png" } ], "started_at": "2021-12-08T07:29:10.466197Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/4j4seh7d2jhflom3egej7ldbk4", "cancel": "https://api.replicate.com/v1/predictions/4j4seh7d2jhflom3egej7ldbk4/cancel" }, "version": "a9e25b9995d98612afc75a3dd98446976aec34251843b36a7b34d7e272d170cf" }
Want to make some of these yourself?
Run this model