prakharsaxena24/masked-upscaler
Upscaler and detailer for a selected area
- Public
- 4.8K runs
- A100 (80GB)
Prediction
prakharsaxena24/masked-upscaler:0e864cd4844ac63d862efd3468e4c55219066351009db73833ad67f98c5eaefb
- ID: y9q613he61rgp0cfvy6sqcwjkc
- Status: Succeeded
- Source: Web
- Hardware: A100 (40GB)
- Total duration: 132.4 seconds
- Created: 2024-06-03T19:22:02Z

Input
- seed: 42
- prompt: masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>
- scale_factor: 2
- num_inference_steps: 20
{ "mask": "https://replicate.delivery/pbxt/L293tY1UNaSlq01zA1VCCkNyv49jD4Ab3QrMau376xUON56q/inverse_image_mask.png", "seed": 42, "image": "https://replicate.delivery/pbxt/L293tzfx8WiFQQrLRxPMwRwMZHzi9Bs5a1mUOgwySqf77men/img5a21b4bd1c924b0ba6d04f1c75ced25d.png", "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>", "scale_factor": 2, "num_inference_steps": 20 }
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run prakharsaxena24/masked-upscaler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "prakharsaxena24/masked-upscaler:0e864cd4844ac63d862efd3468e4c55219066351009db73833ad67f98c5eaefb",
  {
    input: {
      mask: "https://replicate.delivery/pbxt/L293tY1UNaSlq01zA1VCCkNyv49jD4Ab3QrMau376xUON56q/inverse_image_mask.png",
      seed: 42,
      image: "https://replicate.delivery/pbxt/L293tzfx8WiFQQrLRxPMwRwMZHzi9Bs5a1mUOgwySqf77men/img5a21b4bd1c924b0ba6d04f1c75ced25d.png",
      prompt: "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
      scale_factor: 2,
      num_inference_steps: 20
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
await fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run prakharsaxena24/masked-upscaler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "prakharsaxena24/masked-upscaler:0e864cd4844ac63d862efd3468e4c55219066351009db73833ad67f98c5eaefb",
    input={
        "mask": "https://replicate.delivery/pbxt/L293tY1UNaSlq01zA1VCCkNyv49jD4Ab3QrMau376xUON56q/inverse_image_mask.png",
        "seed": 42,
        "image": "https://replicate.delivery/pbxt/L293tzfx8WiFQQrLRxPMwRwMZHzi9Bs5a1mUOgwySqf77men/img5a21b4bd1c924b0ba6d04f1c75ced25d.png",
        "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
        "scale_factor": 2,
        "num_inference_steps": 20
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
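print(output) above shows the model output, which for this prediction is a list containing a single image URL (see the Output section below). The following is a minimal sketch for saving that image to disk, assuming the output is a list of URL strings; newer client versions may return file-like objects instead, in which case the URL should be read from the object.

import urllib.request

# Assumption: output[0] is the URL string of the upscaled image,
# as in the example prediction output shown below.
image_url = output[0]
urllib.request.urlretrieve(image_url, "upscaled.png")
print("Saved upscaled.png")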
Run prakharsaxena24/masked-upscaler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "0e864cd4844ac63d862efd3468e4c55219066351009db73833ad67f98c5eaefb",
    "input": {
      "mask": "https://replicate.delivery/pbxt/L293tY1UNaSlq01zA1VCCkNyv49jD4Ab3QrMau376xUON56q/inverse_image_mask.png",
      "seed": 42,
      "image": "https://replicate.delivery/pbxt/L293tzfx8WiFQQrLRxPMwRwMZHzi9Bs5a1mUOgwySqf77men/img5a21b4bd1c924b0ba6d04f1c75ced25d.png",
      "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
      "scale_factor": 2,
      "num_inference_steps": 20
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
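The Prefer: wait header asks the API to hold the request open until the prediction finishes (up to a timeout). If the response comes back before completion, the prediction can be polled through the urls.get endpoint included in the response (visible in the output below). The following is a minimal polling sketch using Python's standard library; the prediction ID is taken from the example output and would normally come from the create response.

import json
import os
import time
import urllib.request

# Assumption: the prediction ID comes from an earlier create call;
# this one matches the example output shown below.
prediction_id = "y9q613he61rgp0cfvy6sqcwjkc"
url = f"https://api.replicate.com/v1/predictions/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

while True:
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        prediction = json.load(resp)
    if prediction["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(2)  # avoid hammering the API while the prediction runs

print(prediction["status"], prediction.get("output"))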
Output
{ "completed_at": "2024-06-03T19:24:15.121352Z", "created_at": "2024-06-03T19:22:02.672000Z", "data_removed": false, "error": null, "id": "y9q613he61rgp0cfvy6sqcwjkc", "input": { "mask": "https://replicate.delivery/pbxt/L293tY1UNaSlq01zA1VCCkNyv49jD4Ab3QrMau376xUON56q/inverse_image_mask.png", "seed": 42, "image": "https://replicate.delivery/pbxt/L293tzfx8WiFQQrLRxPMwRwMZHzi9Bs5a1mUOgwySqf77men/img5a21b4bd1c924b0ba6d04f1c75ced25d.png", "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>", "scale_factor": 2, "num_inference_steps": 20 }, "logs": "Running prediction\nUpscaling with scale_factor: 2.0\n[Tiled Diffusion] upscaling image with 4x-UltraSharp...\n[Tiled Diffusion] ControlNet found, support is enabled.\n2024-06-03 19:24:07,997 - ControlNet - \u001b[0;32mINFO\u001b[0m - unit_separate = False, style_align = False\n2024-06-03 19:24:07,997 - ControlNet - \u001b[0;32mINFO\u001b[0m - Loading model from cache: control_v11f1e_sd15_tile\n2024-06-03 19:24:08,011 - ControlNet - \u001b[0;32mINFO\u001b[0m - Using preprocessor: tile_resample\n2024-06-03 19:24:08,011 - ControlNet - \u001b[0;32mINFO\u001b[0m - preprocessor resolution = 950\n2024-06-03 19:24:08,092 - ControlNet - \u001b[0;32mINFO\u001b[0m - ControlNet Hooked - Time = 0.1022803783416748\nMultiDiffusion hooked into 'DPM++ 3M SDE Karras' sampler, Tile size: 118x112, Tile count: 3, Batch size: 3, Tile batches: 1 (ext: ContrlNet)\n[Tiled VAE]: the input size is tiny and unnecessary to tile.\nMultiDiffusion Sampling: 0%| | 0/1 [00:00<?, ?it/s]\n 0%| | 0/8 [00:00<?, ?it/s]\u001b[A\nTotal progress: 0%| | 0/8 [00:00<?, ?it/s]\u001b[A\n 12%|█▎ | 1/8 [00:01<00:07, 1.01s/it]\u001b[A\nTotal progress: 25%|██▌ | 2/8 [00:00<00:00, 6.32it/s]\u001b[A\n 25%|██▌ | 2/8 [00:01<00:03, 1.66it/s]\u001b[A\nTotal progress: 38%|███▊ | 3/8 [00:00<00:01, 4.51it/s]\u001b[A\n 38%|███▊ | 3/8 [00:01<00:02, 2.13it/s]\u001b[A\nTotal progress: 50%|█████ | 4/8 [00:00<00:01, 3.91it/s]\u001b[A\n 50%|█████ | 4/8 [00:01<00:01, 2.45it/s]\u001b[A\nTotal progress: 62%|██████▎ | 5/8 [00:01<00:00, 3.63it/s]\u001b[A\n 62%|██████▎ | 5/8 [00:02<00:01, 2.68it/s]\u001b[A\nTotal progress: 75%|███████▌ | 6/8 [00:01<00:00, 3.48it/s]\u001b[A\n 75%|███████▌ | 6/8 [00:02<00:00, 2.84it/s]\u001b[A\nTotal progress: 88%|████████▊ | 7/8 [00:01<00:00, 3.38it/s]\u001b[A\n 88%|████████▊ | 7/8 [00:02<00:00, 2.96it/s]\u001b[A\n100%|██████████| 8/8 [00:03<00:00, 3.05it/s]\u001b[A\n100%|██████████| 8/8 [00:03<00:00, 2.51it/s]\nTotal progress: 100%|██████████| 8/8 [00:02<00:00, 3.33it/s]\u001b[A[Tiled VAE]: input_size: torch.Size([1, 4, 118, 250]), tile_size: 128, padding: 11\n[Tiled VAE]: split to 1x2 = 2 tiles. 
Optimal tile size 128x96, original tile size 128x128\n[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 128 x 60 image\n[Tiled VAE]: Executing Decoder Task Queue: 0%| | 0/246 [00:00<?, ?it/s]\u001b[A\u001b[A\n[Tiled VAE]: Executing Decoder Task Queue: 50%|█████ | 124/246 [00:00<00:00, 672.41it/s]\u001b[A\u001b[A\n[Tiled VAE]: Executing Decoder Task Queue: 100%|██████████| 246/246 [00:00<00:00, 744.19it/s]\n[Tiled VAE]: Done in 0.965s, max VRAM alloc 5375.187 MB\nTotal progress: 100%|██████████| 8/8 [00:03<00:00, 3.33it/s]\u001b[A\nTotal progress: 100%|██████████| 8/8 [00:03<00:00, 2.30it/s]\nPrediction took 8.56 seconds", "metrics": { "predict_time": 9.202251, "total_time": 132.449352 }, "output": [ "https://replicate.delivery/pbxt/4n6tIgWqfnV3EyQRdqGfyYjkhye5sG4xyr3Ab20EQp99S51lA/42-dd0112c8-21de-11ef-b285-1e1adbf3739f.png" ], "started_at": "2024-06-03T19:24:05.919101Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/y9q613he61rgp0cfvy6sqcwjkc", "cancel": "https://api.replicate.com/v1/predictions/y9q613he61rgp0cfvy6sqcwjkc/cancel" }, "version": "0e864cd4844ac63d862efd3468e4c55219066351009db73833ad67f98c5eaefb" }