Input
Run this model in Node.js with one line of code. First, install Replicate's Node.js client library:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run zsyoaoa/invsr using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"zsyoaoa/invsr:37eebabfb6cdc4be2892b884b96b361d6fedc9f6a934d2fa3c1a2f85f004b0f0",
{
input: {
seed: 12345,
in_path: "https://replicate.delivery/pbxt/M8qhJrY5aD7tG40HumHd3gIIR3LXjMKThkOCNB1oSfGrimcu/32.jpg",
num_steps: 1,
chopping_size: 128
}
}
);
console.log(output);
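The output of this model is a single image URL (see the example prediction under Output below). As a minimal sketch of a follow-up step, assuming the output is a plain URL string (newer client versions may return a file-like object instead), you could save the result to disk:

import { writeFile } from "node:fs/promises";

// Download the returned image and write it locally (the filename is illustrative).
const response = await fetch(output);
await writeFile("out.png", Buffer.from(await response.arrayBuffer()));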
To learn more, take a look at the guide on getting started with Node.js.
To run this model in Python, first install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run zsyoaoa/invsr using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"zsyoaoa/invsr:37eebabfb6cdc4be2892b884b96b361d6fedc9f6a934d2fa3c1a2f85f004b0f0",
input={
"seed": 12345,
"in_path": "https://replicate.delivery/pbxt/M8qhJrY5aD7tG40HumHd3gIIR3LXjMKThkOCNB1oSfGrimcu/32.jpg",
"num_steps": 1,
"chopping_size": 128
}
)
print(output)
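If your input image lives on disk rather than at a URL, the Python client also accepts an open file handle for file inputs. A minimal sketch, assuming that behavior and that the output is a plain URL string (the local filename is a placeholder):

import replicate
import urllib.request

output = replicate.run(
    "zsyoaoa/invsr:37eebabfb6cdc4be2892b884b96b361d6fedc9f6a934d2fa3c1a2f85f004b0f0",
    input={
        "seed": 12345,
        "in_path": open("low_res.jpg", "rb"),  # local file instead of a URL
        "num_steps": 1,
        "chopping_size": 128
    }
)

# Save the upscaled image; str() covers client versions that return a file-output object.
urllib.request.urlretrieve(str(output), "out.png")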
To learn more, take a look at the guide on getting started with Python.
To run this model with Replicate's HTTP API, first set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run zsyoaoa/invsr using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "37eebabfb6cdc4be2892b884b96b361d6fedc9f6a934d2fa3c1a2f85f004b0f0",
"input": {
"seed": 12345,
"in_path": "https://replicate.delivery/pbxt/M8qhJrY5aD7tG40HumHd3gIIR3LXjMKThkOCNB1oSfGrimcu/32.jpg",
"num_steps": 1,
"chopping_size": 128
}
}' \
https://api.replicate.com/v1/predictions
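The Prefer: wait header asks the API to hold the request open until the prediction finishes (up to a server-side limit); if the response comes back while the model is still processing, you can poll the prediction by the id in the response. A minimal sketch, assuming jq is installed; the id is a placeholder:

# Poll a prediction until its status is terminal ("succeeded", "failed", or "canceled").
curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  "https://api.replicate.com/v1/predictions/<prediction-id>" | jq '{status, output}'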
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{
"completed_at": "2024-12-14T08:44:04.962719Z",
"created_at": "2024-12-14T08:42:11.447000Z",
"data_removed": false,
"error": null,
"id": "j2xw7errexrge0ckrhatgy909g",
"input": {
"seed": 12345,
"in_path": "https://replicate.delivery/pbxt/M8qhJrY5aD7tG40HumHd3gIIR3LXjMKThkOCNB1oSfGrimcu/32.jpg",
"num_steps": 1,
"chopping_size": 128
},
"logs": "Setting timesteps for inference: [200]\nDownloading: \"https://huggingface.co/OAOA/InvSR/resolve/main/noise_predictor_sd_turbo_v5.pth\" to /src/weights/noise_predictor_sd_turbo_v5.pth\n 0%| | 0.00/129M [00:00<?, ?B/s]\n 8%|▊ | 9.88M/129M [00:00<00:01, 102MB/s]\n 26%|██▌ | 33.6M/129M [00:00<00:00, 188MB/s]\n 40%|███▉ | 51.6M/129M [00:00<00:01, 45.9MB/s]\n 48%|████▊ | 62.5M/129M [00:01<00:01, 44.9MB/s]\n 55%|█████▍ | 70.8M/129M [00:01<00:01, 44.3MB/s]\n 60%|██████ | 77.5M/129M [00:01<00:01, 44.0MB/s]\n 65%|██████▍ | 83.4M/129M [00:01<00:01, 42.5MB/s]\n 69%|██████▊ | 88.5M/129M [00:01<00:01, 42.3MB/s]\n 72%|███████▏ | 93.2M/129M [00:02<00:00, 42.4MB/s]\n 76%|███████▌ | 97.9M/129M [00:02<00:00, 43.0MB/s]\n 79%|███████▉ | 102M/129M [00:02<00:00, 43.1MB/s] \n 83%|████████▎ | 107M/129M [00:02<00:00, 42.7MB/s]\n 86%|████████▌ | 111M/129M [00:02<00:00, 42.8MB/s]\n 89%|████████▉ | 115M/129M [00:02<00:00, 43.0MB/s]\n 93%|█████████▎| 120M/129M [00:02<00:00, 42.8MB/s]\n 96%|█████████▌| 124M/129M [00:02<00:00, 40.1MB/s]\n 99%|█████████▉| 128M/129M [00:02<00:00, 40.8MB/s]\n100%|██████████| 129M/129M [00:02<00:00, 46.3MB/s]\nFetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]\nFetching 12 files: 17%|█▋ | 2/12 [00:00<00:02, 4.21it/s]\nFetching 12 files: 33%|███▎ | 4/12 [00:32<01:17, 9.65s/it]\nFetching 12 files: 83%|████████▎ | 10/12 [01:23<00:17, 8.72s/it]\nFetching 12 files: 100%|██████████| 12/12 [01:23<00:00, 6.92s/it]\nLoading pipeline components...: 0%| | 0/5 [00:00<?, ?it/s]\nLoading pipeline components...: 20%|██ | 1/5 [00:00<00:03, 1.19it/s]\nLoading pipeline components...: 40%|████ | 2/5 [00:01<00:03, 1.01s/it]\nLoading pipeline components...: 80%|████████ | 4/5 [00:02<00:00, 2.43it/s]\nLoading pipeline components...: 100%|██████████| 5/5 [00:02<00:00, 2.96it/s]\nLoading pipeline components...: 100%|██████████| 5/5 [00:02<00:00, 2.22it/s]\nYou have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\nYou have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inversion_sr.StableDiffusionInvEnhancePipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\nActivating gradient checkpoing for vae...\nLoading started model from ./weights/noise_predictor_sd_turbo_v5.pth...\n/src/sampler_invsr.py:101: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. 
It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\nstate = torch.load(ckpt_path, map_location=f\"cuda\")\nLoading Done\n/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/diffusers/configuration_utils.py:140: FutureWarning: Accessing config attribute `vae_latent_channels` directly via 'VaeImageProcessor' object attribute is deprecated. Please access 'vae_latent_channels' over 'VaeImageProcessor's config object instead, e.g. 'scheduler.config.vae_latent_channels'.\ndeprecate(\"direct config name access\", \"1.0.0\", deprecation_message, standard_warn=False)\n 0%| | 0/1 [00:00<?, ?it/s]\n100%|██████████| 1/1 [00:00<00:00, 4.53it/s]\n100%|██████████| 1/1 [00:00<00:00, 4.52it/s]\n 0%| | 0/1 [00:00<?, ?it/s]\n100%|██████████| 1/1 [00:00<00:00, 21.85it/s]\n 0%| | 0/1 [00:00<?, ?it/s]\n100%|██████████| 1/1 [00:00<00:00, 10.38it/s]\nProcessing done, enjoy the results in invsr_output",
"metrics": {
"predict_time": 96.037241225,
"total_time": 113.515719
},
"output": "https://replicate.delivery/czjl/BqklqAF5Wu5XOxH2WQJ8lV5HzkMruQ9V74VacSfYecMUdv6TA/out.png",
"started_at": "2024-12-14T08:42:28.925478Z",
"status": "succeeded",
"urls": {
"stream": "https://stream.replicate.com/v1/files/fddq-2q5gkhdzdt4tnofrkfrdklb4e3uv7jv6stgo56xqhojelc2254rq",
"get": "https://api.replicate.com/v1/predictions/j2xw7errexrge0ckrhatgy909g",
"cancel": "https://api.replicate.com/v1/predictions/j2xw7errexrge0ckrhatgy909g/cancel"
},
"version": "37eebabfb6cdc4be2892b884b96b361d6fedc9f6a934d2fa3c1a2f85f004b0f0"
}
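A finished prediction like the one above can also be fetched later by id, using the get URL under urls or a client library. A minimal sketch in Python, assuming the client exposes replicate.predictions.get() (the id is the one from this example):

import replicate

# Retrieve the example prediction and inspect the fields shown above.
prediction = replicate.predictions.get("j2xw7errexrge0ckrhatgy909g")
print(prediction.status)   # "succeeded"
print(prediction.metrics)  # {"predict_time": 96.03..., "total_time": 113.51...}
print(prediction.output)   # URL of the upscaled image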