fofr / wan-0_1-webp
Wan 2.1 14B fine-tuned on the 0_1 character
- Public
- 162 runs
- H100
Prediction
fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029
ID: bvfsde1xc5rmc0cnha4tspa9jg
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: 0_1 woman is laughing
- aspect_ratio: 9:16
- sample_shift: 8
- sample_steps: 30
- negative_prompt: ""
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "0_1 woman is laughing", "aspect_ratio": "9:16", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
  {
    input: {
      frames: 81,
      prompt: "0_1 woman is laughing",
      aspect_ratio: "9:16",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL of the generated video:
console.log(output[0].url());

// To write the file to disk (this model outputs an MP4 video; the file output
// is a readable stream, which fs.promises.writeFile accepts on recent Node versions):
await fs.promises.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    input={
        "frames": 81,
        "prompt": "0_1 woman is laughing",
        "aspect_ratio": "9:16",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
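With recent versions of the Python client, replicate.run returns a list of file objects rather than plain URL strings. The sketch below assumes the 1.x client's FileOutput interface (a .url attribute and a .read() method) and shows one way to save the generated MP4 to disk; treat it as illustrative rather than canonical.

import replicate

output = replicate.run(
    "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    input={
        "frames": 81,
        "prompt": "0_1 woman is laughing",
        "aspect_ratio": "9:16",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1,
    },
)

video = output[0]          # assumption: a FileOutput object (replicate-python >= 1.0)
print(video.url)           # hosted URL of the generated MP4

with open("output.mp4", "wb") as f:
    f.write(video.read())  # download the video bytes and write them to disk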
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    "input": {
      "frames": 81,
      "prompt": "0_1 woman is laughing",
      "aspect_ratio": "9:16",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
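A prediction for this model runs for roughly three minutes, which is longer than the blocking Prefer: wait header will typically hold the connection open, so the HTTP response may arrive while the prediction is still processing. A minimal polling sketch in Python, assuming the requests package is installed and using the urls.get endpoint that the API returns with every prediction (visible in the output below):

import os
import time
import requests  # assumption: any HTTP client works; requests is used here for brevity

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create the prediction without asking the server to wait for completion.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
        "input": {
            "frames": 81,
            "prompt": "0_1 woman is laughing",
            "aspect_ratio": "9:16",
            "sample_shift": 8,
            "sample_steps": 30,
            "negative_prompt": "",
            "lora_strength_clip": 1,
            "sample_guide_scale": 5,
            "lora_strength_model": 1,
        },
    },
).json()

# Poll the prediction's "get" URL until it reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))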
Output
{ "completed_at": "2025-03-12T13:29:47.642944Z", "created_at": "2025-03-12T13:26:44.833000Z", "data_removed": false, "error": null, "id": "bvfsde1xc5rmc0cnha4tspa9jg", "input": { "frames": 81, "prompt": "0_1 woman is laughing", "aspect_ratio": "9:16", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 769503617\n✅ 66277f106a6cd4de4e912c891772291a.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ 66277f106a6cd4de4e912c891772291a.safetensors exists in loras directory\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 53, title: WanVideo Enhance A Video (native), class type: WanVideoEnhanceAVideoKJ\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 113273.66620521546 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 123801.91498866271 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:07<03:32, 7.31s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<04:03, 8.70s/it]\n[ComfyUI] 10%|█ | 3/30 [00:26<04:07, 9.16s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:39<04:31, 10.45s/it]\n[ComfyUI] 20%|██ | 6/30 [00:49<03:05, 7.72s/it]\n[ComfyUI] 23%|██▎ | 7/30 [00:49<02:09, 5.64s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:59<02:31, 6.87s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:09<02:00, 6.05s/it]\n[ComfyUI] 40%|████ | 12/30 [01:19<01:41, 5.65s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:29<01:26, 5.42s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:40<01:13, 5.28s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:50<01:02, 5.20s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [02:00<00:51, 5.14s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:10<00:40, 5.10s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:20<00:30, 5.08s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:30<00:20, 5.06s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:40<00:10, 5.05s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:50<00:00, 5.05s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:50<00:00, 5.68s/it]\n[ComfyUI] Prompt executed in 182.62 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 182.801434733, "total_time": 182.809944 }, "output": [ "https://replicate.delivery/xezq/NkJh9qaz9W6mPZSuKfdBHet9HCrfSdszwUrcz2eLOO8skPfiC/R8_Wan_00001.mp4" ], "started_at": "2025-03-12T13:26:44.841509Z", "status": 
"succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-tvqrq3oxw6iwv3xx4kdzgzjqosaffbogah43stww77ne3oq263wq", "get": "https://api.replicate.com/v1/predictions/bvfsde1xc5rmc0cnha4tspa9jg", "cancel": "https://api.replicate.com/v1/predictions/bvfsde1xc5rmc0cnha4tspa9jg/cancel" }, "version": "ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029" }
Prediction
fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029
ID: rparezt44xrm80cnha7rxzskwc
Status: Succeeded
Source: Web
Hardware: H100
Input
- frames: 81
- prompt: 0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap
- aspect_ratio: 9:16
- sample_shift: 8
- sample_steps: 30
- negative_prompt: ""
- lora_strength_clip: 1
- sample_guide_scale: 5
- lora_strength_model: 1
{ "frames": 81, "prompt": "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap", "aspect_ratio": "9:16", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
  {
    input: {
      frames: 81,
      prompt: "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap",
      aspect_ratio: "9:16",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL of the generated video:
console.log(output[0].url());

// To write the file to disk (this model outputs an MP4 video; the file output
// is a readable stream, which fs.promises.writeFile accepts on recent Node versions):
await fs.promises.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    input={
        "frames": 81,
        "prompt": "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap",
        "aspect_ratio": "9:16",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
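If you would rather not block a process for the full three-minute generation, the Python client can also create the prediction and check on it later. A sketch assuming the client's predictions.create and predictions.get methods and the version hash shown above:

import time
import replicate

prediction = replicate.predictions.create(
    version="ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    input={
        "frames": 81,
        "prompt": "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap",
        "aspect_ratio": "9:16",
        "sample_shift": 8,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 1,
        "sample_guide_scale": 5,
        "lora_strength_model": 1,
    },
)

# Poll until the prediction reaches a terminal state, then inspect the output.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.status, prediction.output)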
Run fofr/wan-0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/wan-0_1-webp:ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029",
    "input": {
      "frames": 81,
      "prompt": "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap",
      "aspect_ratio": "9:16",
      "sample_shift": 8,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 1,
      "sample_guide_scale": 5,
      "lora_strength_model": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-12T13:36:09.575860Z", "created_at": "2025-03-12T13:33:19.783000Z", "data_removed": false, "error": null, "id": "rparezt44xrm80cnha7rxzskwc", "input": { "frames": 81, "prompt": "0_1 woman is a tiktok dancer in red jogging bottoms and a baseball cap", "aspect_ratio": "9:16", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }, "logs": "Random seed set to: 126738032\n✅ 66277f106a6cd4de4e912c891772291a.safetensors already cached\nChecking inputs\n====================================\nChecking weights\n✅ 66277f106a6cd4de4e912c891772291a.safetensors exists in loras directory\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] Resetting TeaCache state\n[ComfyUI]\n[ComfyUI] 3%|▎ | 1/30 [00:07<03:25, 7.08s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:16<03:55, 8.42s/it]\n[ComfyUI] 10%|█ | 3/30 [00:25<03:59, 8.86s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:37<04:23, 10.13s/it]\n[ComfyUI] 20%|██ | 6/30 [00:47<02:57, 7.39s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:57<02:19, 6.34s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:07<01:56, 5.82s/it]\n[ComfyUI] 40%|████ | 12/30 [01:17<01:39, 5.51s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:27<01:25, 5.33s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:37<01:12, 5.21s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:47<01:01, 5.13s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:57<00:50, 5.07s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:07<00:40, 5.04s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:17<00:30, 5.01s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:26<00:19, 5.00s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:36<00:09, 4.98s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:46<00:00, 4.97s/it]\nExecuting node 8, title: VAE Decode, class type: VAEDecode\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] 100%|██████████| 30/30 [02:46<00:00, 5.56s/it]\n[ComfyUI] Prompt executed in 169.61 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 169.787160449, "total_time": 169.79286 }, "output": [ "https://replicate.delivery/xezq/kd7FfubeafGPKob413BjqRz1iBARr8wyx7ZCX3Y2unTSePfiC/R8_Wan_00001.mp4" ], "started_at": "2025-03-12T13:33:19.788700Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-z57ukzf65tmxdthi3jykiag4qgt5lirjifzcx2sngtotq66dly7q", "get": "https://api.replicate.com/v1/predictions/rparezt44xrm80cnha7rxzskwc", "cancel": "https://api.replicate.com/v1/predictions/rparezt44xrm80cnha7rxzskwc/cancel" }, "version": "ab9ed47d967bc3efd31ef27ace2b4b3078c089f358df68c53aeae093074db029" }
Want to make some of these yourself?
Run this model