fofr / wan-14b-my-subconscious
- Public
- 52 runs
- Hardware: H100

Prediction
- Version: fofr/wan-14b-my-subconscious:7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3
- ID: 4v197k49edrme0cnhfg94ptdsw
- Status: Succeeded
- Source: Web
- Hardware: H100
- Created: 2025-03-12T19:41:43Z
- Total duration: 260.9s

Input
- frames: 81
- prompt: MY_SUBCONSCIOUS monster is swimming, realistic photo, animated
- aspect_ratio: 16:9
- sample_shift: 5
- sample_steps: 30
- negative_prompt: ""
- lora_strength_clip: 0.8
- sample_guide_scale: 4
- lora_strength_model: 0.3
{
  "frames": 81,
  "prompt": "MY_SUBCONSCIOUS monster is swimming, realistic photo, animated",
  "aspect_ratio": "16:9",
  "sample_shift": 5,
  "sample_steps": 30,
  "negative_prompt": "",
  "lora_strength_clip": 0.8,
  "sample_guide_scale": 4,
  "lora_strength_model": 0.3
}
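The frame count above, combined with the frame rate this prediction's logs report (16 fps), determines the clip length, which you can sanity-check locally before spending GPU time. A minimal sketch; the frame rate is taken from this prediction's output logs, not from the input schema:

```python
# Estimate the output clip length from the input above.
# frame_rate=16.0 is taken from this prediction's logs, not the input schema.
frames = 81
frame_rate = 16.0
duration_s = frames / frame_rate
print(f"{duration_s:.2f}s")  # 81 frames at 16 fps -> 5.06s
```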
Install Replicate’s Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/wan-14b-my-subconscious using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/wan-14b-my-subconscious:7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3",
  {
    input: {
      frames: 81,
      prompt: "MY_SUBCONSCIOUS monster is swimming, realistic photo, animated",
      aspect_ratio: "16:9",
      sample_shift: 5,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 0.8,
      sample_guide_scale: 4,
      lora_strength_model: 0.3
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (this model outputs an MP4 video):
await fs.promises.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate

Import the client:

import replicate
Run fofr/wan-14b-my-subconscious using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/wan-14b-my-subconscious:7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3",
    input={
        "frames": 81,
        "prompt": "MY_SUBCONSCIOUS monster is swimming, realistic photo, animated",
        "aspect_ratio": "16:9",
        "sample_shift": 5,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 0.8,
        "sample_guide_scale": 4,
        "lora_strength_model": 0.3
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
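If you run the model repeatedly, usually only the prompt changes between calls, so it can be convenient to assemble the input payload with a small helper. A minimal sketch; `build_input` and its parameter defaults are illustrative, mirroring the input shown above:

```python
def build_input(prompt: str, frames: int = 81, aspect_ratio: str = "16:9") -> dict:
    """Assemble the input payload shown above, varying only what's passed in."""
    return {
        "frames": frames,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "sample_shift": 5,
        "sample_steps": 30,
        "negative_prompt": "",
        "lora_strength_clip": 0.8,
        "sample_guide_scale": 4,
        "lora_strength_model": 0.3,
    }

payload = build_input("MY_SUBCONSCIOUS monster is swimming, realistic photo, animated")
# Then pass it to the client:
# output = replicate.run("fofr/wan-14b-my-subconscious:7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3", input=payload)
```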
Run fofr/wan-14b-my-subconscious using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/wan-14b-my-subconscious:7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3",
    "input": {
      "frames": 81,
      "prompt": "MY_SUBCONSCIOUS monster is swimming, realistic photo, animated",
      "aspect_ratio": "16:9",
      "sample_shift": 5,
      "sample_steps": 30,
      "negative_prompt": "",
      "lora_strength_clip": 0.8,
      "sample_guide_scale": 4,
      "lora_strength_model": 0.3
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{ "completed_at": "2025-03-12T19:46:04.054759Z", "created_at": "2025-03-12T19:41:43.155000Z", "data_removed": false, "error": null, "id": "4v197k49edrme0cnhfg94ptdsw", "input": { "frames": 81, "prompt": "MY_SUBCONSCIOUS monster is swimming, realistic photo, animated", "aspect_ratio": "16:9", "sample_shift": 5, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 0.8, "sample_guide_scale": 4, "lora_strength_model": 0.3 }, "logs": "Random seed set to: 2293881177\n2025-03-12T19:42:51Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp654rau3v/weights url=https://replicate.delivery/xezq/qocyf3gptqQwJqBXaeMVS8kESLBmeIjvnqrFboh1CNigawvoA/trained_model.tar\n2025-03-12T19:42:55Z | INFO | [ Complete ] dest=/tmp/tmp654rau3v/weights size=\"307 MB\" total_elapsed=4.600s url=https://replicate.delivery/xezq/qocyf3gptqQwJqBXaeMVS8kESLBmeIjvnqrFboh1CNigawvoA/trained_model.tar\nChecking inputs\n====================================\nChecking weights\n✅ 2f613a6130046d67cede16f086db3550.safetensors exists in loras directory\n✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models\n✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae\n✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders\n====================================\nRunning workflow\n[ComfyUI] got prompt\nExecuting node 39, title: Load VAE, class type: VAELoader\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] Using pytorch attention in VAE\n[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16\nExecuting node 40, title: EmptyHunyuanLatentVideo, class type: EmptyHunyuanLatentVideo\nExecuting node 38, title: Load CLIP, class type: CLIPLoader\n[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16\n[ComfyUI] Requested to load WanTEModel\nExecuting node 7, title: CLIP Text Encode (Negative Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141327.4875 10835.4765625 True\nExecuting 
node 37, title: Load Diffusion Model, class type: UNETLoader\n[ComfyUI] model weight dtype torch.float16, manual cast: None\n[ComfyUI] model_type FLOW\nExecuting node 53, title: WanVideo Enhance A Video (native), class type: WanVideoEnhanceAVideoKJ\nExecuting node 54, title: WanVideo Tea Cache (native), class type: WanVideoTeaCacheKJ\nExecuting node 49, title: Load LoRA, class type: LoraLoader\n[ComfyUI] Requested to load WanTEModel\nExecuting node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode\n[ComfyUI] loaded completely 141069.4875 10835.4765625 True\nExecuting node 48, title: ModelSamplingSD3, class type: ModelSamplingSD3\nExecuting node 3, title: KSampler, class type: KSampler\n[ComfyUI] Requested to load WAN21\n[ComfyUI] loaded completely 124343.96281542968 27251.406372070312 True\n[ComfyUI]\n[ComfyUI] 0%| | 0/30 [00:00<?, ?it/s]\n[ComfyUI] 3%|▎ | 1/30 [00:07<03:51, 7.98s/it]\n[ComfyUI] 7%|▋ | 2/30 [00:17<04:07, 8.85s/it]\n[ComfyUI] 10%|█ | 3/30 [00:27<04:08, 9.20s/it]\n[ComfyUI] TeaCache: Initialized\n[ComfyUI]\n[ComfyUI] 13%|█▎ | 4/30 [00:39<04:30, 10.41s/it]\n[ComfyUI] 20%|██ | 6/30 [00:49<03:00, 7.53s/it]\n[ComfyUI] 27%|██▋ | 8/30 [00:59<02:21, 6.43s/it]\n[ComfyUI] 33%|███▎ | 10/30 [01:09<01:57, 5.88s/it]\n[ComfyUI] 40%|████ | 12/30 [01:19<01:40, 5.57s/it]\n[ComfyUI] 47%|████▋ | 14/30 [01:29<01:25, 5.37s/it]\n[ComfyUI] 53%|█████▎ | 16/30 [01:39<01:13, 5.24s/it]\n[ComfyUI] 60%|██████ | 18/30 [01:49<01:01, 5.16s/it]\n[ComfyUI] 67%|██████▋ | 20/30 [01:59<00:51, 5.11s/it]\n[ComfyUI] 73%|███████▎ | 22/30 [02:08<00:40, 5.07s/it]\n[ComfyUI] 80%|████████ | 24/30 [02:18<00:30, 5.05s/it]\n[ComfyUI] 87%|████████▋ | 26/30 [02:28<00:20, 5.03s/it]\n[ComfyUI] 93%|█████████▎| 28/30 [02:38<00:10, 5.02s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:48<00:00, 5.01s/it]\n[ComfyUI] 100%|██████████| 30/30 [02:48<00:00, 5.63s/it]\n[ComfyUI] Requested to load WanVAE\nExecuting node 8, title: VAE Decode, class type: VAEDecode\n[ComfyUI] loaded completely 
98169.38668441772 242.02829551696777 True\nExecuting node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\n[ComfyUI] Prompt executed in 188.21 seconds\noutputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}\n====================================\nR8_Wan_00001.png\nR8_Wan_00001.mp4", "metrics": { "predict_time": 193.008079893, "total_time": 260.899759 }, "output": [ "https://replicate.delivery/xezq/wkwUmqTIpMrgMJil3vJRBbbFQYd1flYjfBaYmWjDfeWxnlfiC/R8_Wan_00001.mp4" ], "started_at": "2025-03-12T19:42:51.046679Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-7akcplh2kh5u3t2vs7da4mv4nmr5u7a5yw4ct6eajcrzjv22wd6a", "get": "https://api.replicate.com/v1/predictions/4v197k49edrme0cnhfg94ptdsw", "cancel": "https://api.replicate.com/v1/predictions/4v197k49edrme0cnhfg94ptdsw/cancel" }, "version": "7b98fe85978ee4cfecb0a7e6552899ba3dfc54329364c6b00a8f1cc561d07bb3" }
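The metrics block in the output separates model execution from overhead: predict_time covers the ~193 s the model actually ran, while total_time (~261 s) also includes queueing and setup. The gap is easy to compute:

```python
# Queue/setup overhead for this prediction, using the metrics from the output above.
metrics = {"predict_time": 193.008079893, "total_time": 260.899759}
overhead_s = metrics["total_time"] - metrics["predict_time"]
print(f"{overhead_s:.1f}s")  # ~67.9s spent outside model execution
```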