
enhance-replicate /wan22-comfyui-full-prep:b4419736

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field Type Default value Description
prompt
string
A baby dressed in a fluffy outfit is gently nose-to-nose with a small kitten. The background is softly blurred, highlighting the tender interaction between them.
Text prompt for video generation (only used with the default WAN2.2 workflow)
workflow_json
string
{
  "6": {"inputs": {"text": "A baby dressed in a fluffy outfit is gently nose-to-nose with a small kitten. The background is softly blurred, highlighting the tender interaction between them.", "clip": ["12", 0]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}},
  "7": {"inputs": {"text": "色调艳丽, 过曝, 静态, 细节模糊不清, 字幕, 风格, 作品, 画作, 画面, 静止, 整体发灰, 最差质量, 低质量, JPEG压缩残留, 丑陋的, 残缺的, 多余的手指, 画得不好的手部, 画得不好的脸部, 畸形的, 毁容的, 形态畸形的肢体, 手指融合, 静止不动的画面, 杂乱的背景, 三条腿, 背景人很多, 倒着走", "clip": ["12", 0]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}},
  "12": {"inputs": {"clip_name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors", "type": "wan", "device": "default"}, "class_type": "CLIPLoader", "_meta": {"title": "Load CLIP"}},
  "14": {"inputs": {"vae_name": "wan_2.1_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}},
  "44": {"inputs": {"add_noise": "disable", "noise_seed": 270712418741028, "steps": 8, "cfg": 1, "sampler_name": "res_multistep", "scheduler": "beta", "start_at_step": 4, "end_at_step": 10000, "return_with_leftover_noise": "disable", "model": ["48", 0], "positive": ["6", 0], "negative": ["7", 0], "latent_image": ["45", 0]}, "class_type": "KSamplerAdvanced", "_meta": {"title": "KSampler (Advanced)"}},
  "45": {"inputs": {"add_noise": "enable", "noise_seed": 868084697386425, "steps": 8, "cfg": 1, "sampler_name": "res_multistep", "scheduler": "beta", "start_at_step": 0, "end_at_step": 4, "return_with_leftover_noise": "enable", "model": ["47", 0], "positive": ["6", 0], "negative": ["7", 0], "latent_image": ["73", 0]}, "class_type": "KSamplerAdvanced", "_meta": {"title": "KSampler (Advanced)"}},
  "46": {"inputs": {"unet_name": "Wan2.2-T2V-A14B-HighNoise-Q4_K_M.gguf"}, "class_type": "UnetLoaderGGUF", "_meta": {"title": "Unet Loader (GGUF)"}},
  "47": {"inputs": {"shift": 8.000000000000002, "model": ["50", 0]}, "class_type": "ModelSamplingSD3", "_meta": {"title": "ModelSamplingSD3"}},
  "48": {"inputs": {"shift": 8.000000000000002, "model": ["51", 0]}, "class_type": "ModelSamplingSD3", "_meta": {"title": "ModelSamplingSD3"}},
  "49": {"inputs": {"unet_name": "Wan2.2-T2V-A14B-LowNoise-Q4_K_M.gguf"}, "class_type": "UnetLoaderGGUF", "_meta": {"title": "Unet Loader (GGUF)"}},
  "50": {"inputs": {"lora_name": "lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors", "strength_model": 1.0000000000000002, "model": ["58", 0]}, "class_type": "LoraLoaderModelOnly", "_meta": {"title": "LoraLoaderModelOnly"}},
  "51": {"inputs": {"lora_name": "lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors", "strength_model": 1.0000000000000002, "model": ["59", 0]}, "class_type": "LoraLoaderModelOnly", "_meta": {"title": "LoraLoaderModelOnly"}},
  "58": {"inputs": {"lora_name": "Wan2.1_T2V_14B_FusionX_LoRA.safetensors", "strength_model": 0.5000000000000001, "model": ["46", 0]}, "class_type": "LoraLoaderModelOnly", "_meta": {"title": "LoraLoaderModelOnly"}},
  "59": {"inputs": {"lora_name": "Wan2.1_T2V_14B_FusionX_LoRA.safetensors", "strength_model": 0.5000000000000001, "model": ["49", 0]}, "class_type": "LoraLoaderModelOnly", "_meta": {"title": "LoraLoaderModelOnly"}},
  "64": {"inputs": {"frame_rate": 8, "loop_count": 0, "filename_prefix": "AnimateDiff", "format": "video/h264-mp4", "pix_fmt": "yuv420p", "crf": 19, "save_metadata": true, "trim_to_audio": false, "pingpong": false, "save_output": true, "images": ["71", 0]}, "class_type": "VHS_VideoCombine", "_meta": {"title": "Video Combine 🎥🅥🅗🅢"}},
  "65": {"inputs": {"tile_size": 512, "overlap": 64, "temporal_size": 64, "temporal_overlap": 8, "samples": ["44", 0], "vae": ["14", 0]}, "class_type": "VAEDecodeTiled", "_meta": {"title": "VAE Decode (Tiled)"}},
  "69": {"inputs": {"upscale_model": ["70", 0], "image": ["65", 0]}, "class_type": "ImageUpscaleWithModel", "_meta": {"title": "Upscale Image (using Model)"}},
  "70": {"inputs": {"model_name": "RealESRGAN_x2.pth"}, "class_type": "UpscaleModelLoader", "_meta": {"title": "Load Upscale Model"}},
  "71": {"inputs": {"ckpt_name": "rife47.pth", "clear_cache_after_n_frames": 10, "multiplier": 2, "fast_mode": true, "ensemble": true, "scale_factor": 1, "frames": ["69", 0]}, "class_type": "RIFE VFI", "_meta": {"title": "RIFE VFI (recommend rife47 and rife49)"}},
  "73": {"inputs": {"width": 384, "height": 704, "length": 41, "batch_size": 1, "vae": ["14", 0]}, "class_type": "Wan22ImageToVideoLatent", "_meta": {"title": "Wan22ImageToVideoLatent"}}
}
Your ComfyUI workflow as a JSON string or URL. Defaults to the WAN2.2 text-to-video workflow above. Export the API format from ComfyUI using 'Save (API format)'. Instructions: https://github.com/replicate/cog-comfyui
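The default workflow runs two sampler passes: node "45" denoises steps 0–4 with the high-noise expert (leaving leftover noise), and node "44" finishes from step 4 with the low-noise expert. To customize it, edit the graph before serializing it into `workflow_json`. A minimal sketch using only the stdlib `json` module; only a slice of the graph is shown, and the replacement prompt is illustrative:

```python
import json

# A slice of the default workflow above: node "6" holds the positive prompt,
# nodes "45"/"44" are the high-noise and low-noise sampler passes.
workflow = {
    "6": {
        "inputs": {"text": "", "clip": ["12", 0]},  # default prompt lives here
        "class_type": "CLIPTextEncode",
    },
    "45": {"inputs": {"start_at_step": 0, "end_at_step": 4}},
    "44": {"inputs": {"start_at_step": 4, "end_at_step": 10000}},
}

# Swap in your own prompt, then serialize the edited graph for workflow_json.
workflow["6"]["inputs"]["text"] = "A corgi surfing a wave at sunset"
workflow_json = json.dumps(workflow)

# The high-noise pass hands off to the low-noise pass at step 4.
assert (workflow["45"]["inputs"]["end_at_step"]
        == workflow["44"]["inputs"]["start_at_step"])
```

The resulting string can then be passed as the `workflow_json` input, e.g. via the Replicate API or Python client.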
input_file
string
Input image, video, tar or zip file. Read guidance on workflows and input files here: https://github.com/replicate/cog-comfyui. Alternatively, you can replace inputs with URLs in your JSON workflow and the model will download them.
return_temp_files
boolean
False
Return any temporary files, such as preprocessed controlnet images. Useful for debugging.
output_format
string
webp
Format of the output images
output_quality
integer
95
Max: 100
Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality.
randomise_seeds
boolean
True
Automatically randomise seeds (seed, noise_seed, rand_seed)
force_reset_cache
boolean
False
Force reset the ComfyUI cache before running the workflow. Useful for debugging.

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "items": {"format": "uri", "type": "string"},
  "title": "Output",
  "type": "array"
}
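The schema says a run returns an array of URI strings (here, URLs to the generated video files). A minimal stdlib sketch of checking a response against that shape; the example URL is illustrative, not a real delivery URL:

```python
from urllib.parse import urlparse

def check_output(output):
    """Assert a response matches the schema: an array of URI strings."""
    assert isinstance(output, list), "output must be an array"
    for item in output:
        assert isinstance(item, str), "each item must be a string"
        assert urlparse(item).scheme in ("http", "https"), "each item must be a URL"
    return output

# Example response shape.
check_output(["https://example.com/output.mp4"])
```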