fofr / video-morpher
Generate a video that morphs between subjects, with an optional style
Prediction
fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf
ID: zq1ppbr981rgg0cf22trg7ex1r
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by @fofr
Input
- mode: upscaled-and-interpolated
- prompt:
- checkpoint: anime
- aspect_ratio: 4:3
- style_strength: 0.25
- negative_prompt:
{ "mode": "upscaled-and-interpolated", "prompt": "", "checkpoint": "anime", "style_image": "https://replicate.delivery/pbxt/Knt8RPT8KLlsXLreP04hXAULTlrL29TH9W8NNUV3eKDfGkug/replicate-prediction-6da3fldbhkwkmaeba4bhzif72m.png", "aspect_ratio": "4:3", "style_strength": 0.25, "negative_prompt": "", "subject_image_1": "https://replicate.delivery/pbxt/Knt8R9B6sVKljclLEsUe1tUz5gELYq3WQ9mebHcdEnpaKEvY/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/Knt8QyjO5NMhX7S9wCjJ6ZKCxYnL2cBSKyeU9oJlpABFIjHb/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png", "subject_image_3": "https://replicate.delivery/pbxt/Knt8REy4ySLJqPNNsw6RPAnxfdmSEXyWWjFsfqsMwpIX11tK/marble-statue-antinous-height-180-cm-9513049.jpg.webp", "subject_image_4": "https://replicate.delivery/pbxt/Knt8RgLdi4GZ7AOdQ7INvzI4SMKWlR1eJQjfIxK8cZIzifIA/ComfyUI_02710_.png" }
Install Replicate's Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
  {
    input: {
      mode: "upscaled-and-interpolated",
      prompt: "",
      checkpoint: "anime",
      style_image: "https://replicate.delivery/pbxt/Knt8RPT8KLlsXLreP04hXAULTlrL29TH9W8NNUV3eKDfGkug/replicate-prediction-6da3fldbhkwkmaeba4bhzif72m.png",
      aspect_ratio: "4:3",
      style_strength: 0.25,
      negative_prompt: "",
      subject_image_1: "https://replicate.delivery/pbxt/Knt8R9B6sVKljclLEsUe1tUz5gELYq3WQ9mebHcdEnpaKEvY/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      subject_image_2: "https://replicate.delivery/pbxt/Knt8QyjO5NMhX7S9wCjJ6ZKCxYnL2cBSKyeU9oJlpABFIjHb/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
      subject_image_3: "https://replicate.delivery/pbxt/Knt8REy4ySLJqPNNsw6RPAnxfdmSEXyWWjFsfqsMwpIX11tK/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
      subject_image_4: "https://replicate.delivery/pbxt/Knt8RgLdi4GZ7AOdQ7INvzI4SMKWlR1eJQjfIxK8cZIzifIA/ComfyUI_02710_.png"
    }
  }
);

// To access the file URL:
console.log(output[0].url());

// To write the first output video to disk:
await fs.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Import the client:

import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
    input={
        "mode": "upscaled-and-interpolated",
        "prompt": "",
        "checkpoint": "anime",
        "style_image": "https://replicate.delivery/pbxt/Knt8RPT8KLlsXLreP04hXAULTlrL29TH9W8NNUV3eKDfGkug/replicate-prediction-6da3fldbhkwkmaeba4bhzif72m.png",
        "aspect_ratio": "4:3",
        "style_strength": 0.25,
        "negative_prompt": "",
        "subject_image_1": "https://replicate.delivery/pbxt/Knt8R9B6sVKljclLEsUe1tUz5gELYq3WQ9mebHcdEnpaKEvY/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
        "subject_image_2": "https://replicate.delivery/pbxt/Knt8QyjO5NMhX7S9wCjJ6ZKCxYnL2cBSKyeU9oJlpABFIjHb/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
        "subject_image_3": "https://replicate.delivery/pbxt/Knt8REy4ySLJqPNNsw6RPAnxfdmSEXyWWjFsfqsMwpIX11tK/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
        "subject_image_4": "https://replicate.delivery/pbxt/Knt8RgLdi4GZ7AOdQ7INvzI4SMKWlR1eJQjfIxK8cZIzifIA/ComfyUI_02710_.png"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
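The model returns four video URLs, one per pipeline stage. A minimal sketch that pairs each URL with a stage name; the names and their order are an assumption taken from the filenames in this prediction's logs (preview_00001.mp4, upscaled_00001.mp4, upscaled_model_00001.mp4, interpolated_00001.mp4), not a documented contract:

```python
# Stage names inferred from the output filenames in the prediction logs;
# treat the ordering as an assumption rather than a guaranteed API contract.
STAGES = ["preview", "upscaled", "upscaled_model", "interpolated"]

def label_outputs(output):
    """Pair each returned video URL with its pipeline stage."""
    return dict(zip(STAGES, output))

labeled = label_outputs(["a.mp4", "b.mp4", "c.mp4", "d.mp4"])
```

With the real output list, `labeled["interpolated"]` would be the frame-interpolated final video.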
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
    "input": {
      "mode": "upscaled-and-interpolated",
      "prompt": "",
      "checkpoint": "anime",
      "style_image": "https://replicate.delivery/pbxt/Knt8RPT8KLlsXLreP04hXAULTlrL29TH9W8NNUV3eKDfGkug/replicate-prediction-6da3fldbhkwkmaeba4bhzif72m.png",
      "aspect_ratio": "4:3",
      "style_strength": 0.25,
      "negative_prompt": "",
      "subject_image_1": "https://replicate.delivery/pbxt/Knt8R9B6sVKljclLEsUe1tUz5gELYq3WQ9mebHcdEnpaKEvY/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      "subject_image_2": "https://replicate.delivery/pbxt/Knt8QyjO5NMhX7S9wCjJ6ZKCxYnL2cBSKyeU9oJlpABFIjHb/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
      "subject_image_3": "https://replicate.delivery/pbxt/Knt8REy4ySLJqPNNsw6RPAnxfdmSEXyWWjFsfqsMwpIX11tK/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
      "subject_image_4": "https://replicate.delivery/pbxt/Knt8RgLdi4GZ7AOdQ7INvzI4SMKWlR1eJQjfIxK8cZIzifIA/ComfyUI_02710_.png"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
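Without the `Prefer: wait` header, the create call returns immediately and you poll the prediction's `urls.get` endpoint until it reaches a terminal status. A minimal stdlib-only Python sketch of that loop; the helper names and the five-second interval are assumptions, while the status values and Bearer auth match the API response shown on this page:

```python
import json
import time
import urllib.request

def is_terminal(status):
    # A prediction stops changing once it reaches one of these states.
    return status in {"succeeded", "failed", "canceled"}

def get_prediction(get_url, token):
    # get_url is the prediction's urls.get value from the create response.
    req = urllib.request.Request(
        get_url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for(get_url, token, interval=5.0):
    # Poll until the prediction succeeds, fails, or is canceled.
    while True:
        prediction = get_prediction(get_url, token)
        if is_terminal(prediction["status"]):
            return prediction
        time.sleep(interval)
```

The official client libraries above do this polling for you; this sketch is only useful when calling the HTTP API directly.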
Output
{ "completed_at": "2024-04-24T15:28:58.190754Z", "created_at": "2024-04-24T15:24:10.944000Z", "data_removed": false, "error": null, "id": "zq1ppbr981rgg0cf22trg7ex1r", "input": { "mode": "upscaled-and-interpolated", "prompt": "", "checkpoint": "anime", "style_image": "https://replicate.delivery/pbxt/Knt8RPT8KLlsXLreP04hXAULTlrL29TH9W8NNUV3eKDfGkug/replicate-prediction-6da3fldbhkwkmaeba4bhzif72m.png", "aspect_ratio": "4:3", "style_strength": 0.25, "negative_prompt": "", "subject_image_1": "https://replicate.delivery/pbxt/Knt8R9B6sVKljclLEsUe1tUz5gELYq3WQ9mebHcdEnpaKEvY/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/Knt8QyjO5NMhX7S9wCjJ6ZKCxYnL2cBSKyeU9oJlpABFIjHb/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png", "subject_image_3": "https://replicate.delivery/pbxt/Knt8REy4ySLJqPNNsw6RPAnxfdmSEXyWWjFsfqsMwpIX11tK/marble-statue-antinous-height-180-cm-9513049.jpg.webp", "subject_image_4": "https://replicate.delivery/pbxt/Knt8RgLdi4GZ7AOdQ7INvzI4SMKWlR1eJQjfIxK8cZIzifIA/ComfyUI_02710_.png" }, "logs": "Random seed set to: 665779694\nChecking inputs\nβ /tmp/inputs/2.png\nβ /tmp/inputs/1.png\nβ /tmp/inputs/3.png\nβ /tmp/inputs/4.png\nβ /tmp/inputs/circles.mp4\nβ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\nβ³ Downloading ip-adapter-plus_sdxl_vit-h.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sdxl_vit-h.safetensors in 0.60s, size: 808.26MB\nβ ip-adapter-plus_sdxl_vit-h.safetensors\nβ³ Downloading RealESRGAN_x4.pth to ComfyUI/models/upscale_models\nβοΈ Downloaded RealESRGAN_x4.pth in 0.15s, size: 63.94MB\nβ RealESRGAN_x4.pth\nβ³ Downloading vae-ft-mse-840000-ema-pruned.safetensors to ComfyUI/models/vae\nβοΈ Downloaded vae-ft-mse-840000-ema-pruned.safetensors in 0.35s, size: 319.14MB\nβ vae-ft-mse-840000-ema-pruned.safetensors\nβ³ 
Downloading CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors to ComfyUI/models/clip_vision\nβοΈ Downloaded CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in 1.94s, size: 2411.24MB\nβ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\nβ³ Downloading ip-adapter-plus_sd15.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sd15.safetensors in 0.18s, size: 93.63MB\nβ ip-adapter-plus_sd15.safetensors\nβ³ Downloading film_net_fp32.pt to ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation/ckpts/film\nβοΈ Downloaded film_net_fp32.pt in 0.25s, size: 131.53MB\nβ film_net_fp32.pt\nβ³ Downloading AnimateLCM_sd15_t2v_lora.safetensors to ComfyUI/models/loras\nβοΈ Downloaded AnimateLCM_sd15_t2v_lora.safetensors in 0.23s, size: 128.39MB\nβ AnimateLCM_sd15_t2v_lora.safetensors\nβ³ Downloading toonyou_beta6.safetensors to ComfyUI/models/checkpoints\nβοΈ Downloaded toonyou_beta6.safetensors in 1.36s, size: 2193.39MB\nβ toonyou_beta6.safetensors\nβ³ Downloading control_v1p_sd15_qrcode_monster.safetensors to ComfyUI/models/controlnet\nβοΈ Downloaded control_v1p_sd15_qrcode_monster.safetensors in 0.55s, size: 689.12MB\nβ control_v1p_sd15_qrcode_monster.safetensors\nβ³ Downloading AnimateLCM_sd15_t2v.ckpt to ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models\nβοΈ Downloaded AnimateLCM_sd15_t2v.ckpt in 1.16s, size: 1729.05MB\nβ AnimateLCM_sd15_t2v.ckpt\n====================================\nRunning workflow\ngot prompt\nExecuting node 564, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load BaseModel\nLoading 1 new model\nExecuting node 563, title: LoraLoaderModelOnly, class type: LoraLoaderModelOnly\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Loading motion module AnimateLCM_sd15_t2v.ckpt via Gen2\nExecuting node 87, title: Load AnimateDiff Model ππ π β‘, class type: 
ADE_LoadAnimateDiffModel\nExecuting node 256, title: Motion Scale ππ π , class type: ADE_MultivalDynamic\nExecuting node 79, title: Apply AnimateDiff Model ππ π β‘, class type: ADE_ApplyAnimateDiffModelSimple\nExecuting node 156, title: Context OptionsβLooped Uniform ππ π , class type: ADE_LoopedUniformContextOptions\nExecuting node 77, title: Use Evolved Sampling ππ π β‘, class type: ADE_UseEvolvedSampling\nExecuting node 573, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors\u001b[0m\nExecuting node 142, title: Load Image, class type: LoadImage\nExecuting node 701, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 545, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 135, title: Load Image, class type: LoadImage\nExecuting node 707, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 548, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 680, title: Load Image, class type: LoadImage\nExecuting node 710, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 681, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. 
If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 683, title: Load Image, class type: LoadImage\nExecuting node 713, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 682, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 752, title: Load Image, class type: LoadImage\nExecuting node 751, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 565, title: Positive, class type: CLIPTextEncode\nExecuting node 566, title: Negative, class type: CLIPTextEncode\nExecuting node 134, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load BaseModel\nRequested to load AnimateDiffModel\nLoading 2 new models\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:02<00:14, 2.09s/it]\n 25%|βββ | 2/8 [00:04<00:11, 1.99s/it]\n 38%|ββββ | 3/8 [00:05<00:09, 1.96s/it]\n 50%|βββββ | 4/8 [00:07<00:07, 1.95s/it]\n 62%|βββββββ | 5/8 [00:09<00:05, 1.94s/it]\n 75%|ββββββββ | 6/8 [00:11<00:03, 1.94s/it]\n 88%|βββββββββ | 7/8 [00:13<00:01, 1.93s/it]\n100%|ββββββββββ| 8/8 [00:15<00:00, 1.93s/it]\n100%|ββββββββββ| 8/8 [00:15<00:00, 1.95s/it]\nExecuting node 85, title: Load VAE, class type: VAELoader\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 84, title: VAE Decode, class type: 
VAEDecode\nExecuting node 53, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:03<00:27, 3.88s/it]\n 25%|βββ | 2/8 [00:07<00:23, 3.88s/it]\n 38%|ββββ | 3/8 [00:11<00:19, 3.89s/it]\n 50%|βββββ | 4/8 [00:15<00:15, 3.89s/it]\n 62%|βββββββ | 5/8 [00:19<00:11, 3.95s/it]\n 75%|ββββββββ | 6/8 [00:23<00:07, 3.94s/it]\n 88%|βββββββββ | 7/8 [00:27<00:03, 3.93s/it]\n100%|ββββββββββ| 8/8 [00:31<00:00, 3.93s/it]\n100%|ββββββββββ| 8/8 [00:31<00:00, 3.92s/it]\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 270, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: ImageScale\nExecuting node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: 
Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nPrompt executed in 250.36 seconds\noutputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4", "metrics": { "predict_time": 265.163699, "total_time": 287.246754 }, "output": [ "https://replicate.delivery/pbxt/K9dkJNIWZC72LRQeSGRxexJgSSkUy8Ib7vt2s0goVjp3cttSA/preview_00001.mp4", "https://replicate.delivery/pbxt/ff9G37f5SNWF6o7a56G5hrS1YQLwcBYJRi123cjvw3cw5ablA/upscaled_00001.mp4", "https://replicate.delivery/pbxt/RqySCgbh21KvGVkmpCvImkFNJALWAAKPqZOeOPuiELucu2WJA/upscaled_model_00001.mp4", "https://replicate.delivery/pbxt/5VWDfWRTrF0nMivSriscwK3SSdZl1oy7amOshxofepdy5ablA/interpolated_00001.mp4" ], "started_at": "2024-04-24T15:24:33.027055Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/zq1ppbr981rgg0cf22trg7ex1r", "cancel": "https://api.replicate.com/v1/predictions/zq1ppbr981rgg0cf22trg7ex1r/cancel" }, "version": "355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf" }
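The `output` array in the response above is a list of plain URLs. A small stdlib-only sketch that derives a local filename for each video from its URL path, using the URLs from this prediction:

```python
import os
from urllib.parse import urlparse

output = [
    "https://replicate.delivery/pbxt/K9dkJNIWZC72LRQeSGRxexJgSSkUy8Ib7vt2s0goVjp3cttSA/preview_00001.mp4",
    "https://replicate.delivery/pbxt/ff9G37f5SNWF6o7a56G5hrS1YQLwcBYJRi123cjvw3cw5ablA/upscaled_00001.mp4",
    "https://replicate.delivery/pbxt/RqySCgbh21KvGVkmpCvImkFNJALWAAKPqZOeOPuiELucu2WJA/upscaled_model_00001.mp4",
    "https://replicate.delivery/pbxt/5VWDfWRTrF0nMivSriscwK3SSdZl1oy7amOshxofepdy5ablA/interpolated_00001.mp4",
]

def local_name(url):
    # Keep only the final path segment, e.g. "preview_00001.mp4".
    return os.path.basename(urlparse(url).path)

names = [local_name(u) for u in output]
# To actually fetch a file you could then call
# urllib.request.urlretrieve(url, local_name(url)).
```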
Prediction
fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7
ID: 0j54ep0shhrgg0cf26gbreemam
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by @fofr
Input
- mode: upscaled-and-interpolated
- prompt: bright, vibrant, high contrast
- checkpoint: illustrated
- aspect_ratio: 3:4
- style_strength: 1
- use_controlnet: true
- negative_prompt: dark, gloomy
{ "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "illustrated", "style_image": "https://replicate.delivery/pbxt/Knst8xuE7YWv1wOj7AzV6JMZbyBx4yqlMw0YgMBqwI13ll5U/tshirt_01829_.png", "aspect_ratio": "3:4", "style_strength": 1, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/Knst8Jmhr7nUhLLsEQ5y6shXtzUNKApvXsHmpsfmvQi3ak9t/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png", "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png", "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg" }
Install Replicate's Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
import fs from "node:fs/promises";

const output = await replicate.run(
  "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
  {
    input: {
      mode: "upscaled-and-interpolated",
      prompt: "bright, vibrant, high contrast",
      checkpoint: "illustrated",
      style_image: "https://replicate.delivery/pbxt/Knst8xuE7YWv1wOj7AzV6JMZbyBx4yqlMw0YgMBqwI13ll5U/tshirt_01829_.png",
      aspect_ratio: "3:4",
      style_strength: 1,
      use_controlnet: true,
      negative_prompt: "dark, gloomy",
      subject_image_1: "https://replicate.delivery/pbxt/Knst8Jmhr7nUhLLsEQ5y6shXtzUNKApvXsHmpsfmvQi3ak9t/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      subject_image_2: "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
      subject_image_3: "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
      subject_image_4: "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
  }
);

// To access the file URL:
console.log(output[0].url());

// To write the first output video to disk:
await fs.writeFile("output.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Import the client:

import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
    input={
        "mode": "upscaled-and-interpolated",
        "prompt": "bright, vibrant, high contrast",
        "checkpoint": "illustrated",
        "style_image": "https://replicate.delivery/pbxt/Knst8xuE7YWv1wOj7AzV6JMZbyBx4yqlMw0YgMBqwI13ll5U/tshirt_01829_.png",
        "aspect_ratio": "3:4",
        "style_strength": 1,
        "use_controlnet": True,
        "negative_prompt": "dark, gloomy",
        "subject_image_1": "https://replicate.delivery/pbxt/Knst8Jmhr7nUhLLsEQ5y6shXtzUNKApvXsHmpsfmvQi3ak9t/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
        "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
        "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
        "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
    "input": {
      "mode": "upscaled-and-interpolated",
      "prompt": "bright, vibrant, high contrast",
      "checkpoint": "illustrated",
      "style_image": "https://replicate.delivery/pbxt/Knst8xuE7YWv1wOj7AzV6JMZbyBx4yqlMw0YgMBqwI13ll5U/tshirt_01829_.png",
      "aspect_ratio": "3:4",
      "style_strength": 1,
      "use_controlnet": true,
      "negative_prompt": "dark, gloomy",
      "subject_image_1": "https://replicate.delivery/pbxt/Knst8Jmhr7nUhLLsEQ5y6shXtzUNKApvXsHmpsfmvQi3ak9t/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
      "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
      "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
Output
{ "completed_at": "2024-04-24T19:46:28.871671Z", "created_at": "2024-04-24T19:40:56.076000Z", "data_removed": false, "error": null, "id": "0j54ep0shhrgg0cf26gbreemam", "input": { "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "illustrated", "style_image": "https://replicate.delivery/pbxt/Knst8xuE7YWv1wOj7AzV6JMZbyBx4yqlMw0YgMBqwI13ll5U/tshirt_01829_.png", "aspect_ratio": "3:4", "style_strength": 1, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/Knst8Jmhr7nUhLLsEQ5y6shXtzUNKApvXsHmpsfmvQi3ak9t/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png", "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png", "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg" }, "logs": "Random seed set to: 3365804795\nChecking inputs\nβ /tmp/inputs/2.png\nβ /tmp/inputs/1.png\nβ /tmp/inputs/3.png\nβ /tmp/inputs/4.png\nβ /tmp/inputs/circles.mp4\nβ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\nβ³ Downloading dreamshaper_8.safetensors to ComfyUI/models/checkpoints\nβοΈ Downloaded dreamshaper_8.safetensors in 1.41s, size: 2033.83MB\nβ dreamshaper_8.safetensors\nβ³ Downloading film_net_fp32.pt to ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation/ckpts/film\nβοΈ Downloaded film_net_fp32.pt in 0.28s, size: 131.53MB\nβ film_net_fp32.pt\nβ³ Downloading AnimateLCM_sd15_t2v.ckpt to ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models\nβοΈ Downloaded AnimateLCM_sd15_t2v.ckpt in 1.43s, size: 1729.05MB\nβ AnimateLCM_sd15_t2v.ckpt\nβ³ Downloading ip-adapter-plus_sd15.safetensors to ComfyUI/models/ipadapter\nβοΈ 
Downloaded ip-adapter-plus_sd15.safetensors in 0.25s, size: 93.63MB\nβ ip-adapter-plus_sd15.safetensors\nβ³ Downloading ip-adapter-plus_sdxl_vit-h.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sdxl_vit-h.safetensors in 0.64s, size: 808.26MB\nβ ip-adapter-plus_sdxl_vit-h.safetensors\nβ³ Downloading vae-ft-mse-840000-ema-pruned.safetensors to ComfyUI/models/vae\nβοΈ Downloaded vae-ft-mse-840000-ema-pruned.safetensors in 0.41s, size: 319.14MB\nβ vae-ft-mse-840000-ema-pruned.safetensors\nβ³ Downloading control_v1p_sd15_qrcode_monster.safetensors to ComfyUI/models/controlnet\nβοΈ Downloaded control_v1p_sd15_qrcode_monster.safetensors in 0.61s, size: 689.12MB\nβ control_v1p_sd15_qrcode_monster.safetensors\nβ³ Downloading AnimateLCM_sd15_t2v_lora.safetensors to ComfyUI/models/loras\nβοΈ Downloaded AnimateLCM_sd15_t2v_lora.safetensors in 0.25s, size: 128.39MB\nβ AnimateLCM_sd15_t2v_lora.safetensors\nβ³ Downloading RealESRGAN_x4.pth to ComfyUI/models/upscale_models\nβοΈ Downloaded RealESRGAN_x4.pth in 0.24s, size: 63.94MB\nβ RealESRGAN_x4.pth\nβ³ Downloading CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors to ComfyUI/models/clip_vision\nβοΈ Downloaded CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in 1.64s, size: 2411.24MB\nβ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 564, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load BaseModel\nLoading 1 new model\nExecuting node 563, title: LoraLoaderModelOnly, class type: LoraLoaderModelOnly\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Loading motion module AnimateLCM_sd15_t2v.ckpt via Gen2\nExecuting node 87, title: Load AnimateDiff Model ππ π β‘, class type: ADE_LoadAnimateDiffModel\nExecuting node 256, title: Motion 
Scale ππ π , class type: ADE_MultivalDynamic\nExecuting node 79, title: Apply AnimateDiff Model ππ π β‘, class type: ADE_ApplyAnimateDiffModelSimple\nExecuting node 156, title: Context OptionsβLooped Uniform ππ π , class type: ADE_LoopedUniformContextOptions\nExecuting node 77, title: Use Evolved Sampling ππ π β‘, class type: ADE_UseEvolvedSampling\nExecuting node 573, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors\u001b[0m\nExecuting node 142, title: Load Image, class type: LoadImage\nExecuting node 701, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 545, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 135, title: Load Image, class type: LoadImage\nExecuting node 707, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 548, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 680, title: Load Image, class type: LoadImage\nExecuting node 710, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 681, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 683, title: Load Image, class type: LoadImage\nExecuting node 713, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 682, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. 
If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 752, title: Load Image, class type: LoadImage\nExecuting node 751, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 565, title: Positive, class type: CLIPTextEncode\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 566, title: Negative, class type: CLIPTextEncode\nExecuting node 127, title: Load Advanced ControlNet Model ππ π π , class type: ControlNetLoaderAdvanced\nExecuting node 134, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 569, title: π§ Batch Count, class type: BatchCount+\nExecuting node 746, title: Load Video (Upload) π₯π ₯π π ’, class type: VHS_LoadVideo\nExecuting node 461, title: π§ Simple Math, class type: SimpleMath+\nExecuting node 454, title: RepeatImageBatch, class type: RepeatImageBatch\nExecuting node 458, title: Split Image Batch π₯π ₯π π ’, class type: VHS_SplitImages\nExecuting node 125, title: Apply Advanced ControlNet ππ π π , class type: ACN_AdvancedControlNetApply\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load AnimateDiffModel\nRequested to load BaseModel\nRequested to load ControlNet\nLoading 3 new models\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|β | 1/11 [00:04<00:49, 4.91s/it]\n 18%|ββ | 2/11 [00:07<00:29, 3.28s/it]\n 27%|βββ | 3/11 [00:09<00:22, 2.86s/it]\n 36%|ββββ | 4/11 [00:11<00:18, 2.67s/it]\n 45%|βββββ | 5/11 [00:13<00:14, 2.39s/it]\n 55%|ββββββ | 6/11 
[00:15<00:11, 2.22s/it]\n 64%|βββββββ | 7/11 [00:17<00:08, 2.12s/it]\n 73%|ββββββββ | 8/11 [00:19<00:06, 2.05s/it]\n 82%|βββββββββ | 9/11 [00:21<00:03, 2.00s/it]\n 91%|βββββββββ | 10/11 [00:23<00:01, 1.97s/it]\n100%|ββββββββββ| 11/11 [00:25<00:00, 1.95s/it]\n100%|ββββββββββ| 11/11 [00:25<00:00, 2.28s/it]\nExecuting node 85, title: Load VAE, class type: VAELoader\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 84, title: VAE Decode, class type: VAEDecode\nExecuting node 53, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|β | 1/11 [00:03<00:38, 3.84s/it]\n 18%|ββ | 2/11 [00:07<00:34, 3.83s/it]\n 27%|βββ | 3/11 [00:11<00:30, 3.83s/it]\n 36%|ββββ | 4/11 [00:15<00:26, 3.83s/it]\n 45%|βββββ | 5/11 [00:19<00:22, 3.83s/it]\n 55%|ββββββ | 6/11 [00:22<00:19, 3.83s/it]\n 64%|βββββββ | 7/11 [00:26<00:15, 3.84s/it]\n 73%|ββββββββ | 8/11 [00:30<00:11, 3.84s/it]\n 82%|βββββββββ | 9/11 [00:34<00:07, 3.84s/it]\n 91%|βββββββββ | 10/11 [00:38<00:03, 3.84s/it]\n100%|ββββββββββ| 11/11 [00:42<00:00, 3.84s/it]\n100%|ββββββββββ| 11/11 [00:42<00:00, 3.84s/it]\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 270, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: 
ImageScale\nExecuting node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nPrompt executed in 277.90 seconds\noutputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4", "metrics": { "predict_time": 291.526079, "total_time": 332.795671 }, "output": [ "https://replicate.delivery/pbxt/UEnKdoDxVDbmFluBmlyNBdBfZcaSXdsK1FmPuvcSVRhIn4WJA/preview_00001.mp4", "https://replicate.delivery/pbxt/jks6HpcRn5rfTil8aWs4hQvRet618blrRURZLYtukKWTOxtSA/upscaled_00001.mp4", "https://replicate.delivery/pbxt/Hfb7RalBgR3qK6XmxlWCL1G0Ko2fHkuiRRF3idzdblTTOxtSA/upscaled_model_00001.mp4", 
"https://replicate.delivery/pbxt/Emvw5LrbvP4fKyA9yJg194pHvIWfMkHybue87hUr1HYociblA/interpolated_00001.mp4" ], "started_at": "2024-04-24T19:41:37.345592Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/0j54ep0shhrgg0cf26gbreemam", "cancel": "https://api.replicate.com/v1/predictions/0j54ep0shhrgg0cf26gbreemam/cancel" }, "version": "e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7" }
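The workflow logs above end with an `outputs` mapping from ComfyUI node IDs to rendered files; nodes 53, 205, 272, and 219 correspond to the preview, upscaled, model-upscaled, and interpolated videos. A minimal Python sketch (assuming the dict structure shown in the logs; `collect_filenames` is an illustrative helper, not part of any client library) that flattens it into the ordered filename list:

```python
# Node-ID -> files mapping, structure copied from the end of the workflow logs.
# The node order 53, 205, 272, 219 matches preview, upscaled,
# upscaled_model, interpolated in the returned output array.
outputs = {
    "53": {"gifs": [{"filename": "preview_00001.mp4", "format": "video/h264-mp4"}]},
    "205": {"gifs": [{"filename": "upscaled_00001.mp4", "format": "video/h264-mp4"}]},
    "272": {"gifs": [{"filename": "upscaled_model_00001.mp4", "format": "video/h264-mp4"}]},
    "219": {"gifs": [{"filename": "interpolated_00001.mp4", "format": "video/h264-mp4"}]},
}

def collect_filenames(outputs, node_order=("53", "205", "272", "219")):
    """Flatten the ComfyUI outputs dict into an ordered list of filenames."""
    return [f["filename"] for node in node_order for f in outputs[node]["gifs"]]

print(collect_filenames(outputs))
```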
Prediction
fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7ID9esbb1jcmxrgg0cf26rsptyyf8StatusSucceededSourceWebHardwareA40 (Large)Total durationCreatedby @fofrInput
- mode
- upscaled-and-interpolated
- prompt
- bright, vibrant, high contrast
- checkpoint
- 3D
- aspect_ratio
- 3:4
- style_strength
- 0.5
- use_controlnet
- negative_prompt
- dark, gloomy
{ "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", "aspect_ratio": "3:4", "style_strength": 0.5, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" }
Install Replicate's Node.js client library:npm install replicate
Import and set up the client:import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7", { input: { mode: "upscaled-and-interpolated", prompt: "bright, vibrant, high contrast", checkpoint: "3D", style_image: "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", aspect_ratio: "3:4", style_strength: 0.5, use_controlnet: true, negative_prompt: "dark, gloomy", subject_image_1: "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", subject_image_2: "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", subject_image_3: "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", subject_image_4: "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" } } ); // To access the file URL: console.log(output[0].url()); //=> "http://example.com" // To write the file to disk (requires: import { writeFile } from "node:fs/promises"): await writeFile("my-video.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:pip install replicate
Import the client:import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7", input={ "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", "aspect_ratio": "3:4", "style_strength": 0.5, "use_controlnet": True, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" } ) print(output)
To learn more, take a look at the guide on getting started with Python.
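In "upscaled-and-interpolated" mode the model returns four video URLs (preview, upscaled, model-upscaled, interpolated). Before downloading them, a local filename can be derived from each delivery URL; `filename_from_url` below is an illustrative helper built on the standard library, not part of the Replicate client:

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def filename_from_url(url: str) -> str:
    """Return the last path segment of a delivery URL, e.g. 'preview_00001.mp4'."""
    return PurePosixPath(urlparse(url).path).name

# Example delivery URLs of the shape returned by this model
urls = [
    "https://replicate.delivery/pbxt/GPTWM5fboOShBKlFRQtMapKDoUBohf9qFxrySPt7AF4zfiblA/preview_00001.mp4",
    "https://replicate.delivery/pbxt/GZV1nIJXsIJLMpIg1w6XHWxSfN3qm36kEkTIa2A1F6L6v4WJA/upscaled_00001.mp4",
]
print([filename_from_url(u) for u in urls])  # prints the two .mp4 basenames
```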
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7", "input": { "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", "aspect_ratio": "3:4", "style_strength": 0.5, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
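The `Prefer: wait` header asks the API to hold the connection until the prediction finishes, but long renders like this one (around five minutes) can still come back in a non-terminal state, in which case clients poll the prediction's `urls.get` endpoint. A small sketch of the terminal-status check only (pure logic, no HTTP; the status values are the standard Replicate prediction states):

```python
# Replicate prediction states that will never change again
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_done(prediction: dict) -> bool:
    """True once a prediction JSON reports a terminal status."""
    return prediction.get("status") in TERMINAL_STATUSES

print(is_done({"status": "succeeded"}))   # prints True
print(is_done({"status": "processing"}))  # prints False
```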
Output
{ "completed_at": "2024-04-24T20:05:10.327727Z", "created_at": "2024-04-24T19:59:43.271000Z", "data_removed": false, "error": null, "id": "9esbb1jcmxrgg0cf26rsptyyf8", "input": { "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", "aspect_ratio": "3:4", "style_strength": 0.5, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" }, "logs": "Random seed set to: 2661073672\nChecking inputs\nβ /tmp/inputs/2.png\nβ /tmp/inputs/1.png\nβ /tmp/inputs/3.png\nβ /tmp/inputs/4.png\nβ /tmp/inputs/circles.mp4\nβ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\nβ³ Downloading vae-ft-mse-840000-ema-pruned.safetensors to ComfyUI/models/vae\nβοΈ Downloaded vae-ft-mse-840000-ema-pruned.safetensors in 0.41s, size: 319.14MB\nβ vae-ft-mse-840000-ema-pruned.safetensors\nβ³ Downloading RealESRGAN_x4.pth to ComfyUI/models/upscale_models\nβοΈ Downloaded RealESRGAN_x4.pth in 0.18s, size: 63.94MB\nβ RealESRGAN_x4.pth\nβ³ Downloading AnimateLCM_sd15_t2v.ckpt to ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models\nβοΈ Downloaded AnimateLCM_sd15_t2v.ckpt in 1.20s, size: 1729.05MB\nβ AnimateLCM_sd15_t2v.ckpt\nβ³ Downloading AnimateLCM_sd15_t2v_lora.safetensors to ComfyUI/models/loras\nβοΈ Downloaded 
AnimateLCM_sd15_t2v_lora.safetensors in 0.92s, size: 128.39MB\nβ AnimateLCM_sd15_t2v_lora.safetensors\nβ³ Downloading ip-adapter-plus_sd15.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sd15.safetensors in 0.25s, size: 93.63MB\nβ ip-adapter-plus_sd15.safetensors\nβ³ Downloading rcnzCartoon3d_v20.safetensors to ComfyUI/models/checkpoints\nβοΈ Downloaded rcnzCartoon3d_v20.safetensors in 2.04s, size: 2033.83MB\nβ rcnzCartoon3d_v20.safetensors\nβ³ Downloading ip-adapter-plus_sdxl_vit-h.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sdxl_vit-h.safetensors in 0.67s, size: 808.26MB\nβ ip-adapter-plus_sdxl_vit-h.safetensors\nβ³ Downloading CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors to ComfyUI/models/clip_vision\nβοΈ Downloaded CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in 11.24s, size: 2411.24MB\nβ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\nβ³ Downloading film_net_fp32.pt to ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation/ckpts/film\nβοΈ Downloaded film_net_fp32.pt in 1.16s, size: 131.53MB\nβ film_net_fp32.pt\nβ³ Downloading control_v1p_sd15_qrcode_monster.safetensors to ComfyUI/models/controlnet\nβοΈ Downloaded control_v1p_sd15_qrcode_monster.safetensors in 0.69s, size: 689.12MB\nβ control_v1p_sd15_qrcode_monster.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 564, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load BaseModel\nLoading 1 new model\nExecuting node 563, title: LoraLoaderModelOnly, class type: LoraLoaderModelOnly\nExecuting node 87, title: Load AnimateDiff Model ππ π β‘, class type: ADE_LoadAnimateDiffModel[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Loading motion module AnimateLCM_sd15_t2v.ckpt via Gen2\nExecuting node 256, title: Motion Scale 
ππ π , class type: ADE_MultivalDynamic\nExecuting node 79, title: Apply AnimateDiff Model ππ π β‘, class type: ADE_ApplyAnimateDiffModelSimple\nExecuting node 156, title: Context OptionsβLooped Uniform ππ π , class type: ADE_LoopedUniformContextOptions\nExecuting node 77, title: Use Evolved Sampling ππ π β‘, class type: ADE_UseEvolvedSampling\nExecuting node 573, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors\u001b[0m\nExecuting node 142, title: Load Image, class type: LoadImage\nExecuting node 701, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 545, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 135, title: Load Image, class type: LoadImage\nExecuting node 707, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 548, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 680, title: Load Image, class type: LoadImage\nExecuting node 710, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 681, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 683, title: Load Image, class type: LoadImage\nExecuting node 713, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 682, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 752, title: Load Image, class type: LoadImage\nExecuting node 751, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. 
If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 565, title: Positive, class type: CLIPTextEncode\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 566, title: Negative, class type: CLIPTextEncode\nExecuting node 127, title: Load Advanced ControlNet Model ππ π π , class type: ControlNetLoaderAdvanced\nExecuting node 134, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 569, title: π§ Batch Count, class type: BatchCount+\nExecuting node 746, title: Load Video (Upload) π₯π ₯π π ’, class type: VHS_LoadVideo\nExecuting node 461, title: π§ Simple Math, class type: SimpleMath+\nExecuting node 454, title: RepeatImageBatch, class type: RepeatImageBatch\nExecuting node 458, title: Split Image Batch π₯π ₯π π ’, class type: VHS_SplitImages\nExecuting node 125, title: Apply Advanced ControlNet ππ π π , class type: ACN_AdvancedControlNetApply\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load AnimateDiffModel\nRequested to load BaseModel\nRequested to load ControlNet\nLoading 3 new models\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|β | 1/11 [00:03<00:30, 3.10s/it]\n 18%|ββ | 2/11 [00:05<00:22, 2.50s/it]\n 27%|βββ | 3/11 [00:07<00:19, 2.43s/it]\n 36%|ββββ | 4/11 [00:09<00:16, 2.40s/it]\n 45%|βββββ | 5/11 [00:11<00:13, 2.22s/it]\n 55%|ββββββ | 6/11 [00:13<00:10, 2.11s/it]\n 64%|βββββββ | 7/11 [00:15<00:08, 2.04s/it]\n 73%|ββββββββ | 8/11 [00:17<00:05, 1.99s/it]\n 82%|βββββββββ | 9/11 [00:19<00:03, 1.96s/it]\n 91%|βββββββββ | 10/11 [00:21<00:01, 1.94s/it]\n100%|ββββββββββ| 11/11 [00:23<00:00, 1.93s/it]\n100%|ββββββββββ| 11/11 [00:23<00:00, 2.11s/it]\nExecuting node 85, title: Load VAE, class type: VAELoader\nUsing 
pytorch attention in VAE\nUsing pytorch attention in VAE\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 84, title: VAE Decode, class type: VAEDecode\nExecuting node 53, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|β | 1/11 [00:03<00:38, 3.86s/it]\n 18%|ββ | 2/11 [00:07<00:34, 3.85s/it]\n 27%|βββ | 3/11 [00:11<00:30, 3.85s/it]\n 36%|ββββ | 4/11 [00:15<00:27, 3.86s/it]\n 45%|βββββ | 5/11 [00:19<00:23, 3.86s/it]\n 55%|ββββββ | 6/11 [00:23<00:19, 3.86s/it]\n 64%|βββββββ | 7/11 [00:27<00:15, 3.86s/it]\n 73%|ββββββββ | 8/11 [00:30<00:11, 3.86s/it]\n 82%|βββββββββ | 9/11 [00:34<00:07, 3.87s/it]\n 91%|βββββββββ | 10/11 [00:38<00:03, 3.87s/it]\n100%|ββββββββββ| 11/11 [00:42<00:00, 3.87s/it]\n100%|ββββββββββ| 11/11 [00:42<00:00, 3.86s/it]\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 270, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: ImageScale\nExecuting node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done 
cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nPrompt executed in 270.81 seconds\noutputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4", "metrics": { "predict_time": 301.418943, "total_time": 327.056727 }, "output": [ "https://replicate.delivery/pbxt/GPTWM5fboOShBKlFRQtMapKDoUBohf9qFxrySPt7AF4zfiblA/preview_00001.mp4", "https://replicate.delivery/pbxt/GZV1nIJXsIJLMpIg1w6XHWxSfN3qm36kEkTIa2A1F6L6v4WJA/upscaled_00001.mp4", "https://replicate.delivery/pbxt/N2u5EY0jm4ovC5GAztQaUrvL4KiyJTKzr1g5Ia2nKQe6v4WJA/upscaled_model_00001.mp4", "https://replicate.delivery/pbxt/w3xiWkOFJEb7KRmQq5itovpqr1FGVZ033Nj6YBe1jiz6v4WJA/interpolated_00001.mp4" ], "started_at": "2024-04-24T20:00:08.908784Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/9esbb1jcmxrgg0cf26rsptyyf8", "cancel": "https://api.replicate.com/v1/predictions/9esbb1jcmxrgg0cf26rsptyyf8/cancel" }, "version": "e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7" }
Prediction
fofr/video-morpher:a75043c99151a0d97d02ea55998168172188e9687c6f2e66c102ea99fd7ca4de
ID: 36pah0gewxrgm0cf225ak122wm
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Input
- mode: upscaled-and-interpolated
- prompt: (empty)
- checkpoint: 3D
- aspect_ratio: 4:3
- style_strength: 1
- negative_prompt: (empty)
{
  "mode": "upscaled-and-interpolated",
  "prompt": "",
  "checkpoint": "3D",
  "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
  "aspect_ratio": "4:3",
  "style_strength": 1,
  "negative_prompt": "",
  "subject_image_1": "https://replicate.delivery/pbxt/Kns41KZNEQhNyIjeVyW9mA7PV21EkcRDZ14rYbfHa0ZeNug1/ComfyUI_03072_.png",
  "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
  "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
  "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
}
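The payload combines six scalar inputs with a style image and four subject image URLs. A minimal Python sketch of assembling and sanity-checking such a payload before a run; the `build_input` helper and its range checks are illustrative assumptions based on this example, not the model's published schema:

```python
def build_input(checkpoint, subjects, style_image=None, style_strength=0.25,
                aspect_ratio="4:3", mode="upscaled-and-interpolated",
                prompt="", negative_prompt=""):
    """Assemble a video-morpher input dict (hypothetical helper)."""
    # The 0..1 range matches the values seen in these examples (0.25 and 1).
    if not 0.0 <= style_strength <= 1.0:
        raise ValueError("style_strength is expected to be between 0 and 1")
    # This workflow morphs between exactly four subject images.
    if len(subjects) != 4:
        raise ValueError("expected exactly four subject image URLs")
    payload = {
        "mode": mode,
        "prompt": prompt,
        "checkpoint": checkpoint,
        "aspect_ratio": aspect_ratio,
        "style_strength": style_strength,
        "negative_prompt": negative_prompt,
    }
    if style_image:
        payload["style_image"] = style_image
    for i, url in enumerate(subjects, start=1):
        payload[f"subject_image_{i}"] = url
    return payload
```

The resulting dict can be passed directly as the `input` argument of `replicate.run`.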
Install Replicate's Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/video-morpher:a75043c99151a0d97d02ea55998168172188e9687c6f2e66c102ea99fd7ca4de",
  {
    input: {
      mode: "upscaled-and-interpolated",
      prompt: "",
      checkpoint: "3D",
      style_image: "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
      aspect_ratio: "4:3",
      style_strength: 1,
      negative_prompt: "",
      subject_image_1: "https://replicate.delivery/pbxt/Kns41KZNEQhNyIjeVyW9mA7PV21EkcRDZ14rYbfHa0ZeNug1/ComfyUI_03072_.png",
      subject_image_2: "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
      subject_image_3: "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
      subject_image_4: "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
  }
);

// To access the file URL:
console.log(output[0].url());
//=> "http://example.com"

// To write the file to disk (requires: import { writeFile } from "node:fs/promises"):
await writeFile("my-video.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate
Import the client:

import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/video-morpher:a75043c99151a0d97d02ea55998168172188e9687c6f2e66c102ea99fd7ca4de",
    input={
        "mode": "upscaled-and-interpolated",
        "prompt": "",
        "checkpoint": "3D",
        "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
        "aspect_ratio": "4:3",
        "style_strength": 1,
        "negative_prompt": "",
        "subject_image_1": "https://replicate.delivery/pbxt/Kns41KZNEQhNyIjeVyW9mA7PV21EkcRDZ14rYbfHa0ZeNug1/ComfyUI_03072_.png",
        "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
        "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
        "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
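A successful run returns four video URLs (preview, upscaled, model-upscaled, interpolated), so it is often handy to save them all locally. A minimal sketch assuming the output is a plain list of URL strings, as in the JSON on this page; `local_name` and `download_outputs` are hypothetical helpers, not part of the client library:

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_name(url):
    """Reuse the last path segment of the URL as the local filename."""
    return os.path.basename(urlparse(url).path)

def download_outputs(urls, dest="."):
    """Fetch each output video and return the local paths."""
    paths = []
    for url in urls:
        path = os.path.join(dest, local_name(url))
        urlretrieve(url, path)  # network call; swap in requests/httpx if preferred
        paths.append(path)
    return paths
```

Called on the `output` list above, this would produce files such as preview_00001.mp4 and interpolated_00001.mp4 in the destination directory.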
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/video-morpher:a75043c99151a0d97d02ea55998168172188e9687c6f2e66c102ea99fd7ca4de",
    "input": {
      "mode": "upscaled-and-interpolated",
      "prompt": "",
      "checkpoint": "3D",
      "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
      "aspect_ratio": "4:3",
      "style_strength": 1,
      "negative_prompt": "",
      "subject_image_1": "https://replicate.delivery/pbxt/Kns41KZNEQhNyIjeVyW9mA7PV21EkcRDZ14rYbfHa0ZeNug1/ComfyUI_03072_.png",
      "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png",
      "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png",
      "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
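The `Prefer: wait` header holds the connection open until the prediction finishes, but renders like these take four to five minutes, which can exceed the synchronous wait window. In that case the response comes back while the prediction is still running, and you can poll its `urls.get` endpoint until it reaches a terminal status. A standard-library-only sketch; `poll_prediction` is illustrative, not an official client method:

```python
import json
import time
from urllib.request import Request, urlopen

# Statuses after which the prediction will not change further.
TERMINAL = {"succeeded", "failed", "canceled"}

def poll_prediction(get_url, token, interval=2.0):
    """Fetch the prediction repeatedly until it reaches a terminal status."""
    while True:
        req = Request(get_url, headers={"Authorization": f"Bearer {token}"})
        with urlopen(req) as resp:
            prediction = json.load(resp)
        if prediction["status"] in TERMINAL:
            return prediction
        time.sleep(interval)
```

For this run, `get_url` would be the `urls.get` value from the response, e.g. https://api.replicate.com/v1/predictions/36pah0gewxrgm0cf225ak122wm.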
Output
{ "completed_at": "2024-04-24T14:42:12.708949Z", "created_at": "2024-04-24T14:37:14.343000Z", "data_removed": false, "error": null, "id": "36pah0gewxrgm0cf225ak122wm", "input": { "mode": "upscaled-and-interpolated", "prompt": "", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png", "aspect_ratio": "4:3", "style_strength": 1, "negative_prompt": "", "subject_image_1": "https://replicate.delivery/pbxt/Kns41KZNEQhNyIjeVyW9mA7PV21EkcRDZ14rYbfHa0ZeNug1/ComfyUI_03072_.png", "subject_image_2": "https://replicate.delivery/pbxt/KnroDfMF7HJay9msYtZMHB4vwMz9PXpvuY4vldEEmrGFxfFb/ComfyUI_02602_.png", "subject_image_3": "https://replicate.delivery/pbxt/KnroE2qG6XaIwokhg66ApTDHh8URpKEg8IP2CFXQ0L8isxpP/ComfyUI_02575_.png", "subject_image_4": "https://replicate.delivery/pbxt/KnroE49VZGwosByNYd6XCI5dStBtrSQRPmiW89bSdXv9f1Lx/863x1200-828592993.jpg" }, "logs": "Random seed set to: 3961438974\nChecking inputs\nβ /tmp/inputs/2.png\nβ /tmp/inputs/1.png\nβ /tmp/inputs/3.png\nβ /tmp/inputs/4.png\nβ /tmp/inputs/circles.mp4\nβ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\nβ³ Downloading control_v1p_sd15_qrcode_monster.safetensors to ComfyUI/models/controlnet\nβοΈ Downloaded control_v1p_sd15_qrcode_monster.safetensors in 0.58s, size: 689.12MB\nβ control_v1p_sd15_qrcode_monster.safetensors\nβ³ Downloading AnimateLCM_sd15_t2v_lora.safetensors to ComfyUI/models/loras\nβοΈ Downloaded AnimateLCM_sd15_t2v_lora.safetensors in 0.25s, size: 128.39MB\nβ AnimateLCM_sd15_t2v_lora.safetensors\nβ³ Downloading vae-ft-mse-840000-ema-pruned.safetensors to ComfyUI/models/vae\nβοΈ Downloaded vae-ft-mse-840000-ema-pruned.safetensors in 0.34s, size: 319.14MB\nβ vae-ft-mse-840000-ema-pruned.safetensors\nβ³ Downloading AnimateLCM_sd15_t2v.ckpt to ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models\nβοΈ Downloaded AnimateLCM_sd15_t2v.ckpt in 
1.37s, size: 1729.05MB\nβ AnimateLCM_sd15_t2v.ckpt\nβ³ Downloading rcnzCartoon3d_v20.safetensors to ComfyUI/models/checkpoints\nβοΈ Downloaded rcnzCartoon3d_v20.safetensors in 10.13s, size: 2033.83MB\nβ rcnzCartoon3d_v20.safetensors\nβ³ Downloading ip-adapter-plus_sd15.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sd15.safetensors in 0.30s, size: 93.63MB\nβ ip-adapter-plus_sd15.safetensors\nβ³ Downloading CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors to ComfyUI/models/clip_vision\nβοΈ Downloaded CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in 1.46s, size: 2411.24MB\nβ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\nβ³ Downloading RealESRGAN_x4.pth to ComfyUI/models/upscale_models\nβοΈ Downloaded RealESRGAN_x4.pth in 0.18s, size: 63.94MB\nβ RealESRGAN_x4.pth\nβ³ Downloading ip-adapter-plus_sdxl_vit-h.safetensors to ComfyUI/models/ipadapter\nβοΈ Downloaded ip-adapter-plus_sdxl_vit-h.safetensors in 0.72s, size: 808.26MB\nβ ip-adapter-plus_sdxl_vit-h.safetensors\nβ³ Downloading film_net_fp32.pt to ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation/ckpts/film\nβοΈ Downloaded film_net_fp32.pt in 0.22s, size: 131.53MB\nβ film_net_fp32.pt\n====================================\nRunning workflow\ngot prompt\nExecuting node 564, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load BaseModel\nLoading 1 new model\nExecuting node 563, title: LoraLoaderModelOnly, class type: LoraLoaderModelOnly\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Loading motion module AnimateLCM_sd15_t2v.ckpt via Gen2\nExecuting node 87, title: Load AnimateDiff Model ππ π β‘, class type: ADE_LoadAnimateDiffModel\nExecuting node 256, title: Motion Scale ππ π , class type: ADE_MultivalDynamic\nExecuting node 79, title: Apply AnimateDiff Model ππ π β‘, class type: 
ADE_ApplyAnimateDiffModelSimple\nExecuting node 156, title: Context OptionsβLooped Uniform ππ π , class type: ADE_LoopedUniformContextOptions\nExecuting node 77, title: Use Evolved Sampling ππ π β‘, class type: ADE_UseEvolvedSampling\nExecuting node 573, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\n\u001b[33mINFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\u001b[0m\n\u001b[33mINFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors\u001b[0m\nExecuting node 142, title: Load Image, class type: LoadImage\nExecuting node 701, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nRequested to load CLIPVisionModelProjection\nLoading 1 new model\nExecuting node 545, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 135, title: Load Image, class type: LoadImage\nExecuting node 707, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 548, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 680, title: Load Image, class type: LoadImage\nExecuting node 710, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 681, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 683, title: Load Image, class type: LoadImage\nExecuting node 713, title: CreateFadeMaskAdvanced, class type: CreateFadeMaskAdvanced\nExecuting node 682, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. 
If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 752, title: Load Image, class type: LoadImage\nExecuting node 751, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 565, title: Positive, class type: CLIPTextEncode\nExecuting node 566, title: Negative, class type: CLIPTextEncode\nExecuting node 134, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load BaseModel\nRequested to load AnimateDiffModel\nLoading 2 new models\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:03<00:22, 3.27s/it]\n 25%|βββ | 2/8 [00:05<00:14, 2.47s/it]\n 38%|ββββ | 3/8 [00:07<00:11, 2.22s/it]\n 50%|βββββ | 4/8 [00:09<00:08, 2.10s/it]\n 62%|βββββββ | 5/8 [00:10<00:06, 2.03s/it]\n 75%|ββββββββ | 6/8 [00:12<00:03, 2.00s/it]\n 88%|βββββββββ | 7/8 [00:14<00:01, 1.97s/it]\n100%|ββββββββββ| 8/8 [00:16<00:00, 1.96s/it]\n100%|ββββββββββ| 8/8 [00:16<00:00, 2.09s/it]\nExecuting node 85, title: Load VAE, class type: VAELoader\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 84, title: VAE Decode, class type: VAEDecode\nExecuting node 53, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: 
VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:03<00:27, 3.89s/it]\n 25%|βββ | 2/8 [00:07<00:23, 3.87s/it]\n 38%|ββββ | 3/8 [00:11<00:19, 3.94s/it]\n 50%|βββββ | 4/8 [00:15<00:15, 3.92s/it]\n 62%|βββββββ | 5/8 [00:19<00:11, 3.90s/it]\n 75%|ββββββββ | 6/8 [00:23<00:07, 3.89s/it]\n 88%|βββββββββ | 7/8 [00:27<00:03, 3.89s/it]\n100%|ββββββββββ| 8/8 [00:31<00:00, 3.89s/it]\n100%|ββββββββββ| 8/8 [00:31<00:00, 3.89s/it]\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 270, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: ImageScale\nExecuting node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nPrompt executed in 250.04 seconds\noutputs: 
{'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4", "metrics": { "predict_time": 270.722101, "total_time": 298.365949 }, "output": [ "https://replicate.delivery/pbxt/YhuZXJfRfipvOU7Qf73LXnPDkBcVqf9i9sBqJpWqVtaIEz2KB/preview_00001.mp4", "https://replicate.delivery/pbxt/38LFmx7PZNarONApyM2amJJapsO6deZ2Tn5ldlucnu0hY2WJA/upscaled_00001.mp4", "https://replicate.delivery/pbxt/rp0xUZfVntSzHyXLrW2uajlXgzt2kPZyersE1u2AGVYDxstSA/upscaled_model_00001.mp4", "https://replicate.delivery/pbxt/MAxPGUCoA95ZE9cVfVSj9fZukRIbKF0Fsh3VVgdVJaIExstSA/interpolated_00001.mp4" ], "started_at": "2024-04-24T14:37:41.986848Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/36pah0gewxrgm0cf225ak122wm", "cancel": "https://api.replicate.com/v1/predictions/36pah0gewxrgm0cf225ak122wm/cancel" }, "version": "a75043c99151a0d97d02ea55998168172188e9687c6f2e66c102ea99fd7ca4de" }
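Reading the response above: `output` lists the four renders in a fixed order (preview, latent-upscaled, RealESRGAN-upscaled, FILM-interpolated, matching the Video Combine nodes in the logs), and `metrics` separates model execution time from total wall-clock time. A small sketch, with the delivery URLs elided, computing the setup/queue overhead:

```python
# Trimmed view of the prediction JSON above (URLs elided).
prediction = {
    "metrics": {"predict_time": 270.722101, "total_time": 298.365949},
    "output": [
        ".../preview_00001.mp4",
        ".../upscaled_00001.mp4",
        ".../upscaled_model_00001.mp4",
        ".../interpolated_00001.mp4",
    ],
}

# Time spent outside the model itself (queueing, container setup, upload).
overhead = prediction["metrics"]["total_time"] - prediction["metrics"]["predict_time"]
# Roughly 27.6 s on top of the 270.7 s of prediction time.
```

The final, frame-interpolated video is the last entry of `output`.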
0%| | 0/8 [00:00<?, ?it/s] 12%|ββ | 1/8 [00:03<00:27, 3.89s/it] 25%|βββ | 2/8 [00:07<00:23, 3.87s/it] 38%|ββββ | 3/8 [00:11<00:19, 3.94s/it] 50%|βββββ | 4/8 [00:15<00:15, 3.92s/it] 62%|βββββββ | 5/8 [00:19<00:11, 3.90s/it] 75%|ββββββββ | 6/8 [00:23<00:07, 3.89s/it] 88%|βββββββββ | 7/8 [00:27<00:03, 3.89s/it] 100%|ββββββββββ| 8/8 [00:31<00:00, 3.89s/it] 100%|ββββββββββ| 8/8 [00:31<00:00, 3.89s/it] Executing node 201, title: VAE Decode, class type: VAEDecode Executing node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine Executing node 270, title: Load Upscale Model, class type: UpscaleModelLoader Executing node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel Executing node 279, title: Upscale Image, class type: ImageScale Executing node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine Executing node 770, title: FILM VFI, class type: FILM VFI Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Clearing cache... Comfy-VFI: Done cache clearing Comfy-VFI: Final clearing cache... 
Comfy-VFI: Done cache clearing Executing node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine Prompt executed in 250.04 seconds outputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}} ==================================== preview_00001.mp4 upscaled_00001.mp4 upscaled_model_00001.mp4 interpolated_00001.mp4
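The `outputs` mapping printed at the end of the run ties each ComfyUI node ID to the video file it wrote (preview, upscaled, upscaled-by-model, interpolated). A small Python sketch of how such a mapping can be flattened into an ordered list of filenames; the helper name is illustrative, and the data is copied from the log above:

```python
# Flatten a ComfyUI-style outputs mapping (node id -> {'gifs': [...]})
# into a list of filenames, ordered by numeric node id.
def collect_outputs(outputs):
    pairs = []
    for node_id, entry in outputs.items():
        for item in entry.get("gifs", []):
            pairs.append((int(node_id), item["filename"]))
    return [name for _, name in sorted(pairs)]

outputs = {
    "53": {"gifs": [{"filename": "preview_00001.mp4", "format": "video/h264-mp4"}]},
    "205": {"gifs": [{"filename": "upscaled_00001.mp4", "format": "video/h264-mp4"}]},
    "272": {"gifs": [{"filename": "upscaled_model_00001.mp4", "format": "video/h264-mp4"}]},
    "219": {"gifs": [{"filename": "interpolated_00001.mp4", "format": "video/h264-mp4"}]},
}
print(collect_outputs(outputs))
# -> ['preview_00001.mp4', 'upscaled_00001.mp4', 'interpolated_00001.mp4', 'upscaled_model_00001.mp4']
```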
Prediction
fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf
ID: 4r54mjhhzdrgg0cf22mrkbzee0 · Status: Succeeded · Source: Web · Hardware: A40 (Large) · Created by @fofr
Input
- mode: upscaled-and-interpolated
- prompt: (empty)
- checkpoint: 3D
- aspect_ratio: 4:3
- style_strength: 1
- negative_prompt: (empty)
{ "mode": "upscaled-and-interpolated", "prompt": "", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png", "aspect_ratio": "4:3", "style_strength": 1, "negative_prompt": "", "subject_image_1": "https://replicate.delivery/pbxt/KnswBtSnF1TUPOiWuMQKiVbV6MFwe6XiwqPAYvKg6gT3lQ71/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnswBrEyhOFdEojl0CdENgE5DlTj9kGhEK4z2iCbRAsABGAk/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png", "subject_image_3": "https://replicate.delivery/pbxt/KnswBqwa5yZrbPOaHiY76FDYWfgXwOM5ccRezIVKyjOcP73B/marble-statue-antinous-height-180-cm-9513049.jpg.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnswBvWex3ox1iONPTqc8SjNLtYCIOXMa6WbFK4hWrsGii99/ComfyUI_02710_.png" }
Install Replicate's Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
  {
    input: {
      mode: "upscaled-and-interpolated",
      prompt: "",
      checkpoint: "3D",
      style_image: "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
      aspect_ratio: "4:3",
      style_strength: 1,
      negative_prompt: "",
      subject_image_1: "https://replicate.delivery/pbxt/KnswBtSnF1TUPOiWuMQKiVbV6MFwe6XiwqPAYvKg6gT3lQ71/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      subject_image_2: "https://replicate.delivery/pbxt/KnswBrEyhOFdEojl0CdENgE5DlTj9kGhEK4z2iCbRAsABGAk/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
      subject_image_3: "https://replicate.delivery/pbxt/KnswBqwa5yZrbPOaHiY76FDYWfgXwOM5ccRezIVKyjOcP73B/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
      subject_image_4: "https://replicate.delivery/pbxt/KnswBvWex3ox1iONPTqc8SjNLtYCIOXMa6WbFK4hWrsGii99/ComfyUI_02710_.png"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (requires `import fs from "node:fs/promises";`):
await fs.writeFile("my-video.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Import the client:

import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
    input={
        "mode": "upscaled-and-interpolated",
        "prompt": "",
        "checkpoint": "3D",
        "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
        "aspect_ratio": "4:3",
        "style_strength": 1,
        "negative_prompt": "",
        "subject_image_1": "https://replicate.delivery/pbxt/KnswBtSnF1TUPOiWuMQKiVbV6MFwe6XiwqPAYvKg6gT3lQ71/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
        "subject_image_2": "https://replicate.delivery/pbxt/KnswBrEyhOFdEojl0CdENgE5DlTj9kGhEK4z2iCbRAsABGAk/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
        "subject_image_3": "https://replicate.delivery/pbxt/KnswBqwa5yZrbPOaHiY76FDYWfgXwOM5ccRezIVKyjOcP73B/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
        "subject_image_4": "https://replicate.delivery/pbxt/KnswBvWex3ox1iONPTqc8SjNLtYCIOXMa6WbFK4hWrsGii99/ComfyUI_02710_.png"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
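The Python client returns one output per rendered video (preview, upscaled, upscaled-by-model, interpolated). A hedged sketch, assuming the outputs arrive as plain delivery-URL strings (older client versions return URLs; newer ones may return file objects), that derives a local filename from each URL; the actual download line is left commented so the sketch stays self-contained:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def local_name(url):
    # Take the last path segment of the delivery URL,
    # e.g. 'interpolated_00001.mp4'.
    return PurePosixPath(urlparse(url).path).name

urls = [
    "https://replicate.delivery/pbxt/abc/preview_00001.mp4",
    "https://replicate.delivery/pbxt/abc/interpolated_00001.mp4",
]
for url in urls:
    name = local_name(url)
    # urllib.request.urlretrieve(url, name)  # uncomment to actually download
    print(name)
```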
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/video-morpher:355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf",
    "input": {
      "mode": "upscaled-and-interpolated",
      "prompt": "",
      "checkpoint": "3D",
      "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png",
      "aspect_ratio": "4:3",
      "style_strength": 1,
      "negative_prompt": "",
      "subject_image_1": "https://replicate.delivery/pbxt/KnswBtSnF1TUPOiWuMQKiVbV6MFwe6XiwqPAYvKg6gT3lQ71/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp",
      "subject_image_2": "https://replicate.delivery/pbxt/KnswBrEyhOFdEojl0CdENgE5DlTj9kGhEK4z2iCbRAsABGAk/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png",
      "subject_image_3": "https://replicate.delivery/pbxt/KnswBqwa5yZrbPOaHiY76FDYWfgXwOM5ccRezIVKyjOcP73B/marble-statue-antinous-height-180-cm-9513049.jpg.webp",
      "subject_image_4": "https://replicate.delivery/pbxt/KnswBvWex3ox1iONPTqc8SjNLtYCIOXMa6WbFK4hWrsGii99/ComfyUI_02710_.png"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
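Without `Prefer: wait` (or when a prediction outlives the wait window), clients poll the prediction's `urls.get` endpoint until it reaches a terminal status. A minimal sketch of that loop; the HTTP call is injected as a plain function so the logic can be shown without a live API token, and the function and parameter names are illustrative, not part of the Replicate client:

```python
import time

TERMINAL = {"succeeded", "failed", "canceled"}

def poll_prediction(fetch, interval=0.0, max_polls=100):
    """Call `fetch()` (a function returning the prediction JSON)
    until the prediction reaches a terminal status, then return it."""
    for _ in range(max_polls):
        prediction = fetch()
        if prediction["status"] in TERMINAL:
            return prediction
        time.sleep(interval)
    raise TimeoutError("prediction did not finish in time")

# Simulate a prediction that finishes on the third poll.
states = iter([
    {"status": "starting"},
    {"status": "processing"},
    {"status": "succeeded", "output": ["interpolated_00001.mp4"]},
])
result = poll_prediction(lambda: next(states))
print(result["status"])  # succeeded
```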
Output
{ "completed_at": "2024-04-24T15:15:30.860405Z", "created_at": "2024-04-24T15:11:14.939000Z", "data_removed": false, "error": null, "id": "4r54mjhhzdrgg0cf22mrkbzee0", "input": { "mode": "upscaled-and-interpolated", "prompt": "", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/Kns41UhVhW7rmSPxTxTGK7oWd441x2Xjv2DI5w8oep07lE0o/hero-char.png", "aspect_ratio": "4:3", "style_strength": 1, "negative_prompt": "", "subject_image_1": "https://replicate.delivery/pbxt/KnswBtSnF1TUPOiWuMQKiVbV6MFwe6XiwqPAYvKg6gT3lQ71/fofr_a_middle_aged_man_with_thick_glasses_16f412ff-db00-4e06-acdf-828885df6c58.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnswBrEyhOFdEojl0CdENgE5DlTj9kGhEK4z2iCbRAsABGAk/fofr_a_blonde_woman_studio_headshot_cf000993-a68c-499f-9619-026601661d97.png", "subject_image_3": "https://replicate.delivery/pbxt/KnswBqwa5yZrbPOaHiY76FDYWfgXwOM5ccRezIVKyjOcP73B/marble-statue-antinous-height-180-cm-9513049.jpg.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnswBvWex3ox1iONPTqc8SjNLtYCIOXMa6WbFK4hWrsGii99/ComfyUI_02710_.png" }, "logs": "Random seed set to: 1561674246\nChecking inputs\nβ /tmp/inputs/2.png\nβ /tmp/inputs/1.png\nβ /tmp/inputs/3.png\nβ /tmp/inputs/4.png\nβ /tmp/inputs/circles.mp4\nβ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\nβ AnimateLCM_sd15_t2v.ckpt\nβ ip-adapter-plus_sd15.safetensors\nβ vae-ft-mse-840000-ema-pruned.safetensors\nβ film_net_fp32.pt\nβ³ Downloading rcnzCartoon3d_v20.safetensors to ComfyUI/models/checkpoints\nβοΈ Downloaded rcnzCartoon3d_v20.safetensors in 1.87s, size: 2033.83MB\nβ rcnzCartoon3d_v20.safetensors\nβ control_v1p_sd15_qrcode_monster.safetensors\nβ RealESRGAN_x4.pth\nβ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\nβ ip-adapter-plus_sdxl_vit-h.safetensors\nβ AnimateLCM_sd15_t2v_lora.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 564, title: Load 
Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nloaded straight to GPU\nRequested to load BaseModel\nLoading 1 new model\nExecuting node 563, title: LoraLoaderModelOnly, class type: LoraLoaderModelOnly\nExecuting node 77, title: Use Evolved Sampling ππ π β‘, class type: ADE_UseEvolvedSampling\nExecuting node 573, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader\nExecuting node 545, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 135, title: Load Image, class type: LoadImage\nExecuting node 548, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 680, title: Load Image, class type: LoadImage\nExecuting node 681, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nExecuting node 683, title: Load Image, class type: LoadImage\nExecuting node 682, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\nExecuting node 752, title: Load Image, class type: LoadImage\nExecuting node 751, title: IPAdapter Batch (Adv.), class type: IPAdapterBatch\n\u001b[33mINFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. 
If the main focus of the picture is not in the middle the result might not be what you are expecting.\u001b[0m\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 565, title: Positive, class type: CLIPTextEncode\nExecuting node 566, title: Negative, class type: CLIPTextEncode\nExecuting node 134, title: Empty Latent Image, class type: EmptyLatentImage\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:01<00:13, 1.90s/it]\n 25%|βββ | 2/8 [00:03<00:11, 1.91s/it]\n 38%|ββββ | 3/8 [00:05<00:09, 1.91s/it]\n 50%|βββββ | 4/8 [00:07<00:07, 1.91s/it]\n 62%|βββββββ | 5/8 [00:09<00:05, 1.91s/it]\n 75%|ββββββββ | 6/8 [00:11<00:03, 1.91s/it]\n 88%|βββββββββ | 7/8 [00:13<00:01, 1.91s/it]\n100%|ββββββββββ| 8/8 [00:15<00:00, 1.91s/it]\n100%|ββββββββββ| 8/8 [00:15<00:00, 1.91s/it]\nExecuting node 84, title: VAE Decode, class type: VAEDecode\nExecuting node 53, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/8 [00:00<?, ?it/s]\n 12%|ββ | 1/8 [00:03<00:27, 3.87s/it]\n 25%|βββ | 2/8 [00:07<00:23, 3.85s/it]\n 38%|ββββ | 3/8 [00:11<00:19, 3.85s/it]\n 50%|βββββ | 4/8 [00:15<00:15, 3.85s/it]\n 62%|βββββββ | 5/8 [00:19<00:11, 3.85s/it]\n 75%|ββββββββ | 6/8 [00:23<00:07, 3.85s/it]\n 88%|βββββββββ | 
7/8 [00:26<00:03, 3.85s/it]\n100%|ββββββββββ| 8/8 [00:30<00:00, 3.85s/it]\n100%|ββββββββββ| 8/8 [00:30<00:00, 3.85s/it]\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: ImageScale\nExecuting node 272, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine π₯π ₯π π ’, class type: VHS_VideoCombine\nPrompt executed in 246.67 seconds\noutputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4", "metrics": { 
"predict_time": 255.879579, "total_time": 255.921405 }, "output": [ "https://replicate.delivery/pbxt/eZTBeeFyOwOtxIe0SgRUcXBFmcrLBfwjAKGmWgUx8NV9BqtVC/preview_00001.mp4", "https://replicate.delivery/pbxt/zrphtsj2VTphCtRLfsJABGNV4XA5W59BXFqkj1EyHu2Io2WJA/upscaled_00001.mp4", "https://replicate.delivery/pbxt/68ecY1ceCPu9VE8uy9tfRUiJ5FbYEAerqH5DjLD4MfgOCqtVC/upscaled_model_00001.mp4", "https://replicate.delivery/pbxt/199QGHuGw8IvE5hvUSLTRpexvf9bLwZ8sgToTS0oWLWSQttSA/interpolated_00001.mp4" ], "started_at": "2024-04-24T15:11:14.980826Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/4r54mjhhzdrgg0cf22mrkbzee0", "cancel": "https://api.replicate.com/v1/predictions/4r54mjhhzdrgg0cf22mrkbzee0/cancel" }, "version": "355c6bbaf8bc2deeafd3e2384a50af51bc2091a8be96dc082f1ef02c74640baf" }
Generated in 255.88 seconds
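The `interpolated_00001.mp4` output comes from the FILM VFI node, which synthesizes in-between frames for each consecutive pair. As a back-of-the-envelope check (the multiplier value here is an assumption for illustration, not read from the workflow): interpolating n frames with multiplier m yields roughly n + (n - 1) * (m - 1) frames, since m - 1 new frames land in each of the n - 1 gaps:

```python
def interpolated_frames(n_frames, multiplier):
    # Each of the (n_frames - 1) gaps gains (multiplier - 1) synthesized frames.
    return n_frames + (n_frames - 1) * (multiplier - 1)

# 96 generated frames (matching the 96 latents in the logs), assumed 2x multiplier:
print(interpolated_frames(96, 2))  # 191
```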
Prediction
fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7
ID: ykjrhncvmdrm80cke9s9rhsqc4 · Status: Succeeded · Source: Web · Hardware: L40S
Input
- mode: upscaled-and-interpolated
- prompt: bright, vibrant, high contrast
- checkpoint: 3D
- aspect_ratio: 3:4
- style_strength: 0.5
- use_controlnet: true
- negative_prompt: dark, gloomy
{ "mode": "upscaled-and-interpolated", "prompt": "bright, vibrant, high contrast", "checkpoint": "3D", "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png", "aspect_ratio": "3:4", "style_strength": 0.5, "use_controlnet": true, "negative_prompt": "dark, gloomy", "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp", "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp", "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp", "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp" }
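The `aspect_ratio` input ("3:4" here, "4:3" in the runs above) sets the frame shape of the generated latents. A rough sketch of how such a string could map to SD1.5-friendly dimensions; the 512-pixel area base and rounding to multiples of 8 are illustrative assumptions, not the model's documented behavior:

```python
def dimensions(aspect_ratio, base=512):
    # Keep the pixel area near base*base while matching the requested ratio.
    w, h = (int(x) for x in aspect_ratio.split(":"))
    scale = (base * base / (w * h)) ** 0.5
    # Round each side to a multiple of 8, as SD1.5 latents require.
    to8 = lambda v: int(round(v / 8)) * 8
    return to8(w * scale), to8(h * scale)

print(dimensions("4:3"))  # (592, 440)
print(dimensions("3:4"))  # (440, 592)
```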
Install Replicate's Node.js client library:

npm install replicate

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
  {
    input: {
      mode: "upscaled-and-interpolated",
      prompt: "bright, vibrant, high contrast",
      checkpoint: "3D",
      style_image: "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png",
      aspect_ratio: "3:4",
      style_strength: 0.5,
      use_controlnet: true,
      negative_prompt: "dark, gloomy",
      subject_image_1: "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp",
      subject_image_2: "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp",
      subject_image_3: "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp",
      subject_image_4: "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp"
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (requires `import fs from "node:fs/promises";`):
await fs.writeFile("my-video.mp4", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Import the client:

import replicate
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
    input={
        "mode": "upscaled-and-interpolated",
        "prompt": "bright, vibrant, high contrast",
        "checkpoint": "3D",
        "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png",
        "aspect_ratio": "3:4",
        "style_strength": 0.5,
        "use_controlnet": True,
        "negative_prompt": "dark, gloomy",
        "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp",
        "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp",
        "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp",
        "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run fofr/video-morpher using Replicate's API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "fofr/video-morpher:e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7",
    "input": {
      "mode": "upscaled-and-interpolated",
      "prompt": "bright, vibrant, high contrast",
      "checkpoint": "3D",
      "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png",
      "aspect_ratio": "3:4",
      "style_strength": 0.5,
      "use_controlnet": true,
      "negative_prompt": "dark, gloomy",
      "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp",
      "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp",
      "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp",
      "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate's HTTP API reference docs.
Output
{
  "completed_at": "2024-11-28T11:08:35.822944Z",
  "created_at": "2024-11-28T11:05:33.859000Z",
  "data_removed": false,
  "error": null,
  "id": "ykjrhncvmdrm80cke9s9rhsqc4",
  "input": {
    "mode": "upscaled-and-interpolated",
    "prompt": "bright, vibrant, high contrast",
    "checkpoint": "3D",
    "style_image": "https://replicate.delivery/pbxt/KnxLVfe6BRRu2zHc3gT99mnwaemKfR4JzaZWxMCLsZYSTKzp/2024-03-05--06-47-29-u-q1-fofr_tropical_purple_beksinski_aaad09f0-d194-4e40-b312-51054fc4ebbf.png",
    "aspect_ratio": "3:4",
    "style_strength": 0.5,
    "use_controlnet": true,
    "negative_prompt": "dark, gloomy",
    "subject_image_1": "https://replicate.delivery/pbxt/KnxLX9qnA82YKCkScCJZf5VCX6hy5RuprGwLEAVjp3vu6Oh1/1.webp",
    "subject_image_2": "https://replicate.delivery/pbxt/KnxLWqPg086DnRTUvxDM7gCBYi3W3coIbn3Q8jXnTJsfvZzt/2.webp",
    "subject_image_3": "https://replicate.delivery/pbxt/KnxLWU6aHjc6kVW2QOF0xC4oen8zuPnlsK2o24GtOV2bJEn8/4.webp",
    "subject_image_4": "https://replicate.delivery/pbxt/KnxLVg54ySFfg8s78YyfzK0Qgse8YdTbSmEZKpnkFkmSFlsf/4.webp"
  },
  "logs": "Random seed set to: 4049182242\nChecking inputs\n✅ /tmp/inputs/2.png\n✅ /tmp/inputs/1.png\n✅ /tmp/inputs/3.png\n✅ /tmp/inputs/4.png\n✅ /tmp/inputs/circles.mp4\n✅ /tmp/inputs/style.png\n====================================\nChecking weights\nIncluding weights for IPAdapter preset: PLUS (high strength)\n✅ RealESRGAN_x4.pth\n✅ film_net_fp32.pt\n✅ ip-adapter-plus_sdxl_vit-h.safetensors\n✅ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors\n✅ AnimateLCM_sd15_t2v.ckpt\n✅ control_v1p_sd15_qrcode_monster.safetensors\n✅ vae-ft-mse-840000-ema-pruned.safetensors\n✅ ip-adapter-plus_sd15.safetensors\n✅ AnimateLCM_sd15_t2v_lora.safetensors\n✅ rcnzCartoon3d_v20.safetensors\n====================================\nRunning workflow\ngot prompt\nExecuting node 746, title: Load Video (Upload) 🎥🅥🅗🅢, class type: VHS_LoadVideo\nExecuting node 461, title: 🧮 Simple Math, class type: SimpleMath+\nExecuting node 454, title: RepeatImageBatch, class type: RepeatImageBatch\nExecuting node 458, title: Split Image Batch 🎥🅥🅗🅢, class type: VHS_SplitImages\nExecuting node 125, title: Apply Advanced ControlNet 🛂🅐🅒🅝, class type: ACN_AdvancedControlNetApply\nExecuting node 80, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\nRequested to load ControlNet\nLoading 1 new model\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|█ | 1/11 [00:01<00:11, 1.11s/it]\n 18%|██ | 2/11 [00:02<00:09, 1.11s/it]\n 27%|███ | 3/11 [00:03<00:09, 1.18s/it]\n 36%|████ | 4/11 [00:04<00:08, 1.21s/it]\n 45%|█████ | 5/11 [00:05<00:06, 1.14s/it]\n 55%|██████ | 6/11 [00:06<00:05, 1.11s/it]\n 64%|███████ | 7/11 [00:07<00:04, 1.08s/it]\n 73%|████████ | 8/11 [00:09<00:03, 1.13s/it]\n 82%|█████████ | 9/11 [00:10<00:02, 1.12s/it]\n 91%|█████████ | 10/11 [00:11<00:01, 1.09s/it]\n100%|██████████| 11/11 [00:12<00:00, 1.07s/it]\n100%|██████████| 11/11 [00:12<00:00, 1.11s/it]\nExecuting node 84, title: VAE Decode, class type: VAEDecode\nExecuting node 53, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\nExecuting node 203, title: Upscale Image By, class type: ImageScaleBy\nExecuting node 204, title: VAE Encode, class type: VAEEncode\nExecuting node 198, title: KSampler, class type: KSampler\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Sliding context window activated - latents passed in (96) greater than context_length 16.\n[AnimateDiffEvo] - \u001b[0;32mINFO\u001b[0m - Using motion module AnimateLCM_sd15_t2v.ckpt:v2.\n 0%| | 0/11 [00:00<?, ?it/s]\n 9%|█ | 1/11 [00:02<00:21, 2.20s/it]\n 18%|██ | 2/11 [00:04<00:19, 2.19s/it]\n 27%|███ | 3/11 [00:06<00:17, 2.19s/it]\n 36%|████ | 4/11 [00:08<00:15, 2.19s/it]\n 45%|█████ | 5/11 [00:10<00:13, 2.19s/it]\n 55%|██████ | 6/11 [00:13<00:10, 2.19s/it]\n 64%|███████ | 7/11 [00:15<00:08, 2.19s/it]\n 73%|████████ | 8/11 [00:17<00:06, 2.19s/it]\n 82%|█████████ | 9/11 [00:19<00:04, 2.18s/it]\n 91%|█████████ | 10/11 [00:21<00:02, 2.18s/it]\n100%|██████████| 11/11 [00:24<00:00, 2.18s/it]\n100%|██████████| 11/11 [00:24<00:00, 2.19s/it]\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 201, title: VAE Decode, class type: VAEDecode\nExecuting node 205, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\nExecuting node 271, title: Upscale Image (using Model), class type: ImageUpscaleWithModel\nExecuting node 279, title: Upscale Image, class type: ImageScale\nExecuting node 272, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\nExecuting node 770, title: FILM VFI, class type: FILM VFI\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Clearing cache...\nComfy-VFI: Done cache clearing\nComfy-VFI: Final clearing cache...\nComfy-VFI: Done cache clearing\nExecuting node 219, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine\nPrompt executed in 176.75 seconds\noutputs: {'53': {'gifs': [{'filename': 'preview_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '205': {'gifs': [{'filename': 'upscaled_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '272': {'gifs': [{'filename': 'upscaled_model_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}, '219': {'gifs': [{'filename': 'interpolated_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4'}]}}\n====================================\npreview_00001.mp4\nupscaled_00001.mp4\nupscaled_model_00001.mp4\ninterpolated_00001.mp4",
  "metrics": {
    "predict_time": 181.956443524,
    "total_time": 181.963944
  },
  "output": [
    "https://replicate.delivery/xezq/fqOdvvfinVpSLUUMepVqdywzaSMyP2ru3DE33XUAAetPTAWPB/preview_00001.mp4",
    "https://replicate.delivery/xezq/fFvJWDTii62RfEK5ExHpy88YYFFVuvYUoR2sqBtcBb0zEg1TA/upscaled_00001.mp4",
    "https://replicate.delivery/xezq/oKSuEYjOC2rFF99foM6NVy0e1053GKPZn1AarSgHrAtzEg1TA/upscaled_model_00001.mp4",
    "https://replicate.delivery/xezq/sZQRsofi4U2Yf0x5wfrwJzRr4IipUs8tuixRqdIRJM3mJArnA/interpolated_00001.mp4"
  ],
  "started_at": "2024-11-28T11:05:33.866501Z",
  "status": "succeeded",
  "urls": {
    "stream": "https://stream.replicate.com/v1/files/bcwr-tyikq3iqnxxvcbm44lu25xmm75evlu5c47aadad5o23zloamp4ka",
    "get": "https://api.replicate.com/v1/predictions/ykjrhncvmdrm80cke9s9rhsqc4",
    "cancel": "https://api.replicate.com/v1/predictions/ykjrhncvmdrm80cke9s9rhsqc4/cancel"
  },
  "version": "e70e975067d2b5dbe9e2d9022833d27230a1bdeb3f4af6fe6bb49a548a3039a7"
}
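The `output` array of the prediction lists one URL per rendered video (preview, upscaled, model-upscaled, interpolated). A small sketch of deriving local filenames from those URLs before saving them; `localName` is a hypothetical helper, not part of the Replicate client.

```javascript
// Sketch: map each output video URL to a local filename so the four
// videos can be saved side by side. localName is a hypothetical helper.
function localName(url) {
  // pathname is e.g. "/xezq/<hash>/preview_00001.mp4"; keep the last segment
  return new URL(url).pathname.split("/").pop();
}

const outputs = [
  "https://replicate.delivery/xezq/fqOdvvfinVpSLUUMepVqdywzaSMyP2ru3DE33XUAAetPTAWPB/preview_00001.mp4",
  "https://replicate.delivery/xezq/sZQRsofi4U2Yf0x5wfrwJzRr4IipUs8tuixRqdIRJM3mJArnA/interpolated_00001.mp4",
];
console.log(outputs.map(localName)); // → [ 'preview_00001.mp4', 'interpolated_00001.mp4' ]
```

From there, each URL can be fetched and written to disk with `fs.writeFile`, as in the client-library example above.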