replicate / wan-2.1-1.3b-hotswap-lora-internal

  • Public
  • 1 run
  • H100

Input

Run this model in Node.js with one line of code:

npx create-replicate --model=replicate/wan-2.1-1.3b-hotswap-lora-internal

Or set up a project from scratch and install the client library:

npm install replicate

Then set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
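If API calls later fail with an authentication error, first confirm the variable is actually visible to Node in the shell you are using; one quick check is:

node -e "console.log(process.env.REPLICATE_API_TOKEN ? 'token is set' : 'token is missing')"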

Import and set up the client:
import Replicate from "replicate";
import { writeFile } from "node:fs/promises"; // used later to save the generated file

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run replicate/wan-2.1-1.3b-hotswap-lora-internal using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "replicate/wan-2.1-1.3b-hotswap-lora-internal:7c9d20f6df3adf69273ea401e4f5d4834278064cf3ef2a91a69e97cc9c60b33c",
  {
    input: {
      model: "14b",
      frames: 81,
      fast_mode: "Balanced",
      resolution: "480p",
      aspect_ratio: "16:9",
      sample_shift: 8,
      sample_steps: 30,
      negative_prompt: "",
      lora_strength_clip: 1,
      sample_guide_scale: 5,
      lora_strength_model: 1
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the generated video to disk (uses the node:fs/promises import from the setup above):
await writeFile("output.mp4", output[0]);
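Because video generation can take a while, you may prefer not to block inside replicate.run. A rough sketch using the client's predictions API instead is shown below; replicate.wait polls the prediction until it reaches a terminal state. Here `input` stands for the same input object passed to replicate.run above.

// A sketch assuming the same `replicate` client from the setup above.
const prediction = await replicate.predictions.create({
  version: "7c9d20f6df3adf69273ea401e4f5d4834278064cf3ef2a91a69e97cc9c60b33c",
  input,
});

// Wait until the prediction succeeds, fails, or is canceled, then read its output.
const completed = await replicate.wait(prediction);
console.log(completed.status, completed.output);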

To learn more, take a look at the guide on getting started with Node.js.
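Putting the pieces above together, a minimal end-to-end script might look like the sketch below. It assumes an ES-module project on Node 18 or later, REPLICATE_API_TOKEN exported in the environment, and that the example's input object is sufficient; the model's schema may require fields this page's example omits (for instance a text prompt).

import Replicate from "replicate";
import { writeFile } from "node:fs/promises";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Input copied from the example above; consult the model's schema for the full field list.
const input = {
  model: "14b",
  frames: 81,
  fast_mode: "Balanced",
  resolution: "480p",
  aspect_ratio: "16:9",
  sample_shift: 8,
  sample_steps: 30,
  negative_prompt: "",
  lora_strength_clip: 1,
  sample_guide_scale: 5,
  lora_strength_model: 1,
};

const output = await replicate.run(
  "replicate/wan-2.1-1.3b-hotswap-lora-internal:7c9d20f6df3adf69273ea401e4f5d4834278064cf3ef2a91a69e97cc9c60b33c",
  { input }
);

// Save the first output file (the generated video) to disk and log its URL.
console.log(output[0].url());
await writeFile("output.mp4", output[0]);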


Run time and cost

This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.