rinongal / stylegan-nada

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators


Run this model in Node.js with one line of code:

npx create-replicate --model=rinongal/stylegan-nada
or set up a project from scratch
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run rinongal/stylegan-nada using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "rinongal/stylegan-nada:6b2af4ac56fa2384f8f86fc7620943d5fc7689dcbb6183733743a215296d0e30",
  {
    input: {
      input: "nodejs",
      style_list: "joker,anime,modigliani",
      output_style: "all",
      video_format: "mp4",
      with_editing: true,
      generate_video: false
    }
  }
);

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk (the promise API avoids fs.writeFile's required callback):
await fs.promises.writeFile("my-image.png", output);

To learn more, take a look at the guide on getting started with Node.js.


Run time and cost

This model costs approximately $0.035 to run on Replicate, or 28 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 3 minutes. The predict time for this model varies significantly based on the inputs.
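
As a rough sanity check (assuming Replicate's published T4 rate of about $0.000225 per second; current pricing may differ), $0.035 per run corresponds to roughly 155 seconds of compute, which lines up with the typical sub-3-minute predict time.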

Readme

This is an inference-only implementation of our work on converting image generators between domains using nothing more than a textual prompt.

The page currently supports inversion and cross-domain editing of real images, using 24 of our favorite models.

We recommend starting with output_style set to ‘all’ to view all currently available options. Once you’ve found a style you like, you can generate a higher-resolution output using only that style.

To use multiple styles at once, set output_style to ‘list - enter below’ and fill in the style_list input with a comma-separated list of your desired models (e.g. ‘joker,anime,modigliani’).
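
A minimal sketch of that two-step flow, reusing the model version from the example above (the image URL is a hypothetical placeholder):

import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

const model =
  "rinongal/stylegan-nada:6b2af4ac56fa2384f8f86fc7620943d5fc7689dcbb6183733743a215296d0e30";
const face = "https://example.com/portrait.jpg"; // placeholder input image

// Step 1: render a preview grid of every available style.
const preview = await replicate.run(model, {
  input: { input: face, output_style: "all" },
});

// Step 2: re-run with only the styles you picked, at higher resolution.
const picked = await replicate.run(model, {
  input: {
    input: face,
    output_style: "list - enter below",
    style_list: "joker,anime,modigliani",
  },
});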

For more information (or to train your own model), please visit our project page, our GitHub repository, or our Colab notebook.

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Abstract:
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained blindly? Leveraging the semantic power of large scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image from those domains. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.

Citation

If you make use of our work, please cite our paper:

@misc{gal2021stylegannada,
      title={StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators}, 
      author={Rinon Gal and Or Patashnik and Haggai Maron and Gal Chechik and Daniel Cohen-Or},
      year={2021},
      eprint={2108.00946},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}