lucataco / controlnet-tile

Controlnet v1.1 - Tile Version

Run time and cost

This model costs approximately $0.082 to run on Replicate (about 12 runs per $1), though this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 84 seconds. The predict time for this model varies significantly based on the inputs.
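As a quick sketch, the model can be invoked through the official Replicate Python client. The input field names below (`image`, `prompt`) are assumptions for illustration; check the model's API schema on its Replicate page for the exact parameters.

```python
# Sketch of calling this model via the Replicate Python client.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN env var.
# Input keys ("image", "prompt") are assumed, not confirmed by this page.

def run_on_replicate(image_path: str):
    import replicate  # imported lazily so the helper below stays standalone
    return replicate.run(
        "lucataco/controlnet-tile",  # latest public version of the model
        input={
            "image": open(image_path, "rb"),
            "prompt": "best quality",
        },
    )

def runs_per_dollar(cost_per_run: float = 0.082) -> int:
    # $0.082 per run works out to roughly 12 runs per $1, as quoted above.
    return int(1 / cost_per_run)
```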

Readme

An implementation of lllyasviel/control_v11f1e_sd15_tile, intended as a basic image enhancer for upscaling 1024×1024 SDXL images to 2048×2048.

About

This checkpoint is a conversion of the original checkpoint into diffusers format. It can be used in combination with Stable Diffusion models such as runwayml/stable-diffusion-v1-5.

ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on a tiled image. Conceptually it is similar to a super-resolution model, but its usage is not limited to that: it can also generate details at the same size as the input (conditioning) image.
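A minimal sketch of the local usage described above, loading the tile ControlNet into a diffusers img2img ControlNet pipeline. It assumes a CUDA GPU; the prompt text, step count, and the multiple-of-64 resizing heuristic are illustrative choices, not settings documented by this model.

```python
# Sketch: tile-ControlNet upscaling with diffusers (assumes CUDA GPU).
# Prompt/step values and the resize heuristic are illustrative assumptions.

def condition_size(width: int, height: int, resolution: int = 2048) -> tuple[int, int]:
    """Scale so the short side is ~`resolution`, rounding dims to multiples of 64."""
    k = resolution / min(width, height)
    new_w = int(round(width * k / 64.0)) * 64
    new_h = int(round(height * k / 64.0)) * 64
    return new_w, new_h

def upscale(path: str, out_path: str = "upscaled.png") -> None:
    # Heavy imports kept inside the function so the helper above stays standalone.
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    source = Image.open(path).convert("RGB")
    condition = source.resize(condition_size(*source.size), resample=Image.LANCZOS)
    result = pipe(
        prompt="best quality",
        negative_prompt="blur, lowres",
        image=condition,          # img2img input
        control_image=condition,  # tile conditioning image
        strength=1.0,
        guidance_scale=7.0,
        num_inference_steps=30,
    ).images[0]
    result.save(out_path)
```

Feeding the same image as both the img2img input and the control image is the common tile-upscaling recipe: the ControlNet keeps the output anchored to the input's structure while the diffusion model regenerates fine detail.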

Model Details

  • Developed by: Lvmin Zhang, Maneesh Agrawala

  • Model type: Diffusion-based text-to-image generation model

  • Language(s): English

  • License: The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license, on which this license is based.

  • Resources for more information: GitHub Repository, Paper.

@misc{zhang2023adding,
  title={Adding Conditional Control to Text-to-Image Diffusion Models},
  author={Lvmin Zhang and Maneesh Agrawala},
  year={2023},
  eprint={2302.05543},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}