pschaldenbrand / style-clip-draw

Styled text-to-drawing synthesis method.

  • Public
  • 2.1K runs
  • T4
  • GitHub
  • Paper
  • License

Input

*string

Text description of the desired drawing

style_image
*file

Style Image

integer

Number of drawing strokes.

Default: 256

integer

How strong the style should be. 100 (max) is all style; 0 (min) is very little styling.

Default: 50
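A minimal sketch of calling this model through the Replicate Python client. The field names for the text prompt and the two integer inputs (`prompt`, `num_paths`, `style_strength`) are hypothetical stand-ins, since the page above does not show them; check the model's API schema for the real names before running.

```python
# Sketch of invoking pschaldenbrand/style-clip-draw via the Replicate
# Python client. The keys "prompt", "num_paths", and "style_strength"
# are hypothetical -- consult the model's API tab for the exact schema.
import os

inputs = {
    "prompt": "a watercolor painting of a lighthouse",  # hypothetical field name
    "style_image": "https://example.com/style.jpg",     # style image (URL or file)
    "num_paths": 256,       # number of drawing strokes (hypothetical name)
    "style_strength": 50,   # 0 = very little styling, 100 = all style (hypothetical name)
}

# Only hit the API when credentials are configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate
    output = replicate.run(
        "pschaldenbrand/style-clip-draw",  # latest version
        input=inputs,
    )
    print(output)
```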

Output

file

Run time and cost

This model costs approximately $0.083 per run on Replicate (about 12 runs per $1), though the exact cost varies with your inputs. It is also open source, so you can run it on your own computer with Docker.
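The runs-per-dollar figure follows directly from the per-run cost, rounding down to whole runs:

```python
cost_per_run = 0.083                      # approximate $ per run
runs_per_dollar = int(1 / cost_per_run)   # 1 / 0.083 ≈ 12.05, floored to 12
print(runs_per_dollar)  # → 12
```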

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 7 minutes, though predict time varies significantly with the inputs.

Readme

StyleCLIPDraw

Peter Schaldenbrand, Zhixuan Liu, and Jean Oh. September 2021.

Featured at the 2021 NeurIPS Workshop on Machine Learning and Design. ArXiv pre-print.

Note: this version of StyleCLIPDraw is optimized for short runtime, so its results will not exactly match those of the original model.

StyleCLIPDraw adds a style loss to the CLIPDraw (Frans et al. 2021) (code) text-to-drawing synthesis model to allow artistic control of the synthesized drawings in addition to control of the content via text. Whereas performing decoupled style transfer on a generated image only affects the texture, our proposed coupled approach is able to capture a style in both texture and shape, suggesting that the style of the drawing is coupled with the drawing process itself.
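The coupling described above can be caricatured as a single optimization loop in which the stroke parameters receive gradients from a content objective and a style objective at the same time, with the style-strength slider acting as a loss weight. This is a self-contained toy with stand-in quadratic losses, not the actual StyleCLIPDraw implementation (which uses CLIP and a neural style loss):

```python
# Toy sketch: jointly optimize a scalar "stroke parameter" x against a
# content loss and a style loss, weighted by the 0-100 style_strength
# slider. Both losses are stand-in quadratics so the example runs anywhere.

def optimize(style_strength=50, steps=200, lr=0.1):
    w = style_strength / 100.0           # map slider to a loss weight in [0, 1]
    x = 0.0                              # stand-in for stroke parameters
    content_target, style_target = 1.0, -1.0
    for _ in range(steps):
        # gradient of (1 - w) * (x - content)^2 + w * (x - style)^2
        grad = 2 * (1 - w) * (x - content_target) + 2 * w * (x - style_target)
        x -= lr * grad                   # one coupled gradient step
    return x

print(round(optimize(0), 2))    # → 1.0  (all content)
print(round(optimize(100), 2))  # → -1.0 (all style)
print(round(optimize(50), 2))   # → 0.0  (balanced)
```

Because both gradients flow into the same parameters on every step, the style shapes the strokes themselves rather than being applied as a texture pass afterward, which is the intuition behind the coupled approach.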