Readme
Runs on Anything v3.0 with the pixel-portrait LoRA by Nerijs and the Latent Consistency Model (LCM) LoRA.
Uses a Canny ControlNet with an Img2Img pipeline.
Converts any image to a pixel-art portrait.
First, install the Replicate Python client:

```shell
pip install replicate
```

Then set the `REPLICATE_API_TOKEN` environment variable:

```shell
export REPLICATE_API_TOKEN=<paste-your-token-here>
```

Find your API token in your account settings.
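Before making a prediction, it can help to confirm the token is actually visible to your Python process; a common failure mode is exporting it in a different shell. A minimal check (the helper name `replicate_token_is_set` is ours, not part of the client library):

```python
import os


def replicate_token_is_set() -> bool:
    """Return True if REPLICATE_API_TOKEN is present and non-empty."""
    return bool(os.environ.get("REPLICATE_API_TOKEN", "").strip())
```

If this returns `False`, re-export the variable in the same shell session you launch Python from.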
Run tripathiarpan20/pixel-portrait-lcm-anythingv3.0 using Replicate's API. Check out the model's schema for an overview of inputs and outputs.

```python
import replicate

output = replicate.run(
    "tripathiarpan20/pixel-portrait-lcm-anythingv3.0:0c69710a5c14db6b86e37e3dd04f36c7c78d289836460f061f5345e1eb39a854",
    input={
        "seed": 42,  # fix the random seed for reproducible results
        "image": "",  # URL of the input image to convert
        "prompt": "A person, pixel art",
        "strength": 0.5,  # how far img2img may depart from the input image
        "controlnet_end": 1,  # fraction of steps at which ControlNet stops
        "guidance_scale": 8,
        "negative_prompt": "bad quality, low quality",
        "controlnet_scale": 0.8,  # ControlNet conditioning strength
        "controlnet_start": 0,  # fraction of steps at which ControlNet starts
        "canny_low_threshold": 0.31,
        "num_inference_steps": 10,  # few steps suffice thanks to the LCM LoRA
        "canny_high_threshold": 0.78,
    },
)
print(output)
```
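The `canny_low_threshold` and `canny_high_threshold` inputs appear to be normalized to the 0-1 range, whereas OpenCV's Canny edge detector takes thresholds on the 0-255 intensity scale; the mapping below is our assumption about how such normalized values would translate, not a documented detail of this model:

```python
def to_pixel_threshold(normalized: float) -> int:
    """Map a normalized [0, 1] Canny threshold to OpenCV's 0-255 scale.

    Assumption: the model scales thresholds linearly; verify against
    the model's schema before relying on exact values.
    """
    if not 0.0 <= normalized <= 1.0:
        raise ValueError("threshold must be in [0, 1]")
    return round(normalized * 255)
```

Under that assumption, the defaults above (0.31 and 0.78) correspond to roughly 79 and 199 on the pixel scale. Edges with gradients above the high threshold are kept, and weaker edges survive only if connected to strong ones, so lowering the thresholds produces busier edge maps and more detailed pixel portraits.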
To learn more, take a look at the guide on getting started with Python.
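If the model's output is a list of file URLs (an assumption; check the output schema for this model), the generated portrait can be saved locally with a small helper like this (the name `save_first_output` is ours):

```python
import shutil
import urllib.request


def save_first_output(output, path):
    """Download the first URL in `output` to `path` and return the path.

    Assumption: replicate.run returns a URL, or a list of URLs, for this
    model's image output.
    """
    urls = output if isinstance(output, list) else [output]
    if not urls:
        raise ValueError("model returned no output")
    with urllib.request.urlopen(str(urls[0])) as resp, open(path, "wb") as f:
        shutil.copyfileobj(resp, f)
    return path
```

For example, `save_first_output(output, "portrait.png")` would write the first generated image to disk.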
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.