
zedge/stable-diffusion:c2fe2329

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field Type Default value Description
prompt
string
astronaut in a 90s college party, vhs photo
Prompt
width
integer
1024
Width of output image
height
integer
1024
Height of output image
seed
integer
-1
Random seed (pass a negative value for a random seed)
safety_prompt
string
Analyze the provided image for hate content. Output "True" if the image contains any of the following:
- Nazi symbols (e.g., swastika)
- Symbols/propaganda associated with terrorist organizations
- Graphic violence (explicit depictions of severe injury, gore, mutilation, or torture)
- Content promoting suicide
- Symbols/imagery related to White supremacist groups
- Recognizable symbols/imagery promoting violent misogyny or anti-LGBTQ+ hate
- Dehumanizing caricatures or propaganda targeting racial, ethnic, or religious groups
- Prominent text within the image that clearly constitutes direct hate speech (e.g., slurs, calls for violence against protected groups)
Otherwise, output "False".
Prompt for InternVL to check for NSFW content
num_outputs
integer
1

Min: 1

Max: 4

Number of images to output
warm_delay
integer
-1
Parameter for warming the model. If set to a non-negative value, the model returns an empty dict after the specified number of seconds
disable_nsfw_checker
boolean
False
Disable safety checker for generated images.
verbose
boolean
False
Print detailed timing information
remove_background
boolean
False
Remove background from the image
threshold
integer
80

Max: 255

Threshold for transparency (0-255). Higher values make more pixels transparent.
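The threshold semantics can be sketched as a binarization of the soft alpha matte produced by background removal. This is an assumed interpretation (the model code isn't shown): matte values below the threshold become fully transparent, which matches "higher values make more pixels transparent".

```python
def apply_transparency_threshold(alpha, threshold=80):
    """Binarize a soft alpha matte (2D list of 0-255 values).

    Assumed semantics: pixels whose matte value falls below `threshold`
    become fully transparent (0); the rest become fully opaque (255).
    Raising the threshold therefore makes more pixels transparent.
    """
    return [[0 if a < threshold else 255 for a in row] for row in alpha]
```

For example, with the default threshold of 80, a matte value of 79 is dropped to transparent while 80 stays opaque.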
stray_removal
number
0.01

Min: 0.001

Max: 0.3

Remove components smaller than this ratio of the largest component (0.01 = 1%, 0.1 = 10%, etc.)
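Stray removal as described above is a standard connected-components filter: keep only foreground components whose size is at least the given ratio of the largest component. A minimal pure-Python sketch (4-connectivity assumed; the model's actual implementation may differ):

```python
from collections import deque

def remove_stray_components(mask, ratio=0.01):
    """Drop connected foreground components smaller than `ratio` times
    the largest component (0.01 = 1% of the largest).

    `mask` is a 2D list of 0/1 values; 4-connectivity is assumed.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    components = []  # each entry: list of (y, x) pixel coordinates

    # Label components with a breadth-first flood fill.
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)

    if not components:
        return mask
    largest = max(len(c) for c in components)

    # Keep only components at or above the size ratio.
    out = [[0] * w for _ in range(h)]
    for comp in components:
        if len(comp) >= ratio * largest:
            for y, x in comp:
                out[y][x] = 1
    return out
```

With `ratio=0.1`, any blob smaller than 10% of the largest blob is erased, which matches the "0.1 = 10%" reading of the parameter.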

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"additionalProperties": true, "title": "Output", "type": "object"}
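Since the output schema is an open object (`additionalProperties: true`), a caller should treat the response as a free-form dict. Putting the fields together, a request might look like the sketch below, assuming the official `replicate` Python client; `build_input` is a hypothetical helper, not part of the model's API, and the output keys depend on the model version.

```python
def build_input(prompt, **overrides):
    """Assemble an input payload; unspecified fields fall back to the
    schema defaults listed above."""
    payload = {
        "prompt": prompt,
        "width": 1024,
        "height": 1024,
        "seed": -1,               # negative -> random seed
        "num_outputs": 1,         # allowed range: 1..4
        "disable_nsfw_checker": False,
        "remove_background": False,
    }
    payload.update(overrides)
    return payload

if __name__ == "__main__":
    # Requires `pip install replicate` and a REPLICATE_API_TOKEN env var.
    import replicate

    output = replicate.run(
        "zedge/stable-diffusion:c2fe2329",
        input=build_input("astronaut in a 90s college party, vhs photo"),
    )
    print(output)  # free-form object; keys are not fixed by the schema
```

Overrides merge on top of the defaults, so `build_input(prompt, num_outputs=4)` changes only the number of images.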