Input schema
The fields you can use to run this model with an API. If you don't provide a value for a field, its default value is used.
Field | Type | Default value | Description
---|---|---|---
prompt_a | string | funky synth solo | The prompt for your audio
prompt_b | string | | The second prompt to interpolate with the first; leave blank for no interpolation
alpha | number | 0.5 (Max: 1) | Interpolation alpha when using two prompts. A value of 0 uses prompt_a fully; a value of 1 uses prompt_b fully
denoising | number | 0.75 (Max: 1) | How much to transform the input spectrogram
num_inference_steps | integer | 50 (Min: 1) | Number of steps to run the diffusion model
seed_image_id | string (enum) | vibes | Seed spectrogram to use. Options: agile, marim, mask_beat_lines_80, mask_gradient_dark, mask_gradient_top_70, mask_graident_top_fifth_75, mask_top_third_75, mask_top_third_95, motorway, og_beat, vibes
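The parameters above can be assembled into an input payload along these lines. The field names, defaults, and ranges are taken from the table; the function name and validation checks are illustrative assumptions, not part of the model's API.

```python
def build_input(prompt_a="funky synth solo", prompt_b="", alpha=0.5,
                denoising=0.75, num_inference_steps=50,
                seed_image_id="vibes"):
    """Assemble an input dict matching the schema above.

    Range checks mirror the documented constraints (Max: 1, Min: 1).
    """
    if not 0 <= alpha <= 1:
        raise ValueError("alpha must be between 0 and 1")
    if not 0 <= denoising <= 1:
        raise ValueError("denoising must be between 0 and 1")
    if num_inference_steps < 1:
        raise ValueError("num_inference_steps must be at least 1")
    payload = {
        "prompt_a": prompt_a,
        "denoising": denoising,
        "alpha": alpha,
        "num_inference_steps": num_inference_steps,
        "seed_image_id": seed_image_id,
    }
    if prompt_b:  # leave blank to skip interpolation entirely
        payload["prompt_b"] = prompt_b
    return payload
```

A dict like this would be passed as the `input` when running the model through the API client of your choice.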
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "title": "Output",
  "type": "object",
  "required": ["audio", "spectrogram"],
  "properties": {
    "audio": {"title": "Audio", "type": "string", "format": "uri"},
    "spectrogram": {"title": "Spectrogram", "type": "string", "format": "uri"}
  }
}
```
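A response can be checked against this schema with a small sketch like the one below. The two field names and their URI-string types come straight from the schema; the function name and error messages are illustrative.

```python
def validate_output(output):
    """Ensure the response has the required URI string fields."""
    required = ("audio", "spectrogram")
    missing = [k for k in required if k not in output]
    if missing:
        raise KeyError(f"output is missing required fields: {missing}")
    for key in required:
        if not isinstance(output[key], str):
            raise TypeError(f"{key} should be a URI string, "
                            f"got {type(output[key]).__name__}")
    return output
```

Since both fields are URIs, in practice you would then download them (for example with `urllib.request.urlretrieve`) to obtain the generated audio file and its spectrogram image.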