jhurliman/music-flamingo
Public · 44 runs
Run jhurliman/music-flamingo with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
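For example, a run with the official Python client looks roughly like the sketch below. This is a minimal illustration rather than the only way to call the model: the audio URL is a placeholder, and depending on your setup you may need to pin a specific model version string.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Assumes REPLICATE_API_TOKEN is set in your environment.
# The audio URL is a placeholder; any reachable WAV/MP3/FLAC under ~10 minutes works.
import replicate

output = replicate.run(
    "jhurliman/music-flamingo",
    input={
        "audio": "https://example.com/my-track.mp3",
        "prompt": "Describe this track in full detail.",
        "max_new_tokens": 1024,
        "temperature": 0.7,
        "do_sample": True,
    },
)
print(output)
```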
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| audio | string | | Audio file to analyze (WAV, MP3, FLAC). Max ~10 minutes. |
| prompt | string | Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates. | Question or instruction about the audio. |
| max_new_tokens | integer | 1024 (min: 64, max: 2048) | Maximum number of tokens to generate. |
| temperature | number | 0.7 (max: 1.5) | Sampling temperature. Higher = more creative, lower = more focused. |
| do_sample | boolean | True | Whether to use sampling (True) or greedy decoding (False). |
{
"type": "object",
"title": "Input",
"required": [
"audio"
],
"properties": {
"audio": {
"type": "string",
"title": "Audio",
"format": "uri",
"x-order": 0,
"description": "Audio file to analyze (WAV, MP3, FLAC). Max ~10 minutes."
},
"prompt": {
"type": "string",
"title": "Prompt",
"default": "Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates.",
"x-order": 1,
"description": "Question or instruction about the audio."
},
"do_sample": {
"type": "boolean",
"title": "Do Sample",
"default": true,
"x-order": 4,
"description": "Whether to use sampling (True) or greedy decoding (False)."
},
"temperature": {
"type": "number",
"title": "Temperature",
"default": 0.7,
"maximum": 1.5,
"minimum": 0.0,
"x-order": 3,
"description": "Sampling temperature. Higher = more creative, lower = more focused."
},
"max_new_tokens": {
"type": "integer",
"title": "Max New Tokens",
"default": 1024,
"maximum": 2048,
"minimum": 64,
"x-order": 2,
"description": "Maximum number of tokens to generate."
}
}
}
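If you want to sanity-check a payload locally before submitting a run, the schema above is plain JSON Schema, so a generic validator works. The sketch below uses the third-party `jsonschema` package (an assumption about your tooling, not something this model requires) with an abbreviated copy of the schema:

```python
# Local validation sketch using the jsonschema package (pip install jsonschema).
# The schema dict is an abbreviated copy of the Input schema shown above.
import jsonschema

input_schema = {
    "type": "object",
    "required": ["audio"],
    "properties": {
        "audio": {"type": "string", "format": "uri"},
        "prompt": {"type": "string"},
        "max_new_tokens": {"type": "integer", "minimum": 64, "maximum": 2048},
        "temperature": {"type": "number", "minimum": 0.0, "maximum": 1.5},
        "do_sample": {"type": "boolean"},
    },
}

candidate_input = {
    "audio": "https://example.com/my-track.wav",  # placeholder URL
    "max_new_tokens": 512,
    "temperature": 0.3,
}

# Raises jsonschema.exceptions.ValidationError if the payload does not conform.
jsonschema.validate(instance=candidate_input, schema=input_schema)
print("input payload is valid")
```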
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "string",
"title": "Output"
}
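Because the output is a single string rather than a structured object, handling it is straightforward. As a rough sketch (assuming the Python client shown earlier; some clients may stream the string as chunks, which joining also covers):

```python
# Sketch: the output schema is one string, so it can be used directly.
# "".join(...) also works if the client streams the result as an iterator of chunks.
description = "".join(output)

with open("track_description.txt", "w", encoding="utf-8") as f:
    f.write(description)
```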