keeandev/whisperx
Word-level timestamp accuracy, GPU-accelerated transcription and diarization, automatic speech recognition with Whisper (large-v3).
Run keeandev/whisperx with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
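For example, a minimal call with the Replicate Python client might look like the sketch below. The unpinned model reference, the placeholder audio URL, and the chosen input values are illustrative assumptions, not part of this page.

```python
# A minimal sketch using the Replicate Python client (pip install replicate).
# Assumes REPLICATE_API_TOKEN is set in your environment; the audio URL is a
# placeholder, and you may prefer to pin a specific model version hash.
import replicate

output = replicate.run(
    "keeandev/whisperx",
    input={
        "audio": "https://example.com/audio.mp3",  # any reachable audio file URL
        "batch_size": 32,           # default parallelization
        "align_output": True,       # word-level timestamps
        "diarize_speakers": False,  # set True to attach speaker IDs
    },
)
print(output)
```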
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field its default value will be used.
Field | Type | Default value | Description
---|---|---|---
audio | string | | Audio file.
language | string | | ISO code of the language spoken in the audio; leave empty to perform automatic language detection.
batch_size | integer | 32 | Parallelization of input audio transcription.
task | string (enum) | transcribe | Task for whisper to do. Options: transcribe, translate.
align_output | boolean | False | Use if you need word-level timing and not just batched transcription.
diarize_speakers | boolean | False | Use if you want speaker IDs attached to the transcription.
only_text | boolean | False | Set if you only want to return text; otherwise, segment metadata will be returned as well.
debug | boolean | False | Print out process information.
{
"type": "object",
"title": "Input",
"required": [
"audio"
],
"properties": {
"task": {
"enum": [
"transcribe",
"translate"
],
"type": "string",
"title": "task",
"description": "Task for whisper to do.",
"default": "transcribe",
"x-order": 3
},
"audio": {
"type": "string",
"title": "Audio",
"format": "uri",
"x-order": 0,
"description": "Audio file"
},
"debug": {
"type": "boolean",
"title": "Debug",
"default": false,
"x-order": 7,
"description": "Print out process information."
},
"language": {
"type": "string",
"title": "Language",
"x-order": 1,
"description": "ISO code of the language spoken in the audio, leave empty to perform automatic language detection."
},
"only_text": {
"type": "boolean",
"title": "Only Text",
"default": false,
"x-order": 6,
"description": "Set if you only want to return text; otherwise, segment metadata will be returned as well."
},
"batch_size": {
"type": "integer",
"title": "Batch Size",
"default": 32,
"x-order": 2,
"description": "Parallelization of input audio transcription."
},
"align_output": {
"type": "boolean",
"title": "Align Output",
"default": false,
"x-order": 4,
"description": "Use if you need word-level timing and not just batched transcription."
},
"diarize_speakers": {
"type": "boolean",
"title": "Diarize Speakers",
"default": false,
"x-order": 5,
"description": "Use if you want speaker IDs attached to the transcription."
}
}
}
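If you build input payloads programmatically, you can check them against the schema above before sending a request. The sketch below assumes you have saved that schema to a local file named input_schema.json and uses the third-party jsonschema package; the file name, audio URL, and payload values are illustrative.

```python
# A minimal sketch: validate an input payload against the "Input" schema above.
# Assumes the schema was saved locally as input_schema.json (an assumption made
# for this example) and that the jsonschema package is installed.
import json
from jsonschema import validate

with open("input_schema.json") as f:
    input_schema = json.load(f)

payload = {
    "audio": "https://example.com/interview.mp3",  # placeholder audio URL
    "language": "en",            # omit to use automatic language detection
    "batch_size": 32,
    "task": "transcribe",
    "align_output": True,        # word-level timestamps
    "diarize_speakers": True,    # attach speaker IDs to segments
    "only_text": False,
    "debug": False,
}

# Raises jsonschema.exceptions.ValidationError if the payload doesn't conform.
validate(instance=payload, schema=input_schema)
```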
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "string",
"title": "Output"
}
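Note that the output is declared as a plain string. Depending on the flags you set (for example only_text), it may be raw transcript text or serialized segment metadata, so it can be worth parsing defensively. The helper below is a hedged sketch based on that assumption, not part of the model's documented API.

```python
# A hedged sketch: try to decode the output string as JSON (segment metadata),
# and fall back to treating it as plain transcript text if that fails.
import json

def handle_output(output: str):
    try:
        return json.loads(output)   # segment metadata, if the model returned JSON
    except (TypeError, json.JSONDecodeError):
        return output               # plain transcript text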