nicknaskida
/
whisper-diarization
⚡️ Insanely Fast audio transcription | whisper large-v3 | speaker diarization | word & sentence level timestamps | prompt | hotwords. Fork of thomasmol/whisper-diarization. Added batched whisper, 3x-4x speedup 🚀
Run nicknaskida/whisper-diarization with an API
Use one of the Replicate client libraries to get started quickly. The Playground tab lets you tweak the inputs, see the results, and copy the corresponding code to use in your own project.
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
file_string | string | | Base64-encoded audio file (provide this, `file_url`, or `file`) |
file_url | string | | A direct URL to an audio file |
file | string (uri) | | An audio file upload |
hf_token | string | | A Hugging Face token (hf.co/settings/token) so Pyannote.audio can diarize the audio. You must first accept the terms at https://huggingface.co/pyannote/speaker-diarization-3.1 and https://huggingface.co/pyannote/segmentation-3.0. |
group_segments | boolean | True | Merge consecutive segments from the same speaker when they are less than 2 seconds apart |
transcript_output_format | string (enum) | both | Format of the transcript output: `words_only` (individual words with timestamps), `segments_only` (full text of segments), or `both`. |
num_speakers | integer | 2 | Number of speakers (min: 1, max: 50); leave empty to autodetect. |
translate | boolean | False | Translate the speech into English. |
language | string | | Language of the spoken words as a language code like 'en'. Leave empty to autodetect the language. |
prompt | string | | Vocabulary: provide names, acronyms, and loanwords in a list. Use punctuation for best accuracy. |
batch_size | integer | 64 | Batch size for inference (min: 1). Reduce if you hit an out-of-memory (OOM) error. |
offset_seconds | integer | 0 | Offset in seconds, used for chunked inputs. |
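If you use `file_string`, the audio file must be base64-encoded first. A minimal sketch (the helper name and file path are illustrative, not part of the model's API):

```python
import base64

def encode_audio(path: str) -> str:
    """Read an audio file and return its contents as a base64 string,
    suitable for the model's file_string input."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

Note that base64 inflates the payload by roughly a third, so for large recordings `file_url` or a direct `file` upload is usually the better choice.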
{
"type": "object",
"title": "Input",
"properties": {
"file": {
"type": "string",
"title": "File",
"format": "uri",
"x-order": 2,
"description": "Or an audio file"
},
"prompt": {
"type": "string",
"title": "Prompt",
"x-order": 9,
"description": "Vocabulary: provide names, acronyms and loanwords in a list. Use punctuation for best accuracy."
},
"file_url": {
"type": "string",
"title": "File Url",
"x-order": 1,
"description": "Or provide: A direct audio file URL"
},
"hf_token": {
"type": "string",
"title": "Hf Token",
"x-order": 3,
"description": "Provide a hf.co/settings/token for Pyannote.audio to diarise the audio clips. You need to agree to the terms in 'https://huggingface.co/pyannote/speaker-diarization-3.1' and 'https://huggingface.co/pyannote/segmentation-3.0' first."
},
"language": {
"type": "string",
"title": "Language",
"x-order": 8,
"description": "Language of the spoken words as a language code like 'en'. Leave empty to auto detect language."
},
"translate": {
"type": "boolean",
"title": "Translate",
"default": false,
"x-order": 7,
"description": "Translate the speech into English."
},
"batch_size": {
"type": "integer",
"title": "Batch Size",
"default": 64,
"minimum": 1,
"x-order": 10,
"description": "Batch size for inference. (Reduce if face OOM error)"
},
"file_string": {
"type": "string",
"title": "File String",
"x-order": 0,
"description": "Either provide: Base64 encoded audio file,"
},
"num_speakers": {
"type": "integer",
"title": "Num Speakers",
"default": 2,
"maximum": 50,
"minimum": 1,
"x-order": 6,
"description": "Number of speakers, leave empty to autodetect."
},
"group_segments": {
"type": "boolean",
"title": "Group Segments",
"default": true,
"x-order": 4,
"description": "Group segments of same speaker shorter apart than 2 seconds"
},
"offset_seconds": {
"type": "integer",
"title": "Offset Seconds",
"default": 0,
"minimum": 0,
"x-order": 11,
"description": "Offset in seconds, used for chunked inputs"
},
"transcript_output_format": {
"enum": [
"words_only",
"segments_only",
"both"
],
"type": "string",
"title": "transcript_output_format",
"description": "Specify the format of the transcript output: individual words with timestamps, full text of segments, or a combination of both.",
"default": "both",
"x-order": 5
}
}
}
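A sketch of assembling an input payload that matches the schema above. The audio URL and Hugging Face token are placeholders you must replace with your own values; the defaults shown are the schema defaults:

```python
# Inputs for nicknaskida/whisper-diarization, matching the input schema.
inputs = {
    "file_url": "https://example.com/audio.mp3",  # placeholder: your audio URL
    "hf_token": "hf_...",                         # placeholder: your Hugging Face token
    "num_speakers": 2,                            # schema default; omit to autodetect
    "transcript_output_format": "both",           # words_only | segments_only | both
    "group_segments": True,
    "batch_size": 64,                             # reduce if you hit an OOM error
}

# With the Replicate Python client (pip install replicate), a run looks like:
#   import replicate
#   output = replicate.run("nicknaskida/whisper-diarization:<version>", input=inputs)
# where <version> is the model version id shown on this page, and the
# REPLICATE_API_TOKEN environment variable holds your API token.
```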
Output schema
The shape of the response you’ll get when you run this model with an API.
{
"type": "object",
"title": "Output",
"required": [
"segments"
],
"properties": {
"language": {
"type": "string",
"title": "Language"
},
"segments": {
"type": "array",
"items": {},
"title": "Segments"
},
"num_speakers": {
"type": "integer",
"title": "Num Speakers"
}
}
}
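The output schema leaves the shape of each item in `segments` unspecified (`"items": {}`). The sketch below assumes each segment carries `speaker`, `start`, `end`, and `text` keys, which is an assumption about this model's output rather than something the schema guarantees:

```python
def format_transcript(output: dict) -> str:
    """Render the model output as one line per segment.

    Assumes each segment dict has "speaker", "start", "end", and "text"
    keys -- an assumption, since the schema leaves segment items open.
    """
    lines = []
    for seg in output.get("segments", []):
        lines.append(
            f'[{seg["start"]:.2f}-{seg["end"]:.2f}] {seg["speaker"]}: {seg["text"]}'
        )
    return "\n".join(lines)
```

For example, `format_transcript({"segments": [{"speaker": "SPEAKER_00", "start": 0.0, "end": 1.5, "text": "Hello."}]})` yields `[0.00-1.50] SPEAKER_00: Hello.`.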