nicknaskida /whisper-diarization:c643440e
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
file_string | string | | Either provide: a Base64-encoded audio file, |
file_url | string | | or provide: a direct audio file URL, |
file | string | | or provide: an audio file. |
hf_token | string | | Provide an hf.co/settings/token for pyannote.audio to diarise the audio clips. You need to agree to the terms at https://huggingface.co/pyannote/speaker-diarization-3.1 and https://huggingface.co/pyannote/segmentation-3.0 first. |
group_segments | boolean | True | Group segments from the same speaker that are less than 2 seconds apart. |
transcript_output_format | string (enum) | both (options: words_only, segments_only, both) | Specify the format of the transcript output: individual words with timestamps, full text of segments, or a combination of both. |
num_speakers | integer | 2 (min: 1, max: 50) | Number of speakers; leave empty to autodetect. |
translate | boolean | False | Translate the speech into English. |
language | string | | Language of the spoken words as a language code like 'en'. Leave empty to auto-detect the language. |
prompt | string | | Vocabulary: provide names, acronyms and loanwords in a list. Use punctuation for best accuracy. |
batch_size | integer | 64 (min: 1) | Batch size for inference. (Reduce if you hit an out-of-memory error.) |
offset_seconds | integer | 0 | Offset in seconds, used for chunked inputs. |
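Exactly one of `file_string`, `file_url`, or `file` should be supplied. A minimal sketch, assuming Python's standard `base64` module, of building an input payload around the `file_string` field (the helper `build_input` and the placeholder audio bytes are illustrative, not part of the model's API):

```python
import base64


def build_input(audio_bytes, num_speakers=None):
    """Build an input payload using the file_string field.

    Only one of file_string / file_url / file should be set.
    """
    payload = {
        # file_string expects the audio file as a Base64 string.
        "file_string": base64.b64encode(audio_bytes).decode("ascii"),
        "transcript_output_format": "both",
        "group_segments": True,
    }
    if num_speakers is not None:
        # Enforce the documented range (min: 1, max: 50).
        if not 1 <= num_speakers <= 50:
            raise ValueError("num_speakers must be between 1 and 50")
        payload["num_speakers"] = num_speakers
    return payload


# Placeholder bytes stand in for real audio file contents.
payload = build_input(b"fake-audio-bytes", num_speakers=2)
```

The resulting dictionary can then be passed as the `input` of an API call to this model version.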
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "properties": {
    "language": {"title": "Language", "type": "string"},
    "num_speakers": {"title": "Num Speakers", "type": "integer"},
    "segments": {"items": {}, "title": "Segments", "type": "array"}
  },
  "required": ["segments"],
  "title": "Output",
  "type": "object"
}
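Only `segments` is required; `language` and `num_speakers` are optional, and the schema leaves the items of `segments` untyped. A minimal sketch, in plain Python, of checking a response against these constraints (the `validate_output` helper is illustrative, not part of the API):

```python
def validate_output(output):
    """Check a response dict against the output schema above."""
    # 'segments' is the only required field, and must be an array.
    if "segments" not in output:
        raise KeyError("missing required field: segments")
    if not isinstance(output["segments"], list):
        raise TypeError("'segments' must be an array")
    # 'language' and 'num_speakers' are optional but typed when present.
    if "language" in output and not isinstance(output["language"], str):
        raise TypeError("'language' must be a string")
    if "num_speakers" in output and not isinstance(output["num_speakers"], int):
        raise TypeError("'num_speakers' must be an integer")
    return output["segments"]


segments = validate_output({"segments": [], "language": "en", "num_speakers": 2})
```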