You're looking at a specific version of this model.

jarvissan22 /diarization-and-speaker-embedding:e759e02f

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value is used.

file_string (string)
    Base64-encoded audio file.

file_url (string)
    A direct audio file URL.

file (string)
    An audio file.

hf_token (string)
    A Hugging Face access token (create one at hf.co/settings/token) for pyannote.audio. You must first accept the terms at 'https://huggingface.co/pyannote/speaker-diarization-3.1' and 'https://huggingface.co/pyannote/segmentation-3.0'.

num_speakers (integer; min: 1, max: 50)
    Number of speakers; leave empty to auto-detect.

min_speakers (integer; min: 1, max: 50; default: 1)
    Minimum number of speakers.

max_speakers (integer; min: 1, max: 50; default: 10)
    Maximum number of speakers.

language (string; default: 'ja')
    Language of the spoken words as a language code such as 'ja'. Leave empty to auto-detect the language.

whisper_model_size (string; default: 'small')
    Whisper model size for transcription.

batch_size (integer; min: 1; default: 64)
    Batch size for inference (reduce this if you hit an out-of-memory error).

offset_seconds (number; default: 0)
    Offset in seconds, used for chunked inputs.
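The fields above can be assembled into an input payload in a few lines of Python. This is a minimal sketch: the `build_input` helper is hypothetical (not part of any client library), and it assumes you send the audio as a base64 `file_string`; the commented-out `replicate.run` call uses the model version shown at the top of this page.

```python
import base64


def build_input(audio_path, num_speakers=None, language="ja",
                whisper_model_size="small", batch_size=64):
    """Assemble an input payload for this model.

    Exactly one of file_string / file_url / file should carry the audio;
    here a local file is base64-encoded into file_string.
    """
    with open(audio_path, "rb") as f:
        payload = {
            "file_string": base64.b64encode(f.read()).decode("ascii"),
            "language": language,
            "whisper_model_size": whisper_model_size,
            "batch_size": batch_size,
        }
    if num_speakers is not None:
        # Mirror the schema's declared bounds before sending the request.
        if not 1 <= num_speakers <= 50:
            raise ValueError("num_speakers must be between 1 and 50")
        payload["num_speakers"] = num_speakers
    return payload


# The payload would then be passed to the Replicate client, e.g.:
#   import replicate
#   output = replicate.run(
#       "jarvissan22/diarization-and-speaker-embedding:e759e02f",
#       input={**build_input("meeting.wav"), "hf_token": "<your token>"},
#   )
```

Leaving `num_speakers` unset lets the model auto-detect the speaker count, constrained by `min_speakers` and `max_speakers`.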

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "title": "DiarizationEmbeddingOutput",
  "type": "object",
  "properties": {
    "audio_duration": {"title": "Audio Duration", "type": "number"},
    "diarization_segments": {"items": {}, "title": "Diarization Segments", "type": "array"},
    "language": {"title": "Language", "type": "string"},
    "processing_time": {"title": "Processing Time", "type": "number"},
    "speaker_count": {"title": "Speaker Count", "type": "integer"},
    "speaker_embeddings": {"additionalProperties": true, "title": "Speaker Embeddings", "type": "object"},
    "speaker_info": {"items": {}, "title": "Speaker Info", "type": "array"},
    "transcript_segments": {"items": {}, "title": "Transcript Segments", "type": "array"}
  },
  "required": [
    "diarization_segments",
    "transcript_segments",
    "speaker_embeddings",
    "speaker_info"
  ]
}
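The schema leaves `speaker_embeddings` as a free-form object, so its exact contents are model-defined. A common shape, assumed in the sketch below, is a mapping from speaker label (e.g. 'SPEAKER_00') to an embedding vector; under that assumption you can compare speakers with cosine similarity. Both helper names are illustrative, not part of the API.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def most_similar_speakers(speaker_embeddings):
    """Find the pair of speaker labels whose embeddings are closest.

    speaker_embeddings: mapping of speaker label -> embedding vector,
    one possible shape of the free-form `speaker_embeddings` output.
    """
    labels = sorted(speaker_embeddings)
    best_pair, best_sim = None, -2.0
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            sim = cosine_similarity(speaker_embeddings[a], speaker_embeddings[b])
            if sim > best_sim:
                best_pair, best_sim = (a, b), sim
    return best_pair, best_sim
```

A high similarity between two labels can signal that the diarizer split one real speaker in two, which is worth checking when `num_speakers` was left to auto-detect.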