
jarvissan22/diarization-and-speaker-embedding

Run jarvissan22/diarization-and-speaker-embedding with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
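For example, here is a minimal sketch using the Replicate Python client (`pip install replicate`). The version hash, audio URL, and tokens below are placeholders; copy the current version identifier from the model page:

```python
import replicate  # assumes REPLICATE_API_TOKEN is set in your environment

output = replicate.run(
    # Placeholder version hash; copy the current one from the model page.
    "jarvissan22/diarization-and-speaker-embedding:<version>",
    input={
        "file_url": "https://example.com/audio/meeting.wav",  # placeholder URL
        "hf_token": "hf_...",  # your Hugging Face access token (required)
        "language": "ja",
    },
)
print(output.get("speaker_count"), "speakers detected")
```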
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
file_string | string | | Base64 encoded audio file |
file_url | string | | A direct audio file URL |
file | string (URI) | | An audio file |
hf_token | string | | Required. Provide a hf.co/settings/token for Pyannote.audio. You need to agree to the terms at https://huggingface.co/pyannote/speaker-diarization-3.1 and https://huggingface.co/pyannote/segmentation-3.0 first. |
num_speakers | integer (min: 1, max: 50) | | Number of speakers; leave empty to autodetect. |
min_speakers | integer (min: 1, max: 50) | 1 | Minimum number of speakers |
max_speakers | integer (min: 1, max: 50) | 10 | Maximum number of speakers |
language | string | ja | Language of the spoken words as a language code like 'ja'. Leave empty to auto-detect the language. |
batch_size | integer (min: 1) | 64 | Batch size for inference. Reduce this if you hit an out-of-memory (OOM) error. |
offset_seconds | number (min: 0) | 0 | Offset in seconds, used for chunked inputs |
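The three file fields are alternative ways to supply the same audio: as a base64 string, as a URL, or as an uploaded file. A sketch of preparing the base64 variant for file_string, assuming a local WAV file (the filename and token are placeholders):

```python
import base64

# Read a local recording and base64-encode it for the file_string input.
with open("meeting.wav", "rb") as f:  # placeholder filename
    encoded = base64.b64encode(f.read()).decode("utf-8")

inputs = {
    "file_string": encoded,
    "hf_token": "hf_...",  # required; see the Hugging Face terms linked above
}
```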
{
  "type": "object",
  "title": "Input",
  "required": [
    "hf_token"
  ],
  "properties": {
    "file": {
      "type": "string",
      "title": "File",
      "format": "uri",
      "x-order": 2,
      "description": "An audio file"
    },
    "file_url": {
      "type": "string",
      "title": "File Url",
      "x-order": 1,
      "description": "A direct audio file URL"
    },
    "hf_token": {
      "type": "string",
      "title": "Hf Token",
      "x-order": 3,
      "description": "Provide a hf.co/settings/token for Pyannote.audio. You need to agree to the terms in 'https://huggingface.co/pyannote/speaker-diarization-3.1' and 'https://huggingface.co/pyannote/segmentation-3.0' first."
    },
    "language": {
      "type": "string",
      "title": "Language",
      "default": "ja",
      "x-order": 7,
      "description": "Language of the spoken words as a language code like 'ja'. Leave empty to auto detect language."
    },
    "batch_size": {
      "type": "integer",
      "title": "Batch Size",
      "default": 64,
      "minimum": 1,
      "x-order": 8,
      "description": "Batch size for inference. (Reduce if face OOM error)"
    },
    "file_string": {
      "type": "string",
      "title": "File String",
      "x-order": 0,
      "description": "Base64 encoded audio file"
    },
    "max_speakers": {
      "type": "integer",
      "title": "Max Speakers",
      "default": 10,
      "maximum": 50,
      "minimum": 1,
      "x-order": 6,
      "description": "Maximum number of speakers"
    },
    "min_speakers": {
      "type": "integer",
      "title": "Min Speakers",
      "default": 1,
      "maximum": 50,
      "minimum": 1,
      "x-order": 5,
      "description": "Minimum number of speakers"
    },
    "num_speakers": {
      "type": "integer",
      "title": "Num Speakers",
      "maximum": 50,
      "minimum": 1,
      "x-order": 4,
      "description": "Number of speakers, leave empty to autodetect."
    },
    "offset_seconds": {
      "type": "number",
      "title": "Offset Seconds",
      "default": 0,
      "minimum": 0,
      "x-order": 9,
      "description": "Offset in seconds, used for chunked inputs"
    }
  }
}
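Per the field descriptions, num_speakers pins the diarizer to an exact count, while min_speakers and max_speakers presumably only bound the search when the count is left to autodetection. A sketch of the two styles (all values are placeholders):

```python
# Pin the speaker count outright:
fixed = {
    "hf_token": "hf_...",                                   # placeholder
    "file_url": "https://example.com/audio/interview.wav",  # placeholder
    "num_speakers": 2,
}

# Or leave num_speakers unset and bound the autodetected count instead:
bounded = {
    "hf_token": "hf_...",                                   # placeholder
    "file_url": "https://example.com/audio/panel.wav",      # placeholder
    "min_speakers": 2,
    "max_speakers": 6,
}
```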
Output schema
The shape of the response you’ll get when you run this model with an API.
{
  "type": "object",
  "title": "DiarizationEmbeddingOutput",
  "required": [
    "diarization_segments",
    "transcript_segments",
    "speaker_embeddings",
    "speaker_info"
  ],
  "properties": {
    "language": {
      "type": "string",
      "title": "Language"
    },
    "speaker_info": {
      "type": "array",
      "items": {},
      "title": "Speaker Info"
    },
    "speaker_count": {
      "type": "integer",
      "title": "Speaker Count"
    },
    "audio_duration": {
      "type": "number",
      "title": "Audio Duration"
    },
    "processing_time": {
      "type": "number",
      "title": "Processing Time"
    },
    "speaker_embeddings": {
      "type": "object",
      "title": "Speaker Embeddings",
      "additionalProperties": true
    },
    "transcript_segments": {
      "type": "array",
      "items": {},
      "title": "Transcript Segments"
    },
    "diarization_segments": {
      "type": "array",
      "items": {},
      "title": "Diarization Segments"
    }
  }
}
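A sketch of consuming the response, continuing the earlier client example. Note the schema leaves the segment items and speaker_embeddings values untyped, so the shapes assumed in the comments below are guesses about a typical diarization payload, not guarantees from this model:

```python
# result is the dict described by DiarizationEmbeddingOutput above.
result = output

print(f"language={result.get('language')}, "
      f"speakers={result.get('speaker_count')}, "
      f"duration={result.get('audio_duration')}s")

for seg in result["diarization_segments"]:
    # Assumed item shape: {"start": ..., "end": ..., "speaker": ...};
    # the schema declares segment items as untyped objects.
    print(seg)

# speaker_embeddings is a free-form object; here we assume it maps each
# speaker label to a list of floats (the embedding vector).
for speaker, vector in result["speaker_embeddings"].items():
    print(speaker, "-> embedding of length", len(vector))
```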