nicknaskida / whisper-diarization

⚡️ Insanely Fast audio transcription | whisper large-v3 | speaker diarization | word & sentence level timestamps | prompt | hotwords. Fork of thomasmol/whisper-diarization. Added batched whisper, 3x-4x speedup 🚀

  • Public
  • 34 runs
  • GitHub

Run nicknaskida/whisper-diarization with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field (type, default) and description:

file_string (string)
  Either provide: a Base64-encoded audio file.

file_url (string)
  Or provide: a direct audio file URL.

file (string)
  Or provide: an audio file.

hf_token (string)
  Provide a hf.co/settings/token so Pyannote.audio can diarise the audio clips. You must first accept the terms at https://huggingface.co/pyannote/speaker-diarization-3.1 and https://huggingface.co/pyannote/segmentation-3.0.

group_segments (boolean, default: True)
  Group segments from the same speaker when they are less than 2 seconds apart.

transcript_output_format (string enum, default: both)
  Options: words_only, segments_only, both
  Format of the transcript output: individual words with timestamps, full text of segments, or both.

num_speakers (integer, default: 2, min: 1, max: 50)
  Number of speakers; leave empty to autodetect.

translate (boolean, default: False)
  Translate the speech into English.

language (string)
  Language of the spoken words as a language code like 'en'. Leave empty to auto-detect the language.

prompt (string)
  Vocabulary: provide names, acronyms, and loanwords in a list. Use punctuation for best accuracy.

batch_size (integer, default: 64, min: 1)
  Batch size for inference (reduce if you hit an out-of-memory error).

offset_seconds (integer, default: 0)
  Offset in seconds, used for chunked inputs.

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "object",
  "title": "Output",
  "required": [
    "segments"
  ],
  "properties": {
    "language": {
      "type": "string",
      "title": "Language"
    },
    "segments": {
      "type": "array",
      "items": {},
      "title": "Segments"
    },
    "num_speakers": {
      "type": "integer",
      "title": "Num Speakers"
    }
  }
}
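The schema leaves the shape of each segment unspecified (`"items": {}`). A small sketch of post-processing the response, assuming each segment dict carries `speaker`, `start`, `end`, and `text` keys, which is typical for this model family but not guaranteed by the schema:

```python
def format_transcript(output):
    """Render model output as 'speaker [start-end]: text' lines.

    Assumes each segment has 'speaker', 'start', 'end', and 'text'
    keys; the published output schema leaves segment items unspecified.
    """
    lines = []
    for seg in output.get("segments", []):
        start, end = float(seg["start"]), float(seg["end"])
        lines.append(f"{seg['speaker']} [{start:.2f}-{end:.2f}]: {seg['text'].strip()}")
    return "\n".join(lines)


# Example response in the assumed shape:
example = {
    "language": "en",
    "num_speakers": 2,
    "segments": [
        {"speaker": "SPEAKER_00", "start": 0.0, "end": 2.4, "text": "Hello there."},
        {"speaker": "SPEAKER_01", "start": 2.6, "end": 4.1, "text": "Hi!"},
    ],
}
print(format_transcript(example))
```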