keeandev / whisperx

Automatic speech recognition with Whisper (large-v3): GPU-accelerated transcription and diarization with word-level timestamps.

  • Public
  • 333 runs
  • GitHub
  • Paper
  • License

Run keeandev/whisperx with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
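
For example, here is a minimal sketch with the Replicate Python client (the file name is a placeholder, and a specific model version string should be copied from the Playground; the other client libraries follow the same pattern):

import replicate

# Placeholder local file; a URL to the audio also works with the Python client.
output = replicate.run(
    "keeandev/whisperx",          # pin a specific version in production
    input={
        "audio": open("interview.mp3", "rb"),
        "align_output": True,     # request word-level timestamps
    },
)
print(output)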

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used. An example input combining these fields follows the list.

audio (string)
    Audio file.
language (string)
    ISO code of the language spoken in the audio; leave empty to perform automatic language detection.
batch_size (integer, default: 32)
    Parallelization of input audio transcription.
task (string enum, default: transcribe)
    Options: transcribe, translate. Task for Whisper to perform.
align_output (boolean, default: False)
    Use if you need word-level timings and not just batched transcription.
diarize_speakers (boolean, default: False)
    Use if you want speaker IDs attached to the transcription.
only_text (boolean, default: False)
    Set if you only want the text returned; otherwise, segment metadata is returned as well.
debug (boolean, default: False)
    Print out process information.
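
As a sketch of how these fields combine, the input below asks for word-level timings and speaker labels (the audio URL and language are illustrative placeholders):

input = {
    "audio": "https://example.com/meeting.mp3",  # placeholder audio URL
    "language": "en",            # omit to auto-detect the language
    "batch_size": 32,
    "task": "transcribe",
    "align_output": True,        # word-level timestamps
    "diarize_speakers": True,    # attach speaker IDs to segments
    "only_text": False,          # keep segment metadata in the response
    "debug": False,
}
output = replicate.run("keeandev/whisperx", input=input)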

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output"
}
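
The output is a single string. Whether it holds plain text or serialized segment metadata depends on the only_text flag; the helper below is a hedged sketch that assumes segment metadata, when present, is JSON-encoded (the schema itself only guarantees a string):

import json

def parse_output(output: str):
    # Assumption: with only_text disabled, the string may be JSON-encoded
    # segments; fall back to returning the raw text otherwise.
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        return output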