victor-upmeet / whisperx

Accelerated transcription, word-level timestamps and diarization with whisperX large-v3

  • Public
  • 93.6K runs
  • GitHub
  • Paper
  • License



Run time and cost

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 84 seconds. The predict time for this model varies significantly based on the inputs.



The purpose of this model is to transcribe audio files that do not exceed a few hours in length and do not weigh more than a couple of hundred megabytes. If you need to transcribe larger audio files, please use victor-upmeet/whisperx-a40-large, which is the same model running on A40 (Large) hardware. It costs more, but the additional RAM allows very large files to be handled.
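As a rough guide, you could gate which model variant to call by checking the file size up front. A minimal sketch, where the 200 MB cutoff is an illustrative value based on the "couple of hundred megabytes" guidance above, not a documented limit:

```python
import os

# Illustrative threshold in bytes; the real limit depends on the
# hardware's available RAM, not a fixed file size.
SIZE_THRESHOLD = 200 * 1024 * 1024

def choose_model(audio_path: str) -> str:
    """Pick the standard model for smaller files, the A40 (Large) variant otherwise."""
    size = os.path.getsize(audio_path)
    if size <= SIZE_THRESHOLD:
        return "victor-upmeet/whisperx"
    return "victor-upmeet/whisperx-a40-large"
```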

Model Information

WhisperX provides fast automatic speech recognition (70x realtime with large-v3) with word-level timestamps and speaker diarization.

Whisper is an ASR model developed by OpenAI, trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds. OpenAI's Whisper also does not natively support batching, but WhisperX does.
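WhisperX's aligned output nests per-word timings under each segment, with each segment carrying a `words` list of `word`/`start`/`end` entries. A small helper like the following (the sample result below is illustrative, not actual model output) can flatten them into a single list:

```python
def flatten_word_timestamps(result: dict) -> list:
    """Collect per-word timings from a WhisperX-style aligned result.

    Each segment in result["segments"] is expected to carry a "words" list
    of {"word", "start", "end"} entries, as produced by alignment.
    """
    words = []
    for segment in result.get("segments", []):
        for w in segment.get("words", []):
            # Some tokens (e.g. numerals) may lack timings after alignment.
            if "start" in w and "end" in w:
                words.append({"word": w["word"], "start": w["start"], "end": w["end"]})
    return words

# Minimal illustrative result shaped like WhisperX aligned output
sample = {
    "segments": [
        {"text": "hello world", "words": [
            {"word": "hello", "start": 0.0, "end": 0.4},
            {"word": "world", "start": 0.5, "end": 0.9},
        ]}
    ]
}
print(flatten_word_timestamps(sample))
```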

The model used for transcription is large-v3 from faster-whisper.

For more information about WhisperX, including implementation details, see the WhisperX github repo.


Citation

@article{bain2022whisperx,
  title={WhisperX: Time-Accurate Speech Transcription of Long-Form Audio},
  author={Bain, Max and Huh, Jaesung and Han, Tengda and Zisserman, Andrew},
  journal={INTERSPEECH 2023},
  year={2023}
}