stayallive / whisper-subtitles

Generate subtitles (.srt and .vtt) from audio files using OpenAI's Whisper models.

  • Public
  • 5.2K runs
  • T4
  • GitHub
  • License

Input

  • file (required): Audio file to generate subtitles for.
  • string: Name of the Whisper model to use. Default: "small"
  • string: Language of the audio. Default: "en"
  • boolean: Enable voice activity detection (VAD) to filter out parts of the audio without speech. Default: true
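The inputs above can be assembled into a request payload for the model. A minimal sketch follows; note that the page lists only the input types and descriptions, so every field name except `file` (here `model`, `language`, and `vad`) is an assumption, not the model's confirmed schema:

```python
# Hypothetical sketch of this model's input payload. Only "file" is a
# confirmed input name; "model", "language", and "vad" are assumed names
# chosen to match the documented descriptions and defaults.
def build_inputs(audio_url, model="small", language="en", vad=True):
    """Assemble an input dict matching the documented defaults."""
    return {
        "file": audio_url,    # required: audio file to generate subtitles for
        "model": model,       # assumed field name; default "small"
        "language": language, # assumed field name; default "en"
        "vad": vad,           # assumed field name; default true
    }

# With the official replicate Python client, a run could then look like
# (pinning a version hash as Replicate model references usually do):
#   import replicate
#   output = replicate.run(
#       "stayallive/whisper-subtitles:<version>",
#       input=build_inputs("https://example.com/audio.mp3"),
#   )
```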

Output

We, the people of the United States, in order to form a more perfect union, establish justice, ensure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity to ordain and establish this Constitution for the United States of America.

This output was created using a different version of the model, stayallive/whisper-subtitles:4fcbb183.

Run time and cost

This model costs approximately $0.052 to run on Replicate, or 19 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 4 minutes. The predict time for this model varies significantly based on the inputs.
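The two pricing figures quoted above are consistent with each other, as a quick check of the arithmetic shows:

```python
# Sanity check of the quoted pricing: $0.052 per run.
cost_per_run = 0.052
runs_per_dollar = int(1 / cost_per_run)  # floor to whole runs: 19.23... -> 19
```

This matches the "19 runs per $1" figure on the page.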

Readme

Generate subtitles (.srt and .vtt) from audio files using OpenAI’s Whisper models.

It uses faster-whisper, a reimplementation of OpenAI’s Whisper model built on CTranslate2, a fast inference engine for Transformer models.

This is a fork of m1guelpf/whisper-subtitles with added support for VAD, language selection, the language-specific models, and downloading the .vtt/.srt files directly from the result.
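The two output formats differ mainly in their timestamp notation: .srt uses a comma before the milliseconds (`HH:MM:SS,mmm`) while .vtt uses a dot (and a leading `WEBVTT` header). A minimal sketch of that cue formatting, not taken from this fork's actual code, assuming transcription segments as simple `(start, end, text)` tuples:

```python
def fmt_timestamp(seconds, srt=True):
    """Format seconds as an SRT (HH:MM:SS,mmm) or VTT (HH:MM:SS.mmm) timestamp."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    sep = "," if srt else "."  # the only difference between the two formats here
    return f"{h:02d}:{m:02d}:{s:02d}{sep}{ms:03d}"

def to_srt(segments):
    """Render (start, end, text) segments as numbered .srt cues."""
    cues = []
    for i, (start, end, text) in enumerate(segments, 1):
        cues.append(f"{i}\n{fmt_timestamp(start)} --> {fmt_timestamp(end)}\n{text}\n")
    return "\n".join(cues)
```

For example, `to_srt([(0.0, 1.5, "Hello")])` yields a single cue starting with `1` and the range `00:00:00,000 --> 00:00:01,500`.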