m1guelpf / whisper-subtitles

Generate subtitles from an audio file, using OpenAI's Whisper model.

  • Public
  • 72.4K runs
  • T4
  • GitHub
  • Paper
  • License

Input

file (required)

Audio file to transcribe

string

Name of the Whisper model to use.

Default: "base"

string

Whether to generate subtitles in SRT or VTT format.

Default: "vtt"

Output

text

This is the micro-machining, presenting the most miniature modicator of Michael machine. Each one has dramatic details for a fixed-roomed precision-page art plus incredible micro-machining pocket placet, physical police station, fire station, restaurant, service station, and more. Perfect pocket portables to take any place. And there are many miniature placets to play with. And each one comes with its own special edition, micro-machining vehicle and fun fantastic features that miraculously move. Raise the boat lift at the airport, marine a man in the gun turret at the army that's cleaning your car at the car wash, raise the toll bridge. And these placets fit together to form a micro-machining world. Micro-machining pocket placet, search for men in suit, hide in a suit perfectly precise, so dazzlingly detailed, you'll want to pocket them all.

language

english

subtitles

WEBVTT

00:00.000 --> 00:02.000
This is the micro-machining, presenting the most miniature

00:02.000 --> 00:03.300
modicator of Michael machine.

00:03.300 --> 00:04.600
Each one has dramatic details for a fixed-roomed

00:04.600 --> 00:06.120
precision-page art plus incredible micro-machining

00:06.120 --> 00:07.680
pocket placet, physical police station, fire station,

00:07.680 --> 00:08.840
restaurant, service station, and more.

00:08.840 --> 00:10.320
Perfect pocket portables to take any place.

00:10.320 --> 00:11.560
And there are many miniature placets to play with.

00:11.560 --> 00:12.800
And each one comes with its own special edition,

00:12.800 --> 00:14.320
micro-machining vehicle and fun fantastic features

00:14.320 --> 00:15.360
that miraculously move.

00:15.360 --> 00:16.280
Raise the boat lift at the airport,

00:16.280 --> 00:17.460
marine a man in the gun turret at the army

00:17.460 --> 00:18.480
that's cleaning your car at the car wash,

00:18.480 --> 00:19.320
raise the toll bridge.

00:19.320 --> 00:21.280
And these placets fit together to form a micro-machining world.

00:21.280 --> 00:22.200
Micro-machining pocket placet,

00:22.200 --> 00:22.680
search for men in suit,

00:22.680 --> 00:23.440
hide in a suit perfectly precise,

00:23.440 --> 00:40.040
so dazzlingly detailed, you'll want to pocket them all.
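
Each cue in the file pairs a start/end timestamp with a line of text. As a sketch of how this format can be reproduced locally, the snippet below builds WEBVTT by hand from the segment timestamps that the open-source whisper package returns ("audio.mp3" is a placeholder path; hours are omitted from timestamps for brevity):

```python
# Sketch: turning a transcription result into WEBVTT by hand, using the
# segment timestamps that model.transcribe() returns.
import whisper

def fmt(t: float) -> str:
    # VTT timestamps in MM:SS.mmm form (hours omitted for brevity)
    m, s = divmod(t, 60)
    return f"{int(m):02d}:{s:06.3f}"

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")  # placeholder path

lines = ["WEBVTT", ""]
for seg in result["segments"]:
    lines.append(f"{fmt(seg['start'])} --> {fmt(seg['end'])}")
    lines.append(seg["text"].strip())
    lines.append("")

with open("audio.vtt", "w") as f:
    f.write("\n".join(lines))
```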

Run time and cost

This model costs approximately $0.0024 to run on Replicate, or 416 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 11 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Whisper

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.

Approach

A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
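
The open-source whisper package exposes these tasks directly; below is a brief sketch using its lower-level decoding API (the calls are from openai/whisper; "audio.mp3" is a placeholder path):

```python
# Sketch: language identification and transcription as separate tasks,
# using the lower-level API from the openai/whisper package.
import whisper

model = whisper.load_model("base")

# prepare 30 seconds of log-Mel spectrogram input
audio = whisper.load_audio("audio.mp3")  # placeholder path
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# spoken language identification
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# transcription, with the task and language passed as decoding options
options = whisper.DecodingOptions(task="transcribe", language="en")
result = whisper.decode(model, mel, options)
print(result.text)
```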

Setup

We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.7 or later and recent PyTorch versions. The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files. It also requires the command-line tool ffmpeg.
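
A minimal smoke test for such a setup might look like the sketch below, assuming the package was installed with `pip install git+https://github.com/openai/whisper.git` and the ffmpeg command-line tool is on the PATH ("audio.mp3" is a placeholder):

```python
# Smoke test for a local Whisper installation. Assumes the whisper
# package is installed and the ffmpeg CLI is available on PATH.
import whisper

model = whisper.load_model("base")      # downloads weights on first use
result = model.transcribe("audio.mp3")  # placeholder audio path
print(result["text"])
```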

Available models and languages

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.

Size    Parameters  English-only model  Multilingual model  Required VRAM  Relative speed
tiny    39 M        tiny.en             tiny                ~1 GB          ~32x
base    74 M        base.en             base                ~1 GB          ~16x
small   244 M       small.en            small               ~2 GB          ~6x
medium  769 M       medium.en           medium              ~5 GB          ~2x
large   1550 M      N/A                 large               ~10 GB         1x

For English-only applications, the .en models tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models.

Whisper’s performance varies widely depending on the language. The figure below shows a WER breakdown by language on the Fleurs dataset, using the large model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D of the paper.

WER breakdown by language

License

The code and the model weights of Whisper are released under the MIT License. See LICENSE for further details.