openai / whisper

Convert speech in audio to text

  • Public
  • 74.4M runs
  • T4
  • GitHub
  • Weights
  • Paper
  • License

Input

[Audio player: example input audio]
file (required)

Audio file

string

Choose the format for the transcription

Default: "plain text"

boolean

Translate the text to English when set to True

Default: false

string

Language spoken in the audio; specify 'auto' for automatic language detection

Default: "auto"

number

Temperature to use for sampling

Default: 0

number

Optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424; the default (1.0) is equivalent to conventional beam search

string

Comma-separated list of token IDs to suppress during sampling; '-1' will suppress most special characters except common punctuation

Default: "-1"

string

Optional text to provide as a prompt for the first window.

boolean

If True, provide the previous output of the model as a prompt for the next window; disabling this may make the text inconsistent across windows, but makes the model less prone to getting stuck in a failure loop

Default: true

number

Temperature increase to apply when falling back because the decoding fails to meet either of the thresholds below

Default: 0.2

number

If the gzip compression ratio is higher than this value, treat the decoding as failed

Default: 2.4

number

If the average log probability is lower than this value, treat the decoding as failed

Default: -1

number

If the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence

Default: 0.6
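
These last four parameters interact in a retry loop inside Whisper's transcription: each 30-second window is decoded at the base temperature, and the temperature is raised by the fallback increment whenever the result looks degenerate. The sketch below is illustrative only; decode_at_temperature is a hypothetical stand-in for one decoding pass and is not part of the Replicate API or the whisper package.

# Illustrative sketch of the temperature-fallback logic described by the
# parameters above. `decode_at_temperature` is a hypothetical helper that
# performs one decoding pass and returns a dict shaped like an output segment.

def transcribe_window(decode_at_temperature,
                      temperature=0.0,
                      temperature_increment_on_fallback=0.2,
                      compression_ratio_threshold=2.4,
                      logprob_threshold=-1.0,
                      no_speech_threshold=0.6):
    t = temperature
    while True:
        result = decode_at_temperature(t)

        too_repetitive = result["compression_ratio"] > compression_ratio_threshold
        too_improbable = result["avg_logprob"] < logprob_threshold

        if not (too_repetitive or too_improbable) or t >= 1.0:
            break  # accept the decoding, or give up raising the temperature

        t += temperature_increment_on_fallback  # fall back and retry

    # Treat the window as silence only when the no-speech probability is high
    # AND the decoding also failed the log-probability check.
    if result["no_speech_prob"] > no_speech_threshold and too_improbable:
        return None

    return result

You can see the fallback at work in the example output below: the final segment was re-decoded at temperature 0.2 after the first attempt failed a threshold.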

Output

segments

[ { "id": 0, "end": 18.6, "seek": 0, "text": " the little tales they tell are false the door was barred locked and bolted as well ripe pears are fit for a queen's table a big wet stain was on the round carpet", "start": 0, "tokens": [ 50365, 264, 707, 27254, 436, 980, 366, 7908, 264, 2853, 390, 2159, 986, 9376, 293, 13436, 292, 382, 731, 31421, 520, 685, 366, 3318, 337, 257, 12206, 311, 3199, 257, 955, 6630, 16441, 390, 322, 264, 3098, 18119, 51295 ], "avg_logprob": -0.060722851171726135, "temperature": 0, "no_speech_prob": 0.05907342955470085, "compression_ratio": 1.412280701754386 }, { "id": 1, "end": 31.840000000000003, "seek": 1860, "text": " the kite dipped and swayed but stayed aloft the pleasant hours fly by much too soon the room was crowded with a mild wab", "start": 18.6, "tokens": [ 50365, 264, 38867, 45162, 293, 27555, 292, 457, 9181, 419, 6750, 264, 16232, 2496, 3603, 538, 709, 886, 2321, 264, 1808, 390, 21634, 365, 257, 15154, 261, 455, 51027 ], "avg_logprob": -0.1184891973223005, "temperature": 0, "no_speech_prob": 0.000253104604780674, "compression_ratio": 1.696969696969697 }, { "id": 2, "end": 45.2, "seek": 1860, "text": " the room was crowded with a wild mob this strong arm shall shield your honour she blushed when he gave her a white orchid", "start": 31.840000000000003, "tokens": [ 51027, 264, 1808, 390, 21634, 365, 257, 4868, 4298, 341, 2068, 3726, 4393, 10257, 428, 20631, 750, 25218, 292, 562, 415, 2729, 720, 257, 2418, 34850, 327, 51695 ], "avg_logprob": -0.1184891973223005, "temperature": 0, "no_speech_prob": 0.000253104604780674, "compression_ratio": 1.696969696969697 }, { "id": 3, "end": 48.6, "seek": 1860, "text": " the beetle droned in the hot june sun", "start": 45.2, "tokens": [ 51695, 264, 49735, 1224, 19009, 294, 264, 2368, 361, 2613, 3295, 51865 ], "avg_logprob": -0.1184891973223005, "temperature": 0, "no_speech_prob": 0.000253104604780674, "compression_ratio": 1.696969696969697 }, { "id": 4, "end": 52.38, "seek": 4860, "text": " the beetle droned in the hot june sun", "start": 48.6, "tokens": [ 50365, 264, 49735, 1224, 19009, 294, 264, 2368, 361, 2613, 3295, 50554 ], "avg_logprob": -0.30115177081181455, "temperature": 0.2, "no_speech_prob": 0.292143315076828, "compression_ratio": 0.8409090909090909 } ]

transcription

the little tales they tell are false the door was barred locked and bolted as well ripe pears are fit for a queen's table a big wet stain was on the round carpet the kite dipped and swayed but stayed aloft the pleasant hours fly by much too soon the room was crowded with a mild wab the room was crowded with a wild mob this strong arm shall shield your honour she blushed when he gave her a white orchid the beetle droned in the hot june sun the beetle droned in the hot june sun

detected_language

english

This example was created by a different version, openai/whisper:4d507972.
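
To reproduce a run like the one above, here is a minimal sketch with the Replicate Python client. The input field names ("audio", "language", "translate") are assumptions inferred from the parameter descriptions in the Input section; check the model's API schema for the exact names before relying on them.

import replicate

# Run the model; pinning a specific version hash is optional.
output = replicate.run(
    "openai/whisper",
    input={
        "audio": open("speech.wav", "rb"),  # assumed name of the required file input
        "language": "auto",                 # automatic language detection
        "translate": False,                 # keep the original language
        "temperature": 0,
    },
)

print(output["detected_language"])
print(output["transcription"])

# The segments carry start/end timestamps, so a rough subtitle-style listing is easy:
for seg in output["segments"]:
    print(f'{seg["start"]:7.2f} --> {seg["end"]:7.2f} {seg["text"].strip()}')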

Run time and cost

This model costs approximately $0.00068 to run on Replicate, or 1470 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 4 seconds.
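
To sanity-check these figures for your own inputs, the hedged sketch below times one end-to-end call with the Replicate Python client. Wall-clock time includes upload, queueing, and any cold boot, so it is an upper bound on the billed GPU time; the "audio" field name is again an assumption.

import time
import replicate

start = time.monotonic()
output = replicate.run("openai/whisper", input={"audio": open("speech.wav", "rb")})
elapsed = time.monotonic() - start

print(f"Round trip took {elapsed:.1f}s")
# Using the published estimate of ~$0.00068 per run:
print(f"1,000 runs at this rate would cost roughly ${1000 * 0.00068:.2f}")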

Readme

Whisper Large-v3

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition, translation, and language identification.

This version runs only the most recent Whisper model, large-v3. It’s optimized for high performance and simplicity.
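
Because the weights are open, the same model can also be loaded locally with the open-source whisper Python package (pip install -U openai-whisper). The sketch below is a minimal local equivalent of this version, assuming a GPU with roughly 10 GB of VRAM for large-v3; the threshold arguments mirror the defaults listed in the Input section.

import whisper

# Load the large-v3 checkpoint (downloads the weights on first use).
model = whisper.load_model("large-v3")

result = model.transcribe(
    "speech.wav",
    temperature=0,
    condition_on_previous_text=True,
    compression_ratio_threshold=2.4,
    logprob_threshold=-1.0,
    no_speech_threshold=0.6,
)

print(result["language"])
print(result["text"])
for seg in result["segments"]:
    print(seg["start"], seg["end"], seg["text"])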

Model Versions

Model Size   Version
large-v3     link
large-v2     link
all others   link

While this implementation only uses the large-v3 model, we maintain links to previous versions for reference.

For users who need different model sizes, check out our multi-model version.

Model Description

Approach

Whisper uses a Transformer sequence-to-sequence model trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline.
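
As a concrete illustration of that multitask setup, the hedged sketch below uses the open-source whisper package to run two of those tasks, language identification and speech translation, on a single 30-second window by changing only the task passed to the decoder.

import whisper

model = whisper.load_model("large-v3")

# Load audio and pad/trim it to a 30-second window.
audio = whisper.load_audio("speech.wav")
audio = whisper.pad_or_trim(audio)

# Compute the log-Mel spectrogram on the model's device.
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

# Task 1: spoken language identification.
_, probs = model.detect_language(mel)
print("Detected language:", max(probs, key=probs.get))

# Task 2: translate the same window into English by switching the task token.
options = whisper.DecodingOptions(task="translate")
result = whisper.decode(model, mel, options)
print(result.text)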

[Blog] [Paper] [Model card]

License

The code and model weights of Whisper are released under the MIT License. See LICENSE for further details.

Citation

@misc{https://doi.org/10.48550/arxiv.2212.04356,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}