
collectiveai-team/crisperwhisper:891ce328

Input schema

The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.

Field        Type      Default value   Description
audio        string                    Audio file.
task         string    transcribe      Task to perform: transcribe, or translate to another language.
language     string    None            Language spoken in the audio; specify 'None' to perform language detection.
batch_size   integer   24              Number of parallel batches to compute. Reduce this if you run into out-of-memory (OOM) errors.
timestamp    string    chunk           Timestamp granularity; Whisper supports both chunk-level and word-level timestamps.
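As a rough illustration, the inputs above map onto a call like the following. This is a minimal sketch assuming the Replicate Python client (the replicate package, with REPLICATE_API_TOKEN set in the environment); the file path "audio.wav" and the "chunk"/"word" timestamp values are placeholders based on the field descriptions, not confirmed by this page.

import replicate

# Run this model version with the inputs described in the schema above.
output = replicate.run(
    "collectiveai-team/crisperwhisper:891ce328",
    input={
        "audio": open("audio.wav", "rb"),  # audio file to process (placeholder path)
        "task": "transcribe",              # or "translate"
        "language": None,                  # None triggers automatic language detection
        "batch_size": 24,                  # lower this if you hit OOM errors
        "timestamp": "chunk",              # assumed values: "chunk" or "word"
    },
)
print(output)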

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"title": "Output"}