pbarker/whisperx

Input

Use the commands below to download the model and run it in your local environment:

docker run -d -p 5000:5000 --gpus=all r8.im/pbarker/whisperx@sha256:d8687841d63ba54ad46e2cad0a0e60fed6306bd35cacf4c00aaccddb59a7dbae
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "debug": false,
      "only_text": false,
      "batch_size": 32,
      "align_output": false
    }
  }' \
  http://localhost:5000/predictions
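
If you would rather call the endpoint from code than from curl, the Python sketch below sends the same JSON payload with the requests library. The URL and input values are copied from the curl example above; the response handling assumes Cog's usual prediction response shape, with "status" and "output" fields, which is worth verifying against your container.

import requests

# Same default payload as the curl example above.
payload = {
    "input": {
        "debug": False,
        "only_text": False,
        "batch_size": 32,
        "align_output": False,
    }
}

# Send a synchronous prediction request to the running container.
resp = requests.post("http://localhost:5000/predictions", json=payload, timeout=600)
resp.raise_for_status()
prediction = resp.json()

# Cog prediction responses typically carry "status" and "output" fields
# (an assumption; check your container's response if these come back empty).
print(prediction.get("status"))
print(prediction.get("output"))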

To learn more, take a look at the Cog documentation.
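
Note that docker run -d returns immediately, so a request sent right away can fail or hang while the image is still being pulled and the model is loading. Below is a small sketch for waiting until the container's HTTP server answers; it assumes the server exposes its FastAPI schema at /openapi.json, which is typical for Cog containers but worth confirming for this image.

import time
import requests

def wait_until_ready(base_url="http://localhost:5000", timeout_s=600):
    # Poll until the Cog server responds; /openapi.json is assumed to be
    # served by the container's FastAPI app.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if requests.get(f"{base_url}/openapi.json", timeout=5).ok:
                return True
        except requests.exceptions.RequestException:
            pass
        time.sleep(5)
    return False

if __name__ == "__main__":
    print("ready" if wait_until_ready() else "timed out waiting for the container")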

Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.