platform-kit/mars5-tts:6aed0f11
Input
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run platform-kit/mars5-tts using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"platform-kit/mars5-tts:6aed0f11f3ba7b13d59ab3228355e7b1ea943479673cc57e10e99ba766536811",
{
input: {
text: "To see how much you've spent, go to your dashboard.",
top_k: 100,
temperature: 1.5,
freq_penalty: 3,
ref_audio_file: "https://replicate.delivery/pbxt/L9a6SelzU0B2DIWeNpkNR0CKForWSbkswoUP69L0NLjLswVV/voice_sample.wav",
rep_penalty_window: 100,
ref_audio_transcript: "Hi there. I'm your new voice clone. Try your best to upload quality audio."
}
}
);
// To access the file URL:
console.log(output.url()); //=> "http://example.com"
// To write the generated audio to disk (assumes: import { writeFile } from "node:fs/promises"):
await writeFile("output.mp3", output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run platform-kit/mars5-tts using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"platform-kit/mars5-tts:6aed0f11f3ba7b13d59ab3228355e7b1ea943479673cc57e10e99ba766536811",
input={
"text": "To see how much you've spent, go to your dashboard.",
"top_k": 100,
"temperature": 1.5,
"freq_penalty": 3,
"ref_audio_file": "https://replicate.delivery/pbxt/L9a6SelzU0B2DIWeNpkNR0CKForWSbkswoUP69L0NLjLswVV/voice_sample.wav",
"rep_penalty_window": 100,
"ref_audio_transcript": "Hi there. I'm your new voice clone. Try your best to upload quality audio."
}
)
print(output)
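The return value is the generated audio. Depending on the version of the replicate client installed, output may be a FileOutput object or a plain URL string; a minimal sketch for saving it to disk, assuming one of those two shapes:
import urllib.request
# Hedged sketch: newer replicate clients return a FileOutput with .read(),
# older ones return a plain URL string pointing at the generated audio.
if hasattr(output, "read"):
    with open("output.mp3", "wb") as f:
        f.write(output.read())
else:
    urllib.request.urlretrieve(output, "output.mp3")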
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run platform-kit/mars5-tts using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "6aed0f11f3ba7b13d59ab3228355e7b1ea943479673cc57e10e99ba766536811",
"input": {
"text": "To see how much you\'ve spent, go to your dashboard.",
"top_k": 100,
"temperature": 1.5,
"freq_penalty": 3,
"ref_audio_file": "https://replicate.delivery/pbxt/L9a6SelzU0B2DIWeNpkNR0CKForWSbkswoUP69L0NLjLswVV/voice_sample.wav",
"rep_penalty_window": 100,
"ref_audio_transcript": "Hi there. I\'m your new voice clone. Try your best to upload quality audio."
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
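If you prefer to call the HTTP API from Python without the client library, the same request can be made with the requests package. This is a sketch mirroring the cURL call above; the requests dependency is an assumption, not something the API requires:
import os
import requests

resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # hold the connection open until the prediction finishes or times out
    },
    json={
        "version": "6aed0f11f3ba7b13d59ab3228355e7b1ea943479673cc57e10e99ba766536811",
        "input": {
            "text": "To see how much you've spent, go to your dashboard.",
            "ref_audio_file": "https://replicate.delivery/pbxt/L9a6SelzU0B2DIWeNpkNR0CKForWSbkswoUP69L0NLjLswVV/voice_sample.wav",
            "ref_audio_transcript": "Hi there. I'm your new voice clone. Try your best to upload quality audio."
        },
    },
)
prediction = resp.json()
print(prediction["status"], prediction.get("output"))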
Output
{
"completed_at": "2024-06-25T07:40:12.683208Z",
"created_at": "2024-06-25T07:39:31.501000Z",
"data_removed": false,
"error": null,
"id": "my4wz1z95nrgg0cg9sas3b3erm",
"input": {
"text": "To see how much you've spent, go to your dashboard.",
"top_k": 100,
"temperature": 1.5,
"freq_penalty": 3,
"ref_audio_file": "https://replicate.delivery/pbxt/L9a6SelzU0B2DIWeNpkNR0CKForWSbkswoUP69L0NLjLswVV/voice_sample.wav",
"rep_penalty_window": 100,
"ref_audio_transcript": "Hi there. I'm your new voice clone. Try your best to upload quality audio."
},
"logs": ">>> Running inference\nNote: using deep clone. Assuming input `c_phones` is concatenated prompt and output phones. Also assuming no padded indices in `c_codes`.\nNew x: torch.Size([1, 1073, 8]) | new x_known: torch.Size([1, 1073, 8]) . Base prompt: torch.Size([1, 428, 8]). New padding mask: torch.Size([1, 1073]) | m shape: torch.Size([1, 1073, 8])\n>>>>> Done with inference",
"metrics": {
"predict_time": 41.170100214,
"total_time": 41.182208
},
"output": "https://replicate.delivery/pbxt/YKde5zVCgyVwZKRATQyfxJwykWEPuU5iyb4uErsO18McZCCTA/output.mp3",
"started_at": "2024-06-25T07:39:31.513108Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/my4wz1z95nrgg0cg9sas3b3erm",
"cancel": "https://api.replicate.com/v1/predictions/my4wz1z95nrgg0cg9sas3b3erm/cancel"
},
"version": "6aed0f11f3ba7b13d59ab3228355e7b1ea943479673cc57e10e99ba766536811"
}
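The urls.get endpoint in this response can be polled until status reaches succeeded or failed. A minimal sketch using the Python client, assuming the prediction id shown above:
import time
import replicate

# Re-fetch the prediction by id and poll until it reaches a terminal state.
prediction = replicate.predictions.get("my4wz1z95nrgg0cg9sas3b3erm")
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.status)  #=> "succeeded"
print(prediction.output)  # URL of the generated output.mp3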