Readme
This model doesn't have a readme.
(Updated 1 year, 1 month ago)
Run this model in Node.js. First, install Replicate's Node.js client library:
npm install replicate
Next, set your REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run datong-new/vc using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "datong-new/vc:c0edf71c5242cb90457701a2f44292bfc12c333aab31878ab57ee164bfc07259",
  {
    input: {
      text: "因此,没有一个固定的答案来描述所有情况下编码后字符串的具体长度,它会根据你需要编码的原始数据的大小而有所不同。对于具体情况下的精确长度,需要根据实际的原始数据和编码规则来确定。",
      refer_video: "https://replicate.delivery/pbxt/Kp95btr4Xmaj6eWm7zpQVyO5kAeEtfPvt5jL22TkYxq0H2Dr/myvoice.wav",
      text_language: "中文",
      video_language: "中文"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run this model in Python. First, install Replicate's Python client library:
pip install replicate
Next, set your REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run datong-new/vc using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "datong-new/vc:c0edf71c5242cb90457701a2f44292bfc12c333aab31878ab57ee164bfc07259",
    input={
        "text": "因此,没有一个固定的答案来描述所有情况下编码后字符串的具体长度,它会根据你需要编码的原始数据的大小而有所不同。对于具体情况下的精确长度,需要根据实际的原始数据和编码规则来确定。",
        "refer_video": "https://replicate.delivery/pbxt/Kp95btr4Xmaj6eWm7zpQVyO5kAeEtfPvt5jL22TkYxq0H2Dr/myvoice.wav",
        "text_language": "中文",
        "video_language": "中文"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
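The example prediction on this page shows the model's output as a dict with an `audio` key holding a URL to the generated WAV file. A minimal sketch of downloading that file locally, assuming that output shape (the helper names `audio_url` and `save_audio` are hypothetical, not part of the Replicate client):

```python
import urllib.request


def audio_url(output: dict) -> str:
    # The example prediction on this page returns {"audio": "<url>"};
    # this assumes that shape holds for your run as well.
    return output["audio"]


def save_audio(output: dict, path: str = "output.wav") -> str:
    # Download the generated WAV to a local file (requires network access).
    urllib.request.urlretrieve(audio_url(output), path)
    return path
```

You would call `save_audio(output)` right after `replicate.run(...)` returns.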
You can also run this model with Replicate's HTTP API. Set your REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run datong-new/vc using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
  "version": "datong-new/vc:c0edf71c5242cb90457701a2f44292bfc12c333aab31878ab57ee164bfc07259",
  "input": {
    "text": "因此,没有一个固定的答案来描述所有情况下编码后字符串的具体长度,它会根据你需要编码的原始数据的大小而有所不同。对于具体情况下的精确长度,需要根据实际的原始数据和编码规则来确定。",
    "refer_video": "https://replicate.delivery/pbxt/Kp95btr4Xmaj6eWm7zpQVyO5kAeEtfPvt5jL22TkYxq0H2Dr/myvoice.wav",
    "text_language": "中文",
    "video_language": "中文"
  }
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
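The curl command above creates a prediction synchronously (the `Prefer: wait` header holds the connection open until the model finishes). A sketch of building the same request with Python's standard library, mirroring the endpoint, headers, and body of the curl command (the `build_request` helper is hypothetical, and the request is not sent here):

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"


def build_request(text: str, refer_video: str) -> urllib.request.Request:
    # Same body as the curl command on this page.
    body = {
        "version": "datong-new/vc:c0edf71c5242cb90457701a2f44292bfc12c333aab31878ab57ee164bfc07259",
        "input": {
            "text": text,
            "refer_video": refer_video,
            "text_language": "中文",
            "video_language": "中文",
        },
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            # Raises KeyError if REPLICATE_API_TOKEN is not set.
            "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
            "Content-Type": "application/json",
            "Prefer": "wait",
        },
        method="POST",
    )
```

Passing the resulting request to `urllib.request.urlopen` would send it and return the prediction JSON shown below.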
{
"completed_at": "2024-04-28T04:26:59.426298Z",
"created_at": "2024-04-28T04:24:59.678000Z",
"data_removed": false,
"error": null,
"id": "b1b0gaqr3srgg0cf4bs8e7nph4",
"input": {
"text": "因此,没有一个固定的答案来描述所有情况下编码后字符串的具体长度,它会根据你需要编码的原始数据的大小而有所不同。对于具体情况下的精确长度,需要根据实际的原始数据和编码规则来确定。",
"refer_video": "https://replicate.delivery/pbxt/Kp95btr4Xmaj6eWm7zpQVyO5kAeEtfPvt5jL22TkYxq0H2Dr/myvoice.wav",
"text_language": "中文",
"video_language": "中文"
},
"logs": "prompt_text 对于我的电话,他回的越来越少,看向硝烟的眼神却越来越温柔,他们吵了四年的CP,从未承认,却一直暧昧。\nINFO: 127.0.0.1:37136 - \"GET /?refer_wav_path=%2Ftmp%2Ftmpun1yp9m7myvoice.wav&prompt_text=%E5%AF%B9%E4%BA%8E%E6%88%91%E7%9A%84%E7%94%B5%E8%AF%9D%2C%E4%BB%96%E5%9B%9E%E7%9A%84%E8%B6%8A%E6%9D%A5%E8%B6%8A%E5%B0%91%2C%E7%9C%8B%E5%90%91%E7%A1%9D%E7%83%9F%E7%9A%84%E7%9C%BC%E7%A5%9E%E5%8D%B4%E8%B6%8A%E6%9D%A5%E8%B6%8A%E6%B8%A9%E6%9F%94%2C%E4%BB%96%E4%BB%AC%E5%90%B5%E4%BA%86%E5%9B%9B%E5%B9%B4%E7%9A%84CP%2C%E4%BB%8E%E6%9C%AA%E6%89%BF%E8%AE%A4%2C%E5%8D%B4%E4%B8%80%E7%9B%B4%E6%9A%A7%E6%98%A7%E3%80%82&prompt_language=all_zh&text=%E5%9B%A0%E6%AD%A4%EF%BC%8C%E6%B2%A1%E6%9C%89%E4%B8%80%E4%B8%AA%E5%9B%BA%E5%AE%9A%E7%9A%84%E7%AD%94%E6%A1%88%E6%9D%A5%E6%8F%8F%E8%BF%B0%E6%89%80%E6%9C%89%E6%83%85%E5%86%B5%E4%B8%8B%E7%BC%96%E7%A0%81%E5%90%8E%E5%AD%97%E7%AC%A6%E4%B8%B2%E7%9A%84%E5%85%B7%E4%BD%93%E9%95%BF%E5%BA%A6%EF%BC%8C%E5%AE%83%E4%BC%9A%E6%A0%B9%E6%8D%AE%E4%BD%A0%E9%9C%80%E8%A6%81%E7%BC%96%E7%A0%81%E7%9A%84%E5%8E%9F%E5%A7%8B%E6%95%B0%E6%8D%AE%E7%9A%84%E5%A4%A7%E5%B0%8F%E8%80%8C%E6%9C%89%E6%89%80%E4%B8%8D%E5%90%8C%E3%80%82%E5%AF%B9%E4%BA%8E%E5%85%B7%E4%BD%93%E6%83%85%E5%86%B5%E4%B8%8B%E7%9A%84%E7%B2%BE%E7%A1%AE%E9%95%BF%E5%BA%A6%EF%BC%8C%E9%9C%80%E8%A6%81%E6%A0%B9%E6%8D%AE%E5%AE%9E%E9%99%85%E7%9A%84%E5%8E%9F%E5%A7%8B%E6%95%B0%E6%8D%AE%E5%92%8C%E7%BC%96%E7%A0%81%E8%A7%84%E5%88%99%E6%9D%A5%E7%A1%AE%E5%AE%9A%E3%80%82&text_language=all_zh HTTP/1.1\" 200 OK\nBuilding prefix dict from the default dictionary ...\nDEBUG:jieba_fast:Building prefix dict from the default dictionary ...\nDumping model to file cache /tmp/jieba.cache\nDEBUG:jieba_fast:Dumping model to file cache /tmp/jieba.cache\nLoading model cost 0.631 seconds.\nPrefix dict has been built succesfully.\nDEBUG:jieba_fast:Loading model cost 0.631 seconds.\nDEBUG:jieba_fast:Prefix dict has been built succesfully.\n 0%| | 0/1500 [00:00<?, ?it/s]\n 0%| | 4/1500 [00:00<00:38, 38.56it/s]\n 1%| | 14/1500 [00:00<00:20, 72.18it/s]\n 2%|▏ | 24/1500 [00:00<00:18, 
81.69it/s]\n 2%|▏ | 34/1500 [00:00<00:17, 85.85it/s]\n 3%|▎ | 44/1500 [00:00<00:16, 88.20it/s]\n 4%|▎ | 54/1500 [00:00<00:16, 89.57it/s]\n 4%|▍ | 63/1500 [00:00<00:16, 89.29it/s]\n 5%|▍ | 73/1500 [00:00<00:15, 90.09it/s]\n 6%|▌ | 83/1500 [00:00<00:15, 90.67it/s]\n 6%|▌ | 93/1500 [00:01<00:15, 91.84it/s]\n 7%|▋ | 103/1500 [00:01<00:14, 93.37it/s]\n 8%|▊ | 113/1500 [00:01<00:14, 94.55it/s]\n 8%|▊ | 123/1500 [00:01<00:14, 95.18it/s]\n 9%|▉ | 133/1500 [00:01<00:14, 95.46it/s]\n 10%|▉ | 143/1500 [00:01<00:14, 94.43it/s]\n 10%|█ | 153/1500 [00:01<00:14, 93.61it/s]\n 11%|█ | 163/1500 [00:01<00:14, 93.12it/s]\n 12%|█▏ | 173/1500 [00:01<00:14, 92.68it/s]\n 12%|█▏ | 183/1500 [00:02<00:14, 92.50it/s]\n 13%|█▎ | 193/1500 [00:02<00:14, 92.44it/s]\n 14%|█▎ | 203/1500 [00:02<00:14, 92.55it/s]\n 14%|█▍ | 213/1500 [00:02<00:13, 92.76it/s]\n 15%|█▍ | 223/1500 [00:02<00:13, 94.07it/s]\n 16%|█▌ | 233/1500 [00:02<00:13, 94.97it/s]\n 16%|█▌ | 243/1500 [00:02<00:13, 95.47it/s]\n 17%|█▋ | 253/1500 [00:02<00:13, 95.74it/s]\n 18%|█▊ | 263/1500 [00:02<00:13, 94.63it/s]\n 18%|█▊ | 273/1500 [00:02<00:13, 94.03it/s]\n 19%|█▉ | 283/1500 [00:03<00:13, 93.50it/s]\n 20%|█▉ | 293/1500 [00:03<00:12, 93.21it/s]\n 20%|██ | 303/1500 [00:03<00:12, 92.97it/s]\n 21%|██ | 313/1500 [00:03<00:12, 92.63it/s]\n 22%|██▏ | 323/1500 [00:03<00:12, 92.16it/s]\n 22%|██▏ | 333/1500 [00:03<00:12, 91.61it/s]\n 23%|██▎ | 343/1500 [00:03<00:12, 91.70it/s]\n 24%|██▎ | 353/1500 [00:03<00:12, 92.05it/s]\n 24%|██▍ | 363/1500 [00:03<00:12, 92.17it/s]\n 25%|██▍ | 373/1500 [00:04<00:12, 92.31it/s]\n 26%|██▌ | 383/1500 [00:04<00:12, 92.67it/s]\n 26%|██▌ | 393/1500 [00:04<00:11, 92.82it/s]\n 27%|██▋ | 403/1500 [00:04<00:11, 93.01it/s]\n 28%|██▊ | 413/1500 [00:04<00:11, 93.33it/s]\n 28%|██▊ | 423/1500 [00:04<00:11, 92.49it/s]\n 29%|██▉ | 433/1500 [00:04<00:11, 92.71it/s]\n 30%|██▉ | 443/1500 [00:04<00:11, 92.72it/s]\n 30%|███ | 453/1500 [00:04<00:11, 92.60it/s]\n 31%|███ | 463/1500 [00:05<00:11, 94.02it/s]\n 32%|███▏ | 473/1500 
[00:05<00:10, 95.00it/s]\n 32%|███▏ | 483/1500 [00:05<00:10, 95.02it/s]\n 33%|███▎ | 493/1500 [00:05<00:10, 94.37it/s]\n 34%|███▎ | 503/1500 [00:05<00:10, 93.12it/s]\n 34%|███▍ | 513/1500 [00:05<00:10, 93.21it/s]\n 35%|███▍ | 523/1500 [00:05<00:10, 93.09it/s]\n 36%|███▌ | 533/1500 [00:05<00:10, 93.11it/s]\n 36%|███▌ | 543/1500 [00:05<00:10, 93.12it/s]\n 37%|███▋ | 553/1500 [00:05<00:10, 93.18it/s]\n 38%|███▊ | 563/1500 [00:06<00:10, 93.17it/s]\n 38%|███▊ | 573/1500 [00:06<00:09, 93.29it/s]\n 39%|███▉ | 583/1500 [00:06<00:09, 94.03it/s]\n 40%|███▉ | 593/1500 [00:06<00:09, 93.87it/s]\nT2S Decoding EOS [356 -> 963]\n 40%|████ | 603/1500 [00:06<00:09, 93.75it/s]\n40%|████ | 606/1500 [00:06<00:09, 92.30it/s]\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\nTo disable this warning, you can either:\n- Avoid using `tokenizers` before the fork if possible\n- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n/root/.pyenv/versions/3.9.19/lib/python3.9/site-packages/torch/functional.py:650: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.\nNote: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:863.)\nreturn _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]",
"metrics": {
"predict_time": 13.702274,
"total_time": 119.748298
},
"output": {
"audio": "https://replicate.delivery/pbxt/FF1JjIkpSSbuIZYEqapFWplwlSGppiVEsfdVmPmG5eelQwdlA/output.wav"
},
"started_at": "2024-04-28T04:26:45.724024Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/b1b0gaqr3srgg0cf4bs8e7nph4",
"cancel": "https://api.replicate.com/v1/predictions/b1b0gaqr3srgg0cf4bs8e7nph4/cancel"
},
"version": "0e4ff17150fec894622c9df31c2d3c2ad83021fa7fb020533f4a80f7f193ff26"
}
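The prediction object above can be inspected programmatically: the `*_at` fields are ISO-8601 timestamps, and `metrics.predict_time` is the model's run time in seconds. A small sketch that pulls out the useful fields (the `summarize` helper is hypothetical, assuming the response shape shown above):

```python
from datetime import datetime


def _parse(ts: str) -> datetime:
    # Replicate timestamps end in "Z"; fromisoformat needs an explicit offset.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def summarize(prediction: dict) -> dict:
    created = _parse(prediction["created_at"])
    started = _parse(prediction["started_at"])
    return {
        "status": prediction["status"],
        # Time spent queued before a worker picked the job up.
        "queue_seconds": (started - created).total_seconds(),
        "predict_seconds": prediction["metrics"]["predict_time"],
        "audio": prediction["output"]["audio"],
    }
```

For the example above, most of the ~120 s total time is queueing and cold-start, with only ~13.7 s of actual prediction.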
prompt_text 对于我的电话,他回的越来越少,看向硝烟的眼神却越来越温柔,他们吵了四年的CP,从未承认,却一直暧昧。
INFO: 127.0.0.1:37136 - "GET /?refer_wav_path=%2Ftmp%2Ftmpun1yp9m7myvoice.wav&prompt_text=%E5%AF%B9%E4%BA%8E%E6%88%91%E7%9A%84%E7%94%B5%E8%AF%9D%2C%E4%BB%96%E5%9B%9E%E7%9A%84%E8%B6%8A%E6%9D%A5%E8%B6%8A%E5%B0%91%2C%E7%9C%8B%E5%90%91%E7%A1%9D%E7%83%9F%E7%9A%84%E7%9C%BC%E7%A5%9E%E5%8D%B4%E8%B6%8A%E6%9D%A5%E8%B6%8A%E6%B8%A9%E6%9F%94%2C%E4%BB%96%E4%BB%AC%E5%90%B5%E4%BA%86%E5%9B%9B%E5%B9%B4%E7%9A%84CP%2C%E4%BB%8E%E6%9C%AA%E6%89%BF%E8%AE%A4%2C%E5%8D%B4%E4%B8%80%E7%9B%B4%E6%9A%A7%E6%98%A7%E3%80%82&prompt_language=all_zh&text=%E5%9B%A0%E6%AD%A4%EF%BC%8C%E6%B2%A1%E6%9C%89%E4%B8%80%E4%B8%AA%E5%9B%BA%E5%AE%9A%E7%9A%84%E7%AD%94%E6%A1%88%E6%9D%A5%E6%8F%8F%E8%BF%B0%E6%89%80%E6%9C%89%E6%83%85%E5%86%B5%E4%B8%8B%E7%BC%96%E7%A0%81%E5%90%8E%E5%AD%97%E7%AC%A6%E4%B8%B2%E7%9A%84%E5%85%B7%E4%BD%93%E9%95%BF%E5%BA%A6%EF%BC%8C%E5%AE%83%E4%BC%9A%E6%A0%B9%E6%8D%AE%E4%BD%A0%E9%9C%80%E8%A6%81%E7%BC%96%E7%A0%81%E7%9A%84%E5%8E%9F%E5%A7%8B%E6%95%B0%E6%8D%AE%E7%9A%84%E5%A4%A7%E5%B0%8F%E8%80%8C%E6%9C%89%E6%89%80%E4%B8%8D%E5%90%8C%E3%80%82%E5%AF%B9%E4%BA%8E%E5%85%B7%E4%BD%93%E6%83%85%E5%86%B5%E4%B8%8B%E7%9A%84%E7%B2%BE%E7%A1%AE%E9%95%BF%E5%BA%A6%EF%BC%8C%E9%9C%80%E8%A6%81%E6%A0%B9%E6%8D%AE%E5%AE%9E%E9%99%85%E7%9A%84%E5%8E%9F%E5%A7%8B%E6%95%B0%E6%8D%AE%E5%92%8C%E7%BC%96%E7%A0%81%E8%A7%84%E5%88%99%E6%9D%A5%E7%A1%AE%E5%AE%9A%E3%80%82&text_language=all_zh HTTP/1.1" 200 OK
Building prefix dict from the default dictionary ...
DEBUG:jieba_fast:Building prefix dict from the default dictionary ...
Dumping model to file cache /tmp/jieba.cache
DEBUG:jieba_fast:Dumping model to file cache /tmp/jieba.cache
Loading model cost 0.631 seconds.
Prefix dict has been built succesfully.
DEBUG:jieba_fast:Loading model cost 0.631 seconds.
DEBUG:jieba_fast:Prefix dict has been built succesfully.
0%| | 0/1500 [00:00<?, ?it/s]
0%| | 4/1500 [00:00<00:38, 38.56it/s]
1%| | 14/1500 [00:00<00:20, 72.18it/s]
2%|▏ | 24/1500 [00:00<00:18, 81.69it/s]
2%|▏ | 34/1500 [00:00<00:17, 85.85it/s]
3%|▎ | 44/1500 [00:00<00:16, 88.20it/s]
4%|▎ | 54/1500 [00:00<00:16, 89.57it/s]
4%|▍ | 63/1500 [00:00<00:16, 89.29it/s]
5%|▍ | 73/1500 [00:00<00:15, 90.09it/s]
6%|▌ | 83/1500 [00:00<00:15, 90.67it/s]
6%|▌ | 93/1500 [00:01<00:15, 91.84it/s]
7%|▋ | 103/1500 [00:01<00:14, 93.37it/s]
8%|▊ | 113/1500 [00:01<00:14, 94.55it/s]
8%|▊ | 123/1500 [00:01<00:14, 95.18it/s]
9%|▉ | 133/1500 [00:01<00:14, 95.46it/s]
10%|▉ | 143/1500 [00:01<00:14, 94.43it/s]
10%|█ | 153/1500 [00:01<00:14, 93.61it/s]
11%|█ | 163/1500 [00:01<00:14, 93.12it/s]
12%|█▏ | 173/1500 [00:01<00:14, 92.68it/s]
12%|█▏ | 183/1500 [00:02<00:14, 92.50it/s]
13%|█▎ | 193/1500 [00:02<00:14, 92.44it/s]
14%|█▎ | 203/1500 [00:02<00:14, 92.55it/s]
14%|█▍ | 213/1500 [00:02<00:13, 92.76it/s]
15%|█▍ | 223/1500 [00:02<00:13, 94.07it/s]
16%|█▌ | 233/1500 [00:02<00:13, 94.97it/s]
16%|█▌ | 243/1500 [00:02<00:13, 95.47it/s]
17%|█▋ | 253/1500 [00:02<00:13, 95.74it/s]
18%|█▊ | 263/1500 [00:02<00:13, 94.63it/s]
18%|█▊ | 273/1500 [00:02<00:13, 94.03it/s]
19%|█▉ | 283/1500 [00:03<00:13, 93.50it/s]
20%|█▉ | 293/1500 [00:03<00:12, 93.21it/s]
20%|██ | 303/1500 [00:03<00:12, 92.97it/s]
21%|██ | 313/1500 [00:03<00:12, 92.63it/s]
22%|██▏ | 323/1500 [00:03<00:12, 92.16it/s]
22%|██▏ | 333/1500 [00:03<00:12, 91.61it/s]
23%|██▎ | 343/1500 [00:03<00:12, 91.70it/s]
24%|██▎ | 353/1500 [00:03<00:12, 92.05it/s]
24%|██▍ | 363/1500 [00:03<00:12, 92.17it/s]
25%|██▍ | 373/1500 [00:04<00:12, 92.31it/s]
26%|██▌ | 383/1500 [00:04<00:12, 92.67it/s]
26%|██▌ | 393/1500 [00:04<00:11, 92.82it/s]
27%|██▋ | 403/1500 [00:04<00:11, 93.01it/s]
28%|██▊ | 413/1500 [00:04<00:11, 93.33it/s]
28%|██▊ | 423/1500 [00:04<00:11, 92.49it/s]
29%|██▉ | 433/1500 [00:04<00:11, 92.71it/s]
30%|██▉ | 443/1500 [00:04<00:11, 92.72it/s]
30%|███ | 453/1500 [00:04<00:11, 92.60it/s]
31%|███ | 463/1500 [00:05<00:11, 94.02it/s]
32%|███▏ | 473/1500 [00:05<00:10, 95.00it/s]
32%|███▏ | 483/1500 [00:05<00:10, 95.02it/s]
33%|███▎ | 493/1500 [00:05<00:10, 94.37it/s]
34%|███▎ | 503/1500 [00:05<00:10, 93.12it/s]
34%|███▍ | 513/1500 [00:05<00:10, 93.21it/s]
35%|███▍ | 523/1500 [00:05<00:10, 93.09it/s]
36%|███▌ | 533/1500 [00:05<00:10, 93.11it/s]
36%|███▌ | 543/1500 [00:05<00:10, 93.12it/s]
37%|███▋ | 553/1500 [00:05<00:10, 93.18it/s]
38%|███▊ | 563/1500 [00:06<00:10, 93.17it/s]
38%|███▊ | 573/1500 [00:06<00:09, 93.29it/s]
39%|███▉ | 583/1500 [00:06<00:09, 94.03it/s]
40%|███▉ | 593/1500 [00:06<00:09, 93.87it/s]
T2S Decoding EOS [356 -> 963]
40%|████ | 603/1500 [00:06<00:09, 93.75it/s]
40%|████ | 606/1500 [00:06<00:09, 92.30it/s]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
/root/.pyenv/versions/3.9.19/lib/python3.9/site-packages/torch/functional.py:650: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:863.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
This output was created using a different version of the model, datong-new/vc:0e4ff171.
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model runs on L40S hardware, which costs $0.000975 per second.