---
license: mit
language:
- en
---
# Pretrained Model of Amphion VITS
We provide the pre-trained checkpoint of [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS) trained on LJSpeech, a single-speaker dataset that consists of 13,100 short audio clips with a total length of approximately 24 hours.
## Quick Start
To use the pretrained model, run the following commands:
### Step 1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/vits-ljspeech
```
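If the clone succeeded, the checkpoint directory should contain the configuration file `args.json` used for inference in Step 4. A quick sanity check, assuming the default clone directory name:
```bash
# List the downloaded files; args.json is referenced by the inference command in Step 4.
ls vits-ljspeech
```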
### Step 2: Clone Amphion's Source Code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
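Note that the soft link in Step 3 uses a relative path, so it is assumed here that `Amphion` and `vits-ljspeech` are cloned side by side under the same parent directory:
```bash
# Assumed directory layout after Steps 1 and 2 (both repos under the same parent):
#   ./Amphion/
#   ./vits-ljspeech/
ls
```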
### Step 3: Specify the checkpoint's path
Create a soft link to the checkpoint downloaded in Step 1:
```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits-ljspeech ckpts/tts/
```
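You can check that the link resolves to the downloaded checkpoint (a quick sanity check under the layout assumption above):
```bash
# The symlink ckpts/tts/vits-ljspeech should point to the directory cloned in Step 1.
ls -l ckpts/tts/
ls ckpts/tts/vits-ljspeech/
```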
### Step 4: Inference
You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS#4-inference) to generate speech from text. For example, to synthesize a clip of speech from the text "This is a clip of generated speech with the given text from a TTS model.", run:
```bash
sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
--config "ckpts/tts/vits-ljspeech/args.json" \
--infer_expt_dir "ckpts/tts/vits-ljspeech/" \
--infer_output_dir "ckpts/tts/vits-ljspeech/result" \
--infer_mode "single" \
--infer_text "This is a clip of generated speech with the given text from a TTS model."
```
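The synthesized audio is written under the directory passed to `--infer_output_dir`; the exact file name is chosen by the recipe, so list the directory to locate the generated clip:
```bash
# Generated speech should appear under the output directory given above.
ls ckpts/tts/vits-ljspeech/result
```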