Hecheng0625 committed (verified)
Commit dbab0c0 · 1 Parent(s): 48e1fda

Update README.md
Files changed (1): README.md (+4 −3)
@@ -10,8 +10,8 @@ pipeline_tag: text-to-speech
 [![arXiv](https://img.shields.io/badge/arXiv-Paper-COLOR.svg)](https://arxiv.org/pdf/2502.03128)
 [![readme](https://img.shields.io/badge/README-Key%20Features-blue)](https://github.com/open-mmlab/Amphion/models/tts/metis/README.md)
 [![hf](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-model-yellow)](https://huggingface.co/amphion/metis)
-[![ModelScope](https://img.shields.io/badge/ModelScope-model-cyan)](https://modelscope.cn/models/amphion/metis)
-
+<!-- [![ModelScope](https://img.shields.io/badge/ModelScope-model-cyan)](https://modelscope.cn/models/amphion/metis)
+-->
 
 ## Overview
 
@@ -22,10 +22,11 @@ Experiments demonstrate that Metis can serve as a foundation model for unified s
 across five speech generation tasks, including zero-shot text-to-speech, voice conversion, target speaker extraction, speech enhancement, and lip-to-speech, even with fewer than 20M trainable parameters or 300 times less training data.
 Audio samples are available at [demo page](https://metis-demo.github.io/).
 
-## News
+<!-- ## News
 
 - **2025/02/25**: We release ***Metis***, a foundation model for unified speech generation. The system supports zero-shot text-to-speech, voice conversion, target speaker extraction, speech enhancement, and lip-to-speech.
 
+-->
 
 ## Model Introduction
 