Update README.md
README.md (CHANGED)
@@ -118,7 +118,7 @@ From my early tests:
 - It seems that performance is on par with the original
 - It seems that this combination is faster than just using the CTranslate2 int8 quantization.
 Quantization method TBA.
-To use this model, use the faster_whisper module as stated in [the original faster-
+To use this model, use the faster_whisper module as stated in [the original faster-whisper model](https://huggingface.co/Systran/faster-whisper-large-v3)

 Any benchmark results are appreciated. I probably do not have time to do it myself.
 ## Model Details
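For reference, a minimal usage sketch with the faster_whisper package, following the pattern shown on the original faster-whisper-large-v3 model card; the repository id and audio path below are placeholders, not the actual names from this repo.

```python
from faster_whisper import WhisperModel

# Placeholder repo id for this quantized model; substitute the actual Hugging Face repo id.
model = WhisperModel("your-username/your-quantized-whisper", device="cuda", compute_type="int8")

# "audio.mp3" is a placeholder path to the file you want to transcribe.
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```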