Update README.md
README.md
CHANGED
@@ -33,6 +33,8 @@ slu = EndToEndSLU.from_hparams("/network/tmp1/ravanelm/slu-direct-fluent-speech-
 slu.decode_file("/network/tmp1/ravanelm/slu-direct-fluent-speech-commands-librispeech-asr/example_fsc.wav")
 >>> '{"action:" "activate"| "object": "lights"| "location": "bedroom"}'
 ```
+The system is trained with recordings sampled at 16 kHz (single channel).
+The code will automatically normalize your audio (i.e., resampling and mono-channel selection) when calling *decode_file*, if needed. Make sure your input tensor matches the expected sampling rate if you use *encode_batch* and *decode_batch*.
 ### Inference on GPU
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
 
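The normalization step described in the added lines (downmix to mono, resample to 16 kHz) can be approximated with a minimal NumPy sketch. This is an illustration only, not SpeechBrain's actual implementation; `normalize_audio` and `TARGET_SR` are hypothetical names, and for real use a proper resampler (e.g. torchaudio's) is preferable to linear interpolation:

```python
import numpy as np

TARGET_SR = 16000  # assumed: the model expects 16 kHz mono input


def normalize_audio(signal: np.ndarray, sr: int) -> np.ndarray:
    """Downmix to mono and linearly resample to 16 kHz.

    `signal` is shaped (samples,) or (samples, channels). A rough stand-in
    for the normalization *decode_file* applies internally.
    """
    if signal.ndim == 2:  # multi-channel: average channels down to mono
        signal = signal.mean(axis=1)
    if sr != TARGET_SR:  # naive linear-interpolation resampling
        n_out = int(round(signal.shape[0] / sr * TARGET_SR))
        t_in = np.arange(signal.shape[0]) / sr
        t_out = np.arange(n_out) / TARGET_SR
        signal = np.interp(t_out, t_in, signal)
    return signal


# Example: one second of 44.1 kHz stereo becomes 16000 mono samples.
stereo = np.zeros((44100, 2))
mono16k = normalize_audio(stereo, 44100)
print(mono16k.shape)  # (16000,)
```

If you feed tensors to *encode_batch* or *decode_batch* yourself, apply an equivalent step first, since those methods skip the automatic normalization.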
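For context, the GPU note above amounts to one extra argument at load time. A sketch, assuming SpeechBrain is installed, a CUDA device is available, and the model directory shown earlier exists locally:

```python
from speechbrain.pretrained import EndToEndSLU

# run_opts={"device": "cuda"} places the model (and inference) on the GPU.
slu = EndToEndSLU.from_hparams(
    source="/network/tmp1/ravanelm/slu-direct-fluent-speech-commands-librispeech-asr",
    run_opts={"device": "cuda"},
)
slu.decode_file(
    "/network/tmp1/ravanelm/slu-direct-fluent-speech-commands-librispeech-asr/example_fsc.wav"
)
```

Without `run_opts`, the model is loaded on the CPU by default.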