asahi417 committed
Commit f116c5b
1 Parent(s): 1d6323e

Update README.md

Files changed (1)
README.md +1 -2
README.md CHANGED
@@ -222,7 +222,6 @@ pipe = pipeline(
     torch_dtype=torch_dtype,
     device=device,
     model_kwargs=model_kwargs,
-    chunk_length_s=15,
     batch_size=16
 )

@@ -231,7 +230,7 @@ dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
 sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]['audio']['sampling_rate']}

 # run inference
-result = pipe(sample, generate_kwargs=generate_kwargs)
+result = pipe(sample, chunk_length_s=15, generate_kwargs=generate_kwargs)
 print(result["text"])
 ```
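The diff also relies on the README's pattern of merging several dataset clips into one long audio sample before calling the pipeline. A minimal sketch of that step, using hypothetical zero-filled waveforms in place of the real `dataset[:20]["audio"]` entries:

```python
import numpy as np

# Hypothetical stand-in for dataset[:20]["audio"]: three 1-second clips at 16 kHz.
audio = [{"array": np.zeros(16000), "sampling_rate": 16000} for _ in range(3)]

# Same pattern as the README: concatenate the clips into a single sample dict,
# which the ASR pipeline accepts as {"array": ..., "sampling_rate": ...}.
sample = {
    "array": np.concatenate([a["array"] for a in audio]),
    "sampling_rate": audio[0]["sampling_rate"],
}
print(sample["array"].shape)  # (48000,)
```

With a long sample like this, `chunk_length_s=15` tells the pipeline to split the audio into 15-second windows at inference time, which is why the commit passes it to the `pipe(...)` call rather than to the `pipeline(...)` constructor.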