anuragshas/wav2vec2-large-xls-r-300m-ha-cv8
Task: Automatic Speech Recognition
Libraries: Transformers · PyTorch · TensorBoard
Dataset: mozilla-foundation/common_voice_8_0
Language: Hausa
Tags: wav2vec2 · Generated from Trainer · robust-speech-event · hf-asr-leaderboard · Eval Results · Inference Endpoints
License: apache-2.0
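The tags above summarize the repository: an XLS-R (300M) wav2vec2 checkpoint fine-tuned for Hausa on Common Voice 8.0. A minimal usage sketch with the transformers ASR pipeline follows; "sample.wav" is a placeholder path, and decoding audio files through the pipeline assumes ffmpeg is available.

```python
from transformers import pipeline

# Load the checkpoint through the high-level automatic-speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/wav2vec2-large-xls-r-300m-ha-cv8",
)

# "sample.wav" is a placeholder for a 16 kHz Hausa recording
print(asr("sample.wav")["text"])
```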
wav2vec2-large-xls-r-300m-ha-cv8 · 2 contributors · History: 10 commits
Latest commit 554b2c2 by anuragshas: "Upload HausaCV8_Xls_R_300m.ipynb" (almost 3 years ago)
| File | Size | Last commit | When |
|------|------|-------------|------|
| language_model/ | (directory) | Upload lm-boosted decoder | almost 3 years ago |
| runs/ | (directory) | End of training | almost 3 years ago |
| .gitattributes | 1.18 kB | initial commit | almost 3 years ago |
| .gitignore | 13 Bytes | Training in progress, step 400 | almost 3 years ago |
| HausaCV8_Xls_R_300m.ipynb | 337 kB | Upload HausaCV8_Xls_R_300m.ipynb | almost 3 years ago |
| README.md | 2.33 kB | update model card README.md | almost 3 years ago |
| added_tokens.json | 23 Bytes | add tokenizer | almost 3 years ago |
| alphabet.json | 258 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| config.json | 2.06 kB | Training in progress, step 400 | almost 3 years ago |
| eval.py | 4.71 kB | Create eval.py | almost 3 years ago |
| preprocessor_config.json | 262 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| pytorch_model.bin (LFS, pickle) | 1.26 GB | End of training | almost 3 years ago |
| special_tokens_map.json | 888 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| tokenizer_config.json | 347 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| training_args.bin (LFS, pickle) | 3.06 kB | Training in progress, step 400 | almost 3 years ago |
| vocab.json | 328 Bytes | add tokenizer | almost 3 years ago |

Both LFS binaries are pickle-serialized:

- pytorch_model.bin, detected pickle imports (3): collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.FloatStorage
- training_args.bin, detected pickle imports (6): transformers.training_args.OptimizerNames, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.SchedulerType, transformers.training_args.TrainingArguments, torch.device, transformers.trainer_utils.IntervalStrategy
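The language_model/ directory and alphabet.json, uploaded in the "Upload lm-boosted decoder" commit, indicate that the repository ships an n-gram-boosted CTC decoder. A minimal decoding sketch using Wav2Vec2ProcessorWithLM is shown below, assuming pyctcdecode and kenlm are installed; "sample.wav" is a placeholder for a 16 kHz mono recording.

```python
import librosa
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"

# The processor picks up the decoder files in language_model/ (requires pyctcdecode + kenlm)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# Load audio at the 16 kHz sampling rate expected by the model ("sample.wav" is a placeholder)
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The LM-boosted decoder consumes numpy logits and returns beam-search transcriptions
transcription = processor.batch_decode(logits.numpy()).text[0]
print(transcription)
```

Compared with the plain pipeline call above, this path runs beam search against the shipped language model, which typically lowers the word error rate on Common Voice test audio.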