CER: 13.7%
Transformers: 4.46.3
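For reference, a character error rate like the one reported above can be computed with the evaluate library; the snippet below is a generic sketch (with made-up predictions and references), not the exact evaluation script used for this model.

```python
import evaluate  # pip install evaluate jiwer

# Generic CER computation sketch; the example strings are hypothetical.
cer_metric = evaluate.load("cer")

predictions = ["the quick brown fox", "hello word"]   # hypothetical model outputs
references = ["the quick brown fox", "hello world"]   # hypothetical ground truth

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.1%}")
```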
Train Args:
per_device_train_batch_size=16,
gradient_accumulation_steps=1,
learning_rate=1e-5,
gradient_checkpointing=True,
per_device_eval_batch_size=16,
generation_max_length=225,
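These arguments map directly onto transformers' Seq2SeqTrainingArguments. The sketch below reproduces only the values listed above; output_dir, predict_with_generate, and anything else not listed are assumptions, not taken from this model card.

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch of the training configuration listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./checkpoints",        # assumed output path (not in the card)
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=1e-5,
    gradient_checkpointing=True,
    per_device_eval_batch_size=16,
    generation_max_length=225,
    predict_with_generate=True,        # assumed, so evaluation can decode and score CER
)
```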
Hardware:
NVIDIA Tesla V100 16GB * 4
FAQ:
- If you run into tokenizer issues during inference, upgrade your transformers version to >= 4.46.3 (see the version-check sketch below):
  pip install --upgrade transformers==4.46.3
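A quick way to verify the installed version at runtime before loading the model; this is a small convenience sketch, not part of the original card.

```python
import transformers
from packaging import version

# Fail early if the installed transformers is older than the minimum
# version mentioned in the FAQ above.
if version.parse(transformers.__version__) < version.parse("4.46.3"):
    raise RuntimeError(
        "transformers >= 4.46.3 is required; "
        "run: pip install --upgrade transformers==4.46.3"
    )
```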