---
language: bn
---
# Bangla-Electra
This is a second attempt at a Bangla/Bengali language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
**As of 2022, I recommend Google's MuRIL model, trained on English, Bangla, and other major Indian languages in both their native and Latinized scripts**: https://huggingface.co/google/muril-base-cased and https://huggingface.co/google/muril-large-cased
**For causal language models, I would suggest https://huggingface.co/sberbank-ai/mGPT, though this is a large model**
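Both alternatives load with the standard Transformers auto classes. A minimal sketch, using the model IDs from the links above (mGPT in particular is a multi-gigabyte download):

```python
# Minimal sketch: loading the recommended alternatives with Hugging Face Transformers.
# Model IDs are taken from the links above.
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

muril_tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
muril = AutoModel.from_pretrained("google/muril-base-cased")

mgpt_tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/mGPT")
mgpt = AutoModelForCausalLM.from_pretrained("sberbank-ai/mGPT")  # large model, big download
```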
Tokenization and pre-training Colab: https://colab.research.google.com/drive/1gpwHvXAnNQaqcu-YNx1kafEVxz07g2jL
V1 was trained for 120,000 steps; V2 for 190,000 steps.
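To load the pre-trained discriminator with Hugging Face Transformers, a minimal sketch (the Hub model ID is an assumption; substitute the ID of this upload):

```python
# Minimal sketch: load the ELECTRA discriminator and score tokens as original/replaced.
# "monsoon-nlp/bangla-electra" is an assumed Hub ID for this upload.
from transformers import ElectraForPreTraining, ElectraTokenizer

model_id = "monsoon-nlp/bangla-electra"
tokenizer = ElectraTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("বাংলা ভাষা আন্দোলন", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # per-token logits from the replaced-token-detection head
```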
## Classification
Classification with SimpleTransformers: https://colab.research.google.com/drive/1vltPI81atzRvlALv4eCvEB0KdFoEaCOb
On Soham Chatterjee's [news classification task](https://github.com/soham96/Bangla2Vec):

| Model | Accuracy |
|-------|----------|
| Random baseline | 16.7% |
| mBERT | 72.3% |
| Bangla-Electra | 82.3% |
Performance is similar to mBERT on some of the tasks and configurations described in https://arxiv.org/abs/2004.07807
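As a rough outline of what the SimpleTransformers notebook does (the Hub ID, column layout, and six-way label set here are assumptions; the linked Colab is authoritative):

```python
# Sketch of fine-tuning for news classification with SimpleTransformers.
# Column names and num_labels are assumptions -- see the linked Colab for the real setup.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Expects DataFrames with columns ["text", "labels"] (integer labels).
train_df = pd.read_csv("train.csv")
eval_df = pd.read_csv("eval.csv")

model = ClassificationModel(
    "electra",
    "monsoon-nlp/bangla-electra",  # assumed Hub ID of this model
    num_labels=6,                  # assumed: number of news categories
    args={"num_train_epochs": 3, "overwrite_output_dir": True},
)
model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
```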
## Question Answering
This model can be used for question answering. This notebook uses Bangla questions from Google's TyDi QA dataset:
https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
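The ELECTRA checkpoint itself has no QA head, so it must first be fine-tuned on a QA dataset (the notebook uses TyDi's Bangla questions). A minimal inference sketch with the Transformers pipeline, where the checkpoint path is a placeholder for your fine-tuned model:

```python
# Minimal sketch of extractive QA inference after fine-tuning.
# "path/to/finetuned-bangla-electra-qa" is a placeholder checkpoint path.
from transformers import pipeline

qa = pipeline("question-answering", model="path/to/finetuned-bangla-electra-qa")
result = qa(
    question="বাংলাদেশের রাজধানী কোথায়?",  # "Where is the capital of Bangladesh?"
    context="বাংলাদেশের রাজধানী ঢাকা।",     # "The capital of Bangladesh is Dhaka."
)
print(result["answer"], result["score"])
```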
## Corpus
Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.8 GB) and the 1 July 2020 dump of bn.wikipedia.org (414 MB).
## Vocabulary
The vocabulary is included as vocab.txt in the upload; vocab_size is 29898.
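To inspect the vocabulary through the tokenizer (again assuming the Hub ID), a quick sketch:

```python
# Quick check of the shipped vocabulary; the Hub ID is an assumption.
from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("monsoon-nlp/bangla-electra")
print(tokenizer.vocab_size)                      # expected: 29898
print(tokenizer.tokenize("বাংলা ভাষা আন্দোলন"))  # WordPiece segmentation of Bangla text
```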