---
language: ms
---
# t5-small-bahasa-cased

Pretrained T5 small language model for Malay, trained on both standard and local Malay.
## Pretraining Corpus
`t5-small-bahasa-cased` was pretrained on multiple tasks. Below is the list of tasks we trained on; a sketch of the text-to-text format follows the list:
- Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
- News title prediction on bahasa news.
- Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
- Question answering on translated Natural Questions (Natural QA).
- Text Similarity task on translated SNLI and translated MNLI.
- EN-MS translation.
- MS-EN translation.
- Abstractive Summarization.
- Knowledge Graph triples generation.
- Paraphrase.
- Social media normalization.
- Noisy EN-MS translation.
- Noisy MS-EN translation.
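All of these tasks share T5's text-to-text format: each training example is one prefixed source string mapped to one target string. Below is a minimal sketch of such pairs; the example sentences and the similarity label set are illustrative assumptions, while the prefixes match the Supported prefix section below.

```python
# Illustrative (hypothetical) examples of the text-to-text format used for
# multi-task T5 pretraining: one prefixed source string per target string.
train_pairs = [
    # EN-MS translation
    ("terjemah Inggeris ke Melayu: How are you?", "Apa khabar?"),
    # Abstractive summarization (placeholder article and summary)
    ("ringkasan: <long bahasa article>", "<short summary>"),
    # Semantic similarity on translated SNLI / MNLI (label set assumed)
    ("ayat1: Dia makan nasi. ayat2: Dia sedang makan nasi.", "entailment"),
]

for source, target in train_pairs:
    print(f"{source} -> {target}")
```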
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository, https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
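For reference, a hedged sketch of what launching pretraining could look like with the t5 library's Python API; the bucket path, TPU name, mixture name, and hyperparameters are placeholders, not the actual configuration used:

```python
# Hypothetical sketch of launching T5 pretraining on a v3-8 TPU with the
# t5 library's Python API; all names and hyperparameters are placeholders.
import t5.models

model = t5.models.MtfModel(
    model_dir="gs://my-bucket/t5-small-bahasa-cased",  # placeholder path
    tpu="my-tpu-name",                                 # placeholder TPU
    tpu_topology="v3-8",
    model_parallelism=1,
    batch_size=256,
    sequence_length={"inputs": 512, "targets": 512},
)
# "bahasa_mixture" is a placeholder for a registered multi-task mixture.
model.train(mixture_or_task_name="bahasa_mixture", steps=1000000)
```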
## Supported prefix
- `soalan: {string}`, trained using Natural QA.
- `ringkasan: {string}`, for abstractive summarization.
- `tajuk: {string}`, for abstractive title.
- `parafrasa: {string}`, for abstractive paraphrase.
- `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
- `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
- `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
- `ayat1: {string1} ayat2: {string2}`, for semantic similarity.
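A minimal inference sketch with the Hugging Face transformers library; the hub id below is an assumption, substitute the actual repository path of this model:

```python
# A minimal inference sketch. The hub id below is an assumption;
# replace it with the actual repository path of this model.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "t5-small-bahasa-cased"  # hypothetical hub id
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# EN-MS translation uses the `terjemah Inggeris ke Melayu:` prefix.
inputs = tokenizer(
    "terjemah Inggeris ke Melayu: I love reading books.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to every prefix above; for semantic similarity, put both sentences into a single input string, `ayat1: {string1} ayat2: {string2}`.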