---
license: apache-2.0
language:
- lua
datasets:
- allenai/nllb
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---
lua_latn_full
Goldfish is a suite of monolingual language models trained for 350 languages. This model is the Luba-Lulua (Latin script) model, trained on 20MB of data (all of our data in the language) after accounting for an estimated byte premium of 1.19: content-matched text in Luba-Lulua takes on average 1.19x as many UTF-8 bytes to encode as English. The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
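As a minimal arithmetic sketch of how the byte-premium scaling above works out (the 1.19 premium is a rounded estimate, so the result differs slightly from the exact scaled figure listed in the model details below):

```python
# Sketch of the byte-premium scaling described above.
# 1.19 is the rounded estimate reported in this card; the exact premium used
# for the 24.69 MB -> ~20.8 MB figure in the model details differs slightly.
raw_mb = 24.69          # raw Luba-Lulua training text, in MB
byte_premium = 1.19     # est. UTF-8 bytes per byte of content-matched English

english_equivalent_mb = raw_mb / byte_premium
print(f"~{english_equivalent_mb:.1f} MB of English-equivalent text")  # ~20.7 MB
```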
Note: lua_latn is an individual language code. It does not fall under any of the macrolanguage codes included in Goldfish (for the latn script).
All training and hyperparameter details are in our paper, Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024).
Training code and sample usage: https://github.com/tylerachang/goldfish
Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json (a short loading sketch follows the details list below). All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see the sample usage linked above and the generation sketch before the citation). Details for this model specifically:
- Architecture: gpt2
- Parameters: 124770816
- Maximum sequence length: 512 tokens
- Training text data (raw): 24.69MB
- Training text data (byte premium scaled): 20.825MB
- Training tokens: 6322688 (x10 epochs)
- Vocabulary size: 50000
- Compute cost: 3.2259375562752e+16 FLOPs or ~3.0 NVIDIA A6000 GPU hours
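The details above can also be read programmatically from model_details.json in the repository. The snippet below is only a hedged sketch: the raw-file URL is the usual GitHub raw counterpart of the link above, and the assumption that the file is a JSON object keyed by model name (e.g. lua_latn_full) is not verified here, so adjust the lookup to the file's actual structure.

```python
# Hedged sketch: fetch model_details.json and look up this model's entry.
import json
import urllib.request

URL = ("https://raw.githubusercontent.com/tylerachang/goldfish/"
       "main/model_details.json")

with urllib.request.urlopen(URL) as response:
    details = json.load(response)

# Assumption: the file is a dict keyed by model name. If not, inspect
# the top-level structure and adapt the lookup.
if isinstance(details, dict) and "lua_latn_full" in details:
    print(json.dumps(details["lua_latn_full"], indent=2))
else:
    print(type(details))
```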
Training datasets (percentages prior to deduplication):
- 100.00000%: NLLB (CommonCrawl and ParaCrawl)
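For completeness, here is a minimal generation sketch illustrating the [CLS]-prepending advice above. It is not the official sample usage (see the repository linked earlier for that); the Hub repository ID goldfish-models/lua_latn_full and the short Luba-Lulua prompt are assumptions, so check the actual repository ID before use.

```python
# Minimal generation sketch (not the official sample usage; see the repo above).
# Assumes the Hub repository ID is goldfish-models/lua_latn_full.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goldfish-models/lua_latn_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Goldfish models expect a [CLS] token at the start of every input sequence,
# so prepend it explicitly and disable automatic special-token insertion.
prompt = tokenizer.cls_token + "Muoyo webe"  # placeholder Luba-Lulua prompt
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)

outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```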
Citation
If you use this model, please cite:
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
  url={https://www.arxiv.org/abs/2408.10441},
}