arxiv:2401.16640

TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese

Published on Jan 30
Abstract

Large language models (LLMs) have significantly advanced natural language processing, but their progress has been unequal across languages. While most LLMs are trained on high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes constrain their byproducts, such as computational demands and licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development. See https://github.com/Nkluge-correa/TeenyTinyLlama

Community

Paper author: Looking for people to build a Brazilian MistralAI!

Reply: That would be really cool! You can count on me for whatever I can help with.


Models citing this paper 27


Datasets citing this paper 0


Spaces citing this paper 2

Collections including this paper 2