---
language: ar
license: apache-2.0
datasets: uonlp/CulturaX
---

# mistral7b-ar-tokenizer-swap-pure-bf16

Mistral-7B-v0.1 adapted to Arabic as part of our study on efficient language adaptation: "Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough".

Code: https://github.com/konstantinjdobler/tight-budget-llm-adaptation

Paper: https://openreview.net/forum?id=VYfJaHeVod

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("konstantindobler/mistral7b-ar-tokenizer-swap-pure-bf16")
model = AutoModelForCausalLM.from_pretrained("konstantindobler/mistral7b-ar-tokenizer-swap-pure-bf16")

# Use model and tokenizer as usual
```
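
For a quick sanity check, generation works as with any other causal LM in `transformers`. The snippet below continues from the code above; the Arabic prompt ("The capital of Egypt is") and the generation settings are only illustrative.

```python
# Minimal sketch, continuing from the snippet above: greedy generation with the adapted model.
inputs = tokenizer("عاصمة مصر هي", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```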

## Details

The model is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and was adapted to Arabic.

The original tokenizer was replaced by a language-specific Arabic tokenizer with a vocabulary of 32768 tokens. The new embeddings were initialized with [FOCUS](https://github.com/konstantinjdobler/focus). Additionally, we tuned just the embeddings for 100 steps before training the full model (a rough sketch of this warm-up is shown below).
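
The embedding-only warm-up can be pictured roughly as follows. This is a minimal, illustrative PyTorch sketch, not the exact training code from the repository; it assumes the Arabic tokenizer and FOCUS-initialized embeddings have already been swapped in.

```python
# Illustrative sketch only (not the exact training code from the paper):
# freeze the transformer body and train only the newly initialized embeddings.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
# ... swap in the Arabic tokenizer and the FOCUS-initialized 32768-token embeddings here ...

for param in model.parameters():
    param.requires_grad = False
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True
for param in model.get_output_embeddings().parameters():
    param.requires_grad = True  # Mistral does not tie input and output embeddings

# After the short embedding-only warm-up (100 steps), unfreeze everything
# and continue training the full model:
for param in model.parameters():
    param.requires_grad = True
```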

The model was then trained on 8 billion Arabic tokens from [uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX) with pure bfloat16 precision (no mixed precision). More details and hyperparameters can be found [in the paper](https://openreview.net/forum?id=VYfJaHeVod).
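
If you want to load the checkpoint directly in bfloat16, matching the training precision, you can pass `torch_dtype` to `from_pretrained`, as in this small variation of the usage example above:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint in bfloat16, matching the precision used during training.
model = AutoModelForCausalLM.from_pretrained(
    "konstantindobler/mistral7b-ar-tokenizer-swap-pure-bf16",
    torch_dtype=torch.bfloat16,
)
```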

## Disclaimer

The web-scale dataset used for pretraining and tokenizer training ([uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX)) might contain personal and sensitive information, and the model might have picked up such content during training. This needs to be assessed carefully before any real-world deployment of the model.

## Citation

Please cite as follows:

```bibtex
@inproceedings{dobler2024language,
    title={Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough},
    author={Konstantin Dobler and Gerard de Melo},
    booktitle={2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)},
    year={2024},
    url={https://openreview.net/forum?id=VYfJaHeVod}
}
```
|