---
language:
- ms
---
Pretrained 1.4B Mamba with 4096 context length on Malaysian text
- README at https://github.com/mesolitica/malaya/tree/5.1/pretrained-model/mamba
- Trained on 90B tokens, gathered at https://github.com/malaysia-ai/dedup-text-dataset/tree/main/pretrain-llm; a sketch of how documents are packed into 4096-token sequences follows this list.
- We used a Ray cluster to train on 5 nodes of 8x A100 80GB GPUs, https://github.com/malaysia-ai/jupyter-gpu/tree/main/ray; a sketch of the Ray scaling setup also follows this list.
- WandB, https://wandb.ai/mesolitica/pretrain-mamba-1.4b?workspace=user-husein-mesolitica
- WandB report comparing Mamba 1.4B with Mistral 1.1B, https://wandb.ai/mesolitica/pretrain-mamba-1.4b/reports/Mamba-1-4B-vs-Mistral-1-1B--Vmlldzo2MjA1MDc1
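
To make the "90B tokens at 4096 context length" concrete, here is a minimal packing sketch: documents are tokenized, concatenated with an EOS separator, and sliced into fixed 4096-token blocks. The tokenizer checkpoint below is a stand-in for illustration, not necessarily the one used here; the actual preprocessing lives in the dedup-text-dataset repo linked above.

```python
from transformers import AutoTokenizer

SEQ_LEN = 4096

# Stand-in tokenizer for illustration; the real tokenizer is defined by the
# training repo above.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

def pack(batch):
    """Tokenize a batch of documents and pack them into fixed-length blocks."""
    ids = []
    for doc in batch["text"]:
        ids.extend(tokenizer(doc)["input_ids"])
        ids.append(tokenizer.eos_token_id)  # mark the document boundary
    usable = (len(ids) // SEQ_LEN) * SEQ_LEN  # drop the trailing partial block
    return {"input_ids": [ids[i:i + SEQ_LEN] for i in range(0, usable, SEQ_LEN)]}

# Usage with a datasets.Dataset that has a "text" column:
# packed = dataset.map(pack, batched=True, remove_columns=dataset.column_names)
```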
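The Ray setup can be sketched as a data-parallel `TorchTrainer` job: 5 nodes of 8 GPUs gives 40 workers, one GPU each. This is illustrative rather than the repo's actual entrypoint; the model config values are the usual ~1.4B Mamba sizes (hidden size 2048, 48 layers) and the training loop body is elided.

```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Each worker runs the same loop; Ray sets up torch.distributed, so
    # ordinary DDP training code works unchanged.
    import ray.train.torch as ray_torch
    from transformers import MambaConfig, MambaForCausalLM

    # Assumed ~1.4B hyperparameters for illustration; the real config is in
    # the training repo above.
    model = MambaForCausalLM(MambaConfig(hidden_size=2048, num_hidden_layers=48))
    model = ray_torch.prepare_model(model)  # move to the worker's GPU, wrap in DDP
    # ... optimizer and training steps over the packed 4096-token dataset ...

ray.init(address="auto")  # attach to the running Ray cluster

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=40, use_gpu=True),  # 5 nodes x 8 GPUs
)
trainer.fit()
```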
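Once pretrained, the checkpoint loads like any causal LM. A minimal generation sketch, assuming the weights are published in the Hugging Face Mamba format; the model ID below is hypothetical, so substitute the ID actually released from this run.

```python
import torch
from transformers import AutoTokenizer, MambaForCausalLM

# Hypothetical model ID; replace with the published checkpoint name.
model_id = "mesolitica/mamba-1.4b-4096"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MambaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

# "Kuala Lumpur ialah" = "Kuala Lumpur is", a Malay prompt.
inputs = tokenizer("Kuala Lumpur ialah", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0]))
```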