---
license: apache-2.0
datasets:
  - cerebras/SlimPajama-627B
  - bigcode/starcoderdata
  - OpenAssistant/oasst_top1_2023-08-25
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
language:
  - en
library_name: mlx
---

# TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. This repository contains the TinyLlama-1.1B-Chat-v0.6 weights in npz format, suitable for use with Apple's MLX framework. For more information about the model, please review its model card.

## How to use

```bash
pip install mlx
pip install huggingface_hub
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples

# Download the model weights
huggingface-cli download --local-dir-use-symlinks False --local-dir tinyllama-1.1B-Chat-v0.6 mlx-community/tinyllama-1.1B-Chat-v0.6

# Run example
python llms/llama/llama.py --model-path tinyllama-1.1B-Chat-v0.6 --prompt "My name is"
```
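Note that the example above passes a raw completion prompt. Since this is a chat-tuned checkpoint, you will typically get better results by wrapping your message in the chat template. The TinyLlama chat models are reported to follow a Zephyr-style prompt format; the helper below is a minimal sketch of building such a prompt (the function name is hypothetical, and the exact template should be verified against the upstream model card):

```python
# Hypothetical helper: builds a Zephyr-style chat prompt, which the
# TinyLlama chat checkpoints are reported to use. Verify the exact
# special tokens against the upstream TinyLlama model card.
def build_chat_prompt(user_message: str,
                      system_message: str = "You are a helpful assistant.") -> str:
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

# The resulting string can be passed via --prompt to the example script.
print(build_chat_prompt("My name is"))
```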