---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
language:
- en
library_name: mlx
---
<div align="center">

# TinyLlama-1.1B
</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. This repository contains the TinyLlama-1.1B-Chat-v0.6 weights in npz format, suitable for use with Apple's MLX framework. For more information about the model, please review [its model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6).


#### How to use

```
# Install MLX and the Hugging Face Hub client
pip install mlx
pip install huggingface_hub

# Get the MLX example scripts
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples

# Download the model weights into a local directory
huggingface-cli download --local-dir-use-symlinks False --local-dir tinyllama-1.1B-Chat-v0.6 mlx-community/tinyllama-1.1B-Chat-v0.6

# Run the example
python llms/llama/llama.py --model-path tinyllama-1.1B-Chat-v0.6 --prompt "My name is"
```
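
After the download, you can sanity-check the npz weights directly with `mlx.core` before running the example. A minimal sketch, assuming the conventional `weights.npz` filename inside the downloaded directory (the exact filename is an assumption, not confirmed by this card):

```
import mlx.core as mx

# Load the npz checkpoint as a dict of parameter name -> mx.array.
# "weights.npz" is the conventional filename and an assumption here.
weights = mx.load("tinyllama-1.1B-Chat-v0.6/weights.npz")

# Print a few parameter names, shapes, and dtypes as a quick sanity check.
for name, arr in list(weights.items())[:5]:
    print(name, arr.shape, arr.dtype)

print(f"{len(weights)} arrays total")
```

If the load succeeds, the directory passed to `--model-path` above should be ready for the example script to consume.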