Update README.md
README.md
CHANGED
@@ -11,6 +11,13 @@ tags:
 This is a conversion from https://huggingface.co/meta-llama/Llama-2-70b-chat-hf to the RKLLM format for Rockchip devices.
 This runs on the NPU from the RK3588.
 
+# Convert to one file
+Run:
+
+```bash
+cat llama2-chat-70b-hf-0* > llama2-chat-70b-hf.rkllm
+```
+
 # But wait... will this run on my RK3588?
 No. But I found it interesting to see what happens if I converted it.
 Let's hope Microsoft never finds out that I was using their SSDs as swap, since they don't allow more than 32 GB of RAM on the student subscription :P
@@ -19,7 +26,7 @@ Let's hope Microsoft never finds out that I was using their SSDs as swap, since
 
 And this is before finishing; it will probably reach 600 GB of RAM + swap.
 
-But hey! You can always try yourself getting a
+But hey! You can always try it yourself: get a 512 GB SSD (use around 100-250 GB of it as swap) and an SBC with 32 GB of RAM, have some patience, and see if it loads. Good luck with that!
 
 # Main repo
 See this for my full collection of converted LLMs for the RK3588's NPU:
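The `cat` glob merge added in this commit relies on the shell expanding `llama2-chat-70b-hf-0*` in sorted order, so zero-padded part names concatenate back in sequence. A minimal stand-in of the same pattern (the `demo-part-*` files below are placeholders, not files from this repo):

```shell
# Same pattern as `cat llama2-chat-70b-hf-0* > llama2-chat-70b-hf.rkllm`,
# but with two tiny dummy parts so the result is easy to check.
printf 'AAAA' > demo-part-00   # 4-byte first part
printf 'BB'   > demo-part-01   # 2-byte second part
cat demo-part-0* > demo-merged.bin
wc -c < demo-merged.bin        # 6: the parts were joined byte-for-byte, in glob order
```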