MaziyarPanahi committed on
Upload folder using huggingface_hub
Browse files
- .gitattributes +7 -0
- Meta-Llama-3.1-70B-Instruct-GGUF_imatrix.dat +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00001-of-00006.gguf +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00002-of-00006.gguf +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00003-of-00006.gguf +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00004-of-00006.gguf +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00005-of-00006.gguf +3 -0
- Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00006-of-00006.gguf +3 -0
- README.md +1 -11
.gitattributes CHANGED
@@ -41,3 +41,10 @@ Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 Meta-Llama-3.1-70B-Instruct.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 Meta-Llama-3.1-70B-Instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 Meta-Llama-3.1-70B-Instruct.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00001-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00002-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00003-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00004-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00005-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00006-of-00006.gguf filter=lfs diff=lfs merge=lfs -text
Meta-Llama-3.1-70B-Instruct-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e2b854a6bf2fb589e9e98cc6a55ac9c32ccf9d0dbbb8dd0aede7c4ef36bbb06
+size 24922274
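Each file added in this commit is stored as a Git LFS pointer rather than the binary itself: a three-line text stub with `version`, `oid`, and `size` fields. A minimal sketch of parsing such a pointer and verifying a downloaded file against it — the `parse_lfs_pointer` and `verify_download` helpers are illustrative names, not part of any library; the pointer text is the imatrix pointer from this commit:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one 'key value' pair per line."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256:<hex digest>"
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def verify_download(path: str, pointer: dict) -> bool:
    """Check a downloaded blob against its pointer (size + hash)."""
    h = hashlib.new(pointer["algo"])
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return size == pointer["size"] and h.hexdigest() == pointer["digest"]

imatrix_pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1e2b854a6bf2fb589e9e98cc6a55ac9c32ccf9d0dbbb8dd0aede7c4ef36bbb06
size 24922274
"""
info = parse_lfs_pointer(imatrix_pointer)
```

Comparing both the byte count and the sha256 digest catches truncated as well as corrupted downloads.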
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00001-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e2683c351f9dbf5af4e4210c6739b754815e6be03f0efc0ddbf100ea5788aa2
+size 10697125376
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00002-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae12924bb9397d5260efe744e13e81dda30d2b0e043f05958f4b2818b96e644c
+size 10212744800
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00003-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1113b6129394a7c2ddc039c14eb31e8d430f2865c909b4ed11a1a087162b33f
+size 10020101728
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00004-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab5001643fa91798e2d2d7058f665e9e62d7c3cad776c02e6998eaf351a2b1a3
+size 9889324640
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00005-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a56c61d2023334f0b2b5bf1a22c6415248fe46e220ef91df3afb2308c5f241d
+size 9889324640
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00006-of-00006.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:678835a6f5bc4315fa3f4fc70570dfe579b403d5a57b350adaa25c3baf61cbef
+size 7179523008
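The Q6_K quant is split across six shards, and the LFS pointers give the exact byte count of each. A quick sketch that sums the sizes copied from the pointers above, useful for sanity-checking the disk space a full download needs (`shard_sizes` is just an illustrative local list, not an API):

```python
# Shard sizes in bytes, copied verbatim from the LFS pointers in this commit.
shard_sizes = [
    10_697_125_376,  # Q6_K.gguf-00001-of-00006
    10_212_744_800,  # Q6_K.gguf-00002-of-00006
    10_020_101_728,  # Q6_K.gguf-00003-of-00006
    9_889_324_640,   # Q6_K.gguf-00004-of-00006
    9_889_324_640,   # Q6_K.gguf-00005-of-00006
    7_179_523_008,   # Q6_K.gguf-00006-of-00006
]

total_bytes = sum(shard_sizes)
total_gib = total_bytes / 2**30  # binary gigabytes
```

Note that loaders which understand sharded GGUF (llama.cpp among them) are typically pointed at the first shard only and pick up the remaining parts from the `-of-00006` naming.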
README.md CHANGED
@@ -1,13 +1,4 @@
 ---
-language:
-- en
-- de
-- fr
-- it
-- pt
-- hi
-- es
-- th
 tags:
 - quantized
 - 2-bit
@@ -25,7 +16,6 @@ inference: false
 model_creator: meta-llama
 pipeline_tag: text-generation
 quantized_by: MaziyarPanahi
-license: llama3.1
 ---
 # [MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF)
 - Model creator: [meta-llama](https://huggingface.co/meta-llama)
@@ -53,4 +43,4 @@ Here is an incomplete list of clients and libraries that are known to support GG
 
 ## Special thanks
 
-🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.