fedric95 committed
Commit cbe497a · verified · 1 Parent(s): fb3c064

Update README.md

Files changed (1): README.md (+63 -3)

README.md CHANGED

---
base_model: google/gemma-7b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
quantized_by: fedric95
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

## Llamacpp Quantizations of gemma-7b

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a> for quantization.

Original model: https://huggingface.co/google/gemma-7b
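
For reference, the conversion can be reproduced with llama.cpp's own tooling. A minimal sketch, assuming a llama.cpp checkout built at release b3583 and a local copy of the original google/gemma-7b weights (the exact flags used for this repo are an assumption):

```
# Convert the original Hugging Face weights to a full-precision GGUF.
python convert_hf_to_gguf.py ./gemma-7b --outtype f32 --outfile gemma-7b.FP32.gguf

# Quantize the full-precision GGUF down to a smaller type, e.g. Q4_K_M.
./llama-quantize gemma-7b.FP32.gguf gemma-7b-Q4_K_M.gguf Q4_K_M
```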

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ------------------------------------ |
| [gemma-7b.FP32.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b.FP32.gguf) | BF16 | 17.1 GB | 6.9857 +/- 0.04411 |
| [gemma-7b-Q8_0.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q8_0.gguf) | Q8_0 | 9.08 GB | 7.0373 +/- 0.04456 |
| [gemma-7b-Q6_K.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q6_K.gguf) | Q6_K | 7.01 GB | 7.3858 +/- 0.04762 |
| [gemma-7b-Q5_K_M.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q5_K_M.gguf) | Q5_K_M | 6.14 GB | 7.4227 +/- 0.04781 |
| [gemma-7b-Q5_K_S.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q5_K_S.gguf) | Q5_K_S | 5.98 GB | 7.5232 +/- 0.04857 |
| [gemma-7b-Q4_K_M.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q4_K_M.gguf) | Q4_K_M | 5.33 GB | 7.5800 +/- 0.04918 |
| [gemma-7b-Q4_K_S.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q4_K_S.gguf) | Q4_K_S | 5.05 GB | 7.9673 +/- 0.05225 |
| [gemma-7b-Q3_K_L.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q3_K_L.gguf) | Q3_K_L | 4.71 GB | 7.9586 +/- 0.05186 |
| [gemma-7b-Q3_K_M.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q3_K_M.gguf) | Q3_K_M | 4.37 GB | 8.4077 +/- 0.05545 |
| [gemma-7b-Q3_K_S.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q3_K_S.gguf) | Q3_K_S | 3.98 GB | 102.6126 +/- 1.62310 |
| [gemma-7b-Q2_K.gguf](https://huggingface.co/fedric95/gemma-7b-GGUF/blob/main/gemma-7b-Q2_K.gguf) | Q2_K | 3.48 GB | 3970.5385 +/- 102.46527 |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download fedric95/gemma-7b-GGUF --include "gemma-7b-Q4_K_M.gguf" --local-dir ./
```
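
If you prefer not to install the CLI, a single file can also be fetched over plain HTTPS through the repository's resolve endpoint (a sketch; the URL follows the pattern of the blob links in the table above):

```
# Direct download of one quant file.
wget https://huggingface.co/fedric95/gemma-7b-GGUF/resolve/main/gemma-7b-Q4_K_M.gguf
```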

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download fedric95/gemma-7b-GGUF --include "gemma-7b-Q8_0.gguf/*" --local-dir gemma-7b-Q8_0
```

You can either specify a new local-dir (gemma-7b-Q8_0) or download everything in place (./).
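
Once downloaded, a file can be run directly with llama.cpp. A minimal sketch, assuming the llama-cli binary from the same b3583 build:

```
# Short text completion on CPU; add -ngl 99 to offload all layers
# to the GPU if llama.cpp was built with GPU support.
./llama-cli -m gemma-7b-Q4_K_M.gguf -p "The capital of France is" -n 64
```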

## Reproducibility

https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
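
The linked discussion describes how the perplexity column above was produced. A minimal sketch of the measurement, assuming the llama-perplexity tool from the same build and the wikitext-2-raw-v1 test set fetched with llama.cpp's helper script (script path and output directory are assumptions):

```
# Download the wikitext-2-raw-v1 test data (helper script ships with llama.cpp).
./scripts/get-wikitext-2.sh

# Evaluate a quantized model on the test split.
./llama-perplexity -m gemma-7b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```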