|
---
license: llama3.1
tags:
- openvino
- int4
---

This is an INT4 quantized version of the `meta-llama/Llama-3.1-8B-Instruct` model. The Python packages used in creating this model are as follows:
```
openvino==2024.4.0
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.1
```
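To reproduce the environment, the pinned versions listed above can be installed in one step (a convenience sketch; using a fresh virtual environment is assumed):

```shell
pip install openvino==2024.4.0 optimum==1.23.3 optimum-intel==1.20.1 nncf==2.13.0 torch==2.5.1 transformers==4.46.1
```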
This quantized model was created using the following command:
```
optimum-cli export openvino -m "meta-llama/Llama-3.1-8B-Instruct" --task text-generation-with-past --weight-format int4 --group-size 128 --trust-remote-code ./llama-3_1-8b-instruct-ov-int4
```
For more details, run the following command from your Python environment: `optimum-cli export openvino --help`
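The `--group-size 128` flag means each group of 128 consecutive weights shares one floating-point scale, which bounds the quantization error per group. The idea can be sketched in NumPy; this is an illustration of symmetric group-wise INT4 quantization, not NNCF's exact algorithm (the function names here are hypothetical):

```python
import numpy as np

def quantize_int4_groupwise(weights, group_size=128):
    """Sketch of symmetric group-wise INT4 quantization:
    each group of `group_size` values shares one FP32 scale."""
    w = weights.reshape(-1, group_size)
    # Per-group scale maps the largest magnitude in the group to the INT4 limit (7)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    # Round to nearest integer and clamp to the signed 4-bit range [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_int4_groupwise(w, group_size=128)
w_hat = dequantize(q, s)
print(q.min(), q.max())         # integer codes stay within the 4-bit range
print(np.abs(w - w_hat).max())  # reconstruction error is bounded by scale / 2
```

Smaller groups give finer-grained scales (better accuracy, more overhead); `--group-size 128` is a common trade-off.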
During the export, NNCF reported the following statistics of the bitwidth distribution:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|--------------|---------------------------|--------------------------------------|
| 8            | 13% (2 / 226)             | 0% (0 / 224)                         |
| 4            | 87% (224 / 226)           | 100% (224 / 224)                     |
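The exported model can be loaded for inference through `optimum-intel`'s `OVModelForCausalLM` class. A minimal usage sketch, assuming the local path is the output directory produced by the export command above:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Output directory created by the optimum-cli export command above
model_dir = "./llama-3_1-8b-instruct-ov-int4"

model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```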