# Llama-2 ONNX

This repository contains an optimized version of Llama-2 7B.

## Downloading the model

You can use `huggingface_hub` to download this repository. This can be done through both Python scripting and the command line. Refer to the [HuggingFace Hub Documentation](https://huggingface.co/docs/huggingface_hub/guides/download) for the Python examples.
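The Python route from the linked guide can be sketched with `snapshot_download`, the `huggingface_hub` helper that mirrors the CLI command below; the `local_dir` value here is a placeholder:

```python
# Sketch of downloading the repo from Python via huggingface_hub
# (assumes `pip install -U huggingface_hub` has been run).
from huggingface_hub import snapshot_download

def download_model(local_dir: str) -> str:
    # Fetches every file in the repository and returns the local folder path.
    return snapshot_download(
        repo_id="alpindale/Llama-2-7b-ONNX",
        repo_type="model",
        local_dir=local_dir,
    )
```

Calling `download_model("/path/to/download/dir")` has the same effect as the CLI invocation shown below.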

With the CLI, first install `huggingface_hub`:

```sh
pip install -U huggingface_hub
```

Then download the repository:

```sh
huggingface-cli download alpindale/Llama-2-7b-ONNX --repo-type model --cache-dir /path/to/custom/cache/directory --local-dir /path/to/download/dir --local-dir-use-symlinks False
```

The `--cache-dir` flag is only necessary if your default cache directory (`~/.cache`) does not have enough disk space to accommodate the entire repository.

## Using the model

The repository provides example code for running the models.

Alternatively, you can use the Gradio chat interface to run the models.

First, install the required packages:

```sh
pip install -r ChatApp/requirements.txt
```

Set the Python path to the root directory of the repository (necessary for importing the required modules):

```sh
export PYTHONPATH=$PYTHONPATH:$(pwd)
```
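If exporting `PYTHONPATH` is inconvenient, the same effect can be achieved from inside Python by extending `sys.path`; a stdlib-only sketch, assuming the interpreter is launched from the repository root:

```python
import os
import sys

# PYTHONPATH entries are prepended to sys.path at interpreter startup;
# inserting the repository root by hand has the same effect, so modules
# that live in the repo can be imported from anywhere.
repo_root = os.getcwd()  # assumes Python was launched from the repo root
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

print(repo_root in sys.path)  # → True
```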

Then you can simply run: