---
language:
- de
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- llama
- llama-2
- german
- deutsch
- ggml
---
# Llama 2 13b Chat German
These files are GGML format model files for [Llama 2 13b Chat German](https://huggingface.co/jphme/Llama-2-13b-chat-german). Please find all information about the model in the original repository.
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Prompt Template
Llama 2 Chat uses the following prompt format:
```
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Please answer in the same language as the user.
<</SYS>>
This is a test question[/INST] This is an answer </s><s>
```
See also the original implementation [here](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L213).
There is also an (apparently undocumented) method in transformers that generates the correct tokenization: [LlamaTokenizer._build_conversation_input_ids](https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/models/llama/tokenization_llama.py#L334).
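For programmatic use, the template can also be assembled with a few lines of Python. The snippet below is a minimal sketch; the function name and the example messages are illustrative, only the special tokens follow the format shown above.

```python
# Minimal sketch of assembling a Llama 2 chat prompt (names are illustrative).
DEFAULT_SYSTEM_PROMPT = (
    "You are a helpful, respectful and honest assistant. "
    "Please answer in the same language as the user."
)

def build_prompt(user_message: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    """Wrap a single user turn in the Llama 2 chat format.

    Note: some runtimes (e.g. llama.cpp) prepend the BOS token <s> themselves,
    in which case the leading <s> here should be dropped.
    """
    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

print(build_prompt("Wie ist das Wetter heute?"))
```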
## Compatibility
### `q4_0`
So far, I have only quantized a `q4_0` version for my own use. Please let me know if there is demand for other quantizations.
These should be compatible with any UIs, tools and libraries released since late May 2023.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-13b-chat-german.ggmlv3.q4_0.bin | q4_0 | 4 | 7.37 GB | ~9.8 GB | Original llama.cpp quant method, 4-bit. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-13b-chat-german.ggmlv3.q4_0.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "..."
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If you cannot fully offload the model to the GPU, use more cores instead: change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives the best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
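If you prefer to drive the model from Python, the same GGML file can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (listed above). A minimal sketch, assuming the file is in the current directory and that your installed version still supports the GGMLv3 format; the German example prompt is illustrative:

```python
from llama_cpp import Llama

# Load the q4_0 GGML file; the parameters mirror the CLI flags used above.
llm = Llama(
    model_path="llama-13b-chat-german.ggmlv3.q4_0.bin",
    n_ctx=4096,        # context size (-c 4096)
    n_gpu_layers=32,   # layers offloaded to GPU (-ngl 32); set to 0 without GPU acceleration
    n_threads=10,      # CPU threads (-t 10)
)

# The BOS token is added by the tokenizer, so the leading <s> is omitted here.
prompt = (
    "[INST] <<SYS>>\nDu bist ein hilfreicher Assistent.\n<</SYS>>\n\n"
    "Wie ist das Wetter heute? [/INST]"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```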
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
## Thanks
Special thanks to [Meta](https://huggingface.co/meta-llama) for the great Llama 2 base model and [TheBloke](https://huggingface.co/TheBloke) for his great work quantizing billions of models (and for his template for this README).