---
license: llama2
language:
- de
library_name: transformers
pipeline_tag: text-generation
inference: false
model_creator: jphme
model_name: EM German
model_type: llama
prompt_template: >
  Du bist ein hilfreicher KI Assistent, der den Anweisungen des Nutzers sehr gut folgt und ausführliche Antworten gibt! USER: Was ist 1+1? ASSISTANT:
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- german
- deutsch
---

![EM Logo](em_model_logo_web.jpeg)

**EM German (v01)** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of various instructions in German. The models are optimized for German text and are proficient at understanding, generating, and interacting with German-language content.

Please find all information, example outputs, the RAG prompt format, and evaluation results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German). (For further information and instructions in German, please visit [our Github Repository](https://github.com/jphme/EM_German/blob/main/README_DE.md).)
# Links & Demos

## Model Links

| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| LeoLM 13b | soon | soon | soon | tbc |

### Notes about the different versions:

For the 7b models, we recommend the "LeoLM" variant if text output quality is most important, and the Mistral variant if reasoning/understanding is the main priority. Both should give better results than the Llama2 7b model, and often even better than the Llama2 13b model.

If you get unsatisfying results with one EM German version, please try a different (and/or larger) model or version for your use case.
## Demos:

You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):

* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* For further information and GUI use, please visit [our Github Repository](https://github.com/jphme/EM_German).

# Prompt Format

This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:

```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```

You can swap the standard system prompt for a better suited one (see below for RAG tasks).

# Acknowledgements:

Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code, from which I borrowed many ideas and code snippets.

Additionally, many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing quantized versions in all formats under the sun.

The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).

# Contact

If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback.

*PS: We are also always interested in support for our startup ellamind, which will offer customized models for business applications in the future (currently still in stealth mode).
Please get in touch if you are interested!*

# Disclaimer:

The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model. This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.
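
The Vicuna-style prompt described under "Prompt Format" above can be assembled with a small helper. This is a minimal sketch; the `build_prompt` function and its defaults are illustrative, not part of the official model card:

```python
# Illustrative helper (not from the model card) that assembles the
# Vicuna-style EM German prompt: "<system> USER: <instruction> ASSISTANT:"
DEFAULT_SYSTEM = "Du bist ein hilfreicher Assistent."

def build_prompt(instruction: str, system: str = DEFAULT_SYSTEM) -> str:
    # Single line without linebreaks, as recommended above; the model
    # generates its answer after the trailing "ASSISTANT:".
    return f"{system} USER: {instruction} ASSISTANT:"

print(build_prompt("Was ist 1+1?"))
# -> Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:
```

The resulting string can be tokenized and passed to any of the checkpoints listed above as-is; for RAG tasks, swap in a suitable system prompt via the `system` argument.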