---
license: llama2
language:
  - de
library_name: transformers
pipeline_tag: text-generation
inference: false
model_creator: jphme
model_name: EM German
model_type: llama
prompt_template: >
  Du bist ein hilfreicher KI Assistent, der den Anweisungen des Nutzers sehr gut
  folgt und ausführliche Antworten gibt! USER: Was ist 1+1? ASSISTANT:
tags:
  - facebook
  - meta
  - pytorch
  - llama
  - llama-2
  - german
  - deutsch
---

EM Logo

EM German (v01) is a Llama2/Mistral/LeoLM-based model family, fine-tuned on a large dataset of diverse instructions in German. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German-language content.

Please find all information, example outputs, the RAG prompt format, and evaluation results for the EM German model family in our GitHub repository.

(For further information and guides in German, please visit our GitHub repository.)

Links & Demos

Model Links

| Base Model | HF | GPTQ | GGUF | AWQ |
|------------|----|------|------|-----|
| Llama2 7b  | Link | Link | Link | Link |
| Llama2 13b | Link | Link | Link | Link |
| Llama2 70b | Link | Link | Link | Link |
| Mistral 7b | Link | Link | Link | Link |
| LeoLM 7b   | Link | Link | Link | tbc  |
| LeoLM 13b  | soon | soon | soon | tbc  |

Notes about the different versions:

For the 7b models, we recommend the "LeoLM" variant if text output quality is the main priority, and the Mistral variant if reasoning/understanding matters most. Both should give better results than the Llama2 7b model, and often even than the Llama2 13b model.

If you get unsatisfactory results with one EM German version, please try a different (and/or larger) model or version for your use case.

Demos:

You can use some of the models on free Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):

Prompt Format

This model follows the Vicuna format without line breaks (but should work with line breaks as well). The format is as follows:

Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:

You can swap the standard system prompt for one better suited to your task (see below for RAG tasks).
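
The prompt format above can be assembled with a small helper. This is a minimal sketch: `build_prompt` is a hypothetical name for illustration, not part of the model's official tooling, and the default system prompt is the one shown above.

```python
# Sketch of the Vicuna-style, single-line prompt format this model expects.
# `build_prompt` is a hypothetical helper, not part of the model's tooling.
SYSTEM_PROMPT = "Du bist ein hilfreicher Assistent."

def build_prompt(instruction: str, system: str = SYSTEM_PROMPT) -> str:
    # Single line, no line breaks: "<system> USER: <instruction> ASSISTANT:"
    return f"{system} USER: {instruction} ASSISTANT:"

print(build_prompt("Was ist 1+1?"))
# -> Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:
```

Pass the resulting string to the tokenizer/model as a plain text-generation prompt; the `system` parameter lets you swap in a task-specific system prompt (e.g. for RAG).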

Acknowledgements:

Many thanks to winglian/caseus for his great work on Axolotl, which I used to train the EM models. I am also grateful to Jon Durbin and his Airoboros models and code, from which I borrowed many ideas and code snippets. Additionally, many thanks to Björn Plüster and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many thanks to TheBloke for preparing quantized versions in all formats under the sun. The 70b model was trained with support from the OVH Cloud Startup Program.

Contact

If you are interested in customized LLMs for business applications, please get in contact with me via my website. I am also always happy to receive suggestions and feedback.

PS: We are also always interested in support for our startup ellamind, which will offer customized models for business applications in the future (currently still in stealth mode). Please get in touch if you are interested!

Disclaimer:

The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model. This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.