---
license: apache-2.0
language:
- de
pipeline_tag: text-generation
tags:
- german
- deutsch
- simplification
- vereinfachung
---

# Model Card for Simba

We fine-tuned [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on a set of ca. 800 newspaper articles that were simplified by the Austrian Press Agency. Our aim was to create a model that can simplify German-language text.

## Model Details

### Model Description

- **Developed by:** Members of the [Public Interest AI research group](https://publicinterest.ai/), [HIIG Berlin](https://www.hiig.de/)
- **Model type:** simplification model, text generation
- **Language(s) (NLP):** German
- **License:** Apache 2.0
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Model Sources

- **Repository:** https://github.com/fhewett/simba
- **Project website:** https://publicinterest.ai/tool/simba

## Uses

### Direct Use

This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of text.

### Downstream Use

We fine-tuned on newspaper articles only. We have not yet performed extensive out-of-domain testing, but we believe the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset that you think could work (parallel texts, standard German & simplified German).

## Bias, Risks, and Limitations

As with most text generation models, this model sometimes produces information that is incorrect.

### Recommendations

Please check manually that the output text corresponds to the input text, as factual inconsistencies may have arisen.

## How to Get Started with the Model

We offer two tools for interacting with our model: an online app and a browser extension. Both can be viewed and used [here](https://publicinterest.ai/tool/simba?lang=en).

Alternatively, to load the model using transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = "cuda"

# Load the tokenizer and the fine-tuned model in half precision on the GPU.
tokenizer = AutoTokenizer.from_pretrained("hiig-piai/simba_best_092024")
model = AutoModelForCausalLM.from_pretrained("hiig-piai/simba_best_092024", torch_dtype=torch.float16).to(device)
```

We used the following prompt at inference to test our model (it asks the model to summarize the input text and simplify it linguistically to German A2 level, in at most five sentences). An end-to-end generation sketch combining this prompt with the loading code is given at the end of this card.

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Du bist ein hilfreicher Assistent und hilfst dem User, Texte besser zu verstehen.<|eot_id|><|start_header_id|>user<|end_header_id|>

Kannst du bitte den folgenden Text zusammenfassen und sprachlich auf ein A2-Niveau in Deutsch vereinfachen? Schreibe maximal 5 Sätze. {input_text}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

## Training Details

### Training Data

A sample of the data used to train our model can be found [here](https://github.com/fhewett/apa-rst/tree/main/original_texts).

#### Training Hyperparameters

- **Training regime:** Our training script can be found [here](https://github.com/fhewett/simba/blob/main/models/train_simba.py).

## Evaluation

#### Summary

## Model Card Contact

simba -at- hiig.de
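
## Appendix: End-to-End Inference Sketch

As referenced in the "How to Get Started with the Model" section above, here is a minimal sketch combining the loading code with the prompt template. It is an illustration, not the authors' exact pipeline: the generation settings (`max_new_tokens`, greedy decoding) and the use of `apply_chat_template` (which renders the Llama-3 Instruct prompt format shown above) are our assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("hiig-piai/simba_best_092024")
model = AutoModelForCausalLM.from_pretrained(
    "hiig-piai/simba_best_092024", torch_dtype=torch.float16
).to(device)

input_text = "..."  # the German news article you want to simplify

# Recreate the prompt from this card via the tokenizer's chat template,
# which produces the <|start_header_id|>... format used by Llama-3 Instruct.
messages = [
    {"role": "system",
     "content": "Du bist ein hilfreicher Assistent und hilfst dem User, "
                "Texte besser zu verstehen."},
    {"role": "user",
     "content": "Kannst du bitte den folgenden Text zusammenfassen und sprachlich "
                "auf ein A2-Niveau in Deutsch vereinfachen? Schreibe maximal 5 Sätze. "
                + input_text},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generation settings are illustrative; tune them for your use case.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, i.e. the simplified text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```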