hadiasghari committed • Commit 7fdc351 • Parent(s): e296d29 • Create README.md

README.md ADDED

---
license: apache-2.0
language:
- de
pipeline_tag: text-generation
tags:
- german
- deutsch
- simplification
- vereinfachung
---

# Model Card for simba

<!-- Provide a quick summary of what the model is/does. -->

We fine-tuned [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral) on a set of roughly 2,000 newspaper articles that were simplified by the Austrian Press Agency. Our aim was to create a model that can simplify German-language text.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Members of the [Public Interest AI research group](https://publicinterest.ai/), [HIIG Berlin](https://www.hiig.de/)
- **Model type:** simplification model, text generation
- **Language(s) (NLP):** German
- **License:** Apache 2.0
- **Finetuned from model:** [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/fhewett/simba
<!-- - **Paper [optional]:** [More Information Needed] -->
- **Project website:** https://publicinterest.ai/tool/simba

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may also work for other types of text.

### Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

We fine-tuned using only newspaper articles and have not yet performed extensive out-of-domain testing, but we believe the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset that you think could work (parallel texts: standard German and simplified German).

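As an illustration of what such a parallel dataset could look like (the JSON Lines layout and field names below are our assumption, not the project's actual format), one standard/simplified pair per line is a common convention:

```python
import json

# Illustrative only: the JSONL schema and the field names "standard" and
# "simplified" are assumptions, not the project's actual data format.
def load_parallel_corpus(path: str) -> list[tuple[str, str]]:
    """Read (standard German, simplified German) pairs from a JSON Lines file."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            pairs.append((record["standard"], record["simplified"]))
    return pairs
```

A dataset in any equivalent parallel format (e.g. aligned text files) would serve the same purpose.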
<!-- ### Out-of-Scope Use -->

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

As with most text generation models, this model sometimes produces information that is incorrect.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Please check manually that the output text corresponds to the input text, as factual inconsistencies may have been introduced.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

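Until the official snippet is added, here is a minimal, unofficial sketch of how a Mistral-based causal LM like this one is typically queried with `transformers`. The USER/ASSISTANT prompt template (borrowed from the EM German base model's convention) and the placeholder Hub id are assumptions, not confirmed details:

```python
# Unofficial sketch: the prompt template and the model id are assumptions.

def build_prompt(article: str) -> str:
    """Wrap a German article in an instruction prompt for simplification."""
    instruction = f"Vereinfache den folgenden Text: {article}"
    return f"Du bist ein hilfreicher Assistent. USER: {instruction} ASSISTANT:"

def simplify(article: str, model_id: str) -> str:
    """Generate a simplified version of `article` with a causal LM from the Hub."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(article), return_tensors="pt").to(model.device)
    # Greedy decoding keeps the output close to the source text.
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Usage (replace the placeholder with this model's actual Hub id):
# print(simplify("Hier steht der Zeitungsartikel.", "<hub-id-of-this-model>"))
```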
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

A sample of the data used to train our model can be found [here](https://github.com/fhewett/apa-rst/tree/main/original_texts).

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

<!-- #### Speeds, Sizes, Times [optional] -->

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Summary

For now, we have manually checked the performance of our model on a small sample of texts. While it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. texts similar to our training data). We have not yet applied any large-scale, metrics-based evaluation.

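The manual checks could eventually be complemented by simple automatic proxies. As an illustration only (this is not the project's evaluation protocol), one crude signal of simplification is a drop in average sentence length:

```python
import re

# Illustrative only, not the project's evaluation protocol: simplified German
# ("Leichte Sprache") typically uses shorter sentences than standard German.
def avg_sentence_length(text: str) -> float:
    """Average number of words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

original = ("Die Regierung hat gestern ein umfangreiches Paket beschlossen, "
            "das zahlreiche Bereiche des öffentlichen Lebens betrifft.")
simplified = "Die Regierung hat ein Paket beschlossen. Es betrifft viele Bereiche."

# A drop suggests (but does not prove) that the output is simpler.
print(avg_sentence_length(original), avg_sentence_length(simplified))
```

Such a proxy says nothing about factual consistency, which still requires manual or model-based checking.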
<!-- ## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed] -->

## Model Card Contact

simba -at- hiig.de