pheinisch committed
Commit
02ed89c
1 Parent(s): bb57751

Create ReadMe

Files changed (1)
  1. README.md +33 -0
README.md ADDED

---
language:
- en
metrics:
- f1
pipeline_tag: text-classification
tags:
- classification
- framing
- MediaFrames
- argument classification
- multilabel
- RoBERTa-base
---

A model for predicting a subset of MediaFrames for a given argument (the argument does not have to be structured into a premise/conclusion scheme or anything similar). To investigate the generic frame classes, have a look at [The Media Frames Corpus: Annotations of Frames Across Issues](https://aclanthology.org/P15-2072/).
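
A minimal usage sketch with the `transformers` library could look like the following; note that the model id is only a placeholder for this repository, and the 0.5 decision threshold is an assumption for the multi-label setup, not a tuned value:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder id -- replace with the actual repository id of this model.
model_id = "pheinisch/<this-model>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

argument = "Legalizing marijuana would free up courts and police for serious crime."

inputs = tokenizer(argument, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Multi-label prediction: a sigmoid per frame class, thresholded independently
# (0.5 is an assumed threshold, not taken from the paper).
probabilities = torch.sigmoid(logits)
predicted_frames = [
    model.config.id2label[i] for i, p in enumerate(probabilities) if p > 0.5
]
print(predicted_frames)
```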
Also, this model was fine-tuned on the data provided by [this paper](https://aclanthology.org/P15-2072/). To be precise, we did the following:

> To apply these frames to arguments from DDO, we fine-tune a range of classifiers on a comprehensive training dataset of more than 10,000 newspaper articles that discuss immigration, same-sex marriage, and marijuana, containing 146,001 text spans labeled with a single MediaFrame class per annotator. To apply this dataset to our argumentative domain, we broaden the annotated spans to sentence level (see [here](https://www.degruyter.com/document/doi/10.1515/itit-2020-0054/html)). Since an argument can address more than a single frame, we design the argument-frame classification task as a multi-label problem by combining all annotations for a sentence into a frame target set. In addition, to broaden the target frame sets, we create new instances by merging two instances, combining their textual representations and unifying their target frame sets.
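
As a rough illustration of that construction, the sketch below combines per-annotator sentence annotations into one frame target set and then merges two instances by concatenating their texts and unifying their frame sets (the `Instance` structure and the example frame labels are illustrative, not the original preprocessing code):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    text: str          # sentence-level text
    frames: set[str]   # multi-label target set of MediaFrames

# Step 1: combine all per-annotator annotations for a sentence into one frame target set.
sentence = "Immigrants contribute billions in taxes every year."
annotations = ["Economic", "Economic", "Fairness and equality"]
base = Instance(text=sentence, frames=set(annotations))

# Step 2: broaden the target sets by merging two instances --
# concatenate their texts and take the union of their frame sets.
other = Instance(
    text="Deportations tear families apart.",
    frames={"Quality of life"},
)
merged = Instance(
    text=base.text + " " + other.text,
    frames=base.frames | other.frames,
)

print(sorted(merged.frames))  # ['Economic', 'Fairness and equality', 'Quality of life']
```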
On the test split of this composed dataset, we measure the following performance:

````txt
"test_macro avg -> f1-score": 0.7323500703250138,
"test_macro avg -> precision": 0.7240108073952866,
"test_macro avg -> recall": 0.7413112856192988,
"test_macro avg -> support": 27705,
"test_micro avg -> f1-score": 0.7956475205137353,
"test_micro avg -> precision": 0.7865279492153059,
"test_micro avg -> recall": 0.804981050351922,
"test_micro avg -> support": 27705,
````
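
Macro- and micro-averaged scores of this kind can be computed for a multi-label setup with scikit-learn, roughly as in the following sketch (the indicator matrices are toy data, not the actual test split):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# y_true / y_pred: binary indicator matrices of shape (num_examples, num_frames),
# e.g. obtained by thresholding the model's sigmoid outputs at 0.5.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

for average in ("macro", "micro"):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=average, zero_division=0
    )
    print(f"{average}: precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```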