---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: 'CDSC-E'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---

# klej-cdsc-e

## Description

The Polish CDSCorpus consists of 10K sentence pairs, human-annotated for semantic relatedness (**CDSC-R**) and entailment (**CDSC-E**). The dataset may be used to evaluate compositional distributional semantics models of Polish. It was presented at ACL 2017.

Although the main design of the dataset is inspired by the SICK corpus, it differs in detail. As in SICK, the sentences come from image captions, but the set of source images is much more diverse, drawn from 46 thematic groups.

## Tasks (input, output, and metrics)

The entailment relation between two sentences is labeled *entailment*, *contradiction*, or *neutral*. The task is to predict whether the premise entails the hypothesis (entailment), contradicts it (contradiction), or is unrelated to it (neutral).

**Input** ('sentence_A', 'sentence_B' columns): sentence pair

**Output** ('entailment_judgment' column): one of the possible entailment relations (*entailment*, *contradiction*, *neutral*)

**Domain**: image captions

*Example:*

- b **entails** a (a **wynika z** b) – if the situation or event described by sentence b occurs, it is recognized that the situation or event described by sentence a occurs as well, i.e., a and b refer to the same event or situation;
Żaden mężczyzna nie stoi na przystanku autobusowym. (Eng. No man is standing at the bus stop.) vs. Mężczyzna z żółtą i białą reklamówką w ręce stoi na przystanku obok autobusu. (Eng. A man with a yellow and white plastic bag in his hand is standing at a bus stop next to a bus.) → **entailment**

**Measurements**: Accuracy
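
Accuracy here is simply the fraction of pairs whose predicted label exactly matches the gold label. A minimal sketch on hypothetical gold/predicted label lists (illustrative values, not taken from the corpus):

```python
# Hypothetical gold and predicted labels (illustrative only, not from the corpus)
gold = ["NEUTRAL", "ENTAILMENT", "NEUTRAL", "CONTRADICTION", "NEUTRAL"]
pred = ["NEUTRAL", "NEUTRAL", "NEUTRAL", "CONTRADICTION", "ENTAILMENT"]

# Accuracy = fraction of positions where prediction matches the gold label
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(accuracy)  # 0.6
```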

## Data splits

| Subset     | Cardinality |
| ---------- | ----------: |
| train      |        8000 |
| validation |        1000 |
| test       |        1000 |

## Class distribution

| Class         | train | validation |  test |
|:--------------|------:|-----------:|------:|
| NEUTRAL       | 0.744 |      0.741 | 0.744 |
| ENTAILMENT    | 0.179 |      0.185 | 0.190 |
| CONTRADICTION | 0.077 |      0.074 | 0.066 |
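
Proportions like the ones above can be reproduced by counting labels per split. A minimal sketch using `collections.Counter` on a hypothetical label list (in practice the labels would come from a column such as `dataset["train"]["entailment_judgment"]`):

```python
from collections import Counter

# Hypothetical label list standing in for a real split's labels
labels = ["NEUTRAL"] * 744 + ["ENTAILMENT"] * 179 + ["CONTRADICTION"] * 77

counts = Counter(labels)
distribution = {cls: round(n / len(labels), 3) for cls, n in counts.items()}
print(distribution)
# {'NEUTRAL': 0.744, 'ENTAILMENT': 0.179, 'CONTRADICTION': 0.077}
```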

## Citation

```
@inproceedings{wroblewska-krasnowska-kieras-2017-polish,
    title = "{P}olish evaluation dataset for compositional distributional semantics models",
    author = "Wr{\'o}blewska, Alina  and
      Krasnowska-Kiera{\'s}, Katarzyna",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1073",
    doi = "10.18653/v1/P17-1073",
    pages = "784--792",
    abstract = "The paper presents a procedure of building an evaluation dataset for the validation of compositional distributional semantics models estimated for languages other than English. The procedure generally builds on steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset. However, the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused by both lack of necessary extraneous resources for an investigated language and the need for language-specific transformation rules. The designed procedure is verified on Polish, a fusional language with a relatively free word order, and contributes to building a Polish evaluation dataset. The resource consists of 10K sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish.",
}
```

## License

```
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
```

## Links

[HuggingFace](https://huggingface.co/datasets/allegro/klej-cdsc-e)

[Source](http://zil.ipipan.waw.pl/Scwad/CDSCorpus)

[Paper](https://aclanthology.org/P17-1073.pdf)

## Examples

### Loading

```python
from pprint import pprint

from datasets import load_dataset

dataset = load_dataset("allegro/klej-cdsc-e")
pprint(dataset["train"][0])

# {'entailment_judgment': 'NEUTRAL',
#  'pair_ID': 1,
#  'sentence_A': 'Chłopiec w czerwonych trampkach skacze wysoko do góry '
#                'nieopodal fontanny .',
#  'sentence_B': 'Chłopiec w bluzce w paski podskakuje wysoko obok brązowej '
#                'fontanny .'}
```
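
Since the data ships as CSV, it can also be inspected with plain pandas. A minimal sketch on two hypothetical rows that mimic the schema shown above (column names as in the dataset, sentence values invented):

```python
import pandas as pd

# Two hypothetical rows mimicking the dataset schema (values invented)
rows = [
    {"pair_ID": 1, "sentence_A": "Chłopiec skacze.",
     "sentence_B": "Chłopiec podskakuje.", "entailment_judgment": "NEUTRAL"},
    {"pair_ID": 2, "sentence_A": "Kobieta śpiewa.",
     "sentence_B": "Kobieta nie śpiewa.", "entailment_judgment": "CONTRADICTION"},
]
df = pd.DataFrame(rows)

# Per-class counts over the label column
print(df["entailment_judgment"].value_counts())
```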

### Evaluation

```python
import random
from pprint import pprint

import evaluate  # datasets.load_metric was deprecated and removed; use the evaluate library
from datasets import load_dataset

dataset = load_dataset("allegro/klej-cdsc-e")
# map the string labels to integer class ids
dataset = dataset.class_encode_column("entailment_judgment")
references = dataset["test"]["entailment_judgment"]

# generate random predictions over the same label set
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]

acc = evaluate.load("accuracy")
f1 = evaluate.load("f1")

acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average="macro")

pprint(acc_score)
pprint(f1_score)

# example output (values vary run to run, since predictions are random):
# {'accuracy': 0.325}
# {'f1': 0.2736171695141161}
```