michal-stefanik committed on
Commit 75b7816
1 Parent(s): 616f032

Update README.md

Files changed (1):
  1. README.md +81 -2
README.md CHANGED
@@ -8,7 +8,7 @@ dataset_info:
  dtype: string
  - name: sentence
  dtype: string
- - name: y
+ - name: 'y'
  dtype: string
  - name: confidence
  dtype: string
@@ -66,7 +66,7 @@ dataset_info:
  dtype: string
  - name: sentence
  dtype: string
- - name: y
+ - name: 'y'
  dtype: string
  - name: confidence
  dtype: string
@@ -105,4 +105,83 @@ configs:
  path: en/validation-*
  - split: test
  path: en/test-*
+ license: apache-2.0
+ task_categories:
+ - text-classification
+ language:
+ - en
+ - cs
+ tags:
+ - education
+ pretty_name: Czech-English Reflective Dataset (CEReD)
  ---
+
+ # Dataset Card for Czech-English Reflective Dataset (CEReD)
+
+ This directory contains an anonymized dataset of separated sentences and original reflective journals collected within the Reflection Classification project: https://github.com/EduMUNI/reflection-classification
+ See the project repository for more details, and the [corresponding paper](https://rdcu.be/cUWGY) for the data curation methodology.
+
+ The data is available in two types of subsets:
+
+ 1. The `cs-orig-diaries` subset contains the full texts of the original reflective journals together with the responses to our questionnaire.
+ Entries in this subset contain the following attributes:
+ * `id`: unique reflective diary id
+ * `person_id`: synthetic id of the author of the diary
+ * `subject`: subject that the reflective diary concerns
+ * `ordering`: rank of the diary relative to the other diaries of the same author
+ * `Q1`: Teacher evaluation: "Student treated the leading teacher with respect."
+ * `Q2`: Teacher evaluation: "Student took responsibility in preparing for practice."
+ * `Q3`: Teacher evaluation: "Student discussed specific means of their further development."
+ * `Q4`: Teacher evaluation: "Student actively asked me for support, feedback, and reflection."
+ * `Q5`: Teacher evaluation: "Student actively reflected on their activity in practice."
+ * `Q6`: Teacher evaluation: "Student recognized the situation of the class and reacted to it with a selected strategy."
+ * `Q7`: Teacher evaluation: "Student showed interest in the situation at the school in general."
+ * `diary`: text of the reflective diary
+
+ All questions `Q[1-7]` are part of the questionnaire
+ filled in by the supervising teacher of the relevant practice.
+ The questionnaire concerned the performance evaluation of
+ the student teacher who authored the reflective diary.
+
+ 2. The `cs` and `en` subsets contain separate sentences that can be used for training a classifier, in
+ the selected language: original Czech (`cs`) or translated English (`en`).
+ Sentences are divided into train, validation, and test sets.
+ These splits can be used to evaluate a classifier on the same
+ data as we did, hence they allow for comparability of
+ the results.
+ The tab-separated `sentences.tsv` files contain the following
+ attributes:
+ * `idx`: unique sentence id
+ * `context`: textual context surrounding the classified sentence
+ * `sentence`: text of the classified sentence
+ * `y`: target category of the sentence that annotators agreed upon
+ * `confidence`: confidence, or typicality, of the sentence in its assigned category. Annotators were asked: "How typical is this sentence for the picked category?"
+ * `y_requires_context`: whether annotators needed to look at the context when selecting a category
+
+ # Citation
+
+ For the data collection methodology:
+ ```bibtex
+ @Article{Nehyba2022applications,
+ author={Nehyba, Jan and {\v{S}}tef{\'a}nik, Michal},
+ title={Applications of deep language models for reflective writings},
+ journal={Education and Information Technologies},
+ year={2022},
+ month={Sep},
+ day={05},
+ issn={1573-7608},
+ doi={10.1007/s10639-022-11254-7},
+ url={https://doi.org/10.1007/s10639-022-11254-7}
+ }
+ ```
+
+ For the dataset itself:
+ ```bibtex
+ @misc{Stefanik2021CEReD,
+ title = {Czech and English Reflective Dataset ({CEReD})},
+ author = {{\v S}tef{\'a}nik, Michal and Nehyba, Jan},
+ url = {http://hdl.handle.net/11372/LRT-3573},
+ copyright = {Creative Commons - Attribution 4.0 International ({CC} {BY} 4.0)},
+ year = {2021}
+ }
+ ```
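
The `sentences.tsv` schema described in the added card can be exercised with a short sketch. This is a minimal illustration, not part of the dataset tooling: the two sample rows and the category labels (`Feeling`, `Description`) are hypothetical, while the column names come from the card's attribute list.

```python
# Minimal sketch of reading a tab-separated sentences file with the
# column layout from the dataset card. The sample rows are hypothetical.
import csv
import io

sample_tsv = (
    "idx\tcontext\tsentence\ty\tconfidence\ty_requires_context\n"
    "0\tSome surrounding text.\tI felt the lesson went well.\tFeeling\t5\tFalse\n"
    "1\tMore surrounding text.\tThe class has 25 pupils.\tDescription\t4\tTrue\n"
)

# csv.DictReader with a tab delimiter maps each row to the card's attributes.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))

for row in rows:
    print(row["idx"], row["y"], row["confidence"])
```

For the real files, replace the in-memory string with `open("sentences.tsv", newline="")`; the reader then yields one dict per annotated sentence, keyed by the six attributes above.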