---
dataset_info:
- config_name: cs
  features:
  - name: idx
    dtype: int64
  - name: context
    dtype: string
  - name: sentence
    dtype: string
  - name: 'y'
    dtype: string
  - name: confidence
    dtype: string
  - name: y_requires_context
    dtype: string
  splits:
  - name: train
    num_bytes: 3069614
    num_examples: 6096
  - name: validation
    num_bytes: 173932
    num_examples: 339
  - name: test
    num_bytes: 168255
    num_examples: 339
  download_size: 2135425
  dataset_size: 3411801
- config_name: cs-orig-diaries
  features:
  - name: id
    dtype: int64
  - name: person_id
    dtype: int64
  - name: subject
    dtype: string
  - name: ordering
    dtype: int64
  - name: Q1
    dtype: int64
  - name: Q2
    dtype: int64
  - name: Q3
    dtype: int64
  - name: Q4
    dtype: int64
  - name: Q5
    dtype: int64
  - name: Q6
    dtype: int64
  - name: Q7
    dtype: int64
  - name: diary
    dtype: string
  splits:
  - name: train
    num_bytes: 3071134
    num_examples: 950
  download_size: 1845241
  dataset_size: 3071134
- config_name: en
  features:
  - name: idx
    dtype: int64
  - name: context
    dtype: string
  - name: sentence
    dtype: string
  - name: 'y'
    dtype: string
  - name: confidence
    dtype: string
  - name: y_requires_context
    dtype: string
  splits:
  - name: train
    num_bytes: 3011633
    num_examples: 6096
  - name: validation
    num_bytes: 170585
    num_examples: 339
  - name: test
    num_bytes: 169709
    num_examples: 339
  download_size: 1876865
  dataset_size: 3351927
configs:
- config_name: cs
  data_files:
  - split: train
    path: cs/train-*
  - split: validation
    path: cs/validation-*
  - split: test
    path: cs/test-*
- config_name: cs-orig-diaries
  data_files:
  - split: train
    path: cs-orig-diaries/train-*
- config_name: en
  data_files:
  - split: train
    path: en/train-*
  - split: validation
    path: en/validation-*
  - split: test
    path: en/test-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
- cs
tags:
- education
pretty_name: Czech-English Reflective Dataset (CEReD)
---

# Dataset Card for Czech-English Reflective Dataset (CEReD)

This directory contains an anonymized dataset of separated sentences and original reflective journals collected within the Reflection Classification project: https://github.com/EduMUNI/reflection-classification
See the project repository and the [corresponding paper](https://rdcu.be/cUWGY) for more details on the data curation methodology.

The data is available in two types of subsets:

1. The `cs-orig-diaries` subset contains the full texts of the original reflective journals together with the responses to our questionnaire.
   Entries in this subset contain the following attributes:
    * `id`: unique reflective diary id
    * `person_id`: synthetic id of the author of the diary
    * `subject`: subject that the reflective diary concerns
    * `ordering`: rank of the diary relative to other diaries by the same author
    * `Q1`: Teacher evaluation: "Student treated the leading teacher with respect."
    * `Q2`: Teacher evaluation: "Student took responsibility in preparing for the practice."
    * `Q3`: Teacher evaluation: "Student discussed specific means of their further development."
    * `Q4`: Teacher evaluation: "Student actively asked me for support, feedback, and reflection."
    * `Q5`: Teacher evaluation: "Student actively reflected on their activity during the practice."
    * `Q6`: Teacher evaluation: "Student recognized the situation of the class and reacted to it with a selected strategy."
    * `Q7`: Teacher evaluation: "Student showed interest in the situation of the school in general."
    * `diary`: text of the reflective diary

   All questions `Q[1-7]` come from the questionnaire filled in by the supervising teacher of the relevant practice.
   The questionnaire concerns the performance evaluation of the student teacher who authored the reflective diary.
   
2. The `cs` and `en` subsets contain separated sentences that can be used for training a classifier, either in the original Czech (`cs`) or in the translated English (`en`).
   Sentences are divided into train, validation, and test splits.
   Evaluating a classifier on the same test split that we used allows for direct comparability of the results.
   Each entry (originally a row of the tab-separated `sentences.tsv` files) contains the following attributes (a loading sketch follows this list):
    * `idx`: unique sentence id
    * `context`: textual context surrounding the classified sentence
    * `sentence`: text of the classified sentence
    * `y`: target category of the sentence, agreed upon by the annotators
    * `confidence`: confidence, or typicality, of the sentence in its assigned category. Annotators were asked: "How typical is this sentence for the picked category?"
    * `y_requires_context`: whether annotators needed to look at the context when selecting a category
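Both sentence subsets can be loaded directly with the 🤗 `datasets` library. Below is a minimal loading sketch; the repository id is a placeholder and must be replaced with the actual Hugging Face repo id of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hugging Face repo id of this dataset.
REPO_ID = "<namespace>/CEReD"

# Load the Czech sentence-level subset; use "en" for the translated English variant.
cered_cs = load_dataset(REPO_ID, "cs")
print(cered_cs)  # DatasetDict with "train", "validation" and "test" splits

# Inspect one annotated sentence and its attributes.
example = cered_cs["train"][0]
print(example["sentence"], example["y"], example["confidence"], example["y_requires_context"])
```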

For details on the taxonomy of the annotated categories, we also make available the [annotation manual](https://github.com/EduMUNI/reflection-classification/blob/master/data/annotation_manual.pdf).
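Since the data is tabular, the `cs-orig-diaries` subset can also be explored conveniently with pandas. A small sketch, again using a placeholder repository id:

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hugging Face repo id of this dataset.
REPO_ID = "<namespace>/CEReD"

# The diaries subset ships a single "train" split.
diaries = load_dataset(REPO_ID, "cs-orig-diaries", split="train")

# Convert to pandas to explore the teacher questionnaire scores (Q1-Q7) and diary lengths.
df = diaries.to_pandas()
print(df[[f"Q{i}" for i in range(1, 8)]].mean())  # average teacher evaluation per question
print(df["diary"].str.len().describe())           # character-length distribution of the diaries
```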

# Citation

For the data collection methodology:
```bibtex
@Article{Nehyba2022applications,
  author={Nehyba, Jan and {\v{S}}tef{\'a}nik, Michal},
  title={Applications of deep language models for reflective writings},
  journal={Education and Information Technologies},
  year={2022},
  month={Sep},
  day={05},
  issn={1573-7608},
  doi={10.1007/s10639-022-11254-7},
  url={https://doi.org/10.1007/s10639-022-11254-7}
}
```

For the dataset itself:
```bibtex
@misc{Stefanik2021CEReD,
  title = {Czech and English Reflective Dataset ({CEReD})},
  author = {{\v S}tef{\'a}nik, Michal and Nehyba, Jan},
  url = {http://hdl.handle.net/11372/LRT-3573},
  copyright = {Creative Commons - Attribution 4.0 International ({CC} {BY} 4.0)},
  year = {2021}
}
```