rfernand committed
Commit db982dc
1 Parent(s): 34d4947

Upload 29 files

README.md CHANGED
@@ -1,3 +1,233 @@
  ---
- license: other
+ annotations_creators:
+ - machine-generated
+ language:
+ - en
+ language_creators:
+ - machine-generated
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ pretty_name: Active/Passive/Logical Transforms
+ size_categories:
+ - 10K<n<100K
+ - 1K<n<10K
+ - n<1K
+ source_datasets:
+ - original
+ tags:
+ - struct2struct
+ - tree2tree
+ task_categories:
+ - text2text-generation
+ task_ids: []
  ---
+ # Dataset Card for Active/Passive/Logical Transforms
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Dataset Subsets (Tasks)](#dataset-subsets-tasks)
+ - [Data Splits](#data-splits)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [Roland Fernandez](mailto:[email protected])
+
+ ### Dataset Summary
+
+ This is a synthetic dataset of structure-to-structure transformation tasks between English sentences in three forms: active, passive, and logical. The dataset also includes several tree-transformation diagnostic/warm-up tasks.
+
+ ### Supported Tasks and Leaderboards
+
+ [TBD]
+
+ ### Languages
+
+ All data is in English.
+
+ ## Dataset Structure
+
+ The dataset consists of several subsets, or tasks. Each task contains a train split, a validation split, and a test split; most tasks also contain two out-of-distribution splits (one for new adjectives and one for longer adjective phrases).
+
+ Each sample in a split contains a source string, a target string, and 0-2 annotation strings.
+
+ ### Dataset Subsets (Tasks)
+ The dataset consists of diagnostic/warm-up tasks and core tasks. The core tasks cover the translation of English sentences between the active, passive, and logical forms.
+
+ The 12 diagnostic/warm-up tasks are:
+
+ ```
+ - car_cdr_cons (small phrase translation tasks that require only CAR, CDR, or CAR+CDR+CONS operations)
+ - car_cdr_cons_tuc (same task as car_cdr_cons, but requires mapping lowercase fillers to their uppercase tokens)
+ - car_cdr_rcons (same task as car_cdr_cons, but the CONS samples have their left/right children swapped)
+ - car_cdr_rcons_tuc (same task as car_cdr_rcons, but requires mapping lowercase fillers to their uppercase tokens)
+ - car_cdr_seq (each sample requires 1-4 combinations of CAR and CDR, as identified by the root filler token)
+ - car_cdr_seq_40k (same task as car_cdr_seq, but train samples increased from 10K to 40K)
+ - car_cdr_seq_tuc (same task as car_cdr_seq, but requires mapping lowercase fillers to their uppercase tokens)
+ - car_cdr_seq_40k_tuc (same task as car_cdr_seq_tuc, but train samples increased from 10K to 40K)
+ - car_cdr_seq_path (similar to car_cdr_seq, but each needed operation is represented as a node in the left child of the root)
+ - car_cdr_seq_path_40k (same task as car_cdr_seq_path, but train samples increased from 10K to 40K)
+ - car_cdr_seq_path_40k_tuc (same task as car_cdr_seq_path_40k, but requires mapping lowercase fillers to their uppercase tokens)
+ - car_cdr_seq_path_tuc (same task as car_cdr_seq_path, but requires mapping lowercase fillers to their uppercase tokens)
+ ```
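+
+ For readers unfamiliar with the Lisp-style operations these task names reference, the following minimal sketch (illustrative only, not part of the dataset tooling) shows CAR, CDR, and CONS on binary trees encoded as nested Python tuples; the dataset itself encodes trees as parenthesized strings:
+
+ ```python
+ # Illustrative sketch only: the dataset stores trees as parenthesized
+ # strings, not Python tuples; this just defines the named operations.
+ def car(tree):
+     """Return the left child of a tree node."""
+     return tree[0]
+
+ def cdr(tree):
+     """Return the right child of a tree node."""
+     return tree[1]
+
+ def cons(left, right):
+     """Build a new tree node from two children."""
+     return (left, right)
+
+ tree = (("a", "b"), "c")
+ assert car(tree) == ("a", "b")        # CAR selects the left child
+ assert cdr(tree) == "c"               # CDR selects the right child
+ assert cons("x", "y") == ("x", "y")   # CONS builds a new node
+ ```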
+
+ The 14 core tasks are:
+ ```
+ - active_active_stb (active sentence translation, from sentence to parenthesized tree form, both directions)
+ - active_active_stb_40k (same task as active_active_stb, but train samples increased from 10K to 40K)
+ - active_logical_ttb (active to logical tree translation, in both directions)
+ - active_logical_ttb_40k (same task as active_logical_ttb, but train samples increased from 10K to 40K)
+ - active_passive_ssb (active to passive sentence translation, in both directions)
+ - active_passive_ssb_40k (same task as active_passive_ssb, but train samples increased from 10K to 40K)
+ - active_passive_ttb (active to passive tree translation, in both directions)
+ - active_passive_ttb_40k (same task as active_passive_ttb, but train samples increased from 10K to 40K)
+ - actpass_logical_tt (mixture of active to logical and passive to logical tree translations, single direction)
+ - actpass_logical_tt_40k (same task as actpass_logical_tt, but train samples increased from 10K to 40K)
+ - passive_logical_ttb (passive to logical tree translation, in both directions)
+ - passive_logical_ttb_40k (same task as passive_logical_ttb, but train samples increased from 10K to 40K)
+ - passive_passive_stb (passive sentence translation, from sentence to parenthesized tree form, both directions)
+ - passive_passive_stb_40k (same task as passive_passive_stb, but train samples increased from 10K to 40K)
+ ```
+
+ ### Data Splits
+
+ Most tasks have the following splits:
+ - train
+ - validation
+ - test
+ - ood_new
+ - ood_long
+
+ Here is a table showing how the number of examples varies by split (for most tasks):
+
+ | Dataset Split | Number of Instances in Split |
+ | ------------- | ---------------------------- |
+ | train         | 10,000                       |
+ | validation    | 1,250                        |
+ | test          | 1,250                        |
+ | ood_new       | 1,250                        |
+ | ood_long      | 1,250                        |
+
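+ For reference, these split names map to the following files inside each task's zip archive (per the nc_pat.py loading script included in this commit):
+
+ ```python
+ # split-to-file mapping used by nc_pat.py (see the loading script below)
+ SPLIT_FILES = {
+     "train": "train.jsonl",
+     "validation": "dev.jsonl",
+     "test": "test.jsonl",
+     "ood_new": "ood_new_adj.jsonl",
+     "ood_long": "ood_long_adj.jsonl",
+ }
+ ```
+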
+ ### Data Instances
+
+ Each sample has a source and a target string. Depending on the task, these are either plain text or a parenthesized version of a tree.
+
+ Here is an example from the *train* split of the *active_passive_ttb* task:
+
+ ```
+ {
+   'source': '( S ( NP ( DET his ) ( AP ( N cat ) ) ) ( VP ( V discovered ) ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ) )',
+   'target': '( S ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ( VP ( AUXPS was ) ( VPPS ( V discovered ) ( PPPS ( PPS by ) ( NP ( DET his ) ( AP ( N cat ) ) ) ) ) ) )',
+   'direction': 'forward'
+ }
+ ```
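+
+ The tree strings are whitespace-tokenized, fully parenthesized expressions, so they are straightforward to parse. Here is a minimal parsing sketch (illustrative only, not part of the dataset tooling):
+
+ ```python
+ # parse a parenthesized tree string (like 'source' above) into nested lists,
+ # e.g. "( S ( NP ... ) )" -> ['S', ['NP', ...]]
+ def parse_tree(text):
+     stack, current = [], []
+     for tok in text.split():
+         if tok == "(":
+             stack.append(current)
+             current = []
+         elif tok == ")":
+             finished = current
+             current = stack.pop()
+             current.append(finished)
+         else:
+             current.append(tok)
+     return current[0]
+
+ tree = parse_tree("( S ( NP ( DET his ) ( AP ( N cat ) ) ) )")
+ assert tree[0] == "S" and tree[1][0] == "NP"
+ ```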
+
+ ### Data Fields
+
+ - `source`: the string denoting the sequence or tree structure to be translated
+ - `target`: the string denoting the gold (aka label) sequence or tree structure
+
+ Optional annotation fields (their presence varies by task):
+
+ - `direction`: the direction of the translation (forward, backward), relative to the task name
+ - `count`: a string denoting the count of symbolic operations needed to translate the source to the target (e.g., "s3")
+ - `class`: a string denoting the type of translation needed
+
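+ A task can be loaded with the standard `datasets` API through this repository's nc_pat.py loading script. A minimal sketch (the repository id below is an assumption; substitute the actual Hub id of this dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # hypothetical repository id; replace with the actual Hub id,
+ # or point load_dataset at a local copy of nc_pat.py
+ ds = load_dataset("rfernand/nc_pat", "active_passive_ttb")
+
+ sample = ds["train"][0]
+ print(sample["source"])     # parenthesized active-form tree
+ print(sample["target"])     # parenthesized passive-form tree
+ print(sample["direction"])  # 'forward' or 'backward'
+ ```
+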
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ We wanted a dataset of relatively simple English active/passive/logical form translations, where we could focus on two types of out-of-distribution generalization: longer source sequences and new adjectives.
+
+ ### Source Data
+
+ [N/A]
+
+ #### Initial Data Collection and Normalization
+
+ [N/A]
+
+ #### Who are the source language producers?
+
+ The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
+
+ ### Annotations
+
+ Besides the source and target structured sequences, some of the subsets (tasks) contain 1-2 additional columns that describe the category and tree depth of each sample.
+
+ #### Annotation process
+
+ The annotation columns were generated from each sample's template and source sequence.
+
+ #### Who are the annotators?
+
+ [N/A]
+
+ ### Personal and Sensitive Information
+
+ No names or other sensitive information are included in the data.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The purpose of this dataset is to help develop models that can translate structured data from one form to another, in a way that generalizes to out-of-distribution adjective values and lengths.
+
+ ### Discussion of Biases
+
+ [TBD]
+
+ ### Other Known Limitations
+
+ [TBD]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
+
+ ### Licensing Information
+
+ This dataset is released under the [Permissive 2.0 license](https://cdla.dev/permissive-2-0/).
+
+ ### Citation Information
+
+ [TBD]
+
+ ### Contributions
+
+ Thanks to [@rfernand2](https://github.com/rfernand2) for adding this dataset.
active_active_stb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9a78b795d48e844cef12629b695cd6fcfe584b366efdd0b8f564e2ea5fa5838
+ size 370739
active_active_stb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a61f9c742a4d9121807935f037be2d2d16798f11dd48af9a4ea2d3e53b2201cf
+ size 1467464
active_logical_ttb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50374cc9e5173b33916687a51f6783feb50ca291e25a09c6c8c06dd72c98cb8a
+ size 332978
active_logical_ttb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7300e6f7020ad367f1f4977f398b703e818bda32e8e9a924ec3db0c5a7d4583
+ size 1318875
active_passive_ssb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c752f22c5ee36f39d99f058fe08e7c7a74824ba20fd2eb0aba804f4b6d5fd42
+ size 266018
active_passive_ssb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a534a0a6979cd36aa1ea5991c8b528db74f361bd1c93faa118c147daa5e9053
+ size 1050369
active_passive_ttb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:360f07d8898a2ba1cce8d3350f35d515fd630e7d3c5f4bd52ec470ae2754645c
+ size 365532
active_passive_ttb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94c580577add92d75f6748eb404d899a0116a3910e482dc296c923ea71c0764a
+ size 1447423
actpass_logical_tt.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14396b24a8dffed30204f965994adf0e112912843a940126d596a689e5c41413
+ size 335073
actpass_logical_tt_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2ba2b4abc16268bf0d7c044adf0b7b5d95a42ee1ad2f97696598f3d8cef20e6
+ size 1327673
car_cdr_cons.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ba06a2f9de2cc7530b3846e3e36913411e26a3bc3a140c4e6e1ece53b6fc7c5
+ size 5244
car_cdr_cons_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9b5954a9639882ff8669e61ee0d4d3f957dcbc0071ba5c0161f734cc00f6f26
+ size 5634
car_cdr_rcons.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d32c7cb899c0251f14bcbfb8b01914a068d787f519d3f2950079d193a935341
+ size 5323
car_cdr_rcons_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50f8976fc948127a26fa4f2297fddaf84d530685dce9eff76048b46bfe615b68
+ size 5746
car_cdr_seq.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f6852343f289322cf191247b29a34e44bca75ad5bdf9b1f47b0d86452af8d22
+ size 334088
car_cdr_seq_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:227647b0c0f121d3c49dfa66187d0572768388d3f04ef89696456bd1875d5c7f
+ size 1317512
car_cdr_seq_40k_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79a83bdf7e785cc836e32b63fb2c93b8f381ca24a0ac4ef2cb80e99cb3f24d5c
+ size 1356014
car_cdr_seq_path.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21eae58557c8d872b68fbe3f4a07f2794fbce02916d9b7517ba7d721e5f9476f
+ size 350890
car_cdr_seq_path_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f995b1758a02d3dcb83e00d61d3aff7e4151006259f9485be44d82ffa998959
+ size 1391078
car_cdr_seq_path_40k_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1966c2936c2ec816f1652a3fd2e8909d97b416c06d9412a2b5b7c082d86edff
+ size 1433208
car_cdr_seq_path_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13268c6c054f316da29941047079b0febbfb61370788b123ed994e54db8f07ef
+ size 361916
car_cdr_seq_tuc.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c8d1fe1a2dbd8a7bb443c0b02e5f813080f315d8aebabfebdf5be34155813a6
+ size 341644
nc_pat.py ADDED
@@ -0,0 +1,205 @@
+ # nc_pat.py: the HF datasets "loading script" for the NC_PAT dataset (defines configurations/tasks, columns, etc.)
+ import os
+ import json
+ import datasets
+ from datasets import Split, SplitGenerator
+
+ # feature schemas shared by the task configurations below
+ no_extra = {
+     "source": datasets.Value("string"),
+     "target": datasets.Value("string"),
+ }
+
+ samp_class = {
+     "source": datasets.Value("string"),
+     "target": datasets.Value("string"),
+     "class": datasets.Value("string"),
+ }
+
+ count_class = {
+     "source": datasets.Value("string"),
+     "target": datasets.Value("string"),
+     "count": datasets.Value("string"),
+     "class": datasets.Value("string"),
+ }
+
+ dir_only = {
+     "source": datasets.Value("string"),
+     "target": datasets.Value("string"),
+     "direction": datasets.Value("string"),
+ }
+
+ configs = [
+     {"name": "car_cdr_cons",
+      "desc": "small phrase translation tasks that require only CAR, CDR, or CAR+CDR+CONS operations",
+      "features": samp_class},
+
+     {"name": "car_cdr_cons_tuc",
+      "desc": "same task as car_cdr_cons, but requires mapping lowercase fillers to their uppercase tokens",
+      "features": samp_class},
+
+     {"name": "car_cdr_rcons",
+      "desc": "same task as car_cdr_cons, but the CONS samples have their left/right children swapped",
+      "features": samp_class},
+
+     {"name": "car_cdr_rcons_tuc",
+      "desc": "same task as car_cdr_rcons, but requires mapping lowercase fillers to their uppercase tokens",
+      "features": samp_class},
+
+     {"name": "car_cdr_seq",
+      "desc": "each sample requires 1-4 combinations of CAR and CDR, as identified by the root filler token",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_40k",
+      "desc": "same task as car_cdr_seq, but train samples increased from 10K to 40K",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_tuc",
+      "desc": "same task as car_cdr_seq, but requires mapping lowercase fillers to their uppercase tokens",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_40k_tuc",
+      "desc": "same task as car_cdr_seq_tuc, but train samples increased from 10K to 40K",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_path",
+      "desc": "similar to car_cdr_seq, but each needed operation is represented as a node in the left child of the root",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_path_40k",
+      "desc": "same task as car_cdr_seq_path, but train samples increased from 10K to 40K",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_path_40k_tuc",
+      "desc": "same task as car_cdr_seq_path_40k, but requires mapping lowercase fillers to their uppercase tokens",
+      "features": count_class},
+
+     {"name": "car_cdr_seq_path_tuc",
+      "desc": "same task as car_cdr_seq_path, but requires mapping lowercase fillers to their uppercase tokens",
+      "features": count_class},
+
+     {"name": "active_active_stb",
+      "desc": "active sentence translation, from sentence to parenthesized tree form, both directions",
+      "features": dir_only},
+
+     {"name": "active_active_stb_40k",
+      "desc": "same task as active_active_stb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+
+     {"name": "active_logical_ttb",
+      "desc": "active to logical tree translation, in both directions",
+      "features": dir_only},
+
+     {"name": "active_logical_ttb_40k",
+      "desc": "same task as active_logical_ttb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+
+     {"name": "active_passive_ssb",
+      "desc": "active to passive sentence translation, in both directions",
+      "features": dir_only},
+
+     {"name": "active_passive_ssb_40k",
+      "desc": "same task as active_passive_ssb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+
+     {"name": "active_passive_ttb",
+      "desc": "active to passive tree translation, in both directions",
+      "features": dir_only},
+
+     {"name": "active_passive_ttb_40k",
+      "desc": "same task as active_passive_ttb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+
+     {"name": "actpass_logical_tt",
+      "desc": "mixture of active to logical and passive to logical tree translations, single direction",
+      "features": no_extra},
+
+     {"name": "actpass_logical_tt_40k",
+      "desc": "same task as actpass_logical_tt, but train samples increased from 10K to 40K",
+      "features": no_extra},
+
+     {"name": "passive_logical_ttb",
+      "desc": "passive to logical tree translation, in both directions",
+      "features": dir_only},
+
+     {"name": "passive_logical_ttb_40k",
+      "desc": "same task as passive_logical_ttb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+
+     {"name": "passive_passive_stb",
+      "desc": "passive sentence translation, from sentence to parenthesized tree form, both directions",
+      "features": dir_only},
+
+     {"name": "passive_passive_stb_40k",
+      "desc": "same task as passive_passive_stb, but train samples increased from 10K to 40K",
+      "features": dir_only},
+ ]
+
+ class NcPatConfig(datasets.BuilderConfig):
+     """BuilderConfig for the NC_PAT dataset."""
+
+     def __init__(self, features=None, **kwargs):
+         # Version history:
+         # 0.0.17: Initial version released to HF datasets
+         super().__init__(version=datasets.Version("0.0.17"), **kwargs)
+
+         self.features = features
+         self.label_classes = None
+         self.data_url = "./{}.zip".format(kwargs["name"])
+         self.citation = None
+         self.homepage = None
+
+ class NcPat(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [NcPatConfig(name=c["name"], description=c["desc"], features=c["features"]) for c in configs]
+     VERSION = datasets.Version("0.0.17")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description="The dataset consists of diagnostic/warm-up tasks and core tasks. "
+                         "The core tasks represent the translation of English sentences between the active, passive, and logical forms.",
+             features=datasets.Features(self.config.features),
+             # no default supervised_keys; each sample is a (source, target) pair plus optional annotation columns
+             supervised_keys=None,
+             homepage=None,
+             citation=None,
+         )
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager):
+         url = self.config.data_url
+         dl_dir = dl_manager.download_and_extract(url)
+         task = self.config.name
+
+         splits = [
+             SplitGenerator(name=Split.TRAIN, gen_kwargs={"data_file": os.path.join(dl_dir, "train.jsonl")}),
+             SplitGenerator(name=Split.VALIDATION, gen_kwargs={"data_file": os.path.join(dl_dir, "dev.jsonl")}),
+             SplitGenerator(name=Split.TEST, gen_kwargs={"data_file": os.path.join(dl_dir, "test.jsonl")}),
+         ]
+
+         # the car_cdr_cons* and car_cdr_rcons* warm-up tasks have no out-of-distribution splits
+         if not task.startswith("car_cdr_cons") and not task.startswith("car_cdr_rcons"):
+             splits += [
+                 SplitGenerator(name="ood_new", gen_kwargs={"data_file": os.path.join(dl_dir, "ood_new_adj.jsonl")}),
+                 SplitGenerator(name="ood_long", gen_kwargs={"data_file": os.path.join(dl_dir, "ood_long_adj.jsonl")}),
+             ]
+
+         return splits
+
+     def _generate_examples(self, data_file):
+         # each line of the jsonl file is one sample; the line index serves as the key
+         with open(data_file, encoding="utf-8") as f:
+             for i, line in enumerate(f):
+                 yield str(i), json.loads(line)
+
+ if __name__ == "__main__":
+     # short test: print the first configuration
+     config = NcPat.BUILDER_CONFIGS[0]
+     print("name: {}, desc: {}".format(config.name, config.description))
+
passive_logical_ttb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95cbe08ed9ff4af510ef24708dde7dc57b7beecd3a85ac415df6e8e763f3709c
+ size 363364
passive_logical_ttb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8192baf4c5e1e5cca1a9ab24a9ab00df1ff587376140808262ab3c923da43254
+ size 1442210
passive_passive_stb.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af83d88bb288b88b4b9337d12bf4eca296faea66ba40886a6748f5549e9c92cf
+ size 391492
passive_passive_stb_40k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8efd9d40efaf628aae8a5ec75673cc6b027a72c05086023af93679d5808356b8
+ size 1558842
use.txt ADDED
@@ -0,0 +1,35 @@
+ Community Data License Agreement - Permissive - Version 2.0
+
+ This is the Community Data License Agreement - Permissive, Version 2.0 (the "agreement"). Data Provider(s) and Data Recipient(s) agree as follows:
+
+ 1. Provision of the Data
+
+ 1.1. A Data Recipient may use, modify, and share the Data made available by Data Provider(s) under this agreement if that Data Recipient follows the terms of this agreement.
+
+ 1.2. This agreement does not impose any restriction on a Data Recipient's use, modification, or sharing of any portions of the Data that are in the public domain or that may be used, modified, or shared under any other legal exception or limitation.
+
+ 2. Conditions for Sharing Data
+
+ 2.1. A Data Recipient may share Data, with or without modifications, so long as the Data Recipient makes available the text of this agreement with the shared Data.
+
+ 3. No Restrictions on Results
+
+ 3.1. This agreement does not impose any restriction or obligations with respect to the use, modification, or sharing of Results.
+
+ 4. No Warranty; Limitation of Liability
+
+ 4.1. All Data Recipients receive the Data subject to the following terms:
+
+ THE DATA IS PROVIDED ON AN "AS IS" BASIS, WITHOUT REPRESENTATIONS, WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
+
+ NO DATA PROVIDER SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE DATA OR RESULTS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 5. Definitions
+
+ 5.1. "Data" means the material received by a Data Recipient under this agreement.
+
+ 5.2. "Data Provider" means any person who is the source of Data provided under this agreement and in reliance on a Data Recipient's agreement to its terms.
+
+ 5.3. "Data Recipient" means any person who receives Data directly or indirectly from a Data Provider and agrees to the terms of this agreement.
+
+ 5.4. "Results" means any outcome obtained by computational analysis of Data, including for example machine learning models and models' insights.