martynawck committed • Commit 63ad164 • Parent: d81c5c3

Update README.md (#1)

README.md CHANGED
@@ -17,6 +17,7 @@ source_datasets:
 tags:
 - National Corpus of Polish
 - Narodowy Korpus Języka Polskiego
+- Universal Dependencies
 task_categories:
 - token-classification
 task_ids:

@@ -166,6 +167,69 @@ and then we repeat the above-mentioned procedure per category. This provides us
 
 This resource can mainly be used for training morphosyntactic analyzer models for Polish. It supports tasks such as lemmatization, part-of-speech tagging and dependency parsing.
 
+### Supported versions
+
+This dataset is available in two tagsets and three file formats.
+
+Tagsets:
+- UD
+- NKJP
+
+File formats:
+- conllu
+- conll
+- conll with SpaceAfter token
+
+All the available combinations can be found below (a short loading sketch follows the list):
+
+- fair_by_name + nkjp tagset + conllu format
+
+```
+load_dataset("nlprepl", name="by_name-nkjp-conllu")
+```
+
+- fair_by_name + nkjp tagset + conll format
+
+```
+load_dataset("nlprepl", name="by_name-nkjp-conll")
+```
+
+- fair_by_name + nkjp tagset + conll-SpaceAfter format
+
+```
+load_dataset("nlprepl", name="by_name-nkjp-conll_space_after")
+```
+
+- fair_by_name + UD tagset + conllu format
+
+```
+load_dataset("nlprepl", name="by_name-ud-conllu")
+```
+
+- fair_by_type + nkjp tagset + conllu format
+
+```
+load_dataset("nlprepl", name="by_type-nkjp-conllu")
+```
+
+- fair_by_type + nkjp tagset + conll format
+
+```
+load_dataset("nlprepl", name="by_type-nkjp-conll")
+```
+
+- fair_by_type + nkjp tagset + conll-SpaceAfter format
+
+```
+load_dataset("nlprepl", name="by_type-nkjp-conll_space_after")
+```
+
+- fair_by_type + UD tagset + conllu format
+
+```
+load_dataset("nlprepl", name="by_type-ud-conllu")
+```
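
For a quick sanity check of the configuration names above, a minimal loading sketch (assuming the `datasets` library is installed and that the dataset resolves under the `nlprepl` path used in the snippets above):

```
from datasets import load_dataset

# Load one tagset/format combination by its configuration name.
dataset = load_dataset("nlprepl", name="by_name-nkjp-conllu")

# The result is a DatasetDict; print whatever splits this configuration provides.
for split_name, split in dataset.items():
    print(split_name, split.num_rows, split.column_names)
```
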
 ### Languages
 
 Polish (monolingual)

@@ -174,29 +238,51 @@
 
 ### Data Instances
 
+
+"sent_id": datasets.Value("string"),
+"text": datasets.Value("string"),
+"id": datasets.Sequence(datasets.Value("string")),
+"tokens": datasets.Sequence(datasets.Value("string")),
+"lemmas": datasets.Sequence(datasets.Value("string")),
+"upos": datasets.Sequence(datasets.Value("string")),
+"xpos": datasets.Sequence(datasets.Value("string")),
+"feats": datasets.Sequence(datasets.Value("string")),
+"head": datasets.Sequence(datasets.Value("string")),
+"deprel": datasets.Sequence(datasets.Value("string")),
+"deps": datasets.Sequence(datasets.Value("string")),
+"misc": datasets.Sequence(datasets.Value("string")),
 ```
+{
+  'sent_id': '3',
+  'text': 'I zawrócił na rzekę.',
+  'orig_file_sentence': '030-2-000000002#2-3',
+  'id': ['1', '2', '3', '4', '5'],
+  'tokens': ['I', 'zawrócił', 'na', 'rzekę', '.'],
+  'lemmas': ['i', 'zawrócić', 'na', 'rzeka', '.'],
+  'upos': ['conj', 'praet', 'prep', 'subst', 'interp'],
+  'xpos': ['con', 'praet:sg:m1:perf', 'prep:acc', 'subst:sg:acc:f', 'interp'],
+  'feats': ['', 'sg|m1|perf', 'acc', 'sg|acc|f', ''],
+  'head': ['0', '1', '2', '3', '1'],
+  'deprel': ['root', 'conjunct', 'adjunct', 'comp', 'punct'],
+  'deps': ['', '', '', '', ''],
+  'misc': ['', '', '', '', '']
+}
 ```
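
A short sketch of how one of these instances can be inspected after loading (assuming one of the conllu configurations listed under Supported versions and the field names from the schema above):

```
from datasets import load_dataset

dataset = load_dataset("nlprepl", name="by_name-nkjp-conllu")

# Take the first split the configuration provides and its first sentence.
split = dataset[list(dataset.keys())[0]]
example = split[0]

# Print one token per line with its lemma, POS tag, head and relation.
for token, lemma, upos, head, deprel in zip(
    example["tokens"], example["lemmas"], example["upos"],
    example["head"], example["deprel"],
):
    print(f"{token}\t{lemma}\t{upos}\t{head}\t{deprel}")
```
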
 
 ### Data Fields
 
+- `sent_id`, `text`, `orig_file_sentence` (strings): XML identifiers of the source text (document), paragraph and sentence in NKJP; they allow mapping a data point back to the source corpus and identifying paragraphs/samples.
+- `id` (sequence of strings): IDs of the corresponding tokens.
 - `tokens` (sequence of strings): tokens of the text defined as in NKJP.
 - `lemmas` (sequence of strings): lemmas corresponding to the tokens.
+- `upos` (sequence of strings): universal part-of-speech tags corresponding to the tokens.
+- `xpos` (sequence of labels): optional language-specific (or treebank-specific) part-of-speech / morphological tag; underscore if not available.
+- `feats` (sequence of labels): list of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
+- `head` (sequence of labels): head of the current word, which is either a value of `id` or zero (0).
+- `deprel` (sequence of labels): universal dependency relation to the HEAD of the token (see the sketch after this list).
+- `deps` (sequence of labels): enhanced dependency graph in the form of a list of head-deprel pairs.
+- `misc` (sequence of labels): any other annotation (most commonly the SpaceAfter tag).
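
To illustrate how `id`, `head` and `deprel` fit together, a small self-contained sketch that prints one dependency edge per token of the example instance shown above (plain Python, no extra dependencies):

```
# Example values copied from the Data Instances section above.
example = {
    "id": ["1", "2", "3", "4", "5"],
    "tokens": ["I", "zawrócił", "na", "rzekę", "."],
    "head": ["0", "1", "2", "3", "1"],
    "deprel": ["root", "conjunct", "adjunct", "comp", "punct"],
}

# Map token ids to surface forms; head "0" denotes the artificial root.
forms = dict(zip(example["id"], example["tokens"]))
forms["0"] = "ROOT"

for token, head, deprel in zip(example["tokens"], example["head"], example["deprel"]):
    print(f"{forms[head]} --{deprel}--> {token}")
```
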
 
 ### Data Splits
 