---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NLPre-PL_dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- National Corpus of Polish
- Narodowy Korpus Języka Polskiego
task_categories:
- token-classification
task_ids:
- part-of-speech
- lemmatization
- parsing
dataset_info:
- config_name: nlprepl_by_name
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': _
'14': ADV
'15': INTJ
'16': VERB
'17': AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 0
num_examples: 69360
- name: dev
num_bytes: 0
num_examples: 7669
- name: test
num_bytes: 0
num_examples: 8633
download_size: 3088237
dataset_size: 5120697
- config_name: nlprepl_by_type
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': _
'14': ADV
'15': INTJ
'16': VERB
'17': AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 0
num_examples: 68943
- name: dev
num_bytes: 0
num_examples: 7755
- name: test
num_bytes: 0
num_examples: 8964
download_size: 3088237
dataset_size: 5120697
---
# Dataset Card for NLPre-PL – a fairly divided version of NKJP1M
### Dataset Summary
This is the official NLPre-PL dataset: a uniformly divided, paragraph-level version of the NKJP1M corpus, the 1-million-token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego).
The NLPre-PL dataset aims at dividing the paragraphs fairly, length-wise and topic-wise, into train, development, and test sets. This ensures a similar distribution of segment counts per paragraph across the splits
and avoids the situation where paragraphs with a small (or large) number of segments appear only in, e.g., the test set.
We treat paragraphs as indivisible units, to ensure there is no data leakage between the splits. Each paragraph inherits the corresponding document's ID and type (a book, an article, etc.).
We provide two variants of the dataset, based on the fair division of paragraphs (each available as a separate configuration; see the loading sketch after the list):
- fair by document's ID
- fair by document's type
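Both variants correspond to the configurations `nlprepl_by_name` and `nlprepl_by_type` in the metadata above. A minimal loading sketch with the 🤗 `datasets` library follows; the repository ID `ipipan/nlprepl` is an assumption, so substitute the actual Hub ID if it differs:
```python
# Minimal loading sketch; the repository ID "ipipan/nlprepl" is an
# assumption and may need to be replaced with the actual Hub ID.
from datasets import load_dataset

by_name = load_dataset("ipipan/nlprepl", name="nlprepl_by_name")  # fair by document ID
by_type = load_dataset("ipipan/nlprepl", name="nlprepl_by_type")  # fair by document type

print(by_name)              # DatasetDict with train/dev/test splits
print(by_name["train"][0])  # one annotated sentence
```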
### Creation of the dataset
We investigate the distribution of the number of segments per paragraph. Since this distribution is Gaussian-like, we divide the paragraphs into 10 buckets of roughly similar size and then sample from each bucket with ratios of 0.8 : 0.1 : 0.1
(corresponding to the training, development, and test subsets).
This data selection technique ensures a similar distribution of segment counts per paragraph across our three subsets. We call this split **fair_by_name** (short: **by_name**),
since it divides the data equitably with regard to the unique IDs of the documents.
For our second split, we also consider the type of document a paragraph belongs to. We first group paragraphs into categories corresponding to the document types,
and then we repeat the above procedure per category. This yields the second split: **fair_by_type** (short: **by_type**). A schematic sketch of the procedure is given below.
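The sketch below is a schematic re-implementation of the described 10-bucket, 0.8 : 0.1 : 0.1 procedure, not the original code; function and variable names are illustrative:
```python
# Schematic sketch of the split procedure described above (illustrative only).
import random

def fair_split(paragraphs, n_buckets=10, seed=0):
    """paragraphs: list of (paragraph_id, n_segments) pairs."""
    rng = random.Random(seed)
    # Sort by segment count and cut into buckets of roughly equal size.
    ordered = sorted(paragraphs, key=lambda p: p[1])
    bucket_size = -(-len(ordered) // n_buckets)  # ceiling division
    train, dev, test = [], [], []
    for i in range(0, len(ordered), bucket_size):
        bucket = ordered[i:i + bucket_size]
        rng.shuffle(bucket)
        n_dev = round(0.1 * len(bucket))
        n_test = round(0.1 * len(bucket))
        dev.extend(bucket[:n_dev])
        test.extend(bucket[n_dev:n_dev + n_test])
        train.extend(bucket[n_dev + n_test:])  # remaining ~0.8
    return train, dev, test

# For fair_by_type: group paragraphs by document type first,
# then apply fair_split within each group and merge the results.
```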
### Supported Tasks and Leaderboards
This resource is mainly intended for training morphosyntactic analyzers for Polish. It supports tasks such as lemmatization, part-of-speech tagging, and dependency parsing.
### Languages
Polish (monolingual)
## Dataset Structure
### Data Instances
```
{'nkjp_text': 'NKJP_1M_1102000002',
'nkjp_par': 'morph_1-p',
'nkjp_sent': 'morph_1.18-s',
'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
'nps': [False, False, False, False, True, False, False, False, False, True],
'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}
```
### Data Fields
- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the given text (document), paragraph, and sentence in NKJP. (These allow mapping a data point back to the source corpus and identifying paragraphs/samples.)
- `tokens` (sequence of strings): tokens of the text defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morpho-syntactic tags according to Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes): all verbal and deverbal flexemic classes get mapped to `V`, nominal – `N`, adjectival – `A`, “strange” (abbreviations, alien elements, symbols, emojis…) – `X`; the remaining classes are kept as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text; see the reconstruction sketch after this list.
- `nkjp_ids` (sequence of strings): XML identifiers of particular tokens in NKJP (probably overkill).
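Since `nps` encodes spacing, the original surface text can be reconstructed from `tokens`. A minimal sketch (the helper name `detokenize` is ours, not part of the dataset):
```python
# Reconstruct the surface text: a token with nps=True is glued to the
# previous token without a space; all other tokens are space-separated.
def detokenize(tokens, nps):
    parts = []
    for token, no_space in zip(tokens, nps):
        if parts and not no_space:
            parts.append(" ")
        parts.append(token)
    return "".join(parts)

# Applied to the instance above, this yields:
# '- Nie mam pieniędzy, da mi pani wywiad?'
```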
### Data Splits
#### Fair_by_name
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 69360 | 7669 | 8633 |
| tokens | 984077 | 109900 | 121907 |
#### Fair_by_type
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 68943 | 7755 | 8964 |
| tokens | 978371 | 112454 | 125059 |
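
The statistics above can be recomputed from the loaded splits; a short sketch, under the same assumed repository ID as before:
```python
# Recompute sentence and token counts per split (repo ID is an assumption).
from datasets import load_dataset

ds = load_dataset("ipipan/nlprepl", name="nlprepl_by_name")
for split in ("train", "dev", "test"):
    n_sent = len(ds[split])
    n_tok = sum(len(toks) for toks in ds[split]["tokens"])
    print(f"{split}: {n_sent} sentences, {n_tok} tokens")
```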
## Licensing Information
![Creative Commons License](https://i.creativecommons.org/l/by/4.0/80x15.png) This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).