---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
language:
- da
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- dane
- extended|other-Danish-Universal-Dependencies-treebank
- DANSK
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: dane
pretty_name: DaNE+
dataset_info:
  features:
  - name: text
    dtype: string
  - name: ents
    list:
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: start
      dtype: int64
  - name: sents
    list:
    - name: end
      dtype: int64
    - name: start
      dtype: int64
  - name: tokens
    list:
    - name: dep
      dtype: string
    - name: end
      dtype: int64
    - name: head
      dtype: int64
    - name: id
      dtype: int64
    - name: lemma
      dtype: string
    - name: morph
      dtype: string
    - name: pos
      dtype: string
    - name: start
      dtype: int64
    - name: tag
      dtype: string
  splits:
  - name: train
    num_bytes: 7886693
    num_examples: 4383
  - name: dev
    num_bytes: 1016350
    num_examples: 564
  - name: test
    num_bytes: 991137
    num_examples: 565
  download_size: 1627548
  dataset_size: 9894180
---
# DaNE+

This is a version of [DaNE](https://huggingface.co/datasets/dane) in which the original NER labels have been updated to follow the OntoNotes annotation scheme. For the first round of annotation, labels were produced by a model trained on the Danish dataset [DANSK](https://huggingface.co/datasets/chcaa/DANSK); all discrepancies were then manually reviewed and corrected by Kenneth C. Enevoldsen. Notably, the discrepancies also include newly added entity types such as `PRODUCT` and `WORK_OF_ART`, so in practice a large share of the entities were manually reviewed. Where a label was uncertain, the original annotation was left unchanged.

The additional annotations (e.g. part-of-speech tags) stem from the Danish Dependency Treebank. If you wish to use these, I recommend the latest version of that treebank, as the copy included here will likely become outdated over time.
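As a quick, non-authoritative illustration of the record structure, the sketch below loads the dataset with the `datasets` library and reads out the character-offset entity spans and the token-level annotations. The repository ID passed to `load_dataset` is a placeholder, not necessarily the actual ID of this dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual ID of this dataset on the Hub.
ds = load_dataset("KennethEnevoldsen/dane_plus")

example = ds["train"][0]

# `ents` holds character offsets into `text` plus an OntoNotes-style label.
for ent in example["ents"]:
    print(ent["label"], repr(example["text"][ent["start"]:ent["end"]]))

# `tokens` carries the treebank-derived annotations (POS, lemma, dependency relation, ...).
for tok in example["tokens"][:5]:
    print(example["text"][tok["start"]:tok["end"]], tok["pos"], tok["dep"])
```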
## Process of annotation

1) Install the requirements:

```
--extra-index-url https://{DOWNLOAD KEY}@download.prodi.gy
prodigy>=1.11.0,<2.0.0
```
2) Create the outline dataset (a rough sketch of what `annotate.py` might do is shown below):

```bash
python annotate.py
```
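For context, here is a rough, non-authoritative sketch of what an `annotate.py` along these lines could do: run a DANSK-trained NER pipeline over DaNE and write the predictions to `predictions.jsonl` in Prodigy's span format (the file imported with `db-in` in step 3). The pipeline name `da_dacy_large_ner_fine_grained` is an assumption, not taken from the original script.

```python
"""Rough sketch only; not the original annotate.py."""
import json

import spacy
from datasets import load_dataset

# Assumption: the DANSK-trained model is available as a spaCy pipeline under this name.
nlp = spacy.load("da_dacy_large_ner_fine_grained")
dane = load_dataset("dane")

with open("predictions.jsonl", "w", encoding="utf-8") as f:
    for split in ("train", "validation", "test"):
        for example in dane[split]:
            doc = nlp(example["text"])
            # Prodigy's pre-annotated NER format: raw text plus character-offset spans.
            spans = [
                {"start": e.start_char, "end": e.end_char, "label": e.label_}
                for e in doc.ents
            ]
            f.write(json.dumps({"text": example["text"], "spans": spans}, ensure_ascii=False) + "\n")
```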
3) Review and correct the annotations using Prodigy:

Add the datasets to Prodigy:

```bash
prodigy db-in dane reference.jsonl
prodigy db-in dane_plus_mdl_pred predictions.jsonl
```

Run the review using Prodigy:

```bash
prodigy review daneplus dane_plus_mdl_pred,dane --view-id ner_manual --label NORP,CARDINAL,PRODUCT,ORGANIZATION,PERSON,WORK_OF_ART,EVENT,LAW,QUANTITY,DATE,TIME,ORDINAL,LOCATION,GPE,MONEY,PERCENT,FACILITY
```

Export the dataset:

```bash
prodigy data-to-spacy daneplus --ner daneplus --lang da -es 0
```
4) Redo the original split (a rough sketch of what `split.py` might do is shown below):

```bash
python split.py
```
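Again as a non-authoritative sketch, a `split.py` along these lines could map each reviewed document back to the split it came from in the original DaNE release, here by exact text match. The corpus path and the matching strategy are assumptions, not taken from the original script.

```python
"""Rough sketch only; not the original split.py."""
import spacy
from datasets import load_dataset
from spacy.tokens import DocBin

nlp = spacy.blank("da")
# Assumption: the reviewed corpus exported by `data-to-spacy` lives at this path.
docs = DocBin().from_disk("daneplus/train.spacy").get_docs(nlp.vocab)

# Look up which original DaNE split each text belongs to.
dane = load_dataset("dane")
split_of = {
    example["text"]: split
    for split in ("train", "validation", "test")
    for example in dane[split]
}

out = {"train": DocBin(), "validation": DocBin(), "test": DocBin()}
for doc in docs:
    out[split_of[doc.text]].add(doc)  # assumes the reviewed text matches DaNE exactly

for name, db in out.items():
    db.to_disk(f"{name}.spacy")
```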