---
annotations_creators:
  - expert-generated
languages:
  - es
multilinguality:
  - monolingual
task_categories:
  - text-classification
  - multi-label-text-classification
task_ids:
  - named-entity-recognition
licenses:
  - cc-by-4.0
---

PharmaCoNER Corpus

Introduction

This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).

It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).

The annotation of the entire set of entity mentions was carried out by domain experts. It includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.

The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly split into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each. In terms of annotated sentences, this corresponds to 8,130, 3,788 and 3,953 sentences in the training, development and test sets, respectively. The original dataset was distributed in Brat standoff format (https://brat.nlplab.org/standoff.html).
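
For reference, Brat standoff annotations pair a plain-text document (.txt) with an .ann file that lists one tab-separated line per entity mention: an entity id, the entity type with start and end character offsets, and the mention text. The snippet below only illustrates that layout, reconstructed from the atenolol and enalapril mentions shown in the Data Fields example further down; the entity ids are placeholders:

```
T1	NORMALIZABLES 223 231	atenolol
T2	NORMALIZABLES 234 243	enalapril
```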

For further information, please visit the official website.

BibTeX citation

If you use these resources in your work, please cite the following paper:

@inproceedings{gonzalez-agirre-etal-2019-pharmaconer,
    title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
    author = "Gonzalez-Agirre, Aitor  and
      Marimon, Montserrat  and
      Intxaurrondo, Ander  and
      Rabal, Obdulia  and
      Villegas, Marta  and
      Krallinger, Martin",
    booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5701",
    doi = "10.18653/v1/D19-5701",
    pages = "1--10",
}

Digital Object Identifier (DOI) and access to dataset files

https://zenodo.org/record/4270158#.YTnXP0MzY0F

Supported Tasks and Leaderboards

Named Entity Recognition

Languages

ES - Spanish

Directory structure

  • README.md
  • pharmaconer.py
  • dev-set_1.1.conll
  • test-set_1.1.conll
  • train-set_1.1.conll

Dataset Structure

Data Instances

Three four-column files, one for each split.

Data Fields

Every file has four columns:

  • 1st column: Word form or punctuation symbol
  • 2nd column: Original BRAT file name
  • 3rd column: Character span in the source document (start_end offsets)
  • 4th column: IOB tag

Example:

La                S0004-06142006000900008-1  123_125  O
paciente          S0004-06142006000900008-1  126_134  O
tenía             S0004-06142006000900008-1  135_140  O
antecedentes      S0004-06142006000900008-1  141_153  O
de                S0004-06142006000900008-1  154_156  O
hipotiroidismo    S0004-06142006000900008-1  157_171  O
,                 S0004-06142006000900008-1  171_172  O
hipertensión      S0004-06142006000900008-1  173_185  O
arterial          S0004-06142006000900008-1  186_194  O
en                S0004-06142006000900008-1  195_197  O
tratamiento       S0004-06142006000900008-1  198_209  O
habitual          S0004-06142006000900008-1  210_218  O
con               S0004-06142006000900008-1  219_222  O
atenolol          S0004-06142006000900008-1  223_231  B-NORMALIZABLES
y                 S0004-06142006000900008-1  232_233  O
enalapril         S0004-06142006000900008-1  234_243  B-NORMALIZABLES
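
A minimal parsing sketch for these four-column files, assuming whitespace-separated columns and blank lines as sentence boundaries; the read_pharmaconer_conll helper is not part of the distribution, it is only illustrative:

```python
from typing import Dict, List


def read_pharmaconer_conll(path: str) -> List[Dict]:
    """Parse a four-column PharmaCoNER .conll file into sentences.

    Each sentence is returned as a dict with parallel lists of tokens,
    source document ids, character spans and IOB tags.
    """
    sentences = []
    tokens, doc_ids, spans, tags = [], [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary
                if tokens:
                    sentences.append(
                        {"tokens": tokens, "doc_ids": doc_ids, "spans": spans, "tags": tags}
                    )
                    tokens, doc_ids, spans, tags = [], [], [], []
                continue
            token, doc_id, span, tag = line.split()
            tokens.append(token)
            doc_ids.append(doc_id)
            spans.append(span)
            tags.append(tag)
    if tokens:  # flush the last sentence if the file does not end with a blank line
        sentences.append({"tokens": tokens, "doc_ids": doc_ids, "spans": spans, "tags": tags})
    return sentences


# Example usage (file name taken from the directory structure above):
# dev = read_pharmaconer_conll("dev-set_1.1.conll")
# print(len(dev), dev[0]["tokens"][:5], dev[0]["tags"][:5])
```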

Data Splits

  • train: 8,074 examples
  • development: 3,764 examples
  • test: 3,931 examples

Dataset Creation

Methodology

TO DO

Curation Rationale

For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.

Source Data

Initial Data Collection and Normalization

TO DO

Who are the source language producers?

TO DO

Annotations

Annotation process

The annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks. The guidelines used for these tracks were translated into Spanish and adapted to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts.

The adaptation, translation and refinement of the guidelines (Rabal et al., 2018) was done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through inter-annotator agreement (IAA) studies, until a high annotation quality in terms of IAA was reached.

The final IAA measure for this corpus was calculated on a set of 50 records that were double-annotated (blinded) by two different expert annotators, reaching a pairwise agreement of 93% at the exact entity mention level and 76% when entity concept normalization was also taken into account. Entity normalization was carried out primarily against the SNOMED-CT knowledge base. Note that a SNOMED-CT version is released directly by the Spanish Ministry of Health twice a year.
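
The card does not spell out how pairwise agreement was computed; the sketch below shows one plausible formulation (exactly matching mention spans and types, divided by the mean number of mentions per annotator), purely as an assumption for illustration:

```python
from typing import Set, Tuple

# A mention is represented as (document id, start offset, end offset, entity type).
Mention = Tuple[str, int, int, str]


def pairwise_agreement(annotator_a: Set[Mention], annotator_b: Set[Mention]) -> float:
    """Exact-match pairwise agreement: shared mentions over the mean mention count.

    This is only one plausible formulation; the card does not state the exact
    formula behind the reported 93% / 76% figures.
    """
    if not annotator_a and not annotator_b:
        return 1.0
    matched = len(annotator_a & annotator_b)
    return matched / ((len(annotator_a) + len(annotator_b)) / 2)
```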

Who are the annotators?

Practicing physicians and medicinal chemistry experts.

Dataset Curators

The Text Mining Unit of the Barcelona Supercomputing Center.

Personal and Sensitive Information

No personal or sensitive information is included.

Contact

[email protected]

License

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).