---
language:
  - en
license:
  - other
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
pretty_name: TTC
---

# Dataset Card for "tner/ttc"

## Dataset Description

### Dataset Summary

The TTC NER dataset of English tweets, formatted as part of the TNER project.

- Entity Types: `LOC`, `ORG`, `PER`

## Dataset Structure

### Data Instances

An example from the `train` split looks as follows.

```python
{
    'tokens': ['😝', 'lemme', 'ask', '$MENTION$', ',', 'Timb', '???', '"', '$MENTION$', ':', '$RESERVED$', '!!!', '"', '$MENTION$', ':', '$MENTION$', 'Nezzzz', '!!', 'How', "'", 'bout', 'do', 'a', 'duet', 'with', '$MENTION$', '??!', ';)', '"'],
    'tags': [6, 6, 6, 6, 6, 2, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
}
```

### Label ID

The full `label2id` dictionary is shown below.

```json
{
    "B-LOC": 0,
    "B-ORG": 1,
    "B-PER": 2,
    "I-LOC": 3,
    "I-ORG": 4,
    "I-PER": 5,
    "O": 6
}
```
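To make the tag IDs concrete, here is a minimal sketch (plain Python, no external dependencies; the helper names are illustrative, not part of the TNER library) that inverts the mapping above and decodes a truncated excerpt of the `train` example:

```python
# Invert the label2id mapping from this card into an id2label lookup,
# then decode a truncated excerpt of the train example shown earlier.
label2id = {
    "B-LOC": 0, "B-ORG": 1, "B-PER": 2,
    "I-LOC": 3, "I-ORG": 4, "I-PER": 5,
    "O": 6,
}
id2label = {i: label for label, i in label2id.items()}

# Truncated excerpt of the example instance (tokens and their tag IDs).
tokens = ["lemme", "ask", "$MENTION$", ",", "Timb", "???"]
tags = [6, 6, 6, 6, 2, 6]

labels = [id2label[t] for t in tags]
print(list(zip(tokens, labels)))  # 'Timb' decodes to B-PER; the rest are O
```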

### Data Splits

| name | train | validation | test |
|:-----|------:|-----------:|-----:|
| ttc  |  9995 |        500 | 1477 |

## Citation Information

```bibtex
@inproceedings{rijhwani-preotiuc-pietro-2020-temporally,
    title = "Temporally-Informed Analysis of Named Entity Recognition",
    author = "Rijhwani, Shruti  and
      Preotiuc-Pietro, Daniel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.680",
    doi = "10.18653/v1/2020.acl-main.680",
    pages = "7605--7617",
    abstract = "Natural language processing models often have to make predictions on text data that evolves over time as a result of changes in language use or the information described in the text. However, evaluation results on existing data sets are seldom reported by taking the timestamp of the document into account. We analyze and propose methods that make better use of temporally-diverse training data, with a focus on the task of named entity recognition. To support these experiments, we introduce a novel data set of English tweets annotated with named entities. We empirically demonstrate the effect of temporal drift on performance, and how the temporal information of documents can be used to obtain better models compared to those that disregard temporal information. Our analysis gives insights into why this information is useful, in the hope of informing potential avenues of improvement for named entity recognition as well as other NLP tasks under similar experimental setups.",
}
```