Tasks: Other
Sub-tasks: named-entity-recognition
Formats: csv
Languages: English
Size: 10K - 100K
Update README.md
README.md CHANGED
@@ -31,11 +31,72 @@ configs:
    path: data/val.tsv
---

Here is the Hugging Face datasets card for the TweetTER dataset:

---

## TweetTER

### Dataset Summary

TweetTER (Tweet Target Entity Retrieval) is a novel benchmark designed to address the challenges of entity linking in noisy domains such as social media. Unlike traditional entity linking tasks that rely on a comprehensive knowledge base, TweetTER reframes entity linking as a binary entity retrieval task. This allows language models to be evaluated without depending on a conventional knowledge base, offering a more practical and versatile framework for assessing their effectiveness in entity retrieval tasks.

More details on the task and an evaluation of language models can be found in the paper [TweetTER: A Benchmark for Target Entity Retrieval on Twitter without Knowledge Bases](https://aclanthology.org/2024.lrec-main.1468/).

### Features

- `target` (string): The target named entity.
- `context` (string): The tweet in which the target entity appears.
- `start` (int): The character index at which the target starts in the provided context.
- `end` (int): The character index at which the target ends in the provided context.
- `definition` (string): A candidate definition collected from Wikidata, to be matched against the target entity.
- `date` (string): The date of the tweet.
- `label` (int): The binary label indicating whether the provided definition is a match (1) or a non-match (0) with the target entity (see the sketch below for how these fields fit together).
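
A minimal sketch of how these fields fit together, using values that mirror the first row of the example table further down. The bracketed mention marking is purely illustrative and not the input format used in the paper:

```python
# Illustrative row; values mirror the example table below, not the released files.
row = {
    "target": "Python",
    "context": "Learning Python programming is fun!",
    "start": 9,
    "end": 15,
    "definition": "A high-level programming language",
    "label": 1,
}

# `end` appears to be exclusive, so the offsets recover the mention directly.
mention = row["context"][row["start"]:row["end"]]
assert mention == row["target"]

# The binary retrieval question: does `definition` describe this mention in this context?
# One simple text-pair encoding (an assumption, not the paper's format) marks the mention:
marked = row["context"][:row["start"]] + "[" + mention + "]" + row["context"][row["end"]:]
print((marked, row["definition"]), "->", row["label"])
```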

### Usage

To load the dataset:

```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')
```
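
The loader returns a `DatasetDict` keyed by split. Which splits are shipped is best checked at load time, so the sketch below (an illustration, not part of the official card) avoids hard-coding split names and simply sanity-checks the character offsets on one row:

```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')

# List the available splits, sizes, and column names.
print(data)

# Take the first row of whichever split comes first and check that the
# offsets point at the target mention (end is treated as exclusive).
split = next(iter(data))
row = data[split][0]
print(row["target"], "==", row["context"][row["start"]:row["end"]])
```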

### Dataset Structure

#### Example

| target | context                              | start | end | definition                        | date       | label |
|--------|--------------------------------------|-------|-----|-----------------------------------|------------|-------|
| Python | Learning Python programming is fun!  | 9     | 15  | A high-level programming language | 2023-01-02 | 1     |
| Paris  | Paris is beautiful in the spring.    | 0     | 5   | Capital city of France            | 2023-01-03 | 1     |
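
Since every example carries a binary label, predictions can be scored with ordinary classification metrics. The following is only a sketch, with a trivial always-predict-match baseline standing in for a real model; it is not the evaluation protocol from the paper:

```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')
split = next(iter(data))  # replace with the split you actually evaluate on

# Trivial baseline: call every (target, definition) pair a match.
labels = data[split]["label"]
predictions = [1] * len(labels)

accuracy = sum(int(p == y) for p, y in zip(predictions, labels)) / len(labels)
print(f"always-match accuracy on '{split}': {accuracy:.3f}")
```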

### Citation

If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{rezaee-etal-2024-tweetter-benchmark,
    title = "{T}weet{TER}: A Benchmark for Target Entity Retrieval on {T}witter without Knowledge Bases",
    author = "Rezaee, Kiamehr and
      Camacho-Collados, Jose and
      Pilehvar, Mohammad Taher",
    editor = "Calzolari, Nicoletta and
      Kan, Min-Yen and
      Hoste, Veronique and
      Lenci, Alessandro and
      Sakti, Sakriani and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italy",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1468",
    pages = "16890--16896",
    abstract = "Entity linking is a well-established task in NLP consisting of associating entity mentions with entries in a knowledge base. Current models have demonstrated competitive performance in standard text settings. However, when it comes to noisy domains such as social media, certain challenges still persist. Typically, to evaluate entity linking on existing benchmarks, a comprehensive knowledge base is necessary and models are expected to possess an understanding of all the entities contained within the knowledge base. However, in practical scenarios where the objective is to retrieve sentences specifically related to a particular entity, strict adherence to a complete understanding of all entities in the knowledge base may not be necessary. To address this gap, we introduce TweetTER (Tweet Target Entity Retrieval), a novel benchmark that aims to bridge the challenges in entity linking. The distinguishing feature of this benchmark is its approach of re-framing entity linking as a binary entity retrieval task. This enables the evaluation of language models{'} performance without relying on a conventional knowledge base, providing a more practical and versatile evaluation framework for assessing the effectiveness of language models in entity retrieval tasks.",
}
```

---