Modalities: Text
Sub-tasks: masked-language-modeling
Languages: English
Size: 1M - 10M
Ekin Akyürek committed
Commit 70cedfe · Parent(s): 9639acc
update README
README.md
CHANGED
@@ -48,9 +48,10 @@ task_ids:
 - **Point of Contact:** [email protected]
 - **Size of downloaded dataset files:** 113.7 MB
 - **Size of the generated dataset:** 1006.6 MB
 - **Total amount of disk used:** 1120.3 MB
+
 ### Dataset Summary
 FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model’s predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries for which we trace knowledge are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TREx corpus (Elsahar et al., 2018). We annotate the extracted sentences with the facts they state, and these facts can be matched with the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. However, one can use the same data in other formats, for example auto-regressive completion, by processing the `input_pretokenized` and `targets_pretokenized` fields.
 ### Supported Tasks and Leaderboards
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 ### Languages
@@ -124,6 +125,8 @@ The data fields are the same among all splits.
 ### Curation Rationale
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 ### Source Data
+LAMA: https://github.com/facebookresearch/LAMA
+TRex: https://hadyelsahar.github.io/t-rex/
 #### Initial Data Collection and Normalization
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 #### Who are the source language producers?