# Training Data for Text Embedding Models

This repository contains training files for text embedding models, e.g. for use with [sentence-transformers](https://www.SBERT.net).
## Data Format

All files are in `jsonl.gz` format: each line contains a JSON object that represents one training example.

The JSON objects come in two formats:
- **Pairs:** `["text1", "text2"]` - a positive pair: the two texts should be close in vector space.
- **Triplets:** `["anchor", "positive", "negative"]` - a triplet: the `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
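For illustration, here is a minimal Python sketch of reading one of these files, assuming the list formats described above (the filename is just an example taken from the table below):

```python
import gzip
import json

# Each line of a jsonl.gz file is a JSON list: a pair or a triplet
with gzip.open("quora_duplicates.jsonl.gz", "rt", encoding="utf8") as fIn:
    for line in fIn:
        texts = json.loads(line)
        if len(texts) == 2:
            text1, text2 = texts                 # positive pair
        else:
            anchor, positive, negative = texts   # triplet
```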
## Available Datasets
**Note: I'm currently in the process of uploading the files. Please check back next week for the full list of datasets.**
We measure the performance of each training dataset by training the [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model on it with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss), a batch size of 256, for 2,000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, news, publications, e-mails, ...). A sketch of this training setup is shown after the table.
| Dataset | Description | Size (#Lines) | Performance |
| --- | --- | :---: | :---: |
| [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 |
| [NQ-train_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/NQ-train_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 |
| [PAQ_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/PAQ_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 |
| [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 |
| [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 |
| [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/TriviaQA_pairs.jsonl.gz) | (Query, Answer) pairs from the TriviaQA dataset | 73,346 | 55.56 |
| [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 |
| [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/codesearchnet.jsonl.gz) | (Comment, Code) pairs from the CodeSearchNet corpus: code and documentation from open-source libraries hosted on GitHub, covering several programming languages | 1,151,414 | 55.80 |
| [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/eli5_question_answer.jsonl.gz) | (Question, Answer) pairs from the ELI5 dataset | 325,475 | 58.24 |
| [fever_train.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/fever_train.jsonl.gz) | Training data from the FEVER corpus | 139,051 | 52.63 |
| [gooaq_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/gooaq_pairs.jsonl.gz) | (Question, Answer) pairs from Google auto-suggest | 3,012,496 | 59.06 |
| [quora_duplicates.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/quora_duplicates.jsonl.gz) | Duplicate question pairs from Quora | 103,663 | 57.36 |
| [sentence-compression.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/sentence-compression.jsonl.gz) | (Long_text, Short_text) pairs from the sentence-compression dataset | 180,000 | 55.63 |
| [specter_train_triples.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/specter_train_triples.jsonl.gz) | Triplets (Title, related_title, hard_negative) for scientific publications from Specter | 684,100 | 56.32 |
| [squad_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/squad_pairs.jsonl.gz) | (Question, Answer_Passage) pairs from the SQuAD dataset | 87,599 | 58.02 |
| [stackexchange_duplicate_questions_body_body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/stackexchange_duplicate_questions_body_body.jsonl.gz) | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 |
| [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 |
| [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 |
| [wikihow.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/wikihow.jsonl.gz) | (Summary, Text) pairs from WikiHow | 128,542 | 57.67 |
| [yahoo_answers_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/yahoo_answers_question_answer.jsonl.gz) | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 |
| [yahoo_answers_title_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/yahoo_answers_title_answer.jsonl.gz) | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 |
| [yahoo_answers_title_question.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/yahoo_answers_title_question.jsonl.gz) | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 |
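As a rough sketch of the benchmark setup described above: the exact training script is not part of this repository, so the download helper, the single-epoch loop, and all hyperparameters other than the batch size and step count below are assumptions.

```python
import gzip
import json

from huggingface_hub import hf_hub_download
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

# Download one training file from this dataset repository
filepath = hf_hub_download(
    repo_id="sentence-transformers/embedding-training-data",
    filename="quora_duplicates.jsonl.gz",  # example file from the table above
    repo_type="dataset",
)

# Wrap each JSON list (pair or triplet) as an InputExample
train_examples = []
with gzip.open(filepath, "rt", encoding="utf8") as fIn:
    for line in fIn:
        train_examples.append(InputExample(texts=json.loads(line)))

model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)
train_loss = losses.MultipleNegativesRankingLoss(model)

# 2,000 steps at batch size 256, as in the benchmark setup above;
# learning rate and warmup are left at the library defaults (assumption)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=2000,
)
```

With MultipleNegativesRankingLoss, the other examples in a batch serve as negatives for each anchor, which is why a comparatively large batch size such as 256 is used here.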