# Training Data for Text Embedding Models
This repository contains training files for text embedding models, e.g. for use with sentence-transformers.
## Data Format
All files are in `jsonl.gz` format: each line contains a JSON object that represents one training example. The JSON objects come in two formats:
- Pairs: `["text1", "text2"]`. This is a positive pair; the two texts should be close in vector space.
- Triplets: `["anchor", "positive", "negative"]`. The `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
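For illustration, a minimal sketch of streaming examples from one of these files (assuming a file such as `AllNLI.jsonl.gz` has been downloaded locally):

```python
import gzip
import json

def read_examples(path):
    """Yield one training example (a list of 2 or 3 texts) per line."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

for texts in read_examples("AllNLI.jsonl.gz"):
    if len(texts) == 2:
        anchor, positive = texts           # pair
    else:
        anchor, positive, negative = texts  # triplet
    break  # inspect just the first example
```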
## Available Datasets
Note: I'm currently in the process of uploading the files. Please check again next week for the full list of datasets.
We measure the performance of each training dataset by training the nreimers/MiniLM-L6-H384-uncased model on it with MultipleNegativesRankingLoss, a batch size of 256, and 2,000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, news, publications, e-mails, ...). A sketch of this training setup is shown below, followed by the table of datasets.
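A minimal sketch of this setup using the classic sentence-transformers `fit` API. The loss, batch size, and step count come from the description above; the file name and `warmup_steps` value are illustrative choices, not specified in this README:

```python
import gzip
import json

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Load one training file (pairs or triplets) into InputExample objects.
with gzip.open("gooaq_pairs.jsonl.gz", "rt", encoding="utf-8") as f:
    train_examples = [InputExample(texts=json.loads(line)) for line in f]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)
train_loss = losses.MultipleNegativesRankingLoss(model)

# 2,000 training steps as in the benchmark; warmup_steps is an
# assumed value, the README does not specify a warmup schedule.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=2000,
    warmup_steps=200,
)
```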
Dataset | Description | Size (#Lines) | Performance | Reference |
---|---|---|---|---|
AllNLI.jsonl.gz | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | SNLI and MNLI |
NQ-train_pairs.jsonl.gz | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | Natural Questions |
PAQ_pairs.jsonl.gz | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | PAQ |
S2ORC_title_abstract.jsonl.gz | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | S2ORC |
SimpleWiki.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | SimpleWiki |
TriviaQA_pairs.jsonl.gz | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | TriviaQA |
altlex.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | altlex |
codesearchnet.jsonl.gz | The CodeSearchNet corpus is a dataset of (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | CodeSearchNet |
eli5_question_answer.jsonl.gz | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | ELI5 |
fever_train.jsonl.gz | Training data from the FEVER corpus | 139,051 | 52.63 | FEVER |
gooaq_pairs.jsonl.gz | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | GooAQ |
quora_duplicates.jsonl.gz | Duplicate question pairs from Quora | 103,663 | 57.36 | QQP |
sentence-compression.jsonl.gz | (long_text, short_text) pairs for sentence compression | 180,000 | 55.63 | Sentence-Compression |
specter_train_triples.jsonl.gz | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | SPECTER |
squad_pairs.jsonl.gz | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | SQuAD |
stackexchange_duplicate_questions_body_body.jsonl.gz | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | Stack Exchange Data API |
stackexchange_duplicate_questions_title-body_title-body.jsonl.gz | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | Stack Exchange Data API |
stackexchange_duplicate_questions_title_title.jsonl.gz | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | Stack Exchange Data API |
wikihow.jsonl.gz | (Summary, Text) from WikiHow | 128,542 | 57.67 | WikiHow |
yahoo_answers_question_answer.jsonl.gz | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | Yahoo Answers |
yahoo_answers_title_answer.jsonl.gz | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | Yahoo Answers |
yahoo_answers_title_question.jsonl.gz | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | Yahoo Answers |