# Training Data for Text Embedding Models
This repository contains training files for text embedding models, e.g. for use with sentence-transformers.
## Data Format
All files are in `jsonl.gz` format: each line contains a JSON object that represents one training example.
The JSON objects can come in different formats; a short reading sketch follows the list:

- Pairs: `["text1", "text2"]` - A positive pair: the two texts should be close in vector space.
- Triplets: `["anchor", "positive", "negative"]` - A triplet: the `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
- Sets: `{"set": ["text1", "text2", ...]}` - A set of texts describing the same thing, e.g. different paraphrases of the same question or different captions for the same image. Any combination of the elements is considered a positive pair.
- Query-Pairs: `{"query": "text", "pos": ["text1", "text2", ...]}` - A query together with a set of positive texts. Can be formed into a pair `["query", "positive"]` by randomly selecting a text from `pos`.
- Query-Triplets: `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` - A query together with a set of positive texts and a set of negative texts. Can be formed into a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and from `neg`.
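For illustration, here is a minimal sketch of how such a file could be read and turned into training pairs or triplets. The function name `load_examples` and the example file name are assumptions for this sketch, not part of the repository:

```python
import gzip
import json
import random

def load_examples(filepath):
    """Read a jsonl.gz training file and yield pairs or triplets.

    Handles the formats described above: plain pairs/triplets (JSON arrays),
    sets, query-pairs, and query-triplets (JSON objects).
    """
    with gzip.open(filepath, "rt", encoding="utf8") as f_in:
        for line in f_in:
            data = json.loads(line)
            if isinstance(data, list):
                # Already a pair (2 elements) or a triplet (3 elements)
                yield data
            elif "set" in data:
                # Any two distinct elements of a set form a positive pair
                yield random.sample(data["set"], 2)
            elif "query" in data and "neg" in data:
                # Query-triplet: sample one positive and one negative text
                yield [data["query"],
                       random.choice(data["pos"]),
                       random.choice(data["neg"])]
            elif "query" in data:
                # Query-pair: sample one positive text
                yield [data["query"], random.choice(data["pos"])]

# Example usage (file name is illustrative):
# for example in load_examples("gooaq_pairs.jsonl.gz"):
#     print(example)
```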
## Available Datasets
Note: I'm currently in the process of uploading the files. Please check again next week for the full list of datasets.
We measure the performance of each training dataset by training the nreimers/MiniLM-L6-H384-uncased model on it with MultipleNegativesRankingLoss, a batch size of 256, and 2000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, news, publications, e-mails, ...).
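As a rough illustration of this benchmark setup, the following sketch trains the model with sentence-transformers' classic `fit` API. It assumes the `load_examples` reader sketched above and a pairs file; the warmup step count is an illustrative choice, not part of the stated setup:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Model used for the benchmark numbers in the table below
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# `load_examples` is the reader sketched in the Data Format section;
# for a pairs file, every example has exactly two texts.
train_examples = [InputExample(texts=texts)
                  for texts in load_examples("gooaq_pairs.jsonl.gz")]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)

# In-batch negatives: all other examples in a batch serve as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)

# 2000 training steps, as in the benchmark setup (warmup_steps is illustrative)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=2000,
    warmup_steps=200,
)
```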
Dataset | Description | Size (#Lines) | Performance | Reference |
---|---|---|---|---|
AllNLI.jsonl.gz | Combination of SNLI + MultiNLI. Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | SNLI and MNLI |
altlex.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | altlex |
coco_captions.jsonl.gz | Different captions for the same image | 82,783 | 53.77 | COCO |
codesearchnet.jsonl.gz | The CodeSearchNet corpus is a dataset of (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | CodeSearchNet |
eli5_question_answer.jsonl.gz | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | ELI5 |
fever_train.jsonl.gz | Training data from the FEVER corpus | 139,051 | 52.63 | FEVER |
flickr30k_captions.jsonl.gz | Different captions for the same image from the Flickr30k dataset | 31,783 | 54.68 | Flickr30k |
gooaq_pairs.jsonl.gz | (Question, Answer)-Pairs from Google auto-suggest | 3,012,496 | 59.06 | GooAQ |
msmarco-triplets.jsonl.gz | (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset | 499,184 | 58.76 | MS MARCO Passages |
NQ-train_pairs.jsonl.gz | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | Natural Questions |
PAQ_pairs.jsonl.gz | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | PAQ |
quora_duplicates.jsonl.gz | Duplicate question pairs from Quora | 103,663 | 57.36 | QQP |
sentence-compression.jsonl.gz | (long_text, short_text) pairs for sentence compression | 180,000 | 55.63 | Sentence-Compression |
specter_train_triples.jsonl.gz | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | SPECTER |
squad_pairs.jsonl.gz | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | SQuAD |
stackexchange_duplicate_questions_body_body.jsonl.gz | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | Stack Exchange Data API |
stackexchange_duplicate_questions_title-body_title-body.jsonl.gz | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | Stack Exchange Data API |
stackexchange_duplicate_questions_title_title.jsonl.gz | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | Stack Exchange Data API |
S2ORC_title_abstract.jsonl.gz | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | S2ORC |
searchQA_top5_snippets.jsonl.gz | Question + top-5 text snippets from the SearchQA dataset | 117,220 | 57.34 | search_qa |
SimpleWiki.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | SimpleWiki |
TriviaQA_pairs.jsonl.gz | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | TriviaQA |
WikiAnswers.jsonl.gz | Sets of duplicate questions | 27,383,151 | 57.34 | WikiAnswers Corpus |
wikihow.jsonl.gz | (Summary, Text) from WikiHow | 128,542 | 57.67 | WikiHow |
yahoo_answers_question_answer.jsonl.gz | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | Yahoo Answers |
yahoo_answers_title_answer.jsonl.gz | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | Yahoo Answers |
yahoo_answers_title_question.jsonl.gz | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | Yahoo Answers |