update readme
README.md

The JSON objects can come in different formats:
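
The authoritative list of formats is given in the full dataset card; purely as an illustration, two shapes that such `.jsonl.gz` lines can take (assumed here, not quoted from this section) are a plain array of texts and an object with query, positive, and negative fields:

```python
# Illustration only -- assumed shapes; see the full README for the authoritative formats.
pair_or_triplet = ["first text", "second text"]   # plain JSON array: a pair (a triplet has a third element)
query_style = {
    "query": "a question",
    "pos": ["a relevant passage"],                # positive passages
    "neg": ["an irrelevant passage"],             # (hard) negative passages
}
```
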
We measure the performance of each training dataset by training the [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model on it for 2,000 steps with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) and a batch size of 256. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, News, Publications, E-Mails, ...).
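
As a rough guide to reproducing such a run, here is a minimal sketch using the classic sentence-transformers training API. It is not the exact benchmark script: the file chosen, the assumption that each line is a JSON array of texts, and every hyperparameter other than the loss, batch size, and step count stated above are assumptions.

```python
import gzip
import json

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Minimal sketch: train on one of the files listed below (quora_duplicates picked arbitrarily),
# assuming each line is a JSON array of texts, e.g. ["question 1", "question 2"].
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

train_examples = []
with gzip.open("quora_duplicates.jsonl.gz", "rt", encoding="utf8") as fIn:
    for line in fIn:
        train_examples.append(InputExample(texts=json.loads(line)))

# Settings stated above: MultipleNegativesRankingLoss, batch size 256, 2000 training steps.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=2000,   # cap training at 2000 steps
    warmup_steps=100,       # assumption; not stated above
)
```

Evaluation on the 14 benchmark datasets is not part of this sketch; the table below only reports the resulting averaged scores.
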
| Dataset | Description | Size (#Lines) | Performance | Reference |
| --- | --- | :---: | :---: | --- |
| [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | [SNLI](https://huggingface.co/datasets/snli) and [MNLI](https://huggingface.co/datasets/multi_nli)
| [NQ-train_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/NQ-train_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | [Natural Questions](https://ai.google.com/research/NaturalQuestions)
| [PAQ_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | [PAQ](https://github.com/facebookresearch/PAQ)
| [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | [S2ORC](https://github.com/allenai/s2orc)
| [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | [SimpleWiki](https://cs.pomona.edu/~dkauchak/simplification/)
| [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/TriviaQA_pairs.jsonl.gz) | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | [TriviaQA](https://huggingface.co/datasets/trivia_qa)
| [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | [altlex](https://github.com/chridey/altlex/)
| [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/codesearchnet.jsonl.gz) | The CodeSearchNet corpus is a dataset of (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | [CodeSearchNet](https://huggingface.co/datasets/code_search_net)
| [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/eli5_question_answer.jsonl.gz) | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | [ELI5](https://huggingface.co/datasets/eli5)
| [fever_train.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/fever_train.jsonl.gz) | Training data from the FEVER corpus | 139,051 | 52.63 | [FEVER](https://huggingface.co/datasets/fever)
| [gooaq_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz) | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | [GooAQ](https://github.com/allenai/gooaq)
| [quora_duplicates.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates.jsonl.gz) | Duplicate question pairs from Quora | 103,663 | 57.36 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [sentence-compression.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/sentence-compression.jsonl.gz) | Pairs (long_text, short_text) about sentence-compression | 180,000 | 55.63 | [Sentence-Compression](https://github.com/google-research-datasets/sentence-compression)
| [specter_train_triples.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz) | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | [SPECTER](https://github.com/allenai/specter)
| [squad_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/squad_pairs.jsonl.gz) | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | [SQuAD](https://huggingface.co/datasets/squad)
| [stackexchange_duplicate_questions_body_body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_body_body.jsonl.gz) | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [wikihow.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/wikihow.jsonl.gz) | (Summary, Text) from WikiHow | 128,542 | 57.67 | [WikiHow](https://github.com/pvl/wikihow_pairs_dataset)
| [yahoo_answers_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_question_answer.jsonl.gz) | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [yahoo_answers_title_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_answer.jsonl.gz) | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [yahoo_answers_title_question.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_question.jsonl.gz) | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
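
Each file in the table can also be fetched programmatically instead of via the direct download links. A small sketch using `huggingface_hub` (the file picked here is arbitrary):

```python
import gzip
import json

from huggingface_hub import hf_hub_download

# Sketch: download one of the files listed above and peek at its first few JSON lines.
path = hf_hub_download(
    repo_id="sentence-transformers/embedding-training-data",
    filename="SimpleWiki.jsonl.gz",
    repo_type="dataset",
)

with gzip.open(path, "rt", encoding="utf8") as fIn:
    for i, line in enumerate(fIn):
        print(json.loads(line))
        if i >= 2:
            break
```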