Dataset Viewer (auto-converted to Parquet)

Columns and data types:
model_id: string
name: string
number_params: int64
description: string
date_published: string
paper_link: string
code_link: string
is_api_endpoint: bool
nr_of_tokens: int64
architecture: string
is_open_weights: bool
is_open_dataset: bool
is_mixture_of_experts: bool
model_alignment: string
reinforcement_learning_from_human_feedback: bool
domain_specific_pretraining: bool
domain_specific_finetuning: bool
tool_use: bool
tool_type: string
temperature: float64
epochs: int64
reasoning_model: bool
reasoning_type: string
overall_score: float64
Analytical Chemistry: float64
Chemical Preference: float64
General Chemistry: float64
Inorganic Chemistry: float64
Materials Science: float64
Organic Chemistry: float64
Physical Chemistry: float64
Technical Chemistry: float64
Toxicity and Safety: float64
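
Since the table is auto-converted to Parquet, it can be loaded directly with the Hugging Face datasets library. A minimal sketch, assuming the dataset exposes a default "train" split (the repo id is taken from this page):

```python
# Minimal sketch: load the ChemBench results table and inspect its schema.
# Assumption: the default split is named "train"; adjust if the dataset uses another split.
from datasets import load_dataset

ds = load_dataset("jablonkagroup/ChemBench-Results", split="train")

print(ds.column_names)  # the columns listed above
print(ds.features)      # dtypes: string, int64, bool, float64
```

The rows below follow this schema; each record is one benchmarked model or system.
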
model_id: mistral-large-2-123b | name: Mistral-Large-2 | number_params: 123,000,000,000
description: Mistral Large 2 has a 128k context window and supports dozens of languages along with 80+ coding languages. Mistral Large 2 is designed for single-node inference with long-context applications in mind.
date_published: 2024-07-24 | paper_link: null | code_link: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: null | reinforcement_learning_from_human_feedback: null
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.569943
Analytical Chemistry: 0.480263 | Chemical Preference: 0.546454 | General Chemistry: 0.785235 | Inorganic Chemistry: 0.793478 | Materials Science: 0.666667 | Organic Chemistry: 0.732558 | Physical Chemistry: 0.690909 | Technical Chemistry: 0.675 | Toxicity and Safety: 0.395556

model_id: llama3.1-70b-instruct | name: Llama-3.1-70B-Instruct | number_params: 70,000,000,000
description: The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
date_published: 2024-07-23 | paper_link: https://arxiv.org/abs/2407.21783 | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.533716
Analytical Chemistry: 0.407895 | Chemical Preference: 0.519481 | General Chemistry: 0.691275 | Inorganic Chemistry: 0.771739 | Materials Science: 0.666667 | Organic Chemistry: 0.662791 | Physical Chemistry: 0.642424 | Technical Chemistry: 0.65 | Toxicity and Safety: 0.383704

model_id: claude3.5 | name: Claude-3.5 (Sonnet) | number_params: null
description: Claude models are general purpose large language models. They use a transformer architecture and are trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). Claude 3.5 was developed by Anthropic.
date_published: 2024-06-20 | paper_link: null | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: null
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.625538
Analytical Chemistry: 0.565789 | Chemical Preference: 0.584416 | General Chemistry: 0.825503 | Inorganic Chemistry: 0.836957 | Materials Science: 0.714286 | Organic Chemistry: 0.825581 | Physical Chemistry: 0.769697 | Technical Chemistry: 0.85 | Toxicity and Safety: 0.44

model_id: mixtral-8x7b-instruct-T-one | name: Mixtral-8x7b-Instruct (Temperature 1.0) | number_params: 47,000,000,000
description: Mixtral is a sparse mixture-of-experts network. It is a decoder-only model where the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
date_published: 2023-12-11 | paper_link: https://arxiv.org/abs/2401.04088 | code_link: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: true | model_alignment: DPO | reinforcement_learning_from_human_feedback: false
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.418221
Analytical Chemistry: 0.276316 | Chemical Preference: 0.522478 | General Chemistry: 0.449664 | Inorganic Chemistry: 0.51087 | Materials Science: 0.404762 | Organic Chemistry: 0.472093 | Physical Chemistry: 0.345455 | Technical Chemistry: 0.325 | Toxicity and Safety: 0.266667

model_id: command-r+ | name: Command-R+ | number_params: 104,000,000,000
description: Cohere Command R is a family of highly scalable language models that balance high performance with strong accuracy. Command-R models were released by Cohere.
date_published: 2024-04-04 | paper_link: null | code_link: https://huggingface.co/CohereForAI/c4ai-command-r-plus
is_api_endpoint: true | nr_of_tokens: null | architecture: null
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.447633
Analytical Chemistry: 0.342105 | Chemical Preference: 0.513487 | General Chemistry: 0.496644 | Inorganic Chemistry: 0.521739 | Materials Science: 0.464286 | Organic Chemistry: 0.551163 | Physical Chemistry: 0.327273 | Technical Chemistry: 0.5 | Toxicity and Safety: 0.311111

model_id: gpt-4o-react | name: GPT-4o React | number_params: null
description: GPT-4o is OpenAI's third major iteration of their popular large multimodal model, GPT-4, which expands on the capabilities of GPT-4 with Vision.
date_published: 2024-05-13 | paper_link: null | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: true | tool_type: ArXiV, Web search, Wikipedia, Wolfram alpha calculator, SMILES to IUPAC name and IUPAC name to SMILES converters
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.50538
Analytical Chemistry: 0.467105 | Chemical Preference: 0.420579 | General Chemistry: 0.758389 | Inorganic Chemistry: 0.728261 | Materials Science: 0.559524 | Organic Chemistry: 0.718605 | Physical Chemistry: 0.6 | Technical Chemistry: 0.725 | Toxicity and Safety: 0.374815

model_id: llama3.1-405b-instruct | name: Llama-3.1-405B-Instruct | number_params: 405,000,000,000
description: The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
date_published: 2024-07-23 | paper_link: https://arxiv.org/abs/2407.21783 | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.579268
Analytical Chemistry: 0.506579 | Chemical Preference: 0.54046 | General Chemistry: 0.791946 | Inorganic Chemistry: 0.771739 | Materials Science: 0.654762 | Organic Chemistry: 0.755814 | Physical Chemistry: 0.709091 | Technical Chemistry: 0.7 | Toxicity and Safety: 0.419259

model_id: mixtral-8x7b-instruct | name: Mixtral-8x7b-Instruct | number_params: 47,000,000,000
description: Mixtral is a sparse mixture-of-experts network. It is a decoder-only model where the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
date_published: 2023-12-11 | paper_link: https://arxiv.org/abs/2401.04088 | code_link: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: true | model_alignment: DPO | reinforcement_learning_from_human_feedback: false
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.42396
Analytical Chemistry: 0.269737 | Chemical Preference: 0.535465 | General Chemistry: 0.422819 | Inorganic Chemistry: 0.554348 | Materials Science: 0.416667 | Organic Chemistry: 0.47907 | Physical Chemistry: 0.333333 | Technical Chemistry: 0.325 | Toxicity and Safety: 0.26963

model_id: llama3.1-8b-instruct-T-one | name: Llama-3.1-8B-Instruct (Temperature 1.0) | number_params: 8,000,000,000
description: The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
date_published: 2024-07-23 | paper_link: https://arxiv.org/abs/2407.21783 | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.461263
Analytical Chemistry: 0.361842 | Chemical Preference: 0.523477 | General Chemistry: 0.530201 | Inorganic Chemistry: 0.478261 | Materials Science: 0.416667 | Organic Chemistry: 0.576744 | Physical Chemistry: 0.418182 | Technical Chemistry: 0.4 | Toxicity and Safety: 0.32

model_id: gpt-4o | name: GPT-4o | number_params: null
description: GPT-4o is OpenAI's third major iteration of their popular large multimodal model, GPT-4, which expands on the capabilities of GPT-4 with Vision.
date_published: 2024-05-13 | paper_link: null | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.610832
Analytical Chemistry: 0.559211 | Chemical Preference: 0.589411 | General Chemistry: 0.805369 | Inorganic Chemistry: 0.804348 | Materials Science: 0.75 | Organic Chemistry: 0.755814 | Physical Chemistry: 0.715152 | Technical Chemistry: 0.75 | Toxicity and Safety: 0.441481

model_id: llama3-70b-instruct-T-one | name: Llama-3-70B-Instruct (Temperature 1.0) | number_params: 70,000,000,000
description: Llama 3 models were trained on a text corpus of over 15T tokens. These models use a tokenizer with a vocabulary of 128K tokens. Additionally, improvements in the post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses.
date_published: 2024-04-18 | paper_link: null | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.516499
Analytical Chemistry: 0.375 | Chemical Preference: 0.53047 | General Chemistry: 0.604027 | Inorganic Chemistry: 0.684783 | Materials Science: 0.619048 | Organic Chemistry: 0.632558 | Physical Chemistry: 0.6 | Technical Chemistry: 0.6 | Toxicity and Safety: 0.373333

model_id: paper-qa | name: PaperQA2 | number_params: null
description: PaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs or text files, with a focus on the scientific literature. We used PaperQA2 via the non-public API deployed by FutureHouse and the default settings (using Claude-3.5-Sonnet as summarizing and answer-generating LLM).
date_published: 2024-09-11 | paper_link: https://storage.googleapis.com/fh-public/paperqa/Language_Agents_Science.pdf | code_link: https://github.com/Future-House/paper-qa
is_api_endpoint: false | nr_of_tokens: null | architecture: null
is_open_weights: null | is_open_dataset: null | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: null
domain_specific_pretraining: null | domain_specific_finetuning: null | tool_use: true | tool_type: Paper Search, Gather Evidence, Generate Answer, Citation Traversal
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.568867
Analytical Chemistry: 0.460526 | Chemical Preference: 0.563437 | General Chemistry: 0.724832 | Inorganic Chemistry: 0.73913 | Materials Science: 0.690476 | Organic Chemistry: 0.67907 | Physical Chemistry: 0.678788 | Technical Chemistry: 0.7 | Toxicity and Safety: 0.423704

model_id: gemma-1-1-7b-it | name: Gemma-1.1-7B-it | number_params: 7,000,000,000
description: Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.
date_published: 2024-02-21 | paper_link: https://arxiv.org/abs/2403.08295 | code_link: https://github.com/google-deepmind/gemma
is_api_endpoint: true | nr_of_tokens: 6,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: PPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.192253
Analytical Chemistry: 0.210526 | Chemical Preference: 0.004995 | General Chemistry: 0.33557 | Inorganic Chemistry: 0.413043 | Materials Science: 0.357143 | Organic Chemistry: 0.37907 | Physical Chemistry: 0.290909 | Technical Chemistry: 0.375 | Toxicity and Safety: 0.22963

model_id: llama2-13b-chat | name: Llama-2-13B Chat | number_params: 13,000,000,000
description: LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. Llama was released by Meta AI.
date_published: 2023-07-18 | paper_link: https://arxiv.org/abs/2302.13971 | code_link: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
is_api_endpoint: false | nr_of_tokens: 2,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.25538
Analytical Chemistry: 0.092105 | Chemical Preference: 0.484515 | General Chemistry: 0.114094 | Inorganic Chemistry: 0.271739 | Materials Science: 0.095238 | Organic Chemistry: 0.153488 | Physical Chemistry: 0.151515 | Technical Chemistry: 0.1 | Toxicity and Safety: 0.100741

model_id: llama3.1-8b-instruct | name: Llama-3.1-8B-Instruct | number_params: 8,000,000,000
description: The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
date_published: 2024-07-23 | paper_link: https://arxiv.org/abs/2407.21783 | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.471664
Analytical Chemistry: 0.394737 | Chemical Preference: 0.527473 | General Chemistry: 0.503356 | Inorganic Chemistry: 0.5 | Materials Science: 0.404762 | Organic Chemistry: 0.581395 | Physical Chemistry: 0.509091 | Technical Chemistry: 0.45 | Toxicity and Safety: 0.325926

model_id: gemma-2-9b-it-T-one | name: Gemma-2-9B-it (Temperature 1.0) | number_params: 9,000,000,000
description: Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.
date_published: 2024-06-27 | paper_link: https://arxiv.org/abs/2408.00118 | code_link: https://github.com/google-deepmind/gemma
is_api_endpoint: true | nr_of_tokens: 8,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: PPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.480273
Analytical Chemistry: 0.289474 | Chemical Preference: 0.557443 | General Chemistry: 0.557047 | Inorganic Chemistry: 0.543478 | Materials Science: 0.5 | Organic Chemistry: 0.546512 | Physical Chemistry: 0.466667 | Technical Chemistry: 0.475 | Toxicity and Safety: 0.342222

model_id: claude3.5-react | name: Claude-3.5 (Sonnet) React | number_params: null
description: Claude models are general purpose large language models. They use a transformer architecture and are trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). Claude 3.5 was developed by Anthropic.
date_published: 2024-06-20 | paper_link: null | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: null
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: true | tool_type: ArXiV, Web search, Wikipedia, Wolfram alpha calculator, SMILES to IUPAC name and IUPAC name to SMILES converters
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.624821
Analytical Chemistry: 0.578947 | Chemical Preference: 0.599401 | General Chemistry: 0.872483 | Inorganic Chemistry: 0.804348 | Materials Science: 0.678571 | Organic Chemistry: 0.837209 | Physical Chemistry: 0.757576 | Technical Chemistry: 0.8 | Toxicity and Safety: 0.408889

model_id: llama3.1-70b-instruct-T-one | name: Llama-3.1-70B-Instruct (Temperature 1.0) | number_params: 70,000,000,000
description: The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
date_published: 2024-07-23 | paper_link: https://arxiv.org/abs/2407.21783 | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.511119
Analytical Chemistry: 0.368421 | Chemical Preference: 0.535465 | General Chemistry: 0.66443 | Inorganic Chemistry: 0.695652 | Materials Science: 0.654762 | Organic Chemistry: 0.553488 | Physical Chemistry: 0.557576 | Technical Chemistry: 0.55 | Toxicity and Safety: 0.38963

model_id: gemma-2-9b-it | name: Gemma-2-9B-it | number_params: 9,000,000,000
description: Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.
date_published: 2024-06-27 | paper_link: https://arxiv.org/abs/2408.00118 | code_link: https://github.com/google-deepmind/gemma
is_api_endpoint: true | nr_of_tokens: 8,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: PPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.482425
Analytical Chemistry: 0.315789 | Chemical Preference: 0.551449 | General Chemistry: 0.543624 | Inorganic Chemistry: 0.554348 | Materials Science: 0.52381 | Organic Chemistry: 0.555814 | Physical Chemistry: 0.484848 | Technical Chemistry: 0.525 | Toxicity and Safety: 0.339259

model_id: llama2-70b-chat | name: Llama-2-70B Chat | number_params: 70,000,000,000
description: LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. Llama was released by Meta AI.
date_published: 2023-07-18 | paper_link: https://arxiv.org/abs/2302.13971 | code_link: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
is_api_endpoint: false | nr_of_tokens: 2,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0.01 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.266141
Analytical Chemistry: 0.072368 | Chemical Preference: 0.487512 | General Chemistry: 0.134228 | Inorganic Chemistry: 0.217391 | Materials Science: 0.178571 | Organic Chemistry: 0.146512 | Physical Chemistry: 0.169697 | Technical Chemistry: 0.125 | Toxicity and Safety: 0.136296

model_id: llama3-8b-instruct | name: Llama-3-8B-Instruct | number_params: 8,000,000,000
description: Llama 3 models were trained on a text corpus of over 15T tokens. These models use a tokenizer with a vocabulary of 128K tokens. Additionally, improvements in the post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses.
date_published: 2024-04-18 | paper_link: null | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.455524
Analytical Chemistry: 0.407895 | Chemical Preference: 0.515485 | General Chemistry: 0.442953 | Inorganic Chemistry: 0.48913 | Materials Science: 0.416667 | Organic Chemistry: 0.562791 | Physical Chemistry: 0.369697 | Technical Chemistry: 0.6 | Toxicity and Safety: 0.324444

model_id: gemma-1-1-7b-it-T-one | name: Gemma-1.1-7B-it (Temperature 1.0) | number_params: 7,000,000,000
description: Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.
date_published: 2024-02-21 | paper_link: https://arxiv.org/abs/2403.08295 | code_link: https://github.com/google-deepmind/gemma
is_api_endpoint: true | nr_of_tokens: 6,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: PPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.190818
Analytical Chemistry: 0.210526 | Chemical Preference: 0.008991 | General Chemistry: 0.348993 | Inorganic Chemistry: 0.413043 | Materials Science: 0.357143 | Organic Chemistry: 0.372093 | Physical Chemistry: 0.30303 | Technical Chemistry: 0.375 | Toxicity and Safety: 0.216296

model_id: galactica_120b | name: Galactica-120b | number_params: 120,000,000,000
description: Galactica is a large language model developed by Facebook. It is a transformer-based model trained on a large corpus of scientific data.
date_published: 2022-11-01 | paper_link: https://galactica.org/paper.pdf | code_link: https://huggingface.co/facebook/galactica-120b
is_api_endpoint: false | nr_of_tokens: 450,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: null | reinforcement_learning_from_human_feedback: false
domain_specific_pretraining: true | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: 4 | reasoning_model: false | reasoning_type: null
overall_score: 0.015067
Analytical Chemistry: 0 | Chemical Preference: 0 | General Chemistry: 0.046053 | Inorganic Chemistry: 0.053191 | Materials Science: 0 | Organic Chemistry: 0.011338 | Physical Chemistry: 0.055866 | Technical Chemistry: 0 | Toxicity and Safety: 0.023188

model_id: llama3-8b-instruct-T-one | name: Llama-3-8B-Instruct (Temperature 1.0) | number_params: 8,000,000,000
description: Llama 3 models were trained on a text corpus of over 15T tokens. These models use a tokenizer with a vocabulary of 128K tokens. Additionally, improvements in the post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses.
date_published: 2024-04-18 | paper_link: null | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.4566
Analytical Chemistry: 0.401316 | Chemical Preference: 0.52048 | General Chemistry: 0.436242 | Inorganic Chemistry: 0.543478 | Materials Science: 0.452381 | Organic Chemistry: 0.551163 | Physical Chemistry: 0.345455 | Technical Chemistry: 0.625 | Toxicity and Safety: 0.324444

model_id: gemini-pro | name: Gemini-Pro | number_params: null
description: Gemini models are built from the ground up for multimodality: reasoning seamlessly across text, images, audio, video, and code.
date_published: 2024-06-07 | paper_link: https://arxiv.org/abs/2403.05530 | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.452654
Analytical Chemistry: 0.388158 | Chemical Preference: 0.5005 | General Chemistry: 0.483221 | Inorganic Chemistry: 0.467391 | Materials Science: 0.5 | Organic Chemistry: 0.567442 | Physical Chemistry: 0.448485 | Technical Chemistry: 0.475 | Toxicity and Safety: 0.308148

model_id: gpt-4 | name: GPT-4 | number_params: null
description: GPT-4 is a large multimodal model released by OpenAI to succeed GPT-3.5 Turbo. It features a context window of 32k tokens.
date_published: 2023-03-14 | paper_link: https://arxiv.org/abs/2303.08774 | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: null
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.412841
Analytical Chemistry: 0.427632 | Chemical Preference: 0.163836 | General Chemistry: 0.697987 | Inorganic Chemistry: 0.695652 | Materials Science: 0.607143 | Organic Chemistry: 0.67907 | Physical Chemistry: 0.642424 | Technical Chemistry: 0.7 | Toxicity and Safety: 0.41037

model_id: llama3-70b-instruct | name: Llama-3-70B-Instruct | number_params: 70,000,000,000
description: Llama 3 models were trained on a text corpus of over 15T tokens. These models use a tokenizer with a vocabulary of 128K tokens. Additionally, improvements in the post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses.
date_published: 2024-04-18 | paper_link: null | code_link: https://github.com/meta-llama/llama3
is_api_endpoint: true | nr_of_tokens: 15,000,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: DPO | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.517934
Analytical Chemistry: 0.414474 | Chemical Preference: 0.532468 | General Chemistry: 0.604027 | Inorganic Chemistry: 0.663043 | Materials Science: 0.630952 | Organic Chemistry: 0.632558 | Physical Chemistry: 0.593939 | Technical Chemistry: 0.625 | Toxicity and Safety: 0.368889

model_id: phi-3-medium-4k-instruct | name: Phi-3-Medium-4k-Instruct | number_params: 14,000,000,000
description: Phi-3-Medium-4K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, 4K and 128K, which is the context length (in tokens) that it can support.
date_published: 2024-05-21 | paper_link: https://arxiv.org/abs/2404.14219 | code_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
is_api_endpoint: false | nr_of_tokens: 4,800,000,000,000 | architecture: DecoderOnly
is_open_weights: true | is_open_dataset: false | is_mixture_of_experts: false | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.474534
Analytical Chemistry: 0.342105 | Chemical Preference: 0.532468 | General Chemistry: 0.47651 | Inorganic Chemistry: 0.630435 | Materials Science: 0.547619 | Organic Chemistry: 0.560465 | Physical Chemistry: 0.460606 | Technical Chemistry: 0.55 | Toxicity and Safety: 0.331852

model_id: o1-preview | name: o1-preview | number_params: null
description: o1 is trained with reinforcement learning and chain-of-thought reasoning to improve safety, robustness, and reasoning capabilities. The family includes o1-preview and o1-mini versions.
date_published: 2024-09-12 | paper_link: https://cdn.openai.com/o1-system-card-20240917.pdf | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 1 | epochs: null | reasoning_model: true | reasoning_type: medium
overall_score: 0.643472
Analytical Chemistry: 0.625 | Chemical Preference: 0.563437 | General Chemistry: 0.932886 | Inorganic Chemistry: 0.902174 | Materials Science: 0.72619 | Organic Chemistry: 0.830233 | Physical Chemistry: 0.848485 | Technical Chemistry: 0.85 | Toxicity and Safety: 0.475556

model_id: claude3 | name: Claude-3 (Opus) | number_params: null
description: Claude models are general purpose large language models. They use a transformer architecture and are trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). Claude 3 was developed by Anthropic.
date_published: 2024-03-04 | paper_link: https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: null
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.569584
Analytical Chemistry: 0.467105 | Chemical Preference: 0.565435 | General Chemistry: 0.765101 | Inorganic Chemistry: 0.793478 | Materials Science: 0.630952 | Organic Chemistry: 0.695349 | Physical Chemistry: 0.648485 | Technical Chemistry: 0.7 | Toxicity and Safety: 0.41037

model_id: gpt-3.5-turbo | name: GPT-3.5 Turbo | number_params: null
description: GPT-3.5 Turbo, developed by OpenAI, features a context window of 4096 tokens.
date_published: 2023-11-06 | paper_link: null | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: DecoderOnly
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.466284
Analytical Chemistry: 0.381579 | Chemical Preference: 0.534466 | General Chemistry: 0.489933 | Inorganic Chemistry: 0.543478 | Materials Science: 0.47619 | Organic Chemistry: 0.588372 | Physical Chemistry: 0.4 | Technical Chemistry: 0.4 | Toxicity and Safety: 0.30963

model_id: claude2 | name: Claude-2 | number_params: null
description: Claude models are general purpose large language models. They use a transformer architecture and are trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). Claude 2 was developed by Anthropic.
date_published: 2023-07-11 | paper_link: https://www-cdn.anthropic.com/bd2a28d2535bfb0494cc8e2a3bf135d2e7523226/Model-Card-Claude-2.pdf | code_link: null
is_api_endpoint: true | nr_of_tokens: null | architecture: null
is_open_weights: false | is_open_dataset: false | is_mixture_of_experts: null | model_alignment: null | reinforcement_learning_from_human_feedback: true
domain_specific_pretraining: false | domain_specific_finetuning: false | tool_use: false | tool_type: null
temperature: 0 | epochs: null | reasoning_model: false | reasoning_type: null
overall_score: 0.473458
Analytical Chemistry: 0.375 | Chemical Preference: 0.511489 | General Chemistry: 0.503356 | Inorganic Chemistry: 0.608696 | Materials Science: 0.464286 | Organic Chemistry: 0.593023 | Physical Chemistry: 0.50303 | Technical Chemistry: 0.475 | Toxicity and Safety: 0.331852

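The rows above can be ranked or filtered once the table is loaded. A self-contained sketch, again assuming a "train" split; the column names follow the schema listed earlier:

```python
# Sketch: rank benchmarked models by overall_score using pandas.
# Assumption: the default split is "train"; column names match the schema above.
from datasets import load_dataset

ds = load_dataset("jablonkagroup/ChemBench-Results", split="train")
df = ds.to_pandas()

# Sort descending by the aggregate score and show a few topic columns alongside it.
ranked = df.sort_values("overall_score", ascending=False)
print(ranked[["name", "overall_score", "General Chemistry", "Toxicity and Safety"]].head(10))
```
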
README.md exists but content is empty.
Downloads last month: 536
Space using jablonkagroup/ChemBench-Results: 1
Collection including jablonkagroup/ChemBench-Results