Column schema (name: dtype, with min/max or class count):
model_type: stringclasses (5 values)
model: stringlengths (min 13, max 62)
AVG: float64 (min 0.04, max 0.7)
CG: float64 (min 0, max 0.68)
EL: float64 (min 0, max 0.62)
FA: float64 (min 0, max 0.35)
HE: float64 (min 0, max 0.79)
MC: float64 (min 0, max 0.92)
MR: float64 (min 0, max 0.95)
MT: float64 (min 0.3, max 0.86)
NLI: float64 (min 0, max 0.82)
QA: float64 (min 0.01, max 0.77)
RC: float64 (min 0.04, max 0.93)
SUM: float64 (min 0, max 0.29)
aio_char_f1: float64 (min 0, max 0.9)
alt-e-to-j_bert_score_ja_f1: float64 (min 0.26, max 0.88)
alt-e-to-j_bleu_ja: float64 (min 0.32, max 16)
alt-e-to-j_comet_wmt22: float64 (min 0.29, max 0.92)
alt-j-to-e_bert_score_en_f1: float64 (min 0.37, max 0.96)
alt-j-to-e_bleu_en: float64 (min 0.02, max 20.1)
alt-j-to-e_comet_wmt22: float64 (min 0.3, max 0.89)
chabsa_set_f1: float64 (min 0, max 0.62)
commonsensemoralja_exact_match: float64 (min 0, max 0.94)
jamp_exact_match: float64 (min 0, max 0.74)
janli_exact_match: float64 (min 0, max 0.95)
jcommonsenseqa_exact_match: float64 (min 0, max 0.97)
jemhopqa_char_f1: float64 (min 0, max 0.71)
jmmlu_exact_match: float64 (min 0, max 0.78)
jnli_exact_match: float64 (min 0, max 0.91)
jsem_exact_match: float64 (min 0, max 0.81)
jsick_exact_match: float64 (min 0, max 0.87)
jsquad_char_f1: float64 (min 0.04, max 0.93)
jsts_pearson: float64 (min -0.23, max 0.91)
jsts_spearman: float64 (min -0.22, max 0.88)
kuci_exact_match: float64 (min 0, max 0.86)
mawps_exact_match: float64 (min 0, max 0.95)
mbpp_code_exec: float64 (min 0, max 0.68)
mbpp_pylint_check: float64 (min 0, max 0.99)
mmlu_en_exact_match: float64 (min 0, max 0.82)
niilc_char_f1: float64 (min 0.01, max 0.7)
wiki_coreference_set_f1: float64 (min 0, max 0.13)
wiki_dependency_set_f1: float64 (min 0, max 0.55)
wiki_ner_set_f1: float64 (min 0, max 0.19)
wiki_pas_set_f1: float64 (min 0, max 0.12)
wiki_reading_char_f1: float64 (min 0.02, max 0.91)
wikicorpus-e-to-j_bert_score_ja_f1: float64 (min 0.15, max 0.87)
wikicorpus-e-to-j_bleu_ja: float64 (min 0.17, max 18.3)
wikicorpus-e-to-j_comet_wmt22: float64 (min 0.3, max 0.87)
wikicorpus-j-to-e_bert_score_en_f1: float64 (min 0.26, max 0.92)
wikicorpus-j-to-e_bleu_en: float64 (min 0.03, max 13.8)
wikicorpus-j-to-e_comet_wmt22: float64 (min 0.28, max 0.78)
xlsum_ja_bert_score_ja_f1: float64 (min 0, max 0.79)
xlsum_ja_bleu_ja: float64 (min 0, max 10.2)
xlsum_ja_rouge1: float64 (min 0.05, max 52.8)
xlsum_ja_rouge2: float64 (min 0.01, max 29.2)
xlsum_ja_rouge2_scaling: float64 (min 0, max 0.29)
xlsum_ja_rougeLsum: float64 (min 0.05, max 44.9)
architecture: stringclasses (11 values)
precision: stringclasses (2 values)
license: stringclasses (12 values)
params: float64 (min 0.14, max 70.6)
likes: int64 (min 0, max 4.03k)
revision: stringclasses (1 value)
num_few_shot: int64 (min 0, max 4)
add_special_tokens: stringclasses (2 values)
llm_jp_eval_version: stringclasses (1 value)
vllm_version: stringclasses (1 value)
⭕ : instruction-tuned
HuggingFaceTB/SmolLM2-1.7B-Instruct
0.1125
0.0201
0
0.0419
0.0001
0.1114
0
0.4906
0.2178
0.0929
0.202
0.0609
0.0637
0.6445
2.171
0.4996
0.8176
4.2536
0.5849
0
0.1375
0.1609
0.0389
0.0554
0.1303
0
0.2067
0.5739
0.1086
0.202
0
0
0.1414
0
0.0201
0.0643
0.0001
0.0846
0
0
0
0
0.2096
0.5907
1.2672
0.4391
0.7696
2.0614
0.4387
0.6474
1.3966
22.9487
6.0997
0.0609
19.3158
LlamaForCausalLM
bfloat16
apache-2.0
1.711
386
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
HuggingFaceTB/SmolLM2-1.7B-Instruct
0.3132
0.0201
0.3
0.0601
0.372
0.3895
0.358
0.598
0.4878
0.1776
0.6213
0.0609
0.1069
0.7141
4.237
0.5577
0.9007
9.3078
0.7489
0.3
0.4677
0.3822
0.4944
0.4013
0.2837
0.2799
0.5464
0.5221
0.4938
0.6213
0.0885
0.0425
0.2997
0.358
0.0201
0.0643
0.4642
0.1422
0.0067
0.0701
0.0177
0.018
0.1878
0.6428
4.22
0.4695
0.8551
6.7407
0.6158
0.6474
1.3966
22.9487
6.0997
0.0609
19.3158
LlamaForCausalLM
bfloat16
apache-2.0
1.711
386
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
microsoft/Phi-3.5-mini-instruct
0.5115
0.3333
0.4293
0.1665
0.5605
0.7351
0.668
0.8029
0.6463
0.2988
0.8783
0.1073
0.2736
0.8315
9.2633
0.8664
0.9357
13.052
0.8509
0.4293
0.8094
0.4799
0.6444
0.8418
0.3278
0.4696
0.7399
0.7456
0.6219
0.8783
0.8708
0.8408
0.5542
0.668
0.3333
0.5743
0.6513
0.295
0.0031
0.2352
0.0708
0.0295
0.4938
0.7752
7.0194
0.7779
0.8853
8.5456
0.7163
0.7067
2.5421
29.953
10.7278
0.1073
25.3397
Phi3ForCausalLM
bfloat16
mit
3.821
659
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
microsoft/Phi-3.5-mini-instruct
0.4079
0.3333
0
0.0694
0.2764
0.6565
0.578
0.7483
0.609
0.2606
0.8477
0.1073
0.2351
0.8141
7.7471
0.8517
0.842
6.1533
0.766
0
0.6728
0.4856
0.6056
0.7819
0.2692
0.1717
0.6635
0.7279
0.5622
0.8477
0.863
0.8321
0.5146
0.578
0.3333
0.5743
0.3811
0.2776
0.0033
0
0
0
0.3439
0.7519
5.8201
0.7556
0.8121
4.1232
0.6198
0.7067
2.5421
29.953
10.7278
0.1073
25.3397
Phi3ForCausalLM
bfloat16
mit
3.821
659
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-7b-hf
0.3716
0
0.4038
0.1244
0.294
0.401
0.406
0.7793
0.4439
0.3807
0.7993
0.0552
0.3322
0.8228
9.1254
0.8494
0.9394
14.3971
0.8528
0.4038
0.7059
0.3707
0.5736
0.2547
0.4849
0.2858
0.3611
0.7027
0.2113
0.7993
0.1678
0.1723
0.2423
0.406
0
0
0.3022
0.3249
0
0.2651
0
0.0307
0.3265
0.7663
8.5506
0.7363
0.8784
9.0994
0.6789
0.6298
1.2545
18.1639
5.5316
0.0552
15.3683
LlamaForCausalLM
float16
llama2
6.738
1816
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-7b-hf
0.1687
0
0
0.0481
0.0018
0.3233
0
0.6016
0.3535
0.1424
0.3304
0.0552
0.114
0.7166
3.966
0.6505
0.8124
4.0869
0.7066
0
0.5316
0.3276
0.5
0.1868
0.1472
0.0017
0.1426
0.5764
0.2208
0.3304
-0.0585
-0.0586
0.2515
0
0
0
0.0019
0.1659
0
0
0
0
0.2407
0.6502
2.8162
0.5334
0.761
1.1278
0.5158
0.6298
1.2545
18.1639
5.5316
0.0552
15.3683
LlamaForCausalLM
float16
llama2
6.738
1816
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-13b-hf
0.2113
0
0
0.0471
0.0831
0.2704
0.11
0.707
0.3721
0.2367
0.4661
0.032
0.2265
0.7642
6.7761
0.7388
0.9273
11.8894
0.8358
0
0.5321
0.3276
0.5
0.0277
0.2185
0.1135
0.145
0.6673
0.2208
0.4661
0.1324
0.1054
0.2513
0.11
0
0
0.0526
0.2652
0
0
0
0
0.2353
0.6947
4.2536
0.611
0.8659
7.1712
0.6422
0.5954
0.7569
9.5024
3.1942
0.032
8.1023
LlamaForCausalLM
float16
llama2
13.016
575
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-13b-hf
0.426
0
0.4266
0.1716
0.4522
0.573
0.532
0.8117
0.3877
0.4342
0.8646
0.032
0.4495
0.8421
10.4674
0.8797
0.9454
15.5483
0.865
0.4266
0.7074
0.3276
0.5264
0.6908
0.4483
0.3982
0.1693
0.6818
0.2336
0.8646
0.3386
0.3693
0.3209
0.532
0
0
0.5061
0.4049
0.0081
0.3698
0.0354
0.024
0.4209
0.7923
9.6327
0.7813
0.8944
9.9996
0.721
0.5954
0.7569
9.5024
3.1942
0.032
8.1023
LlamaForCausalLM
float16
llama2
13.016
575
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
HuggingFaceTB/SmolLM2-135M-Instruct
0.0512
0
0
0.0206
0
0.0212
0
0.3867
0
0.0695
0.0405
0.0245
0.0458
0.619
0.8368
0.4523
0.7516
0.0258
0.3719
0
0
0
0
0.0054
0.0975
0
0
0
0
0.0405
-0.012
-0.0051
0.0582
0
0
0
0
0.0653
0
0
0
0
0.1031
0.5712
0.7741
0.4006
0.7339
0.0346
0.322
0.5529
1.1043
13.6896
2.455
0.0245
10.1232
LlamaForCausalLM
bfloat16
apache-2.0
0.135
64
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
HuggingFaceTB/SmolLM2-135M-Instruct
0.1564
0
0.0467
0.0258
0.2423
0.2952
0.018
0.3657
0.4649
0.0988
0.1389
0.0245
0.0579
0.5877
1.8208
0.3401
0.8381
5.289
0.4561
0.0467
0.4729
0.3161
0.4819
0.1698
0.1584
0.242
0.5534
0.6717
0.3012
0.1389
0
0
0.2429
0.018
0
0
0.2426
0.0802
0.0014
0.0023
0
0
0.1254
0.5456
0.165
0.3184
0.7572
3.5477
0.3482
0.5529
1.1043
13.6896
2.455
0.0245
10.1232
LlamaForCausalLM
bfloat16
apache-2.0
0.135
64
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-70b-hf
0.3667
0
0.0993
0.0968
0.4288
0.4773
0.432
0.801
0.4036
0.4217
0.8618
0.0109
0.4475
0.8347
10.5531
0.8733
0.9486
15.0629
0.8718
0.0993
0.5321
0.4397
0.5236
0.471
0.4006
0.3095
0.32
0.4053
0.3292
0.8618
0.6347
0.6299
0.429
0.432
0
0
0.5481
0.417
0.005
0.0007
0
0
0.4783
0.7794
9.0032
0.769
0.8801
9.3402
0.6901
0.4141
0.3545
6.0591
1.0916
0.0109
5.3838
LlamaForCausalLM
float16
llama2
68.977
834
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-2-70b-hf
0.5431
0
0.5042
0.2643
0.6021
0.8048
0.758
0.8421
0.6982
0.5766
0.9123
0.0109
0.6717
0.8629
12.2736
0.9025
0.9549
18.0186
0.8828
0.5042
0.8292
0.5747
0.7708
0.8651
0.5139
0.5431
0.7654
0.649
0.7309
0.9123
0.8402
0.8032
0.7201
0.758
0
0
0.6612
0.5443
0.01
0.3707
0.1327
0.0635
0.7448
0.8415
14.17
0.8393
0.9071
11.7077
0.744
0.4141
0.3545
6.0591
1.0916
0.0109
5.3838
LlamaForCausalLM
float16
llama2
68.977
834
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-70b-chat-hf
0.4775
0
0.4895
0.2532
0.5196
0.5884
0.68
0.8327
0.4917
0.4515
0.9068
0.0395
0.5378
0.8499
10.2466
0.8972
0.9466
15.4067
0.8698
0.4895
0.5907
0.5345
0.65
0.7516
0.3572
0.4343
0.3985
0.512
0.3635
0.9068
0.7316
0.6826
0.423
0.68
0
0
0.6048
0.4595
0.0178
0.4502
0.115
0.0611
0.6221
0.8079
9.4403
0.8212
0.8966
9.8246
0.7427
0.6525
0.7831
10.8409
3.946
0.0395
9.4934
LlamaForCausalLM
float16
llama2
68.977
2163
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-70b-chat-hf
0.2468
0
0.0057
0.0707
0.0316
0.3434
0.014
0.8115
0.4064
0.2276
0.7641
0.0395
0.1834
0.8267
9.3146
0.878
0.9428
13.6883
0.8669
0.0057
0.6518
0.3851
0.6194
0.092
0.2419
0.0121
0.1935
0.6717
0.1622
0.7641
0
0
0.2863
0.014
0
0
0.0511
0.2574
0.005
0
0
0
0.3485
0.7674
7.5851
0.7775
0.8876
8.6027
0.7237
0.6525
0.7831
10.8409
3.946
0.0395
9.4934
LlamaForCausalLM
float16
llama2
68.977
2163
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-13b-chat-hf
0.4072
0.01
0.4534
0.1756
0.4334
0.4679
0.468
0.8094
0.4063
0.3749
0.8608
0.0197
0.3681
0.8251
8.6421
0.8716
0.9386
13.9265
0.8533
0.4534
0.485
0.3966
0.5458
0.6157
0.4137
0.3666
0.3114
0.4615
0.3164
0.8608
0.1138
0.1537
0.3029
0.468
0.01
0.0643
0.5001
0.343
0.0067
0.2901
0.0796
0.0402
0.4612
0.7792
7.4954
0.7907
0.8872
8.3329
0.722
0.6403
0.338
7.1771
1.9659
0.0197
5.9655
LlamaForCausalLM
float16
llama2
13.016
1031
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-13b-chat-hf
0.1965
0.01
0
0.0307
0
0.2881
0.006
0.7564
0.389
0.1588
0.5032
0.0197
0.1073
0.7856
6.5379
0.8139
0.9347
11.7644
0.8521
0
0.5338
0.3563
0.5458
0.0679
0.2016
0
0.145
0.6711
0.2267
0.5032
-0.0406
-0.0202
0.2625
0.006
0.01
0.0643
0.0001
0.1674
0
0
0
0
0.1533
0.7053
4.6369
0.6646
0.8762
7.8733
0.6951
0.6403
0.338
7.1771
1.9659
0.0197
5.9655
LlamaForCausalLM
float16
llama2
13.016
1031
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-7b-chat-hf
0.1775
0.004
0.0021
0.0336
0.001
0.156
0
0.7517
0.3729
0.1608
0.4117
0.0587
0.0921
0.7913
6.8156
0.8184
0.9288
10.6718
0.8371
0.0021
0.4679
0.3276
0.5
0
0.2719
0
0.145
0.6711
0.2208
0.4117
-0.0348
-0.0294
0
0
0.004
0.0181
0.0019
0.1183
0
0
0
0
0.1678
0.7131
4.1477
0.6777
0.8685
7.5446
0.6736
0.6645
1.0328
18.2948
5.8664
0.0587
15.4782
LlamaForCausalLM
float16
llama2
6.738
4027
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
meta-llama/Llama-2-7b-chat-hf
0.3829
0.004
0.4324
0.1234
0.3912
0.4357
0.424
0.7962
0.4028
0.3133
0.83
0.0587
0.2726
0.8143
7.6973
0.8447
0.9373
12.9348
0.8543
0.4324
0.515
0.3534
0.5167
0.5264
0.403
0.3332
0.3558
0.5082
0.2797
0.83
0.0895
-0.0071
0.2657
0.424
0.004
0.0181
0.4491
0.2644
0.0108
0.2075
0.0088
0.0433
0.3466
0.7639
6.8282
0.765
0.8868
9.3614
0.7208
0.6645
1.0328
18.2948
5.8664
0.0587
15.4782
LlamaForCausalLM
float16
llama2
6.738
4027
main
4
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nitky/Llama-3.1-SuperSwallow-70B-Instruct-v0.1
0.6321
0.5743
0.2988
0.1757
0.6827
0.8723
0.886
0.8552
0.7453
0.6721
0.9109
0.2801
0.8517
0.8743
15.0175
0.9153
0.9593
19.1388
0.8903
0.2988
0.9236
0.681
0.7319
0.9187
0.5433
0.6713
0.7642
0.7778
0.7715
0.9109
0.901
0.8688
0.7746
0.886
0.5743
0.9618
0.6942
0.6212
0
0.0238
0.0619
0.0004
0.7925
0.8534
14.18
0.8583
0.9052
11.7155
0.7567
0.7878
7.2512
52.569
28.0006
0.2801
44.6053
LlamaForCausalLM
bfloat16
llama3.1
70.554
0
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nitky/Llama-3.1-SuperSwallow-70B-Instruct-v0.1
0.6981
0.5743
0.5974
0.3439
0.7321
0.9174
0.934
0.864
0.7734
0.7385
0.9241
0.2801
0.8866
0.8805
15.4651
0.9187
0.9602
19.1445
0.8923
0.5974
0.9396
0.6408
0.8
0.9553
0.6498
0.7125
0.8377
0.7633
0.8255
0.9241
0.8998
0.8745
0.8572
0.934
0.5743
0.9618
0.7516
0.6792
0.1153
0.5232
0.0973
0.0772
0.9065
0.8702
18.158
0.868
0.9185
12.9365
0.7768
0.7878
7.2512
52.569
28.0006
0.2801
44.6053
LlamaForCausalLM
bfloat16
llama3.1
70.554
0
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Nurture-intelligence/kEy-llama3.1-8b-v0.1
0.5337
0.0221
0.4982
0.2367
0.5815
0.8132
0.748
0.8321
0.6821
0.4795
0.8945
0.0833
0.478
0.8483
11.1555
0.8878
0.9494
15.5348
0.8757
0.4982
0.8462
0.5144
0.7014
0.9062
0.5374
0.5363
0.7412
0.7285
0.7252
0.8945
0.7829
0.7764
0.6871
0.748
0.0221
0.0843
0.6266
0.423
0.0203
0.2674
0.0885
0.0397
0.7674
0.8159
10.7137
0.8168
0.9007
10.4068
0.7482
0.6812
2.9249
21.6587
8.3465
0.0833
19.0475
LlamaForCausalLM
bfloat16
llama3.1
8.03
0
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Nurture-intelligence/kEy-llama3.1-8b-v0.1
0.3703
0.0221
0.0396
0.1321
0.1612
0.7019
0.45
0.8046
0.5926
0.2603
0.8254
0.0833
0.228
0.818
9.2794
0.8538
0.9483
14.4787
0.875
0.0396
0.8452
0.4971
0.6389
0.7471
0.3696
0.0076
0.3965
0.7064
0.724
0.8254
0.84
0.8027
0.5136
0.45
0.0221
0.0843
0.3147
0.1834
0
0.0026
0
0
0.658
0.7652
7.9493
0.7617
0.8917
8.9791
0.7279
0.6812
2.9249
21.6587
8.3465
0.0833
19.0475
LlamaForCausalLM
bfloat16
llama3.1
8.03
0
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-0.5B-Instruct
0.3112
0
0.3188
0.0459
0.3505
0.3865
0.338
0.6754
0.3241
0.2327
0.6879
0.0633
0.1399
0.7702
6.2138
0.7381
0.9086
9.6765
0.7753
0.3188
0.4765
0.3333
0.5
0.3816
0.4004
0.3152
0.3032
0.3308
0.1532
0.6879
0.0306
0.0478
0.3015
0.338
0
0
0.3859
0.1577
0.0017
0.0251
0
0.0112
0.1917
0.6851
4.7234
0.5876
0.8518
6.795
0.6007
0.6502
1.373
19.6544
6.3429
0.0633
16.5086
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
114
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-0.5B-Instruct
0.2175
0
0
0.0439
0.1664
0.339
0.072
0.623
0.3412
0.1561
0.5873
0.0633
0.1022
0.7198
3.787
0.6594
0.8874
7.7052
0.7299
0
0.5321
0.3305
0.5
0.2332
0.2198
0.1954
0.272
0.3245
0.2789
0.5873
0.0797
0.0644
0.2517
0.072
0
0
0.1374
0.1463
0.0024
0
0
0
0.2172
0.6519
3.7086
0.5474
0.8271
5.4965
0.5551
0.6502
1.373
19.6544
6.3429
0.0633
16.5086
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
114
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-1.5B-Instruct
0.2725
0.1004
0
0.0418
0.109
0.4618
0.24
0.6687
0.5598
0.071
0.696
0.0496
0.0558
0.7385
3.0819
0.7346
0.8967
7.9193
0.763
0
0.5321
0.3736
0.4875
0.5058
0.0833
0.1423
0.5892
0.6585
0.6903
0.696
0.0666
0.0631
0.3475
0.24
0.1004
0.2711
0.0756
0.0739
0
0
0
0
0.2088
0.6756
2.4077
0.6209
0.8119
4.0423
0.5563
0.6351
1.6411
14.2275
4.9633
0.0496
12.3624
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
207
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-1.5B-Instruct
0.4431
0.1004
0.4174
0.0957
0.4958
0.6742
0.616
0.7663
0.5614
0.2457
0.8512
0.0496
0.2139
0.8074
7.9061
0.8254
0.931
12.1493
0.8358
0.4174
0.6979
0.4052
0.5903
0.8114
0.2792
0.4428
0.7219
0.6667
0.4232
0.8512
0.512
0.5023
0.5133
0.616
0.1004
0.2711
0.5487
0.2441
0.0035
0.1167
0.0088
0.0227
0.3267
0.7492
6.0899
0.7287
0.8724
7.4907
0.6754
0.6351
1.6411
14.2275
4.9633
0.0496
12.3624
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
207
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-3B-Instruct
0.2676
0.3514
0.001
0.0671
0.2758
0.4592
0.21
0.6007
0.3818
0.1736
0.3497
0.0734
0.194
0.7233
6.8982
0.7256
0.8134
11.3026
0.5163
0.001
0
0.4052
0.4458
0.7971
0.1349
0.4185
0.4967
0.0076
0.5539
0.3497
0.8526
0.8259
0.5806
0.21
0.3514
0.7189
0.133
0.1917
0.0017
0
0
0
0.3341
0.6901
5.1057
0.6727
0.7963
6.8165
0.4882
0.672
1.4822
20.4014
7.3519
0.0734
17.8105
Qwen2ForCausalLM
bfloat16
other
3.086
106
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-3B-Instruct
0.4801
0.3514
0.4735
0.1477
0.5605
0.6906
0.436
0.7301
0.6653
0.2805
0.8719
0.0734
0.2534
0.8242
9.6724
0.8503
0.8746
13.1343
0.6887
0.4735
0.5213
0.4741
0.6139
0.8785
0.2993
0.5391
0.758
0.697
0.7834
0.8719
0.8111
0.7792
0.6721
0.436
0.3514
0.7189
0.5818
0.2887
0.0446
0.1277
0.0442
0.0319
0.4902
0.7681
6.9696
0.7635
0.8447
8.1876
0.6178
0.672
1.4822
20.4014
7.3519
0.0734
17.8105
Qwen2ForCausalLM
bfloat16
other
3.086
106
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-Math-7B-Instruct
0.3184
0.01
0.0735
0.0299
0.4655
0.3971
0.766
0.5084
0.6033
0.17
0.4775
0.0007
0.0377
0.6759
1.7291
0.4147
0.8902
9.1108
0.7055
0.0735
0.5363
0.431
0.5958
0.3887
0.3739
0.4154
0.6015
0.601
0.7871
0.4775
0.1748
0.1813
0.2663
0.766
0.01
0.0241
0.5157
0.0984
0
0.0053
0.0088
0
0.1356
0.6111
2.1391
0.3768
0.832
5.8111
0.5367
0.5616
0.1476
1.4387
0.0653
0.0007
1.2408
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
39
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2.5-Math-7B-Instruct
0.1196
0.01
0
0.0192
0.0716
0.2356
0.02
0.4453
0.1726
0.0938
0.2468
0.0007
0.0266
0.6109
0.6089
0.36
0.8557
6.6075
0.5787
0
0.4749
0.0144
0.0403
0.1412
0.1896
0.1217
0.3698
0
0.4386
0.2468
-0.0907
-0.0874
0.0908
0.02
0.01
0.0241
0.0215
0.0651
0
0
0
0
0.096
0.5882
1.0247
0.3645
0.819
5.1251
0.4779
0.5616
0.1476
1.4387
0.0653
0.0007
1.2408
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
39
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Advanced-70B
0.6532
0.5321
0.5922
0.292
0.7096
0.8707
0.926
0.8485
0.7508
0.6191
0.9207
0.1239
0.6921
0.8702
14.6719
0.9066
0.958
18.1895
0.8886
0.5922
0.895
0.6609
0.8181
0.9276
0.6063
0.6659
0.7313
0.7992
0.7443
0.9207
0.8926
0.8622
0.7893
0.926
0.5321
0.9137
0.7534
0.5588
0.1073
0.3754
0.0973
0.0673
0.8124
0.8434
13.2933
0.8471
0.9104
12.386
0.7516
0.71
3.0985
32.9632
12.3839
0.1239
27.9421
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Advanced-70B
0.5899
0.5321
0.4123
0.101
0.6678
0.8377
0.852
0.8427
0.7522
0.4712
0.8957
0.1239
0.4827
0.8576
13.1787
0.9021
0.9527
17.3445
0.8831
0.4123
0.8715
0.6753
0.7681
0.9169
0.4586
0.6292
0.7087
0.7948
0.8141
0.8957
0.8657
0.8488
0.7247
0.852
0.5321
0.9137
0.7065
0.4723
0
0
0.0442
0.0014
0.4593
0.8281
10.9482
0.8396
0.9046
11.2471
0.7459
0.71
3.0985
32.9632
12.3839
0.1239
27.9421
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Advanced-56B
0.4186
0.3052
0.0137
0.0991
0.4866
0.7121
0.552
0.7339
0.5541
0.22
0.8065
0.1211
0.2716
0.8274
10.5817
0.8668
0.7833
2.146
0.7188
0.0137
0.7565
0.5029
0.65
0.8257
0.1018
0.4067
0.4404
0.7203
0.4571
0.8065
0.8657
0.8346
0.5542
0.552
0.3052
0.6526
0.5666
0.2867
0
0.0013
0
0
0.494
0.7764
8.58
0.7772
0.7671
1.9325
0.5729
0.7107
2.5837
35.0072
12.1349
0.1211
28.4508
MixtralForCausalLM
bfloat16
apache-2.0
46.703
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Advanced-56B
0.5377
0.3052
0.4558
0.2158
0.5823
0.7683
0.694
0.8151
0.6663
0.3946
0.896
0.1211
0.3568
0.8406
10.6758
0.8684
0.9464
16.4195
0.8694
0.4558
0.8179
0.5489
0.6875
0.8597
0.4629
0.4908
0.7128
0.7569
0.6255
0.896
0.8686
0.8198
0.6273
0.694
0.3052
0.6526
0.6737
0.3641
0.005
0.3567
0.0796
0.0552
0.5824
0.8021
10.3577
0.7956
0.8958
10.4371
0.7268
0.7107
2.5837
35.0072
12.1349
0.1211
28.4508
MixtralForCausalLM
bfloat16
apache-2.0
46.703
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B
0.5784
0
0.4888
0.2799
0.6729
0.8453
0.896
0.8536
0.7155
0.5921
0.9048
0.1138
0.6938
0.8721
13.9883
0.911
0.9555
18.4696
0.8854
0.4888
0.8878
0.5977
0.7556
0.9196
0.5104
0.63
0.7498
0.7734
0.701
0.9048
0.9003
0.8786
0.7285
0.896
0
0
0.7157
0.5721
0.0241
0.3752
0.1239
0.0572
0.8189
0.8492
13.3692
0.8507
0.9116
11.7373
0.7673
0.7053
2.9798
30.9162
11.3832
0.1138
26.3574
Gemma2ForCausalLM
bfloat16
apache-2.0
27.227
1
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Superb-27B
0.4752
0
0.3693
0.1546
0.4889
0.7401
0.794
0.8375
0.5963
0.4787
0.6537
0.1138
0.5243
0.8489
11.6238
0.9024
0.9493
15.7088
0.8748
0.3693
0.8367
0.5201
0.6375
0.8275
0.4597
0.4592
0.4454
0.7595
0.619
0.6537
0.8575
0.8442
0.5562
0.794
0
0
0.5187
0.452
0
0.0003
0.0177
0.0006
0.7543
0.8123
9.2255
0.8283
0.8986
9.8337
0.7443
0.7053
2.9798
30.9162
11.3832
0.1138
26.3574
Gemma2ForCausalLM
bfloat16
apache-2.0
27.227
1
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Saxo/Linkbricks-Horizon-AI-Korean-LLAMA3blend-8x8b
0.1721
0.0221
0
0.0207
0.0513
0.1792
0
0.7006
0.024
0.1449
0.7038
0.046
0.0765
0.7761
6.8048
0.7483
0.9182
9.8772
0.8069
0
0.246
0
0
0.0599
0.1966
0.0034
0
0.1199
0
0.7038
0.0189
0.011
0.2319
0
0.0221
0.1365
0.0991
0.1616
0
0
0
0
0.1036
0.6901
4.5453
0.607
0.8623
7.0875
0.6401
0.6344
1.8469
14.604
4.6105
0.046
12.3083
MixtralForCausalLM
bfloat16
apache-2.0
47.491
0
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Saxo/Linkbricks-Horizon-AI-Korean-LLAMA3blend-8x8b
0.4287
0.0221
0.3743
0.0344
0.5065
0.6831
0.638
0.7637
0.5358
0.2853
0.8263
0.046
0.2365
0.8091
8.1977
0.7909
0.9403
13.6463
0.8577
0.3743
0.6786
0.4626
0.5167
0.8329
0.305
0.4177
0.583
0.6591
0.4577
0.8263
0.7341
0.6883
0.5378
0.638
0.0221
0.1365
0.5953
0.3144
0
0.0586
0.0265
0.0469
0.0397
0.7536
8.0012
0.6904
0.8862
8.9768
0.7157
0.6344
1.8469
14.604
4.6105
0.046
12.3083
MixtralForCausalLM
bfloat16
apache-2.0
47.491
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Superb-22B
0.5851
0.498
0.4856
0.2728
0.5927
0.7679
0.778
0.8345
0.7428
0.4437
0.9099
0.1104
0.4165
0.8528
11.3785
0.8934
0.9508
16.5359
0.8774
0.4856
0.7788
0.6178
0.8222
0.8499
0.5034
0.5131
0.7518
0.7538
0.7682
0.9099
0.8771
0.8301
0.6751
0.778
0.498
0.9257
0.6723
0.4111
0.0095
0.4314
0.0973
0.0754
0.7507
0.8158
10.4512
0.8228
0.9012
10.3898
0.7445
0.7033
2.6601
31.7098
11.0387
0.1104
26.7946
MistralForCausalLM
bfloat16
apache-2.0
22.247
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Korean-Superb-22B
0.5132
0.498
0.134
0.127
0.5569
0.7256
0.712
0.8254
0.7278
0.3634
0.8646
0.1104
0.3482
0.8332
10.9112
0.8856
0.9483
14.7589
0.8744
0.134
0.8124
0.6092
0.8042
0.8034
0.4094
0.477
0.6919
0.7532
0.7804
0.8646
0.882
0.8438
0.5611
0.712
0.498
0.9257
0.6369
0.3327
0.0167
0.0036
0.0265
0.0061
0.5817
0.7895
7.8377
0.8081
0.8949
9.5267
0.7334
0.7033
2.6601
31.7098
11.0387
0.1104
26.7946
MistralForCausalLM
bfloat16
apache-2.0
22.247
0
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Groq/Llama-3-Groq-70B-Tool-Use
0.5794
0
0.5527
0.2538
0.6871
0.8549
0.888
0.8494
0.7144
0.5906
0.9028
0.0791
0.6525
0.8615
12.384
0.9042
0.9552
17.3305
0.8848
0.5527
0.893
0.6523
0.7347
0.9428
0.5614
0.6306
0.569
0.7765
0.8397
0.9028
0.8723
0.8424
0.7288
0.888
0
0
0.7436
0.558
0.0472
0.2539
0.115
0.0562
0.7966
0.8309
11.312
0.8422
0.9092
11.5112
0.7665
0.6678
1.7417
23.5293
7.895
0.0791
19.3868
LlamaForCausalLM
bfloat16
llama3
70.554
150
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Groq/Llama-3-Groq-70B-Tool-Use
0.2349
0
0.0479
0.0263
0.1835
0.6307
0.026
0.8043
0.4793
0.0769
0.23
0.0791
0.0178
0.8191
7.6383
0.8472
0.9467
14.2313
0.8706
0.0479
0.8224
0.3736
0.5097
0.706
0.1918
0.2612
0.4207
0.4028
0.6899
0.23
0.6779
0.6869
0.3637
0.026
0
0
0.1058
0.021
0
0
0
0
0.1313
0.7728
6.5564
0.7711
0.8768
8.8106
0.7284
0.6678
1.7417
23.5293
7.895
0.0791
19.3868
LlamaForCausalLM
bfloat16
llama3
70.554
150
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Superb-70B
0.0382
0
0
0.0063
0
0.0002
0
0.3028
0.0199
0.0386
0.0352
0.0173
0.0312
0.5337
0.7079
0.3192
0.7474
2.4924
0.3026
0
0
0.0603
0
0
0.0664
0
0.0345
0
0.0049
0.0352
0.187
0.1867
0.0007
0
0
0.0763
0
0.0181
0
0
0
0
0.0314
0.5103
0.5081
0.3027
0.7327
2.2188
0.2868
0.5176
0.6032
5.6811
1.7448
0.0173
4.5622
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Superb-70B
0.3769
0
0.4294
0.1865
0.5792
0.5504
0.004
0.4209
0.6253
0.5621
0.7706
0.0173
0.7402
0.4935
3.1239
0.2871
0.823
11.5232
0.4972
0.4294
0.5721
0.3822
0.6306
0.6935
0.3792
0.5332
0.5534
0.7424
0.8179
0.7706
0.5867
0.5814
0.3857
0.004
0
0.0763
0.6252
0.5668
0.0193
0.1452
0.0796
0.019
0.6694
0.6592
4.2524
0.5034
0.7807
5.6633
0.3959
0.5176
0.6032
5.6811
1.7448
0.0173
4.5622
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Superb-27B
0.4032
0.01
0.1079
0.0961
0.4781
0.5902
0.558
0.7516
0.595
0.3292
0.8155
0.1032
0.3536
0.841
11.7538
0.8846
0.8145
5.3192
0.7474
0.1079
0.6896
0.4741
0.65
0.6336
0.3612
0.3976
0.4626
0.7405
0.6477
0.8155
0.7157
0.7146
0.4475
0.558
0.01
0.0201
0.5586
0.2728
0
0
0.0088
0
0.4718
0.772
7.8411
0.7582
0.8162
5.4554
0.6161
0.6958
2.4583
28.2256
10.3175
0.1032
23.8243
Gemma2ForCausalLM
bfloat16
apache-2.0
27.227
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Superb-27B
0.5489
0.01
0.5032
0.2534
0.6185
0.7689
0.796
0.8392
0.7205
0.5195
0.9051
0.1032
0.6184
0.8679
13.3313
0.9029
0.9539
17.8256
0.8807
0.5032
0.8104
0.569
0.7944
0.8615
0.4728
0.5628
0.8012
0.7506
0.6872
0.9051
0.8501
0.8478
0.6349
0.796
0.01
0.0201
0.6743
0.4674
0.0351
0.3239
0.1239
0.0716
0.7127
0.8291
11.6168
0.8233
0.9086
11.9401
0.75
0.6958
2.4583
28.2256
10.3175
0.1032
23.8243
Gemma2ForCausalLM
bfloat16
apache-2.0
27.227
0
main
4
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
DataPilot/Llama3.1-ArrowSE-v0.4
0.3805
0.01
0.0869
0.1288
0.3365
0.5511
0.446
0.8173
0.5798
0.3142
0.8138
0.1012
0.3983
0.8414
9.5501
0.885
0.9443
13.4254
0.8683
0.0869
0.6062
0.4828
0.5597
0.5719
0.3555
0.296
0.5173
0.7134
0.6257
0.8138
0.7019
0.7031
0.4753
0.446
0.01
0.0562
0.3771
0.1889
0
0.0037
0
0.0017
0.6384
0.7768
7.2742
0.7853
0.8925
9.0989
0.7305
0.6964
2.6875
26.2482
10.1332
0.1012
22.9713
LlamaForCausalLM
bfloat16
llama3
8.03
12
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
DataPilot/Llama3.1-ArrowSE-v0.4
0.5353
0.01
0.5226
0.2403
0.5439
0.8209
0.732
0.8297
0.6535
0.5314
0.9031
0.1012
0.5902
0.8449
10.75
0.8804
0.9468
14.7745
0.8713
0.5226
0.8317
0.4971
0.6444
0.9142
0.5258
0.5075
0.7765
0.7165
0.633
0.9031
0.822
0.7741
0.7168
0.732
0.01
0.0562
0.5803
0.4781
0.0133
0.2512
0.0619
0.0596
0.8155
0.8118
9.9136
0.8201
0.8998
10.2738
0.7469
0.6964
2.6875
26.2482
10.1332
0.1012
22.9713
LlamaForCausalLM
bfloat16
llama3
8.03
12
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Japanese-Advanced-70B
0.6143
0.5783
0.3583
0.1572
0.709
0.8749
0.904
0.8457
0.7753
0.547
0.9092
0.0987
0.7101
0.8613
13.8252
0.9037
0.9599
18.7713
0.8914
0.3583
0.9053
0.6954
0.7958
0.9428
0.4228
0.6812
0.7909
0.7967
0.7978
0.9092
0.8968
0.8638
0.7765
0.904
0.5783
0.994
0.7368
0.5081
0
0.0206
0.0265
0.0072
0.7315
0.829
12.3399
0.8306
0.9095
12.2603
0.7572
0.6875
3.3843
26.3633
9.8786
0.0987
22.5546
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Saxo/Linkbricks-Horizon-AI-Japanese-Advanced-70B
0.678
0.5783
0.5763
0.3458
0.743
0.8971
0.948
0.8516
0.786
0.7098
0.9235
0.0987
0.8565
0.8711
14.5233
0.9081
0.961
19.5881
0.893
0.5763
0.9091
0.6609
0.8861
0.95
0.6051
0.7159
0.7597
0.791
0.8324
0.9235
0.9017
0.8741
0.8323
0.948
0.5783
0.994
0.7701
0.668
0.133
0.5174
0.115
0.0726
0.8911
0.8477
15.1291
0.8397
0.9159
12.8306
0.7657
0.6875
3.3843
26.3633
9.8786
0.0987
22.5546
LlamaForCausalLM
bfloat16
apache-2.0
70.554
0
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-1.5B
0.2491
0.2369
0
0.0509
0.1798
0.4052
0.06
0.6805
0.539
0.0543
0.4533
0.0806
0.0612
0.7507
4.3047
0.7409
0.8971
7.5066
0.7932
0
0.5321
0.3477
0.5
0.412
0
0.0401
0.5571
0.6717
0.6186
0.4533
0
0
0.2717
0.06
0.2369
0.6245
0.3195
0.1019
0
0.0013
0
0
0.2535
0.6731
2.6965
0.6005
0.8219
4.2373
0.5873
0.6768
1.6616
22.9674
8.0693
0.0806
19.6177
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
47
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-1.5B
0.4527
0.2369
0.3987
0.0805
0.4975
0.6418
0.614
0.7562
0.5518
0.2726
0.849
0.0806
0.2178
0.8082
7.559
0.8353
0.9326
12.2128
0.8418
0.3987
0.6328
0.431
0.5222
0.8025
0.3618
0.4414
0.7194
0.7058
0.3806
0.849
0.4837
0.4659
0.4902
0.614
0.2369
0.6245
0.5537
0.2384
0.0025
0.1087
0.0088
0.0307
0.2519
0.7315
6.0025
0.6865
0.8702
7.2415
0.6615
0.6768
1.6616
22.9674
8.0693
0.0806
19.6177
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
47
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-3B
0.3406
0.4578
0
0.0507
0.3996
0.5643
0.056
0.7402
0.5557
0.1323
0.6944
0.0954
0.104
0.7964
6.9458
0.8142
0.9095
9.5434
0.8262
0
0.5681
0.3764
0.5181
0.6819
0.146
0.3629
0.6343
0.6471
0.6024
0.6944
0.5896
0.6394
0.4428
0.056
0.4578
0.9076
0.4364
0.1469
0
0
0
0
0.2537
0.7239
4.8137
0.6844
0.8361
4.9553
0.6359
0.6905
1.7988
27.3375
9.5466
0.0954
23.1391
Qwen2ForCausalLM
bfloat16
other
3.086
40
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-3B
0.523
0.4578
0.4363
0.1307
0.5721
0.7298
0.652
0.7946
0.6547
0.3515
0.8775
0.0954
0.2818
0.8264
9.4681
0.8626
0.942
13.5994
0.8623
0.4363
0.8016
0.4655
0.6056
0.8454
0.4773
0.5309
0.6906
0.7323
0.7796
0.8775
0.8128
0.7891
0.5425
0.652
0.4578
0.9076
0.6134
0.2954
0.0334
0.1174
0.0354
0.0362
0.4311
0.7648
7.1679
0.7499
0.884
8.7391
0.7037
0.6905
1.7988
27.3375
9.5466
0.0954
23.1391
Qwen2ForCausalLM
bfloat16
other
3.086
40
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-7B
0.3405
0.0161
0.1551
0.0581
0.4239
0.6685
0.202
0.7973
0.6331
0.2246
0.47
0.0968
0.2371
0.8215
8.4662
0.8716
0.9366
13.4706
0.8402
0.1551
0.7062
0.4741
0.5889
0.7122
0.2563
0.3869
0.6146
0.7588
0.7292
0.47
0.837
0.8182
0.5872
0.202
0.0161
0.0301
0.4609
0.1804
0
0.0019
0
0
0.2885
0.7565
6.0992
0.7721
0.879
7.8584
0.7053
0.6931
2.3301
25.1482
9.6837
0.0968
21.9673
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
80
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-7B
0.5415
0.0161
0.4283
0.2223
0.6666
0.8379
0.792
0.8278
0.736
0.4256
0.9066
0.0968
0.3938
0.8487
10.8301
0.8925
0.9504
15.7946
0.8779
0.4283
0.8657
0.5891
0.6972
0.9223
0.4566
0.6365
0.8332
0.7519
0.8084
0.9066
0.8611
0.8322
0.7258
0.792
0.0161
0.0301
0.6967
0.4265
0.0128
0.3366
0.0531
0.0548
0.6541
0.7976
8.6186
0.8058
0.8955
9.8599
0.7349
0.6931
2.3301
25.1482
9.6837
0.0968
21.9673
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
80
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-14B
0.64
0.5763
0.5457
0.2571
0.7196
0.8852
0.878
0.8444
0.7555
0.5439
0.9213
0.1131
0.5462
0.8624
11.8535
0.9063
0.9556
17.5633
0.8862
0.5457
0.9036
0.6264
0.7722
0.9598
0.5515
0.6922
0.8283
0.7702
0.7806
0.9213
0.8994
0.8718
0.7922
0.878
0.5763
0.8916
0.7471
0.5341
0.0389
0.3583
0.0531
0.06
0.7752
0.823
10.6974
0.8359
0.9015
10.8796
0.7493
0.704
2.5569
29.7912
11.3442
0.1131
25.5429
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
44
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-14B
0.5301
0.5763
0.2738
0.139
0.3851
0.8007
0.768
0.8298
0.7048
0.3818
0.8592
0.1131
0.4457
0.8436
10.487
0.8953
0.9536
15.967
0.8831
0.2738
0.8184
0.5948
0.7181
0.9053
0.3196
0.5784
0.7305
0.7266
0.7538
0.8592
0.8867
0.8563
0.6784
0.768
0.5763
0.8916
0.1918
0.3801
0.001
0.0023
0.0265
0
0.6653
0.785
7.9777
0.808
0.8909
9.3913
0.7328
0.704
2.5569
29.7912
11.3442
0.1131
25.5429
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
44
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-32B
0.475
0
0.216
0.1525
0.2789
0.8509
0.748
0.838
0.7234
0.3905
0.9066
0.1205
0.3469
0.8501
12.3975
0.9029
0.9551
16.6235
0.8855
0.216
0.8933
0.5862
0.7306
0.9169
0.4224
0.3293
0.7979
0.7784
0.7238
0.9066
0.8875
0.8563
0.7426
0.748
0
0
0.2285
0.4021
0.005
0.0022
0.0354
0.0006
0.7194
0.7937
8.6852
0.8211
0.8952
9.5585
0.7423
0.7091
2.6689
32.4883
12.0549
0.1205
27.3971
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
50
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-32B
0.606
0
0.558
0.2789
0.7714
0.8927
0.922
0.8496
0.781
0.566
0.9254
0.1205
0.5788
0.8663
12.9367
0.9097
0.9572
17.9768
0.8879
0.558
0.8948
0.6667
0.8153
0.9598
0.5727
0.7509
0.8866
0.7942
0.742
0.9254
0.9084
0.881
0.8235
0.922
0
0
0.792
0.5464
0.0194
0.3497
0.1239
0.0827
0.8187
0.8408
13.6989
0.8426
0.9071
11.5011
0.7582
0.7091
2.6689
32.4883
12.0549
0.1205
27.3971
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
50
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2-0.5B
0.2949
0.0602
0.2686
0.0443
0.3112
0.363
0.112
0.6094
0.5137
0.2384
0.6638
0.0591
0.1361
0.741
5.7629
0.6806
0.8992
8.9861
0.7462
0.2686
0.5316
0.3678
0.5
0.3083
0.4122
0.2773
0.5534
0.673
0.4743
0.6638
0.0347
0.0278
0.2492
0.112
0.0602
0.3072
0.345
0.1668
0.002
0.017
0.0088
0.0108
0.183
0.6161
3.7635
0.4771
0.8302
5.6157
0.5337
0.646
1.4349
19.0811
5.8989
0.0591
15.7662
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
117
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2-0.5B
0.1231
0.0602
0
0.0393
0.0091
0.2466
0.01
0.529
0.03
0.1042
0.266
0.0591
0.0638
0.673
1.8786
0.5619
0.7551
0.2936
0.614
0
0.4411
0.0029
0.0014
0.0894
0.1485
0.0152
0.0012
0.1446
0
0.266
-0.0319
-0.0703
0.2092
0.01
0.0602
0.3072
0.003
0.1005
0
0
0
0
0.1967
0.6312
1.9647
0.4636
0.7469
0.3821
0.4765
0.646
1.4349
19.0811
5.8989
0.0591
15.7662
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
117
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2-1.5B
0.2159
0
0.0005
0.0515
0.0552
0.3876
0.062
0.717
0.2873
0.1365
0.5979
0.0796
0.0854
0.7869
7.1799
0.7999
0.928
10.8136
0.8304
0.0005
0.6701
0.3448
0.5
0.2395
0.1978
0
0.168
0.1686
0.2551
0.5979
0.2467
0.3176
0.2532
0.062
0
0.002
0.1105
0.1264
0
0
0
0
0.2574
0.6978
3.6941
0.6341
0.8499
6.0846
0.6036
0.6744
1.7489
23.2472
7.962
0.0796
19.6473
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
82
main
0
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
Qwen/Qwen2-1.5B
0.3898
0
0.3824
0.0846
0.4564
0.4979
0.488
0.7415
0.4372
0.2992
0.8212
0.0796
0.2042
0.8063
8.479
0.8239
0.9308
12.5316
0.8349
0.3824
0.7179
0.3362
0.5306
0.4727
0.4475
0.4163
0.6857
0.4242
0.2093
0.8212
0.5229
0.4666
0.3031
0.488
0
0.002
0.4965
0.2461
0
0.0764
0
0.0254
0.3211
0.7171
5.9459
0.6639
0.8651
7.6306
0.6435
0.6744
1.7489
23.2472
7.962
0.0796
19.6473
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
82
main
4
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
Qwen/Qwen2-7B
0.5209
0.0181
0.4777
0.1976
0.6274
0.764
0.748
0.8128
0.6818
0.3944
0.8987
0.1093
0.3506
0.8404
10.1222
0.8847
0.9488
15.6531
0.8752
0.4777
0.8274
0.4684
0.7236
0.8767
0.4632
0.5877
0.8061
0.7172
0.6937
0.8987
0.8436
0.8037
0.588
0.748
0.0181
0.0241
0.6671
0.3692
0.0235
0.2782
0.0531
0.0348
0.5981
0.7796
8.1771
0.7727
0.8887
9.305
0.7188
0.7026
2.4508
30.999
10.9279
0.1093
26.2173
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
142
main
4
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
Qwen/Qwen2-7B
0.3257
0.0181
0
0.0636
0.3982
0.6449
0.07
0.7919
0.6025
0.1357
0.7481
0.1093
0.1295
0.8245
9.1181
0.8698
0.9442
13.8904
0.868
0
0.7841
0.4397
0.5278
0.6667
0.0931
0.2191
0.5703
0.6862
0.7887
0.7481
0.8077
0.7238
0.4838
0.07
0.0181
0.0241
0.5773
0.1845
0
0.0009
0
0
0.3173
0.7413
5.4957
0.7422
0.8699
7.7003
0.6874
0.7026
2.4508
30.999
10.9279
0.1093
26.2173
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
142
main
0
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
Qwen/Qwen2-57B-A14B
0.6139
0.5241
0.5205
0.2306
0.6855
0.8517
0.858
0.8323
0.7124
0.5017
0.9142
0.1217
0.5089
0.8535
11.3668
0.8992
0.9535
16.8441
0.8825
0.5205
0.8783
0.5287
0.7417
0.9303
0.5161
0.6524
0.8632
0.7626
0.6657
0.9142
0.8805
0.8502
0.7466
0.858
0.5241
0.9237
0.7186
0.48
0.014
0.3419
0.0619
0.0427
0.6923
0.8097
10.0138
0.8137
0.897
9.9209
0.734
0.7097
2.7977
33.4329
12.1784
0.1217
28.1609
Qwen2MoeForCausalLM
bfloat16
apache-2.0
57.409
48
main
4
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
Qwen/Qwen2-57B-A14B
0.4889
0.5241
0.0706
0.0897
0.5516
0.7261
0.688
0.8189
0.6265
0.3184
0.8428
0.1217
0.401
0.84
10.8247
0.8917
0.9519
15.7549
0.8787
0.0706
0.8279
0.5172
0.6056
0.7471
0.1775
0.5549
0.6639
0.7266
0.619
0.8428
0.8494
0.8154
0.6034
0.688
0.5241
0.9237
0.5482
0.3768
0
0
0
0
0.4486
0.7683
6.6972
0.7793
0.8906
9.154
0.7261
0.7097
2.7977
33.4329
12.1784
0.1217
28.1609
Qwen2MoeForCausalLM
bfloat16
apache-2.0
57.409
48
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-0.5B-Instruct
0.2922
0
0.2671
0.0411
0.318
0.3646
0.158
0.6466
0.4373
0.253
0.6719
0.0572
0.1436
0.7508
5.6235
0.7076
0.9048
8.6022
0.7607
0.2671
0.4865
0.3592
0.5
0.3592
0.4299
0.2909
0.5534
0.6016
0.1721
0.6719
0.1028
0.0833
0.2481
0.158
0
0
0.3452
0.1853
0
0.0138
0
0.0044
0.1871
0.6329
3.5797
0.5359
0.8491
6.2043
0.582
0.6387
1.4526
16.7298
5.7426
0.0572
14.0599
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
164
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-0.5B-Instruct
0.1651
0
0
0.0367
0.0023
0.3055
0.016
0.5734
0.3453
0.1141
0.3657
0.0572
0.0697
0.6849
1.8917
0.6081
0.8245
3.9205
0.6858
0
0.4677
0.2557
0.4889
0.2046
0.1869
0
0.1454
0.6168
0.2196
0.3657
-0.0544
-0.0494
0.2442
0.016
0
0
0.0046
0.0858
0
0
0
0
0.1837
0.6393
1.6557
0.496
0.7729
1.7655
0.5035
0.6387
1.4526
16.7298
5.7426
0.0572
14.0599
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
164
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-1.5B-Instruct
0.4076
0
0.421
0.0889
0.4649
0.5432
0.514
0.7642
0.4777
0.3026
0.8429
0.0643
0.2005
0.8071
8.8463
0.825
0.9311
13.0961
0.8374
0.421
0.6686
0.3362
0.5167
0.6175
0.4575
0.4242
0.6598
0.4886
0.3871
0.8429
0.2818
0.2412
0.3436
0.514
0
0
0.5057
0.2497
0.0014
0.0934
0
0.0245
0.325
0.7413
5.8109
0.7171
0.8751
8.1549
0.6771
0.6605
1.7907
17.9004
6.444
0.0643
15.5797
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
132
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-1.5B-Instruct
0.2454
0
0
0.0403
0.002
0.4553
0.098
0.7435
0.4054
0.1512
0.7396
0.0643
0.1053
0.7869
7.1679
0.8156
0.929
10.5792
0.8341
0
0.7583
0.4224
0.5625
0.3494
0.1957
0
0.4984
0.1698
0.3739
0.7396
0.6
0.6212
0.2583
0.098
0
0
0.0039
0.1525
0
0
0
0
0.2013
0.692
3.808
0.6701
0.8659
6.1582
0.6542
0.6605
1.7907
17.9004
6.444
0.0643
15.5797
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
132
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-7B-Instruct
0.3738
0
0.0956
0.0934
0.3847
0.6992
0.388
0.8107
0.6438
0.1287
0.7874
0.0802
0.0869
0.83
9.8287
0.8843
0.9452
13.3766
0.8689
0.0956
0.8039
0.5172
0.6111
0.7793
0.1577
0.2488
0.7005
0.6698
0.7205
0.7874
0.3949
0.3625
0.5143
0.388
0
0
0.5206
0.1414
0
0.0009
0
0
0.4661
0.7496
5.0348
0.7734
0.8827
7.6153
0.716
0.681
2.3279
20.6489
8.0238
0.0802
18.2714
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
600
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-7B-Instruct
0.5169
0
0.4696
0.2008
0.6092
0.8024
0.742
0.8247
0.6934
0.3676
0.8964
0.0802
0.3376
0.8362
10.2944
0.8901
0.9485
14.9416
0.8743
0.4696
0.8429
0.5259
0.7306
0.891
0.3902
0.5648
0.8131
0.6717
0.7258
0.8964
0.8585
0.8243
0.6733
0.742
0
0
0.6535
0.3751
0.0159
0.3047
0.0088
0.0375
0.6368
0.7802
7.2739
0.801
0.892
9.2122
0.7333
0.681
2.3279
20.6489
8.0238
0.0802
18.2714
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
600
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-57B-A14B-Instruct
0.2776
0.01
0.1687
0.0991
0.0008
0.6917
0.118
0.7986
0.3987
0.2148
0.4609
0.0922
0.2165
0.8117
9.9367
0.8472
0.9505
15.6139
0.8761
0.1687
0.5015
0.5144
0.2931
0.8749
0.1803
0.0003
0.606
0.0524
0.5279
0.4609
0.8692
0.8354
0.6988
0.118
0.01
0.0141
0.0014
0.2475
0
0.0003
0
0
0.4953
0.7526
7.0226
0.7637
0.8834
8.8813
0.7073
0.6888
2.7934
23.8128
9.2291
0.0922
20.8889
Qwen2MoeForCausalLM
bfloat16
apache-2.0
57.409
77
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen2-57B-A14B-Instruct
0.5503
0.01
0.4993
0.2334
0.6246
0.8566
0.864
0.8292
0.6731
0.4597
0.9112
0.0922
0.4843
0.8471
11.915
0.8817
0.9527
16.7566
0.8802
0.4993
0.853
0.5661
0.7431
0.9446
0.4926
0.5244
0.7338
0.7229
0.5998
0.9112
0.8906
0.8581
0.7722
0.864
0.01
0.0141
0.7247
0.4023
0.0075
0.3219
0.0973
0.0435
0.6968
0.813
10.2411
0.8222
0.8952
9.9832
0.7328
0.6888
2.7934
23.8128
9.2291
0.0922
20.8889
Qwen2MoeForCausalLM
bfloat16
apache-2.0
57.409
77
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-0.5B-Chat
0.1882
0
0.1331
0.0412
0.2358
0.1121
0.052
0.5198
0.2573
0.1776
0.5409
0.0001
0.0879
0.6924
2.0155
0.5452
0.8582
6.4008
0.646
0.1331
0.1976
0.3276
0.4764
0.1385
0.3413
0.2064
0
0.1888
0.2937
0.5409
-0.0691
-0.0638
0
0.052
0
0
0.2652
0.1035
0
0.0097
0
0.0067
0.1895
0.6312
0.4018
0.4103
0.8041
4.1518
0.4777
0.001
0.0021
0.0465
0.0102
0.0001
0.0465
Qwen2ForCausalLM
bfloat16
other
0.62
75
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-0.5B-Chat
0.0547
0
0
0.0035
0.0315
0
0
0.3765
0
0.0668
0.1231
0.0001
0.0384
0.5709
0.3243
0.4203
0.3723
1.5193
0.3961
0
0
0
0
0
0.08
0.0059
0
0
0
0.1231
0.0225
0.0262
0
0
0
0
0.0571
0.0819
0
0
0
0
0.0175
0.4178
0.2666
0.345
0.2607
0.6532
0.3447
0.001
0.0021
0.0465
0.0102
0.0001
0.0465
Qwen2ForCausalLM
bfloat16
other
0.62
75
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-1.8B-Chat
0.1002
0
0
0.019
0
0.0192
0
0.5877
0.0405
0.0968
0.2801
0.0584
0.0447
0.6759
4.5707
0.6215
0.8582
7.6355
0.6663
0
0.0576
0.0517
0
0
0.2102
0
0.0859
0
0.0652
0.2801
0.0029
0.0043
0
0
0
0
0.0001
0.0355
0
0
0
0
0.095
0.6216
2.8529
0.5342
0.8035
4.2879
0.529
0.6509
1.56
16.1139
5.8593
0.0584
13.7961
Qwen2ForCausalLM
bfloat16
other
1.837
48
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-1.8B-Chat
0.2952
0
0.2478
0.0269
0.3644
0.3713
0.26
0.6489
0.3577
0.2086
0.7035
0.0584
0.134
0.7673
6.7582
0.7306
0.903
9.8768
0.7624
0.2478
0.4619
0.3391
0.5
0.3816
0.3193
0.3265
0.3685
0.4104
0.1707
0.7035
0.1143
0.063
0.2702
0.26
0
0
0.4024
0.1723
0
0.0313
0
0.0029
0.1001
0.6293
3.2816
0.5376
0.8285
6.1865
0.5649
0.6509
1.56
16.1139
5.8593
0.0584
13.7961
Qwen2ForCausalLM
bfloat16
other
1.837
48
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-7B-Chat
0.452
0
0.4605
0.1378
0.4991
0.686
0.584
0.7983
0.5887
0.3037
0.8698
0.0445
0.2248
0.8272
9.1338
0.8683
0.9322
12.1327
0.8485
0.4605
0.7776
0.4224
0.6667
0.8043
0.4178
0.4335
0.6623
0.5644
0.6276
0.8698
0.7934
0.743
0.4762
0.584
0
0
0.5647
0.2685
0.0071
0.2324
0.0088
0.0434
0.3972
0.7536
6.1516
0.7651
0.8782
7.9472
0.7113
0.6556
1.1577
15.3333
4.4536
0.0445
13.002
Qwen2ForCausalLM
bfloat16
other
7.721
164
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-7B-Chat
0.2843
0
0.0293
0.0433
0.3305
0.4524
0.33
0.7641
0.1841
0.1599
0.7894
0.0445
0.1079
0.7455
7.5823
0.8066
0.9299
11.8686
0.8396
0.0293
0.7162
0.1207
0.4708
0.2726
0.1985
0.3163
0.1348
0.0019
0.1924
0.7894
0.1584
0.1598
0.3684
0.33
0
0
0.3447
0.1732
0
0
0
0
0.2165
0.7197
4.5818
0.726
0.8688
7.2124
0.684
0.6556
1.1577
15.3333
4.4536
0.0445
13.002
Qwen2ForCausalLM
bfloat16
other
7.721
164
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-14B-Chat
0.4056
0
0.1907
0.0614
0.5275
0.7393
0.37
0.8068
0.5557
0.2627
0.8505
0.0967
0.206
0.825
9.9127
0.8698
0.9436
13.428
0.8687
0.1907
0.8099
0.4799
0.6375
0.8293
0.3008
0.4801
0.5559
0.3782
0.727
0.8505
0.7715
0.8036
0.5787
0.37
0
0.002
0.5748
0.2813
0
0
0.0088
0
0.298
0.7525
5.8479
0.7672
0.8818
8.244
0.7214
0.6983
2.1618
29.2494
9.6714
0.0967
24.6822
Qwen2ForCausalLM
bfloat16
other
14.167
111
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-14B-Chat
0.4974
0
0.4807
0.1831
0.5779
0.7963
0.698
0.8227
0.5921
0.3317
0.8925
0.0967
0.2976
0.845
10.0664
0.8865
0.9451
14.3919
0.8711
0.4807
0.8354
0.4167
0.7389
0.8731
0.3754
0.5256
0.6397
0.601
0.564
0.8925
0.8122
0.7942
0.6805
0.698
0
0.002
0.6302
0.322
0.0072
0.271
0.0708
0.016
0.5506
0.7814
7.0721
0.8005
0.886
8.4866
0.7326
0.6983
2.1618
29.2494
9.6714
0.0967
24.6822
Qwen2ForCausalLM
bfloat16
other
14.167
111
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-4B-Chat
0.3956
0
0.357
0.0631
0.4541
0.5809
0.506
0.7488
0.4775
0.259
0.8282
0.0768
0.1654
0.8046
8.1103
0.8109
0.9273
11.9636
0.8319
0.357
0.632
0.3966
0.5514
0.6783
0.401
0.4019
0.387
0.6604
0.3921
0.8282
0.6253
0.5879
0.4324
0.506
0
0
0.5063
0.2105
0
0.065
0.0442
0.0084
0.1978
0.7091
5.8707
0.6667
0.8744
7.9838
0.6858
0.618
1.4681
23.955
7.6708
0.0768
19.7316
Qwen2ForCausalLM
bfloat16
other
3.95
38
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-4B-Chat
0.1012
0
0
0.0144
0
0
0
0.5369
0
0.005
0.4796
0.0768
0.0021
0.2557
2.4854
0.4695
0.7929
9.5308
0.7447
0
0
0
0
0
0
0
0
0
0
0.4796
0
0
0
0
0
0
0
0.0128
0
0
0
0
0.0722
0.1513
1.6932
0.3996
0.5721
4.8336
0.5337
0.618
1.4681
23.955
7.6708
0.0768
19.7316
Qwen2ForCausalLM
bfloat16
other
3.95
38
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-32B-Chat
0.4491
0.002
0.3352
0.0994
0.5655
0.7991
0.464
0.8156
0.6205
0.2682
0.8672
0.1034
0.2627
0.8289
9.7996
0.8725
0.9479
15.4891
0.8749
0.3352
0.8479
0.4856
0.6167
0.8865
0.2206
0.5369
0.5505
0.702
0.7477
0.8672
0.8485
0.8333
0.6627
0.464
0.002
0.002
0.5941
0.3213
0.0067
0.0051
0
0.0027
0.4826
0.7703
6.928
0.7853
0.887
8.8918
0.7296
0.7034
2.7394
30.1233
10.3423
0.1034
25.5046
Qwen2ForCausalLM
bfloat16
other
32.512
108
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
Qwen/Qwen1.5-32B-Chat
0.5373
0.002
0.5151
0.2289
0.6696
0.8254
0.778
0.8343
0.6441
0.4013
0.908
0.1034
0.3572
0.8501
11.1126
0.8932
0.9492
16.067
0.8756
0.5151
0.8515
0.5086
0.7417
0.9124
0.4547
0.6159
0.7169
0.6679
0.5855
0.908
0.8499
0.8268
0.7125
0.778
0.002
0.002
0.7233
0.3922
0.0179
0.3671
0.0708
0.0613
0.6277
0.8053
9.0928
0.8211
0.8954
9.8035
0.7474
0.7034
2.7394
30.1233
10.3423
0.1034
25.5046
Qwen2ForCausalLM
bfloat16
other
32.512
108
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-32B
0.5655
0.1767
0.5109
0.228
0.6524
0.8314
0.802
0.8285
0.7195
0.4361
0.9089
0.1261
0.4019
0.8497
11.2727
0.8908
0.9522
16.6836
0.8798
0.5109
0.857
0.5747
0.7
0.9106
0.488
0.6027
0.8307
0.7734
0.7187
0.9089
0.8656
0.8391
0.7267
0.802
0.1767
0.3655
0.7021
0.4185
0.0457
0.3332
0.0708
0.0702
0.6201
0.8073
9.8641
0.8062
0.8985
10.4966
0.7371
0.7147
2.7423
35.6447
12.6092
0.1261
29.8654
Qwen2ForCausalLM
bfloat16
other
32.512
82
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-32B
0.3937
0.1767
0.1485
0.065
0.1771
0.6853
0.468
0.8034
0.5898
0.2395
0.8513
0.1261
0.2662
0.8291
10.0696
0.8712
0.949
15.1445
0.8749
0.1485
0.8226
0.4454
0.5639
0.7775
0.1258
0.0412
0.5468
0.6534
0.7394
0.8513
0.8487
0.827
0.4558
0.468
0.1767
0.3655
0.3131
0.3266
0
0.0004
0.0177
0
0.307
0.7586
6.3927
0.7604
0.8813
8.7998
0.7073
0.7147
2.7423
35.6447
12.6092
0.1261
29.8654
Qwen2ForCausalLM
bfloat16
other
32.512
82
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-4B
0.3919
0.002
0.3934
0.0663
0.4464
0.5462
0.502
0.7433
0.4835
0.2684
0.8345
0.0249
0.1819
0.8088
8.6103
0.8119
0.9295
12.4554
0.8352
0.3934
0.6671
0.4397
0.5681
0.5853
0.3962
0.3846
0.3895
0.6843
0.3361
0.8345
0.6416
0.5944
0.3862
0.502
0.002
0.002
0.5082
0.227
0.0087
0.1051
0.0177
0.0308
0.1693
0.7281
6.7202
0.6631
0.8721
8.2854
0.663
0.6105
0.6209
7.2195
2.4899
0.0249
6.3981
Qwen2ForCausalLM
bfloat16
other
3.95
33
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-4B
0.21
0.002
0
0.047
0.0784
0.2695
0.002
0.6713
0.5074
0.0724
0.6355
0.0249
0.0485
0.7439
5.5636
0.6997
0.9199
10.8054
0.8191
0
0.5358
0.3391
0.3931
0.0179
0.0685
0.0028
0.5534
0.6313
0.6201
0.6355
-0.0795
-0.0382
0.2548
0.002
0.002
0.002
0.1539
0.1001
0
0
0
0
0.2349
0.6822
3.2091
0.5786
0.8373
5.7173
0.5879
0.6105
0.6209
7.2195
2.4899
0.0249
6.3981
Qwen2ForCausalLM
bfloat16
other
3.95
33
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-7B
0.2621
0.1044
0.0294
0.0502
0.1352
0.4077
0.008
0.764
0.4396
0.1447
0.7127
0.0875
0.1064
0.8077
7.8239
0.849
0.937
11.9559
0.849
0.0294
0.7482
0.3937
0.5028
0.2198
0.1826
0.0071
0.2872
0.5644
0.4502
0.7127
0.1934
0.1797
0.255
0.008
0.1044
0.3092
0.2634
0.1453
0
0
0
0
0.251
0.7209
4.5154
0.7113
0.8639
6.1827
0.6467
0.6857
1.7991
26.6454
8.7404
0.0875
22.3706
Qwen2ForCausalLM
bfloat16
other
7.721
46
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-7B
0.4733
0.1044
0.4359
0.1623
0.511
0.665
0.596
0.7868
0.6364
0.335
0.8859
0.0875
0.2516
0.8337
9.9868
0.8661
0.9418
14.1796
0.8623
0.4359
0.7397
0.4511
0.6667
0.7909
0.4692
0.4459
0.5727
0.7096
0.7818
0.8859
0.7191
0.6025
0.4645
0.596
0.1044
0.3092
0.5761
0.2842
0.0063
0.3007
0.0265
0.0476
0.4302
0.7593
7.5492
0.7303
0.8809
9.2991
0.6885
0.6857
1.7991
26.6454
8.7404
0.0875
22.3706
Qwen2ForCausalLM
bfloat16
other
7.721
46
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-14B
0.379
0.4016
0.1098
0.0529
0.2327
0.6802
0.214
0.7907
0.5347
0.243
0.8109
0.098
0.2195
0.8258
9.662
0.8675
0.9462
14.4563
0.8707
0.1098
0.773
0.4023
0.5528
0.7784
0.2503
0.0274
0.2983
0.7266
0.6937
0.8109
0.6676
0.6242
0.4892
0.214
0.4016
0.9076
0.438
0.2591
0
0
0
0
0.2645
0.7352
5.6415
0.7253
0.8795
8.4139
0.6992
0.6929
2.136
28.3663
9.7907
0.098
23.9066
Qwen2ForCausalLM
bfloat16
other
14.167
36
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-14B
0.5449
0.4016
0.4557
0.1829
0.5875
0.7821
0.698
0.8129
0.701
0.3734
0.9009
0.098
0.3359
0.8468
10.9283
0.8853
0.9477
16.019
0.8717
0.4557
0.8394
0.4971
0.7042
0.8525
0.4275
0.5377
0.8205
0.738
0.7453
0.9009
0.8394
0.8073
0.6544
0.698
0.4016
0.9076
0.6374
0.3568
0.0095
0.2354
0.0885
0.0573
0.5236
0.7832
8.4909
0.7736
0.8924
9.6196
0.721
0.6929
2.136
28.3663
9.7907
0.098
23.9066
Qwen2ForCausalLM
bfloat16
other
14.167
36
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-0.5B
0.1021
0.0141
0
0.021
0.0137
0.1278
0
0.4391
0.0842
0.0952
0.2815
0.0464
0.047
0.6541
2.1699
0.5144
0.7454
0.112
0.4857
0
0
0.1897
0.025
0.1385
0.1576
0
0.0542
0.0013
0.151
0.2815
-0.0088
-0.0065
0.245
0
0.0141
0.1426
0.0274
0.0809
0
0
0
0
0.1052
0.57
1.2143
0.3943
0.7279
0.2155
0.362
0.6259
1.0483
17.5217
4.6302
0.0464
14.0292
Qwen2ForCausalLM
bfloat16
other
0.62
145
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-0.5B
0.2266
0.0141
0.187
0.0245
0.2762
0.3182
0.018
0.5568
0.4567
0.1842
0.411
0.0464
0.0914
0.7163
5.2729
0.5874
0.8863
8.1905
0.696
0.187
0.481
0.408
0.5389
0.2234
0.3342
0.2508
0.5534
0.6212
0.1618
0.411
0
0
0.2501
0.018
0.0141
0.1426
0.3017
0.127
0.0053
0.0111
0
0.0068
0.0994
0.6215
3.4396
0.4542
0.816
5.1922
0.4896
0.6259
1.0483
17.5217
4.6302
0.0464
14.0292
Qwen2ForCausalLM
bfloat16
other
0.62
145
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-1.8B
0.1242
0.0141
0
0.0299
0.0523
0.1541
0
0.5657
0.0883
0.083
0.3281
0.0511
0.0536
0.7288
4.0971
0.6519
0.8198
4.4674
0.6508
0
0.2119
0
0.0778
0.0107
0.1292
0.0003
0
0.3636
0
0.3281
-0.0385
-0.0291
0.2396
0
0.0141
0.1285
0.1043
0.0663
0
0
0
0
0.1497
0.6533
1.8221
0.4888
0.7721
2.4003
0.4713
0.6366
1.4833
17.1396
5.1175
0.0511
14.6158
Qwen2ForCausalLM
bfloat16
other
1.837
43
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen1.5-1.8B
0.2955
0.0141
0.2569
0.0346
0.3669
0.3921
0.152
0.6501
0.4614
0.1737
0.6979
0.0511
0.1283
0.7806
7.2504
0.7447
0.9124
10.9372
0.7826
0.2569
0.516
0.3448
0.5
0.3932
0.2378
0.3437
0.5189
0.6383
0.3051
0.6979
0.0886
0.0845
0.2669
0.152
0.0141
0.1285
0.3902
0.1551
0
0.0257
0.0088
0
0.1386
0.6382
4.304
0.5034
0.8447
6.5702
0.5697
0.6366
1.4833
17.1396
5.1175
0.0511
14.6158
Qwen2ForCausalLM
bfloat16
other
1.837
43
main
4
False
v1.4.1
v0.6.3.post1