lunahr committed
Commit 3eb9d26 · verified · 1 parent: 1962734

new version (?)

Files changed (1): README.md (+14 −7)
README.md CHANGED
@@ -30,7 +30,8 @@ model-index:
       value: 73.44
       name: strict accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -45,7 +46,8 @@ model-index:
       value: 22.55
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -60,7 +62,8 @@ model-index:
       value: 16.31
       name: exact match
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -75,7 +78,8 @@ model-index:
       value: 2.35
       name: acc_norm
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -90,7 +94,8 @@ model-index:
       value: 3.57
       name: acc_norm
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -107,8 +112,10 @@ model-index:
       value: 24.25
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
       name: Open LLM Leaderboard
+new_version: lunahr/thea-3b-50r-u1
 ---
 
 # Model Description
@@ -158,4 +165,4 @@ print("ANSWER: " + response_output)
 
 This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4).
 
-Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
+Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
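Aside from the `new_version` pointer, the `url` edits in this commit are purely notational: YAML's `>-` folded block scalar joins its indented continuation lines back into a single string and strips the trailing newline, so both forms parse to the identical URL. A minimal before/after sketch (indentation here is illustrative, not copied from the source file):

```yaml
# before: one long plain scalar on a single line
source:
  url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r

# after: folded block scalar (">" folds lines, "-" chomps the final newline);
# a YAML parser yields the same single-line string as above
source:
  url: >-
    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lunahr/thea-3b-25r
```

The folded form keeps the README's frontmatter lines under a readable width without changing the parsed metadata.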