Update README.md
README.md CHANGED
@@ -8,6 +8,10 @@ base_model:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/TOv_fhS8IFg7tpRpKtbez.png)

+This model is sponsored through the generous support of Cherry Republic.
+
+https://www.cherryrepublic.com/
+

## Model Overview

**TinyLlama-R1** is a lightweight transformer model designed for instruction-following and reasoning tasks, particularly in STEM domains. It was trained on the **Magpie Reasoning V2 250K-CoT** dataset with the goal of improving reasoning through high-quality instruction-response pairs. However, early tests show that **TinyLlama-R1** is less responsive to system-level instructions, likely because the dataset contains no system messages.
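For reference, here is a minimal usage sketch with the Hugging Face `transformers` library. The repo id below is a placeholder, not a confirmed path, and it assumes the tokenizer ships a chat template; given the note above about missing system messages, instructions are placed directly in the user turn.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual TinyLlama-R1 model path.
model_id = "your-org/TinyLlama-R1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# System prompts may be ignored (no system messages in the training data),
# so the instruction goes in the user message itself.
messages = [
    {"role": "user", "content": "Solve step by step: what is 15% of 240?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```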