Update README.md
README.md CHANGED

```diff
@@ -18,7 +18,7 @@ language:
 pipeline_tag: text-generation
 ---
 
-- ⚠️ A low temperature must be used to ensure it won't fail at reasoning. we use 0.3!
+- ⚠️ A low temperature must be used to ensure it won't fail at reasoning. we use 0.3 - 0.8!
 - this is out flagship model, with top-tier reasoning, rivaling gemini-flash-exp-2.0-thinking and o1 mini. results are overall similar to both of them, we are not comparing to qwq as it has much longer results which waste tokens.
 
 
@@ -34,8 +34,12 @@ the model uses this prompt: (modified phi-4 prompt)
 
 # Examples:
 (q4_k_m, 10GB rtx 3080, 64GB memory, running inside of MSTY, all use "You are a friendly ai assistant." as the System prompt.)
-
-![
+**example 1:**
+![example1part1.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/Dcd6-wbpDQuXoulHaqATo.png)
+![example1part2.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/CoBYmYiRt9Z4IDFoOwHxc.png)
+
+**example 2:**
+![example2](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/c4h-nw0DPTrQgX-_tvBoT.png)
 
 # Uploaded model
 
```
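The diff recommends a sampling temperature of 0.3 - 0.8 and a "You are a friendly ai assistant." system prompt. A minimal sketch of applying those settings when calling the model through any OpenAI-compatible chat endpoint (e.g. a llama.cpp or Ollama server; MSTY itself is a GUI). The model name here is a placeholder, not something stated in the README.

```python
# Hedged sketch: build an OpenAI-compatible chat-completion payload
# using the temperature range and system prompt recommended in the README.
# "flagship-model-q4_k_m" is a placeholder model name (an assumption).

def build_chat_request(user_msg: str, temperature: float = 0.3) -> dict:
    """Build a chat request dict following the README's recommended settings."""
    # The README warns that reasoning may fail outside the 0.3 - 0.8 range.
    if not 0.3 <= temperature <= 0.8:
        raise ValueError("README recommends a temperature between 0.3 and 0.8")
    return {
        "model": "flagship-model-q4_k_m",  # placeholder name
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a friendly ai assistant."},
            {"role": "user", "content": user_msg},
        ],
    }
```

The resulting dict can be POSTed to a local `/v1/chat/completions` endpoint with any HTTP client; the guard simply makes the README's temperature advice explicit in code.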