Update README.md
README.md
pipeline_tag: text-generation
---
- ⚠️ A low temperature must be used to ensure it won't fail at reasoning; we use 0.3!
- This is our flagship model, with top-tier reasoning, rivaling gemini-flash-exp-2.0-thinking and o1-mini. Results are overall similar to both of them; we are not comparing to QwQ, as its much longer outputs waste tokens.
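Since the low temperature matters, it helps to pin it explicitly in the request rather than rely on client defaults. Below is a minimal sketch of an Ollama `/api/generate` request body with temperature 0.3; `MODEL_NAME` is a hypothetical placeholder, not the model's actual tag.

```python
import json

# Sketch: an Ollama /api/generate request body that pins temperature to 0.3,
# per the recommendation above. "MODEL_NAME" is a placeholder -- substitute
# the model's actual tag before sending.
payload = {
    "model": "MODEL_NAME",            # placeholder, not the real tag
    "prompt": "Why is the sky blue?",
    "options": {"temperature": 0.3},  # low temperature for stable reasoning
    "stream": False,
}
body = json.dumps(payload)
print(body)
```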

The model uses this prompt template (a modified phi-4 prompt):

```
{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>
```
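As a rough illustration of how the template expands, here is a small Python stand-in. The real rendering is done by Ollama's Go template engine; this function only mirrors the conditional structure and special tokens, and the `<|user|>` block is assumed from standard Ollama template conventions.

```python
# Python stand-in for the Go template above, for illustration only:
# it mirrors the {{ if .System }} / {{ if .Prompt }} conditionals and the
# special tokens, but Ollama itself renders the real template.

def render_prompt(system: str, prompt: str, cot: str, response: str) -> str:
    out = ""
    if system:
        out += f"<|system|>\n{system}<|im_end|>\n"
    if prompt:  # <|user|> block assumed from standard Ollama templates
        out += f"<|user|>\n{prompt}<|im_end|>\n"
    out += f"<|assistant|>{cot}<|CoT|>\n{response}<|FinalAnswer|><|im_end|>"
    return out

example = render_prompt(
    "You are a friendly ai assistant.",
    "What is 2+2?",
    "The user asks for a simple sum: 2+2 is 4.",
    "2+2 = 4.",
)
print(example)
```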

# Examples:
(q4_k_m, 10 GB RTX 3080, 64 GB memory, running inside MSTY; all examples use "You are a friendly ai assistant." as the system prompt.)

![example1.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/j-Q2djj102JVg0CQPAjZY.png)

![example2.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/LoSj_N2kSzRyABJQP5jP9.png)

# Uploaded model

- **Developed by:** Pinkstack