![die_Walkure-totheVictor.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/LbKT8JHfHNRUw8CN3-Wga.png)

This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) using importance matrix (imatrix) quantization with llama.cpp, with a calibration dataset updated for tool/function use to improve inference quality.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more details on the model.
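For context, a conversion like this is typically produced with llama.cpp's imatrix tooling. The sketch below shows the usual three steps; the file names, calibration dataset, and Q4_K_M target are illustrative assumptions, not a record of exactly what was run for this repo.

```
# Sketch of an imatrix quantization pipeline using llama.cpp's stock tools.
# File names, the calibration file, and the Q4_K_M target are illustrative.

# 1. Convert the HF checkpoint to a full-precision GGUF.
python convert_hf_to_gguf.py DeepSeek-R1-Distill-Qwen-7B \
  --outfile DeepSeek-R1-Distill-Qwen-7B-f16.gguf

# 2. Compute the importance matrix over a calibration set
#    (here: a hypothetical tool/function-calling text file).
llama-imatrix -m DeepSeek-R1-Distill-Qwen-7B-f16.gguf \
  -f calibration-tool-use.txt -o imatrix.dat

# 3. Quantize, weighting by the importance matrix.
llama-quantize --imatrix imatrix.dat \
  DeepSeek-R1-Distill-Qwen-7B-f16.gguf \
  DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf Q4_K_M
```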
## For use with GPT4All

A Jinja system prompt template for tool/function calling:
```
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
{{info.name}}:
type: {{info.type}}
description: {{info.description}}
required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful AI assistant who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
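To apply it, paste the template into the model's system prompt / chat template field in GPT4All's per-model settings (the exact field name varies by GPT4All version). Note that `toolList` and the per-tool fields (`tool.function`, `tool.symbolicFormat`, and so on) are variables this template expects the host application to supply, so it will not render unmodified in runtimes that do not provide them.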
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
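A minimal sketch of the standard flow, assuming Homebrew and llama.cpp's stock CLI tools; the `<user>/...` repo path and file name are placeholders for this repo's actual GGUF files:

```
brew install llama.cpp

# Run the CLI against a GGUF pulled straight from the Hub.
# Replace <user>/DeepSeek-R1-Distill-Qwen-7B-GGUF and the file name
# with this repo's path and the quant you want.
llama-cli --hf-repo <user>/DeepSeek-R1-Distill-Qwen-7B-GGUF \
  --hf-file deepseek-r1-distill-qwen-7b-q4_k_m.gguf \
  -p "Why is the sky blue?"

# Or serve an OpenAI-compatible endpoint instead.
llama-server --hf-repo <user>/DeepSeek-R1-Distill-Qwen-7B-GGUF \
  --hf-file deepseek-r1-distill-qwen-7b-q4_k_m.gguf -c 2048
```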