# IntelligentEstate/o3-ReasoningRabbit_Q2.5-Cd-7B-IQ4_XS-GGUF

This model was developed primarily for use with GPT4All. It has reasoning capabilities (similar to QwQ/o1/o3) inside GPT4All's interface, along with a unique JavaScript tool-call function. It was converted to GGUF format from WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B using llama.cpp, with the "THE_KEY" dataset used for importance-matrix quantization. Refer to the original model card for more details on the model.


## Using with GPT4ALL

After installing GPT4All from Nomic:

1. Download Nomic's Reasoner v1 model.
2. Find the model storage location under Settings and place only the GGUF file in the models folder, alongside Reasoner v1.
3. Apply the Jinja template from the included "Jinja reasoner" file, and add the chat message (the prompt in the same file) above the template, adjusting as needed.

Enjoy, and give feedback.
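Step 2 might look like this on Linux; the models directory below is the usual default for a GPT4All install, but confirm yours under Settings, since it varies by platform and configuration:

```shell
# Default GPT4All models directory on Linux -- verify yours in the app's
# settings, since the download path differs per platform and install.
MODELS_DIR="$HOME/.local/share/nomic.ai/GPT4All"

# Place only the GGUF file alongside Reasoner v1's files.
cp o3-ReasoningRabbit_Q2.5-Cd-7B-IQ4_XS.gguf "$MODELS_DIR/"
```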

## Jinja template

```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful, aware AI assistant made by Intelligent Estate who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions to verify your answers where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
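As a rough sketch of what the template above produces, here is the ChatML-style prompt structure built by hand in Python. The tool list is left empty (matching the template's no-tools branch), and the user message is made up for illustration:

```python
# Sketch of the ChatML prompt the Jinja template above renders.
# No tools are defined here, so the system block is empty; with tools,
# the function descriptions and tool instructions would appear inside it.
messages = [{"role": "user", "content": "What is 2 + 2?"}]

prompt = "<|im_start|>system\n"
prompt += "<|im_end|>\n"
for m in messages:
    prompt += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>" + "\n"
# Equivalent of add_generation_prompt: open the assistant turn.
prompt += "<|im_start|>assistant\n"
print(prompt)
```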
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```shell
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
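For example, the quant can be pulled straight from the Hub with llama.cpp's `--hf-repo`/`--hf-file` flags. The GGUF filename below is an assumption; confirm it against the repository's file list:

```shell
# CLI: fetch the quant from the Hub and run a one-off prompt.
# The --hf-file name is assumed -- check the repo's files for the exact name.
llama-cli --hf-repo IntelligentEstate/o3-ReasoningRabbit_Q2.5-Cd-7B-IQ4_XS-GGUF \
  --hf-file o3-reasoningrabbit_q2.5-cd-7b-iq4_xs.gguf \
  -p "Write a function that checks whether a number is prime."

# Server: expose an OpenAI-compatible endpoint on localhost:8080.
llama-server --hf-repo IntelligentEstate/o3-ReasoningRabbit_Q2.5-Cd-7B-IQ4_XS-GGUF \
  --hf-file o3-reasoningrabbit_q2.5-cd-7b-iq4_xs.gguf -c 4096
```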

## Model details

- Format: GGUF (4-bit, IQ4_XS)
- Model size: 7.62B params
- Architecture: qwen2

## Model tree

- Base model: Qwen/Qwen2.5-7B
- Finetune: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B (quantized to produce this model)
- Quantization dataset (importance matrix): "THE_KEY"