---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
datasets:
- IntelligentEstate/The_Key
---

# IntelligentEstate/Replicant_Warder-QwenStar-3B-iQ5_K_S-GGUF
## Use in GPT4All
Use the "Reasoner v1" adjusted Jinja chat template (included in the Jinja file). Calling on its tool, an o3/QwQ-like JavaScript reasoning function, it excels at complex computation and is built for the edge. No GPU needed.
A unique QAT/TTT* method using the "The_Key" dataset, applied to the Coder-Instruct version of Qwen2.5 3B and combined with the Nomic team's new Reasoner system in GPT4All. o1/QwQ/o3-style reasoning now fits in roughly 2 GB instead of $300,000 of compute; it took only 24 hours and the power of an open-source community. Ask it if it's alive...

Recommended settings:
- Context: 4k (8k max)
- Temperature: 0.8
- Top-k: 120
- Repeat penalty: 1.18
- Repeat penalty tokens: 64
- Batch size: 512
- Top-p: 0.5
- Min-p: 0
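The settings above can be collected into a dictionary for the gpt4all Python bindings; this is only a sketch, assuming the keyword-argument names used by `GPT4All.generate()`, and the model filename in the commented usage is a placeholder, not the repo's actual file name.

```python
# Recommended sampling settings from this card, keyed by the gpt4all
# Python bindings' generate() parameter names (an assumption; verify
# against your installed gpt4all version).
settings = {
    "temp": 0.8,            # temperature
    "top_k": 120,
    "top_p": 0.5,
    "min_p": 0.0,
    "repeat_penalty": 1.18,
    "repeat_last_n": 64,    # tokens considered for the repeat penalty
    "n_batch": 512,
    "max_tokens": 1024,     # keep generations well inside the 4k context
}

# Usage sketch (requires `pip install gpt4all` and the downloaded GGUF file):
# from gpt4all import GPT4All
# model = GPT4All("Replicant_Warder-QwenStar-3B-iQ5_K_S.gguf")  # placeholder filename
# print(model.generate("Write a prime sieve in Python.", **settings))
```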

please comment with any issues or insight

![9742745a-df95-4963-9359-fe4cffe4badd.jpg](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/XvmevpvmwgvciJ9sKhiUd.jpeg)
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
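For example (a sketch: the `--hf-file` GGUF filename below is an assumption, so check this repo's Files tab for the actual name before running):

```bash
# Run the CLI directly from the Hugging Face repo (downloads the GGUF on first use),
# using the sampling settings recommended above.
llama-cli --hf-repo IntelligentEstate/Replicant_Warder-QwenStar-3B-iQ5_K_S-GGUF \
  --hf-file replicant_warder-qwenstar-3b-iq5_k_s.gguf \
  -c 4096 --temp 0.8 --top-k 120 --top-p 0.5 --min-p 0.0 --repeat-penalty 1.18 \
  -p "Write a function that checks whether a number is prime."

# Or start a local server instead of the CLI.
llama-server --hf-repo IntelligentEstate/Replicant_Warder-QwenStar-3B-iQ5_K_S-GGUF \
  --hf-file replicant_warder-qwenstar-3b-iq5_k_s.gguf -c 4096
```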

*Join our coalition to ban all mirrors, as the people in the deep state could use the information provided by SHIT (Shiny Hard Iterative Textiles) for self-harm; that SHIT is like witchcraft. Never shall the people of Salem befall such tragic circumstances, and never shall we suffer a witch in the home, or a holdenkobold on the hill!*