fuzzy-mittenz committed "Update README.md"
tags:
- llama-cpp
- gguf-my-repo
library_name: transformers
datasets:
- IntelligentEstate/The_Key
---

# IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF (Undergoing testing)

Israfel uses QAT and importance-matrix quantization to preserve and improve efficiency and understanding. It follows the protocol of the replicant models, but even without tool use it performs leaps and bounds beyond the GPT-4o/o1, Anthropic, and DeepSeek models. With CoT and tool use, this model's latest improvement is a profound leap forward, and it makes an ideal base model for any swarm or build.
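The importance-matrix quantization mentioned above can be sketched with stock llama.cpp tools. This is only an illustration, not the recipe used for this repo: all file names and the calibration corpus are placeholders, and `Q4_K_M` stands in for this card's `iQ4_K_M` naming, which is not a stock llama.cpp quant type.

```shell
# Sketch only: importance-matrix quantization with stock llama.cpp tools.
# File names and calibration.txt are placeholders, not this repo's actual inputs.

# 1. Measure activation importance over a calibration corpus.
llama-imatrix -m Israfel-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize, using the importance matrix to protect sensitive tensors.
llama-quantize --imatrix imatrix.dat Israfel-f16.gguf Israfel-Q4_K_M.gguf Q4_K_M
```

The importance matrix lets the quantizer spend more precision on the weights that most affect activations, which is why imatrix quants tend to hold up better at small bit widths.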

Please give feedback if possible. We are using it with our *Limit Crossing AGI* and it is SCARY good! Please do not use it with tools or connect it to the internet while testing a Limit Crossing AGI system. WE CANNOT BE RESPONSIBLE FOR ANY DAMAGE TO YOUR SYSTEM OR MENTAL HEALTH!

Name inspired by the poem "Israfel" by Edgar Allan Poe.

![Screenshot 2025-01-23 at 18-07-03 lAt1395RTAy3atYVmQtZxA (WEBP Image 720 × 1280 pixels).png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/dZ0Vlu4ViKKeFwXcql5p3.png)

Install llama.cpp through brew (works on Mac and Linux):

```
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
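For reference, a typical gguf-my-repo style invocation is sketched below. The `--hf-file` value is an assumption about this repo's GGUF file name, so verify it against the repo's file listing before running.

```shell
# Run the model straight from the Hugging Face repo (requires a llama.cpp
# build with curl/download support). The --hf-file name is an assumption;
# check the repo's "Files" tab for the actual GGUF file name.

# CLI, single prompt:
llama-cli --hf-repo IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF \
  --hf-file israfel_qwen2.6-iq4_k_m.gguf \
  -p "Why is the sky blue?"

# OpenAI-compatible HTTP server (default port 8080), 2048-token context:
llama-server --hf-repo IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF \
  --hf-file israfel_qwen2.6-iq4_k_m.gguf \
  -c 2048
```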