fuzzy-mittenz committed
Commit c8fda6c · verified · 1 Parent(s): 62130cb

Update README.md



Files changed (1)
  1. README.md +10 -34
README.md CHANGED
@@ -10,11 +10,18 @@ language:
  base_model: AXCXEPT/phi-4-open-R1-Distill-EZOv1
  tags:
  - llama-cpp
- - gguf-my-repo
  ---
 
- # fuzzy-mittenz/phi-4-open-R1-Distill-EZOv1-Q4_K_M-GGUF
- This model was converted to GGUF format from [`AXCXEPT/phi-4-open-R1-Distill-EZOv1`](https://huggingface.co/AXCXEPT/phi-4-open-R1-Distill-EZOv1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ # IntelligentEstate/The_Hooch-phi-4-R1-Q4_K_M-GGUF
+
+ Another GREAT base model for your local Intelligent Estate/Enterprise server.
+ This is an initial test; please respond with suggestions.
+
+ ![hooch1.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/6zCYoVnNvTCi7BnyPbP8J.png)
+
+ A phi-4 R1-style distillation by AXCXEPT. We call this guy "The Hooch" because of the fine and tempered distillation.
+
+ This model was converted to GGUF format from [`AXCXEPT/phi-4-open-R1-Distill-EZOv1`](https://huggingface.co/AXCXEPT/phi-4-open-R1-Distill-EZOv1) using llama.cpp.
  Refer to the [original model card](https://huggingface.co/AXCXEPT/phi-4-open-R1-Distill-EZOv1) for more details on the model.
 
  ## Use with llama.cpp
@@ -25,34 +32,3 @@ brew install llama.cpp
 
  ```
  Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo fuzzy-mittenz/phi-4-open-R1-Distill-EZOv1-Q4_K_M-GGUF --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo fuzzy-mittenz/phi-4-open-R1-Distill-EZOv1-Q4_K_M-GGUF --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo fuzzy-mittenz/phi-4-open-R1-Distill-EZOv1-Q4_K_M-GGUF --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo fuzzy-mittenz/phi-4-open-R1-Distill-EZOv1-Q4_K_M-GGUF --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf -c 2048
- ```
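The added README line says only that the model was converted to GGUF format using llama.cpp. For anyone wanting to reproduce a Q4_K_M conversion themselves, here is a minimal sketch using llama.cpp's own conversion script and quantize tool; the local checkpoint directory and output file names are illustrative assumptions, and `llama-quantize` must be built first (the build step mirrors the instructions removed by this diff):

```bash
# Clone and build llama.cpp; the HF->GGUF conversion script lives in the repo root
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make

# Convert the original checkpoint (assumed downloaded to ./phi-4-open-R1-Distill-EZOv1)
# to an FP16 GGUF file
python convert_hf_to_gguf.py ./phi-4-open-R1-Distill-EZOv1 \
  --outtype f16 --outfile phi-4-open-r1-distill-ezov1-f16.gguf

# Quantize down to Q4_K_M, the quant type named in this repo's title
./llama-quantize phi-4-open-r1-distill-ezov1-f16.gguf \
  phi-4-open-r1-distill-ezov1-q4_k_m.gguf Q4_K_M
```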
 
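The trimmed README now ends at "Invoke the llama.cpp server or the CLI." without showing the invocations themselves. As a minimal sketch reusing the flags from the removed section: the repo ID below is taken from the new README title, and the `.gguf` file name is assumed unchanged from before the rename, so verify both against the repo's actual file listing on the Hub:

```bash
# One-shot CLI generation (repo ID from the new title; file name assumed
# carried over from the pre-rename repo -- check the Hub file listing)
llama-cli --hf-repo IntelligentEstate/The_Hooch-phi-4-R1-Q4_K_M-GGUF \
  --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or start an HTTP server with a 2048-token context window
llama-server --hf-repo IntelligentEstate/The_Hooch-phi-4-R1-Q4_K_M-GGUF \
  --hf-file phi-4-open-r1-distill-ezov1-q4_k_m.gguf -c 2048
```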