Triangle104 committed (verified)
Commit 9325b53 · Parent(s): 39488c7

Update README.md

Files changed (1): README.md (+51 -1)
README.md CHANGED

@@ -6,12 +6,62 @@ tags:
 - merge
 - llama-cpp
 - gguf-my-repo
+ license: mit
 ---
 
 # Triangle104/Phi-4-RP-V0.2-Q4_K_M-GGUF
 This model was converted to GGUF format from [`bunnycore/Phi-4-RP-V0.2`](https://huggingface.co/bunnycore/Phi-4-RP-V0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/bunnycore/Phi-4-RP-V0.2) for more details on the model.
 
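Not part of the original card: a minimal sketch for pulling just the quantized file referenced in this repo with the `huggingface-cli` tool (the `./models` target directory is illustrative).

```
# Download only the Q4_K_M GGUF file from the Hub (requires the huggingface_hub CLI)
huggingface-cli download Triangle104/Phi-4-RP-V0.2-Q4_K_M-GGUF phi-4-rp-v0.2-q4_k_m.gguf --local-dir ./models
```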
+ ---
+ Model details:
+ -
+ Phi-4-RP-V0.2 is based on the Phi-4 architecture, which is a state-of-the-art large language model designed to handle a wide range of natural language tasks with high efficiency and performance.
+
+ Primary Use Cases
+
+ Interactive Storytelling: Engage users in dynamic, immersive stories where they can take on different roles and make choices that influence the narrative.
+ Role-Playing Games (RPGs): Provide rich, interactive experiences in RPGs, enhancing gameplay through intelligent character interactions.
+ Virtual Assistants: Offer personalized, engaging conversations that simulate human-like interactions for customer support or entertainment purposes.
+
+ Training Data
+
+ Phi-4-RP-V0.2 is specifically trained on role-playing datasets to ensure comprehensive understanding and versatility in various role-playing contexts. This includes but is not limited to:
+
+ Role-playing game scripts and narratives.
+ Interactive storytelling scenarios.
+ Character dialogues and interactions from diverse fictional settings.
+
+ Input Formats
+
+ Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:
+
+ <|im_start|>system<|im_sep|>
+ You are a medieval knight and must provide explanations to modern people.<|im_end|>
+ <|im_start|>user<|im_sep|>
+ How should I explain the Internet?<|im_end|>
+ <|im_start|>assistant<|im_sep|>
+
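Not from the original card: a minimal sketch of sending a prompt in exactly this format to a locally running llama-server (started as in the "Use with llama.cpp" section below; 8080 is the server's default port, and the whitespace around the tags simply mirrors the example above).

```
# POST a raw Phi-4-formatted prompt to llama-server's /completion endpoint
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<|im_start|>system<|im_sep|>\nYou are a medieval knight and must provide explanations to modern people.<|im_end|>\n<|im_start|>user<|im_sep|>\nHow should I explain the Internet?<|im_end|>\n<|im_start|>assistant<|im_sep|>",
    "n_predict": 256
  }'
```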
+ Merge Method
+
+ This model was merged using the passthrough merge method, with unsloth/phi-4 + bunnycore/Phi-4-rp-v1-lora as the base.
+
+ Models Merged
+
+ The following models were included in the merge:
+
+ Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ base_model: unsloth/phi-4+bunnycore/Phi-4-rp-v1-lora
+ dtype: bfloat16
+ merge_method: passthrough
+ models:
+   - model: unsloth/phi-4+bunnycore/Phi-4-rp-v1-lora
+ tokenizer_source: unsloth/phi-4
+
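Not part of the original card: a rough reproduction sketch, assuming mergekit (https://github.com/arcee-ai/mergekit) is installed and the YAML above has been saved as phi4-rp.yaml (a hypothetical filename; the output directory is likewise illustrative).

```
# Run the passthrough merge described by the config above
pip install mergekit
mergekit-yaml phi4-rp.yaml ./Phi-4-RP-V0.2-merged
```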
+ ---
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
@@ -50,4 +100,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Triangle104/Phi-4-RP-V0.2-Q4_K_M-GGUF --hf-file phi-4-rp-v0.2-q4_k_m.gguf -c 2048
- ```
+ ```
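Once llama-server is running (it listens on http://localhost:8080 by default), the model can also be queried through the server's OpenAI-compatible endpoint instead of hand-building the chat template; this sketch is not from the original card, and the messages simply reuse the example above.

```
# Query the running llama-server via its OpenAI-compatible chat endpoint
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
      {"role": "user", "content": "How should I explain the Internet?"}
    ]
  }'
```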