# 🔎Taiwan-inquiry_7B_v2.1.gguf
- Model creator: [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/)
- Original model: [Taiwan-inquiry_7B_v2.1](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1)
| Name | Quant method | Bits | Size | Use case |
| ---- | :----: | :----: | :----: | ----- |
| [Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf) | Q4_K_M | 4 | 4.54 GB | medium, balanced quality - recommended |
| [Taiwan-inquiry_7B_v2.1-Q6_K.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q6_K.gguf)| Q6_K | 6 | 6.14 GB| very large, extremely low quality loss |
| [Taiwan-inquiry_7B_v2.1-Q8_0.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q8_0.gguf) | Q8_0 | 8 | 7.96 GB | very large, extremely low quality loss - not recommended |
| [Taiwan-inquiry_7B_v2.1.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1.gguf) | No quantization | 16 or 32 | 15 GB | very large, no quality loss - not recommended |
## Usage of the model
- You can take on the role of a doctor, and the model will converse with you as if it were a patient.
- You can provide the model with a brief patient background in the system prompt, and it will respond based on that background. *(generated with my patient generator: [Colab](https://colab.research.google.com/drive/17MSob_tQ2hPtMBL0xOF2zzV6WWe4dEG6?usp=sharing))*
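
The doctor–patient flow above can be sketched with the `llama-cpp-python` bindings (an assumption — any GGUF-capable runtime works). The model path below refers to the Q4_K_M file from the table; the patient background and question are illustrative placeholders:

```python
# Sketch: chatting with the model as a simulated patient via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and the Q4_K_M file from the table
# above downloaded to the current directory.

def build_messages(patient_background: str, doctor_turn: str) -> list[dict]:
    """Assemble an OpenAI-style chat: the patient background goes in the
    system prompt, the doctor's question is the user turn."""
    return [
        {"role": "system", "content": patient_background},
        {"role": "user", "content": doctor_turn},
    ]

def chat_once(model_path: str, patient_background: str, doctor_turn: str) -> str:
    """Load the GGUF file and get one patient reply (not run here;
    requires the model file and the llama-cpp-python package)."""
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=4096)
    reply = llm.create_chat_completion(
        messages=build_messages(patient_background, doctor_turn)
    )
    return reply["choices"][0]["message"]["content"]

messages = build_messages(
    "You are a 45-year-old patient with three days of fever and cough.",
    "What brings you in today?",
)
```

Calling `chat_once("Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf", ...)` then yields one in-character patient response per doctor turn.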