ChenWeiLi committed on
Commit cee8323
1 Parent(s): 25e03f0

Update README.md

Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -5,13 +5,15 @@ license: apache-2.0
 - Model creator: [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/)
 - Original model: [Taiwan-inquiry_7B_2.1](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1)
 
-| Name | Quant method | Bits | Size | Use case |
-| ---- | :----: | :----: | :----: | ----- |
-| [Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf) | Q4_K_M | 4 | 4.54 GB | medium, balanced quality - recommended |
-| [Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf) | Q5_K_M | 5 | 5.32 GB | large, very low quality loss - recommended |
-| [Taiwan-inquiry_7B_v2.1-Q6_K.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q6_K.gguf) | Q6_K | 6 | 6.14 GB | very large, extremely low quality loss |
-| [Taiwan-inquiry_7B_v2.1-Q8_0.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q8_0.gguf) | Q8_0 | 8 | 7.96 GB | very large, extremely low quality loss - not recommended |
-| [Taiwan-inquiry_7B_v2.1.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1.gguf) | No quantization | 16 or 32 | 15 GB | very large, no quality loss - not recommended |
+The model was fine-tuned from the **Breeze-7B-Instruct-v1_0** base model on a dataset that includes 614 authentic dialogues from National Cheng Kung University Hospital.
+An additional 336 synthetic dialogues were included in the training set, carefully crafted around themes drawn from the sample questions of Taiwan's OSCE (臨床技能測驗, Objective Structured Clinical Examination).
+These synthetic dialogues were generated with GPT-3.5, Gemini-Pro, and Breexe-8x7B-Instruct-v0_1.
+The training process was geared toward simulating verbal exchanges between doctors and patients in a hospital environment.
+
+### Usage of the model
+- The user can take on the role of a doctor, and the model will converse with you as if it were a patient.
+- You can provide a brief patient background in the system prompt, and the model will respond based on it **(using my patient generator: [colab](https://colab.research.google.com/drive/17MSob_tQ2hPtMBL0xOF2zzV6WWe4dEG6?usp=sharing))**.
+- You can directly ask about a given disease's symptoms and possible therapies. **(Warning: this is not medical advice!)**
 
 ## Reference
 - [llama.cpp](https://github.com/ggerganov/llama.cpp)
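
The usage notes in the added README text can be sketched in code. The helper below renders a patient background plus a doctor-patient conversation into a Mistral-style `[INST]` prompt, which I assume the model inherits from its Breeze-7B base; verify this against the chat template embedded in the GGUF before relying on the exact layout. The commented-out `llama-cpp-python` call and the local file name are likewise assumptions, not part of this repo.

```python
def build_prompt(system, turns, next_user):
    """Render a doctor-patient chat as a Mistral-style prompt string.

    `system` is the patient background for the system prompt, `turns` is a
    list of (doctor_utterance, patient_reply) pairs, and `next_user` is the
    doctor's next question. The [INST] layout is an assumption based on the
    Breeze-7B base model, not verified against this GGUF.
    """
    prompt = f"<s>{system}"
    for doctor, patient in turns:
        prompt += f" [INST] {doctor} [/INST] {patient}"
    prompt += f" [INST] {next_user} [/INST]"
    return prompt


# Example: the user plays the doctor, the model plays the patient.
background = "You are a patient: a 45-year-old male with intermittent chest pain."
prompt = build_prompt(background, [], "Hello, what brings you in today?")

# To actually run the quantized model (assumed local file name):
# from llama_cpp import Llama
# llm = Llama(model_path="Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf", n_ctx=4096)
# reply = llm(prompt, max_tokens=256, stop=["[INST]"])
```

The same helper extends a running consultation: append each (question, reply) pair to `turns` and call it again with the next question.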