---
license: apache-2.0
---
# 🔎Taiwan-inquiry_7B_v2.1.gguf
- Model creator: [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/)
- Original model: [Taiwan-inquiry_7B_v2.1](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1)

| Name | Quant method | Bits | Size | Use case |
| ---- | :----: | :----: | :----: | ----- |
| [Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q4_K_M.gguf) | Q4_K_M | 4 | 4.54 GB | medium, balanced quality - recommended |
| [Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q5_K_M.gguf) | Q5_K_M | 5 | 5.32 GB | large, very low quality loss - recommended |
| [Taiwan-inquiry_7B_v2.1-Q6_K.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q6_K.gguf)| Q6_K | 6 | 6.14 GB| very large, extremely low quality loss |
| [Taiwan-inquiry_7B_v2.1-Q8_0.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1-Q8_0.gguf) | Q8_0 | 8 | 7.96 GB | very large, extremely low quality loss - not recommended |
| [Taiwan-inquiry_7B_v2.1.gguf](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1.gguf/blob/main/Taiwan-inquiry_7B_v2.1.gguf) | No quantization | 16 | 15 GB | very large, no quality loss - not recommended |
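The file sizes above roughly track bits-per-weight times parameter count. As a sanity check when choosing a quant for your hardware, the sketch below estimates file size for a ~7.2B-parameter model; the `overhead` fudge factor is an assumption (K-quants mix bit widths and GGUF files carry metadata), not a GGUF specification value.

```python
def approx_gguf_size_gb(n_params_billion: float, bits_per_weight: float,
                        overhead: float = 1.1) -> float:
    """Rough size estimate for a quantized GGUF file.

    `overhead` is a loose fudge factor for metadata and the
    higher-precision tensors kept by K-quants (an assumption).
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A ~7.2B-parameter model at roughly 4.85 effective bits/weight
# (about Q4_K_M) lands in the same ballpark as the table's 4.54 GB.
print(round(approx_gguf_size_gb(7.2, 4.85), 2))
```

Use this only as a ballpark; the exact sizes in the table above are authoritative.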
## Usage of the model
- You can take on the role of a doctor, and the model will converse with you as if it were a patient.
- You can provide the model with a brief patient background in the system prompt, and the model will respond based on that prompt. **(generated with my patient generator: [**colab**](https://colab.research.google.com/drive/17MSob_tQ2hPtMBL0xOF2zzV6WWe4dEG6?usp=sharing))**
- You can also ask directly about the symptoms of a particular disease and its possible therapies. **(Warning: this is not medical advice!)**
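As a concrete illustration of the doctor/patient setup above, the sketch below assembles a single-turn prompt with a patient background as the system message. The `[INST]` layout is a Mistral-style template assumption common for 7B chat models; verify it against this model's own chat template (e.g. as shown in LM Studio) before relying on it, and the example background text is hypothetical.

```python
def build_patient_prompt(background: str, doctor_line: str) -> str:
    """Assemble a single-turn prompt for the GGUF model.

    The [INST] wrapping below is a Mistral-style template assumption,
    not confirmed for this model -- check its chat template first.
    """
    system = (
        "You are a patient with the following background. "
        "Answer the doctor's questions in character.\n" + background
    )
    return f"<s>[INST] {system}\n\n{doctor_line} [/INST]"

# Hypothetical example: you play the doctor, the model plays the patient.
prompt = build_patient_prompt(
    "55-year-old male, two days of chest tightness on exertion.",
    "What brings you in today?",
)
print(prompt)
```

The resulting string can then be passed to a GGUF runtime such as llama.cpp or loaded through LM Studio, both linked in the references below.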
## Reference
- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [LM studio](https://lmstudio.ai/)
- [Converting a HuggingFace model to GGUF and quantizing it with llama.cpp, using INX-TEXT/Bailong-instruct-7B as an example](https://medium.com/@NeroHin/%E5%B0%87-huggingface-%E6%A0%BC%E5%BC%8F%E6%A8%A1%E5%BC%8F%E8%BD%89%E6%8F%9B%E7%82%BA-gguf-%E4%BB%A5inx-text-bailong-instruct-7b-%E7%82%BA%E4%BE%8B-a2cfdd892cbc)
- [[LM Studio] The best interface for running language models: usable with no special setup, convenient management of multiple models, and quick setup of an OpenAI-compatible server](https://the-walking-fish.com/p/lmstudio/)
- [[Day 15] - Iron alpaca 🦙 LLM chatbot 🤖 (6/10) | GGML quantization of LLaMa](https://ithelp.ithome.com.tw/articles/10331431)