Update README.md
README.md CHANGED
@@ -5,14 +5,12 @@ datasets:
 inference: false
 ---
 
-# WizardLM
-
-These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored'
+# WizardLM 30B Uncensored
+
+These files are GPTQ 4bit model files for [Eric Hartford's WizardLM 30B 'uncensored'](https://huggingface.co/ehartford/WizardLM-30B-Uncensored).
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
-Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to remove all the "I'm sorry.." type ChatGPT responses](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).
-
 ## Other repositories available
 
 * [4bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/WizardLM-30B-Uncensored-GPTQ)
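The README says these files were produced by quantising the weights to 4bit with GPTQ-for-LLaMa. As a rough illustration of what 4-bit weight quantisation means for storage, here is a minimal round-to-nearest sketch in plain Python. The function names are hypothetical, and this is only the storage idea: the actual GPTQ algorithm chooses roundings using second-order (Hessian) information, which this sketch omits.

```python
# Hypothetical sketch of 4-bit round-to-nearest weight quantisation.
# Not the GPTQ algorithm itself -- just the code/scale/zero-point format.

def quantize_4bit(weights):
    """Map float weights to 4-bit integer codes (0..15) plus scale and zero-point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 15.0  # 16 levels -> 15 quantisation steps
    codes = [round((w - w_min) / scale) for w in weights]
    return codes, scale, w_min

def dequantize_4bit(codes, scale, w_min):
    """Reconstruct approximate float weights from the 4-bit codes."""
    return [c * scale + w_min for c in codes]

weights = [-0.8, -0.1, 0.0, 0.3, 0.75]
codes, scale, zero = quantize_4bit(weights)
recovered = dequantize_4bit(codes, scale, zero)
# Each code fits in 4 bits; round-to-nearest keeps the per-weight
# reconstruction error within scale / 2.
```

Storing one 4-bit code per weight (plus a shared scale and zero-point) is what shrinks a 30B-parameter model to roughly a quarter of its fp16 size, at the cost of the small reconstruction error shown above.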