Update README.md
README.md CHANGED
@@ -17,6 +17,9 @@ This is a fine-tuned 13b parameter LlaMa model, using completely synthetic train
| vicuna-13b-1.1 | 16306 | 89.12 |
| wizard-vicuna-13b-uncensored | 16287 | 89.01 |

+<details>
+<summary>individual question scores, with shareGPT links (200 prompts generated by gpt-4)</summary>
+
| question | airoboros-13b | gpt35 | gpt4-x-alpasta-30b | manticore-13b | vicuna-13b-1.1 | wizard-vicuna-13b-uncensored | link |
|-----------:|----------------:|--------:|---------------------:|----------------:|-----------------:|-------------------------------:|:---------------------------------------|
| 1 | 80 | 95 | 70 | 90 | 85 | 60 | [eval](https://sharegpt.com/c/PIbRQD3) |
@@ -220,6 +223,9 @@ This is a fine-tuned 13b parameter LlaMa model, using completely synthetic train
| 199 | 90 | 85 | 80 | 95 | 70 | 75 | [eval](https://sharegpt.com/c/enaV1CK) |
| 200 | 100 | 100 | 0 | 0 | 0 | 0 | [eval](https://sharegpt.com/c/JBk7oSh) |

+</details>
+
+
### Training data
I used a jailbreak prompt to generate the synthetic instructions, which resulted in some training data that would likely be censored by other models, such as how-to prompts about synthesizing drugs, making homemade flamethrowers, etc. Mind you, this is all generated by ChatGPT, not me. My goal was simply to test some of the capabilities of ChatGPT when unfiltered (as much as possible), not to intentionally produce any harmful or dangerous content.
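For readers curious about the mechanics, below is a minimal, hypothetical sketch of synthetic instruction generation with the OpenAI Python client (v1.x). The seed topics, prompt wording, model name, and output path are illustrative assumptions only; this is not the author's actual pipeline or jailbreak prompt.

```python
"""Minimal sketch of synthetic instruction generation (illustrative only).

Assumes the `openai` v1.x client and an OPENAI_API_KEY in the environment.
The seed topics, prompts, model name, and output path are hypothetical and
are NOT the author's actual pipeline or jailbreak prompt.
"""
import json

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Hypothetical seed topics used to diversify the generated instructions.
SEED_TOPICS = ["wilderness survival", "home networking", "basic chemistry"]


def generate_pair(topic: str) -> dict:
    """Ask the model to invent an instruction about `topic`, then answer it."""
    instruction = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.9,
        messages=[{
            "role": "user",
            "content": f"Write one concise how-to question about {topic}.",
        }],
    ).choices[0].message.content.strip()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[{"role": "user", "content": instruction}],
    ).choices[0].message.content.strip()

    return {"instruction": instruction, "response": response}


if __name__ == "__main__":
    # Write one JSONL record per generated instruction/response pair.
    with open("synthetic_instructions.jsonl", "w") as fh:
        for topic in SEED_TOPICS:
            fh.write(json.dumps(generate_pair(topic)) + "\n")
```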