Update README.md
Instruction-tuned version of the fully trained Open LLama 7B v2 model.
<b>NOTE</b>: The fast tokenizer produces incorrect encodings; set the `use_fast = False` parameter when instantiating the tokenizer.

## License

- <b>Commercially Viable</b>

## Datasets used for Fine-Tuning

<b>Open-instruct</b>

<b>Open-instruct-v1</b>
- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0

<b>Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples</b>
- ESNLI - MIT
- ECQA - CDLA 1.0 - Sharing
- Strategy - MIT
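The tokenizer note above can be sketched as follows. This is a minimal sketch, assuming the standard `transformers` `AutoTokenizer` API; `MODEL_ID` is a placeholder, not this model's actual Hugging Face Hub path.

```python
from transformers import AutoTokenizer

# Placeholder -- substitute the actual Hub repository id of this model.
MODEL_ID = "model-id"

def load_tokenizer(model_id: str = MODEL_ID):
    # use_fast=False selects the slow (Python/SentencePiece) tokenizer;
    # per the README note, the fast tokenizer produces incorrect
    # encodings for this model.
    return AutoTokenizer.from_pretrained(model_id, use_fast=False)
```

Usage: `tokenizer = load_tokenizer()`, then tokenize prompts as usual with `tokenizer(text, return_tensors="pt")`.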