Teja-Gollapudi committed
Commit
77fbbc5
Parent: b06b002

Update README.md

Files changed (1):
  1. README.md +8 -4
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: cc-by-sa-3.0
 datasets:
-- VMware/open-instruct-v1-oasst-dolly-hhrlhf
+- VMware/open-instruct
 language:
 - en
 library_name: transformers
@@ -20,7 +20,9 @@ Instruction-tuned version of the fully trained Open LLama 7B v2 model. The mode
 ## License
 - <b>Commercially Viable </b>
 
-- Open-instruct-v1
+<b>Open-instruct <br>
+
+Open-instruct-v1
 - Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0
 
 Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples
@@ -31,14 +33,16 @@ Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples
 - gsmk8 - MIT
 - aqua - MIT
 - qasc - Apache 2.0
+<br>
 - Language Model, ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
+- Dataset ([VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)) is under cc-by-sa-3.0
 
 
 ## Nomenclature
 
 - Model : Open-llama-v2
 - Model Size: 7B parameters
-- Dataset: Open-instruct(oasst,dolly, hhrlhf)
+- Dataset: Open-instruct
 
 ## Use in Transformers
 
@@ -47,7 +51,7 @@ import os
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = 'VMware/open-llama-7b-open-instruct'
+model_name = 'VMware/open-llama-7b-v2-open-instruct'
 
 
 tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
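The last hunk cuts off at the tokenizer call, so the model-loading and generation steps that presumably follow in the README are not visible in this diff. A minimal, self-contained sketch of the intended usage with the renamed checkpoint is below; the dtype/device flags, the Alpaca-style prompt template, and the generation settings are illustrative assumptions, not text from this commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7b-v2-open-instruct'

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# Assumption: fp16 with automatic device placement (requires `accelerate`);
# the README's exact loading flags are not shown in this diff.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map='auto',
)

# Assumption: an Alpaca-style instruction template; verify against the
# model card before relying on it.
prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)
prompt = prompt_template.format(instruction="What is attention in a transformer?")

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(
    output[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True
)
print(response)
```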