update readme.md
README.md
CHANGED
@@ -4,9 +4,9 @@ language:
 - en
 ---
 
-# RedPajama-Chat-
+# RedPajama-INCITE-Chat-7B-v0.1
 
-RedPajama-Chat-
+RedPajama-INCITE-Chat-7B-v0.1 is a large transformer-based language model developed by Together Computer and trained on the RedPajama-Data-1T dataset.
 It is further fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
 
 ## Model Details
@@ -41,8 +41,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
 
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16)
 model = model.to('cuda:0')
 # infer
 prompt = "<human>: Who is Alan Turing?\n<bot>:"
@@ -83,8 +83,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
 
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
 
 # infer
 prompt = "<human>: Who is Alan Turing?\n<bot>:"
@@ -106,8 +106,8 @@ Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scie
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.bfloat16)
 # infer
 inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors='pt').to(model.device)
 outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
@@ -141,13 +141,13 @@ It is the responsibility of the end user to ensure that the model is used in a r
 
 #### Out-of-Scope Use
 
-RedPajama-Chat-
+`RedPajama-INCITE-Chat-7B-v0.1` is a language model and may not perform well for other use cases outside of its intended scope.
 For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
 It is important to consider the limitations of the model and to only use it for its intended purpose.
 
 #### Misuse and Malicious Use
 
-RedPajama-Chat-
+`RedPajama-INCITE-Chat-7B-v0.1` is designed for language modeling.
 Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the OpenChatKit community project.
 
 Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
@@ -164,7 +164,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 
 ## Limitations
 
-`RedPajama-Chat-
+`RedPajama-INCITE-Chat-7B-v0.1`, like other language models, has limitations that should be taken into consideration.
 For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
 We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
 
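The hunks above only show the lines touched by this commit, so the inference snippets are fragments. For reference, a self-contained sketch of the GPU (fp16) path with the renamed checkpoint might look like the following; the decoding step and the generation settings are borrowed from the CPU snippet in the diff and from the standard `generate()` workflow, and are assumptions rather than part of this commit.

```python
# Minimal GPU (fp16) inference sketch assembled from the changed snippets above.
# Assumptions: generation settings taken from the CPU hunk; the decode/print step
# is not shown in this diff and follows the usual transformers pattern.
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init: load the renamed checkpoint in half precision and move it to the GPU
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16
)
model = model.to('cuda:0')

# infer: the chat format wraps the user turn in <human>: ... <bot>:
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)

# decode only the newly generated tokens, dropping the echoed prompt
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

The int8 and CPU variants shown in the other hunks differ only in how the model is loaded: `device_map='auto', torch_dtype=torch.float16, load_in_8bit=True` for int8 (no explicit `.to('cuda:0')`), and `torch_dtype=torch.bfloat16` without any device move for CPU.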