Update README.md
README.md CHANGED
@@ -125,4 +125,36 @@ Downloading shards: 100%|██████████████████
```
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:08<00:00, 4.05s/it]
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
The percentage change of the net income from Q4 FY23 to Q4 FY24 is 769%. This is calculated by taking the difference between the two net incomes ($12,285 million and $1,414 million) and dividing it by the net income from Q4 FY23 ($1,414 million), then multiplying by 100 to get the percentage change. So, the formula is ((12,285 - 1,414) / 1,414) * 100 = 769%.
```
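
As a quick sanity check, the arithmetic in the answer above can be reproduced in a couple of lines of Python (the variable names are illustrative; the figures come from the model's answer):

```
q4_fy23 = 1_414   # net income, Q4 FY23 ($M)
q4_fy24 = 12_285  # net income, Q4 FY24 ($M)

pct_change = (q4_fy24 - q4_fy23) / q4_fy23 * 100
print(f"{pct_change:.1f}%")  # 768.8%, which rounds to the 769% the model reports
```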

Sample run on a classification task; positive labelling still works:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saucam/PowerBot-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!"}]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")

outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

```
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:07<00:00, 3.89s/it]

No chat template is defined for this tokenizer - using a default chat template that implements the ChatML format (without BOS/EOS tokens!). If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|im_start|>user
Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!<|im_end|>
<|im_start|>assistant
This comment is non-toxic.
<|im_end|><|end_of_text|>
```
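
The warnings in the log above (no chat template defined, missing attention mask, unset pad token) do not affect this demo, but they can be avoided by passing the attention mask and a pad token explicitly. A minimal sketch, assuming a transformers version recent enough for `apply_chat_template(..., return_dict=True)`:

```
# return_dict=True makes apply_chat_template return input_ids AND attention_mask
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(
    **inputs,                             # forwards input_ids and attention_mask
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,  # silences the pad_token_id warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The chat-template warning itself goes away once `tokenizer.chat_template` is set to a template appropriate for the model, as the log message suggests.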