Update README.md
README.md CHANGED
@@ -1,3 +1,72 @@

---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
- openchat/openchat_sharegpt4_dataset
- Open-Orca/SlimOrca
language:
- en
pipeline_tag: conversational
tags:
- text-generation-inference
inference: false
---

# 🚀 Falcon-RW-1B-Chat

**Falcon-RW-1B-Chat is a 1-billion-parameter conversational model. It is a further refinement of [Falcon-RW-1B-Instruct-OpenOrca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca), trained on selected data from the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) datasets.**
17 |
+
|
18 |
+
The underlying Falcon-RW-1B-Instruct-OpenOrca model is built on the [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b), a causal decoder-only model. It has been instruction-finetuned using the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
|
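
To get a feel for the finetuning data, the datasets listed above can be inspected with the `datasets` library. A minimal sketch follows; the split and column names are assumptions, so check each dataset card for the exact layout:

```python
# Minimal sketch for peeking at the finetuning data. Split and column
# names are assumptions -- verify them against each dataset card.
from datasets import load_dataset

ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
slimorca = load_dataset("Open-Orca/SlimOrca", split="train")

# ultrachat_200k stores chats as {"role", "content"} turns.
print(ultrachat[0]["messages"][:2])
# SlimOrca uses ShareGPT-style {"from", "value"} turns.
print(slimorca[0]["conversations"][:2])
```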

**📊 Evaluation Results**

TBA

**🎯 Purpose**

Falcon-RW-1B-Chat aims to add conversational capabilities to the Falcon-RW-1B-Instruct-OpenOrca model. This initiative is driven by the need for a small, open-source, instruction-finetuned, ready-to-use model suitable for users with limited computational resources, such as lower-end consumer GPUs.

## 📖 Example Code

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "ericzzz/falcon-rw-1b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.bfloat16
)

chat_history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello! How can I assist you today?"},
    {"role": "user", "content": "Explain what AI is."},
]

# Render the conversation with the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    chat_history, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.05,
    max_new_tokens=200,
)

# Decode only the newly generated tokens, skipping the prompt.
output_text = tokenizer.decode(
    output_tokens[0][len(input_ids[0]):], skip_special_tokens=True
)

print(output_text)
```
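
For the lower-end GPUs mentioned under Purpose, the model can also be loaded with 8-bit quantization, which roughly halves memory use compared to bfloat16. A minimal sketch, assuming the optional `bitsandbytes` package is installed (this variant is not part of the original example):

```python
# Minimal sketch: 8-bit quantized loading to reduce GPU memory use.
# Assumes `pip install bitsandbytes accelerate`; not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ericzzz/falcon-rw-1b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
# Generation then works exactly as in the example above.
```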
|
63 |
+
|
64 |
+
## β οΈ Limitations
|
65 |
+
|
66 |
+
This model may generate inaccurate or misleading information and is prone to hallucination, creating plausible but false narratives. It lacks the ability to discern factual content from fiction and may inadvertently produce biased, harmful or offensive content. Its understanding of complex, nuanced queries is limited. Users should be aware of this and verify any information obtained from the model.
|

The model is provided 'as is' without any warranties, and the creators are not liable for any damages arising from its use. Users are responsible for their interactions with the model.

## 💬 Contact

For further inquiries or feedback, please contact [email protected].