Update README.md

README.md CHANGED
@@ -2,7 +2,7 @@
 license: other
 license_name: tongyi-qianwen-research
 license_link: >-
-  https://huggingface.co/Qwen/
+  https://huggingface.co/Qwen/Qwen-1.5-0_5B-Chat/blob/main/LICENSE
 language:
 - en
 pipeline_tag: text-generation
@@ -10,12 +10,12 @@ tags:
 - chat
 ---
 
-#
+# Qwen-1.5-0.5B-Chat
 
 
 ## Introduction
 
-
+Qwen-1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
 
 * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
 * Significant performance improvement in human preference for chat models;
@@ -47,10 +47,10 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "Qwen/
+    "Qwen/Qwen-1.5-0_5B-Chat",
     device_map="auto"
 )
-tokenizer = AutoTokenizer.from_pretrained("Qwen/
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1.5-0_5B-Chat")
 
 prompt = "Give me a short introduction to large language model."
 messages = [
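The usage snippet updated in this diff builds a `messages` list of role/content dicts, which the full README then renders into a prompt with `tokenizer.apply_chat_template(..., add_generation_prompt=True)`. As a rough, self-contained illustration of what that rendering produces for Qwen chat models, here is a minimal sketch of the ChatML-style format; the authoritative template ships inside the tokenizer config, so treat the exact special tokens below as an assumption rather than the model's definitive template:

```python
def render_chatml(messages):
    """Sketch of a ChatML-style rendering of {role, content} messages.

    Approximates what tokenizer.apply_chat_template(messages, tokenize=False,
    add_generation_prompt=True) produces for Qwen chat models; the real
    template is defined by the tokenizer, and the <|im_start|>/<|im_end|>
    markers here are an assumption for illustration only.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    # add_generation_prompt=True: open an assistant turn for the model to fill
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
print(render_chatml(messages))
```

The rendered string is what gets tokenized and passed to `model.generate`; the trailing open assistant turn is why the model's continuation is the chat reply.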