dnhkng committed on
Commit 1009e91
1 Parent(s): 802103d

Update README.md

Files changed (1)
  1. README.md +44 -3
README.md CHANGED
@@ -1,3 +1,44 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ This is a new kind of model optimization. This model is based on Microsoft's Phi-3-medium-4k-instruct.
+
+ #### Usage with Transformers AutoModelForCausalLM
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ torch.random.manual_seed(0)
+ model_id = "dnhkng/RYS-Phi-3-medium-4k-instruct"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="cuda",
+     torch_dtype="auto",
+     trust_remote_code=True,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ messages = [
+     {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
+     {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
+     {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
+ ]
+
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+ )
+
+ generation_args = {
+     "max_new_tokens": 500,
+     "return_full_text": False,
+     "temperature": 0.0,
+     "do_sample": False,
+ }
+
+ output = pipe(messages, **generation_args)
+ print(output[0]["generated_text"])
+ ```