Xenova (HF staff) committed
Commit: e57a649
1 parent: a507d2a

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -69,7 +69,7 @@ inputs = tokenizer.apply_chat_template(
 ).to("cuda")
 
 outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
-print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
+print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
 ```
 
 ### AutoAWQ
@@ -109,7 +109,7 @@ inputs = tokenizer.apply_chat_template(
 ).to("cuda")
 
 outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
-print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
+print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
 ```
 
 The AutoAWQ script has been adapted from [`AutoAWQ/examples/generate.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).
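Both hunks apply the same fix: for decoder-only models, `model.generate` returns the prompt tokens followed by the newly generated tokens, so slicing `outputs` by `inputs['input_ids'].shape[1]` drops the echoed prompt, and indexing `[0]` prints the single decoded string instead of a one-element list. Below is a minimal, self-contained sketch of that pattern; the checkpoint id is a placeholder assumption (substitute this repository's model), and moving tensors with `.to(model.device)` stands in for the README's `.to("cuda")`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint id (assumption); replace with the model from this repo.
model_id = "your-org/your-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is 2 + 2?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(model.device)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)

# outputs = [prompt tokens | new tokens]; keep only the columns after the prompt.
new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0])
```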