Abhinav Kulkarni committed
Commit • e3673d3
Parent(s): b3f236e
Updated README

README.md CHANGED
@@ -24,7 +24,7 @@ Please refer to the AWQ quantization license ([link](https://github.com/llm-awq/
 
 ## CUDA Version
 
-This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of
+This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher.
 
 For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image is runtime v12.1 but otherwise the same as the configuration above and has also been verified to work.
 
@@ -85,7 +85,7 @@ output = model.generate(
     repetition_penalty=1.1,
     eos_token_id=tokenizer.eos_token_id
 )
-print(tokenizer.decode(output[0], skip_special_tokens=True))
+# print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 
 ## Evaluation
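The first hunk completes the README's compute-capability requirement (`8.0` or higher). In PyTorch this value can be read at runtime with `torch.cuda.get_device_capability()`, which returns a `(major, minor)` tuple. A minimal stdlib sketch of the comparison that requirement implies, using a hypothetical helper name not present in the README:

```python
# Minimum compute capability stated in the README for AWQ kernels.
MIN_CAPABILITY = (8, 0)


def meets_awq_requirement(major: int, minor: int) -> bool:
    """Return True if a GPU's compute capability (major, minor) is >= 8.0.

    Tuple comparison handles minor versions correctly, e.g. 8.6 >= 8.0.
    In practice the tuple would come from torch.cuda.get_device_capability().
    """
    return (major, minor) >= MIN_CAPABILITY


print(meets_awq_requirement(8, 0))  # True  (Ampere, e.g. A100)
print(meets_awq_requirement(7, 0))  # False (Volta, e.g. V100)
```

This is only an illustration of the version check; the AWQ kernels themselves enforce the requirement when they are loaded on an unsupported device.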