mwitiderrick committed
Commit a0df302 · 1 Parent(s): 5d12f37

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -112,7 +112,7 @@ python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/
  python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
  cp deployment/model.onnx deployment/model-orig.onnx
  ```
- Run this kv-cache injection afterwards:
+ Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
  ```python
  import os
  import onnx
@@ -124,7 +124,7 @@ model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(mode
  onnx.save(model, output_file)
  print(f"Modified model saved to: {output_file}")
  ```
- Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) guide for step-by-step instruction on performing one-shot quantization on your own large language models.
+ Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) guide for step-by-step instructions for performing one-shot quantization on large language models.
  ## Slack
 
  For further support, and discussions on these models and AI in general, join us at [Neural Magic's Slack server](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
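
Note: the two hunks only show the edges of the README's kv-cache injection snippet; the lines between them are not part of this diff. Below is a minimal sketch of what the full step likely looks like, assembled from the fragments visible in the hunk headers and context lines. The import path for `KeyValueCacheInjector` and the `input_file`/`output_file` values are assumptions modeled on Neural Magic's other OBCQ deployment examples, not something this commit shows.

```python
# Hedged reconstruction of the kv-cache injection step referenced in the diff.
# Lines visible in the hunks are kept verbatim; everything else is an assumption.
import os

import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector  # assumed import path

# Assumed paths: the shell step above copies deployment/model.onnx to
# deployment/model-orig.onnx, so the injector presumably reads the "-orig"
# copy and writes the kv-cache-enabled graph back to model.onnx.
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Load the exported graph without pulling in external weight tensors.
model = onnx.load(input_file, load_external_data=False)

# Inject past key/value inputs and present outputs into the ONNX graph
# (this call appears verbatim in the second hunk header).
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)

onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```

If this reconstruction is right, running it after the `cp` step rewrites `deployment/model.onnx` in place with the KV cache injected, while `model-orig.onnx` keeps the unmodified export as a fallback.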