---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.4
inference: false
model_type: llama
prompt_template: |
  <|im_start|>user\n
  {prompt}<|im_end|>\n
  <|im_start|>assistant\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
## TinyLlama 1.1B Chat 0.4 - DeepSparse
This repo contains model files for [TinyLlama 1.1B Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.4) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.

This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).

## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```

Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration

prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

model = TextGeneration(model="hf:nm-testing/TinyLlama-1.1B-Chat-v0.4-pruned50-quant")

print(model(formatted_prompt, max_new_tokens=500).generations[0].text)
"""
Banana bread is a delicious and easy-to-make recipe that is sure to please. Here is a recipe for making banana bread:

Ingredients:

For the Banana Bread:
- 1 cup of sugar
- 1 cup of flour
- 1/2 cup of mashed bananas
- 1/4 cup of milk
- 1/2 cup of melted butter
- 1/4 cup of baking powder
- 1/4 cup of baking soda
- 1/4 cup of eggs
- 1/4 cup of milk
- 1/4 cup of sugar

Instructions:
1. Preheat the oven to 325°F (160°C).
2. In a large bowl, combine the sugar and flour.
3. In a separate bow, combine the mashed bananas, milk, butter, baking powder, baking soda, milk, sugar.
4. Add the bananas and milk into the flour-sugar mixture.
5. Pour the milk into the bowl of the flour-sugar mixture.
6. Pour the baking powder into the bowl of the flour-sugar mixture.
7. Pour the mashed bananas into the bowl of the flour-sugar mixture.
8. Add the eggs into the bowl of the flour-sugar mixture.
9. Stir the mixture until it becomes a dough.
10. Grease a 9-inch (23 cm) square pan.
11. Pour the mixture into the pan.
12. Bake the banana bread in the oven for 40 minutes.
13. Remove the banana bread from the oven and cool it.
14. Cut the bread into 16 pieces.
15. Make the glaze:
16. Sprinkle the sugar over the bread.
17. Bake the bread in the oven for 30 minutes.
"""
```

```python
from deepsparse import TextGeneration

prompt = "How to get in a good university?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

model = TextGeneration(model="hf:nm-testing/TinyLlama-1.1B-Chat-v0.4-pruned50-quant")

print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
There are many factors to consider when choosing a university. Here are some tips for getting into a good university:
1. Research your options: Consider the schools in your area and the ones in your desired location. Research their reputation, tuition, and academic programs.
2. Apply to multiple universities: Apply to multiple universities, ensuring that you are applying to the best option for you.
3. Get a job: If you are applying to a university, you will need to find a job to support your studies. This will help you budget and manage your time.
4. Get involved with your community: Your university will likely have a community of students and faculty. Engage with this community by volunteering, participating in clubs, and engaging with others in your community.
5. Get involved with extracurricular activities: Universities often have many extracurricular activities, which can help you meet new people
"""
```

## Prompt template
```
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
```
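If you call the model repeatedly, it can help to fold this template into a small wrapper. The sketch below is a minimal, unofficial convenience helper built only on the `TextGeneration` API shown above; the `format_prompt` and `chat` names and the default `max_new_tokens` value are illustrative assumptions, not part of DeepSparse.

```python
from deepsparse import TextGeneration

MODEL_ID = "hf:nm-testing/TinyLlama-1.1B-Chat-v0.4-pruned50-quant"
model = TextGeneration(model=MODEL_ID)

def format_prompt(prompt: str) -> str:
    # Apply the chat template from the "Prompt template" section above.
    return f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

def chat(prompt: str, max_new_tokens: int = 200) -> str:
    # Format the raw prompt, run generation, and return only the text.
    # `chat` and its default are hypothetical helpers, not DeepSparse API.
    return model(format_prompt(prompt), max_new_tokens=max_new_tokens).generations[0].text

print(chat("What are the benefits of sparse models on CPUs?"))
```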
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
wget https://huggingface.co/nm-testing/TinyLlama-1.1B-Chat-v0.4-pruned50-quant/raw/main/recipe.yaml # download recipe
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/TinyLlama-1.1B-Chat-v0.4 open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```

Then run this kv-cache injection:

```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Inject KV-cache inputs and outputs into the exported ONNX graph so
# DeepSparse can reuse attention state across decoding steps.
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```

A quick smoke test for the resulting `deployment` directory is sketched at the end of this card.

Follow the instructions in our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) guide for step-by-step instructions on performing one-shot quantization on your own large language models.

## Slack
For further support and discussion of these models and AI in general, join us at [Neural Magic's Slack server](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
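As a final check on the Sparsification steps above, the following sketch loads the exported `deployment` directory and runs a short generation. It assumes the export produced a complete local deployment (the injected `model.onnx` plus tokenizer and config files) and that `TextGeneration` accepts a local directory path in the same way it accepts the `hf:` stub used earlier; adjust the path to wherever your export landed.

```python
from deepsparse import TextGeneration

# Assumption: the Sparsification steps above left a complete local
# `deployment/` directory (model.onnx with KV cache injected, tokenizer,
# and config). Loading from a local path mirrors the hf: stub usage earlier.
model = TextGeneration(model="deployment")

prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

# A short generation is enough to confirm the export runs end to end.
print(model(formatted_prompt, max_new_tokens=50).generations[0].text)
```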