Sparse MPT-7B-Chat - DeepSparse

A chat-aligned MPT-7B model pruned to 50% sparsity and quantized with SparseGPT, optimized for inference with DeepSparse.

from deepsparse import TextGeneration
# Load the 50% pruned, quantized model via its DeepSparse Hugging Face stub
model = TextGeneration(model="hf:neuralmagic/mpt-7b-chat-pruned50-quant")
# Generate a completion and print the text
print(model("Tell me a joke.", max_new_tokens=50).generations[0].text)
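Because this is a chat-aligned model, prompts generally work best when wrapped in MPT-Chat's ChatML-style role markers. The sketch below hand-rolls that wrapping for illustration; the authoritative template lives in the upstream mosaicml/mpt-7b-chat tokenizer config, so treat the markers and the chat_prompt helper as assumptions rather than documented usage. Running either snippet assumes a DeepSparse build with LLM support installed (depending on the release, the llm extra or the nightly build).

from deepsparse import TextGeneration

model = TextGeneration(model="hf:neuralmagic/mpt-7b-chat-pruned50-quant")

# Hand-rolled ChatML-style wrapper (assumption: mirrors the upstream MPT-7B-Chat template)
def chat_prompt(user_message: str) -> str:
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

result = model(chat_prompt("What is sparsity in neural networks?"), max_new_tokens=100)
print(result.generations[0].text)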