shisa-ai/Mistral-Nemo-Japanese-Instruct-FP8-Dynamic is an FP8-Dynamic quantization of cyberagent/Mistral-Nemo-Japanese-Instruct-2408, produced with LLM Compressor 0.4.0: Linear-layer weights are stored in FP8 (E4M3), and activations are quantized dynamically at inference time (post-training quantization, so no calibration dataset is required).
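The exact recipe used for this checkpoint is not published in this card, so the fragment below is a sketch based on LLM Compressor's standard FP8_DYNAMIC scheme, which matches the description above (FP8 weights on Linear layers, dynamic per-token activation scales):

```yaml
# Hypothetical LLM Compressor recipe for an FP8-Dynamic quant like this one.
# FP8_DYNAMIC: FP8 (E4M3) weights on Linear layers; activations are
# quantized dynamically at inference time, so no calibration data is needed.
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]
      scheme: FP8_DYNAMIC
      ignore: ["lm_head"]   # the LM head is commonly left in BF16
```

A recipe like this is typically applied to the base model with LLM Compressor's `oneshot` entry point, then saved as a compressed Safetensors checkpoint.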

Model size: 12.2B params (Safetensors)
Tensor types: BF16 · F8_E4M3
·
