FP8 dynamic activation quantization of Qwen/Qwen2.5-Coder-32B-Instruct, performed with llm-compressor.
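As a rough numerical illustration of what "dynamic" FP8 activation quantization means (this is a simplified sketch, not llm-compressor's actual implementation): each activation vector is scaled on the fly by its own maximum into the FP8 E4M3 range, rounded to the 3-bit mantissa grid, and dequantized by the same scale at matmul time.

```python
import math

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def round_to_e4m3(v):
    # Round a float to the nearest FP8 E4M3 value (1 sign, 4 exponent,
    # 3 mantissa bits). Simplified: subnormal handling is omitted.
    if v == 0.0:
        return 0.0
    m, e = math.frexp(abs(v))   # abs(v) == m * 2**e, with m in [0.5, 1)
    m = round(m * 16) / 16      # keep 3 mantissa bits past the implicit 1
    return math.copysign(min(math.ldexp(m, e), E4M3_MAX), v)

def quantize_dynamic(acts):
    # "Dynamic" = the scale is derived from this activation tensor itself,
    # at inference time, rather than calibrated offline.
    scale = max(abs(a) for a in acts) / E4M3_MAX
    return [round_to_e4m3(a / scale) for a in acts], scale

def dequantize(q, scale):
    return [v * scale for v in q]
```

Values that land exactly on the E4M3 grid round-trip losslessly; everything else picks up a small relative error bounded by the 3-bit mantissa resolution.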
Model tree for ig1/Qwen2.5-Coder-32B-Instruct-FP8-Dynamic
- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/Qwen2.5-Coder-32B
- Finetuned: Qwen/Qwen2.5-Coder-32B-Instruct