Model Card: vital-ai/watt-tool-70B-awq
Model Description
This model, vital-ai/watt-tool-70B-awq, is a quantized version of the base model watt-ai/watt-tool-70B. Quantization was applied to reduce model size and improve inference speed while preserving the base model's performance.
Base Model: watt-ai/watt-tool-70B
Quantization Method: 4-bit AWQ
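As a minimal sketch of how an AWQ checkpoint like this is typically loaded (not part of the original card; assumes the autoawq integration in transformers and enough GPU memory for the 4-bit weights of a 70B model, roughly 40 GB):

```python
# Minimal loading sketch for an AWQ-quantized checkpoint via transformers.
# Assumes `pip install transformers autoawq` and sufficient GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vital-ai/watt-tool-70B-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The AWQ quantization config is read from the checkpoint itself,
# so no extra quantization arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What is AWQ quantization?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For serving, vLLM can usually load the same checkpoint with its AWQ support, e.g. `LLM(model="vital-ai/watt-tool-70B-awq", quantization="awq")`.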
Model tree for vital-ai/watt-tool-70B-awq
Base model: meta-llama/Llama-3.1-70B
Finetuned: meta-llama/Llama-3.3-70B-Instruct
Finetuned: watt-ai/watt-tool-70B
Quantized: vital-ai/watt-tool-70B-awq (this model)