It is fast and produces coherent output. Many thanks to agentica-org. The f16 version already runs very well on my edge device. Please correct me if I made any mistake in converting it to GGUF format.
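
For anyone who wants to reproduce the conversion, here is a minimal sketch of the usual llama.cpp workflow. It assumes a local llama.cpp checkout, the original agentica-org/DeepScaleR-1.5B-Preview checkpoint downloaded into a local directory, and the current script name `convert_hf_to_gguf.py` (older llama.cpp releases ship it as `convert-hf-to-gguf.py`); the paths and filenames below are placeholders.

```python
# Sketch: convert the Hugging Face checkpoint to a 16-bit GGUF file using
# llama.cpp's conversion script. Paths are assumptions; adjust to your setup.
import subprocess

subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "DeepScaleR-1.5B-Preview",                        # local HF-format model directory
        "--outtype", "f16",                               # keep the weights in 16-bit
        "--outfile", "DeepScaleR-1.5B-Preview-f16.gguf",  # output GGUF file
    ],
    check=True,  # raise CalledProcessError if the conversion fails
)
```

As a quick smoke test on an edge device, the resulting file can be loaded with the llama-cpp-python bindings (the prompt and context size here are only examples):

```python
# Sketch: load the converted GGUF file and run a short completion.
from llama_cpp import Llama

llm = Llama(model_path="DeepScaleR-1.5B-Preview-f16.gguf", n_ctx=4096)
out = llm("Question: what is 12 * 13? Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```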

Model size: 1.78B params
Architecture: qwen2
Format: GGUF (16-bit, f16)


Model tree for heylobc/DeepScaleR-1.5B-Preview-f16.gguf