---
library_name: transformers
pipeline_tag: text-generation
tags:
- arxivllama
- f16
- gguf
- llama-cpp
- text-generation
---
# roleplaiapp/ArxivLlama-3.1-8B-f16-GGUF
**Repo:** `roleplaiapp/ArxivLlama-3.1-8B-f16-GGUF`
**Original Model:** `ArxivLlama-3.1-8B`
**Quantized File:** `ArxivLlama-3.1-8B.f16.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `f16`
## Overview
This is a GGUF f16 quantized version of ArxivLlama-3.1-8B.
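Since this is a standard GGUF file, it can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` package (my choice for illustration; the `n_ctx` value and the prompt are assumptions, not part of this release):

```python
from llama_cpp import Llama

# Download the quantized file from this repo and load it
# (requires llama-cpp-python with huggingface-hub installed).
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/ArxivLlama-3.1-8B-f16-GGUF",
    filename="ArxivLlama-3.1-8B.f16.gguf",
    n_ctx=4096,  # assumed context length; adjust to your hardware
)

# Illustrative chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the attention mechanism."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```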
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.