Llama-3-8B-16K-GGUF
- This is a quantized version of mattshumer/Llama-3-8B-16K, created using llama.cpp.
Model Description
This is an extended-context (16K) version of LLaMA 3 8B (base, not instruct). It was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset, with rope_theta set to 1000000.0. Training was done with Axolotl.
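The rope_theta value sets the base of the rotary position embeddings (RoPE); raising it slows the rotation of each frequency band so that longer positions remain distinguishable, which is how the context window is stretched to 16K. A minimal sketch of the effect (the helper name, the head dimension of 128, and the stock Llama 3 rope_theta of 500000.0 are assumptions for illustration, not stated in this card):

```python
import math

def rope_inv_freq(head_dim: int, rope_theta: float) -> list[float]:
    """Inverse rotary frequency for each pair of head dimensions."""
    return [rope_theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

# Assumed values: head_dim = 128, stock rope_theta = 500000.0 (Llama 3),
# extended rope_theta = 1000000.0 (as set for this 16K fine-tune).
stock_freqs = rope_inv_freq(128, 500000.0)
extended_freqs = rope_inv_freq(128, 1000000.0)

# A larger theta shrinks every non-trivial frequency, so positional
# rotations advance more slowly across the longer 16K context.
assert all(e <= s for e, s in zip(extended_freqs, stock_freqs))
```

The first band is always 1.0 regardless of theta; only the higher-index bands are compressed, which is why a larger base extends usable context without retraining from scratch.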
Model tree for QuantFactory/Llama-3-8B-16K-GGUF
Base model: mattshumer/Llama-3-8B-16K