This is an extended-context (16K) version of Llama 3 8B (the base model, not the instruct variant). It was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.
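If you want to inspect the training data referenced above, a minimal sketch using the `datasets` library (split and column names are whatever the dataset repo defines, not specified here):

```python
from datasets import load_dataset

# Load the long-context instruction dataset used for fine-tuning.
ds = load_dataset("Yukang/LongAlpaca-16k-length")

# Inspect the available splits, sizes, and column names.
print(ds)
```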

rope_theta was set to 1000000.0 to support the longer context window. The model was trained with Axolotl.
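A minimal usage sketch with `transformers` is below. The dtype, device placement, and prompt are illustrative choices, and the 16384 position limit is inferred from the 16K claim rather than stated in the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mattshumer/Llama-3-8B-16K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your hardware
    device_map="auto",
)

# The extended-context settings live in the model config.
print(model.config.rope_theta)               # expected: 1000000.0
print(model.config.max_position_embeddings)  # expected: 16384 for a 16K context (assumed)

# This is a base model, so use plain text completion (no chat template).
prompt = "Long-context language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```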
