
GGUFs for Yi-34B-200K v2: https://huggingface.co/01-ai/Yi-34B-200K

This is the 2024-03-07 updated version with enhanced long-context performance. From their release notes:

> The long text capability of the Yi-34B-200K has been enhanced. In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pretrain the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.

The FP16 files were split with PeaZip. Recombine them with PeaZip, 7-Zip, or a simple concatenate command.
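A minimal sketch of the concatenate approach; the part filenames below are placeholders, so substitute the actual split-part names you downloaded from this repo:

```shell
# Placeholder part names -- replace with the real split files from the repo.
# Here we simulate two downloaded parts to show the recombine step:
printf 'part-one-' > yi-34b-200k.fp16.gguf.001
printf 'part-two'  > yi-34b-200k.fp16.gguf.002

# Concatenate the parts IN ORDER to rebuild the single FP16 GGUF file:
cat yi-34b-200k.fp16.gguf.001 yi-34b-200k.fp16.gguf.002 > yi-34b-200k.fp16.gguf
```

On Windows without a Unix shell, `copy /b part.001 + part.002 whole.gguf` does the same byte-wise join.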

GGUF metadata: 34.4B params, llama architecture.