My first ever successful merge: QueenLiz 120B
This is a linear merge of QuartetAnemoi 70B (https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) and lzlv 70B (https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf).
Sharing it here so that someone with a proper machine can hopefully try it out.
NOTE: the context window should be set to 32K.
My Q4_K_M GGUF is here: https://huggingface.co/Noodlz/QueenLiz-120B-GGUF
Thanks to the mad skills of @mradermacher, a whole set of imatrix-quantized GGUF files is available here: https://huggingface.co/mradermacher/QueenLiz-120B-i1-GGUF/tree/main
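If you want a quick way to try one of these GGUFs locally, here is a minimal sketch using llama-cpp-python. The filename is hypothetical (substitute whichever quant you actually downloaded), and the 32K context setting follows the note above.

```python
from llama_cpp import Llama

# Hypothetical filename: substitute whichever quant you downloaded from
# Noodlz/QueenLiz-120B-GGUF or mradermacher/QueenLiz-120B-i1-GGUF.
llm = Llama(
    model_path="QueenLiz-120B.Q4_K_M.gguf",
    n_ctx=32768,      # 32K context window, per the note above
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```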
base_model:
- alchemonaut/QuartetAnemoi-70B-t0.0001
- lizpreciatior/lzlv_70b_fp16_hf
library_name: transformers
tags:
- mergekit
- merge
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the linear merge method.
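The card does not reproduce the exact mergekit configuration, so the snippet below is only a sketch of what a linear merge of these two models could look like; the equal weights and float16 dtype are assumptions, not the settings actually used for QueenLiz. It writes a config file and invokes mergekit's documented mergekit-yaml entry point.

```python
import subprocess

# Assumed settings: the actual QueenLiz weights and dtype are not stated here.
config = """\
merge_method: linear
models:
  - model: alchemonaut/QuartetAnemoi-70B-t0.0001
    parameters:
      weight: 0.5
  - model: lizpreciatior/lzlv_70b_fp16_hf
    parameters:
      weight: 0.5
dtype: float16
"""

with open("queenliz.yml", "w") as f:
    f.write(config)

# mergekit CLI usage: mergekit-yaml <config> <output directory>
subprocess.run(["mergekit-yaml", "queenliz.yml", "./QueenLiz-120B"], check=True)
```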
Models Merged
The following models were included in the merge: