
My first ever successful merge: QueenLiz 120B


This is a linear merge of QuartetAnemoi 70B (https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) and lzlv 70B (https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf).

Sharing it here so that hopefully someone else with a proper machine can try it out.

NOTE: the context window should be set to 32K.
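If you have the hardware for the full fp16 weights, a minimal (untested) loading sketch with transformers might look like the following; the prompt and generation settings are purely illustrative, and accelerate is assumed for device_map="auto":

```python
# Minimal sketch, assuming transformers + accelerate are installed and you
# have enough GPU memory (or offloading) for roughly 240 GB of fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Noodlz/QueenLiz-120B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spread layers across available GPUs / CPU
)

# Keep prompt + generated tokens within the 32K context window noted above.
prompt = "Write a short scene in which a queen addresses her court."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```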

My Q4_K_M GGUF is here: https://huggingface.co/Noodlz/QueenLiz-120B-GGUF

Thanks to the mad skills of @mradermacher, a whole set of imatrix (iMat) quantized GGUF files is available here: https://huggingface.co/mradermacher/QueenLiz-120B-i1-GGUF/tree/main
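If you are running one of the GGUF quants instead, here is a rough sketch using llama-cpp-python; the file name is a placeholder for whichever quant you downloaded, and n_ctx is set to match the 32K context window noted above:

```python
# Rough sketch, assuming llama-cpp-python is installed and a GGUF file
# (the name below is hypothetical) has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="QueenLiz-120B.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=32768,       # match the 32K context window
    n_gpu_layers=-1,   # offload as many layers as your VRAM allows
)

out = llm("Write a short poem about a queen.", max_tokens=200)
print(out["choices"][0]["text"])
```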


base_model:

  • alchemonaut/QuartetAnemoi-70B-t0.0001
  • lizpreciatior/lzlv_70b_fp16_hf

library_name: transformers

tags:

  • mergekit
  • merge

output_folder

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
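Conceptually, a linear merge takes a weighted average of the corresponding weight tensors from the source models. A minimal sketch of the idea (not mergekit's actual implementation) looks like this:

```python
# Conceptual sketch of a linear merge (not mergekit's actual code):
# each output tensor is a weighted average of the matching tensors
# from the two source models, which must share keys and shapes.
import torch

def linear_merge(
    state_dict_a: dict[str, torch.Tensor],
    state_dict_b: dict[str, torch.Tensor],
    weight_a: float = 0.5,
    weight_b: float = 0.5,
) -> dict[str, torch.Tensor]:
    return {
        key: weight_a * state_dict_a[key] + weight_b * state_dict_b[key]
        for key in state_dict_a
    }
```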

Models Merged

The following models were included in the merge:
  • alchemonaut/QuartetAnemoi-70B-t0.0001
  • lizpreciatior/lzlv_70b_fp16_hf
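
The exact mergekit configuration is not included in this card. Purely as an illustration, a linear merge of these two models would use a config along these lines; the weights and dtype below are assumptions, not the author's actual recipe:

```yaml
# Illustrative only -- not the configuration actually used for QueenLiz-120B.
# Typically run with: mergekit-yaml this_config.yaml ./output_folder
merge_method: linear
models:
  - model: alchemonaut/QuartetAnemoi-70B-t0.0001
    parameters:
      weight: 0.5
  - model: lizpreciatior/lzlv_70b_fp16_hf
    parameters:
      weight: 0.5
dtype: float16
```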
license: other
license_name: non-commercial-research-only
license_link: LICENSE
