
- GGUF Quants
- iMat Quants

Awqward 2.5 32B Instruct

Awqward 2.5 32B Instruct is a normalized, denoised Fourier interpolation of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-32B-Instruct"
finetune_merge:
  - { "model": "Qwen/QwQ-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7, "is_input": true }
  - { "model": "rombodawg/Rombos-LLM-V2.5-Qwen-32b", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.5 }
  - { "model": "AiCloser/Qwen2.5-32B-AGI", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.5, "is_output": true }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2", "base": "Qwen/Qwen2.5-32B", "alpha": 0.5 }
```

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the instruct model.
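Per tensor, the idea looks something like the sketch below. This is an illustrative reconstruction, not the actual merge code: the top-k denoising rule and the energy renormalization are my reading of "normalized denoised", and the function names are made up.

```python
# Hypothetical sketch of a normalized, denoised Fourier interpolation of
# weight deltas. The thresholding and renormalization choices are assumptions.
import torch

def fourier_delta(model_w: torch.Tensor, base_w: torch.Tensor,
                  alpha: float, keep: float = 0.9) -> torch.Tensor:
    """Turn a weight delta into a denoised, normalized, alpha-scaled spectrum."""
    delta = (model_w - base_w).to(torch.float32).flatten()
    spec = torch.fft.fft(delta)
    # Denoise: keep only the largest-magnitude frequency components.
    mag = spec.abs()
    k = max(1, int(keep * mag.numel()))
    threshold = torch.topk(mag, k).values.min()
    spec = torch.where(mag >= threshold, spec, torch.zeros_like(spec))
    # Normalize: restore the original spectral energy after thresholding.
    kept = spec.abs().norm()
    if kept > 0:
        spec = spec * (mag.norm() / kept)
    return alpha * spec

def merge_tensor(output_base_w: torch.Tensor,
                 spectra: list[torch.Tensor]) -> torch.Tensor:
    """Sum contributions in signal space, invert, and add onto the output base."""
    total = torch.stack(spectra).sum(dim=0)
    delta = torch.fft.ifft(total).real.reshape(output_base_w.shape)
    return (output_base_w.float() + delta).to(output_base_w.dtype)
```

Each `finetune_merge` entry would contribute one such spectrum per tensor at its `alpha`, with `is_input` / `is_output` presumably gating how the first and last layers are handled.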

What is this?

QwQ is a really nifty model, but it was giving me problems with XML output, which is what I use for my thought tokens. So I thought... let's just merge it in!

I first attempted this with Qwen2.5-Coder-32B/Qwen2.5-Coder-32B-Instruct, but analysis (of the kind sketched below) showed they are not directly homologous through either Qwen2.5 or Qwen2.5-Instruct. This was quite a surprise, and it makes me wonder what the model speciation tree looks like.
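One way to run that kind of descent check (an illustrative sketch; the paths and the threshold mentioned in the comments are made up): if a finetune truly descends from a candidate base, its per-tensor deltas against that base stay small.

```python
# Illustrative only. If a finetune descends from a candidate base, the mean
# relative delta over shared tensors should be small; large everywhere means
# the two checkpoints are not homologous through that base.
from safetensors.torch import load_file

def mean_relative_delta(finetune_path: str, base_path: str) -> float:
    """Average ||finetune - base|| / ||base|| over shared, same-shape tensors."""
    ft, base = load_file(finetune_path), load_file(base_path)
    ratios = []
    for name, w in ft.items():
        b = base.get(name)
        if b is not None and b.shape == w.shape:
            num = (w.float() - b.float()).norm()
            den = b.float().norm().clamp_min(1e-8)
            ratios.append((num / den).item())
    return sum(ratios) / max(1, len(ratios))
```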


Initial Results

I haven't done much testing yet, but so far, so good.
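If you want to try it yourself, a standard transformers quick-start should work (untested snippet; the usual Qwen2.5-style chat template is assumed):

```python
# Untested quick-start; standard Qwen2.5-style chat usage is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/Awqward2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```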

Citation

If you find our work helpful, feel free to give us a cite.

```bibtex
@misc{awqward2.5-32b-instruct,
    title = {Awqward 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/awqward-2.5-32b-instruct},
    author = {Praxis Maldevide},
    month = {December},
    year = {2024}
}
```