---
license: cc-by-nc-4.0
pipeline_tag: text-generation
library_name: gguf
---

GGUF importance matrix (imatrix) quants for https://huggingface.co/abideen/AlphaMonarch-laser
The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a general purpose imatrix calibration dataset.
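As a rough sketch, an imatrix quant of this kind can be produced with the llama.cpp tools along these lines (binary names and flags vary between llama.cpp versions, and `calibration.txt`, the file names, and the `IQ2_XS` target are illustrative placeholders, not the exact settings used here):

```shell
# 1. Compute the importance matrix over a calibration text file
./imatrix -m alphamonarch-laser-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize the FP16 model using the importance matrix
./quantize --imatrix imatrix.dat alphamonarch-laser-f16.gguf alphamonarch-laser-iq2_xs.gguf IQ2_XS
```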

AlphaMonarch-laser is a DPO fine-tune of mlabonne/NeuralMonarch-7B using the argilla/OpenHermes2.5-dpo-binarized-alpha preference dataset, but it achieves better performance than mlabonne/AlphaMonarch-7B by using LaserQLoRA. We fine-tuned this model on only half of the projections, yet achieved better results than the version released by Maxime Labonne. We trained this model for 1080 steps.

| Layers | Context | Template |
| --- | --- | --- |
| 32 | 32768 | `[INST] {prompt} [/INST]`<br>`{response}` |
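The template above can be applied with a few lines of plain Python; a minimal sketch (the function name is illustrative, not part of any API):

```python
def build_prompt(prompt: str) -> str:
    """Wrap a user prompt in the Mistral-style [INST] template shown above."""
    return f"[INST] {prompt} [/INST]"

# The model then generates the {response} text after the closing [/INST] tag.
print(build_prompt("What is an importance matrix?"))
```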