
This is a LoRA merge of https://huggingface.co/152334H/miqu-1-70b-hermes2.5-qlora, shared because it was really tricky to get it to work.

Base Model: Miqu 70B (the Mistral AI leak), dequantized by 152334H. The Hermes 2.5 QLoRA finetune is also by 152334H.
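
For anyone trying to reproduce it, here is a minimal sketch of how a merge like this can be done with peft's `merge_and_unload`. The base repo id, output paths, and dtype are assumptions on my part, not the exact script used for this repo.

```python
# Minimal merge sketch (assumed workflow, not the exact script used for this repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Dequantized Miqu base in fp16 (repo id is an assumption).
base = AutoModelForCausalLM.from_pretrained(
    "152334H/miqu-1-70b-sf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the Hermes 2.5 QLoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "152334H/miqu-1-70b-hermes2.5-qlora")
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("152334H/miqu-1-70b-sf")
model.save_pretrained("miqu-openhermes-full", safe_serialization=True)
tokenizer.save_pretrained("miqu-openhermes-full")
```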

Outputs seem good, but the prompting is still a bit buggy; I'm not sure if that's an error on my part.
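
In case it helps with debugging, the format I would expect is ChatML, since that is what OpenHermes 2.5 normally uses. This is an assumption, and the merge not following it cleanly might be part of the bugginess.

```python
# Assumed ChatML prompt format (standard for OpenHermes 2.5); not verified for this merge.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about GPUs.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```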

For me it wouldn't generate text until I enabled FlashAttention-2 in Oobabooga. You need around 130 GB of VRAM: 2x A100 80GB or 2x H100 work, as do 6x 3090s or 4090s.
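
As a rough sketch, this is how the merged model can be loaded in plain transformers with FlashAttention-2 and the weights sharded across GPUs. It needs the flash-attn package installed, and the generation settings are just placeholders.

```python
# Sketch: load the merged model sharded across GPUs with FlashAttention-2 enabled.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alicecomfy/miqu-openhermes-full"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                        # shards the ~130 GB of fp16 weights across GPUs
    attn_implementation="flash_attention_2",  # FA2 was required before it would generate for me
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ChatML-style prompt (assumed format, see above).
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```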
