Notus 7B v1
Notus 7B v1 models (DPO fine-tunes of Zephyr SFT) and the datasets used to train them. More information at https://github.com/argilla-io/notus
argilla/notus-7b-v1
Note: Full DPO fine-tune of `HuggingFaceH4/zephyr-7b-sft-full` using `argilla/ultrafeedback-binarized-preferences`, trained on a VM with 8 x A100 40GB GPUs.
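As a quick orientation for the fine-tune above, here is a minimal usage sketch. It assumes notus-7b-v1 keeps Zephyr's chat format (`<|system|>` / `<|user|>` / `<|assistant|>` turns separated by `</s>`); the tokenizer's own chat template is authoritative, and the actual model call is left commented out since it needs a GPU and a large download.

```python
# Minimal sketch of prompting notus-7b-v1.
# Assumption: the model reuses Zephyr's chat format (not confirmed here).

def build_zephyr_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Zephyr-style chat format
    (assumed; prefer tokenizer.apply_chat_template in practice)."""
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"

prompt = build_zephyr_prompt(
    "You are a helpful assistant.",
    "What is DPO fine-tuning?",
)

# Running the model itself is commented out (GPU + multi-GB download):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="argilla/notus-7b-v1")
# print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```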
argilla/ultrafeedback-binarized-preferences
Note: Curated version of `openbmb/UltraFeedback`, binarized similarly to `HuggingFaceH4/ultrafeedback_binarized`, but using the average of the per-criterion ratings instead of just the critique's overall score.
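The curation strategy described in the note, picking chosen/rejected responses by the mean of the per-criterion ratings rather than a single overall critique score, can be sketched on toy data. The field names below (`completions`, `response`, `ratings`) are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch of average-rating binarization (illustrative schema, not the real one).
from statistics import mean

def binarize(example):
    """Pick chosen/rejected completions by the mean of per-criterion ratings."""
    scored = [
        (mean(c["ratings"].values()), c["response"])
        for c in example["completions"]
    ]
    scored.sort(reverse=True)
    (_, chosen), *_, (_, rejected) = scored
    return {"chosen": chosen, "rejected": rejected}

example = {
    "completions": [
        {"response": "A", "ratings": {"helpfulness": 5, "honesty": 4}},
        {"response": "B", "ratings": {"helpfulness": 2, "honesty": 3}},
        {"response": "C", "ratings": {"helpfulness": 4, "honesty": 4}},
    ]
}
print(binarize(example))  # → {'chosen': 'A', 'rejected': 'B'}
```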
TheBloke/notus-7B-v1-GGUF
Note: Quantized variants of `argilla/notus-7b-v1` in GGUF format, generated by TheBloke (thanks <3).
TheBloke/notus-7B-v1-AWQ
Note: Quantized variants of `argilla/notus-7b-v1` in AWQ format, generated by TheBloke (thanks <3).
TheBloke/notus-7B-v1-GPTQ
Note: Quantized variants of `argilla/notus-7b-v1` in GPTQ format, generated by TheBloke (thanks <3).
alvarobartt/notus-7b-v1-mlx
Note: MLX-compatible weights for `argilla/notus-7b-v1`, converted by Alvaro Bartolome (alvarobartt) and intended to be used with `mlx-examples`.
alvarobartt/notus-7b-v1-mlx-4bit
Note: MLX-compatible weights for `argilla/notus-7b-v1`, quantized to 4-bit, converted by Alvaro Bartolome (alvarobartt) and intended to be used with `mlx-examples`.
argilla/notus-7b-v1-lora
Note: LoRA DPO fine-tune of `HuggingFaceH4/zephyr-7b-sft-full` using `argilla/ultrafeedback-binarized-preferences`, trained on a VM with 8 x A100 40GB GPUs.
argilla/notus-7b-v1-lora-adapter
Note: LoRA DPO fine-tune of `HuggingFaceH4/zephyr-7b-sft-full` using `argilla/ultrafeedback-binarized-preferences`, trained on a VM with 8 x A100 40GB GPUs. Contains only the LoRA adapters, not the merged model.
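The distinction drawn in the note above, adapter-only versus merged model, boils down to LoRA merging: the adapter's low-rank update B @ A, scaled by alpha / r, is folded into the base weight matrix. A toy, framework-free sketch with tiny hand-rolled matrices (real merging is done by tooling such as peft's `merge_and_unload`):

```python
# Toy sketch of LoRA adapter merging: W_merged = W + (alpha / r) * B @ A.
# Matrices are plain lists of lists; shapes: W is m x n, B is m x r, A is r x n.

def matmul(B, A):
    """Multiply an (m x r) matrix by an (r x n) matrix."""
    return [
        [sum(B[i][k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]
        for i in range(len(B))
    ]

def merge_lora(W, A, B, alpha, r):
    """Fold the scaled low-rank update into the base weight matrix."""
    delta = matmul(B, A)  # m x n low-rank update
    scale = alpha / r
    return [
        [W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
        for i in range(len(W))
    ]

# Rank-1 example: 2x2 base weight, B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(merge_lora(W, A, B, alpha=2, r=1))  # → [[2.0, 1.0], [2.0, 3.0]]
```

Keeping only the adapters (as this repo does) stores just A and B, which is far smaller than the merged 7B weights.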