Available quantizations (branches):
- 4.0bpw
- 6.0bpw
- 8.0bpw

This is an exl2 quantization of Sdff-Ltba's LightChatAssistant-2x7B model.
With Q4 cache mode it supports a 32k context size, and the 8.0bpw quantization can be fully loaded in 16GB of VRAM.
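As a minimal sketch of the setup described above (Q4 K/V cache, 32k context), the model can be loaded with the exllamav2 library. The model directory path is a placeholder, and the exact constructor/attribute names assume a recent exllamav2 release; running this requires a CUDA GPU and the downloaded weights, so treat it as illustrative rather than a verified script.

```python
# Illustrative sketch: loading an exl2 quant with a quantized Q4 K/V cache
# using the exllamav2 library. Paths are placeholders.
from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Config,
    ExLlamaV2Cache_Q4,   # "Q4 cache mode": 4-bit quantized K/V cache
    ExLlamaV2Tokenizer,
)

# Point the config at the downloaded model directory (placeholder path).
config = ExLlamaV2Config("path/to/LightChatAssistant-2x7B-exl2")
config.max_seq_len = 32768  # 32k context, as stated in the card

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)  # Q4 cache reduces K/V VRAM use
model.load_autosplit(cache)                  # load, splitting across GPUs

tokenizer = ExLlamaV2Tokenizer(config)
```

With the Q4 cache in place of the default FP16 cache, the key/value memory shrinks enough that the 8.0bpw weights plus a 32k context fit in 16GB of VRAM, per the description above.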


Model tree for RioShiina/LightChatAssistant-2x7B-exl2