Tags: GGUF · Russian · conversational

Llama.cpp-compatible quantized versions of the original saiga_yandexgpt_8b model (8B parameters).

Download one of the versions, for example saiga_yandexgpt_8b.Q4_K_M.gguf:

wget https://huggingface.co/IlyaGusev/saiga_yandexgpt_8b_gguf/resolve/main/saiga_yandexgpt_8b.Q4_K_M.gguf

Download interact_gguf.py:

https://raw.githubusercontent.com/IlyaGusev/saiga/refs/heads/main/scripts/interact_gguf.py

How to run:

pip install llama-cpp-python fire

python3 interact_gguf.py saiga_yandexgpt_8b.Q4_K_M.gguf
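interact_gguf.py wraps a chat loop around llama-cpp-python. If you prefer calling the library directly, here is a minimal sketch of the same idea; the system prompt, context size, and sampling settings below are illustrative assumptions, not values taken from the script:

```python
# Minimal chat-loop sketch with llama-cpp-python (pip install llama-cpp-python).
# Model path, system prompt, and parameters are illustrative.

def build_messages(system_prompt, history, user_input):
    """Assemble the message list expected by create_chat_completion."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": user_input})
    return messages

def chat(model_path="saiga_yandexgpt_8b.Q4_K_M.gguf"):
    from llama_cpp import Llama  # imported lazily: only needed to actually run
    llm = Llama(model_path=model_path, n_ctx=8192)
    history = []
    while True:
        user_input = input("User: ")
        messages = build_messages("You are a helpful Russian-language assistant.",
                                  history, user_input)
        out = llm.create_chat_completion(messages=messages, temperature=0.3)
        reply = out["choices"][0]["message"]["content"]
        history.append((user_input, reply))
        print("Bot:", reply)
```

`create_chat_completion` applies the chat template embedded in the GGUF file, so the prompt formatting matches what the model was trained with.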

System requirements:

  • 9 GB RAM for q8_0; smaller quantizations need less
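The RAM figure follows from simple bits-per-weight arithmetic. A back-of-the-envelope check (the bits-per-weight averages below are approximate figures for GGUF quant types, not taken from this repo):

```python
# Approximate weight size per quantization: params * bits-per-weight / 8.
# Actual RAM use adds context buffers and runtime overhead on top.
PARAMS = 8.04e9  # parameter count from the model card

BITS_PER_WEIGHT = {   # approximate averages (assumption)
    "q8_0": 8.5,
    "q4_k_m": 4.85,
    "q2_k": 2.6,
}

def model_size_gb(quant: str) -> float:
    """Approximate on-disk / in-RAM weight size in GB for a quant type."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{model_size_gb(q):.1f} GB")
```

q8_0 comes out around 8.5 GB of weights, consistent with the ~9 GB RAM figure above once context buffers are added.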
Model size: 8.04B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.

