Active filters: gptq
| Model | Task | Downloads | Likes |
|---|---|---|---|
| TheBloke/storytime-13B-GPTQ | Text Generation | 310 | 30 |
| TheBloke/Mistral-7B-Instruct-v0.1-GPTQ | Text Generation | 1.13M | 77 |
| TheBloke/llava-v1.5-13B-GPTQ | Text Generation | 353 | 35 |
| TheBloke/zephyr-7B-beta-GPTQ | Text Generation | 4.15k | 55 |
| TheBloke/claude2-alpaca-7B-GPTQ | Text Generation | 27 | 3 |
| TheBloke/Synatra-7B-v0.3-RP-GPTQ | Text Generation | 27 | 7 |
| TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ | Text Generation | 31 | 2 |
| TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ | Text Generation | 57k | 134 |
| TheBloke/Mistral-7B-Instruct-v0.2-GPTQ | Text Generation | 440k | 49 |
| TheBloke/Llama-2-7B-ft-instruct-es-GPTQ | Text Generation | 115 | 2 |
| TheBloke/openchat-3.5-0106-GPTQ | Text Generation | 165 | 7 |
| TheBloke/Everyone-Coder-33B-Base-GPTQ | Text Generation | 27 | 3 |
| TheBloke/CodeLlama-70B-Instruct-GPTQ | Text Generation | 505 | 12 |
| TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ | | 950 | 54 |
| Qwen/Qwen1.5-7B-Chat-GPTQ-Int4 | Text Generation | 467 | 19 |
| Duxiaoman-DI/XuanYuan2-70B-Chat-4bit | Text Generation | 35 | 2 |
| TechxGenus/Meta-Llama-3-8B-GPTQ | Text Generation | 2.12k | 5 |
| kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Text Generation | 2.24k | 2 |
| nm-testing/Llama-2-7b-pruned2.4-Marlin_24 | Text Generation | 1.68k | 1 |
| neuralmagic/Mistral-7B-Instruct-v0.3-GPTQ-4bit | Text Generation | 98.4k | 15 |
| cookey39/Five_Phases_Mindset | Text Generation | 30 | 1 |
| Qwen/Qwen2-7B-Instruct-GPTQ-Int4 | Text Generation | 4.73k | 23 |
| allganize/Llama-3-Alpha-Ko-8B-Instruct-GPTQ | Text Generation | 16 | 4 |
| ArthurGprog/Codestral-22B-v0.1-FIM-Fix-GPTQ | Text Generation | 193 | 4 |
| Granther/Gemma-2-9B-Instruct-4Bit-GPTQ | Text Generation | 644 | 3 |
| neuralmagic/Meta-Llama-3-8B-Instruct-quantized.w8a16 | Text Generation | 38.5k | 2 |
| marcsun13/gemma-2-9b-it-GPTQ | Text Generation | 3.13k | 3 |
| AI-MO/NuminaMath-7B-TIR-GPTQ | Text Generation | 1.78k | 5 |
| model-scope/glm-4-9b-chat-GPTQ-Int4 | Text Generation | 51 | 6 |
| model-scope/glm-4-9b-chat-GPTQ-Int8 | Text Generation | 16 | 2 |