- This LLM seems to be trolling me?? (3 replies) · #9 opened 5 months ago by skynet24
- Reducing Latency in Locally Hosted model (1 reply) · #8 opened 7 months ago by anshulchandel
- Not working on M1 Max using llama-cpp-python · #7 opened 11 months ago by shroominic
- Missing tokenizer.model file (3 replies) · #6 opened 12 months ago by whatever1983
- not working (5 replies) · #3 opened about 1 year ago by imhsouna
- Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint · #2 opened about 1 year ago by limcheekin
- This model cannot be used normally (19 replies) · #1 opened about 1 year ago by hyunfzen