does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack. · #34 opened 2 months ago by pythonchatbot05
Trying to use llama-2-7b-chat.Q4_K_M.gguf with/without TensorFlow weights (2 replies) · #33 opened 6 months ago by cgthayer
Fine-tuning (1 reply) · #32 opened 7 months ago by MD1998
How to know which bit model is being deployed using SageMaker · #31 opened 7 months ago by sarvanand
GGUF model not loading at all (2 replies) · #30 opened 7 months ago by jdhadljasnajd
Waiting for Meta-Llama-3-8B-Instruct-gguf (1 reply) · #29 opened 7 months ago by anuragrawal
Local model changing in model.py · #28 opened 7 months ago by aaromal
Update Discord invite link · #27 opened 7 months ago by ZennyKenny
Refer to the model on Hugging Face · #26 opened 8 months ago by vidhiparikh
How to deal with the number of tokens exceeding the maximum context length (2 replies) · #25 opened 8 months ago by Janmejay123
Connect to vector store through Azure AI Search index · #24 opened 8 months ago by Janmejay123
Help with llama-2-7b-chat.Q4_K_M.gguf already in local downloads (1 reply) · #23 opened 9 months ago by RaphaellG
401 authentication error · #22 opened 9 months ago by Madhumitha19
Why the "llama-2-7b-chat.Q8_0.gguf" model is not recommended (3 replies) · #21 opened 9 months ago by AhmetOnur
RMSNorm eps value is wrong · #20 opened 9 months ago by qnixsynapse
RMSNorm eps value is wrong · #19 opened 9 months ago by qnixsynapse
Problem when making multiple requests at a time from separate chatbot instances (1 reply) · #18 opened 10 months ago by krishnapiya
Model is responding to out-of-context questions; need to know where it is taking the answer from (1 reply) · #17 opened 10 months ago by AdarshaAG
[AUTOMATED] Model Memory Requirements · #16 opened 11 months ago by model-sizer-bot
Is there any linguistic model that supports the Arabic language? · #15 opened 12 months ago by abdellatif1
How to use system prompts? (2 replies) · #14 opened 12 months ago by luissimoes
Produced output stability (2 replies) · #13 opened 12 months ago by luissimoes
Proper embedding for llama-2-7b-chat.Q4_K_M.gguf (1 reply) · #12 opened about 1 year ago by awarity-dev
"TheBloke/Llama-2-7b-Chat-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack."
1
#11 opened about 1 year ago
by
swvajanyatek
Running locally: Cannot load model "llama-2-7b-chat.Q2_K.gguf"
3
#10 opened about 1 year ago
by
Learner
Check the model's maximum input value
1
#9 opened about 1 year ago
by
minhdang
Error while using CTransformers: Model file 'llama-2-7b-chat.q4_K_M.gguf' not found
3
#7 opened about 1 year ago
by
gaukelkar
Error loading model: GGUF with latest llama_cpp_python 0.2.11 (1 reply) · #6 opened about 1 year ago by Kerlion
What model and minimum GPU requirements are required for PDF QA? (2 replies) · #5 opened about 1 year ago by saifhassan
This model gives the correct answer, but twice · #4 opened about 1 year ago by Srp7
Llama 2 GGUF streaming support? · #3 opened about 1 year ago by latinostats
Inconsistent model name (2 replies) · #2 opened about 1 year ago by Yao-Lirong
Error with RunPod · #1 opened about 1 year ago by sunshein