OSError: anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g does not appear to have a file named pytorch_model-00001-of-00006.bin.
1 reply · #55 opened 7 months ago by pasan-SK
Model can't be loaded
2 replies · #54 opened about 1 year ago by Tulakor
"RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!"
1 reply · #53 opened over 1 year ago by sneet
Fine-tuning "gpt4-x-alpaca-13b-native-4bit-128g"
#52 opened over 1 year ago by muzammil-eds
Setup on MBP
#51 opened over 1 year ago by NinjAIbot
IndexError: list index out of range
#50 opened over 1 year ago by zeyad-shaban
The model weights are not tied, and the JSON files differ from the original LlamaTokenizer files?
#49 opened over 1 year ago by deleted
error loading model: unexpectedly reached end of file
#48 opened over 1 year ago by toomox
Error: failed to load model 'ggml-model-q4_1.bin'
3 replies · #47 opened over 1 year ago by Arthur-101
Any constructive help is always welcome. :/
#46 opened over 1 year ago by TehNinja
Requantize to support latest code on llama.cpp
1 reply · #45 opened over 1 year ago by TusharRay
Hello
1 reply · #44 opened over 1 year ago by SunYangGunang
Where is the info on how to format the prompt?
#43 opened over 1 year ago by MaxLohMusic
Out of memory? Somehow
#42 opened over 1 year ago by LudoPog
Error: says this model is not found?
1 reply · #41 opened over 1 year ago by yutoliho
Running out of memory with 12GB of VRAM on 3080TI
3 replies · #39 opened over 1 year ago by faaaaaaaaaaaa
Issues running the model in Python
2 replies · #38 opened over 1 year ago by Kralos-R
How I got this to run with oobabooga/text-generation-webui
4 replies · #37 opened over 1 year ago by socter
Colab for finetuning
#36 opened over 1 year ago by robertsw
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 1048576 bytes.
2 replies · #35 opened over 1 year ago by FastRide2
Gradio error?
#34 opened over 1 year ago by JLEADO
Gpt4-x-alpaca gives gibberish numbers instead of words
4 replies · #33 opened over 1 year ago by Snim
RTX 3070, only getting about 0.38 tokens/minute
3 replies · #32 opened over 1 year ago by jojokingxp45
No response from Alpaca when using GPU version
2 replies · #31 opened over 1 year ago by ZeroH3art
For those who complain that it's censored
2 replies · #30 opened over 1 year ago by Shivero
Running an RTX 3060 with 12 GB VRAM - managed to get this model working with the method linked in the description
3 replies · #28 opened over 1 year ago by planetfrog
RuntimeError: Internal: D:\a\sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
5 replies · #27 opened over 1 year ago by deleted
For people with low-to-mid PC specs
#26 opened over 1 year ago by bhaveshNOm
Request for Colab Version
5 replies · #25 opened over 1 year ago by zaeaz
Emoji overload
2 replies · #24 opened over 1 year ago by Horned
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported. (while using Transformers)
#23 opened over 1 year ago by ShifraSec
Temporary fix for "DefaultCPUAllocator: not enough memory: you tried to allocate 13107200 bytes" error
4 replies · #22 opened over 1 year ago by xiiredrum
Issue with tokenizer using Ooga Booga?
4 replies · #21 opened over 1 year ago by mm04926412
Stuck at "Filtering content: 40% (2/5)" when cloning repository
12 replies · #20 opened over 1 year ago by splork
I'm trying to run this using oobabooga but I'm getting 0.17 tokens/second.
5 replies · #18 opened over 1 year ago by Said2k
LLaMA running slowly with this model.
1 reply · #17 opened over 1 year ago by BoreGuy1998
Error loading gpt4-x-alpaca-13b-native-4bit-128g on Alienware M15 Ryzen Edition R5 laptop
1 reply · #16 opened over 1 year ago by Omarrrrz
Out-of-memory error when launching from the oobabooga web UI
34 replies · #15 opened over 1 year ago by bhaveshNOm
torch.cuda.OutOfMemoryError
1 reply · #14 opened over 1 year ago by Forceee
Only a small number of tokens generated in oobabooga
2 replies · #13 opened over 1 year ago by synthetisoft
Is Vicuna involved in any way?
#12 opened over 1 year ago by sneedingface
Is this model able to run on a 3060 12G?
3 replies · #11 opened over 1 year ago by cyx123
CUDA out of memory
3 replies · #10 opened over 1 year ago by n01sf8
CUDA error with ooba-booga WebUI
4 replies · #9 opened over 1 year ago by Timo956
Hmmm... Problem with another language...
1 reply · #8 opened over 1 year ago by RGTails
Safetensors version?
#7 opened over 1 year ago by shalak
Error using ooba-gooba
39 replies · #6 opened over 1 year ago by blueisbest
7B model, thanks?
#4 opened over 1 year ago by tamal777
How can I use this model with GPTQ-for-LLaMa?
1 reply · #3 opened over 1 year ago by jini1114
Error for LlamaForCausalLM.from_pretrained in HuggingFace
3 replies · #2 opened over 1 year ago by Selyam