Wrong number of tensors; expected 292, got 291
ValueError: Ollama call failed with status code 500. Details: {"error":"llama runner process has terminated: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got 291"}
Are these models still broken, even after a third attempt with the updated Hugging Face repo? I've never seen this error before.
Just use llama.cpp. It has been updated with the RoPE scaling patch.
@qnixsynapse, I did use llama.cpp: I converted the 32-bit safetensors to BF16, then quantized to Q8_0, Q6_K, and Q5_K_M, and then imported into Ollama on the CLI using a Modelfile. The error above is what Ollama throws when it errors out. Something is seriously broken here that Meta needs to sort out.
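For reference, the llama.cpp side of that pipeline looks roughly like this. A minimal sketch, assuming recent llama.cpp script/binary names (convert_hf_to_gguf.py and llama-quantize; older checkouts use convert-hf-to-gguf.py and quantize) and a hypothetical model path:

```bash
# Convert the 32-bit safetensors checkpoint to a BF16 GGUF
python convert_hf_to_gguf.py /path/to/llama-3.1-8b --outtype bf16 \
  --outfile llama-3.1-8b-bf16.gguf

# Quantize the BF16 GGUF (repeat with Q6_K and Q5_K_M for the other variants)
./llama-quantize llama-3.1-8b-bf16.gguf llama-3.1-8b-Q8_0.gguf Q8_0
```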
Why are you using Ollama in the first place? Use llama.cpp; the latest version has the RoPE scaling patch.
And it isn't Meta's fault if Ollama, which is a wrapper around llama.cpp, doesn't update its bundled copy.
I have had the same error message using Llama 3.1 from Unsloth. I was trying to implement the example from the official Unsloth GitHub repo:
https://github.com/unslothai/unsloth -> https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
and the code from the YouTuber Mervin: https://www.youtube.com/@MervinPraison -> https://mer.vin/2024/07/llama-3-1-fine-tune/
Unsloth finished the conversion, and neither script reported an error while creating the GGUF file.
I tried both Mervin's code and the official code to load the GGUF from Unsloth into Ollama, and both failed with the same error:
Error: llama runner process has terminated: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got 291
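For completeness, the Ollama side of that step is just a Modelfile pointing at the exported GGUF. A minimal sketch with a hypothetical file name (Unsloth's export name may differ):

```bash
# Modelfile pointing at the GGUF that Unsloth exported
cat > Modelfile <<'EOF'
FROM ./unsloth.Q8_0.gguf
EOF

# This is the step that fails with the tensor-count error
ollama create llama-3.1-finetune -f Modelfile
ollama run llama-3.1-finetune
```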
Since Unsloth automates pulling in llama.cpp behind its own functions, I had no idea which version it had loaded.
So I went into the llama.cpp directory (I'm on Linux, so it was "cd llama.cpp"; look for the llama.cpp folder in your own project, of course)
and then I executed: sudo git reset --hard 46e12c4692a37bdd31a0432fc5153d7d22bc7f72
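Put together, the workaround is pinning the llama.cpp checkout that Unsloth cloned to that commit. A sketch, assuming the default folder layout; the rebuild step is only needed if the binaries were already compiled, and Unsloth may rebuild llama.cpp itself on the next export:

```bash
# Go into the llama.cpp checkout inside the project
cd llama.cpp

# Pin it to the known-good commit (sudo only if the clone is root-owned)
sudo git reset --hard 46e12c4692a37bdd31a0432fc5153d7d22bc7f72

# Rebuild against this commit if the binaries already exist
make clean && make
```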
And yes, I asked ChatGPT to help me with that problem. I am very happy that it is working now, but development in this field does not seem very stable these days. I hope it works on your system as well!
Best regards,
Matthias
Thank you, buddy, appreciated. I'll give it another go. Cheers.
This should be fixed in the latest Ollama.
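If you are on Linux, re-running the official install script also upgrades an existing install. A minimal sketch, with a hypothetical model name:

```bash
# Upgrade Ollama, then retry creating the model from the Modelfile
curl -fsSL https://ollama.com/install.sh | sh
ollama --version
ollama create llama-3.1-finetune -f Modelfile
```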