Can't convert to GGUF

#1
by Joseph717171 - opened

I was going to test out your model today, but I couldn't get it to convert to GGUF. 😞

```
Traceback (most recent call last):
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 1486, in <module>
    main()
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 1422, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 1291, in load_some_model
    model_plus = merge_multifile_models(models_plus)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 747, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 726, in merge_sharded
    return {name: convert(name) for name in names}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 726, in <dictcomp>
    return {name: convert(name) for name in names}
                  ^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 701, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsarnecki/opt/llama.cpp/convert.py", line 701, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                      ~~~~~^^^^^^
KeyError: 'embed_tokens.weight'
```
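For context, the traceback ends inside `merge_sharded`, which takes tensor names from one shard and looks each name up in every shard, so a `KeyError` like this suggests the checkpoint shards do not all use the same tensor-naming scheme. A minimal, self-contained sketch of that failure mode (the shard dicts below are hypothetical stand-ins for the real checkpoint files, not llama.cpp's actual code):

```python
# Sketch of the merge_sharded-style lookup that produced the traceback:
# tensor names are taken from one shard, then looked up in every shard.

def merge_sharded(shards: list[dict]) -> dict:
    names = list(shards[0].keys())
    # Raises KeyError as soon as any shard names its tensors differently.
    return {name: [shard[name] for shard in shards] for name in names}

# Hypothetical shards: one uses bare names, the other a "model." prefix.
shard_a = {"embed_tokens.weight": [0.1, 0.2]}
shard_b = {"model.embed_tokens.weight": [0.3, 0.4]}

try:
    merge_sharded([shard_a, shard_b])
    caught = None
except KeyError as exc:
    caught = str(exc)

print("KeyError:", caught)  # KeyError: 'embed_tokens.weight'
```

The same `KeyError: 'embed_tokens.weight'` shape as in the traceback above, triggered purely by inconsistent tensor names across shards.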

Hey @Joseph717171! Sorry for the late reply; I'm just now seeing this. I moved last week and went off grid to enjoy the new house. Apologies for not having a convertible version ready to go. I am making a GGUF right now and will post it today for you to use. It is an improved version of this Hermes-2-Pro model, trained on a newer, better IKM dataset. Hopefully this model is awesome, as I would love for it to be a flagship for the dataset.
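For anyone who hits the same `KeyError` before the fixed upload lands: since the crash means the shards don't all expose identical tensor names, one possible workaround is to normalize the names before merging. A minimal sketch assuming the mismatch is a `model.` prefix (the prefix and shard contents here are illustrative assumptions, not verified against this checkpoint):

```python
# Hypothetical pre-processing step: strip an assumed "model." prefix
# so every shard exposes the same tensor names before merging.

def strip_prefix(shard: dict, prefix: str = "model.") -> dict:
    # Return a copy of the shard with the prefix removed from any
    # tensor name that carries it; other names pass through unchanged.
    return {
        (name[len(prefix):] if name.startswith(prefix) else name): tensor
        for name, tensor in shard.items()
    }

# Stand-in shards with inconsistent naming schemes.
shards = [
    {"embed_tokens.weight": [0.1]},
    {"model.embed_tokens.weight": [0.2]},
]
normalized = [strip_prefix(s) for s in shards]
print(sorted(normalized[1]))  # ['embed_tokens.weight']
```

After normalization both shards answer to `embed_tokens.weight`, so a merge-by-name step no longer raises. Whether this particular prefix is the culprit for this checkpoint would need checking against the actual shard files.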

@Joseph717171

Here is the latest GGUF, and the full PyTorch version is almost done pushing to its hub: https://huggingface.co/Severian/Nexus-IKM-Hermes-2-Pro-Mistral-7B-GGUF
