Best model for RP I have ever tried

#2
by Franchu - opened

Thank you for this!
This is the best model for RP (well, at least the other one was) by a huge margin, and I have tried dozens of different models.

@LoneStriker knock, knock, are you considering making an exl2 of this? :)

Thank you, both of you!

DreamGen org

Thanks for the kind words, and thanks @LoneStriker for the quants!

DreamGen org

After some testing, I am not quite happy with this version -- but more is cooking.

Yes, it's somehow not as good as the previous 7B model.
Waiting patiently for the next one ;)


@DreamGenX Is there any idea of when exl2 or GGUF versions of the fixed model will be uploaded?

@Franchu I have trained a model with the BOS fix, and it performs better in my evals:

https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5

@DreamGenX Thank you very much. I will try the new one as soon as I reach home.
This model is so much fun to play with.

Has anybody tried the current llama.cpp with `--override-kv tokenizer.ggml.pre=str:llama3`? Previous llama.cpp versions used the wrong pre-tokenizer, reducing quality considerably, and this was fixed only yesterday. Redoing the GGUF quants with a newer llama.cpp will also fix it (if used with an equally new llama.cpp :)
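For reference, the override can be passed on the command line like this (the model filename below is just a placeholder; substitute your own GGUF quant, and note the main binary name may differ depending on your llama.cpp build):

```shell
# Force the Llama 3 pre-tokenizer on an older GGUF that was quantized
# before the pre-tokenizer metadata fix landed in llama.cpp.
./main \
  -m ./opus-v1.2-llama-3-8b.Q5_K_M.gguf \
  --override-kv tokenizer.ggml.pre=str:llama3 \
  -p "Your prompt here"
```

This only changes how the prompt is tokenized at load time; the weights in the GGUF are untouched, which is why it works with existing quants.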

Oh, sorry, I didn't read that properly. Anyway, my point is that the override should work with existing GGUFs.
