openchat_3.5
What are people's experiences with the openchat_3.5 model?
I find the replies tend to be very short and lacking in detail. If I pass a few thousand characters as the input prompt, the model ignores the vast majority of it and hallucinates heavily instead of focusing on the input, even after I explicitly told it to only use what the input gave it.
GPT-3.5 from ChatGPT performs much better, understanding the details and nuances provided through the input prompt.
I am surprised that openchat_3.5 performs so much worse than GPT-3.5; the only explanation I can think of is that openchat_3.5 was benchmarked against the March version of GPT-3.5.
What do you guys think?
For me, after a system prompt its replies come out fairly long, even after telling it to keep them short. But I feel like it's vastly better than Mistral 7B Instruct, though.
Outputs from openchat 3.5 have been consistently very short for me, at about one paragraph. I wonder why we are having such different experiences.
But yeah, it's night and day compared to Mistral 7B: Mistral would just totally ignore my input once it got long and detailed.
Really would like to know everybody's thoughts on:
- Mistral-7b-Instruct-v0.2
- OpenChat-3.5-1210
How do these two models compare at following instructions, especially for RAG?
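One thing worth ruling out when comparing the two is the prompt format, since each model expects a different turn template and a mismatched template can cause exactly the "ignores my input" behavior described above. Below is a minimal sketch of building the same grounded (RAG-style) message for both models. The template strings are assumptions based on the respective model cards; in practice, verify them with `tokenizer.apply_chat_template` rather than hardcoding.

```python
# Sketch: the same RAG-style message, wrapped in each model's assumed
# turn template. Template strings are taken from the model cards and
# may change between revisions -- verify with apply_chat_template.

def build_rag_message(context: str, question: str) -> str:
    """Instruct the model to answer only from the supplied context."""
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def openchat_prompt(message: str) -> str:
    # OpenChat-3.5 turn format (assumed from the model card)
    return f"GPT4 Correct User: {message}<|end_of_turn|>GPT4 Correct Assistant:"

def mistral_prompt(message: str) -> str:
    # Mistral-7B-Instruct turn format (assumed from the model card)
    return f"<s>[INST] {message} [/INST]"

msg = build_rag_message(
    "OpenChat-3.5-1210 is a 7B-parameter fine-tune.",
    "How many parameters does the model have?",
)
print(openchat_prompt(msg))
print(mistral_prompt(msg))
```

If either model is fed the other's template (or none at all), instruction-following quality drops sharply, so it's worth checking this before concluding one model is worse at RAG.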