A simple average of the log probabilities of the output tokens from an LLM might be all it takes to tell if the model is hallucinating.🫨
The intuition: if the model is not confident (low probabilities on the tokens it generates), it may just be inventing random stuff.
In these two papers:
1. https://aclanthology.org/2023.eacl-main.75/
2. https://arxiv.org/abs/2303.08896
the authors report that this simple method is a remarkably strong heuristic for detecting hallucinations. The beauty is that it needs only the generated token probabilities, so it can be computed directly at inference time ⚡
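Here's a minimal sketch of what this could look like with a Hugging Face causal LM (not the papers' code; the model name, prompt, and threshold are placeholders): average the log probabilities of the generated tokens and flag low-confidence outputs.

```python
# Minimal sketch: average log probability of generated tokens as a
# hallucination signal. Assumes transformers >= 4.26 for
# compute_transition_scores(); model/threshold are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        output_scores=True,            # keep per-step logits
        return_dict_in_generate=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# Log probability of each generated token under the model
token_logprobs = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)

avg_logprob = token_logprobs[0].mean().item()
generated = tokenizer.decode(out.sequences[0][inputs["input_ids"].shape[1]:])
print(generated)
print(f"average log-probability: {avg_logprob:.3f}")

# Threshold is made up for illustration; calibrate it on your own data.
if avg_logprob < -2.5:
    print("low confidence -> possible hallucination")
```

In practice the absolute threshold depends heavily on the model and the task, so you would typically calibrate it on a held-out set (or rank outputs by their average log probability instead of thresholding).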