ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models


This page hosts the ANAH-v2 model, which is trained based on InternLM2-7B. It is fine-tuned to annotate hallucinations in LLMs' responses.

For more information, please refer to our project page.

🤗 How to use the model

To annotate hallucinations, you must follow the prompt from our paper, which you can find here.

We also provide some examples of using the ANAH-v2 annotator, which you can refer to when annotating your own content.
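As a starting point, here is a minimal sketch of loading the annotator with Hugging Face Transformers. Note the assumptions: the repository ships custom code (hence `trust_remote_code=True`), and the prompt template below is a hypothetical placeholder, not the exact format from our paper, which you should substitute in.

```python
# Hedged sketch of running the ANAH-v2 annotator via Transformers.
# The prompt template is a PLACEHOLDER -- replace it with the exact
# prompt from the ANAH-v2 paper before real use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "opencompass/anah-v2"


def build_annotation_prompt(question: str, reference: str, response: str) -> str:
    """Assemble an annotation prompt (placeholder format, not the paper's)."""
    return (
        f"Question: {question}\n"
        f"Reference: {reference}\n"
        f"Response to annotate: {response}\n"
    )


def annotate(question: str, reference: str, response: str,
             max_new_tokens: int = 512) -> str:
    """Load the annotator and generate a hallucination annotation."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,   # matches the BF16 checkpoint
        device_map="auto",
        trust_remote_code=True,       # repo contains custom model code
    )
    prompt = build_annotation_prompt(question, reference, response)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the generated annotation.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Swapping in the official prompt and passing your question, reference document, and model response to `annotate` should yield sentence-level hallucination annotations.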

🖊️ Citation

If you find this project useful in your research, please consider citing:

@article{gu2024anah,
  title={ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models},
  author={Gu, Yuzhe and Ji, Ziwei and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2407.04693},
  year={2024}
}