Model Card for PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct

Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets, including CovidQA, PubmedQA, DROP, and RAGTruth. The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.

How to Get Started with the Model

Lynx is trained to detect hallucinations in RAG settings. Provided a document, question and answer, the model can evaluate whether the answer is faithful to the document.

To use the model, we recommend using the following prompt:

PROMPT = """
Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning.

--
QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION):
{question}

--
DOCUMENT:
{context}

--
ANSWER:
{answer}

--

Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE":
{{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}}
"""

The model will output the score as 'PASS' if the answer is faithful to the document, or 'FAIL' if the answer is not faithful to the document.

Inference

To run inference, you can use the Hugging Face transformers pipeline:


from transformers import pipeline

model_name = 'PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct'

# Build a text-generation pipeline on GPU; return only the newly generated tokens
pipe = pipeline(
          "text-generation",
          model=model_name,
          max_new_tokens=600,
          device="cuda",
          return_full_text=False
        )

# The model is trained in chat format, so pass the filled-in prompt as a user message
messages = [
    {"role": "user", "content": prompt},
]

result = pipe(messages)
print(result[0]['generated_text'])

Since the model is trained in chat format, ensure that you pass the prompt as a user message.
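The generated text should be a JSON object with the "REASONING" and "SCORE" keys, which you can parse to extract the verdict. A minimal sketch, assuming the model returns well-formed JSON (in practice you may want to add error handling for malformed outputs):

import json

# Parse the model output; assumes well-formed JSON
output = json.loads(result[0]['generated_text'])
reasoning = output["REASONING"]  # the model's reasoning, as bullet points
score = output["SCORE"]          # "PASS" or "FAIL"
print(score)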

For more information on training details, refer to our arXiv paper (arXiv:2407.08488).

Evaluation

The model was evaluated on PatronusAI/HaluBench.

| Model | HaluEval | RAGTruth | FinanceBench | DROP | CovidQA | PubmedQA | Overall |
|---|---|---|---|---|---|---|---|
| GPT-4o | 87.9% | 84.3% | 85.3% | 84.3% | 95.0% | 82.1% | 86.5% |
| GPT-4-Turbo | 86.0% | 85.0% | 82.2% | 84.8% | 90.6% | 83.5% | 85.0% |
| GPT-3.5-Turbo | 62.2% | 50.7% | 60.9% | 57.2% | 56.7% | 62.8% | 58.7% |
| Claude-3-Sonnet | 84.5% | 79.1% | 69.7% | 84.3% | 95.0% | 82.9% | 78.8% |
| Claude-3-Haiku | 68.9% | 78.9% | 58.4% | 84.3% | 95.0% | 82.9% | 69.0% |
| RAGAS Faithfulness | 70.6% | 75.8% | 59.5% | 59.6% | 75.0% | 67.7% | 66.9% |
| Mistral-Instruct-7B | 78.3% | 77.7% | 56.3% | 56.3% | 71.7% | 77.9% | 69.4% |
| Llama-3-Instruct-8B | 83.1% | 80.0% | 55.0% | 58.2% | 75.2% | 70.7% | 70.4% |
| Llama-3-Instruct-70B | 87.0% | 83.8% | 72.7% | 69.4% | 85.0% | 82.6% | 80.1% |
| LYNX (8B) | 85.7% | 80.0% | 72.5% | 77.8% | 96.3% | 85.2% | 82.9% |
| LYNX (70B) | 88.4% | 80.2% | 81.4% | 86.4% | 97.5% | 90.4% | 87.4% |

Citation

If you use this model, please cite:

@article{ravi2024lynx,
  title={Lynx: An Open Source Hallucination Evaluation Model},
  author={Ravi, Selvan Sunitha and Mielczarek, Bartosz and Kannappan, Anand and Kiela, Douwe and Qian, Rebecca},
  journal={arXiv preprint arXiv:2407.08488},
  year={2024}
}

Model Card Contact

@sunitha-ravi @RebeccaQian1 @presidev
