
PairJudge RM

PairJudge RM is a pairwise judge reward model designed to enhance Best-of-N sampling for mathematical reasoning tasks. Instead of assigning arbitrary absolute scores to candidate solutions, PairJudge RM compares them in pairs using chain-of-thought (CoT) reasoning and selects the best answer via a knockout tournament strategy.

Overview

  • Pairwise Judgment: Evaluates two candidate solutions simultaneously to determine which is more correct (a sketch of the knockout tournament built on this judgment follows this list).
  • Chain-of-Thought Reasoning: Leverages CoT to transparently verify each step of the candidate solutions.
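
In practice, the pairwise judge drives a single-elimination (knockout) tournament over the sampled candidates. The sketch below is a minimal illustration of that loop, not the authors' reference implementation: judge_pair is a hypothetical callable standing in for the pairwise judgment call shown in the Usage section, assumed to return 0 if the first response wins and 1 otherwise.

import random

def knockout_tournament(question, candidates, judge_pair):
    """Single-elimination sketch: pair up candidates each round and keep the winners.

    judge_pair(question, response_a, response_b) is a hypothetical callable that
    returns 0 if response_a is judged better and 1 otherwise.
    """
    pool = list(candidates)
    while len(pool) > 1:
        random.shuffle(pool)  # assumed: pairings are drawn at random each round
        winners = []
        if len(pool) % 2 == 1:
            winners.append(pool.pop())  # odd pool size: the last candidate gets a bye
        for a, b in zip(pool[0::2], pool[1::2]):
            winners.append(a if judge_pair(question, a, b) == 0 else b)
        pool = winners
    return pool[0]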

Model Architecture & Training

PairJudge RM is built by fine-tuning a pre-trained language model (e.g., Qwen2.5-7B-Instruct) on the PAIRJUDGE-432K dataset. Key training details are listed below, followed by a sketch of how they map onto a standard fine-tuning configuration:

  • Optimizer: Adam
  • Learning Rate: 1×10⁻⁵
  • Batch Size: 128
  • Epochs: 8
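
These hyperparameters map directly onto a standard Hugging Face fine-tuning setup. The sketch below is illustrative only: the output directory, the per-device/accumulation split of the batch size of 128, the BF16 flag, and the use of the AdamW variant are assumptions, not settings reported by the authors.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pairjudge-rm-sft",      # hypothetical output directory
    learning_rate=1e-5,                 # learning rate listed above
    num_train_epochs=8,                 # epochs listed above
    per_device_train_batch_size=2,      # assumed split: 2 x 8 GPUs x 8 accumulation = 128
    gradient_accumulation_steps=8,
    bf16=True,                          # assumption; the released checkpoint is stored in BF16
    optim="adamw_torch",                # Adam-family optimizer (AdamW, the Transformers default)
    logging_steps=10,
    save_strategy="epoch",
)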

Usage

Below is an example of how to use PairJudge RM for evaluating candidate solutions:

from transformers import AutoTokenizer, AutoModelForCausalLM

# The prompt template is available at https://github.com/THU-KEG/PairwiseRM/blob/main/prompt/compare_0_ex.md
with open("prompt/compare_0_ex.md", "r") as f:
    TEMPLATE = f.read()

# Load the tokenizer and model from Hugging Face
tokenizer = AutoTokenizer.from_pretrained("THU-KEG/PairJudge-RM")
model = AutoModelForCausalLM.from_pretrained("THU-KEG/PairJudge-RM", torch_dtype="auto")  # "auto" keeps the stored BF16 precision

# Example math problem and candidate solutions
question = "If one equilateral triangle in a regular hexagon has a perimeter of 21 inches, what is the hexagon’s perimeter?"
response_a = "Each side is 7 inches; hexagon perimeter is 42 inches."
response_b = "The triangle's perimeter is 21 inches; hexagon perimeter is 126 inches."

# Construct the input prompt for pairwise judgment
input_text = TEMPLATE.format(question=question, response_a=response_a, response_b=response_b)
inputs = tokenizer(input_text, return_tensors="pt")

# Generate the judgment with a chain-of-thought explanation
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
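
Since PairJudge RM is fine-tuned from an instruction-tuned Qwen2.5 checkpoint, wrapping the filled-in template with the tokenizer's chat template may match the training format more closely. This is an assumption about the intended inference setup rather than the authors' reference pipeline; the plain-prompt call above also works.

# Optional variant: format the prompt with the Qwen chat template (assumption
# about the intended inference format).
messages = [{"role": "user", "content": input_text}]
chat_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
chat_inputs = tokenizer(chat_text, return_tensors="pt")
chat_outputs = model.generate(**chat_inputs, max_new_tokens=2048)
# Decode only the newly generated tokens: the model's CoT comparison and judgment.
judgment = tokenizer.decode(
    chat_outputs[0][chat_inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(judgment)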

Citation

If you find our work useful, please consider citing our paper:

@article{liu2025PairJudge,
  title={PairJudge RM: Perform Best-of-N Sampling with Knockout Tournament},
  author={Liu, Yantao and Yao, Zijun and Min, Rui and Cao, Yixin and Hou, Lei and Li, Juanzi},
  journal={arXiv preprint arXiv:2501.13007},
  year={2025},
  note={in progress work},
  url={https://doi.org/10.48550/arXiv.2501.13007}
}