---
license: mit
datasets:
- jhu-clsp/rank1-training-data
base_model:
- Mistral/Mistral-Small-2501-24B
pipeline_tag: text-generation
tags:
- reranker
- retrieval
language:
- en
---

# rank1-mistral-2501-24b: Test-Time Compute for Reranking in Information Retrieval

📄 [Paper](https://arxiv.org/abs/2502.18418) | 🚀 [GitHub Repository](https://github.com/orionw/rank1)

rank1 is a reasoning reranker model that "thinks" before making relevance judgments. This 24B parameter model is trained from the Mistral-Small 2501 24B base model and leverages test-time compute to generate reasoning chains before deciding if a document is relevant to a query.

## Model Description

rank1 introduces a novel approach to information retrieval by generating explicit reasoning chains before making relevance judgments. Unlike traditional rerankers that directly output scores, rank1:

1. Receives a query and document pair
2. Generates a reasoning chain within a `<think>...</think>` section
3. Makes a binary relevance judgment (`true` or `false`)
4. Returns a confidence score based on the logits of the true/false tokens

This approach helps the model break down complex relevance decisions into logical steps, improving performance across diverse retrieval tasks.

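Concretely, step 4 renormalizes the probabilities of the `true` and `false` tokens. A minimal sketch of that computation, assuming the two log-probabilities have already been extracted (the Usage section below shows how to obtain them with vLLM; the numbers here are made up for illustration):

```python
import math

def relevance_score(true_logprob: float, false_logprob: float) -> float:
    """Probability of `true`, renormalized over just the true/false pair."""
    p_true = math.exp(true_logprob)
    p_false = math.exp(false_logprob)
    return p_true / (p_true + p_false)

# Hypothetical log-probabilities, for illustration only
print(relevance_score(-0.2, -1.8))  # ~0.83, i.e. judged relevant
```
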
## Model Family

| Model | Base | Description |
|:------|:-----|:------------|
| [rank1-7b](https://huggingface.co/jhu-clsp/rank1-7b) | Qwen2.5-7B | Qwen variant (7B parameters) |
| [rank1-14b](https://huggingface.co/jhu-clsp/rank1-14b) | Qwen2.5-14B | Qwen variant (14B parameters) |
| [rank1-32b](https://huggingface.co/jhu-clsp/rank1-32b) | Qwen2.5-32B | Qwen variant (32B parameters) |
| [rank1-mistral-2501-24b](https://huggingface.co/jhu-clsp/rank1-mistral-2501-24b) | Mistral-Small 2501 24B | Current model (24B parameters) |
| [rank1-llama3-8b](https://huggingface.co/jhu-clsp/rank1-llama3-8b) | Llama 3.1 8B | Llama variant (8B parameters) |

### Quantized Variants

| Model | Description |
|:------|:------------|
| [rank1-7b-awq](https://huggingface.co/jhu-clsp/rank1-7b-awq) | Quantized version of rank1-7b |
| [rank1-14b-awq](https://huggingface.co/jhu-clsp/rank1-14b-awq) | Quantized version of rank1-14b |
| [rank1-32b-awq](https://huggingface.co/jhu-clsp/rank1-32b-awq) | Quantized version of rank1-32b |
| [rank1-mistral-2501-24b-awq](https://huggingface.co/jhu-clsp/rank1-mistral-2501-24b-awq) | Quantized version of rank1-mistral-2501-24b |
| [rank1-llama3-8b-awq](https://huggingface.co/jhu-clsp/rank1-llama3-8b-awq) | Quantized version of rank1-llama3-8b |

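The AWQ checkpoints load through the same vLLM interface as the full-precision models. A minimal sketch (not the official pipeline; the arguments mirror the Usage example below, and `quantization="awq"` simply makes the quantization scheme explicit):

```python
from vllm import LLM

# Load the AWQ-quantized 24B variant with vLLM (sketch; adjust to your hardware)
model = LLM(
    model="jhu-clsp/rank1-mistral-2501-24b-awq",
    quantization="awq",
    tensor_parallel_size=1,
    trust_remote_code=True,
    max_model_len=16000,
    gpu_memory_utilization=0.9,
)
```
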
## Associated Data and Resources

| Resource | Description |
|:---------|:------------|
| [rank1-r1-msmarco](https://huggingface.co/datasets/jhu-clsp/rank1-r1-msmarco) | All R1 output examples from MS MARCO |
| [rank1-training-data](https://huggingface.co/datasets/jhu-clsp/rank1-training-data) | Training data used for rank1 models |
| [rank1-run-files](https://huggingface.co/datasets/jhu-clsp/rank1-run-files) | Pre-computed run files for reranking the top 100 retrieved documents |
| [GitHub Repository](https://github.com/orionw/rank1) | Official rank1 repository |

## Usage

The official usage code in the [GitHub repository](https://github.com/orionw/rank1) handles edge cases; for simple use cases, the minimal example below works.

<details>
<summary>Click to expand: Minimal example with vLLM</summary>

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import math

# Initialize the model with vLLM
model = LLM(
    model="jhu-clsp/rank1-mistral-2501-24b",
    tensor_parallel_size=1,   # Number of GPUs
    trust_remote_code=True,
    max_model_len=16000,      # Context length
    gpu_memory_utilization=0.9,
    dtype="float16",
)

# Set up sampling parameters
sampling_params = SamplingParams(
    temperature=0,
    max_tokens=8192,
    logprobs=20,
    stop=["</think> true", "</think> false"],
    skip_special_tokens=False
)

# Prepare the prompt
def create_prompt(query, document):
    return (
        "Determine if the following passage is relevant to the query. "
        "Answer only with 'true' or 'false'.\n"
        f"Query: {query}\n"
        f"Passage: {document}\n"
        "<think>"
    )

# Example usage
query = "What are the effects of climate change?"
document = "Climate change leads to rising sea levels, extreme weather events, and disruptions to ecosystems. These effects are caused by increasing greenhouse gas concentrations in the atmosphere due to human activities."

# Generate prediction
prompt = create_prompt(query, document)
outputs = model.generate([prompt], sampling_params)

# Extract the reasoning chain and the final-token log-probabilities
output = outputs[0].outputs[0]
text = output.text
final_logits = output.logprobs[-1]

# Get token IDs for the " true" / " false" tokens
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/rank1-mistral-2501-24b")
true_token = tokenizer(" true", add_special_tokens=False).input_ids[0]
false_token = tokenizer(" false", add_special_tokens=False).input_ids[0]

# Calculate relevance score (probability of "true")
true_logit = final_logits[true_token].logprob
false_logit = final_logits[false_token].logprob
true_score = math.exp(true_logit)
false_score = math.exp(false_logit)
relevance_score = true_score / (true_score + false_score)

print(f"Reasoning chain: {text}")
print(f"Relevance score: {relevance_score}")
```

</details>

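To rerank a set of candidate documents for a query, the same scoring logic can be applied to each (query, document) pair and the candidates sorted by score. A minimal sketch building on the objects defined in the example above (`model`, `sampling_params`, `create_prompt`, `true_token`, `false_token`); this batching is illustrative rather than the official pipeline, which lives in the GitHub repository:

```python
def rerank(query, documents):
    """Score each document against the query and return (score, document) pairs, best first."""
    prompts = [create_prompt(query, doc) for doc in documents]
    outputs = model.generate(prompts, sampling_params)  # vLLM batches the requests internally

    scored = []
    for doc, out in zip(documents, outputs):
        final_logprobs = out.outputs[0].logprobs[-1]
        # Fall back to a very low log-probability if a token is missing from the top-k logprobs
        true_lp = final_logprobs[true_token].logprob if true_token in final_logprobs else -1e4
        false_lp = final_logprobs[false_token].logprob if false_token in final_logprobs else -1e4
        p_true = math.exp(true_lp)
        scored.append((p_true / (p_true + math.exp(false_lp)), doc))

    return sorted(scored, key=lambda pair: pair[0], reverse=True)

ranked = rerank(query, [document, "Bananas are rich in potassium."])
for score, doc in ranked:
    print(f"{score:.3f}  {doc[:60]}")
```
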
## Performance

rank1-mistral-2501-24b demonstrates strong performance on retrieval benchmarks, particularly on tasks requiring complex reasoning. The model's ability to "think through" relevance decisions makes it especially effective for nuanced topics.

For specific benchmark results and comparisons with other models, please refer to the paper and the official GitHub repository.

## Installation

Please see the [GitHub repository](https://github.com/orionw/rank1) for detailed installation instructions.

## MTEB Integration

rank1 is compatible with the [MTEB benchmarking framework](https://github.com/embeddings-benchmark/mteb):

```python
from mteb import MTEB
from rank1 import rank1  # From the official repo

# Initialize the model
model = rank1(
    model_name_or_path="jhu-clsp/rank1-mistral-2501-24b",
    num_gpus=1,
    device="cuda"
)

# Run evaluation on specific tasks
evaluation = MTEB(tasks=["NevIR"])
results = evaluation.run(model)
```

## Citation

If you use rank1 in your research, please cite our work:

```bibtex
@misc{weller2025rank1testtimecomputereranking,
  title={Rank1: Test-Time Compute for Reranking in Information Retrieval},
  author={Orion Weller and Kathryn Ricci and Eugene Yang and Andrew Yates and Dawn Lawrie and Benjamin Van Durme},
  year={2025},
  eprint={2502.18418},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2502.18418},
}
```

## License

[MIT License](https://github.com/orionw/rank1/blob/main/LICENSE)