---
language:
  - vi
license: apache-2.0
library_name: transformers
tags:
  - transformers
  - cross-encoder
  - rerank
datasets:
  - unicamp-dl/mmarco
pipeline_tag: text-classification
widget:
  - text: tỉnh nào có diện tích lớn nhất việt nam
    output:
      - label: nghệ an có diện tích lớn nhất việt nam
        score: 0.99999
      - label: bắc ninh có diện tích nhỏ nhất việt nam
        score: 0.0001
---

# Reranker

* [Usage](#usage)
    * [Using FlagEmbedding](#using-flagembedding)
    * [Using Huggingface transformers](#using-huggingface-transformers)
* [Fine-tune](#fine-tune)
    * [Data format](#data-format)
* [Performance](#performance)
* [Contact](#contact)
* [Support The Project](#support-the-project)
* [Citation](#citation)

Unlike an embedding model, a reranker takes a query and a document together as input and directly outputs a similarity
score instead of an embedding.
You can obtain a relevance score by feeding a query-passage pair to the reranker, and the score can be mapped to a
float in [0, 1] with the sigmoid function.
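
As a quick illustration of that mapping, here is a minimal sigmoid in plain Python; the logit values are taken from the usage examples further down:

```python
import math

def sigmoid(x: float) -> float:
    # Squash a raw reranker logit into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# A large positive logit maps close to 1, a negative one close to 0.
print(sigmoid(13.72))  # ~0.999999
print(sigmoid(-8.53))  # ~0.000197
```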

## Usage

### Using FlagEmbedding

```shell
pip install -U FlagEmbedding
```

Get relevance scores (higher scores indicate more relevance):

```python
from FlagEmbedding import FlagReranker

reranker = FlagReranker('namdp-ptit/ViRanker',
                        use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation

score = reranker.compute_score(['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'])
print(score)  # 13.71875

# You can map the score into [0, 1] by setting normalize=True, which applies a sigmoid function to the score
score = reranker.compute_score(['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
                               normalize=True)
print(score)  # 0.99999889840464

scores = reranker.compute_score(
    [
        ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
        ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
    ]
)
print(scores)  # [13.7265625, -8.53125]

# You can map the scores into [0, 1] by setting normalize=True, which applies a sigmoid function to the scores
scores = reranker.compute_score(
    [
        ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
        ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
    ],
    normalize=True
)
print(scores)  # [0.99999889840464, 0.00019716942196222918]
```

### Using Huggingface transformers

```shell
pip install -U transformers
```

Get relevance scores (higher scores indicate more relevance):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('namdp-ptit/ViRanker')
model = AutoModelForSequenceClassification.from_pretrained('namdp-ptit/ViRanker')
model.eval()

pairs = [
    ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
    ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)
```
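
The `transformers` path returns raw logits; to get [0, 1] scores comparable to FlagEmbedding's `normalize=True`, you can apply a sigmoid yourself. The tensor below simply reuses the logit values from the FlagEmbedding example above:

```python
import torch

# Raw logits, e.g. as produced by the model in the snippet above.
scores = torch.tensor([13.7265625, -8.53125])

# torch.sigmoid maps each logit into [0, 1].
probs = torch.sigmoid(scores)
print(probs)
```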

## Fine-tune

### Data Format

The training data should be a JSON Lines file, where each line is a dict of the form:

```
{"query": str, "pos": List[str], "neg": List[str]}
```

`query` is the query text, `pos` is a list of positive passages, and `neg` is a list of negative passages. If you have
no negative passages for a query, you can randomly sample some from the entire corpus as negatives.

In addition, for each query in the training data we used LLMs to generate hard negatives, prompting the LLM to create a
document that contradicts the documents in `pos`.
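
For illustration, a single training line, reusing the query and passages from the usage examples above, could look like:

```json
{"query": "ai là vị vua cuối cùng của việt nam", "pos": ["vua bảo đại là vị vua cuối cùng của nước ta"], "neg": ["lý nam đế là vị vua đầu tiên của nước ta"]}
```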

## Performance

Below is a comparison of our results against some other pre-trained cross-encoders on
the [MS MMarco Passage Reranking - Vi - Dev](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.

| Model-Name                                                                                                                              | NDCG@3     | MRR@3      | NDCG@5     | MRR@5      | NDCG@10    | MRR@10     | Docs / Sec |
|-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| [namdp-ptit/ViRanker](https://huggingface.co/namdp-ptit/ViRanker)                                                                       | **0.6815** | **0.6641** | 0.6983     | **0.6894** | 0.7302     | **0.7107** | 2.02       |
| [itdainb/PhoRanker](https://huggingface.co/itdainb/PhoRanker)                                                                           | 0.6625     | 0.6458     | **0.7147** | 0.6731     | **0.7422** | 0.6830     | **15**     |
| [kien-vu-uet/finetuned-phobert-passage-rerank-best-eval](https://huggingface.co/kien-vu-uet/finetuned-phobert-passage-rerank-best-eval) | 0.0963     | 0.0883     | 0.1396     | 0.1131     | 0.1681     | 0.1246     | **15**     |
| [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3)                                                               | 0.6087     | 0.5841     | 0.6513     | 0.6062     | 0.6872     | 0.6209     | 3.51       |
| [BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma)                                                         | 0.6088     | 0.5908     | 0.6446     | 0.6108     | 0.6785     | 0.6249     | 1.29       |

## Contact

**Email**: [email protected]

**LinkedIn**: [Dang Phuong Nam](https://www.linkedin.com/in/dang-phuong-nam-157912288/)

**Facebook**: [Phương Nam](https://www.facebook.com/phuong.namdang.7146557)

## Support The Project

If you find this project helpful and wish to support its ongoing development, here are some ways you can contribute:

1. **Star the Repository**: Show your appreciation by starring the repository. Your support motivates further
   development and enhancements.
2. **Contribute**: We welcome your contributions! You can help by reporting bugs, submitting pull requests, or
   suggesting new features.
3. **Donate**: If you’d like to support financially, consider making a donation. You can donate through:
    - Vietcombank: 9912692172 - DANG PHUONG NAM

Thank you for your support!

## Citation

Please cite this work as:

```bibtex
@misc{ViRanker,
  title={ViRanker: A Cross-encoder Model for Vietnamese Text Ranking},
  author={Nam Dang Phuong},
  year={2024},
  publisher={Huggingface},
}
```