---
license: apache-2.0
---
# Beam Retrieval: General End-to-End Retrieval for Multi-Hop Question Answering (Zhang et al., 2023)
Unofficial mirror of [Beam Retriever](https://github.com/canghongjian/beam_retriever).
This is the fine-tuned **encoder-only** [DeBERTa-v3-large](https://huggingface.co/microsoft/deberta-v3-large) component of the Beam Retrieval model, which can be used for maximum inner product search.
## Usage
```python
from transformers import DebertaV2Model
finetuned_encoder = DebertaV2Model.from_pretrained('scholarly-shadows-syndicate/beam_retriever_unofficial_encoder_only')
```
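Below is a minimal sketch of how the encoder could be used for the maximum inner product search mentioned above. The mean pooling over the last hidden state, the tokenizer choice (the base `microsoft/deberta-v3-large` tokenizer), and the example question and passages are assumptions for illustration; the model card does not prescribe a pooling strategy, and the original Beam Retrieval pipeline scores question–passage sequences jointly rather than via standalone embeddings.
```python
import torch
from transformers import AutoTokenizer, DebertaV2Model

model_id = 'scholarly-shadows-syndicate/beam_retriever_unofficial_encoder_only'
# Tokenizer of the base model; swap in the mirror's tokenizer if it ships one (assumption).
tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-v3-large')
encoder = DebertaV2Model.from_pretrained(model_id)
encoder.eval()

def embed(texts):
    # Tokenize a batch of texts and mean-pool the last hidden state into one vector per text.
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state      # (batch, seq_len, dim)
    mask = inputs['attention_mask'].unsqueeze(-1)         # (batch, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, dim)

question = ["Which laboratory did the inventor of the transistor work at?"]
passages = [
    "The transistor was invented at Bell Labs in 1947.",
    "The Eiffel Tower is located in Paris.",
]

# Maximum inner product search: rank passages by dot product with the question embedding.
scores = embed(question) @ embed(passages).T              # (1, num_passages)
best = scores.argmax(dim=-1).item()
print(passages[best])
```
Mean pooling is used here only as a simple default; a CLS-token embedding or the scoring head from the original repository may work better, depending on how the encoder was fine-tuned.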
## Citations
```bibtex
@article{Zhang2023BeamRG,
title={Beam Retrieval: General End-to-End Retrieval for Multi-Hop Question Answering},
author={Jiahao Zhang and H. Zhang and Dongmei Zhang and Yong Liu and Sheng Huang},
journal={ArXiv},
year={2023},
volume={abs/2308.08973},
url={https://api.semanticscholar.org/CorpusID:261030563}
}
```
```bibtex
@article{He2020DeBERTaDB,
title={DeBERTa: Decoding-enhanced BERT with Disentangled Attention},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
journal={ArXiv},
year={2020},
volume={abs/2006.03654},
url={https://api.semanticscholar.org/CorpusID:219531210}
}
```