michaelrglass committed on
Commit c2276df
1 Parent(s): db4a647

Added citation, github repo and paper link to model card.

Files changed (1): README.md (+90 -0)
README.md CHANGED

---
tags:
- information retrieval
- reranking
license: apache-2.0
---

# Model Card for FEVER Question Encoder in Re2G

# Model Details

> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.

<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">

## Training, Evaluation and Inference
The code for training, evaluation, and inference is in our GitHub repository, in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).

## Usage

The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py) script.
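
For a quick check outside that script, a minimal sketch of loading the encoder with the `transformers` DPR classes is shown below. The model ID is a placeholder for wherever this checkpoint is hosted, and the DPR tokenizer/encoder classes are assumed to apply because the parent model is a DPR question encoder.

```python
# Minimal sketch (assumptions noted above): encode a FEVER-style claim to a query vector.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

MODEL_ID = "ibm/re2g-qry-encoder-fever"  # placeholder -- substitute the actual repo name

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(MODEL_ID)
encoder = DPRQuestionEncoder.from_pretrained(MODEL_ID)

inputs = tokenizer("The Eiffel Tower is located in Berlin.", return_tensors="pt")
query_vector = encoder(**inputs).pooler_output  # shape (1, 768); use as the ANN query
```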

## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.

- **Developed by:** IBM
- **Shared by:** IBM
- **Model type:** Question Encoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
  - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)

# Uses

## Direct Use
This model can be used to encode a question into a vector that serves as a query against an Approximate Nearest Neighbors index. It must be used in combination with a context encoder that encodes passages to vectors and indexes them.
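
As an illustration of that pairing, the sketch below scores passages against a query by inner product, DPR-style. Both model IDs are placeholders; the context-encoder ID in particular is an assumption and does not come from this card.

```python
# Sketch under the assumptions above: inner-product scoring of passages against a query.
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

QRY_ID = "ibm/re2g-qry-encoder-fever"  # placeholder ID for this question encoder
CTX_ID = "ibm/re2g-ctx-encoder-fever"  # assumed ID for the matching context encoder

qry_tok = DPRQuestionEncoderTokenizer.from_pretrained(QRY_ID)
qry_enc = DPRQuestionEncoder.from_pretrained(QRY_ID)
ctx_tok = DPRContextEncoderTokenizer.from_pretrained(CTX_ID)
ctx_enc = DPRContextEncoder.from_pretrained(CTX_ID)

claim = "The Eiffel Tower is located in Berlin."
passages = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Berlin is the capital and largest city of Germany.",
]

with torch.no_grad():
    q = qry_enc(**qry_tok(claim, return_tensors="pt")).pooler_output
    p = ctx_enc(**ctx_tok(passages, return_tensors="pt", padding=True)).pooler_output

# Higher inner product means the passage is a better match for the claim.
scores = q @ p.T
print(scores)
```

In the full Re2G pipeline the passage vectors would be pre-computed and stored in an ANN index rather than encoded on the fly.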

# Citation

**BibTeX:**

```bibtex
@inproceedings{glass-etal-2022-re2g,
    title = "{R}e2{G}: Retrieve, Rerank, Generate",
    author = "Glass, Michael  and
      Rossiello, Gaetano  and
      Chowdhury, Md Faisal Mahbub  and
      Naik, Ankita  and
      Cai, Pengshan  and
      Gliozzo, Alfio",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.194",
    doi = "10.18653/v1/2022.naacl-main.194",
    pages = "2701--2715",
    abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```