# GenRead: FiD model trained on WebQ

- This is the model checkpoint of GenRead [2], based on T5-3B and trained on the WebQ dataset [1].

- Hyperparameters: trained on 8 x 80GB A100 GPUs; batch size 16; AdamW optimizer; learning rate 5e-5; best dev performance at 11500 steps.
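The reported hyperparameters can be collected into a training configuration. A minimal sketch in Python; the field names are illustrative and do not come from the GenRead codebase, and whether the batch size is per-GPU or global is not specified in this card:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical container mirroring the hyperparameters reported above."""
    num_gpus: int = 8            # 8 x 80GB A100
    batch_size: int = 16         # as reported; per-GPU vs. global is an assumption
    optimizer: str = "adamw"
    learning_rate: float = 5e-5
    best_dev_step: int = 11500   # step at which the best dev checkpoint was selected

config = TrainConfig()
```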

## References

[1] Semantic Parsing on Freebase from Question-Answer Pairs. EMNLP 2013.

[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022.

## Model performance

We evaluate the model on the WebQ dataset; it achieves an exact match (EM) score of 54.36.
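Exact match is typically computed with SQuAD-style answer normalization before string comparison. A sketch under that assumption (the exact normalization used in the GenRead evaluation may differ in details):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    articles (a/an/the), and extra whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """A prediction scores 1 if it matches any gold answer after normalization."""
    pred = normalize_answer(prediction)
    return any(pred == normalize_answer(g) for g in gold_answers)

def em_score(predictions: list[str], golds: list[list[str]]) -> float:
    """Dataset-level EM: percentage of questions with a matching prediction."""
    matches = [exact_match(p, g) for p, g in zip(predictions, golds)]
    return 100.0 * sum(matches) / len(matches)
```

For example, `em_score(["paris", "London!"], [["Paris"], ["Berlin"]])` returns 50.0, since only the first prediction matches a gold answer after normalization.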

License: cc-by-4.0

