FiD model trained on TQA

-- This is a model checkpoint of FiD (Fusion-in-Decoder) [2], based on T5 with 3B parameters and trained on the TriviaQA (TQA) dataset [1].

-- Training setup: 8 x 40GB A100 GPUs; batch size 8; AdamW optimizer; learning rate 3e-5; 30,000 steps.
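FiD encodes the question paired with each retrieved passage independently, and the decoder then attends over all encoded passages jointly. The sketch below shows the per-passage input format used by FiD; the helper name `build_fid_inputs` is illustrative, not part of any released API:

```python
def build_fid_inputs(question, passages):
    """Format (question, passage) pairs for FiD's encoder.

    Each retrieved passage is a (title, text) pair; FiD encodes every
    resulting string separately, and the decoder fuses them at generation time.
    """
    return [
        f"question: {question} title: {title} context: {text}"
        for title, text in passages
    ]

# Example: two retrieved passages for one question.
inputs = build_fid_inputs(
    "Who wrote Hamlet?",
    [("William Shakespeare", "Shakespeare wrote many plays."),
     ("Hamlet", "Hamlet is a tragedy.")],
)
```

Each formatted string would then be tokenized and encoded separately before fusion in the decoder.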

References:

[1] TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. ACL 2017.

[2] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL 2021.

Model performance

We evaluate the model on the TQA test set; the exact match (EM) score is 66.1.
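EM here refers to exact match after the standard open-domain QA answer normalization (lowercasing, removing punctuation and the articles "a", "an", "the", and collapsing whitespace). A minimal sketch, with function names chosen for illustration:

```python
import re
import string

def normalize_answer(s):
    """Apply standard open-domain QA normalization to an answer string."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop English articles
    return " ".join(s.split())             # collapse whitespace

def exact_match(prediction, gold_answers):
    """Return 1.0 if the normalized prediction matches any gold answer."""
    norm_golds = {normalize_answer(g) for g in gold_answers}
    return float(normalize_answer(prediction) in norm_golds)
```

For example, `exact_match("The Eiffel Tower!", ["eiffel tower"])` counts as a match, since normalization strips the article and punctuation. The reported 66.1 is the mean of this score over the test set.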
