- text: "Texto de exemplo em português"
inference: false
---

# PTT5-base-msmarco-pt-100k-v1 Reranker finetuned on Portuguese MS MARCO
## Introduction
ptt5-base-msmarco-pt-100k-v1 is a T5-based model pretrained on the BrWAC corpus and finetuned on a Portuguese translation of the MS MARCO passage ranking dataset. In version v1, the dataset was translated with the [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) NMT models. The model was finetuned for 100k steps.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-msmarco-pt-100k'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
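
The snippet above only loads the checkpoint. As a rough illustration of how a T5 reranker of this kind can score a query-passage pair, here is a minimal sketch in the spirit of monoT5. The input template (`Query: ... Document: ... Relevant:`), the example texts, and the `true`/`false` target words are assumptions, not taken from this card; check the mMARCO repository for the exact prompt and target tokens used during finetuning.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-msmarco-pt-100k'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

query = "qual a capital do Brasil?"          # hypothetical example query
passage = "Brasília é a capital do Brasil."  # hypothetical example passage

# Assumed monoT5-style template; the template used during finetuning may differ.
input_text = f"Query: {query} Document: {passage} Relevant:"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)

# Assumed relevance target words ("true"/"false"); with a Portuguese SentencePiece
# vocabulary they may split into several pieces, so only the first piece is compared here.
true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]

with torch.no_grad():
    # A single decoder step: the first generated position predicts the relevance word.
    decoder_input_ids = torch.full(
        (inputs.input_ids.size(0), 1),
        model.config.decoder_start_token_id,
        dtype=torch.long,
    )
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[:, 0, :]
    log_probs = torch.nn.functional.log_softmax(logits[:, [false_id, true_id]], dim=-1)
    score = log_probs[0, 1].item()  # log-probability assigned to the "relevant" side

print(score)  # higher (closer to 0) means the passage is judged more relevant
```

To rerank a list of candidate passages, compute this score for each query-passage pair and sort the passages by descending score.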

# Citation
If you use ptt5-base-msmarco-pt-100k-v1, please cite:

@misc{bonifacio2021mmarco,
  title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
  author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
  year={2021},
  eprint={2108.13897},
  archivePrefix={arXiv},