Update README.md
|
|
# NorT5 x-small

<img src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%>

The official release of a new generation of NorT5 language models described in the paper [**NorBench — A Benchmark for Norwegian Language Models**](https://arxiv.org/abs/2305.03880). Please read the paper to learn more details about the model.


## Other sizes:
- [NorT5 xs (32M)](https://huggingface.co/ltg/nort5-xs)
- [NorT5 small (88M)](https://huggingface.co/ltg/nort5-small)
- [NorT5 base (228M)](https://huggingface.co/ltg/nort5-base)
- [NorT5 large (808M)](https://huggingface.co/ltg/nort5-large)

## Encoder-only NorBERT siblings:
- [NorBERT 3 xs (15M)](https://huggingface.co/ltg/norbert3-xs)
- [NorBERT 3 small (40M)](https://huggingface.co/ltg/norbert3-small)
- [NorBERT 3 base (123M)](https://huggingface.co/ltg/norbert3-base)
- [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large)

## Example usage

This model currently needs a custom wrapper from `modeling_nort5.py`, so you should load it with `trust_remote_code=True`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-xs")
t5 = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-xs", trust_remote_code=True)


# MASKED LANGUAGE MODELING

# ... (elided in the diff: building input_tensor and calling
# model.generate(input_tensor, max_new_tokens=50, num_beams=4, ...))

tokenizer.decode(output_tensor.squeeze())

# should output: [BOS]ˈoppvarming, det vil si at det skjer en endring i temperaturen i et medium, f.eks. en ovn eller en radiator, slik at den blir varmere eller kaldere, eller at den blir varmere eller kaldere, eller at den blir
```

The following classes are currently implemented: `AutoModel`, `AutoModelForSeq2SeqLM`.
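
The two classes can be used interchangeably for loading; a minimal sketch, assuming the `AutoModel` entry exposes the bare encoder-decoder without the generation head (untested here; requires network access to download the checkpoint):

```python
from transformers import AutoModel, AutoModelForSeq2SeqLM

# Bare encoder-decoder backbone (no LM head). trust_remote_code=True is
# required because the architecture is defined in the repository's
# modeling_nort5.py rather than in the transformers library itself.
backbone = AutoModel.from_pretrained("ltg/nort5-xs", trust_remote_code=True)

# Full conditional-generation model, as in the example above.
t5 = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-xs", trust_remote_code=True)
```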

## Cite us

```bibtex
@inproceedings{samuel-etal-2023-norbench,
    title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
    author = "Samuel, David  and
      Kutuzov, Andrey  and
      Touileb, Samia  and
      Velldal, Erik  and
      {\O}vrelid, Lilja  and
      R{\o}nningstad, Egil  and
      Sigdel, Elina  and
      Palatkina, Anna",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.61",
    pages = "618--633",
    abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.",
}
```