|
--- |
|
language: |
|
- 'no' |
|
- nb |
|
- nn |
|
inference: false |
|
tags: |
|
- T5 |
|
- NorT5 |
|
- Norwegian |
|
- encoder-decoder |
|
license: apache-2.0 |
|
pipeline_tag: text2text-generation |
|
--- |
|
|
|
# NorT5 x-small |
|
|
|
<img src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%> |
|
|
|
The official release of a new generation of NorT5 language models described in the paper [**NorBench — A Benchmark for Norwegian Language Models**](https://arxiv.org/abs/2305.03880). Please read the paper for more details about the model.
|
|
|
|
|
## Other sizes: |
|
- [NorT5 xs (32M)](https://huggingface.co/ltg/nort5-xs) |
|
- [NorT5 small (88M)](https://huggingface.co/ltg/nort5-small) |
|
- [NorT5 base (228M)](https://huggingface.co/ltg/nort5-base) |
|
- [NorT5 large (808M)](https://huggingface.co/ltg/nort5-large) |
|
|
|
|
|
## Encoder-only NorBERT siblings: |
|
- [NorBERT 3 xs (15M)](https://huggingface.co/ltg/norbert3-xs) |
|
- [NorBERT 3 small (40M)](https://huggingface.co/ltg/norbert3-small) |
|
- [NorBERT 3 base (123M)](https://huggingface.co/ltg/norbert3-base) |
|
- [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large) |
|
|
|
|
|
## Example usage |
|
|
|
This model currently requires a custom wrapper from `modeling_nort5.py`, so you should load it with `trust_remote_code=True`.
|
|
|
|
|
```python |
|
import torch |
|
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM |
|
|
|
tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-xs") |
|
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-xs", trust_remote_code=True) |
|
|
|
|
|
# MASKED LANGUAGE MODELING |
|
|
|
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er[MASK_0]." |
|
encoding = tokenizer(sentence) |
|
|
|
input_tensor = torch.tensor([encoding.input_ids]) |
|
# ids 7 and 8 are special sentinel tokens that delimit the generated infill
output_tensor = model.generate(input_tensor, decoder_start_token_id=7, eos_token_id=8)
|
tokenizer.decode(output_tensor.squeeze(), skip_special_tokens=True) |
|
|
|
# should output: å varme opp |
|
|
|
|
|
# PREFIX LANGUAGE MODELING |
|
# you need to finetune this model first, or use a `nort5-{size}-lm` model, which is already finetuned on prefix language modeling
|
|
|
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er (Wikipedia) " |
|
encoding = tokenizer(sentence) |
|
|
|
input_tensor = torch.tensor([encoding.input_ids]) |
|
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False) |
|
tokenizer.decode(output_tensor.squeeze()) |
|
|
|
# should output: [BOS]ˈoppvarming, det vil si at det skjer en endring i temperaturen i et medium, f.eks. en ovn eller en radiator, slik at den blir varmere eller kaldere, eller at den blir varmere eller kaldere, eller at den blir |
|
``` |
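
For prefix generation out of the box, the finetuned `nort5-{size}-lm` checkpoints mentioned above can be used directly. Below is a minimal sketch, assuming the x-small variant follows that naming pattern as `ltg/nort5-xs-lm`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# assumption: the checkpoint name follows the `nort5-{size}-lm` pattern
tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-xs-lm")
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-xs-lm", trust_remote_code=True)

# "Usage example: Electric heating. The definition of the word heating is"
prompt = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er"
input_tensor = torch.tensor([tokenizer(prompt).input_ids])

# deterministic beam-search continuation of the prefix
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False)
print(tokenizer.decode(output_tensor.squeeze(), skip_special_tokens=True))
```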
|
|
|
|
|
The following classes are currently implemented: `AutoModel`, `AutoModelForSeq2SeqLM`. |
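
If you only need contextual representations rather than generation, the bare encoder-decoder can be loaded through `AutoModel`. A minimal sketch, assuming the custom wrapper follows the standard Hugging Face seq2seq forward signature (`input_ids` plus `decoder_input_ids`, returning decoder hidden states):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-xs")
model = AutoModel.from_pretrained("ltg/nort5-xs", trust_remote_code=True)

input_ids = torch.tensor([tokenizer("Elektrisk oppvarming.").input_ids])

# assumption: the wrapper accepts decoder_input_ids like standard HF seq2seq models
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # decoder hidden states: (batch, seq_len, hidden)
```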
|
|
|
## Cite us |
|
|
|
```bibtex |
|
@inproceedings{samuel-etal-2023-norbench, |
|
title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models", |
|
author = "Samuel, David and |
|
Kutuzov, Andrey and |
|
Touileb, Samia and |
|
Velldal, Erik and |
|
{\O}vrelid, Lilja and |
|
R{\o}nningstad, Egil and |
|
Sigdel, Elina and |
|
Palatkina, Anna", |
|
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", |
|
month = may, |
|
year = "2023", |
|
address = "T{\'o}rshavn, Faroe Islands", |
|
publisher = "University of Tartu Library", |
|
url = "https://aclanthology.org/2023.nodalida-1.61", |
|
pages = "618--633", |
|
abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.", |
|
} |
|
|
|
``` |