davda54 committed on
Commit 0cded76
1 Parent(s): 07fae77

Update README.md

Files changed (1)
  1. README.md +9 -10
README.md CHANGED
@@ -26,23 +26,22 @@ The official release of a new generation of NorBERT language models described in
 - [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large)
 
 ## Generative NorT5 siblings:
-- [NorT5 xs (15M)](https://huggingface.co/ltg/nort5-xs)
-- [NorT5 small (40M)](https://huggingface.co/ltg/nort5-small)
-- [NorT5 base (123M)](https://huggingface.co/ltg/nort5-base)
-- [NorT5 large (323M)](https://huggingface.co/ltg/nort5-large)
+- [NorT5 xs (32M)](https://huggingface.co/ltg/nort5-xs)
+- [NorT5 small (88M)](https://huggingface.co/ltg/nort5-small)
+- [NorT5 base (228M)](https://huggingface.co/ltg/nort5-base)
+- [NorT5 large (808M)](https://huggingface.co/ltg/nort5-large)
 
 
 ## Example usage
 
-This model currently needs a custom wrapper from `modeling_norbert.py`. Then you can use it like this:
+This model currently needs a custom wrapper from `modeling_norbert.py`; you should therefore load the model with `trust_remote_code=True`.
 
 ```python
 import torch
-from transformers import AutoTokenizer
-from modeling_norbert import NorbertForMaskedLM
+from transformers import AutoTokenizer, AutoModelForMaskedLM
 
-tokenizer = AutoTokenizer.from_pretrained("path/to/folder")
-bert = NorbertForMaskedLM.from_pretrained("path/to/folder")
+tokenizer = AutoTokenizer.from_pretrained("ltg/norbert3-base")
+model = AutoModelForMaskedLM.from_pretrained("ltg/norbert3-base", trust_remote_code=True)
 
 mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
 input_text = tokenizer("Nå ønsker de seg en[MASK] bolig.", return_tensors="pt")
@@ -53,7 +52,7 @@ output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argma
 print(tokenizer.decode(output_text[0].tolist()))
 ```
 
-The following classes are currently implemented: `NorbertForMaskedLM`, `NorbertForSequenceClassification`, `NorbertForTokenClassification`, `NorbertForQuestionAnswering` and `NorbertForMultipleChoice`.
+The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
 
 ## Cite us
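
The `torch.where` mask-replacement step in the README's example can be sketched with toy tensors (all ids below are hypothetical stand-ins for real tokenizer/model outputs, so no model download is needed): positions equal to the mask id take the model's argmax prediction, every other position keeps its original token id.

```python
import torch

# Hypothetical toy values standing in for real tokenizer/model outputs.
mask_id = 4
input_ids = torch.tensor([[1, 2, 4, 3]])      # id 4 marks the [MASK] position
predicted_ids = torch.tensor([[9, 9, 7, 9]])  # pretend argmax of the MLM logits

# Keep original ids everywhere except the masked slot, which takes the prediction.
output_ids = torch.where(input_ids == mask_id, predicted_ids, input_ids)
print(output_ids.tolist())  # [[1, 2, 7, 3]]
```

Only the masked slot changes; decoding `output_ids` with the tokenizer then yields the sentence with the mask filled in, as in the README's example.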