This is the IndicBART model. For detailed documentation look here: https://indicnlp.ai4bharat.org/indic-bart/ and https://github.com/AI4Bharat/indic-bart/

Usage:

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AlbertTokenizer.from_pretrained("prajdabre/IndicBARTTokenizer", do_lower_case=False, use_fast=False, keep_accents=True)

# Or use tokenizer = AutoTokenizer.from_pretrained("prajdabre/IndicBARTTokenizer", do_lower_case=False, use_fast=False, keep_accents=True)

model = MBartForConditionalGeneration.from_pretrained("prajdabre/IndicBART")

# Or use model = AutoModelForSeq2SeqLM.from_pretrained("prajdabre/IndicBART")


# First tokenize the input and output. The format below is how IndicBART was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

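# Teacher forcing: the decoder input is the target without its last token, and the labels are the target shifted left by one, so each position predicts the next token.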
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])

# For loss
model_outputs.loss ## This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Pardon the messiness. Note the decoder_start_token_id.

model.eval() # Set dropouts to zero

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=tokenizer.pad_token_id, decoder_start_token_id=tokenizer(["<2en>"], add_special_tokens=False).input_ids[0][0])


# Decode to get output strings

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # I am a boy

# What if we mask?

inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=tokenizer.pad_token_id, decoder_start_token_id=tokenizer(["<2en>"], add_special_tokens=False).input_ids[0][0])

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # I am happy
```
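
The same pattern works for a batch of inputs. The sketch below is only illustrative (the sentence list is made up); it reuses the model and tokenizer loaded above, pads the tokenized batch, and passes the attention mask so the padding tokens are ignored during generation.

```
# Minimal batched-generation sketch (assumes `model` and `tokenizer` from the usage example above).
sentences = ["I am a boy </s> <2en>", "I like reading books </s> <2en>"]  # illustrative examples

batch = tokenizer(sentences, add_special_tokens=False, return_tensors="pt", padding=True)

model_output = model.generate(
    batch.input_ids,
    attention_mask=batch.attention_mask,  # mask out padding tokens
    use_cache=True,
    num_beams=4,
    max_length=20,
    min_length=1,
    early_stopping=True,
    pad_token_id=tokenizer.pad_token_id,
    decoder_start_token_id=tokenizer(["<2en>"], add_special_tokens=False).input_ids[0][0],
)

for generated in model_output:
    print(tokenizer.decode(generated, skip_special_tokens=True, clean_up_tokenization_spaces=False))
```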

Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. The tokenizer repo is kept separate from the model repo because, unlike mBART-25 and mBART-50 which use a BPE model (the MBartTokenizer class), we use a SentencePiece model (the AlbertTokenizer class).
3. Keeping the tokenizer and model files in the same repo currently complicates things, so keeping them separate is a temporary solution. This will be fixed in future versions.
4. While I have only shown how to get the logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration (see the fine-tuning sketch below).
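
For example, a single fine-tuning step could look like the sketch below. It reuses the model and tokenizer loaded in the usage example; the sentence pair, optimizer choice, and learning rate are illustrative assumptions, not a recommended recipe.

```
import torch

# Minimal fine-tuning sketch (assumes `model` and `tokenizer` from the usage example above;
# the sentence pair, optimizer, and learning rate are illustrative only).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model.train()  # enable dropout for training

# Same teacher-forcing setup as above: decoder input is the target without its last token,
# labels are the target shifted left by one.
model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

model_outputs.loss.backward()  # backpropagate the (unsmoothed) cross-entropy loss
optimizer.step()
optimizer.zero_grad()
```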