maxoul committed
Commit 9e5c9e5 · verified · 1 Parent(s): ce9de20

Upload PISCO
README.md ADDED
@@ -0,0 +1,199 @@
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
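
In the meantime, a minimal, unofficial sketch based on `config.json` and `modelling_pisco.py` from this commit: the repository id below is a placeholder, a CUDA GPU with flash-attn is assumed (the decoder is built with `attn_implementation='flash_attention_2'`), and `generate_from_text` is still marked `TODO: test` in the code.

```python
import torch
from transformers import AutoModel

# Placeholder repo id -- replace with the actual Hub id of this repository.
model = AutoModel.from_pretrained(
    "<user>/<this-repo>",
    trust_remote_code=True,        # auto_map points to modelling_pisco.PISCO
    torch_dtype=torch.bfloat16,
).to("cuda")

questions = ["Who wrote 'Le Petit Prince'?"]
documents = [[                     # one list of retrieved passages per question
    "Le Petit Prince is a novella written by Antoine de Saint-Exupéry.",
    "It was first published in 1943.",
]]

# PISCO builds its own tokenizer internally (see create_tokenizer),
# so questions and documents are passed as raw strings.
answers = model.generate_from_text(questions, documents, max_new_tokens=64)
print(answers)
```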

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]


#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary


## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
config.json ADDED
@@ -0,0 +1,18 @@
{
  "_name_or_path": "/scratch/1/user/mlouis/calmar/pisco_hub_models/mistral_with_mistral_labels",
  "architectures": [
    "PISCO"
  ],
  "auto_map": {
    "AutoConfig": "modelling_pisco.PISCOConfig",
    "AutoModel": "modelling_pisco.PISCO"
  },
  "compr_rate": 16,
  "decoder_model_name": "mistralai/Mistral-7B-Instruct-v0.2",
  "device_map": null,
  "lora_r": 16,
  "model_type": "PISCO",
  "sep": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.2"
}
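
The compr_rate field sets the compression strength: modelling_pisco.py allocates 128 // compr_rate memory tokens per document, so this configuration uses 8 memory tokens (<MEM0> ... <MEM7>) for each document of up to 128 tokens. A minimal sketch mirroring that arithmetic:

compr_rate = 16                    # value from this config.json
n_mem_tokens = 128 // compr_rate   # -> 8 memory tokens per compressed document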
generation_config.json ADDED
@@ -0,0 +1,4 @@
{
  "top_p": null,
  "transformers_version": "4.44.2"
}
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9791259c3ee866c55b5339d42fae3e67138e73b83148e1ac9746fe9c99cb1af
size 4999790024
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:75a2dd6edc2d77e2eccd979954f662b1221218e35ec2cdbc4cbb3f52d5887c37
size 4974831120
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27dbfb30c4431410f6c65e6327e06373238f54fc58ff3541410506fdd44f9be1
size 4676964024
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modelling_pisco.py ADDED
@@ -0,0 +1,324 @@
import warnings
import os
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, PreTrainedModel, PretrainedConfig, AutoConfig, GenerationConfig
from jinja2.exceptions import TemplateError


def add_memory_tokens_to_inputs(input_ids: torch.Tensor, attention_mask: torch.Tensor, n_mem_tokens: int, tokenizer):
    """
    Concatenate the input ids with n_mem_tokens mem_tokens and update the corresponding attention mask
    """
    assert len(tokenizer.mem_tokens) == n_mem_tokens, f"{len(tokenizer.mem_tokens)} VS {n_mem_tokens}"
    mem_tokens = torch.stack([tokenizer.mem_token_ids_pt] * input_ids.size(0), 0)
    assert len(mem_tokens.size()) == 2
    assert len(mem_tokens) == input_ids.size(0)
    assert len(mem_tokens[0]) == n_mem_tokens
    # mem_tokens = torch.full((input_ids.size(0), n_mem_tokens), tokenizer.mem_token_id, dtype=torch.long)
    input_ids = torch.cat([input_ids, mem_tokens], dim=1)
    attention_mask = torch.cat([attention_mask, torch.ones(input_ids.size(0), n_mem_tokens)], dim=1)
    return input_ids, attention_mask


class PISCOConfig(PretrainedConfig):

    model_type = "PISCO"

    def __init__(self,
                 decoder_model_name: str = "meta-llama/Llama-2-7b-chat-hf",
                 compr_rate: int = 16,
                 **kwargs):
        super().__init__(**kwargs)

        self.decoder_model_name = decoder_model_name  # model name of decoder
        self.compr_rate = compr_rate  # compression rate
        self.lora_r = 16
        self.sep = True


class PISCO(PreTrainedModel):
    config_class = PISCOConfig

    def __init__(self, cfg):
        super().__init__(cfg)
        self.decoder_model_name = cfg.decoder_model_name
        self.sep = cfg.sep
        self.compr_rate = cfg.compr_rate

        self.create_tokenizer(cfg)

        # Base model config but we modify vocab size since we added tokens (mainly the mem tokens)
        decoder_config = AutoConfig.from_pretrained(cfg.decoder_model_name)
        decoder_config.vocab_size = len(self.tokenizer)

        # Initializing placeholder model:
        self.decoder = AutoModelForCausalLM.from_config(decoder_config,
                                                        attn_implementation='flash_attention_2',
                                                        torch_dtype=torch.bfloat16)

        peft_config = self.get_peft_config(cfg)

        self.adapter_keys = []
        self.decoder.add_adapter(peft_config, 'decoder_adapter')
        self.decoder.set_adapter('decoder_adapter')
        self.adapter_keys.append('decoder_adapter')
        self.decoder.add_adapter(peft_config, 'encoder_adapter')
        self.adapter_keys.append('encoder_adapter')

        self.generation_config = GenerationConfig(do_sample=False, top_p=None)

    def create_tokenizer(self, cfg):
        self.tokenizer = AutoTokenizer.from_pretrained(cfg.decoder_model_name, use_fast=True, padding_side='left')

        n_mem_tokens = 128 // cfg.compr_rate
        mem_tokens = ['<MEM' + str(i) + '>' for i in range(n_mem_tokens)]
        self.tokenizer.add_special_tokens({'additional_special_tokens': mem_tokens + ['<AE>', '<ENC>', '<SEP>']})
        self.tokenizer.mem_tokens = mem_tokens

        self.tokenizer.mem_token_ids = [self.tokenizer.convert_tokens_to_ids(elt) for elt in self.tokenizer.mem_tokens]
        self.tokenizer.mem_token_ids_pt = torch.LongTensor(self.tokenizer.mem_token_ids)  # required later on for operations on tensors

        self.tokenizer.ae_token = '<AE>'  # token for autoencoding on decoder side
        self.tokenizer.ae_token_id = self.tokenizer.convert_tokens_to_ids('<AE>')
        self.tokenizer.enc_token = '<ENC>'  # token for autoencoding on compressor side
        self.tokenizer.sep_token = '<SEP>'  # sep token between documents
        self.tokenizer.sep_token_id = self.tokenizer.convert_tokens_to_ids('<SEP>')

        # if a pad token exists then use it, otherwise fall back to the bos token
        if self.tokenizer.pad_token_id is None:
            self.tokenizer.pad_token_id = self.tokenizer.bos_token_id
    def set_all_adapters(self):
        if len(self.adapter_keys) > 0:
            self.decoder.set_adapter(self.adapter_keys)

    def get_peft_config(self, cfg: PISCOConfig) -> LoraConfig:
        """
        Builds the peft config
        """
        return LoraConfig(task_type="CAUSAL_LM", r=cfg.lora_r, lora_alpha=2 * cfg.lora_r, target_modules='all-linear', lora_dropout=0.1)

    def compress(self, enc_input_ids, enc_attention_mask):
        return self.compr_decoder(enc_input_ids, enc_attention_mask)

    def replace_emb(self, enc_input_ids, compressed_embs, dec_input_ids):
        """
        Compression logic (either with decoder or with dedicated compressor)
        """
        indices = range(0, enc_input_ids.size(0) + 1, self.generation_top_k)
        input_embeds = self.replace_embeddings(compressed_embs, dec_input_ids, indices)
        return input_embeds

    def compr_decoder(self, input_ids, attention_mask):
        """
        Compression using the decoder
        """
        assert input_ids.size() == attention_mask.size(), f"{input_ids.size()} vs {attention_mask.size()}"

        # Switch adapter if we are training two different ones:
        if 'encoder_adapter' in self.adapter_keys:
            self.decoder.set_adapter('encoder_adapter')

        emb = self.decoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           output_hidden_states=True).hidden_states[-1]
        mask = torch.isin(input_ids, self.tokenizer.mem_token_ids_pt.to(input_ids.device))
        return emb[mask].reshape(emb.size(0), -1, emb.size(-1))
    def prepare_encoder_inputs_to_decoder(self, texts, max_length, q_texts=None):
        if q_texts is not None:
            texts_to_encode = [self.tokenizer.enc_token + self.tokenizer.bos_token + '\nQuery:\n' + query + 'Document:\n' + text + self.tokenizer.eos_token
                               for text, query in zip(texts, q_texts)]
            inp_enc = self.tokenizer(texts_to_encode, return_tensors='pt', padding='max_length', max_length=max_length + 8, truncation=True, add_special_tokens=False)
        else:
            inp_enc = [self.tokenizer.enc_token + self.tokenizer.bos_token + text + self.tokenizer.eos_token for text in texts]
            inp_enc = self.tokenizer(inp_enc, return_tensors='pt', padding="longest", max_length=max_length + 3, truncation=True, add_special_tokens=False)

        num_mem_tokens = 128 // self.compr_rate  # maybe change that
        assert num_mem_tokens == len(self.tokenizer.mem_tokens)
        inp_enc['input_ids'], inp_enc['attention_mask'] = add_memory_tokens_to_inputs(inp_enc['input_ids'],
                                                                                      inp_enc['attention_mask'],
                                                                                      num_mem_tokens,
                                                                                      tokenizer=self.tokenizer)

        return inp_enc

    def prepare_encoder_inputs(self, texts, max_length):
        return self.prepare_encoder_inputs_to_decoder(texts, max_length)

    def replace_embeddings(self, compressed_embs, dec_input_ids, indices):
        """
        Replace memory tokens in the decoder input with the compressed embeddings
        """
        inputs_embeds = self.decoder.get_input_embeddings()(dec_input_ids)
        num_embs = compressed_embs.size(1)
        if self.sep:
            slot_len = num_embs + 1
        else:
            slot_len = num_embs
        # get first mem_token indices
        first_mem_token_indices = torch.argmax((dec_input_ids == self.tokenizer.mem_token_ids[0]).int(), dim=1)
        batch_size = inputs_embeds.size(0)
        # for each example in batch, replace them with compressed embeddings
        for i in range(batch_size):
            for j in range(indices[i], indices[i + 1]):
                start_idx = first_mem_token_indices[i].item() + (j - indices[i]) * slot_len
                assert inputs_embeds[i, start_idx:start_idx + num_embs, :].size() == compressed_embs[j].size(), \
                    f"{inputs_embeds[i, start_idx:start_idx + num_embs, :].size()} VS {compressed_embs[j].size()}"
                inputs_embeds[i, start_idx:start_idx + num_embs, :] = compressed_embs[j]
        return inputs_embeds

    def forward(self,
                enc_input_ids: torch.LongTensor = None,
                enc_attention_mask: torch.LongTensor = None,
                dec_input_ids: torch.LongTensor = None,
                dec_attention_mask: torch.LongTensor = None,
                labels: torch.LongTensor = None):
        """
        enc_input_ids: stores the contexts, should be flattened from all queries before input, can be of shape:
            - (batch_size*generation_top_k, enc_token_length)
            - (batch_size, generation_top_k, enc_token_length)
        enc_attention_mask: attention mask of enc_input_ids, same shape as enc_input_ids
        dec_input_ids: stores the prompts (including mem tokens), dimension (batch_size, dec_token_length)
        dec_attention_mask: attention mask of dec_input_ids
        """
        assert enc_input_ids.size() == enc_attention_mask.size(), f"{enc_input_ids.size()} vs {enc_attention_mask.size()}"

        if len(enc_input_ids.size()) == 3:  # likely from bergen: we just flatten all of this to perform encoding in one batch
            batch_size, top_k, seq_length = enc_input_ids.size()
            enc_input_ids = enc_input_ids.view(batch_size * top_k, seq_length)
            enc_attention_mask = enc_attention_mask.view(batch_size * top_k, seq_length)

        # Here, we should have top_k times more elements in enc_input_ids than in dec_input_ids
        assert enc_input_ids.size(0) == dec_input_ids.size(0) * self.generation_top_k, \
            f"{enc_input_ids.size(0)} VS {dec_input_ids.size(0)} with generation_top_k={self.generation_top_k}"

        # Perform compression with gradient tracking
        compressed_embs = self.compress(enc_input_ids, enc_attention_mask)
        inputs_embeds = self.replace_emb(enc_input_ids, compressed_embs, dec_input_ids)

        # decoding
        if 'decoder_adapter' in self.adapter_keys:
            self.decoder.set_adapter('decoder_adapter')

        decoder_outputs = self.decoder(inputs_embeds=inputs_embeds, attention_mask=dec_attention_mask, labels=labels)

        # At the end of forward, we need to activate all adapters so that they are both trained...
        self.set_all_adapters()

        return {"loss": decoder_outputs.loss, "logits": decoder_outputs.logits}
    def generate_from_text(self, questions: list[str], documents: list[list[str]], max_new_tokens: int = 128) -> list[str]:
        # TODO: test
        # generation_top_k: number of retrieved documents per question (also used by replace_emb)
        self.generation_top_k = len(documents[0])
        assert len(documents) == len(questions)
        assert all([len(context) == len(documents[0]) for context in documents])
        flat_documents = sum(documents, [])

        model_input = {}
        input_encoder = self.prepare_encoder_inputs(flat_documents, max_length=128)
        device = self.decoder.device

        model_input['enc_input_ids'], model_input['enc_attention_mask'] = input_encoder['input_ids'].to(device), input_encoder['attention_mask'].to(device)

        instr = [self.blend_prompt_and_memory_tokens(query=q) for q in questions]

        inp_dec = self.tokenizer(instr, return_tensors='pt', padding="longest", add_special_tokens=False, truncation=True, max_length=2048)

        model_input['dec_input_ids'], model_input['dec_attention_mask'] = inp_dec['input_ids'].to(device), inp_dec['attention_mask'].to(device)

        return self.generate(model_input, max_new_tokens=max_new_tokens)

    def compress_documents(self, documents: list[str]) -> torch.Tensor:
        # TODO: test
        input_encoder = self.prepare_encoder_inputs(documents, max_length=128)
        enc_input_ids = input_encoder['input_ids'].to(self.decoder.device)
        attention_mask = input_encoder['attention_mask'].to(self.decoder.device)
        return self.compress(enc_input_ids=enc_input_ids, enc_attention_mask=attention_mask)

    def generate(self, model_input, max_new_tokens=128):

        enc_input_ids, enc_attention_mask, dec_input_ids, dec_attention_mask = model_input['enc_input_ids'], model_input['enc_attention_mask'], model_input['dec_input_ids'], model_input['dec_attention_mask']

        assert enc_input_ids.size() == enc_attention_mask.size()

        if len(enc_input_ids.size()) == 3:  # likely from bergen: we just flatten all of this to perform encoding in one batch
            batch_size, top_k, seq_length = enc_input_ids.size()
            enc_input_ids = enc_input_ids.view(batch_size * top_k, seq_length)
            enc_attention_mask = enc_attention_mask.view(batch_size * top_k, seq_length)

        # Here, we should have top_k times more elements in enc_input_ids than in dec_input_ids
        assert enc_input_ids.size(0) == dec_input_ids.size(0) * self.generation_top_k, \
            f"{enc_input_ids.size(0)} VS {dec_input_ids.size(0)} with generation_top_k={self.generation_top_k}"

        compressed_embs = self.compress(enc_input_ids, enc_attention_mask)
        inputs_embeds = self.replace_emb(enc_input_ids, compressed_embs, dec_input_ids)

        # Switch adapter if we are training two different ones:
        if 'decoder_adapter' in self.adapter_keys:
            self.decoder.set_adapter('decoder_adapter')

        output_ids = self.decoder.generate(
            inputs_embeds=inputs_embeds,
            attention_mask=dec_attention_mask,
            generation_config=self.generation_config,
            max_new_tokens=max_new_tokens
        )

        decoded = self.tokenizer.batch_decode(output_ids, skip_special_tokens=True)

        return decoded

    def blend_prompt_and_memory_tokens(self, query: str):
        """
        Takes care of blending the prompt with the memory tokens.
        """
        mem_tokens_str = ''.join(self.tokenizer.mem_tokens)
        mem_tokens_str += self.tokenizer.sep_token

        # proper names for "eval" call, don't remove these lines
        docs = mem_tokens_str * self.generation_top_k
        question = query

        prompt_system = 'You are a helpful assistant. Your task is to extract relevant information from provided documents and to answer to questions as briefly as possible.'
        prompt_user = f"Background:\n{docs}\n\nQuestion:{question}"

        # Prepare the messages with system and user roles
        messages = [
            {"role": "system", "content": prompt_system},
            {"role": "user", "content": prompt_user.replace(':\ ', ': ')}
        ]

        # Attempt to apply the system role and catch if it's not supported
        try:
            prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        except TemplateError as e:
            # Catch the error related to the system role and handle it (e.g. gemma)
            if "System role not supported" in str(e):
                # Remove the system role and proceed with only the user role
                messages = [{"role": "user", "content": messages[0]['content'] + '\n' + messages[1]['content']}]
                # Apply the template again without the system role
                prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
            else:
                # Re-raise the exception if it's unrelated to the system role
                raise e

        return prompt
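
A small illustrative sketch of the compression path defined above (not part of the uploaded file). It assumes a PISCO instance named model has already been loaded with trust_remote_code=True on a CUDA device; note that compress_documents is marked "TODO: test" in the code.

docs = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
]
embeddings = model.compress_documents(docs)
print(embeddings.shape)  # expected: (len(docs), 8, hidden_size) with compr_rate=16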