batch tokenizer?
#106
by
jjplane
- opened
model_input = tokenizer(batch_messages, return_tensors="pt", padding=True, truncation=True).to("cuda")
This raises ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as pad_token (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via tokenizer.add_special_tokens({'pad_token': '[PAD]'}).
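The error message itself points at the fix: the tokenizer has no pad_token defined, so assign one (commonly the eos token) before calling the tokenizer with padding=True. Below is a minimal, self-contained sketch of that fix; it builds a toy word-level tokenizer to stand in for the real model's tokenizer (the vocab and tokens here are illustrative, not from the original post), since the same pad_token assignment applies to any tokenizer loaded via AutoTokenizer.from_pretrained.

```python
from tokenizers import Tokenizer, models, pre_tokenizers
from transformers import PreTrainedTokenizerFast

# Toy word-level vocab standing in for the real model's tokenizer (illustrative only)
vocab = {"[UNK]": 0, "hello": 1, "world": 2, "<eos>": 3}
backend = Tokenizer(models.WordLevel(vocab, unk_token="[UNK]"))
backend.pre_tokenizer = pre_tokenizers.Whitespace()

tokenizer = PreTrainedTokenizerFast(tokenizer_object=backend, eos_token="<eos>")

# Without this line, padding=True raises:
# ValueError: Asking to pad but the tokenizer does not have a padding token.
tokenizer.pad_token = tokenizer.eos_token

batch_messages = ["hello world", "hello"]
batch = tokenizer(batch_messages, padding=True)
print(batch["input_ids"])       # shorter sequence is right-padded with the eos id
print(batch["attention_mask"])  # padded positions are masked out with 0
```

With torch installed you would add return_tensors="pt" (and .to("cuda") on GPU) exactly as in the original call; the pad_token assignment is the only change needed to make batched tokenization work.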
jjplane changed discussion status to closed