How to use Llama 3.2 11B for text generation

by PyMangekyo

import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "You are a multi-report analyzer. Your job is to analyze all the details given from multiple reports. "
            "Write a detailed fused report for all posts and draw a conclusion from it."}
    ]}
]
image = []
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=500)
print(output)

I am using the above script to generate text with Llama 3.2. It fails for text-only generation with the following error:

ValueError: Invalid input type. Must be a single image, a list of images, or a list of batches of images.

I understand that the processor expects a list of images, but I want to use the model for text-only generation (summarization). How can I do that? Can anyone help?

Use AutoModelForCausalLM and AutoTokenizer instead.
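
For example, here is a minimal text-only sketch, assuming a recent transformers release (4.45 or later) in which the mllama architecture is registered with AutoModelForCausalLM, so only the language-model weights of the checkpoint are loaded. The prompt text is a placeholder for your actual report content:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# Load only the language-model half of the checkpoint; no image inputs are needed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "Summarize the following reports: ..."}  # placeholder prompt
    ]}
]

# The tokenizer's chat template builds the prompt; no processor (and no images) is involved.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=500)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Depending on your transformers version, keeping the processor but passing the text alone (with images left as None rather than an empty list) may also avoid the error, but loading only the language model is the simpler route for pure summarization.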
