
🦙 Llama3.1-8b-instruct-vision Model Card

Model Details

This repository contains a reproduced version of the LLaVA model, built from the Llama-3.1-8B-Instruct foundation model using the PKU-Alignment/align-anything library.

NOTE: This reproduced version of LLaVA differs from the original LLaVA model in a few implementation details:

  1. The reproduced LLaVA uses a different conversation template than the original LLaVA model (see the prompt-format sketch below).
  2. The initial model weights are loaded from the Llama 3.1 8B Instruct model (meta-llama/Llama-3.1-8B-Instruct) rather than from lmsys/vicuna-7b-v1.5.

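For illustration, here is a minimal sketch of the two prompt formats. The reproduced format is the one used in the usage example below; the original LLaVA (Vicuna-based) format is shown only for comparison and should be treated as illustrative rather than exact.

# Prompt format expected by this reproduced checkpoint (Llama 3.1 chat headers),
# matching the usage example further down in this card.
reproduced_prompt = (
    "<|start_header_id|>user<|end_header_id|>: <image> "
    "Describe the image.\n"
    "<|start_header_id|>assistant<|end_header_id|>: "
)

# Original LLaVA (Vicuna-based) conversation template, shown only for comparison;
# it is NOT used by this checkpoint.
original_llava_prompt = "USER: <image>\nDescribe the image. ASSISTANT:"
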
Model Sources

How to use the model (reproduced version)

  • Using transformers
from transformers import (
    LlavaForConditionalGeneration,
    AutoProcessor,
)
from PIL import Image

path = "<path_to_model_dir>"  # local checkpoint directory or Hub repo id
processor = AutoProcessor.from_pretrained(path)
model = LlavaForConditionalGeneration.from_pretrained(path)

# The prompt uses the Llama 3.1 chat headers; <image> marks where the image features are inserted.
prompt = "<|start_header_id|>user<|end_header_id|>: <image> Give an overview of what's in the image.\n<|start_header_id|>assistant<|end_header_id|>: "
image_path = "align-anything/assets/test_image.webp"
image = Image.open(image_path)

inputs = processor(text=prompt, images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))
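
For faster GPU inference, loading in half precision with automatic device placement usually works. This is a minimal sketch assuming a CUDA device and the accelerate package are available; torch_dtype and device_map are standard from_pretrained arguments rather than anything specific to this card.

import torch

model = LlavaForConditionalGeneration.from_pretrained(
    path,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires the accelerate package
)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))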