
Llama 3.2 11B-Vision-Instruct Model on Hugging Face

This repository hosts the Llama 3.2 11B-Vision-Instruct model, fine-tuned to generate TikZ code from captions and images for producing scientific visualizations.

Model Description

Llama 3.2 11B-Vision-Instruct is a multimodal model that combines the textual understanding and generation capabilities of Llama 3.2 with a dedicated vision encoder, integrating detailed visual embeddings with textual input to produce high-quality output.

Installation

Ensure you have PyTorch and Transformers installed in your environment. If not, you can install them using pip:

pip install torch transformers

Usage

The following example loads the fine-tuned model and its processor:

import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "mylesgoose/Llama-3.2-11B-Vision-Instruct"

# Load the model and processor
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load weights in bfloat16 to reduce memory use
    device_map="auto",           # place layers on available devices automatically
)
processor = AutoProcessor.from_pretrained(model_id)
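
With the model and processor in hand, an image and a caption-style instruction can be combined into a chat prompt. Below is a minimal inference sketch following the standard Mllama chat-template flow; the image path "diagram.png", the prompt text, and max_new_tokens=512 are illustrative assumptions, not fixed values from this repository.

from PIL import Image

# Hypothetical input figure; replace with your own file.
image = Image.open("diagram.png")

# Chat-style message pairing the image with a caption-like instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Generate TikZ code that reproduces this figure."},
        ],
    }
]

# Render the chat template, then process the image and text together.
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

# Generate and decode the model's TikZ output.
output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))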