MISHANM/ibm-granite-vision-3.2-2b-fp16

The MISHANM/ibm-granite-vision-3.2-2b-fp16 model is a sophisticated vision-language model designed for image-to-text generation. It leverages an advanced neural architecture to transform visual inputs into coherent textual descriptions.

Model Details

  1. Language: English
  2. Tasks: Image to Text Generation

Example Model Output

An example of the model's inference output:

[example output image]

Getting Started

To begin using the model, install the necessary dependencies (the example below also uses torch, Gradio, and Pillow):

pip install "transformers>=4.49" torch gradio pillow
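
As a quick sanity check, you can confirm that the installed transformers version meets the requirement and see whether a CUDA GPU is available before loading the model:

import torch
import transformers

print(transformers.__version__)   # should be 4.49 or newer
print(torch.cuda.is_available())  # True if a CUDA GPU can be used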

Use the code below to get started with the model.

Using Gradio

import gradio as gr
from transformers import AutoProcessor, AutoModelForVision2Seq
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "MISHANM/ibm-granite-vision-3.2-2b-fp16"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path, ignore_mismatched_sizes=True).to(device)


def process_image_and_prompt(image_path, prompt):
    # Load the image
    image = Image.open(image_path).convert("RGB")

    # Prepare the conversation input
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image},
                {"type": "text", "text": prompt},
            ],
        },
    ]

    # Process the inputs
    inputs = processor.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt"
    ).to(device)

    # Generate the output
    output = model.generate(**inputs, max_new_tokens=100)
    return processor.decode(output[0], skip_special_tokens=True)

# Create the Gradio interface
iface = gr.Interface(
    fn=process_image_and_prompt,
    inputs=[
        gr.Image(type="filepath", label="Upload Image"),
        gr.Textbox(lines=2, placeholder="Enter your prompt here...", label="Prompt")
    ],
    outputs="text",
    title="Granite Vision: Advanced Image-to-Text Generation Model",
    description="Upload an image and enter a text prompt to get a response from the model."
)

# Launch the Gradio app
iface.launch(share=True)
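
If you prefer to run inference without the Gradio interface, the following minimal sketch uses the same processor and generation calls directly. The file name sample.jpg and the example prompt are placeholders; replace them with your own image and question.

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "MISHANM/ibm-granite-vision-3.2-2b-fp16"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path, ignore_mismatched_sizes=True).to(device)

# "sample.jpg" is a placeholder; point this at any local image file
image = Image.open("sample.jpg").convert("RGB")

# Build the chat-style conversation expected by the processor
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": image},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    },
]

# Tokenize the conversation and move the tensors to the target device
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(device)

# Generate and decode the response
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))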
 

Uses

Direct Use

This model is ideal for converting images into descriptive text, making it valuable for creative projects, content creation, and artistic exploration.
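
For example, if the process_image_and_prompt helper from the Gradio section above has been defined, a single call produces a description (photo.jpg is a placeholder for a local image file):

# Reuses the helper defined in the Gradio example; "photo.jpg" is a placeholder path
caption = process_image_and_prompt("photo.jpg", "Write a detailed caption for this image.")
print(caption)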

Out-of-Scope Use

The model is not intended for generating explicit or harmful content. It may also face challenges with highly abstract or nonsensical prompts.

Bias, Risks, and Limitations

The model may reflect biases present in its training data, potentially resulting in stereotypical or biased outputs. Users should be aware of these limitations and review generated content for accuracy and appropriateness.

Recommendations

Users are encouraged to critically evaluate the model's outputs, especially in sensitive contexts, to ensure they meet the desired standards of accuracy and appropriateness.

Citation Information

@misc{MISHANM/ibm-granite-vision-3.2-2b-fp16,
  author    = {Mishan Maurya},
  title     = {Introducing Image to Text Generation model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}
Model size: 2.98B parameters
Tensor type: FP16 (Safetensors)