---
license: apache-2.0
---

syntheticbot/Qwen-VL-7B-ocr

Introduction

syntheticbot/Qwen-VL-7B-ocr is a fine-tuned model for Optical Character Recognition (OCR) tasks, derived from the base model Qwen/Qwen2.5-VL-7B-Instruct. It is engineered for high-accuracy text extraction from images, including document images and natural scenes containing text.

Key Enhancements for OCR:

  • Enhanced Text Recognition Accuracy: Superior accuracy across diverse text fonts, styles, sizes, and orientations.
  • Robustness to Document Variations: Specifically trained to manage document complexities like varied layouts, noise, and distortions.
  • Structured Output Generation: Enables structured output formats (JSON, CSV) for recognized text and layout in document images such as invoices and tables.
  • Text Localization: Provides accurate localization of text regions and bounding boxes for text elements within images.
  • Improved Handling of Text in Visuals: Maintains proficiency in analyzing charts and graphics, with enhanced recognition of embedded text.

Model Architecture Updates:

  • Dynamic Resolution and Frame Rate Training for Video Understanding
  • Streamlined and Efficient Vision Encoder

This repository provides the instruction-tuned, OCR-optimized 7B model Qwen-VL-7B-ocr. For comprehensive details about the foundational model architecture, please refer to the Qwen/Qwen2.5-VL-7B-Instruct repository, as well as the Qwen2.5-VL blog and GitHub pages.

Evaluation

OCR Benchmarks

| Benchmark | Qwen2-VL-7B | syntheticbot/Qwen-VL-7B-ocr | Improvement | Notes |
|---|---|---|---|---|
| DocVQA (test) | 94.5 | 96.5 | +2.0 | Document VQA; OCR accuracy relevant |
| InfoVQA (test) | 76.5 | 84.5 | +8.0 | Information-seeking VQA; OCR accuracy crucial |
| ChartQA (test) | 83.0 | 89.0 | +6.0 | Chart understanding with text; OCR accuracy important |
| TextVQA (val) | 84.3 | 86.3 | +2.0 | Text-based VQA; direct OCR relevance |
| OCRBench | 845 | 885 | +40 | Direct OCR benchmark |
| CC_OCR | 61.6 | 81.8 | +20.2 | Chinese Character OCR benchmark |
| MMStar (text reading focus) | 60.7 | 65.9 | +5.2 | MMStar with focus on text-reading tasks |
| Average OCR-related score | 77.8 | 84.9 | +7.1 | Approximate average across OCR-focused benchmarks |

Requirements

For optimal performance and access to OCR-specific features, it is recommended to build 🤗 Transformers from source:

pip install git+https://github.com/huggingface/transformers accelerate
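
A quick way to confirm that the source build exposes the Qwen2.5-VL classes (a minimal sanity check, not part of the original card):

import transformers
# The Qwen2.5-VL model class is only available in recent transformers builds;
# this import fails on older releases.
from transformers import Qwen2_5_VLForConditionalGeneration
print("transformers version:", transformers.__version__)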

Quickstart

The following examples illustrate the use of syntheticbot/Qwen-VL-7B-ocr with 🤗 Transformers and qwen_vl_utils for OCR applications.

First, ensure 🤗 Transformers is installed from source, as noted above:

pip install git+https://github.com/huggingface/transformers accelerate

Then install the qwen-vl-utils toolkit for streamlined visual input processing:

pip install qwen-vl-utils[decord]==0.0.8

Using 🤗 Transformers for OCR

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the OCR fine-tuned checkpoint; torch_dtype="auto" keeps the checkpoint's dtype
# and device_map="auto" places weights on the available GPU(s) or CPU.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "syntheticbot/Qwen-VL-7B-ocr",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained("syntheticbot/Qwen-VL-7B-ocr")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/your/document_image.jpg",
            },
            {"type": "text", "text": "Extract the text from this image."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda" if torch.cuda.is_available() else "cpu")

# Generate, then strip the prompt tokens so that only newly generated text is decoded.
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Extracted Text:", output_text[0])
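
For repeated use, the steps above can be wrapped in a small helper that reuses the model and processor already loaded. This is a sketch; extract_text is a hypothetical convenience function, not part of the original card:

def extract_text(image_path, prompt="Extract the text from this image."):
    # Build a single-image OCR request and decode only the newly generated tokens.
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": prompt},
            ],
        }
    ]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)
    generated_ids = model.generate(**inputs, max_new_tokens=512)
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

print(extract_text("path/to/your/document_image.jpg"))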

Example for Structured Output (JSON for Table Extraction)

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
import json

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "syntheticbot/Qwen-VL-7B-ocr",
    torch_dtype="auto",
    device_map="auto"
)
processor = AutoProcessor.from_pretrained("syntheticbot/Qwen-VL-7B-ocr")


messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/your/table_image.jpg",
            },
            {"type": "text", "text": "Extract the table from this image and output it as JSON."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda" if torch.cuda.is_available() else "cpu")

generated_ids = model.generate(**inputs, max_new_tokens=1024)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Extracted Table (JSON):\n", output_text[0])

try:
    json_output = json.loads(output_text[0])
    print("\nParsed JSON Output:\n", json.dumps(json_output, indent=2))
except json.JSONDecodeError:
    print("\nCould not parse output as JSON. Output is plain text.")
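
Prompting with an explicit target schema often makes the JSON easier to parse downstream. The prompt below is illustrative and not taken from the model card:

schema_prompt = (
    "Extract the table from this image and output it as JSON: a list of row objects "
    'keyed by column header, e.g. [{"Item": "...", "Quantity": "...", "Price": "..."}]. '
    "Return only the JSON, with no additional commentary."
)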

Batch inference for OCR

messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image1.jpg"},
            {"type": "text", "text": "Extract text from this image."},
        ],
    }
]
messages2 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image2.jpg"},
            {"type": "text", "text": "Read the text in this document."},
        ],
    }
]
messages = [messages1, messages2]

texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda" if torch.cuda.is_available() else "cpu")

generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Extracted Texts (Batch):\n", output_texts)
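
The decoded outputs are returned in the same order as the input message lists, so they can be paired with the source image paths (a small illustrative follow-up):

for path, text_out in zip(["path/to/image1.jpg", "path/to/image2.jpg"], output_texts):
    print(f"{path}:\n{text_out}\n")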

🤖 ModelScope

For users in mainland China, ModelScope is recommended. Use snapshot_download to manage checkpoints, and substitute syntheticbot/Qwen-VL-7B-ocr for the model name in ModelScope examples, as sketched below.
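
A minimal sketch of the ModelScope path, assuming the checkpoint is also published on ModelScope under the same name (otherwise substitute the correct ModelScope model id):

from modelscope import snapshot_download
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

# Download the checkpoint through the ModelScope hub, then load it from the local path.
model_dir = snapshot_download("syntheticbot/Qwen-VL-7B-ocr")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_dir, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_dir)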

More Usage Tips for OCR

Input images support local files, URLs, and base64 encoding.

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "http://path/to/your/document_image.jpg"},
            {"type": "text", "text": "Extract the text from this image URL."},
        ],
    }
]
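
Base64-encoded images can be passed inline as well. The sketch below encodes a local file and uses the data: URI form from the upstream Qwen2.5-VL examples (the file path is a placeholder):

import base64

with open("path/to/your/document_image.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            # qwen_vl_utils accepts inline base64 images via a data: URI.
            {"type": "image", "image": f"data:image;base64,{b64_image}"},
            {"type": "text", "text": "Extract the text from this image."},
        ],
    }
]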

Image Resolution for OCR Accuracy

Higher-resolution images typically improve OCR accuracy, especially for small text. Bound the overall visual token budget with the processor's min_pixels and max_pixels arguments, or set resized_height and resized_width per image in the message.

# Allow between 512 and 2048 visual tokens (each token covers a 28x28-pixel patch).
min_pixels = 512 * 28 * 28
max_pixels = 2048 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "syntheticbot/Qwen-VL-7B-ocr",
    min_pixels=min_pixels, max_pixels=max_pixels
)

Control resizing dimensions directly:

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/document_image.jpg",
                "resized_height": 600,
                "resized_width": 800,
            },
            {"type": "text", "text": "Extract the text."},
        ],
    }
]
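
Per-image pixel bounds can also be set in the message itself, mirroring upstream Qwen2.5-VL usage (the values below are illustrative):

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/document_image.jpg",
                # Per-image bounds override the processor defaults for this image only.
                "min_pixels": 512 * 28 * 28,
                "max_pixels": 2048 * 28 * 28,
            },
            {"type": "text", "text": "Extract the text."},
        ],
    }
]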

Citation

If you utilize syntheticbot/Qwen-VL-7B-ocr, please cite the base Qwen2.5-VL models:

@misc{qwen2.5-VL,
    title = {Qwen2.5-VL},
    url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
    author = {Qwen Team},
    month = {January},
    year = {2025}
}

@article{Qwen2VL,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024}
}

@article{Qwen-VL,
  title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2308.12966},
  year={2023}
}