## Usage

Inference uses Hugging Face Transformers on NVIDIA GPUs. The following requirements were tested on Python 3.10:

```
torch==2.0.1
torchvision==0.15.2
transformers==4.37.2
tiktoken==0.6.0
verovio==4.3.1
accelerate==0.28.0
```
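Since the loading code below uses `device_map='cuda'`, it can help to confirm a CUDA-capable GPU is visible first. A minimal check (illustrative, not part of the model card):

```python
# Minimal environment check: the model is loaded with device_map='cuda'
# below, so a visible NVIDIA GPU is required.
import torch

assert torch.cuda.is_available(), "No CUDA-capable GPU detected"
print(f"torch {torch.__version__} on {torch.cuda.get_device_name(0)}")
```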
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('tadkt/GOT_Vietnamese', trust_remote_code=True)
model = AutoModel.from_pretrained(
    'tadkt/GOT_Vietnamese',
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    device_map='cuda',
    use_safetensors=True,
    pad_token_id=tokenizer.eos_token_id,
)
model = model.eval().cuda()

# path to your test image
image_file = 'xxx.jpg'

# plain-text OCR
res = model.chat(tokenizer, image_file, ocr_type='ocr')
print(res)
```
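Two common extensions of the snippet above are sketched below, reusing the `model` and `tokenizer` objects already loaded. The directory loop only repeats the `model.chat` call shown above (the `images/` folder name is illustrative). The `ocr_type='format'` call assumes this fine-tune keeps the upstream GOT-OCR2.0 chat API, where `'format'` returns formatted text (verovio in the requirements is used by the upstream model to render such output); treat it as untested for this checkpoint.

```python
from pathlib import Path

# Batch OCR over a folder of images (the 'images/' path is an example).
for image_path in sorted(Path('images').glob('*.jpg')):
    text = model.chat(tokenizer, str(image_path), ocr_type='ocr')
    print(f"--- {image_path.name} ---")
    print(text)

# Formatted OCR (assumption: the fine-tune preserves the upstream
# GOT-OCR2.0 API, where ocr_type='format' returns formatted output).
res = model.chat(tokenizer, image_file, ocr_type='format')
print(res)
```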
The checkpoint has 561M parameters, stored as BF16 safetensors. Note that the serverless Inference API does not support model repos that contain custom code, so the model must be run locally as shown above.