---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [Swin-L](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) as the image encoder and [Wangchanberta](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) as the text decoder. Trained on the Thai-language MSCOCO and IPU24 datasets.

# Usage

With `VisionEncoderDecoderModel`:
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-swin-wangchan'

# Load the image processor, tokenizer, and model from the Hub
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path).to(device)

# Preprocess the input image and generate a caption
image_path = 'path/to/image.jpg'  # replace with your own image
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(preds)
```
You can also load the model with `AutoModel`, but this requires `trust_remote_code=True` because the model class is defined by custom code in the repository.
```python
from transformers import AutoModel

device = 'cuda'
model_path = 'Natthaphon/thaicapgen-swin-wangchan'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
```
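Captioning then works the same way as in the first example, reusing `feature_extractor`, `tokenizer`, `gen_kwargs`, and `image_path` from above. Note that this sketch assumes the remote-code class exposes the same `generate` interface as `VisionEncoderDecoderModel`; the card does not state this explicitly.
```python
# Sketch only: assumes the remote-code class supports the same
# generate() API as VisionEncoderDecoderModel (unverified assumption).
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values.to(device), **gen_kwargs)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```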

# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107].