updated readme
- .gitattributes +0 -58
- README.md +49 -0
- data/test-00000-of-00002.parquet +0 -3
- data/test-00001-of-00002.parquet +0 -3
.gitattributes
DELETED
@@ -1,58 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -17,3 +17,52 @@ configs:
  - split: test
    path: data/test-*
---
### CulturalVQA

Foundation models and vision-language pre-training have notably advanced Vision Language Models (VLMs), enabling multimodal processing of visual and linguistic data. However, their performance has typically been assessed on general scene understanding - recognizing objects, attributes, and actions - rather than cultural comprehension. We introduce CulturalVQA, a visual question-answering benchmark aimed at assessing VLMs' geo-diverse cultural understanding. We curate a diverse collection of 2,378 image-question pairs with 1-5 answers per question, representing cultures from 11 countries across 5 continents. The questions probe understanding of various facets of culture, such as clothing, food, drinks, rituals, and traditions.

> **Note:** The answers for the CulturalVQA benchmark are not publicly available. We are working on creating a competition where participants can upload their predictions and evaluate their models. Stay tuned for more updates!
> If you urgently need to evaluate, please contact [email protected]

### Loading the dataset

To load and use the CulturalVQA benchmark, use the following commands:

```python
from datasets import load_dataset

culturalvqa_dataset = load_dataset('mair-lab/CulturalVQA')
```

Once the dataset is loaded, each instance contains the following fields (a minimal access sketch follows the list):

- `u_id`: A unique identifier for each image-question pair
- `image`: The image data in binary format
- `question`: The question pertaining to the image

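As an illustration, here is a minimal sketch of iterating over the benchmark and reading these fields. It only relies on the `test` split declared in the dataset config above and the three fields listed; the `predictions` dictionary and the model call are hypothetical placeholders, not part of the dataset.

```python
from datasets import load_dataset

# Load the benchmark (same call as above); the config declares a single 'test' split.
culturalvqa_dataset = load_dataset('mair-lab/CulturalVQA')

predictions = {}  # hypothetical container for your model's answers, keyed by u_id
for example in culturalvqa_dataset['test']:
    u_id = example['u_id']          # unique identifier for the image-question pair
    image = example['image']        # image data, stored in binary format per the card
    question = example['question']  # question text about the image
    # Replace this placeholder with your VLM's inference call, e.g.:
    # predictions[u_id] = my_vlm.answer(image, question)
```

Since the answers are not released, any predictions would be kept locally until the evaluation competition mentioned above becomes available.
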
### Usage and License

CulturalVQA is a test-only benchmark and can be used to evaluate models. The images are scraped from the internet and are not owned by the authors. All annotations are released under the CC BY-SA 4.0 license.

### Citation Information

If you are using this dataset, please cite:

```bibtex
@inproceedings{nayak-etal-2024-benchmarking,
    title = "Benchmarking Vision Language Models for Cultural Understanding",
    author = "Nayak, Shravan and
      Jain, Kanishk and
      Awal, Rabiul and
      Reddy, Siva and
      Steenkiste, Sjoerd Van and
      Hendricks, Lisa Anne and
      Stanczak, Karolina and
      Agrawal, Aishwarya",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.329",
    pages = "5769--5790"
}
```
data/test-00000-of-00002.parquet
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aaf26f2014fb7d7d0a115dc2ba836e20c52e7c4ca1cd65a194c50e903e89bd96
size 293395318
data/test-00001-of-00002.parquet
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05bf10123ef62e686330ae2acf96bcacaed9606af036567b41f25f5b5795582e
size 258388875