---
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
pretty_name: VQAonline
---
# VQAonline

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6337e9b676421c05430a0287/6vt42q8w7EWx9vVuZqc3U.png)

[**🌐 Homepage**](https://vqaonline.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/ChongyanChen/VQAonline/) | [**📖 arXiv**](https://arxiv.org/abs/2311.15562)

## Dataset Description
We introduce VQAonline, the first VQA dataset in which all content originates from an authentic use case.

VQAonline includes 64K visual questions sourced from an online question-answering community (i.e., StackExchange).

It differs from prior datasets in several ways. For example, it contains:
- (1) authentic context that clarifies the question,
- (2) an answer that the individual asking the question validated as acceptable from all community-provided answers,
- (3) answers that are considerably longer (e.g., a mean of 173 words versus typically 11 words or fewer in prior work), and
- (4) user-chosen topics for each visual question from 105 diverse topics, revealing the dataset's inherent diversity.

## Dataset Structure
We designed VQAonline to support few-shot settings, given the recent exciting developments in in-context few-shot learning with foundation models.
- Training set: 665 examples
- Validation set: 285 examples
- Test set: 63,746 examples

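A minimal loading sketch follows, assuming the repository can be read with the standard 🤗 `datasets` API and exposes the train/validation/test splits listed above; if the data is instead distributed as raw archives, download the files with `huggingface_hub` and parse them directly.

```python
# Minimal sketch: load VQAonline with the Hugging Face `datasets` library.
# Assumption: the repo is loadable via `load_dataset` and exposes the
# train/validation/test splits described above.
from datasets import load_dataset

dataset = load_dataset("ChongyanChen/VQAonline")

# Inspect split sizes (expected: 665 train / 285 validation / 63,746 test).
for split_name, split in dataset.items():
    print(split_name, len(split))

# Peek at one few-shot training example.
print(dataset["train"][0])
```
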
## Contact
- Chongyan Chen: [email protected]

## Citation
**BibTeX:**
```bibtex
@article{chen2023vqaonline,
  title={Fully Authentic Visual Question Answering Dataset from Online Communities},
  author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
  journal={arXiv preprint arXiv:2311.15562},
  year={2023}
}
```