Commit c7906bb · 1 Parent(s): d0209b2 · committed by P3ngLiu

Update README.md
Files changed (1)
  1. README.md +12 -9
README.md CHANGED
@@ -2,16 +2,21 @@
 license: openrail
 ---
 
-# OVDEval: How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection
+<h1 align="center"> OVDEval </h1>
+<h2 align="center"> A Comprehensive Evaluation Benchmark for Open-Vocabulary Detection</h2>
+<p align="center">
+  <a href="https://arxiv.org/abs/2308.13177"><strong> [Paper 📄] </strong></a>
+</p>
 
 ## Dataset Description
 
 OVDEval is a new benchmark for OVD models. It includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric, Non-Maximum Suppression Average Precision (NMS-AP), to address this issue.
 
-## Languages
-
-The dataset contains questions in English and code solutions in Python.
+### Data Details
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a2e94991d8e7fb24f7688/ngOkek9wJdppyxPB0xZ8Q.png)
 
 ## Dataset Structure
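Aside on the metric named in the description above: the commit itself does not define NMS-AP, so the following is only a minimal sketch of the idea as we read it, class-agnostic NMS over all predicted boxes before ordinary AP scoring. The helper name, the 0.5 IoU threshold, and the use of torchvision's `nms` op are illustrative choices, not the authors' code.

```python
import torch
from torchvision.ops import nms  # label-blind NMS over box coordinates


def class_agnostic_nms(boxes: torch.Tensor, scores: torch.Tensor,
                       labels: torch.Tensor, iou_threshold: float = 0.5):
    """Suppress overlapping detections regardless of predicted label.

    boxes: (N, 4) in xyxy format; scores: (N,); labels: (N,) label indices.
    """
    # Because labels are ignored, a high-scoring box for a hard negative
    # (e.g. "computer with screen on") competes directly with the box for
    # the true label ("computer without screen on") and can suppress it,
    # so redundant wrong-label boxes no longer inflate AP.
    keep = nms(boxes, scores, iou_threshold)
    return boxes[keep], scores[keep], labels[keep]


# Ordinary COCO-style AP is then computed on the surviving detections;
# nothing else in the evaluation pipeline changes.
```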
@@ -44,10 +49,10 @@ The dataset contains questions in English and code solutions in Python.
 "height": 254,
 "width": 340,
 "text": [
-"computer without screen on" # "text" represents the positive sample in this image.
+"computer without screen on" # "text" represents the annotated positive labels of this image.
 ],
 "neg_text": [
-"computer with screen on" # "neg_text" represents the category for which this image does not belong.
+"computer with screen on" # "neg_text" contains fine-grained hard negative labels generated according to specific sub-tasks.
 ]
 }]
 }
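To make the `text`/`neg_text` fields above concrete: evaluation typically queries the detector with the union of positive and hard-negative labels for each image. The sketch below is hedged, it assumes the fragment closes a COCO-style dict with an `images` list (suggested by the trailing `}]` and `}`) and uses a placeholder file name.

```python
import json

# "ovdeval_annotations.json" is a placeholder; the real per-sub-task
# annotation files live in the dataset repo referenced below.
with open("ovdeval_annotations.json") as f:
    data = json.load(f)

for image in data["images"]:
    positives = image["text"]      # labels actually present in the image
    negatives = image["neg_text"]  # plausible but wrong fine-grained labels
    # The detector is queried with both sets; a model with real
    # fine-grained understanding returns boxes only for the positives.
    vocabulary = positives + negatives
```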
@@ -58,11 +63,9 @@ The dataset contains questions in English and code solutions in Python.
 
 Reference https://github.com/om-ai-lab/OVDEval
 
-### Data Fields
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a2e94991d8e7fb24f7688/ngOkek9wJdppyxPB0xZ8Q.png)
-
+## Languages
+
+The dataset contains questions in English and code solutions in Python.
 
 ## Citation Information
 If you find our data or code helpful, please cite the original paper:
 