Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ license: openrail

## Dataset Description

-OVDEval is a new benchmark for OVD models, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue.
+**OVDEval** is a new benchmark for OVD models, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue.

### Data Details

@@ -27,7 +27,13 @@ OVDEval is a new benchmark for OVD model, which includes 9 sub-tasks and introdu
        "supercategory": "object",
        "id": 0,
        "name": "computer without screen on"
-    }
+    },
+    {
+        "supercategory": "object",
+        "id": 1,
+        "name": "computer with screen on"
+    }
+]
"annotations": [
    {
        "id": 0,
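
The snippet above follows a COCO-style layout, where `categories` carries the fine-grained labels (including hard-negative pairs such as "computer without screen on" vs. "computer with screen on") and `annotations` references them by id. As a rough illustration only, the sketch below shows how such a file could be read; the file name `ovdeval_annotations.json` and the `category_id` field are assumptions based on the usual COCO convention, not details confirmed by this diff.

```python
import json

# Minimal sketch, assuming the full annotation file follows the COCO-style
# layout excerpted in the diff above. The file name is a placeholder.
with open("ovdeval_annotations.json") as f:
    data = json.load(f)

# Map fine-grained category ids to their label text,
# e.g. 0 -> "computer without screen on", 1 -> "computer with screen on".
id_to_name = {cat["id"]: cat["name"] for cat in data["categories"]}

# Assumption: each annotation links to a category via the standard COCO
# "category_id" field (not shown in the truncated snippet above).
for ann in data["annotations"]:
    label = id_to_name.get(ann.get("category_id"), "<unknown>")
    print(ann["id"], label)
```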