---
license: apache-2.0
task_categories:
  - multiple-choice
  - visual-question-answering
language:
  - en
size_categories:
  - n<1K
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: dataset.json
paperswithcode_id: mapeval-visual
tags:
  - geospatial
---

# MapEval-Visual

This dataset was introduced in [MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models](https://arxiv.org/abs/2501.00316).

## Example

**Image**

**Query**

I am presently visiting Mount Royal Park. Could you please inform me about the nearby historical landmark?

**Options**

  1. Circle Stone
  2. Secret pool
  3. Maison William Caldwell Cottingham
  4. Poste de cavalerie du Service de police de la Ville de Montreal

**Correct Option**

  1. Circle Stone
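
For orientation, the example above corresponds roughly to a record of the following shape. Field names are taken from the usage snippet further down; the image path shown here is illustrative, not the actual filename in the dataset.

```python
# Approximate shape of one benchmark record (field names from the usage example below).
example_item = {
    "question": "I am presently visiting Mount Royal Park. Could you please inform me "
                "about the nearby historical landmark?",
    "options": [
        "Circle Stone",
        "Secret pool",
        "Maison William Caldwell Cottingham",
        "Poste de cavalerie du Service de police de la Ville de Montreal",
    ],
    "answer": 1,  # 1-based index into "options"; 0 marks unanswerable questions
    "context": "Vdata/some_map_screenshot.png",  # hypothetical path; real records point into Vdata/
}
```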

## Prerequisite

Download `Vdata.zip` and extract it into the working directory. The extracted `Vdata/` directory contains all the images.
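
If you prefer to extract the archive programmatically, here is a minimal sketch, assuming `Vdata.zip` has already been downloaded into the working directory:

```python
import zipfile

# Assumes Vdata.zip is already present in the working directory.
with zipfile.ZipFile("Vdata.zip") as zf:
    zf.extractall(".")  # produces the Vdata/ directory referenced by item["context"]
```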

## Usage

```python
from datasets import load_dataset
import PIL.Image

# Load the benchmark split
ds = load_dataset("MapEval/MapEval-Visual", name="benchmark")

for item in ds["test"]:

    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given image, answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # List the options, numbered from 1
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Add a concluding instruction to encourage selection of an answer
    prompt += "\nSelect the best option by choosing its number."

    # Load the image from the extracted Vdata/ directory
    img = PIL.Image.open(item["context"])

    # Use the prompt and image as needed
    print([prompt, img])  # Replace with your own inference logic

    # Match the model output against item["answer"] (a 1-based index into item["options"],
    # i.e. the correct option text is item["options"][item["answer"] - 1]).
    # item["answer"] == 0 means the question is unanswerable.
```
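
To score a model against the leaderboard below, you need to map its free-form response back to an option number and compare it with `item["answer"]`. Here is a minimal sketch that continues from the snippet above (it reuses `ds`); `build_prompt` and `ask_model` are hypothetical placeholders for the prompt construction shown earlier and your own inference call.

```python
import re
import PIL.Image

def extract_choice(text: str) -> int:
    """Return the first integer found in a model response, or 0 if there is none."""
    match = re.search(r"\d+", text)
    return int(match.group()) if match else 0

correct = 0
for item in ds["test"]:
    prompt = build_prompt(item)        # hypothetical: same prompt construction as above
    img = PIL.Image.open(item["context"])
    response = ask_model(prompt, img)  # hypothetical: your own VLM inference call
    # item["answer"] is the 1-based index of the correct option; 0 marks unanswerable questions.
    if extract_choice(response) == item["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```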

## Leaderboard

| Model | Overall | Place Info | Nearby | Routing | Counting | Unanswerable |
|---|---|---|---|---|---|---|
| Claude-3.5-Sonnet | 61.65 | 82.64 | 55.56 | 45.00 | 47.73 | 90.00 |
| GPT-4o | 58.90 | 76.86 | 57.78 | 50.00 | 47.73 | 40.00 |
| Gemini-1.5-Pro | 56.14 | 76.86 | 56.67 | 43.75 | 32.95 | 80.00 |
| GPT-4-Turbo | 55.89 | 75.21 | 56.67 | 42.50 | 44.32 | 40.00 |
| Gemini-1.5-Flash | 51.94 | 70.25 | 56.47 | 38.36 | 32.95 | 55.00 |
| GPT-4o-mini | 50.13 | 77.69 | 47.78 | 41.25 | 28.41 | 25.00 |
| Qwen2-VL-7B-Instruct | 51.63 | 71.07 | 48.89 | 40.00 | 40.91 | 40.00 |
| Glm-4v-9b | 48.12 | 73.55 | 42.22 | 41.25 | 34.09 | 10.00 |
| InternLm-Xcomposer2 | 43.11 | 70.41 | 48.89 | 43.75 | 34.09 | 10.00 |
| MiniCPM-Llama3-V-2.5 | 40.60 | 60.33 | 32.22 | 32.50 | 31.82 | 30.00 |
| Llama-3-VILA1.5-8B | 32.99 | 46.90 | 32.22 | 28.75 | 26.14 | 5.00 |
| DocOwl1.5 | 31.08 | 43.80 | 23.33 | 32.50 | 27.27 | 0.00 |
| Llava-v1.6-Mistral-7B-hf | 31.33 | 42.15 | 28.89 | 32.50 | 21.59 | 15.00 |
| Paligemma-3B-mix-224 | 30.58 | 37.19 | 25.56 | 38.75 | 23.86 | 10.00 |
| Llava-1.5-7B-hf | 20.05 | 22.31 | 18.89 | 13.75 | 28.41 | 0.00 |
| Human | 82.23 | 81.67 | 82.42 | 85.18 | 78.41 | 65.00 |

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```