---
license: apache-2.0
language:
  - en
size_categories:
  - n<1K
task_categories:
  - question-answering
  - multiple-choice
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: dataset.json
tags:
  - geospatial
annotations_creators:
  - expert-generated
paperswithcode_id: mapeval-textual
---

# MapEval-Textual

MapEval-Textual is the textual variant of the MapEval benchmark for evaluating geo-spatial reasoning in foundation models. Each item pairs a textual map context with a multiple-choice question. The dataset was created using MapQaTor, a tool for annotating map-based question-answering datasets.

## Usage

```python
from datasets import load_dataset

# Load the benchmark split
ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

# Build a clear multiple-choice prompt for each item
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given context, answer the multiple-choice question by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # List the numbered options
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Ask the model to answer with an option number
    prompt += "\nSelect the best option by choosing its number."

    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
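The loop above only prints the prompts. To actually score a model, you can compare the option number it returns against the gold label. Below is a minimal sketch, assuming the correct option is stored as a 1-based index in `item["answer"]` (verify the field name and indexing convention against the actual dataset schema before relying on this):

```python
import re

def build_prompt(item) -> str:
    """Same prompt construction as the loop above, factored into a function."""
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given context, answer the multiple-choice question "
        "by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    return prompt + "\nSelect the best option by choosing its number."

def extract_choice(response: str):
    """Pull the first integer out of a model response, e.g. '3' or 'Option 3.'."""
    match = re.search(r"\d+", response)
    return int(match.group()) if match else None

def accuracy(split, ask_model) -> float:
    """Score `ask_model(prompt) -> str` against the gold labels.

    Assumes the correct option is a 1-based index in item["answer"]
    (assumed field name -- check the actual schema).
    """
    correct = sum(
        extract_choice(ask_model(build_prompt(item))) == item["answer"]
        for item in split
    )
    return correct / len(split)
```

Here `ask_model` is any callable that sends a prompt to your model and returns its text response.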

## Leaderboard

Accuracy (%) on the test split, overall and per question category:

| Model | Overall | Place Info | Nearby | Routing | Trip | Unanswerable |
|---|---|---|---|---|---|---|
| Claude-3.5-Sonnet | 66.33 | 73.44 | 73.49 | 75.76 | 49.25 | 40.00 |
| Gemini-1.5-Pro | 66.33 | 65.63 | 74.70 | 69.70 | 47.76 | 85.00 |
| GPT-4o | 63.33 | 64.06 | 74.70 | 69.70 | 49.25 | 40.00 |
| GPT-4-Turbo | 62.33 | 67.19 | 71.08 | 71.21 | 47.76 | 30.00 |
| Gemini-1.5-Flash | 58.67 | 62.50 | 67.47 | 66.67 | 38.81 | 50.00 |
| GPT-4o-mini | 51.00 | 46.88 | 63.86 | 57.58 | 40.30 | 25.00 |
| GPT-3.5-Turbo | 37.67 | 26.56 | 53.01 | 48.48 | 28.36 | 5.00 |
| Llama-3.1-70B | 61.00 | 70.31 | 67.47 | 69.70 | 40.30 | 45.00 |
| Llama-3.2-90B | 58.33 | 68.75 | 66.27 | 66.67 | 38.81 | 30.00 |
| Qwen2.5-72B | 57.00 | 62.50 | 71.08 | 63.64 | 41.79 | 10.00 |
| Qwen2.5-14B | 53.67 | 57.81 | 71.08 | 59.09 | 32.84 | 20.00 |
| Gemma-2.0-27B | 49.00 | 39.06 | 71.08 | 59.09 | 31.34 | 15.00 |
| Gemma-2.0-9B | 47.33 | 50.00 | 50.60 | 59.09 | 34.33 | 30.00 |
| Llama-3.1-8B | 44.00 | 53.13 | 57.83 | 45.45 | 23.88 | 20.00 |
| Qwen2.5-7B | 43.33 | 48.44 | 49.40 | 42.42 | 38.81 | 20.00 |
| Mistral-Nemo | 43.33 | 46.88 | 50.60 | 50.00 | 32.84 | 15.00 |
| Mixtral-8x7B | 43.00 | 53.13 | 54.22 | 45.45 | 26.87 | 10.00 |
| Phi-3.5-mini | 37.00 | 40.63 | 48.19 | 46.97 | 20.90 | 0.00 |
| Llama-3.2-3B | 33.00 | 31.25 | 49.40 | 31.82 | 25.37 | 0.00 |
| Human | 86.67 | 92.19 | 90.36 | 81.81 | 88.06 | 65.00 |
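To produce a per-category breakdown like the columns above for your own model, group predictions by question category. A minimal sketch, assuming each item names its category in an `item["classification"]` field (hypothetical field name -- inspect `dataset.json` for the real one):

```python
from collections import defaultdict

def accuracy_by_category(split, predictions):
    """Per-category accuracy (%), given one predicted option index per item.

    Assumes item["classification"] names the question category and
    item["answer"] holds the gold 1-based option index (both are
    assumed field names -- check the actual schema).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for item, pred in zip(split, predictions):
        category = item["classification"]
        totals[category] += 1
        hits[category] += int(pred == item["answer"])
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}
```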

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```