---
license: mit
extra_gated_prompt: >-
  You agree to not use the dataset to conduct experiments that cause harm to
  human subjects. Please note that the data in this dataset may be subject to
  other agreements. Before using the data, be sure to read the relevant
  agreements carefully to ensure compliant use. Video copyrights belong to the
  original video creators or platforms and are for academic research use only.
task_categories:
  - visual-question-answering
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
modalities:
  - Video
  - Text
configs:
  - config_name: action_sequence
    data_files: json/action_sequence.json
  - config_name: moving_count
    data_files: json/moving_count.json
  - config_name: action_prediction
    data_files: json/action_prediction.json
  - config_name: episodic_reasoning
    data_files: json/episodic_reasoning.json
  - config_name: action_antonym
    data_files: json/action_antonym.json
  - config_name: action_count
    data_files: json/action_count.json
  - config_name: scene_transition
    data_files: json/scene_transition.json
  - config_name: object_shuffle
    data_files: json/object_shuffle.json
  - config_name: object_existence
    data_files: json/object_existence.json
  - config_name: unexpected_action
    data_files: json/unexpected_action.json
  - config_name: moving_direction
    data_files: json/moving_direction.json
  - config_name: state_change
    data_files: json/state_change.json
  - config_name: object_interaction
    data_files: json/object_interaction.json
  - config_name: character_order
    data_files: json/character_order.json
  - config_name: action_localization
    data_files: json/action_localization.json
  - config_name: counterfactual_inference
    data_files: json/counterfactual_inference.json
  - config_name: fine_grained_action
    data_files: json/fine_grained_action.json
  - config_name: moving_attribute
    data_files: json/moving_attribute.json
  - config_name: egocentric_navigation
    data_files: json/egocentric_navigation.json
language:
  - en
size_categories:
  - 1K<n<10K
---

# MVTamperBench Dataset

## Overview

MVTamperBenchEnd is a robust benchmark designed to evaluate Vision-Language Models (VLMs) against adversarial video tampering effects. It leverages the diverse and well-structured MVBench dataset, systematically augmented with four distinct tampering techniques:

  1. **Masking:** Overlays a black rectangle on a 1-second segment, simulating visual data loss.
  2. **Repetition:** Repeats a 1-second segment, introducing temporal redundancy.
  3. **Rotation:** Rotates a 1-second segment by 180 degrees, introducing spatial distortion.
  4. **Substitution:** Replaces a 1-second segment with a random clip from another video, disrupting the temporal and contextual flow.

The tampering effects are applied to the middle of each video to ensure consistent evaluation across models.
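
To make this concrete, below is a minimal sketch of the masking effect, assuming OpenCV and a centered rectangle covering half the frame; the official implementation and the exact rectangle size may differ:

```python
import cv2

def apply_masking_tamper(src_path: str, dst_path: str, duration_s: float = 1.0) -> None:
    """Overlay a black rectangle on a 1-second segment at the video's midpoint.

    Illustrative sketch only; MVTamperBench's official implementation may differ.
    """
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Frame window for a 1-second segment centered on the middle of the video.
    half = int(fps * duration_s / 2)
    start, end = n_frames // 2 - half, n_frames // 2 + half

    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for i in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        if start <= i < end:
            # Black rectangle over the central region (size is an assumption).
            frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 0
        out.write(frame)
    cap.release()
    out.release()
```

The same windowing logic carries over to the other three effects: repetition re-emits the window's frames, rotation applies `cv2.rotate` to them, and substitution swaps in frames from another video.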


## Dataset Details

The MVTamperBenchEnd dataset is built upon the MVBench dataset, a widely recognized collection used in video-language evaluation. It features a broad spectrum of content to ensure robust model evaluation, including:

  - **Content Diversity:** Spanning a variety of objects, activities, and settings.
  - **Temporal Dynamics:** Videos with temporal dependencies, for testing coherence.
  - **Benchmark Utility:** Built from recognized datasets, enabling comparisons with prior work.

### Incorporated Datasets

The MVTamperBenchEnd dataset integrates videos from several sources, each contributing unique characteristics:

| Dataset Name | Primary Scene Type and Unique Characteristics |
| --- | --- |
| STAR | Indoor actions and object interactions |
| PAXION | Real-world scenes with nuanced actions |
| Moments in Time (MiT) V1 | Indoor/outdoor scenes across varied contexts |
| FunQA | Humor-focused, creative, real-world events |
| CLEVRER | Simulated scenes for object movement and reasoning |
| Perception Test | First/third-person views for object tracking |
| Charades-STA | Indoor human actions and interactions |
| MoVQA | Diverse scenes for scene transition comprehension |
| VLN-CE | Indoor navigation from agent perspective |
| TVQA | TV show scenes for episodic reasoning |

### Dataset Expansion

The original MVBench dataset contains 3,487 videos. Applying each of the four tampering effects to every video yields four tampered variants per original, for a total (original + tampered) of 3,487 × 5 = 17,435 videos. This ensures:

  - **Diversity:** Varied adversarial challenges for robust evaluation.
  - **Volume:** Sufficient data for training and testing.

Below is a visual representation of the tampered video length distribution:

*Figure: Tampered Video Length Distribution*


## Benchmark Construction

MVTamperBench is built with modularity, scalability, and reproducibility at its core:

  - **Modularity:** Each tampering effect is implemented as a reusable class, allowing for easy adaptation.
  - **Scalability:** Supports customizable tampering parameters, such as location and duration.
  - **Integration:** Fully compatible with VLMEvalKit, enabling seamless evaluation of tampering robustness alongside general VLM capabilities.

By maintaining consistent tampering duration (1 second) and location (center of the video), MVTamperBench ensures fair and comparable evaluations across models.
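
As a sketch of what such a reusable effect class might look like (the class and parameter names here are illustrative assumptions, not the benchmark's actual API):

```python
import cv2

class RotationTamper:
    """Hypothetical tampering effect: rotate a fixed-length segment by 180 degrees.

    Sketch only; names and structure are assumptions, not MVTamperBench's
    actual classes.
    """

    def __init__(self, duration_s: float = 1.0, location: str = "center"):
        self.duration_s = duration_s  # length of the tampered segment in seconds
        self.location = location      # position of the segment within the video

    def _bounds(self, n_frames: int, fps: float) -> tuple[int, int]:
        # Only the benchmark's default ("center") placement is sketched here.
        half = int(fps * self.duration_s / 2)
        mid = n_frames // 2
        return mid - half, mid + half

    def apply(self, frames: list, fps: float) -> list:
        start, end = self._bounds(len(frames), fps)
        return [
            cv2.rotate(f, cv2.ROTATE_180) if start <= i < end else f
            for i, f in enumerate(frames)
        ]
```

Exposing duration and location as constructor parameters is what lets the benchmark hold both fixed (1 second, center) for fair cross-model comparison while remaining configurable for other studies.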


## Download Dataset

You can access the MVTamperBenchEnd dataset directly from the Hugging Face repository:

[Download MVTamperBenchEnd Dataset](https://huggingface.co/datasets/Srikant86/MVTamperBenchEnd)


## How to Use

  1. Clone the Hugging Face repository:

     ```bash
     git clone https://huggingface.co/datasets/Srikant86/MVTamperBenchEnd
     cd MVTamperBenchEnd
     ```

  2. Load the dataset using the Hugging Face `datasets` library. Each task is exposed as a separate config, so pass the config name to `load_dataset`:

     ```python
     from datasets import load_dataset

     # Load one task config; swap in any config name from the metadata above.
     dataset = load_dataset("Srikant86/MVTamperBenchEnd", "action_sequence")
     ```

  3. Explore the dataset structure and metadata:

     ```python
     print(dataset["train"])
     ```

  4. Utilize the dataset for tampering detection tasks, model evaluation, and more; for example, the sketch below loads every task config in turn.
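
As a starting point, this minimal sketch iterates over all 19 task configs declared in this repo's metadata and prints the size of each; the `train` split name is carried over from step 3 and is an assumption to verify against the repo:

```python
from datasets import load_dataset

# The 19 task configs listed in this repo's metadata.
CONFIGS = [
    "action_sequence", "moving_count", "action_prediction", "episodic_reasoning",
    "action_antonym", "action_count", "scene_transition", "object_shuffle",
    "object_existence", "unexpected_action", "moving_direction", "state_change",
    "object_interaction", "character_order", "action_localization",
    "counterfactual_inference", "fine_grained_action", "moving_attribute",
    "egocentric_navigation",
]

for name in CONFIGS:
    ds = load_dataset("Srikant86/MVTamperBenchEnd", name)
    print(f"{name}: {ds['train'].num_rows} examples")  # split name assumed
```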


## Citation

If you use MVTamperBench in your research, please cite:

```bibtex
@misc{agarwal2024mvtamperbenchevaluatingrobustnessvisionlanguage,
      title={MVTamperBench: Evaluating Robustness of Vision-Language Models},
      author={Amit Agarwal and Srikant Panda and Angeline Charles and Bhargava Kumar and Hitesh Patel and Priyanranjan Pattnayak and Taki Hasan Rafi and Tejaswini Kumar and Dong-Kyu Chae},
      year={2024},
      eprint={2412.19794},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.19794},
}
```

## License

MVTamperBench is released under the MIT License. See LICENSE for details.