---
language:
  - en
license: apache-2.0
tags:
  - RAG
  - model card generation
  - responsible AI
configs:
  - config_name: model_card
    data_files:
      - split: test
        path: model_card_test.csv
      - split: whole
        path: model_card_whole.csv
  - config_name: data_card
    data_files:
      - split: whole
        path: data_card_whole.csv
---

# Automatic Generation of Model and Data Cards: A Step Towards Responsible AI

This work was accepted to NAACL 2024 as an oral presentation.

Abstract: In an era of model and data proliferation in machine learning/AI, especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized, consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.

arXiv: https://arxiv.org/abs/2405.06258

ACL Anthology: https://aclanthology.org/2024.naacl-long.110/

Repository and Code: https://github.com/jiarui-liu/AutomatedModelCardGeneration

Dataset descriptions:

- `model_card_test.csv`: The test set used for model card generation. Model and data cards were collected from the Hugging Face Hub as of October 1, 2023.
- `model_card_whole.csv`: The complete model card dataset, excluding the test set.
- `data_card_whole.csv`: The complete dataset for data card generation.
- Additional files: Other included files may be useful for reproducing our work.
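The configs and splits declared above can be loaded with the 🤗 `datasets` library. A minimal sketch; the repo id `Jerry999/CardBench` is an assumption inferred from this page's context, so adjust it if the dataset lives under a different name:

```python
# Config names and their splits, as declared in this card's metadata
CONFIGS = {
    "model_card": ["test", "whole"],
    "data_card": ["whole"],
}

def load_card_split(config: str, split: str):
    """Load one CardBench split (repo id assumed; change it if it differs)."""
    if split not in CONFIGS.get(config, []):
        raise ValueError(f"unknown config/split: {config}/{split}")
    # Third-party dependency: pip install datasets
    from datasets import load_dataset
    return load_dataset("Jerry999/CardBench", config, split=split)

# Example: the test set for model card generation
# model_card_test = load_card_split("model_card", "test")
```

Each row corresponds to one card from the CSV files listed above, so the same splits can also be read directly with any CSV reader.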

Disclaimer: Please forgive me for not creating this data card as described in our paper. We promise to give it some extra love and polish when we have more time! 🫠

Citation: If you find our work useful, please cite as follows :)

@inproceedings{liu-etal-2024-automatic,
    title = "Automatic Generation of Model and Data Cards: A Step Towards Responsible {AI}",
    author = "Liu, Jiarui  and
      Li, Wenkai  and
      Jin, Zhijing  and
      Diab, Mona",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-long.110",
    doi = "10.18653/v1/2024.naacl-long.110",
    pages = "1975--1997",
    abstract = "In an era of model and data proliferation in machine learning/AI especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.",
}