---
language:
- en
license: apache-2.0
tags:
- RAG
- model card generation
- responsible AI

configs:
- config_name: model_card
  data_files:
  - split: test 
    path: model_card_test.csv 
  - split: whole  
    path: model_card_whole.csv  
- config_name: data_card  
  data_files:
  - split: whole  
    path: data_card_whole.csv 

---


# Automatic Generation of Model and Data Cards: A Step Towards Responsible AI

This work was accepted as an oral presentation at NAACL 2024.

**Abstract**: In an era of model and data proliferation in machine learning/AI especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.

**Paper Arxiv**: https://arxiv.org/abs/2405.06258

**ACL Anthology**: https://aclanthology.org/2024.naacl-long.110/

**Repository and Code**: https://github.com/jiarui-liu/AutomatedModelCardGeneration

**Dataset descriptions**:
- `model_card_test.csv`: Contains the test set used for model card generation. We collected the model cards and data cards from the Hugging Face Hub as of October 1, 2023.
- `model_card_whole.csv`: Represents the complete dataset excluding the test set.
- `data_card_whole.csv`: Represents the complete dataset for data card generation.
- **Additional files**: Other included files may be useful for reproducing our work.
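
The config/split layout declared in the YAML header above can be sketched as a plain mapping, useful for resolving which CSV file backs a given subset. This is a minimal illustration, not part of the released code; the helper name `data_file_for` is ours. With the `datasets` library installed, the same subsets load via `load_dataset("<this-dataset's-repo-id>", "model_card", split="test")`.

```python
# Mapping of dataset configs to their splits and backing CSV files,
# mirroring the `configs` section of the YAML header above.
CONFIGS = {
    "model_card": {
        "test": "model_card_test.csv",
        "whole": "model_card_whole.csv",
    },
    "data_card": {
        "whole": "data_card_whole.csv",
    },
}


def data_file_for(config: str, split: str) -> str:
    """Return the CSV path backing a given config/split pair."""
    try:
        return CONFIGS[config][split]
    except KeyError as exc:
        raise ValueError(f"Unknown config/split: {config}/{split}") from exc
```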

Disclaimer: Please forgive us for not yet structuring this data card as described in our paper. We promise to give it some extra love and polish when we have more time! 🫠

**Citation**: If you find our work useful, please cite as follows :)

```
@inproceedings{liu-etal-2024-automatic,
    title = "Automatic Generation of Model and Data Cards: A Step Towards Responsible {AI}",
    author = "Liu, Jiarui  and
      Li, Wenkai  and
      Jin, Zhijing  and
      Diab, Mona",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-long.110",
    doi = "10.18653/v1/2024.naacl-long.110",
    pages = "1975--1997",
    abstract = "In an era of model and data proliferation in machine learning/AI especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.",
}
```