This repository includes the raw outputs of the 2025 NAACL Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" (https://arxiv.org/abs/2411.00154).
To access the results, unzip the file `results.zip`.
You will see folders for each experimental setup (collection, document, sentence, continual training, and fine-tuning). Inside each folder, the results are organized by model. We ran experiments on Pythia 2.8B, Pythia 6.9B, and GPT-Neo 2.7B.
The main files we include are:
- The precomputed MIA attack scores, stored in `results/{data_scale}/EleutherAI/{model}/haritzpuerto/{data_partition}/mia_members.jsonl` and `mia_nonmembers.jsonl`.
- The CSV files with the evaluation performance, stored in `results/{data_scale}/EleutherAI/{model}/haritzpuerto/{data_partition}/*.csv`.
- For each data partition, the datasets used to conduct the experiments, stored in `results/{data_scale}/EleutherAI/{model}/haritzpuerto/{data_partition}/members` and `non_members`. Open them with `datasets.load_from_disk`.
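As a sketch, the layout above can be navigated like this. The concrete model and partition names below are examples taken from this repository; adjust them to the folder you want:

```python
import os

# Build a path into the results tree following the template above.
# The argument values used below (scale, model, partition) are examples.
def result_path(data_scale, model, data_partition, filename):
    return os.path.join(
        "results", data_scale, "EleutherAI", model,
        "haritzpuerto", data_partition, filename,
    )

members_dir = result_path("collection_mia", "pythia-6.9b",
                          "the_pile_00_arxiv/2048", "members")
print(members_dir)

# The member/non-member splits are saved with the Hugging Face `datasets`
# library, so after unzipping they are loaded with:
#   from datasets import load_from_disk
#   members = load_from_disk(members_dir)
```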
The precomputed MIA attacks are stored as JSON Lines files; each line is a JSON object of the following form:
Extract from `results/collection_mia/EleutherAI/pythia-6.9b/haritzpuerto/the_pile_00_arxiv/2048/mia_members.jsonl`:
```json
{
  "pred": {
    "ppl": 9.5,
    "ppl/lowercase_ppl": -1.028301890685848,
    "ppl/zlib": 0.00022461257094747036,
    "Min_5.0% Prob": 9.479779411764707,
    "Min_10.0% Prob": 8.171262254901961,
    "Min_20.0% Prob": 6.549893031784841,
    "Min_30.0% Prob": 5.498956636807818,
    "Min_40.0% Prob": 4.719867435819071,
    "Min_50.0% Prob": 4.099095796676441,
    "Min_60.0% Prob": 3.588011502442997
  },
  "label": 1
}
```
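Each line can be parsed with the standard library. A minimal sketch using an abridged version of the record above (we assume `label` is `1` for members and `0` for non-members, matching the file names):

```python
import json

# One line shaped like the extract above (abridged); JSON ignores the
# embedded newlines, so json.loads parses it directly.
line = '''{"pred": {"ppl": 9.5, "ppl/lowercase_ppl": -1.028301890685848,
"ppl/zlib": 0.00022461257094747036, "Min_20.0% Prob": 6.549893031784841},
"label": 1}'''

record = json.loads(line)
scores = record["pred"]   # one score per MIA attack
label = record["label"]   # assumed: 1 = member, 0 = non-member

print(scores["ppl"], label)
```

In the real files you would iterate over the lines of `mia_members.jsonl` and `mia_nonmembers.jsonl` and call `json.loads` on each.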
The CSV results are tables like the following:
Extract from `results/collection_mia/EleutherAI/pythia-6.9b/haritzpuerto/the_pile_00_arxiv/2048/dataset_inference_pvalues_10_dataset_size.csv`:
| Dataset Size | Known Datasets | Training Size | Eval Size | F1 | P-value | TPR | FPR | AUC | Chunk-level AUC | Seed |
|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 1000 | 2000 | 2000 | 57.07246213473086 | 0.4321467209427013 | 52.9 | 38.6 | 0.593152 | 0.5275510595912055 | 670487 |
| 10 | 1000 | 2000 | 2000 | 56.79208146268461 | 0.555579505655733 | 70.3 | 55.3 | 0.595917 | 0.5277849316855144 | 116739 |
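These files can be read with the standard `csv` module (or pandas). A self-contained sketch, assuming standard comma-separated formatting and using rounded rows shaped like the extract above:

```python
import csv
import io

# Two rows shaped like the extract above (values rounded), inlined here
# for illustration; in practice, open the .csv file instead.
csv_text = """Dataset Size,Known Datasets,Training Size,Eval Size,F1,P-value,TPR,FPR,AUC,Chunk-level AUC,Seed
10,1000,2000,2000,57.07,0.4321,52.9,38.6,0.593152,0.527551,670487
10,1000,2000,2000,56.79,0.5556,70.3,55.3,0.595917,0.527785,116739
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
aucs = [float(r["AUC"]) for r in rows]
print(sum(aucs) / len(aucs))  # mean AUC across the two seeds
```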
Please refer to our 2025 NAACL Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" for the details needed to understand and interpret the results.
Developed at Parameter Lab with the support of Naver AI Lab.
## Disclaimer
This repository contains experimental software results and is published for the sole purpose of giving additional background details on the respective publication.
## Citation

If this work is useful to you, please consider citing it:
```bibtex
@misc{puerto2024scalingmembershipinferenceattacks,
  title={Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models},
  author={Haritz Puerto and Martin Gubri and Sangdoo Yun and Seong Joon Oh},
  year={2024},
  eprint={2411.00154},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.00154},
}
```
✉️ Contact person: Haritz Puerto, [email protected]
🏢 https://www.parameterlab.de/
🌐 https://haritzpuerto.github.io/scaling-mia/
RT.AI https://researchtrend.ai/papers/2411.00154
Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.