geshijoker committed
Commit
e3564a6
1 Parent(s): 5bd56e7

Update README.md

Files changed (1):
  1. README.md +154 -3
README.md CHANGED
@@ -238,9 +238,160 @@ task_categories:
  - feature-extraction
  language:
  - en
- tags:
- - not-for-all-audiences
  pretty_name: ChaosMining
  size_categories:
  - 10B<n<100B
- ---
+ ---
+ # Dataset Card for ChaosMining
+
+ ChaosMining is a synthetic dataset for evaluating post-hoc local attribution methods in low signal-to-noise ratio (SNR) environments.
+ Post-hoc local attribution methods are explainable-AI techniques such as Saliency (SA), DeepLift (DL), Integrated Gradients (IG), and Feature Ablation (FA).
+ The dataset is used to evaluate how well these methods select the truly predictive features when a large amount of noise is present.
+
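+ As a minimal, hypothetical sketch of how such methods can be applied, the snippet below uses the Captum library on a toy regression model; the model, feature count, and random inputs are illustrative assumptions, not part of the benchmark:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from captum.attr import Saliency, DeepLift, IntegratedGradients, FeatureAblation
+
+ # Toy regression model: 20 input features, one scalar output per sample
+ class MLP(nn.Module):
+     def __init__(self, in_features=20):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU(), nn.Linear(64, 1))
+
+     def forward(self, x):
+         return self.net(x).squeeze(-1)  # shape (batch,)
+
+ model = MLP().eval()
+ x = torch.randn(8, 20, requires_grad=True)  # placeholder inputs
+
+ # Each method returns per-feature attribution scores with the same shape as x
+ attributions = {
+     'SA': Saliency(model).attribute(x),
+     'DL': DeepLift(model).attribute(x),
+     'IG': IntegratedGradients(model).attribute(x),
+     'FA': FeatureAblation(model).attribute(x),
+ }
+ ```
+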
251
+ ## Dataset Descriptions
252
+
253
+ There exist three modalities:
254
+ - **Symbolic Functional Data**: Mathematical functions with noise, used to study regression tasks. Derived from human-designed symbolic functions with predictive and irrelevant features.
255
+ - **Vision Data**: Images combining foreground objects from the CIFAR-10 dataset and background noise or flower images. 224x224 images with 32x32 foreground objects and either Gaussian noise or structural flower backgrounds.
256
+ - **Audio Data**: Audio sequences with a mix of relevant (speech commands) and irrelevant (background noise) signals.
257
+
+ ### Dataset Sources
+
+ Please check out the following:
+
+ - **Repository:** [https://github.com/geshijoker/ChaosMining/tree/main](https://github.com/geshijoker/ChaosMining/tree/main) for data curation and evaluation code.
+ - **Paper:** [https://arxiv.org/pdf/2406.12150](https://arxiv.org/pdf/2406.12150) for details.
+
+ ### Dataset Details
+
+ ### Symbolic Functional Data
+
+ - **Synthetic Generation:** Data is derived from predefined mathematical functions, ensuring a clear ground truth for evaluation.
+ - **Functions:** Human-designed symbolic functions combining primitive mathematical operations (e.g., polynomial, trigonometric, and exponential functions).
+ - **Generation Process:** Each feature is sampled from a normal distribution N(μ, σ²) with μ = 0 and σ = 1. Targets are computed from the predictive features using the defined symbolic functions, and noise is introduced by appending irrelevant features (see the sketch after this list).
+ - **Annotations:** Ground-truth annotations are generated from the symbolic functions used to create the data.
+ - **Normalization:** Data values are normalized to ensure consistency across samples.
+
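+ A minimal sketch of this generation process (the specific function, feature counts, and sample size below are illustrative assumptions, not the exact formulas used in the dataset):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n_samples, n_predictive, n_irrelevant = 1000, 3, 97
+
+ # Every feature is sampled from N(0, 1); only the first n_predictive features matter
+ X = rng.normal(loc=0.0, scale=1.0, size=(n_samples, n_predictive + n_irrelevant))
+
+ # Illustrative symbolic function combining polynomial, trigonometric, and exponential terms
+ y = X[:, 0] ** 2 + np.sin(X[:, 1]) + np.exp(-np.abs(X[:, 2]))
+
+ # Ground-truth annotation: indices of the predictive features
+ predictive_idx = np.arange(n_predictive)
+
+ # Normalize targets for consistency across samples
+ y = (y - y.mean()) / y.std()
+ ```
+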
+ ### Vision Data
+
+ - **Foreground Images:** CIFAR-10 dataset, containing 32x32-pixel images of common objects.
+ - **Background Images:** Flower102 dataset images and Gaussian-noise images.
+ - **Combination:** Foreground images are overlaid onto background images to create synthetic samples; each foreground is either centered or placed at a random position (see the sketch after this list).
+ - **Noise Types:** Backgrounds are generated with Gaussian noise for the random-noise condition or sampled from the Flower102 dataset for the structured-noise condition.
+ - **Annotations:** Each image is annotated with the position of the foreground object and its class label.
+ - **Splitting:** The dataset is divided into training and validation sets to ensure no data leakage.
+
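+ A minimal sketch of the composition step (the Gaussian-noise background, placeholder foreground, and placement logic below are illustrative; loading the actual CIFAR-10 and Flower102 images is omitted):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ # Placeholder 32x32 RGB foreground (in practice, a CIFAR-10 image)
+ foreground = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
+
+ # 224x224 Gaussian-noise background (the structured condition would use a Flower102 image instead)
+ background = rng.normal(loc=128, scale=40, size=(224, 224, 3)).clip(0, 255).astype(np.uint8)
+
+ # Random placement; the alternative condition centers the foreground
+ x = int(rng.integers(0, 224 - 32))
+ y = int(rng.integers(0, 224 - 32))
+ background[y:y + 32, x:x + 32] = foreground
+
+ # Annotation: foreground class label and position
+ sample = {'image': background, 'foreground_label': 0, 'position_x': x, 'position_y': y}
+ ```
+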
+ ### Audio Data
+
+ - **Foreground Audio:** Speech Commands dataset, containing audio clips of spoken commands.
+ - **Background Audio:** Random noise drawn from a normal distribution and samples from the Rainforest Connection Species dataset.
+ - **Combination:** Each audio sample consists of multiple channels; only one channel contains the foreground audio and the rest contain background noise (see the sketch after this list).
+ - **Noise Conditions:** Background noise is either random (drawn from a normal distribution) or structured (sampled from environmental sounds).
+ - **Annotations:** Each audio sample is annotated with the class label of the foreground audio and the index of the predictive channel.
+ - **Normalization:** Audio signals are normalized to a consistent range for uniform processing.
+
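+ A minimal sketch of the channel-mixing step (the channel count, clip length, and placeholder speech clip are illustrative assumptions):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n_channels, clip_len = 10, 16000  # e.g., one second at 16 kHz
+
+ # Background: random noise in every channel (the structured condition would use environmental sounds)
+ audio = rng.normal(loc=0.0, scale=1.0, size=(n_channels, clip_len))
+
+ # Foreground: a placeholder standing in for a Speech Commands clip, inserted into one random channel
+ speech_clip = rng.normal(loc=0.0, scale=1.0, size=clip_len)
+ predictive_channel = int(rng.integers(0, n_channels))
+ audio[predictive_channel] = speech_clip
+
+ # Normalize to a consistent range
+ audio = audio / np.max(np.abs(audio))
+
+ # Annotation: class label of the foreground audio and index of the predictive channel
+ sample = {'audio': audio, 'label': 'yes', 'predictive_channel': predictive_channel}
+ ```
+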
+ ### Benchmark Metrics
+
+ The benchmark follows a **Model × Attribution × Noise Condition** triplet design to evaluate the performance of various post-hoc attribution methods across different scenarios.
+
+ - **Uniform Score (UScore)**: Measures prediction accuracy, normalized to the range 0 to 1.
+ - **Functional Precision (FPrec)**: Measures the overlap between the top-k attributed features and the truly predictive features (see the sketch after this list).
+
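+ A minimal sketch of the top-k overlap idea behind FPrec (the attribution scores and ground-truth indices are placeholders, and this is not the benchmark's exact evaluation code):
+
+ ```python
+ import numpy as np
+
+ def functional_precision(attribution, predictive_idx, k):
+     """Fraction of the top-k attributed features that are truly predictive."""
+     top_k = np.argsort(-np.abs(attribution))[:k]
+     return len(set(top_k) & set(predictive_idx)) / k
+
+ # Placeholder per-feature attribution scores for one sample (100 features, first 3 predictive)
+ attribution = np.random.randn(100)
+ print(functional_precision(attribution, predictive_idx=[0, 1, 2], k=3))
+ ```
+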
+ ## Uses
+
+ ### Dataset Structure
+
+ The available configurations (sub-datasets) are 'symbolic_simulation', 'audio_RBFP', 'audio_RBRP', 'audio_SBFP', 'audio_SBRP', 'vision_RBFP', 'vision_RBRP', 'vision_SBFP', and 'vision_SBRP'.
+ Pick one of them to use. The 'symbolic_simulation' configuration only has a 'train' split, while the others have both 'train' and 'validation' splits.
+
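+ The available configurations and splits can also be listed programmatically; a small sketch using the `datasets` API:
+
+ ```python
+ from datasets import get_dataset_config_names, get_dataset_split_names
+
+ # List every configuration of the dataset
+ print(get_dataset_config_names('geshijoker/chaosmining'))
+
+ # Inspect the splits of one chosen configuration
+ print(get_dataset_split_names('geshijoker/chaosmining', 'vision_RBFP'))
+ ```
+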
+ ### Load Dataset
+
+ For general data-loading usage of the Hugging Face API, including how to work with TensorFlow, PyTorch, JAX, and more, please refer to the [general usage guide](https://huggingface.co/docs/datasets/loading).
+ Below is template code for PyTorch users.
+
+ ```python
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+
+ # Load the symbolic functional data from the Hugging Face Hub
+ dataset = load_dataset('geshijoker/chaosmining', 'symbolic_simulation')
+ print(dataset)
+
+ # Out: DatasetDict({
+ #     train: Dataset({
+ #         features: ['num_var', 'function'],
+ #         num_rows: 15
+ #     })
+ # })
+
+ # Read the formulas as a list of (number_of_features, function_string) pairs
+ formulas = [[data_slice['num_var'], data_slice['function']] for data_slice in dataset['train']]
+
+ # Load the vision data from the Hugging Face Hub
+ dataset = load_dataset('geshijoker/chaosmining', 'vision_RBFP', split='validation', streaming=True)
+
+ # Convert the Hugging Face dataset to PyTorch format for the vision data
+ dataset = dataset.with_format('torch')
+
+ # Use a dataloader for minibatch loading
+ dataloader = DataLoader(dataset, batch_size=32)
+ next(iter(dataloader))
+
+ # Out: {'image': torch.Size([32, 3, 224, 224]), 'foreground_label': torch.Size([32]), 'position_x': torch.Size([32]), 'position_y': torch.Size([32])}
+
+ # Load the audio data from the Hugging Face Hub
+ dataset = load_dataset('geshijoker/chaosmining', 'audio_RBFP', split='validation', streaming=True)
+
+ # Convert the Hugging Face dataset to PyTorch format for the audio data.
+ # Define the transformation
+ def transform_audio(example):
+     # Remove the 'path' field
+     del example['audio']['path']
+
+     # Directly access the 'array' and 'sampling_rate' from the 'audio' field
+     example['sampling_rate'] = example['audio']['sampling_rate']
+     example['audio'] = example['audio']['array']
+
+     return example
+
+ # Apply the transformation to the dataset
+ dataset = dataset.map(transform_audio)
+ dataset = dataset.with_format('torch')
+
+ # Use a dataloader for minibatch loading
+ dataloader = DataLoader(dataset, batch_size=32)
+ next(iter(dataloader))
+
+ # Out: {'audio': torch.Size([32, 10, 16000]), 'sampling_rate': torch.Size([32]), 'label': list_of_32, 'file_name': list_of_32}
+ ```
+
+ ### Curation Rationale
+
+ The dataset was curated to create controlled, low signal-to-noise ratio environments that test the efficacy of post-hoc local attribution methods.
+
+ - **Purpose:** To study the effectiveness of neural networks in regression tasks where relevant features are mixed with noise.
+ - **Challenges Addressed:** Differentiating between predictive and irrelevant features in a controlled, low signal-to-noise ratio environment.
+
+ ### Source Data
+
+ Synthetic data derived from known public datasets (CIFAR-10, Flower102, Speech Commands, Rainforest Connection Species) and generated noise.
+
+ ### Citation
+
+ If you use this dataset or code in your research, please cite the paper as follows:
+
+ **BibTeX:**
+
+ @article{shi2024chaosmining,
+   title={ChaosMining: A Benchmark to Evaluate Post-Hoc Local Attribution Methods in Low SNR Environments},
+   author={Shi, Ge and Kan, Ziwen and Smucny, Jason and Davidson, Ian},
+   journal={arXiv preprint arXiv:2406.12150},
+   year={2024}
+ }
+
+ **APA:**
+
+ Shi, G., Kan, Z., Smucny, J., & Davidson, I. (2024). ChaosMining: A Benchmark to Evaluate Post-Hoc Local Attribution Methods in Low SNR Environments. arXiv preprint arXiv:2406.12150.
+
+ ## Dataset Card Contact
+
+ Davidson Lab at UC Davis
397