title (string) | abstract (string) | url (string) | category (string) | prediction (string) | probability (float64) | arxiv_id (string) |
---|---|---|---|---|---|---|
Credit card score prediction using machine learning models: A new dataset | The use of credit cards has recently increased, creating an essential need
for credit card assessment methods to minimize potential risks. This study
investigates the utilization of machine learning (ML) models for a credit card
default prediction system. The main goal here is to identify the
best-performing ML model for a newly proposed credit card scoring dataset. This new
dataset, which includes credit card transaction histories and customer profiles, is
proposed and tested using a variety of machine learning algorithms, including
logistic regression, decision trees, random forests, a multi-layer perceptron
(MLP) neural network, XGBoost, and LightGBM. To prepare the data for machine
learning models, we perform data pre-processing, feature extraction, feature
selection, and data balancing techniques. Experimental results demonstrate that
MLP outperforms logistic regression, decision trees, random forests, LightGBM,
and XGBoost in terms of predictive performance on the true positive rate, achieving
an impressive area under the curve (AUC) of 86.7% and an accuracy rate of
91.6%, with a recall rate exceeding 80%. These results indicate the superiority
of MLP in predicting defaulting customers and assessing potential risks.
Furthermore, they help banks and other financial institutions in predicting
loan defaults at an earlier stage. | http://arxiv.org/abs/2310.02956v1 | cs.LG | new_dataset | 0.994289 | 2310.02956 |
Eye Fairness: A Large-Scale 3D Imaging Dataset for Equitable Eye Diseases Screening and Fair Identity Scaling | Fairness or equity in machine learning is profoundly important for societal
well-being, but limited public datasets hinder its progress, especially in the
area of medicine. It is undeniable that fairness in medicine is one of the most
important areas for fairness learning's applications. Currently, no large-scale
public medical datasets with 3D imaging data for fairness learning are
available, while 3D imaging data in modern clinics are standard tests for
disease diagnosis. In addition, existing medical fairness datasets are actually
repurposed datasets, and therefore they typically have limited demographic
identity attributes with at most three identity attributes of age, gender, and
race for fairness modeling. To address this gap, we introduce our Eye Fairness
dataset with 30,000 subjects (Harvard-EF) covering three major eye diseases
including age-related macular degeneration, diabetic retinopathy, and glaucoma
affecting 380 million patients globally. Our Harvard-EF dataset includes both
2D fundus photos and 3D optical coherence tomography scans with six demographic
identity attributes including age, gender, race, ethnicity, preferred language,
and marital status. We also propose a fair identity scaling (FIS) approach
combining group and individual scaling together to improve model fairness. Our
FIS approach is compared with various state-of-the-art fairness learning
methods with superior performance in the racial, gender, and ethnicity fairness
tasks with 2D and 3D imaging data, which demonstrate the utilities of our
Harvard-EF dataset for fairness learning. To facilitate fairness comparisons
between different models, we propose performance-scaled disparity measures,
which can be used to compare model fairness accounting for overall performance
levels. The dataset and code are publicly accessible via
https://ophai.hms.harvard.edu/datasets/harvard-ef30k. | http://arxiv.org/abs/2310.02492v1 | cs.CV | new_dataset | 0.994452 | 2310.02492 |
Constructing Image-Text Pair Dataset from Books | Digital archiving is becoming widespread owing to its effectiveness in
protecting valuable books and providing knowledge to many people
electronically. In this paper, we propose a novel approach to leverage digital
archives for machine learning. If we can fully utilize such digitized data,
machine learning has the potential to uncover unknown insights and ultimately
acquire knowledge autonomously, just like humans read books. As a first step,
we design a dataset construction pipeline comprising an optical character
reader (OCR), an object detector, and a layout analyzer for the autonomous
extraction of image-text pairs. In our experiments, we apply our pipeline on
old photo books to construct an image-text pair dataset, showing its
effectiveness in image-text retrieval and insight extraction. | http://arxiv.org/abs/2310.01936v1 | cs.CV | new_dataset | 0.994403 | 2310.01936 |
Improving Dialogue Management: Quality Datasets vs Models | Task-oriented dialogue systems (TODS) have become crucial for users to
interact with machines and computers using natural language. One of its key
components is the dialogue manager, which guides the conversation towards a
good goal for the user by providing the best possible response. Previous works
have proposed rule-based systems (RBS), reinforcement learning (RL), and
supervised learning (SL) as solutions for correct dialogue management; in
other words, selecting the best response given the user's input. However, this
work argues that the leading cause of dialogue managers (DMs) not achieving maximum performance
resides in the quality of the datasets rather than the models employed thus
far; this means that dataset errors, like mislabeling, originate a large
percentage of failures in dialogue management. We studied the main errors in
the most widely used datasets, Multiwoz 2.1 and SGD, to demonstrate this
hypothesis. To do this, we have designed a synthetic dialogue generator to
fully control the amount and type of errors introduced in the dataset. Using
this generator, we demonstrated that errors in the datasets contribute
proportionally to the performance of the models. | http://arxiv.org/abs/2310.01339v1 | cs.CL | not_new_dataset | 0.991986 | 2310.01339 |
Natural Language Models for Data Visualization Utilizing nvBench Dataset | Translation of natural language into syntactically correct commands for data
visualization is an important application of natural language models and could
be leveraged for many different tasks. A closely related effort is the task of
translating natural languages into SQL queries, which in turn could be
translated into visualization with additional information from the natural
language query supplied \cite{Zhong:2017qr}. Contributing to the progress in
this area of research, we built natural language translation models to
construct simplified versions of data and visualization queries in a language
called Vega Zero. In this paper, we explore the design and performance of these
sequence-to-sequence transformer-based machine learning model architectures
using large language models such as BERT as encoders to predict visualization
commands from natural language queries, as well as apply available T5
sequence-to-sequence models to the problem for comparison. | http://arxiv.org/abs/2310.00832v1 | cs.CL | not_new_dataset | 0.992239 | 2310.00832 |
Enhancing Mortality Prediction in Heart Failure Patients: Exploring Preprocessing Methods for Imbalanced Clinical Datasets | Heart failure (HF) is a critical condition in which the accurate prediction
of mortality plays a vital role in guiding patient management decisions.
However, clinical datasets used for mortality prediction in HF often suffer
from an imbalanced distribution of classes, posing significant challenges. In
this paper, we explore preprocessing methods for enhancing one-month mortality
prediction in HF patients. We present a comprehensive preprocessing framework
including scaling, outlier processing, and resampling as key techniques. We
also employed an aware encoding approach to effectively handle missing values
in clinical datasets. Our study utilizes a comprehensive dataset from the
Persian Registry Of cardio Vascular disease (PROVE) with a significant class
imbalance. By leveraging appropriate preprocessing techniques and Machine
Learning (ML) algorithms, we aim to improve mortality prediction performance
for HF patients. The results reveal an average enhancement of approximately
3.6% in F1 score and 2.7% in MCC for tree-based models, specifically Random
Forest (RF) and XGBoost (XGB). This demonstrates the efficiency of our
preprocessing approach in effectively handling Imbalanced Clinical Datasets
(ICD). Our findings hold promise in guiding healthcare professionals to make
informed decisions and improve patient outcomes in HF management. | http://arxiv.org/abs/2310.00457v1 | cs.LG | not_new_dataset | 0.980858 | 2310.00457 |
Building Flexible, Scalable, and Machine Learning-ready Multimodal Oncology Datasets | The advancements in data acquisition, storage, and processing techniques have
resulted in the rapid growth of heterogeneous medical data. Integrating
radiological scans, histopathology images, and molecular information with
clinical data is essential for developing a holistic understanding of the
disease and optimizing treatment. The need for integrating data from multiple
sources is further pronounced in complex diseases such as cancer for enabling
precision medicine and personalized treatments. This work proposes Multimodal
Integration of Oncology Data System (MINDS) - a flexible, scalable, and
cost-effective metadata framework for efficiently fusing disparate data from
public sources such as the Cancer Research Data Commons (CRDC) into an
interconnected, patient-centric framework. MINDS offers an interface for
exploring relationships across data types and building cohorts for developing
large-scale multimodal machine learning models. By harmonizing multimodal data,
MINDS aims to potentially empower researchers with greater analytical ability
to uncover diagnostic and prognostic insights and enable evidence-based
personalized care. MINDS tracks granular end-to-end data provenance, ensuring
reproducibility and transparency. The cloud-native architecture of MINDS can
handle exponential data growth in a secure, cost-optimized manner while
ensuring substantial storage optimization, replication avoidance, and dynamic
access capabilities. Auto-scaling, access controls, and other mechanisms
guarantee pipelines' scalability and security. MINDS overcomes the limitations
of existing biomedical data silos via an interoperable metadata-driven approach
that represents a pivotal step toward the future of oncology data integration. | http://arxiv.org/abs/2310.01438v1 | cs.LG | not_new_dataset | 0.989 | 2310.01438 |
Efficient Large Scale Medical Image Dataset Preparation for Machine Learning Applications | In the rapidly evolving field of medical imaging, machine learning algorithms
have become indispensable for enhancing diagnostic accuracy. However, the
effectiveness of these algorithms is contingent upon the availability and
organization of high-quality medical imaging datasets. Traditional Digital
Imaging and Communications in Medicine (DICOM) data management systems are
inadequate for handling the scale and complexity of data required by
machine learning algorithms. This paper introduces an innovative
data curation tool, developed as part of the Kaapana open-source toolkit, aimed
at streamlining the organization, management, and processing of large-scale
medical imaging datasets. The tool is specifically tailored to meet the needs
of radiologists and machine learning researchers. It incorporates advanced
search, auto-annotation and efficient tagging functionalities for improved data
curation. Additionally, the tool facilitates quality control and review,
enabling researchers to validate image and segmentation quality in large
datasets. It also plays a critical role in uncovering potential biases in
datasets by aggregating and visualizing metadata, which is essential for
developing robust machine learning models. Furthermore, Kaapana is integrated
within the Radiological Cooperative Network (RACOON), a pioneering initiative
aimed at creating a comprehensive national infrastructure for the aggregation,
transmission, and consolidation of radiological data across all university
clinics throughout Germany. A supplementary video showcasing the tool's
functionalities can be accessed at https://bit.ly/MICCAI-DEMI2023. | http://arxiv.org/abs/2309.17285v1 | cs.CV | not_new_dataset | 0.978577 | 2309.17285 |
FENDA-FL: Personalized Federated Learning on Heterogeneous Clinical Datasets | Federated learning (FL) is increasingly being recognized as a key approach to
overcoming the data silos that so frequently obstruct the training and
deployment of machine-learning models in clinical settings. This work
contributes to a growing body of FL research specifically focused on clinical
applications along three important directions. First, an extension of the FENDA
method (Kim et al., 2016) to the FL setting is proposed. Experiments conducted
on the FLamby benchmarks (du Terrail et al., 2022a) and GEMINI datasets (Verma
et al., 2017) show that the approach is robust to heterogeneous clinical data
and often outperforms existing global and personalized FL techniques. Further,
the experimental results represent substantive improvements over the original
FLamby benchmarks and expand such benchmarks to include evaluation of
personalized FL methods. Finally, we advocate for a comprehensive checkpointing
and evaluation framework for FL to better reflect practical settings and
provide multiple baselines for comparison. | http://arxiv.org/abs/2309.16825v1 | cs.LG | not_new_dataset | 0.99214 | 2309.16825 |
ComPile: A Large IR Dataset from Production Sources | Code is increasingly becoming a core data modality of modern machine learning
research impacting not only the way we write code with conversational agents
like OpenAI's ChatGPT, Google's Bard, or Anthropic's Claude, and the way we
translate code from one language into another, but also the compiler
infrastructure underlying the language. While modeling approaches may vary and
representations differ, the targeted tasks often remain the same within the
individual classes of models. Relying solely on the ability of modern models to
extract information from unstructured code does not take advantage of 70 years
of programming language and compiler development by not utilizing the structure
inherent to programs in the data collection. This detracts from the performance
of models working over a tokenized representation of input code and precludes
the use of these models in the compiler itself. To work towards the first
intermediate representation (IR) based models, we fully utilize the LLVM
compiler infrastructure, shared by a number of languages, to generate a 182B
token dataset of LLVM IR. We generated this dataset from programming languages
built on the shared LLVM infrastructure, including Rust, Swift, Julia, and
C/C++, by hooking into LLVM code generation either through the language's
package manager or the compiler directly to extract the dataset of intermediate
representations from production grade programs. Statistical analysis proves the
utility of our dataset not only for large language model training, but also for
the introspection into the code generation process itself with the dataset
showing great promise for machine-learned compiler components. | http://arxiv.org/abs/2309.15432v1 | cs.PL | new_dataset | 0.994593 | 2309.15432 |
Challenges of building medical image datasets for development of deep learning software in stroke | Despite the large amount of brain CT data generated in clinical practice, the
availability of CT datasets for deep learning (DL) research is currently
limited. Furthermore, the data can be insufficiently or improperly prepared for
machine learning and thus lead to spurious and irreproducible analyses. This
lack of access to comprehensive and diverse datasets poses a significant
challenge for the development of DL algorithms. In this work, we propose a
complete semi-automatic pipeline to address the challenges of preparing a
clinical brain CT dataset for DL analysis and describe the process of
standardising this heterogeneous dataset. Challenges include handling image
sets with different orientations (axial, sagittal, coronal), different image
types (to view soft tissues or bones) and dimensions, and removing redundant
background. The final pipeline was able to process 5,868/10,659 (45%) CT image
datasets. Reasons for rejection include non-axial data (n=1,920), bone
reformats (n=687), separated skull base/vault images (n=1,226), and
registration failures (n=465). Further format adjustments, including image
cropping, resizing and scaling are also needed for DL processing. Of the axial
scans that were not localisers, bone reformats or split brains, 5,868/6,333
(93%) were accepted, while the remaining 465 failed the registration process.
Appropriate preparation of medical imaging datasets for DL is a costly and
time-intensive process. | http://arxiv.org/abs/2309.15081v1 | eess.IV | not_new_dataset | 0.992066 | 2309.15081 |
Real3D-AD: A Dataset of Point Cloud Anomaly Detection | High-precision point cloud anomaly detection is the gold standard for
identifying the defects of advancing machining and precision manufacturing.
Despite some methodological advances in this area, the scarcity of datasets and
the lack of a systematic benchmark hinder its development. We introduce
Real3D-AD, a challenging high-precision point cloud anomaly detection dataset,
addressing the limitations in the field. With 1,254 high-resolution 3D items
from forty thousand to millions of points for each item, Real3D-AD is the
largest dataset for high-precision 3D industrial anomaly detection to date.
Real3D-AD surpasses existing 3D anomaly detection datasets available regarding
point cloud resolution (0.0010mm-0.0015mm), 360-degree coverage, and perfect
prototype. Additionally, we present a comprehensive benchmark for Real3D-AD,
revealing the absence of baseline methods for high-precision point cloud
anomaly detection. To address this, we propose Reg3D-AD, a registration-based
3D anomaly detection method incorporating a novel feature memory bank that
preserves local and global representations. Extensive experiments on the
Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility
and accessibility, we provide the Real3D-AD dataset, benchmark source code, and
Reg3D-AD on our website: https://github.com/M-3LAB/Real3D-AD. | http://arxiv.org/abs/2309.13226v2 | cs.CV | new_dataset | 0.994488 | 2309.13226 |
OSN-MDAD: Machine Translation Dataset for Arabic Multi-Dialectal Conversations on Online Social Media | While resources for English language are fairly sufficient to understand
content on social media, similar resources in Arabic are still immature. The
main reason that the resources in Arabic are insufficient is that Arabic has
many dialects in addition to the standard version (MSA). Arabs do not use MSA
in their daily communications; rather, they use dialectal versions.
Unfortunately, social users transfer this phenomenon into their use of social
media platforms, which in turn has raised an urgent need for building suitable
AI models for language-dependent applications. Existing machine translation
(MT) systems designed for MSA fail to work well with Arabic dialects. In light
of this, it is necessary to adapt to the informal nature of communication on
social networks by developing MT systems that can effectively handle the
various dialects of Arabic. Unlike for MSA that shows advanced progress in MT
systems, little effort has been exerted to utilize Arabic dialects for MT
systems. While few attempts have been made to build translation datasets for
dialectal Arabic, they are domain dependent and are not OSN cultural-language
friendly. In this work, we attempt to alleviate these limitations by proposing
an online social network-based multidialect Arabic dataset that is crafted by
contextually translating English tweets into four Arabic dialects: Gulf,
Yemeni, Iraqi, and Levantine. To perform the translation, we followed our
proposed guideline framework for content translation, which could be
universally applicable for translation between foreign languages and local
dialects. We validated the authenticity of our proposed dataset by developing
neural MT models for four Arabic dialects. Our results have shown a superior
performance of our NMT models trained using our dataset. We believe that our
dataset can reliably serve as an Arabic multidialectal translation dataset for
informal MT tasks. | http://arxiv.org/abs/2309.12137v1 | cs.CL | new_dataset | 0.994525 | 2309.12137 |
Dataset Factory: A Toolchain For Generative Computer Vision Datasets | Generative AI workflows heavily rely on data-centric tasks - such as
filtering samples by annotation fields, vector distances, or scores produced by
custom classifiers. At the same time, computer vision datasets are quickly
approaching petabyte volumes, rendering data wrangling difficult. In addition,
the iterative nature of data preparation necessitates robust dataset sharing
and versioning mechanisms, both of which are hard to implement ad-hoc. To solve
these challenges, we propose a "dataset factory" approach that separates the
storage and processing of samples from metadata and enables data-centric
operations at scale for machine learning teams and individual researchers. | http://arxiv.org/abs/2309.11608v1 | cs.AI | not_new_dataset | 0.9875 | 2309.11608 |
SignBank+: Multilingual Sign Language Translation Dataset | This work advances the field of sign language machine translation by focusing
on dataset quality and simplification of the translation system. We introduce
SignBank+, a clean version of the SignBank dataset, optimized for machine
translation. Contrary to previous works that employ complex factorization
techniques for translation, we advocate for a simplified text-to-text
translation approach. Our evaluation shows that models trained on SignBank+
surpass those on the original dataset, establishing a new benchmark and
providing an open resource for future research. | http://arxiv.org/abs/2309.11566v1 | cs.CL | new_dataset | 0.994233 | 2309.11566 |
GECTurk: Grammatical Error Correction and Detection Dataset for Turkish | Grammatical Error Detection and Correction (GEC) tools have proven useful for
native speakers and second language learners. Developing such tools requires a
large amount of parallel, annotated data, which is unavailable for most
languages. Synthetic data generation is a common practice to overcome the
scarcity of such data. However, it is not straightforward for morphologically
rich languages like Turkish due to complex writing rules that require
phonological, morphological, and syntactic information. In this work, we
present a flexible and extensible synthetic data generation pipeline for
Turkish covering more than 20 expert-curated grammar and spelling rules
(a.k.a., writing rules) implemented through complex transformation functions.
Using this pipeline, we derive 130,000 high-quality parallel sentences from
professionally edited articles. Additionally, we create a more realistic test
set by manually annotating a set of movie reviews. We implement three baselines
formulating the task as i) neural machine translation, ii) sequence tagging,
and iii) prefix tuning with a pretrained decoder-only model, achieving strong
results. Furthermore, we perform exhaustive experiments on out-of-domain
datasets to gain insights on the transferability and robustness of the proposed
approaches. Our results suggest that our corpus, GECTurk, is high-quality and
allows knowledge transfer for the out-of-domain setting. To encourage further
research on Turkish GEC, we release our datasets, baseline models, and the
synthetic data generation pipeline at https://github.com/GGLAB-KU/gecturk. | http://arxiv.org/abs/2309.11346v1 | cs.CL | new_dataset | 0.994401 | 2309.11346 |
Benchmarks for Pirá 2.0, a Reading Comprehension Dataset about the Ocean, the Brazilian Coast, and Climate Change | Pirá is a reading comprehension dataset focused on the ocean, the Brazilian
coast, and climate change, built from a collection of scientific abstracts and
reports on these topics. This dataset represents a versatile language resource,
particularly useful for testing the ability of current machine learning models
to acquire expert scientific knowledge. Despite its potential, a detailed set
of baselines has not yet been developed for Pirá. By creating these
baselines, researchers can more easily utilize Pirá as a resource for testing
machine learning models across a wide range of question answering tasks. In
this paper, we define six benchmarks over the Pirá dataset, covering closed
generative question answering, machine reading comprehension, information
retrieval, open question answering, answer triggering, and multiple choice
question answering. As part of this effort, we have also produced a curated
version of the original dataset, where we fixed a number of grammar issues,
repetitions, and other shortcomings. Furthermore, the dataset has been extended
in several new directions, so as to face the aforementioned benchmarks:
translation of supporting texts from English into Portuguese, classification
labels for answerability, automatic paraphrases of questions and answers, and
multiple choice candidates. The results described in this paper provide several
points of reference for researchers interested in exploring the challenges
provided by the Pirá dataset. | http://arxiv.org/abs/2309.10945v1 | cs.CL | new_dataset | 0.994546 | 2309.10945 |
Amplifying Pathological Detection in EEG Signaling Pathways through Cross-Dataset Transfer Learning | Pathology diagnosis based on EEG signals and decoding brain activity holds
immense importance in understanding neurological disorders. With the
advancement of artificial intelligence methods and machine learning techniques,
the potential for accurate data-driven diagnoses and effective treatments has
grown significantly. However, applying machine learning algorithms to
real-world datasets presents diverse challenges at multiple levels. The
scarcity of labelled data, especially in low regime scenarios with limited
availability of real patient cohorts due to high costs of recruitment,
underscores the vital deployment of scaling and transfer learning techniques.
In this study, we explore a real-world pathology classification task to
highlight the effectiveness of data and model scaling and cross-dataset
knowledge transfer. As such, we observe varying performance improvements
through data scaling, indicating the need for careful evaluation and labelling.
Additionally, we identify the challenges of possible negative transfer and
emphasize the significance of some key components to overcome distribution
shifts and potential spurious correlations and achieve positive transfer. We
see improvement in the performance of the target model on the target (NMT)
datasets by using the knowledge from the source dataset (TUAB) when a low
amount of labelled data was available. Our findings indicate a small and
generic model (e.g. ShallowNet) performs well on a single dataset, however, a
larger model (e.g. TCN) performs better on transfer and learning from a larger
and diverse dataset. | http://arxiv.org/abs/2309.10910v1 | cs.LG | not_new_dataset | 0.992257 | 2309.10910 |
A Configurable Library for Generating and Manipulating Maze Datasets | Understanding how machine learning models respond to distributional shifts is
a key research challenge. Mazes serve as an excellent testbed due to varied
generation algorithms offering a nuanced platform to simulate both subtle and
pronounced distributional shifts. To enable systematic investigations of model
behavior on out-of-distribution data, we present $\texttt{maze-dataset}$, a
comprehensive library for generating, processing, and visualizing datasets
consisting of maze-solving tasks. With this library, researchers can easily
create datasets, having extensive control over the generation algorithm used,
the parameters fed to the algorithm of choice, and the filters that generated
mazes must satisfy. Furthermore, it supports multiple output formats, including
rasterized and text-based, catering to convolutional neural networks and
autoregressive transformer models. These formats, along with tools for
visualizing and converting between them, ensure versatility and adaptability in
research applications. | http://arxiv.org/abs/2309.10498v1 | cs.LG | new_dataset | 0.992489 | 2309.10498 |
RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation | The current interacting hand (IH) datasets are relatively simplistic in terms
of background and texture, with hand joints being annotated by a machine
annotator, which may result in inaccuracies, and the diversity of pose
distribution is limited. However, the variability of background, pose
distribution, and texture can greatly influence the generalization ability.
Therefore, we present a large-scale synthetic dataset RenderIH for interacting
hands with accurate and diverse pose annotations. The dataset contains 1M
photo-realistic images with varied backgrounds, perspectives, and hand
textures. To generate natural and diverse interacting poses, we propose a new
pose optimization algorithm. Additionally, for better pose estimation accuracy,
we introduce a transformer-based pose estimation network, TransHand, to
leverage the correlation between interacting hands and verify the effectiveness
of RenderIH in improving results. Our dataset is model-agnostic and can improve
the accuracy of any hand pose estimation method in comparison to other real or
synthetic datasets. Experiments have shown that pretraining on our synthetic
data can significantly decrease the error from 6.76mm to 5.79mm, and our
TransHand surpasses contemporary methods. Our dataset and code are available at
https://github.com/adwardlee/RenderIH. | http://arxiv.org/abs/2309.09301v3 | cs.CV | new_dataset | 0.994482 | 2309.09301 |
HealthFC: A Dataset of Health Claims for Evidence-Based Medical Fact-Checking | Seeking health-related advice on the internet has become a common practice in
the digital era. Determining the trustworthiness of medical claims found online
and finding appropriate evidence for this information is increasingly
challenging. Fact-checking has emerged as an approach to assess the veracity of
factual claims using evidence from credible knowledge sources. To help advance
the automation of this task, in this paper, we introduce a novel dataset of 750
health-related claims, labeled for veracity by medical experts and backed with
evidence from appropriate clinical studies. We provide an analysis of the
dataset, highlighting its characteristics and challenges. The dataset can be
used for Machine Learning tasks related to automated fact-checking such as
evidence retrieval, veracity prediction, and explanation generation. For this
purpose, we provide baseline models based on different approaches, examine
their performance, and discuss the findings. | http://arxiv.org/abs/2309.08503v1 | cs.CL | new_dataset | 0.994519 | 2309.08503 |
Let's Roll: Synthetic Dataset Analysis for Pedestrian Detection Across Different Shutter Types | Computer vision (CV) pipelines are typically evaluated on datasets processed
by image signal processing (ISP) pipelines even though, for
resource-constrained applications, an important research goal is to avoid as
many ISP steps as possible. In particular, most CV datasets consist of global
shutter (GS) images even though most cameras today use a rolling shutter (RS).
This paper studies the impact of different shutter mechanisms on machine
learning (ML) object detection models on a synthetic dataset that we generate
using the advanced simulation capabilities of Unreal Engine 5 (UE5). In
particular, we train and evaluate mainstream detection models with our
synthetically-generated paired GS and RS datasets to ascertain whether there
exists a significant difference in detection accuracy between these two shutter
modalities, especially when capturing low-speed objects (e.g., pedestrians).
The results of this emulation framework indicate that the performance of the two
is remarkably congruent for coarse-grained detection (mean average precision
(mAP) for IOU=0.5), but differs significantly for fine-grained measures
of detection accuracy (mAP for IOU=0.5:0.95). This implies that ML pipelines
might not need explicit correction for RS for many object detection
applications, but mitigating RS effects in ISP-less ML pipelines that target
fine-grained location of the objects may need additional research. | http://arxiv.org/abs/2309.08136v1 | cs.CV | not_new_dataset | 0.992204 | 2309.08136 |
Multi-Source Domain Adaptation meets Dataset Distillation through Dataset Dictionary Learning | In this paper, we consider the intersection of two problems in machine
learning: Multi-Source Domain Adaptation (MSDA) and Dataset Distillation (DD).
On the one hand, the first considers adapting multiple heterogeneous labeled
source domains to an unlabeled target domain. On the other hand, the second
attacks the problem of synthesizing a small summary containing all the
information about the datasets. We thus consider a new problem called MSDA-DD.
To solve it, we adapt previous works in the MSDA literature, such as
Wasserstein Barycenter Transport and Dataset Dictionary Learning, as well as DD
method Distribution Matching. We thoroughly experiment with this novel problem
on four benchmarks (Caltech-Office 10, Tennessee-Eastman Process, Continuous
Stirred Tank Reactor, and Case Western Reserve University), where we show that,
even with as few as 1 sample per class, one achieves state-of-the-art
adaptation performance. | http://arxiv.org/abs/2309.07666v1 | cs.LG | not_new_dataset | 0.991879 | 2309.07666 |
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects | Despite the progress we have recorded in the last few years in multilingual
natural language processing, evaluation is typically limited to a small set of
languages with available datasets which excludes a large number of low-resource
languages. In this paper, we created SIB-200 -- a large-scale open-sourced
benchmark dataset for topic classification in 200 languages and dialects to
address the lack of evaluation dataset for Natural Language Understanding
(NLU). For many of the languages covered in SIB-200, this is the first publicly
available evaluation dataset for NLU. The dataset is based on Flores-200
machine translation corpus. We annotated the English portion of the dataset and
extended the sentence-level annotation to the remaining 203 languages covered
in the corpus. Despite the simplicity of this task, our evaluation in
full-supervised setting, cross-lingual transfer setting and prompting of large
language model setting show that there is still a large gap between the
performance of high-resource and low-resource languages when multilingual
evaluation is scaled to numerous world languages. We found that languages
unseen during the pre-training of multilingual language models,
under-represented language families (like Nilotic and Atlantic-Congo), and
languages from the regions of Africa, Americas, Oceania and South East Asia,
often have the lowest performance on our topic classification dataset. We hope
our dataset will encourage a more inclusive evaluation of multilingual language
models on a more diverse set of languages. https://github.com/dadelani/sib-200 | http://arxiv.org/abs/2309.07445v1 | cs.CL | new_dataset | 0.994486 | 2309.07445 |
ProMap: Datasets for Product Mapping in E-commerce | The goal of product mapping is to decide, whether two listings from two
different e-shops describe the same products. Existing datasets of matching and
non-matching pairs of products, however, often suffer from incomplete product
information or contain only very distant non-matching products. Therefore,
while predictive models trained on these datasets achieve good results on them,
in practice, they are unusable as they cannot distinguish very similar but
non-matching pairs of products. This paper introduces two new datasets for
product mapping: ProMapCz consisting of 1,495 Czech product pairs and ProMapEn
consisting of 1,555 English product pairs of matching and non-matching products
manually scraped from two pairs of e-shops. The datasets contain both images
and textual descriptions of the products, including their specifications,
making them one of the most complete datasets for product mapping.
Additionally, the non-matching products were selected in two phases, creating
two types of non-matches -- close non-matches and medium non-matches. Even the
medium non-matches are pairs of products that are much more similar than
non-matches in other datasets -- for example, they still need to have the same
brand and similar name and price. After simple data preprocessing, several
machine learning algorithms were trained on these and two other datasets to
demonstrate the complexity and completeness of ProMap datasets. ProMap datasets
are presented as a gold standard for further research on product mapping,
filling the gaps in existing ones.
Scalable neural network models and terascale datasets for particle-flow reconstruction | We study scalable machine learning models for full event reconstruction in
high-energy electron-positron collisions based on a highly granular detector
simulation. Particle-flow (PF) reconstruction can be formulated as a supervised
learning task using tracks and calorimeter clusters or hits. We compare a graph
neural network and kernel-based transformer and demonstrate that both avoid
quadratic memory allocation and computational cost while achieving realistic PF
reconstruction. We show that hyperparameter tuning on a supercomputer
significantly improves the physics performance of the models. We also
demonstrate that the resulting model is highly portable across hardware
processors, supporting Nvidia, AMD, and Intel Habana cards. Finally, we
demonstrate that the model can be trained on highly granular inputs consisting
of tracks and calorimeter hits, resulting in a competitive physics performance
with the baseline. Datasets and software to reproduce the studies are published
following the findable, accessible, interoperable, and reusable (FAIR)
principles. | http://arxiv.org/abs/2309.06782v1 | physics.data-an | new_dataset | 0.960086 | 2309.06782 |
Flows for Flows: Morphing one Dataset into another with Maximum Likelihood Estimation | Many components of data analysis in high energy physics and beyond require
morphing one dataset into another. This is commonly solved via reweighting, but
there are many advantages of preserving weights and shifting the data points
instead. Normalizing flows are machine learning models with impressive
precision on a variety of particle physics tasks. Naively, normalizing flows
cannot be used for morphing because they require knowledge of the probability
density of the starting dataset. In most cases in particle physics, we can
generate more examples, but we do not know densities explicitly. We propose a
protocol called flows for flows for training normalizing flows to morph one
dataset into another even if the underlying probability density of neither
dataset is known explicitly. This enables a morphing strategy trained with
maximum likelihood estimation, a setup that has been shown to be highly
effective in related tasks. We study variations on this protocol to explore how
far the data points are moved to statistically match the two datasets.
Furthermore, we show how to condition the learned flows on particular features
in order to create a morphing function for every value of the conditioning
feature. For illustration, we demonstrate flows for flows for toy examples as
well as a collider physics example involving dijet events.
MADLAD-400: A Multilingual And Document-Level Large Audited Dataset | We introduce MADLAD-400, a manually audited, general domain 3T token
monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss
the limitations revealed by self-auditing MADLAD-400, and the role data
auditing had in the dataset creation process. We then train and release a
10.7B-parameter multilingual machine translation model on 250 billion tokens
covering over 450 languages using publicly available data, and find that it is
competitive with models that are significantly larger, and report the results
on different domains. In addition, we train a 8B-parameter language model, and
assess the results on few-shot translation. We make the baseline models
available to the research community. | http://arxiv.org/abs/2309.04662v1 | cs.CL | new_dataset | 0.994489 | 2309.04662 |
Beyond Static Datasets: A Deep Interaction Approach to LLM Evaluation | Large Language Models (LLMs) have made progress in various real-world tasks,
which stimulates requirements for the evaluation of LLMs. Existing LLM
evaluation methods are mainly supervised signal-based which depends on static
datasets and cannot evaluate the ability of LLMs in dynamic real-world
scenarios where deep interaction widely exists. Other LLM evaluation methods
are human-based which are costly and time-consuming and are incapable of
large-scale evaluation of LLMs. To address the issues above, we propose a novel
Deep Interaction-based LLM-evaluation framework. In our proposed framework,
LLMs' performances in real-world domains can be evaluated from their deep
interaction with other LLMs in elaborately designed evaluation tasks.
Furthermore, our proposed framework is a general evaluation method that can be
applied to a host of real-world tasks such as machine translation and code
generation. We demonstrate the effectiveness of our proposed method through
extensive experiments on four elaborately designed evaluation tasks. | http://arxiv.org/abs/2309.04369v1 | cs.CL | not_new_dataset | 0.991883 | 2309.04369 |
Dataset Generation and Bonobo Classification from Weakly Labelled Videos | This paper presents a bonobo detection and classification pipeline built from
commonly used machine learning methods. Such an application is motivated by
the need to test bonobos in their enclosure using touch screen devices without
human assistance. This work introduces a newly acquired dataset based on bonobo
recordings generated semi-automatically. The recordings are weakly labelled and
fed to a macaque detector in order to spatially detect the individual present
in the video. Handcrafted features coupled with different classification
algorithms and deep-learning methods using a ResNet architecture are
investigated for bonobo identification. Performance is compared in terms of
classification accuracy on the splits of the database using different data
separation methods. We demonstrate the importance of data preparation and how a
wrong data separation can lead to misleadingly good results. Finally, after a
meaningful separation of the data, the best classification performance is
obtained using a fine-tuned ResNet model and reaches 75% accuracy.
ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning | Data is a critical asset in AI, as high-quality datasets can significantly
improve the performance of machine learning models. In safety-critical domains
such as autonomous vehicles, offline deep reinforcement learning (offline DRL)
is frequently used to train models on pre-collected datasets, as opposed to
training these models by interacting with the real-world environment as the
online DRL. To support the development of these models, many institutions make
datasets publicly available with open-source licenses, but these datasets are at
risk of potential misuse or infringement. Injecting watermarks to the dataset
may protect the intellectual property of the data, but it cannot handle
datasets that have already been published and is infeasible to be altered
afterward. Other existing solutions, such as dataset inference and membership
inference, do not work well in the offline DRL scenario due to the diverse
model behavior characteristics and offline setting constraints. In this paper,
we advocate a new paradigm by leveraging the fact that cumulative rewards can
act as a unique identifier that distinguishes DRL models trained on a specific
dataset. To this end, we propose ORL-AUDITOR, which is the first
trajectory-level dataset auditing mechanism for offline RL scenarios. Our
experiments on multiple offline DRL models and tasks reveal the efficacy of
ORL-AUDITOR, with auditing accuracy over 95% and false positive rates less than
2.88%. We also provide valuable insights into the practical implementation of
ORL-AUDITOR by studying various parameter settings. Furthermore, we demonstrate
the auditing capability of ORL-AUDITOR on open-source datasets from Google and
DeepMind, highlighting its effectiveness in auditing published datasets.
ORL-AUDITOR is open-sourced at https://github.com/link-zju/ORL-Auditor. | http://arxiv.org/abs/2309.03081v1 | cs.CR | not_new_dataset | 0.992096 | 2309.03081 |
Augmenting Chest X-ray Datasets with Non-Expert Annotations | The advancement of machine learning algorithms in medical image analysis
requires the expansion of training datasets. A popular and cost-effective
approach is automated annotation extraction from free-text medical reports,
primarily due to the high costs associated with expert clinicians annotating
chest X-ray images. However, it has been shown that the resulting datasets are
susceptible to biases and shortcuts. Another strategy to increase the size of a
dataset is crowdsourcing, a widely adopted practice in general computer vision
with some success in medical image analysis. In a similar vein to
crowdsourcing, we enhance two publicly available chest X-ray datasets by
incorporating non-expert annotations. However, instead of using diagnostic
labels, we annotate shortcuts in the form of tubes. We collect 3.5k chest drain
annotations for CXR14, and 1k annotations for 4 different tube types in
PadChest. We train a chest drain detector with the non-expert annotations that
generalizes well to expert labels. Moreover, we compare our annotations to
those provided by experts and show "moderate" to "almost perfect" agreement.
Finally, we present a pathology agreement study to raise awareness about ground
truth annotations. We make our annotations and code available. | http://arxiv.org/abs/2309.02244v1 | cs.CV | not_new_dataset | 0.991341 | 2309.02244 |
Artificial Empathy Classification: A Survey of Deep Learning Techniques, Datasets, and Evaluation Scales | Over the last decade, researchers in the field of machine learning (ML) and
assistive developmental robotics (ADR) have taken an interest in artificial
empathy (AE) as a possible future paradigm for human-robot interaction (HRI).
Humans learn empathy from birth; therefore, it is challenging to instill this
sense in robots and intelligent machines. Nevertheless, by training over a vast
amount of data and time, imitating empathy, to a certain extent, can be
possible for robots. Training techniques for AE, along with findings from the
field of empathetic AI research, are ever-evolving. The standard workflow for
artificial empathy consists of three stages: 1) Emotion Recognition (ER) using
the retrieved features from video or textual data, 2) analyzing the perceived
emotion or degree of empathy to choose the best course of action, and 3)
carrying out a response action. Recent studies that show AE being used with
virtual agents or robots often include Deep Learning (DL) techniques. For
instance, models like VGGFace are used to conduct ER. Semi-supervised models
like Autoencoders generate the corresponding emotional states and behavioral
responses. However, there has not been any study that presents an independent
approach for evaluating AE, or the degree to which a reaction was empathetic.
This paper aims to investigate and evaluate existing works for measuring and
evaluating empathy, as well as the datasets that have been collected and used
so far. Our goal is to highlight and facilitate the use of state-of-the-art
methods in the area of AE by comparing their performance. This will aid
researchers in the area of AE in selecting their approaches with precision. | http://arxiv.org/abs/2310.00010v1 | cs.RO | not_new_dataset | 0.992233 | 2310.00010 |
DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models | Generating high-quality labeled image datasets is crucial for training
accurate and robust machine learning models in the field of computer vision.
However, the process of manually labeling real images is often time-consuming
and costly. To address these challenges associated with dataset generation, we
introduce "DiffuGen," a simple and adaptable approach that harnesses the power
of stable diffusion models to create labeled image datasets efficiently. By
leveraging stable diffusion models, our approach not only ensures the quality
of generated datasets but also provides a versatile solution for label
generation. In this paper, we present the methodology behind DiffuGen, which
combines the capabilities of diffusion models with two distinct labeling
techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt
templating for adaptable image generation and textual inversion to enhance
diffusion model capabilities. | http://arxiv.org/abs/2309.00248v1 | cs.CV | not_new_dataset | 0.992016 | 2309.00248 |
Learning to Taste: A Multimodal Wine Dataset | We present WineSensed, a large multimodal wine dataset for studying the
relations between visual perception, language, and flavor. The dataset
encompasses 897k images of wine labels and 824k reviews of wines curated from
the Vivino platform. It has over 350k unique vintages, annotated with year,
region, rating, alcohol percentage, price, and grape composition. We obtained
fine-grained flavor annotations on a subset by conducting a wine-tasting
experiment with 256 participants who were asked to rank wines based on their
similarity in flavor, resulting in more than 5k pairwise flavor distances. We
propose a low-dimensional concept embedding algorithm that combines human
experience with automatic machine similarity kernels. We demonstrate that this
shared concept embedding space improves upon separate embedding spaces for
coarse flavor classification (alcohol percentage, country, grape, price,
rating) and aligns with the intricate human perception of flavor. | http://arxiv.org/abs/2308.16900v3 | cs.LG | new_dataset | 0.994486 | 2308.16900 |
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | We present Belebele, a multiple-choice machine reading comprehension (MRC)
dataset spanning 122 language variants. Significantly expanding the language
coverage of natural language understanding (NLU) benchmarks, this dataset
enables the evaluation of text models in high-, medium-, and low-resource
languages. Each question is based on a short passage from the Flores-200
dataset and has four multiple-choice answers. The questions were carefully
curated to discriminate between models with different levels of general
language comprehension. The English dataset on its own proves difficult enough
to challenge state-of-the-art language models. Being fully parallel, this
dataset enables direct comparison of model performance across all languages. We
use this dataset to evaluate the capabilities of multilingual masked language
models (MLMs) and large language models (LLMs). We present extensive results
and find that despite significant cross-lingual transfer in English-centric
LLMs, much smaller MLMs pretrained on balanced multilingual data still
understand far more languages. We also observe that larger vocabulary size and
conscious vocabulary construction correlate with better performance on
low-resource languages. Overall, Belebele opens up new avenues for evaluating
and analyzing the multilingual capabilities of NLP systems. | http://arxiv.org/abs/2308.16884v1 | cs.CL | new_dataset | 0.994427 | 2308.16884 |
Speech Wikimedia: A 77 Language Multilingual Speech Dataset | The Speech Wikimedia Dataset is a publicly available compilation of audio
with transcriptions extracted from Wikimedia Commons. It includes 1780 hours
(195 GB) of CC-BY-SA licensed transcribed speech from a diverse set of
scenarios and speakers, in 77 different languages. Each audio file has one or
more transcriptions in different languages, making this dataset suitable for
training speech recognition, speech translation, and machine translation
models. | http://arxiv.org/abs/2308.15710v1 | cs.AI | new_dataset | 0.994538 | 2308.15710 |
Probabilistic Dataset Reconstruction from Interpretable Models | Interpretability is often pointed out as a key requirement for trustworthy
machine learning. However, learning and releasing models that are inherently
interpretable leaks information regarding the underlying training data. As such
disclosure may directly conflict with privacy, a precise quantification of the
privacy impact of such a breach is a fundamental problem. For instance, previous
work has shown that the structure of a decision tree can be leveraged to build
a probabilistic reconstruction of its training dataset, with the uncertainty of
the reconstruction being a relevant metric for the information leak. In this
paper, we propose a novel framework generalizing these probabilistic
reconstructions in the sense that it can handle other forms of interpretable
models and more generic types of knowledge. In addition, we demonstrate that
under realistic assumptions regarding the interpretable models' structure, the
uncertainty of the reconstruction can be computed efficiently. Finally, we
illustrate the applicability of our approach on both decision trees and rule
lists, by comparing the theoretical information leak associated to either exact
or heuristic learning algorithms. Our results suggest that optimal
interpretable models are often more compact and leak less information regarding
their training data than greedily-built ones, for a given accuracy level. | http://arxiv.org/abs/2308.15099v1 | cs.AI | not_new_dataset | 0.992246 | 2308.15099 |
Generating tabular datasets under differential privacy | Machine Learning (ML) is accelerating progress across fields and industries,
but relies on accessible and high-quality training data. Some of the most
important datasets are found in biomedical and financial domains in the form of
spreadsheets and relational databases. But this tabular data is often sensitive
in nature. Synthetic data generation offers the potential to unlock sensitive
data, but generative models tend to memorise and regurgitate training data,
which undermines the privacy goal. To remedy this, researchers have
incorporated the mathematical framework of Differential Privacy (DP) into the
training process of deep neural networks. But this creates a trade-off between
the quality and privacy of the resulting data. Generative Adversarial Networks
(GANs) are the dominant paradigm for synthesising tabular data under DP, but
suffer from unstable adversarial training and mode collapse, which are
exacerbated by the privacy constraints and challenging tabular data modality.
This work optimises the quality-privacy trade-off of generative models,
producing higher quality tabular datasets with the same privacy guarantees. We
implement novel end-to-end models that leverage attention mechanisms to learn
reversible tabular representations. We also introduce TableDiffusion, the first
differentially-private diffusion model for tabular data synthesis. Our
experiments show that TableDiffusion produces higher-fidelity synthetic
datasets, avoids the mode collapse problem, and achieves state-of-the-art
performance on privatised tabular data synthesis. By implementing
TableDiffusion to predict the added noise, we enabled it to bypass the
challenges of reconstructing mixed-type tabular data. Overall, the diffusion
paradigm proves vastly more data and privacy efficient than the adversarial
paradigm, due to augmented re-use of each data batch and a smoother iterative
training process. | http://arxiv.org/abs/2308.14784v1 | cs.LG | not_new_dataset | 0.991477 | 2308.14784 |
TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs | Precise hardware performance models play a crucial role in code
optimizations. They can assist compilers in making heuristic decisions or aid
autotuners in identifying the optimal configuration for a given program. For
example, the autotuner for XLA, a machine learning compiler, discovered 10-20%
speedup on state-of-the-art models serving substantial production traffic at
Google. Although there exist a few datasets for program performance prediction,
they target small sub-programs such as basic blocks or kernels. This paper
introduces TpuGraphs, a performance prediction dataset on full tensor programs,
represented as computational graphs, running on Tensor Processing Units (TPUs).
Each graph in the dataset represents the main computation of a machine learning
workload, e.g., a training epoch or an inference step. Each data sample
contains a computational graph, a compilation configuration, and the execution
time of the graph when compiled with the configuration. The graphs in the
dataset are collected from open-source machine learning programs, featuring
popular model architectures, e.g., ResNet, EfficientNet, Mask R-CNN, and
Transformer. TpuGraphs provides 25x more graphs than the largest graph property
prediction dataset (with comparable graph sizes), and 770x larger graphs on
average compared to existing performance prediction datasets on machine
learning programs. This graph-level prediction task on large graphs introduces
new challenges in learning, ranging from scalability, training efficiency, to
model quality. | http://arxiv.org/abs/2308.13490v1 | cs.LG | new_dataset | 0.994445 | 2308.13490 |
Misinformation Concierge: A Proof-of-Concept with Curated Twitter Dataset on COVID-19 Vaccination | We demonstrate the Misinformation Concierge, a proof-of-concept that provides
actionable intelligence on misinformation prevalent in social media.
Specifically, it uses language processing and machine learning tools to
identify subtopics of discourse and discern non/misleading posts; presents
statistical reports for policy-makers to understand the big picture of
prevalent misinformation in a timely manner; and recommends rebuttal messages
for specific pieces of misinformation, identified from within the corpus of
data - providing means to intervene and counter misinformation promptly. The
Misinformation Concierge proof-of-concept using a curated dataset is accessible
at: https://demo-frontend-uy34.onrender.com/ | http://arxiv.org/abs/2309.00639v1 | cs.CL | new_dataset | 0.993733 | 2309.00639 |
Towards Synthesizing Datasets for IEEE 802.1 Time-sensitive Networking | IEEE 802.1 Time-sensitive Networking (TSN) protocols have recently been
proposed to replace legacy networking technologies across different
mission-critical systems (MCSs). Design, configuration, and maintenance of TSN
within MCSs require advanced methods to tackle the highly complex and
interconnected nature of those systems. Accordingly, artificial intelligence
(AI) and machine learning (ML) models are the most prominent enablers to
develop such methods. However, they usually require a significant amount of
data for model training, which is not easily accessible. This short paper aims
to recapitulate the need for TSN datasets to flourish research on AI/ML-based
techniques for TSN systems. Moreover, it analyzes the main requirements and
alternative designs to build a TSN platform to synthesize realistic datasets. | http://arxiv.org/abs/2308.10255v1 | cs.NI | not_new_dataset | 0.992071 | 2308.10255 |
DatasetEquity: Are All Samples Created Equal? In The Quest For Equity Within Datasets | Data imbalance is a well-known issue in the field of machine learning,
attributable to the cost of data collection, the difficulty of labeling, and
the geographical distribution of the data. In computer vision, bias in data
distribution caused by image appearance remains highly unexplored. Compared to
categorical distributions using class labels, image appearance reveals complex
relationships between objects beyond what class labels provide. Clustering deep
perceptual features extracted from raw pixels gives a richer representation of
the data. This paper presents a novel method for addressing data imbalance in
machine learning. The method computes sample likelihoods based on image
appearance using deep perceptual embeddings and clustering. It then uses these
likelihoods to weigh samples differently during training with a proposed
Generalized Focal Loss function. This loss can be easily integrated
with deep learning algorithms. Experiments validate the method's effectiveness
across autonomous driving vision datasets including KITTI and nuScenes. The
loss function improves state-of-the-art 3D object detection methods, achieving
over 200% AP gains on under-represented classes (Cyclist) in the KITTI
dataset. The results demonstrate the method is generalizable, complements
existing techniques, and is particularly beneficial for smaller datasets and
rare classes. Code is available at:
https://github.com/towardsautonomy/DatasetEquity | http://arxiv.org/abs/2308.09878v2 | cs.CV | not_new_dataset | 0.991977 | 2308.09878 |
Leak Proof PDBBind: A Reorganized Dataset of Protein-Ligand Complexes for More Generalizable Binding Affinity Prediction | Many physics-based and machine-learned scoring functions (SFs) used to
predict protein-ligand binding free energies have been trained on the PDBBind
dataset. However, it is controversial as to whether new SFs are actually
improving since the general, refined, and core datasets of PDBBind are
cross-contaminated with proteins and ligands with high similarity, and hence
they may not perform comparably well in binding prediction of new
protein-ligand complexes. In this work we have carefully prepared a cleaned
PDBBind data set of non-covalent binders that are split into training,
validation, and test datasets to control for data leakage. The resulting
leak-proof (LP)-PDBBind data is used to retrain four popular SFs: AutoDock
vina, Random Forest (RF)-Score, InteractionGraphNet (IGN), and DeepDTA, to
better test their capabilities when applied to new protein-ligand complexes. In
particular we have formulated a new independent data set, BDB2020+, by matching
high quality binding free energies from BindingDB with co-crystalized
ligand-protein complexes from the PDB that have been deposited since 2020.
Based on all the benchmark results, the retrained models using LP-PDBBind that
rely on 3D information perform consistently among the best, with IGN especially
being recommended for scoring and ranking applications for new protein-ligand
systems. | http://arxiv.org/abs/2308.09639v1 | physics.bio-ph | new_dataset | 0.994495 | 2308.09639 |
Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning | We present Spatial LibriSpeech, a spatial audio dataset with over 650 hours
of 19-channel audio, first-order ambisonics, and optional distractor noise.
Spatial LibriSpeech is designed for machine learning model training, and it
includes labels for source position, speaking direction, room acoustics and
geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples
with 200k+ simulated acoustic conditions across 8k+ synthetic rooms. To
demonstrate the utility of our dataset, we train models on four spatial audio
tasks, resulting in a median absolute error of 6.60° on 3D source
localization, 0.43m on distance, 90.66ms on T30, and 2.74dB on DRR estimation.
We show that the same models generalize well to widely-used evaluation
datasets, e.g., obtaining a median absolute error of 12.43° on 3D source
localization on TUT Sound Events 2018, and 157.32ms on T30 estimation on ACE
Challenge. | http://arxiv.org/abs/2308.09514v1 | cs.SD | new_dataset | 0.99442 | 2308.09514 |
Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation | Continual learning refers to the capability of a machine learning model to
learn and adapt to new information, without compromising its performance on
previously learned tasks. Although several studies have investigated continual
learning methods for information retrieval tasks, a well-defined task
formulation is still lacking, and it is unclear how typical learning strategies
perform in this context. To address this challenge, a systematic task
formulation of continual neural information retrieval is presented, along with
a multiple-topic dataset that simulates continuous information retrieval. A
comprehensive continual neural information retrieval framework consisting of
typical retrieval models and continual learning strategies is then proposed.
Empirical evaluations illustrate that the proposed framework can successfully
prevent catastrophic forgetting in neural information retrieval and enhance
performance on previously learned tasks. The results indicate that
embedding-based retrieval models experience a decline in their continual
learning performance as the topic shift distance and dataset volume of new
tasks increase. In contrast, pretraining-based models do not show any such
correlation. Adopting suitable learning strategies can mitigate the effects of
topic shift and data augmentation. | http://arxiv.org/abs/2308.08378v1 | cs.IR | not_new_dataset | 0.991048 | 2308.08378 |
Action Class Relation Detection and Classification Across Multiple Video Datasets | The Meta Video Dataset (MetaVD) provides annotated relations between action
classes in major datasets for human action recognition in videos. Although
these annotated relations enable dataset augmentation, it is only applicable to
those covered by MetaVD. For an external dataset to enjoy the same benefit, the
relations between its action classes and those in MetaVD need to be determined.
To address this issue, we consider two new machine learning tasks: action class
relation detection and classification. We propose a unified model to predict
relations between action classes, using language and visual information
associated with classes. Experimental results show that (i) pre-trained recent
neural network models for texts and videos contribute to high predictive
performance, (ii) the relation prediction based on action label texts is more
accurate than based on videos, and (iii) a blending approach that combines
predictions by both modalities can further improve the predictive performance
in some cases. | http://arxiv.org/abs/2308.07558v1 | cs.CV | new_dataset | 0.771928 | 2308.07558 |
MDB: Interactively Querying Datasets and Models | As models are trained and deployed, developers need to be able to
systematically debug errors that emerge in the machine learning pipeline. We
present MDB, a debugging framework for interactively querying datasets and
models. MDB integrates functional programming with relational algebra to build
expressive queries over a database of datasets and model predictions. Queries
are reusable and easily modified, enabling debuggers to rapidly iterate and
refine queries to discover and characterize errors and model behaviors. We
evaluate MDB on object detection, bias discovery, image classification, and
data imputation tasks across self-driving videos, large language models, and
medical records. Our experiments show that MDB enables up to 10x faster and
40% shorter queries than other baselines. In a user study, we find developers
can successfully construct complex queries that describe errors of machine
learning models. | http://arxiv.org/abs/2308.06686v1 | cs.DB | not_new_dataset | 0.991822 | 2308.06686 |
How complex is the microarray dataset? A novel data complexity metric for biological high-dimensional microarray data | Data complexity analysis quantifies the hardness of constructing a predictive
model on a given dataset. However, the effectiveness of existing data
complexity measures can be challenged by the existence of irrelevant features
and feature interactions in biological micro-array data. We propose a novel
data complexity measure, depth, that leverages an evolutionary inspired feature
selection algorithm to quantify the complexity of micro-array data. By
examining feature subsets of varying sizes, the approach offers a novel
perspective on data complexity analysis. Unlike traditional metrics, depth is
robust to irrelevant features and effectively captures complexity stemming from
feature interactions. On synthetic micro-array data, depth outperforms existing
methods in robustness to irrelevant features and identifying complexity from
feature interactions. Applied to case-control genotype and gene-expression
micro-array datasets, the results reveal that a single feature of
gene-expression data can account for over 90% of the performance of a
multi-feature model, confirming the adequacy of the commonly used
differentially expressed gene (DEG) feature selection method for the gene
expression data. Our study also demonstrates that constructing predictive
models for genotype data is harder than for gene expression data. The results in
this paper provide evidence for the use of interpretable machine learning
algorithms on microarray data. | http://arxiv.org/abs/2308.06430v1 | cs.CE | not_new_dataset | 0.9914 | 2308.06430 |
Composable Core-sets for Diversity Approximation on Multi-Dataset Streams | Core-sets refer to subsets of data that maximize some function that is
commonly a diversity or group requirement. These subsets are used in place of
the original data to accomplish a given task with comparable or even enhanced
performance if biases are removed. Composable core-sets are core-sets with the
property that subsets of the core set can be unioned together to obtain an
approximation for the original data; lending themselves to be used for streamed
or distributed data. Recent work has focused on the use of core-sets for
training machine learning models. Preceding solutions such as CRAIG have been
proven to approximate gradient descent while providing a reduced training time.
In this paper, we introduce a core-set construction algorithm for constructing
composable core-sets to summarize streamed data for use in active learning
environments. If combined with techniques such as CRAIG and heuristics to
enhance construction speed, composable core-sets could be used for real time
training of models when the amount of sensor data is large. We provide
empirical analysis by considering extrapolated data for the runtime of such a
brute force algorithm. This algorithm is then analyzed for efficiency through
averaged empirical regression and key results and improvements are suggested
for further research on the topic. | http://arxiv.org/abs/2308.05878v1 | cs.LG | not_new_dataset | 0.991313 | 2308.05878 |
JEDI: Joint Expert Distillation in a Semi-Supervised Multi-Dataset Student-Teacher Scenario for Video Action Recognition | We propose JEDI, a multi-dataset semi-supervised learning method, which
efficiently combines knowledge from multiple experts, learned on different
datasets, to train and improve the performance of individual, per dataset,
student models. Our approach achieves this by addressing two important problems
in current machine learning research: generalization across datasets and
limitations of supervised training due to scarcity of labeled data. We start
with an arbitrary number of experts, pretrained on their own specific dataset,
which form the initial set of student models. The teachers are immediately
derived by concatenating the feature representations from the penultimate
layers of the students. We then train all models in a student-teacher
semi-supervised learning scenario until convergence. In our efficient approach,
student-teacher training is carried out jointly and end-to-end, showing that
both students and teachers improve their generalization capacity during
training. We validate our approach on four video action recognition datasets.
By simultaneously considering all datasets within a unified semi-supervised
setting, we demonstrate significant improvements over the initial experts. | http://arxiv.org/abs/2308.04934v1 | cs.CV | not_new_dataset | 0.991983 | 2308.04934 |
An Analytical Study of Covid-19 Dataset using Graph-Based Clustering Algorithms | Coronavirus Disease, abbreviated as COVID-19, is a novel disease that was
initially identified in Wuhan, China in December 2019, and this deadly
disease has now spread all over the world. According to the World Health
Organization (WHO), a total of 3,124,905 people died from 2019 to April 2021.
Many methods, AI-based techniques, and machine learning algorithms have been
researched and are being used to save people from this pandemic. The SARS-CoV
and SARS-CoV-2 (2019-nCoV) viruses invade our bodies, causing changes
in the structure of cellular proteins. Protein-protein interaction (PPI) is an
essential process in our cells and plays a very important role in the
development of medicines and provides insight into the disease. In this study, we
performed clustering on PPI networks generated from 92 genes of the Covid-19
dataset. We have used three graph-based clustering algorithms to provide intuition
for the analysis of the clusters. | http://arxiv.org/abs/2308.04697v1 | cs.LG | not_new_dataset | 0.991305 | 2308.04697
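The clustering step in the row above can be illustrated with a minimal sketch. The abstract does not name the three graph-based algorithms the authors used, so connected-component clustering stands in here as the simplest graph-based approach; the toy edge list and protein names are illustrative, not taken from the paper's 92-gene dataset.

```python
from collections import defaultdict

def connected_component_clusters(edges):
    """Cluster a protein-protein interaction (PPI) network into its
    connected components -- the simplest form of graph-based clustering.
    This is only a stand-in sketch; the paper's three algorithms are
    not named in the abstract."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)  # visit unexplored neighbors
        seen |= comp
        clusters.append(comp)
    return clusters

# Illustrative edge list (not the paper's data)
ppi_edges = [("ACE2", "S"), ("S", "TMPRSS2"), ("NSP1", "RPL18")]
clusters = connected_component_clusters(ppi_edges)
# Two clusters: {ACE2, S, TMPRSS2} and {NSP1, RPL18}
```

Denser PPI networks are usually one big component, which is why the paper turns to proper clustering algorithms rather than components alone.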
When More is Less: Incorporating Additional Datasets Can Hurt Performance By Introducing Spurious Correlations | In machine learning, incorporating more data is often seen as a reliable
strategy for improving model performance; this work challenges that notion by
demonstrating that the addition of external datasets in many cases can hurt the
resulting model's performance. In a large-scale empirical study across
combinations of four different open-source chest x-ray datasets and 9 different
labels, we demonstrate that in 43% of settings, a model trained on data from
two hospitals has poorer worst group accuracy over both hospitals than a model
trained on just a single hospital's data. This surprising result occurs even
though the added hospital makes the training distribution more similar to the
test distribution. We explain that this phenomenon arises from the spurious
correlation that emerges between the disease and hospital, due to
hospital-specific image artifacts. We highlight the trade-off one encounters
when training on multiple datasets, between the obvious benefit of additional
data and insidious cost of the introduced spurious correlation. In some cases,
balancing the dataset can remove the spurious correlation and improve
performance, but it is not always an effective strategy. We contextualize our
results within the literature on spurious correlations to help explain these
outcomes. Our experiments underscore the importance of exercising caution when
selecting training data for machine learning models, especially in settings
where there is a risk of spurious correlations such as with medical imaging.
The risks outlined highlight the need for careful data selection and model
evaluation in future research and practice. | http://arxiv.org/abs/2308.04431v1 | cs.LG | not_new_dataset | 0.992204 | 2308.04431 |
A Dataset and Analysis of Open-Source Machine Learning Products | Machine learning (ML) components are increasingly incorporated into software
products, yet developers face challenges in transitioning from ML prototypes to
products. Academic researchers struggle to propose solutions to these
challenges and evaluate interventions because they often do not have access to
close-sourced ML products from industry. In this study, we define and identify
open-source ML products, curating a dataset of 262 repositories from GitHub, to
facilitate further research and education. As a start, we explore six broad
research questions related to different development activities and report 21
findings from a sample of 30 ML products from the dataset. Our findings reveal
a variety of development practices and architectural decisions surrounding
different types and uses of ML models that offer ample opportunities for future
research innovations. We also find very little evidence of industry best
practices such as model testing and pipeline automation within the open-source
ML products, which leaves room for further investigation to understand its
potential impact on the development and eventual end-user experience for the
products. | http://arxiv.org/abs/2308.04328v1 | cs.SE | new_dataset | 0.994491 | 2308.04328 |
A Comparative Study on TF-IDF feature Weighting Method and its Analysis using Unstructured Dataset | Text Classification is the process of categorizing text into relevant
categories, and its algorithms are at the core of many Natural Language
Processing (NLP) applications. Term Frequency-Inverse Document Frequency (TF-IDF) and NLP
are among the most widely used information retrieval methods in text classification.
We have investigated and analyzed the feature weighting method for text
classification on unstructured data. The proposed model considered two features
N-Grams and TF-IDF on the IMDB movie reviews and Amazon Alexa reviews dataset
for sentiment analysis. Then we have used the state-of-the-art classifier to
validate the method i.e., Support Vector Machine (SVM), Logistic Regression,
Multinomial Naive Bayes (Multinomial NB), Random Forest, Decision Tree, and
k-nearest neighbors (KNN). Of the two feature extraction methods, TF-IDF
features yielded a significantly larger performance increase than N-Grams.
TF-IDF achieved the maximum accuracy (93.81%), precision (94.20%), recall
(93.81%), and F1-score (91.99%) with the Random Forest classifier. | http://arxiv.org/abs/2308.04037v1 | cs.CL | not_new_dataset | 0.992044 | 2308.04037
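The TF-IDF weighting compared against N-Grams in the row above can be sketched from scratch. This is one common variant (length-normalized term frequency times log inverse document frequency); the paper does not specify its exact scheme, and the toy documents stand in for the IMDB/Amazon reviews.

```python
import math
from collections import Counter

def tfidf(corpus):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF here is raw count / document length; IDF is log(N / df).
    This is an illustrative sketch of one standard weighting scheme,
    not necessarily the exact variant used in the paper."""
    n_docs = len(corpus)
    df = Counter()                      # document frequency per term
    for doc in corpus:
        df.update(set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy documents standing in for the review datasets
docs = [
    "great movie great acting".split(),
    "terrible movie".split(),
    "great fun".split(),
]
w = tfidf(docs)
# "movie" appears in 2 of 3 docs, so it gets a lower IDF than "acting",
# which appears in only 1 -- rarer terms are weighted up.
```

The resulting per-document weight vectors are what would be fed to SVM, Random Forest, and the other classifiers the paper evaluates.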
Balanced Face Dataset: Guiding StyleGAN to Generate Labeled Synthetic Face Image Dataset for Underrepresented Group | For a machine learning model to generalize effectively to unseen data within
a particular problem domain, it is well-understood that the data needs to be of
sufficient size and representative of real-world scenarios. Nonetheless,
real-world datasets frequently have overrepresented and underrepresented
groups. One solution to mitigate bias in machine learning is to leverage a
diverse and representative dataset. Training a model on a dataset that covers
all demographics is crucial to reducing bias in machine learning. However,
collecting and labeling large-scale datasets has been challenging, prompting
the use of synthetic data generation and active labeling to decrease the costs
of manual labeling. The focus of this study was to generate a robust face image
dataset using the StyleGAN model. In order to achieve a balanced distribution
of the dataset among different demographic groups, a synthetic dataset was
created by controlling the generation process of StyleGAN and annotated for
different downstream tasks. | http://arxiv.org/abs/2308.03495v1 | cs.CV | new_dataset | 0.994295 | 2308.03495 |
SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs | In this work, we present SciGraphQA, a synthetic multi-turn question-answer
dataset related to academic graphs. SciGraphQA is 13 times larger than
ChartVQA, the previously largest chart-visual question-answering dataset. It is
also the largest open-sourced chart VQA dataset with non-synthetic charts. To
build our dataset, we selected 290,000 Computer Science or Machine Learning
ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate
295K samples of open-vocabulary multi-turn question-answering dialogues about
the graphs. As context, we provided the text-only Palm-2 with paper title,
abstract, paragraph mentioning the graph, and rich text contextual data from
the graph itself, obtaining dialogues with an average 2.23 question-answer
turns for each graph. We asked GPT-4 to assess the matching quality of our
question-answer turns given the paper's context, obtaining an average rating of
8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most
popular MLLM models such as LLaVA, mPLUG-Owl, BLIP-2, and OpenFlamingo on our
dataset, finding LLaVA-13B to be the most performant with a CIDEr score of
0.08. We further enriched the question prompts for LLaVA by including the
serialized data tables extracted from the graphs using the DePlot model,
boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset,
we also fine-tuned LLaVa using our dataset, reaching a substantially higher
CIDEr score of 0.26. We anticipate further accuracy improvement by including
segmentation mask tokens and leveraging larger LLM backbones coupled with
emergent prompting techniques. Our code and data are open-sourced. | http://arxiv.org/abs/2308.03349v1 | cs.CL | new_dataset | 0.994439 | 2308.03349 |
Generalized Oversampling for Learning from Imbalanced datasets and Associated Theory | In supervised learning, it is quite frequent to be confronted with real
imbalanced datasets. This situation leads to a learning difficulty for standard
algorithms. Research and solutions in imbalanced learning have mainly focused
on classification tasks. Despite its importance, very few solutions exist for
imbalanced regression. In this paper, we propose a data augmentation procedure,
the GOLIATH algorithm, based on kernel density estimates which can be used in
classification and regression. This general approach encompasses two large
families of synthetic oversampling: those based on perturbations, such as
Gaussian Noise, and those based on interpolations, such as SMOTE. It also
provides an explicit form of these machine learning algorithms and an
expression of their conditional densities, in particular for SMOTE. New
synthetic data generators are deduced. We apply GOLIATH in imbalanced
regression combining such generator procedures with a wild-bootstrap resampling
technique for the target values. We evaluate the performance of the GOLIATH
algorithm in imbalanced regression situations. We empirically evaluate and
compare our approach and demonstrate significant improvement over existing
state-of-the-art techniques. | http://arxiv.org/abs/2308.02966v1 | stat.ML | not_new_dataset | 0.992111 | 2308.02966 |
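Of the two synthetic oversampling families the row above unifies, the perturbation-based one (Gaussian noise) is easy to sketch for imbalanced regression. This is a minimal illustration, not the paper's kernel-density-based GOLIATH procedure; the uniform choice of seed points and the noise scale are assumptions.

```python
import random

def gaussian_noise_oversample(X, y, n_new, sigma=0.05, seed=0):
    """Generate synthetic (x, y) samples for imbalanced regression by
    perturbing existing pairs with Gaussian noise -- the perturbation-based
    family of synthetic oversampling. GOLIATH itself is a more general
    kernel-density formulation; this is only a minimal sketch."""
    rng = random.Random(seed)
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.randrange(len(X))               # pick a seed sample
        X_new.append([xj + rng.gauss(0.0, sigma) for xj in X[i]])
        y_new.append(y[i] + rng.gauss(0.0, sigma))  # perturb the target too
    return X_new, y_new

# Tiny illustrative minority region of a regression dataset
X = [[0.0, 1.0], [0.2, 0.9]]
y = [10.0, 12.0]
Xs, ys = gaussian_noise_oversample(X, y, n_new=5)
```

Interpolation-based methods such as SMOTE replace the noise step with convex combinations of neighboring samples; both fall out of the same kernel-density view described in the abstract.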
Meta-Analysis and Systematic Review for Anomaly Network Intrusion Detection Systems: Detection Methods, Dataset, Validation Methodology, and Challenges | Intrusion detection systems (IDSs) built on artificial intelligence (AI) are
presented as promising mechanisms for actively detecting fresh attacks over a
complex network. Although review papers have used systematic reviews or
simple methods to analyse and criticize anomaly-NIDS works, current
reviews use a traditional quantitative description to find
gaps by synthesizing and summarizing data comparisons without considering
algorithm performance. This paper presents a systematic and meta-analysis
study of AI for network intrusion detection systems (NIDS) focusing on deep
learning (DL) and machine learning (ML) approaches in network security. Deep
learning algorithms are explained in terms of their structure, and network
intrusion data is characterized based on network infrastructure and attack types.
By conducting a meta-analysis and examining the validation of DL and ML
approaches by effectiveness, datasets used, detected attacks, classification task,
and time complexity, we offer a thorough benchmarking assessment of
current NIDS-based publications. The proposed method
reviews works on anomaly-based network intrusion detection
system (anomaly-NIDS) models. Furthermore, the effectiveness of the proposed
algorithms and the selected datasets is discussed with respect to recent directions and
improvements of ML and DL for NIDS. Future trends for improving
anomaly-IDS for continued detection as cyberattacks evolve are
highlighted across several research studies. | http://arxiv.org/abs/2308.02805v2 | cs.CR | not_new_dataset | 0.992102 | 2308.02805
Sinhala-English Parallel Word Dictionary Dataset | Parallel datasets are vital for performing and evaluating any kind of
multilingual task. However, in the cases where one of the considered language
pairs is a low-resource language, the existing top-down parallel data such as
corpora are lacking in both tally and quality due to the dearth of human
annotation. Therefore, for low-resource languages, it is more feasible to move
in the bottom-up direction where finer granular pairs such as dictionary
datasets are developed first. They may then be used for mid-level tasks such as
supervised multilingual word embedding alignment. These in turn can later guide
higher-level tasks in the order of aligning sentence or paragraph text corpora
used for Machine Translation (MT). Even though this is more approachable than
generating and aligning a massive corpus for a low-resource language, owing to the
same apathy from larger research entities, even these finer granular
data sets are lacking for some low-resource languages. We have observed that
there is no free and open dictionary data set for the low-resource language,
Sinhala. Thus, in this work, we introduce three parallel English-Sinhala word
dictionaries (En-Si-dict-large, En-Si-dict-filtered, En-Si-dict-FastText) which
help in multilingual Natural Language Processing (NLP) tasks related to English
and Sinhala languages. In this paper, we explain the dataset creation pipeline
as well as the experimental results of the tests we have carried out to verify
the quality of the data sets. The data sets and the related scripts are
available at https://github.com/kasunw22/sinhala-para-dict. | http://arxiv.org/abs/2308.02234v1 | cs.CL | new_dataset | 0.994423 | 2308.02234 |
NuInsSeg: A Fully Annotated Dataset for Nuclei Instance Segmentation in H&E-Stained Histological Images | In computational pathology, automatic nuclei instance segmentation plays an
essential role in whole slide image analysis. While many computerized
approaches have been proposed for this task, supervised deep learning (DL)
methods have shown superior segmentation performances compared to classical
machine learning and image processing techniques. However, these models need
fully annotated datasets for training which is challenging to acquire,
especially in the medical domain. In this work, we release one of the biggest
fully manually annotated datasets of nuclei in Hematoxylin and Eosin
(H&E)-stained histological images, called NuInsSeg. This dataset contains 665
image patches with more than 30,000 manually segmented nuclei from 31 human and
mouse organs. Moreover, for the first time, we provide additional ambiguous
area masks for the entire dataset. These vague areas represent the parts of the
images where precise and deterministic manual annotations are impossible, even
for human experts. The dataset and detailed step-by-step instructions to
generate related segmentation masks are publicly available at
https://www.kaggle.com/datasets/ipateam/nuinsseg and
https://github.com/masih4/NuInsSeg, respectively. | http://arxiv.org/abs/2308.01760v1 | eess.IV | new_dataset | 0.994392 | 2308.01760 |
VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception | AI alignment refers to models acting towards human-intended goals,
preferences, or ethical principles. Given that most large-scale deep learning
models act as black boxes and cannot be manually controlled, analyzing the
similarity between models and humans can be a proxy measure for ensuring AI
safety. In this paper, we focus on the models' visual perception alignment with
humans, further referred to as AI-human visual alignment. Specifically, we
propose a new dataset for measuring AI-human visual alignment in terms of image
classification, a fundamental task in machine perception. In order to evaluate
AI-human visual alignment, a dataset should encompass samples with various
scenarios that may arise in the real world and have gold human perception
labels. Our dataset consists of three groups of samples, namely Must-Act (i.e.,
Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity
of visual information in an image and further divided into eight categories.
All samples have a gold human perception label; even Uncertain (severely
blurry) sample labels were obtained via crowd-sourcing. The validity of our
dataset is verified by sampling theory, statistical theories related to survey
design, and experts in the related fields. Using our dataset, we analyze the
visual alignment and reliability of five popular visual perception models and
seven abstention methods. Our code and data are available at
https://github.com/jiyounglee-0523/VisAlign. | http://arxiv.org/abs/2308.01525v2 | cs.CV | new_dataset | 0.994517 | 2308.01525
Data Collaboration Analysis applied to Compound Datasets and the Introduction of Projection data to Non-IID settings | Given the time and expense associated with bringing a drug to market,
numerous studies have been conducted to predict the properties of compounds
based on their structure using machine learning. Federated learning has been
applied to compound datasets to increase their prediction accuracy while
safeguarding potentially proprietary information. However, federated learning
is encumbered by low accuracy in not identically and independently distributed
(non-IID) settings, i.e., data partitioning has a large label bias, and is
considered unsuitable for compound datasets, which tend to have large label
bias. To address this limitation, we applied an alternative distributed
machine learning method, called data collaboration analysis (DC), to chemical
compound data from open sources. We also proposed data collaboration
analysis using projection data (DCPd), which is an improved method that
utilizes auxiliary PubChem data. This improves the quality of individual
user-side data transformations of the projection data used to create
intermediate representations. The classification accuracy, i.e., area under the
curve in the receiver operating characteristic curve (ROC-AUC) and AUC in the
precision-recall curve (PR-AUC), of federated averaging (FedAvg), DC, and DCPd
was compared for five compound datasets. We determined that the machine
learning performance for non-IID settings was in the order of DCPd, DC, and
FedAvg, although they were almost the same in identically and independently
distributed (IID) settings. Moreover, the results showed that compared to other
methods, DCPd exhibited a negligible decline in classification accuracy in
experiments with different degrees of label bias. Thus, DCPd can address the
low performance in non-IID settings, which is one of the challenges of
federated learning. | http://arxiv.org/abs/2308.00280v1 | cs.LG | not_new_dataset | 0.992131 | 2308.00280 |
A Suite of Fairness Datasets for Tabular Classification | There have been many papers with algorithms for improving fairness of
machine-learning classifiers for tabular data. Unfortunately, most use only
very few datasets for their experimental evaluation. We introduce a suite of
functions for fetching 20 fairness datasets and providing associated fairness
metadata. Hopefully, these will lead to more rigorous experimental evaluations
in future fairness-aware machine learning research. | http://arxiv.org/abs/2308.00133v1 | cs.LG | new_dataset | 0.971144 | 2308.00133 |
No Fair Lunch: A Causal Perspective on Dataset Bias in Machine Learning for Medical Imaging | As machine learning methods gain prominence within clinical decision-making,
addressing fairness concerns becomes increasingly urgent. Despite considerable
work dedicated to detecting and ameliorating algorithmic bias, today's methods
are deficient with potentially harmful consequences. Our causal perspective
sheds new light on algorithmic bias, highlighting how different sources of
dataset bias may appear indistinguishable yet require substantially different
mitigation strategies. We introduce three families of causal bias mechanisms
stemming from disparities in prevalence, presentation, and annotation. Our
causal analysis underscores how current mitigation methods tackle only a narrow
and often unrealistic subset of scenarios. We provide a practical three-step
framework for reasoning about fairness in medical imaging, supporting the
development of safe and equitable AI prediction models. | http://arxiv.org/abs/2307.16526v1 | cs.LG | not_new_dataset | 0.992025 | 2307.16526 |
ERCPMP: An Endoscopic Image and Video Dataset for Colorectal Polyps Morphology and Pathology | In recent years, artificial intelligence (AI) and its leading subtypes,
machine learning (ML) and deep learning (DL), along with their applications, have been
spreading rapidly into various fields such as medicine. Today the most
important challenge of developing accurate algorithms for medical prediction,
detection, diagnosis, treatment and prognosis is data. ERCPMP is an Endoscopic
Image and Video Dataset for Recognition of Colorectal Polyps Morphology and
Pathology. This dataset contains demographic, morphological and pathological
data, endoscopic images and videos of 191 patients with colorectal polyps.
Morphological data is included based on the latest international
gastroenterology classification references such as Paris, Pit and JNET
classification. Pathological data includes the diagnosis of the polyps
including Tubular, Villous, Tubulovillous, Hyperplastic, Serrated, Inflammatory
and Adenocarcinoma with Dysplasia Grade & Differentiation. The current version
of this dataset is published and available on Elsevier Mendeley Dataverse and
since it is under development, the latest version is accessible via:
https://databiox.com. | http://arxiv.org/abs/2307.15444v1 | eess.IV | new_dataset | 0.994434 | 2307.15444 |
Decoding the Secrets of Machine Learning in Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model Performance | Many studies have proposed machine-learning (ML) models for malware detection
and classification, reporting an almost-perfect performance. However, they
assemble ground-truth in different ways, use diverse static- and
dynamic-analysis techniques for feature extraction, and even differ on what
they consider a malware family. As a consequence, our community still lacks an
understanding of malware classification results: whether they are tied to the
nature and distribution of the collected dataset, to what extent the number of
families and samples in the training dataset influence performance, and how
well static and dynamic features complement each other.
This work sheds light on those open questions by investigating the key
factors influencing ML-based malware detection and classification. For this, we
collect the largest balanced malware dataset so far with 67K samples from 670
families (100 samples each), and train state-of-the-art models for malware
detection and family classification using our dataset. Our results reveal that
static features perform better than dynamic features, and that combining both
only provides marginal improvement over static features. We discover no
correlation between packing and classification accuracy, and that missing
behaviors in dynamically-extracted features highly penalize their performance.
We also demonstrate how a larger number of families to classify makes the
classification harder, while a higher number of samples per family increases
accuracy. Finally, we find that models trained on a uniform distribution of
samples per family better generalize on unseen data. | http://arxiv.org/abs/2307.14657v1 | cs.CR | not_new_dataset | 0.992034 | 2307.14657 |
BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning | In the field of phase change phenomena, the lack of accessible and diverse
datasets suitable for machine learning (ML) training poses a significant
challenge. Existing experimental datasets are often restricted, with limited
availability and sparse ground truth data, impeding our understanding of these
complex multiphysics phenomena. To bridge this gap, we present the BubbleML
Dataset
\footnote{\label{git_dataset}\url{https://github.com/HPCForge/BubbleML}} which
leverages physics-driven simulations to provide accurate ground truth
information for various boiling scenarios, encompassing nucleate pool boiling,
flow boiling, and sub-cooled boiling. This extensive dataset covers a wide
range of parameters, including varying gravity conditions, flow rates,
sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is
validated against experimental observations and trends, establishing it as an
invaluable resource for ML research. Furthermore, we showcase its potential to
facilitate exploration of diverse downstream tasks by introducing two
benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b)
operator networks for learning temperature dynamics. The BubbleML dataset and
its benchmarks serve as a catalyst for advancements in ML-driven research on
multiphysics phase change phenomena, enabling the development and comparison of
state-of-the-art techniques and models. | http://arxiv.org/abs/2307.14623v2 | cs.LG | new_dataset | 0.994481 | 2307.14623 |
Deep Learning Hyperspectral Pansharpening on large scale PRISMA dataset | In this work, we assess several deep learning strategies for hyperspectral
pansharpening. First, we present a new dataset with a greater extent than any
other in the state of the art. This dataset, collected using the ASI PRISMA
satellite, covers about 262,200 km$^2$, and its heterogeneity is ensured by
randomly sampling the Earth's soil. Second, we adapted several state-of-the-art
approaches based on deep learning to fit PRISMA hyperspectral data and then
assessed, quantitatively and qualitatively, the performance in this new
scenario. The investigation has included two settings: Reduced Resolution (RR)
to evaluate the techniques in a supervised environment and Full Resolution (FR)
for a real-world evaluation. The main purpose is the evaluation of the
reconstruction fidelity of the considered methods. In both scenarios, for the
sake of completeness, we also included machine-learning-free approaches. From
this extensive analysis has emerged that data-driven neural network methods
outperform machine-learning-free approaches and adapt better to the task of
hyperspectral pansharpening, both in RR and FR protocols. | http://arxiv.org/abs/2307.11666v2 | eess.IV | new_dataset | 0.994024 | 2307.11666 |
A Dataset and Strong Baselines for Classification of Czech News Texts | Pre-trained models for Czech Natural Language Processing are often evaluated
on purely linguistic tasks (POS tagging, parsing, NER) and relatively simple
classification tasks such as sentiment classification or article classification
from a single news source. As an alternative, we present
CZEch~NEws~Classification~dataset (CZE-NEC), one of the largest Czech
classification datasets, composed of news articles from various sources
spanning over twenty years, which allows a more rigorous evaluation of such
models. We define four classification tasks: news source, news category,
inferred author's gender, and day of the week. To verify the task difficulty,
we conducted a human evaluation, which revealed that human performance lags
behind strong machine-learning baselines built upon pre-trained transformer
models. Furthermore, we show that language-specific pre-trained encoders
outperform selected commercially available large-scale generative language
models.
Novel Batch Active Learning Approach and Its Application to Synthetic Aperture Radar Datasets | Active learning improves the performance of machine learning methods by
judiciously selecting a limited number of unlabeled data points to query for
labels, with the aim of maximally improving the underlying classifier's
performance. Recent gains have been made using sequential active learning for
synthetic aperture radar (SAR) data (arXiv:2204.00005). In each iteration,
sequential active learning selects a query set of size one while batch active
learning selects a query set of multiple datapoints. While batch active
learning methods exhibit greater efficiency, the challenge lies in maintaining
model accuracy relative to sequential active learning methods. We developed a
novel, two-part approach for batch active learning: Dijkstra's Annulus Core-Set
(DAC) for core-set generation and LocalMax for batch sampling. The batch active
learning process that combines DAC and LocalMax achieves nearly identical
accuracy as sequential active learning but is more efficient, proportional to
the batch size. As an application, a pipeline is built based on transfer
learning feature embedding, graph learning, DAC, and LocalMax to classify the
FUSAR-Ship and OpenSARShip datasets. Our pipeline outperforms the
state-of-the-art CNN-based methods. | http://arxiv.org/abs/2307.10495v1 | cs.LG | not_new_dataset | 0.991471 | 2307.10495 |
A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset | In an effort to catalog insect biodiversity, we propose a new large dataset
of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is
taxonomically classified by an expert, and also has associated genetic
information including raw nucleotide barcode sequences and assigned barcode
index numbers, which are genetically-based proxies for species classification.
This paper presents a curated million-image dataset, primarily to train
computer-vision models capable of providing image-based taxonomic assessment;
however, the dataset also presents compelling characteristics, the study of
which would be of interest to the broader machine learning community. Driven by
the biological nature inherent to the dataset, a characteristic long-tailed
class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is
a hierarchical classification scheme, presenting a highly fine-grained
classification problem at lower levels. Beyond spurring interest in
biodiversity research within the machine learning community, progress on
creating an image-based taxonomic classifier will also further the ultimate
goal of all BIOSCAN research: to lay the foundation for a comprehensive survey
of global biodiversity. This paper introduces the dataset and explores the
classification task through the implementation and analysis of a baseline
classifier. | http://arxiv.org/abs/2307.10455v1 | cs.CV | new_dataset | 0.994528 | 2307.10455 |
MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results | Small Object Detection (SOD) is an important machine vision topic because (i)
a variety of real-world applications require object detection for distant
objects and (ii) SOD is a challenging task due to the noisy, blurred, and
less-informative image appearances of small objects. This paper proposes a new
SOD dataset consisting of 39,070 images including 137,121 bird instances, which
is called the Small Object Detection for Spotting Birds (SOD4SB) dataset. The
detail of the challenge with the SOD4SB dataset is introduced in this paper. In
total, 223 participants joined this challenge. This paper briefly introduces
the award-winning methods. The dataset, the baseline code, and the website for
evaluation on the public testset are publicly available. | http://arxiv.org/abs/2307.09143v1 | cs.CV | new_dataset | 0.994389 | 2307.09143 |
Analyzing Dataset Annotation Quality Management in the Wild | Data quality is crucial for training accurate, unbiased, and trustworthy
machine learning models and their correct evaluation. Recent works, however,
have shown that even popular datasets used to train and evaluate
state-of-the-art models contain a non-negligible amount of erroneous
annotations, bias or annotation artifacts. There exist best practices and
guidelines regarding annotation projects. But to the best of our knowledge, no
large-scale analysis has been performed as of yet on how quality management is
actually conducted when creating natural language datasets and whether these
recommendations are followed. Therefore, we first survey and summarize
recommended quality management practices for dataset creation as described in
the literature and provide suggestions on how to apply them. Then, we compile a
corpus of 591 scientific publications introducing text datasets and annotate it
for quality-related aspects, such as annotator management, agreement,
adjudication or data validation. Using these annotations, we then analyze how
quality management is conducted in practice. We find that a majority of the
annotated publications apply good or very good quality management. However, we
deem the effort of 30% of the works to be only subpar. Our analysis also shows
common errors, especially with using inter-annotator agreement and computing
annotation error rates. | http://arxiv.org/abs/2307.08153v2 | cs.CL | not_new_dataset | 0.991888 | 2307.08153 |
Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++ | In this study, we present a novel dataset for training machine learning
models translating between OpenMP Fortran and C++ code. To ensure reliability
and applicability, the dataset is created from a range of representative
open-source OpenMP benchmarks. It is also refined using a meticulous code
similarity test. The effectiveness of our dataset is assessed using both
quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase
how this dataset significantly elevates the translation competencies of large
language models (LLMs). Specifically, models without prior coding knowledge
experienced a boost of $\mathbf{\times~5.1}$ in their CodeBLEU scores, while
models with some coding familiarity saw an impressive
$\mathbf{\times~9.9}$-fold increase. The best fine-tuned model using our
dataset outperforms GPT-4. It is also reaching human-level accuracy. This work
underscores the immense potential of our dataset in propelling advancements in
the domain of code translation for high-performance computing. The dataset is
accessible at
\href{https://github.com/bin123apple/Fortran-CPP-HPC-code-translation-dataset}{OpenMP-Fortran-CPP-Translation}. | http://arxiv.org/abs/2307.07686v4 | cs.SE | new_dataset | 0.994482 | 2307.07686 |
IntelliGraphs: Datasets for Benchmarking Knowledge Graph Generation | Knowledge Graph Embedding (KGE) models are used to learn continuous
representations of entities and relations. A key task in the literature is
predicting missing links between entities. However, Knowledge Graphs are not
just sets of links but also have semantics underlying their structure.
Semantics is crucial in several downstream tasks, such as query answering or
reasoning. We introduce the subgraph inference task, where a model has to
generate likely and semantically valid subgraphs. We propose IntelliGraphs, a
set of five new Knowledge Graph datasets. The IntelliGraphs datasets contain
subgraphs with semantics expressed in logical rules for evaluating subgraph
inference. We also present the dataset generator that produced the synthetic
datasets. We designed four novel baseline models, which include three models
based on traditional KGEs. We evaluate their expressiveness and show that these
models cannot capture the semantics. We believe this benchmark will encourage
the development of machine learning models that emphasize semantic
understanding. | http://arxiv.org/abs/2307.06698v3 | cs.AI | new_dataset | 0.99407 | 2307.06698 |
A New Dataset and Comparative Study for Aphid Cluster Detection | Aphids are one of the main threats to crops, rural families, and global food
security. Chemical pest control is a necessary component of crop production for
maximizing yields; however, it is unnecessary to apply the chemical approaches
to entire fields, in consideration of the environmental pollution and the cost.
Thus, accurately localizing the aphids and estimating the infestation
level is crucial to the precise local application of pesticides. Aphid
detection is very challenging as each individual aphid is really small and all
aphids are crowded together as clusters. In this paper, we propose to estimate
the infection level by detecting aphid clusters. We have taken millions of
images in the sorghum fields, manually selected 5,447 images that contain
aphids, and annotated each aphid cluster in the image. To use these images for
machine learning models, we crop the images into patches and create a labeled
dataset with over 151,000 image patches. Then, we implement and compare the
performance of four state-of-the-art object detection models. | http://arxiv.org/abs/2307.05929v1 | cs.CV | new_dataset | 0.994511 | 2307.05929 |
Grain and Grain Boundary Segmentation using Machine Learning with Real and Generated Datasets | We report significantly improved accuracy of grain boundary segmentation
using Convolutional Neural Networks (CNN) trained on a combination of real and
generated data. Manual segmentation is accurate but time-consuming, and
existing computational methods are faster but often inaccurate. To combat this
dilemma, machine learning models can be used to achieve the accuracy of manual
segmentation and have the efficiency of a computational method. An extensive
set of 316L stainless steel samples was additively manufactured, prepared,
polished, and etched, and then microstructure grain images were systematically
collected. Grain segmentation via existing computational methods and manual
(by-hand) segmentation was conducted to create "real" training data. A Voronoi
tessellation pattern combined with random synthetic noise and simulated
defects is developed to create a novel artificial grain image fabrication
method. This provided training data supplementation for data-intensive machine
learning methods. The accuracy of the grain measurements from microstructure
images segmented via computational methods and the machine learning methods
proposed in this work is calculated and compared to provide benchmarks for
grain segmentation. Over 400 images of the microstructure of stainless steel
samples were manually segmented for machine learning training applications.
This data and the artificial data are available on Kaggle.
AnuraSet: A dataset for benchmarking Neotropical anuran calls identification in passive acoustic monitoring | Global change is predicted to induce shifts in anuran acoustic behavior,
which can be studied through passive acoustic monitoring (PAM). Understanding
changes in calling behavior requires the identification of anuran species,
which is challenging due to the particular characteristics of neotropical
soundscapes. In this paper, we introduce a large-scale multi-species dataset of
anuran amphibians calls recorded by PAM, that comprises 27 hours of expert
annotations for 42 different species from two Brazilian biomes. We provide open
access to the dataset, including the raw recordings, experimental setup code,
and a benchmark with a baseline model of the fine-grained categorization
problem. Additionally, we highlight the challenges of the dataset to encourage
machine learning researchers to solve the problem of anuran call identification
towards conservation policy. All our experiments and resources can be found on
our GitHub repository https://github.com/soundclim/anuraset. | http://arxiv.org/abs/2307.06860v1 | cs.SD | new_dataset | 0.99454 | 2307.06860 |
MD-HIT: Machine learning for materials property prediction with dataset redundancy control | Materials datasets are usually featured by the existence of many redundant
(highly similar) materials due to the tinkering material design practice over
the history of materials research. For example, the materials project database
has many perovskite cubic structure materials similar to SrTiO$_3$. This sample
redundancy within the dataset causes the random splitting used for machine
learning model evaluation to fail, so that ML models tend to achieve
over-estimated predictive performance, which is misleading for the materials
science community.
This issue is well known in the field of bioinformatics for protein function
prediction, in which a redundancy reduction procedure (CD-Hit) is always
applied to reduce the sample redundancy by ensuring no pair of samples has a
sequence similarity greater than a given threshold. This paper surveys the
overestimated ML performance in the literature for both composition based and
structure based material property prediction. We then propose a material
dataset redundancy reduction algorithm called MD-HIT and evaluate it with
several composition- and structure-based distance thresholds for reducing
dataset sample redundancy. We show that with this control, the predicted
performance tends to better reflect the models' true prediction capability. Our
MD-HIT code can be freely accessed at https://github.com/usccolumbia/MD-HIT
Learning to Group Auxiliary Datasets for Molecule | The limited availability of annotations in small molecule datasets presents a
challenge to machine learning models. To address this, one common strategy is
to collaborate with additional auxiliary datasets. However, having more data
does not always guarantee improvements. Negative transfer can occur when the
knowledge in the target dataset differs or contradicts that of the auxiliary
molecule datasets. In light of this, identifying the auxiliary molecule
datasets that can benefit the target dataset when jointly trained remains a
critical and unresolved problem. Through an empirical analysis, we observe that
combining graph structure similarity and task similarity can serve as a more
reliable indicator for identifying high-affinity auxiliary datasets. Motivated
by this insight, we propose MolGroup, which separates the dataset affinity into
task and structure affinity to predict the potential benefits of each auxiliary
molecule dataset. MolGroup achieves this by utilizing a routing mechanism
optimized through a bi-level optimization framework. Empowered by the meta
gradient, the routing mechanism is optimized toward maximizing the target
dataset's performance and quantifies the affinity as the gating score. As a
result, MolGroup is capable of predicting the optimal combination of auxiliary
datasets for each target dataset. Our extensive experiments demonstrate the
efficiency and effectiveness of MolGroup, showing an average improvement of
4.41%/3.47% for GIN/Graphormer trained with the group of molecule datasets
selected by MolGroup on 11 target molecule datasets. | http://arxiv.org/abs/2307.04052v1 | q-bio.BM | not_new_dataset | 0.991931 | 2307.04052 |
Physics-Infused Machine Learning Based Prediction of VTOL Aerodynamics with Sparse Datasets | Complex optimal design and control processes often require repeated
evaluations of expensive objective functions and consist of large design
spaces. Data-driven surrogates such as neural networks and Gaussian processes
provide an attractive alternative to simulations and are utilized frequently to
represent these objective functions in optimization. However, purely
data-driven models, due to a lack of adherence to basic physics laws and
constraints, are often poor at generalizing and extrapolating. This is
particularly the case when training occurs over sparse high-fidelity
datasets. A class of
Physics-infused machine learning (PIML) models integrate ML models with
low-fidelity partial physics models to improve generalization performance while
retaining computational efficiency. This paper presents two potential
approaches for physics-infused modelling of aircraft aerodynamics, which
incorporate Artificial Neural Networks with a low-fidelity Vortex Lattice
Method model with blown wing effects (BLOFI) to improve prediction performance
while also keeping the computational cost tractable. This paper also develops
an end-to-end auto differentiable open-source framework that enables efficient
training of such hybrid models. These two PIML modelling approaches are then
used to predict the aerodynamic coefficients of a 6 rotor eVTOL aircraft given
its control parameters and flight conditions. The models are trained on a
sparse high-fidelity dataset generated using a CHARM model. The trained models
are then compared against the vanilla low-fidelity model and a standard pure
data-driven ANN. Our results show that one of the proposed architectures
outperforms all the other models at a nominal increase in run time. These
results are promising and pave the way for PIML frameworks which can generalize
over different aircraft and configurations, thereby significantly reducing costs
of design and control. | http://arxiv.org/abs/2307.03286v1 | cs.CE | not_new_dataset | 0.992038 | 2307.03286 |
The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification | This paper presents the FormAI dataset, a large collection of 112, 000
AI-generated compilable and independent C programs with vulnerability
classification. We introduce a dynamic zero-shot prompting technique
constructed to spawn diverse programs utilizing Large Language Models (LLMs).
The dataset is generated by GPT-3.5-turbo and comprises programs with varying
levels of complexity. Some programs handle complicated tasks like network
management, table games, or encryption, while others deal with simpler tasks
like string manipulation. Every program is labeled with the vulnerabilities
found within the source code, indicating the type, line number, and vulnerable
function name. This is accomplished by employing a formal verification method
using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model
checking, abstract interpretation, constraint programming, and satisfiability
modulo theories to reason over safety/security properties in programs. This
approach definitively detects vulnerabilities and offers a formal model known
as a counterexample, thus eliminating the possibility of generating false
positive reports. We have associated the identified vulnerabilities with Common
Weakness Enumeration (CWE) numbers. We make the source code available for the
112, 000 programs, accompanied by a separate file containing the
vulnerabilities detected in each program, making the dataset ideal for training
LLMs and machine learning algorithms. Our study unveiled that according to
ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities,
thereby presenting considerable risks to software safety and security. | http://arxiv.org/abs/2307.02192v2 | cs.DB | new_dataset | 0.99451 | 2307.02192 |
Externally validating the IoTDevID device identification methodology using the CIC IoT 2022 Dataset | In the era of rapid IoT device proliferation, recognizing, diagnosing, and
securing these devices are crucial tasks. The IoTDevID method (IEEE Internet of
Things 2022) proposes a machine learning approach for device identification
using network packet features. In this article we present a validation study of
the IoTDevID method by testing core components, namely its feature set and its
aggregation algorithm, on a new dataset. The new dataset (CIC-IoT-2022) offers
several advantages over earlier datasets, including a larger number of devices,
multiple instances of the same device, both IP and non-IP device data, normal
(benign) usage data, and diverse usage profiles, such as active and idle
states. Using this independent dataset, we explore the validity of IoTDevID's
core components, and also examine the impacts of the new data on model
performance. Our results indicate that data diversity is important to model
performance. For example, models trained with active usage data outperformed
those trained with idle usage data, and multiple usage data similarly improved
performance. Results for IoTDevID were strong with a 92.50 F1 score for 31
IP-only device classes, similar to our results on previous datasets. In all
cases, the IoTDevID aggregation algorithm improved model performance. For
non-IP devices we obtained a 78.80 F1 score for 40 device classes, though with
much less data, confirming that data quantity is also important to model
performance. | http://arxiv.org/abs/2307.08679v1 | cs.NI | new_dataset | 0.994149 | 2307.08679 |
A Critical Re-evaluation of Benchmark Datasets for (Deep) Learning-Based Matching Algorithms | Entity resolution (ER) is the process of identifying records that refer to
the same entities within one or across multiple databases. Numerous techniques
have been developed to tackle ER challenges over the years, with recent
emphasis placed on machine and deep learning methods for the matching phase.
However, the quality of the benchmark datasets typically used in the
experimental evaluations of learning-based matching algorithms has not been
examined in the literature. To cover this gap, we propose four different
approaches to assessing the difficulty and appropriateness of 13 established
datasets: two theoretical approaches, which involve new measures of linearity
and existing measures of complexity, and two practical approaches: the
difference between the best non-linear and linear matchers, as well as the
difference between the best learning-based matcher and the perfect oracle. Our
analysis demonstrates that most of the popular datasets pose rather easy
classification tasks. As a result, they are not suitable for properly
evaluating learning-based matching algorithms. To address this issue, we
propose a new methodology for yielding benchmark datasets. We put it into
practice by creating four new matching tasks, and we verify that these new
benchmarks are more challenging and therefore more suitable for further
advancements in the field. | http://arxiv.org/abs/2307.01231v1 | cs.DB | not_new_dataset | 0.992238 | 2307.01231 |
Dataset balancing can hurt model performance | Machine learning from training data with a skewed distribution of examples
per class can lead to models that favor performance on common classes at the
expense of performance on rare ones. AudioSet has a very wide range of priors
over its 527 sound event classes. Classification performance on AudioSet is
usually evaluated by a simple average over per-class metrics, meaning that
performance on rare classes is equal in importance to the performance on common
ones. Several recent papers have used dataset balancing techniques to improve
performance on AudioSet. We find, however, that while balancing improves
performance on the public AudioSet evaluation data it simultaneously hurts
performance on an unpublished evaluation set collected under the same
conditions. By varying the degree of balancing, we show that its benefits are
fragile and depend on the evaluation set. We also do not find evidence
indicating that balancing improves rare class performance relative to common
classes. We therefore caution against blind application of balancing, as well
as against paying too much attention to small improvements on a public
evaluation set. | http://arxiv.org/abs/2307.00079v1 | cs.LG | not_new_dataset | 0.991947 | 2307.00079 |
X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents | Task-oriented dialogue research has mainly focused on a few popular languages
like English and Chinese, due to the high dataset creation cost for a new
language. To reduce the cost, we apply manual editing to automatically
translated data. We create a new multilingual benchmark, X-RiSAWOZ, by
translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean;
and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000
human-verified dialogue utterances for each language, and unlike most
multilingual prior work, is an end-to-end dataset for building
fully-functioning agents.
The many difficulties we encountered in creating X-RiSAWOZ led us to develop
a toolset to accelerate the post-editing of a new language dataset after
translation. This toolset improves machine translation with a hybrid entity
alignment technique that combines neural with dictionary-based methods, along
with many automated and semi-automated validation checks.
We establish strong baselines for X-RiSAWOZ by training dialogue agents in
the zero- and few-shot settings where limited gold data is available in the
target language. Our results suggest that our translation and post-editing
methodology and toolset can be used to create new high-quality multilingual
dialogue agents cost-effectively. Our dataset, code, and toolkit are released
open-source. | http://arxiv.org/abs/2306.17674v1 | cs.CL | new_dataset | 0.994459 | 2306.17674 |
TTSWING: a Dataset for Table Tennis Swing Analysis | We introduce TTSWING, a novel dataset designed for table tennis swing
analysis. This dataset comprises comprehensive swing information obtained
through 9-axis sensors integrated into custom-made racket grips, accompanied by
anonymized demographic data of the players. We detail the data collection and
annotation procedures. Furthermore, we conduct pilot studies utilizing diverse
machine learning models for swing analysis. TTSWING holds tremendous potential
to facilitate innovative research in table tennis analysis and is a valuable
resource for the scientific community. We release the dataset and experimental
codes at https://github.com/DEPhantom/TTSWING. | http://arxiv.org/abs/2306.17550v1 | cs.LG | new_dataset | 0.994443 | 2306.17550 |
Surgical Phase and Instrument Recognition: How to identify appropriate Dataset Splits | Purpose: The development of machine learning models for surgical workflow and
instrument recognition from temporal data represents a challenging task due to
the complex nature of surgical workflows. In particular, the imbalanced
distribution of data is one of the major challenges in the domain of surgical
workflow recognition. In order to obtain meaningful results, careful
partitioning of data into training, validation, and test sets, as well as the
selection of suitable evaluation metrics are crucial. Methods: In this work, we
present an openly available web-based application that enables interactive
exploration of dataset partitions. The proposed visual framework facilitates
the assessment of dataset splits for surgical workflow recognition, especially
with regard to identifying sub-optimal dataset splits. Currently, it supports
visualization of surgical phase and instrument annotations. Results: In order
to validate the dedicated interactive visualizations, we use a dataset split of
the Cholec80 dataset. This dataset split was specifically selected to reflect a
case of strong data imbalance. Using our software, we were able to identify
phases, phase transitions, and combinations of surgical instruments that were
not represented in one of the sets. Conclusion: In order to obtain meaningful
results in highly unbalanced class distributions, special care should be taken
with respect to the selection of an appropriate split. Interactive data
visualization represents a promising approach for the assessment of machine
learning datasets. The source code is available at
https://github.com/Cardio-AI/endovis-ml | http://arxiv.org/abs/2306.16879v1 | cs.LG | not_new_dataset | 0.991637 | 2306.16879 |
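The imbalance problem described in this abstract can also be probed programmatically. Below is a minimal sketch (not the paper's tool, which is an interactive web application) of checking whether every surgical phase that occurs anywhere in a dataset is represented in each of the train/validation/test splits; the `missing_labels` helper and the toy phase labels are illustrative.

```python
def missing_labels(split_labels):
    """Report labels absent from any split of a train/val/test partition.

    split_labels: dict mapping split name -> list of per-frame labels
    (e.g. surgical phases or instrument IDs). Returns, for each split,
    the set of labels that occur elsewhere but not in that split.
    """
    all_labels = set().union(*(set(v) for v in split_labels.values()))
    return {name: all_labels - set(labels)
            for name, labels in split_labels.items()}

# Toy partition: phase "P7" never appears in the test set.
splits = {
    "train": ["P1", "P2", "P3", "P7"],
    "val":   ["P1", "P2", "P3", "P7"],
    "test":  ["P1", "P2", "P3"],
}
gaps = missing_labels(splits)
```

A split with a non-empty gap set is exactly the kind of sub-optimal partition the paper's visual framework is designed to surface.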
MNISQ: A Large-Scale Quantum Circuit Dataset for Machine Learning on/for Quantum Computers in the NISQ era | We introduce the first large-scale dataset, MNISQ, for both the Quantum and
the Classical Machine Learning community during the Noisy Intermediate-Scale
Quantum era. MNISQ consists of 4,950,000 data points organized in 9
subdatasets. Building our dataset from the quantum encoding of classical
information (e.g., MNIST dataset), we deliver a dataset in a dual form: in
quantum form, as circuits, and in classical form, as quantum circuit
descriptions (quantum programming language, QASM). In fact, machine learning
research related to quantum computers also faces a dual challenge: enhancing
machine learning by exploiting the power of quantum computers, while
leveraging state-of-the-art classical machine learning methodologies to help
advance quantum computing. Therefore, we perform circuit
classification on our dataset, tackling the task with both quantum and
classical models. In the quantum endeavor, we test our circuit dataset with
Quantum Kernel methods, and we show excellent results up to $97\%$ accuracy. In
the classical world, the underlying quantum mechanical structures within the
quantum circuit data are not trivial. Nevertheless, we test our dataset on
three classical models: Structured State Space sequence model (S4), Transformer
and LSTM. In particular, the S4 model applied on the tokenized QASM sequences
reaches an impressive $77\%$ accuracy. These findings illustrate that quantum
circuit-related datasets are likely to be quantum advantageous, but also that
state-of-the-art machine learning methodologies can competently classify and
recognize quantum circuits. We finally entrust the quantum and classical
machine learning community with the fundamental challenge of building more
quantum-classical datasets like ours and of deriving future benchmarks from our
experiments. The dataset is accessible on GitHub and its circuits are easily
run in qulacs or qiskit. | http://arxiv.org/abs/2306.16627v1 | quant-ph | new_dataset | 0.99445 | 2306.16627 |
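To make the classical side of such circuit classification concrete, here is a minimal sketch (independent of MNISQ itself) that treats QASM descriptions as bags of gate tokens and classifies them with a nearest-centroid rule. The tiny synthetic circuits and the helper names are illustrative, not taken from the dataset or the paper's S4/Transformer/LSTM baselines.

```python
import re
from collections import Counter

def gate_counts(qasm):
    """Bag-of-gate-tokens representation of a QASM circuit description."""
    return Counter(re.findall(r"[a-z]+", qasm))

def train_centroids(circuits, labels):
    """Mean gate-count vector per class (a nearest-centroid classifier)."""
    centroids = {}
    for cls in set(labels):
        members = [gate_counts(c) for c, y in zip(circuits, labels) if y == cls]
        total = Counter()
        for m in members:
            total.update(m)
        centroids[cls] = {g: n / len(members) for g, n in total.items()}
    return centroids

def predict(qasm, centroids):
    """Assign the class whose centroid best matches the circuit's gates."""
    counts = gate_counts(qasm)
    return max(centroids, key=lambda cls: sum(counts[g] * w
                                              for g, w in centroids[cls].items()))

# Tiny synthetic stand-ins for MNISQ circuits: class 0 uses rx, class 1 uses rz.
circuits = [
    "qreg q[2]; rx(0.3) q[0]; cx q[0],q[1];",
    "qreg q[2]; rx(1.1) q[1]; cx q[1],q[0];",
    "qreg q[2]; rz(0.4) q[0]; cx q[0],q[1];",
    "qreg q[2]; rz(0.9) q[1]; cx q[1],q[0];",
]
labels = [0, 0, 1, 1]
centroids = train_centroids(circuits, labels)
```

The paper's point stands even in this toy form: the QASM text alone carries enough structure for purely classical models to separate circuit classes.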
Efficient and Multiply Robust Risk Estimation under General Forms of Dataset Shift | Statistical machine learning methods often face the challenge of limited data
available from the population of interest. One remedy is to leverage data from
auxiliary source populations, which share some conditional distributions or are
linked in other ways with the target domain. Techniques leveraging such
\emph{dataset shift} conditions are known as \emph{domain adaptation} or
\emph{transfer learning}. Despite extensive literature on dataset shift,
limited works address how to efficiently use the auxiliary populations to
improve the accuracy of risk evaluation for a given machine learning task in
the target population.
In this paper, we study the general problem of efficiently estimating target
population risk under various dataset shift conditions, leveraging
semiparametric efficiency theory. We consider a general class of dataset shift
conditions, which includes three popular conditions -- covariate, label and
concept shift -- as special cases. We allow for partially non-overlapping
support between the source and target populations. We develop efficient and
multiply robust estimators along with a straightforward specification test of
these dataset shift conditions. We also derive efficiency bounds for two other
dataset shift conditions, posterior drift and location-scale shift. Simulation
studies support the efficiency gains due to leveraging plausible dataset shift
conditions. | http://arxiv.org/abs/2306.16406v2 | stat.ME | not_new_dataset | 0.992208 | 2306.16406 |
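One of the three classic conditions named above, covariate shift, admits a particularly simple illustration: reweight labeled source examples by the density ratio p_target(x)/p_source(x) and average the loss. The sketch below (illustrative, and far cruder than the paper's semiparametrically efficient, multiply robust estimators) does this for a discrete covariate.

```python
from collections import Counter

def importance_weights(source_x, target_x):
    """Empirical density-ratio weights w(x) = p_target(x) / p_source(x)
    for a discrete covariate x."""
    ps, pt = Counter(source_x), Counter(target_x)
    ns, nt = len(source_x), len(target_x)
    return {x: (pt[x] / nt) / (ps[x] / ns) for x in ps}

def weighted_risk(source_xy, loss, weights):
    """Importance-weighted estimate of target-population risk from
    labeled source data, valid under covariate shift."""
    return sum(weights[x] * loss(x, y) for x, y in source_xy) / len(source_xy)

# Source: x is 0 or 1 with equal frequency; target: x = 0 dominates.
source_xy = [(0, 0)] * 5 + [(1, 1)] * 5
target_x = [0] * 8 + [1] * 2
w = importance_weights([x for x, _ in source_xy], target_x)

# A model that always predicts 0 errs exactly when y == 1 (i.e. x == 1).
zero_one_loss = lambda x, y: 1.0 if y == 1 else 0.0
risk = weighted_risk(source_xy, zero_one_loss, w)
```

Here the unweighted source error rate is 0.5, but the reweighted estimate recovers the target error rate of 0.2.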
MyDigitalFootprint: an extensive context dataset for pervasive computing applications at the edge | The widespread diffusion of connected smart devices has contributed to the
rapid expansion and evolution of the Internet at its edge. Personal mobile
devices interact with other smart objects in their surroundings, adapting
behavior based on rapidly changing user context. The ability of mobile devices
to process this data locally is crucial for quick adaptation. This can be
achieved through a single elaboration process integrated into user applications
or a middleware platform for context processing. However, the lack of public
datasets considering user context complexity in the mobile environment hinders
research progress. We introduce MyDigitalFootprint, a large-scale dataset
comprising smartphone sensor data, physical proximity information, and Online
Social Networks interactions. This dataset supports multimodal context
recognition and social relationship modeling. It spans two months of
measurements from 31 volunteer users in their natural environment, allowing for
unrestricted behavior. Existing public datasets focus on limited context data
for specific applications, while ours offers comprehensive information on the
user context in the mobile environment. To demonstrate the dataset's
effectiveness, we present three context-aware applications utilizing various
machine learning tasks: (i) a social link prediction algorithm based on
physical proximity data, (ii) daily-life activity recognition using
smartphone-embedded sensors data, and (iii) a pervasive context-aware
recommender system. Our dataset, with its heterogeneity of information, serves
as a valuable resource to validate new research in mobile and edge computing. | http://arxiv.org/abs/2306.15990v1 | cs.LG | new_dataset | 0.994506 | 2306.15990 |
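The first of the three demonstration tasks, social link prediction from physical proximity, can be sketched with a simple neighborhood-overlap score. The helper and the toy proximity sets below are illustrative; they are not the algorithm evaluated in the paper.

```python
def jaccard_link_score(colocations, u, v):
    """Score a potential social link between users u and v by the Jaccard
    similarity of the sets of places (or nearby devices) each user was
    observed in proximity to. colocations: dict user -> set of identifiers."""
    a, b = colocations[u], colocations[v]
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy proximity observations for three users.
obs = {
    "u1": {"cafe", "office", "gym"},
    "u2": {"cafe", "office"},
    "u3": {"park"},
}
```

Users with heavily overlapping proximity sets (u1 and u2) score high and are predicted as likely social links; disjoint sets (u1 and u3) score zero.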
Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile | Differential privacy (DP) is the prevailing technique for protecting user
data in machine learning models. However, deficits to this framework include a
lack of clarity for selecting the privacy budget $\epsilon$ and a lack of
quantification for the privacy leakage for a particular data row by a
particular trained model. We make progress toward these limitations and a new
perspective by which to visualize DP results by studying a privacy metric that
quantifies the extent to which a model trained on a dataset using a DP
mechanism is "covered" by each of the distributions resulting from training on
neighboring datasets. We connect this coverage metric to what has been
established in the literature and use it to rank the privacy of individual
samples from the training set in what we call a privacy profile. We
additionally show that the privacy profile can be used to probe an observed
transition to indistinguishability that takes place in the neighboring
distributions as $\epsilon$ decreases, which we suggest is a tool that can
enable the selection of $\epsilon$ by the ML practitioner wishing to make use
of DP. | http://arxiv.org/abs/2306.15790v1 | cs.LG | not_new_dataset | 0.992298 | 2306.15790 |
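For readers less familiar with the DP machinery the paper builds on, the standard Laplace mechanism shows how $\epsilon$ controls the indistinguishability of neighboring datasets: the noise scale is sensitivity/$\epsilon$, so a smaller $\epsilon$ means heavier noise and harder-to-distinguish outputs. A minimal sketch, with an illustrative function name and toy data (this is the textbook mechanism, not the paper's coverage metric):

```python
import random

def laplace_mechanism(dataset, query, sensitivity, epsilon, rng=random):
    """Release query(dataset) with Laplace noise of scale sensitivity/epsilon.
    Smaller epsilon -> larger noise scale -> the output distributions on
    neighboring datasets become harder to distinguish."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two Exp(1/scale) samples.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return query(dataset) + noise, scale

data = [12, 7, 3, 9]
neighbor = [12, 7, 3]            # neighboring dataset: one row removed
count = len                      # a counting query has sensitivity 1

released, scale_tight = laplace_mechanism(data, count, 1.0, epsilon=0.1)
_, scale_loose = laplace_mechanism(data, count, 1.0, epsilon=1.0)
```

At $\epsilon = 0.1$ the noise scale is 10, swamping the one-row difference between `data` and `neighbor`; this is the regime where the paper's privacy profile observes the transition to indistinguishability.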
Constructing Multilingual Code Search Dataset Using Neural Machine Translation | Code search is a task to find programming codes that semantically match the
given natural language queries. Even though some of the existing datasets for
this task are multilingual on the programming language side, their query data
are only in English. In this research, we create a multilingual code search
dataset in four natural and four programming languages using a neural machine
translation model. Using our dataset, we pre-train and fine-tune the
Transformer-based models and then evaluate them on multiple code search test
sets. Our results show that the model pre-trained with all natural and
programming language data has performed best in most cases. By applying
back-translation data filtering to our dataset, we demonstrate that the
translation quality affects the model's performance to a certain extent, but
the data size matters more. | http://arxiv.org/abs/2306.15604v1 | cs.CL | new_dataset | 0.994473 | 2306.15604 |
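The back-translation data filtering mentioned above can be sketched generically: translate a query, translate it back, and keep the pair only if the round trip stays close to the original. The token-overlap F1 proxy and the toy pairs below are illustrative; the paper's actual filter may use a different quality measure.

```python
def token_f1(a, b):
    """Token-overlap F1 between two strings, a crude round-trip
    translation-quality proxy."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    common = len(ta & tb)
    if common == 0:
        return 0.0
    p, r = common / len(ta), common / len(tb)
    return 2 * p * r / (p + r)

def filter_by_round_trip(pairs, threshold=0.5):
    """Keep original queries whose round-trip translation stays close
    to the original text."""
    return [orig for orig, rt in pairs if token_f1(orig, rt) >= threshold]

pairs = [
    ("sort a list in python", "sort a list in python"),
    ("read file lines", "completely unrelated output text"),
]
kept = filter_by_round_trip(pairs)
```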
Assessing Dataset Quality Through Decision Tree Characteristics in Autoencoder-Processed Spaces | In this paper, we delve into the critical aspect of dataset quality
assessment in machine learning classification tasks. Leveraging a variety of
nine distinct datasets, each crafted for classification tasks with varying
complexity levels, we illustrate the profound impact of dataset quality on
model training and performance. We further introduce two additional datasets
designed to represent specific data conditions - one maximizing entropy and the
other demonstrating high redundancy. Our findings underscore the importance of
appropriate feature selection, adequate data volume, and data quality in
achieving high-performing machine learning models. To aid researchers and
practitioners, we propose a comprehensive framework for dataset quality
assessment, which can help evaluate if the dataset at hand is sufficient and of
the required quality for specific tasks. This research offers valuable insights
into data assessment practices, contributing to the development of more
accurate and robust machine learning models. | http://arxiv.org/abs/2306.15392v1 | cs.LG | not_new_dataset | 0.992243 | 2306.15392 |
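The two synthetic conditions mentioned above, maximum entropy and high redundancy, are easy to quantify. A minimal, illustrative sketch of the entropy side (this is plain label entropy, not the paper's autoencoder-plus-decision-tree pipeline):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy of the label distribution, in bits. A dataset whose
    labels are uniformly distributed over k classes attains the maximum
    of log2(k) bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

uniform = ["a", "b", "c", "d"]   # maximal entropy for 4 classes: 2 bits
skewed  = ["a", "a", "a", "b"]   # low entropy: heavily imbalanced
```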
Uncovering Political Hate Speech During Indian Election Campaign: A New Low-Resource Dataset and Baselines | The detection of hate speech in political discourse is a critical issue, and
this becomes even more challenging in low-resource languages. To address this
issue, we introduce a new dataset named IEHate, which contains 11,457 manually
annotated Hindi tweets related to the Indian Assembly Election Campaign from
November 1, 2021, to March 9, 2022. We performed a detailed analysis of the
dataset, focusing on the prevalence of hate speech in political communication
and the different forms of hateful language used. Additionally, we benchmark
the dataset using a range of machine learning, deep learning, and
transformer-based algorithms. Our experiments reveal that the performance of
these models can be further improved, highlighting the need for more advanced
techniques for hate speech detection in low-resource languages. In particular,
the higher scores achieved by human evaluation relative to the algorithms emphasize the
importance of utilizing both human and automated approaches for effective hate
speech moderation. Our IEHate dataset can serve as a valuable resource for
researchers and practitioners working on developing and evaluating hate speech
detection techniques in low-resource languages. Overall, our work underscores
the importance of addressing the challenges of identifying and mitigating hate
speech in political discourse, particularly in the context of low-resource
languages. The dataset and resources for this work are made available at
https://github.com/Farhan-jafri/Indian-Election. | http://arxiv.org/abs/2306.14764v2 | cs.CL | new_dataset | 0.994459 | 2306.14764 |
SuperBench: A Super-Resolution Benchmark Dataset for Scientific Machine Learning | Super-Resolution (SR) techniques aim to enhance data resolution, enabling the
retrieval of finer details, and improving the overall quality and fidelity of
the data representation. There is growing interest in applying SR methods to
complex spatiotemporal systems within the Scientific Machine Learning (SciML)
community, with the hope of accelerating numerical simulations and/or improving
forecasts in weather, climate, and related areas. However, the lack of
standardized benchmark datasets for comparing and validating SR methods hinders
progress and adoption in SciML. To address this, we introduce SuperBench, the
first benchmark dataset featuring high-resolution datasets (up to
$2048\times2048$ dimensions), including data from fluid flows, cosmology, and
weather. Here, we focus on validating spatial SR performance from data-centric
and physics-preserved perspectives, as well as assessing robustness to data
degradation tasks. While deep learning-based SR methods (developed in the
computer vision community) excel on certain tasks, despite relatively limited
prior physics information, we identify limitations of these methods in
accurately capturing intricate fine-scale features and preserving fundamental
physical properties and constraints in scientific data. These shortcomings
highlight the importance and subtlety of incorporating domain knowledge into ML
models. We anticipate that SuperBench will significantly advance SR methods for
scientific tasks. | http://arxiv.org/abs/2306.14070v1 | cs.CV | new_dataset | 0.994497 | 2306.14070 |
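When comparing SR methods on a benchmark like this, peak signal-to-noise ratio (PSNR) is the usual data-centric fidelity metric. A self-contained sketch on flattened pixel lists (illustrative; SuperBench itself also evaluates physics-preserving criteria that PSNR cannot capture):

```python
import math

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio between a high-resolution reference and
    a super-resolved reconstruction; higher is better, inf for an exact match."""
    n = len(reference)
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / n
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)

hr = [0.0, 0.5, 1.0, 0.5]        # toy high-resolution ground truth
sr = [0.0, 0.4, 0.9, 0.5]        # toy imperfect reconstruction
```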
Unleashing Realistic Air Quality Forecasting: Introducing the Ready-to-Use PurpleAirSF Dataset | Air quality forecasting has garnered significant attention recently, with
data-driven models taking center stage due to advancements in machine learning
and deep learning models. However, researchers face challenges with complex
data acquisition and the lack of open-sourced datasets, hindering efficient
model validation. This paper introduces PurpleAirSF, a comprehensive and easily
accessible dataset collected from the PurpleAir network. With its high temporal
resolution, various air quality measures, and diverse geographical coverage,
this dataset serves as a useful tool for researchers aiming to develop novel
forecasting models, study air pollution patterns, and investigate their impacts
on health and the environment. We present a detailed account of the data
collection and processing methods employed to build PurpleAirSF. Furthermore,
we conduct preliminary experiments using both classic and modern
spatio-temporal forecasting models, thereby establishing a benchmark for future
air quality forecasting tasks. | http://arxiv.org/abs/2306.13948v1 | cs.LG | new_dataset | 0.994467 | 2306.13948 |
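Any forecasting benchmark on such data needs a naive baseline to beat; persistence (predict the last observed value for every horizon step) is the standard one. An illustrative sketch with made-up readings:

```python
def persistence_forecast(series, horizon):
    """Naive baseline: repeat the last observed value for every future step."""
    return [series[-1]] * horizon

def mae(forecast, actual):
    """Mean absolute error between a forecast and the realized values."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

history = [12.0, 14.0, 13.0, 15.0]   # e.g. hourly PM2.5 readings
future  = [16.0, 14.0]               # held-out ground truth
pred = persistence_forecast(history, 2)
```

A spatio-temporal model evaluated on the dataset is only interesting if its error beats this baseline's MAE.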
Data Coverage for Detecting Representation Bias in Image Datasets: A Crowdsourcing Approach | Existing machine learning models have repeatedly been shown to perform poorly for
minority groups, mainly due to biases in data. In particular,
datasets, especially social data, are often not representative of minorities.
In this paper, we consider the problem of representation bias identification on
image datasets without explicit attribute values. Using the notion of data
coverage for detecting a lack of representation, we develop multiple
crowdsourcing approaches. Our core approach, at a high level, is a divide and
conquer algorithm that applies a search space pruning strategy to efficiently
identify if a dataset misses proper coverage for a given group. We provide a
different theoretical analysis of our algorithm, including a tight upper bound
on its performance which guarantees its near-optimality. Using this algorithm
as the core, we propose multiple heuristics to reduce the coverage detection
cost across different cases with multiple intersectional/non-intersectional
groups. We demonstrate how the pre-trained predictors are not reliable and
hence not sufficient for detecting representation bias in the data. Finally, we
adjust our core algorithm to utilize existing models for predicting image
group(s) to minimize the coverage identification cost. We conduct extensive
experiments, including live experiments on Amazon Mechanical Turk to validate
our problem and evaluate our algorithms' performance. | http://arxiv.org/abs/2306.13868v1 | cs.DB | not_new_dataset | 0.992141 | 2306.13868 |
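When attribute values are available (unlike the image setting this paper targets, where they must be crowdsourced), the coverage notion reduces to counting: a group lacks coverage if it has fewer than a threshold number of examples. The sketch below enumerates intersectional groups over the observed attribute domains; the helper name and toy records are illustrative.

```python
from collections import Counter
from itertools import product

def uncovered_groups(records, attributes, threshold):
    """Return intersectional groups (over the observed attribute domains)
    with fewer than `threshold` examples -- a simple coverage check."""
    counts = Counter(tuple(r[a] for a in attributes) for r in records)
    domains = [sorted({r[a] for r in records}) for a in attributes]
    return sorted(g for g in product(*domains) if counts[g] < threshold)

data = [
    {"gender": "f", "age": "young"},
    {"gender": "f", "age": "old"},
    {"gender": "m", "age": "young"},
]
gaps = uncovered_groups(data, ["gender", "age"], threshold=1)
```

The hard part the paper tackles is doing this check efficiently when the attribute values are not in the data and each label costs a crowdsourcing query.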
DISCO-10M: A Large-Scale Music Dataset | Music datasets play a crucial role in advancing research in machine learning
for music. However, existing music datasets suffer from limited size,
accessibility, and lack of audio resources. To address these shortcomings, we
present DISCO-10M, a novel and extensive music dataset that surpasses the
largest previously available music dataset by an order of magnitude. To ensure
high-quality data, we implement a multi-stage filtering process. This process
incorporates similarities based on textual descriptions and audio embeddings.
Moreover, we provide precomputed CLAP embeddings alongside DISCO-10M,
facilitating direct application on various downstream tasks. These embeddings
enable efficient exploration of machine learning applications on the provided
data. With DISCO-10M, we aim to democratize and facilitate new research to help
advance the development of novel machine learning models for music. | http://arxiv.org/abs/2306.13512v1 | cs.SD | new_dataset | 0.994343 | 2306.13512 |
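The precomputed CLAP embeddings shipped with the dataset are meant for exactly this kind of downstream use: music similarity search reduces to a cosine nearest-neighbor lookup. A minimal sketch with made-up track names and embeddings (real CLAP vectors are much higher-dimensional):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_track(query_emb, catalog):
    """Return the track id whose precomputed embedding is most similar
    to the query embedding."""
    return max(catalog, key=lambda tid: cosine(query_emb, catalog[tid]))

catalog = {
    "track_a": [1.0, 0.0, 0.0],
    "track_b": [0.0, 1.0, 0.0],
}
```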