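The table below lists arXiv abstracts together with a classifier's prediction of whether each paper introduces a new dataset, the associated probability, and the arXiv identifier. As a minimal usage sketch, assuming the rows have been exported to a local CSV file (the filename `arxiv_dataset_predictions.csv` is an assumption, not something provided by this page), the predicted new-dataset entries can be filtered and ranked like this:

```python
import pandas as pd

# Hypothetical local export of the table below; column names follow its header.
# Keeping arxiv_id as a string avoids it being parsed as a float.
df = pd.read_csv("arxiv_dataset_predictions.csv", dtype={"arxiv_id": str})

# Select rows predicted to introduce a new dataset and rank them by the
# classifier's probability.
new_datasets = (
    df[df["prediction"] == "new_dataset"]
    .sort_values("probability", ascending=False)
)

print(new_datasets[["title", "arxiv_id", "probability"]].head())
```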
title (string) | abstract (string) | url (string) | category (string) | prediction (string) | probability (float64) | arxiv_id (string) |
---|---|---|---|---|---|---|
LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading | This paper presents a sensory fusion neuromorphic dataset collected with
precise temporal synchronization using a set of Address-Event-Representation
sensors and tools. The target application is the lip reading of several
keywords for different machine learning applications, such as digits, robotic
commands, and auxiliary rich phonetic short words. The dataset is enlarged with
a spiking version of an audio-visual lip reading dataset collected with
frame-based cameras. LIPSFUS is publicly available and it has been validated
with a deep learning architecture for audio and visual classification. It is
intended for sensory fusion architectures based on both artificial and spiking
neural network algorithms. | http://arxiv.org/abs/2304.01080v1 | cs.SD | new_dataset | 0.994471 | 2304.01080 |
Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition | Demographic biases in source datasets have been shown as one of the causes of
unfairness and discrimination in the predictions of Machine Learning models.
One of the most prominent types of demographic bias is the statistical imbalance
in the representation of demographic groups in the datasets. In this paper, we
study the measurement of these biases by reviewing the existing metrics,
including those that can be borrowed from other disciplines. We develop a
taxonomy for the classification of these metrics, providing a practical guide
for the selection of appropriate metrics. To illustrate the utility of our
framework, and to further understand the practical characteristics of the
metrics, we conduct a case study of 20 datasets used in Facial Emotion
Recognition (FER), analyzing the biases present in them. Our experimental
results show that many metrics are redundant and that a reduced subset of
metrics may be sufficient to measure the amount of demographic bias. The paper
provides valuable insights for researchers in AI and related fields to mitigate
dataset bias and improve the fairness and accuracy of AI models. The code is
available at https://github.com/irisdominguez/dataset_bias_metrics. | http://arxiv.org/abs/2303.15889v1 | cs.CV | not_new_dataset | 0.992121 | 2303.15889 |
Make the Most Out of Your Net: Alternating Between Canonical and Hard Datasets for Improved Image Demosaicing | Image demosaicing is an important step in the image processing pipeline for
digital cameras, and it is one of the many tasks within the field of image
restoration. A well-known characteristic of natural images is that most patches
are smooth, while high-content patches like textures or repetitive patterns are
much rarer, which results in a long-tailed distribution. This distribution can
create an inductive bias when training machine learning algorithms for image
restoration tasks and for image demosaicing in particular. There have been many
different approaches to address this challenge, such as utilizing specific
losses or designing special network architectures. What makes our work unique is
that it tackles the problem from a training protocol perspective. Our
proposed training regime consists of two key steps. The first step is a
data-mining stage where sub-categories are created and then refined through an
elimination process to only retain the most helpful sub-categories. The second
step is a cyclic training process where the neural network is trained on both
the mined sub-categories and the original dataset. We have conducted various
experiments to demonstrate the effectiveness of our training method for the
image demosaicing task. Our results show that this method outperforms standard
training across a range of architecture sizes and types, including CNNs and
Transformers. Moreover, we are able to achieve state-of-the-art results with a
significantly smaller neural network, compared to previous state-of-the-art
methods. | http://arxiv.org/abs/2303.15792v1 | eess.IV | not_new_dataset | 0.992105 | 2303.15792 |
Assorted, Archetypal and Annotated Two Million (3A2M) Cooking Recipes Dataset based on Active Learning | Cooking recipes allow individuals to exchange culinary ideas and provide food
preparation instructions. Due to a lack of adequate labeled data, categorizing
raw recipes found online to the appropriate food genres is a challenging task
in this domain. Utilizing the knowledge of domain experts to categorize recipes
could be a solution. In this study, we present a novel dataset of two million
culinary recipes labeled in respective categories leveraging the knowledge of
food experts and an active learning technique. To construct the dataset, we
collect the recipes from the RecipeNLG dataset. Then, we employ three human
experts whose trustworthiness score is higher than 86.667% to categorize 300K
recipes by their Named Entity Recognition (NER) and assign each to one of the nine
categories: bakery, drinks, non-veg, vegetables, fast food, cereals, meals,
sides and fusion. Finally, we categorize the remaining 1900K recipes using
Active Learning method with a blend of Query-by-Committee and Human In The Loop
(HITL) approaches. There are more than two million recipes in our dataset, each
of which is categorized and has a confidence score linked with it. For the 9
genres, the Fleiss Kappa score of this massive dataset is roughly 0.56026. We
believe that the research community can use this dataset to perform various
machine learning tasks such as recipe genre classification, recipe generation
of a specific genre, new recipe creation, etc. The dataset can also be used to
train and evaluate the performance of various NLP tasks such as named entity
recognition, part-of-speech tagging, semantic role labeling, and so on. The
dataset will be available upon publication: https://tinyurl.com/3zu4778y. | http://arxiv.org/abs/2303.16778v1 | cs.CL | new_dataset | 0.994472 | 2303.16778 |
An investigation of licensing of datasets for machine learning based on the GQM model | Dataset licensing is currently an issue in the development of machine
learning systems, which most often rely on publicly available datasets. However,
since the images in publicly available datasets are mainly obtained from the
Internet, some images are not available for commercial use. Furthermore,
developers of machine learning systems often do not consider the license of a
dataset when training machine learning models with it. In summary, the licensing
of datasets for machine learning systems remains incomplete in all aspects at
this stage.
Our investigation of two collection datasets revealed that most of the
current datasets lacked licenses, and the lack of licenses made it impossible
to determine the commercial availability of the datasets. Therefore, we decided
to take a more scientific and systematic approach to investigate the licensing
of datasets and the licensing of machine learning systems that use the dataset
to make it easier and more compliant for future developers of machine learning
systems. | http://arxiv.org/abs/2303.13735v1 | cs.SE | not_new_dataset | 0.992186 | 2303.13735 |
Enriching Neural Network Training Dataset to Improve Worst-Case Performance Guarantees | Machine learning algorithms, especially Neural Networks (NNs), are a valuable
tool used to approximate non-linear relationships, like the AC-Optimal Power
Flow (AC-OPF), with considerable accuracy -- and achieving a speedup of several
orders of magnitude when deployed for use. Often in power systems literature,
the NNs are trained with a fixed dataset generated prior to the training
process. In this paper, we show that adapting the NN training dataset during
training can improve the NN performance and substantially reduce its worst-case
violations. This paper proposes an algorithm that identifies and enriches the
training dataset with critical datapoints that reduce the worst-case violations
and deliver a neural network with improved worst-case performance guarantees.
We demonstrate the performance of our algorithm in four test power systems,
ranging from 39-buses to 162-buses. | http://arxiv.org/abs/2303.13228v1 | cs.LG | not_new_dataset | 0.99219 | 2303.13228 |
Attribute-preserving Face Dataset Anonymization via Latent Code Optimization | This work addresses the problem of anonymizing the identity of faces in a
dataset of images, such that the privacy of those depicted is not violated,
while at the same time the dataset remains useful for downstream tasks such as
training machine learning models. To the best of our knowledge, we are the
first to explicitly address this issue and deal with two major drawbacks of the
existing state-of-the-art approaches, namely that they (i) require the costly
training of additional, purpose-trained neural networks, and/or (ii) fail to
retain the facial attributes of the original images in the anonymized
counterparts, the preservation of which is of paramount importance for their
use in downstream tasks. We accordingly present a task-agnostic anonymization
procedure that directly optimizes the images' latent representation in the
latent space of a pre-trained GAN. By optimizing the latent codes directly, we
ensure that the identity is a desired distance away from the original
(with an identity obfuscation loss), whilst preserving the facial attributes
(using a novel feature-matching loss in FaRL's deep feature space). We
demonstrate through a series of both qualitative and quantitative experiments
that our method is capable of anonymizing the identity of the images whilst --
crucially -- better-preserving the facial attributes. We make the code and the
pre-trained models publicly available at: https://github.com/chi0tzp/FALCO. | http://arxiv.org/abs/2303.11296v1 | cs.CV | not_new_dataset | 0.992235 | 2303.11296 |
Differentially Private Algorithms for Synthetic Power System Datasets | While power systems research relies on the availability of real-world network
datasets, data owners (e.g., system operators) are hesitant to share data due
to security and privacy risks. To control these risks, we develop
privacy-preserving algorithms for the synthetic generation of optimization and
machine learning datasets. Taking a real-world dataset as input, the algorithms
output its noisy, synthetic version, which preserves the accuracy of the real
data on a specific downstream model or even a large population of those. We
control the privacy loss using Laplace and Exponential mechanisms of
differential privacy and preserve data accuracy using a post-processing convex
optimization. We apply the algorithms to generate synthetic network parameters
and wind power data. | http://arxiv.org/abs/2303.11079v1 | cs.CR | not_new_dataset | 0.992078 | 2303.11079 |
Right the docs: Characterising voice dataset documentation practices used in machine learning | Voice-enabled technology is quickly becoming ubiquitous, and is constituted
from machine learning (ML)-enabled components such as speech recognition and
voice activity detection. However, these systems don't yet work well for
everyone. They exhibit bias - the systematic and unfair discrimination against
individuals or cohorts of individuals in favour of others (Friedman &
Nissenbaum, 1996) - across axes such as age, gender and accent.
ML is reliant on large datasets for training. Dataset documentation is
designed to give ML Practitioners (MLPs) a better understanding of a dataset's
characteristics. However, there is a lack of empirical research on voice
dataset documentation specifically. Additionally, while MLPs are frequent
participants in fairness research, little work focuses on those who work with
voice data. Our work makes an empirical contribution to this gap.
Here, we combine two methods to form an exploratory study. First, we
undertake 13 semi-structured interviews, exploring multiple perspectives of
voice dataset documentation practice. Using open and axial coding methods, we
explore MLPs' practices through the lenses of roles and tradeoffs. Drawing from
this work, we then purposively sample voice dataset documents (VDDs) for 9
voice datasets. Our findings then triangulate these two methods, using the
lenses of MLP roles and trade-offs. We find that current VDD practices are
inchoate, inadequate and incommensurate. The characteristics of voice datasets
are codified in fragmented, disjoint ways that often do not meet the needs of
MLPs. Moreover, they cannot be readily compared, presenting a barrier to
practitioners' bias reduction efforts.
We then discuss the implications of these findings for bias practices in
voice data and speech technologies. We conclude by setting out a program of
future work to address these findings -- that is, how we may "right the docs". | http://arxiv.org/abs/2303.10721v1 | cs.HC | not_new_dataset | 0.992146 | 2303.10721 |
ShabbyPages: A Reproducible Document Denoising and Binarization Dataset | Document denoising and binarization are fundamental problems in the document
processing space, but current datasets are often too small and lack sufficient
complexity to effectively train and benchmark modern data-driven machine
learning models. To fill this gap, we introduce ShabbyPages, a new document
image dataset designed for training and benchmarking document denoisers and
binarizers. ShabbyPages contains over 6,000 clean "born digital" images with
synthetically-noised counterparts ("shabby pages") that were augmented using
the Augraphy document augmentation tool to appear as if they have been printed
and faxed, photocopied, or otherwise altered through physical processes. In
this paper, we discuss the creation process of ShabbyPages and demonstrate the
utility of ShabbyPages by training convolutional denoisers which remove real
noise features with a high degree of human-perceptible fidelity, establishing
baseline performance for a new ShabbyPages benchmark. | http://arxiv.org/abs/2303.09339v2 | cs.CV | new_dataset | 0.994527 | 2303.09339 |
PTMTorrent: A Dataset for Mining Open-source Pre-trained Model Packages | Due to the cost of developing and training deep learning models from scratch,
machine learning engineers have begun to reuse pre-trained models (PTMs) and
fine-tune them for downstream tasks. PTM registries known as "model hubs"
support engineers in distributing and reusing deep learning models. PTM
packages include pre-trained weights, documentation, model architectures,
datasets, and metadata. Mining the information in PTM packages will enable the
discovery of engineering phenomena and tools to support software engineers.
However, accessing this information is difficult - there are many PTM
registries, and both the registries and the individual packages may have rate
limiting for accessing the data. We present an open-source dataset, PTMTorrent,
to facilitate the evaluation and understanding of PTM packages. This paper
describes the creation, structure, usage, and limitations of the dataset. The
dataset includes a snapshot of 5 model hubs and a total of 15,913 PTM packages.
These packages are represented in a uniform data schema for cross-hub mining.
We describe prior uses of this data and suggest research opportunities for
mining using our dataset. The PTMTorrent dataset (v1) is available at:
https://app.globus.org/file-manager?origin_id=55e17a6e-9d8f-11ed-a2a2-8383522b48d9&origin_path=%2F~%2F.
Our dataset generation tools are available on GitHub:
https://doi.org/10.5281/zenodo.7570357. | http://arxiv.org/abs/2303.08934v1 | cs.SE | new_dataset | 0.994452 | 2303.08934 |
DACOS-A Manually Annotated Dataset of Code Smells | Researchers apply machine-learning techniques for code smell detection to
counter the subjectivity of many code smells. Such approaches need a large,
manually annotated dataset for training and benchmarking. Existing literature
offers a few datasets; however, they are small in size and, more importantly,
do not focus on the subjective code snippets. In this paper, we present DACOS,
a manually annotated dataset containing 10,267 annotations for 5,192 code
snippets. The dataset targets three kinds of code smells at different
granularity: multifaceted abstraction, complex method, and long parameter list.
The dataset is created in two phases. The first phase helps us identify the
code snippets that are potentially subjective by determining the thresholds of
metrics used to detect a smell. The second phase collects annotations for
potentially subjective snippets. We also offer an extended dataset DACOSX that
includes definitely benign and definitely smelly snippets by using the
thresholds identified in the first phase. We have developed TagMan, a web
application to help annotators view and mark the snippets one-by-one and record
the provided annotations. We make the datasets and the web application
accessible publicly. This dataset will help researchers working on smell
detection techniques to build relevant and context-aware machine-learning
models. | http://arxiv.org/abs/2303.08729v1 | cs.SE | new_dataset | 0.994494 | 2303.08729 |
Dataset Management Platform for Machine Learning | The quality of the data in a dataset can have a substantial impact on the
performance of a machine learning model that is trained and/or evaluated using
the dataset. Effective dataset management, including tasks such as data
cleanup, versioning, access control, dataset transformation, automation,
integrity and security, etc., can help improve the efficiency and speed of the
machine learning process. Currently, engineers spend a substantial amount of
manual effort and time to manage dataset versions or to prepare datasets for
machine learning tasks. This disclosure describes a platform to manage and use
datasets effectively. The techniques integrate dataset management and dataset
transformation mechanisms. A storage engine is described that acts as a source
of truth for all data and handles versioning, access control etc. The dataset
transformation mechanism is a key part to generate a dataset (snapshot) to
serve different purposes. The described techniques can support different
workflows, pipelines, or data orchestration needs, e.g., for training and/or
evaluation of machine learning models. | http://arxiv.org/abs/2303.08301v1 | cs.DB | not_new_dataset | 0.912578 | 2303.08301 |
ForDigitStress: A multi-modal stress dataset employing a digital job interview scenario | We present a multi-modal stress dataset that uses digital job interviews to
induce stress. The dataset provides multi-modal data of 40 participants
including audio, video (motion capturing, facial recognition, eye tracking) as
well as physiological information (photoplethysmography, electrodermal
activity). In addition to that, the dataset contains time-continuous
annotations for stress and occurred emotions (e.g. shame, anger, anxiety,
surprise). In order to establish a baseline, five different machine learning
classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest,
Long-Short-Term Memory Network) have been trained and evaluated on the proposed
dataset for a binary stress classification task. The best-performing classifier
achieved an accuracy of 88.3% and an F1-score of 87.5%. | http://arxiv.org/abs/2303.07742v1 | cs.LG | new_dataset | 0.994413 | 2303.07742 |
NICHE: A Curated Dataset of Engineered Machine Learning Projects in Python | Machine learning (ML) has gained much attention and been incorporated into
our daily lives. While there are numerous publicly available ML projects on
open source platforms such as GitHub, there have been limited attempts in
filtering those projects to curate ML projects of high quality. The limited
availability of such a high-quality dataset poses an obstacle in understanding
ML projects. To help clear this obstacle, we present NICHE, a manually labelled
dataset consisting of 572 ML projects. Based on evidence of good software
engineering practices, we label 441 of these projects as engineered and 131 as
non-engineered. This dataset can help researchers understand the practices that
are followed in high-quality ML projects. It can also be used as a benchmark
for classifiers designed to identify engineered ML projects. | http://arxiv.org/abs/2303.06286v1 | cs.SE | new_dataset | 0.994428 | 2303.06286 |
Position Paper on Dataset Engineering to Accelerate Science | Data is a critical element in any discovery process. In the last decades, we
observed exponential growth in the volume of available data and the technology
to manipulate it. However, data is only practical when one can structure it for
a well-defined task. For instance, we need a corpus of text broken into
sentences to train a natural language machine-learning model. In this work, we
will use the token "dataset" to designate a structured set of data built
to perform a well-defined task. Moreover, the dataset will be used in most
cases as a blueprint of an entity that at any moment can be stored as a table.
Specifically, in science, each area has unique forms to organize, gather and
handle its datasets. We believe that datasets must be a first-class entity in
any knowledge-intensive process, and all workflows should have exceptional
attention to datasets' lifecycle, from their gathering to uses and evolution.
We advocate that science and engineering discovery processes are extreme
instances of the need for such organization of datasets, calling for new
approaches and tooling. Furthermore, these requirements are more evident when
the discovery workflow uses artificial intelligence methods to empower the
subject-matter expert. In this work, we discuss an approach to bringing
datasets as a critical entity in the discovery process in science. We
illustrate some concepts using material discovery as a use case. We chose this
domain because it leverages many significant problems that can be generalized
to other science fields. | http://arxiv.org/abs/2303.05545v1 | cs.LG | not_new_dataset | 0.991827 | 2303.05545 |
StyleDiff: Attribute Comparison Between Unlabeled Datasets in Latent Disentangled Space | One major challenge in machine learning applications is coping with
mismatches between the datasets used in the development and those obtained in
real-world applications. These mismatches may lead to inaccurate predictions
and errors, resulting in poor product quality and unreliable systems. In this
study, we propose StyleDiff to inform developers of the differences between the
two datasets for the steady development of machine learning systems. Using
disentangled image spaces obtained from recently proposed generative models,
StyleDiff compares the two datasets by focusing on attributes in the images and
provides an easy-to-understand analysis of the differences between the
datasets. The proposed StyleDiff performs in $O (d N\log N)$, where $N$ is the
size of the datasets and $d$ is the number of attributes, enabling the
application to large datasets. We demonstrate that StyleDiff accurately detects
differences between datasets and presents them in an understandable format
using, for example, driving scenes datasets. | http://arxiv.org/abs/2303.05102v2 | stat.ML | not_new_dataset | 0.991933 | 2303.05102 |
Structural Similarity: When to Use Deep Generative Models on Imbalanced Image Dataset Augmentation | Improving the performance on an imbalanced training set is one of the main
challenges in machine learning today. One way to augment and thus re-balance
the image dataset is through existing deep generative models, like
class-conditional Generative Adversarial Networks (cGAN) or Diffusion Models by
synthesizing images on each of the tail-class. Our experiments on imbalanced
image dataset classification show that, the validation accuracy improvement
with such re-balancing method is related to the image similarity between
different classes. Thus, to quantify this image dataset class similarity, we
propose a measurement called Super-Sub Class Structural Similarity
(SSIM-supSubCls) based on Structural Similarity (SSIM). A deep generative model
data augmentation classification (GM-augCls) pipeline is also provided to
verify this metric correlates with the accuracy enhancement. We further
quantify the relationship between them, discovering that the accuracy
improvement decays exponentially with respect to SSIM-supSubCls values. | http://arxiv.org/abs/2303.04854v1 | eess.IV | not_new_dataset | 0.991991 | 2303.04854 |
The Bystander Affect Detection (BAD) Dataset for Failure Detection in HRI | For a robot to repair its own error, it must first know it has made a
mistake. One way that people detect errors is from the implicit reactions from
bystanders -- their confusion, smirks, or giggles clue us in that something
unexpected occurred. To enable robots to detect and act on bystander responses
to task failures, we developed a novel method to elicit bystander responses to
human and robot errors. Using 46 different stimulus videos featuring a variety
of human and machine task failures, we collected a total of 2452 webcam videos
of human reactions from 54 participants. To test the viability of the collected
data, we used the bystander reaction dataset as input to a deep-learning model,
BADNet, to predict failure occurrence. We tested different data labeling
methods and learned how they affect model performance, achieving precisions
above 90%. We discuss strategies to model bystander reactions and predict
failure and how this approach can be used in real-world robotic deployments to
detect errors and improve robot performance. As part of this work, we also
contribute with the "Bystander Affect Detection" (BAD) dataset of bystander
reactions, supporting the development of better prediction models. | http://arxiv.org/abs/2303.04835v1 | cs.RO | new_dataset | 0.994401 | 2303.04835 |
Defectors: A Large, Diverse Python Dataset for Defect Prediction | Defect prediction has been a popular research topic where machine learning
(ML) and deep learning (DL) have found numerous applications. However, these
ML/DL-based defect prediction models are often limited by the quality and size
of their datasets. In this paper, we present Defectors, a large dataset for
just-in-time and line-level defect prediction. Defectors consists of $\approx$
213K source code files ($\approx$ 93K defective and $\approx$ 120K defect-free)
that span across 24 popular Python projects. These projects come from 18
different domains, including machine learning, automation, and
internet-of-things. Such a scale and diversity make Defectors a suitable
dataset for training ML/DL models, especially transformer models that require
large and diverse datasets. We also foresee several application areas of our
dataset including defect prediction and defect explanation.
Dataset link: https://doi.org/10.5281/zenodo.7708984 | http://arxiv.org/abs/2303.04738v4 | cs.SE | new_dataset | 0.99438 | 2303.04738 |
Robustness-preserving Lifelong Learning via Dataset Condensation | Lifelong learning (LL) aims to improve a predictive model as the data source
evolves continuously. Most work in this learning paradigm has focused on
resolving the problem of 'catastrophic forgetting,' which refers to a notorious
dilemma between improving model accuracy over new data and retaining accuracy
over previous data. Yet, it is also known that machine learning (ML) models can
be vulnerable in the sense that tiny, adversarial input perturbations can
deceive the models into producing erroneous predictions. This motivates the
research objective of this paper - specification of a new LL framework that can
salvage model robustness (against adversarial attacks) from catastrophic
forgetting. Specifically, we propose a new memory-replay LL strategy that
leverages modern bi-level optimization techniques to determine the 'coreset' of
the current data (i.e., a small amount of data to be memorized) for ease of
preserving adversarial robustness over time. We term the resulting LL framework
'Data-Efficient Robustness-Preserving LL' (DERPLL). The effectiveness of DERPLL
is evaluated for class-incremental image classification using ResNet-18 over
the CIFAR-10 dataset. Experimental results show that DERPLL outperforms the
conventional coreset-guided LL baseline and achieves a substantial improvement
in both standard accuracy and robust accuracy. | http://arxiv.org/abs/2303.04183v1 | cs.LG | not_new_dataset | 0.991536 | 2303.04183 |
Transfer learning on large datasets for the accurate prediction of material properties | Graph neural networks trained on large crystal structure databases are
extremely effective in replacing ab initio calculations in the discovery and
characterization of materials. However, crystal structure datasets comprising
millions of materials exist only for the Perdew-Burke-Ernzerhof (PBE)
functional. In this work, we investigate the effectiveness of transfer learning
to extend these models to other density functionals. We show that pre-training
significantly reduces the size of the dataset required to achieve chemical
accuracy and beyond. We also analyze in detail the relationship between the
transfer-learning performance and the size of the datasets used for the initial
training of the model and transfer learning. We confirm a linear dependence of
the error on the size of the datasets on a log-log scale, with a similar slope
for both training and the pre-training datasets. This shows that further
increasing the size of the pre-training dataset, i.e. performing additional
calculations with a low-cost functional, is also effective, through transfer
learning, in improving machine-learning predictions with the quality of a more
accurate, and possibly computationally more involved functional. Lastly, we
compare the efficacy of interproperty and intraproperty transfer learning. | http://arxiv.org/abs/2303.03000v1 | cond-mat.mtrl-sci | not_new_dataset | 0.992086 | 2303.03000 |
Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation | We present multiplexed gradient descent (MGD), a gradient descent framework
designed to easily train analog or digital neural networks in hardware. MGD
utilizes zero-order optimization techniques for online training of hardware
neural networks. We demonstrate its ability to train neural networks on modern
machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare
its performance to backpropagation. Assuming realistic timescales and hardware
parameters, our results indicate that these optimization techniques can train a
network on emerging hardware platforms orders of magnitude faster than the
wall-clock time of training via backpropagation on a standard GPU, even in the
presence of imperfect weight updates or device-to-device variations in the
hardware. We additionally describe how it can be applied to existing hardware
as part of chip-in-the-loop training, or integrated directly at the hardware
level. Crucially, the MGD framework is highly flexible, and its gradient
descent process can be optimized to compensate for specific hardware
limitations such as slow parameter-update speeds or limited input bandwidth. | http://arxiv.org/abs/2303.03986v1 | cs.LG | not_new_dataset | 0.992209 | 2303.03986 |
Integration of Feature Selection Techniques using a Sleep Quality Dataset for Comparing Regression Algorithms | This research aims to examine the usefulness of integrating various feature
selection methods with regression algorithms for sleep quality prediction. A
publicly accessible sleep quality dataset is used to analyze the effect of
different feature selection techniques on the performance of four regression
algorithms - Linear regression, Ridge regression, Lasso Regression and Random
Forest Regressor. The results are compared to determine the optimal combination
of feature selection techniques and regression algorithms. The conclusion of
the study enriches the current literature on using machine learning for sleep
quality prediction and has practical significance for personalizing sleep
recommendations for individuals. | http://arxiv.org/abs/2303.02467v1 | cs.LG | new_dataset | 0.994275 | 2303.02467 |
Extended Agriculture-Vision: An Extension of a Large Aerial Image Dataset for Agricultural Pattern Analysis | A key challenge for much of the machine learning work on remote sensing and
earth observation data is the difficulty in acquiring large amounts of
accurately labeled data. This is particularly true for semantic segmentation
tasks, which are much less common in the remote sensing domain because of the
incredible difficulty in collecting precise, accurate, pixel-level annotations
at scale. Recent efforts have addressed these challenges both through the
creation of supervised datasets as well as the application of self-supervised
methods. We continue these efforts on both fronts. First, we generate and
release an improved version of the Agriculture-Vision dataset (Chiu et al.,
2020b) to include raw, full-field imagery for greater experimental flexibility.
Second, we extend this dataset with the release of 3600 large, high-resolution
(10cm/pixel), full-field, red-green-blue and near-infrared images for
pre-training. Third, we incorporate the Pixel-to-Propagation Module Xie et al.
(2021b) originally built on the SimCLR framework into the framework of MoCo-V2
Chen et al. (2020b). Finally, we demonstrate the usefulness of this data by
benchmarking different contrastive learning approaches on both downstream
classification and semantic segmentation tasks. We explore both CNN and Swin
Transformer Liu et al. (2021a) architectures within different frameworks based
on MoCo-V2. Together, these approaches enable us to better detect key
agricultural patterns of interest across a field from aerial imagery so that
farmers may be alerted to problematic areas in a timely fashion to inform their
management decisions. Furthermore, the release of these datasets will support
numerous avenues of research for computer vision in remote sensing for
agriculture. | http://arxiv.org/abs/2303.02460v1 | cs.CV | new_dataset | 0.994423 | 2303.02460 |
Domain adaptation using optimal transport for invariant learning using histopathology datasets | Histopathology is critical for the diagnosis of many diseases, including
cancer. These protocols typically require pathologists to manually evaluate
slides under a microscope, which is time-consuming and subjective, leading to
interest in machine learning to automate analysis. However, computational
techniques are limited by batch effects, where technical factors like
differences in preparation protocol or scanners can alter the appearance of
slides, causing models trained on one institution to fail when generalizing to
others. Here, we propose a domain adaptation method that improves the
generalization of histopathological models to data from unseen institutions,
without the need for labels or retraining in these new settings. Our approach
introduces an optimal transport (OT) loss, that extends adversarial methods
that penalize models if images from different institutions can be distinguished
in their representation space. Unlike previous methods, which operate on single
samples, our loss accounts for distributional differences between batches of
images. We show that on the Camelyon17 dataset, while both methods can adapt to
global differences in color distribution, only our OT loss can reliably
classify a cancer phenotype unseen during training. Together, our results
suggest that OT improves generalization on rare but critical phenotypes that
may only make up a small fraction of the total tiles and variation in a slide. | http://arxiv.org/abs/2303.02241v1 | cs.CV | not_new_dataset | 0.991955 | 2303.02241 |
Dataset Creation Pipeline for Camera-Based Heart Rate Estimation | Heart rate is one of the most vital health metrics which can be utilized to
investigate and gain intuitions into various human physiological and
psychological information. Estimating heart rate without the constraints of
contact-based sensors thus presents itself as a very attractive field of
research as it enables well-being monitoring in a wider variety of scenarios.
Consequently, various techniques for camera-based heart rate estimation have
been developed ranging from classical image processing to convoluted deep
learning models and architectures. At the heart of such research efforts lies
health and visual data acquisition, cleaning, transformation, and annotation.
In this paper, we discuss how to prepare data for the task of developing or
testing an algorithm or machine learning model for heart rate estimation from
images of facial regions. The prepared data includes camera frames as well
as sensor readings from an electrocardiograph sensor. The proposed pipeline is
divided into four main steps, namely removal of faulty data, frame and
electrocardiograph timestamp de-jittering, signal denoising and filtering, and
frame annotation creation. Our main contributions are a novel technique of
eliminating jitter from health sensor and camera timestamps and a method to
accurately time align both visual frame and electrocardiogram sensor data which
is also applicable to other sensor types. | http://arxiv.org/abs/2303.01468v1 | cs.CV | new_dataset | 0.993352 | 2303.01468 |
Creating Synthetic Datasets for Collaborative Filtering Recommender Systems using Generative Adversarial Networks | Research and education in machine learning needs diverse, representative, and
open datasets that contain sufficient samples to handle the necessary training,
validation, and testing tasks. Currently, the Recommender Systems area includes
a large number of subfields in which accuracy and beyond accuracy quality
measures are continuously improved. To feed this research variety, it is
necessary and convenient to reinforce the existing datasets with synthetic
ones. This paper proposes a Generative Adversarial Network (GAN)-based method
to generate collaborative filtering datasets in a parameterized way, by
selecting their preferred number of users, items, samples, and stochastic
variability. This parameterization cannot be made using regular GANs. Our GAN
model is fed with dense, short, and continuous embedding representations of
items and users, instead of sparse, large, and discrete vectors, to enable
accurate and quick learning, compared to the traditional approach based on
large and sparse input vectors. The proposed architecture includes a DeepMF
model to extract the dense user and item embeddings, as well as a clustering
process to convert from the dense GAN generated samples to the discrete and
sparse ones, necessary to create each required synthetic dataset. The results
of three different source datasets show adequate distributions and expected
quality values and evolutions on the generated datasets compared to the source
ones. Synthetic datasets and source codes are available to researchers. | http://arxiv.org/abs/2303.01297v1 | cs.IR | not_new_dataset | 0.977927 | 2303.01297 |
Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance | Differentially private stochastic gradient descent privatizes model training
by injecting noise into each iteration, where the noise magnitude increases
with the number of model parameters. Recent works suggest that we can reduce
the noise by leveraging public data for private machine learning, by projecting
gradients onto a subspace prescribed by the public data. However, given a
choice of public datasets, it is not a priori clear which one may be most
appropriate for the private task. We give an algorithm for selecting a public
dataset by measuring a low-dimensional subspace distance between gradients of
the public and private examples. We provide theoretical analysis demonstrating
that the excess risk scales with this subspace distance. This distance is easy
to compute and robust to modifications in the setting. Empirical evaluation
shows that trained model accuracy is monotone in this distance. | http://arxiv.org/abs/2303.01256v1 | stat.ML | not_new_dataset | 0.992017 | 2303.01256 |
BioImageLoader: Easy Handling of Bioimage Datasets for Machine Learning | BioImageLoader (BIL) is a python library that handles bioimage datasets for
machine learning applications, easing simple workflows and enabling complex
ones. BIL attempts to wrap the numerous and varied bioimages datasets in
unified interfaces, to easily concatenate, perform image augmentation, and
batch-load them. By acting at a per experimental dataset level, it enables both
a high level of customization and a comparison across experiments. Here we
present the library and show some applications it enables, including retraining
published deep learning architectures and evaluating their versatility in a
leave-one-dataset-out fashion. | http://arxiv.org/abs/2303.02158v1 | q-bio.QM | not_new_dataset | 0.93891 | 2303.02158 |
Testing the performance of Multi-class IDS public dataset using Supervised Machine Learning Algorithms | Machine learning, statistical-based, and knowledge-based methods are often
used to implement an Anomaly-based Intrusion Detection System which is software
that helps in detecting malicious and undesired activities in the network
primarily through the Internet. Machine learning comprises Supervised,
Semi-Supervised, and Unsupervised Learning algorithms. Supervised machine
learning uses a labeled training dataset. This paper uses four supervised learning
algorithms - Random Forest, XGBoost, K-Nearest Neighbours, and Artificial Neural
Network - to test performance on the public dataset. Based on the prediction
accuracy rate, the results show that Random Forest performs best on the
multi-class Intrusion Detection System, followed by XGBoost and K-Nearest
Neighbours respectively. Otherwise, K-Nearest Neighbours was the best performer
when training time is taken as the metric. The paper concludes that Random Forest
is the best supervised machine learning algorithm for an Intrusion Detection
System | http://arxiv.org/abs/2302.14374v1 | cs.CR | not_new_dataset | 0.9921 | 2302.14374 |
Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets | Increasingly larger datasets have become a standard ingredient to advancing
the state of the art in NLP. However, data quality might have already become
the bottleneck to unlock further gains. Given the diversity and the sizes of
modern datasets, standard data filtering is not straight-forward to apply,
because of the multifacetedness of the harmful data and elusiveness of
filtering rules that would generalize across multiple tasks. We study the
fitness of task-agnostic self-influence scores of training examples for data
cleaning, analyze their efficacy in capturing naturally occurring outliers, and
investigate to what extent self-influence based data cleaning can improve
downstream performance in machine translation, question answering and text
classification, building up on recent approaches to self-influence calculation
and automated curriculum learning. | http://arxiv.org/abs/2302.13959v1 | cs.CL | not_new_dataset | 0.992032 | 2302.13959 |
Data Augmentation with GAN increases the Performance of Arrhythmia Classification for an Unbalanced Dataset | Due to the data shortage problem, which is one of the major problems in the
field of machine learning, the accuracy level of many applications remains well
below expectations. This shortage prevents researchers from producing new artificial
intelligence-based systems with the available data. This problem can be solved
by generating new synthetic data with augmentation methods. In this study, new
ECG signals are produced using MIT-BIH Arrhythmia Database by using Generative
Adversarial Neural Networks (GAN), which is a modern data augmentation method.
These generated data are used for training a machine learning system and real
ECG data for testing it. The obtained results show that this approach increases
the performance of the machine learning system. | http://arxiv.org/abs/2302.13855v1 | eess.SP | not_new_dataset | 0.9916 | 2302.13855 |
HUST bearing: a practical dataset for ball bearing fault diagnosis | In this work, we introduce a practical dataset named HUST bearing, that
provides a large set of vibration data on different ball bearings. This dataset
contains 90 raw vibration data of 6 types of defects (inner crack, outer crack,
ball crack, and their 2-combinations) on 5 types of bearing at 3 working
conditions with the sample rate of 51,200 samples per second. We established
the envelope analysis and order tracking analysis on the introduced dataset to
allow an initial evaluation of the data. A number of classical machine learning
classification methods are used to identify bearing faults of the dataset using
features in different domains. Typical advanced unsupervised transfer
learning algorithms are also applied to observe the transferability of knowledge
among parts of the dataset. The examined methods achieve accuracies of up to 100%
on the classification task and 60-80% on the
unsupervised transfer learning task. | http://arxiv.org/abs/2302.12533v2 | cs.LG | new_dataset | 0.994435 | 2302.12533 |
FedPDC:Federated Learning for Public Dataset Correction | As people pay more and more attention to privacy protection, Federated
Learning (FL), a promising distributed machine learning paradigm, is
receiving growing interest. However, due to the biased distribution of
data on devices in real life, federated learning has lower classification
accuracy than traditional machine learning in Non-IID scenarios. Although there
are many optimization algorithms, the local model aggregation in the parameter
server is still relatively traditional. In this paper, a new algorithm FedPDC
is proposed to optimize the aggregation mode of local models and the loss
function of local training by using the shared data sets in some industries. In
many benchmark experiments, FedPDC can effectively improve the accuracy of the
global model in the case of extremely unbalanced data distribution, while
ensuring the privacy of the client data. At the same time, the accuracy
improvement of FedPDC does not bring additional communication costs. | http://arxiv.org/abs/2302.12503v1 | cs.LG | not_new_dataset | 0.992014 | 2302.12503 |
VQE-generated Quantum Circuit Dataset for Machine Learning | Quantum machine learning has the potential to computationally outperform
classical machine learning, but it is not yet clear whether it will actually be
valuable for practical problems. While some artificial scenarios have shown
that certain quantum machine learning techniques may be advantageous compared
to their classical counterpart, it is unlikely that quantum machine learning
will outclass traditional methods on popular classical datasets such as MNIST.
In contrast, dealing with quantum data, such as quantum states or circuits, may
be the task where we can benefit from quantum methods. Therefore, it is
important to develop practically meaningful quantum datasets for which we
expect quantum methods to be superior. In this paper, we propose a machine
learning task that is likely to soon arise in the real world: clustering and
classification of quantum circuits. We provide a dataset of quantum circuits
optimized by the variational quantum eigensolver. We utilized six common types
of Hamiltonians in condensed matter physics, with a range of 4 to 16 qubits,
and applied ten different ansätze with varying depths (ranging from 3 to
32) to generate a quantum circuit dataset of six distinct classes, each
containing 300 samples. We show that this dataset can be easily learned using
quantum methods. In particular, we demonstrate a successful classification of
our dataset using real 4-qubit devices available through IBMQ. By providing a
setting and an elementary dataset where quantum machine learning is expected to
be beneficial, we hope to encourage and ease the advancement of the field. | http://arxiv.org/abs/2302.09751v2 | quant-ph | new_dataset | 0.994454 | 2302.09751 |
Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English | This study focuses on the generation of Persian named entity datasets through
the application of machine translation on English datasets. The generated
datasets were evaluated by experimenting with one monolingual and one
multilingual transformer model. Notably, the CoNLL 2003 dataset has achieved
the highest F1 score of 85.11%. In contrast, the WNUT 2017 dataset yielded the
lowest F1 score of 40.02%. The results of this study highlight the potential of
machine translation in creating high-quality named entity recognition datasets
for low-resource languages like Persian. The study compares the performance of
these generated datasets with English named entity recognition systems and
provides insights into the effectiveness of machine translation for this task.
Additionally, this approach could be used to augment data in low-resource
language or create noisy data to make named entity systems more robust and
improve them. | http://arxiv.org/abs/2302.09611v1 | cs.CL | not_new_dataset | 0.992009 | 2302.09611 |
HLSDataset: Open-Source Dataset for ML-Assisted FPGA Design using High Level Synthesis | Machine Learning (ML) has been widely adopted in design exploration using
high level synthesis (HLS) to give a better and faster performance, and
resource and power estimation at very early stages for FPGA-based design. To
perform prediction accurately, high-quality and large-volume datasets are
required for training ML models.This paper presents a dataset for ML-assisted
FPGA design using HLS, called HLSDataset. The dataset is generated from widely
used HLS C benchmarks including Polybench, Machsuite, CHStone and Rossetta. The
Verilog samples are generated with a variety of directives including loop
unroll, loop pipeline and array partition to make sure optimized and realistic
designs are covered. The total number of generated Verilog samples is nearly
9,000 per FPGA type. To demonstrate the effectiveness of our dataset, we
undertake case studies to perform power estimation and resource usage
estimation with ML models trained with our dataset. All the code and the dataset
are public at the GitHub repo. We believe that HLSDataset can save valuable time
for researchers by avoiding the tedious process of running tools, scripting and
parsing files to generate the dataset, and enable them to spend more time where
it counts, that is, in training ML models. | http://arxiv.org/abs/2302.10977v2 | cs.AR | new_dataset | 0.994432 | 2302.10977 |
jazznet: A Dataset of Fundamental Piano Patterns for Music Audio Machine Learning Research | This paper introduces the jazznet Dataset, a dataset of fundamental jazz
piano music patterns for developing machine learning (ML) algorithms in music
information retrieval (MIR). The dataset contains 162520 labeled piano
patterns, including chords, arpeggios, scales, and chord progressions with
their inversions, resulting in more than 26k hours of audio and a total size of
95GB. The paper explains the dataset's composition, creation, and generation,
and presents an open-source Pattern Generator using a method called
Distance-Based Pattern Structures (DBPS), which allows researchers to easily
generate new piano patterns simply by defining the distances between pitches
within the musical patterns. We demonstrate that the dataset can help
researchers benchmark new models for challenging MIR tasks, using a
convolutional recurrent neural network (CRNN) and a deep convolutional neural
network. The dataset and code are available via:
https://github.com/tosiron/jazznet. | http://arxiv.org/abs/2302.08632v1 | cs.SD | new_dataset | 0.994368 | 2302.08632 |
Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation | Distribution shift is a major source of failure for machine learning models.
However, evaluating model reliability under distribution shift can be
challenging, especially since it may be difficult to acquire counterfactual
examples that exhibit a specified shift. In this work, we introduce the notion
of a dataset interface: a framework that, given an input dataset and a
user-specified shift, returns instances from that input distribution that
exhibit the desired shift. We study a number of natural implementations for
such an interface, and find that they often introduce confounding shifts that
complicate model evaluation. Motivated by this, we propose a dataset interface
implementation that leverages Textual Inversion to tailor generation to the
input distribution. We then demonstrate how applying this dataset interface to
the ImageNet dataset enables studying model behavior across a diverse array of
distribution shifts, including variations in background, lighting, and
attributes of the objects. Code available at
https://github.com/MadryLab/dataset-interfaces. | http://arxiv.org/abs/2302.07865v2 | cs.LG | not_new_dataset | 0.992049 | 2302.07865 |
Balanced Audiovisual Dataset for Imbalance Analysis | The imbalance problem is widespread in the field of machine learning, which
also exists in multimodal learning areas caused by the intrinsic discrepancy
between modalities of samples. Recent works have attempted to solve the
modality imbalance problem from algorithm perspective, however, they do not
fully analyze the influence of modality bias in datasets. Concretely, existing
multimodal datasets are usually collected under specific tasks, where one
modality tends to perform better than other ones in most conditions. In this
work, to comprehensively explore the influence of modality bias, we first split
existing datasets into different subsets by estimating sample-wise modality
discrepancy. We surprisingly find that: the multimodal models with existing
imbalance algorithms consistently perform worse than the unimodal one on
specific subsets, in accordance with the modality bias. To further explore the
influence of modality bias and analyze the effectiveness of existing imbalance
algorithms, we build a balanced audiovisual dataset, with uniformly distributed
modality discrepancy over the whole dataset. We then conduct extensive
experiments to re-evaluate existing imbalance algorithms and draw some
interesting findings: existing algorithms only provide a compromise between
modalities and suffer from the large modality discrepancy of samples. We hope
that these findings could facilitate future research on the modality imbalance
problem. | http://arxiv.org/abs/2302.10912v2 | cs.LG | new_dataset | 0.994371 | 2302.10912 |
Two-step hyperparameter optimization method: Accelerating hyperparameter search by using a fraction of a training dataset | Hyperparameter optimization (HPO) is an important step in machine learning
(ML) model development, but common practices are archaic -- primarily relying
on manual or grid searches. This is partly because adopting advanced HPO
algorithms introduces added complexity to the workflow, leading to longer
computation times. This poses a notable challenge to ML applications, as
suboptimal hyperparameter selections curtail the potential of ML model
performance, ultimately obstructing the full exploitation of ML techniques. In
this article, we present a two-step HPO method as a strategic solution to
curbing computational demands and wait times, gleaned from practical
experiences in applied ML parameterization work. The initial phase involves a
preliminary evaluation of hyperparameters on a small subset of the training
dataset, followed by a re-evaluation of the top-performing candidate models
post-retraining with the entire training dataset. This two-step HPO method is
universally applicable across HPO search algorithms, and we argue it has
attractive efficiency gains.
As a case study, we present our recent application of the two-step HPO method
to the development of neural network emulators for aerosol activation. Although
our primary use case is a data-rich limit with many millions of samples, we
also find that using up to 0.0025% of the data (a few thousand samples) in the
initial step is sufficient to find optimal hyperparameter configurations from
much more extensive sampling, achieving up to 135-times speedup. The benefits
of this method materialize through an assessment of hyperparameters and model
performance, revealing the minimal model complexity required to achieve the
best performance. The assortment of top-performing models harvested from the
HPO process allows us to choose a high-performing model with a low inference
cost for efficient use in global climate models (GCMs). | http://arxiv.org/abs/2302.03845v2 | cs.LG | not_new_dataset | 0.992181 | 2302.03845 |
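A minimal sketch of the two-step idea described above, assuming a scikit-learn workflow: hyperparameter candidates are first scored by cross-validation on a small random subset, and only the top candidates are re-evaluated on the full training set. The model class, search space, and subset size are arbitrary placeholders rather than the authors' emulator setup.

```python
# A hypothetical two-step HPO sketch (not the authors' emulator code).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import ParameterSampler, cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=5000, n_features=10, noise=0.1, random_state=0)

# Step 1: score hyperparameter candidates cheaply on a small random subset.
rng = np.random.RandomState(0)
subset = rng.choice(len(X), size=500, replace=False)
space = {"hidden_layer_sizes": [(16,), (64,), (64, 64)], "alpha": [1e-5, 1e-3, 1e-1]}
candidates = list(ParameterSampler(space, n_iter=6, random_state=0))
subset_scores = [
    cross_val_score(MLPRegressor(max_iter=500, **p), X[subset], y[subset], cv=3).mean()
    for p in candidates
]

# Step 2: re-evaluate only the top candidates after retraining on the full set.
top = [candidates[i] for i in np.argsort(subset_scores)[-2:]]
full_scores = {str(p): cross_val_score(MLPRegressor(max_iter=500, **p), X, y, cv=3).mean()
               for p in top}
print(full_scores)
```

The same pattern drops into any search backend (random, Bayesian, or grid), because the first step only needs a cheap, comparable score per candidate.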
Linking Datasets on Organizations Using Half A Billion Open Collaborated Records | Scholars studying organizations often work with multiple datasets lacking
shared unique identifiers or covariates. In such situations, researchers may
turn to approximate string matching methods to combine datasets. String
matching, although useful, faces fundamental challenges. Even when two strings
appear similar to humans, fuzzy matching often does not work because it fails
to adapt to the informativeness of the character combinations presented. Worse,
many entities have multiple names that are dissimilar (e.g., "Fannie Mae" and
"Federal National Mortgage Association"), a case where string matching has
little hope of succeeding. This paper introduces data from a prominent
employment-related networking site (LinkedIn) as a tool to address these
problems. We propose interconnected approaches to leveraging the massive amount
of information from LinkedIn regarding organizational name-to-name links. The
first approach builds a machine learning model for predicting matches from
character strings, treating the trillions of user-contributed organizational
name pairs as a training corpus: this approach constructs a string matching
metric that explicitly maximizes match probabilities. A second approach
identifies relationships between organization names using network
representations of the LinkedIn data. A third approach combines the first and
second. We document substantial improvements over fuzzy matching in
applications, making all methods accessible in open-source software
("LinkOrgs"). | http://arxiv.org/abs/2302.02533v3 | cs.SI | not_new_dataset | 0.992035 | 2302.02533 |
A Machine Learning Approach to Long-Term Drought Prediction using Normalized Difference Indices Computed on a Spatiotemporal Dataset | Climate change and increases in drought conditions affect the lives of many
and are closely tied to global agricultural output and livestock production.
This research presents a novel approach utilizing machine learning frameworks
for drought prediction around water basins. Our method focuses on the
next-frame prediction of the Normalized Difference Drought Index (NDDI) by
leveraging the recently developed SEN2DWATER database. We propose and compare
two prediction methods for estimating NDDI values over a specific land area.
Our work makes possible proactive measures that can ensure adequate water
access for drought-affected communities and sustainable agriculture practices
by implementing a proof-of-concept of short and long-term drought prediction of
changes in water resources. | http://arxiv.org/abs/2302.02440v2 | eess.IV | not_new_dataset | 0.992081 | 2302.02440 |
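For context, the NDDI is conventionally built from two normalized difference indices; the sketch below uses the standard textbook definitions, which is an assumption, since the paper may use a variant and different band choices.

```python
# Standard normalized-difference definitions (assumed; the paper may differ).
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)

def nddi(nir, red, green):
    v, w = ndvi(nir, red), ndwi(green, nir)
    return (v - w) / (v + w + 1e-9)

# Toy reflectance patches standing in for Sentinel-2 bands.
rng = np.random.default_rng(1)
nir, red, green = (rng.uniform(0.05, 0.6, (64, 64)) for _ in range(3))
print(nddi(nir, red, green).shape)
```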
Predefined domain specific embeddings of food concepts and recipes: A case study on heterogeneous recipe datasets | Although recipe data are very easy to come by nowadays, it is really hard to
find a complete recipe dataset - with a list of ingredients, nutrient values
per ingredient, and per recipe, allergens, etc. Recipe datasets are usually
collected from social media websites where users post and publish recipes.
These recipes are usually written with little to no structure, using both standardized and
non-standardized units of measurement. We collect six different recipe
datasets, publicly available, in different formats, and some including data in
different languages. Bringing all of these datasets to the needed format for
applying a machine learning (ML) pipeline for nutrient prediction [1], [2],
includes data normalization using dictionary-based named entity recognition
(NER), rule-based NER, as well as conversions using external domain-specific
resources. From the list of ingredients, domain-specific embeddings are created
using the same embedding space for all recipes - one ingredient dataset is
generated. The result from this normalization process is two corpora - one with
predefined ingredient embeddings and one with predefined recipe embeddings. On
all six recipe datasets, the ML pipeline is evaluated. The results from this
use case also confirm that the embeddings merged using the domain heuristic
yield better results than the baselines. | http://arxiv.org/abs/2302.01005v1 | cs.CL | not_new_dataset | 0.974278 | 2302.01005 |
Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines | The degree of concentration, enthusiasm, optimism, and passion displayed by
individual(s) while interacting with a machine is referred to as `user
engagement'. Engagement comprises behavioral, cognitive, and affect-related
cues. To create engagement prediction systems that can work in real-world
conditions, it is essential to learn from rich, diverse datasets. To this
end, a large scale multi-faceted engagement in the wild dataset EngageNet is
proposed. 31 hours of data from 127 participants, recorded under different
illumination conditions, are included. Thorough experiments are performed
exploring the applicability of different features, action units, eye gaze, head
pose, and MARLIN. Data from user interactions (question-answer) are analyzed to
understand the relationship between effective learning and user engagement. To
further validate the rich nature of the dataset, evaluation is also performed
on the EngageWild dataset. The experiments show the usefulness of the proposed
dataset. The code, models, and dataset link are publicly available at
https://github.com/engagenet/engagenet_baselines. | http://arxiv.org/abs/2302.00431v2 | cs.CV | new_dataset | 0.994282 | 2302.00431 |
An Evaluation of Persian-English Machine Translation Datasets with Transformers | Nowadays, many researchers are focusing their attention on the subject of
machine translation (MT). However, Persian machine translation remains relatively
unexplored, despite the vast amount of research conducted on high-resource
languages such as English. Moreover, while a substantial amount of
research has been undertaken in statistical machine translation for some
datasets in Persian, there is currently no standard baseline for
transformer-based text2text models on each corpus. This study collected and
analysed the most popular and valuable parallel corpora, which were used for
Persian-English translation. Furthermore, we fine-tuned and evaluated two
state-of-the-art attention-based seq2seq models on each dataset separately (48
results). We hope this paper will assist researchers in comparing their
Persian-English and English-Persian machine translation results against a standard baseline. | http://arxiv.org/abs/2302.00321v1 | cs.CL | not_new_dataset | 0.992003 | 2302.00321
WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics | Modeling user interfaces (UIs) from visual information allows systems to make
inferences about the functionality and semantics needed to support use cases in
accessibility, app automation, and testing. Current datasets for training
machine learning models are limited in size due to the costly and
time-consuming process of manually collecting and annotating UIs. We crawled
the web to construct WebUI, a large dataset of 400,000 rendered web pages
associated with automatically extracted metadata. We analyze the composition of
WebUI and show that while automatically extracted data is noisy, most examples
meet basic criteria for visual UI modeling. We applied several strategies for
incorporating semantics found in web pages to increase the performance of
visual UI understanding models in the mobile domain, where less labeled data is
available: (i) element detection, (ii) screen classification and (iii) screen
similarity. | http://arxiv.org/abs/2301.13280v1 | cs.HC | new_dataset | 0.994506 | 2301.13280 |
Deepfake Detection Analyzing Hybrid Dataset Utilizing CNN and SVM | Social media is currently being used by many individuals online as a major
source of information. However, not all information shared online is true, even
photos and videos can be doctored. Deepfakes have emerged alongside recent
technological advances and allow nefarious online users to replace one face
with a computer-generated face of anyone they choose, including
important political and cultural figures. Deepfakes have become a tool for
spreading mass misinformation. There is now an immense need to create models
that are able to detect deepfakes and keep them from being spread as seemingly
real images or videos. In this paper, we propose a new deepfake detection
schema using two popular machine learning algorithms. | http://arxiv.org/abs/2302.10280v1 | cs.CV | new_dataset | 0.994015 | 2302.10280 |
Utilizing Domain Knowledge: Robust Machine Learning for Building Energy Prediction with Small, Inconsistent Datasets | The demand for a huge amount of data for machine learning (ML) applications
is currently a bottleneck in an empirically dominated field. We propose a
method to combine prior knowledge with data-driven methods to significantly
reduce their data dependency. In this study, component-based machine learning
(CBML), a knowledge-encoded data-driven method, is examined in the context
of energy-efficient building engineering. It encodes abstracted
building structural knowledge as semantic information in the model
organization. We design a case experiment to assess the efficacy of
knowledge-encoded ML with sparse data inputs (1% - 0.0125% sampling rate). The
results reveal three advantages over pure ML methods: 1.
Significant improvement in the robustness of ML to extremely small-size and
inconsistent datasets; 2. Efficient data utilization from different entities'
record collections; 3. The ability to accept incomplete data, with high
interpretability and reduced training time. All these features provide a
promising path to alleviating the deployment bottleneck of data-intensive
methods and contribute to efficient real-world data usage. Moreover, four
necessary prerequisites are summarized in this study that ensures the target
scenario benefits by combining prior knowledge and ML generalization. | http://arxiv.org/abs/2302.10784v2 | cs.LG | not_new_dataset | 0.992112 | 2302.10784 |
SEN2DWATER: A Novel Multispectral and Multitemporal Dataset and Deep Learning Benchmark for Water Resources Analysis | Climate change has caused disruption in certain weather patterns, leading to
extreme weather events like flooding and drought in different parts of the
world. In this paper, we propose machine learning methods for analyzing changes
in water resources over a time period of six years, by focusing on lakes and
rivers in Italy and Spain. Additionally, we release open-access code to enable
the expansion of the study to any region of the world.
We create a novel multispectral and multitemporal dataset, SEN2DWATER, which
is freely accessible on GitHub. We introduce suitable indices to monitor
changes in water resources, and benchmark the new dataset on three different
deep learning frameworks: Convolutional Long Short Term Memory (ConvLSTM),
Bidirectional ConvLSTM, and Time Distributed Convolutional Neural Networks
(TD-CNNs). Future work exploring the many potential applications of this
research is also discussed. | http://arxiv.org/abs/2301.07452v1 | eess.SP | new_dataset | 0.994485 | 2301.07452 |
Simplistic Collection and Labeling Practices Limit the Utility of Benchmark Datasets for Twitter Bot Detection | Accurate bot detection is necessary for the safety and integrity of online
platforms. It is also crucial for research on the influence of bots in
elections, the spread of misinformation, and financial market manipulation.
Platforms deploy infrastructure to flag or remove automated accounts, but their
tools and data are not publicly available. Thus, the public must rely on
third-party bot detection. These tools employ machine learning and often
achieve near perfect performance for classification on existing datasets,
suggesting bot detection is accurate, reliable and fit for use in downstream
applications. We provide evidence that this is not the case and show that high
performance is attributable to limitations in dataset collection and labeling
rather than sophistication of the tools. Specifically, we show that simple
decision rules -- shallow decision trees trained on a small number of features
-- achieve near-state-of-the-art performance on most available datasets and
that bot detection datasets, even when combined together, do not generalize
well to out-of-sample datasets. Our findings reveal that predictions are highly
dependent on each dataset's collection and labeling procedures rather than
fundamental differences between bots and humans. These results have important
implications for both transparency in sampling and labeling procedures and
potential biases in research using existing bot detection tools for
pre-processing. | http://arxiv.org/abs/2301.07015v2 | cs.LG | not_new_dataset | 0.992107 | 2301.07015 |
XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding (XLU) | Natural Language Processing systems are heavily dependent on the availability
of annotated data to train practical models. Primarily, models are trained on
English datasets. In recent times, significant advances have been made in
multilingual understanding due to the steeply increasing necessity of working
in different languages. One of the points that stands out is that since there
are now so many pre-trained multilingual models, we can utilize them for
cross-lingual understanding tasks. Using cross-lingual understanding and
Natural Language Inference, it is possible to train models whose applications
extend beyond the training language. We can leverage machine
translation to avoid the laborious manual translation of datasets from one language
to another. In this work, we focus on improving the original XNLI dataset by
re-translating the MNLI dataset in all of the 14 different languages present in
XNLI, including the test and dev sets of XNLI using Google Translate. We also
perform experiments by training models in all 15 languages and analyzing their
performance on the task of natural language inference. We then expand our
boundary to investigate if we could improve performance in low-resource
languages such as Swahili and Urdu by training models in languages other than
English. | http://arxiv.org/abs/2301.06527v1 | cs.CL | not_new_dataset | 0.991678 | 2301.06527 |
TextileNet: A Material Taxonomy-based Fashion Textile Dataset | The rise of Machine Learning (ML) is gradually digitalizing and reshaping the
fashion industry. Recent years have witnessed a number of fashion AI
applications, for example, virtual try-ons. Textile material identification and
categorization play a crucial role in the fashion textile sector, including
fashion design, retails, and recycling. At the same time, Net Zero is a global
goal and the fashion industry is undergoing a significant change so that
textile materials can be reused, repaired and recycled in a sustainable manner.
Automatically identifying the textile materials in garments remains a
challenge, as no low-cost and effective technique exists for doing so.
In light of this, we build the first fashion textile dataset, TextileNet, based
on textile material taxonomies - a fibre taxonomy and a fabric taxonomy
generated in collaboration with material scientists. TextileNet can be used to
train and evaluate state-of-the-art deep learning models for textile
materials. We hope to standardize textile related datasets through the use of
taxonomies. TextileNet contains 33 fibre labels and 27 fabric labels, and has
in total 760,949 images. We use standard Convolutional Neural Networks (CNNs)
and Vision Transformers (ViTs) to establish baselines for this dataset. Future
applications for this dataset range from textile classification to optimization
of the textile supply chain and interactive design for consumers. We envision
that this can contribute to the development of a new AI-based fashion platform. | http://arxiv.org/abs/2301.06160v1 | cs.DL | new_dataset | 0.994497 | 2301.06160 |
A Dataset of Kurdish (Sorani) Named Entities -- An Amendment to Kurdish-BLARK Named Entities | Named Entity Recognition (NER) is one of the essential applications of
Natural Language Processing (NLP). It is also an instrument that plays a
significant role in many other NLP applications, such as Machine Translation
(MT), Information Retrieval (IR), and Part of Speech Tagging (POST). Kurdish is
an under-resourced language from the NLP perspective. In particular, the lack of
NER resources across all categories hinders other aspects of Kurdish
processing. In this work, we present a dataset that covers several categories
of NEs in Kurdish (Sorani). The dataset is a significant amendment to a
previously developed dataset in the Kurdish BLARK (Basic Language Resource
Kit). It covers 11 categories and 33261 entries in total. The dataset is
publicly available for non-commercial use under CC BY-NC-SA 4.0 license at
https://kurdishblark.github.io/. | http://arxiv.org/abs/2301.04962v1 | cs.CL | new_dataset | 0.994531 | 2301.04962 |
MotorFactory: A Blender Add-on for Large Dataset Generation of Small Electric Motors | To enable automatic disassembly of different product types with uncertain
conditions and degrees of wear in remanufacturing, agile production systems
that can adapt dynamically to changing requirements are needed. Machine
learning algorithms can be employed due to their generalization capabilities of
learning from various types and variants of products. However, in reality,
datasets with a diversity of samples that can be used to train models are
difficult to obtain in the initial period. This may cause bad performances when
the system tries to adapt to new unseen input data in the future. In order to
generate large datasets for different learning purposes, in our project, we
present a Blender add-on named MotorFactory to generate customized mesh models
of various motor instances. MotorFactory allows to create mesh models which,
complemented with additional add-ons, can be further used to create synthetic
RGB images, depth images, normal images, segmentation ground truth masks, and
3D point cloud datasets with point-wise semantic labels. The created synthetic
datasets may be used for various tasks including motor type classification,
object detection for decentralized material transfer tasks, part segmentation
for disassembly and handling tasks, or even reinforcement learning-based
robotics control or view-planning. | http://arxiv.org/abs/2301.05028v1 | cs.RO | new_dataset | 0.73504 | 2301.05028 |
Dataset of Fluorescence Spectra and Chemical Parameters of Olive Oils | This dataset encompasses fluorescence spectra and chemical parameters of 24
olive oil samples from the 2019-2020 harvest provided by the producer Conde de
Benalua, Granada, Spain. The oils are characterized by different qualities: 10
extra virgin olive oil (EVOO), 8 virgin olive oil (VOO), and 6 lampante olive
oil (LOO) samples. For each sample, the dataset includes fluorescence spectra
obtained with two excitation wavelengths, oil quality, and five chemical
parameters necessary for the quality assessment of olive oil. The fluorescence
spectra were obtained by exciting the samples at 365 nm and 395 nm under
identical conditions. The dataset includes the values of the following chemical
parameters for each olive oil sample: acidity, peroxide value, K270, K232,
ethyl esters, and the quality of the samples (EVOO, VOO, or LOO). The dataset
offers a unique possibility for researchers in food technology to develop
machine learning models based on fluorescence data for the quality assessment
of olive oil due to the availability of both spectroscopic and chemical data.
The dataset can be used, for example, to predict one or multiple chemical
parameters or to classify samples based on their quality from fluorescence
spectra. | http://arxiv.org/abs/2301.04471v1 | q-bio.QM | new_dataset | 0.994517 | 2301.04471 |
EMAHA-DB1: A New Upper Limb sEMG Dataset for Classification of Activities of Daily Living | In this paper, we present electromyography analysis of human activity -
database 1 (EMAHA-DB1), a novel dataset of multi-channel surface
electromyography (sEMG) signals to evaluate the activities of daily living
(ADL). The dataset is acquired from 25 able-bodied subjects while performing 22
activities categorised according to functional arm activity behavioral system
(FAABOS) (3 - full hand gestures, 6 - open/close office draw, 8 - grasping and
holding of small office objects, 2 - flexion and extension of finger movements,
2 - writing and 1 - rest). The sEMG data is measured by a set of five Noraxon
Ultium wireless sEMG sensors with Ag/AgCl electrodes placed on a human hand.
The dataset is analyzed for hand activity recognition classification
performance. The classification is performed using four state-of-the-art machine
learning classifiers, including Random Forest (RF), Fine K-Nearest Neighbour
(KNN), Ensemble KNN (sKNN) and Support Vector Machine (SVM) with seven
combinations of time domain and frequency domain feature sets. The
state-of-the-art classification accuracy on the five FAABOS categories is 83.21%,
obtained by the SVM classifier with a third-order polynomial kernel using an energy
feature and auto regressive feature set ensemble. The classification accuracy
on the 22-class hand activities is 75.39% by the same SVM classifier with the log
moments in frequency domain (LMF) feature, modified LMF, time domain
statistical (TDS) feature, spectral band powers (SBP), channel cross
correlation and local binary patterns (LBP) set ensemble. The analysis depicts
the technical challenges addressed by the dataset. The developed dataset can be
used as a benchmark for various classification methods as well as for sEMG
signal analysis corresponding to ADL and for the development of prosthetics and
other wearable robotics. | http://arxiv.org/abs/2301.03325v1 | eess.SP | new_dataset | 0.994468 | 2301.03325 |
Backdoor Attacks Against Dataset Distillation | Dataset distillation has emerged as a prominent technique to improve data
efficiency when training machine learning models. It encapsulates the knowledge
from a large dataset into a smaller synthetic dataset. A model trained on this
smaller distilled dataset can attain comparable performance to a model trained
on the original training dataset. However, the existing dataset distillation
techniques mainly aim at achieving the best trade-off between resource usage
efficiency and model utility. The security risks stemming from them have not
been explored. This study performs the first backdoor attack against the models
trained on the data distilled by dataset distillation models in the image
domain. Concretely, we inject triggers into the synthetic data during the
distillation procedure rather than during the model training stage, where all
previous attacks are performed. We propose two types of backdoor attacks,
namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw
data at the initial distillation phase, while DOORPING iteratively updates the
triggers during the entire distillation procedure. We conduct extensive
evaluations on multiple datasets, architectures, and dataset distillation
techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack
success rate (ASR) scores in some cases, while DOORPING reaches higher ASR
scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive
ablation study to analyze the factors that may affect the attack performance.
Finally, we evaluate multiple defense mechanisms against our backdoor attacks
and show that our attacks can practically circumvent these defense mechanisms. | http://arxiv.org/abs/2301.01197v1 | cs.CR | not_new_dataset | 0.992147 | 2301.01197 |
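The NAIVEATTACK idea of stamping a trigger into the raw data before distillation can be sketched as follows; the patch shape, location, poisoning rate, and target label are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def add_trigger(images, labels, target_label=0, poison_frac=0.05, patch=3, seed=0):
    """Stamp a small bright square into a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    new_labels = labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    poisoned[idx, -patch:, -patch:] = 1.0   # the trigger: a bright corner patch
    new_labels[idx] = target_label
    return poisoned, new_labels, idx

images = np.random.default_rng(1).random((1000, 32, 32))
labels = np.random.default_rng(2).integers(0, 10, size=1000)
poisoned_images, poisoned_labels, poisoned_idx = add_trigger(images, labels)
print(len(poisoned_idx), "samples carry the trigger")
```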
Chains of Autoreplicative Random Forests for missing value imputation in high-dimensional datasets | Missing values are a common problem in data science and machine learning.
Removing instances with missing values can adversely affect the quality of
further data analysis. This is exacerbated when there are relatively many more
features than instances, and thus the proportion of affected instances is high.
Such a scenario is common in many important domains, for example, single
nucleotide polymorphism (SNP) datasets provide a large number of features over
a genome for a relatively small number of individuals. To preserve as much
information as possible prior to modeling, a rigorous imputation scheme is
acutely needed. While Denoising Autoencoders are a state-of-the-art method for
imputation in high-dimensional data, they still require enough complete cases
to be trained on, which are often not available in real-world problems. In this
paper, we consider missing value imputation as a multi-label classification
problem and propose Chains of Autoreplicative Random Forests. Using multi-label
Random Forests instead of neural networks works well for low-sampled data as
there are fewer parameters to optimize. Experiments on several SNP datasets
show that our algorithm effectively imputes missing values based only on
information from the dataset and exhibits better performance than standard
algorithms that do not require any additional information. In this paper, the
algorithm is implemented specifically for SNP data, but it can easily be
adapted for other cases of missing value imputation. | http://arxiv.org/abs/2301.00595v1 | cs.LG | not_new_dataset | 0.992169 | 2301.00595 |
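A simplified, column-wise variant of the idea above (impute each missing categorical value with a Random Forest trained on the observed columns) is sketched below; it omits the chaining and autoreplication details of the actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_true = rng.integers(0, 3, size=(200, 50)).astype(float)   # toy SNP matrix (genotypes 0/1/2)
mask = rng.random(X_true.shape) < 0.1                        # 10% of entries missing
X_obs = X_true.copy()
X_obs[mask] = np.nan

X_imp = X_obs.copy()
for j in range(X_imp.shape[1]):
    miss = np.isnan(X_obs[:, j])
    if not miss.any():
        continue
    # Use all other columns (crudely zero-filled) as predictors for column j.
    predictors = np.delete(np.nan_to_num(X_obs, nan=0.0), j, axis=1)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(predictors[~miss], X_obs[~miss, j].astype(int))
    X_imp[miss, j] = clf.predict(predictors[miss])

print("imputation accuracy:", (X_imp[mask] == X_true[mask]).mean())
```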
Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting | We introduce Argoverse 2 (AV2) - a collection of three datasets for
perception and forecasting research in the self-driving domain. The annotated
Sensor Dataset contains 1,000 sequences of multimodal data, encompassing
high-resolution imagery from seven ring cameras, and two stereo cameras in
addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain
3D cuboid annotations for 26 object categories, all of which are
sufficiently-sampled to support training and evaluation of 3D perception
models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point
clouds and map-aligned pose. This dataset is the largest ever collection of
lidar sensor data and supports self-supervised learning and the emerging task
of point cloud forecasting. Finally, the Motion Forecasting Dataset contains
250,000 scenarios mined for interesting and challenging interactions between
the autonomous vehicle and other actors in each local scene. Models are tasked
with the prediction of future motion for "scored actors" in each scenario and
are provided with track histories that capture object location, heading,
velocity, and category. In all three datasets, each scenario contains its own
HD Map with 3D lane and crosswalk geometry - sourced from data captured in six
distinct cities. We believe these datasets will support new and existing
machine learning research problems in ways that existing datasets do not. All
datasets are released under the CC BY-NC-SA 4.0 license. | http://arxiv.org/abs/2301.00493v1 | cs.CV | new_dataset | 0.994455 | 2301.00493 |
Knowledge-Based Dataset for Training PE Malware Detection Models | Ontologies are a standard for semantic schemata in many knowledge-intensive
domains of human interest. They are now becoming increasingly important also in
areas until very recently dominated by subsymbolic representations and
machine-learning-based data processing. One such area is information security,
and more specifically malware detection. We propose PE Malware Ontology that
offers a reusable semantic schema for Portable Executable (PE, Windows binary
format) malware files. The ontology was inspired by the structure of the data
in the EMBER dataset and it currently covers the data intended for static
malware analysis. With this proposal, we hope to achieve: (a) a unified semantic
representation for PE malware datasets that are available or will be published
in the future; (b) applicability of symbolic, neural-symbolic, or otherwise
explainable approaches in the PE Malware domain that may lead to improved
interpretability of results, which may now be characterized by the terms defined
in the ontology; and (c) improved reproducibility of experiments through the joint publishing of
semantically treated EMBER data, including fractional datasets. | http://arxiv.org/abs/2301.00153v1 | cs.CR | new_dataset | 0.994425 | 2301.00153
Online learning techniques for prediction of temporal tabular datasets with regime changes | The application of deep learning to non-stationary temporal datasets can lead
to overfitted models that underperform under regime changes. In this work, we
propose a modular machine learning pipeline for ranking predictions on temporal
panel datasets which is robust under regime changes. The modularity of the
pipeline allows the use of different models, including Gradient Boosting
Decision Trees (GBDTs) and Neural Networks, with and without feature
engineering. We evaluate our framework on financial data for stock portfolio
prediction, and find that GBDT models with dropout display high performance,
robustness and generalisability with reduced complexity and computational cost.
We then demonstrate how online learning techniques, which require no retraining
of models, can be used post-prediction to enhance the results. First, we show
that dynamic feature projection improves robustness by reducing drawdown in
regime changes. Second, we demonstrate that dynamical model ensembling based on
selection of models with good recent performance leads to improved Sharpe and
Calmar ratios of out-of-sample predictions. We also evaluate the robustness of
our pipeline across different data splits and random seeds with good
reproducibility. | http://arxiv.org/abs/2301.00790v4 | q-fin.CP | not_new_dataset | 0.992128 | 2301.00790 |
Curator: Creating Large-Scale Curated Labelled Datasets using Self-Supervised Learning | Applying Machine learning to domains like Earth Sciences is impeded by the
lack of labeled data, despite a large corpus of raw data available in such
domains. For instance, training a wildfire classifier on satellite imagery
requires curating a massive and diverse dataset, which is an expensive and
time-consuming process that can span from weeks to months. Searching for
relevant examples in over 40 petabytes of unlabelled data requires researchers
to manually hunt for such images, much like finding a needle in a haystack. We
present a no-code end-to-end pipeline, Curator, which dramatically minimizes
the time taken to curate an exhaustive labeled dataset. Curator is able to
search massive amounts of unlabelled data by combining self-supervision,
scalable nearest neighbor search, and active learning to learn and
differentiate image representations. The pipeline can also be readily applied
to solve problems across different domains. Overall, the pipeline makes it
practical for researchers to go from just one reference image to a
comprehensive dataset in a short span of time. | http://arxiv.org/abs/2212.14099v1 | cs.CV | not_new_dataset | 0.990807 | 2212.14099
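The retrieval step in such a pipeline, going from a single labelled reference image to a pool of candidate images via nearest-neighbour search in embedding space, can be sketched as follows; the embeddings here are random stand-ins for self-supervised image features, and the parameters are illustrative.

```python
# Embeddings are random stand-ins for self-supervised image features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
corpus_embeddings = rng.normal(size=(20000, 256))   # unlabelled image corpus
reference_embedding = rng.normal(size=(1, 256))     # the single labelled reference image

nn = NearestNeighbors(n_neighbors=50, metric="cosine").fit(corpus_embeddings)
distances, indices = nn.kneighbors(reference_embedding)
candidates = indices[0]   # images proposed to the annotator in the first active-learning round
print(candidates[:10])
```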
Evaluating Generalizability of Deep Learning Models Using Indian-COVID-19 CT Dataset | Computed tomography (CT) has been routinely used for the diagnosis of lung
diseases and recently, during the pandemic, for detecting the infectivity and
severity of COVID-19 disease. One of the major concerns in using machine
learning (ML) approaches for automatic processing of CT scan images in clinical
settings is that these methods are trained on limited and biased subsets of
publicly available COVID-19 data. This has raised concerns regarding the
generalizability of these models on external datasets, not seen by the model
during training. To address some of these issues, in this work CT scan images
from confirmed COVID-19 data obtained from one of the largest public
repositories, COVIDx CT 2A, were used for training and internal validation of
machine learning models. For the external validation, we generated the
Indian-COVID-19 CT dataset, an open-source repository containing 3D CT volumes
and 12096 chest CT images from 288 COVID-19 patients from India. Comparative
performance evaluation of four state-of-the-art machine learning models, viz.,
a lightweight convolutional neural network (CNN), and three other CNN based
deep learning (DL) models such as VGG-16, ResNet-50 and Inception-v3 in
classifying CT images into three classes, viz., normal, non-covid pneumonia,
and COVID-19 is carried out on these two datasets. Our analysis showed that the
performance of all the models is comparable on the hold-out COVIDx CT 2A test
set with 90% - 99% accuracies (96% for CNN), while on the external
Indian-COVID-19 CT dataset a drop in the performance is observed for all the
models (8% - 19%). The traditional machine learning model, the CNN, performed the
best on the external dataset (accuracy 88%) in comparison to the deep learning
models, indicating that a lightweight CNN generalizes better to unseen
data. The data and code are made available at https://github.com/aleesuss/c19. | http://arxiv.org/abs/2212.13929v1 | eess.IV | new_dataset | 0.961498 | 2212.13929 |
MindBigData 2022 A Large Dataset of Brain Signals | Understanding our brain is one of the most daunting tasks, one we cannot
expect to complete without the use of technology. MindBigData aims to provide a
comprehensive and updated dataset of brain signals related to a diverse set of
human activities so it can inspire the use of machine learning algorithms as a
benchmark of 'decoding' performance from raw brain activities into its
corresponding (labels) mental (or physical) tasks. We use commercial
off-the-shelf EEG devices or custom ones built by us to explore the limits of the
technology. We describe the data collection procedures for each of the sub
datasets and with every headset used to capture them. Also, we report possible
applications in the field of Brain Computer Interfaces or BCI that could impact
the lives of billions, with game-changing use cases in almost every sector,
such as healthcare, industry, and entertainment, to name a few. Ultimately, why not directly
use our brains to 'disintermediate' the senses, as the final HCI (Human-Computer
Interaction) device? This is what we call the journey from Type to Touch to Talk
to Think. | http://arxiv.org/abs/2212.14746v1 | eess.SP | new_dataset | 0.994601 | 2212.14746
Lab-scale Vibration Analysis Dataset and Baseline Methods for Machinery Fault Diagnosis with Machine Learning | The monitoring of machine conditions in a plant is crucial for production in
manufacturing. A sudden failure of a machine can stop production and cause a
loss of revenue. The vibration signal of a machine is a good indicator of its
condition. This paper presents a dataset of vibration signals from a lab-scale
machine. The dataset contains four different types of machine conditions:
normal, unbalance, misalignment, and bearing fault. The dataset was evaluated
with three machine learning methods (SVM, KNN, and GNB), and a perfect result was
obtained by one of the methods on a 1-fold test. The performance of the
algorithms is evaluated using weighted accuracy (WA) since the data is
balanced. The results show that the best-performing algorithm is the SVM with a
WA of 99.75% on 5-fold cross-validation. The dataset is provided in the
form of CSV files in an open and free repository at
https://zenodo.org/record/7006575. | http://arxiv.org/abs/2212.14732v1 | eess.SP | new_dataset | 0.994387 | 2212.14732 |
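A baseline evaluation in the spirit of the one above can be reproduced with a few lines of scikit-learn; synthetic features stand in for the published CSV files, and balanced accuracy is used as a stand-in for the weighted accuracy reported in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for vibration features; in practice the CSV files from the
# Zenodo record above would be loaded and featurized instead.
X, y = make_classification(n_samples=800, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()), ("GNB", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: balanced accuracy = {scores.mean():.3f}")
```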
VQA and Visual Reasoning: An Overview of Recent Datasets, Methods and Challenges | Artificial Intelligence (AI) and its applications have sparked extraordinary
interest in recent years. This achievement can be ascribed in part to advances
in AI subfields including Machine Learning (ML), Computer Vision (CV), and
Natural Language Processing (NLP). Deep learning, a sub-field of machine
learning that employs artificial neural network concepts, has enabled the most
rapid growth in these domains. The integration of vision and language has
sparked a lot of attention as a result of this. The tasks have been created in
such a way that they properly exemplify the concepts of deep learning. In this
review paper, we provide a thorough and extensive review of state-of-the-art
approaches and key model design principles, and discuss existing datasets,
methods, their problem formulation and evaluation measures for VQA and visual
reasoning tasks to understand vision and language representation learning. We
also present some potential future paths in this field of research, with the
hope that our study may generate new ideas and novel approaches to handle
existing difficulties and develop new applications. | http://arxiv.org/abs/2212.13296v1 | cs.CV | not_new_dataset | 0.992316 | 2212.13296 |
MN-DS: A Multilabeled News Dataset for News Articles Hierarchical Classification | This article presents a dataset of 10,917 news articles with hierarchical
news categories collected between 1 January 2019 and 31 December 2019. We
manually labeled the articles based on a hierarchical taxonomy with 17
first-level and 109 second-level categories. This dataset can be used to train
machine learning models for automatically classifying news articles by topic.
This dataset can be helpful for researchers working on news structuring,
classification, and predicting future events based on released news. | http://arxiv.org/abs/2212.12061v3 | cs.CL | new_dataset | 0.994392 | 2212.12061 |
IPProtect: protecting the intellectual property of visual datasets during data valuation | Data trading is essential to accelerate the development of data-driven
machine learning pipelines. The central problem in data trading is to estimate
the utility of a seller's dataset with respect to a given buyer's machine
learning task, also known as data valuation. Typically, data valuation requires
one or more participants to share their raw dataset with others, leading to
potential risks of intellectual property (IP) violations. In this paper, we
tackle the novel task of preemptively protecting the IP of datasets that need
to be shared during data valuation. First, we identify and formalize two kinds
of novel IP risks in visual datasets: data-item (image) IP and statistical
(dataset) IP. Then, we propose a novel algorithm to convert the raw dataset
into a sanitized version, that provides resistance to IP violations, while at
the same time allowing accurate data valuation. The key idea is to limit the
transfer of information from the raw dataset to the sanitized dataset, thereby
protecting against potential intellectual property violations. Next, we analyze
our method for the likely existence of a solution and immunity against
reconstruction attacks. Finally, we conduct extensive experiments on three
computer vision datasets demonstrating the advantages of our method in
comparison to other baselines. | http://arxiv.org/abs/2212.11468v1 | cs.CV | not_new_dataset | 0.992176 | 2212.11468 |
NADBenchmarks -- a compilation of Benchmark Datasets for Machine Learning Tasks related to Natural Disasters | Climate change has increased the intensity, frequency, and duration of
extreme weather events and natural disasters across the world. While the
increased data on natural disasters improves the scope of machine learning (ML)
in this field, progress is relatively slow. One bottleneck is the lack of
benchmark datasets that would allow ML researchers to quantify their progress
against a standard metric. The objective of this short paper is to explore the
state of benchmark datasets for ML tasks related to natural disasters,
categorizing them according to the disaster management cycle. We compile a list
of existing benchmark datasets introduced in the past five years. We propose a
web platform - NADBenchmarks - where researchers can search for benchmark
datasets for natural disasters, and we develop a preliminary version of such a
platform using our compiled list. This paper is intended to aid researchers in
finding benchmark datasets to train their ML models on, and provide general
directions for topics where they can contribute new benchmark datasets. | http://arxiv.org/abs/2212.10735v1 | cs.LG | not_new_dataset | 0.972257 | 2212.10735 |
Berlin V2X: A Machine Learning Dataset from Multiple Vehicles and Radio Access Technologies | The evolution of wireless communications into 6G and beyond is expected to
rely on new machine learning (ML)-based capabilities. These can enable
proactive decisions and actions from wireless-network components to sustain
quality-of-service (QoS) and user experience. Moreover, new use cases in the
area of vehicular and industrial communications will emerge. Specifically in
the area of vehicle communication, vehicle-to-everything (V2X) schemes will
benefit strongly from such advances. With this in mind, we have conducted a
detailed measurement campaign that paves the way to a plethora of diverse
ML-based studies. The resulting datasets offer GPS-located wireless
measurements across diverse urban environments for both cellular (with two
different operators) and sidelink radio access technologies, thus enabling a
variety of different studies towards V2X. The datasets are labeled and sampled
with a high time resolution. Furthermore, we make the data publicly available
with all the necessary information to support the onboarding of new
researchers. We provide an initial analysis of the data showing some of the
challenges that ML needs to overcome and the features that ML can leverage, as
well as some hints at potential research studies. | http://arxiv.org/abs/2212.10343v3 | cs.LG | new_dataset | 0.994484 | 2212.10343 |
Towards an AI-enabled Connected Industry: AGV Communication and Sensor Measurement Datasets | This paper presents two wireless measurement campaigns in industrial
testbeds: industrial Vehicle-to-vehicle (iV2V) and industrial
Vehicle-to-infrastructure plus Sensor (iV2I+), together with detailed
information about the two captured datasets. iV2V covers sidelink communication
scenarios between Automated Guided Vehicles (AGVs), while iV2I+ is conducted at
an industrial setting where an autonomous cleaning robot is connected to a
private cellular network. The combination of different communication
technologies within a common measurement methodology provides insights that can
be exploited by Machine Learning (ML) for tasks such as fingerprinting,
line-of-sight detection, prediction of quality of service or link selection.
Moreover, the datasets are publicly available, labelled and prefiltered for
fast on-boarding and applicability. | http://arxiv.org/abs/2301.03364v4 | cs.NI | not_new_dataset | 0.991902 | 2301.03364 |
IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for Indian Languages | The rapid growth of machine translation (MT) systems has necessitated
comprehensive studies to meta-evaluate evaluation metrics being used, which
enables a better selection of metrics that best reflect MT quality.
Unfortunately, most of the research focuses on high-resource languages, mainly
English, the observations for which may not always apply to other languages.
Indian languages, having over a billion speakers, are linguistically different
from English, and to date, there has not been a systematic study of evaluating
MT systems from English into Indian languages. In this paper, we fill this gap
by creating an MQM dataset consisting of 7000 fine-grained annotations,
spanning 5 Indian languages and 7 MT systems, and use it to establish
correlations between annotator scores and scores obtained using existing
automatic metrics. Our results show that pre-trained metrics, such as COMET,
have the highest correlations with annotator scores. Additionally, we find that
the metrics do not adequately capture fluency-based errors in Indian languages,
and there is a need to develop metrics focused on Indian languages. We hope
that our dataset and analysis will help promote further research in this area. | http://arxiv.org/abs/2212.10180v2 | cs.CL | new_dataset | 0.994449 | 2212.10180 |
JEMMA: An Extensible Java Dataset for ML4Code Applications | Machine Learning for Source Code (ML4Code) is an active research field in
which extensive experimentation is needed to discover how to best use source
code's richly structured information. With this in mind, we introduce JEMMA, an
Extensible Java Dataset for ML4Code Applications, which is a large-scale,
diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is
to lower the barrier to entry in ML4Code by providing the building blocks to
experiment with source code models and tasks. JEMMA comes with a considerable
amount of pre-processed information such as metadata, representations (e.g.,
code tokens, ASTs, graphs), and several properties (e.g., metrics, static
analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2
million classes and over 8 million methods. JEMMA is also extensible allowing
users to add new properties and representations to the dataset, and evaluate
tasks on them. Thus, JEMMA becomes a workbench that researchers can use to
experiment with novel representations and tasks operating on source code. To
demonstrate the utility of the dataset, we also report results from two
empirical studies on our data, ultimately showing that significant work lies
ahead in the design of context-aware source code models that can reason over a
broader network of source code entities in a software project, the very task
that JEMMA is designed to help with. | http://arxiv.org/abs/2212.09132v1 | cs.SE | new_dataset | 0.994464 | 2212.09132 |
Balanced Split: A new train-test data splitting strategy for imbalanced datasets | Classification data sets with skewed class proportions are called imbalanced.
Class imbalance is a problem since most machine learning classification
algorithms are built with an assumption of equal representation of all classes
in the training dataset. Therefore to counter the class imbalance problem, many
algorithm-level and data-level approaches have been developed. These mainly
include ensemble learning and data augmentation techniques. This paper shows a
new way to counter the class imbalance problem through a new data-splitting
strategy called balanced split. Data splitting can play an important role in
correctly classifying imbalanced datasets. We show that the commonly used
data-splitting strategies have some disadvantages, and our proposed balanced
split has solved those problems. | http://arxiv.org/abs/2212.11116v1 | cs.LG | not_new_dataset | 0.99188 | 2212.11116 |
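The abstract does not detail the splitting rule, so the sketch below shows one plausible interpretation of a balanced split: draw an equal number of test samples from every class so that evaluation is not dominated by the majority class. The function name and parameters are hypothetical.

```python
# One plausible reading of a "balanced split" (illustrative only; the paper may differ).
import numpy as np

def balanced_train_test_split(X, y, n_test_per_class=50, seed=0):
    rng = np.random.default_rng(seed)
    test_idx = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        test_idx.extend(rng.choice(idx, size=min(n_test_per_class, len(idx)), replace=False))
    test_idx = np.array(test_idx)
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)   # imbalanced labels
X_tr, X_te, y_tr, y_te = balanced_train_test_split(X, y)
print(np.bincount(y_te))                   # equal class counts in the test set
```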
An annotated instance segmentation XXL-CT dataset from a historic airplane | The Me 163 was a Second World War fighter airplane and a product of secret German
air force development. One of these airplanes is currently owned and
displayed in the historic aircraft exhibition of the Deutsches Museum in
Munich, Germany. To gain insights with respect to its history, design and state
of preservation, a complete CT scan was obtained using an industrial
XXL computed tomography scanner.
Using the CT data from the Me 163, all its details can visually be examined
at various levels, ranging from the complete hull down to single sprockets and
rivets. However, while a trained human observer can identify and interpret the
volumetric data with all its parts and connections, a virtual dissection of the
airplane and all its different parts would be quite desirable. Nevertheless,
this means, that an instance segmentation of all components and objects of
interest into disjoint entities from the CT data is necessary.
As no adequate computer-assisted tools for automated or
semi-automated segmentation of such XXL-airplane data are currently available, in a first
step, an interactive data annotation and object labeling process has been
established. So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163
airplane have been annotated and labeled, whose results can potentially be used
for various new applications in the field of digital heritage, non-destructive
testing, or machine-learning.
This work describes the data acquisition process of the airplane using an
industrial XXL-CT scanner, outlines the interactive segmentation and labeling
scheme to annotate sub-volumes of the airplane's CT data, describes and
discusses various challenges with respect to interpreting and handling the
annotated and labeled data. | http://arxiv.org/abs/2212.08639v1 | cs.CV | new_dataset | 0.994401 | 2212.08639 |
Wide-scale Monitoring of Satellite Lifetimes: Pitfalls and a Benchmark Dataset | An important task within the broader goal of Space Situational Awareness
(SSA) is to observe changes in the orbits of satellites, where the data spans
thousands of objects over long time scales (decades). The Two-Line Element
(TLE) data provided by the North American Aerospace Defense Command is the most
comprehensive and widely-available dataset cataloguing the orbits of
satellites. This makes it a highly-attractive data source on which to perform
this observation. However, when attempting to infer changes in satellite
behaviour from TLE data, there are a number of potential pitfalls. These mostly
relate to specific features of the TLE data which are not always clearly
documented in the data sources or popular software packages for manipulating
them. These quirks produce a particularly hazardous data type for researchers
from adjacent disciplines (such as anomaly detection or machine learning). We
highlight these features of TLE data and the resulting pitfalls in order to
save future researchers from being trapped. A separate, significant issue is
that existing contributions to manoeuvre detection from TLE data evaluate their
algorithms on different satellites, making comparison between these methods
difficult. Moreover, the ground-truth in these datasets is often poor quality,
sometimes being based on subjective human assessment. We therefore release and
describe in depth an open, curated benchmark dataset containing TLE data for
15 satellites alongside high-quality ground-truth manoeuvre timestamps. | http://arxiv.org/abs/2212.08662v1 | astro-ph.EP | new_dataset | 0.994566 | 2212.08662 |
Balanced Datasets for IoT IDS | As the Internet of Things (IoT) continues to grow, cyberattacks are becoming
increasingly common. The security of IoT networks relies heavily on intrusion
detection systems (IDSs). The development of an IDS that is accurate and
efficient is a challenging task. This challenge is compounded
by the absence of balanced datasets for training and testing the
proposed IDS. In this study, four commonly used datasets are visualized and
analyzed. Moreover, it proposes a sampling algorithm that generates a
sample that represents the original dataset. In addition, it proposes an
algorithm to generate a balanced dataset. Researchers can use this paper as a
starting point when investigating cybersecurity and machine learning. The
proposed sampling algorithms reliably generated representative
and balanced samples from NSL-KDD, UNSW-NB15, BotNetIoT-01, and BoTIoT
datasets. | http://arxiv.org/abs/2301.04008v1 | cs.CR | new_dataset | 0.633486 | 2301.04008 |
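One common way to draw a sample that "represents the original dataset", as described above, is stratified sampling that preserves class proportions; the sketch below assumes that interpretation and is not the paper's exact algorithm.

```python
# Stratified sampling as a stand-in for the "well-representing" sample.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100000, weights=[0.95, 0.05], random_state=0)
X_sample, _, y_sample, _ = train_test_split(X, y, train_size=0.05, stratify=y, random_state=0)
print(y_sample.mean(), y.mean())   # class proportions are preserved in the sample
```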
A large-scale and PCR-referenced vocal audio dataset for COVID-19 | The UK COVID-19 Vocal Audio Dataset is designed for the training and
evaluation of machine learning models that classify SARS-CoV-2 infection status
or associated respiratory symptoms using vocal audio. The UK Health Security
Agency recruited voluntary participants through the national Test and Trace
programme and the REACT-1 survey in England from March 2021 to March 2022,
during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and
some Omicron variant sublineages. Audio recordings of volitional coughs,
exhalations, and speech were collected in the 'Speak up to help beat
coronavirus' digital survey alongside demographic, self-reported symptom and
respiratory condition data, and linked to SARS-CoV-2 test results. The UK
COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2
PCR-referenced audio recordings to date. PCR results were linked to 70,794 of
72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms
were reported by 45.62% of participants. This dataset has additional potential
uses for bioacoustics research, with 11.30% participants reporting asthma, and
27.20% with linked influenza PCR test results. | http://arxiv.org/abs/2212.07738v3 | cs.SD | new_dataset | 0.994483 | 2212.07738 |
AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for Approximating Reynolds-Averaged Navier-Stokes Solutions | Surrogate models are necessary to optimize meaningful quantities in physical
dynamics as their recursive numerical resolutions are often prohibitively
expensive. It is mainly the case for fluid dynamics and the resolution of
Navier-Stokes equations. However, despite the fast-growing field of data-driven
models for physical systems, reference datasets representing real-world
phenomena are lacking. In this work, we develop AirfRANS, a dataset for
studying the two-dimensional incompressible steady-state Reynolds-Averaged
Navier-Stokes equations over airfoils at a subsonic regime and for different
angles of attack. We also introduce metrics on the stress forces at the
surface of geometries and visualization of boundary layers to assess the
capabilities of models to accurately predict the meaningful information of the
problem. Finally, we propose deep learning baselines on four machine learning
tasks to study AirfRANS under different constraints for generalization
considerations: big and scarce data regime, Reynolds number, and angle of
attack extrapolation. | http://arxiv.org/abs/2212.07564v3 | cs.LG | new_dataset | 0.99442 | 2212.07564 |
Automatic Classification of Galaxy Morphology: a rotationally invariant supervised machine learning method based on the UML-dataset | Classification of galaxy morphology is a challenging but meaningful task for
the enormous amount of data produced by the next-generation telescope. By
introducing the adaptive polar coordinate transformation, we develop a
rotationally invariant supervised machine learning (SML) method that ensures
consistent classifications when rotating galaxy images, which is always
required to be satisfied physically but difficult to achieve algorithmically.
The adaptive polar coordinate transformation, compared with the conventional
method of data augmentation by including additional rotated images in the
training set, proves to be an effective and efficient way of improving
the robustness of SML methods. In previous work, we generated a catalog
of galaxies with well-classified morphologies via our developed unsupervised
machine learning (UML) method. By using this UML-dataset as the training set,
we apply the new method to classify galaxies into five categories
(unclassifiable, irregulars, late-type disks, early-type disks, and spheroids).
In general, the result of our morphological classifications following the
sequence from irregulars to spheroids agrees well with the expected trends of
other galaxy properties, including S\'{e}rsic indices, effective radii,
nonparametric statistics, and colors. Thus, we demonstrate that the
rotationally invariant SML method, together with the previously developed UML
method, completes the entire task of automatic classification of galaxy
morphology. | http://arxiv.org/abs/2212.06981v1 | astro-ph.GA | not_new_dataset | 0.992163 | 2212.06981 |
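The rotation-invariance trick rests on the fact that, in polar coordinates, an image rotation becomes a cyclic shift along the angle axis. The sketch below implements a plain (non-adaptive) polar resampling to illustrate that property; the paper's adaptive variant differs in how the polar grid is chosen, and the grid sizes here are arbitrary.

```python
# A rotation of the input image becomes a cyclic shift along the angle axis.
import numpy as np

def to_polar(img, n_r=64, n_theta=64):
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rs = np.linspace(0, min(cy, cx), n_r)
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + rs[:, None] * np.sin(ts)[None, :]).round().astype(int)
    xs = (cx + rs[:, None] * np.cos(ts)[None, :]).round().astype(int)
    return img[ys.clip(0, h - 1), xs.clip(0, w - 1)]

img = np.random.default_rng(0).random((128, 128))
polar = to_polar(img)
print(polar.shape)   # (radius bins, angle bins)
```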
A Novel Approach For Generating Customizable Light Field Datasets for Machine Learning | To train deep learning models, which often outperform traditional approaches,
large datasets of a specified medium, e.g., images, are used in numerous areas.
However, for light field-specific machine learning tasks, there is a lack of
such available datasets. Therefore, we create our own light field datasets,
which have great potential for a variety of applications due to the abundance
of information in light fields compared to singular images. Using the Unity and
C# frameworks, we develop a novel approach for generating large, scalable, and
reproducible light field datasets based on customizable hardware configurations
to accelerate light field deep learning research. | http://arxiv.org/abs/2212.06701v1 | cs.CV | not_new_dataset | 0.991975 | 2212.06701 |
3DSC - A New Dataset of Superconductors Including Crystal Structures | Data-driven methods, in particular machine learning, can help to speed up the
discovery of new materials by finding hidden patterns in existing data and
using them to identify promising candidate materials. In the case of
superconductors, which are a highly interesting but also a complex class of
materials with many relevant applications, the use of data science tools is to
date slowed down by a lack of accessible data. In this work, we present a new
and publicly available superconductivity dataset ('3DSC'), featuring the
critical temperature $T_\mathrm{c}$ of superconducting materials in addition
to tested non-superconductors. In contrast to existing databases such as the
SuperCon database which contains information on the chemical composition, the
3DSC is augmented by the approximate three-dimensional crystal structure of
each material. We perform a statistical analysis and machine learning
experiments to show that access to this structural information improves the
prediction of the critical temperature $T_\mathrm{c}$ of materials.
Furthermore, we see the 3DSC not as a finished dataset, but we provide ideas
and directions for further research to improve the 3DSC in multiple ways. We
are confident that this database will be useful in applying state-of-the-art
machine learning methods to eventually find new superconductors. | http://arxiv.org/abs/2212.06071v2 | cond-mat.supr-con | new_dataset | 0.994548 | 2212.06071 |
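A hedged sketch of the kind of experiment described above, comparing a composition-only feature set against one augmented with structural descriptors when predicting $T_\mathrm{c}$; the file name and column prefixes are hypothetical placeholders, not the actual 3DSC layout.

```python
# Hedged sketch: does adding structural descriptors improve Tc prediction?
# File name and column prefixes ("comp_", "struct_", "tc") are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("3DSC_MP.csv")  # hypothetical file name
y = df["tc"]                     # critical temperature in Kelvin (assumed column)
X_comp = df[[c for c in df.columns if c.startswith("comp_")]]                 # composition only
X_full = df[[c for c in df.columns if c.startswith(("comp_", "struct_"))]]    # plus structure

for name, X in [("composition only", X_comp), ("composition + structure", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```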
OpenPack: A Large-scale Dataset for Recognizing Packaging Works in IoT-enabled Logistic Environments | Unlike for daily human activities, existing publicly available sensor datasets
for work activity recognition in industrial domains are limited by difficulties
in collecting realistic data, as close collaboration with industrial sites is
required. This also limits research on and development of AI methods for
industrial applications. To address these challenges and contribute to research
on machine recognition of work activities in industrial domains, in this study,
we introduce a new large-scale dataset for packaging work recognition called
OpenPack. OpenPack contains 53.8 hours of multimodal sensor data, including
keypoints, depth images, acceleration data, and readings from IoT-enabled
devices (e.g., handheld barcode scanners used in work procedures), collected
from 16 distinct subjects with different levels of packaging work experience.
On the basis of this dataset, we propose a neural network model designed to
recognize work activities, which efficiently fuses sensor data and readings
from IoT-enabled devices by processing them within different streams in a
ladder-shaped architecture, and the experiment showed the effectiveness of the
architecture. We believe that OpenPack will contribute to the community of
action/activity recognition with sensors. OpenPack dataset is available at
https://open-pack.github.io/. | http://arxiv.org/abs/2212.11152v1 | cs.CV | new_dataset | 0.994506 | 2212.11152 |
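A much-simplified fusion sketch in PyTorch: one stream for wearable-sensor windows and one for IoT-device readings, merged by generic late fusion rather than the paper's ladder-shaped architecture; all dimensions below are illustrative assumptions.

```python
# Generic two-stream late-fusion model (a simplification, not the paper's
# ladder-shaped architecture): sensor time series + IoT-device readings.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, sensor_dim=6, iot_dim=4, hidden=64, num_classes=10):
        super().__init__()
        self.sensor_net = nn.GRU(sensor_dim, hidden, batch_first=True)  # sensor stream
        self.iot_net = nn.Sequential(nn.Linear(iot_dim, hidden), nn.ReLU())  # IoT stream
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, sensor_seq, iot_feat):
        _, h = self.sensor_net(sensor_seq)            # h: (1, batch, hidden)
        fused = torch.cat([h.squeeze(0), self.iot_net(iot_feat)], dim=-1)
        return self.head(fused)

model = TwoStreamFusion()
logits = model(torch.randn(8, 100, 6), torch.randn(8, 4))  # batch of 8 windows
print(logits.shape)  # torch.Size([8, 10])
```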
Performance Evaluation of Apache Spark MLlib Algorithms on an Intrusion Detection Dataset | The growing use of the Internet and web services, the advent of the fifth
generation of cellular network technology (5G), and ever-growing Internet of
Things (IoT) data traffic will further increase global Internet usage. To
ensure the security of future networks, machine learning-based
intrusion detection and prevention systems (IDPS) must be implemented to detect
new attacks, and big data parallel processing tools can be used to handle a
huge collection of training data in these systems. In this paper, Apache Spark,
a general-purpose and fast cluster computing platform, is used for processing
and training a large volume of network traffic feature data. In this work, the
most important features of the CSE-CIC-IDS2018 dataset are used for
constructing machine learning models and then the most popular machine learning
approaches, namely Logistic Regression, Support Vector Machine (SVM), three
different Decision Tree Classifiers, and the Naive Bayes algorithm are used to
train the model using up to eight worker nodes. Our Spark cluster
contains seven machines acting as worker nodes and one machine is configured as
both a master and a worker. We use the CSE-CIC-IDS2018 dataset to evaluate the
overall performance of these algorithms on Botnet attacks and distributed
hyperparameter tuning is used to find the best single decision tree parameters.
In our experiments, we achieved up to 100% accuracy using the features selected
by the learning method. | http://arxiv.org/abs/2212.05269v1 | cs.NI | not_new_dataset | 0.97334 | 2212.05269
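A minimal PySpark sketch of the workflow described above: a decision-tree pipeline on CSE-CIC-IDS2018-style data with distributed hyperparameter tuning over the tree depth. The feature subset and label column name are placeholders, not the exact dataset schema.

```python
# Hedged sketch of the Spark MLlib decision-tree pipeline; column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("ids-mllib").getOrCreate()
df = spark.read.csv("cse-cic-ids2018.csv", header=True, inferSchema=True)

feature_cols = ["Flow Duration", "Tot Fwd Pkts", "Tot Bwd Pkts"]  # placeholder subset
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
indexer = StringIndexer(inputCol="Label", outputCol="label")
tree = DecisionTreeClassifier(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, indexer, tree])

# Distributed hyperparameter tuning of the tree depth across the cluster.
grid = ParamGridBuilder().addGrid(tree.maxDepth, [5, 10, 15]).build()
evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=3, parallelism=8)
model = cv.fit(df)
print("best accuracy:", max(model.avgMetrics))
```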
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Datasets and Metrics | Multi-hop machine reading comprehension is a challenging task that aims to
answer a question based on disjoint pieces of information spread across
different passages. The evaluation metrics and datasets are a vital part of
multi-hop MRC because it is not possible to train and evaluate models without
them; moreover, the challenges posed by datasets are often an important
motivation for improving the existing models. Due to increasing attention to
this field, it is necessary and worth reviewing them in detail. This study aims
to present a comprehensive survey on recent advances in multi-hop MRC
evaluation metrics and datasets. In this regard, first, the multi-hop MRC
problem definition will be presented, then the evaluation metrics based on
their multi-hop aspect will be investigated. Also, 15 multi-hop datasets have
been reviewed in detail from 2017 to 2022, and a comprehensive analysis has
been prepared at the end. Finally, open issues in this field have been
discussed. | http://arxiv.org/abs/2212.04070v1 | cs.CL | not_new_dataset | 0.992151 | 2212.04070 |
VISEM-Tracking, a human spermatozoa tracking dataset | A manual assessment of sperm motility requires microscopy observation, which
is challenging due to the fast-moving spermatozoa in the field of view. To
obtain correct results, manual evaluation requires extensive training.
Therefore, computer-assisted sperm analysis (CASA) has become increasingly used
in clinics. Despite this, more data is needed to train supervised machine
learning approaches in order to improve accuracy and reliability in the
assessment of sperm motility and kinematics. In this regard, we provide a
dataset called VISEM-Tracking with 20 video recordings of 30 seconds
(comprising 29,196 frames) of wet sperm preparations with manually annotated
bounding-box coordinates and a set of sperm characteristics analyzed by experts
in the domain. In addition to the annotated data, we provide unlabeled video
clips for easy-to-use access and analysis of the data via methods such as self-
or unsupervised learning. As part of this paper, we present baseline sperm
detection performances using the YOLOv5 deep learning (DL) model trained on the
VISEM-Tracking dataset. As a result, we show that the dataset can be used to
train complex DL models to analyze spermatozoa. | http://arxiv.org/abs/2212.02842v5 | cs.CV | new_dataset | 0.994428 | 2212.02842 |
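A hedged sketch of running a YOLOv5 detector on a single extracted video frame via `torch.hub`; this loads the generic pretrained weights, whereas the paper's baseline is fine-tuned on VISEM-Tracking, and the frame path is a placeholder.

```python
# Hedged sketch: YOLOv5 inference on one frame. The paper's baseline fine-tunes
# on VISEM-Tracking; here the generic pretrained checkpoint is used instead,
# and the frame file name is a placeholder.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # generic pretrained weights
results = model("frame_000001.png")                       # hypothetical extracted frame
boxes = results.xyxy[0]  # tensor of (x1, y1, x2, y2, confidence, class) per detection
print(boxes.shape)
```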
MapInWild: A Remote Sensing Dataset to Address the Question What Makes Nature Wild | Anthropogenic pressure (i.e., human influence) on the environment is one of
the largest causes of the loss of biological diversity. Wilderness areas, in
contrast, are home to undisturbed ecological processes. However, there is no
biophysical definition of the term wilderness. Instead, wilderness is more of a
philosophical or cultural concept and thus cannot be easily delineated or
categorized in a technical manner. With this paper, (i) we introduce the task
of wilderness mapping by means of machine learning applied to satellite imagery,
and (ii) we publish MapInWild, a large-scale benchmark dataset curated for that
task. MapInWild is a multi-modal dataset and comprises various geodata acquired
and formed from a diverse set of Earth observation sensors. The dataset
consists of 8144 images with a shape of 1920 x 1920 pixels and is approximately
350 GB in size. The images are weakly annotated with three classes derived from
the World Database of Protected Areas - Strict Nature Reserves, Wilderness
Areas, and National Parks. With the dataset, which shall serve as a testbed for
developments in fields such as explainable machine learning and environmental
remote sensing, we hope to contribute to a deepening of our understanding of
the question "What makes nature wild?". | http://arxiv.org/abs/2212.02265v1 | cs.CV | new_dataset | 0.994585 | 2212.02265 |
WAIR-D: Wireless AI Research Dataset | It is common sense that datasets with high-quality data samples play an
important role in artificial intelligence (AI), machine learning (ML) and
related studies. However, although AI/ML was introduced into wireless
research a long time ago, few datasets are commonly used in the research
community. Without a common dataset, AI-based methods proposed for wireless
systems are hard to compare with both the traditional baselines and even each
other. Existing wireless AI research usually relies on datasets generated
from statistical models or ray-tracing simulations with limited
environments. Statistical data hinder the trained AI models from further
fine-tuning for a specific scenario, and ray-tracing data with limited
environments lower the generalization capability of the trained AI models.
In this paper, we present the Wireless AI Research Dataset (WAIR-D), which
consists of two scenarios. Scenario 1 contains 10,000 environments with
sparsely dropped user equipments (UEs), and Scenario 2 contains 100
environments with densely dropped UEs. The environments are randomly picked
from more than 40 cities on a real-world map. The large volume of data
guarantees that the trained AI models enjoy good generalization capability,
while fine-tuning can be easily carried out on a specific chosen environment.
Moreover, both the wireless channels and the corresponding environmental
information are provided in WAIR-D, so that extra-information-aided
communication mechanisms can be designed and evaluated. WAIR-D provides
researchers with benchmarks to compare their different designs or reproduce the
results of others. In this paper, we show the detailed construction of this dataset and
examples of using it. | http://arxiv.org/abs/2212.02159v1 | cs.LG | new_dataset | 0.994435 | 2212.02159 |
Unveiling the complex structure-property correlation of defects in 2D materials based on high throughput datasets | Modification of the physical properties of materials and the design of materials with
on-demand characteristics are at the heart of modern technology. Few
applications rely on pure materials; most devices and technologies require
careful design of materials properties through alloying, creating
heterostructures or composites, or the controllable introduction of defects. At the
same time, such designer materials are notoriously difficult to model.
Thus, it is very tempting to apply machine learning methods for such systems.
Unfortunately, there is only a handful of machine learning-friendly material
databases available these days. We develop a platform for easy implementation
of machine learning techniques to materials design and populate it with
datasets on pristine and defected materials. Here we describe datasets of
defects in representative 2D materials such as MoS2, WSe2, hBN, GaSe, InSe, and
black phosphorus, calculated using DFT. Our study provides a data-driven
physical understanding of the complex behaviors of defect properties in 2D
materials and holds promise as a guide for the development of efficient machine
learning models. In addition, as more datasets are enrolled, our
database could provide a platform for the design of materials with predetermined
properties. | http://arxiv.org/abs/2212.02110v1 | cond-mat.mtrl-sci | new_dataset | 0.99365 | 2212.02110 |
A dataset for audio-video based vehicle speed estimation | Accurate speed estimation of road vehicles is important for several reasons.
One is speed limit enforcement, which represents a crucial tool in decreasing
traffic accidents and fatalities. Compared with other research areas and
domains, the number of available datasets for vehicle speed estimation is still
very limited. We present a dataset of on-road audio-video recordings of single
vehicles passing by a camera at known speeds, maintained stable by the on-board
cruise control. The dataset contains thirteen vehicles, selected to be as
diverse as possible in terms of manufacturer, production year, engine type,
power and transmission, resulting in a total of 400 annotated audio-video
recordings. The dataset is fully available and intended as a public benchmark
to facilitate research in audio-video vehicle speed estimation. In addition to
the dataset, we propose a cross-validation strategy which can be used in a
machine learning model for vehicle speed estimation. Two approaches to
training-validation split of the dataset are proposed. | http://arxiv.org/abs/2212.01651v1 | cs.LG | new_dataset | 0.994541 | 2212.01651 |
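One plausible reading of a vehicle-aware split, shown as a sketch with scikit-learn's `GroupKFold`: recordings are grouped by vehicle so that no vehicle appears in both training and validation folds. The features, speeds, and group assignments below are synthetic placeholders, and the paper's two actual split strategies may differ.

```python
# Hedged sketch of a vehicle-aware cross-validation split; the data are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

n_recordings = 400
rng = np.random.default_rng(0)
X = rng.random((n_recordings, 32))                    # placeholder audio-video features
y = rng.uniform(30, 100, size=n_recordings)           # placeholder speeds (km/h)
vehicle_id = rng.integers(0, 13, size=n_recordings)   # 13 vehicles, as in the dataset

for fold, (train_idx, val_idx) in enumerate(
        GroupKFold(n_splits=5).split(X, y, groups=vehicle_id)):
    # No vehicle appears in both the training and validation folds.
    assert set(vehicle_id[train_idx]).isdisjoint(vehicle_id[val_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val recordings")
```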
Calibration and generalizability of probabilistic models on low-data chemical datasets with DIONYSUS | Deep learning models that leverage large datasets are often the state of the
art for modelling molecular properties. When the datasets are smaller (< 2000
molecules), it is not clear that deep learning approaches are the right
modelling tool. In this work we perform an extensive study of the calibration
and generalizability of probabilistic machine learning models on small chemical
datasets. Using different molecular representations and models, we analyse the
quality of their predictions and uncertainties in a variety of tasks (binary,
regression) and datasets. We also introduce two simulated experiments that
evaluate their performance: (1) Bayesian optimization guided molecular design,
(2) inference on out-of-distribution data via ablated cluster splits. We offer
practical insights into model and feature choice for modelling small chemical
datasets, a common scenario in new chemical experiments. We have packaged our
analysis into the DIONYSUS repository, which is open sourced to aid in
reproducibility and extension to new datasets. | http://arxiv.org/abs/2212.01574v2 | cs.CE | not_new_dataset | 0.99222 | 2212.01574 |
5G-NIDD: A Comprehensive Network Intrusion Detection Dataset Generated over 5G Wireless Network | With a plethora of new connections, features, and services introduced, the
5th generation (5G) wireless technology reflects the development of mobile
communication networks and is here to stay for the next decade. The multitude
of services and technologies that 5G incorporates have made modern
communication networks very complex and sophisticated in nature. This
complexity along with the incorporation of Machine Learning (ML) and Artificial
Intelligence (AI) provides the opportunity for the attackers to launch
intelligent attacks against the network and network devices. These attacks
often go undetected due to the lack of intelligent security mechanisms to
counter these threats. Therefore, the implementation of real-time, proactive,
and self-adaptive security mechanisms throughout the network would be an
integral part of 5G as well as future communication systems. Consequently, large
amounts of data collected from real networks will play an important role in the
training of AI/ML models to identify and detect malicious content in network
traffic. This work presents 5G-NIDD, a fully labeled dataset built on a
functional 5G test network that can be used by those who develop and test AI/ML
solutions. The work further analyses the collected data using common ML models
and shows the achieved accuracy levels. | http://arxiv.org/abs/2212.01298v1 | cs.CR | new_dataset | 0.994486 | 2212.01298 |
SOLD: Sinhala Offensive Language Dataset | The spread of offensive content online, such as hate speech and
cyber-bullying, is a global phenomenon. This has sparked interest in the
artificial intelligence (AI) and natural language processing (NLP) communities,
motivating the development of various systems trained to detect potentially
harmful content automatically. These systems require annotated datasets to
train the machine learning (ML) models. However, with a few notable exceptions,
most datasets on this topic have dealt with English and a few other
high-resource languages. As a result, the research in offensive language
identification has been limited to these languages. This paper addresses this
gap by tackling offensive language identification in Sinhala, a low-resource
Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce
the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments
on this dataset. SOLD is a manually annotated dataset containing 10,000 posts
from Twitter annotated as offensive and not offensive at both sentence-level
and token-level, improving the explainability of the ML models. SOLD is the
first large publicly available offensive language dataset compiled for Sinhala.
We also introduce SemiSOLD, a larger dataset containing more than 145,000
Sinhala tweets, annotated following a semi-supervised approach. | http://arxiv.org/abs/2212.00851v1 | cs.CL | new_dataset | 0.994478 | 2212.00851 |
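A hedged sketch of sentence-level fine-tuning on SOLD-style data with the Hugging Face `transformers` Trainer; the base model, file names, and column names ("text", "label") are assumptions, not the authors' configuration.

```python
# Hedged sketch: binary offensive-language classification with a multilingual
# encoder. Model choice, file names, and column names are assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("csv", data_files={"train": "sold_train.csv",
                                          "test": "sold_test.csv"})
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    # Assumes a "text" column with the tweet and a "label" column with 0/1.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sold-xlmr", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```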
EBHI-Seg: A Novel Enteroscope Biopsy Histopathological Haematoxylin and Eosin Image Dataset for Image Segmentation Tasks | Background and Purpose: Colorectal cancer is a common fatal malignancy, the
fourth most common cancer in men, and the third most common cancer in women
worldwide. Timely detection of cancer in its early stages is essential for
treating the disease. Currently, there is a lack of datasets for
histopathological image segmentation of rectal cancer, which often hampers the
assessment accuracy when computer technology is used to aid in diagnosis.
Methods: The present study provides a new publicly available Enteroscope
Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image
Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of
EBHI-Seg, the experimental results for EBHI-Seg are evaluated using classical
machine learning methods and deep learning methods. Results: The experimental
results showed that deep learning methods had a better image segmentation
performance when utilizing EBHI-Seg. The maximum Dice score for the classical
machine learning methods is 0.948, while that for the deep learning methods is
0.965. Conclusion: This
publicly available dataset contained 5,170 images of six types of tumor
differentiation stages and the corresponding ground truth images. The dataset
can provide researchers with new segmentation algorithms for medical diagnosis
of colorectal cancer, which can be used in the clinical setting to help doctors
and patients. | http://arxiv.org/abs/2212.00532v3 | eess.IV | new_dataset | 0.994417 | 2212.00532 |
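The Dice coefficient used as the evaluation metric above, written out as a minimal NumPy function for binary segmentation masks.

```python
# Dice coefficient for binary segmentation masks (1.0 = perfect overlap).
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:52, 20:52] = 1
print(round(dice(a, b), 3))
```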
Open-Source Ground-based Sky Image Datasets for Very Short-term Solar Forecasting, Cloud Analysis and Modeling: A Comprehensive Survey | Sky-image-based solar forecasting using deep learning has been recognized as
a promising approach in reducing the uncertainty in solar power generation.
However, one of the biggest challenges is the lack of massive and diversified
sky image samples. In this study, we present a comprehensive survey of
open-source ground-based sky image datasets for very short-term solar
forecasting (i.e., forecasting horizon less than 30 minutes), as well as
related research areas which can potentially help improve solar forecasting
methods, including cloud segmentation, cloud classification and cloud motion
prediction. We first identify 72 open-source sky image datasets that satisfy
the needs of machine/deep learning. Then a database of information about
various aspects of the identified datasets is constructed. To evaluate each
surveyed dataset, we further develop a multi-criteria ranking system based on
8 dimensions of the datasets which could have important impacts on usage of the
data. Finally, we provide insights on the usage of these datasets for different
applications. We hope this paper can provide an overview for researchers who
are looking for datasets for very short-term solar forecasting and related
areas. | http://arxiv.org/abs/2211.14709v2 | cs.CV | new_dataset | 0.987291 | 2211.14709 |
Carbon Emission Prediction on the World Bank Dataset for Canada | The continuous rise in CO2 emissions into the environment is one of the most
pressing issues facing the whole world. Many countries are making crucial
decisions to control their carbon footprints in order to avoid some of the
catastrophic outcomes. There has been a lot of research aiming to project the
amount of carbon emissions in the future, which can help us to develop
innovative techniques to deal with it in advance. Machine learning is one of
the most advanced and efficient techniques for predicting the amount of carbon
emissions from current data. This paper provides the methods for predicting
carbon emissions (CO2 emissions) for the next few years. The predictions are
based on data from the past 50 years. The dataset, which is used for making the
prediction, is collected from World Bank datasets. This dataset contains CO2
emissions (metric tons per capita) of all the countries from 1960 to 2018. Our
method uses machine learning techniques to project what carbon emission levels
will look like over the next ten years, based on the dataset taken from the
World Bank's data repository. The purpose of
this research is to compare how different machine learning models (Decision
Tree, Linear Regression, Random Forest, and Support Vector Machine) perform on
a similar dataset and measure the difference between their predictions. | http://arxiv.org/abs/2211.17010v1 | cs.LG | new_dataset | 0.994095 | 2211.17010 |
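A hedged sketch of the model comparison described above, fitting the four regressors to a per-capita CO2 time series and scoring them on held-out years; the CSV name and column layout are assumptions about how a World Bank extract might be prepared.

```python
# Hedged sketch: compare the four regressors on held-out years. The file name
# and columns ("year", "co2_per_capita") are assumptions, not the actual extract.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

df = pd.read_csv("canada_co2.csv")
X, y = df[["year"]], df["co2_per_capita"]
train_mask = X["year"] <= 2008
X_train, y_train = X[train_mask], y[train_mask]
X_test, y_test = X[~train_mask], y[~train_mask]

models = {
    "Linear Regression": LinearRegression(),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "SVM": SVR(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```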
Elements of effective machine learning datasets in astronomy | In this work, we identify elements of effective machine learning datasets in
astronomy and present suggestions for their design and creation. Machine
learning has become an increasingly important tool for analyzing and
understanding the large-scale flood of data in astronomy. To take advantage of
these tools, datasets are required for training and testing. However, building
machine learning datasets for astronomy can be challenging. Astronomical data
is collected from instruments built to explore science questions in a
traditional fashion rather than to conduct machine learning. Thus, it is often
the case that raw data, or even downstream processed data, is not in a form
amenable to machine learning. We explore the construction of machine learning
datasets and we ask: what elements define effective machine learning datasets?
We define effective machine learning datasets in astronomy to be formed with
well-defined data points, structure, and metadata. We discuss why these
elements are important for astronomical applications and ways to put them in
practice. We posit that these qualities not only make the data suitable for
machine learning but also help to foster usable, reusable, and replicable
science practices. | http://arxiv.org/abs/2211.14401v2 | astro-ph.IM | not_new_dataset | 0.99218 | 2211.14401 |
Composite Score for Anomaly Detection in Imbalanced Real-World Industrial Dataset | In recent years, the industrial sector has evolved towards its fourth
revolution. The quality control domain is particularly interested in advanced
machine learning for computer vision anomaly detection. Nevertheless, several
challenges have to be faced, including imbalanced datasets, image
complexity, and the zero-false-negative (ZFN) constraint needed to guarantee the
high-quality requirement. This paper illustrates a use case for an industrial
partner, where Printed Circuit Board Assembly (PCBA) images are first
reconstructed with a Vector Quantized Generative Adversarial Network (VQGAN)
trained on normal products. Then, several multi-level metrics are extracted on
a few normal and abnormal images, highlighting anomalies through reconstruction
differences. Finally, a classifier is trained to build a composite anomaly score
from the extracted metrics. This three-step approach is performed on the
public MVTec-AD datasets and on the partner PCBA dataset, where it achieves a
regular accuracy of 95.69% and 87.93% under the ZFN constraint. | http://arxiv.org/abs/2211.15513v1 | cs.CV | not_new_dataset | 0.992001 | 2211.15513 |
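A hedged sketch of the final step described above: turning a few reconstruction-difference metrics into a single composite anomaly score with a small classifier. The metrics and toy data are placeholders, not the paper's multi-level metrics.

```python
# Hedged sketch: build a composite anomaly score from reconstruction-difference
# metrics. The metrics and the toy data below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reconstruction_metrics(image, reconstruction):
    diff = np.abs(image - reconstruction)
    return np.array([diff.mean(), diff.max(), (diff > 0.2).mean()])  # placeholder metrics

rng = np.random.default_rng(0)
# Toy data: "normal" samples reconstruct well, "abnormal" ones poorly.
normal = [reconstruction_metrics(x, x + rng.normal(0, 0.02, x.shape))
          for x in rng.random((50, 64, 64))]
abnormal = [reconstruction_metrics(x, x + rng.normal(0, 0.15, x.shape))
            for x in rng.random((50, 64, 64))]
X = np.vstack(normal + abnormal)
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
composite_score = clf.predict_proba(X)[:, 1]  # composite anomaly score in [0, 1]
print(composite_score[:5].round(3), composite_score[-5:].round(3))
```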