title | abstract | url | category | prediction | probability | arxiv_id |
---|---|---|---|---|---|---|
DC-BENCH: Dataset Condensation Benchmark | Dataset Condensation is a newly emerging technique aiming at learning a tiny
dataset that captures the rich information encoded in the original dataset. As
the size of datasets contemporary machine learning models rely on becomes
increasingly large, condensation methods become a prominent direction for
accelerating network training and reducing data storage. Although numerous
methods have been proposed in this rapidly growing field, evaluating and
comparing different condensation methods is non-trivial and remains an
open issue. The quality of a condensed dataset is often obscured by the many
factors that critically affect end performance, such as data augmentation
and model architectures. The lack of a systematic way to evaluate and compare
condensation methods not only hinders our understanding of existing techniques,
but also discourages practical usage of the synthesized datasets. This work
provides the first large-scale standardized benchmark on Dataset Condensation.
It consists of a suite of evaluations to comprehensively reflect the
generalizability and effectiveness of condensation methods through the lens of
their generated datasets. Leveraging this benchmark, we conduct a large-scale
study of current condensation methods, and report many insightful findings that
open up new possibilities for future development. The benchmark library,
including evaluators, baseline methods, and generated datasets, is open-sourced
to facilitate future research and application. | http://arxiv.org/abs/2207.09639v2 | cs.LG | not_new_dataset | 0.900296 | 2207.09639 |
PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search | While contextualized word embeddings have been a de-facto standard, learning
contextualized phrase embeddings remains underexplored, hindered by the
lack of a human-annotated benchmark that tests machine understanding of phrase
semantics given a context sentence or paragraph (instead of phrases alone). To
fill this gap, we propose PiC -- a dataset of ~28K noun phrases accompanied
by their contextual Wikipedia pages and a suite of three tasks for training and
evaluating phrase embeddings. Training on PiC improves ranking models' accuracy
and remarkably pushes span-selection (SS) models (i.e., those predicting the
start and end index of the target phrase) to near-human accuracy, which is 95% Exact
Match (EM) on semantic search given a query phrase and a passage.
Interestingly, we find evidence that such impressive performance is because the
SS models learn to better capture the common meaning of a phrase regardless of
its actual context. SotA models perform poorly in distinguishing two senses of
the same phrase in two contexts (~60% EM) and in estimating the similarity
between two different phrases in the same context (~70% EM). | http://arxiv.org/abs/2207.09068v5 | cs.CL | new_dataset | 0.994507 | 2207.09068 |
MRCLens: an MRC Dataset Bias Detection Toolkit | Many recent neural models have shown remarkable empirical results in Machine
Reading Comprehension, but evidence suggests that the models sometimes exploit
dataset biases to make predictions and fail to generalize to out-of-sample
data. While many approaches have been proposed to address this issue from
the computational perspective, such as new architectures or training procedures,
we believe a method that allows researchers to discover biases and adjust the
data or the models at an earlier stage will be beneficial. Thus, we introduce
MRCLens, a toolkit that detects whether biases exist before users train the
full model. Alongside the toolkit, we also provide a
categorization of common biases in MRC. | http://arxiv.org/abs/2207.08943v1 | cs.CL | not_new_dataset | 0.992014 | 2207.08943 |
Open High-Resolution Satellite Imagery: The WorldStrat Dataset -- With Application to Super-Resolution | Analyzing the planet at scale with satellite imagery and machine learning is
a dream that has been constantly hindered by the cost of difficult-to-access
highly-representative high-resolution imagery. To remedy this, we introduce
here the WorldStrat dataset, the largest and most varied such publicly
available dataset, featuring Airbus SPOT 6/7 satellite imagery at a high
resolution of up to 1.5 m/pixel. Supported by the European Space Agency's
Phi-Lab as part of the ESA-funded QueryPlanet project, we curate nearly 10,000
sq km of unique locations to ensure stratified representation of all types of
land use across the world: from
agriculture to ice caps, from forests to multiple urbanization densities. We
also enrich those with locations typically under-represented in ML datasets:
sites of humanitarian interest, illegal mining sites, and settlements of
persons at risk. We temporally match each high-resolution image with multiple
low-resolution images from the freely accessible lower-resolution Sentinel-2
satellites at 10 m/pixel. We accompany this dataset with an open-source Python
package to: rebuild or extend the WorldStrat dataset, train and infer baseline
algorithms, and learn with abundant tutorials, all compatible with the popular
EO-learn toolbox. We hope to foster broad-spectrum applications of ML to
satellite imagery, and possibly to develop from free public low-resolution
Sentinel-2 imagery the same power of analysis allowed by costly private
high-resolution imagery. We illustrate this specific point by training and
releasing several highly compute-efficient baselines on the task of Multi-Frame
Super-Resolution. High-resolution Airbus imagery is CC BY-NC, while the labels
and Sentinel-2 imagery are CC BY, and the source code and pre-trained models
under BSD. The dataset is available at https://zenodo.org/record/6810792 and
the software package at https://github.com/worldstrat/worldstrat . | http://arxiv.org/abs/2207.06418v1 | eess.IV | new_dataset | 0.99455 | 2207.06418 |
A Benchmark dataset for predictive maintenance | The paper describes the MetroPT data set, an outcome of a eXplainable
Predictive Maintenance (XPM) project with an urban metro public transportation
service in Porto, Portugal. The data was collected in 2022 that aimed to
evaluate machine learning methods for online anomaly detection and failure
prediction. By capturing several analogic sensor signals (pressure,
temperature, current consumption), digital signals (control signals, discrete
signals), and GPS information (latitude, longitude, and speed), we provide a
dataset that can be easily used to evaluate online machine learning methods.
This dataset exhibits several interesting characteristics and can serve as a good
benchmark for predictive maintenance models. | http://arxiv.org/abs/2207.05466v3 | cs.LG | new_dataset | 0.994462 | 2207.05466 |
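As a hedged illustration of the kind of online anomaly detection this dataset is meant to support, the sketch below flags samples whose causal rolling z-score exceeds a threshold. The column name `pressure` and all thresholds are stand-ins for illustration, not the actual MetroPT schema or the paper's method.

```python
import numpy as np
import pandas as pd

def rolling_zscore_alarms(signal: pd.Series, window: int = 600, thresh: float = 4.0) -> pd.Series:
    """Flag samples deviating more than `thresh` rolling standard deviations
    from the rolling mean. Purely causal: only past samples are used."""
    mean = signal.rolling(window, min_periods=window).mean().shift(1)
    std = signal.rolling(window, min_periods=window).std().shift(1)
    z = (signal - mean) / std
    return z.abs() > thresh

# Hypothetical usage on a synthetic signal with an injected fault.
df = pd.DataFrame({"pressure": np.random.normal(8.0, 0.1, 10_000)})
df.loc[7_000:7_050, "pressure"] += 1.5  # synthetic fault
alarms = rolling_zscore_alarms(df["pressure"])
print(f"{int(alarms.sum())} anomalous samples flagged")
```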
TweetDIS: A Large Twitter Dataset for Natural Disasters Built using Weak Supervision | Social media is often utilized as a lifeline for communication during natural
disasters. Traditionally, natural disaster tweets are filtered from the Twitter
stream using the name of the natural disaster and the filtered tweets are sent
for human annotation. The process of human annotation to create labeled sets
for machine learning models is laborious, time-consuming, at times inaccurate,
and, more importantly, not scalable in terms of size and real-time use. In this
work, we curate a silver standard dataset using weak supervision. In order to
validate its utility, we train machine learning models on the weakly supervised
data to identify three different types of natural disasters, i.e., earthquakes,
hurricanes and floods. Our results demonstrate that models trained on the
silver standard dataset achieved performance greater than 90% when classifying
a manually curated, gold-standard dataset. To enable reproducible research and
additional downstream utility, we release the silver standard dataset for the
scientific community. | http://arxiv.org/abs/2207.04947v1 | cs.CL | new_dataset | 0.994451 | 2207.04947 |
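A minimal sketch of the weak-supervision idea: label tweets with keyword rules and drop ambiguous matches rather than guessing. The rules below are illustrative assumptions, not the paper's actual labeling functions.

```python
import re

# Illustrative keyword rules; the paper's weak-supervision sources may differ.
RULES = {
    "earthquake": re.compile(r"\b(earthquake|seismic|aftershock|magnitude\s*\d)", re.I),
    "hurricane":  re.compile(r"\b(hurricane|storm\s*surge|landfall|category\s*[1-5])", re.I),
    "flood":      re.compile(r"\b(flood(ing|ed)?|flash\s*flood|levee)", re.I),
}

def weak_label(tweet: str):
    """Return the disaster type whose rule fires, or None if zero or
    multiple rules fire (ambiguous tweets are dropped, not guessed)."""
    hits = [label for label, rx in RULES.items() if rx.search(tweet)]
    return hits[0] if len(hits) == 1 else None

tweets = ["Magnitude 6 aftershock felt downtown", "Flash flood warning for the valley"]
print([(t, weak_label(t)) for t in tweets])
```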
Systematic Atomic Structure Datasets for Machine Learning Potentials: Application to Defects in Magnesium | We present a physically motivated strategy for the construction of training
sets for transferable machine learning interatomic potentials. It is based on a
systematic exploration of all possible space groups in random crystal
structures, together with deformations of cell shape, size, and atomic
positions. The resulting potentials turn out to be unbiased and generically
applicable to studies of bulk defects without including any defect structures
in the training set or employing any additional Active Learning. Using this
approach we construct transferable potentials for pure Magnesium that reproduce
the properties of the hexagonal close-packed (hcp) and body-centered cubic (bcc)
polymorphs very well. In the process we investigate how different types of
training structures impact the properties and the predictive power of the
resulting potential. | http://arxiv.org/abs/2207.04009v4 | cond-mat.mtrl-sci | new_dataset | 0.99341 | 2207.04009 |
SC2EGSet: StarCraft II Esport Replay and Game-state Dataset | As a relatively new form of sport, esports offers unparalleled data
availability. Despite the vast amounts of data that are generated by game
engines, it can be challenging to extract them and verify their integrity for
the purposes of practical and scientific use.
Our work aims to open esports to a broader scientific community by supplying
raw and pre-processed files from StarCraft II esports tournaments. These files
can be used in statistical and machine learning modeling tasks and related to
various laboratory-based measurements (e.g., behavioral tests, brain imaging).
We have gathered publicly available game-engine generated "replays" of
tournament matches and performed data extraction and cleanup using a low-level
application programming interface (API) parser library.
Additionally, we open-sourced and published all the custom tools that were
developed in the process of creating our dataset. These tools include PyTorch
and PyTorch Lightning API abstractions to load and model the data.
Our dataset contains replays from major and premiere StarCraft II tournaments
since 2016. To prepare the dataset, we processed 55 tournament "replaypacks"
that contained 17,930 files with game-state information. Based on an initial
investigation of available StarCraft II datasets, we observed that our dataset
is the largest publicly available source of StarCraft II esports data upon its
publication.
Analysis of the extracted data holds promise for further Artificial
Intelligence (AI), Machine Learning (ML), psychological, Human-Computer
Interaction (HCI), and sports-related studies in a variety of supervised and
self-supervised tasks. | http://arxiv.org/abs/2207.03428v2 | cs.LG | new_dataset | 0.994536 | 2207.03428 |
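A minimal sketch of the kind of loader the published PyTorch abstractions provide. The one-JSON-file-per-game layout and the keys `states` and `winner` are assumptions for illustration, not the actual SC2EGSet schema or API.

```python
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset

class ReplayStateDataset(Dataset):
    """Minimal sketch: one JSON file per game, holding a list of per-timestep
    feature vectors plus the match outcome (hypothetical layout)."""

    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("*.json"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        record = json.loads(self.files[idx].read_text())
        states = torch.tensor(record["states"], dtype=torch.float32)  # (T, F)
        label = torch.tensor(record["winner"], dtype=torch.long)
        return states, label

# loader = torch.utils.data.DataLoader(ReplayStateDataset("replays/"), batch_size=8)
```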
A domain-specific language for describing machine learning datasets | Datasets play a central role in the training and evaluation of machine
learning (ML) models. But they are also the root cause of many undesired model
behaviors, such as biased predictions. To overcome this situation, the ML
community is proposing a data-centric cultural shift where data issues are
given the attention they deserve, and more standard practices around the
gathering and processing of datasets start to be discussed and established.
So far, these proposals are mostly high-level guidelines described in natural
language and, as such, they are difficult to formalize and apply to particular
datasets. In this sense, and inspired by these proposals, we define a new
domain-specific language (DSL) to precisely describe machine learning datasets
in terms of their structure, data provenance, and social concerns. We believe
this DSL will facilitate any ML initiative to leverage and benefit from this
data-centric shift in ML (e.g., selecting the most appropriate dataset for a
new project or better replicating other ML results). The DSL is implemented as
a Visual Studio Code plugin, and it has been published under an open source
license. | http://arxiv.org/abs/2207.02848v2 | cs.LG | not_new_dataset | 0.979976 | 2207.02848 |
Shifts 2.0: Extending The Dataset of Real Distributional Shifts | Distributional shift, or the mismatch between training and deployment data,
is a significant obstacle to the use of machine learning in high-stakes
industrial applications, such as autonomous driving and medicine. This creates
a need to be able to assess how robustly ML models generalize as well as the
quality of their uncertainty estimates. Standard ML baseline datasets do not
allow these properties to be assessed, as the training, validation and test
data are often identically distributed. Recently, a range of dedicated
benchmarks have appeared, featuring both distributionally matched and shifted
data. Among these benchmarks, the Shifts dataset stands out in terms of the
diversity of tasks as well as the data modalities it features. While most of
the benchmarks are heavily dominated by 2D image classification tasks, Shifts
contains tabular weather forecasting, machine translation, and vehicle motion
prediction tasks. This enables the robustness properties of models to be
assessed on a diverse set of industrial-scale tasks and either universal or
directly applicable task-specific conclusions to be reached. In this paper, we
extend the Shifts Dataset with two datasets sourced from industrial, high-risk
applications of high societal importance. Specifically, we consider the tasks
of segmentation of white matter Multiple Sclerosis lesions in 3D magnetic
resonance brain images and the estimation of power consumption in marine cargo
vessels. Both tasks feature ubiquitous distributional shifts and a strict
safety requirement due to the high cost of errors. These new datasets will
allow researchers to further explore robust generalization and uncertainty
estimation in new situations. In this work, we provide a description of the
dataset and baseline results for both tasks. | http://arxiv.org/abs/2206.15407v2 | cs.LG | new_dataset | 0.994488 | 2206.15407 |
Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise | Electromyography signals can be used as training data by machine learning
models to classify various gestures. We seek to produce a model that can
classify six different hand gestures from a limited number of samples and that
generalizes well to a wider audience, while comparing the effect of our
feature-extraction results on model accuracy to more conventional methods such
as the use of AR parameters on a sliding window across the channels of a
signal. We appeal to a set of more elementary methods, such as the use of
random bounds on a signal, and aim to show the power these methods can carry
in an online setting where EMG classification is being conducted, as opposed
to more complicated methods such as the Fourier transform. To augment our
limited training data, we used a standard technique, known as jitter, where
random noise is added to each observation in a channel-wise manner. Once all
datasets were produced using the above methods, we performed a grid search with
Random Forest and XGBoost to ultimately create a high-accuracy model. For
human-computer interface purposes, high-accuracy classification of EMG signals
is of particular importance to their functioning. Given the difficulty and cost
of amassing any sort of biomedical data in high volume, it is valuable to have
techniques that can work with a small number of high-quality samples and with
less expensive feature-extraction methods that can reliably be carried out in
an online application. | http://arxiv.org/abs/2206.14947v1 | q-bio.NC | not_new_dataset | 0.991734 | 2206.14947 |
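The jitter augmentation described above is straightforward to sketch: add zero-mean Gaussian noise channel-wise, with each channel's standard deviation drawn at random per the title's "random variance". The sigma range below is an assumption, not the paper's setting.

```python
import numpy as np

def jitter(emg: np.ndarray, sigma_range=(0.01, 0.05), rng=None) -> np.ndarray:
    """Channel-wise jitter: each channel gets zero-mean Gaussian noise with
    its own randomly drawn standard deviation. `emg` is (channels, samples);
    the sigma range is an illustrative assumption, not the paper's values."""
    rng = rng or np.random.default_rng()
    sigmas = rng.uniform(*sigma_range, size=(emg.shape[0], 1))
    return emg + rng.normal(0.0, 1.0, emg.shape) * sigmas

emg = np.random.randn(8, 1000)   # 8 channels, 1000 samples
augmented = jitter(emg)
print(augmented.shape)           # (8, 1000)
```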
ENS-10: A Dataset For Post-Processing Ensemble Weather Forecasts | Post-processing ensemble prediction systems can improve the reliability of
weather forecasting, especially for extreme event prediction. In recent years,
different machine learning models have been developed to improve the quality of
weather post-processing. However, these models require a comprehensive dataset
of weather simulations to produce high-accuracy results, which comes at a high
computational cost to generate. This paper introduces the ENS-10 dataset,
consisting of ten ensemble members spanning 20 years (1998-2017). The ensemble
members are generated by perturbing numerical weather simulations to capture
the chaotic behavior of the Earth. To represent the three-dimensional state of
the atmosphere, ENS-10 provides the most relevant atmospheric variables at 11
distinct pressure levels and the surface at 0.5-degree resolution for forecast
lead times T=0, 24, and 48 hours (two data points per week). We propose the
ENS-10 prediction correction task for improving the forecast quality at a
48-hour lead time through ensemble post-processing. We provide a set of
baselines and compare their skill at correcting the predictions of three
important atmospheric variables. Moreover, we measure the baselines' skill at
improving predictions of extreme weather events using our dataset. The ENS-10
dataset is available under the Creative Commons Attribution 4.0 International
(CC BY 4.0) license. | http://arxiv.org/abs/2206.14786v2 | cs.LG | new_dataset | 0.994513 | 2206.14786 |
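As a hedged stand-in for the paper's baselines, the sketch below fits a per-grid-point linear correction of the ensemble mean against ground truth, one of the simplest possible post-processing schemes. Shapes and data are toy assumptions, not the ENS-10 format.

```python
import numpy as np

def fit_linear_correction(ens: np.ndarray, obs: np.ndarray):
    """Fit y ~ a * ensemble_mean + b independently at every grid point.
    ens: (samples, members, lat, lon); obs: (samples, lat, lon).
    A minimal post-processing baseline, not one of the paper's models."""
    mean = ens.mean(axis=1)                       # (samples, lat, lon)
    mx, my = mean.mean(0), obs.mean(0)
    cov = ((mean - mx) * (obs - my)).mean(0)
    var = ((mean - mx) ** 2).mean(0) + 1e-12
    a = cov / var
    b = my - a * mx
    return a, b

ens = np.random.randn(100, 10, 8, 16)   # toy: 100 samples, 10 members, 8x16 grid
obs = ens.mean(axis=1) * 1.1 + 0.3 + 0.05 * np.random.randn(100, 8, 16)
a, b = fit_linear_correction(ens, obs)
corrected = a * ens.mean(axis=1) + b
print(np.abs(corrected - obs).mean())   # error after correction
```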
CoAP-DoS: An IoT Network Intrusion Dataset | The need for secure Internet of Things (IoT) devices is growing as IoT
devices are becoming more integrated into vital networks. Many systems rely on
these devices to remain available and provide reliable service.
Denial-of-service attacks against IoT devices are a real threat because these
low-power devices are particularly susceptible to them. Machine
learning enabled network intrusion detection systems are effective at
identifying new threats, but they require a large amount of data to work well.
There are many network traffic data sets but very few that focus on IoT network
traffic. Within the IoT network data sets there is a lack of CoAP denial of
service data. We propose a novel data set to fill this gap, developed by
collecting network traffic from real CoAP denial-of-service attacks, and
compare the data across multiple machine learning classifiers. We
show that the data set is effective on many classifiers. | http://arxiv.org/abs/2206.14341v1 | cs.CR | new_dataset | 0.994392 | 2206.14341 |
Evaluating resampling methods on a real-life highly imbalanced online credit card payments dataset | Various problems of any credit card fraud detection based on machine learning
come from the imbalanced aspect of transaction datasets. Indeed, the number of
frauds compared to the number of regular transactions is tiny and has been
shown to damage learning performances, e.g., at worst, the algorithm can learn
to classify all the transactions as regular. Resampling methods and
cost-sensitive approaches are known to be good candidates for addressing this
issue of imbalanced datasets. This paper evaluates numerous state-of-the-art
resampling methods on a large real-life online credit card payments dataset. We
show that they are ineffective, either because the methods are intractable or
because the metrics exhibit no substantial improvement. Our work contributes to
this domain in
(1) that we compare many state-of-the-art resampling methods on a large-scale
dataset and in (2) that we use a real-life online credit card payments dataset. | http://arxiv.org/abs/2206.13152v1 | cs.LG | not_new_dataset | 0.992084 | 2206.13152 |
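A minimal sketch of this kind of evaluation on a synthetic imbalanced stand-in (the real study uses a large proprietary payments dataset and many more methods); average precision is used because accuracy is uninformative at 1% positives.

```python
from imblearn.over_sampling import SMOTE, RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a payments dataset: ~1% positive (fraud) class.
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("none", None), ("SMOTE", SMOTE(random_state=0)),
                      ("random", RandomOverSampler(random_state=0))]:
    Xr, yr = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xr, yr)
    ap = average_precision_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:>6}: AP={ap:.3f}")  # area under PR curve suits imbalance
```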
Multi Visual Modality Fall Detection Dataset | Falls are one of the leading cause of injury-related deaths among the elderly
worldwide. Effective detection of falls can reduce the risk of complications
and injuries. Fall detection can be performed using wearable devices or ambient
sensors; these methods may struggle with user compliance issues or false
alarms. Video cameras provide a passive alternative; however, regular RGB
cameras are impacted by changing lighting conditions and privacy concerns. From
a machine learning perspective, developing an effective fall detection system
is challenging because of the rarity and variability of falls. Many existing
fall detection datasets lack important real-world considerations, such as
varied lighting, continuous activities of daily living (ADLs), and camera
placement. The lack of these considerations makes it difficult to develop
predictive models that can operate effectively in the real world. To address
these limitations, we introduce a novel multi-modality dataset (MUVIM) that
contains four visual modalities: infra-red, depth, RGB and thermal cameras.
These modalities offer benefits such as obfuscated facial features and improved
performance in low-light conditions. We formulated fall detection as an anomaly
detection problem, in which a customized spatio-temporal convolutional
autoencoder was trained only on ADLs so that a fall would increase the
reconstruction error. Our results showed that infra-red cameras provided the
highest level of performance (AUC ROC=0.94), followed by thermal (AUC
ROC=0.87), depth (AUC ROC=0.86) and RGB (AUC ROC=0.83). This research provides
a unique opportunity to analyze the utility of camera modalities in detecting
falls in a home setting while balancing performance, passiveness, and privacy. | http://arxiv.org/abs/2206.12740v1 | cs.CV | new_dataset | 0.994479 | 2206.12740 |
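A minimal sketch of the anomaly-detection formulation: an autoencoder trained only on ADL frames scores new frames by reconstruction error, so falls should score high. The tiny architecture below is a toy stand-in, not the paper's customized spatio-temporal convolutional autoencoder.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Toy per-frame autoencoder; a stand-in for the paper's model."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

@torch.no_grad()
def fall_score(model: TinyAE, frames: torch.Tensor) -> torch.Tensor:
    """Per-frame reconstruction MSE; the model is trained on ADLs only,
    so falls reconstruct poorly and receive high scores."""
    recon = model(frames)
    return ((recon - frames) ** 2).mean(dim=(1, 2, 3))

frames = torch.rand(4, 1, 64, 64)  # e.g., a batch of infra-red frames
print(fall_score(TinyAE(), frames))
```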
The ArtBench Dataset: Benchmarking Generative Models with Artworks | We introduce ArtBench-10, the first class-balanced, high-quality, cleanly
annotated, and standardized dataset for benchmarking artwork generation. It
comprises 60,000 images of artwork from 10 distinctive artistic styles, with
5,000 training images and 1,000 testing images per style. ArtBench-10 has
several advantages over previous artwork datasets. Firstly, it is
class-balanced while most previous artwork datasets suffer from the long tail
class distributions. Secondly, the images are of high quality with clean
annotations. Thirdly, ArtBench-10 is created with standardized data collection,
annotation, filtering, and preprocessing procedures. We provide three versions
of the dataset with different resolutions ($32\times32$, $256\times256$, and
original image size), formatted in a way that is easy to incorporate into
popular machine learning frameworks. We also conduct extensive benchmarking
experiments using representative image synthesis models with ArtBench-10 and
present in-depth analysis. The dataset is available at
https://github.com/liaopeiyuan/artbench under a Fair Use license. | http://arxiv.org/abs/2206.11404v1 | cs.CV | new_dataset | 0.994519 | 2206.11404 |
Hyperparameter Importance of Quantum Neural Networks Across Small Datasets | As restricted quantum computers are slowly becoming a reality, the search for
meaningful first applications intensifies. In this domain, one of the more
investigated approaches is the use of a special type of quantum circuit -- a
so-called quantum neural network -- to serve as a basis for a machine learning
model. Roughly speaking, as the name suggests, a quantum neural network can
play a similar role to a neural network. However, specifically for applications
in machine learning contexts, very little is known about suitable circuit
architectures, or model hyperparameters one should use to achieve good learning
performance. In this work, we apply the functional ANOVA framework to quantum
neural networks to analyze which of the hyperparameters were most influential
for their predictive performance. We analyze one of the most typically used
quantum neural network architectures. We then apply this to $7$ open-source
datasets from the OpenML-CC18 classification benchmark whose number of features
is small enough to fit on quantum hardware with less than $20$ qubits. Three
main levels of importance were detected from the ranking of hyperparameters
obtained with functional ANOVA. Our experiment both confirmed expected patterns
and revealed new insights. For instance, setting the learning rate well is
deemed the most critical hyperparameter in terms of marginal contribution on
all datasets, whereas the particular choice of entangling gates used is
considered the least important except on one dataset. This work introduces new
methodologies to study quantum machine learning models and provides new
insights toward quantum model selection. | http://arxiv.org/abs/2206.09992v1 | quant-ph | not_new_dataset | 0.992209 | 2206.09992 |
ConvGeN: Convex space learning improves deep-generative oversampling for tabular imbalanced classification on smaller datasets | Data is commonly stored in tabular format. Several fields of research are
prone to small imbalanced tabular data. Supervised Machine Learning on such
data is often difficult due to class imbalance. Synthetic data generation,
i.e., oversampling, is a common remedy used to improve classifier performance.
State-of-the-art linear interpolation approaches, such as LoRAS and ProWRAS can
be used to generate synthetic samples from the convex space of the minority
class to improve classifier performance in such cases. Deep generative networks
are common deep learning approaches for synthetic sample generation, widely
used for synthetic image generation. However, their potential for synthetic
tabular data generation in the context of imbalanced classification has not
been adequately explored. In this article, we show that existing deep
generative models perform
poorly compared to linear interpolation based approaches for imbalanced
classification problems on smaller tabular datasets. To overcome this, we
propose a deep generative model, ConvGeN, that combines the idea of convex space
learning with deep generative models. ConvGeN learns the coefficients for the
convex combinations of the minority class samples, such that the synthetic data
is distinct enough from the majority class. Our benchmarking experiments
demonstrate that our proposed model ConvGeN improves imbalanced classification
on such small datasets compared to existing deep generative models, while
being on par with the existing linear interpolation approaches. Moreover, we
discuss how our model can be used for synthetic tabular data generation in
general, even outside the scope of data imbalance and thus, improves the
overall applicability of convex space learning. | http://arxiv.org/abs/2206.09812v2 | cs.LG | not_new_dataset | 0.991775 | 2206.09812 |
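The core idea of sampling from the minority class's convex space can be illustrated in a few lines. Note the simplification: ConvGeN *learns* the combination coefficients, whereas this sketch just draws random Dirichlet weights over random k-subsets of minority points.

```python
import numpy as np

def convex_oversample(X_min: np.ndarray, n_new: int, k: int = 5, rng=None) -> np.ndarray:
    """Generate synthetic minority samples as convex combinations of k
    minority points. The Dirichlet draws stand in for ConvGeN's learned
    coefficients; weights are non-negative and sum to 1 by construction."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(X_min), size=(n_new, k))   # random k-subsets
    w = rng.dirichlet(np.ones(k), size=n_new)            # convex weights
    return np.einsum("nk,nkd->nd", w, X_min[idx])

X_min = np.random.randn(30, 4)           # 30 minority samples, 4 features
synth = convex_oversample(X_min, n_new=100)
print(synth.shape)                        # (100, 4)
```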
The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysts | The development of machine learning models for electrocatalysts requires a
broad set of training data to enable their use across a wide variety of
materials. One class of materials that currently lacks sufficient training data
is oxides, which are critical for the development of oxygen evolution reaction
(OER) catalysts. To address
this, we developed the OC22 dataset, consisting of 62,331 DFT relaxations
(~9,854,504 single point calculations) across a range of oxide materials,
coverages, and adsorbates. We define generalized total energy tasks that enable
property prediction beyond adsorption energies; we test baseline performance of
several graph neural networks; and we provide pre-defined dataset splits to
establish clear benchmarks for future efforts. In the most general task,
GemNet-OC sees a ~36% improvement in energy predictions when combining the
chemically dissimilar OC20 and OC22 datasets via fine-tuning. Similarly, we
achieved a ~19% improvement in total energy predictions on OC20 and a ~9%
improvement in force predictions in OC22 when using joint training. We
demonstrate the practical utility of a top performing model by capturing
literature adsorption energies and important OER scaling relationships. We
expect OC22 to provide an important benchmark for models seeking to incorporate
intricate long-range electrostatic and magnetic interactions in oxide surfaces.
Dataset and baseline models are open sourced, and a public leaderboard is
available to encourage continued community developments on the total energy
tasks and data. | http://arxiv.org/abs/2206.08917v3 | cond-mat.mtrl-sci | new_dataset | 0.99451 | 2206.08917 |
The ITU Faroese Pairs Dataset | This article documents a dataset of sentence pairs between Faroese and
Danish, produced at ITU Copenhagen. The data covers translation from both
source languages, and is intended for use as training data for machine
translation systems in this language pair. | http://arxiv.org/abs/2206.08727v1 | cs.CL | new_dataset | 0.994252 | 2206.08727 |
Classification of datasets with imputed missing values: does imputation quality matter? | Classifying samples in incomplete datasets is a common aim for machine
learning practitioners, but is non-trivial. Missing data is found in most
real-world datasets and these missing values are typically imputed using
established methods, followed by classification of the now-complete, imputed
samples. The focus of the machine learning researcher is then to optimise the
downstream classification performance. In this study, we highlight that it is
imperative to consider the quality of the imputation. We demonstrate how the
commonly used measures for assessing quality are flawed and propose a new class
of discrepancy scores which focus on how well the method recreates the overall
distribution of the data. To conclude, we highlight the compromised
interpretability of classifier models trained using poorly imputed data. | http://arxiv.org/abs/2206.08478v1 | cs.LG | not_new_dataset | 0.992067 | 2206.08478 |
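The paper's discrepancy scores ask how well imputation recreates the overall data distribution. As a hedged stand-in for the proposed scores, the sketch below compares the marginal distribution of imputed values against the observed values of each feature with a Kolmogorov-Smirnov statistic.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(size=(1000, 5))
X_miss = X_true.copy()
X_miss[rng.random(X_miss.shape) < 0.3] = np.nan   # 30% missing at random

for name, imp in [("mean", SimpleImputer()), ("knn", KNNImputer(n_neighbors=5))]:
    X_imp = imp.fit_transform(X_miss)
    # Distribution-level discrepancy per feature: KS distance between
    # imputed values and the values that were actually observed.
    scores = []
    for j in range(X_true.shape[1]):
        mask = np.isnan(X_miss[:, j])
        scores.append(ks_2samp(X_imp[mask, j], X_miss[~mask, j]).statistic)
    print(f"{name}: mean KS discrepancy = {np.mean(scores):.3f}")
```

Mean imputation collapses all imputed values to a single point and so scores poorly on this kind of measure even when its per-value error looks acceptable, which is the flaw the paper highlights in commonly used quality measures.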
XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence | Recent advances in machine learning have significantly improved the
understanding of source code data and achieved good performance on a number of
downstream tasks. Open source repositories like GitHub enable this process with
rich unlabeled code data. However, the lack of high quality labeled data has
largely hindered the progress of several code related tasks, such as program
translation, summarization, synthesis, and code search. This paper introduces
XLCoST, Cross-Lingual Code SnippeT dataset, a new benchmark dataset for
cross-lingual code intelligence. Our dataset contains fine-grained parallel
data from 8 languages (7 commonly used programming languages and English), and
supports 10 cross-lingual code tasks. To the best of our knowledge, it is the
largest parallel dataset for source code both in terms of size and the number
of languages. We also provide the performance of several state-of-the-art
baseline models for each task. We believe this new dataset can be a valuable
asset for the research community and facilitate the development and validation
of new methods for cross-lingual code intelligence. | http://arxiv.org/abs/2206.08474v1 | cs.SE | new_dataset | 0.994513 | 2206.08474 |
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search | Improving the quality of search results can significantly enhance users
experience and engagement with search engines. In spite of several recent
advancements in the fields of machine learning and data mining, correctly
classifying items for a particular user search query has been a long-standing
challenge, which still has a large room for improvement. This paper introduces
the "Shopping Queries Dataset", a large dataset of difficult Amazon search
queries and results, publicly released with the aim of fostering research in
improving the quality of search results. The dataset contains around 130
thousand unique queries and 2.6 million manually labeled (query, product)
relevance judgements. The dataset is multilingual with queries in English,
Japanese, and Spanish. The Shopping Queries Dataset is being used in one of the
KDDCup'22 challenges. In this paper, we describe the dataset and present three
evaluation tasks along with baseline results: (i) ranking the results list,
(ii) classifying product results into relevance categories, and (iii)
identifying substitute products for a given query. We anticipate that this data
will become the gold standard for future research in the topic of product
search. | http://arxiv.org/abs/2206.06588v1 | cs.IR | new_dataset | 0.994427 | 2206.06588 |
Anomaly Detection and Inter-Sensor Transfer Learning on Smart Manufacturing Datasets | Smart manufacturing systems are being deployed at a growing rate because of
their ability to interpret a wide variety of sensed information and act on the
knowledge gleaned from system observations. In many cases, the principal goal
of the smart manufacturing system is to rapidly detect (or anticipate) failures
to reduce operational cost and eliminate downtime. This often boils down to
detecting anomalies within the sensor data acquired from the system. The smart
manufacturing application domain poses certain salient technical challenges. In
particular, there are often multiple types of sensors with varying capabilities
and costs. The sensor data characteristics change with the operating point of
the environment or machines, such as the RPM of the motor. The anomaly
detection process therefore has to be calibrated near an operating point. In
this paper, we analyze four datasets from sensors deployed from manufacturing
testbeds. We evaluate the performance of several traditional and ML-based
forecasting models for predicting the time series of sensor data. Then,
considering the sparse data from one kind of sensor, we perform transfer
learning from a high data rate sensor to perform defect type classification.
Taken together, we show that predictive failure classification can be achieved,
thus paving the way for predictive maintenance. | http://arxiv.org/abs/2206.06355v1 | cs.LG | not_new_dataset | 0.992153 | 2206.06355 |
A universal synthetic dataset for machine learning on spectroscopic data | To assist in the development of machine learning methods for automated
classification of spectroscopic data, we have generated a universal synthetic
dataset that can be used for model validation. This dataset contains artificial
spectra designed to represent experimental measurements from techniques
including X-ray diffraction, nuclear magnetic resonance, and Raman
spectroscopy. The dataset generation process features customizable parameters,
such as scan length and peak count, which can be adjusted to fit the problem at
hand. As an initial benchmark, we simulated a dataset containing 35,000 spectra
based on 500 unique classes. To automate the classification of this data, eight
different machine learning architectures were evaluated. From the results, we
shed light on which factors are most critical to achieve optimal performance
for the classification task. The scripts used to generate synthetic spectra, as
well as our benchmark dataset and evaluation routines, are made publicly
available to aid in the development of improved machine learning models for
spectroscopic analysis. | http://arxiv.org/abs/2206.06031v2 | cs.LG | new_dataset | 0.994483 | 2206.06031 |
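A toy version of the described generator: each spectrum is a sum of Gaussian peaks with random positions, widths, and heights, plus noise, with scan length and peak count as the customizable parameters. The specific ranges below are assumptions, not the published defaults.

```python
import numpy as np

def synth_spectrum(n_points=1000, n_peaks=5, noise=0.01, rng=None):
    """Generate one synthetic spectrum: a sum of Gaussian peaks with random
    positions, widths, and heights, plus Gaussian noise. The parameter
    ranges are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, 1.0, n_points)
    y = np.zeros(n_points)
    for _ in range(n_peaks):
        center = rng.uniform(0.0, 1.0)
        width = rng.uniform(0.002, 0.02)
        height = rng.uniform(0.2, 1.0)
        y += height * np.exp(-0.5 * ((x - center) / width) ** 2)
    return x, y + rng.normal(0.0, noise, n_points)

# A class could be defined by fixed peak positions, with widths, heights,
# and noise re-drawn per sample, mirroring the 500-class benchmark.
x, y = synth_spectrum()
print(x.shape, y.shape)
```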
CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation | Human annotated data plays a crucial role in machine learning (ML) research
and development. However, the ethical considerations around the processes and
decisions that go into dataset annotation have not received nearly enough
attention. In this paper, we survey an array of literature that provides
insights into ethical considerations around crowdsourced dataset annotation. We
synthesize these insights, and lay out the challenges in this space along two
layers: (1) who the annotator is, and how the annotators' lived experiences can
impact their annotations, and (2) the relationship between the annotators and
the crowdsourcing platforms, and what that relationship affords them. Finally,
we introduce a novel framework, CrowdWorkSheets, for dataset developers to
facilitate transparent documentation of key decision points at various stages
of the data annotation pipeline: task formulation, selection of annotators,
platform and infrastructure choices, dataset analysis and evaluation, and
dataset release and maintenance. | http://arxiv.org/abs/2206.08931v1 | cs.HC | not_new_dataset | 0.992289 | 2206.08931 |
Uncovering bias in the PlantVillage dataset | We report our investigation on the use of the popular PlantVillage dataset
for training deep learning based plant disease detection models. We trained a
machine learning model using only 8 pixels from the PlantVillage image
backgrounds. The model achieved 49.0% accuracy on the held-out test set, well
above the random guessing accuracy of 2.6%. This result indicates that the
PlantVillage dataset contains noise correlated with the labels and deep
learning models can easily exploit this bias to make predictions. Possible
approaches to alleviate this problem are discussed. | http://arxiv.org/abs/2206.04374v1 | cs.CV | not_new_dataset | 0.992128 | 2206.04374 |
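The background-pixel probe can be reproduced in spirit with a few lines: take a handful of fixed pixel locations near the image borders and fit a linear classifier. The eight coordinates below (and the placeholder data) are assumptions; the paper's exact pixel positions are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# images: (N, H, W, 3) uint8, labels: (N,) class ids -- load PlantVillage here.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(2000, 64, 64, 3), dtype=np.uint8)  # placeholder
labels = rng.integers(0, 38, size=2000)

# 8 fixed locations near the borders; illustrative assumptions only.
coords = [(2, 2), (2, 61), (61, 2), (61, 61), (2, 32), (61, 32), (32, 2), (32, 61)]
X = np.stack([images[:, r, c, :] for r, c in coords], axis=1).reshape(len(images), -1) / 255.0

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"background-pixel probe accuracy: {probe.score(X_te, y_te):.3f}")
# On this random placeholder the probe stays near chance; on PlantVillage the
# paper reports 49.0% versus a 2.6% random-guessing baseline, exposing the bias.
```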
COVIDx CXR-3: A Large-Scale, Open-Source Benchmark Dataset of Chest X-ray Images for Computer-Aided COVID-19 Diagnostics | After more than two years since the beginning of the COVID-19 pandemic, the
pressure of this crisis continues to devastate globally. The use of chest X-ray
(CXR) imaging as a complementary screening strategy to RT-PCR testing is not
only prevailing but has greatly increased due to its routine clinical use for
respiratory complaints. Thus far, many visual perception models have been
proposed for COVID-19 screening based on CXR imaging. Nevertheless, the
accuracy and the generalization capacity of these models are very much
dependent on the diversity and the size of the dataset they were trained on.
Motivated by this, we introduce COVIDx CXR-3, a large-scale benchmark dataset
of CXR images for supporting COVID-19 computer vision research. COVIDx CXR-3 is
composed of 30,386 CXR images from a multinational cohort of 17,026 patients
from at least 51 countries, making it, to the best of our knowledge, the most
extensive, most diverse COVID-19 CXR dataset in open access form. Here, we
provide comprehensive details on the various aspects of the proposed dataset
including patient demographics, imaging views, and infection types. The hope is
that COVIDx CXR-3 can assist scientists in advancing machine learning research
against both the COVID-19 pandemic and related diseases. | http://arxiv.org/abs/2206.03671v3 | eess.IV | new_dataset | 0.9945 | 2206.03671 |
Network Report: A Structured Description for Network Datasets | The rapid development of network science and technologies depends on
shareable datasets. Currently, there is no standard practice for reporting and
sharing network datasets. Some network dataset providers only share links,
while others provide some contexts or basic statistics. As a result, critical
information may be unintentionally dropped, and network dataset consumers may
misunderstand or overlook critical aspects. Inappropriately using a network
dataset can lead to severe consequences (e.g., discrimination) especially when
machine learning models on networks are deployed in high-stake domains.
Challenges arise as networks are often used across different domains (e.g.,
network science, physics) and have complex structures. To facilitate the
communication between network dataset providers and consumers, we propose
network report. A network report is a structured description that summarizes
and contextualizes a network dataset. A network report extends the idea of
dataset reports (e.g., Datasheets for Datasets) from prior work with
network-specific descriptions of the non-i.i.d. nature, demographic
information, network characteristics, etc. We hope network reports encourage
transparency and accountability in network research and development across
different fields. | http://arxiv.org/abs/2206.03635v1 | cs.SI | not_new_dataset | 0.991656 | 2206.03635 |
The Influence of Dataset Partitioning on Dysfluency Detection Systems | This paper empirically investigates the influence of different data splits
and splitting strategies on the performance of dysfluency detection systems.
For this, we perform experiments using wav2vec 2.0 models with a classification
head as well as support vector machines (SVM) in conjunction with the features
extracted from the wav2vec 2.0 model to detect dysfluencies. We train and
evaluate the systems with different non-speaker-exclusive and speaker-exclusive
splits of the Stuttering Events in Podcasts (SEP-28k) dataset to shed some
light on the variability of results w.r.t. the partition method used.
Furthermore, we show that the SEP-28k dataset is dominated by only a few
speakers, making it difficult to evaluate. To remedy this problem, we created
SEP-28k-Extended (SEP-28k-E), containing semi-automatically generated speaker
and gender information for the SEP-28k corpus, and suggest different data
splits, each useful for evaluating other aspects of methods for dysfluency
detection. | http://arxiv.org/abs/2206.03400v1 | eess.AS | not_new_dataset | 0.991708 | 2206.03400 |
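A speaker-exclusive split of the kind discussed above keeps every speaker entirely in either train or test, so a model cannot shortcut by memorizing speaker identity. A minimal sketch with synthetic ids (scikit-learn's GroupShuffleSplit; not the SEP-28k-E split definitions themselves):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# X: one feature row per clip; speaker_ids: one id per clip (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
speaker_ids = rng.integers(0, 25, size=500)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=speaker_ids))
assert not set(speaker_ids[train_idx]) & set(speaker_ids[test_idx])
print(f"{len(train_idx)} train clips, {len(test_idx)} test clips, disjoint speakers")
```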
COVIDx CT-3: A Large-scale, Multinational, Open-Source Benchmark Dataset for Computer-aided COVID-19 Screening from Chest CT Images | Computed tomography (CT) has been widely explored as a COVID-19 screening and
assessment tool to complement RT-PCR testing. To assist radiologists with
CT-based COVID-19 screening, a number of computer-aided systems have been
proposed. However, many proposed systems are built using CT data which is
limited in both quantity and diversity. Motivated to support efforts in the
development of machine learning-driven screening systems, we introduce COVIDx
CT-3, a large-scale multinational benchmark dataset for detection of COVID-19
cases from chest CT images. COVIDx CT-3 includes 431,205 CT slices from 6,068
patients across at least 17 countries, which to the best of our knowledge
represents the largest, most diverse dataset of COVID-19 CT images in
open-access form. Additionally, we examine the data diversity and potential
biases of the COVIDx CT-3 dataset, finding that significant geographic and
class imbalances remain despite efforts to curate data from a wide variety of
sources. | http://arxiv.org/abs/2206.03043v3 | eess.IV | new_dataset | 0.994497 | 2206.03043 |
MorisienMT: A Dataset for Mauritian Creole Machine Translation | In this paper, we describe MorisienMT, a dataset for benchmarking machine
translation quality of Mauritian Creole. Mauritian Creole (Morisien) is the
lingua franca of the Republic of Mauritius and is a French-based creole
language. MorisienMT consists of parallel corpora between English and
Morisien and between French and Morisien, as well as a monolingual corpus for
Morisien. We first
give an overview of Morisien and then describe the steps taken to create the
corpora and, from it, the training and evaluation splits. Thereafter, we
establish a variety of baseline models using the created parallel corpora as
well as large French--English corpora for transfer learning. We release our
datasets publicly for research purposes and hope that this spurs research for
Morisien machine translation. | http://arxiv.org/abs/2206.02421v1 | cs.CL | new_dataset | 0.994415 | 2206.02421 |
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets | Application of interpretable machine learning techniques on medical datasets
facilitate early and fast diagnoses, along with getting deeper insight into the
data. Furthermore, the transparency of these models increase trust among
application domain experts. Medical datasets face common issues such as
heterogeneous measurements, imbalanced classes with limited sample size, and
missing data, which hinder the straightforward application of machine learning
techniques. In this paper we present a family of prototype-based (PB)
interpretable models which are capable of handling these issues. The models
introduced in this contribution show comparable or superior performance to
alternative techniques applicable in such situations. However, unlike
ensemble-based models, the PB models here do not have to compromise on easy
interpretation. Moreover, we propose a strategy of harnessing the power of
ensembles while maintaining the intrinsic interpretability of the PB models, by
averaging the model parameter manifolds. All the models were evaluated on a
synthetic, publicly available dataset, in addition to detailed analyses of two
real-world medical datasets (one publicly available). Results indicated that
the models and strategies we introduced addressed the challenges of real-world
medical data, while remaining computationally inexpensive and transparent, as
well as similar or superior in performance compared to their alternatives. | http://arxiv.org/abs/2206.02056v1 | cs.LG | not_new_dataset | 0.992247 | 2206.02056 |
Functional Connectivity Methods for EEG-based Biometrics on a Large, Heterogeneous Dataset | This study examines the utility of functional connectivity (FC) and
graph-based (GB) measures with a support vector machine classifier for use in
electroencephalogram (EEG) based biometrics. Although FC-based features have
been used in biometric applications, studies assessing the identification
algorithms on heterogeneous and large datasets are scarce. This work
investigates the performance of FC and GB metrics on a dataset of 184 subjects
formed by pooling three datasets recorded under different protocols and
acquisition systems. The results demonstrate the higher discriminatory power of
FC than GB metrics. The identification accuracy increases with higher frequency
EEG bands, indicating the enhanced uniqueness of the neural signatures in beta
and gamma bands. Using all the 56 EEG channels common to the three databases,
the best identification accuracy of 97.4% is obtained using phase-locking value
(PLV) based measures extracted from the gamma frequency band. Further, we
investigate the effect of the length of the analysis epoch to determine the
data acquisition time required to obtain satisfactory identification accuracy.
When the number of channels is reduced from 56 to 21, there is only a marginal
reduction of 2.4% in the identification accuracy using PLV features in the
gamma band. Additional experiments have been conducted to study the effect of
the cognitive state of the subject and mismatched train/test conditions on the
performance of the system. | http://arxiv.org/abs/2206.01475v1 | eess.SP | not_new_dataset | 0.992031 | 2206.01475 |
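The phase-locking value between two channels is the magnitude of the time-averaged phase-difference phasor. A minimal sketch via the Hilbert transform (the band-pass filtering to, e.g., the gamma band that precedes this in practice is omitted for brevity):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value of two equally long 1-D signals:
    PLV = | mean_t exp(i * (phi_x(t) - phi_y(t))) |, in [0, 1].
    Signals should be band-pass filtered to the band of interest first."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

t = np.linspace(0, 1, 1000)
a = np.sin(2 * np.pi * 40 * t)           # 40 Hz oscillation
b = np.sin(2 * np.pi * 40 * t + 0.5)     # same frequency, fixed phase lag
print(plv(a, b))  # near 1: phases are locked despite the constant offset
# A full FC feature vector stacks PLV over all channel pairs and bands.
```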
BD-SHS: A Benchmark Dataset for Learning to Detect Online Bangla Hate Speech in Different Social Contexts | Social media platforms and online streaming services have spawned a new breed
of Hate Speech (HS). Due to the massive amount of user-generated content on
these sites, modern machine learning techniques are found to be feasible and
cost-effective to tackle this problem. However, linguistically diverse datasets
covering different social contexts in which offensive language is typically
used are required to train generalizable models. In this paper, we identify the
shortcomings of existing Bangla HS datasets and introduce a large manually
labeled dataset BD-SHS that includes HS in different social contexts. The
labeling criteria were prepared following a hierarchical annotation process,
which is the first of its kind in Bangla HS to the best of our knowledge. The
dataset includes more than 50,200 offensive comments crawled from online social
networking sites and is at least 60% larger than any existing Bangla HS
dataset. We present benchmark results for our dataset by training different
NLP models, with the best achieving an F1-score of 91.0%. In our
experiments, we found that a word embedding trained exclusively using 1.47
million comments from social media and streaming sites consistently resulted in
better modeling of HS detection in comparison to other pre-trained embeddings.
Our dataset and all accompanying code are publicly available at
github.com/naurosromim/hate-speech-dataset-for-Bengali-social-media | http://arxiv.org/abs/2206.00372v1 | cs.CL | new_dataset | 0.994455 | 2206.00372 |
Privacy for Free: How does Dataset Condensation Help Privacy? | To prevent unintentional data leakage, research community has resorted to
data generators that can produce differentially private data for model
training. However, for the sake of data privacy, existing solutions suffer
from either expensive training costs or poor generalization performance.
Therefore, we raise the question of whether training efficiency and privacy can
be achieved simultaneously. In this work, we for the first time identify that
dataset condensation (DC), originally designed for improving training
efficiency, is also a better solution to replace the traditional data
generators for private data generation, thus providing privacy for free. To
demonstrate
the privacy benefit of DC, we build a connection between DC and differential
privacy, and theoretically prove on linear feature extractors (and then
extended to non-linear feature extractors) that the existence of one sample has
limited impact ($O(m/n)$) on the parameter distribution of networks trained on
$m$ samples synthesized from $n$ ($n \gg m$) raw samples by DC. We also
empirically validate the visual privacy and membership privacy of
DC-synthesized data by launching both the loss-based and the state-of-the-art
likelihood-based membership inference attacks. We envision this work as a
milestone for data-efficient and privacy-preserving machine learning. | http://arxiv.org/abs/2206.00240v1 | cs.CR | not_new_dataset | 0.992161 | 2206.00240 |
NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages | Natural language processing (NLP) has a significant impact on society via
technologies such as machine translation and search engines. Despite its
success, NLP technology is only widely available for high-resource languages
such as English and Chinese, while it remains inaccessible to many languages
due to the unavailability of data resources and benchmarks. In this work, we
focus on developing resources for languages in Indonesia. Despite Indonesia
being the second most linguistically diverse country, most of its languages are
categorized as endangered and some are even extinct. We develop the first-ever
parallel resource for 10 low-resource languages in Indonesia. Our resource
includes datasets, a multi-task benchmark, and lexicons, as well as a parallel
Indonesian-English dataset. We provide extensive analyses and describe the
challenges when creating such resources. We hope that our work can spark NLP
research on Indonesian and other underrepresented languages. | http://arxiv.org/abs/2205.15960v2 | cs.CL | new_dataset | 0.994441 | 2205.15960 |
Dataset Condensation via Efficient Synthetic-Data Parameterization | The great success of machine learning with massive amounts of data comes at a
price of huge computation costs and storage for training and tuning. Recent
studies on dataset condensation attempt to reduce the dependence on such
massive data by synthesizing a compact training dataset. However, the existing
approaches have fundamental limitations in optimization due to the limited
representability of synthetic datasets without considering any data regularity
characteristics. To this end, we propose a novel condensation framework that
generates multiple synthetic data with a limited storage budget via efficient
parameterization considering data regularity. We further analyze the
shortcomings of the existing gradient matching-based condensation methods and
develop an effective optimization technique for improving the condensation of
training data information. We propose a unified algorithm that drastically
improves the quality of condensed data against the current state-of-the-art on
CIFAR-10, ImageNet, and Speech Commands. | http://arxiv.org/abs/2205.14959v2 | cs.LG | not_new_dataset | 0.99187 | 2205.14959 |
BAN-Cap: A Multi-Purpose English-Bangla Image Descriptions Dataset | As computers have become efficient at understanding visual information and
transforming it into a written representation, research interest in tasks like
automatic image captioning has seen a significant leap over the last few years.
While most of the research attention is given to the English language in a
monolingual setting, resource-constrained languages like Bangla remain out of
focus, predominantly due to a lack of standard datasets. Addressing this issue,
we present a new dataset, BAN-Cap, following the widely used Flickr8k dataset,
where we collect Bangla captions of the images provided by qualified
annotators. Our dataset represents a wider variety of image caption styles
annotated by trained people from different backgrounds. We present a
quantitative and qualitative analysis of the dataset and the baseline
evaluation of the recent models in Bangla image captioning. We investigate the
effect of text augmentation and demonstrate that an adaptive attention-based
model combined with text augmentation using Contextualized Word Replacement
(CWR) outperforms all state-of-the-art models for Bangla image captioning. We
also demonstrate this dataset's multipurpose nature, especially for machine
translation between Bangla and English. This dataset and all the
models will be useful for further research. | http://arxiv.org/abs/2205.14462v1 | cs.CL | new_dataset | 0.994511 | 2205.14462 |
MIMII DG: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection for Domain Generalization Task | We present a machine sound dataset to benchmark domain generalization
techniques for anomalous sound detection (ASD). Domain shifts are differences
in data distributions that can degrade the detection performance, and handling
them is a major issue for the application of ASD systems. While currently
available datasets for ASD tasks assume that occurrences of domain shifts are
known, in practice, they can be difficult to detect. To handle such domain
shifts, domain generalization techniques that perform well regardless of the
domains should be investigated. In this paper, we present the first ASD dataset
for the domain generalization techniques, called MIMII DG. The dataset consists
of five machine types and three domain shift scenarios for each machine type.
The dataset is dedicated to the domain generalization task with features such
as multiple different values for parameters that cause domain shifts and
introduction of domain shifts that can be difficult to detect, such as shifts
in the background noise. Experimental results using two baseline systems
indicate that the dataset reproduces domain shift scenarios and is useful for
benchmarking domain generalization techniques. | http://arxiv.org/abs/2205.13879v2 | cs.SD | new_dataset | 0.994458 | 2205.13879 |
A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition | Human Activity Recognition (HAR) has recently received remarkable attention
in numerous applications such as assisted living and remote monitoring.
Existing solutions based on sensors and vision technologies have achieved
notable results but still suffer from considerable limitations in their
environmental requirements. Wireless sensing based on signals such as WiFi has
emerged as a new paradigm, since it is convenient and not restricted by the
environment. In this paper, a new WiFi-based and video-based neural network
(WiNN) is proposed to improve the robustness of activity recognition where the
synchronized video serves as the supplement for the wireless data. Moreover, a
wireless-vision benchmark (WiVi) is collected for 9 class actions recognition
in three different visual conditions, including the scenes without occlusion,
with partial occlusion, and with full occlusion. Both classical machine
learning methods -- support vector machines (SVM) -- and deep learning methods
are used to verify the accuracy of the data set. Our results show that the
WiVi data set satisfies the primary demand, and all three branches in the
proposed pipeline maintain more than $80\%$ activity recognition accuracy over
multiple action segmentations from 1 s to 3 s. In particular, WiNN is the most
robust method across all actions and all three action segmentations compared
to the others. | http://arxiv.org/abs/2205.11962v1 | cs.CV | new_dataset | 0.994424 | 2205.11962 |
D4: a Chinese Dialogue Dataset for Depression-Diagnosis-Oriented Chat | In a depression-diagnosis-directed clinical session, doctors initiate a
conversation with ample emotional support that guides the patients to expose
their symptoms based on clinical diagnosis criteria. Such a dialogue system is
distinguished from existing single-purpose human-machine dialog systems, as it
combines task-oriented and chit-chats with uniqueness in dialogue topics and
procedures. However, due to the social stigma associated with mental illness,
the dialogue data related to depression consultation and diagnosis are rarely
disclosed. Based on clinical depression diagnostic criteria ICD-11 and DSM-5,
we designed a 3-phase procedure to construct D$^4$: a Chinese Dialogue Dataset
for Depression-Diagnosis-Oriented Chat, which simulates the dialogue between
doctors and patients during the diagnosis of depression, including diagnosis
results and symptom summary given by professional psychiatrists for each
conversation. Upon the newly-constructed dataset, four tasks mirroring the
depression diagnosis process are established: response generation, topic
prediction, dialog summary, and severity classification of depressive episode
and suicide risk. Multi-scale evaluation results demonstrate that a
consultation dialogue system trained on our dataset is more empathy-driven and
diagnostically accurate than rule-based bots. | http://arxiv.org/abs/2205.11764v2 | cs.CL | new_dataset | 0.99452 | 2205.11764 |
The MD17 Datasets from the Perspective of Datasets for Gas-Phase "Small" Molecule Potentials | There has been great progress in developing methods for machine-learned
potential energy surfaces. There have also been important assessments of these
methods by comparing so-called learning curves on datasets of electronic
energies and forces, notably the MD17 database. The dataset for each molecule
in this database generally consists of tens of thousands of energies and forces
obtained from DFT direct dynamics at 500 K. We contrast the datasets from this
database for three "small" molecules, ethanol, malonaldehyde, and glycine, with
datasets we have generated with specific targets for the PESs in mind: a
rigorous calculation of the zero-point energy and wavefunction, the tunneling
splitting in malonaldehyde and in the case of glycine a description of all
eight low-lying conformers. We found that the MD17 datasets are too limited for
these targets. We also examine recent datasets for several PESs that describe
small-molecule but complex chemical reactions. Finally, we introduce a new
database, "QM-22", which contains datasets of molecules ranging from 4 to 15
atoms that extend to high energies and a large span of configurations. | http://arxiv.org/abs/2205.11663v1 | physics.chem-ph | new_dataset | 0.994206 | 2205.11663 |
Diversity Over Size: On the Effect of Sample and Topic Sizes for Argument Mining Datasets | The task of Argument Mining, that is extracting argumentative sentences for a
specific topic from large document sources, is an inherently difficult task for
machine learning models and humans alike, as large Argument Mining datasets are
rare and recognition of argumentative sentences requires expert knowledge. The
task becomes even more difficult if it also involves stance detection of
retrieved arguments. Given the cost and complexity of creating suitably large
Argument Mining datasets, we ask whether ever-larger datasets are necessary for
acceptable performance. Our findings show that, when
using carefully composed training samples and a model pretrained on related
tasks, we can reach 95% of the maximum performance while reducing the training
sample size by at least 85%. This gain is consistent across three Argument
Mining tasks on three different datasets. We also publish a new dataset for
future benchmarking. | http://arxiv.org/abs/2205.11472v2 | cs.CL | not_new_dataset | 0.988359 | 2205.11472 |
NPU-BOLT: A Dataset for Bolt Object Detection in Natural Scene Images | Bolt joints are very common and important in engineering structures. Due to
extreme service environments and load factors, bolts often get loose or even
disengaged. Detecting loosened or disengaged bolts in real time, or at least
in a timely manner, is an urgent need in practical engineering, as it is
critical to structural safety and service life. In recent years, many bolt
loosening detection methods using deep learning and machine learning
techniques have been proposed and are attracting increasing attention.
However, most of these studies use bolt images captured in the laboratory for
deep learning model training. The images are obtained under well-controlled
lighting, distance, and viewing-angle conditions. Also, the bolted structures
are well-designed experimental structures with brand-new bolts, and the bolts
are exposed without any shelter nearby. In practical engineering, such
well-controlled lab conditions are not easily realized, and real bolt images
often have blurred edges, oblique perspectives, partial occlusion,
indistinguishable colors, etc., which cause trained models obtained under
laboratory conditions to lose accuracy or fail.
Therefore, the aim of this study is to develop a dataset named NPU-BOLT for
bolt object detection in natural scene images and open it to researchers for
public use and further development. The first version of the dataset contains
337 images of bolt joints, mainly in natural environments, with image sizes
ranging from 400*400 to 6000*4000 pixels and approximately 1275 bolt targets
in total. The bolt targets are annotated into four categories named
blur bolt, bolt head, bolt nut and bolt side. The dataset is tested with
advanced object detection models including yolov5, Faster-RCNN and CenterNet.
The effectiveness of the dataset is validated. | http://arxiv.org/abs/2205.11191v2 | cs.CV | new_dataset | 0.994497 | 2205.11191 |
TWEET-FID: An Annotated Dataset for Multiple Foodborne Illness Detection Tasks | Foodborne illness is a serious but preventable public health problem -- with
delays in detecting the associated outbreaks resulting in productivity loss,
expensive recalls, public safety hazards, and even loss of life. While social
media is a promising source for identifying unreported foodborne illnesses,
there is a dearth of labeled datasets for developing effective outbreak
detection models. To accelerate the development of machine learning-based
models for foodborne outbreak detection, we thus present TWEET-FID
(TWEET-Foodborne Illness Detection), the first publicly available annotated
dataset for multiple foodborne illness incident detection tasks. TWEET-FID
collected from Twitter is annotated with three facets: tweet class, entity
type, and slot type, with labels produced by experts as well as by crowdsource
workers. We introduce several domain tasks leveraging these three facets: text
relevance classification (TRC), entity mention detection (EMD), and slot
filling (SF). We describe the end-to-end methodology for dataset design,
creation, and labeling for supporting model development for these tasks. A
comprehensive set of results for these tasks leveraging state-of-the-art
single- and multi-task deep learning methods on the TWEET-FID dataset are
provided. This dataset opens opportunities for future research in foodborne
outbreak detection. | http://arxiv.org/abs/2205.10726v2 | cs.CL | new_dataset | 0.994504 | 2205.10726 |
Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced Dataset and Benchmark | The paper introduces a new dataset to assess the performance of machine
learning algorithms in the prediction of the seriousness of injury in a traffic
accident. The dataset is created by aggregating publicly available datasets
from the UK Department for Transport, which are drastically imbalanced with
missing attributes sometimes approaching 50\% of the overall data
dimensionality. The paper presents the data analysis pipeline starting from the
publicly available data of road traffic accidents and ending with predictors of
possible injuries and their degree of severity. It addresses the huge
incompleteness of public data with a MissForest model. The paper also
introduces two baseline approaches to create injury predictors: a supervised
artificial neural network and a reinforcement learning model. The dataset can
potentially stimulate diverse aspects of machine learning research on
imbalanced datasets and the two approaches can be used as baseline references
when researchers test more advanced learning algorithms in this area. | http://arxiv.org/abs/2205.10441v1 | cs.LG | new_dataset | 0.994418 | 2205.10441 |
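The abstract above names MissForest for handling missing attributes. A minimal sketch, assuming scikit-learn's IterativeImputer with a random-forest estimator as a stand-in for the authors' MissForest setup:

```python
# Hedged sketch of MissForest-style imputation (an approximation of the
# MissForest model named above, not the authors' exact configuration).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [np.nan, 5.0, 4.0]])

imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50),
                           max_iter=10, random_state=0)
print(imputer.fit_transform(X))   # missing entries filled iteratively
```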
Oracle-MNIST: a Realistic Image Dataset for Benchmarking Machine Learning Algorithms | We introduce the Oracle-MNIST dataset, comprising of 28$\times $28 grayscale
images of 30,222 ancient characters from 10 categories, for benchmarking
pattern classification, with particular challenges on image noise and
distortion. The training set consists of 27,222 images in total, and the test
set contains 300 images per class. Oracle-MNIST shares the same data format
with the original MNIST dataset, allowing for direct compatibility with all
existing classifiers and systems, but it constitutes a more challenging
classification task than MNIST. The images of ancient characters suffer from
1) extremely serious and unique noise caused by three thousand years of burial
and aging, and 2) dramatically variant writing styles of ancient Chinese, which
all make them realistic for machine learning research. The dataset is freely
available at https://github.com/wm-bupt/oracle-mnist. | http://arxiv.org/abs/2205.09442v1 | cs.CV | new_dataset | 0.994455 | 2205.09442 |
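Because Oracle-MNIST deliberately shares the MNIST data format, the IDX parsing used for MNIST should apply directly. A minimal sketch, with a hypothetical file name (check the linked repository for the actual archive names):

```python
# Hedged sketch of loading an MNIST-format (IDX) image archive.
import gzip
import struct
import numpy as np

def load_idx_images(path):
    with gzip.open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX image file"
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(n, rows, cols)

# images = load_idx_images("train-images-idx3-ubyte.gz")  # hypothetical name
```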
DDXPlus: A New Dataset For Automatic Medical Diagnosis | There has been a rapidly growing interest in Automatic Symptom Detection
(ASD) and Automatic Diagnosis (AD) systems in the machine learning research
literature, aiming to assist doctors in telemedicine services. These systems
are designed to interact with patients, collect evidence about their symptoms
and relevant antecedents, and possibly make predictions about the underlying
diseases. Doctors would review the interactions, including the evidence and
the predictions, collect additional information from patients if necessary,
and then decide on next steps. Despite recent progress in this area, an important
piece of doctors' interactions with patients is missing in the design of these
systems, namely the differential diagnosis. Its absence is largely due to the
lack of datasets that include such information for models to train on. In this
work, we present a large-scale synthetic dataset of roughly 1.3 million
patients that includes a differential diagnosis, along with the ground truth
pathology, symptoms and antecedents for each patient. Unlike existing datasets
which only contain binary symptoms and antecedents, this dataset also contains
categorical and multi-choice symptoms and antecedents useful for efficient data
collection. Moreover, some symptoms are organized in a hierarchy, making it
possible to design systems able to interact with patients in a logical way. As
a proof-of-concept, we extend two existing AD and ASD systems to incorporate
the differential diagnosis, and provide empirical evidence that using
differentials as training signals is essential for the efficiency of such
systems or for helping doctors better understand the reasoning of those
systems. | http://arxiv.org/abs/2205.09148v3 | cs.CL | new_dataset | 0.994508 | 2205.09148 |
Dark solitons in Bose-Einstein condensates: a dataset for many-body physics research | We establish a dataset of over $1.6\times10^4$ experimental images of
Bose--Einstein condensates containing solitonic excitations to enable machine
learning (ML) for many-body physics research. About $33~\%$ of this dataset has
manually assigned and carefully curated labels. The remainder is automatically
labeled using SolDet -- an implementation of a physics-informed ML data
analysis framework -- consisting of a convolutional-neural-network-based
classifier and OD as well as a statistically motivated physics-informed
classifier and a quality metric. This technical note constitutes the definitive
reference of the dataset, providing an opportunity for the data science
community to develop more sophisticated analysis tools, to further understand
nonlinear many-body physics, and even advance cold atom experiments. | http://arxiv.org/abs/2205.09114v2 | cond-mat.quant-gas | new_dataset | 0.994393 | 2205.09114 |
Gender and Racial Bias in Visual Question Answering Datasets | Vision-and-language tasks have increasingly drawn more attention as a means
to evaluate human-like reasoning in machine learning models. A popular task in
the field is visual question answering (VQA), which aims to answer questions
about images. However, VQA models have been shown to exploit language bias by
learning the statistical correlations between questions and answers without
looking into the image content: e.g., questions about the color of a banana are
answered with yellow, even if the banana in the image is green. If societal
bias (e.g., sexism, racism, ableism, etc.) is present in the training data,
this problem may be causing VQA models to learn harmful stereotypes. For this
reason, we investigate gender and racial bias in five VQA datasets. In our
analysis, we find that the distribution of answers is highly different between
questions about women and men, as well as the existence of detrimental
gender-stereotypical samples. Likewise, we identify that specific race-related
attributes are underrepresented, whereas potentially discriminatory samples
appear in the analyzed datasets. Our findings suggest that there are dangers
associated with using VQA datasets without considering and dealing with the
potentially harmful stereotypes. We conclude the paper by proposing solutions
to alleviate the problem before, during, and after the dataset collection
process. | http://arxiv.org/abs/2205.08148v3 | cs.CV | not_new_dataset | 0.992183 | 2205.08148 |
Heri-Graphs: A Workflow of Creating Datasets for Multi-modal Machine Learning on Graphs of Heritage Values and Attributes with Social Media | Values (why to conserve) and Attributes (what to conserve) are essential
concepts of cultural heritage. Recent studies have been using social media to
map values and attributes conveyed by public to cultural heritage. However, it
is rare to connect heterogeneous modalities of images, texts, geo-locations,
timestamps, and social network structures to mine the semantic and structural
characteristics therein. This study presents a methodological workflow for
constructing such multi-modal datasets using posts and images on Flickr for
graph-based machine learning (ML) tasks concerning heritage values and
attributes. After data pre-processing using state-of-the-art ML models, the
multi-modal information of visual contents and textual semantics are modelled
as node features and labels, while their social relationships and
spatiotemporal contexts are modelled as links in Multi-Graphs. The workflow is
tested in three cities containing UNESCO World Heritage properties - Amsterdam,
Suzhou, and Venice, which yielded datasets with high consistency for
semi-supervised learning tasks. The entire process is formally described with
mathematical notations, ready to be applied in provisional tasks both as ML
problems with technical relevance and as urban/heritage study questions with
societal interests. This study could also benefit the understanding and mapping
of heritage values and attributes for future research in global cases, aiming
at inclusive heritage management practices. | http://arxiv.org/abs/2205.07545v1 | cs.SI | not_new_dataset | 0.713558 | 2205.07545 |
DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps | In this paper, we present DendroMap, a novel approach to interactively
exploring large-scale image datasets for machine learning (ML). ML
practitioners often explore image datasets by generating a grid of images or
projecting high-dimensional representations of images into 2-D using
dimensionality reduction techniques (e.g., t-SNE). However, neither approach
effectively scales to large datasets because images are ineffectively organized
and interactions are insufficiently supported. To address these challenges, we
develop DendroMap by adapting Treemaps, a well-known visualization technique.
DendroMap effectively organizes images by extracting hierarchical cluster
structures from high-dimensional representations of images. It enables users to
make sense of the overall distributions of datasets and interactively zoom into
specific areas of interests at multiple levels of abstraction. Our case studies
with widely-used image datasets for deep learning demonstrate that users can
discover insights about datasets and trained models by examining the diversity
of images, identifying underperforming subgroups, and analyzing classification
errors. We conducted a user study that evaluates the effectiveness of DendroMap
in grouping and searching tasks by comparing it with a gridified version of
t-SNE and found that participants preferred DendroMap. DendroMap is available
at https://div-lab.github.io/dendromap/. | http://arxiv.org/abs/2205.06935v2 | cs.HC | not_new_dataset | 0.992079 | 2205.06935 |
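A minimal sketch of the core step behind a DendroMap-style view: agglomerative clustering of high-dimensional image representations, whose tree can then be laid out as a treemap. The representations here are random stand-ins, not DendroMap's actual pipeline:

```python
# Hedged sketch: hierarchical cluster tree over image representations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

reps = np.random.rand(100, 512)                  # e.g., penultimate-layer features
Z = linkage(reps, method="ward")                 # agglomerative cluster tree
labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 top-level groups
print(np.bincount(labels)[1:])                   # group sizes for the top level
```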
Machine Learning Workflow to Explain Black-box Models for Early Alzheimer's Disease Classification Evaluated for Multiple Datasets | Purpose: Hard-to-interpret Black-box Machine Learning (ML) were often used
for early Alzheimer's Disease (AD) detection.
Methods: To interpret eXtreme Gradient Boosting (XGBoost), Random Forest
(RF), and Support Vector Machine (SVM) black-box models, a workflow based on
Shapley values was developed. All models were trained on the Alzheimer's
Disease Neuroimaging Initiative (ADNI) dataset and evaluated for an independent
ADNI test set, as well as the external Australian Imaging and Lifestyle
flagship study of Ageing (AIBL), and Open Access Series of Imaging Studies
(OASIS) datasets. Shapley values were compared to intuitively interpretable
Decision Trees (DTs), and Logistic Regression (LR), as well as natural and
permutation feature importances. To avoid the reduction of the explanation
validity caused by correlated features, forward selection and aspect
consolidation were implemented.
Results: Some black-box models outperformed DTs and LR. The forward-selected
features correspond to brain areas previously associated with AD. Shapley
values identified biologically plausible associations with moderate to strong
correlations with feature importances. The most important RF features to
predict AD conversion were the volume of the amygdalae, and a cognitive test
score. Good cognitive test performances and large brain volumes decreased the
AD risk. The models trained using cognitive test scores significantly
outperformed brain volumetric models ($p<0.05$). Cognitive Normal (CN) vs. AD
models were successfully transferred to external datasets.
Conclusion: In comparison to previous work, improved performances for ADNI
and AIBL were achieved for CN vs. Mild Cognitive Impairment (MCI)
classification using brain volumes. The Shapley values and the feature
importances showed moderate to strong correlations. | http://arxiv.org/abs/2205.05907v2 | cs.LG | not_new_dataset | 0.992087 | 2205.05907 |
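A minimal sketch of the Shapley-value step in a workflow like the one above, assuming a tree-based black-box model and the `shap` TreeExplainer; the features are synthetic stand-ins for brain volumes and cognitive test scores:

```python
# Hedged sketch: explaining a tree-based classifier with Shapley values.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.random((200, 5))                        # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-sample attributions
print(np.abs(shap_values).mean(axis=0))         # global feature importance
```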
CoCoA-MT: A Dataset and Benchmark for Contrastive Controlled MT with Application to Formality | The machine translation (MT) task is typically formulated as that of
returning a single translation for an input segment. However, in many cases,
multiple different translations are valid and the appropriate translation may
depend on the intended target audience, characteristics of the speaker, or even
the relationship between speakers. Specific problems arise when dealing with
honorifics, particularly translating from English into languages with formality
markers. For example, the sentence "Are you sure?" can be translated in German
as "Sind Sie sich sicher?" (formal register) or "Bist du dir sicher?"
(informal). Using wrong or inconsistent tone may be perceived as inappropriate
or jarring for users of certain cultures and demographics. This work addresses
the problem of learning to control target language attributes, in this case
formality, from a small amount of labeled contrastive data. We introduce an
annotated dataset (CoCoA-MT) and an associated evaluation metric for training
and evaluating formality-controlled MT models for six diverse target languages.
We show that we can train formality-controlled models by fine-tuning on labeled
contrastive data, achieving high accuracy (82% in-domain and 73% out-of-domain)
while maintaining overall quality. | http://arxiv.org/abs/2205.04022v1 | cs.CL | new_dataset | 0.994446 | 2205.04022 |
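One common way to implement the kind of attribute control described above is source-side tagging: a control token is prepended to each training source before fine-tuning. A minimal sketch (the paper's exact recipe may differ):

```python
# Hedged sketch of formality control via a source-side tag.
def tag_source(src: str, formality: str) -> str:
    assert formality in {"formal", "informal"}
    return f"<{formality}> {src}"   # control token the model learns to obey

print(tag_source("Are you sure?", "formal"))   # -> "<formal> Are you sure?"
```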
Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection | Machine Learning-based supervised approaches require highly customized and
fine-tuned methodologies to deliver outstanding performance. This paper
presents a dataset-driven design and performance evaluation of a machine
learning classifier for the network intrusion dataset UNSW-NB15. Analysis of
the dataset suggests that it suffers from class representation imbalance and
class overlap in the feature space. We employed ensemble methods using Balanced
Bagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest empowered
by Hellinger Distance Decision Tree (RF-HDDT). BB and XGBoost are tuned to
handle the imbalanced data, and Random Forest (RF) classifier is supplemented
by the Hellinger metric to address the imbalance issue. Two new algorithms are
proposed to address the class overlap issue in the dataset. These two
algorithms are leveraged to help improve the performance of the testing dataset
by modifying the final classification decision made by three base classifiers
as part of the ensemble classifier which employs a majority vote combiner. The
proposed design is evaluated for both binary and multi-category classification.
Comparing the proposed model to those reported on the same dataset in the
literature demonstrates that it outperforms the others by a
significant margin for both binary and multi-category classification cases. | http://arxiv.org/abs/2205.06177v1 | cs.CR | not_new_dataset | 0.992122 | 2205.06177 |
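A minimal sketch of the majority-vote ensemble described above, assuming imbalanced-learn and XGBoost; the paper's Hellinger-distance decision trees have no standard scikit-learn implementation, so a class-weighted RandomForest stands in:

```python
# Hedged sketch of a majority-vote ensemble for imbalanced data.
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from xgboost import XGBClassifier

ensemble = VotingClassifier(
    estimators=[
        ("bb", BalancedBaggingClassifier(n_estimators=100, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=300, scale_pos_weight=5.0)),
        ("rf", RandomForestClassifier(n_estimators=300, class_weight="balanced")),
    ],
    voting="hard",   # majority vote combiner, as in the paper
)
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```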
Machine Learning-Friendly Biomedical Datasets for Equivalence and Subsumption Ontology Matching | Ontology Matching (OM) plays an important role in many domains such as
bioinformatics and the Semantic Web, and its research is becoming increasingly
popular, especially with the application of machine learning (ML) techniques.
Although the Ontology Alignment Evaluation Initiative (OAEI) represents an
impressive effort for the systematic evaluation of OM systems, it still suffers
from several limitations including limited evaluation of subsumption mappings,
suboptimal reference mappings, and limited support for the evaluation of
ML-based systems. To tackle these limitations, we introduce five new biomedical
OM tasks involving ontologies extracted from Mondo and UMLS. Each task includes
both equivalence and subsumption matching; the quality of reference mappings is
ensured by human curation, ontology pruning, etc.; and a comprehensive
evaluation framework is proposed to measure OM performance from various
perspectives for both ML-based and non-ML-based OM systems. We report
evaluation results for OM systems of different types to demonstrate the usage
of these resources, all of which are publicly available as part of the new
BioML track at OAEI 2022. | http://arxiv.org/abs/2205.03447v8 | cs.AI | new_dataset | 0.994292 | 2205.03447 |
A High-Resolution Chest CT-Scan Image Dataset for COVID-19 Diagnosis and Differentiation | During the COVID-19 pandemic, computed tomography (CT) is a good way to
diagnose COVID-19 patients. HRCT (High-Resolution Computed Tomography) is a
form of computed tomography that uses advanced methods to improve image
resolution. Publicly accessible COVID-19 CT image datasets are very difficult
to come by due to privacy concerns, which impedes the study and development of
AI-powered COVID-19 diagnostic algorithms based on CT images. To address this
problem, we have introduced HRCTv1-COVID-19, a new COVID-19 high resolution
chest CT Scan image dataset that includes not only COVID-19 cases of Ground
Glass Opacity (GGO), Crazy Paving, and Air Space Consolidation, but also CT
images of cases with negative COVID-19. The HRCTv1-COVID-19 dataset, which
includes slice-level, and patient-level labels, has the potential to aid
COVID-19 research, especially for diagnosis and differentiation using
artificial intelligence algorithms, machine learning and deep learning methods.
This dataset is accessible through web at: http://databiox.com and includes
181,106 chest HRCT images from 395 patients with four labels: GGO, Crazy
Paving, Air Space Consolidation and Negative.
Keywords- Dataset, COVID-19, CT-Scan, Computed Tomography, Medical Imaging,
Chest Image. | http://arxiv.org/abs/2205.03408v1 | eess.IV | new_dataset | 0.994531 | 2205.03408 |
KenSwQuAD -- A Question Answering Dataset for Swahili Low Resource Language | The need for Question Answering datasets in low resource languages is the
motivation of this research, leading to the development of Kencorpus Swahili
Question Answering Dataset, KenSwQuAD. This dataset is annotated from raw
story texts in Swahili, a low-resource language predominantly spoken in
Eastern Africa and in other parts of the world. Question Answering (QA)
datasets are important for machine comprehension of natural language for tasks
such as internet search and dialog systems. Machine learning systems need
training data such as the gold standard Question Answering set developed in
this research. The research engaged annotators to formulate QA pairs from
Swahili texts collected by the Kencorpus project, a Kenyan languages corpus.
The project annotated 1,445 texts from the total 2,585 texts with at least 5 QA
pairs each, resulting in a final dataset of 7,526 QA pairs. A quality
assurance set of 12.5% of the annotated texts confirmed that the QA pairs were
all correctly annotated. A proof of concept on applying the set to the QA task
confirmed that the dataset can be usable for such tasks. KenSwQuAD has also
contributed to resourcing of the Swahili language. | http://arxiv.org/abs/2205.02364v3 | cs.CL | new_dataset | 0.994519 | 2205.02364 |
Side-aware Meta-Learning for Cross-Dataset Listener Diagnosis with Subjective Tinnitus | With the development of digital technology, machine learning has paved the
way for the next generation of tinnitus diagnoses. Although machine learning
has been widely applied in EEG-based tinnitus analysis, most current models are
dataset-specific. Each dataset may be limited to a specific range of symptoms,
overall disease severity, and demographic attributes; further, dataset formats
may differ, impacting model performance. This paper proposes a side-aware
meta-learning for cross-dataset tinnitus diagnosis, which can effectively
classify tinnitus in subjects of divergent ages and genders from different data
collection processes. Owing to the superiority of meta-learning, our method
does not rely on large-scale datasets like conventional deep learning models.
Moreover, we design a subject-specific training process to assist the model in
fitting the data pattern of different patients or healthy people. Our method
achieves a high accuracy of 73.8\% in the cross-dataset classification. We
conduct an extensive analysis to show the effectiveness of side information of
ears in enhancing model performance and side-aware meta-learning in improving
the quality of the learned features. | http://arxiv.org/abs/2205.03231v1 | eess.SP | not_new_dataset | 0.99219 | 2205.03231 |
Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation | Multi-modal Machine Translation (MMT) enables the use of visual information
to enhance the quality of translations. The visual information can serve as a
valuable piece of context information to decrease the ambiguity of input
sentences. Despite the increasing popularity of such a technique, good and
sizeable datasets are scarce, limiting the full extent of their potential.
Hausa, a Chadic language, is a member of the Afro-Asiatic language family. It
is estimated that about 100 to 150 million people speak the language, with more
than 80 million indigenous speakers. This is more than any of the other Chadic
languages. Despite a large number of speakers, the Hausa language is considered
low-resource in natural language processing (NLP). This is due to the absence
of sufficient resources to implement most NLP tasks. While some datasets exist,
they are either scarce, machine-generated, or in the religious domain.
Therefore, there is a need to create training and evaluation data for
implementing machine learning tasks and bridging the research gap in the
language. This work presents the Hausa Visual Genome (HaVG), a dataset that
contains the description of an image or a section within the image in Hausa and
its equivalent in English. To prepare the dataset, we started by translating
the English description of the images in the Hindi Visual Genome (HVG) into
Hausa automatically. Afterward, the synthetic Hausa data was carefully
post-edited considering the respective images. The dataset comprises 32,923
images and their descriptions that are divided into training, development,
test, and challenge test set. The Hausa Visual Genome is the first dataset of
its kind and can be used for Hausa-English machine translation, multi-modal
research, and image description, among various other natural language
processing and generation tasks. | http://arxiv.org/abs/2205.01133v2 | cs.CL | new_dataset | 0.994561 | 2205.01133 |
WeatherBench Probability: A benchmark dataset for probabilistic medium-range weather forecasting along with deep learning baseline models | WeatherBench is a benchmark dataset for medium-range weather forecasting of
geopotential, temperature and precipitation, consisting of preprocessed data,
predefined evaluation metrics and a number of baseline models. WeatherBench
Probability extends this to probabilistic forecasting by adding a set of
established probabilistic verification metrics (continuous ranked probability
score, spread-skill ratio and rank histograms) and a state-of-the-art
operational baseline using the ECMWF IFS ensemble forecast. In addition, we
test three different probabilistic machine learning methods -- Monte Carlo
dropout, parametric prediction and categorical prediction, in which the
probability distribution is discretized. We find that plain Monte Carlo dropout
severely underestimates uncertainty. The parametric and categorical models both
produce fairly reliable forecasts of similar quality. The parametric models
have fewer degrees of freedom while the categorical model is more flexible when
it comes to predicting non-Gaussian distributions. None of the models are able
to match the skill of the operational IFS model. We hope that this benchmark
will enable other researchers to evaluate their probabilistic approaches. | http://arxiv.org/abs/2205.00865v1 | physics.ao-ph | new_dataset | 0.994414 | 2205.00865 |
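Two of the verification metrics named above have compact implementations: the closed-form CRPS of a Gaussian (parametric) forecast and the spread-skill ratio of an ensemble. A minimal sketch:

```python
# Hedged sketch of two probabilistic verification metrics.
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    # Closed-form CRPS for a Gaussian forecast N(mu, sigma^2).
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def spread_skill_ratio(ens, y):
    # ens: (members, samples); values near 1 indicate a reliable spread.
    spread = np.sqrt(np.mean(np.var(ens, axis=0, ddof=1)))
    rmse = np.sqrt(np.mean((ens.mean(axis=0) - y) ** 2))
    return spread / rmse

print(crps_gaussian(0.0, 1.0, np.array([0.3])))
```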
Biographical: A Semi-Supervised Relation Extraction Dataset | Extracting biographical information from online documents is a popular
research topic among the information extraction (IE) community. Various natural
language processing (NLP) techniques such as text classification, text
summarisation and relation extraction are commonly used to achieve this. Among
these techniques, RE is the most common since it can be directly used to build
biographical knowledge graphs. RE is usually framed as a supervised machine
learning (ML) problem, where ML models are trained on annotated datasets.
However, there are few annotated datasets for RE since the annotation process
can be costly and time-consuming. To address this, we developed Biographical,
the first semi-supervised dataset for RE. The dataset, which is aimed towards
digital humanities (DH) and historical research, is automatically compiled by
aligning sentences from Wikipedia articles with matching structured data from
sources including Pantheon and Wikidata. By exploiting the structure of
Wikipedia articles and robust named entity recognition (NER), we match
information with relatively high precision in order to compile annotated
relation pairs for ten different relations that are important in the DH domain.
Furthermore, we demonstrate the effectiveness of the dataset by training a
state-of-the-art neural model to classify relation pairs, and evaluate it on a
manually annotated gold standard set. Biographical is primarily aimed at
training neural models for RE within the domain of digital humanities and
history, but as we discuss at the end of this paper, it can be useful for other
purposes as well. | http://arxiv.org/abs/2205.00806v1 | cs.IR | new_dataset | 0.99452 | 2205.00806 |
Seeing without Looking: Analysis Pipeline for Child Sexual Abuse Datasets | The online sharing and viewing of Child Sexual Abuse Material (CSAM) are
growing fast, such that human experts can no longer handle the manual
inspection. However, the automatic classification of CSAM is a challenging
field of research, largely due to the inaccessibility of target data that is -
and should forever be - private and in sole possession of law enforcement
agencies. To aid researchers in drawing insights from unseen data and safely
providing further understanding of CSAM images, we propose an analysis template
that goes beyond the statistics of the dataset and respective labels. It
focuses on the extraction of automatic signals, provided both by pre-trained
machine learning models, e.g., object categories and pornography detection, as
well as image metrics such as luminance and sharpness. Only aggregated
statistics of sparse signals are provided to guarantee the anonymity of
children and adolescents victimized. The pipeline allows filtering the data by
applying thresholds to each specified signal and provides the distribution of
such signals within the subset, correlations between signals, as well as a bias
evaluation. We demonstrated our proposal on the Region-based annotated Child
Pornography Dataset (RCPD), one of the few CSAM benchmarks in the literature,
composed of over 2000 samples among regular and CSAM images, produced in
partnership with Brazil's Federal Police. Although noisy and limited in several
senses, we argue that automatic signals can highlight important aspects of the
overall distribution of data, which is valuable for databases that can not be
disclosed. Our goal is to safely publicize the characteristics of CSAM
datasets, encouraging researchers to join the field and perhaps other
institutions to provide similar reports on their benchmarks. | http://arxiv.org/abs/2204.14110v1 | cs.CV | not_new_dataset | 0.990529 | 2204.14110 |
Causal Discovery on the Effect of Antipsychotic Drugs on Delirium Patients in the ICU using Large EHR Dataset | Delirium occurs in about 80% cases in the Intensive Care Unit (ICU) and is
associated with a longer hospital stay, increased mortality and other related
issues. Delirium does not have any biomarker-based diagnosis and is commonly
treated with antipsychotic drugs (APD). However, multiple studies have shown
controversy over the efficacy or safety of APD in treating delirium. Since
randomized controlled trials (RCT) are costly and time-expensive, we aim to
approach the research question of the efficacy of APD in the treatment of
delirium using retrospective cohort analysis. We plan to use the Causal
inference framework to look for the underlying causal structure model,
leveraging the availability of large observational data on ICU patients. To
explore safety outcomes associated with APD, we aim to build a causal model for
delirium in the ICU using large observational data sets connecting various
covariates correlated with delirium. We utilized the MIMIC III database, an
extensive electronic health records (EHR) dataset with 53,423 distinct hospital
admissions. Our null hypothesis is: there is no significant difference in
outcomes for delirium patients under different drug-group in the ICU. Through
our exploratory, machine-learning-based and causal analysis, we had findings
such as: mean and max length-of-stay are higher for patients in the haloperidol
drug group, and the haloperidol group has a higher rate of death within a year
compared to the other two groups. Our generated causal model explicitly shows
the functional relationships between different covariates. For future work, we
plan to do time-varying analysis on the dataset. | http://arxiv.org/abs/2205.01057v1 | cs.LG | not_new_dataset | 0.991823 | 2205.01057 |
ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation | Humans intuitively understand that inanimate objects do not move by
themselves, but that state changes are typically caused by human manipulation
(e.g., the opening of a book). This is not yet the case for machines. In part
this is because there exist no datasets with ground-truth 3D annotations for
the study of physically consistent and synchronised motion of hands and
articulated objects. To this end, we introduce ARCTIC -- a dataset of two hands
that dexterously manipulate objects, containing 2.1M video frames paired with
accurate 3D hand and object meshes and detailed, dynamic contact information.
It contains bi-manual articulation of objects such as scissors or laptops,
where hand poses and object states evolve jointly in time. We propose two novel
articulated hand-object interaction tasks: (1) Consistent motion
reconstruction: Given a monocular video, the goal is to reconstruct two hands
and articulated objects in 3D, so that their motions are spatio-temporally
consistent. (2) Interaction field estimation: Dense relative hand-object
distances must be estimated from images. We introduce two baselines ArcticNet
and InterField, respectively and evaluate them qualitatively and quantitatively
on ARCTIC. Our code and data are available at https://arctic.is.tue.mpg.de. | http://arxiv.org/abs/2204.13662v3 | cs.CV | new_dataset | 0.994582 | 2204.13662 |
Dataset for Robust and Accurate Leading Vehicle Velocity Recognition | Recognition of the surrounding environment using a camera is an important
technology in Advanced Driver-Assistance Systems and Autonomous Driving, and
recognition has often been addressed with machine learning approaches such as
deep learning in recent years. Machine learning requires datasets for learning
and evaluation. To develop robust recognition technology for the real world,
data from environments that are difficult for cameras, such as rainy weather
or nighttime, are essential in addition to the normal driving environment. We
have constructed a dataset with which one can benchmark this technology,
targeting velocity recognition of the leading vehicle. This task is important
for Advanced Driver-Assistance Systems and Autonomous Driving. The dataset is
available at https://signate.jp/competitions/657 | http://arxiv.org/abs/2204.12717v1 | cs.CV | new_dataset | 0.994422 | 2204.12717 |
A Review on Text-Based Emotion Detection -- Techniques, Applications, Datasets, and Future Directions | Artificial Intelligence (AI) has been used for processing data to make
decisions, interact with humans, and understand their feelings and emotions.
With the advent of the internet, people share and express their thoughts on
day-to-day activities and global and local events through text messaging
applications. Hence, it is essential for machines to understand emotions in
opinions, feedback, and textual dialogues to provide emotionally aware
responses to users in today's online world. The field of text-based emotion
detection (TBED) is advancing to provide automated solutions to various
applications, such as business and finance, to name a few. TBED has gained
a lot of attention in recent times. The paper presents a systematic literature
review of the existing literature published between 2005 to 2021 in TBED. This
review has meticulously examined 63 research papers from IEEE, Science Direct,
Scopus, and Web of Science databases to address four primary research
questions. It also reviews the different applications of TBED across various
research domains and highlights its use. An overview of various emotion models,
techniques, feature extraction methods, datasets, and research challenges with
future directions has also been represented. | http://arxiv.org/abs/2205.03235v1 | cs.CL | not_new_dataset | 0.992101 | 2205.03235 |
Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims | False information has a significant negative influence on individuals as well
as on the whole society. Especially in the current COVID-19 era, we witness an
unprecedented growth of medical misinformation. To help tackle this problem
with machine learning approaches, we are publishing a feature-rich dataset of
approx. 317k medical news articles/blogs and 3.5k fact-checked claims. It also
contains 573 manually and more than 51k automatically labelled mappings between
claims and articles. Mappings consist of claim presence, i.e., whether a claim
is contained in a given article, and article stance towards the claim. We
provide several baselines for these two tasks and evaluate them on the manually
labelled part of the dataset. The dataset enables a number of additional tasks
related to medical misinformation, such as misinformation characterisation
studies or studies of misinformation diffusion between sources. | http://arxiv.org/abs/2204.12294v1 | cs.CL | new_dataset | 0.994453 | 2204.12294 |
PLOD: An Abbreviation Detection Dataset for Scientific Documents | The detection and extraction of abbreviations from unstructured texts can
help to improve the performance of Natural Language Processing tasks, such as
machine translation and information retrieval. However, publicly available
datasets do not contain enough data to train deep-neural-network-based models
that generalise well.
This paper presents PLOD, a large-scale dataset for abbreviation detection and
extraction that contains 160k+ segments automatically annotated with
abbreviations and their long forms. We performed manual validation over a set
of instances and a complete automatic validation for this dataset. We then used
it to generate several baseline models for detecting abbreviations and long
forms. The best models achieved an F1-score of 0.92 for abbreviations and 0.89
for detecting their corresponding long forms. We release this dataset along
with our code and all the models publicly in
https://github.com/surrey-nlp/PLOD-AbbreviationDetection | http://arxiv.org/abs/2204.12061v2 | cs.CL | new_dataset | 0.994467 | 2204.12061 |
Towards Accelerated Localization Performance Across Indoor Positioning Datasets | The localization speed and accuracy in the indoor scenario can greatly impact
the Quality of Experience of the user. While many individual machine learning
models can achieve comparable positioning performance, their prediction
mechanisms offer different complexity to the system. In this work, we propose a
fingerprinting positioning method for multi-building and multi-floor
deployments, composed of a cascade of three models for building classification,
floor classification, and 2D localization regression. We conduct an exhaustive
search for the optimally performing one in each step of the cascade while
validating on 14 different openly available datasets. As a result, we bring
forward the best-performing combination of models in terms of overall
positioning accuracy and processing speed and evaluate on independent sets of
samples. We reduce the mean prediction time by 71% while achieving comparable
positioning performance across all considered datasets. Moreover, in the case
of a voluminous training dataset, the prediction time is reduced to 1% of the
benchmark's. | http://arxiv.org/abs/2204.10788v1 | eess.SP | not_new_dataset | 0.992091 | 2204.10788 |
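A minimal sketch of the three-stage cascade described above - building classifier, per-building floor classifier, per-floor 2D regressor; the model choices are illustrative, not the paper's optimally performing combination:

```python
# Hedged sketch of a fingerprinting cascade; b, f are integer arrays of
# building and floor labels, xy is an (n, 2) array of positions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class CascadeLocalizer:
    def fit(self, X, b, f, xy):
        self.b_clf = RandomForestClassifier().fit(X, b)
        self.f_clf = {bi: RandomForestClassifier().fit(X[b == bi], f[b == bi])
                      for bi in np.unique(b)}
        self.reg = {(bi, fi): RandomForestRegressor().fit(
                        X[(b == bi) & (f == fi)], xy[(b == bi) & (f == fi)])
                    for bi in np.unique(b) for fi in np.unique(f[b == bi])}
        return self

    def predict_one(self, x):
        x = np.atleast_2d(x)
        bi = self.b_clf.predict(x)[0]                    # stage 1: building
        fi = self.f_clf[bi].predict(x)[0]                # stage 2: floor
        return bi, fi, self.reg[(bi, fi)].predict(x)[0]  # stage 3: 2D position
```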
Attention in Reasoning: Dataset, Analysis, and Modeling | While attention has been an increasingly popular component in deep neural
networks to both interpret and boost the performance of models, little work has
examined how attention progresses to accomplish a task and whether it is
reasonable. In this work, we propose an Attention with Reasoning capability
(AiR) framework that uses attention to understand and improve the process
leading to task outcomes. We first define an evaluation metric based on a
sequence of atomic reasoning operations, enabling a quantitative measurement of
attention that considers the reasoning process. We then collect human
eye-tracking and answer correctness data, and analyze various machine and human
attention mechanisms on their reasoning capability and how they impact task
performance. To improve the attention and reasoning ability of visual question
answering models, we propose to supervise the learning of attention
progressively along the reasoning process and to differentiate the correct and
incorrect attention patterns. We demonstrate the effectiveness of the proposed
framework in analyzing and modeling attention with better reasoning capability
and task performance. The code and data are available at
https://github.com/szzexpoi/AiR | http://arxiv.org/abs/2204.09774v1 | cs.CV | new_dataset | 0.99431 | 2204.09774 |
A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets | In recent years, interest has arisen in using machine learning to improve the
efficiency of automatic medical consultation and enhance patient experience. In
this article, we propose two frameworks to support automatic medical
consultation, namely doctor-patient dialogue understanding and task-oriented
interaction. We create a new large medical dialogue dataset with multi-level
fine-grained annotations and establish five independent tasks, including named
entity recognition, dialogue act classification, symptom label inference,
medical report generation and diagnosis-oriented dialogue policy. We report a
set of benchmark results for each task, which shows the usability of the
dataset and sets a baseline for future studies. Both code and data is available
from https://github.com/lemuria-wchen/imcs21. | http://arxiv.org/abs/2204.08997v3 | cs.CL | new_dataset | 0.99439 | 2204.08997 |
Hierarchical Optimal Transport for Comparing Histopathology Datasets | Scarcity of labeled histopathology data limits the applicability of deep
learning methods to under-profiled cancer types and labels. Transfer learning
allows researchers to overcome the limitations of small datasets by
pre-training machine learning models on larger datasets similar to the small
target dataset. However, similarity between datasets is often determined
heuristically. In this paper, we propose a principled notion of distance
between histopathology datasets based on a hierarchical generalization of
optimal transport distances. Our method does not require any training, is
agnostic to model type, and preserves much of the hierarchical structure in
histopathology datasets imposed by tiling. We apply our method to H&E stained
slides from The Cancer Genome Atlas from six different cancer types. We show
that our method outperforms a baseline distance in a cancer-type prediction
task. Our results also show that our optimal transport distance predicts
difficulty of transferability in a tumor vs. normal prediction setting. | http://arxiv.org/abs/2204.08324v2 | cs.CV | not_new_dataset | 0.992029 | 2204.08324 |
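A minimal sketch of a two-level optimal transport distance in the spirit of the method above, using the POT library: tile-level OT yields a slide-to-slide cost that feeds an outer OT problem between datasets. Tile features are random stand-ins, and this is not the authors' implementation:

```python
# Hedged sketch of a hierarchical OT distance between slide collections.
import numpy as np
import ot  # POT: Python Optimal Transport

def slide_distance(A, B):
    M = ot.dist(A, B)                                # pairwise tile costs
    a = np.full(len(A), 1 / len(A))
    b = np.full(len(B), 1 / len(B))
    return ot.emd2(a, b, M)                          # exact OT cost

def dataset_distance(slides_a, slides_b):
    C = np.array([[slide_distance(s, t) for t in slides_b] for s in slides_a])
    u = np.full(len(slides_a), 1 / len(slides_a))
    v = np.full(len(slides_b), 1 / len(slides_b))
    return ot.emd2(u, v, C)                          # outer OT over slides

slides_a = [np.random.rand(50, 8) for _ in range(4)]  # 4 slides, 50 tiles each
slides_b = [np.random.rand(50, 8) for _ in range(3)]
print(dataset_distance(slides_a, slides_b))
```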
Synthetic Distracted Driving (SynDD2) dataset for analyzing distracted behaviors and various gaze zones of a driver | This article presents a synthetic distracted driving (SynDD2 - a continuum of
SynDD1) dataset for machine learning models to detect and analyze drivers'
various distracted behaviors and different gaze zones. We collected the data in
a stationary vehicle using three in-vehicle cameras positioned at locations: on
the dashboard, near the rearview mirror, and on the top right-side window
corner. The dataset contains two activity types: distracted activities and gaze
zones for each participant, and each activity type has two sets: without
appearance blocks and with appearance blocks such as wearing a hat or
sunglasses. The order and duration of each activity for each participant are
random. In addition, the dataset contains manual annotations for each activity,
having its start and end time annotated. Researchers could use this dataset to
evaluate the performance of machine learning algorithms to classify various
distracting activities and gaze zones of drivers. | http://arxiv.org/abs/2204.08096v3 | cs.CV | new_dataset | 0.994488 | 2204.08096 |
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets | The best neural architecture for a given machine learning problem depends on
many factors: not only the complexity and structure of the dataset, but also on
resource constraints including latency, compute, energy consumption, etc.
Neural architecture search (NAS) for tabular datasets is an important but
under-explored problem. Previous NAS algorithms designed for image search
spaces incorporate resource constraints directly into the reinforcement
learning (RL) rewards. However, for NAS on tabular datasets, this protocol
often discovers suboptimal architectures. This paper develops TabNAS, a new and
more effective approach to handle resource constraints in tabular NAS using an
RL controller motivated by the idea of rejection sampling. TabNAS immediately
discards any architecture that violates the resource constraints without
training or learning from that architecture. TabNAS uses a Monte-Carlo-based
correction to the RL policy gradient update to account for this extra filtering
step. Results on several tabular datasets demonstrate the superiority of TabNAS
over previous reward-shaping methods: it finds better models that obey the
constraints. | http://arxiv.org/abs/2204.07615v4 | cs.LG | not_new_dataset | 0.99193 | 2204.07615 |
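A toy sketch of the rejection-sampling idea described above: sampled architectures that violate the resource constraint are discarded without training, and a REINFORCE update is applied only to feasible samples. The paper's Monte-Carlo correction and the reward function here are simplifications:

```python
# Toy sketch of rejection sampling in NAS with a softmax controller.
import numpy as np

rng = np.random.default_rng(0)
widths = np.array([8, 16, 32, 64])     # candidate layer widths
budget = 40                            # resource constraint
logits = np.zeros(4)                   # RL controller over the 4 choices

def reward(w):                         # hypothetical quality proxy
    return -abs(w - 32) / 32.0

for _ in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    i = rng.choice(4, p=p)
    if widths[i] > budget:             # reject: no training, no update
        continue
    grad = np.eye(4)[i] - p            # gradient of log p(i) w.r.t. logits
    logits += 0.1 * reward(widths[i]) * grad

feasible = widths <= budget
print(widths[feasible][np.argmax(logits[feasible])])  # best feasible width
```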
A9-Dataset: Multi-Sensor Infrastructure-Based Dataset for Mobility Research | Data-intensive machine learning based techniques increasingly play a
prominent role in the development of future mobility solutions - from driver
assistance and automation functions in vehicles, to real-time traffic
management systems realized through dedicated infrastructure. The availability
of high quality real-world data is often an important prerequisite for the
development and reliable deployment of such systems in large scale. Towards
this endeavour, we present the A9-Dataset based on roadside sensor
infrastructure from the 3 km long Providentia++ test field near Munich in
Germany. The dataset includes anonymized and precision-timestamped multi-modal
sensor and object data in high resolution, covering a variety of traffic
situations. As part of the first set of data, which we describe in this paper,
we provide camera and LiDAR frames from two overhead gantry bridges on the A9
autobahn with the corresponding objects labeled with 3D bounding boxes. The
first set includes in total more than 1000 sensor frames and 14000 traffic
objects. The dataset is available for download at https://a9-dataset.com. | http://arxiv.org/abs/2204.06527v2 | cs.CV | new_dataset | 0.994551 | 2204.06527 |
Rapid model transfer for medical image segmentation via iterative human-in-the-loop update: from labelled public to unlabelled clinical datasets for multi-organ segmentation in CT | Despite the remarkable success on medical image analysis with deep learning,
it is still under exploration regarding how to rapidly transfer AI models from
one dataset to another for clinical applications. This paper presents a novel
and generic human-in-the-loop scheme for efficiently transferring a
segmentation model from a small-scale labelled dataset to a larger-scale
unlabelled dataset for multi-organ segmentation in CT. To achieve this, we
propose to use an igniter network which can learn from a small-scale labelled
dataset and generate coarse annotations to start the process of human-machine
interaction. Then, we use a sustainer network for our larger-scale dataset, and
iteratively updated it on the new annotated data. Moreover, we propose a
flexible labelling strategy for the annotator to reduce the initial annotation
workload. The model performance and the time cost of annotation in each subject
evaluated on our private dataset are reported and analysed. The results show
that our scheme can not only improve the performance by 19.7% on Dice, but
also reduce the manual labelling time from 13.87 min to 1.51 min per CT volume
during the model transfer, demonstrating clinical usefulness with
promising potentials. | http://arxiv.org/abs/2204.06243v1 | cs.CV | not_new_dataset | 0.992176 | 2204.06243 |
A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges | Legal judgment prediction (LJP) applies Natural Language Processing (NLP)
techniques to predict judgment results based on fact descriptions
automatically. Recently, large-scale public datasets and advances in NLP
research have led to increasing interest in LJP. Despite a clear gap between
machine and human performance, impressive results have been achieved in various
benchmark datasets. In this paper, to address the current lack of comprehensive
survey of existing LJP tasks, datasets, models and evaluations, (1) we analyze
31 LJP datasets in 6 languages, present their construction process and define a
classification method of LJP with 3 different attributes; (2) we summarize 14
evaluation metrics under four categories for different outputs of LJP tasks;
(3) we review 12 legal-domain pretrained models in 3 languages and highlight 3
major research directions for LJP; (4) we show the state-of-art results for 8
representative datasets from different court cases and discuss the open
challenges. This paper can provide up-to-date and comprehensive reviews to help
readers understand the status of LJP. We hope to facilitate both NLP
researchers and legal professionals for further joint efforts in this problem. | http://arxiv.org/abs/2204.04859v1 | cs.CL | not_new_dataset | 0.992228 | 2204.04859 |
BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis | Cryptocurrencies are no longer just the preferred option for cybercriminal
activities on darknets, due to the increasing adoption in mainstream
applications. This is partly due to the transparency associated with the
underpinning ledgers, where any individual can access the record of a
transaction record on the public ledger. In this paper, we build a dataset
comprising Bitcoin transactions between 12 July 2019 and 26 May 2021. This
dataset (hereafter referred to as BABD-13) contains 13 types of Bitcoin
addresses, 5 categories of indicators with 148 features, and 544,462 labeled
data, which is the largest labeled Bitcoin address behavior dataset publicly
available to our knowledge. We then use our proposed dataset on common machine
learning models, namely: k-nearest neighbors algorithm, decision tree, random
forest, multilayer perceptron, and XGBoost. The results show that the accuracy
rates of these machine learning models for the multi-classification task on our
proposed dataset are between 93.24% and 97.13%. We also analyze the proposed
features and their relationships from the experiments, and propose a k-hop
subgraph generation algorithm to extract a k-hop subgraph from the entire
Bitcoin transaction graph constructed by the directed heterogeneous multigraph
starting from a specific Bitcoin address node (e.g., a known transaction
associated with a criminal investigation). Besides, we initially analyze the
behavior patterns of different types of Bitcoin addresses according to the
extracted features. | http://arxiv.org/abs/2204.05746v3 | cs.CR | new_dataset | 0.994555 | 2204.05746 |
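A minimal sketch of k-hop subgraph extraction from a directed multigraph of transactions, in the spirit of the algorithm described above (a generic BFS-based version via networkx, not the authors' code):

```python
# Hedged sketch of k-hop subgraph extraction around a seed address.
import networkx as nx

G = nx.MultiDiGraph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("x", "a")])

def k_hop_subgraph(G, seed, k):
    # undirected=True follows edges in both directions, which suits tracing
    # funds flowing both to and from the seed address.
    return nx.ego_graph(G, seed, radius=k, undirected=True)

print(sorted(k_hop_subgraph(G, "a", 2).nodes()))  # nodes within 2 hops of "a"
```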
BankNote-Net: Open dataset for assistive universal currency recognition | Millions of people around the world have low or no vision. Assistive software
applications have been developed for a variety of day-to-day tasks, including
optical character recognition, scene identification, person recognition, and
currency recognition. This last task, the recognition of banknotes from
different denominations, has been addressed by the use of computer vision
models for image recognition. However, the datasets and models available for
this task are limited, both in terms of dataset size and in variety of
currencies covered. In this work, we collect a total of 24,826 images of
banknotes in variety of assistive settings, spanning 17 currencies and 112
denominations. Using supervised contrastive learning, we develop a machine
learning model for universal currency recognition. This model learns compliant
embeddings of banknote images in a variety of contexts, which can be shared
publicly (as a compressed vector representation), and can be used to train and
test specialized downstream models for any currency, including those not
covered by our dataset or for which only a few real images per denomination are
available (few-shot learning). We deploy a variation of this model for public
use in the latest version of the Seeing AI app developed by Microsoft. We share
our encoder model and the embeddings as an open dataset in our BankNote-Net
repository. | http://arxiv.org/abs/2204.03738v1 | cs.CV | new_dataset | 0.994397 | 2204.03738 |
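A downstream use of the shared embeddings, as described in the abstract, would look roughly like the sketch below; the file name and array layout are assumptions about the released data, not a documented API.

```python
# Minimal sketch: train a lightweight denomination classifier on the
# released embedding vectors instead of raw banknote images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = np.load("banknote_net_embeddings.npz")  # hypothetical file layout
X, y = data["embeddings"], data["labels"]      # (N, d) vectors and labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"denomination accuracy: {clf.score(X_te, y_te):.3f}")
```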
A Comprehensive Review of Sign Language Recognition: Different Types, Modalities, and Datasets | A machine that can understand human activities and the meaning of signs can
help overcome the communication barriers between hearing-impaired and hearing
people.
Sign Language Recognition (SLR) is a fascinating research area and a crucial
task concerning computer vision and pattern recognition. Recently, SLR usage
has increased in many applications, but the environment, background image
resolution, modalities, and datasets strongly affect its performance. Many
researchers have been striving to develop generic real-time SLR models. This
review paper facilitates a comprehensive overview of SLR and discusses the
needs, challenges, and problems associated with SLR. We study related works
about manual and non-manual, various modalities, and datasets. Research
progress and existing state-of-the-art SLR models over the past decade have
been reviewed. Finally, we find the research gap and limitations in this domain
and suggest future directions. This review paper will help readers and
researchers obtain complete guidance on SLR and the progressive design of
state-of-the-art SLR models. | http://arxiv.org/abs/2204.03328v1 | cs.CV | not_new_dataset | 0.992116 | 2204.03328 |
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Providing explanations in the context of Visual Question Answering (VQA)
presents a fundamental problem in machine learning. To obtain detailed insights
into the process of generating natural language explanations for VQA, we
introduce the large-scale CLEVR-X dataset that extends the CLEVR dataset with
natural language explanations. For each image-question pair in the CLEVR
dataset, CLEVR-X contains multiple structured textual explanations which are
derived from the original scene graphs. By construction, the CLEVR-X
explanations are correct and describe the reasoning and visual information that
is necessary to answer a given question. We conducted a user study to confirm
that the ground-truth explanations in our proposed dataset are indeed complete
and relevant. We present baseline results for generating natural language
explanations in the context of VQA using two state-of-the-art frameworks on the
CLEVR-X dataset. Furthermore, we provide a detailed analysis of the explanation
generation quality for different question and answer types. Additionally, we
study the influence of using different numbers of ground-truth explanations on
the convergence of natural language generation (NLG) metrics. The CLEVR-X
dataset is publicly available at
\url{https://explainableml.github.io/CLEVR-X/}. | http://arxiv.org/abs/2204.02380v1 | cs.CV | new_dataset | 0.994472 | 2204.02380 |
Stuttgart Open Relay Degradation Dataset (SOReDD) | Real-life industrial use cases for machine learning oftentimes involve
heterogeneous and dynamic assets, processes and data, resulting in a need to
continuously adapt the learning algorithm accordingly. Industrial transfer
learning offers to lower the effort of such adaptation by allowing the
utilization of previously acquired knowledge in solving new (variants of)
tasks. Being data-driven methods, the development of industrial transfer
learning algorithms naturally requires appropriate datasets for training.
However, open-source datasets suitable for transfer learning training, i.e.,
spanning different assets, processes and data (variants), are rare. With the
Stuttgart Open Relay Degradation Dataset (SOReDD) we want to offer such a
dataset. It provides data on the degradation of different electromechanical
relays under different operating conditions, allowing for a large number of
different transfer scenarios. Although such relays themselves are usually
inexpensive standard components, their failure often leads to the failure of a
machine as a whole due to their role as the central power switching element of
a machine. The main cost factor in the event of a relay defect is therefore not
the relay itself, but the reduced machine availability. It is therefore
desirable to predict relay degradation as accurately as possible for specific
applications in order to be able to replace relays in good time and avoid
unplanned machine downtimes. Nevertheless, data-driven failure prediction for
electromechanical relays faces the challenge that relay degradation behavior is
highly dependent on the operating conditions, high-resolution measurement data
on relay degradation behavior is only collected in rare cases, and such data
can then only cover a fraction of the possible operating environments. Relays
are thus representative of many other central standard components in automation
technology. | http://arxiv.org/abs/2204.01626v1 | cs.LG | new_dataset | 0.99446 | 2204.01626 |
A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning | In this work we introduce Sen4AgriNet, a Sentinel-2 based multi-country time
series benchmark dataset, tailored for agricultural monitoring applications
with Machine and Deep Learning. The Sen4AgriNet dataset is annotated from farmer
declarations collected via the Land Parcel Identification System (LPIS) for
harmonizing country wide labels. These declarations have only recently been
made available as open data, allowing for the first time the labeling of
satellite imagery from ground truth data. We proceed to propose and standardise
a new crop type taxonomy across Europe that addresses Common Agricultural Policy
(CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative
Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year
dataset that includes all spectral information. It is constructed to cover the
period 2016-2020 for Catalonia and France, while it can be extended to include
additional countries. Currently, it contains 42.5 million parcels, which makes
it significantly larger than other available archives. We extract two
sub-datasets to highlight its value for diverse Deep Learning applications; the
Object Aggregated Dataset (OAD) and the Patches Assembled Dataset (PAD). The OAD
capitalizes on zonal statistics of each parcel, thus creating a powerful
label-to-features instance for classification algorithms. On the other hand,
the PAD structure generalizes the classification problem to parcel extraction and
semantic segmentation and labeling. The PAD and OAD are examined under three
different scenarios to showcase and model the effects of spatial and temporal
variability across different years and different countries. | http://arxiv.org/abs/2204.00951v2 | cs.CV | new_dataset | 0.994502 | 2204.00951 |
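The OAD construction reduces each parcel's pixels to zonal statistics; a minimal sketch of that idea follows, with array shapes and the choice of statistics (mean and standard deviation) as illustrative assumptions rather than the dataset's exact recipe.

```python
# Minimal sketch: aggregate a parcel's pixel time series into per-date,
# per-band zonal statistics, yielding one feature vector per parcel.
import numpy as np

def zonal_features(pixels: np.ndarray) -> np.ndarray:
    """pixels: (n_pixels, n_dates, n_bands) reflectances for one parcel."""
    mean = pixels.mean(axis=0)  # (n_dates, n_bands)
    std = pixels.std(axis=0)
    return np.concatenate([mean.ravel(), std.ravel()])

parcel = np.random.rand(120, 36, 13)  # e.g. 36 Sentinel-2 dates, 13 bands
print(zonal_features(parcel).shape)   # (936,)
```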
GrowliFlower: An image time series dataset for GROWth analysis of cauLIFLOWER | This article presents GrowliFlower, a georeferenced, image-based UAV time
series dataset of two monitored cauliflower fields of size 0.39 and 0.60 ha
acquired in 2020 and 2021. The dataset contains RGB and multispectral
orthophotos from which about 14,000 individual plant coordinates are derived
and provided. The coordinates enable users of the dataset to extract
complete and incomplete time series of image patches showing individual plants.
The dataset contains collected phenotypic traits of 740 plants, including the
developmental stage as well as plant and cauliflower size. As the harvestable
product is completely covered by leaves, plant IDs and coordinates are provided
to extract image pairs of plants pre and post defoliation, to facilitate
estimations of cauliflower head size. Moreover, the dataset contains
pixel-accurate leaf and plant instance segmentations, as well as stem
annotations to address tasks like classification, detection, segmentation,
instance segmentation, and similar computer vision tasks. The dataset aims to
foster the development and evaluation of machine learning approaches. It
specifically focuses on the analysis of growth and development of cauliflower
and the derivation of phenotypic traits to foster the development of automation
in agriculture. Two baseline results of instance segmentation at plant and leaf
level based on the labeled instance segmentation data are presented. The entire
dataset is publicly available. | http://arxiv.org/abs/2204.00294v1 | cs.CV | new_dataset | 0.994545 | 2204.00294 |
IGRF-RFE: A Hybrid Feature Selection Method for MLP-based Network Intrusion Detection on UNSW-NB15 Dataset | The effectiveness of machine learning models is significantly affected by the
size of the dataset and the quality of features as redundant and irrelevant
features can radically degrade the performance. This paper proposes IGRF-RFE: a
hybrid feature selection method designed for multi-class network anomaly
detection using a Multilayer Perceptron (MLP) network. IGRF-RFE can be considered
a feature
reduction technique based on both the filter feature selection method and the
wrapper feature selection method. In our proposed method, we use the filter
feature selection method, which is the combination of Information Gain and
Random Forest Importance, to reduce the feature subset search space. Then, we
apply recursive feature elimination (RFE) as a wrapper feature selection method
to further eliminate redundant features recursively on the reduced feature
subsets. Our experimental results obtained based on the UNSW-NB15 dataset
confirm that our proposed method can improve the accuracy of anomaly detection
while reducing the feature dimension. The results show that the feature
dimension is reduced from 42 to 23 while the multi-classification accuracy of
MLP is improved from 82.25% to 84.24%. | http://arxiv.org/abs/2203.16365v2 | cs.LG | not_new_dataset | 0.992208 | 2203.16365 |
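The filter-then-wrapper pipeline reads naturally as scikit-learn code. The sketch below is an approximation, not the paper's implementation: the fusion rule and thresholds are illustrative, and since sklearn's RFE requires an estimator exposing feature importances, a random forest stands in for the wrapper while the MLP is used only for the final evaluation.

```python
# Minimal sketch of the filter-then-wrapper idea behind IGRF-RFE,
# on synthetic data with 42 features (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=42, n_informative=15,
                           n_classes=3, random_state=0)

# Filter step: combine information gain with random-forest importance.
ig = mutual_info_classif(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
score = ig / ig.sum() + rf.feature_importances_   # simple score fusion
keep = np.argsort(score)[-30:]                    # reduced search space

# Wrapper step: recursive feature elimination on the reduced subset.
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=23).fit(X[:, keep], y)
selected = keep[rfe.support_]

# Evaluate the final subset with an MLP, as in the paper's setting.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print(f"MLP accuracy on selected features: {mlp.score(X_te, y_te):.3f}")
```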
An Evaluation Dataset for Legal Word Embedding: A Case Study On Chinese Codex | Word embedding is a modern distributed word representations approach widely
used in many natural language processing tasks. Converting the vocabulary in a
legal document into a word embedding model facilitates subjecting legal
documents to machine learning, deep learning, and other algorithms and
subsequently performing the downstream tasks of natural language processing
vis-à-vis, for instance, document classification, contract review, and
machine translation. The most common and practical approach to evaluating the
accuracy of a word embedding model uses a benchmark set built on linguistic
rules or the relationships between words to perform analogy reasoning via
algebraic calculation. This paper proposes establishing a 1,134 Legal
Analogical Reasoning Questions Set (LARQS) from the 2,388 Chinese Codex corpus
using five kinds of legal relations, which are then used to evaluate the
accuracy of the Chinese word embedding model. Moreover, we discovered that
legal relations might be ubiquitous in the word embedding model. | http://arxiv.org/abs/2203.15173v1 | cs.CL | new_dataset | 0.994422 | 2203.15173 |
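Analogy reasoning via algebraic calculation, as used for this kind of evaluation, typically follows the standard 3CosAdd formulation; a minimal gensim sketch is shown below, with the model path and the term names as hypothetical placeholders.

```python
# Minimal sketch of analogy-based evaluation with vector algebra,
# assuming gensim and a pretrained word2vec-format embedding file.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("legal_embeddings.bin", binary=True)

# An analogy a : b :: c : ? is answered by the vector closest to
# (b - a + c) -- the standard 3CosAdd formulation.
result = kv.most_similar(positive=["b_term", "c_term"],
                         negative=["a_term"], topn=1)
print(result)

# gensim can also score a whole analogy-question file directly:
# accuracy = kv.evaluate_word_analogies("larqs_questions.txt")
```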
LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models | Machine learning models such as Transformers or LSTMs struggle with tasks
that are compositional in nature, such as those involving reasoning/inference.
Although many datasets exist to evaluate compositional generalization, when it
comes to evaluating inference abilities, options are more limited. This paper
presents LogicInference, a new dataset to evaluate the ability of models to
perform logical inference. The dataset focuses on inference using propositional
logic and a small subset of first-order logic, represented both in semi-formal
logical notation, as well as in natural language. We also report initial
results using a collection of machine learning models to establish an initial
baseline on this dataset. | http://arxiv.org/abs/2203.15099v3 | cs.AI | new_dataset | 0.994421 | 2203.15099 |
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays | Machine learning methodologies can be adopted in cultural applications and
propose new ways to distribute or even present cultural content to the
public. For instance, speech analytics can be adopted to automatically generate
subtitles in theatrical plays, in order to (among other purposes) help people
with hearing loss. Apart from a typical speech-to-text transcription with
Automatic Speech Recognition (ASR), Speech Emotion Recognition (SER) can be
used to automatically predict the underlying emotional content of speech
dialogues in theatrical plays, and thus to provide a deeper understanding of how
the actors utter their lines. However, real-world datasets from theatrical
plays are not available in the literature. In this work we present GreThE, the
Greek Theatrical Emotion dataset, a new publicly available data collection for
speech emotion recognition in Greek theatrical plays. The dataset contains
utterances from various actors and plays, along with respective valence and
arousal annotations. Towards this end, multiple annotators have been asked to
provide their input for each speech recording and inter-annotator agreement is
taken into account in the final ground truth generation. In addition, we
discuss the results of some indicative experiments that have been conducted
with machine and deep learning frameworks using the dataset, along with some
widely used databases in the field of speech emotion recognition. | http://arxiv.org/abs/2203.15568v1 | cs.SD | new_dataset | 0.994507 | 2203.15568 |
Design and Development of Rule-based open-domain Question-Answering System on SQuAD v2.0 Dataset | The human mind is a palace of curious questions that seek answers.
Computational resolution of this challenge is possible through Natural Language
Processing techniques. Statistical techniques like machine learning and deep
learning require a lot of data to train, and even then they often fail to capture
the nuances of language. Such systems usually perform best on closed-domain
datasets. We propose the development of a rule-based open-domain
question-answering system capable of answering questions of any domain
from a corresponding context passage. We have used 1000 questions from SQuAD
2.0 dataset to test the developed system, and it gives satisfactory results.
In this paper, we describe the structure of the developed system and analyze
its performance. | http://arxiv.org/abs/2204.09659v1 | cs.CL | not_new_dataset | 0.991846 | 2204.09659 |
A large scale multi-view RGBD visual affordance learning dataset | The physical and textural attributes of objects have been widely studied for
recognition, detection and segmentation tasks in computer vision. A number of
datasets, such as large scale ImageNet, have been proposed for feature learning
using data-hungry deep neural networks and for hand-crafted feature extraction.
To intelligently interact with objects, robots and intelligent machines need
the ability to infer beyond the traditional physical/textural attributes, and
understand/learn visual cues, called visual affordances, for affordance
recognition, detection and segmentation. To date there is no publicly available
large dataset for visual affordance understanding and learning. In this paper,
we introduce a large scale multi-view RGBD visual affordance learning dataset,
a benchmark of 47210 RGBD images from 37 object categories, annotated with 15
visual affordance categories. To the best of our knowledge, this is the first
ever and the largest multi-view RGBD visual affordance learning dataset. We
benchmark the proposed dataset for affordance segmentation and recognition
tasks using popular Vision Transformer and Convolutional Neural Networks.
Several state-of-the-art deep learning networks are evaluated each for
affordance recognition and segmentation tasks. Our experimental results
showcase the challenging nature of the dataset and present definite prospects
for new and robust affordance learning algorithms. The dataset is publicly
available at https://sites.google.com/view/afaqshah/dataset. | http://arxiv.org/abs/2203.14092v3 | cs.CV | new_dataset | 0.994432 | 2203.14092 |
Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension | Question answering (QA) is a fundamental means to facilitate assessment and
training of narrative comprehension skills for both machines and young
children, yet there is a scarcity of high-quality QA datasets carefully designed
to serve this purpose. In particular, existing datasets rarely distinguish
fine-grained reading skills, such as the understanding of varying narrative
elements. Drawing on the reading education research, we introduce FairytaleQA,
a dataset focusing on narrative comprehension of kindergarten to eighth-grade
students. Generated by educational experts based on an evidence-based
theoretical framework, FairytaleQA consists of 10,580 explicit and implicit
questions derived from 278 children-friendly stories, covering seven types of
narrative elements or relations. Our dataset is valuable in two ways: First,
we ran existing QA models on our dataset and confirmed that this annotation
helps assess models' fine-grained learning skills. Second, the dataset supports
the question generation (QG) task in the education domain. Through benchmarking
with QG models, we show that the QG model trained on FairytaleQA is capable of
asking high-quality and more diverse questions. | http://arxiv.org/abs/2203.13947v1 | cs.CL | new_dataset | 0.994559 | 2203.13947 |
Impact of Dataset on Acoustic Models for Automatic Speech Recognition | In Automatic Speech Recognition, GMM-HMM models were widely used for acoustic
modelling. With the current advancement of deep learning, the Gaussian Mixture
Model (GMM) in acoustic models has been replaced with deep neural networks,
yielding DNN-HMM acoustic models. GMM models are still widely used to create the
alignments of the training data for the hybrid deep neural network model, thus
making it an important task to create accurate alignments. Many factors such as
training dataset size, training data augmentation, model hyperparameters, etc.,
affect the model learning. Traditionally in machine learning, models trained on
larger datasets tend to perform better, while smaller datasets tend to trigger
over-fitting. The collection of speech data and their accurate transcriptions
is a significant challenge that varies over different languages, and in most
cases, it might be limited to big organizations. Moreover, in the case of
available large datasets, training a model using such data requires additional
time and computing resources, which may not be available. While accuracy
figures for state-of-the-art ASR models on open-source datasets are published,
studies of the impact of dataset size on acoustic models are not readily
available. This work aims to investigate the impact of
dataset size variations on the performance of various GMM-HMM Acoustic Models
and their respective computational costs. | http://arxiv.org/abs/2203.13590v1 | cs.LG | not_new_dataset | 0.992176 | 2203.13590 |
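Measuring the effect of training-set size is methodologically simple; the sketch below illustrates it with scikit-learn's learning-curve utility. A GMM-HMM pipeline is out of scope for a short example, so a generic classifier and synthetic data stand in; this shows the methodology only, not the paper's setup.

```python
# Minimal sketch: accuracy as a function of training-set size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> CV accuracy {s:.3f}")
```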
Intrinsic Bias Identification on Medical Image Datasets | Machine learning based medical image analysis highly depends on datasets.
Biases in the dataset can be learned by the model and degrade the
generalizability of the applications. There are studies on debiased models.
However, it is difficult for scientists and practitioners to identify implicit
biases in datasets, which causes a lack of reliable unbiased test datasets for
validating models. To tackle this issue, we first define the data intrinsic bias
attribute, and then propose a novel bias identification framework for medical
image datasets. The framework contains two major components, KlotskiNet and
Bias Discriminant Direction Analysis (bdda), where KlotskiNet builds the
mapping that uses backgrounds to distinguish positive and negative samples,
and bdda provides a theoretical solution for determining bias attributes.
Experimental results on three datasets show the effectiveness of the bias
attributes discovered by the framework. | http://arxiv.org/abs/2203.12872v2 | cs.CV | not_new_dataset | 0.99196 | 2203.12872 |
Methods2Test: A dataset of focal methods mapped to test cases | Unit testing is an essential part of the software development process, which
helps to identify issues with source code in early stages of development and
prevent regressions. Machine learning has emerged as a viable approach to help
software developers generate automated unit tests. However, generating reliable
unit test cases that are semantically correct and capable of catching software
bugs or unintended behavior via machine learning requires large, metadata-rich,
datasets. In this paper we present Methods2Test: A dataset of focal methods
mapped to test cases: a large, supervised dataset of test cases mapped to
corresponding methods under test (i.e., focal methods). This dataset contains
780,944 pairs of JUnit tests and focal methods, extracted from a total of
91,385 Java open source projects hosted on GitHub with licenses permitting
re-distribution. The main challenge behind the creation of the Methods2Test was
to establish a reliable mapping between a test case and the relevant focal
method. To this aim, we designed a set of heuristics, based on developers' best
practices in software testing, which identify the likely focal method for a
given test case. To facilitate further analysis, we store a rich set of
metadata for each method-test pair in JSON-formatted files. Additionally, we
extract textual corpus from the dataset at different context levels, which we
provide both in raw and tokenized forms, in order to enable researchers to
train and evaluate machine learning models for Automated Test Generation.
Methods2Test is publicly available at:
https://github.com/microsoft/methods2test | http://arxiv.org/abs/2203.12776v1 | cs.SE | new_dataset | 0.994562 | 2203.12776 |
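One common heuristic family for linking a test case to its focal method is name matching. The toy sketch below illustrates only that idea; the paper designs a richer set of heuristics, and the function and its behavior here are illustrative assumptions.

```python
# Toy sketch of a name-matching heuristic: strip the "test" prefix from a
# JUnit test name and look for a matching method in the class under test.
from typing import List, Optional

def match_focal_method(test_name: str,
                       candidate_methods: List[str]) -> Optional[str]:
    stripped = test_name.removeprefix("test").lstrip("_")
    for method in candidate_methods:
        if method.lower() == stripped.lower():
            return method
    return None

print(match_focal_method("testComputeTotal", ["computeTotal", "reset"]))
# -> computeTotal
```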
Conditional Generative Data Augmentation for Clinical Audio Datasets | In this work, we propose a novel data augmentation method for clinical audio
datasets based on a conditional Wasserstein Generative Adversarial Network with
Gradient Penalty (cWGAN-GP), operating on log-mel spectrograms. To validate our
method, we created a clinical audio dataset which was recorded in a real-world
operating room during Total Hip Arthroplasty (THA) procedures and contains
typical sounds which resemble the different phases of the intervention. We
demonstrate the capability of the proposed method to generate realistic
class-conditioned samples from the dataset distribution and show that training
with the generated augmented samples outperforms classical audio augmentation
methods in terms of classification performance. The performance was evaluated
using a ResNet-18 classifier which shows a mean Macro F1-score improvement of
1.70% in a 5-fold cross validation experiment using the proposed augmentation
method. Because clinical data is often expensive to acquire, the development of
realistic and high-quality data augmentation methods is crucial to improve the
robustness and generalization capabilities of learning-based algorithms which
is especially important for safety-critical medical applications. Therefore,
the proposed data augmentation method is an important step towards improving
the data bottleneck for clinical audio-based machine learning systems. | http://arxiv.org/abs/2203.11570v3 | cs.SD | new_dataset | 0.994239 | 2203.11570 |
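The gradient-penalty term that gives cWGAN-GP its name is compact to write down. The PyTorch sketch below shows it for a class-conditional critic on spectrogram batches; the tensor shapes and critic interface are assumptions, not the authors' implementation.

```python
# Minimal sketch of the WGAN-GP gradient penalty for a conditional critic
# operating on (batch, channels, mel_bins, frames) log-mel spectrograms.
import torch

def gradient_penalty(critic, real, fake, labels):
    """Penalize deviation of the critic's gradient norm from 1."""
    # Random interpolation between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp, labels)  # class-conditional critic scores
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```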
Machine learning for impurity charge-state transition levels in semiconductors from elemental properties using multi-fidelity datasets | Quantifying charge-state transition energy levels of impurities in
semiconductors is critical to understanding and engineering their
optoelectronic properties for applications ranging from solar photovoltaics to
infrared lasers. While these transition levels can be measured and calculated
accurately, such efforts are time-consuming and more rapid prediction methods
would be beneficial. Here, we significantly reduce the time typically required
to predict impurity transition levels using multi-fidelity datasets and a
machine learning approach employing features based on elemental properties and
impurity positions. We use transition levels obtained from low-fidelity (i.e.,
local-density approximation or generalized gradient approximation) density
functional theory (DFT) calculations, corrected using a recently proposed
modified band alignment scheme, which well-approximates transition levels from
high-fidelity DFT (i.e., hybrid HSE06). The model fit to the large
multi-fidelity database shows improved accuracy compared to the models trained
on the more limited high-fidelity values. Crucially, in our approach, when
using the multi-fidelity data, high-fidelity values are not required for model
training, significantly reducing the computational cost required for training
the model. Our machine learning model of transition levels has a root mean
squared (mean absolute) error of 0.36 (0.27) eV vs high-fidelity hybrid
functional values when averaged over 14 semiconductor systems from the II-VI
and III-V families. As a guide for use on other systems, we assessed the model
on simulated data to show the expected accuracy level as a function of bandgap
for new materials of interest. Finally, we use the model to predict a complete
space of impurity charge-state transition levels in all zinc blende III-V and
II-VI systems. | http://arxiv.org/abs/2203.10349v1 | cond-mat.mtrl-sci | not_new_dataset | 0.992173 | 2203.10349 |
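The multi-fidelity training strategy — fit on abundant, band-alignment-corrected low-fidelity labels and validate against scarce high-fidelity values — can be sketched as follows. All data here is synthetic and the feature construction from elemental properties is stubbed out; this illustrates the training/evaluation split only.

```python
# Minimal sketch: train on corrected low-fidelity labels, test against
# high-fidelity targets (synthetic stand-ins for illustration).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # elemental-property features
y_high = X @ rng.normal(size=20)         # "hybrid-functional" targets
y_low_corrected = y_high + rng.normal(scale=0.1, size=500)  # aligned LDA/GGA

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:450], y_low_corrected[:450])  # train on low fidelity only

pred = model.predict(X[450:])
rmse = np.sqrt(np.mean((pred - y_high[450:]) ** 2))
print(f"RMSE vs high-fidelity targets: {rmse:.3f} (synthetic units)")
```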
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection | Toxic language detection systems often falsely flag text that contains
minority group mentions as toxic, as those groups are often the targets of
online hate. Such over-reliance on spurious correlations also causes systems to
struggle with detecting implicitly toxic language. To help mitigate these
issues, we create ToxiGen, a new large-scale and machine-generated dataset of
274k toxic and benign statements about 13 minority groups. We develop a
demonstration-based prompting framework and an adversarial
classifier-in-the-loop decoding method to generate subtly toxic and benign text
with a massive pretrained language model. Controlling machine generation in
this way allows ToxiGen to cover implicitly toxic text at a larger scale, and
about more demographic groups, than previous resources of human-written text.
We conduct a human evaluation on a challenging subset of ToxiGen and find that
annotators struggle to distinguish machine-generated text from human-written
language. We also find that 94.5% of toxic examples are labeled as hate speech
by human annotators. Using three publicly-available datasets, we show that
finetuning a toxicity classifier on our data improves its performance on
human-written data substantially. We also demonstrate that ToxiGen can be used
to fight machine-generated toxicity as finetuning improves the classifier
significantly on our evaluation subset. Our code and data can be found at
https://github.com/microsoft/ToxiGen. | http://arxiv.org/abs/2203.09509v4 | cs.CL | new_dataset | 0.994555 | 2203.09509 |
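Fine-tuning a toxicity classifier on generated statements, as reported, follows the standard sequence-classification recipe. A minimal Hugging Face sketch is below; the checkpoint name and the toy examples are placeholders, not the paper's training configuration.

```python
# Minimal sketch: one fine-tuning step of a binary toxicity classifier.
import torch
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer)

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

texts = ["a benign statement", "an implicitly toxic statement"]
labels = torch.tensor([0, 1])  # 0 = benign, 1 = toxic
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # single training step
loss.backward()
optim.step()
print(f"loss: {loss.item():.4f}")
```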
Machine Learning for Encrypted Malicious Traffic Detection: Approaches, Datasets and Comparative Study | As people's demand for personal privacy and data security becomes a priority,
encrypted traffic has become mainstream in the cyber world. However, traffic
encryption is also shielding malicious and illegal traffic introduced by
adversaries, from being detected. This is especially so in the post-COVID-19
environment where malicious traffic encryption is growing rapidly. Common
security solutions that rely on plain payload content analysis such as deep
packet inspection are rendered useless. Thus, machine learning based approaches
have become an important direction for encrypted malicious traffic detection.
In this paper, we formulate a universal framework of machine learning based
encrypted malicious traffic detection techniques and provide a systematic
review. Furthermore, current studies adopt different datasets to train their
models due to the lack of well-recognized datasets and feature sets. As a
result, their model performance cannot be compared and analyzed reliably.
Therefore, in this paper, we analyse, process and combine datasets from 5
different sources to generate a comprehensive and fair dataset to aid future
research in this field. On this basis, we also implement and compare 10
encrypted malicious traffic detection algorithms. We then discuss challenges
and propose future directions of research. | http://arxiv.org/abs/2203.09332v1 | cs.CR | not_new_dataset | 0.992056 | 2203.09332 |