id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2404.19253 | Liam Roy | Liam Roy, Dana Kulic, Elizabeth Croft | Learning to Communicate Functional States with Nonverbal Expressions for
Improved Human-Robot Collaboration | 8 Pages, Accepted to RA-L March 2024 | LRA.2024.3384037 | 10.1109/LRA.2024.3384037 | null | cs.RO cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Collaborative robots must effectively communicate their internal state to
humans to enable a smooth interaction. Nonverbal communication is widely used
to communicate information during human-robot interaction; however, such
methods may also be misunderstood, leading to communication errors. In this
work, we explore modulating the acoustic parameter values (pitch bend, beats
per minute, beats per loop) of nonverbal auditory expressions to convey
functional robot states (accomplished, progressing, stuck). We propose a
reinforcement learning (RL) algorithm based on noisy human feedback to produce
accurately interpreted nonverbal auditory expressions. The proposed approach
was evaluated through a user study with 24 participants. The results
demonstrate that: 1. Our proposed RL-based approach is able to learn suitable
acoustic parameter values which improve the users' ability to correctly
identify the state of the robot. 2. Algorithm initialization informed by
previous user data can be used to significantly speed up the learning process.
3. The method used for algorithm initialization strongly influences whether
participants converge to similar sounds for each robot state. 4. Modulation of
pitch bend has the largest influence on user association between sounds and
robotic states.
| [
{
"created": "Tue, 30 Apr 2024 04:18:21 GMT",
"version": "v1"
}
] | 2024-05-01 | [
[
"Roy",
"Liam",
""
],
[
"Kulic",
"Dana",
""
],
[
"Croft",
"Elizabeth",
""
]
] |
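The record above describes learning acoustic parameter values (pitch bend, BPM, beats per loop) from noisy human feedback. As a rough sketch of the idea only, not the paper's RL algorithm, the following epsilon-greedy bandit tunes a hypothetical discretization of those three parameters against a stubbed-out noisy listener; the grid values and the simulated optimum are invented for the demo.

```python
import itertools
import random

# Hypothetical discretization of the three acoustic parameters.
pitch_bends = [0.0, 0.5, 1.0]
bpms = [60, 120, 180]
beats_per_loop = [2, 4, 8]
arms = list(itertools.product(pitch_bends, bpms, beats_per_loop))

q = {arm: 0.0 for arm in arms}   # value estimate per parameter combination
n = {arm: 0 for arm in arms}     # number of times each combination was played
epsilon = 0.1

def noisy_feedback(arm):
    """Stand-in for a human listener: 1 if the robot state was correctly
    identified, 0 otherwise, with feedback noise."""
    true_quality = 0.9 if arm == (1.0, 120, 4) else 0.4  # assumed optimum
    return 1 if random.random() < true_quality else 0

for _ in range(2000):
    arm = random.choice(arms) if random.random() < epsilon else max(arms, key=q.get)
    r = noisy_feedback(arm)
    n[arm] += 1
    q[arm] += (r - q[arm]) / n[arm]  # incremental mean of noisy rewards

print("best parameter combination found:", max(arms, key=q.get))
```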
2404.19277 | Li Liu | Wentao Lei, Li Liu, Jun Wang | Bridge to Non-Barrier Communication: Gloss-Prompted Fine-grained Cued
Speech Gesture Generation with Diffusion Model | null | IJCAI 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cued Speech (CS) is an advanced visual phonetic encoding system that
integrates lip reading with hand codings, enabling people with hearing
impairments to communicate efficiently. CS video generation aims to produce
specific lip and gesture movements of CS from audio or text inputs. The main
challenge is that given limited CS data, we strive to simultaneously generate
fine-grained hand and finger movements, as well as lip movements, meanwhile the
two kinds of movements need to be asynchronously aligned. Existing CS
generation methods are fragile and prone to poor performance due to
template-based statistical models and careful hand-crafted pre-processing to
fit the models. Therefore, we propose a novel Gloss-prompted Diffusion-based CS
Gesture generation framework (called GlossDiff). Specifically, to integrate
additional linguistic rule knowledge into the model, we first introduce a
bridging instruction called \textbf{Gloss}, which is an automatically generated
descriptive text to establish a direct and more delicate semantic connection
between spoken language and CS gestures. Moreover, we are the first to suggest
that rhythm is an important paralinguistic feature of CS for improving
communication efficacy. Therefore, we propose a novel Audio-driven Rhythmic
Module (ARM) to learn rhythm that matches the audio speech. Furthermore, we design,
record, and publish the first Chinese CS dataset with four CS cuers. Extensive
experiments demonstrate that our method quantitatively and qualitatively
outperforms current state-of-the-art (SOTA) methods. We release the code and
data at https://glossdiff.github.io/.
| [
{
"created": "Tue, 30 Apr 2024 05:54:40 GMT",
"version": "v1"
}
] | 2024-05-01 | [
[
"Lei",
"Wentao",
""
],
[
"Liu",
"Li",
""
],
[
"Wang",
"Jun",
""
]
] |
2404.19403 | Lei Zhuang | Lei Zhuang, Jingdong Zhao, Yuntao Li, Zichun Xu, Liangliang Zhao and
Hong Liu | Transformer-Enhanced Motion Planner: Attention-Guided Sampling for
State-Specific Decision Making | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | IEEE Robotics and Automation Letters (RA-L), 2024 | 10.1109/LRA.2024.3450305 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sampling-based motion planning (SBMP) algorithms are renowned for their
robust global search capabilities. However, the inherent randomness in their
sampling mechanisms often results in inconsistent path quality and limited
search efficiency. In response to these challenges, this work proposes a novel
deep learning-based motion planning framework, named Transformer-Enhanced
Motion Planner (TEMP), which synergizes an Environmental Information Semantic
Encoder (EISE) with a Motion Planning Transformer (MPT). EISE converts
environmental data into semantic environmental information (SEI), providing MPT
with an enriched environmental comprehension. MPT leverages an attention
mechanism to dynamically recalibrate its focus on SEI, task objectives, and
historical planning data, refining the sampling node generation. To demonstrate
the capabilities of TEMP, we train our model using a dataset comprising
planning results produced by RRT*. EISE and MPT are collaboratively
trained, enabling EISE to autonomously learn and extract patterns from
environmental data, thereby forming semantic representations that MPT could
more effectively interpret and utilize for motion planning. Subsequently, we
conducted a systematic evaluation of TEMP's efficacy across diverse task
dimensions, which demonstrates that TEMP achieves exceptional performance
metrics and a heightened degree of generalizability compared to
state-of-the-art SBMPs.
| [
{
"created": "Tue, 30 Apr 2024 09:48:11 GMT",
"version": "v1"
}
] | 2024-09-23 | [
[
"Zhuang",
"Lei",
""
],
[
"Zhao",
"Jingdong",
""
],
[
"Li",
"Yuntao",
""
],
[
"Xu",
"Zichun",
""
],
[
"Zhao",
"Liangliang",
""
],
[
"Liu",
"Hong",
""
]
] |
2405.00027 | Wen Cao | Wen Cao, Ehsan Miandji and Jonas Unger | Multidimensional Compressed Sensing for Spectral Light Field Imaging | 8 pages, published at VISAPP 2024 | In Proceedings of the 19th International Joint Conference on
Computer Vision, Imaging and Computer Graphics Theory and Applications -
Volume 4: VISAPP 2024, ISBN 978-989-758-679-8, ISSN 2184-4321, pages 349-356 | 10.5220/0012431300003660 | null | cs.CV cs.GR cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers a compressive multi-spectral light field camera model
that utilizes a one-hot spectral-coded mask and a microlens array to capture
spatial, angular, and spectral information using a single monochrome sensor. We
propose a model that employs compressed sensing techniques to reconstruct the
complete multi-spectral light field from undersampled measurements. Unlike
previous work where a light field is vectorized to a 1D signal, our method
employs a 5D basis and a novel 5D measurement model, hence, matching the
intrinsic dimensionality of multispectral light fields. We mathematically and
empirically show the equivalence of 5D and 1D sensing models, and most
importantly that the 5D framework achieves orders of magnitude faster
reconstruction while requiring a small fraction of the memory. Moreover, our
new multidimensional sensing model opens new research directions for designing
efficient visual data acquisition algorithms and hardware.
| [
{
"created": "Tue, 27 Feb 2024 23:49:43 GMT",
"version": "v1"
}
] | 2024-10-08 | [
[
"Cao",
"Wen",
""
],
[
"Miandji",
"Ehsan",
""
],
[
"Unger",
"Jonas",
""
]
] |
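The claimed equivalence of the 5D and 1D sensing models can be checked on a toy 2D stand-in: applying per-dimension measurement matrices to a multidimensional signal matches a single Kronecker-product matrix acting on the vectorized signal, which is also why the separable form is far cheaper in memory. A minimal numpy check (dimensions invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))      # toy 2D signal standing in for a 5D light field
Phi1 = rng.standard_normal((4, 8))   # measurement matrix along mode 1
Phi2 = rng.standard_normal((3, 6))   # measurement matrix along mode 2

Y_sep = Phi1 @ X @ Phi2.T                       # separable (multidimensional) model
y_vec = np.kron(Phi1, Phi2) @ X.reshape(-1)     # vectorized 1D model

assert np.allclose(Y_sep.reshape(-1), y_vec)    # identical measurements
print("equivalent; Kronecker matrix size:", np.kron(Phi1, Phi2).shape)
```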
2405.00070 | Nisha Pillai | Nisha Pillai, Bindu Nanduri, Michael J Rothrock Jr., Zhiqian Chen,
Mahalingam Ramkumar | Bayesian-Guided Generation of Synthetic Microbiomes with Minimized
Pathogenicity | null | The 46th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society (IEEE EMBC), 2024 | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic microbiomes offer new possibilities for modulating microbiota, to
address the barriers in multidrug resistance (MDR) research. We present a
Bayesian optimization approach to enable efficient searching over the space of
synthetic microbiome variants to identify candidates predictive of reduced MDR.
Microbiome datasets were encoded into a low-dimensional latent space using
autoencoders. Sampling from this space allowed generation of synthetic
microbiome signatures. Bayesian optimization was then implemented to select
variants for biological screening to maximize identification of designs with
restricted MDR pathogens based on minimal samples. Four acquisition functions
were evaluated: expected improvement, upper confidence bound, Thompson
sampling, and probability of improvement. Based on each strategy, synthetic
samples were prioritized according to their MDR detection. Expected
improvement, upper confidence bound, and probability of improvement
consistently produced synthetic microbiome candidates with significantly fewer
searches than Thompson sampling. By combining deep latent space mapping and
Bayesian learning for efficient guided screening, this study demonstrated the
feasibility of creating bespoke synthetic microbiomes with customized MDR
profiles.
| [
{
"created": "Mon, 29 Apr 2024 21:30:30 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Pillai",
"Nisha",
""
],
[
"Nanduri",
"Bindu",
""
],
[
"Rothrock",
"Michael J",
"Jr."
],
[
"Chen",
"Zhiqian",
""
],
[
"Ramkumar",
"Mahalingam",
""
]
] |
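As an illustration of the acquisition step only, the sketch below computes expected improvement over a Gaussian-process surrogate fitted to already-screened latent codes; the autoencoder and the biological MDR screen are stubbed out with a synthetic objective, and scikit-learn/scipy are assumed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
Z = rng.uniform(-2, 2, size=(10, 2))                       # latent codes screened so far
y = np.sum(Z**2, axis=1) + 0.1 * rng.standard_normal(10)   # stand-in MDR score (lower is better)

gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)

def expected_improvement(candidates, best, xi=0.01):
    """EI for minimization under the GP surrogate."""
    mu, sigma = gp.predict(candidates, return_std=True)
    z = (best - mu - xi) / np.maximum(sigma, 1e-9)
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

candidates = rng.uniform(-2, 2, size=(500, 2))   # samples from the latent space
next_code = candidates[np.argmax(expected_improvement(candidates, y.min()))]
print("next latent code to decode and screen:", next_code)
```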
2405.00123 | Ehsan Hoseinzade | Ehsan Hoseinzade, Ke Wang | Graph Neural Network Approach to Semantic Type Detection in Tables | null | In Pacific-Asia Conference on Knowledge Discovery and Data Mining,
pp. 121-133. Singapore: Springer Nature Singapore, 2024 | 10.1007/978-981-97-2266-2_10 | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study addresses the challenge of detecting semantic column types in
relational tables, a key task in many real-world applications. While language
models like BERT have improved prediction accuracy, their token input
constraints limit the simultaneous processing of intra-table and inter-table
information. We propose a novel approach using Graph Neural Networks (GNNs) to
model intra-table dependencies, allowing language models to focus on
inter-table information. Our proposed method not only outperforms existing
state-of-the-art algorithms but also offers novel insights into the utility and
functionality of various GNN types for semantic type detection. The code is
available at https://github.com/hoseinzadeehsan/GAIT
| [
{
"created": "Tue, 30 Apr 2024 18:17:44 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Hoseinzade",
"Ehsan",
""
],
[
"Wang",
"Ke",
""
]
] |
2405.00291 | Jionghao Lin | Jionghao Lin, Eason Chen, Zeifei Han, Ashish Gurung, Danielle R.
Thomas, Wei Tan, Ngoc Dang Nguyen, Kenneth R. Koedinger | How Can I Improve? Using GPT to Highlight the Desired and Undesired
Parts of Open-ended Responses | 11 pages, full research paper, EDM 2024 | A&A 687, A227 (2024) | 10.1051/0004-6361/202349120 | null | cs.CL cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Automated explanatory feedback systems play a crucial role in facilitating
learning for a large cohort of learners by offering feedback that incorporates
explanations, significantly enhancing the learning process. However, delivering
such explanatory feedback in real-time poses challenges, particularly when high
classification accuracy for domain-specific, nuanced responses is essential.
Our study leverages the capabilities of large language models, specifically
Generative Pre-Trained Transformers (GPT), to explore a sequence labeling
approach focused on identifying components of desired and less desired praise
for providing explanatory feedback within a tutor training dataset. Our aim is
to equip tutors with actionable, explanatory feedback during online training
lessons. To investigate the potential of GPT models for providing the
explanatory feedback, we employed two commonly-used approaches: prompting and
fine-tuning. To quantify the quality of highlighted praise components
identified by GPT models, we introduced a Modified Intersection over Union
(M-IoU) score. Our findings demonstrate that: (1) the M-IoU score effectively
correlates with human judgment in evaluating sequence quality; (2) using
two-shot prompting on GPT-3.5 resulted in decent performance in recognizing
effort-based (M-IoU of 0.46) and outcome-based praise (M-IoU of 0.68); and (3)
our optimally fine-tuned GPT-3.5 model achieved M-IoU scores of 0.64 for
effort-based praise and 0.84 for outcome-based praise, aligning with the
satisfaction levels evaluated by human coders. Our results show promise for
using GPT models to provide feedback that highlights the specific elements of
tutors' open-ended responses that are desirable or could use improvement.
| [
{
"created": "Wed, 1 May 2024 02:59:10 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Lin",
"Jionghao",
""
],
[
"Chen",
"Eason",
""
],
[
"Han",
"Zeifei",
""
],
[
"Gurung",
"Ashish",
""
],
[
"Thomas",
"Danielle R.",
""
],
[
"Tan",
"Wei",
""
],
[
"Nguyen",
"Ngoc Dang",
""
],
[
"Koedinger",
"Kenneth R.",
""
]
] |
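For orientation, a plain token-level IoU between predicted and annotated praise spans looks like the sketch below; the paper's M-IoU modifies this baseline, and the modification is not reproduced here.

```python
def span_iou(pred_tokens: set, gold_tokens: set) -> float:
    """Unmodified IoU over token indices; defined as 1.0 when both spans are empty."""
    if not pred_tokens and not gold_tokens:
        return 1.0
    return len(pred_tokens & gold_tokens) / len(pred_tokens | gold_tokens)

# Example: the model highlights tokens 3-7, the human annotator marked 4-8.
print(span_iou(set(range(3, 8)), set(range(4, 9))))  # 4 / 6 ≈ 0.667
```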
2405.00516 | Lucas Thil | Lucas-Andreï Thil, Mirela Popa, Gerasimos Spanakis | Navigating WebAI: Training Agents to Complete Web Tasks with Large
Language Models and Reinforcement Learning | ACM 2024, Avila Spain. 9 pages | Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing,
2024 | 10.1145/3605098.3635903 | 9798400702433 | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in language models have demonstrated remarkable
improvements in various natural language processing (NLP) tasks such as web
navigation. Supervised learning (SL) approaches have achieved impressive
performance while utilizing significantly less training data compared to
previous methods. However, these SL-based models fall short when compared to
reinforcement learning (RL) approaches, which have shown superior results. In
this paper, we propose a novel approach that combines SL and RL techniques over
the MiniWoB benchmark to leverage the strengths of both methods. We also
address a critical limitation in previous models' understanding of HTML
content, revealing a tendency to memorize target elements rather than
comprehend the underlying structure. To rectify this, we propose methods to
enhance true understanding and present a new baseline of results. Our
experiments demonstrate that our approach outperforms previous SL methods on
certain tasks using less data and narrows the performance gap with RL models,
achieving 43.58\% average accuracy in SL and 36.69\% when combined with a
multimodal RL approach. This study sets a new direction for future web
navigation and offers insights into the limitations and potential of language
modeling for computer tasks.
| [
{
"created": "Wed, 1 May 2024 13:51:45 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Thil",
"Lucas-Andreï",
""
],
[
"Popa",
"Mirela",
""
],
[
"Spanakis",
"Gerasimos",
""
]
] |
2405.00523 | Donghee Choi | Donghee Choi, Mogan Gim, Donghyeon Park, Mujeen Sung, Hyunjae Kim,
Jaewoo Kang, Jihun Choi | CookingSense: A Culinary Knowledgebase with Multidisciplinary Assertions | LREC-COLING 2024 Accepted | LREC-COLING 2024 | null | https://aclanthology.org/2024.lrec-main.354 | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper introduces CookingSense, a descriptive collection of knowledge
assertions in the culinary domain extracted from various sources, including web
data, scientific papers, and recipes, from which knowledge covering a broad
range of aspects is acquired. CookingSense is constructed through a series of
dictionary-based filtering and language model-based semantic filtering
techniques, which results in a rich knowledgebase of multidisciplinary
food-related assertions. Additionally, we present FoodBench, a novel benchmark
to evaluate culinary decision support systems. From evaluations with FoodBench,
we empirically prove that CookingSense improves the performance of retrieval
augmented language models. We also validate the quality and variety of
assertions in CookingSense through qualitative analysis.
| [
{
"created": "Wed, 1 May 2024 13:58:09 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Choi",
"Donghee",
""
],
[
"Gim",
"Mogan",
""
],
[
"Park",
"Donghyeon",
""
],
[
"Sung",
"Mujeen",
""
],
[
"Kim",
"Hyunjae",
""
],
[
"Kang",
"Jaewoo",
""
],
[
"Choi",
"Jihun",
""
]
] |
2405.00666 | Zheng Zeng | Zheng Zeng, Valentin Deschaintre, Iliyan Georgiev, Yannick
Hold-Geoffroy, Yiwei Hu, Fujun Luan, Ling-Qi Yan, Miloš Hašan | RGB$\leftrightarrow$X: Image decomposition and synthesis using material-
and lighting-aware diffusion models | null | SIGGRAPH Conference Papers '24, July 27-August 1, 2024, Denver,
CO, USA | 10.1145/3641519.3657445 | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The three areas of realistic forward rendering, per-pixel inverse rendering,
and generative image synthesis may seem like separate and unrelated sub-fields
of graphics and vision. However, recent work has demonstrated improved
estimation of per-pixel intrinsic channels (albedo, roughness, metallicity)
based on a diffusion architecture; we call this the RGB$\rightarrow$X problem.
We further show that the reverse problem of synthesizing realistic images given
intrinsic channels, X$\rightarrow$RGB, can also be addressed in a diffusion
framework.
Focusing on the image domain of interior scenes, we introduce an improved
diffusion model for RGB$\rightarrow$X, which also estimates lighting, as well
as the first diffusion X$\rightarrow$RGB model capable of synthesizing
realistic images from (full or partial) intrinsic channels. Our
X$\rightarrow$RGB model explores a middle ground between traditional rendering
and generative models: we can specify only certain appearance properties that
should be followed, and give freedom to the model to hallucinate a plausible
version of the rest.
This flexibility makes it possible to use a mix of heterogeneous training
datasets, which differ in the available channels. We use multiple existing
datasets and extend them with our own synthetic and real data, resulting in a
model capable of extracting scene properties better than previous work and of
generating highly realistic images of interior scenes.
| [
{
"created": "Wed, 1 May 2024 17:54:05 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Zeng",
"Zheng",
""
],
[
"Deschaintre",
"Valentin",
""
],
[
"Georgiev",
"Iliyan",
""
],
[
"Hold-Geoffroy",
"Yannick",
""
],
[
"Hu",
"Yiwei",
""
],
[
"Luan",
"Fujun",
""
],
[
"Yan",
"Ling-Qi",
""
],
[
"Hašan",
"Miloš",
""
]
] |
2405.00726 | Saydul Akbar Murad | Saydul Akbar Murad and Nick Rahimi | Unveiling Thoughts: A Review of Advancements in EEG Brain Signal
Decoding into Text | null | IEEE Transactions on Cognitive and Developmental Systems (2024) | 10.1109/TCDS.2024.3462452 | null | eess.SP cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conversion of brain activity into text using electroencephalography (EEG)
has gained significant traction in recent years. Many researchers are working
to develop new models to decode EEG signals into text form. Although this area
has shown promising developments, it still faces numerous challenges that
necessitate further improvement. It is important to outline this area's recent
developments and future research directions. In this review article, we
thoroughly summarize the progress in EEG-to-text conversion. First, we discuss
how EEG-to-text technology has evolved and the challenges that remain.
Secondly, we discuss existing techniques used in this field. This includes
methods for collecting EEG data, the steps to process these signals, and the
development of systems capable of translating these signals into coherent text.
We conclude with potential future research directions, emphasizing the need for
enhanced accuracy, reduced system constraints, and the exploration of novel
applications across varied sectors. By addressing these aspects, this review
aims to contribute to developing more accessible and effective Brain-Computer
Interface (BCI) technology for a broader user base.
| [
{
"created": "Fri, 26 Apr 2024 21:18:05 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2024 04:28:34 GMT",
"version": "v2"
}
] | 2024-09-23 | [
[
"Murad",
"Saydul Akbar",
""
],
[
"Rahimi",
"Nick",
""
]
] |
2405.00821 | Gregorios Katsios | Gregorios Katsios, Ning Sa, Ankita Bhaumik, Tomek Strzalkowski | Uncovering Agendas: A Novel French & English Dataset for Agenda
Detection on Social Media | null | 2024.lrec-main.1476 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The behavior and decision making of groups or communities can be dramatically
influenced by individuals pushing particular agendas, e.g., to promote or
disparage a person or an activity, to call for action, etc. In the examination
of online influence campaigns, particularly those related to important
political and social events, scholars often concentrate on identifying the
sources responsible for setting and controlling the agenda (e.g., public
media). In this article we present a methodology for detecting specific
instances of agenda control through social media where annotated data is
limited or non-existent. By using a modest corpus of Twitter messages centered
on the 2022 French Presidential Elections, we carry out a comprehensive
evaluation of various approaches and techniques that can be applied to this
problem. Our findings demonstrate that by treating the task as a textual
entailment problem, it is possible to overcome the requirement for a large
annotated training dataset.
| [
{
"created": "Wed, 1 May 2024 19:02:35 GMT",
"version": "v1"
}
] | 2024-06-13 | [
[
"Katsios",
"Gregorios",
""
],
[
"Sa",
"Ning",
""
],
[
"Bhaumik",
"Ankita",
""
],
[
"Strzalkowski",
"Tomek",
""
]
] |
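Treating agenda detection as textual entailment can be prototyped with an off-the-shelf NLI model, as in the sketch below; the zero-shot pipeline and public model are standard Hugging Face components, but the hypothesis labels are illustrative, not the paper's agenda taxonomy.

```python
from transformers import pipeline

# Zero-shot classification is entailment under the hood: each candidate
# label is scored as a hypothesis against the premise (the tweet).
nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

tweet = "Everyone must get out and vote this Sunday!"
hypotheses = [
    "This message calls for action.",
    "This message promotes a candidate.",
    "This message disparages a candidate.",
]

result = nli(tweet, candidate_labels=hypotheses)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```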
2405.00841 | Juncheng Li | Juncheng Li and David J. Cappelleri | Sim-Grasp: Learning 6-DOF Grasp Policies for Cluttered Environments
Using a Synthetic Benchmark | null | IEEE Robotics and Automation Letters (2024) 1-8 | 10.1109/LRA.2024.3430712 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present Sim-Grasp, a robust 6-DOF two-finger grasping
system that integrates advanced language models for enhanced object
manipulation in cluttered environments. We introduce the Sim-Grasp-Dataset,
which includes 1,550 objects across 500 scenarios with 7.9 million annotated
labels, and develop Sim-GraspNet to generate grasp poses from point clouds. The
Sim-Grasp-Policies achieve grasping success rates of 97.14% for single objects
and 87.43% and 83.33% for mixed clutter scenarios of Levels 1-2 and Levels 3-4
objects, respectively. By incorporating language models for target
identification through text and box prompts, Sim-Grasp enables both
object-agnostic and target picking, pushing the boundaries of intelligent
robotic systems.
| [
{
"created": "Wed, 1 May 2024 20:08:51 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2024 22:12:11 GMT",
"version": "v2"
}
] | 2024-07-18 | [
[
"Li",
"Juncheng",
""
],
[
"Cappelleri",
"David J.",
""
]
] |
2405.01175 | Zijia Wang | Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia | Uncertainty-aware self-training with expectation maximization basis
transformation | null | 36th Conference on Neural Information Processing Systems (NeurIPS
2022) | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Self-training is a powerful approach to deep learning. The key process is to
find a pseudo-label for modeling. However, previous self-training algorithms
suffer from the over-confidence issue brought on by hard labels, and even
confidence-related regularizers cannot comprehensively capture the uncertainty.
Therefore, we propose a new self-training framework to combine uncertainty
information of both model and dataset. Specifically, we propose to use
Expectation-Maximization (EM) to smooth the labels and comprehensively estimate
the uncertainty information. We further design a basis extraction network to
estimate the initial basis from the dataset. The obtained basis with
uncertainty can be filtered based on uncertainty information. It can then be
transformed into the real hard label to iteratively update the model and basis
in the retraining process. Experiments on image classification and semantic
segmentation show the advantages of our methods among confidence-aware
self-training algorithms, with a 1-3 percentage-point improvement on different datasets.
| [
{
"created": "Thu, 2 May 2024 11:01:31 GMT",
"version": "v1"
}
] | 2024-05-03 | [
[
"Wang",
"Zijia",
""
],
[
"Yang",
"Wenbin",
""
],
[
"Liu",
"Zhisong",
""
],
[
"Jia",
"Zhen",
""
]
] |
2405.01273 | Praveen Chandaliya Dr | Praveen Kumar Chandaliya, Kiran Raja, Raghavendra Ramachandra, Zahid
Akhtar, Christoph Busch | Towards Inclusive Face Recognition Through Synthetic Ethnicity
Alteration | 8 Pages | Automatic Face and Gesture Recognition 2024 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Numerous studies have shown that existing Face Recognition Systems (FRS),
including commercial ones, often exhibit biases toward certain ethnicities due
to under-represented data. In this work, we explore ethnicity alteration and
skin tone modification using synthetic face image generation methods to
increase the diversity of datasets. We conduct a detailed analysis by first
constructing a balanced face image dataset representing three ethnicities:
Asian, Black, and Indian. We then make use of existing Generative Adversarial
Network-based (GAN) image-to-image translation and manifold learning models to
alter the ethnicity from one to another. A systematic analysis is further
conducted to assess the suitability of such datasets for FRS by studying the
realistic skin-tone representation using Individual Typology Angle (ITA).
Further, we also analyze the quality characteristics using existing Face image
quality assessment (FIQA) approaches. We then provide a holistic FRS
performance analysis using four different systems. Our findings pave the way
for future research works in (i) developing both specific ethnicity and general
(any to any) ethnicity alteration models, (ii) expanding such approaches to
create databases with diverse skin tones, (iii) creating datasets representing
various ethnicities, which can further help mitigate bias while addressing
privacy concerns.
| [
{
"created": "Thu, 2 May 2024 13:31:09 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2024 03:31:22 GMT",
"version": "v2"
}
] | 2024-05-08 | [
[
"Chandaliya",
"Praveen Kumar",
""
],
[
"Raja",
"Kiran",
""
],
[
"Ramachandra",
"Raghavendra",
""
],
[
"Akhtar",
"Zahid",
""
],
[
"Busch",
"Christoph",
""
]
] |
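The Individual Typology Angle mentioned above is a standard colorimetric quantity, ITA° = arctan((L* − 50)/b*) × 180/π over CIELAB values; the sketch below computes it, with category cut-offs following the commonly cited Chardon scale (an assumption, since the paper may bin differently).

```python
import math

def ita_degrees(L: float, b: float) -> float:
    """Individual Typology Angle from CIELAB lightness L* and b*."""
    return math.degrees(math.atan2(L - 50.0, b))

def ita_category(ita: float) -> str:
    cuts = [(55, "very light"), (41, "light"), (28, "intermediate"),
            (10, "tan"), (-30, "brown")]
    for threshold, name in cuts:
        if ita > threshold:
            return name
    return "dark"

ita = ita_degrees(65.0, 15.0)            # toy CIELAB values
print(round(ita, 1), ita_category(ita))  # 45.0 light
```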
2405.01458 | Samee Arif | Samee Arif, Sualeha Farid, Awais Athar, Agha Ali Raza | UQA: Corpus for Urdu Question Answering | null | Proceedings of the 2024 Joint International Conference on
Computational Linguistics, Language Resources and Evaluation (LREC-COLING
2024), pp. 17237-17244, May 2024 | null | null | cs.CL cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces UQA, a novel dataset for question answering and text
comprehension in Urdu, a low-resource language with over 70 million native
speakers. UQA is generated by translating the Stanford Question Answering
Dataset (SQuAD2.0), a large-scale English QA dataset, using a technique called
EATS (Enclose to Anchor, Translate, Seek), which preserves the answer spans in
the translated context paragraphs. The paper describes the process of selecting
and evaluating the best translation model among two candidates: Google
Translator and Seamless M4T. The paper also benchmarks several state-of-the-art
multilingual QA models on UQA, including mBERT, XLM-RoBERTa, and mT5, and
reports promising results. XLM-RoBERTa-XL achieves an F1 score of 85.99 and an
EM of 74.56. UQA is a valuable resource for developing and testing multilingual
NLP systems for Urdu and for enhancing the cross-lingual transferability of
existing models. Further, the paper demonstrates the effectiveness of EATS for
creating high-quality datasets for other languages and domains. The UQA dataset
and the code are publicly available at www.github.com/sameearif/UQA.
| [
{
"created": "Thu, 2 May 2024 16:44:31 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 18:46:11 GMT",
"version": "v2"
}
] | 2024-07-24 | [
[
"Arif",
"Samee",
""
],
[
"Farid",
"Sualeha",
""
],
[
"Athar",
"Awais",
""
],
[
"Raza",
"Agha Ali",
""
]
] |
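The EATS idea (Enclose to Anchor, Translate, Seek) can be sketched in a few lines: wrap the answer span in anchor characters before translation, then locate the anchors again afterwards. The translate function below is an identity stand-in so the demo runs; a real pipeline would call an MT backend such as the ones the paper compares.

```python
def translate(text: str) -> str:
    # Identity stand-in; swap in a real MT system here.
    return text

def eats(context: str, start: int, end: int, anchor: str = '"'):
    """Enclose the answer span, translate, then seek the span again.
    Assumes the anchor character does not already occur in the context."""
    enclosed = context[:start] + anchor + context[start:end] + anchor + context[end:]
    translated = translate(enclosed)             # anchors survive translation
    s = translated.index(anchor) + len(anchor)   # position after first anchor
    e = translated.index(anchor, s)              # position of second anchor
    clean = translated.replace(anchor, "", 2)
    return clean, s - len(anchor), e - len(anchor)

ctx = "Einstein was born in Ulm in 1879."
clean, s, e = eats(ctx, 21, 24)  # answer span "Ulm"
print(clean[s:e])                # -> Ulm
```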
2405.01561 | Jaida Gao | Jaida Gao, Calab Su, Etai Miller, Kevin Lu, Yu Meng | Rapid Mobile App Development for Generative AI Agents on MIT App
Inventor | null | Journal of advances in information science and technology 2(3)
1-8, March 2024 | 10.5281/zenodo.10899798 | null | cs.SE cs.AI cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The evolution of Artificial Intelligence (AI) stands as a pivotal force
shaping our society, finding applications across diverse domains such as
education, sustainability, and safety. Leveraging AI within mobile applications
makes it easily accessible to the public, catalyzing its transformative
potential. In this paper, we present a methodology for the rapid development of
AI agent applications using the development platform provided by MIT App
Inventor. To demonstrate its efficacy, we share the development journey of
three distinct mobile applications: SynchroNet for fostering sustainable
communities; ProductiviTeams for addressing procrastination; and iHELP for
enhancing community safety. All three applications seamlessly integrate a
spectrum of generative AI features, leveraging OpenAI APIs. Furthermore, we
offer insights gleaned from overcoming challenges in integrating diverse tools
and AI functionalities, aiming to inspire young developers to join our efforts
in building practical AI agent applications.
| [
{
"created": "Mon, 1 Apr 2024 02:35:19 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Gao",
"Jaida",
""
],
[
"Su",
"Calab",
""
],
[
"Miller",
"Etai",
""
],
[
"Lu",
"Kevin",
""
],
[
"Meng",
"Yu",
""
]
] |
2405.01586 | Tohida Rehman Ms. | Tohida Rehman, Raghubir Bose, Samiran Chattopadhyay, Debarshi Kumar
Sanyal | Transfer Learning and Transformer Architecture for Financial Sentiment
Analysis | 12 pages, 9 figures | Proceedings of International Conference on Computational
Intelligence, Data Science and Cloud Computing: IEM-ICDC 2021, pages 17--27 | 10.1007/978-981-19-1657-1_2 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Financial sentiment analysis allows financial institutions such as banks and
insurance companies to better manage the credit scoring of their customers.
The financial domain uses specialized terminology, which makes sentiment
analysis difficult. In this paper, we propose a pre-trained language model
that can help solve this problem with less labelled data. We build on the
principles of transfer learning and the Transformer architecture, and also
take into consideration recent pandemic outbreaks such as COVID-19. We apply
the sentiment analysis to two different sets of data. We also take a smaller
training set and fine-tune the model on it.
| [
{
"created": "Sun, 28 Apr 2024 17:15:07 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Rehman",
"Tohida",
""
],
[
"Bose",
"Raghubir",
""
],
[
"Chattopadhyay",
"Samiran",
""
],
[
"Sanyal",
"Debarshi Kumar",
""
]
] |
2405.01587 | Nidhi Kamal | Nidhi Kamal, Saurabh Yadav, Jorawar Singh, Aditi Avasthi | Improve Academic Query Resolution through BERT-based Question Extraction
from Images | null | 2024 IEEE International Conference on Interdisciplinary Approaches
in Technology and Management for Social Innovation (IATMSI) volume 2 (2024)
1-4 | 10.1109/IATMSI60426.2024.10502904 | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Providing fast and accurate resolution to the student's query is an essential
service provided by Edtech organizations. It is generally offered through a
chatbot-like interface that enables students to ask their doubts easily. One
preferred format for student queries is images, as it allows students to
capture and post questions without typing complex equations and information.
However, this format also presents difficulties, as images may contain multiple
questions or textual noise that lowers the accuracy of existing single-query
answering solutions. In this paper, we propose a method for extracting
questions from text or images using a BERT-based deep learning model and
compare it to the other rule-based and layout-based methods. Our method aims to
improve the accuracy and efficiency of student query resolution in Edtech
organizations.
| [
{
"created": "Sun, 28 Apr 2024 19:11:08 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Kamal",
"Nidhi",
""
],
[
"Yadav",
"Saurabh",
""
],
[
"Singh",
"Jorawar",
""
],
[
"Avasthi",
"Aditi",
""
]
] |
2405.01820 | Cedric Deslandes Whitney | Cedric Deslandes Whitney, Justin Norman | Real Risks of Fake Data: Synthetic Data, Diversity-Washing and Consent
Circumvention | null | FAccT '24, June 03--06, 2024, Rio de Janeiro, Brazil | 10.1145/3630106.3659002 | null | cs.CY cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning systems require representations of the real world for
training and testing - they require data, and lots of it. Collecting data at
scale has logistical and ethical challenges, and synthetic data promises a
solution to these challenges. Instead of needing to collect photos of real
people's faces to train a facial recognition system, a model creator could
create and use photo-realistic, synthetic faces. The comparative ease of
generating this synthetic data rather than relying on collecting data has made
it a common practice. We present two key risks of using synthetic data in model
development. First, we detail the high risk of false confidence when using
synthetic data to increase dataset diversity and representation. We base this
in the examination of a real world use-case of synthetic data, where synthetic
datasets were generated for an evaluation of facial recognition technology.
Second, we examine how using synthetic data risks circumventing consent for
data usage. We illustrate this by considering the importance of consent to the
U.S. Federal Trade Commission's regulation of data collection and affected
models. Finally, we discuss how these two risks exemplify how synthetic data
complicates existing governance and ethical practice; by decoupling data from
those it impacts, synthetic data is prone to consolidating power away from those
most impacted by algorithmically-mediated harm.
| [
{
"created": "Fri, 3 May 2024 02:47:44 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Whitney",
"Cedric Deslandes",
""
],
[
"Norman",
"Justin",
""
]
] |
2405.01885 | Deng Li | Deng Li, Bohao Xing, Xin Liu | Enhancing Micro Gesture Recognition for Emotion Understanding via
Context-aware Visual-Text Contrastive Learning | accepted by IEEE Signal Processing Letters | IEEE Signal Processing Letters (2024) | 10.1109/LSP.2024.3396656 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Psychological studies have shown that Micro Gestures (MG) are closely linked
to human emotions. MG-based emotion understanding has attracted much attention
because it allows for emotion understanding through nonverbal body gestures
without relying on identity information (e.g., facial and electrocardiogram
data). Therefore, it is essential to recognize MG effectively for advanced
emotion understanding. However, existing Micro Gesture Recognition (MGR)
methods utilize only a single modality (e.g., RGB or skeleton) while
overlooking crucial textual information. In this letter, we propose a simple
but effective visual-text contrastive learning solution that utilizes text
information for MGR. In addition, instead of using handcrafted prompts for
visual-text contrastive learning, we propose a novel module called Adaptive
prompting to generate context-aware prompts. The experimental results show that
the proposed method achieves state-of-the-art performance on two public
datasets. Furthermore, based on an empirical study utilizing the results of MGR
for emotion understanding, we demonstrate that using the textual results of MGR
significantly improves performance by 6%+ compared to directly using video as
input.
| [
{
"created": "Fri, 3 May 2024 07:11:25 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Li",
"Deng",
""
],
[
"Xing",
"Bohao",
""
],
[
"Liu",
"Xin",
""
]
] |
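The generic backbone of visual-text contrastive learning is a symmetric InfoNCE loss over matched embedding pairs, sketched below in PyTorch; the paper's context-aware adaptive prompting is not reproduced, and the embedding dimensions are arbitrary.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(vis: torch.Tensor, txt: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched visual/text pairs sit on the diagonal."""
    vis = F.normalize(vis, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = vis @ txt.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(vis.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```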
2405.01942 | Clément Brutti-Mairesse | Clément Brutti-Mairesse and Loïc Verlingue | CRCL at SemEval-2024 Task 2: Simple prompt optimizations | null | SemEval-2024 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a baseline for the SemEval 2024 task 2 challenge, whose objective
is to ascertain the inference relationship between pairs of clinical trial
report sections and statements. We apply prompt optimization techniques with
LLM Instruct models provided as a Language Model-as-a-Service (LMaaS). We
observed, in line with recent findings, that synthetic CoT prompts
significantly outperform manually crafted ones.
| [
{
"created": "Fri, 3 May 2024 09:10:40 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Brutti-Mairesse",
"Clément",
""
],
[
"Verlingue",
"Loïc",
""
]
] |
2405.01971 | Alberto Pretto | Emilio Olivastri, Daniel Fusaro, Wanmeng Li, Simone Mosco, and Alberto
Pretto | A Sonar-based AUV Positioning System for Underwater Environments with
Low Infrastructure Density | Accepted to the IEEE ICRA Workshop on Field Robotics 2024 | IEEE ICRA Workshop on Field Robotics 2024 | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing demand for underwater vehicles highlights the necessity for
robust localization solutions in inspection missions. In this work, we present
a novel real-time sonar-based underwater global positioning algorithm for AUVs
(Autonomous Underwater Vehicles) designed for environments with a sparse
distribution of human-made assets. Our approach exploits two synergistic data
interpretation frontends applied to the same stream of sonar data acquired by a
multibeam Forward-Looking Sonar (FLS). These observations are fused within a
Particle Filter (PF), either to give greater weight to particles that belong to
high-likelihood regions or to resolve symmetric ambiguities. Preliminary
experiments carried out on a simulated environment resembling a real underwater
plant provided promising results. This work represents a starting point towards
future developments of the method and consequent exhaustive evaluations also in
real-world scenarios.
| [
{
"created": "Fri, 3 May 2024 09:53:28 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Olivastri",
"Emilio",
""
],
[
"Fusaro",
"Daniel",
""
],
[
"Li",
"Wanmeng",
""
],
[
"Mosco",
"Simone",
""
],
[
"Pretto",
"Alberto",
""
]
] |
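A bare-bones version of the particle-filter fusion step reads as follows; the two sonar frontends are collapsed into a single toy likelihood, so this is only the generic weight-update/resampling skeleton, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(0, 100, size=(1000, 2))      # candidate AUV positions
weights = np.full(len(particles), 1.0 / len(particles))

def likelihood(p, observation):
    """Toy stand-in for the sonar frontends: how well does particle p
    explain the current observation?"""
    return np.exp(-0.5 * np.sum((p - observation) ** 2) / 4.0)

obs = np.array([42.0, 17.0])                         # toy observation
weights *= np.array([likelihood(p, obs) for p in particles])
weights /= weights.sum()

# Systematic resampling concentrates particles in high-likelihood regions.
N = len(particles)
positions = (rng.random() + np.arange(N)) / N
particles = particles[np.searchsorted(np.cumsum(weights), positions)]
print("posterior mean position:", particles.mean(axis=0))
```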
2405.01995 | Stefano Savazzi | S. Savazzi, V. Rampa, S. Kianoush, A. Minora, L. Costa | Cooperation and Federation in Distributed Radar Point Cloud Processing | null | 2023 IEEE 34th Annual International Symposium on Personal, Indoor
and Mobile Radio Communications (PIMRC) | 10.1109/PIMRC56721.2023.10294026 | null | cs.LG cs.CV cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | The paper considers the problem of human-scale RF sensing utilizing a network
of resource-constrained MIMO radars with low range-azimuth resolution. The
radars operate in the mmWave band and obtain time-varying 3D point cloud (PC)
information that is sensitive to body movements. They also observe the same
scene from different views and cooperate while sensing the environment using a
sidelink communication channel. Conventional cooperation setups allow the
radars to mutually exchange raw PC information to improve ego sensing. The
paper proposes a federation mechanism where the radars exchange the parameters
of a Bayesian posterior measure of the observed PCs, rather than raw data. The
radars act as distributed parameter servers to reconstruct a global posterior
(i.e., federated posterior) using Bayesian tools. The paper quantifies and
compares the benefits of radar federation with respect to cooperation
mechanisms. Both approaches are validated by experiments with a real-time
demonstration platform. Federation makes minimal use of the sidelink
communication channel (20-25 times lower bandwidth use) and is less
sensitive to unresolved targets. On the other hand, cooperation reduces the
mean absolute target estimation error by about 20%.
| [
{
"created": "Fri, 3 May 2024 10:50:30 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Savazzi",
"S.",
""
],
[
"Rampa",
"V.",
""
],
[
"Kianoush",
"S.",
""
],
[
"Minora",
"A.",
""
],
[
"Costa",
"L.",
""
]
] |
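To see why exchanging posterior parameters is so much cheaper than exchanging raw point clouds, consider the toy Gaussian case below, where each radar reports a (mean, covariance) pair and the global posterior is a precision-weighted product of Gaussians; the paper's Bayesian machinery is richer, so this is only an intuition-level sketch.

```python
import numpy as np

def fuse_gaussians(means, covs):
    """Precision-weighted fusion of independent Gaussian posteriors."""
    precisions = [np.linalg.inv(c) for c in covs]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(p @ m for p, m in zip(precisions, means))
    return fused_mean, fused_cov

m1, c1 = np.array([1.0, 2.0]), np.eye(2) * 0.5   # radar 1's local posterior
m2, c2 = np.array([1.2, 1.8]), np.eye(2) * 0.8   # radar 2's local posterior
mean, cov = fuse_gaussians([m1, m2], [c1, c2])
print(mean, np.diag(cov))   # fused estimate is tighter than either input
```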
2405.02332 | Adrien Le Coz | Adrien LeCoz, Houssem Ouertatani, St\'ephane Herbin, Faouzi Adjed | Efficient Exploration of Image Classifier Failures with Bayesian
Optimization and Text-to-Image Models | null | Generative Models for Computer Vision - CVPR 2024 Workshop, Jun
2024, Seattle, United States | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image classifiers should be used with caution in the real world. Performance
evaluated on a validation set may not reflect performance in the real world. In
particular, classifiers may perform well for conditions that are frequently
encountered during training, but poorly for other infrequent conditions. In
this study, we hypothesize that recent advances in text-to-image generative
models make them valuable for benchmarking computer vision models such as image
classifiers: they can generate images conditioned by textual prompts that cause
classifier failures, allowing failure conditions to be described with textual
attributes. However, their generation cost becomes an issue when a large number
of synthetic images need to be generated, which is the case when many different
attribute combinations need to be tested. We propose an image classifier
benchmarking method as an iterative process that alternates image generation,
classifier evaluation, and attribute selection. This method efficiently
explores the attributes that ultimately lead to the detection of poor classifier behavior.
| [
{
"created": "Fri, 26 Apr 2024 06:22:43 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Sep 2024 09:21:03 GMT",
"version": "v2"
}
] | 2024-09-30 | [
[
"LeCoz",
"Adrien",
""
],
[
"Ouertatani",
"Houssem",
""
],
[
"Herbin",
"Stéphane",
""
],
[
"Adjed",
"Faouzi",
""
]
] |
2405.02548 | Ahmed Bensaoud | Ahmed Bensaoud, Jugal Kalita | CNN-LSTM and Transfer Learning Models for Malware Classification based
on Opcodes and API Calls | null | Bensaoud, A., & Kalita, J. (2024). CNN-LSTM and transfer learning
models for malware classification based on opcodes and API calls.
Knowledge-Based Systems, 111543 | 10.1016/j.knosys.2024.111543 | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel model for a malware classification system
based on Application Programming Interface (API) calls and opcodes, to improve
classification accuracy. This system uses a novel design of combined
Convolutional Neural Network and Long Short-Term Memory. We extract opcode
sequences and API Calls from Windows malware samples for classification. We
transform these features into N-gram sequences (N = 2, 3, and 10). Our
experiments on a dataset of 9,749,57 samples produce a high accuracy of 99.91%
using the 8-gram sequences. Our method significantly improves the malware
classification performance when using a wide range of recent deep learning
architectures, leading to state-of-the-art performance. In particular, we
experiment with ConvNeXt-T, ConvNeXt-S, RegNetY-4GF, RegNetY-8GF, RegNetY-12GF,
EfficientNetV2, Sequencer2D-L, Swin-T, ViT-G/14, ViT-Ti, ViT-S, VIT-B, VIT-L,
and MaxViT-B. Among these architectures, Swin-T and Sequencer2D-L architectures
achieved high accuracies of 99.82% and 99.70%, respectively, comparable to our
CNN-LSTM architecture although not surpassing it.
| [
{
"created": "Sat, 4 May 2024 03:13:13 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Bensaoud",
"Ahmed",
""
],
[
"Kalita",
"Jugal",
""
]
] |
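Extracting opcode N-gram features, the front half of the pipeline above, is a one-liner with a Counter; the opcode sequence here is invented for illustration.

```python
from collections import Counter

def ngrams(seq, n):
    """Count all length-n windows of the opcode sequence."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

opcodes = ["push", "mov", "call", "push", "mov", "xor", "ret"]
for n in (2, 3):
    print(n, ngrams(opcodes, n).most_common(3))
```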
2405.02573 | Hieu Ngo | Hieu Ngo Trung, Duong Tran Ham, Tin Huynh, Kiem Hoang | A Combination of BERT and Transformer for Vietnamese Spelling Correction | 13 pages | ACIIDS 2022, LNCS, vol 13757, Springer, Cham | 10.1007/978-3-031-21743-2_43 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recently, many studies have shown the efficiency of using Bidirectional
Encoder Representations from Transformers (BERT) in various Natural Language
Processing (NLP) tasks. Specifically, the English spelling correction task that
uses an Encoder-Decoder architecture and takes advantage of BERT has achieved
state-of-the-art results. However, to our knowledge, there is no implementation
in Vietnamese yet. Therefore, in this study, a combination of Transformer
architecture (state-of-the-art for Encoder-Decoder model) and BERT was proposed
to deal with Vietnamese spelling correction. The experimental results have shown
that our model outperforms other approaches as well as the Google Docs Spell
Checking tool, achieving an 86.24 BLEU score on this task.
| [
{
"created": "Sat, 4 May 2024 05:24:19 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Trung",
"Hieu Ngo",
""
],
[
"Ham",
"Duong Tran",
""
],
[
"Huynh",
"Tin",
""
],
[
"Hoang",
"Kiem",
""
]
] |
2405.02654 | Tianyu Ren | Tianyu Ren, Xiao-Jun Zeng | Enhancing Cooperation through Selective Interaction and Long-term
Experiences in Multi-Agent Reinforcement Learning | Accepted at IJCAI 2024 (33rd International Joint Conference on
Artificial Intelligence - Jeju) | IJCAI (2024) 193-201 | 10.24963/ijcai.2024/22 | null | cs.MA cs.AI cs.GT | http://creativecommons.org/licenses/by/4.0/ | The significance of network structures in promoting group cooperation within
social dilemmas has been widely recognized. Prior studies attribute this
facilitation to the assortment of strategies driven by spatial interactions.
Although reinforcement learning has been employed to investigate the impact of
dynamic interaction on the evolution of cooperation, there remains a lack of
understanding about how agents develop neighbour selection behaviours and the
formation of strategic assortment within an explicit interaction structure. To
address this, our study introduces a computational framework based on
multi-agent reinforcement learning in the spatial Prisoner's Dilemma game. This
framework allows agents to select dilemma strategies and interacting neighbours
based on their long-term experiences, differing from existing research that
relies on preset social norms or external incentives. By modelling each agent
using two distinct Q-networks, we disentangle the coevolutionary dynamics
between cooperation and interaction. The results indicate that long-term
experience enables agents to develop the ability to identify non-cooperative
neighbours and exhibit a preference for interaction with cooperative ones. This
emergent self-organizing behaviour leads to the clustering of agents with
similar strategies, thereby increasing network reciprocity and enhancing group
cooperation.
| [
{
"created": "Sat, 4 May 2024 12:42:55 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Aug 2024 14:30:52 GMT",
"version": "v2"
}
] | 2024-08-20 | [
[
"Ren",
"Tianyu",
""
],
[
"Zeng",
"Xiao-Jun",
""
]
] |
2405.02711 | Jordyn Young | Jordyn Young, Laala M Jawara, Diep N Nguyen, Brian Daly, Jina Huh-Yoo,
and Afsaneh Razi | The Role of AI in Peer Support for Young People: A Study of Preferences
for Human- and AI-Generated Responses | null | Proceedings of the CHI Conference on Human Factors in Computing
Systems 2024 | 10.1145/3613904.3642574 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generative Artificial Intelligence (AI) is integrated into everyday
technology, including news, education, and social media. AI has further
pervaded private conversations as conversational partners, auto-completion, and
response suggestions. As social media becomes young people's main method of
peer support exchange, we need to understand when and how AI can facilitate and
assist in such exchanges in a beneficial, safe, and socially appropriate way.
We asked 622 young people to complete an online survey and evaluate blinded
human- and AI-generated responses to help-seeking messages. We found that
participants preferred the AI-generated response to situations about
relationships, self-expression, and physical health. However, when addressing a
sensitive topic, like suicidal thoughts, young people preferred the human
response. We also discuss the role of training in online peer support exchange
and its implications for supporting young people's well-being. Disclaimer: This
paper includes sensitive topics, including suicide ideation. Reader discretion
is advised.
| [
{
"created": "Sat, 4 May 2024 16:53:19 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Young",
"Jordyn",
""
],
[
"Jawara",
"Laala M",
""
],
[
"Nguyen",
"Diep N",
""
],
[
"Daly",
"Brian",
""
],
[
"Huh-Yoo",
"Jina",
""
],
[
"Razi",
"Afsaneh",
""
]
] |
2405.03055 | A. Ben Hamza | Zaedul Islam and A. Ben Hamza | Multi-hop graph transformer network for 3D human pose estimation | null | Journal of Visual Communication and Image Representation, 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Accurate 3D human pose estimation is a challenging task due to occlusion and
depth ambiguity. In this paper, we introduce a multi-hop graph transformer
network designed for 2D-to-3D human pose estimation in videos by leveraging the
strengths of multi-head self-attention and multi-hop graph convolutional
networks with disentangled neighborhoods to capture spatio-temporal
dependencies and handle long-range interactions. The proposed network
architecture consists of a graph attention block composed of stacked layers of
multi-head self-attention and graph convolution with learnable adjacency
matrix, and a multi-hop graph convolutional block comprised of multi-hop
convolutional and dilated convolutional layers. The combination of multi-head
self-attention and multi-hop graph convolutional layers enables the model to
capture both local and global dependencies, while the integration of dilated
convolutional layers enhances the model's ability to handle spatial details
required for accurate localization of the human body joints. Extensive
experiments demonstrate the effectiveness and generalization ability of our
model, achieving competitive performance on benchmark datasets.
| [
{
"created": "Sun, 5 May 2024 21:29:20 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Islam",
"Zaedul",
""
],
[
"Hamza",
"A. Ben",
""
]
] |
2405.03279 | Qizhou Chen | Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang,
Longtao Huang, Hui Xue | Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous
Prompt Learning | 16 pages, 4 figures, 6 tables | EMNLP 2024 main | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Model editing aims to correct outdated or erroneous knowledge in large
language models (LLMs) without the need for costly retraining. Lifelong model
editing is the most challenging task that caters to the continuous editing
requirements of LLMs. Prior works primarily focus on single or batch editing;
nevertheless, these methods fall short in lifelong editing scenarios due to
catastrophic knowledge forgetting and the degradation of model performance.
Although retrieval-based methods alleviate these issues, they are impeded by
slow and cumbersome processes of integrating the retrieved knowledge into the
model. In this work, we introduce RECIPE, a RetriEval-augmented ContInuous
Prompt lEarning method, to boost editing efficacy and inference efficiency in
lifelong learning. RECIPE first converts knowledge statements into short and
informative continuous prompts, prefixed to the LLM's input query embedding, to
efficiently refine the response grounded on the knowledge. It further
integrates the Knowledge Sentinel (KS) that acts as an intermediary to
calculate a dynamic threshold, determining whether the retrieval repository
contains relevant knowledge. Our retriever and prompt encoder are jointly
trained to achieve editing properties, i.e., reliability, generality, and
locality. In our experiments, RECIPE is assessed extensively across multiple
LLMs and editing datasets, where it achieves superior editing performance.
RECIPE also demonstrates its capability to maintain the overall performance of
LLMs alongside showcasing fast editing and inference speed.
| [
{
"created": "Mon, 6 May 2024 08:52:11 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 03:45:51 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Oct 2024 12:29:46 GMT",
"version": "v3"
}
] | 2024-10-10 | [
[
"Chen",
"Qizhou",
""
],
[
"Zhang",
"Taolin",
""
],
[
"He",
"Xiaofeng",
""
],
[
"Li",
"Dongyang",
""
],
[
"Wang",
"Chengyu",
""
],
[
"Huang",
"Longtao",
""
],
[
"Xue",
"Hui",
""
]
] |
2405.03301 | Antonio De Santis | Matteo Bianchi, Antonio De Santis, Andrea Tocchetti and Marco
Brambilla | Interpretable Network Visualizations: A Human-in-the-Loop Approach for
Post-hoc Explainability of CNN-based Image Classification | International Joint Conference on Artificial Intelligence 2024 (to be
published) | IJCAI 2024 | 10.24963/ijcai.2024/411 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transparency and explainability in image classification are essential for
establishing trust in machine learning models and detecting biases and errors.
State-of-the-art explainability methods generate saliency maps to show where a
specific class is identified, without providing a detailed explanation of the
model's decision process. Striving to address such a need, we introduce a
post-hoc method that explains the entire feature extraction process of a
Convolutional Neural Network. These explanations include a layer-wise
representation of the features the model extracts from the input. Such features
are represented as saliency maps generated by clustering and merging similar
feature maps, to which we associate a weight derived by generalizing Grad-CAM
for the proposed methodology. To further enhance these explanations, we include
a set of textual labels collected through a gamified crowdsourcing activity and
processed using NLP techniques and Sentence-BERT. Finally, we show an approach
to generate global explanations by aggregating labels across multiple images.
| [
{
"created": "Mon, 6 May 2024 09:21:35 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Bianchi",
"Matteo",
""
],
[
"De Santis",
"Antonio",
""
],
[
"Tocchetti",
"Andrea",
""
],
[
"Brambilla",
"Marco",
""
]
] |
2405.03305 | Harry Robertshaw | Harry Robertshaw, Lennart Karstensen, Benjamin Jackson, Hadi Sadati,
Kawal Rhode, Sebastien Ourselin, Alejandro Granados, Thomas C Booth | Artificial Intelligence in the Autonomous Navigation of Endovascular
Interventions: A Systematic Review | Abstract shortened for arXiv character limit | (2023) Front. Hum. Neurosci. 17:1239374 | 10.3389/fnhum.2023.1239374 | null | cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Purpose: Autonomous navigation of devices in endovascular interventions can
decrease operation times, improve decision-making during surgery, and reduce
operator radiation exposure while increasing access to treatment. This
systematic review explores recent literature to assess the impact, challenges,
and opportunities artificial intelligence (AI) has for autonomous
endovascular intervention navigation.
Methods: PubMed and IEEE Xplore databases were queried. Eligibility criteria
included studies investigating the use of AI in enabling the autonomous
navigation of catheters/guidewires in endovascular interventions. Following
PRISMA, articles were assessed using QUADAS-2. PROSPERO: CRD42023392259.
Results: Among 462 studies, fourteen met inclusion criteria. Reinforcement
learning (9/14, 64%) and learning from demonstration (7/14, 50%) were used as
data-driven models for autonomous navigation. Studies predominantly utilised
physical phantoms (10/14, 71%) and in silico (4/14, 29%) models. Experiments
within or around the blood vessels of the heart were reported by the majority
of studies (10/14, 71%), while simple non-anatomical vessel platforms were used
in three studies (3/14, 21%), and the porcine liver venous system in one study.
We observed that risk of bias and poor generalisability were present across
studies. No procedures were performed on patients in any of the studies
reviewed. Studies lacked patient selection criteria, reference standards, and
reproducibility, resulting in low clinical evidence levels.
Conclusions: AI's potential in autonomous endovascular navigation is
promising, but in an experimental proof-of-concept stage, with a technology
readiness level of 3. We highlight that reference standards with
well-identified performance metrics are crucial to allow for comparisons of
data-driven algorithms proposed in the years to come.
| [
{
"created": "Mon, 6 May 2024 09:28:30 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Robertshaw",
"Harry",
""
],
[
"Karstensen",
"Lennart",
""
],
[
"Jackson",
"Benjamin",
""
],
[
"Sadati",
"Hadi",
""
],
[
"Rhode",
"Kawal",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Granados",
"Alejandro",
""
],
[
"Booth",
"Thomas C",
""
]
] |
2405.03435 | Ming Gao | Qunlong Ma, Zhi Ma, Ming Gao | A method for quantifying the generalization capabilities of generative
models for solving Ising models | 10 pages, 7 figures | Mach. Learn.: Sci. Technol. 5 (2024) 025011 | 10.1088/2632-2153/ad3710 | null | cond-mat.dis-nn cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For Ising models with complex energy landscapes, whether the ground state can
be found by neural networks depends heavily on the Hamming distance between the
training datasets and the ground state. Despite the fact that various recently
proposed generative models have shown good performance in solving Ising models,
there is no adequate discussion on how to quantify their generalization
capabilities. Here we design a Hamming distance regularizer in the framework of
a class of generative models, variational autoregressive networks (VAN), to
quantify the generalization capabilities of various network architectures
combined with VAN. The regularizer can control the size of the overlaps between
the ground state and the training datasets generated by networks, which,
together with the success rates of finding the ground state, form a
quantitative metric to quantify their generalization capabilities. We conduct
numerical experiments on several prototypical network architectures combined
with VAN, including feed-forward neural networks, recurrent neural networks,
and graph neural networks, to quantify their generalization capabilities when
solving Ising models. Moreover, considering the fact that the quantification of
the generalization capabilities of networks on small-scale problems can be used
to predict their relative performance on large-scale problems, our method is of
great significance for assisting in the Neural Architecture Search field of
searching for the optimal network architectures when solving large-scale Ising
models.
| [
{
"created": "Mon, 6 May 2024 12:58:48 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Ma",
"Qunlong",
""
],
[
"Ma",
"Zhi",
""
],
[
"Gao",
"Ming",
""
]
] |
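
A minimal sketch of the Hamming-distance regularization idea described in the record above, assuming spin configurations encoded as +/-1 NumPy arrays. The weight `alpha`, the target distance, and the toy base loss are illustrative stand-ins, not the paper's VAN training objective.

```python
import numpy as np

def hamming_distance(samples, ground_state):
    """Per-sample Hamming distance between +/-1 spin configurations
    and a reference ground state (fraction of flipped spins)."""
    return np.mean(samples != ground_state, axis=1)

def regularized_loss(base_loss, samples, ground_state, target_dist, alpha=1.0):
    """Penalize deviation of the mean Hamming distance of generated
    samples from a target value, controlling the overlap between
    network-generated training data and the ground state."""
    d = hamming_distance(samples, ground_state).mean()
    return base_loss + alpha * (d - target_dist) ** 2

rng = np.random.default_rng(0)
ground_state = rng.choice([-1, 1], size=64)
# toy samples: each spin flipped with probability 0.1
samples = np.where(rng.random((128, 64)) < 0.1, -ground_state, ground_state)
print(regularized_loss(0.0, samples, ground_state, target_dist=0.1))
```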
2405.03500 | Yuefeng Zhang | Yuefeng Zhang | A Rate-Distortion-Classification Approach for Lossy Image Compression | 15 pages | Digital Signal Processing Volume 141, September 2023, 104163 | 10.1016/j.dsp.2023.104163 | null | cs.MM cs.AI cs.CV cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In lossy image compression, the objective is to achieve minimal signal
distortion while compressing images to a specified bit rate. The increasing
demand for visual analysis applications, particularly in classification tasks,
has emphasized the significance of considering semantic distortion in
compressed images. To bridge the gap between image compression and visual
analysis, we propose a Rate-Distortion-Classification (RDC) model for lossy
image compression, offering a unified framework to optimize the trade-off
between rate, distortion, and classification accuracy. The RDC model is
extensively analyzed both statistically on a multi-distribution source and
experimentally on the widely used MNIST dataset. The findings reveal that the
RDC model exhibits desirable properties, including monotonic non-increasing and
convex functions, under certain conditions. This work provides insights into
the development of human-machine friendly compression methods and Video Coding
for Machine (VCM) approaches, paving the way for end-to-end image compression
techniques in real-world applications.
| [
{
"created": "Mon, 6 May 2024 14:11:36 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Zhang",
"Yuefeng",
""
]
] |
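
A minimal PyTorch sketch of the three-term rate-distortion-classification trade-off described in the record above; the Lagrange multipliers and the use of cross-entropy on the reconstruction's logits are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rdc_loss(rate_bits, x, x_hat, logits, labels, lam_d=1.0, lam_c=0.1):
    """Three-term objective: estimated rate (bits), reconstruction
    distortion (MSE), and classification cross-entropy evaluated on
    the compressed reconstruction. lam_d and lam_c trade the three
    terms off against each other."""
    distortion = F.mse_loss(x_hat, x)
    classification = F.cross_entropy(logits, labels)
    return rate_bits + lam_d * distortion + lam_c * classification
```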
2405.03652 | Zhiyuan Li | Chenyu Gao, Shunxing Bao, Michael Kim, Nancy Newlin, Praitayini
Kanakaraj, Tianyuan Yao, Gaurav Rudravaram, Yuankai Huo, Daniel Moyer, Kurt
Schilling, Walter Kukull, Arthur Toga, Derek Archer, Timothy Hohman, Bennett
Landman, Zhiyuan Li | Field-of-View Extension for Brain Diffusion MRI via Deep Generative
Models | 20 pages, 11 figures | Journal of Medical Imaging, Vol. 11, Issue 4, 044008 (August 2024) | 10.1117/1.JMI.11.4.044008 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Purpose: In diffusion MRI (dMRI), the volumetric and bundle analyses of
whole-brain tissue microstructure and connectivity can be severely impeded by
an incomplete field-of-view (FOV). This work aims to develop a method for
imputing the missing slices directly from existing dMRI scans with an
incomplete FOV. We hypothesize that the imputed image with complete FOV can
improve the whole-brain tractography for corrupted data with incomplete FOV.
Therefore, our approach provides a desirable alternative to discarding the
valuable dMRI data, enabling subsequent tractography analyses that would
otherwise be challenging or unattainable with corrupted data. Approach: We
propose a framework based on a deep generative model that estimates the absent
brain regions in dMRI scans with incomplete FOV. The model is capable of
learning both the diffusion characteristics in diffusion-weighted images (DWI)
and the anatomical features evident in the corresponding structural images for
efficiently imputing missing slices of DWI outside of incomplete FOV. Results:
For evaluating the imputed slices, on the WRAP dataset the proposed framework
achieved PSNRb0=22.397, SSIMb0=0.905, PSNRb1300=22.479, SSIMb1300=0.893; on the
NACC dataset it achieved PSNRb0=21.304, SSIMb0=0.892, PSNRb1300=21.599,
SSIMb1300=0.877. The proposed framework improved the tractography accuracy, as
demonstrated by an increased average Dice score for 72 tracts (p < 0.001) on
both the WRAP and NACC datasets. Conclusions: Results suggest that the proposed
framework achieved sufficient imputation performance in dMRI data with
incomplete FOV for improving whole-brain tractography, thereby repairing the
corrupted data. Our approach achieved more accurate whole-brain tractography
results with extended and complete FOV and reduced the uncertainty when
analyzing bundles associated with Alzheimer's Disease.
| [
{
"created": "Mon, 6 May 2024 17:23:42 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Aug 2024 20:29:12 GMT",
"version": "v2"
}
] | 2024-08-30 | [
[
"Gao",
"Chenyu",
""
],
[
"Bao",
"Shunxing",
""
],
[
"Kim",
"Michael",
""
],
[
"Newlin",
"Nancy",
""
],
[
"Kanakaraj",
"Praitayini",
""
],
[
"Yao",
"Tianyuan",
""
],
[
"Rudravaram",
"Gaurav",
""
],
[
"Huo",
"Yuankai",
""
],
[
"Moyer",
"Daniel",
""
],
[
"Schilling",
"Kurt",
""
],
[
"Kukull",
"Walter",
""
],
[
"Toga",
"Arthur",
""
],
[
"Archer",
"Derek",
""
],
[
"Hohman",
"Timothy",
""
],
[
"Landman",
"Bennett",
""
],
[
"Li",
"Zhiyuan",
""
]
] |
2405.03711 | Shaoshi Yang Prof. | Xiao Hu, Tianshu Wang, Min Gong, Shaoshi Yang | Guidance Design for Escape Flight Vehicle Using Evolution Strategy
Enhanced Deep Reinforcement Learning | 13 pages, 13 figures, accepted to appear on IEEE Access, Mar. 2024 | IEEE Access, vol. 12, pp. 48210-48222, Mar. 2024 | 10.1109/ACCESS.2024.3383322 | null | cs.LG cs.AI cs.NE cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Guidance commands of flight vehicles are a series of data sets with fixed
time intervals, thus guidance design constitutes a sequential decision problem
and satisfies the basic conditions for using deep reinforcement learning (DRL).
In this paper, we consider the scenario where the escape flight vehicle (EFV)
generates guidance commands based on DRL and the pursuit flight vehicle (PFV)
generates guidance commands based on the proportional navigation method. For
the EFV, the objective of the guidance design entails progressively maximizing
the residual velocity, subject to the constraint imposed by the given evasion
distance. Thus an irregular dynamic max-min problem of extremely large-scale is
formulated, where the time instant when the optimal solution can be attained is
uncertain and the optimum solution depends on all the intermediate guidance
commands generated before. For solving this problem, a two-step strategy is
conceived. In the first step, we use the proximal policy optimization (PPO)
algorithm to generate the guidance commands of the EFV. The results obtained by
PPO in the global search space are coarse, despite the fact that the reward
function, the neural network parameters and the learning rate are designed
elaborately. Therefore, in the second step, we propose to invoke the evolution
strategy (ES) based algorithm, which uses the result of PPO as the initial
value, to further improve the quality of the solution by searching in the local
space. Simulation results demonstrate that the proposed guidance design method
based on the PPO algorithm is capable of achieving a residual velocity of 67.24
m/s, higher than the residual velocities achieved by the benchmark soft
actor-critic and deep deterministic policy gradient algorithms. Furthermore,
the proposed ES-enhanced PPO algorithm outperforms the PPO algorithm by 2.7\%,
achieving a residual velocity of 69.04 m/s.
| [
{
"created": "Sat, 4 May 2024 06:18:15 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Hu",
"Xiao",
""
],
[
"Wang",
"Tianshu",
""
],
[
"Gong",
"Min",
""
],
[
"Yang",
"Shaoshi",
""
]
] |
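
A toy sketch of the second step described in the record above: a simple (1+lambda) evolution strategy that perturbs the parameter vector returned by PPO and keeps the best candidate. The `evaluate` callable stands in for the residual-velocity simulation; the noise scale, population size, and iteration count are assumptions.

```python
import numpy as np

def es_refine(theta0, evaluate, sigma=0.02, pop=32, iters=100, seed=0):
    """(1+lambda) evolution strategy: search the local neighborhood of
    the PPO solution theta0 and keep the best-scoring candidate."""
    rng = np.random.default_rng(seed)
    best_theta, best_score = theta0.copy(), evaluate(theta0)
    for _ in range(iters):
        noise = rng.normal(0.0, sigma, size=(pop, theta0.size))
        for eps in noise:
            cand = best_theta + eps
            score = evaluate(cand)
            if score > best_score:
                best_theta, best_score = cand, score
    return best_theta, best_score

# toy objective standing in for the residual-velocity simulation
theta, score = es_refine(np.zeros(8), lambda t: -np.sum((t - 0.5) ** 2))
```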
2405.03716 | Abdallah Namoun | Abdallah Namoun, Ahmed Alrehaili, Zaib Un Nisa, Hani Almoamari, Ali
Tufail | Predicting the usability of mobile applications using AI tools: the rise
of large user interface models, opportunities, and challenges | 12 pages, 3 figures, 4 tables, The 7th International Conference on
Emerging Data and Industry (EDI40) | 2024; Procedia Computer Science | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | This article proposes the so-called large user interface models (LUIMs) to
enable the generation of user interfaces and prediction of usability using
artificial intelligence in the context of mobile applications.
| [
{
"created": "Sun, 5 May 2024 09:24:48 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Namoun",
"Abdallah",
""
],
[
"Alrehaili",
"Ahmed",
""
],
[
"Nisa",
"Zaib Un",
""
],
[
"Almoamari",
"Hani",
""
],
[
"Tufail",
"Ali",
""
]
] |
2405.03846 | \'Ad\'am Fodor | \'Ad\'am Fodor, Rachid R. Saboundji, Andr\'as L\H{o}rincz | Enhancing Apparent Personality Trait Analysis with Cross-Modal
Embeddings | 14 pages, 4 figures | Annales Universitatis Scientiarium Budapestinensis de Rolando
E\"otv\"os Nominatae. Sectio Computatorica, MaCS Special Issue, 2021 | null | null | cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Automatic personality trait assessment is essential for high-quality
human-machine interactions. Systems capable of human behavior analysis could be
used for self-driving cars, medical research, and surveillance, among many
others. We present a multimodal deep neural network with a Siamese extension
for apparent personality trait prediction trained on short video recordings and
exploiting modality invariant embeddings. Acoustic, visual, and textual
information are utilized to reach high-performance solutions in this task. Due
to the highly concentrated target distribution of the analyzed dataset, even
changes in the third decimal digit are relevant. Our proposed method addresses the
challenge of under-represented extreme values, achieves 0.0033 MAE average
improvement, and shows a clear advantage over the baseline multimodal DNN
without the introduced module.
| [
{
"created": "Mon, 6 May 2024 20:51:28 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Fodor",
"Ádám",
""
],
[
"Saboundji",
"Rachid R.",
""
],
[
"Lőrincz",
"András",
""
]
] |
2405.03862 | Razan Baltaji | Razan Baltaji, Babak Hemmatian, Lav R. Varshney | Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity,
Confabulation, and Impersonation | 16 pages, 8 figures, 3 tables | The 2nd Workshop on Cross-Cultural Considerations in NLP (2024) | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multi-agent AI systems can be used for simulating collective decision-making
in scientific and practical applications. They can also be used to introduce a
diverse group discussion step in chatbot pipelines, enhancing the cultural
sensitivity of the chatbot's responses. These applications, however, are
predicated on the ability of AI agents to reliably adopt assigned personas and
mimic human interactions. To see whether LLM agents satisfy these requirements,
we examine AI agent ensembles engaged in cross-national collaboration and
debate by analyzing their private responses and chat transcripts. Our findings
suggest that multi-agent discussions can support collective AI decisions that
more often reflect diverse perspectives, yet this effect is tempered by the
agents' susceptibility to conformity due to perceived peer pressure and
occasional challenges in maintaining consistent personas and opinions.
Instructions that encourage debate in support of one's opinions rather than
collaboration increase the rate of inconstancy. Without addressing the factors
we identify, the full potential of multi-agent frameworks for producing more
culturally diverse AI outputs or more realistic simulations of group
decision-making may remain untapped.
| [
{
"created": "Mon, 6 May 2024 21:20:35 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2024 14:50:25 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Aug 2024 18:01:13 GMT",
"version": "v3"
}
] | 2024-08-16 | [
[
"Baltaji",
"Razan",
""
],
[
"Hemmatian",
"Babak",
""
],
[
"Varshney",
"Lav R.",
""
]
] |
2405.03920 | Rakesh M. Verma | Dainis Boumber, Rakesh M. Verma, Fatima Zahra Qachfar | A Roadmap for Multilingual, Multimodal Domain Independent Deception
Detection | 6 pages, 1 figure, shorter version in SIAM International Conference
on Data Mining (SDM) 2024 | Proc. SDM 2024, 396-399 | null | null | cs.CL cs.AI cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deception, a prevalent aspect of human communication, has undergone a
significant transformation in the digital age. With the globalization of online
interactions, individuals are communicating in multiple languages and mixing
languages on social media, with varied data becoming available in each language
and dialect. At the same time, the techniques for detecting deception are
similar across the board. Recent studies have shown the possibility of the
existence of universal linguistic cues to deception across domains within the
English language; however, the existence of such cues in other languages
remains unknown. Furthermore, the practical task of deception detection in
low-resource languages is not a well-studied problem due to the lack of labeled
data. Another dimension of deception is multimodality. For example, a picture
with an altered caption in fake news or disinformation may exist. This paper
calls for a comprehensive investigation into the complexities of deceptive
language across linguistic boundaries and modalities within the realm of
computer security and natural language processing and the possibility of using
multilingual transformer models and labeled data in various languages to
universally address the task of deception detection.
| [
{
"created": "Tue, 7 May 2024 00:38:34 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Boumber",
"Dainis",
""
],
[
"Verma",
"Rakesh M.",
""
],
[
"Qachfar",
"Fatima Zahra",
""
]
] |
2405.03924 | Zhanhao Zhao | Beng Chin Ooi, Shaofeng Cai, Gang Chen, Yanyan Shen, Kian-Lee Tan,
Yuncheng Wu, Xiaokui Xiao, Naili Xing, Cong Yue, Lingze Zeng, Meihui Zhang,
Zhanhao Zhao | NeurDB: An AI-powered Autonomous Data System | null | SCIENCE CHINA Information Sciences 67, 10 (2024) | 10.1007/s11432-024-4125-9 | null | cs.DB cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the wake of rapid advancements in artificial intelligence (AI), we stand
on the brink of a transformative leap in data systems. The imminent fusion of
AI and DB (AIxDB) promises a new generation of data systems, which will relieve
the burden on end-users across all industry sectors by featuring AI-enhanced
functionalities, such as personalized and automated in-database AI-powered
analytics, self-driving capabilities for improved system performance, etc. In
this paper, we explore the evolution of data systems with a focus on deepening
the fusion of AI and DB. We present NeurDB, an AI-powered autonomous data
system designed to fully embrace AI design in each major system component and
provide in-database AI-powered analytics. We outline the conceptual and
architectural overview of NeurDB, discuss its design choices and key
components, and report its current development and future plan.
| [
{
"created": "Tue, 7 May 2024 00:51:48 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jul 2024 08:48:45 GMT",
"version": "v2"
}
] | 2024-09-16 | [
[
"Ooi",
"Beng Chin",
""
],
[
"Cai",
"Shaofeng",
""
],
[
"Chen",
"Gang",
""
],
[
"Shen",
"Yanyan",
""
],
[
"Tan",
"Kian-Lee",
""
],
[
"Wu",
"Yuncheng",
""
],
[
"Xiao",
"Xiaokui",
""
],
[
"Xing",
"Naili",
""
],
[
"Yue",
"Cong",
""
],
[
"Zeng",
"Lingze",
""
],
[
"Zhang",
"Meihui",
""
],
[
"Zhao",
"Zhanhao",
""
]
] |
2405.03945 | Seungnyun Kim Mr | Seungnyun Kim, Jihoon Moon, Jinhong Kim, Yongjun Ahn, Donghoon Kim,
Sunwoo Kim, Kyuhong Shim, Byonghyo Shim | Role of Sensing and Computer Vision in 6G Wireless Communications | null | IEEE Wireless Communications, 2024 | 10.1109/MWC.016.2300526 | null | cs.CV cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, we have witnessed remarkable progress and the widespread adoption
of sensing technologies in autonomous driving, robotics, and the metaverse.
Considering the rapid advancement of computer vision (CV) technology to analyze
the sensing information, we anticipate a proliferation of wireless applications
exploiting the sensing and CV technologies in 6G. In this article, we provide a
holistic overview of the sensing and CV-aided wireless communications (SVWC)
framework for 6G. By analyzing the high-resolution sensing information through
the powerful CV techniques, SVWC can quickly and accurately understand the
wireless environments and then perform the wireless tasks. To demonstrate the
efficacy of SVWC, we design the whole process of SVWC including the sensing
dataset collection, DL model training, and execution of realistic wireless
tasks. From the numerical evaluations on 6G communication scenarios, we show
that SVWC achieves considerable performance gains over the conventional 5G
systems in terms of positioning accuracy, data rate, and access latency.
| [
{
"created": "Tue, 7 May 2024 02:10:30 GMT",
"version": "v1"
}
] | 2024-09-11 | [
[
"Kim",
"Seungnyun",
""
],
[
"Moon",
"Jihoon",
""
],
[
"Kim",
"Jinhong",
""
],
[
"Ahn",
"Yongjun",
""
],
[
"Kim",
"Donghoon",
""
],
[
"Kim",
"Sunwoo",
""
],
[
"Shim",
"Kyuhong",
""
],
[
"Shim",
"Byonghyo",
""
]
] |
2405.03952 | Zixing Zhang | Zhongren Dong, Zixing Zhang, Weixiang Xu, Jing Han, Jianjun Ou,
Bj\"orn W. Schuller | HAFFormer: A Hierarchical Attention-Free Framework for Alzheimer's
Disease Detection From Spontaneous Speech | null | publised at ICASSP 2024 | null | null | cs.SD cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically detecting Alzheimer's Disease (AD) from spontaneous speech
plays an important role in its early diagnosis. Recent approaches rely heavily
on Transformer architectures due to their efficiency in modelling long-range
context dependencies. However, the quadratic increase in computational
complexity associated with self-attention and the length of audio poses a
challenge when deploying such models on edge devices. In this context, we
construct a novel framework, namely Hierarchical Attention-Free Transformer
(HAFFormer), to better deal with long speech for AD detection. Specifically, we
employ an attention-free module of Multi-Scale Depthwise Convolution to replace
the self-attention and thus avoid the expensive computation, and a GELU-based
Gated Linear Unit to replace the feedforward layer, aiming to automatically
filter out the redundant information. Moreover, we design a hierarchical
structure to force it to learn a variety of information grains, from the frame
level to the dialogue level. By conducting extensive experiments on the
ADReSS-M dataset, the introduced HAFFormer can achieve competitive results
(82.6% accuracy) with other recent work, but with significant computational
complexity and model size reduction compared to the standard Transformer. This
shows the efficiency of HAFFormer in dealing with long audio for AD detection.
| [
{
"created": "Tue, 7 May 2024 02:19:16 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Dong",
"Zhongren",
""
],
[
"Zhang",
"Zixing",
""
],
[
"Xu",
"Weixiang",
""
],
[
"Han",
"Jing",
""
],
[
"Ou",
"Jianjun",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
2405.03955 | Yosuke Kaga | Yosuke Kaga, Yusei Suzuki, Kenta Takahashi | IPFed: Identity protected federated learning for user authentication | null | 2023 Asia Pacific Signal and Information Processing Association
Annual Summit and Conference (APSIPA ASC) | 10.1109/APSIPAASC58517.2023.10317108 | null | cs.CV cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the development of laws and regulations related to privacy preservation,
it has become difficult to collect personal data to perform machine learning.
In this context, federated learning, which is distributed learning without
sharing personal data, has been proposed. In this paper, we focus on federated
learning for user authentication. We show that it is difficult to achieve both
privacy preservation and high accuracy with existing methods. To address these
challenges, we propose IPFed which is privacy-preserving federated learning
using random projection for class embedding. Furthermore, we prove that IPFed
is capable of learning equivalent to the state-of-the-art method. Experiments
on face image datasets show that IPFed can protect the privacy of personal data
while maintaining the accuracy of the state-of-the-art method.
| [
{
"created": "Tue, 7 May 2024 02:29:41 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Kaga",
"Yosuke",
""
],
[
"Suzuki",
"Yusei",
""
],
[
"Takahashi",
"Kenta",
""
]
] |
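
A minimal NumPy sketch of the random-projection step described in the record above: a client-held Gaussian projection applied to class embeddings before sharing, which approximately preserves distances (Johnson-Lindenstrauss) so the learning signal remains usable. The dimensions and seed handling are assumptions, not the paper's protocol.

```python
import numpy as np

def project_class_embeddings(embeddings, out_dim, seed):
    """Apply a client-held random Gaussian projection to class
    embeddings before sharing them, so the raw embeddings are never
    revealed to the server."""
    rng = np.random.default_rng(seed)
    in_dim = embeddings.shape[1]
    P = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(in_dim, out_dim))
    return embeddings @ P

emb = np.random.randn(10, 512)              # 10 identities, 512-d embeddings
shared = project_class_embeddings(emb, 256, seed=42)
```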
2405.03960 | Zixing Zhang | Xupeng Zha, Huan Zhao, Zixing Zhang | ESIHGNN: Event-State Interactions Infused Heterogeneous Graph Neural
Network for Conversational Emotion Recognition | null | published at ICASSP 2024 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational Emotion Recognition (CER) aims to predict the emotion
expressed by an utterance (referred to as an ``event'') during a conversation.
Existing graph-based methods mainly focus on event interactions to comprehend
the conversational context, while overlooking the direct influence of the
speaker's emotional state on the events. In addition, real-time modeling of the
conversation is crucial for real-world applications but is rarely considered.
Toward this end, we propose a novel graph-based approach, namely Event-State
Interactions infused Heterogeneous Graph Neural Network (ESIHGNN), which
incorporates the speaker's emotional state and constructs a heterogeneous
event-state interaction graph to model the conversation. Specifically, a
heterogeneous directed acyclic graph neural network is employed to dynamically
update and enhance the representations of events and emotional states at each
turn, thereby improving conversational coherence and consistency. Furthermore,
to further improve the performance of CER, we enrich the graph's edges with
external knowledge. Experimental results on four publicly available CER
datasets show the superiority of our approach and the effectiveness of the
introduced heterogeneous event-state interaction graph.
| [
{
"created": "Tue, 7 May 2024 02:46:11 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Zha",
"Xupeng",
""
],
[
"Zhao",
"Huan",
""
],
[
"Zhang",
"Zixing",
""
]
] |
2405.03974 | Yukui Luo | Ziyu Liu, Tong Zhou, Yukui Luo, Xiaolin Xu | TBNet: A Neural Architectural Defense Framework Facilitating DNN Model
Protection in Trusted Execution Environments | null | DAC2024 | null | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trusted Execution Environments (TEEs) have become a promising solution to
secure DNN models on edge devices. However, the existing solutions either
provide inadequate protection or introduce large performance overhead. Taking
both security and performance into consideration, this paper presents TBNet, a
TEE-based defense framework that protects DNN models from a neural architectural
perspective. Specifically, TBNet generates a novel Two-Branch substitution
model, to respectively exploit (1) the computational resources in the untrusted
Rich Execution Environment (REE) for latency reduction and (2) the
physically-isolated TEE for model protection. Experimental results on a
Raspberry Pi across diverse DNN model architectures and datasets demonstrate
that TBNet achieves efficient model protection at a low cost.
| [
{
"created": "Tue, 7 May 2024 03:08:30 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Liu",
"Ziyu",
""
],
[
"Zhou",
"Tong",
""
],
[
"Luo",
"Yukui",
""
],
[
"Xu",
"Xiaolin",
""
]
] |
2405.04136 | Benjamin Wolff | Benjamin Wolff, Eva Seidlmayer and Konrad U. F\"orstner | Enriched BERT Embeddings for Scholarly Publication Classification | 8 pages, 2 figures, NSLP2024 conference | Natural Scientific Language Processing and Research Knowledge
Graphs (2024), LNAI 14770, 234-243 | 10.1007/978-3-031-65794-8_16 | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | With the rapid expansion of academic literature and the proliferation of
preprints, researchers face growing challenges in manually organizing and
labeling large volumes of articles. The NSLP 2024 FoRC Shared Task I addresses
this challenge organized as a competition. The goal is to develop a classifier
capable of predicting one of 123 predefined classes from the Open Research
Knowledge Graph (ORKG) taxonomy of research fields for a given article. This
paper presents our results. Initially, we enrich the dataset (containing
English scholarly articles sourced from ORKG and arXiv), then leverage
different pre-trained language Models (PLMs), specifically BERT, and explore
their efficacy in transfer learning for this downstream task. Our experiments
encompass feature-based and fine-tuned transfer learning approaches using
diverse PLMs, optimized for scientific tasks, including SciBERT, SciNCL, and
SPECTER2. We conduct hyperparameter tuning and investigate the impact of data
augmentation from bibliographic databases such as OpenAlex, Semantic Scholar,
and Crossref. Our results demonstrate that fine-tuning pre-trained models
substantially enhances classification performance, with SPECTER2 emerging as
the most accurate model. Moreover, enriching the dataset with additional
metadata improves classification outcomes significantly, especially when
integrating information from S2AG, OpenAlex and Crossref. Our best-performing
approach achieves a weighted F1-score of 0.7415. Overall, our study contributes
to the advancement of reliable automated systems for scholarly publication
categorization, offering a potential solution to the laborious manual curation
process, thereby facilitating researchers in efficiently locating relevant
resources.
| [
{
"created": "Tue, 7 May 2024 09:05:20 GMT",
"version": "v1"
}
] | 2024-08-16 | [
[
"Wolff",
"Benjamin",
""
],
[
"Seidlmayer",
"Eva",
""
],
[
"Förstner",
"Konrad U.",
""
]
] |
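
A minimal fine-tuning step matching the setup described in the record above, using Hugging Face transformers. The `allenai/specter2_base` checkpoint name, the sample text, and the hyperparameters are assumptions; the 123-class head follows the ORKG taxonomy mentioned in the abstract.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "allenai/specter2_base"   # assumed checkpoint; any BERT-style PLM works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=123)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["Title. Abstract enriched with OpenAlex/Crossref metadata ..."]
labels = torch.tensor([7])       # illustrative ORKG class index

batch = tok(texts, truncation=True, padding=True, return_tensors="pt")
out = model(**batch, labels=labels)   # cross-entropy over the 123 classes
out.loss.backward()
opt.step()
```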
2405.04163 | Soumyadeep Roy | Gunjan Balde, Soumyadeep Roy, Mainack Mondal, Niloy Ganguly | MEDVOC: Vocabulary Adaptation for Fine-tuning Pre-trained Language
Models on Medical Text Summarization | 13 pages, Accepted to the 33rd International Joint Conference on
Artificial Intelligence, IJCAI 2024 (Main) Track | Proceedings of the Thirty-Third International Joint Conference on
Artificial Intelligence Main Track (IJCAI 2024). Pages 6180-6188 | 10.24963/ijcai.2024/683 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a dynamic vocabulary adaptation strategy, MEDVOC, for
fine-tuning pre-trained language models (PLMs) like BertSumAbs, BART, and
PEGASUS for improved medical text summarization. In contrast to existing domain
adaptation approaches in summarization, MEDVOC treats vocabulary as an
optimizable parameter and optimizes the PLM vocabulary based on fragment score
conditioned only on the downstream task's reference summaries. Unlike previous
works on vocabulary adaptation (limited only to classification tasks),
optimizing vocabulary based on summarization tasks requires an extremely costly
intermediate fine-tuning step on large summarization datasets. To that end, our
novel fragment score-based hyperparameter search very significantly reduces
this fine-tuning time -- from 450 days to less than 2 days on average.
Furthermore, while previous works on vocabulary adaptation are often primarily
tied to single PLMs, MEDVOC is designed to be deployable across multiple PLMs
(with varying model vocabulary sizes, pre-training objectives, and model sizes)
-- bridging the limited vocabulary overlap between the biomedical literature
domain and PLMs. MEDVOC outperforms baselines by 15.74% in terms of Rouge-L in
zero-shot setting and shows gains of 17.29% in high Out-Of-Vocabulary (OOV)
concentrations. Our human evaluation shows MEDVOC generates more faithful
medical summaries (88% compared to 59% in baselines). We make the codebase
publicly available at https://github.com/gb-kgp/MEDVOC.
| [
{
"created": "Tue, 7 May 2024 10:00:00 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Aug 2024 12:43:13 GMT",
"version": "v2"
}
] | 2024-08-20 | [
[
"Balde",
"Gunjan",
""
],
[
"Roy",
"Soumyadeep",
""
],
[
"Mondal",
"Mainack",
""
],
[
"Ganguly",
"Niloy",
""
]
] |
2405.04241 | Cristina Carmona-Duarte | Alejandro Garcia-Sosa, Jose J. Quintana-Hernandez, Miguel A. Ferrer
Ballester, Cristina Carmona-Duarte | Exploring the Potential of Robot-Collected Data for Training Gesture
Classification Systems | null | IGS2023, 2023, 116-120 | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sensors and Artificial Intelligence (AI) have revolutionized the analysis of
human movement, but the scarcity of specific samples presents a significant
challenge in training intelligent systems, particularly in the context of
diagnosing neurodegenerative diseases. This study investigates the feasibility
of utilizing robot-collected data to train classification systems traditionally
trained with human-collected data. As a proof of concept, we recorded a
database of numeric characters using an ABB robotic arm and an Apple Watch. We
compare the classification performance of the trained systems using both
human-recorded and robot-recorded data. Our primary objective is to determine
the potential for accurate identification of human numeric characters wearing a
smartwatch using robotic movement as training data. The findings of this study
offer valuable insights into the feasibility of using robot-collected data for
training classification systems. This research holds broad implications across
various domains that require reliable identification, particularly in scenarios
where access to human-specific data is limited.
| [
{
"created": "Tue, 7 May 2024 11:58:34 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Garcia-Sosa",
"Alejandro",
""
],
[
"Quintana-Hernandez",
"Jose J.",
""
],
[
"Ballester",
"Miguel A. Ferrer",
""
],
[
"Carmona-Duarte",
"Cristina",
""
]
] |
2405.04561 | Felipe A. Moreno | Felipe Moreno-Vera | Inferring Discussion Topics about Exploitation of Vulnerabilities from
Underground Hacking Forums | 6 pages | 2023 14th International Conference on Information and
Communication Technology Convergence (ICTC) | 10.1109/ICTC58733.2023.10393244 | null | cs.CR cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | The increasing sophistication of cyber threats necessitates proactive
measures to identify vulnerabilities and potential exploits. Underground
hacking forums serve as breeding grounds for the exchange of hacking techniques
and discussions related to exploitation. In this research, we propose an
innovative approach using topic modeling to analyze and uncover key themes in
vulnerabilities discussed within these forums. The objective of our study is to
develop a machine learning-based model that can automatically detect and
classify vulnerability-related discussions in underground hacking forums. By
monitoring and analyzing the content of these forums, we aim to identify
emerging vulnerabilities, exploit techniques, and potential threat actors. To
achieve this, we collect a large-scale dataset consisting of posts and threads
from multiple underground forums. We preprocess and clean the data to ensure
accuracy and reliability. Leveraging topic modeling techniques, specifically
Latent Dirichlet Allocation (LDA), we uncover latent topics and their
associated keywords within the dataset. This enables us to identify recurring
themes and prevalent discussions related to vulnerabilities, exploits, and
potential targets.
| [
{
"created": "Tue, 7 May 2024 14:54:32 GMT",
"version": "v1"
}
] | 2024-05-09 | [
[
"Moreno-Vera",
"Felipe",
""
]
] |
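
A short scikit-learn sketch of the LDA pipeline described in the record above; the placeholder corpus, topic count, and vectorizer settings are illustrative, not the study's configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# placeholder corpus; in practice, the scraped forum posts and threads
posts = [
    "buffer overflow exploit poc shellcode",
    "sql injection dump credentials database",
    "phishing kit bypass 2fa session token",
    "ransomware builder crypter evasion",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# print the top keywords associated with each latent topic
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```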
2405.04589 | Xianlei Long | Xianlei Long, Hui Zhao, Chao Chen, Fuqiang Gu, Qingyi Gu | A Novel Wide-Area Multiobject Detection System with High-Probability
Region Searching | Accepted by ICRA 2024 | 2024 IEEE International Conference on Robotics and Automation
(ICRA) | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | In recent years, wide-area visual surveillance systems have been widely
applied in various industrial and transportation scenarios. These systems,
however, face significant challenges when implementing multi-object detection
due to conflicts arising from the need for high-resolution imaging, efficient
object searching, and accurate localization. To address these challenges, this
paper presents a hybrid system that incorporates a wide-angle camera, a
high-speed search camera, and a galvano-mirror. In this system, the wide-angle
camera offers panoramic images as prior information, which helps the search
camera capture detailed images of the targeted objects. This integrated
approach enhances the overall efficiency and effectiveness of wide-area visual
detection systems. Specifically, in this study, we introduce a wide-angle
camera-based method to generate a panoramic probability map (PPM) for
estimating high-probability regions of target object presence. Then, we propose
a probability searching module that uses the PPM-generated prior information to
dynamically adjust the sampling range and refine target coordinates based on
uncertainty variance computed by the object detector. Finally, the integration
of PPM and the probability searching module yields an efficient hybrid vision
system capable of achieving 120 fps multi-object search and detection.
Extensive experiments are conducted to verify the system's effectiveness and
robustness.
| [
{
"created": "Tue, 7 May 2024 18:06:40 GMT",
"version": "v1"
}
] | 2024-05-09 | [
[
"Long",
"Xianlei",
""
],
[
"Zhao",
"Hui",
""
],
[
"Chen",
"Chao",
""
],
[
"Gu",
"Fuqiang",
""
],
[
"Gu",
"Qingyi",
""
]
] |
2405.04595 | Naveed Sultan | Naveed Sultan, Amir Hajian and Supavadee Aramvith | An Advanced Features Extraction Module for Remote Sensing Image
Super-Resolution | Preprint of paper from The 21st International Conference on
Electrical Engineering/Electronics, Computer, Telecommunications and
Information Technology or ECTI-CON 2024, Khon Kaen, Thailand | ECTI-CON 2024, Khon Kaen Thailand | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, convolutional neural networks (CNNs) have achieved
remarkable advancement in the field of remote sensing image super-resolution
due to the complexity and variability of textures and structures in remote
sensing images (RSIs), which often repeat in the same images but differ across
others. Current deep learning-based super-resolution models focus less on
high-frequency features, which leads to suboptimal performance in capturing
contours, textures, and spatial information. State-of-the-art CNN-based methods
now focus on the feature extraction of RSIs using attention mechanisms.
However, these methods are still incapable of effectively identifying and
utilizing key content attention signals in RSIs. To solve this problem, we
proposed an advanced feature extraction module called Channel and Spatial
Attention Feature Extraction (CSA-FE) for effectively extracting the features
by using the channel and spatial attention incorporated with the standard
vision transformer (ViT). The proposed method was trained on the UCMerced dataset
at scales 2, 3, and 4. The experimental results show that our proposed method
helps the model focus on the specific channels and spatial locations containing
high-frequency information so that the model can focus on relevant features and
suppress irrelevant ones, which enhances the quality of super-resolved images.
Our model achieved superior performance compared to various existing models.
| [
{
"created": "Tue, 7 May 2024 18:15:51 GMT",
"version": "v1"
}
] | 2024-05-09 | [
[
"Sultan",
"Naveed",
""
],
[
"Hajian",
"Amir",
""
],
[
"Aramvith",
"Supavadee",
""
]
] |
2405.05161 | Evie Malaia | Julia Krebs, Evie Malaia, Ronnie B. Wilbur, Isabella Fessl, Hans-Peter
Wiesinger, Hermann Schwameder, Dietmar Roehm | Motion Capture Analysis of Verb and Adjective Types in Austrian Sign
Language | 10 pages, 7 figures | Proc of the International Conference on Computational Linguistics
(2024) | null | null | cs.CL q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Across a number of sign languages, temporal and spatial characteristics of
dominant hand articulation are used to express semantic and grammatical
features. In this study of Austrian Sign Language (\"Osterreichische
Geb\"ardensprache, or \"OGS), motion capture data of four Deaf signers is used
to quantitatively characterize the kinematic parameters of sign production in
verbs and adjectives. We investigate (1) the difference in production between
verbs involving a natural endpoint (telic verbs; e.g. arrive) and verbs lacking
an endpoint (atelic verbs; e.g. analyze), and (2) adjective signs in
intensified vs. non-intensified (plain) forms. Motion capture data analysis
using linear-mixed effects models (LME) indicates that both the endpoint
marking in verbs, as well as marking of intensification in adjectives, are
expressed by movement modulation in \"OGS. While the semantic distinction
between verb types (telic/atelic) is marked by higher peak velocity and shorter
duration for telic signs compared to atelic ones, the grammatical distinction
(intensification) in adjectives is expressed by longer duration for intensified
compared to non-intensified adjectives. The observed individual differences of
signers might be interpreted as personal signing style.
| [
{
"created": "Wed, 8 May 2024 15:54:12 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Sep 2024 17:24:52 GMT",
"version": "v2"
}
] | 2024-09-16 | [
[
"Krebs",
"Julia",
""
],
[
"Malaia",
"Evie",
""
],
[
"Wilbur",
"Ronnie B.",
""
],
[
"Fessl",
"Isabella",
""
],
[
"Wiesinger",
"Hans-Peter",
""
],
[
"Schwameder",
"Hermann",
""
],
[
"Roehm",
"Dietmar",
""
]
] |
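
A minimal statsmodels sketch of the linear mixed-effects comparison described in the record above, with a random intercept per signer; the column names and toy values are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# toy frame standing in for the motion-capture measurements
df = pd.DataFrame({
    "peak_velocity": [1.9, 2.3, 1.1, 1.4, 2.1, 2.4, 1.0, 1.2],
    "telic":         [1, 1, 0, 0, 1, 1, 0, 0],
    "signer":        ["s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2"],
})

# fixed effect of verb type (telic/atelic), random intercept per signer
model = smf.mixedlm("peak_velocity ~ telic", df, groups=df["signer"])
print(model.fit().summary())
```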
2405.05173 | Huaiyuan Xu | Huaiyuan Xu, Junliang Chen, Shiyu Meng, Yi Wang, Lap-Pui Chau | A Survey on Occupancy Perception for Autonomous Driving: The Information
Fusion Perspective | null | Information Fusion, 2024 | 10.1016/j.inffus.2024.102671 | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D occupancy perception technology aims to observe and understand dense 3D
environments for autonomous vehicles. Owing to its comprehensive perception
capability, this technology is emerging as a trend in autonomous driving
perception systems, and is attracting significant attention from both industry
and academia. Similar to traditional bird's-eye view (BEV) perception, 3D
occupancy perception has the nature of multi-source input and the necessity for
information fusion. However, the difference is that it captures vertical
structures that are ignored by 2D BEV. In this survey, we review the most
recent works on 3D occupancy perception, and provide in-depth analyses of
methodologies with various input modalities. Specifically, we summarize general
network pipelines, highlight information fusion techniques, and discuss
effective network training. We evaluate and analyze the occupancy perception
performance of the state-of-the-art on the most popular datasets. Furthermore,
challenges and future research directions are discussed. We hope this paper
will inspire the community and encourage more research work on 3D occupancy
perception. A comprehensive list of studies in this survey is publicly
available in an active repository that continuously collects the latest work:
https://github.com/HuaiyuanXu/3D-Occupancy-Perception.
| [
{
"created": "Wed, 8 May 2024 16:10:46 GMT",
"version": "v1"
},
{
"created": "Sat, 18 May 2024 16:31:09 GMT",
"version": "v2"
},
{
"created": "Sun, 21 Jul 2024 12:01:28 GMT",
"version": "v3"
}
] | 2024-09-17 | [
[
"Xu",
"Huaiyuan",
""
],
[
"Chen",
"Junliang",
""
],
[
"Meng",
"Shiyu",
""
],
[
"Wang",
"Yi",
""
],
[
"Chau",
"Lap-Pui",
""
]
] |
2405.05551 | Roy Rudolf Huizen | Florentina Tatrin Kurniati, Daniel HF Manongga, Irwan Sembiring,
Sutarto Wijono, Roy Rudolf Huizen | The object detection model uses combined extraction with KNN and RF
classification | null | IJEECS, pp 436-445, Vol 35, No 1 July 2024;
https://ijeecs.iaescore.com/index.php/IJEECS/article/view/35888 | 10.11591/ijeecs.v35.i1.pp436-445 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object detection plays an important role in various fields. Developing
detection models for 2D objects that experience rotation and texture variations
is a challenge. In this research, the initial stage of the proposed model
integrates the gray-level co-occurrence matrix (GLCM) and local binary patterns
(LBP) texture feature extraction to obtain feature vectors. The next stage is
classifying features using k-nearest neighbors (KNN) and random forest (RF), as
well as voting ensemble (VE). System testing used a dataset of 4,437 2D images;
KNN achieved an accuracy of 92.7% and an F1-score of 92.5%, while RF
performance was lower. Although GLCM features improve performance for both
algorithms, KNN is more consistent. The VE approach provides the best
performance, with an accuracy of 93.9% and an F1-score of 93.8%, demonstrating
the effectiveness of the ensemble technique in increasing object detection
accuracy. This study contributes to the field of object detection with a new
approach that combines GLCM and LBP feature vectors with VE for classification.
| [
{
"created": "Thu, 9 May 2024 05:21:42 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Kurniati",
"Florentina Tatrin",
""
],
[
"Manongga",
"Daniel HF",
""
],
[
"Sembiring",
"Irwan",
""
],
[
"Wijono",
"Sutarto",
""
],
[
"Huizen",
"Roy Rudolf",
""
]
] |
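
A compact sketch of the feature pipeline described in the record above, assuming 8-bit grayscale inputs and the newer scikit-image spelling (graycomatrix/graycoprops). The distances, angles, LBP settings, and classifier hyperparameters are illustrative choices, not the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

def extract_features(img):
    """Concatenate GLCM statistics and an LBP histogram for one
    8-bit grayscale image (2D uint8 array)."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).ravel()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    # uniform LBP with 8 neighbors yields codes 0..9, hence 10 bins
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate(glcm_feats + [hist])

def build_ensemble():
    """Soft-voting ensemble over KNN and RF, mirroring the VE setup."""
    return VotingClassifier(
        estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("rf", RandomForestClassifier(n_estimators=200,
                                                  random_state=0))],
        voting="soft")
```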
2405.05588 | Ngoc-Bao Nguyen | Sy-Tuyen Ho, Koh Jun Hao, Keshigeyan Chandrasegaran, Ngoc-Bao Nguyen,
Ngai-Man Cheung | Model Inversion Robustness: Can Transfer Learning Help? | null | CVPR 2024 | null | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model Inversion (MI) attacks aim to reconstruct private training data by
abusing access to machine learning models. Contemporary MI attacks have
achieved impressive attack performance, posing serious threats to privacy.
Meanwhile, all existing MI defense methods rely on regularization that is in
direct conflict with the training objective, resulting in noticeable
degradation in model utility. In this work, we take a different perspective,
and propose a novel and simple Transfer Learning-based Defense against Model
Inversion (TL-DMI) to render MI-robust models. Particularly, by leveraging TL,
we limit the number of layers encoding sensitive information from private
training dataset, thereby degrading the performance of MI attack. We conduct an
analysis using Fisher Information to justify our method. Our defense is
remarkably simple to implement. Without bells and whistles, we show in
extensive experiments that TL-DMI achieves state-of-the-art (SOTA) MI
robustness. Our code, pre-trained models, demo and inverted data are available
at: https://hosytuyen.github.io/projects/TL-DMI
| [
{
"created": "Thu, 9 May 2024 07:24:28 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Ho",
"Sy-Tuyen",
""
],
[
"Hao",
"Koh Jun",
""
],
[
"Chandrasegaran",
"Keshigeyan",
""
],
[
"Nguyen",
"Ngoc-Bao",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] |
2405.05614 | Xinran Liu | Xinran Liu, Lin Qi, Yuxuan Song and Qi Wen | Depth Awakens: A Depth-perceptual Attention Fusion Network for RGB-D
Camouflaged Object Detection | null | Image and Vision Computing, 143:104924, 2024 | 10.1016/j.imavis.2024.104924 | null | cs.CV cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camouflaged object detection (COD) presents a persistent challenge in
accurately identifying objects that seamlessly blend into their surroundings.
However, most existing COD models overlook the fact that visual systems operate
within a genuine 3D environment. The scene depth inherent in a single 2D image
provides rich spatial clues that can assist in the detection of camouflaged
objects. Therefore, we propose a novel depth-perception attention fusion
network that leverages the depth map as an auxiliary input to enhance the
network's ability to perceive 3D information, which is typically challenging
for the human eye to discern from 2D images. The network uses a trident-branch
encoder to extract chromatic and depth information and their communications.
Recognizing that certain regions of a depth map may not effectively highlight
the camouflaged object, we introduce a depth-weighted cross-attention fusion
module to dynamically adjust the fusion weights on depth and RGB feature maps.
To keep the model simple without compromising effectiveness, we design a
straightforward feature aggregation decoder that adaptively fuses the enhanced
aggregated features. Experiments demonstrate the significant superiority of our
proposed method over other state-of-the-art methods, which further validates the
contribution of depth information in camouflaged object detection. The code
will be available at https://github.com/xinran-liu00/DAF-Net.
| [
{
"created": "Thu, 9 May 2024 08:17:43 GMT",
"version": "v1"
}
] | 2024-05-12 | [
[
"Liua",
"Xinran",
""
],
[
"Qia",
"Lin",
""
],
[
"Songa",
"Yuxuan",
""
],
[
"Wen",
"Qi",
""
]
] |
2405.05695 | Yuan Gao | Yuan Gao, Weizhong Zhang, Wenhan Luo, Lin Ma, Jin-Gang Yu, Gui-Song
Xia, Jiayi Ma | Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference
Cost | Accepted to ICLR 2024 | International Conference on Learning Representations (ICLR), 2024 | null | null | cs.LG cs.AI cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | We aim at exploiting additional auxiliary labels from an independent
(auxiliary) task to boost the primary task performance which we focus on, while
preserving a single task inference cost of the primary task. While most
existing auxiliary learning methods are optimization-based relying on loss
weights/gradients manipulation, our method is architecture-based with a
flexible asymmetric structure for the primary and auxiliary tasks, which
produces different networks for training and inference. Specifically, starting
from two single task networks/branches (each representing a task), we propose a
novel method with evolving networks where only primary-to-auxiliary links exist
as the cross-task connections after convergence. These connections can be
removed during the primary task inference, resulting in a single-task inference
cost. We achieve this by formulating a Neural Architecture Search (NAS)
problem, where we initialize bi-directional connections in the search space and
guide the NAS optimization converging to an architecture with only the
single-side primary-to-auxiliary connections. Moreover, our method can be
incorporated with optimization-based auxiliary learning approaches. Extensive
experiments with six tasks on NYU v2, CityScapes, and Taskonomy datasets using
VGG, ResNet, and ViT backbones validate the promising performance. The codes
are available at https://github.com/ethanygao/Aux-NAS.
| [
{
"created": "Thu, 9 May 2024 11:50:19 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Gao",
"Yuan",
""
],
[
"Zhang",
"Weizhong",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Ma",
"Lin",
""
],
[
"Yu",
"Jin-Gang",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Ma",
"Jiayi",
""
]
] |
2405.05836 | Atefeh Mahdavi | Atefeh Mahdavi, Marco Carvalho | Informed Decision-Making through Advancements in Open Set Recognition
and Unknown Sample Detection | Accepted for proceedings of the 57th Hawaii International Conference
on System Sciences: 10 pages, 6 figures, 3-6 January 2024, Honolulu, United
States | Atefeh, M., & Marco, C. (2024). "Informed Decision-Making through
Advancements in Open Set Recognition and Unknown Sample Detection."
Proceedings of the 57th Hawaii International Conference on System Sciences,
1090-1999 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning-based techniques open up many opportunities and improvements
to derive deeper and more practical insights from data that can help businesses
make informed decisions. However, the majority of these techniques focus on the
conventional closed-set scenario, in which the label spaces for the training
and test sets are identical. Open set recognition (OSR) aims to bring
classification tasks in a situation that is more like reality, which focuses on
classifying the known classes as well as handling unknown classes effectively.
In such an open-set problem the gathered samples in the training set cannot
encompass all the classes and the system needs to identify unknown samples at
test time. On the other hand, building an accurate and comprehensive model in a
real dynamic environment presents a number of obstacles, because it is
prohibitively expensive to train for every possible example of unknown items,
and the model may fail when tested in testbeds. This study provides an
algorithm exploring a new representation of feature space to improve
classification in OSR tasks. The efficacy and efficiency of business processes
and decision-making can be improved by integrating OSR, which offers more
precise and insightful predictions of outcomes. We demonstrate the performance
of the proposed method on three established datasets. The results indicate that
the proposed model outperforms the baseline methods in accuracy and F1-score.
| [
{
"created": "Thu, 9 May 2024 15:15:34 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Mahdavi",
"Atefeh",
""
],
[
"Carvalho",
"Marco",
""
]
] |
2405.05886 | Marcella Astrid | Marcella Astrid, Muhammad Zaigham Zaheer, Djamila Aouada, Seung-Ik Lee | Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies | SharedIt link: https://rdcu.be/dGOrh | Neural Computing and Applications, pp.1-17 (2024) | 10.1007/s00521-024-09790-z | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Due to the rare occurrence of anomalous events, a typical approach to anomaly
detection is to train an autoencoder (AE) with normal data only so that it
learns the patterns or representations of the normal training data. At test
time, the trained AE is expected to well reconstruct normal but to poorly
reconstruct anomalous data. However, contrary to the expectation, anomalous
data is often well reconstructed as well. In order to further separate the
reconstruction quality between normal and anomalous data, we propose creating
pseudo anomalies from learned adaptive noise by exploiting the aforementioned
weakness of AE, i.e., reconstructing anomalies too well. The generated noise is
added to the normal data to create pseudo anomalies. Extensive experiments on
Ped2, Avenue, ShanghaiTech, CIFAR-10, and KDDCUP datasets demonstrate the
effectiveness and generic applicability of our approach in improving the
discriminative capability of AEs for anomaly detection.
| [
{
"created": "Thu, 9 May 2024 16:22:24 GMT",
"version": "v1"
},
{
"created": "Fri, 17 May 2024 12:16:35 GMT",
"version": "v2"
}
] | 2024-05-20 | [
[
"Astrid",
"Marcella",
""
],
[
"Zaheer",
"Muhammad Zaigham",
""
],
[
"Aouada",
"Djamila",
""
],
[
"Lee",
"Seung-Ik",
""
]
] |
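
A simplified PyTorch sketch of the idea described in the record above: learned noise turns normal inputs into pseudo anomalies, and the autoencoder is pushed to map them back to the clean input. The generator architecture, noise scale, and loss weighting are assumptions, not the paper's adaptive scheme.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Learns additive noise that turns normal inputs into pseudo
    anomalies (the paper's adaptive scheme is simplified here)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, x):
        return x + 0.1 * self.net(x)

def training_step(ae, noise_gen, x_normal, lam=0.5):
    """The AE should reconstruct normal data well, and should map a
    pseudo anomaly back to its clean source rather than reproduce it,
    widening the reconstruction gap between normal and anomalous data."""
    mse = nn.functional.mse_loss
    x_pseudo = noise_gen(x_normal)
    return mse(ae(x_normal), x_normal) + lam * mse(ae(x_pseudo), x_normal)
```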
2405.05906 | Ahmed Bensaoud | Ahmed Bensaoud and Jugal Kalita | Deep Multi-Task Learning for Malware Image Classification | null | Journal of Information Security and Applications, Volume 64, 2022,
Page 103057 | 10.1016/j.jisa.2021.103057 | null | cs.CR cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Malicious software is a pernicious global problem. A novel multi-task
learning framework is proposed in this paper for malware image classification
for accurate and fast malware detection. We generate bitmap (BMP) and (PNG)
images from malware features, which we feed to a deep learning classifier. Our
state-of-the-art multi-task learning approach has been tested on a new dataset,
for which we have collected approximately 100,000 benign and malicious PE, APK,
Mach-o, and ELF examples. Experiments with seven tasks tested with 4 activation
functions, ReLU, LeakyReLU, PReLU, and ELU separately demonstrate that PReLU
gives the highest accuracy of more than 99.87% on all tasks. Our model can
effectively detect a variety of obfuscation methods like packing, encryption,
and instruction overlapping, strengthening the claims of our model, in
addition to matching state-of-the-art methods in terms of accuracy.
| [
{
"created": "Thu, 9 May 2024 17:02:06 GMT",
"version": "v1"
}
] | 2024-05-12 | [
[
"Bensaoud",
"Ahmed",
""
],
[
"Kalita",
"Jugal",
""
]
] |
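
A small sketch of the byte-to-image rendering commonly used for malware imaging, consistent with the record above; the fixed row width and zero padding are assumptions, not the paper's exact feature construction.

```python
import numpy as np
from PIL import Image

def bytes_to_image(path, width=256):
    """Render a binary file as a grayscale image: one byte per pixel,
    rows of fixed width, zero-padded at the end."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = -(-data.size // width)                  # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:data.size] = data
    return Image.fromarray(padded.reshape(rows, width), mode="L")

# bytes_to_image("sample.exe").save("sample.png")  # paths are illustrative
```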
2405.06164 | James Neve | Kawasaki Fumitake, Shota Kishi, James Neve | Skeet: Towards a Lightweight Serverless Framework Supporting Modern
AI-Driven App Development | null | Fumitake, K.; Kishi, S. and Neve, J. (2024). In Proceedings of the
19th International Conference on Evaluation of Novel Approaches to Software
Engineering - ENASE | 10.5220/0012681000003687 | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | The field of web and mobile software frameworks is relatively mature, with a
large variety of tools in different languages that facilitate traditional app
development where data in a relational database is displayed and modified. Our
position is that many current frameworks became popular during single server
deployment of MVC architecture apps, and do not facilitate modern aspects of
app development such as cloud computing and the incorporation of emerging
technologies such as AI. We present a novel framework which accomplishes these
purposes, Skeet, which was recently released to general use, alongside an
initial evaluation. Skeet provides an app structure that reflects current
trends in architecture, and tool suites that allow developers with minimal
knowledge of AI internals to easily incorporate such technologies into their
apps and deploy them.
| [
{
"created": "Fri, 10 May 2024 01:00:20 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Fumitake",
"Kawasaki",
""
],
[
"Kishi",
"Shota",
""
],
[
"Neve",
"James",
""
]
] |
2405.06263 | Hongyu Zang | Ruixiang Sun, Hongyu Zang, Xin Li, Riashat Islam | Learning Latent Dynamic Robust Representations for World Models | null | ICML 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Model-Based Reinforcement Learning (MBRL) promises to encapsulate
an agent's knowledge about the underlying dynamics of the environment, enabling
learning a world model as a useful planner. However, top MBRL agents such as
Dreamer often struggle with visual pixel-based inputs in the presence of
exogenous or irrelevant noise in the observation space, due to failure to
capture task-specific features while filtering out irrelevant spatio-temporal
details. To tackle this problem, we apply a spatio-temporal masking strategy
and a bisimulation principle, combined with latent reconstruction, to capture
endogenous task-specific aspects of the environment for world models,
effectively eliminating non-essential information. Joint training of
representations, dynamics, and policy often leads to instabilities. To further
address this issue, we develop a Hybrid Recurrent State-Space Model (HRSSM)
structure, enhancing state representation robustness for effective policy
learning. Our empirical evaluation demonstrates significant performance
improvements over existing methods in a range of visually complex control tasks
such as Maniskill \cite{gu2023maniskill2} with exogenous distractors from the
Matterport environment. Our code is available at
https://github.com/bit1029public/HRSSM.
| [
{
"created": "Fri, 10 May 2024 06:28:42 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 09:40:02 GMT",
"version": "v2"
}
] | 2024-05-31 | [
[
"Sun",
"Ruixiang",
""
],
[
"Zang",
"Hongyu",
""
],
[
"Li",
"Xin",
""
],
[
"Islam",
"Riashat",
""
]
] |
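The abstract above (2405.06263) combines a bisimulation principle with latent
reconstruction. A hedged sketch of a generic bisimulation-style representation
loss (in the spirit of methods such as DBC) follows; HRSSM's actual objective
and architecture may differ, and all tensor shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

def bisimulation_loss(z, r, z_next, gamma=0.99):
    """Generic bisimulation-style loss: the distance between two states'
    latents should track their reward gap plus the discounted distance of
    their successor latents. Pairs are formed by permuting the batch."""
    perm = torch.randperm(z.size(0))
    dist = torch.norm(z - z[perm], p=1, dim=-1)
    r_gap = (r - r[perm]).abs().squeeze(-1)
    next_dist = torch.norm(z_next - z_next[perm], p=1, dim=-1).detach()
    return F.mse_loss(dist, r_gap + gamma * next_dist)

# Toy call: 32 latent states of dim 64 with scalar rewards.
loss = bisimulation_loss(torch.randn(32, 64), torch.randn(32, 1), torch.randn(32, 64))
```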
2405.06264 | Yunqian Fan | Yunqian Fan, Xiuying Wei, Ruihao Gong, Yuqing Ma, Xiangguo Zhang, Qi
Zhang, Xianglong Liu | Selective Focus: Investigating Semantics Sensitivity in Post-training
Quantization for Lane Detection | Accepted by AAAI-24 | AAAI 2024, 38, 11936-11943 | 10.1609/aaai.v38i11.29080 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lane detection (LD) plays a crucial role in enhancing the L2+ capabilities of
autonomous driving, capturing widespread attention. Post-training Quantization
(PTQ) can facilitate the practical application of LD models, enabling fast
inference with a limited memory footprint and no labeled data. However, prior
PTQ methods do not consider the complex LD outputs that contain physical
semantics, such as offsets, locations, etc., and thus cannot be directly
applied to LD models. In this paper, we pioneeringly investigate semantic
sensitivity to post-processing for lane detection with a novel Lane Distortion
Score. Moreover, we identify two main factors impacting the LD performance
after quantization, namely intra-head sensitivity and inter-head sensitivity,
where a small quantization error in specific semantics can cause significant
lane distortion. Thus, we propose a Selective Focus framework deployed with
Semantic Guided Focus and Sensitivity Aware Selection modules, to incorporate
post-processing information into PTQ reconstruction. Based on the observed
intra-head sensitivity, Semantic Guided Focus is introduced to prioritize
foreground-related semantics using a practical proxy. For inter-head
sensitivity, we present Sensitivity Aware Selection, efficiently recognizing
influential prediction heads and refining the optimization objectives at
runtime. Extensive experiments have been done on a wide variety of models
including keypoint-, anchor-, curve-, and segmentation-based ones. Our method
produces quantized models in minutes on a single GPU and can achieve 6.4% F1
Score improvement on the CULane dataset.
| [
{
"created": "Fri, 10 May 2024 06:29:15 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Fan",
"Yunqian",
""
],
[
"Wei",
"Xiuying",
""
],
[
"Gong",
"Ruihao",
""
],
[
"Ma",
"Yuqing",
""
],
[
"Zhang",
"Xiangguo",
""
],
[
"Zhang",
"Qi",
""
],
[
"Liu",
"Xianglong",
""
]
] |
2405.06266 | Baichao Long | Jianli Xiao and Baichao Long | A Multi-Channel Spatial-Temporal Transformer Model for Traffic Flow
Forecasting | null | Xiao J, Long B. A Multi-Channel Spatial-Temporal Transformer Model
for Traffic Flow Forecasting[J]. Information Sciences, 2024: 120648 | 10.1016/j.ins.2024.120648 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic flow forecasting is a crucial task in transportation management and
planning. The main challenges for traffic flow forecasting are that (1)
prediction accuracy decreases as the prediction horizon lengthens;
(2) the predicted results greatly rely on the extraction of temporal and
spatial dependencies from the road networks. To overcome the challenges
mentioned above, we propose a multi-channel spatial-temporal transformer model
for traffic flow forecasting, which improves the accuracy of the prediction by
fusing results from different channels of traffic data. Our approach leverages
a graph convolutional network to extract spatial features from each channel while
using a transformer-based architecture to capture temporal dependencies across
channels. We introduce an adaptive adjacency matrix to overcome limitations in
feature extraction from fixed topological structures. Experimental results on
six real-world datasets demonstrate that introducing a multi-channel mechanism
into the temporal model enhances performance and our proposed model outperforms
state-of-the-art models in terms of accuracy.
| [
{
"created": "Fri, 10 May 2024 06:37:07 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Xiao",
"Jianli",
""
],
[
"Long",
"Baichao",
""
]
] |
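The record above (2405.06266) introduces an adaptive adjacency matrix to avoid
relying on a fixed road topology. One widely used construction (popularized by
Graph WaveNet) learns two node-embedding tables and derives the adjacency from
them; the sketch below shows that general idea, not necessarily the paper's
exact formulation, and the node count is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Learned adjacency A = softmax(ReLU(E1 @ E2^T)); no fixed graph needed."""
    def __init__(self, num_nodes, emb_dim=10):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        # Row-normalized, non-negative adjacency learned end-to-end.
        return F.softmax(F.relu(self.e1 @ self.e2.t()), dim=1)

adj = AdaptiveAdjacency(num_nodes=207)()  # e.g., 207 sensors, as in METR-LA
```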
2405.06286 | Jens Ziehn | Leon Eisemann, Mirjam Fehling-Kaschek, Silke Forkert, Andreas Forster,
Henrik Gommel, Susanne Guenther, Stephan Hammer, David Hermann, Marvin Klemp,
Benjamin Lickert, Florian Luettner, Robin Moss, Nicole Neis, Maria Pohle,
Dominik Schreiber, Cathrina Sowa, Daniel Stadler, Janina Stompe, Michael
Strobelt, David Unger, Jens Ziehn | A Joint Approach Towards Data-Driven Virtual Testing for Automated
Driving: The AVEAS Project | 6 pages, 5 figures, 2 tables | Proceedings of the 7th International Symposium on Future Active
Safety Technology toward zero traffic accidents (JSAE FAST-zero '23), 2023 | null | null | cs.RO cs.CV cs.CY cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With growing complexity and responsibility of automated driving functions in
road traffic and growing scope of their operational design domains, there is
increasing demand for covering significant parts of development, validation,
and verification via virtual environments and simulation models.
If, however, simulations are meant not only to augment real-world
experiments, but to replace them, quantitative approaches are required that
measure to what degree and under which preconditions simulation models
adequately represent reality, and thus allow their usage for virtual testing of
driving functions. Especially in research and development areas related to the
safety impacts of the "open world", there is a significant shortage of
real-world data to parametrize and/or validate simulations - especially with
respect to the behavior of human traffic participants, whom automated vehicles
will meet in mixed traffic.
This paper presents the intermediate results of the German AVEAS research
project (www.aveas.org) which aims at developing methods and metrics for the
harmonized, systematic, and scalable acquisition of real-world data for virtual
verification and validation of advanced driver assistance systems and automated
driving, and establishing an online database following the FAIR principles.
| [
{
"created": "Fri, 10 May 2024 07:36:03 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Eisemann",
"Leon",
""
],
[
"Fehling-Kaschek",
"Mirjam",
""
],
[
"Forkert",
"Silke",
""
],
[
"Forster",
"Andreas",
""
],
[
"Gommel",
"Henrik",
""
],
[
"Guenther",
"Susanne",
""
],
[
"Hammer",
"Stephan",
""
],
[
"Hermann",
"David",
""
],
[
"Klemp",
"Marvin",
""
],
[
"Lickert",
"Benjamin",
""
],
[
"Luettner",
"Florian",
""
],
[
"Moss",
"Robin",
""
],
[
"Neis",
"Nicole",
""
],
[
"Pohle",
"Maria",
""
],
[
"Schreiber",
"Dominik",
""
],
[
"Sowa",
"Cathrina",
""
],
[
"Stadler",
"Daniel",
""
],
[
"Stompe",
"Janina",
""
],
[
"Strobelt",
"Michael",
""
],
[
"Unger",
"David",
""
],
[
"Ziehn",
"Jens",
""
]
] |
2405.06321 | Xin Du PhD | Xin Du, Kumiko Tanaka-Ishii | Correlation Dimension of Natural Language in a Statistical Manifold | Published at Physical Review Research | Physical Review Research, 6(2), L022028 (2024) | 10.1103/PhysRevResearch.6.L022028 | null | cs.CL cond-mat.stat-mech cs.AI | http://creativecommons.org/licenses/by/4.0/ | The correlation dimension of natural language is measured by applying the
Grassberger-Procaccia algorithm to high-dimensional sequences produced by a
large-scale language model. This method, previously studied only in a Euclidean
space, is reformulated in a statistical manifold via the Fisher-Rao distance.
Language exhibits a multifractal structure, with global self-similarity and a universal
dimension around 6.5, which is smaller than those of simple discrete random
sequences and larger than that of a Barab\'asi-Albert process. Long memory is
the key to producing self-similarity. Our method is applicable to any
probabilistic model of real-world discrete sequences, and we show an
application to music data.
| [
{
"created": "Fri, 10 May 2024 08:48:03 GMT",
"version": "v1"
},
{
"created": "Wed, 15 May 2024 07:46:01 GMT",
"version": "v2"
}
] | 2024-05-16 | [
[
"Du",
"Xin",
""
],
[
"Tanaka-Ishii",
"Kumiko",
""
]
] |
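The record above (2405.06321) applies the Grassberger-Procaccia algorithm. A
minimal numerical sketch follows using the Euclidean distance; the paper
instead works in a statistical manifold with the Fisher-Rao distance, and the
toy Gaussian data is purely illustrative.

```python
import numpy as np

def correlation_integral(X, r):
    """C(r): fraction of distinct point pairs closer than r."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)  # each pair counted once
    return np.mean(d[iu] < r)

# Correlation dimension ~ slope of log C(r) vs log r in the scaling region.
X = np.random.randn(500, 3)
rs = np.logspace(-1, 0.5, 10)
cs = [correlation_integral(X, r) for r in rs]
slope = np.polyfit(np.log(rs), np.log(cs), 1)[0]
print(f"estimated correlation dimension: {slope:.2f}")
```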
2405.06598 | Dongwei Sun | Dongwei Sun, Yajie Bao, Junmin Liu, Xiangyong Cao | A Lightweight Sparse Focus Transformer for Remote Sensing Image Change
Captioning | null | IEEE Journal of Selected Topics in Applied Earth Observations and
Remote Sensing 2024 | 10.1109/JSTARS.2024.3471625 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing image change captioning (RSICC) aims to automatically generate
sentences that describe content differences in remote sensing bitemporal
images. Recently, attention-based transformers have become a prevalent idea for
capturing the features of global change. However, existing transformer-based
RSICC methods face challenges, e.g., large parameter counts and high computational
complexity caused by the self-attention operation in the transformer encoder
component. To alleviate these issues, this paper proposes a Sparse Focus
Transformer (SFT) for the RSICC task. Specifically, the SFT network consists of
three main components, i.e., a high-level feature extractor based on a
convolutional neural network (CNN), a sparse focus attention mechanism-based
transformer encoder network designed to locate and capture changing regions in
dual-temporal images, and a description decoder that embeds images and words to
generate sentences for captioning differences. The proposed SFT network can
reduce the parameter number and computational complexity by incorporating a
sparse attention mechanism within the transformer encoder network. Experimental
results on various datasets demonstrate that even with a reduction of over 90\%
in parameters and computational complexity for the transformer encoder, our
proposed network can still obtain competitive performance compared to other
state-of-the-art RSICC methods. The code is available at
\href{https://github.com/sundongwei/SFT_chag2cap}{Lite\_Chag2cap}.
| [
{
"created": "Fri, 10 May 2024 16:56:53 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Oct 2024 09:50:58 GMT",
"version": "v2"
}
] | 2024-10-14 | [
[
"Sun",
"Dongwei",
""
],
[
"Bao",
"Yajie",
""
],
[
"Liu",
"Junmin",
""
],
[
"Cao",
"Xiangyong",
""
]
] |
2405.06668 | Silvia Garc\'ia-M\'endez | Francisco de Arriba-P\'erez, Silvia Garc\'ia-M\'endez, F\'atima Leal,
Benedita Malheiro and Juan Carlos Burguillo | Exposing and Explaining Fake News On-the-Fly | null | Mach Learn (2024) | 10.1007/s10994-024-06527-w | null | cs.CL cs.AI cs.SI | http://creativecommons.org/licenses/by/4.0/ | Social media platforms enable the rapid dissemination and consumption of
information. However, users instantly consume such content regardless of the
reliability of the shared data. Consequently, the latter crowdsourcing model is
exposed to manipulation. This work contributes an explainable and online
classification method to recognize fake news in real-time. The proposed method
combines both unsupervised and supervised Machine Learning approaches with
online created lexica. The profiling is built using creator-, content- and
context-based features using Natural Language Processing techniques. The
explainable classification mechanism displays in a dashboard the features
selected for classification and the prediction confidence. The performance of
the proposed solution has been validated with real data sets from Twitter and
the results attain 80% accuracy and macro F-measure. This proposal is the
first to jointly provide data stream processing, profiling, classification and
explainability. Ultimately, the proposed early detection, isolation and
explanation of fake news contribute to increase the quality and trustworthiness
of social media contents.
| [
{
"created": "Fri, 3 May 2024 14:49:04 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Sep 2024 10:07:46 GMT",
"version": "v2"
}
] | 2024-09-06 | [
[
"de Arriba-Pérez",
"Francisco",
""
],
[
"García-Méndez",
"Silvia",
""
],
[
"Leal",
"Fátima",
""
],
[
"Malheiro",
"Benedita",
""
],
[
"Burguillo",
"Juan Carlos",
""
]
] |
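The record above (2405.06668) classifies fake news online, as a stream. The
sketch below shows only the generic online-learning skeleton (hashing features
plus `partial_fit`); the paper's actual system additionally combines
unsupervised learning, online-created lexica, and an explainability dashboard,
none of which are reproduced here.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hashing avoids a fixed vocabulary; partial_fit learns from each batch.
vec = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier(loss="log_loss")  # "log" in older scikit-learn versions

def process_batch(texts, labels=None):
    """Train when labels arrive; otherwise score incoming posts."""
    X = vec.transform(texts)
    if labels is not None:
        clf.partial_fit(X, labels, classes=[0, 1])  # 0 = real, 1 = fake
        return None
    return clf.predict_proba(X)[:, 1]  # confidence that each post is fake

process_batch(["shocking miracle cure found", "council approves budget"], [1, 0])
print(process_batch(["aliens endorse candidate"]))
```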
2405.06684 | Jia-Rui Lin | Jin Han, Zhe Zheng, Xin-Zheng Lu, Ke-Yin Chen, Jia-Rui Lin | QuakeBERT: Accurate Classification of Social Media Texts for Rapid
Earthquake Impact Assessment | null | International Journal of Disaster Risk Reduction, 2024 | 10.1016/j.ijdrr.2024.104574 | null | cs.CL cs.LG cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Social media aids disaster response but suffers from noise, hindering
accurate impact assessment and decision making for resilient cities, a problem
few studies have considered. To address it, this study proposes the first
domain-specific LLM model and an integrated method for rapid earthquake impact
assessment. First, a few categories are introduced to classify and filter
microblogs considering their relationship to the physical and social impacts of
earthquakes, and a dataset comprising 7282 earthquake-related microblogs from
twenty earthquakes in different locations is developed as well. Then, with a
systematic analysis of various influential factors, QuakeBERT, a
domain-specific large language model (LLM), is developed and fine-tuned for
accurate classification and filtering of microblogs. Meanwhile, an integrated
method integrating public opinion trend analysis, sentiment analysis, and
keyword-based physical impact quantification is introduced to assess both the
physical and social impacts of earthquakes based on social media texts.
Experiments show that data diversity and data volume dominate the performance
of QuakeBERT and increase the macro average F1 score by 27%, while the best
classification model QuakeBERT outperforms the CNN- or RNN-based models by
improving the macro average F1 score from 60.87% to 84.33%. Finally, the
proposed approach is applied to assess two earthquakes with the same magnitude
and focal depth. Results show that the proposed approach can effectively
enhance the impact assessment process by accurate detection of noisy
microblogs, which enables effective post-disaster emergency responses to create
more resilient cities.
| [
{
"created": "Mon, 6 May 2024 10:52:21 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Han",
"Jin",
""
],
[
"Zheng",
"Zhe",
""
],
[
"Lu",
"Xin-Zheng",
""
],
[
"Chen",
"Ke-Yin",
""
],
[
"Lin",
"Jia-Rui",
""
]
] |
2405.06772 | Urjitkumar Patel | Urjitkumar Patel, Fang-Chun Yeh, Chinmay Gondhalekar | CANAL -- Cyber Activity News Alerting Language Model: Empirical Approach
vs. Expensive LLM | Published in 2024 IEEE 3rd International Conference on AI in
Cybersecurity (ICAIC), Conference Date: 07-09 February 2024 | 2024 IEEE 3rd International Conference on AI in Cybersecurity
(ICAIC), Houston, TX, USA, 2024, pp. 1-12 | 10.1109/ICAIC60265.2024.10433839 | null | cs.CR cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In today's digital landscape, where cyber attacks have become the norm, the
detection of cyber attacks and threats is critically imperative across diverse
domains. Our research presents a new empirical framework for cyber threat
modeling, adept at parsing and categorizing cyber-related information from news
articles, enhancing real-time vigilance for market stakeholders. At the core of
this framework is a fine-tuned BERT model, which we call CANAL - Cyber Activity
News Alerting Language Model, tailored for cyber categorization using a novel
silver labeling approach powered by Random Forest. We benchmark CANAL against
larger, costlier LLMs, including GPT-4, LLaMA, and Zephyr, highlighting their
zero- to few-shot learning performance in cyber news classification. CANAL demonstrates
superior performance by outperforming all other LLM counterparts in both
accuracy and cost-effectiveness. Furthermore, we introduce the Cyber Signal
Discovery module, a strategic component designed to efficiently detect emerging
cyber signals from news articles. Collectively, CANAL and Cyber Signal
Discovery module equip our framework to provide a robust and cost-effective
solution for businesses that require agile responses to cyber intelligence.
| [
{
"created": "Fri, 10 May 2024 18:57:35 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Patel",
"Urjitkumar",
""
],
[
"Yeh",
"Fang-Chun",
""
],
[
"Gondhalekar",
"Chinmay",
""
]
] |
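The record above (2405.06772) builds its training data with a silver-labeling
approach powered by Random Forest. A hedged sketch of generic silver labeling
follows: a Random Forest fit on a small gold-labeled set labels a larger
unlabeled corpus, keeping only high-confidence predictions. The toy texts, the
0.9 threshold, and the TF-IDF features are assumptions, not the paper's
settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny gold set (illustrative): 1 = cyber-related, 0 = other.
gold_texts = ["data breach at major bank", "quarterly earnings beat estimates"]
gold_labels = [1, 0]
unlabeled_texts = ["ransomware hits hospital network", "ceo steps down"]

vec = TfidfVectorizer(max_features=20000)
rf = RandomForestClassifier(n_estimators=300)
rf.fit(vec.fit_transform(gold_texts), gold_labels)

proba = rf.predict_proba(vec.transform(unlabeled_texts))
confident = proba.max(axis=1) >= 0.9          # keep only confident labels
silver_labels = proba.argmax(axis=1)[confident]
silver_texts = [t for t, c in zip(unlabeled_texts, confident) if c]
# silver_texts / silver_labels would then augment the BERT fine-tuning set.
```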
2405.06802 | Raul Salles De Padua | Raul Salles de Padua and Imran Qureshi | Summarizing Radiology Reports Findings into Impressions | This version reverts to the original preprint, following the advice
from the Artificial Intelligence in Health editorial office. The published
version is peer-reviewed and available in the journal (see external DOI). The
preprint remains unchanged to maintain version transparency, as noted in the
further disclosure section of the published article | Artificial Intelligence in Health 3846. 2024 | 10.36922/aih.3846 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Patient hand-off and triage are two fundamental problems in health care.
Often doctors must painstakingly summarize complex findings to efficiently
communicate with specialists and quickly make decisions on which patients have
the most urgent cases. In pursuit of these challenges, we present (1) a model
with state-of-art radiology report summarization performance using (2) a novel
method for augmenting medical data, and (3) an analysis of the model
limitations and radiology knowledge gain. We also provide a data processing
pipeline for future models developed on the MIMIC CXR dataset. Our best
performing model was a fine-tuned BERT-to-BERT encoder-decoder with 58.75/100
ROUGE-L F1, which outperformed specialized checkpoints with more sophisticated
attention mechanisms. We investigate these aspects in this work.
| [
{
"created": "Fri, 10 May 2024 20:29:25 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Sep 2024 09:52:20 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Sep 2024 06:13:06 GMT",
"version": "v3"
}
] | 2024-09-30 | [
[
"de Padua",
"Raul Salles",
""
],
[
"Qureshi",
"Imran",
""
]
] |
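The record above (2405.06802) fine-tunes a BERT-to-BERT encoder-decoder for
summarizing findings into impressions. Hugging Face's `EncoderDecoderModel`
supports this warm-start pattern directly; the sketch below wires it up for
generation. The example findings sentence is invented, and without fine-tuning
the generated text will be meaningless.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased")
# BERT has no decoder-start/pad defaults for generation, so set them:
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

findings = "Heart size is normal. Lungs are clear without focal consolidation."
inputs = tok(findings, return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```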
2405.06919 | Awais Hameed Khan | Awais Hameed Khan, Hiruni Kegalle, Rhea D'Silva, Ned Watt, Daniel
Whelan-Shamy, Lida Ghahremanlou and Liam Magee | Automating Thematic Analysis: How LLMs Analyse Controversial Topics | 18 pages, 6 figures | Microsoft Journal for Applied Research, Vol 21 (2024), pp 69 - 87 | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) are promising analytical tools. They can augment
human epistemic, cognitive and reasoning abilities, and support 'sensemaking',
making sense of a complex environment or subject by analysing large volumes of
data with a sensitivity to context and nuance absent in earlier text processing
systems. This paper presents a pilot experiment that explores how LLMs can
support thematic analysis of controversial topics. We compare how human
researchers and two LLMs, GPT-4 and Llama 2, categorise excerpts from media
coverage of the controversial Australian Robodebt scandal. Our findings
highlight intriguing overlaps and variances in thematic categorisation between
human and machine agents, and suggest where LLMs can be effective in supporting
forms of discourse and thematic analysis. We argue LLMs should be used to
augment, and not replace human interpretation, and we add further
methodological insights and reflections to existing research on the application
of automation to qualitative research methods. We also introduce a novel
card-based design toolkit, for both researchers and practitioners to further
interrogate LLMs as analytical tools.
| [
{
"created": "Sat, 11 May 2024 05:28:25 GMT",
"version": "v1"
}
] | 2024-09-19 | [
[
"Khan",
"Awais Hameed",
""
],
[
"Kegalle",
"Hiruni",
""
],
[
"D'Silva",
"Rhea",
""
],
[
"Watt",
"Ned",
""
],
[
"Whelan-Shamy",
"Daniel",
""
],
[
"Ghahremanlou",
"Lida",
""
],
[
"Magee",
"Liam",
""
]
] |
2405.07097 | Katsiaryna Haitsiukevich | Katsiaryna Haitsiukevich, Onur Poyraz, Pekka Marttinen, Alexander Ilin | Diffusion models as probabilistic neural operators for recovering
unobserved states of dynamical systems | Preprint submitted to IEEE MLSP 2024 | IEEE International Workshop on Machine Learning for Signal
Processing (MLSP) 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper explores the efficacy of diffusion-based generative models as
neural operators for partial differential equations (PDEs). Neural operators
are neural networks that learn a mapping from the parameter space to the
solution space of PDEs from data, and they can also solve the inverse problem
of estimating the parameter from the solution. Diffusion models excel in many
domains, but their potential as neural operators has not been thoroughly
explored. In this work, we show that diffusion-based generative models exhibit
many properties favourable for neural operators, and they can effectively
generate the solution of a PDE conditionally on the parameter or recover the
unobserved parts of the system. We propose to train a single model adaptable to
multiple tasks, by alternating between the tasks during training. In our
experiments with multiple realistic dynamical systems, diffusion models
outperform other neural operators. Furthermore, we demonstrate how the
probabilistic diffusion model can elegantly deal with systems which are only
partially identifiable, by producing samples corresponding to the different
possible solutions.
| [
{
"created": "Sat, 11 May 2024 21:23:55 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Haitsiukevich",
"Katsiaryna",
""
],
[
"Poyraz",
"Onur",
""
],
[
"Marttinen",
"Pekka",
""
],
[
"Ilin",
"Alexander",
""
]
] |
2405.07099 | Avi Shmidman | Avi Shmidman, Cheyn Shmuel Shmidman, Dan Bareket, Moshe Koppel, Reut
Tsarfaty | Do Pretrained Contextual Language Models Distinguish between Hebrew
Homograph Analyses? | null | In Proceedings of EACL 2023, 849-864 (2023) | 10.18653/v1/2023.eacl-main.59 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Semitic morphologically-rich languages (MRLs) are characterized by extreme
word ambiguity. Because most vowels are omitted in standard texts, many of the
words are homographs with multiple possible analyses, each with a different
pronunciation and different morphosyntactic properties. This ambiguity goes
beyond word-sense disambiguation (WSD), and may include token segmentation into
multiple word units. Previous research on MRLs claimed that standardly trained
pre-trained language models (PLMs) based on word-pieces may not sufficiently
capture the internal structure of such tokens in order to distinguish between
these analyses. Taking Hebrew as a case study, we investigate the extent to
which Hebrew homographs can be disambiguated and analyzed using PLMs. We
evaluate all existing models for contextualized Hebrew embeddings on a novel
Hebrew homograph challenge sets that we deliver. Our empirical results
demonstrate that contemporary Hebrew contextualized embeddings outperform
non-contextualized embeddings; and that they are most effective for
disambiguating segmentation and morphosyntactic features, less so regarding
pure word-sense disambiguation. We show that these embeddings are more
effective when the number of word-piece splits is limited, and they are more
effective for 2-way and 3-way ambiguities than for 4-way ambiguity. We show
that the embeddings are equally effective for homographs of both balanced and
skewed distributions, whether calculated as masked or unmasked tokens. Finally,
we show that these embeddings are as effective for homograph disambiguation
with extensive supervised training as with a few-shot setup.
| [
{
"created": "Sat, 11 May 2024 21:50:56 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Shmidman",
"Avi",
""
],
[
"Shmidman",
"Cheyn Shmuel",
""
],
[
"Bareket",
"Dan",
""
],
[
"Koppel",
"Moshe",
""
],
[
"Tsarfaty",
"Reut",
""
]
] |
2405.07327 | Carter Blair | Carter Blair, Ben Armstrong, Kate Larson | Liquid Ensemble Selection for Continual Learning | Accepted at Canadian AI Conference 2024 | Proceedings of the Canadian Conference on Artificial Intelligence.
https://caiac.pubpub.org/pub/7gegu91h (2024) | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning aims to enable machine learning models to continually
learn from a shifting data distribution without forgetting what has already
been learned. Such shifting distributions can be broken into disjoint subsets
of related examples; by training each member of an ensemble on a different
subset it is possible for the ensemble as a whole to achieve much higher
accuracy with less forgetting than a naive model. We address the problem of
selecting which models within an ensemble should learn on any given data, and
which should predict. By drawing on work from delegative voting we develop an
algorithm for using delegation to dynamically select which models in an
ensemble are active. We explore a variety of delegation methods and performance
metrics, ultimately finding that delegation is able to provide a significant
performance boost over naive learning in the face of distribution shifts.
| [
{
"created": "Sun, 12 May 2024 16:33:48 GMT",
"version": "v1"
}
] | 2024-07-29 | [
[
"Blair",
"Carter",
""
],
[
"Armstrong",
"Ben",
""
],
[
"Larson",
"Kate",
""
]
] |
2405.07500 | Yuzhang Xie | Yuzhang Xie, Jiaying Lu, Joyce Ho, Fadi Nahab, Xiao Hu, Carl Yang | PromptLink: Leveraging Large Language Models for Cross-Source Biomedical
Concept Linking | null | Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval (Short-Paper Track), 2024 | 10.1145/3626772.3657904 | null | cs.IR cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linking (aligning) biomedical concepts across diverse data sources enables
various integrative analyses, but it is challenging due to the discrepancies in
concept naming conventions. Various strategies have been developed to overcome
this challenge, such as those based on string-matching rules, manually crafted
thesauri, and machine learning models. However, these methods are constrained
by limited prior biomedical knowledge and can hardly generalize beyond the
limited amounts of rules, thesauri, or training samples. Recently, large
language models (LLMs) have exhibited impressive results in diverse biomedical
NLP tasks due to their unprecedentedly rich prior knowledge and strong
zero-shot prediction abilities. However, LLMs suffer from issues including high
costs, limited context length, and unreliable predictions. In this research, we
propose PromptLink, a novel biomedical concept linking framework that leverages
LLMs. It first employs a biomedical-specialized pre-trained language model to
generate candidate concepts that can fit in the LLM context windows. Then it
utilizes an LLM to link concepts through two-stage prompts, where the
first-stage prompt aims to elicit the biomedical prior knowledge from the LLM
for the concept linking task and the second-stage prompt enforces the LLM to
reflect on its own predictions to further enhance their reliability. Empirical
results on the concept linking task between two EHR datasets and an external
biomedical KG demonstrate the effectiveness of PromptLink. Furthermore,
PromptLink is a generic framework without reliance on additional prior
knowledge, context, or training data, making it well-suited for concept linking
across various types of data sources. The source code is available at
https://github.com/constantjxyz/PromptLink.
| [
{
"created": "Mon, 13 May 2024 06:36:30 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Xie",
"Yuzhang",
""
],
[
"Lu",
"Jiaying",
""
],
[
"Ho",
"Joyce",
""
],
[
"Nahab",
"Fadi",
""
],
[
"Hu",
"Xiao",
""
],
[
"Yang",
"Carl",
""
]
] |
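The record above (2405.07500) first shortlists candidate concepts with a
pre-trained encoder so they fit the LLM context window, then links them with
two-stage prompts. The sketch below approximates that pipeline with a
general-purpose sentence encoder (the paper uses a biomedical-specialized PLM)
and illustrative prompt templates; the model name, concept list, and prompt
wording are all assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in encoder; PromptLink itself uses a biomedical-specialized PLM.
enc = SentenceTransformer("all-MiniLM-L6-v2")

def shortlist(query, concepts, k=3):
    """Rank concepts by cosine similarity to the query mention."""
    q = enc.encode([query], normalize_embeddings=True)[0]
    c = enc.encode(concepts, normalize_embeddings=True)
    top = np.argsort(-(c @ q))[:k]
    return [concepts[i] for i in top]

candidates = shortlist("heart attack",
                       ["myocardial infarction", "migraine", "cardiac arrest"])
# Two-stage prompting (templates are illustrative, not the paper's):
stage1 = f"Which candidate denotes the same biomedical concept as 'heart attack'? {candidates}"
stage2 = "You answered: {answer}. Reflect on this choice and confirm or correct it."
```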
2405.07544 | Leon Eisemann | Leon Eisemann and Johannes Maucher | Automatic Odometry-Less OpenDRIVE Generation From Sparse Point Clouds | 8 pages, 4 figures, 3 algorithms, 2 tables | 2023 IEEE 26th International Conference on Intelligent
Transportation Systems (ITSC) | 10.1109/ITSC57777.2023.10421842 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | High-resolution road representations are a key factor for the success of
(highly) automated driving functions. These representations, for example,
high-definition (HD) maps, contain accurate information on a multitude of
factors, among others: road geometry, lane information, and traffic signs.
Through the growing complexity and functionality of automated driving
functions, also the requirements on testing and evaluation grow continuously.
This leads to an increasing interest in virtual test drives for evaluation
purposes. As roads play a crucial role in traffic flow, accurate real-world
representations are needed, especially when deriving realistic driving behavior
data. This paper proposes a novel approach to generate realistic road
representations based solely on point cloud information, independent of the
LiDAR sensor, mounting position, and without the need for odometry data,
multi-sensor fusion, machine learning, or highly-accurate calibration. As the
primary use case is simulation, we use the OpenDRIVE format for evaluation.
| [
{
"created": "Mon, 13 May 2024 08:26:24 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Eisemann",
"Leon",
""
],
[
"Maucher",
"Johannes",
""
]
] |
2405.07749 | Franz Kevin Stehle | Franz Kevin Stehle, Wainer Vandelli, Giuseppe Avolio, Felix Zahn,
Holger Fr\"oning | DeepHYDRA: Resource-Efficient Time-Series Anomaly Detection in
Dynamically-Configured Systems | null | Proceedings of the 38th ACM International Conference on
Supercomputing (ICS '24), June 4--7, 2024, Kyoto, Japan | 10.1145/3650200.3656637 | null | cs.LG cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection in distributed systems such as High-Performance Computing
(HPC) clusters is vital for early fault detection, performance optimisation,
security monitoring, reliability in general but also operational insights. Deep
Neural Networks have seen successful use in detecting long-term anomalies in
multidimensional data, originating for instance from industrial or medical
systems, or weather prediction. A downside of such methods is that they require
a static input size, or lose data through cropping, sampling, or other
dimensionality reduction methods, making deployment on systems with variability
on monitored data channels, such as computing clusters difficult. To address
these problems, we present DeepHYDRA (Deep Hybrid DBSCAN/Reduction-Based
Anomaly Detection) which combines DBSCAN and learning-based anomaly detection.
DBSCAN clustering is used to find point anomalies in time-series data,
mitigating the risk of missing outliers through loss of information when
reducing input data to a fixed number of channels. A deep learning-based
time-series anomaly detection method is then applied to the reduced data in
order to identify long-term outliers. This hybrid approach reduces the chances
of missing anomalies that might be made indistinguishable from normal data by
the reduction process, and likewise enables the algorithm to be scalable and
tolerate partial system failures while retaining its detection capabilities.
Using a subset of the well-known SMD dataset family, a modified variant of the
Eclipse dataset, as well as an in-house dataset with a large variability in
active data channels, made publicly available with this work, we furthermore
analyse computational intensity, memory footprint, and activation counts.
DeepHYDRA is shown to reliably detect different types of anomalies in both
large and complex datasets.
| [
{
"created": "Mon, 13 May 2024 13:47:15 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Stehle",
"Franz Kevin",
""
],
[
"Vandelli",
"Wainer",
""
],
[
"Avolio",
"Giuseppe",
""
],
[
"Zahn",
"Felix",
""
],
[
"Fröning",
"Holger",
""
]
] |
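The record above (2405.07749) combines DBSCAN point-anomaly detection with a
learned long-term detector applied after channel reduction. Below is a strongly
simplified sketch of that hybrid idea: DBSCAN flags outlying readings before
the variable-width channels are reduced to fixed-width statistics. The window
size, eps, and the mean/max/min reduction are illustrative choices, not
DeepHYDRA's actual configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def point_anomalies(window):
    """Flag point anomalies: DBSCAN on the pooled readings marks sparse
    values as noise (-1). Simplification: one global 1-D clustering instead
    of a per-timestep treatment."""
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(window.reshape(-1, 1))
    return labels.reshape(window.shape) == -1

x = np.random.randn(128, 300)   # 128 timesteps, variable channel count
flags = point_anomalies(x)
# Reduce variable channels to a fixed-width summary for the learned detector.
reduced = np.stack([x.mean(axis=1), x.max(axis=1), x.min(axis=1)], axis=1)
print(flags.sum(), reduced.shape)  # reduced then feeds a long-term model
```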
2405.07778 | Karahan Sar{\i}ta\c{s} | Karahan Sar{\i}ta\c{s}, Cahid Arda \"Oz and Tunga G\"ung\"or | A Comprehensive Analysis of Static Word Embeddings for Turkish | null | Expert Systems with Applications Volume 252, Part A, 15 October
2024, 124123 | 10.1016/j.eswa.2024.124123 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Word embeddings are fixed-length, dense and distributed word representations
that are used in natural language processing (NLP) applications. There are
basically two types of word embedding models which are non-contextual (static)
models and contextual models. The former method generates a single embedding
for a word regardless of its context, while the latter method produces distinct
embeddings for a word based on the specific contexts in which it appears. There
are plenty of works that compare contextual and non-contextual embedding models
within their respective groups in different languages. However, the number of
studies that compare the models in these two groups with each other is very few
and there is no such study in Turkish. This process necessitates converting
contextual embeddings into static embeddings. In this paper, we compare and
evaluate the performance of several contextual and non-contextual models in
both intrinsic and extrinsic evaluation settings for Turkish. We make a
fine-grained comparison by analyzing the syntactic and semantic capabilities of
the models separately. The results of the analyses provide insights about the
suitability of different embedding models in different types of NLP tasks. We
also build a Turkish word embedding repository comprising the embedding models
used in this work, which may serve as a valuable resource for researchers and
practitioners in the field of Turkish NLP. We make the word embeddings,
scripts, and evaluation datasets publicly available.
| [
{
"created": "Mon, 13 May 2024 14:23:37 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Sarıtaş",
"Karahan",
""
],
[
"Öz",
"Cahid Arda",
""
],
[
"Güngör",
"Tunga",
""
]
] |
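The record above (2405.07778) requires converting contextual embeddings into
static ones for comparison. A common recipe averages a word's contextual
vectors over many occurrences; the sketch below uses the public BERTurk
checkpoint and naive first-subword matching, both illustrative assumptions
rather than the paper's protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased").eval()

def static_embedding(word, contexts):
    """Average a word's contextual vectors over sentences -> one static vector."""
    first_piece = tok(word, add_special_tokens=False).input_ids[0]
    vecs = []
    for sent in contexts:
        enc = tok(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        # Naive matching on the word's first subword piece only.
        for i, t in enumerate(enc.input_ids[0].tolist()):
            if t == first_piece:
                vecs.append(hidden[i])
    if not vecs:
        raise ValueError("word not found in any context")
    return torch.stack(vecs).mean(0)
```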
2405.07842 | Utsav Akhaury | Utsav Akhaury, Pascale Jablonka, Jean-Luc Starck, Fr\'ed\'eric Courbin | Ground-based image deconvolution with Swin Transformer UNet | 11 pages, 14 figures | A&A 688, A6 (2024) | 10.1051/0004-6361/202449495 | null | astro-ph.IM cs.CV | http://creativecommons.org/licenses/by/4.0/ | As ground-based all-sky astronomical surveys will gather millions of images
in the coming years, a critical requirement emerges for the development of fast
deconvolution algorithms capable of efficiently improving the spatial
resolution of these images. By successfully recovering clean and
high-resolution images from these surveys, the objective is to deepen the
understanding of galaxy formation and evolution through accurate photometric
measurements. We introduce a two-step deconvolution framework using a Swin
Transformer architecture. Our study reveals that the deep learning-based
solution introduces a bias, constraining the scope of scientific analysis. To
address this limitation, we propose a novel third step relying on the active
coefficients in the sparsity wavelet framework. We conducted a performance
comparison between our deep learning-based method and Firedec, a classical
deconvolution algorithm, based on an analysis of a subset of the EDisCS cluster
samples. We demonstrate the advantage of our method in terms of resolution
recovery, generalisation to different noise properties, and computational
efficiency. The analysis of this cluster sample not only allowed us to assess
the efficiency of our method, but it also enabled us to quantify the number of
clumps within these galaxies in relation to their disc colour. This robust
technique that we propose holds promise for identifying structures in the
distant universe through ground-based images.
| [
{
"created": "Mon, 13 May 2024 15:30:41 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 22:51:32 GMT",
"version": "v2"
}
] | 2024-07-31 | [
[
"Akhaury",
"Utsav",
""
],
[
"Jablonka",
"Pascale",
""
],
[
"Starck",
"Jean-Luc",
""
],
[
"Courbin",
"Frédéric",
""
]
] |
2405.08154 | Winnie Street | Winnie Street | LLM Theory of Mind and Alignment: Opportunities and Risks | null | Proceedings of Workshop on Theory of Mind in Human-AI Interaction
at CHI 2024 (ToMinHAI at CHI 2024) | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) are transforming human-computer interaction and
conceptions of artificial intelligence (AI) with their impressive capacities
for conversing and reasoning in natural language. There is growing interest in
whether LLMs have theory of mind (ToM); the ability to reason about the mental
and emotional states of others that is core to human social intelligence. As
LLMs are integrated into the fabric of our personal, professional and social
lives and given greater agency to make decisions with real-world consequences,
there is a critical need to understand how they can be aligned with human
values. ToM seems to be a promising direction of inquiry in this regard.
Following the literature on the role and impacts of human ToM, this paper
identifies key areas in which LLM ToM will show up in human:LLM interactions at
individual and group levels, and what opportunities and risks for alignment are
raised in each. On the individual level, the paper considers how LLM ToM might
manifest in goal specification, conversational adaptation, empathy and
anthropomorphism. On the group level, it considers how LLM ToM might facilitate
collective alignment, cooperation or competition, and moral judgement-making.
The paper lays out a broad spectrum of potential implications and suggests the
most pressing areas for future research.
| [
{
"created": "Mon, 13 May 2024 19:52:16 GMT",
"version": "v1"
}
] | 2024-05-15 | [
[
"Street",
"Winnie",
""
]
] |
2405.08209 | Rachel Hong | Rachel Hong, William Agnew, Tadayoshi Kohno, and Jamie Morgenstern | Who's in and who's out? A case study of multimodal CLIP-filtering in
DataComp | Content warning: This paper discusses societal stereotypes and
sexually-explicit material that may be disturbing, distressing, and/or
offensive to the reader | Proceedings of the 4th ACM Conference on Equity and Access in
Algorithms, Mechanisms, and Optimization (EAAMO 2024) | 10.1145/3689904.3694702 | null | cs.CY cs.CL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As training datasets become increasingly drawn from unstructured,
uncontrolled environments such as the web, researchers and industry
practitioners have increasingly relied upon data filtering techniques to
"filter out the noise" of web-scraped data. While datasets have been widely
shown to reflect the biases and values of their creators, in this paper we
contribute to an emerging body of research that assesses the filters used to
create these datasets. We show that image-text data filtering also has biases
and is value-laden, encoding specific notions of what is counted as
"high-quality" data. In our work, we audit a standard approach of image-text
CLIP-filtering on the academic benchmark DataComp's CommonPool by analyzing
discrepancies of filtering through various annotation techniques across
multiple modalities of image, text, and website source. We find that data
relating to several imputed demographic groups -- such as LGBTQ+ people, older
women, and younger men -- are associated with higher rates of exclusion.
Moreover, we demonstrate cases of exclusion amplification: not only are certain
marginalized groups already underrepresented in the unfiltered data, but
CLIP-filtering excludes data from these groups at higher rates. The
data-filtering step in the machine learning pipeline can therefore exacerbate
representation disparities already present in the data-gathering step,
especially when existing filters are designed to optimize a specifically-chosen
downstream performance metric like zero-shot image classification accuracy.
Finally, we show that the NSFW filter fails to remove sexually-explicit content
from CommonPool, and that CLIP-filtering includes several categories of
copyrighted content at high rates. Our conclusions point to a need for
fundamental changes in dataset creation and filtering practices.
| [
{
"created": "Mon, 13 May 2024 21:53:06 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Oct 2024 19:34:13 GMT",
"version": "v2"
}
] | 2024-10-11 | [
[
"Hong",
"Rachel",
""
],
[
"Agnew",
"William",
""
],
[
"Kohno",
"Tadayoshi",
""
],
[
"Morgenstern",
"Jamie",
""
]
] |
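The record above (2405.08209) audits CLIP-filtering, where an image-text pair
is kept only if its CLIP similarity clears a threshold. A minimal sketch of
that scoring step follows; the 0.3 cut-off, the ViT-B/32 checkpoint, and the
example files are illustrative, not DataComp's exact configuration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = proc(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T)[0, 0])

# Pairs scoring below the threshold are discarded from the training pool.
keep = clip_score(Image.open("photo.jpg"), "a person riding a bicycle") > 0.3
```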
2405.08238 | Katie Seaborn | Takao Fujii, Katie Seaborn, Madeleine Steeds | Silver-Tongued and Sundry: Exploring Intersectional Pronouns with
ChatGPT | Honorable Mention award (top 5%) at CHI '24 | CHI '24: Proceedings of the CHI Conference on Human Factors in
Computing Systems (2024), Article No. 511, 1-14 | 10.1145/3613904.3642303 | null | cs.HC cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | ChatGPT is a conversational agent built on a large language model. Trained on
a significant portion of human output, ChatGPT can mimic people to a degree. As
such, we need to consider what social identities ChatGPT simulates (or can be
designed to simulate). In this study, we explored the case of identity
simulation through Japanese first-person pronouns, which are tightly connected
to social identities in intersectional ways, i.e., intersectional pronouns. We
conducted a controlled online experiment where people from two regions in Japan
(Kanto and Kinki) witnessed interactions with ChatGPT using ten sets of
first-person pronouns. We discovered that pronouns alone can evoke perceptions
of social identities in ChatGPT at the intersections of gender, age, region,
and formality, with caveats. This work highlights the importance of pronoun use
for social identity simulation, provides a language-based methodology for
culturally-sensitive persona development, and advances the potential of
intersectional identities in intelligent agents.
| [
{
"created": "Mon, 13 May 2024 23:38:50 GMT",
"version": "v1"
}
] | 2024-05-15 | [
[
"Fujii",
"Takao",
""
],
[
"Seaborn",
"Katie",
""
],
[
"Steeds",
"Madeleine",
""
]
] |
2405.08334 | Jiaqing Xie | Jiaqing Xie, Ziheng Chi | Could Chemical LLMs benefit from Message Passing | Accepted at ACL @ Languages and Molecules 2024. In Proceedings of ACL
2024 | In Proceedings of the 1st Workshop on Language + Molecules (L+M
2024), pages 10 20, Bangkok, Thailand. Association for Computational
Linguistics | 10.18653/v1/2024.langmol-1.2 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Pretrained language models (LMs) showcase significant capabilities in
processing molecular text, while concurrently, message passing neural networks
(MPNNs) demonstrate resilience and versatility in the domain of molecular
science. Despite these advancements, we find there are limited studies
investigating the bidirectional interactions between molecular structures and
their corresponding textual representations. Therefore, in this paper, we
propose two strategies to evaluate whether an information integration can
enhance the performance: contrast learning, which involves utilizing an MPNN to
supervise the training of the LM, and fusion, which exploits information from
both models. Our empirical analysis reveals that the integration approaches
exhibit superior performance compared to baselines when applied to smaller
molecular graphs, while these integration approaches do not yield performance
enhancements on large scale graphs.
| [
{
"created": "Tue, 14 May 2024 06:09:08 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Aug 2024 08:24:14 GMT",
"version": "v2"
}
] | 2024-10-03 | [
[
"Xie",
"Jiaqing",
""
],
[
"Chi",
"Ziheng",
""
]
] |
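The record above (2405.08334) contrasts MPNN and LM embeddings of the same
molecule. A generic symmetric InfoNCE objective for such cross-modal alignment
is sketched below; the paper's specific setup (an MPNN supervising the LM) and
its fusion strategy are not reproduced, and the temperature is an illustrative
default.

```python
import torch
import torch.nn.functional as F

def info_nce(graph_emb, text_emb, tau=0.07):
    """Symmetric InfoNCE: the i-th molecule's graph and text embeddings
    should match each other, not other molecules in the batch."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.t() / tau
    labels = torch.arange(len(g), device=g.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Toy batch: 16 molecules, 128-dim embeddings from each modality.
loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```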
2405.08429 | Mart\'in Bay\'on-Guti\'errez | Mart\'in Bay\'on-Guti\'errez, Mar\'ia Teresa Garc\'ia-Ord\'as,
H\'ector Alaiz Moret\'on, Jose Aveleira-Mata, Sergio Rubio Mart\'in and
Jos\'e Alberto Ben\'itez-Andrades | TEDNet: Twin Encoder Decoder Neural Network for 2D Camera and LiDAR Road
Detection | Source code: https://github.com/martin-bayon/TEDNet | M Bay\'on-Guti\'errez, MT Garc\'ia-Ord\'as, H Alaiz Moret\'on, J
Aveleira-Mata, S Rubio-Mart\'in, JA Ben\'itez-Andrades. TEDNet: Twin Encoder
Decoder Neural Network for 2D Camera and LiDAR Road Detection. Logic Journal
of the IGPL. 2024 | 10.1093/jigpal/jzae048 | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Robust road surface estimation is required for autonomous ground vehicles to
navigate safely. Despite becoming one of the main targets for autonomous
mobility researchers in recent years, it remains an open problem in which
cameras and LiDAR sensors have been demonstrated to be adequate for predicting the
position, size and shape of the road a vehicle is driving on in different
environments. In this work, a novel Convolutional Neural Network model is
proposed for the accurate estimation of the roadway surface. Furthermore, an
ablation study has been conducted to investigate how different encoding
strategies affect model performance, testing 6 slightly different neural
network architectures. Our model is based on the use of a Twin Encoder-Decoder
Neural Network (TEDNet) for independent camera and LiDAR feature extraction,
and has been trained and evaluated on the Kitti-Road dataset. Bird's Eye View
projections of the camera and LiDAR data are used in this model to perform
semantic segmentation on whether each pixel belongs to the road surface. The
proposed method performs on par with other state-of-the-art methods and operates at
the same frame-rate as the LiDAR and cameras, so it is adequate for its use in
real-time applications.
| [
{
"created": "Tue, 14 May 2024 08:45:34 GMT",
"version": "v1"
}
] | 2024-05-15 | [
[
"Bayón-Gutiérrez",
"Martín",
""
],
[
"García-Ordás",
"María Teresa",
""
],
[
"Moretón",
"Héctor Alaiz",
""
],
[
"Aveleira-Mata",
"Jose",
""
],
[
"Martín",
"Sergio Rubio",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
]
] |
2405.08434 | Liming Han | Liming Han, Zhaoxiang Liu, Shiguo Lian | TP3M: Transformer-based Pseudo 3D Image Matching with Reference Image | Accepted by ICRA 2024 | 2024 IEEE International Conference on Robotics and Automation
(ICRA), 3962-3968 | 10.1109/ICRA57147.2024.10610556 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image matching remains challenging in scenes with large viewpoint or
illumination changes or with low texture. In this paper, we propose a
Transformer-based pseudo 3D image matching method. It upgrades the 2D features
extracted from the source image to 3D features with the help of a reference
image and matches to the 2D features extracted from the destination image by
the coarse-to-fine 3D matching. Our key discovery is that by introducing the
reference image, the source image's fine points are screened and furtherly
their feature descriptors are enriched from 2D to 3D, which improves the match
performance with the destination image. Experimental results on multiple
datasets show that the proposed method achieves the state-of-the-art on the
tasks of homography estimation, pose estimation and visual localization
especially in challenging scenes.
| [
{
"created": "Tue, 14 May 2024 08:56:09 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Aug 2024 02:57:30 GMT",
"version": "v2"
}
] | 2024-08-13 | [
[
"Han",
"Liming",
""
],
[
"Liu",
"Zhaoxiang",
""
],
[
"Lian",
"Shiguo",
""
]
] |
2405.08755 | Syed Mhamudul Hasan | Syed Mhamudul Hasan, Alaa M. Alotaibi, Sajedul Talukder, Abdur R.
Shahid | Distributed Threat Intelligence at the Edge Devices: A Large Language
Model-Driven Approach | null | 2024 IEEE 48th Annual Computers, Software, and Applications
Conference (COMPSAC) | 10.1109/COMPSAC61105.2024.00206 | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the proliferation of edge devices, there is a significant increase in
the attack surface of these devices. The decentralized deployment of threat
intelligence on edge devices, coupled with adaptive machine learning techniques
such as the in-context learning feature of Large Language Models (LLMs),
represents a promising paradigm for enhancing cybersecurity on
resource-constrained edge devices. This approach involves the deployment of
lightweight machine learning models directly onto edge devices to analyze local
data streams, such as network traffic and system logs, in real-time.
Additionally, distributing computational tasks to an edge server reduces
latency and improves responsiveness while also enhancing privacy by processing
sensitive data locally. LLM servers can enable these edge servers to
autonomously adapt to evolving threats and attack patterns, continuously
updating their models to improve detection accuracy and reduce false positives.
Furthermore, collaborative learning mechanisms facilitate peer-to-peer secure
and trustworthy knowledge sharing among edge devices, enhancing the collective
intelligence of the network and enabling dynamic threat mitigation measures
such as device quarantine in response to detected anomalies. The scalability
and flexibility of this approach make it well-suited for diverse and evolving
network environments, as edge devices only send suspicious information such as
network traffic and system log changes, offering a resilient and efficient
solution to combat emerging cyber threats at the network edge. Thus, our
proposed framework can improve edge computing security by providing better
security in cyber threat detection and mitigation by isolating the edge devices
from the network.
| [
{
"created": "Tue, 14 May 2024 16:40:37 GMT",
"version": "v1"
},
{
"created": "Sun, 26 May 2024 06:06:08 GMT",
"version": "v2"
}
] | 2024-10-10 | [
[
"Hasan",
"Syed Mhamudul",
""
],
[
"Alotaibi",
"Alaa M.",
""
],
[
"Talukder",
"Sajedul",
""
],
[
"Shahid",
"Abdur R.",
""
]
] |
2405.09118 | Alireza Ahmadi | Alireza Ahmadi, Michael Halstead, Claus Smitt, and Chris McCool | BonnBot-I Plus: A Bio-diversity Aware Precise Weed Management Robotic
Platform | null | IEEE Robotics and Automation Letters 2024 | null | null | cs.RO cs.AI cs.LG cs.MA | http://creativecommons.org/licenses/by/4.0/ | In this article, we focus on the critical tasks of plant protection in arable
farms, addressing a modern challenge in agriculture: integrating ecological
considerations into the operational strategy of precision weeding robots like
\bbot. This article presents the recent advancements in weed management
algorithms and the real-world performance of \bbot\ at the University of Bonn's
Klein-Altendorf campus. We present a novel Rolling-view observation model for
the BonnBot-Is weed monitoring section which leads to an average absolute
weeding performance enhancement of $3.4\%$. Furthermore, for the first time, we
show how precision weeding robots could consider bio-diversity-aware concerns
in challenging weeding scenarios. We carried out comprehensive weeding
experiments in sugar-beet fields, covering both weed-only and mixed crop-weed
situations, and introduced a new dataset compatible with precision weeding. Our
real-field experiments revealed that our weeding approach is capable of
handling diverse weed distributions, with a minimal loss of only $11.66\%$
attributable to intervention planning and $14.7\%$ to vision system
limitations, highlighting required improvements to the vision system.
| [
{
"created": "Wed, 15 May 2024 06:23:59 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jul 2024 20:49:51 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Ahmadi",
"Alireza",
""
],
[
"Halstead",
"Michael",
""
],
[
"Smitt",
"Claus",
""
],
[
"McCool",
"Chris",
""
]
] |
2405.09194 | Benjamin Labbe | Henri Bouma, Bart Joosten, Maarten C Kruithof, Maaike H T de Boer,
Alexandru Ginsca (LIST (CEA)), Benjamin Labbe (LIST (CEA)), Quoc T Vuong
(LIST (CEA)) | Flexible image analysis for law enforcement agencies with deep neural
networks to determine: where, who and what | null | SPIE - Counterterrorism, Crime Fighting, Forensics, and
Surveillance Technologies II, 2018, pp.27 | 10.1117/12.2325452 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the increasing need for effective security measures and the
integration of cameras in commercial products, a huge amount of visual data is
created today. Law enforcement agencies (LEAs) are inspecting images and videos
to find radicalization, propaganda for terrorist organizations and illegal
products on darknet markets. This is time-consuming. Instead of an undirected
search, LEAs would like to adapt to new crimes and threats, and focus only on
data from specific locations, persons or objects, which requires flexible
interpretation of image content. Visual concept detection with deep
convolutional neural networks (CNNs) is a crucial component to understand
the image content. This paper has five contributions. The first contribution
allows image-based geo-localization to estimate the origin of an image. CNNs
and geotagged images are used to create a model that determines the location of
an image by its pixel values. The second contribution enables analysis of
fine-grained concepts to distinguish sub-categories in a generic concept. The
proposed method encompasses data acquisition and cleaning and concept
hierarchies. The third contribution is the recognition of person attributes
(e.g., glasses or moustache) to enable query by textual description for a
person. The person-attribute problem is treated as a specific sub-task of
concept classification. The fourth contribution is an intuitive image
annotation tool based on active learning. Active learning allows users to define
novel concepts flexibly and train CNNs with minimal annotation effort. The fifth
contribution increases the flexibility for LEAs in the query definition by
using query expansion. Query expansion maps user queries to known and detectable
concepts. Therefore, no prior knowledge of the detectable concepts is required
for the users. The methods are validated on data with varying locations
(popular and non-touristic locations), varying person attributes (CelebA
dataset), and varying number of annotations.
| [
{
"created": "Wed, 15 May 2024 09:02:17 GMT",
"version": "v1"
}
] | 2024-05-16 | [
[
"Bouma",
"Henri",
"",
"LIST"
],
[
"Joosten",
"Bart",
"",
"LIST"
],
[
"Kruithof",
"Maarten C",
"",
"LIST"
],
[
"de Boer",
"Maaike H T",
"",
"LIST"
],
[
"Ginsca",
"Alexandru",
"",
"LIST"
],
[
"Labbe",
"Benjamin",
"",
"LIST"
],
[
"Vuong",
"Quoc T",
"",
"LIST"
]
] |
2405.09558 | Stefano Savazzi | Vittorio Rampa, Federica Fieramosca, Stefano Savazzi, Michele D'Amico | An EM Body Model for Device-Free Localization with Multiple Antenna
Receivers: A First Study | null | 2023 IEEE-APS Topical Conference on Antennas and Propagation in
Wireless Communications (APWC) | 10.1109/APWC57320.2023.10297446 | null | eess.SP cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Device-Free Localization (DFL) employs passive radio techniques capable to
detect and locate people without imposing them to wear any electronic device.
By exploiting the Integrated Sensing and Communication paradigm, DFL networks
employ Radio Frequency (RF) nodes to measure the excess attenuation introduced
by the subjects (i.e., human bodies) moving inside the monitored area, and to
estimate their positions and movements. Physical, statistical, and
ElectroMagnetic (EM) models have been proposed in the literature to estimate
the body positions according to the RF signals collected by the nodes. These
body models usually employ a single-antenna processing for localization
purposes. However, the availability of low-cost multi-antenna devices such as
those used for WLAN (Wireless Local Area Network) applications and the timely
development of array-based body models, allow us to employ array-based
processing techniques in DFL networks. By exploiting a suitable array-capable
EM body model, this paper proposes an array-based framework to improve people
sensing and localization. In particular, some simulations are proposed and
discussed to compare the model results in both single- and multi-antenna
scenarios. The proposed framework paves the way for a wider use of
multi-antenna devices (e.g., those employed in current IEEE 802.11ac/ax and
forthcoming IEEE 802.11be networks) and novel beamforming algorithms for DFL
scenarios.
| [
{
"created": "Thu, 2 May 2024 16:39:37 GMT",
"version": "v1"
}
] | 2024-05-17 | [
[
"Rampa",
"Vittorio",
""
],
[
"Fieramosca",
"Federica",
""
],
[
"Savazzi",
"Stefano",
""
],
[
"D'Amico",
"Michele",
""
]
] |
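For intuition about how a DFL network turns per-link excess attenuation into a position estimate, here is a generic single-antenna radio-tomographic sketch. It is not the paper's array-based EM body model; the node layout, ellipse weighting, noise level, and regularization are arbitrary assumptions.

```python
# Generic radio-tomographic DFL sketch: each link's excess attenuation is
# modeled as a weighted sum of per-pixel attenuation near its direct path.
import numpy as np

nodes = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float)   # 4 RF nodes
links = [(i, j) for i in range(4) for j in range(i + 1, 4)]     # all node pairs
grid = np.stack(np.meshgrid(np.linspace(0, 10, 25),
                            np.linspace(0, 10, 25)), -1).reshape(-1, 2)

def link_weights(tx, rx, lam=0.5):
    # Ellipse model: pixels whose detour d(tx,p)+d(p,rx)-|tx-rx| is small
    # are assumed to affect the link.
    d_direct = np.linalg.norm(tx - rx)
    detour = (np.linalg.norm(grid - tx, axis=1)
              + np.linalg.norm(grid - rx, axis=1) - d_direct)
    return (detour < lam) / np.sqrt(d_direct)

W = np.array([link_weights(nodes[i], nodes[j]) for i, j in links])

true_pos = np.array([4.0, 6.0])                                 # person's position
x_true = (np.linalg.norm(grid - true_pos, axis=1) < 0.8).astype(float)
rss = W @ x_true + 0.01 * np.random.default_rng(1).normal(size=len(links))

# Regularized least-squares image, then argmax as the location estimate.
x_hat = np.linalg.solve(W.T @ W + 0.1 * np.eye(W.shape[1]), W.T @ rss)
print("estimated position:", grid[np.argmax(x_hat)])
```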
2405.09781 | Shiva Raj Pokhrel Dr | Navneet Singh and Shiva Raj Pokhrel | An Independent Implementation of Quantum Machine Learning Algorithms in
Qiskit for Genomic Data | 2 pager extended abstract | SIGCOMM 2024, Sydney Australia | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the power of Quantum Machine Learning as we extend,
implement and evaluate algorithms like Quantum Support Vector Classifier
(QSVC), Pegasos-QSVC, Variational Quantum Circuits (VQC), and Quantum Neural
Networks (QNN) in Qiskit with diverse feature mapping techniques for genomic
sequence classification.
| [
{
"created": "Thu, 16 May 2024 03:00:41 GMT",
"version": "v1"
}
] | 2024-05-17 | [
[
"Singh",
"Navneet",
""
],
[
"Pokhrel",
"Shiva Raj",
""
]
] |
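One of the evaluated algorithms (QSVC) can be reproduced in miniature, assuming the FidelityQuantumKernel API of recent qiskit-machine-learning releases; the toy angle features below stand in for encoded genomic sequences.

```python
# Minimal QSVC sketch in Qiskit Machine Learning (illustrative data, not
# the paper's genomic pipeline).
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))        # 2 features -> 2 qubits
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.25).astype(int)

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)   # one mapping choice
kernel = FidelityQuantumKernel(feature_map=feature_map)

qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(X[:30], y[:30])
print("test accuracy:", qsvc.score(X[30:], y[30:]))
```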
2405.09864 | Andres Asensio Ramos | A. Asensio Ramos (IAC+ULL) | Solar multi-object multi-frame blind deconvolution with a spatially
variant convolution neural emulator | 15 pages, 14 figures, accepted for publication in A&A | A&A 688, A88 (2024) | 10.1051/0004-6361/202449568 | null | astro-ph.IM cs.CV | http://creativecommons.org/licenses/by/4.0/ | The study of astronomical phenomena through ground-based observations is
always challenged by the distorting effects of Earth's atmosphere. Traditional
methods of post-facto image correction, essential for correcting these
distortions, often rely on simplifying assumptions that limit their
effectiveness, particularly in the presence of spatially variant atmospheric
turbulence. Such cases are often solved by partitioning the field-of-view into
small patches, deconvolving each patch independently, and merging all patches
together. This approach is often inefficient and can produce artifacts. Recent
advancements in computational techniques and the advent of deep learning offer
new pathways to address these limitations. This paper introduces a novel
framework leveraging a deep neural network to emulate spatially variant
convolutions, offering a breakthrough in the efficiency and accuracy of
astronomical image deconvolution. By training on a dataset of images convolved
with spatially invariant point spread functions and validating its
generalizability to spatially variant conditions, this approach presents a
significant advancement over traditional methods. The convolution emulator is
used as a forward model in a multi-object multi-frame blind deconvolution
algorithm for solar images. The emulator enables the deconvolution of solar
observations across large fields of view without resorting to patch-wise
mosaicking, thus avoiding artifacts associated with such techniques. This
method represents a significant computational advantage, reducing processing
times by orders of magnitude.
| [
{
"created": "Thu, 16 May 2024 07:42:39 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Ramos",
"A. Asensio",
"",
"IAC+ULL"
]
] |
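The patch-wise mosaicking that the emulator replaces is easy to sketch directly; the Gaussian PSFs, tile size, and field-dependent blur below are made up, and the tile seams this approach produces are exactly the artifacts the abstract mentions.

```python
# Traditional patch-wise spatially variant convolution (the approach the
# neural emulator replaces): each tile is convolved with its own PSF.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

image = np.random.default_rng(0).random((128, 128))
tile, out = 32, np.zeros((128, 128))
for i in range(0, 128, tile):
    for j in range(0, 128, tile):
        sigma = 1.0 + 2.0 * (i + j) / 256        # blur varies across the field
        psf = gaussian_psf(15, sigma)
        out[i:i + tile, j:j + tile] = fftconvolve(
            image[i:i + tile, j:j + tile], psf, mode="same")
# Adjacent tiles used different PSFs, so seams appear at tile borders.
```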
2405.09983 | Federico Moiraghi | Federico Moiraghi and Matteo Palmonari and Davide Allavena and
Federico Morando | Zero-Shot Hierarchical Classification on the Common Procurement
Vocabulary Taxonomy | Full-length version of the short paper accepted at COMPSAC 2024 | COMPSAC 2024 | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Classifying public tenders is a useful task both for companies that are
invited to participate and for inspecting fraudulent activities. To facilitate
the task for both participants and public administrations, the European Union
presented a common taxonomy (Common Procurement Vocabulary, CPV) which is
mandatory for tenders of certain importance; however, the contracts in which a
CPV label is mandatory are a minority compared to all Public Administrations'
activities. Classifying over a real-world taxonomy introduces some
difficulties that cannot be ignored. First of all, some fine-grained
classes have an insufficient (if any) number of observations in the training
set, while other classes are far more frequent (even thousands of times) than
the average. To overcome those difficulties, we present a zero-shot approach,
based on a pre-trained language model that relies only on label description and
respects the label taxonomy. To train our proposed model, we used industrial
data, which comes from contrattipubblici.org, a service by SpazioDati s.r.l.
that collects public contracts stipulated in Italy in the last 25 years.
Results show that the proposed model achieves better performance in classifying
low-frequency classes compared to three different baselines, and is also able to
predict never-seen classes.
| [
{
"created": "Thu, 16 May 2024 11:01:09 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 15:34:10 GMT",
"version": "v2"
}
] | 2024-05-31 | [
[
"Moiraghi",
"Federico",
""
],
[
"Palmonari",
"Matteo",
""
],
[
"Allavena",
"Davide",
""
],
[
"Morando",
"Federico",
""
]
] |
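The core idea, classifying from label descriptions alone while descending the taxonomy, can be sketched with a sentence-embedding model; the checkpoint name and the four-node CPV-style taxonomy are illustrative assumptions, not the paper's model or data.

```python
# Zero-shot hierarchical classification sketch: pick the best-matching
# label description at each taxonomy level, then recurse into that branch.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

taxonomy = {  # code: (description, children) -- tiny illustrative excerpt
    "45000000": ("Construction work", ["45200000"]),
    "45200000": ("Works for complete or part construction", []),
    "48000000": ("Software package and information systems", ["48600000"]),
    "48600000": ("Database and operating software package", []),
}
roots = ["45000000", "48000000"]

def classify(text, candidates):
    doc = model.encode(text, convert_to_tensor=True)
    while True:
        descs = [taxonomy[c][0] for c in candidates]
        sims = util.cos_sim(doc, model.encode(descs, convert_to_tensor=True))[0]
        best = candidates[int(sims.argmax())]
        if not taxonomy[best][1]:
            return best
        candidates = taxonomy[best][1]   # descend into the chosen branch only

print(classify("Tender for relational database licences", roots))
```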
2405.10276 | Tuo Zhang | Tuo Zhang, Jinyue Yuan, Salman Avestimehr | Revisiting OPRO: The Limitations of Small-Scale LLMs as Optimizers | null | ACL Findings 2024 | null | null | cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | Numerous recent works aim to enhance the efficacy of Large Language Models
(LLMs) through strategic prompting. In particular, the Optimization by
PROmpting (OPRO) approach provides state-of-the-art performance by leveraging
LLMs as optimizers where the optimization task is to find instructions that
maximize the task accuracy. In this paper, we revisit OPRO for automated
prompting with relatively small-scale LLMs, such as LLaMa-2 family and Mistral
7B. Our investigation reveals that OPRO shows limited effectiveness for
small-scale LLMs, whose limited inference capabilities constrain optimization
ability. We suggest that future automatic prompt engineering consider both
model capabilities and computational costs. Additionally, for small-scale LLMs,
we recommend direct instructions that clearly outline objectives and
methodologies as robust prompt baselines, ensuring efficient and effective
prompt engineering in ongoing research.
| [
{
"created": "Thu, 16 May 2024 17:33:50 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2024 00:29:05 GMT",
"version": "v2"
}
] | 2024-07-22 | [
[
"Zhang",
"Tuo",
""
],
[
"Yuan",
"Jinyue",
""
],
[
"Avestimehr",
"Salman",
""
]
] |
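The OPRO loop itself is short: an optimizer LLM reads a score-sorted trajectory of instructions and proposes a new one. Both `llm()` and `evaluate()` below are toy placeholders (a canned string, a random score) so the skeleton runs end to end; a real setup would plug in a model call and a held-out task scorer.

```python
# Schematic OPRO-style optimization loop with placeholder model/scorer.
import random
random.seed(0)

def llm(prompt: str) -> str:
    # Placeholder optimizer-LLM call.
    return f"Verify each step before answering (variant {random.randint(0, 999)})."

def evaluate(instruction: str) -> float:
    # Placeholder for task accuracy on a held-out scorer set.
    return round(random.uniform(0.4, 0.9), 2)

trajectory = [("Let's think step by step.", 0.58)]    # (instruction, score)
for _ in range(20):
    history = "\n".join(f"text: {ins}\nscore: {acc:.2f}"
                        for ins, acc in sorted(trajectory, key=lambda t: t[1]))
    meta_prompt = ("Below are instructions with their scores on a task.\n"
                   f"{history}\n"
                   "Write a new instruction, different from those above, "
                   "that achieves a higher score. Output only the text.")
    candidate = llm(meta_prompt).strip()
    trajectory.append((candidate, evaluate(candidate)))
    trajectory = sorted(trajectory, key=lambda t: t[1])[-20:]   # keep the best

print(max(trajectory, key=lambda t: t[1]))
```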
2405.10385 | Soumya Smruti Mishra | Mina Ghashami, Soumya Smruti Mishra | AmazUtah_NLP at SemEval-2024 Task 9: A MultiChoice Question Answering
System for Commonsense Defying Reasoning | Accepted at SemEval 2024 (Colocated with NAACL 2024) | Proceedings of the 18th International Workshop on Semantic
Evaluation (SemEval-2024) | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | The SemEval 2024 BRAINTEASER task represents a pioneering venture in Natural
Language Processing (NLP) by focusing on lateral thinking, a dimension of
cognitive reasoning that is often overlooked in traditional linguistic
analyses. This challenge comprises Sentence Puzzle and Word Puzzle subtasks
and aims to test language models' capacity for divergent thinking.
In this paper, we present our approach to the BRAINTEASER task. We employ a
holistic strategy by leveraging cutting-edge pre-trained models in a
multiple-choice architecture, and diversify the training data with Sentence
and Word Puzzle datasets. To gain further improvement, we fine-tuned the model
with a synthetic humor/jokes dataset and the RiddleSense dataset, which helped
augment the model's lateral thinking abilities. Empirical results show that
our approach achieves 92.5% accuracy on the Sentence Puzzle subtask and 80.2%
accuracy on the Word Puzzle subtask.
| [
{
"created": "Thu, 16 May 2024 18:26:38 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 05:21:13 GMT",
"version": "v2"
}
] | 2024-05-21 | [
[
"Ghashami",
"Mina",
""
],
[
"Mishra",
"Soumya Smruti",
""
]
] |
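The multiple-choice architecture mentioned above can be sketched with Hugging Face transformers. The checkpoint below is a generic BERT whose multiple-choice head is randomly initialized, so the prediction is meaningless; the point is only the (batch, num_choices, seq_len) input shape and scoring pattern.

```python
# Multiple-choice scoring sketch (untrained head -- illustration only).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "A man shaves every day, yet keeps his beard long. How?"
choices = ["He is a barber shaving others.", "He shaves his arms.",
           "He trims it at night.", "None of the above."]

enc = tok([question] * len(choices), choices, return_tensors="pt",
          padding=True, truncation=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}   # (1, n_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits                    # (1, n_choices)
print("predicted choice:", choices[int(logits.argmax())])
```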
2405.10542 | Jie Zhu | Jie Zhu and Junhui Li and Yalong Wen and Lifan Guo | Benchmarking Large Language Models on CFLUE -- A Chinese Financial
Language Understanding Evaluation Dataset | Accepted by ACL 2024 | The 62nd Annual Meeting of the Association for Computational
Linguistics (ACL), 2024 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In light of recent breakthroughs in large language models (LLMs) that have
revolutionized natural language processing (NLP), there is an urgent need for
new benchmarks to keep pace with the fast development of LLMs. In this paper,
we propose CFLUE, the Chinese Financial Language Understanding Evaluation
benchmark, designed to assess the capability of LLMs across various dimensions.
Specifically, CFLUE provides datasets tailored for both knowledge assessment
and application assessment. In knowledge assessment, it consists of 38K+
multiple-choice questions with associated solution explanations. These
questions serve dual purposes: answer prediction and question reasoning. In
application assessment, CFLUE features 16K+ test instances across distinct
groups of NLP tasks such as text classification, machine translation, relation
extraction, reading comprehension, and text generation. On CFLUE, we conduct
a thorough evaluation of representative LLMs. The results reveal that only
GPT-4 and GPT-4-turbo achieve an accuracy exceeding 60% in answer prediction
for knowledge assessment, suggesting that there is still substantial room for
improvement in current LLMs. In application assessment, although GPT-4 and
GPT-4-turbo are the top two performers, their considerable advantage over
lightweight LLMs is noticeably diminished. The datasets and scripts associated
with CFLUE are openly accessible at https://github.com/aliyun/cflue.
| [
{
"created": "Fri, 17 May 2024 05:03:40 GMT",
"version": "v1"
}
] | 2024-05-20 | [
[
"Zhu",
"Jie",
""
],
[
"Li",
"Junhui",
""
],
[
"Wen",
"Yalong",
""
],
[
"Guo",
"Lifan",
""
]
] |
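Answer prediction on such multiple-choice items reduces to extracting a choice letter from free-form model output and computing accuracy; the records below are invented, and CFLUE's real data and scripts live in the linked repository.

```python
# Tiny multiple-choice evaluation harness sketch (made-up records).
import re

records = [
    {"output": "The answer is B because ...", "gold": "B"},
    {"output": "I would choose (C).", "gold": "C"},
    {"output": "Answer: A", "gold": "D"},
]

def extract_choice(text):
    m = re.search(r"\b([A-D])\b", text)   # first standalone choice letter
    return m.group(1) if m else None

correct = sum(extract_choice(r["output"]) == r["gold"] for r in records)
print(f"accuracy: {correct / len(records):.2%}")
```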
2405.10700 | Scott A. Hale | Michael Shliselberg and Ashkan Kazemi and Scott A. Hale and Shiri
Dori-Hacohen | SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation
Tasks | null | Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '24), July 14--18,
2024, Washington, DC, USA | 10.1145/3626772.3657667 | null | cs.IR cs.AI cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Diaspora communities are disproportionately impacted by off-the-radar
misinformation and often neglected by mainstream fact-checking efforts,
creating a critical need to scale up the efforts of nascent fact-checking
initiatives. In this paper we present SynDy, a framework for Synthetic Dynamic
Dataset Generation to leverage the capabilities of the largest frontier Large
Language Models (LLMs) to train local, specialized language models. To the best
of our knowledge, SynDy is the first paper utilizing LLMs to create
fine-grained synthetic labels for tasks of direct relevance to misinformation
mitigation, namely Claim Matching, Topical Clustering, and Claim Relationship
Classification. SynDy utilizes LLMs and social media queries to automatically
generate distantly-supervised, topically-focused datasets with synthetic labels
on these three tasks, providing essential tools to scale up human-led
fact-checking at a fraction of the cost of human-annotated data. Training on
SynDy's generated labels shows improvement over a standard baseline and is not
significantly worse compared to training on human labels (which may be
infeasible to acquire). SynDy is being integrated into Meedan's chatbot
tiplines that are used by over 50 organizations, serve over 230K users
annually, and automatically distribute human-written fact-checks via messaging
apps such as WhatsApp. SynDy will also be integrated into our deployed
Co-Insights toolkit, enabling low-resource organizations to launch tiplines for
their communities. Finally, we envision SynDy enabling additional fact-checking
tools such as matching new misinformation claims to high-quality explainers on
common misinformation topics.
| [
{
"created": "Fri, 17 May 2024 11:14:55 GMT",
"version": "v1"
}
] | 2024-05-20 | [
[
"Shliselberg",
"Michael",
""
],
[
"Kazemi",
"Ashkan",
""
],
[
"Hale",
"Scott A.",
""
],
[
"Dori-Hacohen",
"Shiri",
""
]
] |
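The distant-supervision step can be sketched as an LLM labeling claim pairs; `llm()` below is a placeholder returning a canned answer, and the prompt wording and label set are assumptions rather than SynDy's actual templates.

```python
# Synthetic labeling sketch for claim matching (placeholder LLM call).
def llm(prompt: str) -> str:
    # Placeholder for a frontier-LLM call; returns a canned label here.
    return "match"

pairs = [("5G towers cause illness", "Cell towers make people sick"),
         ("5G towers cause illness", "Vitamin C cures colds")]

synthetic = []
for a, b in pairs:
    label = llm("Do these two claims make the same check-worthy assertion?\n"
                f"A: {a}\nB: {b}\nAnswer 'match' or 'no-match'.")
    synthetic.append({"claim_a": a, "claim_b": b, "label": label})

# These distantly supervised pairs would then train a small local model.
print(synthetic)
```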
2405.10870 | Yixing Huang | Yixing Huang, Zahra Khodabakhshi, Ahmed Gomaa, Manuel Schmidt, Rainer
Fietkau, Matthias Guckenberger, Nicolaus Andratschke, Christoph Bert,
Stephanie Tanadini-Lang, Florian Putz | Multicenter Privacy-Preserving Model Training for Deep Learning Brain
Metastases Autosegmentation | Official published version in the Green Journal:
https://doi.org/10.1016/j.radonc.2024.110419 | Radiotherapy & Oncology. 2024, 198, 110419, 1-8 | 10.1016/j.radonc.2024.110419 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objectives: This work aims to explore the impact of multicenter data
heterogeneity on deep learning brain metastases (BM) autosegmentation
performance, and assess the efficacy of an incremental transfer learning
technique, namely learning without forgetting (LWF), to improve model
generalizability without sharing raw data.
Materials and methods: A total of six BM datasets from University Hospital
Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, NYU and
BraTS Challenge 2023 on BM segmentation were used for this evaluation. First,
the multicenter performance of a convolutional neural network (DeepMedic) for
BM autosegmentation was established for exclusive single-center training and
for training on pooled data, respectively. Subsequently, bilateral
collaboration was evaluated, where a UKER-pretrained model is shared with
another center for
further training using transfer learning (TL) either with or without LWF.
Results: For single-center training, average F1 scores of BM detection range
from 0.625 (NYU) to 0.876 (UKER) on respective single-center test data. Mixed
multicenter training notably improves F1 scores at Stanford and NYU, with
negligible improvement at other centers. When the UKER-pretrained model is
applied to USZ, LWF achieves a higher average F1 score (0.839) than naive TL
(0.570) and single-center training (0.688) on combined UKER and USZ test data.
Naive TL improves sensitivity and contouring accuracy, but compromises
precision. Conversely, LWF demonstrates commendable sensitivity, precision and
contouring accuracy. When applied to Stanford, similar performance was
observed.
Conclusion: Data heterogeneity results in varying performance in BM
autosegmentation, posing challenges to model generalizability. LWF is a
promising approach to peer-to-peer privacy-preserving model training.
| [
{
"created": "Fri, 17 May 2024 16:01:11 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2024 09:51:06 GMT",
"version": "v2"
}
] | 2024-07-26 | [
[
"Huang",
"Yixing",
""
],
[
"Khodabakhshi",
"Zahra",
""
],
[
"Gomaa",
"Ahmed",
""
],
[
"Schmidt",
"Manuel",
""
],
[
"Fietkau",
"Rainer",
""
],
[
"Guckenberger",
"Matthias",
""
],
[
"Andratschke",
"Nicolaus",
""
],
[
"Bert",
"Christoph",
""
],
[
"Tanadini-Lang",
"Stephanie",
""
],
[
"Putz",
"Florian",
""
]
] |
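Learning without forgetting combines the new-center task loss with a distillation term that keeps the new model's outputs close to those of the frozen pretrained model. A toy PyTorch sketch, with a tiny classifier standing in for the paper's segmentation network and arbitrary weight/temperature values:

```python
# Learning-without-forgetting sketch: task loss + distillation to the
# frozen pretrained ("UKER") model. Toy shapes, not a 3D segmentation net.
import copy
import torch
import torch.nn.functional as F

old_model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 2))   # pretrained stand-in
new_model = copy.deepcopy(old_model)                      # start from it
old_model.eval()
for p in old_model.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(new_model.parameters(), lr=1e-3)
lam, T = 1.0, 2.0                     # distillation weight and temperature
x = torch.randn(64, 16)               # batch from the new center
y = torch.randint(0, 2, (64,))        # its labels

for step in range(100):
    logits_new = new_model(x)
    with torch.no_grad():
        logits_old = old_model(x)
    task_loss = F.cross_entropy(logits_new, y)
    distill = F.kl_div(F.log_softmax(logits_new / T, dim=1),
                       F.softmax(logits_old / T, dim=1),
                       reduction="batchmean") * T * T
    loss = task_loss + lam * distill
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final combined loss: {loss.item():.3f}")
```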