instruction (stringclasses, 1 value) | input (stringlengths, 260–2.07k) | output (stringclasses, 10 values)
---|---|---|
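A minimal sketch of loading rows like those previewed below with the Hugging Face `datasets` library, for readers who want to work with the data programmatically. The repository id used here is a hypothetical placeholder (the dump does not name the dataset), and the column names follow the schema header above.

```python
# Minimal sketch: load and inspect the three-column dataset previewed below.
# NOTE: "your-org/arxiv-field-classification" is a hypothetical placeholder id,
# not the dataset's actual repository name.
from datasets import load_dataset

ds = load_dataset("your-org/arxiv-field-classification", split="train")

row = ds[0]
print(row["instruction"])  # the single shared prompt: "What field is the article from?"
print(row["input"][:120])  # "Title: ... Abstract: ..." (roughly 260 to 2,070 characters)
print(row["output"])       # one of 10 field labels, e.g. "Machine Learning"

# The 10-way label set can be recovered directly from the output column.
labels = sorted(set(ds["output"]))
print(labels)
```

Because `instruction` takes a single value and `output` takes only ten, the rows can be used either for instruction tuning or, equally well, as a plain 10-class text-classification benchmark over the `input` column.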
What field is the article from? | Title: Optimizing Dense Feed-Forward Neural Networks
Abstract: Deep learning models have been widely used during the last decade due to
their outstanding learning and abstraction capacities. However, one of the main
challenges any scientist faces when using deep learning models is establishing
the network's architecture. Due to this difficulty, data scientists usually
build overly complex models that, as a result, are computationally
intensive and impose a large memory footprint, generating huge costs,
contributing to climate change and hindering their use in computationally
limited devices. In this paper, we propose a novel feed-forward neural network
construction method based on pruning and transfer learning. Its performance has
been thoroughly assessed in classification and regression problems. Without any
accuracy loss, our approach can compress the number of parameters by more than
70%. Furthermore, when the pruning parameter is chosen carefully, most of the
refined models outperform the original ones. We also evaluate the degree of
transfer learning by comparing the refined model against a network trained from
scratch with the same hyperparameters as the optimized model. The
results obtained show that our construction method not only helps in the design
of more efficient models but also more effective ones. | Machine Learning |
What field is the article from? | Title: "It's not like Jarvis, but it's pretty close!" -- Examining ChatGPT's Usage among Undergraduate Students in Computer Science
Abstract: Large language models (LLMs) such as ChatGPT and Google Bard have garnered
significant attention in the academic community. Previous research has
evaluated these LLMs for various applications such as generating programming
exercises and solutions. However, these evaluations have predominantly been
conducted by instructors and researchers, not considering the actual usage of
LLMs by students. This study adopts a student-first approach to comprehensively
understand how undergraduate computer science students utilize ChatGPT, a
popular LLM, released by OpenAI. We employ a combination of student surveys and
interviews to obtain valuable insights into the benefits, challenges, and
suggested improvements related to ChatGPT. Our findings suggest that a majority
of students (over 57%) have a convincingly positive outlook towards adopting
ChatGPT as an aid in coursework-related tasks. However, our research also
highlights various challenges that must be resolved for long-term acceptance of
ChatGPT amongst students. The findings from this investigation have broader
implications and may be applicable to other LLMs and their role in computing
education. | Human-Computer Interaction |
What field is the article from? | Title: From Text to Structure: Using Large Language Models to Support the Development of Legal Expert Systems
Abstract: Encoding legislative text in a formal representation is an important
prerequisite to different tasks in the field of AI & Law. For example,
rule-based expert systems focused on legislation can support laypeople in
understanding how legislation applies to them and provide them with helpful
context and information. However, the process of analyzing legislation and
other sources to encode it in the desired formal representation can be
time-consuming and represents a bottleneck in the development of such systems.
Here, we investigate to what degree large language models (LLMs), such as
GPT-4, are able to automatically extract structured representations from
legislation. We use LLMs to create pathways from legislation, according to the
JusticeBot methodology for legal decision support systems, evaluate the
pathways and compare them to manually created pathways. The results are
promising, with 60% of generated pathways being rated as equivalent or better
than manually created ones in a blind comparison. The approach suggests a
promising path to leverage the capabilities of LLMs to ease the costly
development of systems based on symbolic approaches that are transparent and
explainable. | Computational Linguistics |
What field is the article from? | Title: A DRL solution to help reduce the cost in waiting time of securing a traffic light for cyclists
Abstract: Cyclists prefer to use infrastructure that separates them from motorized
traffic. Using a traffic light to segregate car and bike flows, with the
addition of bike-specific green phases, is a lightweight and cheap solution
that can be deployed dynamically to assess the opportunity of a heavier
infrastructure such as a separate bike lane. To compensate for the increased
waiting time induced by these new phases, we introduce in this paper a deep
reinforcement learning solution that adapts the green phase cycle of a traffic
light to the traffic. Vehicle counter data are used to compare the DRL approach
with the actuated traffic light control algorithm over whole days. Results show
that DRL achieves better minimization of vehicle waiting time at almost all
hours. Our DRL approach is also robust to moderate changes in bike traffic. The
code of this paper is available at
https://github.com/LucasMagnana/A-DRL-solution-to-help-reduce-the-cost-in-waiting-time-of-securing-a-traffic-light-for-cyclists. | Artificial Intelligence |
What field is the article from? | Title: Coupling Fairness and Pruning in a Single Run: a Bi-level Optimization Perspective
Abstract: Deep neural networks have demonstrated remarkable performance in various
tasks. With a growing need for sparse deep learning, model compression
techniques, especially pruning, have gained significant attention. However,
conventional pruning techniques can inadvertently exacerbate algorithmic bias,
resulting in unequal predictions. To address this, we define a fair pruning
task where a sparse model is derived subject to fairness requirements. In
particular, we propose a framework to jointly optimize the pruning mask and
weight update processes with fairness constraints. This framework is engineered
to compress models that maintain performance while ensuring fairness in a
single execution. To this end, we formulate the fair pruning problem as a novel
constrained bi-level optimization task and derive efficient and effective
solving strategies. We design experiments spanning various datasets and
settings to validate our proposed method. Our empirical analysis contrasts our
framework with several mainstream pruning strategies, emphasizing our method's
superiority in maintaining model fairness, performance, and efficiency. | Machine Learning |
What field is the article from? | Title: TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise
Abstract: Large Language Models (LLMs) exhibit impressive reasoning and data
augmentation capabilities in various NLP tasks. However, what about small
models? In this work, we propose TeacherLM-7.1B, capable of annotating relevant
fundamentals, chain of thought, and common mistakes for most NLP samples, which
makes annotation more than just an answer, thus allowing other models to learn
"why" instead of just "what". The TeacherLM-7.1B model achieved a zero-shot
score of 52.3 on MMLU, surpassing most models with over 100B parameters. Even
more remarkable is its data augmentation ability. Based on TeacherLM-7.1B, we
augmented 58 NLP datasets and taught various student models with different
parameters from OPT and BLOOM series in a multi-task setting. The experimental
results indicate that the data augmentation provided by TeacherLM has brought
significant benefits. We will release the TeacherLM series of models and
augmented datasets as open-source. | Computational Linguistics |
What field is the article from? | Title: Earthfarseer: Versatile Spatio-Temporal Dynamical Systems Modeling in One Model
Abstract: Efficiently modeling spatio-temporal (ST) physical processes and observations
presents a challenging problem for the deep learning community. Many recent
studies have concentrated on meticulously reconciling various advantages,
leading to designed models that are neither simple nor practical. To address
this issue, this paper presents a systematic study on existing shortcomings
faced by off-the-shelf models, including lack of local fidelity, poor
prediction performance over long time-steps, low scalability, and inefficiency.
To systematically address these problems, we propose
EarthFarseer, a concise framework that combines parallel local convolutions
with a global Fourier-based transformer architecture, enabling it to
dynamically capture local-global spatial interactions and dependencies.
EarthFarseer also incorporates multi-scale fully convolutional and Fourier
architectures to efficiently and effectively capture the temporal evolution.
Our proposal demonstrates strong adaptability across various tasks and
datasets, with fast convergence and better local fidelity in long time-step
predictions. Extensive experiments and visualizations on eight human-society
and natural physical datasets demonstrate the state-of-the-art performance of
EarthFarseer. We release our code at
https://github.com/easylearningscores/EarthFarseer. | Artificial Intelligence |
What field is the article from? | Title: FireMatch: A Semi-Supervised Video Fire Detection Network Based on Consistency and Distribution Alignment
Abstract: Deep learning techniques have greatly enhanced the performance of fire
detection in videos. However, video-based fire detection models heavily rely on
labeled data, and the process of data labeling is particularly costly and
time-consuming, especially when dealing with videos. Considering the limited
quantity of labeled video data, we propose a semi-supervised fire detection
model called FireMatch, which is based on consistency regularization and
adversarial distribution alignment. Specifically, we first combine consistency
regularization with pseudo-labeling. For unlabeled data, we design video data
augmentation to obtain corresponding weakly augmented and strongly augmented
samples. The proposed model makes predictions on weakly augmented samples and
retains pseudo-labels above a threshold, while training on strongly augmented
samples to predict these pseudo-labels, thereby learning more robust feature
representations.
Secondly, we generate video cross-set augmented samples by adversarial
distribution alignment to expand the training data and alleviate the decline in
classification performance caused by insufficient labeled data. Finally, we
introduce a fairness loss to help the model produce diverse predictions for
input samples, thereby addressing the issue of high confidence with the
non-fire class in fire classification scenarios. FireMatch achieves accuracies
of 76.92% and 91.81% on two real-world fire datasets, respectively.
The experimental results demonstrate that the proposed method outperforms the
current state-of-the-art semi-supervised classification methods. | Computer Vision |
What field is the article from? | Title: A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases
Abstract: Enterprise applications of Large Language Models (LLMs) hold promise for
question answering on enterprise SQL databases. However, the extent to which
LLMs can accurately respond to enterprise questions in such databases remains
unclear, given the absence of suitable Text-to-SQL benchmarks tailored to
enterprise settings. Additionally, the potential of Knowledge Graphs (KGs) to
enhance LLM-based question answering by providing business context is not well
understood. This study aims to evaluate the accuracy of LLM-powered question
answering systems in the context of enterprise questions and SQL databases,
while also exploring the role of knowledge graphs in improving accuracy. To
achieve this, we introduce a benchmark comprising an enterprise SQL schema in
the insurance domain, a range of enterprise queries spanning reporting to
metrics, and a contextual layer incorporating an ontology and mappings that
define a knowledge graph. Our primary finding reveals that question answering
using GPT-4, with zero-shot prompts directly on SQL databases, achieves an
accuracy of 16%. Notably, this accuracy increases to 54% when questions are
posed over a Knowledge Graph representation of the enterprise SQL database.
Therefore, investing in a Knowledge Graph provides higher accuracy for
LLM-powered question answering systems. | Artificial Intelligence |
What field is the article from? | Title: A Path to Simpler Models Starts With Noise
Abstract: The Rashomon set is the set of models that perform approximately equally well
on a given dataset, and the Rashomon ratio is the fraction of all models in a
given hypothesis space that are in the Rashomon set. Rashomon ratios are often
large for tabular datasets in criminal justice, healthcare, lending, education,
and in other areas, which has practical implications for whether simpler
models can attain the same level of accuracy as more complex models. An open
question is why Rashomon ratios often tend to be large. In this work, we
propose and study a mechanism of the data generation process, coupled with
choices usually made by the analyst during the learning process, that
determines the size of the Rashomon ratio. Specifically, we demonstrate that
noisier datasets lead to larger Rashomon ratios through the way that
practitioners train models. Additionally, we introduce a measure called pattern
diversity, which captures the average difference in predictions between
distinct classification patterns in the Rashomon set, and motivate why it tends
to increase with label noise. Our results explain a key aspect of why simpler
models often tend to perform as well as black box models on complex, noisier
datasets. | Machine Learning |
What field is the article from? | Title: Modeling Uncertainty in Personalized Emotion Prediction with Normalizing Flows
Abstract: Designing predictive models for subjective problems in natural language
processing (NLP) remains challenging. This is mainly due to its
non-deterministic nature and different perceptions of the content by different
humans. It may be solved by Personalized Natural Language Processing (PNLP),
where the model exploits additional information about the reader to make more
accurate predictions. However, current approaches require complete information
about the recipients to be embedded directly. Moreover, recent methods focus
on deterministic inference or simple frequency-based estimations of the
probabilities. In this work, we overcome this limitation by proposing a novel
approach to capture the uncertainty of the forecast using conditional
Normalizing Flows. This allows us to model complex multimodal distributions and
to compare various models using negative log-likelihood (NLL). In addition, the
new solution allows for various interpretations of possible reader perception
thanks to the available sampling function. We validated our method on three
challenging, subjective NLP tasks, including emotion recognition and hate
speech. The comparative analysis of generalized and personalized approaches
revealed that our personalized solutions significantly outperform the baseline
and provide more precise uncertainty estimates. Studies of the impact on text
interpretability and on uncertainty are presented as well. The information
brought by the developed methods makes it possible to build hybrid models whose
effectiveness surpasses classic solutions. In addition, an analysis and
visualization of the probabilities of the given decisions for texts with high
entropy of annotations and annotators with mixed views were carried out. | Artificial Intelligence |
What field is the article from? | Title: Leveraging Reinforcement Learning and Large Language Models for Code Optimization
Abstract: Code optimization is a daunting task that requires a significant level of
expertise from experienced programmers, and even this expertise struggles to
keep pace with the rapid development of new hardware architectures. Towards
advancing the whole code optimization process, recent
approaches rely on machine learning and artificial intelligence techniques.
This paper introduces a new framework to decrease the complexity of code
optimization. The proposed framework builds on large language models (LLMs) and
reinforcement learning (RL) and enables LLMs to receive feedback from their
environment (i.e., unit tests) during the fine-tuning process. We compare our
framework with existing state-of-the-art models and show that it is more
efficient with respect to speed and computational usage, as a result of the
decrement in training steps and its applicability to models with fewer
parameters. Additionally, our framework reduces the possibility of logical and
syntactical errors. Toward evaluating our approach, we run several experiments
on the PIE dataset using a CodeT5 language model and RRHF, a new reinforcement
learning algorithm. We adopt a variety of evaluation metrics with regard to
optimization quality and speedup. The evaluation results demonstrate that the
proposed framework achieves results similar to existing models while using
shorter training times and smaller pre-trained models. In particular, we
accomplish increases of 5.6% and 2.2 over the baseline models on the
%OPT and SP metrics, respectively. | Machine Learning |
What field is the article from? | Title: Unified Segment-to-Segment Framework for Simultaneous Sequence Generation
Abstract: Simultaneous sequence generation is a pivotal task for real-time scenarios,
such as streaming speech recognition, simultaneous machine translation and
simultaneous speech translation, where the target sequence is generated while
receiving the source sequence. The crux of achieving high-quality generation
with low latency lies in identifying the optimal moments for generating,
accomplished by learning a mapping between the source and target sequences.
However, existing methods often rely on task-specific heuristics for different
sequence types, limiting the model's capacity to adaptively learn the
source-target mapping and hindering the exploration of multi-task learning for
various simultaneous tasks. In this paper, we propose a unified
segment-to-segment framework (Seg2Seg) for simultaneous sequence generation,
which learns the mapping in an adaptive and unified manner. During the process
of simultaneous generation, the model alternates between waiting for a source
segment and generating a target segment, making the segment serve as the
natural bridge between the source and target. To accomplish this, Seg2Seg
introduces a latent segment as the pivot between source and target and explores
all potential source-target mappings via the proposed expectation training,
thereby learning the optimal moments for generating. Experiments on multiple
simultaneous generation tasks demonstrate that Seg2Seg achieves
state-of-the-art performance and exhibits better generality across various
tasks. | Computational Linguistics |
What field is the article from? | Title: Real Customization or Just Marketing: Are Customized Versions of Chat GPT Useful?
Abstract: Large Language Models (LLMs), as in the case of OpenAI ChatGPT-4 Turbo, are
revolutionizing several industries, including higher education. In this
context, LLMs can be personalized through a fine-tuning process to meet the
student demands on every particular subject, like statistics. Recently, OpenAI
has introduced the ability to fine-tune its model through a natural language
web interface, making it possible to create customized GPT versions
deliberately conditioned to meet the demands of a specific task. The objective
of this research is to assess the potential of the customized GPTs that have
recently been launched by OpenAI. After developing a Business Statistics
Virtual Professor (BSVP), tailored for students at the Universidad Pontificia
Comillas, its behavior was evaluated and compared with that of ChatGPT-4 Turbo.
The results lead to several conclusions. Firstly, a substantial modification in
the style of communication was observed. Following the instructions it was
trained with, BSVP provided responses in a more relatable and friendly tone,
even incorporating a few minor jokes. Secondly, and this is a matter of
relevance, when explicitly asked for something like, "I would like to practice
a programming exercise similar to those in R practice 4," BSVP was capable of
providing a far superior response: having access to contextual documentation,
it could fulfill the request, something beyond ChatGPT-4 Turbo's capabilities.
On the downside, the response times were generally higher. Lastly, regarding
overall performance, quality, depth, and alignment with the specific content of
the course, no statistically significant differences were observed in the
responses between BSVP and ChatGPT-4 Turbo. It appears that customized
assistants trained with prompts present advantages as virtual aids for
students, yet they do not constitute a substantial improvement over ChatGPT-4
Turbo. | Computational Linguistics |
What field is the article from? | Title: The Role of Chain-of-Thought in Complex Vision-Language Reasoning Task
Abstract: The study explores the effectiveness of the Chain-of-Thought approach, known
for its proficiency in language tasks by breaking them down into sub-tasks and
intermediate steps, in improving vision-language tasks that demand
sophisticated perception and reasoning. We present the "Description then
Decision" strategy, which is inspired by how humans process signals. This
strategy significantly improves probing task performance by 50%, establishing
the groundwork for future research on reasoning paradigms in complex
vision-language tasks. | Computational Linguistics |
What field is the article from? | Title: LuminanceL1Loss: A loss function which measures perceived brightness and colour differences
Abstract: We introduce LuminanceL1Loss, a novel loss function designed to enhance the
performance of image restoration tasks. We demonstrate its superiority over MSE
when applied to the Retinexformer, BUIFD and DnCNN architectures. Our proposed
LuminanceL1Loss leverages a unique approach by transforming images into
grayscale and subsequently computing the MSE loss for both grayscale and color
channels. Experimental results demonstrate that this innovative loss function
consistently outperforms traditional methods, showcasing its potential in image
denoising and other related image reconstruction tasks, with gains of up to
4.7 dB. The results presented in this study highlight the efficacy
of LuminanceL1Loss for various image restoration tasks. | Computer Vision |
What field is the article from? | Title: Symbolic Planning and Code Generation for Grounded Dialogue
Abstract: Large language models (LLMs) excel at processing and generating both text and
code. However, LLMs have had limited applicability in grounded task-oriented
dialogue as they are difficult to steer toward task objectives and fail to
handle novel grounding. We present a modular and interpretable grounded
dialogue system that addresses these shortcomings by composing LLMs with a
symbolic planner and grounded code execution. Our system consists of a reader
and planner: the reader leverages an LLM to convert partner utterances into
executable code, calling functions that perform grounding. The translated
code's output is stored to track dialogue state, while a symbolic planner
determines the next appropriate response. We evaluate our system's performance
on the demanding OneCommon dialogue task, involving collaborative reference
resolution on abstract images of scattered dots. Our system substantially
outperforms the previous state-of-the-art, including improving task success in
human evaluations from 56% to 69% in the most challenging setting. | Computational Linguistics |
What field is the article from? | Title: A multi-sourced data and agent-based approach for complementing Time Use Surveys in the context of residential human activity and load curve simulation
Abstract: To address the major issues associated with using Time-Use Survey (TUS) for
simulating residential load curves, we present the SMACH approach, which
combines qualitative and quantitative data with agent-based simulation. Our
model consists of autonomous agents assigned with daily tasks. The agents try
to accomplish their assigned tasks to the best of their abilities. Quantitative
data are used to generate tasks assignments. Qualitative studies allow us to
define how agents select, based on plausible cognitive principles, the tasks to
accomplish depending on the context. Our results show a better representation
of weekdays and weekends, a more flexible association of tasks with appliances,
and an improved simulation of load curves compared to real data. Highlights:
• Discussion of the limits of Time-Use Surveys (TUS) and of the use of TUS in
activity and energy simulation; • Presentation of complementary qualitative
and quantitative data used to supplement TUS data; • Proposal of an
agent-based approach that balances these limitations. | Artificial Intelligence |
What field is the article from? | Title: Can Physics Informed Neural Operators Self Improve?
Abstract: Self-training techniques have shown remarkable value across many deep
learning models and tasks. However, such techniques remain largely unexplored
when considered in the context of learning fast solvers for systems of partial
differential equations (e.g., Neural Operators). In this work, we explore the use
of self-training for Fourier Neural Operators (FNO). Neural Operators emerged
as a data-driven technique; however, data from experiments or traditional
solvers is not always readily available. Physics Informed Neural Operators
(PINO) overcome this constraint by utilizing a physics loss for training;
however, the accuracy of PINO trained without data does not match the
performance obtained by training with data. In this work we show that
self-training can be used to close this gap in performance. We examine
canonical examples, namely the 1D-Burgers and 2D-Darcy PDEs, to showcase the
efficacy of self-training. Specifically, FNOs, when trained exclusively with
physics loss through self-training, approach 1.07x for Burgers and 1.02x for
Darcy, compared to FNOs trained with both data and physics loss. Furthermore,
we discover that pseudo-labels can be used for self-training without
necessarily training to convergence in each iteration. A consequence of this is
that we are able to discover self-training schedules that improve upon the
baseline performance of PINO in terms of accuracy as well as time. | Machine Learning |
What field is the article from? | Title: FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
Abstract: We introduce FaceTalk, a novel generative approach designed for synthesizing
high-fidelity 3D motion sequences of talking human heads from input audio
signal. To capture the expressive, detailed nature of human heads, including
hair, ears, and finer-scale eye movements, we propose to couple the speech signal
with the latent space of neural parametric head models to create high-fidelity,
temporally coherent motion sequences. We propose a new latent diffusion model
for this task, operating in the expression space of neural parametric head
models, to synthesize audio-driven realistic head sequences. In the absence of
a dataset with corresponding NPHM expressions to audio, we optimize for these
correspondences to produce a dataset of temporally-optimized NPHM expressions
fit to audio-video recordings of people talking. To the best of our knowledge,
this is the first work to propose a generative approach for realistic and
high-quality motion synthesis of volumetric human heads, representing a
significant advancement in the field of audio-driven 3D animation. Notably, our
approach stands out in its ability to generate plausible motion sequences that
can produce high-fidelity head animation coupled with the NPHM shape space. Our
experimental results substantiate the effectiveness of FaceTalk, consistently
achieving superior and visually natural motion, encompassing diverse facial
expressions and styles, outperforming existing methods by 75% in perceptual
user study evaluation. | Computer Vision |
What field is the article from? | Title: Can GPT models Follow Human Summarization Guidelines? Evaluating ChatGPT and GPT-4 for Dialogue Summarization
Abstract: This study explores the capabilities of prompt-driven Large Language Models
(LLMs) like ChatGPT and GPT-4 in adhering to human guidelines for dialogue
summarization. Experiments employed DialogSum (English social conversations)
and DECODA (French call center interactions), testing various prompts,
including prompts from existing literature and from human summarization
guidelines, as well as a two-step prompt approach. Our findings indicate that
GPT models often produce lengthy summaries and deviate from human summarization
guidelines. However, using human guidelines as an intermediate step shows
promise, outperforming direct word-length constraint prompts in some cases. The
results reveal that GPT models exhibit unique stylistic tendencies in their
summaries. While BERTScores did not dramatically decrease for GPT outputs,
suggesting semantic similarity to human references and to specialised
pre-trained models, ROUGE scores reveal grammatical and lexical disparities between
GPT-generated and human-written summaries. These findings shed light on the
capabilities and limitations of GPT models in following human instructions for
dialogue summarization. | Computational Linguistics |
What field is the article from? | Title: In-Context Learning for Knowledge Base Question Answering for Unmanned Systems based on Large Language Models
Abstract: Knowledge Base Question Answering (KBQA) aims to answer factoid questions
based on knowledge bases. However, generating the most appropriate knowledge
base query code based on Natural Language Questions (NLQ) poses a significant
challenge in KBQA. In this work, we focus on the CCKS2023 Competition of
Question Answering with Knowledge Graph Inference for Unmanned Systems.
Inspired by the recent success of large language models (LLMs) like ChatGPT and
GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL)
generation framework to generate the most appropriate CQL based on the given
NLQ. Our generative framework contains six parts: an auxiliary model predicting
the syntax-related information of CQL based on the given NLQ, a proper noun
matcher extracting proper nouns from the given NLQ, a demonstration example
selector retrieving similar examples of the input sample, a prompt constructor
designing the input template of ChatGPT, a ChatGPT-based generation model
generating the CQL, and an ensemble model to obtain the final answers from
diversified outputs. With our ChatGPT-based CQL generation framework, we
achieved the second place in the CCKS 2023 Question Answering with Knowledge
Graph Inference for Unmanned Systems competition, achieving an F1-score of
0.92676. | Computational Linguistics |
What field is the article from? | Title: YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object Detection
Abstract: Vehicle perception systems strive to achieve comprehensive and rapid visual
interpretation of their surroundings for improved safety and navigation. We
introduce YOLO-BEV, an efficient framework that harnesses a unique surrounding
cameras setup to generate a 2D bird's-eye view of the vehicular environment. By
strategically positioning eight cameras, each at a 45-degree interval, our
system captures and integrates imagery into a coherent 3x3 grid format, leaving
the center blank, providing an enriched spatial representation that facilitates
efficient processing. In our approach, we employ YOLO's detection mechanism,
favoring its inherent advantages of swift response and compact model structure.
Instead of leveraging the conventional YOLO detection head, we augment it with
a custom-designed detection head, translating the panoramically captured data
into a unified bird's-eye-view map of the ego car. Preliminary results validate the
feasibility of YOLO-BEV in real-time vehicular perception tasks. With its
streamlined architecture and potential for rapid deployment due to minimized
parameters, YOLO-BEV poses as a promising tool that may reshape future
perspectives in autonomous driving systems. | Computer Vision |
What field is the article from? | Title: MathNAS: If Blocks Have a Role in Mathematical Architecture Design
Abstract: Neural Architecture Search (NAS) has emerged as a favoured method for
unearthing effective neural architectures. Recent development of large models
has intensified the demand for faster search speeds and more accurate search
results. However, designing large models by NAS is challenging due to the
dramatic increase of the search space and the associated huge performance
evaluation cost. Consider a typical modular search space widely used in NAS, in
which a neural architecture consists of $m$ block nodes and a block node has
$n$ alternative blocks. Facing the space containing $n^m$ candidate networks,
existing NAS methods attempt to find the best one by searching and evaluating
candidate networks directly. Different from the general strategy that takes
architecture search as a whole problem, we propose a novel divide-and-conquer
strategy by making use of the modular nature of the search space. Here, we
introduce MathNAS, a general NAS framework based on mathematical programming. In
MathNAS, the performances of the $m*n$ possible building blocks in the search
space are calculated first, and then the performance of a network is directly
predicted based on the performances of its building blocks. Although estimating
block performances involves network training, just as what happens for network
performance evaluation in existing NAS methods, predicting network performance
is completely training-free and thus extremely fast. In contrast to the $n^m$
candidate networks to evaluate in existing NAS methods, which require training
and a formidable computational burden, there are only $m*n$ possible blocks to
handle in MathNAS. Therefore, our approach effectively reduces the complexity
of network performance evaluation. Our code is available at
https://github.com/wangqinsi1/MathNAS. | Machine Learning |
What field is the article from? | Title: Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning
Abstract: In this paper, we propose a novel personalized decision support system that
combines Theory of Mind (ToM) modeling and explainable Reinforcement Learning
(XRL) to provide effective and interpretable interventions. Our method
leverages DRL to provide expert action recommendations while incorporating ToM
modeling to understand users' mental states and predict their future actions,
enabling appropriate timing for intervention. To explain interventions, we use
counterfactual explanations based on RL's feature importance and users' ToM
model structure. Our proposed system generates accurate and personalized
interventions that are easily interpretable by end-users. We demonstrate the
effectiveness of our approach through a series of crowd-sourcing experiments in
a simulated team decision-making task, where our system outperforms control
baselines in terms of task performance. Our proposed approach is agnostic to
the task environment and RL model structure, and therefore has the potential to be
generalized to a wide range of applications. | Machine Learning |
What field is the article from? | Title: Predictable Reinforcement Learning Dynamics through Entropy Rate Minimization
Abstract: In Reinforcement Learning (RL), agents have no incentive to exhibit
predictable behaviors, and are often pushed (through e.g. policy entropy
regularization) to randomize their actions in favor of exploration. From a
human perspective, this makes RL agents hard to interpret and predict, and from
a safety perspective, even harder to formally verify. We propose a novel method
to induce predictable behavior in RL agents, referred to as
Predictability-Aware RL (PA-RL), which employs the state sequence entropy rate
as a predictability measure. We show how the entropy rate can be formulated as
an average reward objective, and since its entropy reward function is
policy-dependent, we introduce an action-dependent surrogate entropy enabling
the use of policy gradient (PG) methods. We prove that deterministic policies minimizing the
average surrogate reward exist and also minimize the actual entropy rate, and
show how, given a learned dynamical model, we are able to approximate the value
function associated with the true entropy rate. Finally, we demonstrate the
effectiveness of the approach in RL tasks inspired by human-robot use-cases,
and show how it produces agents with more predictable behavior while achieving
near-optimal rewards. | Machine Learning |
What field is the article from? | Title: MMM: Generative Masked Motion Model
Abstract: Recent advances in text-to-motion generation using diffusion and
autoregressive models have shown promising results. However, these models often
suffer from a trade-off between real-time performance, high fidelity, and
motion editability. To address this gap, we introduce MMM, a novel yet simple
motion generation paradigm based on Masked Motion Model. MMM consists of two
key components: (1) a motion tokenizer that transforms 3D human motion into a
sequence of discrete tokens in latent space, and (2) a conditional masked
motion transformer that learns to predict randomly masked motion tokens,
conditioned on the pre-computed text tokens. By attending to motion and text
tokens in all directions, MMM explicitly captures inherent dependency among
motion tokens and semantic mapping between motion and text tokens. During
inference, this allows parallel and iterative decoding of multiple motion
tokens that are highly consistent with fine-grained text descriptions,
therefore simultaneously achieving high-fidelity and high-speed motion
generation. In addition, MMM has innate motion editability. By simply placing
mask tokens in the place that needs editing, MMM automatically fills the gaps
while guaranteeing smooth transitions between editing and non-editing parts.
Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM
surpasses current leading methods in generating high-quality motion (evidenced
by superior FID scores of 0.08 and 0.429), while offering advanced editing
features such as body-part modification, motion in-betweening, and the
synthesis of long motion sequences. In addition, MMM is two orders of magnitude
faster on a single mid-range GPU than editable motion diffusion models. Our
project page is available at \url{https://exitudio.github.io/MMM-page}. | Computer Vision |
What field is the article from? | Title: ChatCoder: Chat-based Refine Requirement Improves LLMs' Code Generation
Abstract: Large language models have shown good performance in generating code to meet
human requirements. However, human requirements expressed in natural languages
can be vague, incomplete, and ambiguous, leading large language models to
misunderstand human requirements and make mistakes. Worse, it is difficult for
a human user to refine the requirement. To help human users refine their
requirements and improve large language models' code generation performances,
we propose ChatCoder: a method to refine the requirements via chatting with
large language models. We design a chat scheme in which the large language
models will guide the human users to refine their expression of requirements to
be more precise, unambiguous, and complete than before. Experiments show that
ChatCoder has improved existing large language models' performance by a large
margin. Moreover, ChatCoder holds an advantage over refinement-based methods and LLMs
fine-tuned via human responses. | Software Engineering |
What field is the article from? | Title: IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment
Abstract: Neural radiance fields (NeRF) is a promising approach for generating
photorealistic images and representing complex scenes. However, when processing
data sequentially, it can suffer from catastrophic forgetting, where previous
data is easily forgotten after training with new data. Existing incremental
learning methods using knowledge distillation assume that continuous data
chunks contain both 2D images and corresponding camera pose parameters,
pre-estimated from the complete dataset. This poses a paradox, as the necessary
camera poses must be estimated from the entire dataset, even though the data
arrives sequentially and future chunks are inaccessible. In contrast, we focus
on a practical scenario where camera poses are unknown. We propose IL-NeRF, a
novel framework for incremental NeRF training, to address this challenge.
IL-NeRF's key idea lies in selecting a set of past camera poses as references
to initialize and align the camera poses of incoming image data. This is
followed by a joint optimization of camera poses and replay-based NeRF
distillation. Our experiments on real-world indoor and outdoor scenes show that
IL-NeRF handles incremental NeRF training and outperforms the baselines by up
to $54.04\%$ in rendering quality. | Computer Vision |
What field is the article from? | Title: Unified learning-based lossy and lossless JPEG recompression
Abstract: JPEG is still the most widely used image compression algorithm. Most image
compression algorithms only consider the uncompressed original image, while
ignoring the large number of already existing JPEG images. Recently, JPEG
recompression approaches have been proposed to further reduce the size of JPEG
files. However, those methods only consider JPEG lossless recompression, which
is just a special case of the rate-distortion theorem. In this paper, we
propose a unified lossy and lossless JPEG recompression framework, which
consists of learned quantization table and Markovian hierarchical variational
autoencoders. Experiments show that our method can achieve arbitrarily low
distortion when the bitrate is close to the upper bound, namely the bitrate of
the lossless compression model. To the best of our knowledge, this is the first
learned method that bridges the gap between lossy and lossless recompression of
JPEG images. | Computer Vision |
What field is the article from? | Title: LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts
Abstract: Recently, the emergence of large language models (LLMs) has revolutionized
the paradigm of information retrieval (IR) applications, especially in web
search. With their remarkable capabilities in generating human-like texts, LLMs
have created enormous amounts of text on the Internet. As a result, IR systems in the
LLM era are facing a new challenge: the indexed documents are now not only
written by human beings but also automatically generated by LLMs. How these
LLM-generated documents influence the IR systems is a pressing and still
unexplored question. In this work, we conduct a quantitative evaluation of
different IR models in scenarios where both human-written and LLM-generated
texts are involved. Surprisingly, our findings indicate that neural retrieval
models tend to rank LLM-generated documents higher. We refer to this category of
biases in neural retrieval models towards the LLM-generated text as the
\textbf{source bias}. Moreover, we discover that this bias is not confined to
the first-stage neural retrievers, but extends to the second-stage neural
re-rankers. Then, we provide an in-depth analysis from the perspective of text
compression and observe that neural models can better understand the semantic
information of LLM-generated text, which is further substantiated by our
theoretical analysis. We also discuss the potential severe concerns stemming
from the observed source bias and hope our findings can serve as a critical
wake-up call to the IR community and beyond. To facilitate future explorations
of IR in the LLM era, the two newly constructed benchmarks and code will later
be available at \url{https://github.com/KID-22/LLM4IR-Bias}. | Information Retrieval |
What field is the article from? | Title: SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers
Abstract: Vision-based human-to-robot handover is an important and challenging task in
human-robot interaction. Recent work has attempted to train robot policies by
interacting with dynamic virtual humans in simulated environments, where the
policies can later be transferred to the real world. However, a major
bottleneck is the reliance on human motion capture data, which is expensive to
acquire and difficult to scale to arbitrary objects and human grasping motions.
In this paper, we introduce a framework that can generate plausible human
grasping motions suitable for training the robot. To achieve this, we propose a
hand-object synthesis method that is designed to generate handover-friendly
motions similar to those of humans. This allows us to generate synthetic training and
testing data with 100x more objects than previous work. In our experiments, we
show that our method trained purely with synthetic data is competitive with
state-of-the-art methods that rely on real human motion data both in simulation
and on a real system. In addition, we can perform evaluations on a larger scale
compared to prior work. With our newly introduced test set, we show that our
model can better scale to a large variety of unseen objects and human motions
compared to the baselines. Project page:
https://eth-ait.github.io/synthetic-handovers/ | Robotics |
What field is the article from? | Title: GPQA: A Graduate-Level Google-Proof Q&A Benchmark
Abstract: We present GPQA, a challenging dataset of 448 multiple-choice questions
written by domain experts in biology, physics, and chemistry. We ensure that
the questions are high-quality and extremely difficult: experts who have or are
pursuing PhDs in the corresponding domains reach 65% accuracy (74% when
discounting clear mistakes the experts identified in retrospect), while highly
skilled non-expert validators only reach 34% accuracy, despite spending on
average over 30 minutes with unrestricted access to the web (i.e., the
questions are "Google-proof"). The questions are also difficult for
state-of-the-art AI systems, with our strongest GPT-4 based baseline achieving
39% accuracy. If we are to use future AI systems to help us answer very hard
questions, for example, when developing new scientific knowledge, we need to
develop scalable oversight methods that enable humans to supervise their
outputs, which may be difficult even if the supervisors are themselves skilled
and knowledgeable. The difficulty of GPQA both for skilled non-experts and
frontier AI systems should enable realistic scalable oversight experiments,
which we hope can help devise ways for human experts to reliably get truthful
information from AI systems that surpass human capabilities. | Artificial Intelligence |
What field is the article from? | Title: Amortized Bayesian Decision Making for simulation-based models
Abstract: Simulation-based inference (SBI) provides a powerful framework for inferring
posterior distributions of stochastic simulators in a wide range of domains. In
many settings, however, the posterior distribution is not the end goal itself
-- rather, the derived parameter values and their uncertainties are used as a
basis for deciding what actions to take. Unfortunately, because posterior
distributions provided by SBI are (potentially crude) approximations of the
true posterior, the resulting decisions can be suboptimal. Here, we address the
question of how to perform Bayesian decision making on stochastic simulators,
and how one can circumvent the need to compute an explicit approximation to the
posterior. Our method trains a neural network on simulated data and can predict
the expected cost given any data and action, and can thus be directly used to
infer the action with the lowest cost. We apply our method to several benchmark
problems and demonstrate that it induces similar cost as the true posterior
distribution. We then apply the method to infer optimal actions in a real-world
simulator in the medical neurosciences, the Bayesian Virtual Epileptic Patient,
and demonstrate that it allows inferring actions associated with low cost after
only a few simulations. | Machine Learning |
What field is the article from? | Title: Graph Information Bottleneck for Remote Sensing Segmentation
Abstract: Remote sensing segmentation has a wide range of applications in environmental
protection, urban change detection, and other areas. Despite the success of deep
learning-based remote sensing segmentation methods (e.g., CNN and Transformer),
they are not flexible enough to model irregular objects. In addition, existing
graph contrastive learning methods usually adopt the approach of maximizing mutual
information to keep the node representations consistent between different graph
views, which may cause the model to learn task-independent redundant
information. To tackle the above problems, this paper treats images as graph
structures and introduces a simple contrastive vision GNN (SC-ViG) architecture
for remote sensing segmentation. Specifically, we construct a node-masked and
edge-masked graph view to obtain an optimal graph structure representation,
which can adaptively learn whether to mask nodes and edges. Furthermore, this
paper innovatively introduces information bottleneck theory into graph
contrastive learning to maximize task-related information while minimizing
task-independent redundant information. Finally, we replace the convolutional
module in UNet with the SC-ViG module to complete the segmentation and
classification tasks of remote sensing images. Extensive experiments on
publicly available real datasets demonstrate that our method outperforms
state-of-the-art remote sensing image segmentation methods. | Computer Vision |
What field is the article from? | Title: Large Language Models Meet Computer Vision: A Brief Survey
Abstract: Recently, the intersection of Large Language Models (LLMs) and Computer
Vision (CV) has emerged as a pivotal area of research, driving significant
advancements in the field of Artificial Intelligence (AI). As transformers have
become the backbone of many state-of-the-art models in both Natural Language
Processing (NLP) and CV, understanding their evolution and potential
enhancements is crucial. This survey paper delves into the latest progressions
in the domain of transformers and their subsequent successors, emphasizing
their potential to revolutionize Vision Transformers (ViTs) and LLMs. This
survey also presents a comparative analysis, juxtaposing the performance
metrics of several leading paid and open-source LLMs, shedding light on their
strengths and areas of improvement as well as a literature review on how LLMs
are being used to tackle vision related tasks. Furthermore, the survey presents
a comprehensive collection of datasets employed to train LLMs, offering
insights into the diverse data available to achieve high performance in various
pre-training and downstream tasks of LLMs. The survey is concluded by
highlighting open directions in the field, suggesting potential venues for
future research and development. This survey aims to underscore the profound
intersection of LLMs and CV, leading to a new era of integrated and advanced AI
models. | Computer Vision |
What field is the article from? | Title: Cracking the Code of Negative Transfer: A Cooperative Game Theoretic Approach for Cross-Domain Sequential Recommendation
Abstract: This paper investigates Cross-Domain Sequential Recommendation (CDSR), a
promising method that uses information from multiple domains (more than three)
to generate accurate and diverse recommendations, and takes into account the
sequential nature of user interactions. The effectiveness of these systems
often depends on the complex interplay among the multiple domains. In this
dynamic landscape, the problem of negative transfer arises, where heterogeneous
knowledge between dissimilar domains leads to performance degradation due to
differences in user preferences across these domains. As a remedy, we propose a
new CDSR framework that addresses the problem of negative transfer by assessing
the extent of negative transfer from one domain to another and adaptively
assigning low weight values to the corresponding prediction losses. To this
end, the amount of negative transfer is estimated by measuring the marginal
contribution of each domain to model performance based on a cooperative game
theory. In addition, we developed a hierarchical contrastive learning approach
that incorporates information from the sequence of coarse-level categories into
that of fine-level categories (e.g., item level) to mitigate negative
transfer. Despite the potentially
low relevance between domains at the fine-level, there may be higher relevance
at the category level due to its generalised and broader preferences. We show
that our model is superior to prior works in terms of model performance on two
real-world datasets across ten different domains. | Artificial Intelligence |
What field is the article from? | Title: Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees
Abstract: Hybrid RL is the setting where an RL agent has access to both offline data
and online data by interacting with the real-world environment. In this work,
we propose a new hybrid RL algorithm that combines an on-policy actor-critic
method with offline data. On-policy methods such as policy gradient and natural
policy gradient (NPG) have been shown to be more robust to model misspecification,
though they may not always be as sample-efficient as methods that rely on
off-policy learning. On the other hand, offline methods that depend on
off-policy training often require strong assumptions in theory and are less
stable to train in practice. Our new approach integrates a procedure of
off-policy training on the offline data into an on-policy NPG framework. We
show that our approach, in theory, can obtain a best-of-both-worlds type of
result -- it achieves the state-of-the-art theoretical guarantees of offline RL
when offline RL-specific assumptions hold, while at the same time maintaining
the theoretical guarantees of on-policy NPG regardless of the offline RL
assumptions' validity. Experimentally, in challenging rich-observation
environments, we show that our approach outperforms a state-of-the-art hybrid
RL baseline which only relies on off-policy policy optimization, demonstrating
the empirical benefit of combining on-policy and off-policy learning. Our code
is publicly available at https://github.com/YifeiZhou02/HNPG. | Machine Learning |
What field is the article from? | Title: BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering
Abstract: Medical Visual Question Answering (Med-VQA) is a very important task in the
healthcare industry, answering a natural language question about a medical
image. Existing VQA techniques in information systems can be directly applied
to the task. However, they often suffer from (i) the data insufficiency
problem, which makes it difficult to train state-of-the-art models (SOTAs) for
the domain-specific task, and (ii) the reproducibility problem, in that many
existing models have not been thoroughly evaluated in a unified experimental
setup. To address these issues, this paper develops a Benchmark Evaluation
SysTem for Medical Visual Question Answering, denoted by BESTMVQA. Given
self-collected clinical data, our system provides a useful tool for users to
automatically build Med-VQA datasets, which helps overcome the data
insufficiency problem. Users can also conveniently select a wide spectrum of
SOTA models from our model library to perform a comprehensive empirical study.
With simple configurations, our system automatically trains and evaluates the
selected models over a benchmark dataset, and reports the comprehensive results
for users to develop new techniques or perform medical practice. Limitations of
existing work are overcome (i) by the data generation tool, which automatically
constructs new datasets from unstructured clinical data, and (ii) by evaluating
SOTAs on benchmark datasets in a unified experimental setup. The demonstration
video of our system can be found at https://youtu.be/QkEeFlu1x4A. Our code and
data will be available soon. | Artificial Intelligence |
What field is the article from? | Title: Reframing Audience Expansion through the Lens of Probability Density Estimation
Abstract: Audience expansion has become an important element of prospective marketing,
helping marketers create target audiences based on a mere representative sample
of their current customer base. Within the realm of machine learning, a favored
algorithm for scaling this sample into a broader audience hinges on a binary
classification task, with class probability estimates playing a crucial role.
In this paper, we review this technique and introduce a key change in how we
choose training examples to ensure the quality of the generated audience. We
present a simulation study based on the widely used MNIST dataset, where
consistent high precision and recall values demonstrate our approach's ability
to identify the most relevant users for an expanded audience. Our results are
easily reproducible and a Python implementation is openly available on GitHub:
https://github.com/carvalhaes-ai/audience-expansion | Artificial Intelligence |
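A minimal sketch of the classifier-based expansion loop the abstract above describes: fit a binary classifier on seed customers vs. a background sample, then rank candidates by class-probability estimates. The synthetic arrays and the top-k cutoff are illustrative assumptions, not the paper's training-example selection.

```python
# Hypothetical sketch: expand a seed audience by ranking candidates with
# class-probability estimates from a binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
seed_users = rng.normal(1.0, 1.0, size=(200, 8))    # known customers
background = rng.normal(0.0, 1.0, size=(2000, 8))   # general population

X = np.vstack([seed_users, background])
y = np.concatenate([np.ones(len(seed_users)), np.zeros(len(background))])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score unseen candidates and keep the top-k most "seed-like" users.
candidates = rng.normal(0.5, 1.0, size=(5000, 8))
scores = clf.predict_proba(candidates)[:, 1]
expanded = candidates[np.argsort(scores)[::-1][:500]]
print(expanded.shape)  # (500, 8)
```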
What field is the article from? | Title: Enhancing Human Persuasion With Large Language Models
Abstract: Although large language models (LLMs) are reshaping various aspects of human
life, our current understanding of their impacts remains somewhat constrained.
Here we investigate the impact of LLMs on human communication, in the context
of consumer complaints in the financial industry. Employing an AI detection
tool on more than 780K complaints gathered by the Consumer Financial Protection
Bureau (CFPB), we find evidence of LLM usage in the writing of complaints -
shortly after the release of ChatGPT. Our analyses reveal that LLM usage is
positively correlated with the likelihood of obtaining desirable outcomes
(i.e., offer of relief from financial firms) and suggest that this positive
correlation may be partly due to the linguistic features improved by LLMs. We
test this conjecture with a preregistered experiment, which reveals results
consistent with those from observational studies: Consumer complaints written
with ChatGPT for improved linguistic qualities were more likely to receive
hypothetical relief offers than the original consumer complaints, demonstrating
the LLM's ability to enhance message persuasiveness in human communication.
As some of the earliest empirical evidence of LLM usage for enhancing
persuasion, our results highlight the transformative potential of LLMs in human
communication. | Human-Computer Interaction |
What field is the article from? | Title: Correlated Attention in Transformers for Multivariate Time Series
Abstract: Multivariate time series (MTS) analysis prevails in real-world applications
such as finance, climate science and healthcare. The various self-attention
mechanisms, the backbone of the state-of-the-art Transformer-based models,
efficiently discover the temporal dependencies, yet cannot well capture the
intricate cross-correlation between different features of MTS data, which
inherently stems from complex dynamical systems in practice. To this end, we
propose a novel correlated attention mechanism, which not only efficiently
captures feature-wise dependencies, but can also be seamlessly integrated
within the encoder blocks of existing well-known Transformers to gain
efficiency improvement. In particular, correlated attention operates across
feature channels to compute cross-covariance matrices between queries and keys
with different lag values, and selectively aggregates representations at the
sub-series level. This architecture facilitates automated discovery and
representation learning of not only instantaneous but also lagged
cross-correlations, while inherently capturing time series auto-correlation.
When combined with prevalent Transformer baselines, the correlated attention
mechanism constitutes a better alternative for encoder-only architectures,
which are suitable for a wide range of tasks including imputation, anomaly
detection and classification. Extensive experiments on the aforementioned tasks
consistently underscore the advantages of the correlated attention mechanism in
enhancing base Transformer models, and demonstrate our state-of-the-art results
in imputation, anomaly detection and classification. | Machine Learning |
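A rough sketch of the lagged cross-covariance idea described above: for each lag, compute a feature-by-feature covariance between queries and time-shifted keys, then use it to mix value channels. The shapes, the softmax normalization, and the lag aggregation are all assumptions for illustration, not the paper's exact mechanism.

```python
# Hypothetical sketch of lagged cross-covariance "correlated attention".
import numpy as np

def correlated_attention(Q, K, V, lags=(0, 1, 2)):
    # Q, K, V: (time, features)
    T, D = Q.shape
    out = np.zeros_like(V)
    for lag in lags:
        K_shift = np.roll(K, lag, axis=0)                          # lagged keys
        cov = (Q - Q.mean(0)).T @ (K_shift - K_shift.mean(0)) / T  # (D, D)
        weights = np.exp(cov) / np.exp(cov).sum(axis=-1, keepdims=True)
        out += V @ weights.T / len(lags)                           # mix channels
    return out

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(64, 8))
print(correlated_attention(Q, K, V).shape)  # (64, 8)
```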
What field is the article from? | Title: Causal Optimal Transport of Abstractions
Abstract: Causal abstraction (CA) theory establishes formal criteria for relating
multiple structural causal models (SCMs) at different levels of granularity by
defining maps between them. These maps have significant relevance for
real-world challenges such as synthesizing causal evidence from multiple
experimental environments, learning causally consistent representations at
different resolutions, and linking interventions across multiple SCMs. In this
work, we propose COTA, the first method to learn abstraction maps from
observational and interventional data without assuming complete knowledge of
the underlying SCMs. In particular, we introduce a multi-marginal Optimal
Transport (OT) formulation that enforces do-calculus causal constraints,
together with a cost function that relies on interventional information. We
extensively evaluate COTA on synthetic and real-world problems, and showcase
its advantages over non-causal, independent and aggregated COTA formulations.
Finally, we demonstrate the efficiency of our method as a data augmentation
tool by comparing it against the state-of-the-art CA learning framework, which
assumes fully specified SCMs, on a real-world downstream task. | Machine Learning |
What field is the article from? | Title: GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer
Abstract: Named Entity Recognition (NER) is essential in various Natural Language
Processing (NLP) applications. Traditional NER models are effective but limited
to a set of predefined entity types. In contrast, Large Language Models (LLMs)
can extract arbitrary entities through natural language instructions, offering
greater flexibility. However, their size and cost, particularly for those
accessed via APIs like ChatGPT, make them impractical in resource-limited
scenarios. In this paper, we introduce a compact NER model trained to identify
any type of entity. Leveraging a bidirectional transformer encoder, our model,
GLiNER, facilitates parallel entity extraction, an advantage over the slow
sequential token generation of LLMs. Through comprehensive testing, GLiNER
demonstrates strong performance, outperforming both ChatGPT and fine-tuned LLMs
in zero-shot evaluations on various NER benchmarks. | Computational Linguistics |
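A minimal sketch of the span-based, parallel extraction idea above: score every candidate span against embeddings of user-supplied entity-type names in one pass, with no sequential generation. The encoder is stubbed with random vectors, and the pooling and threshold are illustrative assumptions.

```python
# Hypothetical sketch of open-type span scoring (GLiNER-style); the
# bidirectional encoder is replaced by random embeddings for illustration.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["Marie", "Curie", "worked", "in", "Paris"]
dim = 16
tok_emb = rng.normal(size=(len(tokens), dim))        # stand-in encoder output
type_emb = {"person": rng.normal(size=dim), "location": rng.normal(size=dim)}

def span_embedding(i, j):
    return tok_emb[i:j + 1].mean(axis=0)             # simple span pooling

# Score all spans x all types at once (no sequential token generation).
for i in range(len(tokens)):
    for j in range(i, min(i + 2, len(tokens) - 1) + 1):
        span = span_embedding(i, j)
        for label, emb in type_emb.items():
            score = float(span @ emb)
            if score > 2.0:                          # illustrative threshold
                print(tokens[i:j + 1], label, round(score, 2))
```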
What field is the article from? | Title: ChatGPT as Co-Advisor in Scientific Initiation: Action Research with Project-Based Learning in Elementary Education
Abstract: Background: In the contemporary educational landscape, technology has the
power to drive innovative pedagogical practices. Overcoming the resistance of
teachers and students to adopting new methods and technologies is a challenge
that needs to be addressed. Objectives: To evaluate the effectiveness of
ChatGPT as a co-advisor in research projects and its influence on the
implementation of Project-Based Learning (PBL), as well as overcoming
resistance to the use of new pedagogical methodologies. Design: An
action-research methodology was employed, including unstructured interviews and
the application of questionnaires via Google Forms. Setting and Participants:
The research was conducted in an elementary school, involving 353 students and
16 teachers. Data Collection and Analysis: Data were gathered through
observations and notes in meetings and interviews, complemented by electronic
questionnaires, with quantitative and qualitative analyses performed via
Microsoft Excel and Google Forms. Results: The introduction of ChatGPT as a
pedagogical tool led to increased student engagement and decreased teacher
resistance, reflected in recognition at local science fairs. Conclusion: The
study confirmed the utility of ChatGPT in school research co-orientation,
highlighting its role in facilitating PBL and promoting cultural changes in
educational practice, with proactive school management identified as a
catalysing element in adapting to educational innovations. | Computers and Society |
What field is the article from? | Title: Flames: Benchmarking Value Alignment of Chinese Large Language Models
Abstract: The widespread adoption of large language models (LLMs) across various
regions underscores the urgent need to evaluate their alignment with human
values. Current benchmarks, however, fall short of effectively uncovering
safety vulnerabilities in LLMs. Despite numerous models achieving high scores
and 'topping the chart' in these evaluations, there is still a significant gap
in LLMs' deeper alignment with human values and achieving genuine harmlessness.
To this end, this paper proposes the first highly adversarial benchmark named
Flames, consisting of 2,251 manually crafted prompts, ~18.7K model responses
with fine-grained annotations, and a specified scorer. Our framework
encompasses both common harmlessness principles, such as fairness, safety,
legality, and data protection, and a unique morality dimension that integrates
specific Chinese values such as harmony. Based on the framework, we carefully
design adversarial prompts that incorporate complex scenarios and jailbreaking
methods, mostly with implicit malice. By prompting mainstream LLMs with such
adversarially constructed prompts, we obtain model responses, which are then
rigorously annotated for evaluation. Our findings indicate that all the
evaluated LLMs demonstrate relatively poor performance on Flames, particularly
in the safety and fairness dimensions. Claude emerges as the best-performing
model overall, but with its harmless rate being only 63.08% while GPT-4 only
scores 39.04%. The complexity of Flames has far exceeded existing benchmarks,
setting a new challenge for contemporary LLMs and highlighting the need for
further alignment of LLMs. To efficiently evaluate new models on the benchmark,
we develop a specified scorer capable of scoring LLMs across multiple
dimensions, achieving an accuracy of 77.4%. The Flames Benchmark is publicly
available on https://github.com/AIFlames/Flames. | Computational Linguistics |
What field is the article from? | Title: A Decision Support System for Liver Diseases Prediction: Integrating Batch Processing, Rule-Based Event Detection and SPARQL Query
Abstract: Liver diseases pose a significant global health burden, impacting a
substantial number of individuals and exerting substantial economic and social
consequences. Liver disease is increasingly considered a fatal condition in many
countries, such as Egypt and Moldova. The objective of this study is to
construct a predictive model for liver illness using Basic Formal Ontology
(BFO) and detection rules derived from a decision tree algorithm. Based on
these rules, events are detected through batch processing using the Apache Jena
framework. Based on the event detected, queries can be directly processed using
SPARQL. To make the ontology operational, these Decision Tree (DT) rules are
converted into Semantic Web Rule Language (SWRL). These SWRL rules are used in
the ontology to predict different types of liver disease with the help of the
Pellet and Drools inference engines in the Protege tool, over a total of 615
records covering different liver diseases. After applying the rules, results can
be generated for the patient according to the DT rules, and other
patient-related details along with different precautionary suggestions can be
obtained based on these results. Combining query results of batch processing
and ontology-generated results can give more accurate suggestions for disease
prevention and detection. This work aims to provide a comprehensive approach
that is applicable for liver disease prediction, rich knowledge graph
representation, and smart querying capabilities. The results show that
combining RDF data, SWRL rules, and SPARQL queries for analysing and predicting
liver disease can help medical professionals learn more about liver diseases
and build a Decision Support System (DSS) for healthcare. | Artificial Intelligence |
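A small sketch of the SPARQL-querying step described above, using rdflib. The namespace and predicate names are invented placeholders, not the paper's BFO-based ontology.

```python
# Hypothetical sketch: query rule-detected events over an RDF graph.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/liver#")
g = Graph()
g.add((EX.patient42, RDF.type, EX.Patient))
g.add((EX.patient42, EX.hasBilirubin, Literal(2.7)))
g.add((EX.patient42, EX.detectedEvent, EX.HighBilirubinEvent))

# Query patients for whom a rule-based event was detected.
results = g.query("""
    PREFIX ex: <http://example.org/liver#>
    SELECT ?patient ?event WHERE {
        ?patient a ex:Patient ;
                 ex:detectedEvent ?event .
    }
""")
for patient, event in results:
    print(patient, event)
```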
What field is the article from? | Title: A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning
Abstract: Logical reasoning has been an ongoing pursuit in the field of AI. Despite
significant advancements made by large language models (LLMs), they still
struggle with complex logical reasoning problems. To enhance reasoning
performance, one promising direction is scalable oversight, which requires LLMs
to identify their own errors and then improve by themselves. Various
self-verification methods have been proposed in pursuit of this goal.
Nevertheless, whether existing models understand their own errors well is still
under investigation. In this paper, we take a closer look at the
self-verification abilities of LLMs in the context of logical reasoning,
focusing on their ability to identify logical fallacies accurately. We
introduce a dataset, FALLACIES, containing 232 types of reasoning fallacies
categorized in a hierarchical taxonomy. By conducting exhaustive experiments on
FALLACIES, we obtain comprehensive and detailed analyses of a series of models
on their verification abilities. Our main findings suggest that existing LLMs
could struggle to identify fallacious reasoning steps accurately and may fall
short of guaranteeing the validity of self-verification methods. Drawing from
these observations, we offer suggestions for future research and practical
applications of self-verification methods. | Artificial Intelligence |
What field is the article from? | Title: PEFTDebias : Capturing debiasing information using PEFTs
Abstract: The increasing use of foundation models highlights the urgent need to address
and eliminate implicit biases present in them that arise during pretraining. In
this paper, we introduce PEFTDebias, a novel approach that employs
parameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation
models. PEFTDebias consists of two main phases: an upstream phase for acquiring
debiasing parameters along a specific bias axis, and a downstream phase where
these parameters are incorporated into the model and frozen during the
fine-tuning process. By evaluating on four datasets across two bias axes namely
gender and race, we find that downstream biases can be effectively reduced with
PEFTs. In addition, we show that these parameters possess axis-specific
debiasing characteristics, enabling their effective transferability in
mitigating biases in various downstream tasks. To ensure reproducibility, we
release the code for our experiments. | Machine Learning |
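A pure-PyTorch sketch of the two-phase scheme above: a small set of parameters is trained on a bias-axis objective upstream, then frozen during downstream fine-tuning. The module layout is a stand-in for a PEFT adapter, not the paper's exact setup.

```python
# Hypothetical two-phase debiasing sketch: (1) train only the "debias"
# parameters on bias-axis data, (2) freeze them while fine-tuning the head.
import torch
import torch.nn as nn

class AdapterModel(nn.Module):
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)   # stand-in frozen foundation model
        self.debias = nn.Linear(dim, dim)     # parameter-efficient module
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        return self.head(self.debias(self.backbone(x)))

model = AdapterModel()
for p in model.backbone.parameters():
    p.requires_grad = False                   # backbone frozen throughout

# Phase 1 (upstream): only the debias module trains on bias-axis data.
upstream_opt = torch.optim.Adam(model.debias.parameters(), lr=1e-3)

# Phase 2 (downstream): freeze debias params, fine-tune the task head.
for p in model.debias.parameters():
    p.requires_grad = False
downstream_opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```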
What field is the article from? | Title: DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding
Abstract: Traffic accidents frequently lead to fatal injuries, contributing to over 50
million deaths as of 2023. To mitigate driving hazards and ensure personal
safety, it is crucial to assist vehicles in anticipating important objects
during travel. Previous research on important object detection primarily
assessed the importance of individual participants, treating them as
independent entities and frequently overlooking the connections between these
participants. Unfortunately, this approach has proven less effective in
detecting important objects in complex scenarios. In response, we introduce
Driving scene Relationship self-Understanding transformer (DRUformer), designed
to enhance the important object detection task. The DRUformer is a
transformer-based multi-modal important object detection model that takes into
account the relationships between all the participants in the driving scenario.
Recognizing that driving intention also significantly affects the detection of
important objects during driving, we have incorporated a module for embedding
driving intention. To assess the performance of our approach, we conducted a
comparative experiment on the DRAMA dataset, pitting our model against other
state-of-the-art (SOTA) models. The results demonstrated a noteworthy 16.2%
improvement in mIoU and a substantial 12.3% boost in ACC compared to SOTA
methods. Furthermore, we conducted a qualitative analysis of our model's
ability to detect important objects across different road scenarios and
classes, highlighting its effectiveness in diverse contexts. Finally, we
conducted various ablation studies to assess the efficiency of the proposed
modules in our DRUformer model. | Computer Vision |
What field is the article from? | Title: Artificial intelligence optical hardware empowers high-resolution hyperspectral video understanding at 1.2 Tb/s
Abstract: Foundation models, exemplified by GPT technology, are discovering new
horizons in artificial intelligence by executing tasks beyond their designers'
expectations. While the present generation provides fundamental advances in
understanding language and images, the next frontier is video comprehension.
Progress in this area must overcome the 1 Tb/s data rate demanded to grasp
real-time multidimensional video information. This speed limit lies well beyond
the capabilities of the existing generation of hardware, imposing a roadblock
to further advances. This work introduces a hardware-accelerated integrated
optoelectronic platform for multidimensional video understanding in real-time.
The technology platform combines artificial intelligence hardware, processing
information optically, with state-of-the-art machine vision networks, resulting
in a data processing speed of 1.2 Tb/s with hundreds of frequency bands and
megapixel spatial resolution at video rates. Such performance, validated in the
AI tasks of video semantic segmentation and object understanding in indoor and
aerial applications, surpasses the speed of the closest technologies with
similar spectral resolution by three to four orders of magnitude. This platform
opens up new avenues for research in real-time AI video understanding of
multidimensional visual information, helping the empowerment of future
human-machine interactions and cognitive processing developments. | Computer Vision |
What field is the article from? | Title: How do Language Models Bind Entities in Context?
Abstract: To correctly use in-context information, language models (LMs) must bind
entities to their attributes. For example, given a context describing a "green
square" and a "blue circle", LMs must bind the shapes to their respective
colors. We analyze LM representations and identify the binding ID mechanism: a
general mechanism for solving the binding problem, which we observe in every
sufficiently large model from the Pythia and LLaMA families. Using causal
interventions, we show that LMs' internal activations represent binding
information by attaching binding ID vectors to corresponding entities and
attributes. We further show that binding ID vectors form a continuous subspace,
in which distances between binding ID vectors reflect their discernability.
Overall, our results uncover interpretable strategies in LMs for representing
symbolic knowledge in-context, providing a step towards understanding general
in-context reasoning in large-scale LMs. | Machine Learning |
What field is the article from? | Title: Visual Encoders for Data-Efficient Imitation Learning in Modern Video Games
Abstract: Video games have served as useful benchmarks for the decision making
community, but going beyond Atari games towards training agents in modern games
has been prohibitively expensive for the vast majority of the research
community. Recent progress in the research, development and open release of
large vision models has the potential to amortize some of these costs across
the community. However, it is currently unclear which of these models have
learnt representations that retain information critical for sequential decision
making. Towards enabling wider participation in the research of gameplaying
agents in modern games, we present a systematic study of imitation learning
with publicly available visual encoders compared to the typical, task-specific,
end-to-end training approach in Minecraft, Minecraft Dungeons and
Counter-Strike: Global Offensive. | Machine Learning |
What field is the article from? | Title: Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances
Abstract: To achieve state-of-the-art performance, one still needs to train NER models
on large-scale, high-quality annotated data, an asset that is both costly and
time-intensive to accumulate. In contrast, real-world applications often resort
to massive low-quality labeled data through non-expert annotators via
crowdsourcing and external knowledge bases via distant supervision as a
cost-effective alternative. However, these annotation methods result in noisy
labels, which in turn lead to a notable decline in performance. Hence, we
propose to denoise the noisy NER data with guidance from a small set of clean
instances. Along with the main NER model, we train a discriminator model and use
its outputs to recalibrate the sample weights. The discriminator is capable of
detecting both span and category errors with different discriminative prompts.
Results on public crowdsourcing and distant supervision datasets show that the
proposed method can consistently improve performance with a small guidance set. | Computational Linguistics |
What field is the article from? | Title: ConDefects: A New Dataset to Address the Data Leakage Concern for LLM-based Fault Localization and Program Repair
Abstract: With the growing interest on Large Language Models (LLMs) for fault
localization and program repair, ensuring the integrity and generalizability of
the LLM-based methods becomes paramount. The code in existing widely-adopted
benchmarks for these tasks was written before the bloom of LLMs and may be
included in the training data of existing popular LLMs, thereby suffering from
the threat of data leakage, leading to misleadingly optimistic performance
metrics. To address this issue, we introduce "ConDefects", a novel dataset of
real faults meticulously curated to eliminate such overlap. ConDefects contains
1,254 Java faulty programs and 1,625 Python faulty programs. All these programs
are sourced from the online competition platform AtCoder and were produced
between October 2021 and September 2023. We pair each fault with fault
locations and the corresponding repaired code versions, making it tailored for
fault localization and program repair research. We also provide
interfaces for selecting subsets based on different time windows and coding
task difficulties. While inspired by LLM-based tasks, ConDefects can be adopted
for benchmarking ALL types of fault localization and program repair methods.
The dataset is publicly available, and a demo video can be found at
https://www.youtube.com/watch?v=22j15Hj5ONk. | Software Engineering |
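A small sketch of the subset-selection interface the abstract above mentions: filter faulty programs by production date and task difficulty. The field names and the sample records are illustrative, not the dataset's actual schema.

```python
# Hypothetical subset selection for a fault dataset by time window and
# difficulty; records here are invented for illustration.
from datetime import date

faults = [
    {"id": "abc001", "lang": "java", "date": date(2022, 3, 1), "difficulty": 2},
    {"id": "abc002", "lang": "python", "date": date(2023, 6, 15), "difficulty": 4},
]

def select(faults, start, end, max_difficulty=None):
    """Keep faults produced inside [start, end], optionally capping difficulty."""
    subset = [f for f in faults if start <= f["date"] <= end]
    if max_difficulty is not None:
        subset = [f for f in subset if f["difficulty"] <= max_difficulty]
    return subset

# Only faults produced after a chosen cutoff, easy-to-medium tasks.
print(select(faults, date(2023, 1, 1), date(2023, 9, 30), max_difficulty=4))
```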
What field is the article from? | Title: RigLSTM: Recurrent Independent Grid LSTM for Generalizable Sequence Learning
Abstract: Sequential processes in the real world often carry a combination of simple
subsystems that interact with each other in certain forms. Learning such a
modular structure can often improve the robustness against environmental
changes. In this paper, we propose recurrent independent Grid LSTM (RigLSTM),
composed of a group of independent LSTM cells that cooperate with each other,
for exploiting the underlying modular structure of the target task. Our model
adopts cell selection, input feature selection, hidden state selection, and
soft state updating to achieve a better generalization ability on the basis of
the recent Grid LSTM for the tasks where some factors differ between training
and evaluation. Specifically, at each time step, only a fraction of cells are
activated, and the activated cells select relevant inputs and cells to
communicate with. At the end of one time step, the hidden states of the
activated cells are updated by considering the relevance between the inputs and
the hidden states from the last and current time steps. Extensive experiments
on diversified sequential modeling tasks are conducted to show the superior
generalization ability when there are changes in the testing environment.
Source code is available at https://github.com/ziyuwwang/rig-lstm. | Machine Learning |
What field is the article from? | Title: RIGA: A Regret-Based Interactive Genetic Algorithm
Abstract: In this paper, we propose an interactive genetic algorithm for solving
multi-objective combinatorial optimization problems under preference
imprecision. More precisely, we consider problems where the decision maker's
preferences over solutions can be represented by a parameterized aggregation
function (e.g., a weighted sum, an OWA operator, a Choquet integral), and we
assume that the parameters are initially not known by the recommendation
system. In order to quickly make a good recommendation, we combine elicitation
and search in the following way: 1) we use regret-based elicitation techniques
to reduce the parameter space in an efficient way, 2) genetic operators are
applied on parameter instances (instead of solutions) to better explore the
parameter space, and 3) we generate promising solutions (population) using
existing solving methods designed for the problem with known preferences. Our
algorithm, called RIGA, can be applied to any multi-objective combinatorial
optimization problem provided that the aggregation function is linear in its
parameters and that a (near-)optimal solution can be efficiently determined for
the problem with known preferences. We also study its theoretical performance:
RIGA can be implemented in such a way that it runs in polynomial time while
asking no more than a polynomial number of queries. The method is tested on the
multi-objective knapsack and traveling salesman problems. For several
performance indicators (computation times, gap to optimality and number of
queries), RIGA obtains better results than state-of-the-art algorithms. | Artificial Intelligence |
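A toy sketch of regret-based elicitation for a weighted-sum model, in the spirit of the abstract above: sample candidate weight vectors, compute each solution's max regret over the sampled parameter space, and prune weights inconsistent with a query answer. This is a simplified illustration, not the RIGA implementation.

```python
# Hypothetical regret-based elicitation sketch for a weighted-sum aggregator.
import numpy as np

rng = np.random.default_rng(0)
W = rng.dirichlet(np.ones(3), size=50)   # sampled preference parameters
S = rng.uniform(size=(6, 3))             # objective vectors of candidate solutions

def max_regret(W, S):
    vals = W @ S.T                       # (n_weights, n_solutions)
    regret = vals.max(axis=1, keepdims=True) - vals
    return regret.max(axis=0)            # worst-case regret of each solution

print("max regrets:", np.round(max_regret(W, S), 3))

# A simulated answer to the pairwise query "is solution 0 at least as good
# as solution 1?" prunes all weight vectors inconsistent with the reply.
keep = (W @ S[0]) >= (W @ S[1])
W = W[keep]
print("weights left after one query:", len(W))
```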
What field is the article from? | Title: GROOViST: A Metric for Grounding Objects in Visual Storytelling
Abstract: A proper evaluation of stories generated for a sequence of images -- the task
commonly referred to as visual storytelling -- must consider multiple aspects,
such as coherence, grammatical correctness, and visual grounding. In this work,
we focus on evaluating the degree of grounding, that is, the extent to which a
story is about the entities shown in the images. We analyze current metrics,
both designed for this purpose and for general vision-text alignment. Given
their observed shortcomings, we propose a novel evaluation tool, GROOViST, that
accounts for cross-modal dependencies, temporal misalignments (the fact that
the order in which entities appear in the story and the image sequence may not
match), and human intuitions on visual grounding. An additional advantage of
GROOViST is its modular design, where the contribution of each component can be
assessed and interpreted individually. | Artificial Intelligence |
What field is the article from? | Title: E4SRec: An Elegant Effective Efficient Extensible Solution of Large Language Models for Sequential Recommendation
Abstract: The recent advancements in Large Language Models (LLMs) have sparked interest
in harnessing their potential within recommender systems. Since LLMs are
designed for natural language tasks, existing recommendation approaches have
predominantly transformed recommendation tasks into open-domain natural
language generation tasks. However, this approach requires items to possess
rich semantic information, often generates out-of-range results, and suffers
from notably low efficiency and limited extensibility. Furthermore, practical
ID-based recommendation strategies, reliant on a huge number of unique
identities (IDs) to represent users and items, have gained prominence in
real-world recommender systems due to their effectiveness and efficiency.
Nevertheless, the incapacity of LLMs to model IDs presents a formidable
challenge when seeking to leverage LLMs for personalized recommendations. In
this paper, we introduce an Elegant Effective Efficient Extensible solution for
large language models for Sequential Recommendation (E4SRec), which seamlessly
integrates LLMs with traditional recommender systems that exclusively utilize
IDs to represent items. Specifically, E4SRec takes ID sequences as inputs,
ensuring that the generated outputs fall within the candidate lists.
Furthermore, E4SRec possesses the capability to generate the entire ranking
list in a single forward process, and demands only a minimal set of pluggable
parameters, which are trained for each dataset while keeping the entire LLM
frozen. We substantiate the effectiveness, efficiency, and extensibility of our
proposed E4SRec through comprehensive experiments conducted on four widely-used
real-world datasets. The implementation code is accessible at
https://github.com/HestiaSky/E4SRec/. | Information Retrieval |
What field is the article from? | Title: GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling
Abstract: Linear Recurrence has proven to be a powerful tool for modeling long
sequences efficiently. In this work, we show that existing models fail to take
full advantage of its potential. Motivated by this finding, we develop
GateLoop, a foundational sequence model that generalizes linear recurrent
models such as S4, S5, LRU and RetNet, by employing data-controlled state
transitions. Utilizing this theoretical advance, GateLoop empirically
outperforms existing models for auto-regressive language modeling. Our method
comes with a low-cost $O(l)$ recurrent mode and an efficient $O(l \log_{2} l)$
parallel mode making use of highly optimized associative scan implementations.
Furthermore, we derive an $O(l^2)$ surrogate attention mode, revealing
remarkable implications for Transformer and recently proposed architectures.
Specifically, we prove that our approach can be interpreted as providing
data-controlled relative-positional information to Attention. While many
existing models solely rely on data-controlled cumulative sums for context
aggregation, our findings suggest that incorporating data-controlled complex
cumulative products may be a crucial step towards more powerful sequence
models. | Machine Learning |
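A minimal sketch of a data-controlled linear recurrence in its O(l) recurrent mode, as described above: complex, input-dependent transitions a_t gate the running state, h_t = a_t * h_{t-1} + x_t. The particular magnitude/phase parameterization and the toy dimensions are assumptions, not GateLoop's exact formulation.

```python
# Hypothetical O(l) recurrent-mode sketch of a data-controlled linear
# recurrence with complex state transitions.
import numpy as np

rng = np.random.default_rng(0)
T, d = 16, 4
x = rng.normal(size=(T, d))

# Data-controlled gates: magnitude in (0, 1), phase derived from the input.
mag = 1.0 / (1.0 + np.exp(-x))                 # sigmoid amplitude
phase = np.tanh(x)                             # bounded rotation
a = mag * np.exp(1j * phase)                   # complex transition per step

h = np.zeros(d, dtype=complex)
outputs = []
for t in range(T):
    h = a[t] * h + x[t]                        # h_t = a_t * h_{t-1} + x_t
    outputs.append(h.real.copy())

print(np.stack(outputs).shape)                 # (16, 4)
```

The same recurrence can be evaluated in O(l log l) with an associative scan, since the update (a, b) -> (a2*a1, a2*b1 + b2) is associative; the loop above is the simplest form.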
What field is the article from? | Title: Understanding and Mitigating Classification Errors Through Interpretable Token Patterns
Abstract: State-of-the-art NLP methods achieve human-like performance on many tasks,
but make errors nevertheless. Characterizing these errors in easily
interpretable terms gives insight into whether a classifier is prone to making
systematic errors, but also gives a way to act and improve the classifier. We
propose to discover those patterns of tokens that distinguish correct and
erroneous predictions so as to obtain global and interpretable descriptions for
arbitrary NLP classifiers. We formulate the problem of finding a succinct and
non-redundant set of such patterns in terms of the Minimum Description Length
principle. Through an extensive set of experiments, we show that our method,
Premise, performs well in practice. Unlike existing solutions, it recovers
ground truth, even on highly imbalanced data over large vocabularies. In VQA
and NER case studies, we confirm that it gives clear and actionable insight
into the systematic errors made by NLP classifiers. | Computational Linguistics |
What field is the article from? | Title: GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization
Abstract: Text summarization (TS) is a natural language processing (NLP) subtask
pertaining to the automatic formulation of a concise and coherent summary that
covers the major concepts and topics from one or multiple documents. Recent
advancements in deep learning have led to the development of abstractive
summarization transformer-based models, which outperform classical approaches.
In any case, research in this field focuses on high resource languages such as
English, while the corresponding work for low resource languages is still
underdeveloped. Taking the above into account, this paper proposes a series of
novel TS models for Greek news articles. The proposed models were thoroughly
evaluated on the same dataset against GreekBART, which is the state-of-the-art
model in Greek abstractive news summarization. Our evaluation results reveal
that most of the proposed models significantly outperform GreekBART on various
evaluation metrics. We make our evaluation code public, aiming to increase the
reproducibility of this work and facilitate future research in the field. | Computational Linguistics |
What field is the article from? | Title: DeliverAI: Reinforcement Learning Based Distributed Path-Sharing Network for Food Deliveries
Abstract: Delivery of items from the producer to the consumer has experienced
significant growth over the past decade and has been greatly fueled by the
recent pandemic. Amazon Fresh, Shopify, UberEats, InstaCart, and DoorDash are
rapidly growing and are sharing the same business model of consumer items or
food delivery. Existing food delivery methods are sub-optimal because each
delivery is individually optimized to go directly from the producer to the
consumer via the shortest time path. We observe a significant scope for
reducing the costs associated with completing deliveries under the current
model. We model our food delivery problem as a multi-objective optimization,
where consumer satisfaction and delivery costs, both, need to be optimized.
Taking inspiration from the success of ride-sharing in the taxi industry, we
propose DeliverAI - a reinforcement learning-based path-sharing algorithm.
Unlike previous attempts for path-sharing, DeliverAI can provide real-time,
time-efficient decision-making using a Reinforcement learning-enabled agent
system. Our novel agent interaction scheme leverages path-sharing among
deliveries to reduce the total distance traveled while keeping the delivery
completion time in check. We generate and test our methodology rigorously on
a simulation setup using real data from the city of Chicago. Our results show
that DeliverAI can reduce the delivery fleet size by 12%, the distance
traveled by 13%, and achieve 50% higher fleet utilization compared to the
baselines. | Machine Learning |
What field is the article from? | Title: RDGCN: Reinforced Dependency Graph Convolutional Network for Aspect-based Sentiment Analysis
Abstract: Aspect-based sentiment analysis (ABSA) is dedicated to forecasting the
sentiment polarity of aspect terms within sentences. Employing graph neural
networks to capture structural patterns from syntactic dependency parsing has
been confirmed as an effective approach for boosting ABSA. In most works, the
topology of dependency trees or dependency-based attention coefficients is
often loosely regarded as edges between aspects and opinions, which can result
in insufficient and ambiguous syntactic utilization. To address these problems,
we propose a new reinforced dependency graph convolutional network (RDGCN) that
improves the importance calculation of dependencies in both distance and type
views. Initially, we propose an importance calculation criterion for the
minimum distances over dependency trees. Under the criterion, we design a
distance-importance function that leverages reinforcement learning for weight
distribution search and dissimilarity control. Since dependency types often do
not have explicit syntax like tree distances, we use global attention and mask
mechanisms to design type-importance functions. Finally, we merge these weights
and implement feature aggregation and classification. Comprehensive experiments
on three popular datasets demonstrate the effectiveness of the criterion and
importance functions. RDGCN outperforms state-of-the-art GNN-based baselines in
all validations. | Computational Linguistics |
What field is the article from? | Title: Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?
Abstract: For specialized and dense downstream tasks such as object detection, labeling
data requires expertise and can be very expensive, making few-shot and
semi-supervised models much more attractive alternatives. While in the few-shot
setup we observe that transformer-based object detectors perform better than
convolution-based two-stage models for a similar amount of parameters, they are
not as effective when used with recent approaches in the semi-supervised
setting. In this paper, we propose a semi-supervised method tailored for the
current state-of-the-art object detector Deformable DETR in the few-annotation
learning setup using a student-teacher architecture, which avoids relying on
sensitive post-processing of the pseudo-labels generated by the teacher model.
We evaluate our method on the semi-supervised object detection benchmarks COCO
and Pascal VOC, and it outperforms previous methods, especially when
annotations are scarce. We believe that our contributions open new
possibilities to adapt similar object detection methods in this setup as well. | Computer Vision |
What field is the article from? | Title: On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
Abstract: Successful detection of Out-of-Distribution (OoD) data is becoming
increasingly important to ensure safe deployment of neural networks. One of the
main challenges in OoD detection is that neural networks output overconfident
predictions on OoD data, making it difficult to determine the OoD-ness of data solely
based on their predictions. Outlier exposure addresses this issue by
introducing an additional loss that encourages low-confidence predictions on
OoD data during training. While outlier exposure has shown promising potential
in improving OoD detection performance, all previous studies on outlier
exposure have been limited to utilizing visual outliers. Drawing inspiration
from the recent advancements in vision-language pre-training, this paper
ventures out into the uncharted territory of textual outlier exposure. First, we
uncover the benefits of using textual outliers by replacing real or virtual
outliers in the image-domain with textual equivalents. Then, we propose various
ways of generating preferable textual outliers. Our extensive experiments
demonstrate that generated textual outliers achieve competitive performance on
large-scale OoD and hard OoD benchmarks. Furthermore, we conduct empirical
analyses of textual outliers to provide primary criteria for designing
advantageous textual outliers: near-distribution, descriptiveness, and
inclusion of visual semantics. | Computer Vision |
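A minimal sketch of the outlier-exposure objective underlying the abstract above: standard cross-entropy on in-distribution samples plus a term pushing predictions on outlier features toward the uniform distribution. The toy logits stand in for encoded (here, textual) outliers; the weighting is an assumption.

```python
# Hypothetical outlier-exposure loss: CE on in-distribution data plus a
# uniformity term on outlier logits.
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    ce = F.cross_entropy(logits_in, labels_in)
    # KL to uniform equals -mean log-softmax up to an additive constant.
    uniform_term = -F.log_softmax(logits_out, dim=-1).mean()
    return ce + lam * uniform_term

# Toy batch: 8 in-distribution samples, 8 outlier embeddings, 10 classes.
logits_in = torch.randn(8, 10)
labels_in = torch.randint(0, 10, (8,))
logits_out = torch.randn(8, 10)   # e.g., encoded textual outliers
print(outlier_exposure_loss(logits_in, labels_in, logits_out))
```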
What field is the article from? | Title: Learned Causal Method Prediction
Abstract: For a given causal question, it is important to efficiently decide which
causal inference method to use for a given dataset. This is challenging because
causal methods typically rely on complex and difficult-to-verify assumptions,
and cross-validation is not applicable since ground truth causal quantities are
unobserved. In this work, we propose CAusal Method Predictor (CAMP), a
framework for predicting the best method for a given dataset. To this end, we
generate datasets from a diverse set of synthetic causal models, score the
candidate methods, and train a model to directly predict the highest-scoring
method for that dataset. Next, by formulating a self-supervised pre-training
objective centered on dataset assumptions relevant for causal inference, we
significantly reduce the need for costly labeled data and enhance training
efficiency. Our strategy learns to map implicit dataset properties to the best
method in a data-driven manner. In our experiments, we focus on method
prediction for causal discovery. CAMP outperforms selecting any individual
candidate method and demonstrates promising generalization to unseen
semi-synthetic and real-world benchmarks. | Machine Learning |
What field is the article from? | Title: Students' interest in knowledge acquisition in Artificial Intelligence
Abstract: Some students' expectations and points of view related to the Artificial
Intelligence course are explored and analyzed in this study. We anonymously
collected answers from 58 undergraduate students out of 200 enrolled in the
Computer Science specialization. The answers were analysed and interpreted
using thematic analysis to identify their interests and the attractive and
unattractive aspects of studying Artificial Intelligence. We
concluded that students are interested in Artificial Intelligence due to its
trendiness, applicability, their passion and interest in the subject, the
potential for future growth, and high salaries. However, the students'
expectations were mainly related to achieving medium knowledge in the
Artificial Intelligence field, and men seem to be more interested in acquiring
high-level skills than women. The aspect most commonly disliked by students was
the mathematics used in Artificial Intelligence. A small group of students were
also aware of the potential for Artificial Intelligence to be used in an
unethical manner for negative purposes. Our study
also provides a short comparison to the Databases course, in which students
were not that passionate or interested in achieving medium knowledge, their
interest was related to DB usage and basic information. | Computers and Society |
What field is the article from? | Title: Efficient Object Detection in Autonomous Driving using Spiking Neural Networks: Performance, Energy Consumption Analysis, and Insights into Open-set Object Discovery
Abstract: Besides performance, efficiency is a key design driver of technologies
supporting vehicular perception. Indeed, a well-balanced trade-off between
performance and energy consumption is crucial for the sustainability of
autonomous vehicles. In this context, the diversity of real-world contexts in
which autonomous vehicles can operate motivates the need for empowering
perception models with the capability to detect, characterize and identify
newly appearing objects by themselves. In this manuscript we elaborate on this
threefold conundrum (performance, efficiency and open-world learning) for
object detection modeling tasks over image data collected from vehicular
scenarios. Specifically, we show that well-performing and efficient models can
be realized by virtue of Spiking Neural Networks (SNNs), reaching competitive
levels of detection performance when compared to their non-spiking counterparts
with dramatic energy consumption savings (up to 85%) and slightly improved
robustness against image noise. The experiments offered herein also qualitatively
expose the complexity of detecting new objects based on the preliminary
results of a simple approach to discriminate potential object proposals in the
captured image. | Computer Vision |
What field is the article from? | Title: Time Series Anomaly Detection using Diffusion-based Models
Abstract: Diffusion models have been recently used for anomaly detection (AD) in
images. In this paper we investigate whether they can also be leveraged for AD
on multivariate time series (MTS). We test two diffusion-based models and
compare them to several strong neural baselines. We also extend the PA%K
protocol by computing a ROCK-AUC metric, which is agnostic to both the
detection threshold and the ratio K of correctly detected points. Our models
outperform the baselines on synthetic datasets and are competitive on
real-world datasets, illustrating the potential of diffusion-based methods for
AD in multivariate time series. | Machine Learning |
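A minimal anomaly-scoring sketch for a diffusion-style detector on multivariate series: noise each window, ask a denoiser to reconstruct it, and use the reconstruction error as the anomaly score. The "denoiser" below is a trivial smoothing stand-in, not a trained diffusion model.

```python
# Hypothetical reconstruction-error anomaly scoring; the denoiser is a
# moving-average placeholder for a learned diffusion model.
import numpy as np

rng = np.random.default_rng(0)

def denoise(x_noisy):
    kernel = np.ones(3) / 3.0
    return np.vstack([np.convolve(ch, kernel, mode="same")
                      for ch in x_noisy.T]).T

def anomaly_score(x, noise_scale=0.1):
    x_noisy = x + noise_scale * rng.normal(size=x.shape)
    x_hat = denoise(x_noisy)
    return np.linalg.norm(x - x_hat, axis=1)   # per-timestep score

series = rng.normal(size=(100, 3))
series[40] += 5.0                               # inject a point anomaly
scores = anomaly_score(series)
print("most anomalous timestep:", scores.argmax())  # likely 40
```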
What field is the article from? | Title: Automated Fact-Checking in Dialogue: Are Specialized Models Needed?
Abstract: Prior research has shown that typical fact-checking models for stand-alone
claims struggle with claims made in dialogues. As a solution, fine-tuning these
models on labelled dialogue data has been proposed. However, creating separate
models for each use case is impractical, and we show that fine-tuning models
for dialogue results in poor performance on typical fact-checking. To overcome
this challenge, we present techniques that allow us to use the same models for
both dialogue and typical fact-checking. These mainly focus on retrieval
adaptation and transforming conversational inputs so that they can be
accurately predicted by models trained on stand-alone claims. We demonstrate
that a typical fact-checking model incorporating these techniques is
competitive with state-of-the-art models fine-tuned for dialogue, while
maintaining its accuracy on stand-alone claims. | Computational Linguistics |
What field is the article from? | Title: Dig-CSI: A Distributed and Generative Model Assisted CSI Feedback Training Framework
Abstract: The advent of deep learning (DL)-based models has significantly advanced
Channel State Information (CSI) feedback mechanisms in wireless communication
systems. However, traditional approaches often suffer from high communication
overhead and potential privacy risks due to the centralized nature of CSI data
processing. To address these challenges, we design a CSI feedback training
framework called Dig-CSI, in which the dataset for training the CSI feedback
model is produced by the distributed generators uploaded by each user equipment
(UE), rather than through local data upload. Each UE trains an autoencoder, where
the decoder is considered as the distributed generator, with local data to gain
reconstruction accuracy and the ability to generate. Experimental results show
that Dig-CSI can train a global CSI feedback model with comparable performance
to the model trained with classical centralized learning with a much lighter
communication overhead. | Artificial Intelligence |
What field is the article from? | Title: How should the advent of large language models affect the practice of science?
Abstract: Large language models (LLMs) are being increasingly incorporated into
scientific workflows. However, we have yet to fully grasp the implications of
this integration. How should the advent of large language models affect the
practice of science? For this opinion piece, we have invited four diverse
groups of scientists to reflect on this query, sharing their perspectives and
engaging in debate. Schulz et al. make the argument that working with LLMs is
not fundamentally different from working with human collaborators, while Bender
et al. argue that LLMs are often misused and over-hyped, and that their
limitations warrant a focus on more specialized, easily interpretable tools.
Marelli et al. emphasize the importance of transparent attribution and
responsible use of LLMs. Finally, Botvinick and Gershman advocate that humans
should retain responsibility for determining the scientific roadmap. To
facilitate the discussion, the four perspectives are complemented with a
response from each group. By putting these different perspectives in
conversation, we aim to bring attention to important considerations within the
academic community regarding the adoption of LLMs and their impact on both
current and future scientific practices. | Computational Linguistics |
What field is the article from? | Title: ShipGen: A Diffusion Model for Parametric Ship Hull Generation with Multiple Objectives and Constraints
Abstract: Ship design is a years-long process that requires balancing complex design
trade-offs to create a ship that is efficient and effective. Finding new ways
to improve the ship design process can lead to significant cost savings for
ship building and operation. One promising technology is generative artificial
intelligence, which has been shown to reduce design cycle time and create
novel, high-performing designs. In the literature, generative artificial
intelligence has been shown to generate ship hulls; however, ship design is
particularly difficult as the hull of a ship requires the consideration of many
objectives. This paper presents a study on the generation of parametric ship
hull designs using a parametric diffusion model that considers multiple
objectives and constraints for the hulls. This denoising diffusion
probabilistic model (DDPM) generates the tabular parametric design vectors of a
ship hull for evaluation. In addition to a tabular DDPM, this paper details
adding guidance to improve the quality of generated ship hull designs. By
leveraging classifier guidance, the DDPM produced feasible parametric ship
hulls that maintain the coverage of the initial training dataset of ship hulls
with a 99.5% rate, a 149x improvement over random sampling of the design vector
parameters across the design space. Parametric ship hulls produced with
performance guidance saw an average of 91.4% reduction in wave drag
coefficients and an average of a 47.9x relative increase in the total displaced
volume of the hulls compared to the mean performance of the hulls in the
training dataset. The use of a DDPM to generate parametric ship hulls can
reduce design time by generating high-performing hull designs for future
analysis. These generated hulls have low drag and high volume, which can reduce
the cost of operating a ship and increase its potential to generate revenue. | Machine Learning |
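A rough sketch of classifier guidance for a tabular DDPM sampler, as invoked above: each denoising step is nudged by the gradient of a differentiable feasibility/performance classifier. The networks are untrained stand-ins, and the one-parameter update is a simplified DDPM mean, not the paper's schedule.

```python
# Hypothetical classifier-guided denoising step for tabular design vectors.
import torch
import torch.nn as nn

d = 8                                        # parametric design vector size
eps_model = nn.Sequential(nn.Linear(d, 64), nn.SiLU(), nn.Linear(64, d))
classifier = nn.Sequential(nn.Linear(d, 64), nn.SiLU(), nn.Linear(64, 1))

def guided_step(x_t, alpha=0.99, scale=2.0):
    x_t = x_t.detach().requires_grad_(True)
    log_p = classifier(x_t).sum()            # assumed log p(feasible | x_t)
    grad = torch.autograd.grad(log_p, x_t)[0]
    eps = eps_model(x_t)
    # Simplified one-parameter DDPM mean, shifted toward the classifier.
    x_prev = (x_t - (1 - alpha) ** 0.5 * eps) / alpha ** 0.5
    return (x_prev + scale * grad).detach()

x = torch.randn(4, d)
for _ in range(10):                          # toy 10-step reverse process
    x = guided_step(x)
print(x.shape)                               # torch.Size([4, 8])
```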
What field is the article from? | Title: Market Concentration Implications of Foundation Models
Abstract: We analyze the structure of the market for foundation models, i.e., large AI
models such as those that power ChatGPT and that are adaptable to downstream
uses, and we examine the implications for competition policy and regulation. We
observe that the most capable models will have a tendency towards natural
monopoly and may have potentially vast markets. This calls for a two-pronged
regulatory response: (i) Antitrust authorities need to ensure the
contestability of the market by tackling strategic behavior, in particular by
ensuring that monopolies do not propagate vertically to downstream uses, and
(ii) given the diminished potential for market discipline, there is a role for
regulators to ensure that the most capable models meet sufficient quality
standards (including safety, privacy, non-discrimination, reliability and
interoperability standards) to maximally contribute to social welfare.
Regulators should also ensure a level regulatory playing field between AI and
non-AI applications in all sectors of the economy. For models that are behind
the frontier, we expect competition to be quite intense, implying a more
limited role for competition policy, although a role for regulation remains. | Artificial Intelligence |
What field is the article from? | Title: DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase
Abstract: In-Context Learning (ICL) combined with pre-trained large language models has
achieved promising results on various NLP tasks. However, ICL requires
high-quality annotated demonstrations which might not be available in
real-world scenarios. To overcome this limitation, we propose Data Augmentation
for In-Context Learning (DAIL). DAIL leverages the intuition that large language models are
more familiar with the content generated by themselves. It first utilizes the
language model to generate paraphrases of the test sample and employs majority
voting to determine the final result based on individual predictions. Our
extensive empirical evaluation shows that DAIL outperforms the standard ICL
method and other ensemble-based methods in the low-resource scenario.
Additionally, we explore the use of voting consistency as a confidence score of
the model when the logits of predictions are inaccessible. We believe our work
will stimulate further research on ICL in low-resource settings. | Computational Linguistics |
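A minimal sketch of the paraphrase-then-vote inference described above: generate paraphrases of the test input, predict on each, take the majority label, and use vote agreement as a confidence proxy. The paraphraser and classifier are trivial stubs standing in for LLM calls.

```python
# Hypothetical DAIL-style inference with stubbed LLM calls.
from collections import Counter

def paraphrase(text, n=5):
    # Stand-in: the real method would ask the LLM for self-paraphrases.
    return [text] + [f"{text} (variant {i})" for i in range(1, n)]

def classify(text):
    # Stand-in zero-shot classifier: a keyword rule instead of an LLM.
    return "positive" if "great" in text.lower() else "negative"

def dail_predict(text):
    votes = Counter(classify(p) for p in paraphrase(text))
    label, count = votes.most_common(1)[0]
    confidence = count / sum(votes.values())   # voting consistency
    return label, confidence

print(dail_predict("The movie was great and moving."))  # ('positive', 1.0)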
What field is the article from? | Title: $σ$-PCA: a unified neural model for linear and nonlinear principal component analysis
Abstract: Linear principal component analysis (PCA), nonlinear PCA, and linear
independent component analysis (ICA) -- those are three methods with
single-layer autoencoder formulations for learning linear transformations from
data. Linear PCA learns orthogonal transformations (rotations) that orient axes
to maximise variance, but it suffers from a subspace rotational indeterminacy:
it fails to find a unique rotation for axes that share the same variance. Both
nonlinear PCA and linear ICA reduce the subspace indeterminacy from rotational
to permutational by maximising statistical independence under the assumption of
unit variance. The relationship between all three can be understood through the
singular value decomposition of the linear ICA transformation into a sequence
of rotation, scale, rotation. Linear PCA learns the first rotation; nonlinear
PCA learns the second. The scale is simply the inverse of the standard
deviations. The problem is that, in contrast to linear PCA, conventional
nonlinear PCA cannot be used directly on the data to learn the first rotation,
the first being special as it reduces dimensionality and orders by variances.
In this paper, we have identified the cause, and as a solution we propose
$\sigma$-PCA: a unified neural model for linear and nonlinear PCA as
single-layer autoencoders. One of its key ingredients: modelling not just the
rotation but also the scale -- the variances. This model bridges the disparity
between linear and nonlinear PCA. And so, like linear PCA, it can learn a
semi-orthogonal transformation that reduces dimensionality and orders by
variances, but, unlike linear PCA, it does not suffer from rotational
indeterminacy. | Machine Learning |
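A small numerical illustration of the decomposition discussed above: the SVD of a linear ICA unmixing matrix factors it into rotation, scale, rotation. The matrix here is random, purely for illustration.

```python
# Verify W = U @ diag(s) @ Vt: Vt is the "first" rotation (linear-PCA-like),
# diag(s) the scale, U the "second" rotation (nonlinear-PCA-like).
import numpy as np

rng = np.random.default_rng(0)
W_ica = rng.normal(size=(4, 4))              # stand-in unmixing matrix

U, s, Vt = np.linalg.svd(W_ica)
reconstruction = U @ np.diag(s) @ Vt
print(np.allclose(W_ica, reconstruction))    # True
print("orthogonal:", np.allclose(U @ U.T, np.eye(4)))
```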
What field is the article from? | Title: Modeling the Uncertainty with Maximum Discrepant Students for Semi-supervised 2D Pose Estimation
Abstract: Semi-supervised pose estimation is a practically challenging task for
computer vision. Although numerous excellent semi-supervised classification
methods have emerged, these methods typically use confidence to evaluate the
quality of pseudo-labels, which is difficult to achieve in pose estimation
tasks. For example, in pose estimation, confidence represents only the
possibility that a position of the heatmap is a keypoint, not the quality of
that prediction. In this paper, we propose a simple yet efficient framework to
estimate the quality of pseudo-labels in semi-supervised pose estimation tasks
from the perspective of modeling the uncertainty of the pseudo-labels.
Concretely, under the dual mean-teacher framework, we construct two
maximum-discrepant students (MDSs) to effectively push the two teachers to generate
different decision boundaries for the same sample. Moreover, we create multiple
uncertainties to assess the quality of the pseudo-labels. Experimental results
demonstrate that our method improves the performance of semi-supervised pose
estimation on three datasets. | Computer Vision |
What field is the article from? | Title: Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation
Abstract: Evaluating text-to-image models is notoriously difficult. A strong recent
approach for assessing text-image faithfulness is based on QG/A (question
generation and answering), which uses pre-trained foundational models to
automatically generate a set of questions and answers from the prompt, and
output images are scored based on whether these answers extracted with a visual
question answering model are consistent with the prompt-based answers. This
kind of evaluation is naturally dependent on the quality of the underlying QG
and QA models. We identify and address several reliability challenges in
existing QG/A work: (a) QG questions should respect the prompt (avoiding
hallucinations, duplications, and omissions) and (b) VQA answers should be
consistent (not asserting that there is no motorcycle in an image while also
claiming the motorcycle is blue). We address these issues with Davidsonian
Scene Graph (DSG), an empirically grounded evaluation framework inspired by
formal semantics. DSG is an automatic, graph-based QG/A that is modularly
implemented to be adaptable to any QG/A module. DSG produces atomic and unique
questions organized in dependency graphs, which (i) ensure appropriate semantic
coverage and (ii) sidestep inconsistent answers. With extensive experimentation
and human evaluation on a range of model configurations (LLM, VQA, and T2I), we
empirically demonstrate that DSG addresses the challenges noted above. Finally,
we present DSG-1k, an open-sourced evaluation benchmark that includes 1,060
prompts, covering a wide range of fine-grained semantic categories with a
balanced distribution. We release the DSG-1k prompts and the corresponding DSG
questions. | Computer Vision |
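As an illustration only, a dependency-aware QG/A loop in the spirit of DSG: a child question (e.g., "Is the motorcycle blue?") is scored only when its parent ("Is there a motorcycle?") was answered positively, which sidesteps the inconsistent-answer failure mode described above. The `vqa` callable, the dicts, and the scoring rule are placeholders, not the released DSG code.

```python
def score_image(questions, parents, vqa, image):
    # `questions`: {qid: question text}, assumed ordered so parents come
    # before children; `parents`: {child qid: parent qid}.
    answers = {}
    for qid, question in questions.items():
        parent = parents.get(qid)
        if parent is not None and not answers[parent]:
            answers[qid] = False  # premise failed; don't ask the child
            continue
        answers[qid] = bool(vqa(image, question))  # yes/no from a VQA model
    return sum(answers.values()) / len(answers)
```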
What field is the article from? | Title: DEFT: Data Efficient Fine-Tuning for Large Language Models via Unsupervised Core-Set Selection
Abstract: Recent advances have led to the availability of many pre-trained language
models (PLMs); however, a question that remains is how much data is truly
needed to fine-tune PLMs for downstream tasks? In this work, we introduce DEFT,
a data-efficient fine-tuning framework that leverages unsupervised core-set
selection to minimize the amount of data needed to fine-tune PLMs for
downstream tasks. We demonstrate the efficacy of our DEFT framework in the
context of text-editing LMs, and compare to the state-of-the-art text-editing
model, CoEDIT. Our quantitative and qualitative results demonstrate that DEFT
models are just as accurate as CoEDIT while being fine-tuned on ~70% less data. | Computational Linguistics |
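The abstract does not name the selection algorithm, so purely as an illustrative stand-in, here is k-center greedy (farthest-first traversal), one standard unsupervised core-set selection method that could be run over sentence embeddings:

```python
import numpy as np

def k_center_greedy(embeddings, k, seed=0):
    """Select k points that cover the embedding space (k-center greedy).
    Illustrative only: the paper may use a different selection method."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(embeddings)))]
    dist = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest point from current core-set
        selected.append(nxt)
        # Each point's distance to its nearest selected center.
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected
```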
What field is the article from? | Title: Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models
Abstract: The recent explosion in the capabilities of large language models has led to
a wave of interest in how best to prompt a model to perform a given task. While
it may be tempting to simply choose a prompt based on average performance on a
validation set, this can lead to a deployment where unexpectedly poor responses
are generated, especially for the worst-off users. To mitigate this prospect,
we propose Prompt Risk Control, a lightweight framework for selecting a prompt
based on rigorous upper bounds on families of informative risk measures. We
offer methods for producing bounds on a diverse set of metrics, including
quantities that measure worst-case responses and disparities in generation
quality across the population of users. In addition, we extend the underlying
statistical bounding techniques to accommodate the possibility of distribution
shifts in deployment. Experiments on applications such as open-ended chat,
medical question summarization, and code generation highlight how such a
framework can foster responsible deployment by reducing the risk of the worst
outcomes. | Machine Learning |
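As a minimal sketch of "selecting a prompt based on rigorous upper bounds": a Hoeffding bound for losses in [0, 1], choosing the prompt with the smallest bound rather than the smallest validation mean. The paper's richer risk measures and distribution-shift corrections are not shown, and a union correction over the candidate prompts is omitted for brevity.

```python
import numpy as np

def hoeffding_upper_bound(losses, delta=0.05):
    # For i.i.d. losses bounded in [0, 1], the population risk lies below
    # this value with probability at least 1 - delta.
    losses = np.asarray(losses, dtype=float)
    return losses.mean() + np.sqrt(np.log(1.0 / delta) / (2 * len(losses)))

def select_prompt(loss_table, delta=0.05):
    # Pick the prompt whose upper *bound* is smallest, not its raw mean.
    bounds = {p: hoeffding_upper_bound(l, delta) for p, l in loss_table.items()}
    return min(bounds, key=bounds.get)
```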
What field is the article from? | Title: Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations
Abstract: Ensuring both transparency and safety is critical when deploying Deep Neural
Networks (DNNs) in high-risk applications, such as medicine. The field of
explainable AI (XAI) has proposed various methods to comprehend the
decision-making processes of opaque DNNs. However, only a few XAI methods are
suitable for ensuring safety in practice, as they heavily rely on repeated,
labor-intensive, and possibly biased human assessment. In this work, we present
a novel post-hoc concept-based XAI framework that conveys not only instance-wise
(local) but also class-wise (global) decision-making strategies via prototypes.
What sets our approach apart is the combination of local and global strategies,
enabling a clearer understanding of the (dis-)similarities in model decisions
compared to the expected (prototypical) concept use, ultimately reducing the
dependence on long-term human assessment. Quantifying the deviation from
prototypical behavior not only allows predictions to be associated with specific
model sub-strategies but also enables the detection of outlier behavior. As such, our approach
constitutes an intuitive and explainable tool for model validation. We
demonstrate the effectiveness of our approach in identifying
out-of-distribution samples, spurious model behavior and data quality issues
across three datasets (ImageNet, CUB-200, and CIFAR-10) utilizing VGG, ResNet,
and EfficientNet architectures. Code is available on
https://github.com/maxdreyer/pcx. | Computer Vision |
What field is the article from? | Title: Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes
Abstract: The task of deepfakes detection is far from being solved by speech or vision
researchers. Several publicly available databases of fake synthetic video and
speech were built to aid the development of detection methods. However,
existing databases typically focus on visual or voice modalities and provide no
proof that their deepfakes can in fact impersonate any real person. In this
paper, we present the first realistic audio-visual database of deepfakes
SWAN-DF, where lips and speech are well synchronized and the videos have high
visual and audio quality. We took the publicly available SWAN dataset of real videos
with different identities to create audio-visual deepfakes using several models
from DeepFaceLab and blending techniques for face swapping and HiFiVC, DiffVC,
YourTTS, and FreeVC models for voice conversion. From the publicly available
speech dataset LibriTTS, we also created a separate database of only audio
deepfakes LibriTTS-DF using several recent text-to-speech methods: YourTTS,
Adaspeech, and TorToiSe. We demonstrate the vulnerability of a state-of-the-art
speaker recognition system, the ECAPA-TDNN-based model from SpeechBrain, to
the synthetic voices. Similarly, we tested a face recognition system based on the
MobileFaceNet architecture against several variants of our visual deepfakes. The
vulnerability assessment shows that by tuning existing pretrained deepfake
models to specific identities, one can successfully spoof the face and speaker
recognition systems more than 90% of the time and achieve very realistic-looking
and -sounding fake videos of a given person. | Computer Vision |
What field is the article from? | Title: VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation
Abstract: Domain generalization (DG) remains a significant challenge for perception
based on deep neural networks (DNN), where domain shifts occur due to lighting,
weather, or geolocation changes. In this work, we propose VLTSeg to enhance
domain generalization in semantic segmentation, where the network is solely
trained on the source domain and evaluated on unseen target domains. Our method
leverages the inherent semantic robustness of vision-language models. First, by
substituting traditional vision-only backbones with pre-trained encoders from
CLIP and EVA-CLIP in a transfer learning setting, we find that in the field of DG,
vision-language pre-training significantly outperforms supervised and
self-supervised vision pre-training. We thus propose a new vision-language
approach for domain generalized segmentation, which improves the domain
generalization SOTA by 7.6% mIoU when training on the synthetic GTA5 dataset.
We further show the superior generalization capabilities of vision-language
segmentation models by reaching 76.48% mIoU on the popular Cityscapes-to-ACDC
benchmark, outperforming the previous SOTA approach by 6.9% mIoU on the test
set at the time of writing. Additionally, our approach shows strong in-domain
generalization capabilities indicated by 86.1% mIoU on the Cityscapes test set,
resulting in a shared first place with the previous SOTA on the current
leaderboard at the time of submission. | Computer Vision |
What field is the article from? | Title: Churn Prediction via Multimodal Fusion Learning:Integrating Customer Financial Literacy, Voice, and Behavioral Data
Abstract: In today's competitive landscape, businesses grapple with customer retention.
Churn prediction models, although beneficial, often lack accuracy due to the
reliance on a single data source. The intricate nature of human behavior and
high dimensional customer data further complicate these efforts. To address
these concerns, this paper proposes a multimodal fusion learning model for
identifying customer churn risk levels in financial service providers. Our
multimodal approach integrates customer sentiments, financial literacy (FL)
levels, and financial behavioral data, enabling more accurate and bias-free
churn prediction models. The proposed FL model utilizes a SMOGN COREG
supervised model to gauge customer FL levels from their financial data. The
baseline churn model applies an ensemble artificial neural network and
oversampling techniques to predict churn propensity in high-dimensional
financial data. We also incorporate a speech emotion recognition model
employing a pre-trained CNN-VGG16 to recognize customer emotions based on
pitch, energy, and tone. To integrate these diverse features while retaining
unique insights, we introduce late and hybrid fusion techniques that
complementarily boost coordinated multimodal co-learning. Robust metrics,
including mean average precision and macro-averaged F1 score, were used to
evaluate the proposed multimodal fusion model and hence the validity of the
approach. Our novel approach demonstrates a marked improvement in churn
prediction, achieving a test accuracy of 91.2%, a Mean Average Precision (MAP)
score of 66, and a Macro-Averaged F1 score of 54 through the proposed hybrid
fusion learning technique compared with late fusion and baseline models.
Furthermore, the analysis demonstrates a positive correlation between negative
emotions, low FL scores, and high-risk customers. | Machine Learning |
What field is the article from? | Title: XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making
Abstract: Large Language Models (LLMs) have recently made impressive strides in natural
language understanding tasks. Despite their remarkable performance,
understanding their decision-making process remains a big challenge. In this
paper, we look into bringing some transparency to this process by introducing a
new explanation dataset for question answering (QA) tasks that integrates
knowledge graphs (KGs) in a novel way. Our dataset includes 12,102
question-answer-explanation (QAE) triples. Each explanation in the dataset
links the LLM's reasoning to entities and relations in the KGs. The explanation
component includes a why-choose explanation, a why-not-choose explanation, and
a set of reason-elements that underlie the LLM's decision. We leverage KGs and
graph attention networks (GAT) to find the reason-elements and transform them
into why-choose and why-not-choose explanations that are comprehensible to
humans. Through quantitative and qualitative evaluations, we demonstrate the
potential of our dataset to improve the in-context learning of LLMs, and
enhance their interpretability and explainability. Our work contributes to the
field of explainable AI by enabling a deeper understanding of LLMs'
decision-making processes, making them more transparent and thereby potentially
more reliable to researchers and practitioners alike. Our dataset is available
at: https://github.com/chen-zichen/XplainLLM_dataset.git | Computational Linguistics |
What field is the article from? | Title: Using GPT-4 to Augment Unbalanced Data for Automatic Scoring
Abstract: Machine learning-based automatic scoring can be challenging if students'
responses are unbalanced across scoring categories, as it introduces
uncertainty in the machine training process. To meet this challenge, we
introduce a novel text data augmentation framework using GPT-4, a generative
large language model, specifically tailored for unbalanced datasets in
automatic scoring. Our experimental dataset comprised student-written responses
to two science items. We crafted prompts for GPT-4 to generate responses
resembling student-written answers, particularly for the minority scoring
classes, to augment the data. We then finetuned DistillBERT for automatic
scoring based on the augmented and original datasets. Model performance was
assessed using accuracy, precision, recall, and F1 score. We incorporate varied
amounts of augmented data to examine scoring performance, and our findings
revealed remarkably improved model performance. The average maximum increase
observed across two items is: 3.5% for accuracy, 30.6% for precision, 21.1% for
recall, and 24.2% for F1 score. Notably, using just 5% of the augmented data
led to substantial improvements: 2.6%, 29.2%, 15.1%, and 19.6%. Interestingly,
the extent of improvement varied depending on specific datasets. Moreover, we
found that a varying amount of augmented data (5%-40%) was needed to obtain a
stable improvement. We also compare models trained with GPT-4 augmented data
and those trained with additional student-written responses. The findings
indicate that the former match or even exceed the performance of the latter.
Specifically, there is an average difference of 1.7%, 1.9%, 11.0%, and 7.8% for
four metrics separately. This research underscores the potential and
effectiveness of data augmentation techniques utilizing GPT-4 in addressing
unbalanced datasets within automated assessment. | Computational Linguistics |
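A hedged sketch of the augmentation loop described here; the prompt wording and the `generate` callable (standing in for a GPT-4 API call) are assumptions, not the paper's actual prompts.

```python
def augment_minority_class(examples, target_score, n_new, generate):
    """Prompt a generative LLM with a few real student responses from a
    minority scoring class and ask for new responses in the same style.
    `generate` is a placeholder for whatever LLM call is used."""
    seed = "\n".join(f"- {e}" for e in examples[:5])
    prompt = (
        f"Here are student responses that received score {target_score}:\n"
        f"{seed}\n"
        f"Write {n_new} new responses a student at this level might give, "
        f"one per line, matching the style and misconception level above."
    )
    return generate(prompt).strip().split("\n")
```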
What field is the article from? | Title: ProAgent: From Robotic Process Automation to Agentic Process Automation
Abstract: From ancient water wheels to robotic process automation (RPA), automation
technology has evolved throughout history to liberate human beings from arduous
tasks. Yet, RPA struggles with tasks needing human-like intelligence,
especially in elaborate design of workflow construction and dynamic
decision-making in workflow execution. As Large Language Models (LLMs) have
begun to exhibit human-like intelligence, this paper introduces Agentic Process
Automation (APA), a groundbreaking automation paradigm that uses LLM-based
agents for advanced automation, offloading human labor in both workflow
construction and execution to agents. We then instantiate ProAgent, an LLM-based agent
designed to craft workflows from human instructions and make intricate
decisions by coordinating specialized agents. Empirical experiments detail its
workflow construction and execution procedures, showcasing the feasibility of
APA and unveiling the possibility of a new, agent-driven paradigm of
automation. Our code is public at
https://github.com/OpenBMB/ProAgent. | Robotics |
What field is the article from? | Title: SigFormer: Sparse Signal-Guided Transformer for Multi-Modal Human Action Segmentation
Abstract: Multi-modal human action segmentation is a critical and challenging task with
a wide range of applications. Nowadays, the majority of approaches concentrate
on the fusion of dense signals (i.e., RGB, optical flow, and depth maps).
However, the potential contributions of sparse IoT sensor signals, which can be
crucial for achieving accurate recognition, have not been fully explored. To
make up for this, we introduce a Sparse signal-guided Transformer (SigFormer) to
combine both dense and sparse signals. We employ mask attention to fuse
localized features by constraining cross-attention within the regions where
sparse signals are valid. However, since sparse signals are discrete, they lack
sufficient information about the temporal action boundaries. Therefore, in
SigFormer, we propose to emphasize the boundary information at two stages to
alleviate this problem. In the first feature extraction stage, we introduce an
intermediate bottleneck module to jointly learn both category and boundary
features of each dense modality through the inner loss functions. After the
fusion of dense modalities and sparse signals, we then devise a two-branch
architecture that explicitly models the interrelationship between action
category and temporal boundary. Experimental results demonstrate that SigFormer
outperforms the state-of-the-art approaches on a multi-modal action
segmentation dataset from real industrial environments, reaching an outstanding
F1 score of 0.958. The codes and pre-trained models have been available at
https://github.com/LIUQI-creat/SigFormer. | Computer Vision |
What field is the article from? | Title: Deep Natural Language Feature Learning for Interpretable Prediction
Abstract: We propose a general method to break down a main complex task into a set of
intermediary easier sub-tasks, which are formulated in natural language as
binary questions related to the final target task. Our method allows for
representing each example by a vector consisting of the answers to these
questions. We call this representation Natural Language Learned Features
(NLLF). NLLF is generated by a small transformer language model (e.g., BERT)
that has been trained in a Natural Language Inference (NLI) fashion, using weak
labels automatically obtained from a Large Language Model (LLM). We show that
the LLM normally struggles with the main task under in-context learning, but can
handle these easier subtasks and produce useful weak labels to train a BERT.
The NLI-like training of the BERT allows for tackling zero-shot inference with
any binary question, not just the ones seen during training. We
show that this NLLF vector not only helps to reach better performances by
enhancing any classifier, but that it can be used as input of an
easy-to-interpret machine learning model like a decision tree. This decision
tree is interpretable but also reaches high performances, surpassing those of a
pre-trained transformer in some cases. We have successfully applied this method
to two completely different tasks: detecting incoherence in students' answers
to open-ended mathematics exam questions, and screening abstracts for a
systematic literature review of scientific papers on climate change and
agroecology. | Computational Linguistics |
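A minimal sketch of building an NLLF vector, assuming an `nli_entail_prob(premise, hypothesis)` callable that wraps the small NLI-trained BERT; the example sub-questions are invented for illustration.

```python
def nllf_vector(text, questions, nli_entail_prob):
    # One entry per binary sub-question: the NLI model's probability that
    # the text entails a "yes" answer to that question.
    return [nli_entail_prob(text, q) for q in questions]

# Invented sub-questions for the incoherence-detection task:
questions = [
    "The answer restates the question without solving it.",
    "The answer contains a numerical computation.",
]
# features = nllf_vector(student_answer, questions, nli_entail_prob)
# ...then feed `features` to a decision tree or any other classifier.
```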
What field is the article from? | Title: The Expressive Power of Low-Rank Adaptation
Abstract: Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that
leverages low-rank adaptation of weight matrices, has emerged as a prevalent
technique for fine-tuning pre-trained models such as large language models and
diffusion models. Despite its huge success in practice, the theoretical
underpinnings of LoRA have largely remained unexplored. This paper takes the
first step to bridge this gap by theoretically analyzing the expressive power
of LoRA. We prove that, for fully connected neural networks, LoRA can adapt any
model $f$ to accurately represent any smaller target model $\overline{f}$ if
LoRA-rank $\geq(\text{width of }f) \times \frac{\text{depth of
}\overline{f}}{\text{depth of }f}$. We also quantify the approximation error
when LoRA-rank is lower than the threshold. For Transformer networks, we show
any model can be adapted to a target model of the same size with
rank-$(\frac{\text{embedding size}}{2})$ LoRA adapters. | Machine Learning |
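To make the fully-connected threshold concrete, a worked instance with illustrative numbers:

```latex
% With width(f) = 64, depth(f) = 8, and depth of the target model = 2:
\[
  \text{LoRA-rank} \;\ge\; (\text{width of } f) \times
  \frac{\text{depth of } \overline{f}}{\text{depth of } f}
  \;=\; 64 \times \frac{2}{8} \;=\; 16,
\]
% so rank-16 adapters suffice to represent the smaller target model exactly.
```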
What field is the article from? | Title: Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
Abstract: Federated learning (FL) emphasizes decentralized training by storing data
locally and sending only model updates, underlining user privacy. Recently, a
line of works on privacy attacks impairs user privacy by extracting sensitive
training text from language models in the context of FL. Yet, these attack
techniques face distinct hurdles: some work chiefly with limited batch sizes
(e.g., batch size of 1), and others are easily detectable. This paper
introduces an innovative approach that is challenging to detect, significantly
enhancing the recovery rate of text in various batch-size settings. Building on
fundamental gradient matching and domain prior knowledge, we enhance the attack
by recovering the input of the Pooler layer of language models, which enables
us to provide additional supervised signals at the feature level. Unlike
gradient data, these signals do not average across sentences and tokens,
thereby offering more nuanced and effective insights. We benchmark our method
using text classification tasks on datasets such as CoLA, SST-2, and Rotten
Tomatoes. Across different batch sizes and models, our approach consistently
outperforms previous state-of-the-art results. | Machine Learning |
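For orientation, the gradient-matching core that such attacks build on can be sketched in a few lines of PyTorch; the paper's feature-level signal (matching recovered Pooler-layer inputs) sits on top of this and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, dummy_input, labels, observed_grads):
    # Optimize `dummy_input` so the gradients it induces match those
    # shared during federated training.
    loss = F.cross_entropy(model(dummy_input), labels)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return sum(((g - o) ** 2).sum() for g, o in zip(grads, observed_grads))
```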
What field is the article from? | Title: Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs
Abstract: This paper presents an in-depth study of multimodal machine translation
(MMT), examining the prevailing understanding that MMT systems exhibit
decreased sensitivity to visual information when text inputs are complete.
Instead, we attribute this phenomenon to insufficient cross-modal interaction,
rather than image information redundancy. A novel approach is proposed to
generate parallel Visual Question-Answering (VQA) style pairs from the source
text, fostering more robust cross-modal interaction. Using Large Language
Models (LLMs), we explicitly model the probing signal in MMT to convert it into
VQA-style data to create the Multi30K-VQA dataset. An MMT-VQA multitask
learning framework is introduced to incorporate explicit probing signals from
the dataset into the MMT training process. Experimental results on two
widely-used benchmarks demonstrate the effectiveness of this novel approach.
Our code and data are available at:
\url{https://github.com/libeineu/MMT-VQA}. | Computational Linguistics |
What field is the article from? | Title: SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models
Abstract: A new method called the Survival Beran-based Neural Importance Model
(SurvBeNIM) is proposed. It aims to explain predictions of machine learning
survival models, which are in the form of survival or cumulative hazard
functions. The main idea behind SurvBeNIM is to extend the Beran estimator by
incorporating the importance functions into its kernels and by implementing
these importance functions as a set of neural networks which are jointly
trained in an end-to-end manner. Two strategies of using and training the whole
neural network implementing SurvBeNIM are proposed. The first one explains a
single instance, and the neural network is trained for each explained instance.
According to the second strategy, the neural network only learns once on all
instances from the dataset and on all generated instances. Then the neural
network is used to explain any instance in a dataset domain. Various numerical
experiments compare the method with different existing explanation methods. A
code implementing the proposed method is publicly available. | Machine Learning |
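For reference, one common form of the Beran estimator that SurvBeNIM extends; notation is assumed here, so see the paper for its exact variant.

```latex
% Event times ordered t_(1) <= ... <= t_(n), censoring indicators
% delta_(i), and kernel weights W_(i)(x) that sum to one:
\[
  \hat{S}(t \mid x) \;=\; \prod_{i:\, t_{(i)} \le t}
  \left( 1 - \frac{W_{(i)}(x)}{1 - \sum_{j < i} W_{(j)}(x)} \right)^{\delta_{(i)}}
\]
% SurvBeNIM's move, per the abstract, is to build importance functions into
% these kernel weights and implement them as jointly trained neural networks.
```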
What field is the article from? | Title: Search-Based Fairness Testing: An Overview
Abstract: Artificial Intelligence (AI) has demonstrated remarkable capabilities in
domains such as recruitment, finance, healthcare, and the judiciary. However,
biases in AI systems raise ethical and societal concerns, emphasizing the need
for effective fairness testing methods. This paper reviews current research on
fairness testing, particularly its application through search-based testing.
Our analysis highlights progress and identifies areas of improvement in
addressing biases in AI systems. Future research should focus on leveraging
established search-based testing methodologies for fairness testing. | Artificial Intelligence |
What field is the article from? | Title: Vertical Federated Alzheimer's Detection on Multimodal Data
Abstract: In the era of rapidly advancing medical technologies, the segmentation of
medical data has become inevitable, necessitating the development of
privacy-preserving machine learning algorithms that can train on distributed data.
Consolidating sensitive medical data is not always an option particularly due
to the stringent privacy regulations imposed by the Health Insurance
Portability and Accountability Act (HIPAA). In this paper, we introduce a HIPAA
compliant framework that can train from distributed data. We then propose a
multimodal vertical federated model for Alzheimer's Disease (AD) detection, a
serious neurodegenerative condition that can cause dementia, severely impairing
brain function and hindering simple tasks, especially without preventative
care. This vertical federated model offers a distributed architecture that
enables collaborative learning across diverse sources of medical data while
respecting privacy constraints imposed by HIPAA. It is also able to leverage
multiple modalities of data, enhancing the robustness and accuracy of AD
detection. Our proposed model not only contributes to the advancement of
federated learning techniques but also holds promise for overcoming the hurdles
posed by data segmentation in medical research. By using vertical federated
learning, this research strives to provide a framework that enables healthcare
institutions to harness the collective intelligence embedded in their
distributed datasets without compromising patient privacy. | Machine Learning |
What field is the article from? | Title: Algorithmic Transparency and Manipulation
Abstract: A series of recent papers raises worries about the manipulative potential of
algorithmic transparency. But while the concern is apt and relevant, it is
based on a fraught understanding of manipulation. Therefore, this paper draws
attention to the indifference view of manipulation, which explains better than
the vulnerability view why algorithmic transparency has manipulative potential.
The paper also raises pertinent research questions for future studies of
manipulation in the context of algorithmic transparency. | Artificial Intelligence |
What field is the article from? | Title: Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts
Abstract: Graph neural networks (GNNs) have led to major breakthroughs in a variety of
domains such as drug discovery, social network analysis, and travel time
estimation. However, they lack interpretability which hinders human trust and
thereby deployment to settings with high-stakes decisions. A line of
interpretable methods approach this by discovering a small set of relevant
concepts as subgraphs in the last GNN layer that together explain the
prediction. This can yield oversimplified explanations, failing to explain the
interaction between GNN layers. To address this oversight, we provide HELP
(Hierarchical Explainable Latent Pooling), a novel, inherently interpretable
graph pooling approach that reveals how concepts from different GNN layers
compose to new ones in later steps. HELP is more than 1-WL expressive and is
the first non-spectral, end-to-end-learnable, hierarchical graph pooling method
that can learn to pool a variable number of arbitrary connected components. We
empirically demonstrate that it performs on par with standard GCNs and popular
pooling methods in terms of accuracy while yielding explanations that are
aligned with expert knowledge in the domains of chemistry and social networks.
In addition to a qualitative analysis, we employ concept completeness scores as
well as concept conformity, a novel metric to measure the noise in discovered
concepts, quantitatively verifying that the discovered concepts are
significantly easier to fully understand than those from previous work. Our
work represents a first step towards an understanding of graph neural networks
that goes beyond a set of concepts from the final layer and instead explains
the complex interplay of concepts on different levels. | Machine Learning |
What field is the article from? | Title: Detecting Intentional AIS Shutdown in Open Sea Maritime Surveillance Using Self-Supervised Deep Learning
Abstract: In maritime traffic surveillance, detecting illegal activities, such as
illegal fishing or transshipment of illicit products is a crucial task of the
coastal administration. In the open sea, one has to rely on Automatic
Identification System (AIS) messages transmitted by on-board transponders, which
are captured by surveillance satellites. However, insincere vessels often
intentionally shut down their AIS transponders to hide illegal activities. In
the open sea, it is very challenging to differentiate intentional AIS shutdowns
from missing reception due to protocol limitations, bad weather conditions or
restricting satellite positions. This paper presents a novel approach for the
detection of abnormal AIS missing reception based on self-supervised deep
learning techniques and transformer models. Using historical data, the trained
model predicts if a message should be received in the upcoming minute or not.
Afterwards, the model reports on detected anomalies by comparing the prediction
with what actually happens. Our method can process AIS messages in real time:
more than 500 million AIS messages per month, corresponding to the trajectories
of more than 60,000 ships. The method is evaluated on one year of real-world
data coming from four Norwegian surveillance satellites. Using
related research results, we validated our method by rediscovering already
detected intentional AIS shutdowns. | Machine Learning |
What field is the article from? | Title: Improving search relevance of Azure Cognitive Search by Bayesian optimization
Abstract: Azure Cognitive Search (ACS) has emerged as a major contender in "Search as a
Service" cloud products in recent years. However, one of the major challenges
for ACS users is to improve the relevance of the search results for their
specific use cases. In this paper, we propose a novel method to find the optimal
ACS configuration that maximizes search relevance for a specific use case
(product search, document search, etc.). The proposed solution improves key online
marketplace metrics such as click through rates (CTR) by formulating the search
relevance problem as hyperparameter tuning. We have observed significant
improvements in real-world search call to action (CTA) rate in multiple
marketplaces by introducing optimized weights generated from the proposed
approach. | Information Retrieval |
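A runnable sketch of the hyperparameter-tuning formulation using scikit-optimize's `gp_minimize`; the field names and the quadratic relevance proxy are placeholders, not Azure Cognitive Search's actual scoring-profile schema.

```python
from skopt import gp_minimize
from skopt.space import Real

def ndcg_proxy(title_w, desc_w):
    # Placeholder relevance metric. In practice this would issue judged
    # queries against the search index with these field weights and compute
    # NDCG or CTR from the results; the quadratic stand-in (peak at
    # weights 3.0 and 1.5) just keeps the sketch runnable.
    return -(title_w - 3.0) ** 2 - (desc_w - 1.5) ** 2

# Hypothetical field-weight search space.
space = [Real(0.1, 10.0, name="title_weight"),
         Real(0.1, 10.0, name="description_weight")]

def objective(weights):
    return -ndcg_proxy(*weights)  # gp_minimize minimizes, so negate

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best field weights:", result.x)
```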