instruction (stringclasses, 1 value) | input (stringlengths, 260–2.07k) | output (stringclasses, 10 values) |
---|---|---|
What field is the article from? | Title: Causal Fairness under Unobserved Confounding: A Neural Sensitivity Framework
Abstract: Fairness for machine learning predictions is widely required in practice for
legal, ethical, and societal reasons. Existing work typically focuses on
settings without unobserved confounding, even though unobserved confounding can
lead to severe violations of causal fairness and, thus, unfair predictions. In
this work, we analyze the sensitivity of causal fairness to unobserved
confounding. Our contributions are three-fold. First, we derive bounds for
causal fairness metrics under different sources of unobserved confounding. This
enables practitioners to examine the sensitivity of their machine learning
models to unobserved confounding in fairness-critical applications. Second, we
propose a novel neural framework for learning fair predictions, which allows us
to offer worst-case guarantees of the extent to which causal fairness can be
violated due to unobserved confounding. Third, we demonstrate the effectiveness
of our framework in a series of experiments, including a real-world case study
about predicting prison sentences. To the best of our knowledge, ours is the
first work to study causal fairness under unobserved confounding. To this end,
our work is of direct practical value as a refutation strategy to ensure the
fairness of predictions in high-stakes applications. | Machine Learning |
What field is the article from? | Title: A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints
Abstract: Neuro-symbolic AI bridges the gap between purely symbolic and neural
approaches to learning. This often requires maximizing the likelihood of a
symbolic constraint w.r.t the neural network's output distribution. Such output
distributions are typically assumed to be fully-factorized. This limits the
applicability of neuro-symbolic learning to the more expressive autoregressive
distributions, e.g., transformers. Under such distributions, computing the
likelihood of even simple constraints is #P-hard. Instead of attempting to
enforce the constraint on the entire output distribution, we propose to do so
on a random, local approximation thereof. More precisely, we optimize the
likelihood of the constraint under a pseudolikelihood-based approximation
centered around a model sample. Our approximation is factorized, allowing the
reuse of solutions to sub-problems, a main tenet for efficiently computing
neuro-symbolic losses. Moreover, it is a local, high-fidelity approximation of
the likelihood, exhibiting low entropy and KL-divergence around the model
sample. We evaluate our approach on Sudoku and shortest-path prediction cast as
autoregressive generation, and observe that we greatly improve upon the base
model's ability to predict logically-consistent outputs. We also evaluate on
the task of detoxifying large language models. Using a simple constraint
disallowing a list of toxic words, we are able to steer the model's outputs
away from toxic generations, achieving SoTA detoxification compared to previous
approaches. | Machine Learning |
What field is the article from? | Title: ASI: Accuracy-Stability Index for Evaluating Deep Learning Models
Abstract: In the context of deep learning research, where model introductions
continually occur, the need for effective and efficient evaluation remains
paramount. Existing methods often emphasize accuracy metrics, overlooking
stability. To address this, the paper introduces the Accuracy-Stability Index
(ASI), a quantitative measure incorporating both accuracy and stability for
assessing deep learning models. Experimental results demonstrate the
application of ASI, and a 3D surface model is presented for visualizing ASI,
mean accuracy, and coefficient of variation. This paper addresses the important
issue of quantitative benchmarking metrics for deep learning models, providing
a new approach for accurately evaluating accuracy and stability of deep
learning models. The paper concludes with discussions on potential weaknesses
and outlines future research directions. | Machine Learning |
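The abstract names the ingredients of ASI (mean accuracy and the coefficient of
variation across runs) but not the combination rule. Below is a hedged sketch of
one plausible instantiation; the combination `mean * (1 - CV)` is an assumption
for illustration, not the paper's formula:

```python
import numpy as np

def accuracy_stability_index(run_accuracies):
    """Hypothetical ASI sketch: reward high mean accuracy across repeated
    runs and penalize instability via the coefficient of variation (CV).
    The paper's exact formula is not stated in the abstract."""
    accs = np.asarray(run_accuracies, dtype=float)
    mean_acc = accs.mean()
    cv = accs.std(ddof=1) / mean_acc   # coefficient of variation
    return mean_acc * (1.0 - cv)       # higher = accurate AND stable

# Two models with similar mean accuracy but different stability:
print(accuracy_stability_index([0.91, 0.90, 0.92]))  # stable  -> higher score
print(accuracy_stability_index([0.99, 0.83, 0.91]))  # erratic -> lower score
```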
What field is the article from? | Title: Dense Visual Odometry Using Genetic Algorithm
Abstract: Our work aims to estimate the motion of a camera mounted on the head of a mobile
robot or a moving object from RGB-D images in a static scene. The problem of
motion estimation is transformed into a nonlinear least squares function.
Methods for solving such problems are iterative. Various classic methods gave
an iterative solution by linearizing this function. We can also use the
metaheuristic optimization method to solve this problem and improve results. In
this paper, a new algorithm is developed for visual odometry using a sequence
of RGB-D images. This algorithm is based on a genetic algorithm. The proposed
iterative genetic algorithm searches using particles to estimate the optimal
motion and then compares it to the traditional methods. To evaluate our method,
we use the root mean square error to compare it with the energy-based method
and another metaheuristic method. We prove the efficiency of our innovative
algorithm on a large set of images. | Robotics |
What field is the article from? | Title: TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding
Abstract: Large-scale video-language pre-training has made remarkable strides in
advancing video-language understanding tasks. However, the heavy computational
burden of video encoding remains a formidable efficiency bottleneck,
particularly for long-form videos. These videos contain massive visual tokens
due to their inherent 3D properties and spatiotemporal redundancy, making it
challenging to capture complex temporal and spatial relationships. To tackle
this issue, we propose an efficient method called TEmporal-Spatial Token
Aggregation (TESTA). TESTA condenses video semantics by adaptively aggregating
similar frames, as well as similar patches within each frame. TESTA can reduce
the number of visual tokens by 75% and thus accelerate video encoding. Building
upon TESTA, we introduce a pre-trained video-language model equipped with a
divided space-time token aggregation module in each video encoder block. We
evaluate our model on five datasets for paragraph-to-video retrieval and
long-form VideoQA tasks. Experimental results show that TESTA improves
computing efficiency by 1.7 times, and achieves significant performance gains
from its scalability in processing longer input frames, e.g., +13.7 R@1 on
QuerYD and +6.5 R@1 on Condensed Movie. | Computer Vision |
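As a hedged illustration of the aggregation idea (not TESTA's actual module),
greedily averaging the most similar adjacent tokens until 25% remain, matching
the abstract's 75% token reduction, might look like:

```python
import torch
import torch.nn.functional as F

def aggregate_tokens(tokens: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Illustrative token aggregation: repeatedly average the most similar
    adjacent token pair (frames along time, or patches within a frame)
    until only keep_ratio of the tokens remain. This is a sketch of the
    idea, not TESTA's aggregation module."""
    t = tokens.clone()                               # (n_tokens, dim)
    target = max(1, int(t.shape[0] * keep_ratio))    # e.g. 75% reduction
    while t.shape[0] > target:
        sim = F.cosine_similarity(t[:-1], t[1:], dim=-1)
        i = int(sim.argmax())                        # most redundant pair
        merged = 0.5 * (t[i] + t[i + 1])
        t = torch.cat([t[:i], merged.unsqueeze(0), t[i + 2:]], dim=0)
    return t

video_tokens = torch.randn(64, 768)                  # 64 frame tokens
print(aggregate_tokens(video_tokens).shape)          # torch.Size([16, 768])
```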
What field is the article from? | Title: Revealing Networks: Understanding Effective Teacher Practices in AI-Supported Classrooms using Transmodal Ordered Network Analysis
Abstract: Learning analytics research increasingly studies classroom learning with
AI-based systems through rich contextual data from outside these systems,
especially student-teacher interactions. One key challenge in leveraging such
data is generating meaningful insights into effective teacher practices.
Quantitative ethnography bears the potential to close this gap by combining
multimodal data streams into networks of co-occurring behavior that drive
insight into favorable learning conditions. The present study uses transmodal
ordered network analysis to understand effective teacher practices in
relationship to traditional metrics of in-system learning in a mathematics
classroom working with AI tutors. Incorporating teacher practices captured by
position tracking and human observation codes into modeling significantly
improved the inference of how efficiently students improved in the AI tutor
beyond a model with tutor log data features only. Comparing teacher practices
by student learning rates, we find that students with low learning rates
exhibited more hint use after monitoring. However, after an extended visit,
students with low learning rates showed learning behavior similar to their high
learning rate peers, achieving repeated correct attempts in the tutor.
Observation notes suggest conceptual and procedural support differences can
help explain visit effectiveness. Taken together, offering early conceptual
support to students with low learning rates could make classroom practice with
AI tutors more effective. This study advances the scientific understanding of
effective teacher practice in classrooms learning with AI tutors and
methodologies to make such practices visible. | Computers and Society |
What field is the article from? | Title: DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer
Abstract: Large Language Models (LLMs) have emerged as dominant tools for various
tasks, particularly when tailored for a specific target by prompt tuning.
Nevertheless, concerns surrounding data privacy present obstacles due to the
tuned prompts' dependency on sensitive private information. A practical
solution is to host a local LLM and optimize a soft prompt privately using
data. Yet, hosting a local model becomes problematic when model ownership is
protected. Alternative methods, like sending data to the model's provider for
training, intensify these privacy issues facing an untrusted provider. In this
paper, we present a novel solution called Differentially-Private Offsite Prompt
Tuning (DP-OPT) to address this challenge. Our approach involves tuning a
discrete prompt on the client side and then applying it to the desired cloud
models. We demonstrate that prompts suggested by LLMs themselves can be
transferred without compromising performance significantly. To ensure that the
prompts do not leak private information, we introduce the first private prompt
generation mechanism, by a differentially-private (DP) ensemble of in-context
learning with private demonstrations. With DP-OPT, generating
privacy-preserving prompts by Vicuna-7b can yield competitive performance
compared to non-private in-context learning on GPT3.5 or local private prompt
tuning. Codes are available at https://github.com/VITA-Group/DP-OPT . | Computational Linguistics |
What field is the article from? | Title: InstructPipe: Building Visual Programming Pipelines with Human Instructions
Abstract: Visual programming provides beginner-level programmers with a coding-free
experience to build their customized pipelines. Existing systems require users
to build a pipeline entirely from scratch, implying that novice users need to
set up and link appropriate nodes all by themselves, starting from a blank
workspace. We present InstructPipe, an AI assistant that enables users to start
prototyping machine learning (ML) pipelines with text instructions. We designed
two LLM modules and a code interpreter to execute our solution. LLM modules
generate pseudocode of a target pipeline, and the interpreter renders a
pipeline in the node-graph editor for further human-AI collaboration. Technical
evaluations reveal that InstructPipe reduces user interactions by 81.1%
compared to traditional methods. Our user study (N=16) showed that InstructPipe
empowers novice users to streamline their workflow in creating desired ML
pipelines, reduce their learning curve, and spark innovative ideas with
open-ended commands. | Human-Computer Interaction |
What field is the article from? | Title: TD-MPC2: Scalable, Robust World Models for Continuous Control
Abstract: TD-MPC is a model-based reinforcement learning (RL) algorithm that performs
local trajectory optimization in the latent space of a learned implicit
(decoder-free) world model. In this work, we present TD-MPC2: a series of
improvements upon the TD-MPC algorithm. We demonstrate that TD-MPC2 improves
significantly over baselines across 104 online RL tasks spanning 4 diverse task
domains, achieving consistently strong results with a single set of
hyperparameters. We further show that agent capabilities increase with model
and data size, and successfully train a single 317M parameter agent to perform
80 tasks across multiple task domains, embodiments, and action spaces. We
conclude with an account of lessons, opportunities, and risks associated with
large TD-MPC2 agents. Explore videos, models, data, code, and more at
https://nicklashansen.github.io/td-mpc2 | Machine Learning |
What field is the article from? | Title: A Study on the Calibration of In-context Learning
Abstract: Modern auto-regressive language models are trained to minimize log loss on
broad data by predicting the next token, so they are expected to produce calibrated
answers in next-token prediction tasks. We study this for in-context learning
(ICL), a widely used way to adapt frozen large language models (LLMs) via
crafting prompts, and investigate the trade-offs between performance and
calibration on a wide range of natural language understanding and reasoning
tasks. We conduct extensive experiments to show that such trade-offs may get
worse as we increase model size, incorporate more ICL examples, and fine-tune
models using instruction, dialog, or reinforcement learning from human feedback
(RLHF) on carefully curated datasets. Furthermore, we find that common
recalibration techniques that are widely effective elsewhere, such as temperature
scaling, provide limited gains in calibration error, suggesting that new methods may be
required for settings where models are expected to be reliable. | Computational Linguistics |
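For reference, the temperature-scaling baseline the abstract refers to is
standard: fit a single temperature T on held-out logits by minimizing negative
log-likelihood, then divide logits by T at inference time. A minimal sketch
(using a grid search instead of the usual LBFGS fit, for brevity):

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Standard temperature scaling: choose T minimizing NLL on a held-out
    set. Calibrated probabilities are then softmax(logits / T)."""
    grid = torch.linspace(0.5, 5.0, steps=91)
    nlls = [F.cross_entropy(logits / T, labels).item() for T in grid]
    return float(grid[int(torch.tensor(nlls).argmin())])

# Usage: held-out logits (n, classes) and integer labels (n,)
logits, labels = torch.randn(128, 10), torch.randint(0, 10, (128,))
T = fit_temperature(logits, labels)
calibrated_probs = F.softmax(logits / T, dim=-1)
```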
What field is the article from? | Title: Real-Time Neural Rasterization for Large Scenes
Abstract: We propose a new method for realistic real-time novel-view synthesis (NVS) of
large scenes. Existing neural rendering methods generate realistic results, but
primarily work for small scale scenes (<50 square meters) and have difficulty
at large scale (>10000 square meters). Traditional graphics-based rasterization
rendering is fast for large scenes but lacks realism and requires expensive
manually created assets. Our approach combines the best of both worlds by
taking a moderate-quality scaffold mesh as input and learning a neural texture
field and shader to model view-dependent effects to enhance realism, while
still using the standard graphics pipeline for real-time rendering. Our method
outperforms existing neural rendering methods, providing at least 30x faster
rendering with comparable or better realism for large self-driving and drone
scenes. Our work is the first to enable real-time rendering of large real-world
scenes. | Computer Vision |
What field is the article from? | Title: Efficient Classification of Student Help Requests in Programming Courses Using Large Language Models
Abstract: The accurate classification of student help requests with respect to the type
of help being sought can enable the tailoring of effective responses.
Automatically classifying such requests is non-trivial, but large language
models (LLMs) appear to offer an accessible, cost-effective solution. This
study evaluates the performance of the GPT-3.5 and GPT-4 models for classifying
help requests from students in an introductory programming class. In zero-shot
trials, GPT-3.5 and GPT-4 exhibited comparable performance on most categories,
while GPT-4 outperformed GPT-3.5 in classifying sub-categories for requests
related to debugging. Fine-tuning the GPT-3.5 model improved its performance to
such an extent that it approximated the accuracy and consistency across
categories observed between two human raters. Overall, this study demonstrates
the feasibility of using LLMs to enhance educational systems through the
automated classification of student needs. | Computers and Society |
What field is the article from? | Title: Gaze Detection and Analysis for Initiating Joint Activity in Industrial Human-Robot Collaboration
Abstract: Collaborative robots (cobots) are widely used in industrial applications, yet
extensive research is still needed to enhance human-robot collaborations and
operator experience. A potential approach to improve the collaboration
experience involves adapting cobot behavior based on natural cues from the
operator. Inspired by the literature on human-human interactions, we conducted
a wizard-of-oz study to examine whether a gaze towards the cobot can serve as a
trigger for initiating joint activities in collaborative sessions. In this
study, 37 participants engaged in an assembly task while their gaze behavior
was analyzed. We employ a gaze-based attention recognition model to identify
when the participants look at the cobot. Our results indicate that in most
cases (84.88%), the joint activity is preceded by a gaze towards the cobot.
Furthermore, during the entire assembly cycle, the participants tend to look at
the cobot around the time of the joint activity. To the best of our knowledge,
this is the first study to analyze the natural gaze behavior of participants
working on a joint activity with a robot during a collaborative assembly task. | Robotics |
What field is the article from? | Title: Reviewing Developments of Graph Convolutional Network Techniques for Recommendation Systems
Abstract: Recommender systems are a vital information service on today's Internet.
Recently, graph neural networks have emerged as the leading approach to building
them. We review recent literature on graph neural network-based recommender
systems, covering the background and development of both recommender systems and
graph neural networks. Categorizing recommender systems by their settings and
graph neural networks into spectral and spatial models, we then explore the
motivation behind incorporating graph neural
networks into recommender systems. We also analyze challenges and open problems
in graph construction, embedding propagation and aggregation, and computation
efficiency. This guides us to better explore the future directions and
developments in this domain. | Information Retrieval |
What field is the article from? | Title: ChatSOS: LLM-based knowledge Q&A system for safety engineering
Abstract: Recent advancements in large language models (LLMs) have notably propelled
natural language processing (NLP) capabilities, demonstrating significant
potential in safety engineering applications. Despite these advancements, LLMs
face constraints in processing specialized tasks, attributed to factors such as
corpus size, input processing limitations, and privacy concerns. Obtaining
useful information from reliable sources in a limited time is crucial for LLMs.
Addressing this, our study introduces an LLM-based Q&A system for safety
engineering, enhancing the comprehension and response accuracy of the model. We
employed prompt engineering to incorporate external knowledge databases, thus
enriching the LLM with up-to-date and reliable information. The system analyzes
historical incident reports through statistical methods, utilizes vector
embedding to construct a vector database, and offers an efficient
similarity-based search functionality. Our findings indicate that the
integration of external knowledge significantly augments the capabilities of
LLMs for in-depth problem analysis and autonomous task assignment. It
effectively summarizes accident reports and provides pertinent recommendations.
This integration approach not only expands LLM applications in safety
engineering but also sets a precedent for future developments towards
automation and intelligent systems. | Artificial Intelligence |
What field is the article from? | Title: Towards A Unified View of Answer Calibration for Multi-Step Reasoning
Abstract: Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have
broadened the scope for improving multi-step reasoning capabilities. Usually,
answer calibration strategies such as step-level or path-level calibration play
a vital role in multi-step reasoning. While effective, there remains a
significant gap in our understanding of the key factors that drive their
success. In this paper, we break down the design of recent answer calibration
strategies and present a unified view which establishes connections between
them. We then conduct a thorough evaluation on these strategies from a unified
view, systematically scrutinizing step-level and path-level answer calibration
across multiple paths. Our study holds the potential to illuminate key insights
for optimizing multi-step reasoning with answer calibration. | Computational Linguistics |
What field is the article from? | Title: State-of-the-Art Review and Synthesis: A Requirement-based Roadmap for Standardized Predictive Maintenance Automation Using Digital Twin Technologies
Abstract: Recent digital advances have popularized predictive maintenance (PMx),
offering enhanced efficiency, automation, accuracy, cost savings, and
independence in maintenance. Yet, it continues to face numerous limitations
such as poor explainability, sample inefficiency of data-driven methods,
complexity of physics-based methods, and limited generalizability and
scalability of knowledge-based methods. This paper proposes leveraging Digital
Twins (DTs) to address these challenges and enable automated PMx adoption at
larger scales. While we argue that DTs have this transformative potential, they
have not yet reached the level of maturity needed to bridge these gaps in a
standardized way. Without a standard definition for such evolution, this
transformation lacks a solid foundation upon which to base its development.
This paper provides a requirement-based roadmap supporting standardized PMx
automation using DT technologies. A systematic approach comprising two primary
stages is presented. First, we methodically identify the Informational
Requirements (IRs) and Functional Requirements (FRs) for PMx, which serve as a
foundation from which any unified framework must emerge. Our approach to
defining and using IRs and FRs to form the backbone of any PMx DT is supported
by the track record of IRs and FRs being successfully used as blueprints in
other areas, such as for product development within the software industry.
Second, we conduct a thorough literature review spanning fields to determine
the ways in which these IRs and FRs are currently being used within DTs,
enabling us to point to the specific areas where further research is warranted
to support the progress and maturation of requirement-based PMx DTs. | Artificial Intelligence |
What field is the article from? | Title: Two-step dynamic obstacle avoidance
Abstract: Dynamic obstacle avoidance (DOA) is a fundamental challenge for any
autonomous vehicle, independent of whether it operates in sea, air, or land.
This paper proposes a two-step architecture for handling DOA tasks by combining
supervised and reinforcement learning (RL). In the first step, we introduce a
data-driven approach to estimate the collision risk of an obstacle using a
recurrent neural network, which is trained in a supervised fashion and offers
robustness to non-linear obstacle movements. In the second step, we include
these collision risk estimates into the observation space of an RL agent to
increase its situational awareness.~We illustrate the power of our two-step
approach by training different RL agents in a challenging environment that
requires to navigate amid multiple obstacles. The non-linear movements of
obstacles are exemplarily modeled based on stochastic processes and periodic
patterns, although our architecture is suitable for any obstacle dynamics. The
experiments reveal that integrating our collision risk metrics into the
observation space doubles the performance in terms of reward, which is
equivalent to halving the number of collisions in the considered environment.
Furthermore, we show that the architecture's performance improvement is
independent of the applied RL algorithm. | Robotics |
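A hedged sketch of the two-step architecture described above; the GRU
architecture, feature dimensions, and observation layout are illustrative
assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class CollisionRiskEstimator(nn.Module):
    """Step 1 (sketch): a recurrent net, trained in a supervised fashion,
    that maps a history of relative obstacle states to a collision-risk
    score in [0, 1]. Architecture and features are assumptions."""
    def __init__(self, obs_dim: int = 4, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, obstacle_history):            # (batch, time, obs_dim)
        _, h = self.rnn(obstacle_history)
        return torch.sigmoid(self.head(h[-1]))      # (batch, 1) risk score

def augment_observation(base_obs, obstacle_histories, risk_model):
    """Step 2 (sketch): append one risk estimate per obstacle to the RL
    agent's observation vector to increase situational awareness."""
    with torch.no_grad():
        risks = torch.cat([risk_model(h) for h in obstacle_histories], dim=-1)
    return torch.cat([base_obs, risks], dim=-1)
```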
What field is the article from? | Title: Generative artificial intelligence enhances individual creativity but reduces the collective diversity of novel content
Abstract: Creativity is core to being human. Generative artificial intelligence (GenAI)
holds promise for humans to be more creative by offering new ideas, or less
creative by anchoring on GenAI ideas. We study the causal impact of GenAI ideas
on the production of an unstructured creative output in an online experimental
study where some writers could obtain ideas for a story from a GenAI platform.
We find that access to GenAI ideas causes stories to be evaluated as more
creative, better written and more enjoyable, especially among less creative
writers. However, objective measures of story similarity within each condition
reveal that GenAI-enabled stories are more similar to each other than stories
by humans alone. These results point to an increase in individual creativity,
but at the same time there is a risk of losing collective novelty: this dynamic
resembles a social dilemma where individual writers are better off using GenAI
to improve their own writing, but collectively a narrower scope of novel
content may be produced with GenAI. Our results have implications for
researchers, policy-makers and practitioners interested in bolstering
creativity, but point to potential downstream consequences from over-reliance. | Human-Computer Interaction |
What field is the article from? | Title: Towards Reliable Participation in UAV-Enabled Federated Edge Learning on Non-IID Data
Abstract: Federated Learning (FL) is a decentralized machine learning (ML) technique
that allows a number of participants to train an ML model collaboratively
without having to share their private local datasets with others. When
participants are unmanned aerial vehicles (UAVs), UAV-enabled FL would
experience heterogeneity due to the heavily skewed (non-independent and
identically distributed, i.e., non-IID) collected data. In addition, UAVs may
demonstrate unintentional misbehavior, failing to send updates to the FL server
due, for instance, to disconnection from the FL system caused by high mobility,
unavailability, or battery depletion. Such challenges may
significantly affect the convergence of the FL model. A recent way to tackle
these challenges is client selection, based on customized criteria that
consider UAV computing power and energy consumption. However, most existing
client selection schemes neglected the participants' reliability. Indeed, FL
can be targeted by poisoning attacks, in which malicious UAVs upload poisonous
local models to the FL server, by either providing targeted false predictions
for specifically chosen inputs or by compromising the global model's accuracy
through tampering with the local model. Hence, we propose in this paper a novel
client selection scheme that enhances convergence by prioritizing fast UAVs
with high-reliability scores, while eliminating malicious UAVs from training.
Through experiments, we assess the effectiveness of our scheme in resisting
different attack scenarios, in terms of convergence and achieved model
accuracy. Finally, we demonstrate the performance superiority of the proposed
approach compared to baseline methods. | Machine Learning |
What field is the article from? | Title: Stochastic Directly-Follows Process Discovery Using Grammatical Inference
Abstract: Starting with a collection of traces generated by process executions, process
discovery is the task of constructing a simple model that describes the
process, where simplicity is often measured in terms of model size. The
challenge of process discovery is that the process of interest is unknown, and
that while the input traces constitute positive examples of process executions,
no negative examples are available. Many commercial tools discover
Directly-Follows Graphs, in which nodes represent the observable actions of the
process, and directed arcs indicate execution order possibilities over the
actions. We propose a new approach for discovering sound Directly-Follows
Graphs that is grounded in grammatical inference over the input traces. To
promote the discovery of small graphs that also describe the process accurately
we design and evaluate a genetic algorithm that supports the convergence of the
inference parameters to the areas that lead to the discovery of interesting
models. Experiments over real-world datasets confirm that our new approach can
construct smaller models that represent the input traces and their frequencies
more accurately than the state-of-the-art technique. Reasoning over the
frequencies of encoded traces also becomes possible, due to the stochastic
semantics of the action graphs we propose, which, for the first time, are
interpreted as models that describe the stochastic languages of action traces. | Artificial Intelligence |
What field is the article from? | Title: H-GAP: Humanoid Control with a Generalist Planner
Abstract: Humanoid control is an important research challenge offering avenues for
integration into human-centric infrastructures and enabling physics-driven
humanoid animations. The daunting challenges in this field stem from the
difficulty of optimizing in high-dimensional action spaces and the instability
introduced by the bipedal morphology of humanoids. However, the extensive
collection of human motion-captured data and the derived datasets of humanoid
trajectories, such as MoCapAct, paves the way to tackle these challenges. In
this context, we present Humanoid Generalist Autoencoding Planner (H-GAP), a
state-action trajectory generative model trained on humanoid trajectories
derived from human motion-captured data, capable of adeptly handling downstream
control tasks with Model Predictive Control (MPC). For a 56-degrees-of-freedom
humanoid, we empirically demonstrate that H-GAP learns to represent and
generate a wide range of motor behaviours. Further, without any learning from
online interactions, it can also flexibly transfer these behaviors to solve
novel downstream control tasks via planning. Notably, H-GAP surpasses established
MPC baselines that have access to the ground truth dynamics model, and is
superior or comparable to offline RL methods trained for individual tasks.
Finally, we conduct a series of empirical studies on the scaling properties of
H-GAP, showing the potential for performance gains from additional data but not
from additional compute. Code and videos are available at
https://ycxuyingchen.github.io/hgap/. | Machine Learning |
What field is the article from? | Title: Tuning-less Object Naming with a Foundation Model
Abstract: We implement a real-time object naming system that enables learning a set of
previously unseen named entities. Our approach employs an existing foundation
model that we consider ready to see anything before starting. It turns seen
images into relatively small feature vectors that we associate with indices into
a gradually built vocabulary, without any training or fine-tuning of the model.
Our contribution is using the association mechanism known from transformers as
attention. It has features that support generalization from irrelevant
information for distinguishing the entities and potentially enable associating
with much more than indices to vocabulary. As a result, the system can work in
a one-shot manner and correctly name objects named in different contents. We
also outline implementation details of the system modules integrated by a
blackboard architecture. Finally, we investigate the system's quality, mainly
how many objects it can handle in this way. | Computational Linguistics |
What field is the article from? | Title: Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes
Abstract: In this paper, we democratise 3D content creation, enabling precise
generation of 3D shapes from abstract sketches while overcoming limitations
tied to drawing skills. We introduce a novel part-level modelling and alignment
framework that facilitates abstraction modelling and cross-modal
correspondence. Leveraging the same part-level decoder, our approach seamlessly
extends to sketch modelling by establishing correspondence between CLIPasso
edgemaps and projected 3D part regions, eliminating the need for a dataset
pairing human sketches and 3D shapes. Additionally, our method introduces a
seamless in-position editing process as a byproduct of cross-modal part-aligned
modelling. Operating in a low-dimensional implicit space, our approach
significantly reduces computational demands and processing time. | Computer Vision |
What field is the article from? | Title: Exploring Geometry of Blind Spots in Vision Models
Abstract: Despite the remarkable success of deep neural networks in a myriad of
settings, several works have demonstrated their overwhelming sensitivity to
near-imperceptible perturbations, known as adversarial attacks. On the other
hand, prior works have also observed that deep networks can be under-sensitive,
wherein large-magnitude perturbations in input space do not induce appreciable
changes to network activations. In this work, we study in detail the phenomenon
of under-sensitivity in vision models such as CNNs and Transformers, and
present techniques to study the geometry and extent of "equi-confidence" level
sets of such networks. We propose a Level Set Traversal algorithm that
iteratively explores regions of high confidence with respect to the input space
using orthogonal components of the local gradients. Given a source image, we
use this algorithm to identify inputs that lie in the same equi-confidence
level set as the source image despite being perceptually similar to arbitrary
images from other classes. We further observe that the source image is linearly
connected by a high-confidence path to these inputs, uncovering a star-like
structure for level sets of deep networks. Furthermore, we attempt to identify
and estimate the extent of these connected higher-dimensional regions over
which the model maintains a high degree of confidence. The code for this
project is publicly available at
https://github.com/SriramB-98/blindspots-neurips-sub | Computer Vision |
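A hedged sketch of the Level Set Traversal idea as described in the abstract
(not the authors' exact algorithm): step toward a target image using only the
component of the displacement orthogonal to the local gradient of the
source-class confidence, so that confidence stays approximately constant:

```python
import torch

def level_set_traversal(model, x_src, x_tgt, steps=200, step_size=0.01):
    """Sketch: walk from x_src toward x_tgt along the model's
    equi-confidence level set by projecting each step onto the subspace
    orthogonal to the local gradient of the source-class logit.
    Hyperparameters and the projection rule are illustrative assumptions."""
    src_class = model(x_src).argmax(dim=-1).item()
    x = x_src.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        conf = model(x)[0, src_class]                # source-class confidence
        (g,) = torch.autograd.grad(conf, x)
        d = (x_tgt - x).detach().flatten()           # pull toward the target
        g = g.flatten()
        d_orth = d - (d @ g) / (g @ g + 1e-12) * g   # drop gradient component
        x = x.detach() + step_size * d_orth.view_as(x)
    return x
```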
What field is the article from? | Title: Helping Language Models Learn More: Multi-dimensional Task Prompt for Few-shot Tuning
Abstract: Large language models (LLMs) can be used as accessible and intelligent
chatbots by constructing natural language queries and directly inputting the
prompt into the large language model. However, different prompt constructions
often lead to uncertainty in the answers and thus make it hard to utilize the
specific knowledge of LLMs (like ChatGPT). To alleviate this, we use an
interpretable structure to explain the prompt learning principle in LLMs, which
certifies that the effectiveness of language models is determined by
position changes of the task-related tokens. Therefore, we propose MTPrompt,
a multi-dimensional task prompt learning method based on
task-related object, summary, and task description information. By
automatically building and searching for appropriate prompts, our proposed
MTPrompt achieves the best results in the few-shot setting on five
different datasets. In addition, we demonstrate the effectiveness and stability
of our method in different experimental settings and ablation experiments. In
interaction with large language models, embedding more task-related information
into prompts will make it easier to stimulate knowledge embedded in large
language models. | Computational Linguistics |
What field is the article from? | Title: DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets
Abstract: Construction of a universal detector poses a crucial question: How can we
most effectively train a model on a large mixture of datasets? The answer lies
in learning dataset-specific features and ensembling their knowledge but do all
this in a single model. Previous methods achieve this by having separate
detection heads on a common backbone but that results in a significant increase
in parameters. In this work, we present Mixture-of-Experts as a solution,
highlighting that MoEs are much more than a scalability tool. We propose
Dataset-Aware Mixture-of-Experts, DAMEX, where we train the experts to become an
'expert' of a dataset by learning to route each dataset's tokens to its mapped
expert. Experiments on the Universal Object-Detection Benchmark show that we
outperform the existing state-of-the-art by average +10.2 AP score and improve
over our non-MoE baseline by average +2.0 AP score. We also observe consistent
gains while mixing datasets with (1) limited availability, (2) disparate
domains and (3) divergent label sets. Further, we qualitatively show that DAMEX
is robust against expert representation collapse. | Computer Vision |
What field is the article from? | Title: Generalization of Fitness Exercise Recognition from Doppler Measurements by Domain-adaption and Few-Shot Learning
Abstract: In previous works, a mobile application was developed using an unmodified
commercial off-the-shelf smartphone to recognize whole-body exercises. The
working principle was based on the ultrasound Doppler sensing with the device
built-in hardware. Applying such a lab-environment-trained model to realistic
application variations causes a significant drop in performance, thus
decimating its applicability. The reasons for the reduced performance can be
manifold: it could be induced by user, environment, and device variations
in realistic scenarios. Such scenarios are often more complex and diverse,
which can be challenging to anticipate in the initial training data. To study
and overcome this issue, this paper presents a database with controlled and
uncontrolled subsets of fitness exercises. We propose two concepts to utilize
small adaption data to successfully improve model generalization in an
uncontrolled environment, increasing the recognition accuracy twofold to sixfold
compared to the baseline for different users. | Artificial Intelligence |
What field is the article from? | Title: STEER: Semantic Turn Extension-Expansion Recognition for Voice Assistants
Abstract: In the context of a voice assistant system, steering refers to the phenomenon
in which a user issues a follow-up command attempting to direct or clarify a
previous turn. We propose STEER, a steering detection model that predicts
whether a follow-up turn is a user's attempt to steer the previous command.
Constructing a training dataset for steering use cases poses challenges due to
the cold-start problem. To overcome this, we developed heuristic rules to
sample opt-in usage data, approximating positive and negative samples without
any annotation. Our experimental results show promising performance in
identifying steering intent, with over 95% accuracy on our sampled data.
Moreover, STEER, in conjunction with our sampling strategy, aligns effectively
with real-world steering scenarios, as evidenced by its strong zero-shot
performance on a human-graded evaluation set. In addition to relying solely on
user transcripts as input, we introduce STEER+, an enhanced version of the
model. STEER+ utilizes a semantic parse tree to provide more context on
out-of-vocabulary words, such as named entities that often occur at the
sentence boundary. This further improves model performance, reducing error rate
in domains where entities frequently appear, such as messaging. Lastly, we
present a data analysis that highlights the improvement in user experience when
voice assistants support steering use cases. | Computational Linguistics |
What field is the article from? | Title: Task Tree Retrieval For Robotic Cooking
Abstract: This paper develops different algorithms that generate the task tree plan for a
given goal node (recipe). The knowledge representation of the dishes is called
FOON. It contains the different objects and the relationships between them with
respect to the motion nodes. The graphical representation of FOON is made by
noticing the change in the state of an object with respect to the human
manipulators. We explore how the FOON is created for different recipes by the
robots. Task planning faces difficulties in exploring unknown problems, as its
knowledge is limited to the FOON. To obtain the task tree plan for a given
recipe, the robot retrieves the information of different functional units from
the knowledge retrieval process on FOON. The generated subgraphs will allow the
robot to cook the required dish; thus the robot is able to cook the given recipe
by following the sequence of instructions. | Robotics |
What field is the article from? | Title: TaskWeaver: A Code-First Agent Framework
Abstract: Large Language Models (LLMs) have shown impressive abilities in natural
language understanding and generation, leading to their use in applications
such as chatbots and virtual assistants. However, existing LLM frameworks face
limitations in handling domain-specific data analytics tasks with rich data
structures. Moreover, they struggle with flexibility to meet diverse user
requirements. To address these issues, TaskWeaver is proposed as a code-first
framework for building LLM-powered autonomous agents. It converts user requests
into executable code and treats user-defined plugins as callable functions.
TaskWeaver provides support for rich data structures, flexible plugin usage,
and dynamic plugin selection, and leverages LLM coding capabilities for complex
logic. It also incorporates domain-specific knowledge through examples and
ensures the secure execution of generated code. TaskWeaver offers a powerful
and flexible framework for creating intelligent conversational agents that can
handle complex tasks and adapt to domain-specific scenarios. The code is
open-sourced at https://github.com/microsoft/TaskWeaver/. | Artificial Intelligence |
What field is the article from? | Title: Concept Alignment as a Prerequisite for Value Alignment
Abstract: Value alignment is essential for building AI systems that can safely and
reliably interact with people. However, what a person values -- and is even
capable of valuing -- depends on the concepts that they are currently using to
understand and evaluate what happens in the world. The dependence of values on
concepts means that concept alignment is a prerequisite for value alignment --
agents need to align their representation of a situation with that of humans in
order to successfully align their values. Here, we formally analyze the concept
alignment problem in the inverse reinforcement learning setting, show how
neglecting concept alignment can lead to systematic value mis-alignment, and
describe an approach that helps minimize such failure modes by jointly
reasoning about a person's concepts and values. Additionally, we report
experimental results with human participants showing that humans reason about
the concepts used by an agent when acting intentionally, in line with our joint
reasoning model. | Artificial Intelligence |
What field is the article from? | Title: Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning
Abstract: Federated Learning (FL) is a novel privacy-protection distributed machine
learning paradigm that guarantees user privacy and prevents the risk of data
leakage due to the advantage of the client's local training. Researchers have
struggled to design fair FL systems that ensure fairness of results. However,
the interplay between fairness and privacy has been less studied. Increasing
the fairness of FL systems can have an impact on user privacy, while an
increase in user privacy can affect fairness. In this work, on the client side,
we use fairness metrics, such as Demographic Parity (DemP), Equalized Odds
(EOs), and Disparate Impact (DI), to construct the local fair model. To protect
the privacy of the client model, we propose a privacy-protection fairness FL
method. The results show that the accuracy of the fair model with privacy
increases because privacy breaks the constraints of the fairness metrics. In
our experiments, we characterize the relationship between privacy, fairness,
and utility, and find that there is a tradeoff between them. | Machine Learning |
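The client-side fairness metrics named in the abstract have standard
definitions; for concreteness, a minimal sketch of the demographic parity gap
and disparate impact for binary predictions (equalized odds is analogous,
computed per true label):

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """DemP gap: largest difference in positive-prediction rates between
    demographic groups (0 = perfect demographic parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """DI: ratio of the lowest to the highest positive-prediction rate
    across groups (values near 1 indicate parity; the '80% rule' flags
    values below 0.8)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(min(rates) / max(rates))

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5
print(disparate_impact(y_pred, group))        # 0.25 / 0.75 = 0.333...
```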
What field is the article from? | Title: On The Fairness Impacts of Hardware Selection in Machine Learning
Abstract: In the machine learning ecosystem, hardware selection is often regarded as a
mere utility, overshadowed by the spotlight on algorithms and data. This
oversight is particularly problematic in contexts like ML-as-a-service
platforms, where users often lack control over the hardware used for model
deployment. How does the choice of hardware impact generalization properties?
This paper investigates the influence of hardware on the delicate balance
between model performance and fairness. We demonstrate that hardware choices
can exacerbate existing disparities, attributing these discrepancies to
variations in gradient flows and loss surfaces across different demographic
groups. Through both theoretical and empirical analysis, the paper not only
identifies the underlying factors but also proposes an effective strategy for
mitigating hardware-induced performance imbalances. | Machine Learning |
What field is the article from? | Title: Resource-constrained knowledge diffusion processes inspired by human peer learning
Abstract: We consider a setting where a population of artificial learners is given, and
the objective is to optimize aggregate measures of performance, under
constraints on training resources. The problem is motivated by the study of
peer learning in human educational systems. In this context, we study natural
knowledge diffusion processes in networks of interacting artificial learners.
By `natural', we mean processes that reflect human peer learning where the
students' internal state and learning process is mostly opaque, and the main
degree of freedom lies in the formation of peer learning groups by a
coordinator who can potentially evaluate the learners before assigning them to
peer groups. Among other things, we empirically show that such processes indeed make
effective use of the training resources, and enable the design of modular
neural models that have the capacity to generalize without being prone to
overfitting noisy labels. | Machine Learning |
What field is the article from? | Title: The Limits of Fair Medical Imaging AI In The Wild
Abstract: As artificial intelligence (AI) rapidly approaches human-level performance in
medical imaging, it is crucial that it does not exacerbate or propagate
healthcare disparities. Prior research has established AI's capacity to infer
demographic data from chest X-rays, leading to a key concern: do models using
demographic shortcuts have unfair predictions across subpopulations? In this
study, we conduct a thorough investigation into the extent to which medical AI
utilizes demographic encodings, focusing on potential fairness discrepancies
within both in-distribution training sets and external test sets. Our analysis
covers three key medical imaging disciplines: radiology, dermatology, and
ophthalmology, and incorporates data from six global chest X-ray datasets. We
confirm that medical imaging AI leverages demographic shortcuts in disease
classification. While algorithmically correcting shortcuts effectively
addresses fairness gaps, creating "locally optimal" models within the original
data distribution, this optimality does not hold in new test settings.
Surprisingly, we find that models with less encoding of demographic attributes
are often most "globally optimal", exhibiting better fairness during model
evaluation in new test environments. Our work establishes best practices for
medical imaging models which maintain their performance and fairness in
deployments beyond their initial training contexts, underscoring critical
considerations for AI clinical deployments across populations and sites. | Computers and Society |
What field is the article from? | Title: DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence
Abstract: We present DARLEI, a framework that combines evolutionary algorithms with
parallelized reinforcement learning for efficiently training and evolving
populations of UNIMAL agents. Our approach utilizes Proximal Policy
Optimization (PPO) for individual agent learning and pairs it with a tournament
selection-based generational learning mechanism to foster morphological
evolution. By building on Nvidia's Isaac Gym, DARLEI leverages GPU accelerated
simulation to achieve over 20x speedup using just a single workstation,
compared to previous work which required large distributed CPU clusters. We
systematically characterize DARLEI's performance under various conditions,
revealing factors impacting diversity of evolved morphologies. For example, by
enabling inter-agent collisions within the simulator, we find that we can
simulate some multi-agent interactions between the same morphology, and see how
it influences individual agent capabilities and long-term evolutionary
adaptation. While current results demonstrate limited diversity across
generations, we hope to extend DARLEI in future work to include interactions
between diverse morphologies in richer environments, and create a platform that
allows for coevolving populations and investigating emergent behaviours in
them. Our source code is made publicly available at
https://saeejithnair.github.io/darlei. | Artificial Intelligence |
What field is the article from? | Title: Using Artificial French Data to Understand the Emergence of Gender Bias in Transformer Language Models
Abstract: Numerous studies have demonstrated the ability of neural language models to
learn various linguistic properties without direct supervision. This work takes
an initial step towards exploring the less researched topic of how neural
models discover linguistic properties of words, such as gender, as well as the
rules governing their usage. We propose to use an artificial corpus generated
by a PCFG based on French to precisely control the gender distribution in the
training data and determine under which conditions a model correctly captures
gender information or, on the contrary, appears gender-biased. | Computational Linguistics |
What field is the article from? | Title: Exploring Values in Museum Artifacts in the SPICE project: a Preliminary Study
Abstract: This document describes the rationale, the implementation and a preliminary
evaluation of a semantic reasoning tool developed in the EU H2020 SPICE project
to enhance the diversity of perspectives experienced by museum visitors. The
tool, called DEGARI 2.0 for values, relies on the commonsense reasoning
framework TCL, and exploits an ontological model formalizing Haidt's theory
of moral values to associate museum items with combined values and emotions.
Within a museum exhibition, this tool can suggest cultural items that are
associated not only with the values of already experienced or preferred
objects, but also with novel items with different value stances, opening the
visit experience to more inclusive interpretations of cultural content. The
system has been preliminarily tested, in the context of the SPICE project, on
the collection of the Hecht Museum of Haifa. | Artificial Intelligence |
What field is the article from? | Title: A Causal Disentangled Multi-Granularity Graph Classification Method
Abstract: Graph data widely exists in real life, with large amounts of data and complex
structures. It is necessary to map graph data to low-dimensional embedding.
Graph classification, a critical graph task, mainly relies on identifying the
important substructures within the graph. At present, some graph classification
methods do not combine the multi-granularity characteristics of graph data.
This lack of granularity distinction in modeling leads to a conflation of key
information and false correlations within the model. So, achieving the desired
goal of a credible and interpretable model becomes challenging. This paper
proposes a causal disentangled multi-granularity graph representation learning
method (CDM-GNN) to solve this challenge. The CDM-GNN model disentangles the
important substructures and bias parts within the graph from a
multi-granularity perspective. The disentanglement of the CDM-GNN model reveals
important and bias parts, forming the foundation for its classification task,
specifically, model interpretations. The CDM-GNN model exhibits strong
classification performance and generates explanatory outcomes aligning with
human cognitive patterns. To verify the effectiveness of the model, this paper
evaluates it on three real-world datasets: MUTAG, PTC, and IMDB-M. Six
state-of-the-art models, namely GCN, GAT, Top-k, ASAPool, SUGAR, and SAT, are
employed for comparison purposes. Additionally, a qualitative analysis of the
interpretation results is conducted. | Machine Learning |
What field is the article from? | Title: OpinSummEval: Revisiting Automated Evaluation for Opinion Summarization
Abstract: Opinion summarization sets itself apart from other types of summarization
tasks due to its distinctive focus on aspects and sentiments. Although certain
automated evaluation methods like ROUGE have gained popularity, we have found
them to be unreliable measures for assessing the quality of opinion summaries.
In this paper, we present OpinSummEval, a dataset comprising human judgments
and outputs from 14 opinion summarization models. We further explore the
correlation between 24 automatic metrics and human ratings across four
dimensions. Our findings indicate that metrics based on neural networks
generally outperform non-neural ones. However, even metrics built on powerful
backbones, such as BART and GPT-3/3.5, do not consistently correlate well
across all dimensions, highlighting the need for advancements in automated
evaluation methods for opinion summarization. The code and data are publicly
available at https://github.com/A-Chicharito-S/OpinSummEval/tree/main. | Computational Linguistics |
What field is the article from? | Title: Generating Valid and Natural Adversarial Examples with Large Language Models
Abstract: Deep learning-based natural language processing (NLP) models, particularly
pre-trained language models (PLMs), have been revealed to be vulnerable to
adversarial attacks. However, the adversarial examples generated by many
mainstream word-level adversarial attack models are neither valid nor natural,
leading to the loss of semantic maintenance, grammaticality, and human
imperceptibility. Based on the exceptional capacity of language understanding
and generation of large language models (LLMs), we propose LLM-Attack, which
aims at generating both valid and natural adversarial examples with LLMs. The
method consists of two stages: word importance ranking (which searches for the
most vulnerable words) and word synonym replacement (which substitutes them
with their synonyms obtained from LLMs). Experimental results on the Movie
Review (MR), IMDB, and Yelp Review Polarity datasets against the baseline
adversarial attack models illustrate the effectiveness of LLM-Attack, and it
outperforms the baselines in human and GPT-4 evaluation by a significant
margin. The model can generate adversarial examples that are typically valid
and natural, with the preservation of semantic meaning, grammaticality, and
human imperceptibility. | Computational Linguistics |
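The abstract's two stages map naturally onto a greedy loop. The sketch below is
an interpretation of that description, with `victim_score` (the victim model's
confidence in the true label) and `llm_synonyms` (an LLM call proposing
synonyms) as hypothetical stand-ins:

```python
def rank_word_importance(words, victim_score):
    """Stage 1 (sketch): rank words by how much masking each one reduces
    the victim model's confidence in the correct label."""
    base = victim_score(" ".join(words))
    drops = []
    for i in range(len(words)):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        drops.append((base - victim_score(" ".join(masked)), i))
    return [i for _, i in sorted(drops, reverse=True)]

def llm_attack(sentence, victim_score, llm_synonyms, max_subs=5):
    """Stage 2 (sketch): greedily substitute the most vulnerable words with
    LLM-proposed synonyms, keeping whichever candidate lowers the victim's
    confidence the most. Both helper callables are hypothetical."""
    words = sentence.split()
    for idx in rank_word_importance(words, victim_score)[:max_subs]:
        candidates = llm_synonyms(words[idx]) + [words[idx]]  # keep-original option
        words[idx] = min(
            candidates,
            key=lambda c: victim_score(" ".join(words[:idx] + [c] + words[idx + 1:])),
        )
    return " ".join(words)
```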
What field is the article from? | Title: CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts
Abstract: Coherence is a linguistic term that refers to the relations between small
textual units (sentences, propositions), which make the text logically
consistent and meaningful to the reader. With the advances of generative
foundational models in NLP, there is a pressing need to automatically assess
the human-perceived coherence of automatically generated texts. Up until now,
little work has been done on explicitly assessing the coherence of generated
texts and analyzing the factors contributing to (in)coherence. Previous work on
the topic used other tasks, e.g., sentence reordering, as proxies of coherence,
rather than approaching coherence detection head-on. In this paper, we
introduce CoheSentia, a novel benchmark of human-perceived coherence of
automatically generated texts. Our annotation protocol reflects two
perspectives; one is global, assigning a single coherence score, and the other
is incremental, scoring sentence by sentence. The incremental method produces
an (in)coherence score for each text fragment and also pinpoints reasons for
incoherence at that point. Our benchmark contains 500 automatically-generated
and human-annotated paragraphs, each annotated in both methods, by multiple
raters. Our analysis shows that the inter-annotator agreement in the
incremental mode is higher than in the holistic alternative, and our
experiments show that standard LMs fine-tuned for coherence detection show
varied performance on the different factors contributing to (in)coherence. All
in all, these models yield unsatisfactory performance, emphasizing the need for
developing more reliable methods for coherence assessment. | Computational Linguistics |
What field is the article from? | Title: Generative AI for Software Metadata: Overview of the Information Retrieval in Software Engineering Track at FIRE 2023
Abstract: The Information Retrieval in Software Engineering (IRSE) track aims to
develop solutions for automated evaluation of code comments in a machine
learning framework based on human and large language model generated labels. In
this track, there is a binary classification task to classify comments as
useful and not useful. The dataset consists of 9048 code comments and
surrounding code snippet pairs extracted from open-source GitHub C-based
projects, plus an additional dataset generated individually by teams using large
language models. Overall, 56 experiments were submitted by 17 teams from
various universities and software companies. The submissions have been
evaluated quantitatively using the F1-Score and qualitatively based on the type
of features developed, the supervised learning model used, and their
corresponding hyper-parameters. The labels generated from large language models
increase the bias in the prediction model but lead to less over-fitted results. | Software Engineering |
What field is the article from? | Title: When is Offline Policy Selection Sample Efficient for Reinforcement Learning?
Abstract: Offline reinforcement learning algorithms often require careful
hyperparameter tuning. Consequently, before deployment, we need to select
amongst a set of candidate policies. As yet, however, there is little
understanding about the fundamental limits of this offline policy selection
(OPS) problem. In this work we aim to provide clarity on when sample efficient
OPS is possible, primarily by connecting OPS to off-policy policy evaluation
(OPE) and Bellman error (BE) estimation. We first show a hardness result, that
in the worst case, OPS is just as hard as OPE, by proving a reduction of OPE to
OPS. As a result, no OPS method can be more sample efficient than OPE in the
worst case. We then propose a BE method for OPS, called Identifiable BE
Selection (IBES), that has a straightforward method for selecting its own
hyperparameters. We highlight that using IBES for OPS generally has more
requirements than OPE methods, but if these are satisfied, it can be more
sample efficient. We conclude with an empirical study comparing OPE and IBES, and show the
difficulty of OPS on an offline Atari benchmark dataset. | Machine Learning |
What field is the article from? | Title: Data-Driven Risk Modeling for Infrastructure Projects Using Artificial Intelligence Techniques
Abstract: Managing project risk is a key part of the successful implementation of any
large project and is widely recognized as a best practice for public agencies
to deliver infrastructures. The conventional method of identifying and
evaluating project risks involves getting input from subject matter experts at
risk workshops in the early phases of a project. As a project moves through its
life cycle, these identified risks and their assessments evolve. Some risks are
realized to become issues, some are mitigated, and some are retired as no
longer important. Despite the value provided by conventional expert-based
approaches, several challenges remain due to the time-consuming and expensive
processes involved. Moreover, little is known about how risks evolve from
ex-ante to ex-post over time. How well does the project team identify and
evaluate risks in the initial phase compared to what happens during project
execution? Using historical data and artificial intelligence techniques, this
study addressed these limitations by introducing a data-driven framework to
identify risks automatically and to examine the quality of early risk registers
and risk assessments. Risk registers from more than 70 major U.S.
transportation projects form the input dataset. | Software Engineering |
What field is the article from? | Title: Rethinking Samples Selection for Contrastive Learning: Mining of Potential Samples
Abstract: Contrastive learning predicts whether two images belong to the same category
by training a model to make their feature representations as close or as far
away as possible. In this paper, we rethink how to mine samples in contrastive
learning. Unlike other methods, our approach is more comprehensive, taking into
account both positive and negative samples, and mining potential samples from
two aspects: First, for positive samples, we consider both the augmented sample
views obtained by data augmentation and the mined sample views through data
mining. Then, we weight and combine them using both soft and hard weighting
strategies. Second, considering the existence of uninformative negative samples
and false negative samples in the negative samples, we analyze the negative
samples from the gradient perspective and finally mine negative samples that
are neither too hard nor too easy as potential negative samples, i.e., those
negative samples that are close to positive samples. The experiments show
clear advantages of our method over traditional self-supervised
methods. Our method achieves 88.57%, 61.10%, and 36.69% top-1 accuracy on
CIFAR10, CIFAR100, and TinyImagenet, respectively. | Computer Vision |
What field is the article from? | Title: Diverse Conventions for Human-AI Collaboration
Abstract: Conventions are crucial for strong performance in cooperative multi-agent
games, because they allow players to coordinate on a shared strategy without
explicit communication. Unfortunately, standard multi-agent reinforcement
learning techniques, such as self-play, converge to conventions that are
arbitrary and non-diverse, leading to poor generalization when interacting with
new partners. In this work, we present a technique for generating diverse
conventions by (1) maximizing their rewards during self-play, while (2)
minimizing their rewards when playing with previously discovered conventions
(cross-play), stimulating conventions to be semantically different. To ensure
that learned policies act in good faith despite the adversarial optimization of
cross-play, we introduce \emph{mixed-play}, where an initial state is randomly
generated by sampling self-play and cross-play transitions and the player
learns to maximize the self-play reward from this initial state. We analyze the
benefits of our technique on various multi-agent collaborative games, including
Overcooked, and find that our technique can adapt to the conventions of humans,
surpassing human-level performance when paired with real users. | Artificial Intelligence |
What field is the article from? | Title: SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning
Abstract: Semi-supervised learning (SSL) has become popular in recent years because it
allows the training of a model using a large amount of unlabeled data. However,
one issue that many SSL methods face is confirmation bias, which occurs
when the model is overfitted to the small labeled training dataset and produces
overconfident, incorrect predictions. To address this issue, we propose
SequenceMatch, an efficient SSL method that utilizes multiple data
augmentations. The key element of SequenceMatch is the inclusion of a medium
augmentation for unlabeled data. By taking advantage of different augmentations
and the consistency constraints between each pair of augmented examples,
SequenceMatch helps reduce the divergence between the prediction distribution
of the model for weakly and strongly augmented examples. In addition,
SequenceMatch defines two different consistency constraints for high and
low-confidence predictions. As a result, SequenceMatch is more data-efficient
than ReMixMatch, and more time-efficient than both ReMixMatch ($\times4$) and
CoMatch ($\times2$) while having higher accuracy. Despite its simplicity,
SequenceMatch consistently outperforms prior methods on standard benchmarks,
such as CIFAR-10/100, SVHN, and STL-10. It also surpasses prior
state-of-the-art methods by a large margin on large-scale datasets such as
ImageNet, with a 38.46\% error rate. Code is available at
https://github.com/beandkay/SequenceMatch. | Computer Vision |
What field is the article from? | Title: Using Early Readouts to Mediate Featural Bias in Distillation
Abstract: Deep networks tend to learn spurious feature-label correlations in real-world
supervised learning tasks. This vulnerability is aggravated in distillation,
where a student model may have lesser representational capacity than the
corresponding teacher model. Often, knowledge of specific spurious correlations
is used to reweight instances and rebalance the learning process. We propose a
novel early readout mechanism whereby we attempt to predict the label using
representations from earlier network layers. We show that these early readouts
automatically identify problem instances or groups in the form of confident,
incorrect predictions. Leveraging these signals to modulate the distillation
loss on an instance level allows us to substantially improve not only group
fairness measures across benchmark datasets, but also overall accuracy of the
student model. We also provide secondary analyses that bring insight into the
role of feature learning in supervision and distillation. | Machine Learning |
What field is the article from? | Title: Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination
Abstract: Knowledge Graphs (KGs) have emerged as fundamental platforms for powering
intelligent decision-making and a wide range of Artificial Intelligence (AI)
services across major corporations such as Google, Walmart, and AirBnb. KGs
complement Machine Learning (ML) algorithms by providing data context and
semantics, thereby enabling further inference and question-answering
capabilities. The integration of KGs with neuronal learning (e.g., Large
Language Models (LLMs)) is currently a topic of active research, commonly named
neuro-symbolic AI. Despite the numerous benefits that can be accomplished with
KG-based AI, its growing ubiquity within online services may result in the loss
of self-determination for citizens as a fundamental societal issue. The more we
rely on these technologies, which are often centralised, the less citizens will
be able to determine their own destinies. To counter this threat, AI
regulation, such as the European Union (EU) AI Act, is being proposed in
certain regions. The regulation sets out what technologists need to do, leading to
questions concerning: How can the output of AI systems be trusted? What is
needed to ensure that the data fuelling and the inner workings of these
artefacts are transparent? How can AI be made accountable for its
decision-making? This paper conceptualises the foundational topics and research
pillars to support KG-based AI for self-determination. Drawing upon this
conceptual framework, challenges and opportunities for citizen
self-determination are illustrated and analysed in a real-world scenario. As a
result, we propose a research agenda aimed at accomplishing the recommended
objectives. | Artificial Intelligence |
What field is the article from? | Title: Exploring the Robustness of Decentralized Training for Large Language Models
Abstract: Decentralized training of large language models has emerged as an effective
way to democratize this technology. However, the potential threats associated
with this approach have not been carefully discussed, which would hinder the
development of decentralized training infrastructures. This paper aims to
initiate discussion towards this end by exploring the robustness of
decentralized training from three main perspectives. First, we demonstrate the
vulnerabilities inherent in decentralized training frameworks in terms of
hardware, data, and models. Second, we highlight the fundamental difference
between decentralized foundation model training and vanilla federated learning,
where the security techniques employed in federated learning cannot be applied
directly. Third, we discuss the essential components required for a robust and
efficient decentralized training framework and present a case study by modeling
a concrete threat model. Our objective in this vision paper is to emphasize the
importance of addressing security concerns in the context of decentralized
training for large language models. | Machine Learning |
What field is the article from? | Title: SAMSGL: Series-Aligned Multi-Scale Graph Learning for Spatio-Temporal Forecasting
Abstract: Spatio-temporal forecasting in various domains, like traffic prediction and
weather forecasting, is a challenging endeavor, primarily due to the
difficulties in modeling propagation dynamics and capturing high-dimensional
interactions among nodes. Despite the significant strides made by graph-based
networks in spatio-temporal forecasting, there remain two pivotal factors
closely related to forecasting performance that need further consideration:
time delays in propagation dynamics and multi-scale high-dimensional
interactions. In this work, we present a Series-Aligned Multi-Scale Graph
Learning (SAMSGL) framework, aiming to enhance forecasting performance. In
order to handle time delays in spatial interactions, we propose a
series-aligned graph convolution layer to facilitate the aggregation of
non-delayed graph signals, thereby mitigating the influence of time delays and
improving accuracy. To understand global and local spatio-temporal
interactions, we develop a spatio-temporal architecture via multi-scale graph
learning, which encompasses two essential components: multi-scale graph
structure learning and graph-fully connected (Graph-FC) blocks. The multi-scale
graph structure learning includes a global graph structure to learn both
delayed and non-delayed node embeddings, as well as a local one to learn node
variations influenced by neighboring factors. The Graph-FC blocks
synergistically fuse spatial and temporal information to boost prediction
accuracy. To evaluate the performance of SAMSGL, we conduct experiments on
meteorological and traffic forecasting datasets, which demonstrate its
effectiveness and superiority. | Machine Learning |
What field is the article from? | Title: SemanticBoost: Elevating Motion Generation with Augmented Textual Cues
Abstract: Current techniques face difficulties in generating motions from intricate
semantic descriptions, primarily due to insufficient semantic annotations in
datasets and weak contextual understanding. To address these issues, we present
SemanticBoost, a novel framework that tackles both challenges simultaneously.
Our framework comprises a Semantic Enhancement module and a Context-Attuned
Motion Denoiser (CAMD). The Semantic Enhancement module extracts supplementary
semantics from motion data, enriching the dataset's textual description and
ensuring precise alignment between text and motion data without depending on
large language models. On the other hand, the CAMD approach provides an
all-encompassing solution for generating high-quality, semantically consistent
motion sequences by effectively capturing context information and aligning the
generated motion with the given textual descriptions. Distinct from existing
methods, our approach can synthesize accurate orientational movements, combined
motions based on specific body part descriptions, and motions generated from
complex, extended sentences. Our experimental results demonstrate that
SemanticBoost, as a diffusion-based method, outperforms auto-regressive-based
techniques, achieving cutting-edge performance on the Humanml3D dataset while
maintaining realistic and smooth motion generation quality. | Computer Vision |
What field is the article from? | Title: GPT4All: An Ecosystem of Open Source Compressed Language Models
Abstract: Large language models (LLMs) have recently achieved human-level performance
on a range of professional and academic benchmarks. The accessibility of these
models has lagged behind their performance. State-of-the-art LLMs require
costly infrastructure; are only accessible via rate-limited, geo-locked, and
censored web interfaces; and lack publicly available code and technical
reports. In this paper, we tell the story of GPT4All, a popular open source
repository that aims to democratize access to LLMs. We outline the technical
details of the original GPT4All model family, as well as the evolution of the
GPT4All project from a single model into a fully fledged open source ecosystem.
It is our hope that this paper acts as both a technical overview of the
original GPT4All models as well as a case study on the subsequent growth of the
GPT4All open source ecosystem. | Computational Linguistics |
What field is the article from? | Title: Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems
Abstract: High-level reasoning can be defined as the capability to generalize over
knowledge acquired via experience, and to exhibit robust behavior in novel
situations. Such form of reasoning is a basic skill in humans, who seamlessly
use it in a broad spectrum of tasks, from language communication to decision
making in complex situations. When it manifests itself in understanding and
manipulating the everyday world of objects and their interactions, we talk
about common sense or commonsense reasoning. State-of-the-art AI systems do not
possess such a capability: for instance, Large Language Models have recently
become popular by demonstrating remarkable fluency in conversing with humans,
but they still make trivial mistakes when probed for commonsense competence; on
a different level, performance degradation outside training data prevents
self-driving vehicles from safely adapting to unseen scenarios, a serious and
unsolved problem that limits the adoption of such technology. In this paper we
propose to enable high-level reasoning in AI systems by integrating cognitive
architectures with external neuro-symbolic components. We illustrate a hybrid
framework centered on ACT-R and we discuss the role of generative models in
recent and future applications. | Artificial Intelligence |
What field is the article from? | Title: Survey on Memory-Augmented Neural Networks: Cognitive Insights to AI Applications
Abstract: This paper explores Memory-Augmented Neural Networks (MANNs), delving into
how they blend human-like memory processes into AI. It covers different memory
types, like sensory, short-term, and long-term memory, linking psychological
theories with AI applications. The study investigates advanced architectures
such as Hopfield Networks, Neural Turing Machines, Correlation Matrix Memories,
Memformer, and Neural Attention Memory, explaining how they work and where they
excel. It dives into real-world uses of MANNs across Natural Language
Processing, Computer Vision, Multimodal Learning, and Retrieval Models, showing
how memory boosters enhance accuracy, efficiency, and reliability in AI tasks.
Overall, this survey provides a comprehensive view of MANNs, offering insights
for future research in memory-based AI systems. | Artificial Intelligence |
What field is the article from? | Title: Resolving Crash Bugs via Large Language Models: An Empirical Study
Abstract: Crash bugs cause unexpected program behaviors or even termination, requiring
high-priority resolution. However, manually resolving crash bugs is challenging
and labor-intensive, and researchers have proposed various techniques for their
automated localization and repair. ChatGPT, a recent large language model
(LLM), has garnered significant attention due to its exceptional performance
across various domains. This work performs the first investigation into
ChatGPT's capability to resolve real-world crash bugs, focusing on its
effectiveness in both localizing and repairing code-related and
environment-related crash bugs. Specifically, we initially assess ChatGPT's
fundamental ability to resolve crash bugs with basic prompts in a single
iteration. We observe that ChatGPT performs better at resolving code-related
crash bugs compared to environment-related ones, and its primary challenge in
resolution lies in inaccurate localization. Additionally, we explore ChatGPT's
potential with various advanced prompts. Furthermore, when its self-planning is
stimulated, ChatGPT methodically investigates each potential crash-causing
environmental factor through proactive inquiry, ultimately identifying the root
cause of the crash. Based on our findings, we propose IntDiagSolver, an
interaction methodology designed to facilitate precise crash bug resolution
through continuous interaction with LLMs. Evaluating IntDiagSolver on multiple
LLMs reveals consistent enhancement in the accuracy of crash bug resolution,
including ChatGPT, Claude, and CodeLlama. | Software Engineering |
What field is the article from? | Title: Alleviating Behavior Data Imbalance for Multi-Behavior Graph Collaborative Filtering
Abstract: Graph collaborative filtering, which learns user and item representations
through message propagation over the user-item interaction graph, has been
shown to effectively enhance recommendation performance. However, most current
graph collaborative filtering models mainly construct the interaction graph on
a single behavior domain (e.g. click), even though users exhibit various types
of behaviors on real-world platforms, including actions like click, cart, and
purchase. Furthermore, due to variations in user engagement, there exists an
imbalance in the scale of different types of behaviors. For instance, users may
click and view multiple items but only make selective purchases from a small
subset of them. How to alleviate the behavior imbalance problem and utilize
information from the multiple behavior graphs concurrently to improve the
target behavior conversion (e.g. purchase) remains underexplored. To this end,
we propose IMGCF, a simple but effective model to alleviate behavior data
imbalance for multi-behavior graph collaborative filtering. Specifically, IMGCF
utilizes a multi-task learning framework for collaborative filtering on
multi-behavior graphs. Then, to mitigate the data imbalance issue, IMGCF
improves representation learning on the sparse behavior by leveraging
representations learned from the behavior domain with abundant data volumes.
Experiments on two widely-used multi-behavior datasets demonstrate the
effectiveness of IMGCF. | Information Retrieval |
What field is the article from? | Title: Benchmarks for Physical Reasoning AI
Abstract: Physical reasoning is a crucial aspect in the development of general AI
systems, given that human learning starts with interacting with the physical
world before progressing to more complex concepts. Although researchers have
studied and assessed the physical reasoning of AI approaches through various
specific benchmarks, there is no comprehensive approach to evaluating and
measuring progress. Therefore, we aim to offer an overview of existing
benchmarks and their solution approaches and propose a unified perspective for
measuring the physical reasoning capacity of AI systems. We select benchmarks
that are designed to test algorithmic performance in physical reasoning tasks.
While each of the selected benchmarks poses a unique challenge, their ensemble
provides a comprehensive proving ground for an AI generalist agent with a
measurable skill level for various physical reasoning concepts. This gives such
an ensemble of benchmarks an advantage over holistic benchmarks that aim to
simulate the real world by intertwining its complexity with its many concepts.
We group the presented set of physical reasoning benchmarks into
subcategories so that narrower generalist AI agents can be tested first on
these groups. | Artificial Intelligence |
What field is the article from? | Title: Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision
Abstract: Computer vision models have been known to encode harmful biases, leading to
the potentially unfair treatment of historically marginalized groups, such as
people of color. However, there remains a lack of datasets balanced along
demographic traits that can be used to evaluate the downstream fairness of
these models. In this work, we demonstrate that diffusion models can be
leveraged to create such a dataset. We first use a diffusion model to generate
a large set of images depicting various occupations. Subsequently, each image
is edited using inpainting to generate multiple variants, where each variant
refers to a different perceived race. Using this dataset, we benchmark several
vision-language models on a multi-class occupation classification task. We find
that images generated with non-Caucasian labels have a significantly higher
occupation misclassification rate than images generated with Caucasian labels,
and that several misclassifications are suggestive of racial biases. We measure
a model's downstream fairness by computing the standard deviation in the
probability of predicting the true occupation label across the different
perceived identity groups. Using this fairness metric, we find significant
disparities between the evaluated vision-and-language models. We hope that our
work demonstrates the potential value of diffusion methods for fairness
evaluations. | Computer Vision |
What field is the article from? | Title: Distance-Based Propagation for Efficient Knowledge Graph Reasoning
Abstract: Knowledge graph completion (KGC) aims to predict unseen edges in knowledge
graphs (KGs), resulting in the discovery of new facts. A new class of methods
has been proposed to tackle this problem by aggregating path information.
These methods have shown tremendous ability in the task of KGC. However, they
are plagued by efficiency issues. Though there are a few recent attempts to
address this through learnable path pruning, they often sacrifice
performance to gain efficiency. In this work, we identify two intrinsic
limitations of these methods that affect the efficiency and representation
quality. To address the limitations, we introduce a new method, TAGNet, which
is able to efficiently propagate information. This is achieved by only
aggregating paths in a fixed window for each source-target pair. We demonstrate
that the complexity of TAGNet is independent of the number of layers. Extensive
experiments demonstrate that TAGNet can cut down on the number of propagated
messages by as much as 90% while achieving competitive performance on multiple
KG datasets. The code is available at https://github.com/HarryShomer/TAGNet. | Machine Learning |
What field is the article from? | Title: Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting
Abstract: Automatically generated reports from medical images promise to improve the
workflow of radiologists. Existing methods consider an image-to-report modeling
task by directly generating a fully-fledged report from an image. However, this
conflates the content of the report (e.g., findings and their attributes) with
its style (e.g., format and choice of words), which can lead to clinically
inaccurate reports. To address this, we propose a two-step approach for
radiology report generation. First, we extract the content from an image; then,
we verbalize the extracted content into a report that matches the style of a
specific radiologist. For this, we leverage RadGraph -- a graph representation
of reports -- together with large language models (LLMs). In our quantitative
evaluations, we find that our approach performs favorably. Our
human evaluation with clinical raters highlights that the AI-generated reports
are indistinguishably tailored to the style of individual radiologists despite
leveraging only a few examples as context. | Artificial Intelligence |
What field is the article from? | Title: Learning Safety Constraints From Demonstration Using One-Class Decision Trees
Abstract: The alignment of autonomous agents with human values is a pivotal challenge
when deploying these agents within physical environments, where safety is an
important concern. However, defining the agent's objective as a reward and/or
cost function is inherently complex and prone to human errors. In response to
this challenge, we present a novel approach that leverages one-class decision
trees to facilitate learning from expert demonstrations. These decision trees
provide a foundation for representing a set of constraints pertinent to the
given environment as a logical formula in disjunctive normal form. The learned
constraints are subsequently employed within an oracle constrained
reinforcement learning framework, enabling the acquisition of a safe policy. In
contrast to other methods, our approach offers an interpretable representation
of the constraints, a vital feature in safety-critical environments. To
validate the effectiveness of our proposed method, we conduct experiments in
synthetic benchmark domains and a realistic driving environment. | Machine Learning |
What field is the article from? | Title: SatCLIP: Global, General-Purpose Location Embeddings with Satellite Imagery
Abstract: Geographic location is essential for modeling tasks in fields ranging from
ecology to epidemiology to the Earth system sciences. However, extracting
relevant and meaningful characteristics of a location can be challenging, often
entailing expensive data fusion or data distillation from global imagery
datasets. To address this challenge, we introduce Satellite Contrastive
Location-Image Pretraining (SatCLIP), a global, general-purpose geographic
location encoder that learns an implicit representation of locations from
openly available satellite imagery. Trained location encoders provide vector
embeddings summarizing the characteristics of any given location for convenient
usage in diverse downstream tasks. We show that SatCLIP embeddings, pretrained
on globally sampled multi-spectral Sentinel-2 satellite data, can be used in
various predictive tasks that depend on location information but not
necessarily satellite imagery, including temperature prediction, animal
recognition in imagery, and population density estimation. Across tasks,
SatCLIP embeddings consistently outperform embeddings from existing pretrained
location encoders, ranging from models trained on natural images to models
trained on semantic context. SatCLIP embeddings also help to improve geographic
generalization. This demonstrates the potential of general-purpose location
encoders and opens the door to learning meaningful representations of our
planet from the vast, varied, and largely untapped modalities of geospatial
data. | Computer Vision |
What field is the article from? | Title: FedSN: A General Federated Learning Framework over LEO Satellite Networks
Abstract: Recently, a large number of Low Earth Orbit (LEO) satellites have been
launched and deployed successfully in space by commercial companies, such as
SpaceX. Since LEO satellites are equipped with multimodal sensors, they serve
not only communication but also various machine learning applications,
such as space modulation recognition, remote sensing image classification, etc.
However, the ground station (GS) may be incapable of downloading such a large
volume of raw sensing data for centralized model training due to the limited
contact time with LEO satellites (e.g. 5 minutes). Therefore, federated
learning (FL) has emerged as a promising solution to address this problem via
on-device training. Unfortunately, to enable FL on LEO satellites, we still
face three critical challenges: i) heterogeneous computing and memory
capabilities, ii) limited uplink rate, and iii) model staleness. To this end,
we propose FedSN as a general FL framework to tackle the above challenges, and
fully explore data diversity on LEO satellites. Specifically, we first present
a novel sub-structure scheme to enable heterogeneous local model training
considering different computing, memory, and communication constraints on LEO
satellites. Additionally, we propose a pseudo-synchronous model aggregation
strategy to dynamically schedule model aggregation for compensating model
staleness. To further demonstrate the effectiveness of the FedSN, we evaluate
it using space modulation recognition and remote sensing image classification
tasks by leveraging the data from real-world satellite networks. Extensive
experimental results demonstrate that the FedSN framework achieves higher
accuracy and lower computing and communication overhead than state-of-the-art
benchmarks, and confirm the effectiveness of each component in FedSN. | Machine Learning |
What field is the article from? | Title: Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters
Abstract: Recent work has demonstrated a remarkable ability to customize text-to-image
diffusion models to multiple, fine-grained concepts in a sequential (i.e.,
continual) manner while only providing a few example images for each concept.
This setting is known as continual diffusion. Here, we ask the question: Can we
scale these methods to longer concept sequences without forgetting? Although
prior work mitigates the forgetting of previously learned concepts, we show
that its capacity to learn new tasks reaches saturation over longer sequences.
We address this challenge by introducing a novel method, STack-And-Mask
INcremental Adapters (STAMINA), which is composed of low-rank
attention-masked adapters and customized MLP tokens. STAMINA is designed to
enhance the robust fine-tuning properties of LoRA for sequential concept
learning via learnable hard-attention masks parameterized with low-rank MLPs,
enabling precise, scalable learning via sparse adaptation. Notably, all
introduced trainable parameters can be folded back into the model after
training, inducing no additional inference parameter costs. We show that
STAMINA outperforms the prior SOTA for the setting of text-to-image continual
customization on a 50-concept benchmark composed of landmarks and human faces,
with no stored replay data. Additionally, we extended our method to the setting
of continual learning for image classification, demonstrating that our gains
also translate to state-of-the-art performance in this standard benchmark. | Computer Vision |
What field is the article from? | Title: Personas as a Way to Model Truthfulness in Language Models
Abstract: Large Language Models (LLMs) are trained on vast amounts of text from the
internet, which contains both factual and misleading information about the
world. Can language models discern truth from falsehood in this contradicting
data? Expanding on the view that LLMs can model different communicative agents,
we present the persona hypothesis: LLMs can cluster agents into personas using
common features of their generations. For instance, a truthful persona is a
group of agents that are likely to produce truthful text and that share similar
features like formal writing styles and scientific references. By modeling this
persona, LLMs can generalize truthfulness beyond the specific contexts in which
each agent generated the training text. For example, the model can infer that
the agent "Wikipedia" will behave truthfully on topics that were only generated
by "Science" because they both belong to the truthful persona. We show evidence
for the persona hypothesis via two observations: (1) we can probe whether a
model's answer will be truthful before it is generated; (2) finetuning a model
on a set of facts improves its truthfulness on unseen topics. Next, using
arithmetic as a synthetic environment, we show that language models can
separate true and false statements, and generalize truthfulness across agents;
but only if agents in the training data share a truthful generative process
that enables the creation of a truthful persona. Overall, our findings suggest
that models can exploit hierarchical structures in the data to learn abstract
concepts like truthfulness. | Computational Linguistics |
What field is the article from? | Title: Panoptic Video Scene Graph Generation
Abstract: Towards building comprehensive real-world visual perception systems, we
propose and study a new problem called panoptic video scene graph generation (PVSG).
PVSG relates to the existing video scene graph generation (VidSGG) problem,
which focuses on temporal interactions between humans and objects grounded with
bounding boxes in videos. However, the limitation of bounding boxes in
detecting non-rigid objects and backgrounds often causes VidSGG to miss key
details crucial for comprehensive video understanding. In contrast, PVSG
requires nodes in scene graphs to be grounded by more precise, pixel-level
segmentation masks, which facilitate holistic scene understanding. To advance
research in this new area, we contribute the PVSG dataset, which consists of
400 videos (289 third-person + 111 egocentric videos) with a total of 150K
frames labeled with panoptic segmentation masks as well as fine, temporal scene
graphs. We also provide a variety of baseline methods and share useful design
practices for future work. | Computer Vision |
What field is the article from? | Title: A Unified View on Forgetting and Strong Equivalence Notions in Answer Set Programming
Abstract: Answer Set Programming (ASP) is a prominent rule-based language for knowledge
representation and reasoning with roots in logic programming and non-monotonic
reasoning. The aim to capture the essence of removing (ir)relevant details in
ASP programs led to the investigation of different notions, from strong
persistence (SP) forgetting, to faithful abstractions, and, recently, strong
simplifications, where the latter two can be seen as relaxed and strengthened
notions of forgetting, respectively. Although it was observed that these
notions are related, especially given that they have characterizations through
the semantics for strong equivalence, it remained unclear whether they can be
brought together. In this work, we bridge this gap by introducing a novel
relativized equivalence notion, which is a relaxation of the recent
simplification notion, that is able to capture all related notions from the
literature. We provide necessary and sufficient conditions for relativized
simplifiability, which shows that the challenging part is for when the context
programs do not contain all the atoms to remove. We then introduce an operator
that combines projection and a relaxation of (SP)-forgetting to obtain the
relativized simplifications. We furthermore present complexity results that
complete the overall picture. | Artificial Intelligence |
What field is the article from? | Title: Probable Object Location (POLo) Score Estimation for Efficient Object Goal Navigation
Abstract: To advance the field of autonomous robotics, particularly in object search
tasks within unexplored environments, we introduce a novel framework centered
around the Probable Object Location (POLo) score. Utilizing a 3D object
probability map, the POLo score allows the agent to make data-driven decisions
for efficient object search. We further enhance the framework's practicality by
introducing POLoNet, a neural network trained to approximate the
computationally intensive POLo score. Our approach addresses critical
limitations of both end-to-end reinforcement learning methods, which suffer
from memory decay over long-horizon tasks, and traditional map-based methods
that neglect visibility constraints. Our experiments, involving the first phase
of the OVMM 2023 challenge, demonstrate that an agent equipped with POLoNet
significantly outperforms a range of baseline methods, including end-to-end RL
techniques and prior map-based strategies. To provide a comprehensive
evaluation, we introduce new performance metrics that offer insights into the
efficiency and effectiveness of various agents in object goal navigation. | Robotics |
What field is the article from? | Title: Efficiently Adapting Pretrained Language Models To New Languages
Abstract: Recent large language models (LLMs) exhibit sub-optimal performance on
low-resource languages, as the training data of these models is usually
dominated by English and other high-resource languages. Furthermore, it is
challenging to train models for low-resource languages, especially from
scratch, due to a lack of high quality training data. Adapting pretrained LLMs
reduces the need for data in the new language while also providing
cross-lingual transfer capabilities. However, naively adapting to new languages leads
to catastrophic forgetting and poor tokenizer efficiency. In this work, we
study how to efficiently adapt any existing pretrained LLM to a new language
without running into these issues. In particular, we improve the encoding
efficiency of the tokenizer by adding new tokens from the target language and
study the data mixing recipe to mitigate forgetting. Our experiments on
adapting an English LLM to Hungarian and Thai show that our recipe can reach
better performance than open source models on the target language, with minimal
regressions on English. | Computational Linguistics |
What field is the article from? | Title: Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning
Abstract: Split Federated Learning (SFL) has recently emerged as a promising
distributed learning technology, leveraging the strengths of both federated
learning and split learning. It emphasizes the advantages of rapid convergence
while addressing privacy concerns. As a result, this innovation has received
significant attention from both industry and academia. However, since the model
is split at a specific layer, known as a cut layer, into both client-side and
server-side models for the SFL, the choice of the cut layer in SFL can have a
substantial impact on the energy consumption of clients and their privacy, as
it influences the training burden and the output of the client-side models.
Moreover, the design challenge of determining the cut layer is highly
intricate, primarily due to the inherent heterogeneity in the computing and
networking capabilities of clients. In this article, we provide a comprehensive
overview of the SFL process and conduct a thorough analysis of energy
consumption and privacy. This analysis takes into account the influence of
various system parameters on the cut layer selection strategy. Additionally, we
provide an illustrative example of the cut layer selection, aiming to minimize
the risk of clients from reconstructing the raw data at the server while
keeping energy consumption within the required energy budget, which involves
trade-offs. Finally, we address open challenges in this field, including
applications to 6G technology. These directions represent promising avenues for
future research and development. | Machine Learning |
What field is the article from? | Title: Causal Models Applied to the Patterns of Human Migration due to Climate Change
Abstract: The impacts of mass migration, such as crises induced by climate change,
extend beyond environmental concerns and can greatly affect social
infrastructure and public services, such as education, healthcare, and
security. These crises exacerbate factors such as cultural barriers and
discrimination, amplifying the challenges faced by the affected
communities. This paper proposes an innovative approach to address migration
crises in the context of crisis management through a combination of modeling
and imbalance assessment tools. By employing deep learning for forecasting and
integrating causal reasoning via Bayesian networks, this methodology enables
the evaluation of imbalances and risks in the socio-technological landscape,
providing crucial insights for informed decision-making. Through this
framework, critical systems can be analyzed to understand how fluctuations in
migration levels may impact them, facilitating effective crisis governance
strategies. | Computers and Society |
What field is the article from? | Title: Greedy PIG: Adaptive Integrated Gradients
Abstract: Deep learning has become the standard approach for most machine learning
tasks. While its impact is undeniable, interpreting the predictions of deep
learning models from a human perspective remains a challenge. In contrast to
model training, model interpretability is harder to quantify and pose as an
explicit optimization problem. Inspired by the AUC softmax information curve
(AUC SIC) metric for evaluating feature attribution methods, we propose a
unified discrete optimization framework for feature attribution and feature
selection based on subset selection. This leads to a natural adaptive
generalization of the path integrated gradients (PIG) method for feature
attribution, which we call Greedy PIG. We demonstrate the success of Greedy PIG
on a wide variety of tasks, including image feature attribution, graph
compression/explanation, and post-hoc feature selection on tabular data. Our
results show that introducing adaptivity is a versatile way of making
attribution methods more powerful. | Machine Learning |
What field is the article from? | Title: Cooperative Network Learning for Large-Scale and Decentralized Graphs
Abstract: Graph research, the systematic study of interconnected data points
represented as graphs, plays a vital role in capturing intricate relationships
within networked systems. However, in the real world, as graphs scale up,
concerns about data security among different data-owning agencies arise,
hindering information sharing and, ultimately, the utilization of graph data.
Therefore, establishing a mutual trust mechanism among graph agencies is
crucial for unlocking the full potential of graphs. Here, we introduce a
Cooperative Network Learning (CNL) framework to ensure secure graph computing
for various graph tasks. Essentially, this CNL framework unifies the local and
global perspectives of GNN computing with distributed data for an agency by
virtually connecting all participating agencies as a global graph without a
fixed central coordinator. Inter-agency computing is protected by various
technologies inherent in our framework, including homomorphic encryption and
secure transmission. Moreover, each agency has a fair right to design or employ
various graph learning models from its local or global perspective. Thus, CNL
can collaboratively train GNN models based on decentralized graphs inferred
from local and global graphs. Experiments on contagion dynamics prediction and
traditional graph tasks (i.e., node classification and link prediction)
demonstrate that our CNL architecture outperforms state-of-the-art GNNs
developed at individual sites, revealing that CNL can provide a reliable, fair,
secure, privacy-preserving, and global perspective to build effective and
personalized models for network applications. We hope this framework will
address privacy concerns in graph-related research and integrate decentralized
graph data structures to benefit the network research community in cooperation
and innovation. | Machine Learning |
What field is the article from? | Title: Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales
Abstract: Machine reasoning has made great progress in recent years owing to large
language models (LLMs). In the clinical domain, however, most NLP-driven
projects mainly focus on clinical classification or reading comprehension, and
under-explore clinical reasoning for disease diagnosis due to the expensive
rationale annotation with clinicians. In this work, we present a
``reasoning-aware'' diagnosis framework that rationalizes the diagnostic
process via prompt-based learning in a time- and labor-efficient manner, and
learns to reason over the prompt-generated rationales. Specifically, we address
the clinical reasoning for disease diagnosis, where the LLM generates
diagnostic rationales providing its insight on presented patient data and the
reasoning path towards the diagnosis, namely Clinical Chain-of-Thought
(Clinical CoT). We empirically demonstrate LLMs/LMs' ability of clinical
reasoning via extensive experiments and analyses on both rationale generation
and disease diagnosis in various settings. We further propose a novel set of
criteria for evaluating machine-generated rationales' potential for real-world
clinical settings, facilitating and benefiting future research in this area. | Computational Linguistics |
What field is the article from? | Title: Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
Abstract: We propose a method for accelerating large-scale pre-training with online
data selection policies. For the first time, we demonstrate that model-based
data selection can reduce the total computation needed to reach the performance
of models trained with uniform sampling. The key insight which enables this
"compute-positive" regime is that small models provide good proxies for the
loss of much larger models, such that computation spent on scoring data can be
drastically scaled down but still significantly accelerate training of the
learner. These data selection policies also strongly generalize across
datasets and tasks, opening an avenue for further amortizing the overhead of
data scoring by re-using off-the-shelf models and training sequences. Our
methods, ClassAct and ActiveCLIP, require 46% and 51% fewer training updates
and up to 25% less total computation when training visual classifiers on JFT
and multimodal models on ALIGN, respectively. Finally, our paradigm seamlessly
applies to the curation of large-scale image-text datasets, yielding a new
state-of-the-art in several multimodal transfer tasks and pre-training regimes. | Artificial Intelligence |
What field is the article from? | Title: Apollo: Zero-shot MultiModal Reasoning with Multiple Experts
Abstract: We propose a modular framework that leverages the expertise of different
foundation models over different modalities and domains in order to perform a
single, complex, multi-modal task, without relying on prompt engineering or
otherwise tailor-made multi-modal training. Our approach enables decentralized
command execution and allows each model to both contribute and benefit from the
expertise of the other models. Our method can be extended to a variety of
foundation models (including audio and vision), above and beyond only language
models, as it does not depend on prompts. We demonstrate our approach on two
tasks. On the well-known task of stylized image captioning, our experiments
show that our approach outperforms semi-supervised state-of-the-art models,
while being zero-shot and avoiding costly training, data collection, and prompt
engineering. We further demonstrate this method on a novel task, audio-aware
image captioning, in which an image and audio are given and the task is to
generate text that describes the image within the context of the provided
audio. Our code is available on GitHub. | Computational Linguistics |
What field is the article from? | Title: A multi-modal table tennis robot system
Abstract: In recent years, robotic table tennis has become a popular research challenge
for perception and robot control. Here, we present an improved table tennis
robot system with high accuracy vision detection and fast robot reaction. Based
on previous work, our system comprises a KUKA robot arm with 6 DOF, four
frame-based cameras and two additional event-based cameras. We developed a
novel calibration approach to calibrate this multimodal perception system. For
table tennis, spin estimation is crucial. Therefore, we introduced a novel and
more accurate spin estimation approach. Finally, we show how combining the
output of an event-based camera and a Spiking Neural Network (SNN) can be used
for accurate ball detection. | Robotics |
What field is the article from? | Title: Sample Dominance Aware Framework via Non-Parametric Estimation for Spontaneous Brain-Computer Interface
Abstract: Deep learning has shown promise in decoding brain signals, such as
electroencephalogram (EEG), in the field of brain-computer interfaces (BCIs).
However, the non-stationary characteristics of EEG signals pose challenges for
training neural networks to acquire appropriate knowledge. Inconsistent EEG
signals resulting from these non-stationary characteristics can lead to poor
performance. Therefore, it is crucial to investigate and address sample
inconsistency to ensure robust performance in spontaneous BCIs. In this study,
we introduce the concept of sample dominance as a measure of EEG signal
inconsistency and propose a method to modulate its effect on network training.
We present a two-stage dominance score estimation technique that compensates
for performance degradation caused by sample inconsistencies. Our proposed
method utilizes non-parametric estimation to infer sample inconsistency and
assigns each sample a dominance score. This score is then aggregated with the
loss function during training to modulate the impact of sample inconsistency.
Furthermore, we design a curriculum learning approach that gradually increases
the influence of inconsistent signals during training to improve overall
performance. We evaluate our proposed method using a public spontaneous BCI
dataset. The experimental results confirm the importance of addressing sample
dominance for achieving robust performance in
spontaneous BCIs. | Machine Learning |
What field is the article from? | Title: Enhancing Person Re-Identification through Tensor Feature Fusion
Abstract: In this paper, we present a novel person re-identification (PRe-ID) system
based on tensor feature representation and multilinear subspace learning.
Our approach utilizes pretrained CNNs for high-level feature extraction, along
with Local Maximal Occurrence (LOMO) and Gaussian Of Gaussian (GOG)
descriptors. Additionally, the Tensor Cross-View Quadratic Discriminant Analysis (TXQDA)
algorithm is used for multilinear subspace learning, which models the data in a
tensor framework to enhance discriminative capabilities. A similarity measure
based on the Mahalanobis distance is used for matching between training and test
pedestrian images. Experimental evaluations on VIPeR and PRID450s datasets
demonstrate the effectiveness of our method. | Computer Vision |
What field is the article from? | Title: Improved DDIM Sampling with Moment Matching Gaussian Mixtures
Abstract: We propose using a Gaussian Mixture Model (GMM) as reverse transition
operator (kernel) within the Denoising Diffusion Implicit Models (DDIM)
framework, which is one of the most widely used approaches for accelerated
sampling from pre-trained Denoising Diffusion Probabilistic Models (DDPM).
Specifically we match the first and second order central moments of the DDPM
forward marginals by constraining the parameters of the GMM. We see that moment
matching is sufficient to obtain samples with equal or better quality than the
original DDIM with Gaussian kernels. We provide experimental results with
unconditional models trained on the CelebAHQ and FFHQ datasets and
class-conditional models trained on ImageNet. Our results suggest that using the
GMM kernel leads to significant improvements in the quality of the generated
samples when the number of sampling steps is small, as measured by FID and IS
metrics. For example on ImageNet 256x256, using 10 sampling steps, we achieve a
FID of 6.94 and IS of 207.85 with a GMM kernel compared to 10.15 and 196.73
respectively with a Gaussian kernel. | Computer Vision |
What field is the article from? | Title: BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models
Abstract: Diffusion models have made tremendous progress in text-driven image and video
generation. Now text-to-image foundation models are widely applied to various
downstream image synthesis tasks, such as controllable image generation and
image editing, while downstream video synthesis tasks are less explored for
several reasons. First, it requires huge memory and compute overhead to train a
video generation foundation model. Even with video foundation models,
additional costly training is still required for downstream video synthesis
tasks. Second, although some works extend image diffusion models into videos in
a training-free manner, temporal consistency cannot be well maintained. Finally,
these adaptation methods are specifically designed for one task and fail to
generalize to different downstream video synthesis tasks. To mitigate these
issues, we propose a training-free general-purpose video synthesis framework,
coined as BIVDiff, via bridging specific image diffusion models and general
text-to-video foundation diffusion models. Specifically, we first use an image
diffusion model (like ControlNet, Instruct Pix2Pix) for frame-wise video
generation, then perform Mixed Inversion on the generated video, and finally
input the inverted latents into the video diffusion model for temporal
smoothing. Decoupling image and video models enables flexible image model
selection for different purposes, which endows the framework with strong task
generalization and high efficiency. To validate the effectiveness and general
use of BIVDiff, we perform a wide range of video generation tasks, including
controllable video generation, video editing, video inpainting, and outpainting.
Our project page is available at https://bivdiff.github.io. | Computer Vision |
What field is the article from? | Title: Proceedings Fifth International Workshop on Formal Methods for Autonomous Systems
Abstract: This EPTCS volume contains the proceedings for the Fifth International
Workshop on Formal Methods for Autonomous Systems (FMAS 2023), which was held
on the 15th and 16th of November 2023. FMAS 2023 was co-located with the 18th
International Conference on integrated Formal Methods (iFM'23), organised
by the Leiden Institute of Advanced Computer Science of Leiden University. The
workshop itself was held at Scheltema Leiden, a renovated 19th Century blanket
factory alongside the canal.
FMAS 2023 received 25 submissions. We received 11 regular papers, 3
experience reports, 6 research previews, and 5 vision papers. The researchers
who submitted papers to FMAS 2023 were from institutions in: Australia, Canada,
Colombia, France, Germany, Ireland, Italy, the Netherlands, Sweden, the United
Kingdom, and the United States of America. Increasing our number of submissions
for the third year in a row is an encouraging sign that FMAS has established
itself as a reputable publication venue for research on the formal modelling
and verification of autonomous systems. After each paper was reviewed by three
members of our Programme Committee, we accepted a total of 15 papers: 8 long
papers and 7 short papers. | Artificial Intelligence |
What field is the article from? | Title: Learning adaptive planning representations with natural language guidance
Abstract: Effective planning in the real world requires not only world knowledge, but
the ability to leverage that knowledge to build the right representation of the
task at hand. Decades of hierarchical planning techniques have used
domain-specific temporal action abstractions to support efficient and accurate
planning, almost always relying on human priors and domain knowledge to
decompose hard tasks into smaller subproblems appropriate for a goal or set of
goals. This paper describes Ada (Action Domain Acquisition), a framework for
automatically constructing task-specific planning representations using
task-general background knowledge from language models (LMs). Starting with a
general-purpose hierarchical planner and a low-level goal-conditioned policy,
Ada interactively learns a library of planner-compatible high-level action
abstractions and low-level controllers adapted to a particular domain of
planning tasks. On two language-guided interactive planning benchmarks (Mini
Minecraft and ALFRED Household Tasks), Ada strongly outperforms other
approaches that use LMs for sequential decision-making, offering more accurate
plans and better generalization to complex tasks. | Artificial Intelligence |
What field is the article from? | Title: HALO: An Ontology for Representing Hallucinations in Generative Models
Abstract: Recent progress in generative AI, including large language models (LLMs) like
ChatGPT, has opened up significant opportunities in fields ranging from natural
language processing to knowledge discovery and data mining. However, there is
also a growing awareness that the models can be prone to problems such as
making information up or `hallucinations', and faulty reasoning on seemingly
simple problems. Because of the popularity of models like ChatGPT, both
academic scholars and citizen scientists have documented hallucinations of
several different types and severity. Despite this body of work, a formal model
for describing and representing these hallucinations (with relevant meta-data)
at a fine-grained level is still lacking. In this paper, we address this gap
by presenting the Hallucination Ontology or HALO, a formal, extensible ontology
written in OWL that currently offers support for six different types of
hallucinations known to arise in LLMs, along with support for provenance and
experimental metadata. We also collect and publish a dataset containing
hallucinations that we inductively gathered across multiple independent Web
sources, and show that HALO can be successfully used to model this dataset and
answer competency questions. | Artificial Intelligence |
What field is the article from? | Title: Revolutionizing Healthcare Image Analysis in Pandemic-Based Fog-Cloud Computing Architectures
Abstract: The emergence of pandemics has significantly emphasized the need for
effective solutions in healthcare data analysis. One particular challenge in
this domain is the manual examination of medical images, such as X-rays and CT
scans. This process is time-consuming and involves the logistical complexities
of transferring these images to centralized cloud computing servers.
Additionally, the speed and accuracy of image analysis are vital for efficient
healthcare image management. This research paper introduces an innovative
healthcare architecture that tackles the challenges of analysis efficiency and
accuracy by harnessing the capabilities of Artificial Intelligence (AI).
Specifically, the proposed architecture utilizes fog computing and presents a
modified Convolutional Neural Network (CNN) designed specifically for image
analysis. Different architectures of CNN layers are thoroughly explored and
evaluated to optimize overall performance. To demonstrate the effectiveness of
the proposed approach, a dataset of X-ray images is utilized for analysis and
evaluation. Comparative assessments are conducted against recent models such as
VGG16, VGG19, MobileNet, and related research papers. Notably, the proposed
approach achieves an exceptional accuracy rate of 99.88% in classifying normal
cases, accompanied by a validation rate of 96.5%, precision and recall rates of
100%, and an F1 score of 100%. These results highlight the immense potential of
fog computing and modified CNNs in revolutionizing healthcare image analysis
and diagnosis, not only during pandemics but also in the future. By leveraging
these technologies, healthcare professionals can enhance the efficiency and
accuracy of medical image analysis, leading to improved patient care and
outcomes. | Computer Vision |
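As a concrete illustration of the kind of compact CNN explored above (the exact layer configuration in the paper is not reproduced here; this is a generic sketch under assumed input sizes):

```python
# A small CNN classifier of the kind suitable for fog-layer X-ray analysis;
# architecture, input size, and class count are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),   # assumes 224x224 grayscale inputs, 2 classes
)
logits = model(torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```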
What field is the article from? | Title: Three Conjectures on Unexpectedness
Abstract: Unexpectedness is a central concept in Simplicity Theory, a theory of
cognition relating various inferential processes to the computation of
Kolmogorov complexities, rather than probabilities. Its predictive power has
been confirmed by several experiments with human subjects, yet its theoretical
basis remains largely unexplored: why does it work? This paper lays the
groundwork for three theoretical conjectures. First, unexpectedness can be seen
as a generalization of Bayes' rule. Second, the frequentist core of
unexpectedness can be connected to the function of tracking ergodic properties
of the world. Third, unexpectedness can be seen as a constituent of various
measures of divergence between the entropy of the world (environment) and the
variety of the observer (system). The resulting framework hints at research
directions that go beyond the division between probabilistic and logical
approaches, potentially bringing new insights into the extraction of causal
relations, and into the role of descriptive mechanisms in learning. | Artificial Intelligence |
What field is the article from? | Title: RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D
Abstract: Lifting 2D diffusion for 3D generation is a challenging problem due to the
lack of geometric prior and the complex entanglement of materials and lighting
in natural images. Existing methods have shown promise by first creating the
geometry through score-distillation sampling (SDS) applied to rendered surface
normals, followed by appearance modeling. However, relying on a 2D RGB
diffusion model to optimize surface normals is suboptimal due to the
distribution discrepancy between natural images and normal maps, leading to
instability in optimization. In this paper, recognizing that normal and
depth information effectively describes scene geometry and can be automatically
estimated from images, we propose to learn a generalizable Normal-Depth
diffusion model for 3D generation. We achieve this by training on the
large-scale LAION dataset together with the generalizable image-to-depth and
normal prior models. In an attempt to alleviate the mixed illumination effects
in the generated materials, we introduce an albedo diffusion model to impose
data-driven constraints on the albedo component. Our experiments show that when
integrated into existing text-to-3D pipelines, our models significantly enhance
the detail richness, achieving state-of-the-art results. Our project page is
https://lingtengqiu.github.io/RichDreamer/. | Computer Vision |
What field is the article from? | Title: Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities
Abstract: Recent advances in artificial general intelligence (AGI), particularly large
language models and creative image generation systems have demonstrated
impressive capabilities on diverse tasks spanning the arts and humanities.
However, the swift evolution of AGI has also raised critical questions about
its responsible deployment in these culturally significant domains
traditionally seen as profoundly human. This paper provides a comprehensive
analysis of the applications and implications of AGI for text, graphics, audio,
and video pertaining to arts and the humanities. We survey cutting-edge systems
and their usage in areas ranging from poetry to history, marketing to film, and
communication to classical art. We outline substantial concerns pertaining to
factuality, toxicity, biases, and public safety in AGI systems, and propose
mitigation strategies. The paper argues for multi-stakeholder collaboration to
ensure AGI promotes creativity, knowledge, and cultural values without
undermining truth or human dignity. Our timely contribution summarizes a
rapidly developing field, highlighting promising directions while advocating
for responsible progress centering on human flourishing. The analysis lays the
groundwork for further research on aligning AGI's technological capacities with
enduring social goods. | Artificial Intelligence |
What field is the article from? | Title: PolyFit: A Peg-in-hole Assembly Framework for Unseen Polygon Shapes via Sim-to-real Adaptation
Abstract: The study addresses the foundational and challenging task of peg-in-hole
assembly in robotics, where misalignments caused by sensor inaccuracies and
mechanical errors often result in insertion failures or jamming. This research
introduces PolyFit, representing a paradigm shift by transitioning from a
reinforcement learning approach to a supervised learning methodology. PolyFit
is a Force/Torque (F/T)-based supervised learning framework designed for 5-DoF
peg-in-hole assembly. It utilizes F/T data for accurate extrinsic pose
estimation and adjusts the peg pose to rectify misalignments. Extensive
training in a simulated environment involves a dataset encompassing a diverse
range of peg-hole shapes, extrinsic poses, and their corresponding contact F/T
readings. To enhance extrinsic pose estimation, a multi-point contact strategy
is integrated into the model input, recognizing that identical F/T readings can
indicate different poses. The study proposes a sim-to-real adaptation method
for real-world application, using a sim-real paired dataset to enable effective
generalization to complex and unseen polygon shapes. PolyFit achieves
impressive peg-in-hole success rates of 97.3% and 96.3% for seen and unseen
shapes in simulations, respectively. Real-world evaluations further demonstrate
substantial success rates of 86.7% and 85.0%, highlighting the robustness and
adaptability of the proposed method. | Robotics |
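A minimal sketch of the supervised mapping at the core of the framework above: stacked multi-point contact F/T readings in, a pose estimate out. The number of contact points, network width, and 5-DoF output parameterization are assumptions for illustration.

```python
# Hypothetical regression from force/torque readings to an extrinsic pose.
import torch
import torch.nn as nn

K = 3                                  # assumed number of contact points
model = nn.Sequential(
    nn.Linear(K * 6, 128), nn.ReLU(),  # 6D force/torque per contact point
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 5),                 # 5-DoF pose estimate
)
ft = torch.randn(32, K * 6)            # a batch of stacked F/T readings
pose_target = torch.randn(32, 5)
loss = nn.functional.mse_loss(model(ft), pose_target)
loss.backward()
```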
What field is the article from? | Title: LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models
Abstract: We present LongQLoRA, an efficient and effective method to extend context
length of large language models with less training resources. LongQLoRA
combines the advantages of Position Interpolation, QLoRA and Shift Short
Attention of LongLoRA. With a single 32GB V100 GPU, LongQLoRA can extend the
context length of LLaMA2 7B and 13B from 4096 to 8192 and even to 12k within
1000 finetuning steps. LongQLoRA achieves competitive perplexity performance on
the PG19 and Proof-pile datasets; our model outperforms LongLoRA and is very close
to MPT-7B-8K within the evaluation context length of 8192. We collect and build
39k long instruction samples to extend the context length of Vicuna-13B from 4096
to 8192, achieving good performance on both long- and short-context generation
tasks. We also run ablation experiments to study the effects of LoRA rank,
finetuning steps, and attention patterns at inference. The model weights,
training data, and code are available at
https://github.com/yangjianxin1/LongQLoRA. | Computational Linguistics |
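One ingredient the method above reuses is Position Interpolation: positions beyond the pre-training window are compressed back into it by a constant scale factor before rotary angles are computed. A minimal sketch follows; the rotary-frequency formula is the standard RoPE one, and the lengths are the ones quoted in the abstract.

```python
# Position Interpolation: scale position indices by orig_len / target_len
# so an 8192-token sequence reuses the 0..4095 position range.
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)      # (seq, dim/2) rotation angles

orig_len, target_len, dim = 4096, 8192, 128
positions = np.arange(target_len)
scaled = positions * (orig_len / target_len)  # 0..8191 -> 0..4095.5
angles = rope_angles(scaled, dim)
print(angles.shape)                           # (8192, 64)
```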
What field is the article from? | Title: Representing visual classification as a linear combination of words
Abstract: Explainability is a longstanding challenge in deep learning, especially in
high-stakes domains like healthcare. Common explainability methods highlight
image regions that drive an AI model's decision. Humans, however, heavily rely
on language to convey explanations of not only "where" but "what".
Additionally, most explainability approaches focus on explaining individual AI
predictions, rather than describing the features used by an AI model in
general. The latter would be especially useful for model and dataset auditing,
and potentially even knowledge generation as AI is increasingly being used in
novel tasks. Here, we present an explainability strategy that uses a
vision-language model to identify language-based descriptors of a visual
classification task. By leveraging a pre-trained joint embedding space between
images and text, our approach estimates a new classification task as a linear
combination of words, resulting in a weight for each word that indicates its
alignment with the vision-based classifier. We assess our approach using two
medical imaging classification tasks, where we find that the resulting
descriptors largely align with clinical knowledge despite a lack of
domain-specific language training. However, our approach also identifies the
potential for 'shortcut connections' in the public datasets used. Towards a
functional measure of explainability, we perform a pilot reader study where we
find that the AI-identified words can enable non-expert humans to perform a
specialized medical task at a non-trivial level. Altogether, our results
emphasize the potential of using multimodal foundational models to deliver
intuitive, language-based explanations of visual tasks. | Artificial Intelligence |
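A minimal sketch of the estimation step described above: given text embeddings for a word list and a vision-based classifier direction in the same joint space, solve a least-squares problem for per-word weights. The random embeddings stand in for a CLIP-style model; the word list is a placeholder.

```python
# Express a classifier direction as a linear combination of word embeddings.
import numpy as np

rng = np.random.default_rng(0)
d, n_words = 512, 100
word_embs = rng.normal(size=(n_words, d))       # one text embedding per word
word_embs /= np.linalg.norm(word_embs, axis=1, keepdims=True)
clf_direction = rng.normal(size=d)              # vision-based classifier weights

# Least-squares weights: each word's contribution to the classifier direction.
w, *_ = np.linalg.lstsq(word_embs.T, clf_direction, rcond=None)
top = np.argsort(-np.abs(w))[:5]
print("most aligned word indices:", top)
```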
What field is the article from? | Title: Constraint Model for the Satellite Image Mosaic Selection Problem
Abstract: Satellite imagery solutions are widely used to study and monitor different
regions of the Earth. However, a single satellite image can cover only a
limited area. In cases where a larger area of interest is studied, several
images must be stitched together to create a single larger image, called a
mosaic, that can cover the area. Today, with the increasing number of satellite
images available for commercial use, selecting the images to build the mosaic
is challenging, especially when the user wants to optimize one or more
parameters, such as the total cost and the cloud coverage percentage in the
mosaic. More precisely, for this problem the input is an area of interest,
several satellite images intersecting the area, a list of requirements relative
to the image and the mosaic, such as cloud coverage percentage, image
resolution, and a list of objectives to optimize. We contribute to the
constraint and mixed integer linear programming formulation of this new
problem, which we call the \textit{satellite image mosaic selection problem},
which is a multi-objective extension of the polygon cover problem. We propose a
dataset of realistic and challenging instances, where the images were captured
by the satellite constellations SPOT, Pl\'eiades and Pl\'eiades Neo. We
evaluate and compare the two proposed models and show their efficiency for
large instances, up to 200 images. | Artificial Intelligence |
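A minimal sketch of the selection problem above, cast as set cover over grid cells after discretizing the area of interest (the discretization and single cost objective are simplifying assumptions; the real formulation works on polygons with several objectives).

```python
# Pick a minimum-cost subset of images whose footprints cover every cell.
import pulp

n_cells = 6
images = {
    "img_a": {"cells": {0, 1, 2}, "cost": 3.0},
    "img_b": {"cells": {2, 3, 4}, "cost": 2.5},
    "img_c": {"cells": {4, 5}, "cost": 1.0},
    "img_d": {"cells": {0, 3, 5}, "cost": 2.0},
}
prob = pulp.LpProblem("mosaic_selection", pulp.LpMinimize)
x = {k: pulp.LpVariable(f"x_{k}", cat="Binary") for k in images}
prob += pulp.lpSum(images[k]["cost"] * x[k] for k in images)      # total cost
for c in range(n_cells):                                          # full coverage
    prob += pulp.lpSum(x[k] for k in images if c in images[k]["cells"]) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([k for k in images if x[k].value() >= 0.5])
```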
What field is the article from? | Title: Nova$^+$: Generative Language Models for Binaries
Abstract: Generative large language models (LLMs) pre-trained on code have shown
impressive effectiveness in code generation, program repair, and document
analysis. However, existing generative LLMs focus on source code and are not
specialized for binaries. There are three main challenges for LLMs to model and
learn binary code: hexadecimal values, complex global dependencies, and
compiler optimization levels. To bring the benefit of LLMs to the binary
domain, we develop Nova and Nova$^+$, which are LLMs pre-trained on binary
corpora. Nova is pre-trained with the standard language modeling task, showing
significantly better capability on five benchmarks for three downstream tasks:
binary code similarity detection (BCSD), binary code translation (BCT), and
binary code recovery (BCR), over GPT-3.5 and other existing techniques. We
build Nova$^+$ to further boost Nova using two new pre-training tasks, i.e.,
optimization generation and optimization level prediction, which are designed
to learn binary optimization and align equivalent binaries. Nova$^+$ shows
overall the best performance for all three downstream tasks on five benchmarks,
demonstrating the contributions of the new pre-training tasks. | Software Engineering |
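One of the new pre-training tasks above, optimization level prediction, can be illustrated as a small classification objective: predict which -O level produced a tokenized binary. The tokenization, pooling, and sizes here are assumptions; the actual model is a generative LLM carrying this as an auxiliary task.

```python
# Hypothetical optimization-level-prediction head over tokenized binaries.
import torch
import torch.nn as nn

vocab_size, n_levels = 512, 4                    # e.g., -O0 through -O3
emb = nn.Embedding(vocab_size, 64)
head = nn.Linear(64, n_levels)

tokens = torch.randint(0, vocab_size, (8, 256))  # a batch of tokenized binaries
levels = torch.randint(0, n_levels, (8,))        # which -O level produced each
pooled = emb(tokens).mean(dim=1)                 # mean-pool token embeddings
loss = nn.functional.cross_entropy(head(pooled), levels)
loss.backward()
```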
What field is the article from? | Title: tsMorph: generation of semi-synthetic time series to understand algorithm performance
Abstract: Time series forecasting is a subject of significant scientific and industrial
importance. Despite the widespread utilization of forecasting methods, there is
a dearth of research aimed at comprehending the conditions under which these
methods yield favorable or unfavorable performances. Empirical studies,
although common, encounter challenges due to the limited availability of
datasets, impeding the extraction of reliable insights. To address this, we
present tsMorph, a straightforward approach for generating semi-synthetic time
series through dataset morphing. tsMorph operates by creating a sequence of
datasets derived from two original datasets. These newly generated datasets
exhibit a progressive departure from the characteristics of one dataset and a
convergence toward the attributes of the other. This method provides a valuable
alternative for obtaining substantial datasets. In this paper, we demonstrate
the utility of tsMorph by assessing the performance of the Long Short-Term
Memory Network forecasting algorithm. The time series under examination are
sourced from the NN5 Competition. The findings reveal compelling insights.
Notably, the performance of the Long Short-Term Memory Network improves
proportionally with the frequency of the time series. These experiments affirm
that tsMorph serves as an effective tool for gaining an understanding of
forecasting algorithm behaviors, offering a pathway to overcome the limitations
posed by empirical studies and enabling more extensive and reliable
experimentation. | Machine Learning |
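The morphing idea above admits a simple reading: a sequence of convex combinations of two source series, departing from one and converging to the other. Whether tsMorph interpolates exactly this way is an assumption, but it matches the progressive departure/convergence the abstract describes.

```python
# Generate semi-synthetic series interpolating between two source series.
import numpy as np

def morph(series_a, series_b, n_steps):
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1 - a) * series_a + a * series_b for a in alphas]

t = np.arange(200)
a = np.sin(0.1 * t)                 # source dataset A
b = 0.01 * t + np.sin(0.5 * t)      # source dataset B (trend + higher frequency)
sequence = morph(a, b, n_steps=10)  # 10 semi-synthetic series from A toward B
print(len(sequence), sequence[0].shape)
```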
What field is the article from? | Title: PCRDiffusion: Diffusion Probabilistic Models for Point Cloud Registration
Abstract: We propose a new framework that formulates point cloud registration as a
denoising diffusion process from noisy transformation to object transformation.
During training stage, object transformation diffuses from ground-truth
transformation to random distribution, and the model learns to reverse this
noising process. In sampling stage, the model refines randomly generated
transformation to the output result in a progressive way. We derive the
variational bound in closed form for training and provide implementations of
the model. Our work provides the following crucial findings: (i) In contrast to
most existing methods, our framework, Diffusion Probabilistic Models for Point
Cloud Registration (PCRDiffusion), does not require repeatedly updating the source
point cloud to refine the predicted transformation. (ii) Point cloud
registration, one of the representative discriminative tasks, can be solved in
a generative way under a unified probabilistic formulation. Finally, we discuss
and provide an outlook on the application of diffusion models in different
scenarios for point cloud registration. Experimental results demonstrate that
our model achieves competitive performance in point cloud registration. In
both correspondence-free and correspondence-based scenarios, PCRDiffusion
achieves performance improvements exceeding 50\%. | Computer Vision |
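The forward (noising) half of the process described above is the standard DDPM one, applied here to a transformation rather than an image. A minimal sketch, assuming a 6-vector parameterization (translation plus axis-angle rotation, an illustrative choice):

```python
# Forward diffusion q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)
# applied to a ground-truth transformation vector.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = np.array([0.1, -0.2, 0.3, 0.01, 0.02, -0.03])  # ground-truth transform
t = 500
xt = np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * rng.standard_normal(6)
print(xt)  # a noised transformation the model learns to denoise back to x0
```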
What field is the article from? | Title: Forecasting Auxiliary Energy Consumption for Electric Heavy-Duty Vehicles
Abstract: Accurate energy consumption prediction is crucial for optimizing the
operation of electric commercial heavy-duty vehicles, e.g., route planning for
charging. Moreover, understanding why certain predictions are made is paramount
for such a predictive model to gain user trust and be deployed in practice.
Since commercial vehicles operate differently as transportation tasks, ambient
conditions, and drivers vary, a heterogeneous population is expected when building an AI
system for forecasting energy consumption. The dependencies between the input
features and the target values are expected to also differ across
sub-populations. One well-known example of such a statistical phenomenon is
Simpson's paradox. In this paper, we illustrate that such a setting poses a
challenge for existing XAI methods that produce global feature statistics, e.g.
LIME or SHAP, causing them to yield misleading results. We demonstrate a
potential solution by training multiple regression models on subsets of data.
It not only leads to superior regression performance but also more relevant and
consistent LIME explanations. Given that the employed groupings correspond to
relevant sub-populations, the associations between the input features and the
target values are consistent within each cluster but different across clusters.
Experiments on both synthetic and real-world datasets show that such splitting
of a complex problem into simpler ones yields better regression performance and
interpretability. | Machine Learning |
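A minimal sketch of the split-then-fit strategy above: cluster the data into sub-populations, fit one regressor per cluster, and inspect each locally. The synthetic data is constructed (an assumption for illustration) so the feature-target association flips sign across sub-populations, as in Simpson's paradox.

```python
# Cluster, then fit a separate linear model per sub-population.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
X[:, 0] += np.where(rng.random(600) < 0.5, -3.0, 3.0)  # separated sub-populations
y = np.where(X[:, 0] > 0, 2.0, -2.0) * X[:, 1] + 0.1 * rng.normal(size=600)

print(np.round(LinearRegression().fit(X, y).coef_, 2))  # global fit washes out feature 1
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for c in (0, 1):
    m = LinearRegression().fit(X[labels == c], y[labels == c])
    print(c, np.round(m.coef_, 2))  # feature-1 coefficient flips sign across clusters
```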
What field is the article from? | Title: CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting
Abstract: With the onset of diffusion-based generative models and their ability to
generate text-conditioned images, content generation has received a massive
invigoration. Recently, these models have been shown to provide useful guidance
for the generation of 3D graphics assets. However, existing work in
text-conditioned 3D generation faces fundamental constraints: (i) inability to
generate detailed, multi-object scenes, (ii) inability to textually control
multi-object configurations, and (iii) inability to ensure physically realistic scene composition.
In this work, we propose CG3D, a method for compositionally generating scalable
3D assets that resolves these constraints. We find that explicit Gaussian
radiance fields, parameterized to allow for compositions of objects, possess
the capability to enable semantically and physically consistent scenes. By
utilizing a guidance framework built around this explicit representation, we
show state-of-the-art results, even exceeding the guiding diffusion
model in terms of object combinations and physics accuracy. | Computer Vision |
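The composition of explicit, per-object Gaussian radiance fields described above can be sketched as a rigid placement plus concatenation of Gaussian attributes. The attribute set (means and colors only) and the transforms here are illustrative assumptions.

```python
# Compose per-object Gaussian sets into one scene: transform means, concatenate.
import numpy as np

def compose(objects, rotations, translations):
    means, colors = [], []
    for (mu, col), R, t in zip(objects, rotations, translations):
        means.append(mu @ R.T + t)   # rigidly place the object in the scene
        colors.append(col)
    return np.concatenate(means), np.concatenate(colors)

rng = np.random.default_rng(0)
def obj(n):                          # a toy object: n Gaussians (means, colors)
    return rng.normal(size=(n, 3)), rng.random(size=(n, 3))

R = np.eye(3)
scene_means, scene_colors = compose(
    [obj(100), obj(150)], [R, R], [np.zeros(3), np.array([2.0, 0.0, 0.0])]
)
print(scene_means.shape, scene_colors.shape)  # (250, 3) (250, 3)
```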