Dataset columns (one field per line in each record below): abs, Download PDF, OpenReview, title, url, authors, detail_url, tags (single value: ICML 2024), abstract.
https://proceedings.mlr.press/v235/abad-rocamora24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abad-rocamora24a/abad-rocamora24a.pdf
https://openreview.net/forum?id=AZWqXfM6z9
Revisiting Character-level Adversarial Attacks for Language Models
https://proceedings.mlr.press/v235/abad-rocamora24a.html
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher
https://proceedings.mlr.press/v235/abad-rocamora24a.html
ICML 2024
Adversarial attacks in Natural Language Processing apply perturbations at the character or token level. Token-level attacks, gaining prominence for their use of gradient-based methods, are susceptible to altering sentence semantics, leading to invalid adversarial examples. While character-level attacks easily maintain semantics, they have received less attention as they cannot easily adopt popular gradient-based methods, and are thought to be easy to defend. Challenging these beliefs, we introduce Charmer, an efficient query-based adversarial attack capable of achieving high attack success rate (ASR) while generating highly similar adversarial examples. Our method successfully targets both small (BERT) and large (Llama 2) models. Specifically, on BERT with SST-2, Charmer improves the ASR by $4.84$ percentage points and the USE similarity by $8$ percentage points with respect to the previous art. Our implementation is available at https://github.com/LIONS-EPFL/Charmer.
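For intuition only, the sketch below shows a generic greedy, query-based character-level attack loop of the kind the abstract describes; it is not the authors' Charmer algorithm, and the `score_fn` victim-model interface, alphabet, and edit budget are hypothetical placeholders.

```python
import string

def char_level_attack(sentence, score_fn, max_edits=3):
    """Greedy, query-based character-level attack sketch.

    score_fn(text) -> float is a hypothetical black-box interface to the victim
    model: higher means the model is pushed further from its original prediction.
    Illustration only, not the authors' Charmer algorithm.
    """
    alphabet = string.ascii_lowercase + " "
    adv = sentence
    for _ in range(max_edits):
        best_score, best_cand = score_fn(adv), adv
        for i in range(len(adv)):
            for c in alphabet:
                if c == adv[i]:
                    continue
                cand = adv[:i] + c + adv[i + 1:]
                s = score_fn(cand)   # one black-box query per candidate edit
                if s > best_score:
                    best_score, best_cand = s, cand
        if best_cand == adv:         # no single-character edit improves the score
            break
        adv = best_cand
    return adv

# Toy usage: a dummy score that just counts characters changed from the original.
original = "the movie was great"
dummy_score = lambda text: sum(a != b for a, b in zip(text, original))
print(char_level_attack(original, dummy_score, max_edits=2))
```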
https://proceedings.mlr.press/v235/abe24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abe24a/abe24a.pdf
https://openreview.net/forum?id=9U29U3cDKq
Adaptively Perturbed Mirror Descent for Learning in Games
https://proceedings.mlr.press/v235/abe24a.html
Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Atsushi Iwasaki
https://proceedings.mlr.press/v235/abe24a.html
ICML 2024
This paper proposes a payoff perturbation technique for the Mirror Descent (MD) algorithm in games where the gradient of the payoff functions is monotone in the strategy profile space, potentially containing additive noise. The optimistic family of learning algorithms, exemplified by optimistic MD, successfully achieves last-iterate convergence in scenarios devoid of noise, leading the dynamics to a Nash equilibrium. A recent re-emerging trend underscores the promise of the perturbation approach, where payoff functions are perturbed based on the distance from an anchoring, or slingshot, strategy. In response, we propose Adaptively Perturbed MD (APMD), which adjusts the magnitude of the perturbation by repeatedly updating the slingshot strategy at a predefined interval. This innovation empowers us to find a Nash equilibrium of the underlying game with guaranteed rates. Empirical demonstrations affirm that our algorithm exhibits significantly accelerated convergence.
https://proceedings.mlr.press/v235/abhyankar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abhyankar24a/abhyankar24a.pdf
https://openreview.net/forum?id=wDDGQabYPQ
InferCept: Efficient Intercept Support for Augmented Large Language Model Inference
https://proceedings.mlr.press/v235/abhyankar24a.html
Reyna Abhyankar, Zijian He, Vikranth Srivatsa, Hao Zhang, Yiying Zhang
https://proceedings.mlr.press/v235/abhyankar24a.html
ICML 2024
Large language models are increasingly integrated with external environments, tools, and agents like ChatGPT plugins to extend their capability beyond language-centric tasks. However, today’s LLM inference systems are designed for standalone LLMs. They treat each external interaction as the end of LLM generation and form a new request when the interaction finishes, causing unnecessary recomputation of already computed contexts, which accounts for 37-40% of total model forwarding time. This paper presents InferCept, the first LLM inference framework targeting augmented LLMs and supporting the efficient interception of LLM generation. InferCept minimizes the GPU resource waste caused by LLM interceptions and dedicates saved memory for serving more requests. InferCept improves the overall serving throughput by 1.6x-2x and completes 2x more requests per second compared to the state-of-the-art LLM inference systems.
https://proceedings.mlr.press/v235/acharya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/acharya24a/acharya24a.pdf
https://openreview.net/forum?id=MurkwIl0h3
Balancing Feature Similarity and Label Variability for Optimal Size-Aware One-shot Subset Selection
https://proceedings.mlr.press/v235/acharya24a.html
Abhinab Acharya, Dayou Yu, Qi Yu, Xumin Liu
https://proceedings.mlr.press/v235/acharya24a.html
ICML 2024
Subset or core-set selection offers a data-efficient way for training deep learning models. One-shot subset selection poses additional challenges as subset selection is only performed once and the full dataset becomes unavailable after the selection. However, most existing methods tend to choose either diverse or difficult data samples, which fail to faithfully represent the joint data distribution that is comprised of both feature and label information. The selection is also performed independently of the subset size, which plays an essential role in determining what types of samples to choose. To address this critical gap, we propose to conduct Feature similarity and Label variability Balanced One-shot Subset Selection (BOSS), aiming to construct an optimal size-aware subset for data-efficient deep learning. We show that a novel balanced core-set loss bound theoretically justifies the need to simultaneously consider both diversity and difficulty to form an optimal subset. It also reveals how the subset size influences the bound. We further connect the inaccessible bound to a practical surrogate target which is tailored to subset sizes and varying levels of overall difficulty. We design a novel Beta-scoring importance function to delicately control the optimal balance of diversity and difficulty. Comprehensive experiments conducted on both synthetic and real data justify the important theoretical properties and demonstrate the superior performance of BOSS as compared with the competitive baselines.
https://proceedings.mlr.press/v235/achituve24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/achituve24a/achituve24a.pdf
https://openreview.net/forum?id=GiHo83ozsF
Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning
https://proceedings.mlr.press/v235/achituve24a.html
Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya
https://proceedings.mlr.press/v235/achituve24a.html
ICML 2024
As machine learning becomes more prominent there is a growing demand to perform several inference tasks in parallel. Multi-task learning (MTL) addresses this challenge by learning a single model that solves several tasks simultaneously and efficiently. Often optimizing MTL models entails first computing the gradient of the loss for each task, and then aggregating all the gradients to obtain a combined update direction. However, common methods following this approach do not consider an important aspect: the sensitivity of the individual dimensions of the gradients. Some dimensions may be more tolerant of changes while others may be more restrictive. Here, we introduce a novel gradient aggregation procedure using Bayesian inference. We place a probability distribution over the task-specific parameters, which in turn induces a distribution over the gradients of the tasks. This valuable information allows us to quantify the uncertainty associated with each of the gradients’ dimensions, which is factored in when aggregating them. We empirically demonstrate the benefits of our approach in a variety of datasets, achieving state-of-the-art performance.
https://proceedings.mlr.press/v235/achtibat24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/achtibat24a/achtibat24a.pdf
https://openreview.net/forum?id=emtXYlBrNF
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
https://proceedings.mlr.press/v235/achtibat24a.html
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek
https://proceedings.mlr.press/v235/achtibat24a.html
ICML 2024
Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful attributions for the entirety of a black-box transformer model and maintaining computational efficiency is an unsolved challenge. By extending the Layer-wise Relevance Propagation attribution method to handle attention layers, we address these challenges effectively. While partial solutions exist, our method is the first to faithfully and holistically attribute not only input but also latent representations of transformer models with computational efficiency similar to that of a single backward pass. Through extensive evaluations against existing methods on LLaMa 2, Mixtral 8x7b, Flan-T5 and vision transformer architectures, we demonstrate that our proposed approach surpasses alternative methods in terms of faithfulness and enables the understanding of latent representations, opening up the door for concept-based explanations. We provide an LRP library at https://github.com/rachtibat/LRP-eXplains-Transformers.
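For reference, here is a minimal NumPy sketch of the classical epsilon-LRP rule for a single linear layer, which AttnLRP extends with attention-specific propagation rules; this is not the authors' library, and the layer shapes and stabiliser are illustrative assumptions.

```python
import numpy as np

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
    """Classical epsilon-LRP rule for a linear layer y = a @ W + b.

    a:      (d_in,) input activations
    W:      (d_in, d_out) weights
    R_out:  (d_out,) relevance arriving at the layer output
    Returns the (d_in,) relevance redistributed to the inputs.
    This is the standard rule, not the attention-specific AttnLRP rules.
    """
    z = a @ W + b                      # pre-activations, shape (d_out,)
    denom = z + eps * np.sign(z)       # stabilised denominator
    s = R_out / denom                  # per-output scaling factors
    return a * (W @ s)                 # redistribute relevance to the inputs

rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 3))
b = rng.normal(size=3)
R_out = rng.random(3)
R_in = lrp_epsilon_linear(a, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())   # relevance is approximately conserved
```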
https://proceedings.mlr.press/v235/adcock24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adcock24a/adcock24a.pdf
https://openreview.net/forum?id=wG2SgnH6Zv
A Unified Framework for Learning with Nonlinear Model Classes from Arbitrary Linear Samples
https://proceedings.mlr.press/v235/adcock24a.html
Ben Adcock, Juan M. Cardenas, Nick Dexter
https://proceedings.mlr.press/v235/adcock24a.html
ICML 2024
This work considers the fundamental problem of learning an unknown object from training data using a given model class. We introduce a framework that allows for objects in arbitrary Hilbert spaces, general types of (random) linear measurements as training data and general types of nonlinear model classes. We establish a series of learning guarantees for this framework, which provide explicit relations between the amount of training data and the model class to ensure near-best generalization bounds. In doing so, we introduce the key notion of the variation of a model class with respect to a distribution of sampling operators. We show that this framework can accommodate many different types of well-known problems of interest, such as matrix sketching by random sampling, compressed sensing with isotropic vectors, active learning in regression and compressed sensing with generative models. In all cases, known results become straightforward corollaries of our general theory. Hence, this work provides a powerful framework for studying and analyzing many different types of learning problems.
https://proceedings.mlr.press/v235/adepu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adepu24a/adepu24a.pdf
https://openreview.net/forum?id=xPypr0kufs
FrameQuant: Flexible Low-Bit Quantization for Transformers
https://proceedings.mlr.press/v235/adepu24a.html
Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh
https://proceedings.mlr.press/v235/adepu24a.html
ICML 2024
Transformers are the backbone of powerful foundation models for many Vision and Natural Language Processing tasks. But their compute and memory/storage footprint is large, and so serving such models is expensive, often requiring high-end hardware. To mitigate this difficulty, Post-Training Quantization seeks to modify a pre-trained model and quantize it to eight bits or lower, significantly boosting compute/memory/latency efficiency. Such models have been successfully quantized to four bits with some performance loss. In this work, we outline a simple scheme to quantize Transformer-based models to just two bits (plus some overhead) with only a small drop in accuracy. Key to our formulation is a concept borrowed from Harmonic analysis called Fusion Frames. Our main finding is that the quantization must take place not in the original weight space, but instead in the Fusion Frame representations. If quantization is interpreted as the addition of noise, our casting of the problem allows invoking an extensive body of known consistent recovery and noise robustness guarantees. Further, if desired, de-noising filters are known in closed form. We show empirically, via a variety of experiments, that (almost) two-bit quantization for Transformer models promises sizable efficiency gains. The code is available at https://github.com/vsingh-group/FrameQuant.
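A toy NumPy sketch of the general idea of quantizing in a redundant (here, Parseval) frame representation rather than directly in weight space, so that the synthesis step averages out quantization noise; the frame construction, quantizer, and sizes are illustrative assumptions and not the paper's Fusion Frame scheme.

```python
import numpy as np

def uniform_quantize(v, bits=2):
    """Symmetric uniform quantizer with 2**bits levels (illustrative only)."""
    levels = 2 ** bits
    step = 2 * np.abs(v).max() / levels
    idx = np.clip(np.floor(v / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

rng = np.random.default_rng(0)
n, m = 64, 512                       # weight dimension and redundant frame size
W = rng.standard_normal(n)           # a toy weight vector

# Parseval frame: an m x n analysis matrix with orthonormal columns (A.T @ A = I),
# so synthesis is simply A.T. A stand-in for the paper's Fusion Frames.
A, _ = np.linalg.qr(rng.standard_normal((m, n)))

direct = uniform_quantize(W)                 # 2-bit quantization in weight space
framed = A.T @ uniform_quantize(A @ W)       # quantize frame coefficients, then synthesize

print("2-bit error in weight space:", round(float(np.linalg.norm(direct - W)), 3))
print("2-bit error via frame      :", round(float(np.linalg.norm(framed - W)), 3))
```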
https://proceedings.mlr.press/v235/adhikary24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adhikary24a/adhikary24a.pdf
https://openreview.net/forum?id=myCgfQZzbc
BeigeMaps: Behavioral Eigenmaps for Reinforcement Learning from Images
https://proceedings.mlr.press/v235/adhikary24a.html
Sandesh Adhikary, Anqi Li, Byron Boots
https://proceedings.mlr.press/v235/adhikary24a.html
ICML 2024
Training reinforcement learning (RL) agents directly from high-dimensional image observations continues to be a challenging problem. A recent line of work on behavioral distances proposes to learn representations that encode behavioral similarities quantified by the bisimulation metric. By learning an isometric mapping to a lower dimensional space that preserves this metric, such methods attempt to learn representations that group together functionally similar states. However, such an isometric mapping may not exist, making the learning objective ill-defined. We propose an alternative objective that allows distortions in long-range distances, while preserving local metric structure – inducing representations that highlight natural clusters in the state space. This leads to new representations, which we term Behavioral Eigenmaps (BeigeMaps), corresponding to the eigenfunctions of similarity kernels induced by behavioral distances. We empirically demonstrate that when added as a drop-in modification, BeigeMaps improve the policy performance of prior behavioral distance based RL algorithms.
https://proceedings.mlr.press/v235/adila24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adila24a/adila24a.pdf
https://openreview.net/forum?id=dztd61efGy
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
https://proceedings.mlr.press/v235/adila24a.html
Dyah Adila, Shuai Zhang, Boran Han, Bernie Wang
https://proceedings.mlr.press/v235/adila24a.html
ICML 2024
The question-answering (QA) capabilities of foundation models are highly sensitive to prompt variations, rendering their performance susceptible to superficial, non-meaning-altering changes. This vulnerability often stems from the model’s preference or bias towards specific input characteristics, such as option position or superficial image features in multi-modal settings. We propose to rectify this bias directly in the model’s internal representation. Our approach, SteerFair, finds the bias direction in the model’s representation space and steers activation values away from it during inference. Specifically, we exploit the observation that bias often adheres to simple association rules, such as the spurious association between the first option and correctness likelihood. Next, we construct demonstrations of these rules from unlabeled samples and use them to identify the bias directions. We empirically show that SteerFair significantly reduces instruction-tuned model performance variance across prompt modifications on three benchmark tasks. Remarkably, our approach surpasses a supervised baseline with 100 labels by an average of 10.86% accuracy points and 12.95 score points and matches the performance with 500 labels.
https://proceedings.mlr.press/v235/afshani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/afshani24a/afshani24a.pdf
https://openreview.net/forum?id=8iWDWQKxJ1
Optimal Coresets for Low-Dimensional Geometric Median
https://proceedings.mlr.press/v235/afshani24a.html
Peyman Afshani, Chris Schwiegelshohn
https://proceedings.mlr.press/v235/afshani24a.html
ICML 2024
We investigate coresets for approximating the cost with respect to median queries. In this problem, we are given a set of points $P\subset \mathbb{R}^d$ and median queries are $\sum_{p\in P} ||p-c||$ for any point $c\in \mathbb{R}^d$. Our goal is to compute a small weighted summary $S\subset P$ such that the cost of any median query is approximated within a multiplicative $(1\pm\varepsilon)$ factor. We provide matching upper and lower bounds on the number of points contained in $S$ of the order $\tilde{\Theta}\left(\varepsilon^{-d/(d+1)}\right)$.
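The median query cost in the abstract is simply a sum of Euclidean distances, so a coreset can be checked by comparing the exact cost with the cost on a weighted subset. The sketch below uses a naive uniform-sampling coreset purely as an illustration of the query interface; it is not the optimal construction from the paper.

```python
import numpy as np

def median_query_cost(P, c, weights=None):
    """Cost of a median query at c: sum_p w_p * ||p - c||."""
    d = np.linalg.norm(P - c, axis=1)
    return float(d.sum()) if weights is None else float(weights @ d)

rng = np.random.default_rng(1)
P = rng.normal(size=(10_000, 3))        # full point set in R^3

# Hypothetical baseline coreset: uniform sample, each point reweighted by n/m.
m = 200
idx = rng.choice(len(P), size=m, replace=False)
S, w = P[idx], np.full(m, len(P) / m)

for c in rng.normal(size=(3, 3)):       # a few random query points
    exact = median_query_cost(P, c)
    approx = median_query_cost(S, c, w)
    print(f"exact={exact:,.1f}  coreset={approx:,.1f}  rel.err={abs(approx - exact) / exact:.3f}")
```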
https://proceedings.mlr.press/v235/afzal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/afzal24a/afzal24a.pdf
https://openreview.net/forum?id=9GbAea74O6
REST: Efficient and Accelerated EEG Seizure Analysis through Residual State Updates
https://proceedings.mlr.press/v235/afzal24a.html
Arshia Afzal, Grigorios Chrysos, Volkan Cevher, Mahsa Shoaran
https://proceedings.mlr.press/v235/afzal24a.html
ICML 2024
EEG-based seizure detection models face challenges in terms of inference speed and memory efficiency, limiting their real-time implementation in clinical devices. This paper introduces a novel graph-based residual state update mechanism (REST) for real-time EEG signal analysis in applications such as epileptic seizure detection. By leveraging a combination of graph neural networks and recurrent structures, REST efficiently captures both non-Euclidean geometry and temporal dependencies within EEG data. Our model demonstrates high accuracy in both seizure detection and classification tasks. Notably, REST achieves a remarkable 9-fold acceleration in inference speed compared to state-of-the-art models, while simultaneously demanding substantially less memory than the smallest model employed for this task. These attributes position REST as a promising candidate for real-time implementation in clinical devices, such as Responsive Neurostimulation or seizure alert systems.
https://proceedings.mlr.press/v235/agarwal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24a/agarwal24a.pdf
https://openreview.net/forum?id=xcDRx8vzCa
CHAI: Clustered Head Attention for Efficient LLM Inference
https://proceedings.mlr.press/v235/agarwal24a.html
Saurabh Agarwal, Bilge Acun, Basil Hosmer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu
https://proceedings.mlr.press/v235/agarwal24a.html
ICML 2024
Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute and memory intensive, where a single request can require multiple GPUs and tens of Gigabytes of memory. Multi-head attention is one of the key components of LLMs, and it can account for over 50% of an LLM’s memory and compute requirements. We observe that there is a high amount of redundancy across heads in terms of which tokens they pay attention to. Based on this insight, we propose Clustered Head Attention (CHAI). CHAI combines heads with a high amount of correlation for self-attention at runtime, thus reducing both memory and compute. In our experiments, we show that CHAI is able to reduce the memory requirements for storing the K,V cache by up to 21.4% and inference time latency by up to 1.73× without any fine-tuning required. CHAI achieves this with a maximum 3.2% deviation in accuracy across 3 different models (i.e. OPT-66B, LLAMA-7B, LLAMA-33B) and 5 different evaluation datasets.
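An illustrative sketch of the core observation (heads whose attention patterns are highly correlated can be grouped so that a single representative head stands in for its cluster); the clustering method, threshold, and tensor shapes here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_heads(attn, corr_threshold=0.9):
    """attn: (num_heads, num_tokens) attention of each head at one query position.

    Groups heads whose attention patterns are strongly correlated; at runtime one
    representative per cluster would be evaluated (illustrative sketch only).
    """
    corr = np.corrcoef(attn)                                  # head-to-head correlation
    dist = 1.0 - corr                                         # convert to a distance
    condensed = dist[np.triu_indices_from(dist, k=1)]         # condensed distance vector
    Z = linkage(condensed, method="average")
    labels = fcluster(Z, t=1.0 - corr_threshold, criterion="distance")
    reps = {lab: int(np.flatnonzero(labels == lab)[0]) for lab in np.unique(labels)}
    return labels, reps

rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(16), size=4)                     # 4 distinct attention patterns
attn = np.repeat(base, 8, axis=0) + 0.01 * rng.random((32, 16))  # 32 redundant heads
labels, reps = cluster_heads(attn)
print("clusters:", len(reps), "labels:", labels)
```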
https://proceedings.mlr.press/v235/agarwal24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24b/agarwal24b.pdf
https://openreview.net/forum?id=w8BnKGFIYN
Learning to Play Atari in a World of Tokens
https://proceedings.mlr.press/v235/agarwal24b.html
Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou
https://proceedings.mlr.press/v235/agarwal24b.html
ICML 2024
Model-based reinforcement learning agents utilizing transformers have shown improved sample efficiency due to their ability to model extended context, resulting in more accurate world models. However, for complex reasoning and planning tasks, these methods primarily rely on continuous representations. This complicates modeling of discrete properties of the real world such as disjoint object classes between which interpolation is not plausible. In this work, we introduce discrete abstract representations for transformer-based learning (DART), a sample-efficient method utilizing discrete representations for modeling both the world and learning behavior. We incorporate a transformer-decoder for auto-regressive world modeling and a transformer-encoder for learning behavior by attending to task-relevant cues in the discrete representation of the world model. For handling partial observability, we aggregate information from past time steps as memory tokens. DART outperforms previous state-of-the-art methods that do not use look-ahead search on the Atari 100k sample efficiency benchmark with a median human-normalized score of 0.790 and beats humans in 9 out of 26 games. We release our code at https://pranaval.github.io/DART/.
https://proceedings.mlr.press/v235/agarwal24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24c/agarwal24c.pdf
https://openreview.net/forum?id=EqFxIbGWRU
Probabilistic Generating Circuits - Demystified
https://proceedings.mlr.press/v235/agarwal24c.html
Sanyam Agarwal, Markus Bläser
https://proceedings.mlr.press/v235/agarwal24c.html
ICML 2024
Zhang et al. (ICML 2021, PMLR 139, pp. 12447–12457) introduced probabilistic generating circuits (PGCs) as a probabilistic model to unify probabilistic circuits (PCs) and determinantal point processes (DPPs). At first glance, PGCs store a distribution in a very different way: they compute the probability generating polynomial instead of the probability mass function, and it seems that this is the main reason why PGCs are more powerful than PCs or DPPs. However, PGCs also allow for negative weights, whereas classical PCs assume that all weights are nonnegative. One main insight of this work is that the negative weights, and not the different representation, are the cause of the power of PGCs. PGCs are PCs in disguise: we show how to transform any PGC on binary variables into a PC with negative weights with only polynomial blowup. PGCs were defined by Zhang et al. only for binary random variables. As our second main result, we show that there is a good reason for this: we prove that PGCs for categorical variables with larger image size do not support tractable marginalization unless NP=P. On the other hand, we show that we can model categorical variables with larger image size as PCs with negative weights computing set-multilinear polynomials. These allow for tractable marginalization. In this sense, PCs with negative weights strictly subsume PGCs.
https://proceedings.mlr.press/v235/agarwal24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24d/agarwal24d.pdf
https://openreview.net/forum?id=xl2yU3dsHK
Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements
https://proceedings.mlr.press/v235/agarwal24d.html
Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta
https://proceedings.mlr.press/v235/agarwal24d.html
ICML 2024
We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, even for smooth losses, the resulting regret guarantees improve upon previous results in terms of their dependence on the dimension. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, barring when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities which satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous works on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also provides improved results for online convex optimization with a switching budget.
https://proceedings.mlr.press/v235/agarwal24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24e/agarwal24e.pdf
https://openreview.net/forum?id=MMMHufVc2v
The Non-linear $F$-Design and Applications to Interactive Learning
https://proceedings.mlr.press/v235/agarwal24e.html
Alekh Agarwal, Jian Qian, Alexander Rakhlin, Tong Zhang
https://proceedings.mlr.press/v235/agarwal24e.html
ICML 2024
We propose a generalization of the classical $G$-optimal design concept to non-linear function classes. The criterion, termed $F$-design, coincides with $G$-design in the linear case. We compute the value of the optimal design, termed the $F$-condition number, for several non-linear function classes. We further provide algorithms to construct designs with a bounded $F$-condition number. Finally, we employ the $F$-design in a variety of interactive machine learning tasks, where the design is naturally useful for data collection or exploration. We show that in four diverse settings of confidence band construction, contextual bandits, model-free reinforcement learning, and active learning, $F$-design can be combined with existing approaches in a black-box manner to yield state-of-the-art results in known problem settings as well as to generalize to novel ones.
https://proceedings.mlr.press/v235/agnihotri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agnihotri24a/agnihotri24a.pdf
https://openreview.net/forum?id=dmfvHU1LNF
ACPO: A Policy Optimization Algorithm for Average MDPs with Constraints
https://proceedings.mlr.press/v235/agnihotri24a.html
Akhil Agnihotri, Rahul Jain, Haipeng Luo
https://proceedings.mlr.press/v235/agnihotri24a.html
ICML 2024
Reinforcement Learning (RL) for constrained MDPs (CMDPs) is an increasingly important problem for various applications. Often, the average criterion is more suitable than the discounted criterion. Yet, RL for average-CMDPs (ACMDPs) remains a challenging problem. Algorithms designed for discounted constrained RL problems often do not perform well for the average CMDP setting. In this paper, we introduce a new policy optimization with function approximation algorithm for constrained MDPs with the average criterion. The Average-Constrained Policy Optimization (ACPO) algorithm is inspired by trust region-based policy optimization algorithms. We develop basic sensitivity theory for average CMDPs, and then use the corresponding bounds in the design of the algorithm. We provide theoretical guarantees on its performance, and through extensive experimental work in various challenging OpenAI Gym environments, show its superior empirical performance when compared to other state-of-the-art algorithms adapted for the ACMDPs.
https://proceedings.mlr.press/v235/agnihotri24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agnihotri24b/agnihotri24b.pdf
https://openreview.net/forum?id=CXZqGJonmt
CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks
https://proceedings.mlr.press/v235/agnihotri24b.html
Shashank Agnihotri, Steffen Jung, Margret Keuper
https://proceedings.mlr.press/v235/agnihotri24b.html
ICML 2024
While neural networks allow highly accurate predictions in many tasks, their lack of robustness towards even slight input perturbations often hampers their deployment. Adversarial attacks such as the seminal projected gradient descent (PGD) offer an effective means to evaluate a model’s robustness and dedicated solutions have been proposed for attacks on semantic segmentation or optical flow estimation. While they attempt to increase the attack’s efficiency, a further objective is to balance its effect, so that it acts on the entire image domain instead of isolated point-wise predictions. This often comes at the cost of optimization stability and thus efficiency. Here, we propose CosPGD, an attack that encourages more balanced errors over the entire image domain while increasing the attack’s overall efficiency. To this end, CosPGD leverages a simple alignment score computed from any pixel-wise prediction and its target to scale the loss in a smooth and fully differentiable way. It leads to efficient evaluations of a model’s robustness for semantic segmentation as well as regression models (such as optical flow, disparity estimation, or image restoration), and allows CosPGD to outperform the previous SotA attack on semantic segmentation. We provide code for the CosPGD algorithm and example usage at https://github.com/shashankskagnihotri/cospgd.
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agostinelli-iii24a/agostinelli-iii24a.pdf
https://openreview.net/forum?id=XhH1OKLANY
LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
Victor Agostinelli III, Sanghyun Hong, Lizhong Chen
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
ICML 2024
A promising approach to preserving model performance in linearized transformers is to employ position-based re-weighting functions. However, state-of-the-art re-weighting functions rely heavily on target sequence lengths, making it difficult or impossible to apply them to autoregressive and simultaneous tasks, where the target and sometimes even the input sequence length are unknown. To address this issue, we propose Learned Proportions (LeaP) and LeaPformers. Our contribution is built on two major components. First, we generalize the dependence on explicit positional representations and sequence lengths into dependence on sequence proportions for re-weighting. Second, we replace static positional representations with dynamic proportions derived via a compact module, enabling more flexible attention concentration patterns. We evaluate LeaPformer against eight representative efficient transformers on the Long-Range Arena benchmark, where we show that LeaPformer achieves the best quality-throughput trade-off, as well as apply LeaPformer to Wikitext-103b autoregressive language modeling and simultaneous speech-to-text translation for two language pairs, achieving competitive results in both tasks.
https://proceedings.mlr.press/v235/agrawal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agrawal24a/agrawal24a.pdf
https://openreview.net/forum?id=bID9PiBFpT
Policy Evaluation for Variance in Average Reward Reinforcement Learning
https://proceedings.mlr.press/v235/agrawal24a.html
Shubhada Agrawal, Prashanth L A, Siva Theja Maguluri
https://proceedings.mlr.press/v235/agrawal24a.html
ICML 2024
We consider an average reward reinforcement learning (RL) problem and work with asymptotic variance as a risk measure to model safety-critical applications. We design a temporal-difference (TD) type algorithm tailored for policy evaluation in this context. Our algorithm is based on linear stochastic approximation of an equivalent formulation of the asymptotic variance in terms of the solution of the Poisson equation. We consider both the tabular and linear function approximation settings, and establish $\tilde {O}(1/k)$ finite time convergence rate, where $k$ is the number of steps of the algorithm. Our work paves the way for developing actor-critic style algorithms for variance-constrained RL. To the best of our knowledge, our result provides the first sequential estimator for asymptotic variance of a Markov chain with provable finite sample guarantees, which is of independent interest.
https://proceedings.mlr.press/v235/ahdritz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahdritz24a/ahdritz24a.pdf
https://openreview.net/forum?id=ud4GSrqUKI
Distinguishing the Knowable from the Unknowable with Language Models
https://proceedings.mlr.press/v235/ahdritz24a.html
Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman
https://proceedings.mlr.press/v235/ahdritz24a.html
ICML 2024
We study the feasibility of identifying epistemic uncertainty (reflecting a lack of knowledge), as opposed to aleatoric uncertainty (reflecting entropy in the underlying distribution), in the outputs of large language models (LLMs) over free-form text. In the absence of ground-truth probabilities, we explore a setting where, in order to (approximately) disentangle a given LLM’s uncertainty, a significantly larger model stands in as a proxy for the ground truth. We show that small linear probes trained on the embeddings of frozen, pretrained models accurately predict when larger models will be more confident at the token level and that probes trained on one text domain generalize to others. Going further, we propose a fully unsupervised method that achieves non-trivial accuracy on the same task. Taken together, we interpret these results as evidence that LLMs naturally contain internal representations of different types of uncertainty that could potentially be leveraged to devise more informative indicators of model confidence in diverse practical settings. Code can be found at: https://github.com/KempnerInstitute/llm_uncertainty
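A minimal sketch of the probing setup the abstract describes (a small linear probe trained on frozen embeddings to predict whether a larger model is confident), using synthetic stand-in features and labels; the real pipeline, embeddings, and confidence labels in the paper differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins: "frozen small-model embeddings" and a binary target that
# says whether a larger model was confident on the same token (synthetic, not real data).
rng = np.random.default_rng(0)
n, d = 2000, 64
embeddings = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
large_model_confident = (embeddings @ w_true + 0.5 * rng.standard_normal(n)) > 0

# Train the linear probe on the first 1500 examples, evaluate on the held-out rest.
probe = LogisticRegression(max_iter=1000).fit(embeddings[:1500], large_model_confident[:1500])
print("held-out probe accuracy:", probe.score(embeddings[1500:], large_model_confident[1500:]))
```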
https://proceedings.mlr.press/v235/ahmadian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahmadian24a/ahmadian24a.pdf
https://openreview.net/forum?id=jaJxpKkBcL
Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs
https://proceedings.mlr.press/v235/ahmadian24a.html
Sara Ahmadian, Edith Cohen
https://proceedings.mlr.press/v235/ahmadian24a.html
ICML 2024
Cardinality sketches are popular data structures that enhance the efficiency of working with large data sets. The sketches are randomized representations of sets that are only of logarithmic size but can support set merges and approximate cardinality (i.e., distinct count) queries. When queries are not adaptive, that is, they do not depend on preceding query responses, the design provides strong guarantees of correctly answering a number of queries exponential in the sketch size $k$. In this work, we investigate the performance of cardinality sketches in adaptive settings and unveil inherent vulnerabilities. We design an attack against the “standard” estimators that constructs an adversarial input by post-processing responses to a set of simple non-adaptive queries of size linear in the sketch size $k$. Empirically, our attack used only $4k$ queries with the widely used HyperLogLog (HLL++) (Flajolet et al., 2007; Heule et al., 2013) sketch. The simple attack technique suggests it can be effective with post-processed natural workloads. Finally and importantly, we demonstrate that the vulnerability is inherent as any estimator applied to known sketch structures can be attacked using a number of queries that is quadratic in $k$, matching a generic upper bound.
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahmaditeshnizi24a/ahmaditeshnizi24a.pdf
https://openreview.net/forum?id=YT1dtdLvSN
OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
Ali Ahmaditeshnizi, Wenzhi Gao, Madeleine Udell
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
ICML 2024
Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers because the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques. This paper introduces OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve (mixed integer) linear programming problems from their natural language descriptions. OptiMUS can develop mathematical models, write and debug solver code, evaluate the generated solutions, and improve its model and code based on these evaluations. OptiMUS utilizes a modular structure to process problems, allowing it to handle problems with long descriptions and complex data without long prompts. Experiments demonstrate that OptiMUS outperforms existing state-of-the-art methods on easy datasets by more than $20$% and on hard datasets (including a new dataset, NLP4LP, released with this paper that features long and complex problems) by more than $30$%. The implementation and the datasets are available at https://github.com/teshnizi/OptiMUS.
https://proceedings.mlr.press/v235/ahn24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahn24a/ahn24a.pdf
https://openreview.net/forum?id=tpYHbEl7P1
How to Escape Sharp Minima with Random Perturbations
https://proceedings.mlr.press/v235/ahn24a.html
Kwangjun Ahn, Ali Jadbabaie, Suvrit Sra
https://proceedings.mlr.press/v235/ahn24a.html
ICML 2024
Modern machine learning applications have witnessed the remarkable success of optimization algorithms that are designed to find flat minima. Motivated by this design choice, we undertake a formal study that (i) formulates the notion of flat minima, and (ii) studies the complexity of finding them. Specifically, we adopt the trace of the Hessian of the cost function as a measure of flatness, and use it to formally define the notion of approximate flat minima. Under this notion, we then analyze algorithms that find approximate flat minima efficiently. For general cost functions, we discuss a gradient-based algorithm that finds an approximate flat local minimum efficiently. The main component of the algorithm is to use gradients computed from randomly perturbed iterates to estimate a direction that leads to flatter minima. For the setting where the cost function is an empirical risk over training data, we present a faster algorithm that is inspired by a recently proposed practical algorithm called sharpness-aware minimization, supporting its success in practice.
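The abstract's flatness measure is the trace of the Hessian, which random perturbations estimate through $\mathbb{E}_z[f(x+\sigma z)] - f(x) \approx \tfrac{\sigma^2}{2}\,\mathrm{tr}(\nabla^2 f(x))$. The toy sketch below evaluates this quantity at a sharp versus a flat minimum of a made-up 1-D loss; it illustrates the flatness notion only and is not the paper's algorithm.

```python
import numpy as np

def perturbation_penalty(f, x, sigma=0.2, n=20_000, seed=0):
    """Monte-Carlo estimate of E_z[f(x + sigma*z)] - f(x) with z ~ N(0, I).

    To second order this equals (sigma**2 / 2) * trace(Hessian of f at x),
    i.e. the flatness measure: large at sharp minima, small at flat ones.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, x.size))
    return float(np.mean([f(x + sigma * zi) for zi in z]) - f(x))

# Made-up 1-D loss with a sharp minimum at x=0 and a flat minimum at x=3 (equal depth).
f = lambda x: float(np.minimum(50.0 * x[0] ** 2, (x[0] - 3.0) ** 2))

print("penalty near sharp minimum:", perturbation_penalty(f, np.array([0.0])))
print("penalty near flat  minimum:", perturbation_penalty(f, np.array([3.0])))
```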
https://proceedings.mlr.press/v235/ahn24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahn24b/ahn24b.pdf
https://openreview.net/forum?id=iE2lMjeXRR
Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise
https://proceedings.mlr.press/v235/ahn24b.html
Kwangjun Ahn, Zhiyu Zhang, Yunbum Kook, Yan Dai
https://proceedings.mlr.press/v235/ahn24b.html
ICML 2024
Despite the success of the Adam optimizer in practice, the theoretical understanding of its algorithmic components still remains limited. In particular, most existing analyses of Adam show a convergence rate that can be achieved simply by non-adaptive algorithms like SGD. In this work, we provide a different perspective based on online learning that underscores the importance of Adam’s algorithmic components. Inspired by Cutkosky et al. (2023), we consider the framework called online learning of updates/increments, where we choose the updates/increments of an optimizer based on an online learner. With this framework, the design of a good optimizer is reduced to the design of a good online learner. Our main observation is that Adam corresponds to a principled online learning framework called Follow-the-Regularized-Leader (FTRL). Building on this observation, we study the benefits of its algorithmic components from the online learning perspective.
https://proceedings.mlr.press/v235/ai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ai24a/ai24a.pdf
https://openreview.net/forum?id=1v1oFF3aw0
Not all distributional shifts are equal: Fine-grained robust conformal inference
https://proceedings.mlr.press/v235/ai24a.html
Jiahao Ai, Zhimei Ren
https://proceedings.mlr.press/v235/ai24a.html
ICML 2024
We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions from that in the conditional relationship between the outcome ($Y$) and the covariates ($X$). We propose to reweight the training samples to adjust for an identifiable shift in covariate distribution while protecting against the worst-case conditional distribution shift bounded in an $f$-divergence ball. Based on ideas from conformal inference and distributionally robust learning, we present an algorithm that outputs (approximately) valid and efficient prediction intervals in the presence of distributional shifts. As a use case, we apply the framework to sensitivity analysis of individual treatment effects with hidden confounding. The proposed methods are evaluated in simulations and four real data applications, demonstrating superior robustness and efficiency compared with existing benchmarks.
https://proceedings.mlr.press/v235/akbari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akbari24a/akbari24a.pdf
https://openreview.net/forum?id=yzNEkTmcoF
Triple Changes Estimator for Targeted Policies
https://proceedings.mlr.press/v235/akbari24a.html
Sina Akbari, Negar Kiyavash
https://proceedings.mlr.press/v235/akbari24a.html
ICML 2024
The renowned difference-in-differences (DiD) estimator relies on the assumption of ’parallel trends,’ which may not hold in many practical applications. To address this issue, economists are increasingly considering the triple difference estimator as a more credible alternative. Both DiD and triple difference are limited to assessing average effects exclusively. An alternative avenue is offered by the changes-in-changes (CiC) estimator, which provides an estimate of the entire counterfactual distribution by relying on assumptions imposed on the distribution of potential outcomes. In this work, we extend the triple difference estimator to accommodate the CiC framework, presenting the ‘triple changes estimator’ and its identification assumptions, thereby expanding the scope of the CiC paradigm. Subsequently, we empirically evaluate the proposed framework and apply it to a study examining the impact of Medicaid expansion on children’s preventive care.
https://proceedings.mlr.press/v235/akbarian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akbarian24a/akbarian24a.pdf
https://openreview.net/forum?id=KwgAThfxEd
Improving Computational Complexity in Statistical Models with Local Curvature Information
https://proceedings.mlr.press/v235/akbarian24a.html
Pedram Akbarian, Tongzheng Ren, Jiacheng Zhuo, Sujay Sanghavi, Nhat Ho
https://proceedings.mlr.press/v235/akbarian24a.html
ICML 2024
It is known that when the statistical models are singular, i.e., the Fisher information matrix at the true parameter is degenerate, the fixed step-size gradient descent algorithm takes polynomial number of steps in terms of the sample size $n$ to converge to a final statistical radius around the true parameter, which can be unsatisfactory for the practical application. To further improve that computational complexity, we consider utilizing the local curvature information for parameter estimation. Even though there is a rich literature in using the local curvature information for optimization, the statistical rate of these methods in statistical models, to the best of our knowledge, has not been studied rigorously. The major challenge of this problem is due to the non-convex nature of sample loss function. To shed light on these problems, we specifically study the normalized gradient descent (NormGD) algorithm, a variant of gradient descent algorithm whose step size is scaled by the maximum eigenvalue of the Hessian matrix of the empirical loss function, and deal with the aforementioned issue with a population-to-sample analysis. When the population loss function is homogeneous, the NormGD iterates reach a final statistical radius around the true parameter after a logarithmic number of iterations in terms of $n$. Therefore, for fixed dimension $d$, the NormGD algorithm achieves the optimal computational complexity $\mathcal{O}(n)$ to reach the final statistical radius, which is cheaper than the complexity $\mathcal{O}(n^{\tau})$ of the fixed step-size gradient descent algorithm for some $\tau > 1$.
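A toy sketch of the NormGD update described in the abstract (the step size rescaled by the largest eigenvalue of the Hessian at the current iterate), applied to a made-up loss whose Hessian is degenerate along one direction at the optimum; the paper's population-to-sample statistical analysis is not reproduced here.

```python
import numpy as np

def norm_gd(grad, hess, theta0, eta=1.0, steps=50):
    """Normalized gradient descent: the step is divided by the largest
    Hessian eigenvalue at the current iterate (illustrative sketch only).
    """
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        lam_max = np.linalg.eigvalsh(hess(theta)).max()
        theta -= (eta / lam_max) * grad(theta)
    return theta

# Toy "singular-like" loss: quartic (flat) in one direction, quadratic in the other.
loss = lambda t: t[0] ** 4 + 10.0 * t[1] ** 2
grad = lambda t: np.array([4.0 * t[0] ** 3, 20.0 * t[1]])
hess = lambda t: np.diag([12.0 * t[0] ** 2, 20.0])

print("NormGD iterate after 50 steps:", norm_gd(grad, hess, [2.0, 1.0]))
```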
https://proceedings.mlr.press/v235/akeweje24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akeweje24a/akeweje24a.pdf
https://openreview.net/forum?id=J5Yg7HMy39
Learning Mixtures of Gaussian Processes through Random Projection
https://proceedings.mlr.press/v235/akeweje24a.html
Emmanuel Akeweje, Mimi Zhang
https://proceedings.mlr.press/v235/akeweje24a.html
ICML 2024
We propose an ensemble clustering framework to uncover latent cluster labels in functional data generated from a Gaussian process mixture. Our method exploits the fact that the projection coefficients of the functional data onto any given projection function follow a univariate Gaussian mixture model (GMM). By conducting multiple one-dimensional projections and learning a univariate GMM for each, we create an ensemble of GMMs. Each GMM serves as a base clustering, and applying ensemble clustering yields a consensus clustering. Our approach significantly reduces computational complexity compared to state-of-the-art methods, and we provide theoretical guarantees on the identifiability and learnability of Gaussian process mixtures. Extensive experiments on synthetic and real datasets confirm the superiority of our method over existing techniques.
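A small synthetic sketch of the pipeline described (random one-dimensional projections of the curves, a univariate GMM fitted per projection, and a consensus clustering over the resulting ensemble); the data generator, number of projections, and co-association consensus step are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)

# Two synthetic functional clusters (stand-ins for Gaussian process mixture components).
n_per = 30
curves = np.vstack([
    np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((n_per, t.size)),
    1.5 * np.cos(2 * np.pi * t) + 0.3 * rng.standard_normal((n_per, t.size)),
])
true_labels = np.repeat([0, 1], n_per)

# Each random projection yields 1-D coefficients that follow a univariate GMM.
n_proj, n_clusters = 20, 2
co_assoc = np.zeros((len(curves), len(curves)))
for _ in range(n_proj):
    direction = rng.standard_normal(t.size)
    coeffs = curves @ direction                     # projection coefficients
    labels = GaussianMixture(n_clusters, random_state=0).fit_predict(coeffs.reshape(-1, 1))
    co_assoc += (labels[:, None] == labels[None, :])
co_assoc /= n_proj

# Consensus clustering on the co-association matrix.
dist = squareform(1.0 - co_assoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=n_clusters, criterion="maxclust")
agreement = max(np.mean(consensus - 1 == true_labels), np.mean(consensus - 1 != true_labels))
print(f"consensus agreement with ground truth: {agreement:.2f}")
```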
https://proceedings.mlr.press/v235/akhauri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akhauri24a/akhauri24a.pdf
https://openreview.net/forum?id=fqPH6ejwGi
Encodings for Prediction-based Neural Architecture Search
https://proceedings.mlr.press/v235/akhauri24a.html
Yash Akhauri, Mohamed S Abdelfattah
https://proceedings.mlr.press/v235/akhauri24a.html
ICML 2024
Predictor-based methods have substantially enhanced Neural Architecture Search (NAS) optimization. The efficacy of these predictors is largely influenced by the method of encoding neural network architectures. While traditional encodings used an adjacency matrix describing the graph structure of a neural network, novel encodings embrace a variety of approaches from unsupervised pretraining of latent representations to vectors of zero-cost proxies. In this paper, we categorize and investigate neural encodings from three main types: structural, learned, and score-based. Furthermore, we extend these encodings and introduce unified encodings, that extend NAS predictors to multiple search spaces. Our analysis draws from experiments conducted on over 1.5 million neural network architectures on NAS spaces such as NASBench-101 (NB101), NB201, NB301, Network Design Spaces (NDS), and TransNASBench-101. Building on our study, we present our predictor FLAN: Flow Attention for NAS. FLAN integrates critical insights on predictor design, transfer learning, and unified encodings to enable more than an order of magnitude cost reduction for training NAS accuracy predictors. Our implementation and encodings for all neural networks are open-sourced at https://github.com/abdelfattah-lab/flan_nas.
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akhound-sadegh24a/akhound-sadegh24a.pdf
https://openreview.net/forum?id=gVjMwLDFoQ
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
Tara Akhound-Sadegh, Jarrid Rector-Brooks, Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
ICML 2024
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science. In this paper, we propose Iterated Denoising Energy Matching (iDEM), an iterative algorithm that uses a novel stochastic score matching objective leveraging solely the energy function and its gradient—and no data samples—to train a diffusion-based sampler. Specifically, iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our stochastic matching objective to further improve the sampler. iDEM is scalable to high dimensions, as the inner matching objective is simulation-free and requires no MCMC samples. Moreover, by leveraging the fast mode mixing behavior of diffusion, iDEM smooths out the energy landscape enabling efficient exploration and learning of an amortized sampler. We evaluate iDEM on a suite of tasks ranging from standard synthetic energy functions to invariant $n$-body particle systems. We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2-5\times$ faster, which allows it to be the first method to train using energy on the challenging $55$-particle Lennard-Jones system.
https://proceedings.mlr.press/v235/akyurek24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akyurek24a/akyurek24a.pdf
https://openreview.net/forum?id=3Z9CRr5srL
In-Context Language Learning: Architectures and Algorithms
https://proceedings.mlr.press/v235/akyurek24a.html
Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas
https://proceedings.mlr.press/v235/akyurek24a.html
ICML 2024
Some neural language models (LMs) exhibit a remarkable capacity for in-context learning (ICL): they can fit predictors to datasets provided as input. While the mechanisms underlying ICL are well-studied in the context of synthetic problems like in-context linear regression, there is still some divergence between these model problems and the “real” ICL exhibited by LMs trained on large text corpora. In this paper, we study ICL through the lens of a new family of model problems we term in-context language learning (ICLL). In ICLL, LMs are presented with a set of strings from a formal language, and must generate additional strings from the same language. We focus on in-context learning of regular languages generated by random finite automata. We evaluate a diverse set of neural sequence models on regular ICLL tasks. We first show that Transformers significantly outperform neural sequence models with recurrent or convolutional representations on ICLL tasks. Next, we provide evidence that they do so by computing in-context n-gram statistics using specialized attention heads. Finally, we show that hard-wiring these heads into neural models improves performance not just on synthetic ICLL, but natural language modeling, reducing the perplexity of 340M-parameter Transformers by up to 1.14 points (6.7%) on the SlimPajama dataset. Our results highlight the usefulness of in-context formal language learning as a tool for understanding ICL in models of natural text.
https://proceedings.mlr.press/v235/al-jarrah24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/al-jarrah24a/al-jarrah24a.pdf
https://openreview.net/forum?id=blzDxD6bKt
Nonlinear Filtering with Brenier Optimal Transport Maps
https://proceedings.mlr.press/v235/al-jarrah24a.html
Mohammad Al-Jarrah, Niyizhen Jin, Bamdad Hosseini, Amirhossein Taghvaei
https://proceedings.mlr.press/v235/al-jarrah24a.html
ICML 2024
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a stochastic dynamical system given a history of noisy partial observations. Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations, in scenarios involving degenerate likelihoods or high-dimensional states, due to the weight degeneracy issue. In this paper, we explore an alternative method, which is based on estimating the Brenier optimal transport (OT) map from the current prior distribution of the state to the posterior distribution at the next time step. Unlike SIR particle filters, the OT formulation does not require the analytical form of the likelihood. Moreover, it allows us to harness the approximation power of neural networks to model complex and multi-modal distributions and employ stochastic optimization algorithms to enhance scalability. Extensive numerical experiments are presented that compare the OT method to the SIR particle filter and the ensemble Kalman filter, evaluating the performance in terms of sample efficiency, high-dimensional scalability, and the ability to capture complex and multi-modal distributions.
https://proceedings.mlr.press/v235/alacaoglu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alacaoglu24a/alacaoglu24a.pdf
https://openreview.net/forum?id=lWy2lCTyJa
Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity
https://proceedings.mlr.press/v235/alacaoglu24a.html
Ahmet Alacaoglu, Donghwan Kim, Stephen Wright
https://proceedings.mlr.press/v235/alacaoglu24a.html
ICML 2024
We focus on constrained, $L$-smooth, potentially stochastic and nonconvex-nonconcave min-max problems either satisfying $\rho$-cohypomonotonicity or admitting a solution to the $\rho$-weakly Minty Variational Inequality (MVI), where larger values of the parameter $\rho>0$ correspond to a greater degree of nonconvexity. These problem classes include examples in two player reinforcement learning, interaction dominant min-max problems, and certain synthetic test problems on which classical min-max algorithms fail. It has been conjectured that first-order methods can tolerate a value of $\rho$ no larger than $\frac{1}{L}$, but existing results in the literature have stagnated at the tighter requirement $\rho < \frac{1}{2L}$. With a simple argument, we obtain optimal or best-known complexity guarantees with cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$. First main insight for the improvements in the convergence analyses is to harness the recently proposed conic nonexpansiveness property of operators. Second, we provide a refined analysis for inexact Halpern iteration that relaxes the required inexactness level to improve some state-of-the-art complexity results even for constrained stochastic convex-concave min-max problems. Third, we analyze a stochastic inexact Krasnosel’skii-Mann iteration with a multilevel Monte Carlo estimator when the assumptions only hold with respect to a solution.
https://proceedings.mlr.press/v235/alain24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alain24a/alain24a.pdf
https://openreview.net/forum?id=afnyJfQddk
Gaussian Processes on Cellular Complexes
https://proceedings.mlr.press/v235/alain24a.html
Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth
https://proceedings.mlr.press/v235/alain24a.html
ICML 2024
In recent years, there has been considerable interest in developing machine learning models on graphs to account for topological inductive biases. In particular, recent attention has been given to Gaussian processes on such structures since they can additionally account for uncertainty. However, graphs are limited to modelling relations between two vertices. In this paper, we go beyond this dyadic setting and consider polyadic relations that include interactions between vertices, edges and one of their generalisations, known as cells. Specifically, we propose Gaussian processes on cellular complexes, a generalisation of graphs that captures interactions between these higher-order cells. One of our key contributions is the derivation of two novel kernels, one that generalises the graph Matérn kernel and one that additionally mixes information of different cell types.
https://proceedings.mlr.press/v235/alamdari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alamdari24a/alamdari24a.pdf
https://openreview.net/forum?id=4BIOZSz7zU
Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making
https://proceedings.mlr.press/v235/alamdari24a.html
Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager, Sheila A. Mcilraith
https://proceedings.mlr.press/v235/alamdari24a.html
ICML 2024
Fair decision making has largely been studied with respect to a single decision. Here we investigate the notion of fairness in the context of sequential decision making where multiple stakeholders can be affected by the outcomes of decisions. We observe that fairness often depends on the history of the sequential decision-making process, and in this sense it is inherently non-Markovian. We further observe that fairness often needs to be assessed at time points within the process, not just at the end of the process. To advance our understanding of this class of fairness problems, we explore the notion of non-Markovian fairness in the context of sequential decision making. We identify properties of non-Markovian fairness, including notions of long-term, anytime, periodic, and bounded fairness. We explore the interplay between non-Markovian fairness and memory, and how memory can support the construction of fair policies. Finally, we introduce the FairQCM algorithm, which can automatically augment its training data to improve sample efficiency in the synthesis of fair policies via reinforcement learning.
https://proceedings.mlr.press/v235/albergo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/albergo24a/albergo24a.pdf
https://openreview.net/forum?id=FFILRGD0jG
Stochastic Interpolants with Data-Dependent Couplings
https://proceedings.mlr.press/v235/albergo24a.html
Michael Samuel Albergo, Mark Goldstein, Nicholas Matthew Boffi, Rajesh Ranganath, Eric Vanden-Eijnden
https://proceedings.mlr.press/v235/albergo24a.html
ICML 2024
Generative models inspired by dynamical transport of measure – such as flows and diffusions – construct a continuous-time map between two probability densities. Conventionally, one of these is the target density, only accessible through samples, while the other is taken as a simple base density that is data-agnostic. In this work, using the framework of stochastic interpolants, we formalize how to couple the base and the target densities, whereby samples from the base are computed conditionally given samples from the target in a way that is different from (but does not preclude) incorporating information about class labels or continuous embeddings. This enables us to construct dynamical transport maps that serve as conditional generative models. We show that these transport maps can be learned by solving a simple square loss regression problem analogous to the standard independent setting. We demonstrate the usefulness of constructing dependent couplings in practice through experiments in super-resolution and in-painting. The code is available at https://github.com/interpolants/couplings.
https://proceedings.mlr.press/v235/albuquerque24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/albuquerque24a/albuquerque24a.pdf
https://openreview.net/forum?id=idyUNsoZ75
Evaluating Model Bias Requires Characterizing its Mistakes
https://proceedings.mlr.press/v235/albuquerque24a.html
Isabela Albuquerque, Jessica Schrouff, David Warde-Farley, Ali Taylan Cemgil, Sven Gowal, Olivia Wiles
https://proceedings.mlr.press/v235/albuquerque24a.html
ICML 2024
The ability to properly benchmark model performance in the face of spurious correlations is important to both build better predictors and increase confidence that models are operating as intended. We demonstrate that characterizing (as opposed to simply quantifying) model mistakes across subgroups is pivotal to properly reflect model biases, which are ignored by standard metrics such as worst-group accuracy or accuracy gap. Inspired by the hypothesis testing framework, we introduce SkewSize, a principled and flexible metric that captures bias from mistakes in a model’s predictions. It can be used in multi-class settings or generalised to the open vocabulary setting of generative models. SkewSize is an aggregation of the effect size of the interaction between two categorical variables: the spurious variable representing the bias attribute and the model’s prediction. We demonstrate the utility of SkewSize in multiple settings including: standard vision models trained on synthetic data, vision models trained on ImageNet, and large scale vision-and-language models from the BLIP-2 family. In each case, the proposed SkewSize is able to highlight biases not captured by other metrics, while also providing insights on the impact of recently proposed techniques, such as instruction tuning.
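SkewSize is described only at a high level above; the following is a minimal sketch of how such a metric could be instantiated, assuming Cramér's V as the per-class effect size and a simple mean as the aggregation (both are illustrative assumptions, not the paper's exact definitions):

```python
# Hypothetical SkewSize-style score: per ground-truth class, compute the effect size of
# the association between the spurious attribute and the model's prediction, then
# aggregate over classes. Cramér's V and the mean are illustrative choices only.
import numpy as np

def cramers_v(x, y):
    """Effect size of the association between two categorical arrays."""
    xs, ys = np.unique(x), np.unique(y)
    table = np.array([[np.sum((x == a) & (y == b)) for b in ys] for a in xs], dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = np.sum((table - expected) ** 2 / np.maximum(expected, 1e-12))
    k = min(len(xs), len(ys))
    return np.sqrt(chi2 / (n * max(k - 1, 1)))

def skewsize_like(y_true, y_pred, spurious):
    """Aggregate per-class effect sizes of the (spurious attribute, prediction) interaction."""
    scores = [cramers_v(spurious[y_true == c], y_pred[y_true == c])
              for c in np.unique(y_true) if np.sum(y_true == c) > 1]
    return float(np.mean(scores))
```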
https://proceedings.mlr.press/v235/alder24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alder24a/alder24a.pdf
https://openreview.net/forum?id=v9tIJW1fzt
Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic
https://proceedings.mlr.press/v235/alder24a.html
Nicolas Alder, Ralf Herbrich
https://proceedings.mlr.press/v235/alder24a.html
ICML 2024
The widespread use of artificial intelligence requires finding energy-efficient paradigms for the field. We propose to reduce the energy consumption of Gaussian process regression using low-precision floating-point representations. We explore how low-precision representations impact the results of Gaussian process regression and how data set properties, implementation approach, model performance, and energy consumption interact. Our findings show that a well-conditioned kernel matrix allows reducing the energy consumption by up to 89.01% for 98.08% of arithmetic operations with little to no impact on model performance. Our findings are relevant whenever one needs to invert a symmetric full-rank matrix.
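A minimal sketch of the underlying point, assuming only a float32-versus-float64 comparison (the paper studies much lower precisions and additionally measures energy, which this sketch does not attempt): with a jitter term keeping the kernel matrix well-conditioned, the reduced-precision solve barely changes the predictions.

```python
# Compare GP regression predictions when the linear algebra runs in float32 vs float64.
# The jitter/noise term keeps the kernel matrix well-conditioned, which is what makes
# the precision reduction harmless; energy measurement is out of scope of this sketch.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 80))
y = np.sin(x) + 0.1 * rng.normal(size=80)
x_test = np.linspace(-3, 3, 50)

def gp_mean(dtype):
    K = (rbf_kernel(x, x) + 1e-2 * np.eye(len(x))).astype(dtype)  # jitter => well-conditioned
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y.astype(dtype)))
    return rbf_kernel(x_test, x).astype(dtype) @ alpha

print("max |float64 - float32| prediction gap:",
      np.max(np.abs(gp_mean(np.float64) - gp_mean(np.float32))))
```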
https://proceedings.mlr.press/v235/alfarra24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alfarra24a/alfarra24a.pdf
https://openreview.net/forum?id=6FtAXU4ean
Evaluation of Test-Time Adaptation Under Computational Time Constraints
https://proceedings.mlr.press/v235/alfarra24a.html
Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Yaser Alhuwaider, Merey Ramazanova, Juan Camilo Perez, Zhipeng Cai, Matthias Müller, Bernard Ghanem
https://proceedings.mlr.press/v235/alfarra24a.html
ICML 2024
This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods, which penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Though many effective methods have been proposed, their impressive performance usually comes at the cost of significantly increased computation budgets. Current evaluation protocols overlook the effect of this extra computation cost, affecting their real-world applicability. To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method’s adaptation speed. We apply our proposed protocol to benchmark several TTA methods on multiple datasets and scenarios. Extensive experiments show that, when accounting for inference speed, simple and fast approaches can outperform more sophisticated but slower methods. For example, SHOT (from 2020) outperforms the state-of-the-art method SAR (from 2023) under our online setting. Our results reveal the importance of developing practical TTA methods that are both accurate and efficient.
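The online protocol can be sketched as follows, under an assumed bookkeeping rule (samples that arrive while a method is still adapting must be predicted with the not-yet-updated model); the paper's exact protocol may differ in details:

```python
# Simplified constant-speed online TTA evaluation: slow adaptation creates a backlog of
# stream samples that the method must predict without having adapted on them, so
# slower methods effectively see fewer samples for adaptation.
def online_tta_accuracy(stream, method, stream_rate_hz=30.0):
    """stream yields (x, y); method exposes predict(x) and adapt_and_predict(x) -> (y_hat, seconds)."""
    correct, total, backlog = 0, 0, 0.0
    for x, y in stream:
        if backlog >= 1.0:
            y_hat = method.predict(x)            # stream moved on during adaptation: no update allowed
            backlog -= 1.0
        else:
            y_hat, seconds = method.adapt_and_predict(x)
            backlog += seconds * stream_rate_hz  # samples that arrived while adapting
        correct += int(y_hat == y)
        total += 1
    return correct / total
```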
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ali-mehmeti-gopel24a/ali-mehmeti-gopel24a.pdf
https://openreview.net/forum?id=AzUCfhJ9Bs
On the Weight Dynamics of Deep Normalized Networks
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
Christian H.X. Ali Mehmeti-Göpel, Michael Wand
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
ICML 2024
Recent studies have shown that high disparities in effective learning rates (ELRs) across layers in deep neural networks can negatively affect trainability. We formalize how these disparities evolve over time by modeling weight dynamics (evolution of expected gradient and weight norms) of networks with normalization layers, predicting the evolution of layer-wise ELR ratios. We prove that when training with any constant learning rate, ELR ratios converge to 1, despite initial gradient explosion. We identify a "critical learning rate" beyond which ELR disparities widen, which only depends on current ELRs. To validate our findings, we devise a hyper-parameter-free warm-up method that successfully minimizes ELR spread quickly in theory and practice. Our experiments link ELR spread with trainability, a relationship that is most evident in very deep networks with significant gradient magnitude excursions.
https://proceedings.mlr.press/v235/alishahi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alishahi24a/alishahi24a.pdf
https://openreview.net/forum?id=jS3CMHtYJD
No Dimensional Sampling Coresets for Classification
https://proceedings.mlr.press/v235/alishahi24a.html
Meysam Alishahi, Jeff M. Phillips
https://proceedings.mlr.press/v235/alishahi24a.html
ICML 2024
We refine and generalize what is known about coresets for classification problems via the sensitivity sampling framework. Such coresets seek the smallest possible subsets of input data, so one can optimize a loss function on the coreset and ensure approximation guarantees with respect to the original data. Our analysis provides the first no dimensional coresets, so the size does not depend on the dimension. Moreover, our results are general: they apply to distributional input, can use iid samples and so provide sample complexity bounds, and work for a variety of loss functions. A key tool we develop is a Rademacher complexity version of the main sensitivity sampling approach, which can be of independent interest.
https://proceedings.mlr.press/v235/allamanis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allamanis24a/allamanis24a.pdf
https://openreview.net/forum?id=YnFuUX08CE
Unsupervised Evaluation of Code LLMs with Round-Trip Correctness
https://proceedings.mlr.press/v235/allamanis24a.html
Miltiadis Allamanis, Sheena Panthaplackel, Pengcheng Yin
https://proceedings.mlr.press/v235/allamanis24a.html
ICML 2024
To evaluate code large language models (LLMs), research has relied on a few small manually curated benchmarks, such as HumanEval and MBPP, which represent a narrow part of the real-world software domains. In this work, we introduce round-trip correctness (RTC) as an alternative evaluation method. RTC allows Code LLM evaluation on a broader spectrum of real-world software domains without the need for costly human curation. RTC rests on the idea that we can ask a model to make a prediction (e.g., describe some code using natural language), feed that prediction back (e.g., synthesize code from the predicted description), and check if this round-trip leads to code that is semantically equivalent to the original input. We show how to employ RTC to evaluate code synthesis and editing. We find that RTC strongly correlates with model performance on existing narrow-domain code synthesis benchmarks while allowing us to expand to a much broader set of domains and tasks which was not previously possible without costly human annotations.
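The round-trip idea is simple enough to sketch; the model calls and the test-based equivalence check below are hypothetical placeholders rather than the paper's actual interface:

```python
# Sketch of round-trip correctness (RTC) for code: forward-predict a natural-language
# description of a snippet, back-predict code from that description, and accept if the
# regenerated code still passes the snippet's tests (a proxy for semantic equivalence).
# forward_model, backward_model and run_tests are hypothetical callables.
def round_trip_correctness(snippets, forward_model, backward_model, run_tests, n_samples=4):
    passed = 0
    for snippet in snippets:
        ok = False
        for _ in range(n_samples):
            description = forward_model(snippet.code)   # code -> natural language
            regenerated = backward_model(description)   # natural language -> code
            if run_tests(regenerated, snippet.tests):
                ok = True
                break
        passed += int(ok)
    return passed / len(snippets)
```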
https://proceedings.mlr.press/v235/allen-zhu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allen-zhu24a/allen-zhu24a.pdf
https://openreview.net/forum?id=5x788rqbcj
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction
https://proceedings.mlr.press/v235/allen-zhu24a.html
Zeyuan Allen-Zhu, Yuanzhi Li
https://proceedings.mlr.press/v235/allen-zhu24a.html
ICML 2024
Large language models (LLMs) can store a vast amount of world knowledge, often extractable via question-answering (e.g., "What is Abraham Lincoln’s birthday?”). However, do they answer such questions based on exposure to similar questions during training (i.e., cheating), or by genuinely learning to extract knowledge from sources like Wikipedia? In this paper, we investigate this issue using a controlled biography dataset. We find a strong correlation between the model’s ability to extract knowledge and various diversity measures of the training data. Essentially, for knowledge to be reliably extracted, it must be sufficiently augmented (e.g., through paraphrasing, sentence shuffling) during pretraining. Without such augmentation, knowledge may be memorized but not extractable, leading to 0% accuracy, regardless of subsequent instruction fine-tuning. To understand why this occurs, we employ (nearly) linear probing to demonstrate a strong connection between the observed correlation and how the model internally encodes knowledge — whether it is linearly encoded in the hidden embeddings of entity names or distributed across other token embeddings in the training text. This paper provides several key recommendations for LLM pretraining in the industry: (1) rewrite the pretraining data — using small, auxiliary models — to provide knowledge augmentation, and (2) incorporate more instruction-finetuning data into the pretraining stage before it becomes too late.
https://proceedings.mlr.press/v235/allouah24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allouah24a/allouah24a.pdf
https://openreview.net/forum?id=Izv7gBnap3
Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates
https://proceedings.mlr.press/v235/allouah24a.html
Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych
https://proceedings.mlr.press/v235/allouah24a.html
ICML 2024
The possibility of adversarial (a.k.a., Byzantine) clients makes federated learning (FL) prone to arbitrary manipulation. The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a robust averaging rule. While a significant amount of work has been devoted to studying the convergence of federated robust averaging (which we denote by $\mathsf{FedRo}$), prior work has largely ignored the impact of client subsampling and local steps, two fundamental FL characteristics. While client subsampling increases the effective fraction of Byzantine clients, local steps increase the drift between the local updates computed by honest (i.e., non-Byzantine) clients. Consequently, a careless deployment of $\mathsf{FedRo}$ could yield poor performance. We validate this observation by presenting an in-depth analysis of $\mathsf{FedRo}$ that tightly characterizes the impact of client subsampling and local steps. Specifically, we present a sufficient condition on client subsampling for nearly-optimal convergence of $\mathsf{FedRo}$ (for smooth non-convex loss). Also, we show that the rate of improvement in learning accuracy diminishes with respect to the number of clients subsampled, as soon as the sample size exceeds a threshold value. Interestingly, we also observe that under a careful choice of step-sizes, the learning error due to Byzantine clients decreases with the number of local steps. We validate our theory by experiments on the FEMNIST and CIFAR-$10$ image classification tasks.
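A minimal sketch of one $\mathsf{FedRo}$ round, with a coordinate-wise trimmed mean standing in for the robust averaging rule (the rule, the subsampling size, and the local-update routine are all illustrative placeholders):

```python
# One round of federated robust averaging: subsample clients, let each run several
# local steps, then aggregate the resulting updates with a robust rule (here a
# coordinate-wise trimmed mean) instead of a plain average.
import numpy as np

def trimmed_mean(updates, trim_fraction=0.1):
    updates = np.sort(np.asarray(updates), axis=0)  # sort each coordinate across clients
    k = int(len(updates) * trim_fraction)
    kept = updates[k:len(updates) - k] if k > 0 else updates
    return kept.mean(axis=0)

def fedro_round(global_model, clients, local_update, n_subsample=20, seed=0):
    rng = np.random.default_rng(seed)
    chosen = rng.choice(len(clients), size=min(n_subsample, len(clients)), replace=False)
    updates = [local_update(clients[i], global_model) for i in chosen]  # several local SGD steps each
    return global_model + trimmed_mean(updates)
```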
https://proceedings.mlr.press/v235/allouah24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allouah24b/allouah24b.pdf
https://openreview.net/forum?id=5JrlywYHRi
The Privacy Power of Correlated Noise in Decentralized Learning
https://proceedings.mlr.press/v235/allouah24b.html
Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui
https://proceedings.mlr.press/v235/allouah24b.html
ICML 2024
Decentralized learning is appealing as it enables the scalable usage of large amounts of distributed data and resources without resorting to any central entity, while promoting privacy since every user minimizes the direct exposure of their data. Yet, without additional precautions, curious users can still leverage models obtained from their peers to violate privacy. In this paper, we propose Decor, a variant of decentralized SGD with differential privacy (DP) guarantees. Essentially, in Decor, users securely exchange randomness seeds in one communication round to generate pairwise-canceling correlated Gaussian noises, which are injected to protect local models at every communication round. We theoretically and empirically show that, for arbitrary connected graphs, Decor matches the central DP optimal privacy-utility trade-off. We do so under SecLDP, our new relaxation of local DP, which protects all user communications against an external eavesdropper and curious users, assuming that every pair of connected users shares a secret, i.e., an information hidden to all others. The main theoretical challenge is to control the accumulation of non-canceling correlated noise due to network sparsity. We also propose a companion SecLDP privacy accountant for public use.
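The pairwise-canceling noise at the heart of Decor can be sketched in a few lines (the sketch omits the decentralized SGD updates and the SecLDP noise calibration; variable names are ours):

```python
# Pairwise-canceling correlated Gaussian noise: each connected pair (i, j) derives a
# noise vector z_ij from a shared secret seed; user i adds +z_ij and user j adds -z_ij
# to their local models. Every individual message is noisy, yet the injected noise
# cancels in aggregate across the network.
import numpy as np

def correlated_noises(edges, n_users, dim, sigma, seeds):
    """edges: list of (i, j) pairs with i < j; seeds: dict mapping (i, j) -> shared secret seed."""
    noise = [np.zeros(dim) for _ in range(n_users)]
    for (i, j) in edges:
        z = np.random.default_rng(seeds[(i, j)]).normal(0.0, sigma, size=dim)
        noise[i] += z
        noise[j] -= z
    return noise

noises = correlated_noises([(0, 1), (1, 2), (0, 2)], n_users=3, dim=4, sigma=1.0,
                           seeds={(0, 1): 7, (1, 2): 11, (0, 2): 13})
print(np.allclose(sum(noises), 0.0))  # True: pairwise noises cancel in aggregate
```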
https://proceedings.mlr.press/v235/alonso-campana24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alonso-campana24a/alonso-campana24a.pdf
https://openreview.net/forum?id=MDAg5Q7IsI
Predicting Dose-Response Curves with Deep Neural Networks
https://proceedings.mlr.press/v235/alonso-campana24a.html
Pedro Alonso Campana, Paul Prasse, Tobias Scheffer
https://proceedings.mlr.press/v235/alonso-campana24a.html
ICML 2024
Dose-response curves characterize the relationship between the concentration of drugs and their inhibitory effect on the growth of specific types of cells. The predominant Hill-equation model of ideal enzymatic inhibition unduly simplifies the biochemical reality of many drugs, and for these drugs the widely-used drug performance indicator of the half-inhibitory concentration $IC_{50}$ can lead to poor therapeutic recommendations and poor selections of promising drug candidates. We develop a neural model that uses an embedding of the interaction between drug molecules and the tissue transcriptome to estimate the entire dose-response curve rather than a scalar aggregate. We find that, compared to the prior state of the art, this model excels at interpolating and extrapolating the inhibitory effect of untried concentrations. Unlike prevalent parametric models, it is able to accurately predict dose-response curves of drugs on previously unseen tumor tissues as well as of previously untested drug molecules on established tumor cell lines.
https://proceedings.mlr.press/v235/altamirano24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altamirano24a/altamirano24a.pdf
https://openreview.net/forum?id=5WnKLIAX4q
Robust and Conjugate Gaussian Process Regression
https://proceedings.mlr.press/v235/altamirano24a.html
Matias Altamirano, Francois-Xavier Briol, Jeremias Knoblauch
https://proceedings.mlr.press/v235/altamirano24a.html
ICML 2024
To enable closed form conditioning, a common assumption in Gaussian process (GP) regression is independent and identically distributed Gaussian observation noise. This strong and simplistic assumption is often violated in practice, which leads to unreliable inferences and uncertainty quantification. Unfortunately, existing methods for robustifying GPs break closed-form conditioning, which makes them less attractive to practitioners and significantly more computationally expensive. In this paper, we demonstrate how to perform provably robust and conjugate Gaussian process (RCGP) regression at virtually no additional cost using generalised Bayesian inference. RCGP is particularly versatile as it enables exact conjugate closed form updates in all settings where standard GPs admit them. To demonstrate its strong empirical performance, we deploy RCGP for problems ranging from Bayesian optimisation to sparse variational Gaussian processes.
https://proceedings.mlr.press/v235/altieri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altieri24a/altieri24a.pdf
https://openreview.net/forum?id=YqIIhl2ToH
Beyond the Norms: Detecting Prediction Errors in Regression Models
https://proceedings.mlr.press/v235/altieri24a.html
Andres Altieri, Marco Romanelli, Georg Pichler, Florence Alberge, Pablo Piantanida
https://proceedings.mlr.press/v235/altieri24a.html
ICML 2024
This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error). Then, using powerful tools for probabilistic modeling, we estimate the discrepancy density, and we measure its statistical diversity using our proposed metric for statistical dissimilarity. In turn, this allows us to derive a data-driven score that expresses the uncertainty of the regression outcome. We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches, and contributing to the broader field of uncertainty quantification and safe machine learning systems.
https://proceedings.mlr.press/v235/altmeyer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altmeyer24a/altmeyer24a.pdf
https://openreview.net/forum?id=AIXUuLCuMe
Position: Stop Making Unscientific AGI Performance Claims
https://proceedings.mlr.press/v235/altmeyer24a.html
Patrick Altmeyer, Andrew M. Demetriou, Antony Bartlett, Cynthia C. S. Liem
https://proceedings.mlr.press/v235/altmeyer24a.html
ICML 2024
Developments in the field of Artificial Intelligence (AI), and particularly large language models (LLMs), have created a ’perfect storm’ for observing ’sparks’ of Artificial General Intelligence (AGI) that are spurious. Like simpler models, LLMs distill meaningful representations in their latent embeddings that have been shown to correlate with external variables. Nonetheless, the correlation of such representations has often been linked to human-like intelligence in the latter but not the former. We probe models of varying complexity including random projections, matrix decompositions, deep autoencoders and transformers: all of them successfully distill information that can be used to predict latent or external variables and yet none of them have previously been linked to AGI. We argue and empirically demonstrate that the finding of meaningful patterns in latent spaces of models cannot be seen as evidence in favor of AGI. Additionally, we review literature from the social sciences that shows that humans are prone to seek such patterns and anthropomorphize. We conclude that both the methodological setup and common public image of AI are ideal for the misinterpretation that correlations between model representations and some variables of interest are ’caused’ by the model’s understanding of underlying ’ground truth’ relationships. We, therefore, call for the academic community to exercise extra caution, and to be keenly aware of principles of academic integrity, in interpreting and communicating about AI research outcomes.
https://proceedings.mlr.press/v235/alvarado24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alvarado24a/alvarado24a.pdf
https://openreview.net/forum?id=kZKopcDp2q
Hyperbolic Optimizer as a Dynamical System
https://proceedings.mlr.press/v235/alvarado24a.html
Nico Alvarado, Hans Lobel
https://proceedings.mlr.press/v235/alvarado24a.html
ICML 2024
During the last few years, the field of dynamical systems has been developing innovative tools to study the asymptotic behavior of different optimizers in the context of neural networks. In this work, we redefine an extensively studied optimizer, employing classical techniques from hyperbolic geometry. This new definition is linked to a non-linear differential equation as a continuous limit. Additionally, by utilizing Lyapunov stability concepts, we analyze the asymptotic behavior of its critical points.
https://proceedings.mlr.press/v235/ambrogioni24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ambrogioni24a/ambrogioni24a.pdf
https://openreview.net/forum?id=6CV1N7hhpA
Stationarity without mean reversion in improper Gaussian processes
https://proceedings.mlr.press/v235/ambrogioni24a.html
Luca Ambrogioni
https://proceedings.mlr.press/v235/ambrogioni24a.html
ICML 2024
The behavior of GP regression depends on the choice of covariance function. Stationary covariance functions are preferred in machine learning applications. However, (non-periodic) stationary covariance functions are always mean reverting and can therefore exhibit pathological behavior when applied to data that does not relax to a fixed global mean value. In this paper we show that it is possible to use improper GP priors with infinite variance to define processes that are stationary but not mean reverting. To this aim, we make use of non-positive kernels that can only be defined in this limit regime. The resulting posterior distributions can be computed analytically, and they involve a simple correction of the usual formulas. The main contribution of the paper is the introduction of a large family of smooth non-reverting covariance functions that closely resemble the kernels commonly used in the GP literature (e.g. squared exponential and Matérn class). By analyzing both synthetic and real data, we demonstrate that these non-positive kernels solve some known pathologies of mean reverting GP regression while retaining most of the favorable properties of ordinary smooth stationary kernels.
https://proceedings.mlr.press/v235/ameen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ameen24a/ameen24a.pdf
https://openreview.net/forum?id=WJn1BAx9aj
Robust Graph Matching when Nodes are Corrupt
https://proceedings.mlr.press/v235/ameen24a.html
Taha Ameen, Bruce Hajek
https://proceedings.mlr.press/v235/ameen24a.html
ICML 2024
Two models are introduced to study the problem of matching two correlated graphs when some of the nodes are corrupt. In the weak model, a random subset of nodes in one or both graphs can interact randomly with their network. For this model, it is shown that no estimator can correctly recover a positive fraction of the corrupt nodes. Necessary conditions for any estimator to correctly identify and match all the uncorrupt nodes are derived, and it is shown that these conditions are also sufficient for the k-core estimator. In the strong model, an adversarially selected subset of nodes in one or both graphs can interact arbitrarily with their network. For this model, detection of corrupt nodes is impossible. Even so, we show that if only one of the networks is compromised, then under appropriate conditions, the maximum overlap estimator can correctly match a positive fraction of nodes albeit without explicitly identifying them.
https://proceedings.mlr.press/v235/ameranis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ameranis24a/ameranis24a.pdf
https://openreview.net/forum?id=sfQH4JJ4We
Fast Algorithms for Hypergraph PageRank with Applications to Semi-Supervised Learning
https://proceedings.mlr.press/v235/ameranis24a.html
Konstantinos Ameranis, Adela Frances Depavia, Lorenzo Orecchia, Erasmo Tani
https://proceedings.mlr.press/v235/ameranis24a.html
ICML 2024
A fundamental approach to semi-supervised learning is to leverage the structure of the sample space to diffuse label information from annotated examples to unlabeled points. Traditional methods model the input data points as a graph and rely on fast algorithms for solving Laplacian systems of equations, such as those defining PageRank. However, previous work has demonstrated that graph-based models fail to capture higher-order relations, such as group membership, which are better modeled by hypergraphs. Unfortunately, the scalable application of hypergraph models has been hampered by the non-linearity of the hypergraph Laplacian. In this paper, we present highly scalable algorithms for hypergraph primitives, such as hypergraph PageRank vectors and hypergraph Laplacian systems, over general families of hypergraphs. In addition to giving strong theoretical guarantees, we empirically showcase the speed of our algorithms on benchmark instances of semi-supervised learning on categorical data. We exploit their generality to improve semi-supervised manifold clustering via hypergraph models. By providing significant speed-ups on fundamental hypergraph tasks, our algorithms enable the deployment of hypergraph models on a massive scale.
https://proceedings.mlr.press/v235/amin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/amin24a/amin24a.pdf
https://openreview.net/forum?id=5M4Qa9AqY7
Scalable and Flexible Causal Discovery with an Efficient Test for Adjacency
https://proceedings.mlr.press/v235/amin24a.html
Alan Nawzad Amin, Andrew Gordon Wilson
https://proceedings.mlr.press/v235/amin24a.html
ICML 2024
To make accurate predictions, understand mechanisms, and design interventions in systems of many variables, we wish to learn causal graphs from large scale data. Unfortunately the space of all possible causal graphs is enormous so scalably and accurately searching for the best fit to the data is a challenge. In principle we could substantially decrease the search space, or learn the graph entirely, by testing the conditional independence of variables. However, deciding if two variables are adjacent in a causal graph may require an exponential number of tests. Here we build a scalable and flexible method to evaluate if two variables are adjacent in a causal graph, the Differentiable Adjacency Test (DAT). DAT replaces an exponential number of tests with a provably equivalent relaxed problem. It then solves this problem by training two neural networks. We build a graph learning method based on DAT, DAT-Graph, that can also learn from data with interventions. DAT-Graph can learn graphs of 1000 variables with state of the art accuracy. Using the graph learned by DAT-Graph, we also build models that make much more accurate predictions of the effects of interventions on large scale RNA sequencing data.
https://proceedings.mlr.press/v235/aminian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/aminian24a/aminian24a.pdf
https://openreview.net/forum?id=8h0x12p3zq
Generalization Error of Graph Neural Networks in the Mean-field Regime
https://proceedings.mlr.press/v235/aminian24a.html
Gholamali Aminian, Yixuan He, Gesine Reinert, Lukasz Szpruch, Samuel N. Cohen
https://proceedings.mlr.press/v235/aminian24a.html
ICML 2024
This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters surpasses the quantity of data points. We explore two widely utilized types of graph neural networks: graph convolutional neural networks and message passing graph neural networks. Prior to this study, existing bounds on the generalization error in the over-parametrized regime were uninformative, limiting our understanding of over-parameterized network performance. Our novel approach involves deriving upper bounds within the mean-field regime for evaluating the generalization error of these graph neural networks. We establish upper bounds with a convergence rate of $O(1/n)$, where $n$ is the number of graph samples. These upper bounds offer a theoretical assurance of the networks’ performance on unseen data in the challenging over-parameterized regime and overall contribute to our understanding of their performance.
https://proceedings.mlr.press/v235/amortila24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/amortila24a/amortila24a.pdf
https://openreview.net/forum?id=C64clssMVU
Scalable Online Exploration via Coverability
https://proceedings.mlr.press/v235/amortila24a.html
Philip Amortila, Dylan J Foster, Akshay Krishnamurthy
https://proceedings.mlr.press/v235/amortila24a.html
ICML 2024
Exploration is a major challenge in reinforcement learning, especially for high-dimensional domains that require function approximation. We propose exploration objectives—policy optimization objectives that enable downstream maximization of any reward function—as a conceptual framework to systematize the study of exploration. We introduce a new objective, L1-Coverage, which generalizes previous exploration schemes and supports three fundamental desiderata: 1. Intrinsic complexity control. L1-Coverage is associated with a structural parameter, L1-Coverability, which reflects the intrinsic statistical difficulty of the underlying MDP, subsuming Block and Low-Rank MDPs. 2. Efficient planning. For a known MDP, L1-Coverage efficiently reduces to standard policy optimization, allowing flexible integration with off-the-shelf methods such as policy gradient and Q-learning approaches. 3. Efficient exploration. L1-Coverage enables the first computationally efficient model-based and model-free algorithms for online (reward-free or reward-driven) reinforcement learning in MDPs with low coverability. Empirically, we find that L1-Coverage effectively drives off-the-shelf policy optimization algorithms to explore the state space.
https://proceedings.mlr.press/v235/an24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/an24a/an24a.pdf
https://openreview.net/forum?id=URtUYfC3GA
WAVES: Benchmarking the Robustness of Image Watermarks
https://proceedings.mlr.press/v235/an24a.html
Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang
https://proceedings.mlr.press/v235/an24a.html
ICML 2024
In the burgeoning age of generative AI, watermarks act as identifiers of provenance and artificial content. We present WAVES (Watermark Analysis via Enhanced Stress-testing), a benchmark for assessing image watermark robustness, overcoming the limitations of current evaluation methods. WAVES integrates detection and identification tasks and establishes a standardized evaluation protocol comprising a diverse range of stress tests. The attacks in WAVES range from traditional image distortions to advanced, novel variations of diffusive and adversarial attacks. Our evaluation examines two pivotal dimensions: the degree of image quality degradation and the efficacy of watermark detection after attacks. Our novel, comprehensive evaluation reveals previously undetected vulnerabilities of several modern watermarking algorithms. We envision WAVES as a toolkit for the future development of robust watermarks.
https://proceedings.mlr.press/v235/an24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/an24b/an24b.pdf
https://openreview.net/forum?id=If4xW9vF7U
Training-Free Long-Context Scaling of Large Language Models
https://proceedings.mlr.press/v235/an24b.html
Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong
https://proceedings.mlr.press/v235/an24b.html
ICML 2024
The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length. Given the expensive overhead of finetuning large-scale models with longer sequences, we propose a training-free approach named Dual Chunk Attention (DCA), which enables Llama2 70B to support context windows of up to 100k tokens. By decomposing the attention computation for long sequences into chunk-based modules, DCA manages to effectively capture the relative positional information of tokens within the same chunk (Intra-Chunk) and across distinct chunks (Inter-Chunk), as well as integrates seamlessly with Flash Attention. In addition to its impressive extrapolation capability, DCA achieves performance on practical long-context tasks that is comparable to or even better than that of models built through continual training. All code and data used in this work are released at https://github.com/HKUNLP/ChunkLlama.
https://proceedings.mlr.press/v235/anagnostidis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anagnostidis24a/anagnostidis24a.pdf
https://openreview.net/forum?id=3KxPo62PYn
Navigating Scaling Laws: Compute Optimality in Adaptive Model Training
https://proceedings.mlr.press/v235/anagnostidis24a.html
Sotiris Anagnostidis, Gregor Bachmann, Imanol Schlag, Thomas Hofmann
https://proceedings.mlr.press/v235/anagnostidis24a.html
ICML 2024
In recent years, the state-of-the-art in deep learning has been dominated by very large models that have been pre-trained on vast amounts of data. The paradigm is very simple: investing more computational resources (optimally) leads to better performance, and even predictably so; neural scaling laws have been derived that accurately forecast the performance of a network for a desired level of compute. This leads to the notion of a ’compute-optimal’ model, i.e. a model that allocates a given level of compute during training optimally to maximize performance. In this work, we extend the concept of optimality by allowing for an ’adaptive’ model, i.e. a model that can change its shape during training. By doing so, we can design adaptive models that optimally traverse between the underlying scaling laws and outpace their ‘static’ counterparts, leading to a significant reduction in the required compute to reach a given target performance. We show that our approach generalizes across modalities and different shape parameters.
https://proceedings.mlr.press/v235/anani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anani24a/anani24a.pdf
https://openreview.net/forum?id=iOEReiiTit
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing
https://proceedings.mlr.press/v235/anani24a.html
Alaa Anani, Tobias Lorenz, Bernt Schiele, Mario Fritz
https://proceedings.mlr.press/v235/anani24a.html
ICML 2024
Certification for machine learning amounts to proving that no adversarial sample can evade a model within a given range under certain conditions, a necessity for safety-critical domains. Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty across many classes. We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components that classic methods would abstain from, effectively lowering the abstain rate whilst providing more certified, semantically meaningful information. We mathematically formulate the problem setup, introduce an adaptive hierarchical certification algorithm and prove the correctness of its guarantees. Since certified accuracy does not take the loss of information into account for coarser classes, we introduce the Certified Information Gain ($\mathrm{CIG}$) metric, which is proportional to the class granularity level. Our extensive experiments on the datasets Cityscapes, PASCAL-Context, ACDC and COCO-Stuff demonstrate that our adaptive algorithm achieves a higher $\mathrm{CIG}$ and lower abstain rate compared to the current state-of-the-art certification method. Our code can be found here: https://github.com/AlaaAnani/adaptive-certify.
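The adaptive relaxation can be sketched per pixel; in this simplified rendition, the `confident` test and the hierarchy interface are placeholders for the paper's proper statistical test and label taxonomy:

```python
# Adaptive hierarchical certification sketch for one pixel: take the class votes from
# the base segmenter evaluated on noisy copies of the input (standard randomized
# smoothing); if the top vote is not confidently dominant, merge votes to the coarser
# parent classes and retry instead of abstaining immediately.
from collections import Counter

def certify_pixel(noisy_votes, parent, confident, max_levels=3):
    """noisy_votes: class labels predicted for this pixel under Gaussian input noise."""
    labels = list(noisy_votes)
    for _ in range(max_levels):
        top_class, top_count = Counter(labels).most_common(1)[0]
        if confident(top_count, len(labels)):        # e.g. a binomial lower confidence bound
            return top_class                         # certified at this granularity
        labels = [parent.get(c, c) for c in labels]  # relax to the coarser level
    return None                                       # abstain
```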
https://proceedings.mlr.press/v235/anders24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anders24a/anders24a.pdf
https://openreview.net/forum?id=dSrdnhLS2h
Adaptive Observation Cost Control for Variational Quantum Eigensolvers
https://proceedings.mlr.press/v235/anders24a.html
Christopher J. Anders, Kim Andrea Nicoli, Bingting Wu, Naima Elosegui, Samuele Pedrielli, Lena Funcke, Karl Jansen, Stefan Kühn, Shinichi Nakajima
https://proceedings.mlr.press/v235/anders24a.html
ICML 2024
The objective to be minimized in the variational quantum eigensolver (VQE) has a restricted form, which allows a specialized sequential minimal optimization (SMO) that requires only a few observations in each iteration. However, the SMO iteration is still costly due to the observation noise—one observation at a point typically requires averaging over hundreds to thousands of repeated quantum measurement shots for achieving a reasonable noise level. In this paper, we propose an adaptive cost control method, named subspace in confident region (SubsCoRe), for SMO. SubsCoRe uses the Gaussian process (GP) surrogate, and requires it to have low uncertainty over the subspace being updated, so that optimization in each iteration is performed with guaranteed accuracy. Adaptive cost control is performed by setting the required accuracy according to the progress of the optimization, and identifying the minimum number of measurement shots, as well as their distribution, satisfying the SubsCoRe requirement.
https://proceedings.mlr.press/v235/angell24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/angell24a/angell24a.pdf
https://openreview.net/forum?id=gqA8ZHO0j8
Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching
https://proceedings.mlr.press/v235/angell24a.html
Rico Angell, Andrew Mccallum
https://proceedings.mlr.press/v235/angell24a.html
ICML 2024
While semidefinite programming (SDP) has traditionally been limited to moderate-sized problems, recent algorithms augmented with matrix sketching techniques have enabled solving larger SDPs. However, these methods achieve scalability at the cost of an increase in the number of necessary iterations, resulting in slower convergence as the problem size grows. Furthermore, they require iteration-dependent parameter schedules that prohibit effective utilization of warm-start initializations important in practical applications with incrementally-arriving data or mixed-integer programming. We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs that can leverage a warm-start initialization to further accelerate convergence. Our proposed algorithm is a spectral bundle method for solving general SDPs containing both equality and inequality constraints. Moreover, when augmented with an optional matrix sketching technique, our algorithm achieves the dramatically improved scalability of previous work while sustaining convergence speed. We empirically demonstrate the effectiveness of our method across multiple applications, with and without warm-starting. For example, USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables. We make our implementation in pure JAX publicly available.
https://proceedings.mlr.press/v235/angelopoulos24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/angelopoulos24a/angelopoulos24a.pdf
https://openreview.net/forum?id=2XkRIijUKw
Online conformal prediction with decaying step sizes
https://proceedings.mlr.press/v235/angelopoulos24a.html
Anastasios Nikolas Angelopoulos, Rina Barber, Stephen Bates
https://proceedings.mlr.press/v235/angelopoulos24a.html
ICML 2024
We introduce a method for online conformal prediction with decaying step sizes. Like previous methods, ours possesses a retrospective guarantee of coverage for arbitrary sequences. However, unlike previous methods, we can simultaneously estimate a population quantile when it exists. Our theory and experiments indicate substantially improved practical properties: in particular, when the distribution is stable, the coverage is close to the desired level for every time point, not just on average over the observed sequence.
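A generic online conformal update with a decaying step size can be sketched as follows (the particular decay below is illustrative, not the schedule analyzed in the paper):

```python
# Online conformal prediction: maintain a threshold q_t on nonconformity scores, report
# the prediction set {y : s(x_t, y) <= q_t}, then move q_t up after a miss and down
# after a cover, with a step size eta_t that decays over time.
def online_conformal_thresholds(scores, alpha=0.1, q0=0.0, c=1.0):
    """scores: stream of realized nonconformity scores s_t; returns thresholds q_1, q_2, ..."""
    q, thresholds = q0, []
    for t, s in enumerate(scores, start=1):
        thresholds.append(q)
        err = float(s > q)            # 1 if the set at time t failed to cover the truth
        eta = c / t**0.6              # decaying step size (illustrative schedule)
        q = q + eta * (err - alpha)   # online (sub)gradient step on the pinball loss
    return thresholds
```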
https://proceedings.mlr.press/v235/apostolopoulou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/apostolopoulou24a/apostolopoulou24a.pdf
https://openreview.net/forum?id=zMGUDsPopK
A Rate-Distortion View of Uncertainty Quantification
https://proceedings.mlr.press/v235/apostolopoulou24a.html
Ifigeneia Apostolopoulou, Benjamin Eysenbach, Frank Nielsen, Artur Dubrawski
https://proceedings.mlr.press/v235/apostolopoulou24a.html
ICML 2024
In supervised learning, understanding an input’s proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction. While powerful probabilistic models such as Gaussian Processes naturally have this property, deep neural networks often lack it. In this paper, we introduce Distance Aware Bottleneck (DAB), i.e., a new method for enriching deep neural networks with this property. Building on prior information bottleneck approaches, our method learns a codebook that stores a compressed representation of all inputs seen during training. The distance of a new example from this codebook can serve as an uncertainty estimate for the example. The resulting model is simple to train and provides deterministic uncertainty estimates by a single forward pass. Finally, our method achieves better out-of-distribution (OOD) detection and misclassification prediction than prior methods, including expensive ensemble methods, deep kernel Gaussian Processes, and approaches based on the standard information bottleneck.
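The inference-time behavior described above is easy to sketch, assuming a Euclidean nearest-codebook distance (the codebook training, i.e. the information-bottleneck part, is not reproduced here, and the distance choice is an assumption):

```python
# Distance-to-codebook uncertainty: embed the input, compare it to the codebook learned
# during training, and use the distance to the nearest entry as a deterministic
# uncertainty score obtained in a single forward pass.
import numpy as np

def codebook_uncertainty(embedding, codebook):
    """embedding: (d,) feature vector; codebook: (K, d) learned centroids."""
    return float(np.linalg.norm(codebook - embedding[None, :], axis=1).min())

codebook = np.random.default_rng(0).normal(size=(16, 8))  # stand-in for a learned codebook
print(codebook_uncertainty(np.zeros(8), codebook))        # large value => far from training data
```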
https://proceedings.mlr.press/v235/archer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/archer24a/archer24a.pdf
https://openreview.net/forum?id=S3xqyEaST9
Practical Performance Guarantees for Pipelined DNN Inference
https://proceedings.mlr.press/v235/archer24a.html
Aaron Archer, Matthew Fahrbach, Kuikui Liu, Prakash Prabhu
https://proceedings.mlr.press/v235/archer24a.html
ICML 2024
We optimize pipeline parallelism for deep neural network (DNN) inference by partitioning model graphs into $k$ stages and minimizing the running time of the bottleneck stage, including communication. We give practical and effective algorithms for this NP-hard problem, but our emphasis is on tackling the practitioner’s dilemma of deciding when a solution is good enough. To this end, we design novel mixed integer programming (MIP) relaxations for proving lower bounds. Applying these methods to a diverse testbed of 369 production models, for $k \in \\{2, 4, 8, 16, 32, 64\\}$, we empirically show that these lower bounds are strong enough to be useful in practice. Our lower bounds are substantially stronger than standard combinatorial bounds. For example, evaluated via geometric means across a production testbed with $k = 16$ pipeline stages, our MIP formulations raise the lower bound from 0.4598 to 0.9452, expressed as a fraction of the best partition found. In other words, our improved lower bounds close the optimality gap by a factor of 9.855x.
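For intuition, the special case of a simple chain of layer costs (contiguous stages, no communication) admits a classical solution by binary search on the bottleneck value; the paper's setting is the much harder general-graph problem with communication and MIP lower bounds, which this sketch does not cover:

```python
# Bottleneck-minimizing partition of a chain of layer costs into at most k contiguous
# stages: binary-search the bottleneck value and greedily check feasibility.
def min_bottleneck_chain(costs, k):
    def feasible(limit):
        stages, current = 1, 0.0
        for c in costs:
            if c > limit:
                return False
            if current + c > limit:
                stages, current = stages + 1, c
            else:
                current += c
        return stages <= k

    lo, hi = max(costs), sum(costs)
    while hi - lo > 1e-6 * max(hi, 1.0):
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi  # approximately the smallest achievable bottleneck stage time

print(min_bottleneck_chain([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0], k=3))  # ~11.0
```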
https://proceedings.mlr.press/v235/arefin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arefin24a/arefin24a.pdf
https://openreview.net/forum?id=lQzmDFlsHX
Unsupervised Concept Discovery Mitigates Spurious Correlations
https://proceedings.mlr.press/v235/arefin24a.html
Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi
https://proceedings.mlr.press/v235/arefin24a.html
ICML 2024
Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases. Addressing this challenge typically involves methods relying on prior knowledge and group annotation to remove spurious correlations, which may not be readily available in many applications. In this paper, we establish a novel connection between unsupervised object-centric learning and mitigation of spurious correlations. Instead of directly inferring subgroups with varying correlations with labels, our approach focuses on discovering concepts: discrete ideas that are shared across input samples. Leveraging existing object-centric representation learning, we introduce CoBalT: a concept balancing technique that effectively mitigates spurious correlations without requiring human labeling of subgroups. Evaluation across benchmark datasets for sub-population shifts demonstrates superior or competitive performance compared to state-of-the-art baselines, without the need for group annotation. Code is available at https://github.com/rarefin/CoBalT
https://proceedings.mlr.press/v235/arisaka24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arisaka24a/arisaka24a.pdf
https://openreview.net/forum?id=yh6Y7ppf46
Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
https://proceedings.mlr.press/v235/arisaka24a.html
Sohei Arisaka, Qianxiao Li
https://proceedings.mlr.press/v235/arisaka24a.html
ICML 2024
Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice. To accelerate scientific computing, a promising approach is to use machine learning (especially meta-learning) techniques to select hyperparameters of traditional numerical methods. There have been numerous proposals in this direction, but many of them require automatic-differentiable numerical methods. However, in reality, many practical applications still depend on well-established but non-automatic-differentiable legacy codes, which prevents practitioners from applying the state-of-the-art research to their own problems. To resolve this problem, we propose a non-intrusive methodology with a novel gradient estimation technique to combine machine learning and legacy numerical codes without any modification. We theoretically and numerically show the advantage of the proposed method over other baselines and present applications of accelerating established non-automatic-differentiable numerical solvers implemented in PETSc, a widely used open-source numerical software library.
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/armengol-urpi-24a/armengol-urpi-24a.pdf
https://openreview.net/forum?id=6Zl9rv6PDx
Causal Action Influence Aware Counterfactual Data Augmentation
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
Núria Armengol Urpı́, Marco Bagatella, Marin Vlastelica, Georg Martius
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
ICML 2024
Offline data are both valuable and practical resources for teaching robots complex behaviors. Ideally, learning agents should not be constrained by the scarcity of available demonstrations, but rather generalize beyond the training distribution. However, the complexity of real-world scenarios typically requires huge amounts of data to prevent neural network policies from picking up on spurious correlations and learning non-causal relationships. We propose CAIAC, a data augmentation method that can create feasible synthetic transitions from a fixed dataset without having access to online environment interactions. By utilizing principled methods for quantifying causal influence, we are able to perform counterfactual reasoning by swapping $\textit{action}$-unaffected parts of the state-space between independent trajectories in the dataset. We empirically show that this leads to a substantial increase in robustness of offline learning algorithms against distributional shift.
https://proceedings.mlr.press/v235/arnaboldi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arnaboldi24a/arnaboldi24a.pdf
https://openreview.net/forum?id=ZSQAf5YlvN
Online Learning and Information Exponents: The Importance of Batch size & Time/Complexity Tradeoffs
https://proceedings.mlr.press/v235/arnaboldi24a.html
Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
https://proceedings.mlr.press/v235/arnaboldi24a.html
ICML 2024
We study the impact of the batch size $n_b$ on the iteration time $T$ of training two-layer neural networks with one-pass stochastic gradient descent (SGD) on multi-index target functions of isotropic covariates. We characterize the optimal batch size minimizing the iteration time as a function of the hardness of the target, as characterized by the information exponents. We show that performing gradient updates with large batches $n_b \lesssim d^{\frac{\ell}{2}}$ minimizes the training time without changing the total sample complexity, where $\ell$ is the information exponent of the target to be learned and $d$ is the input dimension. However, larger batch sizes than $n_b \gg d^{\frac{\ell}{2}}$ are detrimental for improving the time complexity of SGD. We provably overcome this fundamental limitation via a different training protocol, Correlation loss SGD, which suppresses the auto-correlation terms in the loss function. We show that one can track the training progress by a system of low-dimensional ordinary differential equations (ODEs). Finally, we validate our theoretical results with numerical experiments.
https://proceedings.mlr.press/v235/arora24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arora24a/arora24a.pdf
https://openreview.net/forum?id=e93ffDcpH3
Simple linear attention language models balance the recall-throughput tradeoff
https://proceedings.mlr.press/v235/arora24a.html
Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, James Zou, Atri Rudra, Christopher Re
https://proceedings.mlr.press/v235/arora24a.html
ICML 2024
Recent work has shown that attention-based language models excel at "recall", the ability to ground generations in tokens previously seen in context. However, the efficiency of attention-based models is bottlenecked during inference by the KV-cache’s aggressive memory consumption. In this work, we explore whether we can improve language model efficiency (e.g. by reducing memory consumption) without compromising on recall. By applying experiments and theory to a broad set of architectures, we identify a key tradeoff between a model’s recurrent state size and recall ability. We show that efficient alternatives to attention (e.g. H3, Mamba, RWKV) maintain a fixed-size recurrent state, but struggle at recall. We propose BASED, a simple architecture combining linear and sliding window attention. By varying BASED window size and linear attention feature dimension, we can dial the state size and traverse the Pareto frontier of the recall-memory tradeoff curve, recovering the full quality of attention on one end and the small state size of attention-alternatives on the other. We train language models up to $1.3$b parameters and show that BASED matches the strongest sub-quadratic models (e.g. Mamba) in perplexity and outperforms them on real-world recall-intensive tasks by 10.36 accuracy points. We further develop IO-aware algorithms that enable BASED to provide 24× higher throughput on language generation than FlashAttention-2, when generating 1024 tokens using 1.3b parameter models. Overall, BASED expands the Pareto frontier of the throughput-recall tradeoff space beyond prior architectures.
https://proceedings.mlr.press/v235/arpino24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arpino24a/arpino24a.pdf
https://openreview.net/forum?id=1JgCpZS17T
Inferring Change Points in High-Dimensional Linear Regression via Approximate Message Passing
https://proceedings.mlr.press/v235/arpino24a.html
Gabriel Arpino, Xiaoqi Liu, Ramji Venkataramanan
https://proceedings.mlr.press/v235/arpino24a.html
ICML 2024
We consider the problem of localizing change points in high-dimensional linear regression. We propose an Approximate Message Passing (AMP) algorithm for estimating both the signals and the change point locations. Assuming Gaussian covariates, we give an exact asymptotic characterization of its estimation performance in the limit where the number of samples grows proportionally to the signal dimension. Our algorithm can be tailored to exploit any prior information on the signal, noise, and change points. It also enables uncertainty quantification in the form of an efficiently computable approximate posterior distribution, whose asymptotic form we characterize exactly. We validate our theory via numerical experiments, and demonstrate the favorable performance of our estimators on both synthetic data and images.
https://proceedings.mlr.press/v235/arruda24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arruda24a/arruda24a.pdf
https://openreview.net/forum?id=uCdcXRuHnC
An amortized approach to non-linear mixed-effects modeling based on neural posterior estimation
https://proceedings.mlr.press/v235/arruda24a.html
Jonas Arruda, Yannik Schälte, Clemens Peiter, Olga Teplytska, Ulrich Jaehde, Jan Hasenauer
https://proceedings.mlr.press/v235/arruda24a.html
ICML 2024
Non-linear mixed-effects models are a powerful tool for studying heterogeneous populations in various fields, including biology, medicine, economics, and engineering. Here, the aim is to find a distribution over the parameters that describe the whole population using a model that can generate simulations for an individual of that population. However, fitting these distributions to data is computationally challenging if the description of individuals is complex and the population is large. To address this issue, we propose a novel machine learning-based approach: We exploit neural density estimation based on conditional normalizing flows to approximate individual-specific posterior distributions in an amortized fashion, thereby allowing for efficient inference of population parameters. Applying this approach to problems from cell biology and pharmacology, we demonstrate its unseen flexibility and scalability to large data sets compared to established methods.
https://proceedings.mlr.press/v235/asadi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/asadi24a/asadi24a.pdf
https://openreview.net/forum?id=jP1zeEqHli
Learning the Target Network in Function Space
https://proceedings.mlr.press/v235/asadi24a.html
Kavosh Asadi, Yao Liu, Shoham Sabach, Ming Yin, Rasool Fakoor
https://proceedings.mlr.press/v235/asadi24a.html
ICML 2024
We focus on the task of learning the value function in the reinforcement learning (RL) setting. This task is often solved by updating a pair of online and target networks while ensuring that the parameters of these two networks are equivalent. We propose Lookahead-Replicate (LR), a new value-function approximation algorithm that is agnostic to this parameter-space equivalence. Instead, the LR algorithm is designed to maintain an equivalence between the two networks in the function space. This value-based equivalence is obtained by employing a new target-network update. We show that LR leads to a convergent behavior in learning the value function. We also present empirical results demonstrating that LR-based target-network updates significantly improve deep RL on the Atari benchmark.
https://proceedings.mlr.press/v235/ashman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ashman24a/ashman24a.pdf
https://openreview.net/forum?id=pftXzp6Yn3
Translation Equivariant Transformer Neural Processes
https://proceedings.mlr.press/v235/ashman24a.html
Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P Bruinsma, Richard E. Turner
https://proceedings.mlr.press/v235/ashman24a.html
ICML 2024
The effectiveness of neural processes (NPs) in modelling posterior prediction maps—the mapping from data to posterior predictive distributions—has significantly improved since their inception. This improvement can be attributed to two principal factors: (1) advancements in the architecture of permutation invariant set functions, which are intrinsic to all NPs; and (2) leveraging symmetries present in the true posterior predictive map, which are problem dependent. Transformers are a notable development in permutation invariant set functions, and their utility within NPs has been demonstrated through the family of models we refer to as TNPs. Despite significant interest in TNPs, little attention has been given to incorporating symmetries. Notably, the posterior prediction maps for data that are stationary—a common assumption in spatio-temporal modelling—exhibit translation equivariance. In this paper, we introduce a new family of translation equivariant TNPs (TE-TNPs) that incorporate translation equivariance. Through an extensive range of experiments on synthetic and real-world spatio-temporal data, we demonstrate the effectiveness of TE-TNPs relative to their non-translation-equivariant counterparts and other NP baselines.
https://proceedings.mlr.press/v235/asi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/asi24a/asi24a.pdf
https://openreview.net/forum?id=PTGJOUlQ68
Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
https://proceedings.mlr.press/v235/asi24a.html
Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy Nguyen, Kunal Talwar, Samson Zhou
https://proceedings.mlr.press/v235/asi24a.html
ICML 2024
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in \mathbb{R}^d$. We propose a new multi-message protocol that achieves the optimal error using $O(\min(n\varepsilon^2,d))$ messages per user. Moreover, we show that any (unbiased) protocol that achieves optimal error must require each user to send $\Omega(\min(n\varepsilon^2,d)/\log(n))$ messages, demonstrating the optimality of our message complexity up to logarithmic factors. Additionally, we study the single-message setting and design a protocol that achieves mean squared error $O(dn^{d/(d+2)}\varepsilon^{-4/(d+2)})$. Moreover, we show that any single-message protocol must incur mean squared error $\Omega(dn^{d/(d+2)})$, showing that our protocol is optimal in the standard setting where $\varepsilon = \Theta(1)$. Finally, we study robustness to malicious users and show that malicious users can incur large additive error with a single shuffler.
https://proceedings.mlr.press/v235/athiwaratkun24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/athiwaratkun24a/athiwaratkun24a.pdf
https://openreview.net/forum?id=JPNBFWQ9H2
Bifurcated Attention for Single-Context Large-Batch Sampling
https://proceedings.mlr.press/v235/athiwaratkun24a.html
Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang
https://proceedings.mlr.press/v235/athiwaratkun24a.html
ICML 2024
In our study, we present bifurcated attention, a method developed for language model inference in single-context batch sampling contexts. This approach aims to reduce redundant memory IO costs, a significant factor in latency for high batch sizes and long context lengths. Bifurcated attention achieves this by dividing the attention mechanism during incremental decoding into two distinct GEMM operations, focusing on the KV cache from prefill and the decoding process. This method ensures precise computation and maintains the usual computational load (FLOPs) of standard attention mechanisms, but with reduced memory IO. Bifurcated attention is also compatible with multi-query attention mechanism known for reduced memory IO for KV cache, further enabling higher batch size and context length. The resulting efficiency leads to lower latency, improving suitability for real-time applications, e.g., enabling massively-parallel answer generation without substantially increasing latency, enhancing performance when integrated with post-processing techniques such as reranking.
https://proceedings.mlr.press/v235/attali24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attali24a/attali24a.pdf
https://openreview.net/forum?id=uyhjKoaIQa
Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation
https://proceedings.mlr.press/v235/attali24a.html
Hugo Attali, Davide Buscaldi, Nathalie Pernelle
https://proceedings.mlr.press/v235/attali24a.html
ICML 2024
GNNs rely on the exchange of messages to distribute information along the edges of the graph. This approach makes the efficiency of architectures highly dependent on the specific structure of the input graph. Certain graph topologies lead to inefficient information propagation, resulting in a phenomenon known as over-squashing. While the majority of existing methods address over-squashing by rewiring the input graph, our novel approach involves constructing a graph directly from features using Delaunay Triangulation. We posit that the topological properties of the resulting graph prove advantageous for mitigating over-smoothing and over-squashing. Our extensive experimentation demonstrates that our method consistently outperforms established graph rewiring methods.
https://proceedings.mlr.press/v235/attia24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attia24a/attia24a.pdf
https://openreview.net/forum?id=6L4K5jmSJq
How Free is Parameter-Free Stochastic Optimization?
https://proceedings.mlr.press/v235/attia24a.html
Amit Attia, Tomer Koren
https://proceedings.mlr.press/v235/attia24a.html
ICML 2024
We study the problem of parameter-free stochastic optimization, inquiring whether, and under what conditions, do fully parameter-free methods exist: these are methods that achieve convergence rates competitive with optimally tuned methods, without requiring significant knowledge of the true problem parameters. Existing parameter-free methods can only be considered “partially” parameter-free, as they require some non-trivial knowledge of the true problem parameters, such as a bound on the stochastic gradient norms, a bound on the distance to a minimizer, etc. In the non-convex setting, we demonstrate that a simple hyperparameter search technique results in a fully parameter-free method that outperforms more sophisticated state-of-the-art algorithms. We also provide a similar result in the convex setting with access to noisy function values under mild noise assumptions. Finally, assuming only access to stochastic gradients, we establish a lower bound that renders fully parameter-free stochastic convex optimization infeasible, and provide a method which is (partially) parameter-free up to the limit indicated by our lower bound.
https://proceedings.mlr.press/v235/attias24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attias24a/attias24a.pdf
https://openreview.net/forum?id=CyEJn71Z00
Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization, and Tracing
https://proceedings.mlr.press/v235/attias24a.html
Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy
https://proceedings.mlr.press/v235/attias24a.html
ICML 2024
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information a learning algorithm reveals about its training data points. We then quantify this information using the framework of conditional mutual information (CMI) proposed by Steinke and Zakynthinou (2020). Our main result is a precise characterization of the tradeoff between the accuracy of a learning algorithm and its CMI, answering an open question posed by Livni (2023). We show that, in the $L^2$ Lipschitz–bounded setting and under strong convexity, every learner with an excess error $\epsilon$ has CMI bounded below by $\Omega(1/\epsilon^2)$ and $\Omega(1/\epsilon)$, respectively. We further demonstrate the essential role of memorization in learning problems in SCO by designing an adversary capable of accurately identifying a significant fraction of the training samples in specific SCO problems. Finally, we enumerate several implications of our results, such as a limitation of generalization bounds based on CMI and the incompressibility of samples in SCO problems.
https://proceedings.mlr.press/v235/attias24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attias24b/attias24b.pdf
https://openreview.net/forum?id=71ktaA3ihI
Agnostic Sample Compression Schemes for Regression
https://proceedings.mlr.press/v235/attias24b.html
Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
https://proceedings.mlr.press/v235/attias24b.html
ICML 2024
We obtain the first positive results for bounded sample compression in the agnostic regression setting with the $\ell_p$ loss, where $p\in [1,\infty]$. We construct a generic approximate sample compression scheme for real-valued function classes exhibiting exponential size in the fat-shattering dimension but independent of the sample size. Notably, for linear regression, an approximate compression of size linear in the dimension is constructed. Moreover, for $\ell_1$ and $\ell_\infty$ losses, we can even exhibit an efficient exact sample compression scheme of size linear in the dimension. We further show that for every other $\ell_p$ loss, $p\in (1,\infty)$, there does not exist an exact agnostic compression scheme of bounded size. This refines and generalizes a negative result of David, Moran, and Yehudayoff (2016) for the $\ell_2$ loss. We close by posing general open questions: for agnostic regression with $\ell_1$ loss, does every function class admit an exact compression scheme of polynomial size in the pseudo-dimension? For the $\ell_2$ loss, does every function class admit an approximate compression scheme of polynomial size in the fat-shattering dimension? These questions generalize Warmuth’s classic sample compression conjecture for realizable-case classification (Warmuth, 2003).
https://proceedings.mlr.press/v235/axiotis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/axiotis24a/axiotis24a.pdf
https://openreview.net/forum?id=WUQ4YzIQt2
Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond
https://proceedings.mlr.press/v235/axiotis24a.html
Kyriakos Axiotis, Vincent Cohen-Addad, Monika Henzinger, Sammy Jerome, Vahab Mirrokni, David Saulpic, David Woodruff, Michael Wunder
https://proceedings.mlr.press/v235/axiotis24a.html
ICML 2024
We study the data selection problem, whose aim is to select a small representative subset of data that can be used to efficiently train a machine learning model. We present a new data selection approach based on $k$-means clustering and sensitivity sampling. Assuming access to an embedding representation of the data with respect to which the model loss is Holder continuous, our approach provably allows selecting a set of “typical” $k + 1/\varepsilon^2$ elements whose average loss corresponds to the average loss of the whole dataset, up to a multiplicative $(1\pm\varepsilon)$ factor and an additive $\varepsilon \lambda \Phi_k$, where $\Phi_k$ represents the $k$-means cost for the input embeddings and $\lambda$ is the Holder constant. We furthermore demonstrate the performance and scalability of our approach on fine-tuning foundation models and show that it outperforms state-of-the-art methods. We also show how it can be applied on linear regression, leading to a new sampling strategy that surprisingly matches the performance of leverage score sampling, while being conceptually simpler and more scalable.
https://proceedings.mlr.press/v235/ayme24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ayme24a/ayme24a.pdf
https://openreview.net/forum?id=B5g6y7JlMw
Random features models: a way to study the success of naive imputation
https://proceedings.mlr.press/v235/ayme24a.html
Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet
https://proceedings.mlr.press/v235/ayme24a.html
ICML 2024
Constant (naive) imputation is still widely used in practice as this is a first easy-to-use technique to deal with missing data. Yet, this simple method could be expected to induce a large bias for prediction purposes, as the imputed input may strongly differ from the true underlying data. However, recent works suggest that this bias is low in the context of high-dimensional linear predictors when data is supposed to be missing completely at random (MCAR). This paper completes the picture for linear predictors by confirming the intuition that the bias is negligible and that surprisingly naive imputation also remains relevant in very low dimension. To this aim, we consider a unique underlying random features model, which offers a rigorous framework for studying predictive performances, whilst the dimension of the observed features varies. Building on these theoretical results, we establish finite-sample bounds on stochastic gradient (SGD) predictors applied to zero-imputed data, a strategy particularly well suited for large-scale learning. If the MCAR assumption appears to be strong, we show that similar favorable behaviors occur for more complex missing data scenarios.
https://proceedings.mlr.press/v235/ayoub24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ayoub24a/ayoub24a.pdf
https://openreview.net/forum?id=7PXSc5fURu
Switching the Loss Reduces the Cost in Batch Reinforcement Learning
https://proceedings.mlr.press/v235/ayoub24a.html
Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James Mcinerney, Dawen Liang, Nathan Kallus, Csaba Szepesvari
https://proceedings.mlr.press/v235/ayoub24a.html
ICML 2024
We propose training fitted Q-iteration with log-loss (FQI-LOG) for batch reinforcement learning (RL). We show that the number of samples needed to learn a near-optimal policy with FQI-LOG scales with the accumulated cost of the optimal policy, which is zero in problems where acting optimally achieves the goal and incurs no cost. In doing so, we provide a general framework for proving small-cost bounds, i.e. bounds that scale with the optimal achievable cost, in batch RL. Moreover, we empirically verify that FQI-LOG uses fewer samples than FQI trained with squared loss on problems where the optimal policy reliably achieves the goal.
https://proceedings.mlr.press/v235/azarmehr24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/azarmehr24a/azarmehr24a.pdf
https://openreview.net/forum?id=EDEISRmi6X
Bipartite Matching in Massive Graphs: A Tight Analysis of EDCS
https://proceedings.mlr.press/v235/azarmehr24a.html
Amir Azarmehr, Soheil Behnezhad, Mohammad Roghani
https://proceedings.mlr.press/v235/azarmehr24a.html
ICML 2024
Maximum matching is one of the most fundamental combinatorial optimization problems with applications in various contexts such as balanced clustering, data mining, resource allocation, and online advertisement. In many of these applications, the input graph is massive. The sheer size of these inputs makes it impossible to store the whole graph in the memory of a single machine and process it there. Graph sparsification has been an extremely powerful tool to alleviate this problem. In this paper, we study a highly successful and versatile sparsifier for the matching problem: the edge-degree constrained subgraph (EDCS) introduced first by Bernstein & Stein (2015). The EDCS has a parameter $\beta \geq 2$ which controls the density of the sparsifier. It has been shown through various proofs in the literature that by picking a subgraph with $O(n\beta)$ edges, the EDCS includes a matching of size at least $2/3-O(1/\beta)$ times the maximum matching size. As such, by increasing $\beta$ the approximation ratio of EDCS gets closer and closer to $2/3$. In this paper, we propose a new approach for analyzing the approximation ratio of EDCS. Our analysis is tight for any value of $\beta$. Namely, we pinpoint the precise approximation ratio of EDCS for any sparsity parameter $\beta$. Our analysis reveals that one does not necessarily need to increase $\beta$ to improve approximation, as suggested by previous analysis. In particular, the best choice turns out to be $\beta = 6$, which achieves an approximation ratio of $.677$! This is arguably surprising as it is even better than $2/3 \sim .666$, the bound that was widely believed to be the limit for EDCS.
https://proceedings.mlr.press/v235/azizian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/azizian24a/azizian24a.pdf
https://openreview.net/forum?id=vsOF7qDNhl
What is the Long-Run Distribution of Stochastic Gradient Descent? A Large Deviations Analysis
https://proceedings.mlr.press/v235/azizian24a.html
Waı̈ss Azizian, Franck Iutzeler, Jerome Malick, Panayotis Mertikopoulos
https://proceedings.mlr.press/v235/azizian24a.html
ICML 2024
In this paper, we examine the long-run distribution of stochastic gradient descent (SGD) in general, non-convex problems. Specifically, we seek to understand which regions of the problem’s state space are more likely to be visited by SGD, and by how much. Using an approach based on the theory of large deviations and randomly perturbed dynamical systems, we show that the long-run distribution of SGD resembles the Boltzmann-Gibbs distribution of equilibrium thermodynamics with temperature equal to the method’s step-size and energy levels determined by the problem’s objective and the statistics of the noise. In particular, we show that, in the long run, (a) the problem’s critical region is visited exponentially more often than any non-critical region; (b) the iterates of SGD are exponentially concentrated around the problem’s minimum energy state (which does not always coincide with the global minimum of the objective); (c) all other connected components of critical points are visited with frequency that is exponentially proportional to their energy level; and, finally, (d) any component of local maximizers or saddle points is "dominated" by a component of local minimizers which is visited exponentially more often.
https://proceedings.mlr.press/v235/babu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/babu24a/babu24a.pdf
https://openreview.net/forum?id=8STOjGCkfH
HyperFields: Towards Zero-Shot Generation of NeRFs from Text
https://proceedings.mlr.press/v235/babu24a.html
Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka
https://proceedings.mlr.press/v235/babu24a.html
ICML 2024
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass and (optionally) some fine-tuning. Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation training, which distills scenes encoded in individual NeRFs into one dynamic hypernetwork. These techniques enable a single network to fit over a hundred unique scenes. We further demonstrate that HyperFields learns a more general map between text and NeRFs, and consequently is capable of predicting novel in-distribution and out-of-distribution scenes — either zero-shot or with a few finetuning steps. Finetuning HyperFields benefits from accelerated convergence thanks to the learned general map, and is capable of synthesizing novel scenes 5 to 10 times faster than existing neural optimization-based methods. Our ablation experiments show that both the dynamic architecture and NeRF distillation are critical to the expressivity of HyperFields.
https://proceedings.mlr.press/v235/baby24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/baby24a/baby24a.pdf
https://openreview.net/forum?id=7XZKzQtooN
Online Matrix Completion: A Collaborative Approach with Hott Items
https://proceedings.mlr.press/v235/baby24a.html
Dheeraj Baby, Soumyabrata Pal
https://proceedings.mlr.press/v235/baby24a.html
ICML 2024
We investigate the low rank matrix completion problem in an online setting with ${M}$ users, ${N}$ items, ${T}$ rounds, and an unknown rank-$r$ reward matrix ${R}\in \mathbb{R}^{{M}\times {N}}$. This problem has been well-studied in the literature and has several applications in practice. In each round, we recommend ${S}$ carefully chosen distinct items to every user and observe noisy rewards. In the regime where ${M},{N} >> {T}$, we propose two distinct computationally efficient algorithms for recommending items to users and analyze them under the benign hott items assumption. 1) First, for ${S}=1$, under additional incoherence/smoothness assumptions on ${R}$, we propose the phased algorithm PhasedClusterElim. Our algorithm obtains a near-optimal per-user regret of $\tilde{O}({N}{M}^{-1}(\Delta^{-1}+\Delta_{\text{hott}}^{-2}))$ where $\Delta_{\text{hott}},\Delta$ are problem-dependent gap parameters with $\Delta_{\text{hott}} >> \Delta$ almost always. 2) Second, we consider a simplified setting with ${S}=r$ where we make significantly milder assumptions on ${R}$. Here, we introduce another phased algorithm, DeterminantElim, to derive a regret guarantee of $\tilde{O}({N}{M}^{-1/r}\Delta_{\text{det}}^{-1})$ where $\Delta_{\text{det}}$ is another problem-dependent gap. Both algorithms crucially use collaboration among users to jointly eliminate sub-optimal items for groups of users successively in phases, but with distinctive and novel approaches.
https://proceedings.mlr.press/v235/bacellar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bacellar24a/bacellar24a.pdf
https://openreview.net/forum?id=GBxflz0qdX
Differentiable Weightless Neural Networks
https://proceedings.mlr.press/v235/bacellar24a.html
Alan Tendler Leibel Bacellar, Zachary Susskind, Mauricio Breternitz Jr, Eugene John, Lizy Kurian John, Priscila Machado Vieira Lima, Felipe M.G. França
https://proceedings.mlr.press/v235/bacellar24a.html
ICML 2024
We introduce the Differentiable Weightless Neural Network (DWN), a model based on interconnected lookup tables. Training of DWNs is enabled by a novel Extended Finite Difference technique for approximate differentiation of binary values. We propose Learnable Mapping, Learnable Reduction, and Spectral Regularization to further improve the accuracy and efficiency of these models. We evaluate DWNs in three edge computing contexts: (1) an FPGA-based hardware accelerator, where they demonstrate superior latency, throughput, energy efficiency, and model area compared to state-of-the-art solutions, (2) a low-power microcontroller, where they achieve preferable accuracy to XGBoost while subject to stringent memory constraints, and (3) ultra-low-cost chips, where they consistently outperform small models in both accuracy and projected hardware area. DWNs also compare favorably against leading approaches for tabular datasets, with higher average rank. Overall, our work positions DWNs as a pioneering solution for edge-compatible high-throughput neural networks.
https://proceedings.mlr.press/v235/bachmann24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bachmann24a/bachmann24a.pdf
https://openreview.net/forum?id=76zq8Wkl6Z
The Pitfalls of Next-Token Prediction
https://proceedings.mlr.press/v235/bachmann24a.html
Gregor Bachmann, Vaishnavh Nagarajan
https://proceedings.mlr.press/v235/bachmann24a.html
ICML 2024
Can a mere next-token predictor faithfully model human thinking? Our work is aimed at crystallizing this intuitive concern, which is currently fragmented in the literature. First, we emphasize isolating the two phases of next-token prediction that are often conflated: autoregression during inference vs. teacher-forcing during training. We argue that the previously-identified problem of "exponential error accumulation" is a symptom of autoregressive inference. But more concerningly, we identify that teacher-forcing can let the model fit the training data by cheating, causing total in-distribution failure. We design a minimal planning task where empirically both the Transformer and the Mamba architecture fail in this manner - remarkably, despite the task being easy to learn. Overall, our work consolidates these and other essential arguments surrounding next-token prediction. We hope this effort can ground future discussions and inspire explorations beyond the next-token prediction paradigm.
https://proceedings.mlr.press/v235/back-de-luca24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/back-de-luca24a/back-de-luca24a.pdf
https://openreview.net/forum?id=aA2326y3hf
Simulation of Graph Algorithms with Looped Transformers
https://proceedings.mlr.press/v235/back-de-luca24a.html
Artur Back De Luca, Kimon Fountoulakis
https://proceedings.mlr.press/v235/back-de-luca24a.html
ICML 2024
The execution of graph algorithms using neural networks has recently attracted significant interest due to promising empirical progress. This motivates further understanding of how neural networks can replicate reasoning steps with relational data. In this work, we study the ability of transformer networks to simulate algorithms on graphs from a theoretical perspective. The architecture we use is a looped transformer with extra attention heads that interact with the graph. We prove by construction that this architecture can simulate individual algorithms such as Dijkstra’s shortest path, Breadth- and Depth-First Search, and Kosaraju’s strongly connected components, as well as multiple algorithms simultaneously. The number of parameters in the networks does not increase with the input graph size, which implies that the networks can simulate the above algorithms for any graph. Despite this property, we show a limit to simulation in our solution due to finite precision. Finally, we show a Turing Completeness result with constant width when the extra attention heads are utilized.
https://proceedings.mlr.press/v235/bai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24a/bai24a.pdf
https://openreview.net/forum?id=PYDCwWvbG7
QBMK: Quantum-based Matching Kernels for Un-attributed Graphs
https://proceedings.mlr.press/v235/bai24a.html
Lu Bai, Lixin Cui, Ming Li, Yue Wang, Edwin Hancock
https://proceedings.mlr.press/v235/bai24a.html
ICML 2024
In this work, we develop a new Quantum-based Matching Kernel (QBMK) for un-attributed graphs, by computing the kernel-based similarity between the quantum Shannon entropies of aligned vertices through the Continuous-time Quantum Walk (CTQW). The theoretical analysis reveals that the proposed QBMK kernel not only addresses the shortcoming of neglecting the structural correspondence information between graphs arising in existing R-convolution graph kernels, but also overcomes the problem of neglecting the structural differences between pairs of aligned vertices arising in existing vertex-based matching kernels. Moreover, the proposed QBMK kernel can simultaneously capture both global and local structural characteristics through the quantum Shannon entropies. Experimental evaluations on standard graph datasets demonstrate that the proposed QBMK kernel is able to outperform state-of-the-art graph kernels and graph deep learning approaches.
https://proceedings.mlr.press/v235/bai24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24b/bai24b.pdf
https://openreview.net/forum?id=2NUGeV64y2
Diffusion Models Demand Contrastive Guidance for Adversarial Purification to Advance
https://proceedings.mlr.press/v235/bai24b.html
Mingyuan Bai, Wei Huang, Tenghui Li, Andong Wang, Junbin Gao, Cesar F Caiafa, Qibin Zhao
https://proceedings.mlr.press/v235/bai24b.html
ICML 2024
In adversarial defense, adversarial purification can be viewed as a special generation task with the purpose of removing adversarial attacks, and diffusion models excel in adversarial purification for their strong generative power. With different predetermined generation requirements, various types of guidance have been proposed, but few of them focus on adversarial purification. In this work, we propose to guide diffusion models for adversarial purification using contrastive guidance. We theoretically derive the proper noise level added in the forward process of diffusion models for adversarial purification from a feature learning perspective. For the reverse process, it is implied that the role of contrastive loss guidance is to facilitate the evolution towards the signal direction. From the theoretical findings and implications, we design the forward process with the proper amount of Gaussian noise added and the reverse process with the gradient of contrastive loss as the guidance of diffusion models for adversarial purification. Empirically, extensive experiments on CIFAR-10, CIFAR-100, the German Traffic Sign Recognition Benchmark and ImageNet datasets with ResNet and WideResNet classifiers show that our method outperforms most current adversarial training and adversarial purification methods by a large margin.
https://proceedings.mlr.press/v235/bai24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24c/bai24c.pdf
https://openreview.net/forum?id=leJGQCron2
On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition
https://proceedings.mlr.press/v235/bai24c.html
Yunyan Bai, Yuxing Liu, Luo Luo
https://proceedings.mlr.press/v235/bai24c.html
ICML 2024
This paper considers the optimization problem of the form $\min_{{\bf x}\in{\mathbb R}^d} f({\bf x})\triangleq \frac{1}{n}\sum_{i=1}^n f_i({\bf x})$, where $f(\cdot)$ satisfies the Polyak–Łojasiewicz (PL) condition with parameter $\mu$ and $\{f_i(\cdot)\}_{i=1}^n$ is $L$-mean-squared smooth. We show that any gradient method requires at least $\Omega(n+\kappa\sqrt{n}\log(1/\epsilon))$ incremental first-order oracle (IFO) calls to find an $\epsilon$-suboptimal solution, where $\kappa\triangleq L/\mu$ is the condition number of the problem. This result nearly matches upper bounds of IFO complexity for best-known first-order methods. We also study the problem of minimizing the PL function in the distributed setting such that the individuals $f_1(\cdot),…,f_n(\cdot)$ are located on a connected network of $n$ agents. We provide lower bounds of $\Omega(\kappa/\sqrt{\gamma}\log(1/\epsilon))$, $\Omega((\kappa+\tau\kappa/\sqrt{\gamma})\log(1/\epsilon))$ and $\Omega\big(n+\kappa\sqrt{n}\log(1/\epsilon)\big)$ for communication rounds, time cost and local first-order oracle calls respectively, where $\gamma\in(0,1]$ is the spectral gap of the mixing matrix associated with the network and $\tau>0$ is the time cost of per communication round. Furthermore, we propose a decentralized first-order method that nearly matches above lower bounds in expectation.
https://proceedings.mlr.press/v235/bai24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24d/bai24d.pdf
https://openreview.net/forum?id=AOJCCFTlfJ
Constrained Ensemble Exploration for Unsupervised Skill Discovery
https://proceedings.mlr.press/v235/bai24d.html
Chenjia Bai, Rushuai Yang, Qiaosheng Zhang, Kang Xu, Yi Chen, Ting Xiao, Xuelong Li
https://proceedings.mlr.press/v235/bai24d.html
ICML 2024
Unsupervised Reinforcement Learning (RL) provides a promising paradigm for learning useful behaviors via reward-free pre-training. Existing methods for unsupervised RL mainly conduct empowerment-driven skill discovery or entropy-based exploration. However, empowerment often leads to static skills, and pure exploration only maximizes the state coverage rather than learning useful behaviors. In this paper, we propose a novel unsupervised RL framework via an ensemble of skills, where each skill performs partition exploration based on the state prototypes. Thus, each skill can explore the clustered area locally, and the ensemble skills maximize the overall state coverage. We adopt state-distribution constraints for the skill occupancy and the desired cluster for learning distinguishable skills. Theoretical analysis is provided for the state entropy and the resulting skill distributions. Based on extensive experiments on several challenging tasks, we find our method learns well-explored ensemble skills and achieves superior performance in various downstream tasks compared to previous methods.
https://proceedings.mlr.press/v235/bailey24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bailey24a/bailey24a.pdf
https://openreview.net/forum?id=8ho1l6RZNB
Image Hijacks: Adversarial Images can Control Generative Models at Runtime
https://proceedings.mlr.press/v235/bailey24a.html
Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons
https://proceedings.mlr.press/v235/bailey24a.html
ICML 2024
Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. ’the Eiffel Tower is now located in Rome’) using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour matching to craft hijacks for four types of attack: forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
https://proceedings.mlr.press/v235/baker24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/baker24a/baker24a.pdf
https://openreview.net/forum?id=SZ0JnRxi0x
An Explicit Frame Construction for Normalizing 3D Point Clouds
https://proceedings.mlr.press/v235/baker24a.html
Justin Baker, Shih-Hsin Wang, Tommaso De Fernex, Bao Wang
https://proceedings.mlr.press/v235/baker24a.html
ICML 2024
Many real-world datasets are represented as 3D point clouds – yet they often lack a predefined reference frame, posing a challenge for machine learning or general data analysis. Traditional methods for determining reference frames and normalizing 3D point clouds often struggle with specific inputs, lack theoretical guarantees, or require massive data. We introduce a new algorithm that overcomes these limitations and guarantees both universality and compatibility with any learnable framework for 3D point cloud analysis. Our algorithm works with any input point cloud and performs consistently regardless of input complexities, unlike data-driven methods that are susceptible to biases or limited training data. Empirically, our algorithm outperforms existing methods in effectiveness and generalizability across diverse benchmark datasets. Code is available at https://github.com/Utah-Math-Data-Science/alignment.
https://proceedings.mlr.press/v235/balabin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balabin24a/balabin24a.pdf
https://openreview.net/forum?id=q0lxAs5GGO
Disentanglement Learning via Topology
https://proceedings.mlr.press/v235/balabin24a.html
Nikita Balabin, Daria Voronkova, Ilya Trofimov, Evgeny Burnaev, Serguei Barannikov
https://proceedings.mlr.press/v235/balabin24a.html
ICML 2024
We propose TopDis (Topological Disentanglement), a method for learning disentangled representations via adding a multi-scale topological loss term. Disentanglement is a crucial property of data representations substantial for the explainability and robustness of deep learning models and a step towards high-level cognition. The state-of-the-art methods are based on VAE and encourage the joint distribution of latent variables to be factorized. We take a different perspective on disentanglement by analyzing topological properties of data manifolds. In particular, we optimize the topological similarity for data manifolds traversals. To the best of our knowledge, our paper is the first one to propose a differentiable topological loss for disentanglement learning. Our experiments have shown that the proposed TopDis loss improves disentanglement scores such as MIG, FactorVAE score, SAP score, and DCI disentanglement score with respect to state-of-the-art results while preserving the reconstruction quality. Our method works in an unsupervised manner, permitting us to apply it to problems without labeled factors of variation. The TopDis loss works even when factors of variation are correlated. Additionally, we show how to use the proposed topological loss to find disentangled directions in a trained GAN.
https://proceedings.mlr.press/v235/balasubramanian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balasubramanian24a/balasubramanian24a.pdf
https://openreview.net/forum?id=0tPBk24xNj
Adversarial Attacks on Combinatorial Multi-Armed Bandits
https://proceedings.mlr.press/v235/balasubramanian24a.html
Rishab Balasubramanian, Jiawei Li, Prasad Tadepalli, Huazheng Wang, Qingyun Wu, Haoyu Zhao
https://proceedings.mlr.press/v235/balasubramanian24a.html
ICML 2024
We study reward poisoning attacks on Combinatorial Multi-armed Bandits (CMAB). We first provide a sufficient and necessary condition for the attackability of CMAB, a notion to capture the vulnerability and robustness of CMAB. The attackability condition depends on the intrinsic properties of the corresponding CMAB instance such as the reward distributions of super arms and outcome distributions of base arms. Additionally, we devise an attack algorithm for attackable CMAB instances. Contrary to prior understanding of multi-armed bandits, our work reveals a surprising fact that the attackability of a specific CMAB instance also depends on whether the bandit instance is known or unknown to the adversary. This finding indicates that adversarial attacks on CMAB are difficult in practice and a general attack strategy for any CMAB instance does not exist since the environment is mostly unknown to the adversary. We validate our theoretical findings via extensive experiments on real-world CMAB applications including probabilistic maximum covering problem, online minimum spanning tree, cascading bandits for online ranking, and online shortest path.

ICML 2024 (International Conference on Machine Learning) Accepted Paper Meta Info Dataset

This dataset is collected from the ICML 2024 OpenReview website (https://openreview.net/group?id=ICML.cc/2024/Conference#tab-accept-oral) as well as from the DeepNLP paper arxiv index (http://www.deepnlp.org/content/paper/icml2024). Researchers interested in analyzing ICML 2024 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted at ICML 2024. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
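
As a quick-start sketch for such an analysis (the file name icml2024_papers.json and the one-record-per-line JSON Lines layout are assumptions about how the cleaned-up files are shipped, not something this card specifies), the records can be loaded and summarized with standard Python:

import json
from collections import Counter

# Assumed file name; point this at the actual cleaned-up JSON file in the dataset.
PATH = "icml2024_papers.json"

# Assumes one JSON record per line (JSON Lines). If the file is instead a single
# JSON array, replace the list comprehension with json.load(f).
with open(PATH, encoding="utf-8") as f:
    papers = [json.loads(line) for line in f if line.strip()]

print(f"Loaded {len(papers)} ICML 2024 paper records")

# Example analysis: rough keyword counts over paper titles.
title_words = Counter(
    word.strip(",.:;()").lower()
    for paper in papers
    for word in paper["title"].split()
)
print(title_words.most_common(20))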

Meta Information of the JSON File

{
    "abs": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "Download PDF": "https://raw.githubusercontent.com/mlresearch/v235/main/assets/abad-rocamora24a/abad-rocamora24a.pdf",
    "OpenReview": "https://openreview.net/forum?id=AZWqXfM6z9",
    "title": "Revisiting Character-level Adversarial Attacks for Language Models",
    "url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "authors": "Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher",
    "detail_url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "tags": "ICML 2024",
    "abstract": "Adversarial attacks in Natural Language Processing apply perturbations in the character or token levels. Token-level attacks, gaining prominence for their use of gradient-based methods, are susceptible to altering sentence semantics, leading to invalid adversarial examples. While character-level attacks easily maintain semantics, they have received less attention as they cannot easily adopt popular gradient-based methods, and are thought to be easy to defend. Challenging these beliefs, we introduce Charmer, an efficient query-based adversarial attack capable of achieving high attack success rate (ASR) while generating highly similar adversarial examples. Our method successfully targets both small (BERT) and large (Llama 2) models. Specifically, on BERT with SST-2, Charmer improves the ASR in $4.84$% points and the USE similarity in $8$% points with respect to the previous art. Our implementation is available in https://github.com/LIONS-EPFL/Charmer."
}
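
For illustration, a single record can then be unpacked field by field (this continues the loading sketch above; splitting the authors string on commas follows the formatting visible in the preview rows and is an assumption, not a documented guarantee):

# Continues the sketch above: `papers` is the list of loaded records.
example = papers[0]

# The OpenReview forum id is the part of the "OpenReview" URL after "id=".
forum_id = example["OpenReview"].rsplit("id=", 1)[-1]

# The "authors" field is a single comma-separated string in the preview rows.
authors = [name.strip() for name in example["authors"].split(",")]

print(example["title"])
print("OpenReview forum id:", forum_id)
print("Number of authors:", len(authors))
print("PDF:", example["Download PDF"])
print("Abstract length (chars):", len(example["abstract"]))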

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
