Dataset columns: title, url, detail_url, authors, tags, abstract, pdf
Encoding Recurrence into Transformers
https://openreview.net/forum?id=7YfHla7IxBJ
https://openreview.net/forum?id=7YfHla7IxBJ
Feiqing Huang,Kexin Lu,Yuxi CAI,Zhen Qin,Yanwen Fang,Guangjian Tian,Guodong Li
ICLR 2023,Top 5%
This paper shows how to break down an RNN layer, with negligible loss, into a sequence of simple RNNs, each of which can be further rewritten as a lightweight positional encoding matrix of a self-attention, named the Recurrence Encoding Matrix (REM). Thus, the recurrent dynamics introduced by the RNN layer can be encapsulated into the positional encodings of a multihead self-attention, making it possible to seamlessly incorporate these recurrent dynamics into a Transformer; this leads to a new module, Self-Attention with Recurrence (RSA). The proposed module can leverage the recurrent inductive bias of REMs to achieve better sample efficiency than its corresponding baseline Transformer, while the self-attention models the remaining non-recurrent signals. The relative proportions of these two components are controlled by a data-driven gating mechanism, and the effectiveness of RSA modules is demonstrated on four sequential learning tasks.
https://openreview.net/pdf/70636775789b51f219cb29634cc7c794cc86577b.pdf
Modeling content creator incentives on algorithm-curated platforms
https://openreview.net/forum?id=l6CpxixmUg
https://openreview.net/forum?id=l6CpxixmUg
Jiri Hron,Karl Krauth,Michael Jordan,Niki Kilbertus,Sarah Dean
ICLR 2023,Top 5%
Content creators compete for user attention. Their reach crucially depends on algorithmic choices made by developers on online platforms. To maximize exposure, many creators adapt strategically, as evidenced by examples like the sprawling search engine optimization industry. This begets competition for the finite user attention pool. We formalize these dynamics in what we call an exposure game, a model of incentives induced by modern algorithms including factorization and (deep) two-tower architectures. We prove that seemingly innocuous algorithmic choices—e.g., non-negative vs. unconstrained factorization—significantly affect the existence and character of (Nash) equilibria in exposure games. We proffer the use of creator behavior models like ours for an (ex-ante) pre-deployment audit. Such an audit can identify misalignment between desirable and incentivized content, and thus complement post-hoc measures like content filtering and moderation. To this end, we propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets. Among other things, we find that the strategically produced content exhibits strong dependence between algorithmic exploration and content diversity, and between model expressivity and bias towards gender-based user and creator groups.
https://openreview.net/pdf/12c4dfbbd1516c36a132fe1e8e1205b88da0540b.pdf
Transfer NAS with Meta-learned Bayesian Surrogates
https://openreview.net/forum?id=paGvsrl4Ntr
https://openreview.net/forum?id=paGvsrl4Ntr
Gresa Shala,Thomas Elsken,Frank Hutter,Josif Grabocka
ICLR 2023,Top 5%
While neural architecture search (NAS) is an intensely researched area, approaches typically still suffer from either (i) high computational costs or (ii) a lack of robustness across datasets and experiments. Furthermore, most methods start searching for an optimal architecture from scratch, ignoring prior knowledge. This is in contrast to the manual design process of researchers and engineers, who leverage previous deep learning experience by, e.g., transferring architectures from previously solved, related problems. We propose to adopt this human design strategy and introduce a novel surrogate for NAS that is meta-learned across prior architecture evaluations on different datasets. We utilize Bayesian Optimization (BO) with deep-kernel Gaussian Processes, graph neural networks for the architecture embeddings, and a transformer-based set encoder of datasets. As a result, our method consistently achieves state-of-the-art results on six computer vision datasets, while being as fast as one-shot NAS methods.
https://openreview.net/pdf/1d6bd2efad6066b8250a1ed96932db04f31c080f.pdf
Scaling Up Probabilistic Circuits by Latent Variable Distillation
https://openreview.net/forum?id=067CGykiZTS
https://openreview.net/forum?id=067CGykiZTS
Anji Liu,Honghua Zhang,Guy Van den Broeck
ICLR 2023,Top 5%
Probabilistic Circuits (PCs) are a unified framework for tractable probabilistic models that support efficient computation of various probabilistic queries (e.g., marginal probabilities). One key challenge is to scale PCs to model large and high-dimensional real-world datasets: we observe that as the number of parameters in PCs increases, their performance immediately plateaus. This phenomenon suggests that the existing optimizers fail to exploit the full expressive power of large PCs. We propose to overcome such a bottleneck by latent variable distillation: we leverage less tractable but more expressive deep generative models to provide extra supervision over the latent variables of PCs. Specifically, we extract information from Transformer-based generative models to assign values to latent variables of PCs, providing guidance to PC optimizers. Experiments on both image and language modeling benchmarks (e.g., ImageNet and WikiText-2) show that latent variable distillation substantially boosts the performance of large PCs compared to their counterparts without latent variable distillation. In particular, on the image modeling benchmarks, PCs achieve competitive performance against some of the widely-used deep generative models, including variational autoencoders and flow-based models, opening up new avenues for tractable generative modeling. Our code can be found at https://github.com/UCLA-StarAI/LVD.
https://openreview.net/pdf/03a72f57ccbfd43e91ba786ca0f782f4065669e5.pdf
A Kernel Perspective of Skip Connections in Convolutional Networks
https://openreview.net/forum?id=6H_uOfcwiVh
https://openreview.net/forum?id=6H_uOfcwiVh
Daniel Barzilai,Amnon Geifman,Meirav Galun,Ronen Basri
ICLR 2023,Top 5%
Over-parameterized residual networks (ResNets) are amongst the most successful convolutional neural architectures for image processing. Here we study their properties through their Gaussian Process and Neural Tangent kernels. We derive explicit formulas for these kernels, analyze their spectra, and provide bounds on their implied condition numbers. Our results indicate that (1) with ReLU activation, the eigenvalues of these residual kernels decay polynomially at a similar rate compared to the same kernels when skip connections are not used, thus maintaining a similar frequency bias; (2) however, residual kernels are more locally biased. Our analysis further shows that the matrices obtained by these residual kernels yield more favorable condition numbers at finite depths than those obtained without skip connections, therefore enabling faster convergence of training with gradient descent.
https://openreview.net/pdf/d02ce0a1fbf33b0f5c0f942e925ba67c6bcfaab5.pdf
WikiWhy: Answering and Explaining Cause-and-Effect Questions
https://openreview.net/forum?id=vaxnu-Utr4l
https://openreview.net/forum?id=vaxnu-Utr4l
Matthew Ho,Aditya Sharma,Justin Chang,Michael Saxon,Sharon Levy,Yujie Lu,William Yang Wang
ICLR 2023,Top 5%
As large language models (LLMs) grow larger and more sophisticated, assessing their "reasoning" capabilities in natural language grows more challenging. Recent question answering (QA) benchmarks that attempt to assess reasoning are often limited by a narrow scope of covered situations and subject matters. We introduce WikiWhy, a QA dataset built around a novel auxiliary task: explaining why an answer is true in natural language. WikiWhy contains over 9,000 "why" question-answer-rationale triples, grounded on Wikipedia facts across a diverse set of topics. Each rationale is a set of supporting statements connecting the question to the answer. WikiWhy serves as a benchmark for the reasoning capabilities of LLMs because it demands rigorous explicit rationales for each answer to demonstrate the acquisition of implicit commonsense knowledge, which is unlikely to be easily memorized. GPT-3 baselines achieve only 38.7% human-evaluated correctness in the end-to-end answer & explain condition, leaving significant room for future improvements.
https://openreview.net/pdf/dd230e9938db73b0fff7ee629cb682af034688fc.pdf
Git Re-Basin: Merging Models modulo Permutation Symmetries
https://openreview.net/forum?id=CQsmMYmlP5T
https://openreview.net/forum?id=CQsmMYmlP5T
Samuel Ainsworth,Jonathan Hayase,Siddhartha Srinivasa
ICLR 2023,Top 5%
The success of deep learning is due in large part to our ability to solve certain massive non-convex optimization problems with relative ease. Though non-convex optimization is NP-hard, simple algorithms -- often variants of stochastic gradient descent -- exhibit surprising effectiveness in fitting large neural networks in practice. We argue that neural network loss landscapes often contain (nearly) a single basin after accounting for all possible permutation symmetries of hidden units a la Entezari et al. 2021. We introduce three algorithms to permute the units of one model to bring them into alignment with a reference model in order to merge the two models in weight space. This transformation produces a functionally equivalent set of weights that lie in an approximately convex basin near the reference model. Experimentally, we demonstrate the single basin phenomenon across a variety of model architectures and datasets, including the first (to our knowledge) demonstration of zero-barrier linear mode connectivity between independently trained ResNet models on CIFAR-10. Additionally, we identify intriguing phenomena relating model width and training time to mode connectivity. Finally, we discuss shortcomings of the linear mode connectivity hypothesis, including a counterexample to the single basin theory.
https://openreview.net/pdf/b212b96bd3f13e202965581f6173495898534b76.pdf
The Role of Coverage in Online Reinforcement Learning
https://openreview.net/forum?id=LQIjzPdDt3q
https://openreview.net/forum?id=LQIjzPdDt3q
Tengyang Xie,Dylan J Foster,Yu Bai,Nan Jiang,Sham M. Kakade
ICLR 2023,Top 5%
Coverage conditions---which assert that the data logging distribution adequately covers the state space---play a fundamental role in determining the sample complexity of offline reinforcement learning. While such conditions might seem irrelevant to online reinforcement learning at first glance, we establish a new connection by showing---somewhat surprisingly---that the mere existence of a data distribution with good coverage can enable sample-efficient online RL. Concretely, we show that coverability---that is, existence of a data distribution that satisfies a ubiquitous coverage condition called concentrability---can be viewed as a structural property of the underlying MDP, and can be exploited by standard algorithms for sample-efficient exploration, even when the agent does not know said distribution. We complement this result by proving that several weaker notions of coverage, despite being sufficient for offline RL, are insufficient for online RL. We also show that existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability, and propose a new complexity measure, the self-normalized coefficient, to provide a unification.
https://openreview.net/pdf/a2c365918c8b9f3e5b7cd871606f05d90118525a.pdf
Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification
https://openreview.net/forum?id=FZdJQgy05rz
https://openreview.net/forum?id=FZdJQgy05rz
Takashi Ishida,Ikko Yamane,Nontawat Charoenphakdee,Gang Niu,Masashi Sugiyama
ICLR 2023,Top 5%
There is a fundamental limitation in the prediction performance that a machine learning model can achieve due to the inevitable uncertainty of the prediction target. In classification problems, this can be characterized by the Bayes error, which is the best achievable error with any classifier. The Bayes error can be used as a criterion to evaluate classifiers with state-of-the-art performance and can be used to detect test set overfitting. We propose a simple and direct Bayes error estimator, where we just take the mean of the labels that show \emph{uncertainty} of the class assignments. Our flexible approach enables us to perform Bayes error estimation even for weakly supervised data. In contrast to other approaches, our method is model-free and even instance-free. Moreover, it has no hyperparameters and empirically gives a more accurate estimate of the Bayes error than several baselines. Experiments using our method suggest that recently proposed deep networks such as the Vision Transformer may have reached, or be about to reach, the Bayes error for benchmark datasets. Finally, we discuss how we can study the inherent difficulty of the acceptance/rejection decision for scientific articles, by estimating the Bayes error of the ICLR papers from 2017 to 2023.
https://openreview.net/pdf/adf5cd1db7eb1218ea6e605d13c786cdf71eab45.pdf
Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes
https://openreview.net/forum?id=4-k7kUavAj
https://openreview.net/forum?id=4-k7kUavAj
Aviral Kumar,Rishabh Agarwal,Xinyang Geng,George Tucker,Sergey Levine
ICLR 2023,Top 5%
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on lessons from these works, we re-examine previous design choices and find that with appropriate choices (ResNets, cross-entropy-based distributional backups, and feature normalization), offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using up to 80-million-parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
https://openreview.net/pdf/c4fe1442235b5f185dc41908f09f0b65f8faa938.pdf
What learning algorithm is in-context learning? Investigations with linear models
https://openreview.net/forum?id=0g0X4H8yN4I
https://openreview.net/forum?id=0g0X4H8yN4I
Ekin Akyürek,Dale Schuurmans,Jacob Andreas,Tengyu Ma,Denny Zhou
ICLR 2023,Top 5%
Neural sequence models, especially transformers, exhibit a remarkable capacity for in-context learning. They can construct new predictors from sequences of labeled examples $(x, f(x))$ presented in the input without further parameter updates. We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding context-specific parametric models in their hidden representations, and updating these implicit models as new examples appear in the context. Using linear regression as a model problem, we offer three sources of evidence for this hypothesis. First, we prove by construction that transformers can implement learning algorithms for linear models based on gradient descent and closed-form computation of regression parameters. Second, we show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression, transitioning between different predictors as transformer depth and dataset noise vary. Third, we present preliminary evidence that in-context learners share algorithmic features with these predictors: learners' late layers encode weight vectors and moment matrices. These results suggest that in-context learning is understandable in algorithmic terms, and that (at least in the linear case) learners may work by rediscovering standard estimation algorithms.
https://openreview.net/pdf/7295479b5085774245ad66c73c5176e41b868b67.pdf
Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
https://openreview.net/forum?id=Uuf2q9TfXGA
https://openreview.net/forum?id=Uuf2q9TfXGA
Zeyuan Allen-Zhu,Yuanzhi Li
ICLR 2023,Top 5%
We formally study how \emph{ensemble} of deep learning models can improve test accuracy, and how the superior performance of ensemble can be distilled into a single model using \emph{knowledge distillation}. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the \emph{same} architecture, trained using the \emph{same} algorithm on the \emph{same} data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in \emph{deep learning} works very differently from traditional learning theory (such as boosting or NTKs). We develop a theory showing that when data has a structure we refer to as ``multi-view'', then ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the ``dark knowledge'' is hidden in the outputs of the ensemble and can be used in distillation.
https://openreview.net/pdf/fbebb24f15ad18f41fae9b87ca59c93d0a7de7f2.pdf
When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It?
https://openreview.net/forum?id=KRLUvxh8uaX
https://openreview.net/forum?id=KRLUvxh8uaX
Mert Yuksekgonul,Federico Bianchi,Pratyusha Kalluri,Dan Jurafsky,James Zou
ICLR 2023,Top 5%
Despite the success of large vision and language models (VLMs) in many downstream applications, it is unclear how well they encode the compositional relationships between objects and attributes. Here, we create the Attribution, Relation, and Order (ARO) benchmark to systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order information. ARO consists of \emph{Visual Genome Attribution}, to test the understanding of objects' properties; \emph{Visual Genome Relation}, to test for relational understanding; and \emph{COCO-Order \& Flickr30k-Order}, to test for order sensitivity in VLMs. ARO is orders of magnitude larger than previous benchmarks of compositionality, with more than 50,000 test cases. We present the settings where state-of-the-art VLMs behave like bags-of-words---i.e. when they have poor relational understanding, can blunder when linking objects to their attributes, and demonstrate a severe lack of order sensitivity. VLMs are predominantly trained and evaluated on large scale datasets with rich compositional structure in the images and captions. Yet, training on these datasets has not been enough to address the lack of compositional understanding, and evaluating on these datasets has failed to surface this deficiency. To understand why these limitations emerge and are not represented in the standard tests, we zoom into the evaluation and training procedures. We demonstrate that it is possible to perform well on image-text retrieval over existing datasets without using the composition and order information. This further motivates the value of using ARO to benchmark VLMs. Given that contrastive pretraining optimizes for retrieval on large datasets with similar shortcuts, we hypothesize that this can explain why the models do not need to learn to represent compositional information. This finding suggests a natural solution: composition-aware hard negative mining. We show that a simple-to-implement modification of contrastive learning significantly improves the performance on tasks requiring understanding of order and compositionality.
https://openreview.net/pdf/ced77554985af011f5544a8798a3035d4b6ab52b.pdf
Confidence-Conditioned Value Functions for Offline Reinforcement Learning
https://openreview.net/forum?id=Zeb5mTuqT5
https://openreview.net/forum?id=Zeb5mTuqT5
Joey Hong,Aviral Kumar,Sergey Levine
ICLR 2023,Top 5%
Offline reinforcement learning (RL) promises the ability to learn effective policies solely using existing, static datasets, without any costly online interaction. To do so, offline RL methods must handle distributional shift between the dataset and the learned policy. The most common approach is to learn conservative, or lower-bound, value functions, which underestimate the return of OOD actions. However, such methods exhibit one notable drawback: policies optimized on such value functions can only behave according to a fixed, possibly suboptimal, degree of conservatism. This can be alleviated if we are instead able to learn policies for varying degrees of conservatism at training time and devise a method to dynamically choose one of them during evaluation. To do so, in this work, we propose learning value functions that additionally condition on the degree of conservatism, which we dub confidence-conditioned value functions. We derive a new form of a Bellman backup that simultaneously learns Q-values for any degree of confidence with high probability. By conditioning on confidence, our value functions enable adaptive strategies during online evaluation by controlling for confidence level using the history of observations thus far. This approach can be implemented in practice by conditioning the Q-function from existing conservative algorithms on the confidence. We theoretically show that our learned value functions produce conservative estimates of the true value at any desired confidence. Finally, we empirically show that our algorithm outperforms existing conservative offline RL algorithms on multiple discrete control domains.
https://openreview.net/pdf/83d1be96a20a4accfffcc8dd593c0f0a3c5b5776.pdf
On the Sensitivity of Reward Inference to Misspecified Human Models
https://openreview.net/forum?id=hJqGbUpDGV
https://openreview.net/forum?id=hJqGbUpDGV
Joey Hong,Kush Bhatia,Anca Dragan
ICLR 2023,Top 5%
Inferring reward functions from human behavior is at the center of value alignment – aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This begs the question: how accurate do these models need to be in order for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic error in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if as our models improve, we can have a guarantee that reward accuracy also improves, this would show the benefit of more work on the modeling side. We study this question both theoretically and empirically. We do show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we are also able to identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.
https://openreview.net/pdf/787489763506d1437ac7b05b15f89ea0beb8c3b1.pdf
Time Will Tell: New Outlooks and A Baseline for Temporal Multi-View 3D Object Detection
https://openreview.net/forum?id=H3HcEJA2Um
https://openreview.net/forum?id=H3HcEJA2Um
Jinhyung Park,Chenfeng Xu,Shijia Yang,Kurt Keutzer,Kris M. Kitani,Masayoshi Tomizuka,Wei Zhan
ICLR 2023,Top 5%
While recent camera-only 3D detection methods leverage multiple timesteps, the limited history they use significantly hampers the extent to which temporal fusion can improve object perception. Observing that existing works' fusion of multi-frame images is an instance of temporal stereo matching, we find that performance is hindered by the interplay between 1) the low granularity of matching resolution and 2) the sub-optimal multi-view setup produced by limited history usage. Our theoretical and empirical analysis demonstrates that the optimal temporal difference between views varies significantly for different pixels and depths, making it necessary to fuse many timesteps over long-term history. Building on our investigation, we propose to generate a cost volume from a long history of image observations, compensating for the coarse but efficient matching resolution with a more optimal multi-view matching setup. Further, we augment the per-frame monocular depth predictions used for long-term, coarse matching with short-term, fine-grained matching and find that long- and short-term temporal fusion are highly complementary. While maintaining high efficiency, our framework sets a new state-of-the-art on nuScenes, achieving first place on the test set and outperforming the previous best method by 5.2% mAP and 3.7% NDS on the validation set. Code will be released here: https://github.com/Divadi/SOLOFusion.
https://openreview.net/pdf/1653c1b285d859cb8e3ba8eb36976b1006f2bf1c.pdf
Dichotomy of Control: Separating What You Can Control from What You Cannot
https://openreview.net/forum?id=DEGjDDV22pI
https://openreview.net/forum?id=DEGjDDV22pI
Sherry Yang,Dale Schuurmans,Pieter Abbeel,Ofir Nachum
ICLR 2023,Top 5%
Future- or return-conditioned supervised learning is an emerging paradigm for offline reinforcement learning (RL), in which the future outcome (i.e., return) associated with a sequence of actions in an offline dataset is used as input to a policy trained to imitate those same actions. While return-conditioning is at the heart of popular algorithms such as decision transformer (DT), these methods tend to perform poorly in highly stochastic environments, where an occasional high return associated with a sequence of actions may be due more to the randomness of the environment than to the actions themselves. Such situations can lead to a learned policy that is inconsistent with its conditioning inputs; i.e., using the policy – while conditioned on a specific desired return – to act in the environment can lead to a distribution of real returns that is wildly different than desired. In this work, we propose the dichotomy of control (DoC), a future-conditioned supervised learning framework that separates mechanisms within a policy’s control (actions) from those outside of a policy’s control (environment stochasticity). We achieve this by conditioning the policy on a latent variable representation of the future and designing a mutual information constraint that removes any future information from the latent variable that is only due to randomness of the environment. Theoretically, we show that DoC yields policies that are consistent with their conditioning inputs, ensuring that conditioning a learned policy on a desired high-return future outcome will correctly induce high-return behavior. Empirically, we show that DoC is able to achieve significantly better performance than DT on environments with highly stochastic rewards (e.g., Bandit) and transitions (e.g., FrozenLake).
https://openreview.net/pdf/6570cf14640b106571e1d2ce08ee384f1f17eeaf.pdf
Learning where and when to reason in neuro-symbolic inference
https://openreview.net/forum?id=en9V5F8PR-
https://openreview.net/forum?id=en9V5F8PR-
Cristina Cornelio,Jan Stuehmer,Shell Xu Hu,Timothy Hospedales
ICLR 2023,Top 5%
The integration of hard constraints on neural network outputs is a very desirable capability. It allows one to instill trust in AI by guaranteeing the sanity of a neural network's predictions with respect to domain knowledge. Recently, this topic has received a lot of attention. However, existing methods usually either impose the constraints in a "weak" form at training time, with no guarantees at inference, or fail to provide a general framework that supports different tasks and constraint types. We tackle this open problem from a neuro-symbolic perspective. Our pipeline enhances a conventional neural predictor with (1) a symbolic reasoning module capable of correcting structured prediction errors and (2) a neural attention module that learns to direct the reasoning effort to focus on potential prediction errors, while keeping other outputs unchanged. This framework provides an appealing trade-off between the efficiency of constraint-free neural inference and the prohibitive cost of exhaustive reasoning at inference time. We show that our method outperforms the state of the art on visual Sudoku, and can also benefit visual scene graph prediction. Furthermore, it can improve the performance of existing neuro-symbolic systems that lack our explicit reasoning during inference.
https://openreview.net/pdf/31f018dbf1b4f56acf88d2715ebd70a6d3908c99.pdf
On the duality between contrastive and non-contrastive self-supervised learning
https://openreview.net/forum?id=kDEL91Dufpa
https://openreview.net/forum?id=kDEL91Dufpa
Quentin Garrido,Yubei Chen,Adrien Bardes,Laurent Najman,Yann LeCun
ICLR 2023,Top 5%
Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches. While differences between the two families have been thoroughly discussed to motivate new approaches, we focus more on the theoretical similarities between them. By designing contrastive and covariance-based non-contrastive criteria that can be related algebraically and shown to be equivalent under limited assumptions, we show how close those families can be. We further study popular methods and introduce variations of them, allowing us to relate this theoretical result to current practices and show the influence (or lack thereof) of design choices on downstream performance. Motivated by our equivalence result, we investigate the low performance of SimCLR and show how it can match VICReg's performance with careful hyperparameter tuning, improving significantly over known baselines. We also challenge the popular assumption that non-contrastive methods need large output dimensions. Our theoretical and quantitative results suggest that the numerical gaps between contrastive and non-contrastive methods in certain regimes can be closed given better network design choices and hyperparameter tuning. The evidence shows that unifying different SOTA methods is an important direction to build a better understanding of self-supervised learning.
https://openreview.net/pdf/b65a5392645765469baab2e39bb691bf22a9e6fd.pdf
DreamFusion: Text-to-3D using 2D Diffusion
https://openreview.net/forum?id=FjNys5c7VyY
https://openreview.net/forum?id=FjNys5c7VyY
Ben Poole,Ajay Jain,Jonathan T. Barron,Ben Mildenhall
ICLR 2023,Top 5%
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D or multiview data and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
https://openreview.net/pdf/fc5d88df1a06d30ae79fb23e87030f0fb2c8bd76.pdf
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
https://openreview.net/forum?id=zyLVMgsZ0U_
https://openreview.net/forum?id=zyLVMgsZ0U_
Sitan Chen,Sinho Chewi,Jerry Li,Yuanzhi Li,Adil Salim,Anru Zhang
ICLR 2023,Top 5%
We provide theoretical convergence guarantees for score-based generative models (SGMs) such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of large-scale real-world generative models such as DALL$\cdot$E 2. Our main result is that, assuming accurate score estimates, such SGMs can efficiently sample from essentially any realistic data distribution. In contrast to prior works, our results (1) hold for an $L^2$-accurate score estimate (rather than $L^\infty$-accurate); (2) do not require restrictive functional inequality conditions that preclude substantial non-log-concavity; (3) scale polynomially in all relevant problem parameters; and (4) match state-of-the-art complexity guarantees for discretization of the Langevin diffusion, provided that the score error is sufficiently small. We view this as strong theoretical justification for the empirical success of SGMs. We also examine SGMs based on the critically damped Langevin diffusion (CLD). Contrary to conventional wisdom, we provide evidence that the use of the CLD does *not* reduce the complexity of SGMs.
https://openreview.net/pdf/f0dc173be132440952bd7d8221b096d0a0ecf2c7.pdf
Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching
https://openreview.net/forum?id=88nT0j5jAn
https://openreview.net/forum?id=88nT0j5jAn
Donggyun Kim,Jinwoo Kim,Seongwoong Cho,Chong Luo,Seunghoon Hong
ICLR 2023,Top 5%
Dense prediction tasks are a fundamental class of problems in computer vision. As supervised methods suffer from high pixel-wise labeling cost, a few-shot learning solution that can learn any dense task from a few labeled images is desired. Yet, current few-shot learning methods target a restricted set of tasks such as semantic segmentation, presumably due to challenges in designing a general and unified model that is able to flexibly and efficiently adapt to arbitrary tasks of unseen semantics. We propose Visual Token Matching (VTM), a universal few-shot learner for arbitrary dense prediction tasks. It employs non-parametric matching on patch-level embedded tokens of images and labels that encapsulates all tasks. Also, VTM flexibly adapts to any task with a tiny amount of task-specific parameters that modulate the matching algorithm. We implement VTM as a powerful hierarchical encoder-decoder architecture involving ViT backbones where token matching is performed at multiple feature hierarchies. We evaluate VTM on a challenging variant of the Taskonomy dataset and observe that it robustly few-shot learns various unseen dense prediction tasks. Surprisingly, it is competitive with fully supervised baselines using only 10 labeled examples of novel tasks ($0.004\%$ of full supervision) and sometimes outperforms them using $0.1\%$ of full supervision. Codes are available at https://github.com/GitGyun/visual_token_matching.
https://openreview.net/pdf/45149e96f3e88087d3e81a1ff08f0d2b5e719921.pdf
Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Approach
https://openreview.net/forum?id=dLAYGdKTi2
https://openreview.net/forum?id=dLAYGdKTi2
Heshan Devaka Fernando,Han Shen,Miao Liu,Subhajit Chaudhury,Keerthiram Murugesan,Tianyi Chen
ICLR 2023,Top 5%
Many machine learning problems today have multiple objective functions. They appear either in learning with multiple criteria, where learning has to make a trade-off between multiple performance metrics such as fairness, safety and accuracy; or in multi-task learning, where multiple tasks are optimized jointly, sharing inductive bias between them. These problems are often tackled by the multi-objective optimization framework. However, existing stochastic multi-objective gradient methods and their variants (e.g., MGDA, PCGrad, CAGrad, etc.) all adopt a biased noisy gradient direction, which leads to degraded empirical performance. To this end, we develop a stochastic multi-objective gradient correction (MoCo) method for multi-objective optimization. The unique feature of our method is that it can guarantee convergence without increasing the batch size even in the nonconvex setting. Simulations on multi-task supervised and reinforcement learning demonstrate the effectiveness of our method relative to the state-of-the-art methods.
https://openreview.net/pdf/9e46581e9b775d4b10ffcc00c43e0bdb8d21e1b4.pdf
ReAct: Synergizing Reasoning and Acting in Language Models
https://openreview.net/forum?id=WE_vluYUL-X
https://openreview.net/forum?id=WE_vluYUL-X
Shunyu Yao,Jeffrey Zhao,Dian Yu,Nan Du,Izhak Shafran,Karthik R Narasimhan,Yuan Cao
ICLR 2023,Top 5%
While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.
https://openreview.net/pdf/bc117919562a4ccddbe5c5b24ee364d14289cdee.pdf
Do We Really Need Complicated Model Architectures For Temporal Networks?
https://openreview.net/forum?id=ayPPc0SyLv1
https://openreview.net/forum?id=ayPPc0SyLv1
Weilin Cong,Si Zhang,Jian Kang,Baichuan Yuan,Hao Wu,Xin Zhou,Hanghang Tong,Mehrdad Mahdavi
ICLR 2023,Top 5%
Recurrent neural network (RNN) and self-attention mechanism (SAM) are the de facto methods to extract spatial-temporal information for temporal graph learning. Interestingly, we found that although both RNN and SAM could lead to a good performance, in practice neither of them is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder that is only based on multi-layer perceptrons (MLP) to summarize the information from temporal links, (2) a node-encoder that is only based on neighbor mean-pooling to summarize node information, and (3) an MLP-based link classifier that performs link prediction based on the outputs of the encoders. Despite its simplicity, GraphMixer attains an outstanding performance on temporal link prediction benchmarks with faster convergence and better generalization performance. These results motivate us to rethink the importance of simpler model architecture.
https://openreview.net/pdf/4b4fffb0d6f563cba29cdcf32f829b333eb53899.pdf
Is Conditional Generative Modeling all you need for Decision Making?
https://openreview.net/forum?id=sP1fo2K9DFG
https://openreview.net/forum?id=sP1fo2K9DFG
Anurag Ajay,Yilun Du,Abhi Gupta,Joshua B. Tenenbaum,Tommi S. Jaakkola,Pulkit Agrawal
ICLR 2023,Top 5%
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional generative model, we avoid the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional generative models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
https://openreview.net/pdf/e4e0b6540b8164996a357a85347d96a324cf5647.pdf
The Lie Derivative for Measuring Learned Equivariance
https://openreview.net/forum?id=JL7Va5Vy15J
https://openreview.net/forum?id=JL7Va5Vy15J
Nate Gruver,Marc Anton Finzi,Micah Goldblum,Andrew Gordon Wilson
ICLR 2023,Top 5%
Equivariance guarantees that a model's predictions capture key symmetries in data. When an image is translated or rotated, an equivariant model's representation of that image will translate or rotate accordingly. The success of convolutional neural networks has historically been tied to translation equivariance directly encoded in their architecture. The rising success of vision transformers, which have no explicit architectural bias towards equivariance, challenges this narrative and suggests that augmentations and training data might also play a significant role in their performance. In order to better understand the role of equivariance in recent vision models, we apply the Lie derivative, a method for measuring equivariance with strong mathematical foundations and minimal hyperparameters. Using the Lie derivative, we study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures. The scale of our analysis allows us to separate the impact of architecture from other factors like model size or training method. Surprisingly, we find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities, and that as models get larger and more accurate they tend to display more equivariance, regardless of architecture. For example, transformers can be more equivariant than convolutional neural networks after training.
https://openreview.net/pdf/6d3e8e96475697f1cf6193df36e370ffd12302e8.pdf
Agree to Disagree: Diversity through Disagreement for Better Transferability
https://openreview.net/forum?id=K7CbYQbyYhY
https://openreview.net/forum?id=K7CbYQbyYhY
Matteo Pagliardini,Martin Jaggi,François Fleuret,Sai Praneeth Karimireddy
ICLR 2023,Top 5%
Gradient-based learning algorithms have an implicit \emph{simplicity bias} which in effect can limit the diversity of predictors being sampled by the learning procedure. This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features --- present in the training data but absent from the test data --- and (ii) by only leveraging a small subset of predictive features. Such an effect is especially magnified when the test distribution does not exactly match the train distribution---referred to as the Out of Distribution (OOD) generalization problem. However, given only the training data, it is not always possible to a priori assess if a given feature is spurious or transferable. Instead, we advocate for learning an ensemble of models which capture a diverse set of predictive features. Towards this, we propose a new algorithm D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data, but disagreement on the OOD data. We show how D-BAT naturally emerges from the notion of generalized discrepancy, as well as demonstrate in multiple experiments how the proposed method can mitigate shortcut-learning, enhance uncertainty and OOD detection, as well as improve transferability.
https://openreview.net/pdf/bffcee09d1939996b54123724697afa1a9d4df37.pdf
Efficient Conditionally Invariant Representation Learning
https://openreview.net/forum?id=dJruFeSRym1
https://openreview.net/forum?id=dJruFeSRym1
Roman Pogodin,Namrata Deka,Yazhe Li,Danica J. Sutherland,Victor Veitch,Arthur Gretton
ICLR 2023,Top 5%
We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features $\varphi(X)$ of data $X$ to estimate a target $Y$, while being conditionally independent of a distractor $Z$ given $Y$. Both $Z$ and $Y$ are assumed to be continuous-valued but relatively low dimensional, whereas $X$ and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance. It is then only necessary to enforce independence of $\varphi(X)$ from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if $\varphi(X) \perp \!\!\! \perp Z \mid Y$. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features. Code for image data experiments is available at github.com/namratadeka/circe.
https://openreview.net/pdf/59fb48f35c3ae783e6d4bb6e29843529e56a0305.pdf
Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness
https://openreview.net/forum?id=SMYdcXjJh1q
https://openreview.net/forum?id=SMYdcXjJh1q
Joel Dapello,Kohitij Kar,Martin Schrimpf,Robert Baldwin Geary,Michael Ferguson,David Daniel Cox,James J. DiCarlo
ICLR 2023,Top 5%
While some state-of-the-art artificial neural network systems in computer vision are strikingly accurate models of the corresponding primate visual processing, there are still many discrepancies between these models and the behavior of primates on object recognition tasks. Many current models suffer from extreme sensitivity to adversarial attacks and often do not align well with the image-by-image behavioral error patterns observed in humans. Previous research has provided strong evidence that primate object recognition behavior can be very accurately predicted by neural population activity in the inferior temporal (IT) cortex, a brain area in the late stages of the visual processing hierarchy. Therefore, here we directly test whether making the late stage representations of models more similar to that of macaque IT produces new models that exhibit more robust, primate-like behavior. We conducted chronic, large-scale multi-electrode recordings across the IT cortex in six non-human primates (rhesus macaques). We then use these data to fine-tune (end-to-end) the model "IT" representations such that they are more aligned with the biological IT representations, while preserving accuracy on object recognition tasks. We generate a cohort of models with a range of IT similarity scores validated on held-out animals across two image sets with distinct statistics. Across a battery of optimization conditions, we observed a strong correlation between the models' IT-likeness and alignment with human behavior, as well as an increase in their adversarial robustness. We further assessed the limitations of this approach and find that the improvements in behavioral alignment and adversarial robustness generalize across different image statistics, but not to object categories outside of those covered in our IT training set. Taken together, our results demonstrate that building models that are more aligned with the primate brain leads to more robust and human-like behavior, and call for larger neural data-sets to further augment these gains.
https://openreview.net/pdf/9c4c1940dba43cb5ad6502b7a23339d19d3a9a49.pdf
Transformers Learn Shortcuts to Automata
https://openreview.net/forum?id=De4FYqjFueZ
https://openreview.net/forum?id=De4FYqjFueZ
Bingbin Liu,Jordan T. Ash,Surbhi Goel,Akshay Krishnamurthy,Cyril Zhang
ICLR 2023,Top 5%
Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are these shallow and non-recurrent models finding? We investigate this question in the setting of learning automata, discrete dynamical systems naturally suited to recurrent modeling and expressing algorithmic tasks. Our theoretical results completely characterize shortcut solutions, whereby a shallow Transformer with only $o(T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$. By representing automata using the algebraic structure of their underlying transformation semigroups, we obtain $O(\log T)$-depth simulators for all automata and $O(1)$-depth simulators for all automata whose associated groups are solvable. Empirically, we perform synthetic experiments by training Transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. We further investigate the brittleness of these solutions and propose potential mitigations.
https://openreview.net/pdf/6fceba3e100352173ef8f64b4743424fc99f1e8d.pdf
In-context Reinforcement Learning with Algorithm Distillation
https://openreview.net/forum?id=hy0a5MMPUv
https://openreview.net/forum?id=hy0a5MMPUv
Michael Laskin,Luyu Wang,Junhyuk Oh,Emilio Parisotto,Stephen Spencer,Richie Steigerwald,DJ Strouse,Steven Stenberg Hansen,Angelos Filos,Ethan Brooks,maxime gazeau,Himanshu Sahni,Satinder Singh,Volodymyr Mnih
ICLR 2023,Top 5%
We propose Algorithm Distillation (AD), a method for distilling reinforcement learning (RL) algorithms into neural networks by modeling their training histories with a causal sequence model. Algorithm Distillation treats learning to reinforcement learn as an across-episode sequential prediction problem. A dataset of learning histories is generated by a source RL algorithm, and then a causal transformer is trained by autoregressively predicting actions given their preceding learning histories as context. Unlike sequential policy prediction architectures that distill post-learning or expert sequences, AD is able to improve its policy entirely in-context without updating its network parameters. We demonstrate that AD can reinforcement learn in-context in a variety of environments with sparse rewards, combinatorial task structure, and pixel-based observations, and find that AD learns a more data-efficient RL algorithm than the one that generated the source data.
https://openreview.net/pdf/c985c5523f4d0b869ac3914fad93d499e71fcb5a.pdf
Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
https://openreview.net/forum?id=3Pf3Wg6o-A4
https://openreview.net/forum?id=3Pf3Wg6o-A4
Antonia Creswell,Murray Shanahan,Irina Higgins
ICLR 2023,Top 5%
Large language models (LLMs) have been shown to be capable of impressive few-shot generalisation to new tasks. However, they still tend to perform poorly on multi-step logical reasoning problems. Here we carry out a comprehensive evaluation of LLMs on 46 tasks that probe different aspects of logical reasoning. We show that language models tend to perform fairly well at single step inference or entailment tasks, but struggle to chain together multiple reasoning steps to solve more complex problems. In light of this, we propose a Selection-Inference (SI) framework that exploits pre-trained LLMs as general processing modules, and alternates between selection and inference to generate a series of interpretable, causal reasoning steps leading to the final answer. We show that a 7B parameter LLM used within the SI framework in a 5-shot generalisation setting, with no fine-tuning, yields a performance improvement of over 100% compared to an equivalent vanilla baseline on a suite of 10 logical reasoning tasks. The same model in the same setting even outperforms a significantly larger 280B parameter baseline on the same suite of tasks. Moreover, answers produced by the SI framework are accompanied by a causal natural-language-based reasoning trace, which has important implications for the safety and trustworthiness of the system.
https://openreview.net/pdf/4c8f591f9bb58ccd07ed826e0e57885bc4227b12.pdf
Compressing multidimensional weather and climate data into neural networks
https://openreview.net/forum?id=Y5SEe3dfniJ
https://openreview.net/forum?id=Y5SEe3dfniJ
Langwen Huang,Torsten Hoefler
ICLR 2023,Top 5%
Weather and climate simulations produce petabytes of high-resolution data that are later analyzed by researchers in order to understand climate change or severe weather. We propose a new method of compressing this multidimensional weather and climate data: a coordinate-based neural network is trained to overfit the data, and the resulting parameters are taken as a compact representation of the original grid-based data. While compression ratios range from 300x to more than 3,000x, our method outperforms the state-of-the-art compressor SZ3 in terms of weighted RMSE and MAE. It can faithfully preserve important large-scale atmosphere structures and does not introduce significant artifacts. When using the resulting neural network as a 790x compressed dataloader to train the WeatherBench forecasting model, its RMSE increases by less than 2%. The three orders of magnitude compression democratizes access to high-resolution climate data and enables numerous new research directions.
https://openreview.net/pdf/6959d1573e13008d77bafdde3a013ed0767d1185.pdf
Confidential-PROFITT: Confidential PROof of FaIr Training of Trees
https://openreview.net/forum?id=iIfDQVyuFD
https://openreview.net/forum?id=iIfDQVyuFD
Ali Shahin Shamsabadi,Sierra Calanda Wyllie,Nicholas Franzese,Natalie Dullerud,Sébastien Gambs,Nicolas Papernot,Xiao Wang,Adrian Weller
ICLR 2023,Top 5%
Post hoc auditing of model fairness suffers from potential drawbacks: (1) auditing may be highly sensitive to the test samples chosen; (2) the model and/or its training data may need to be shared with an auditor thereby breaking confidentiality. We address these issues by instead providing a certificate that demonstrates that the learning algorithm itself is fair, and hence, as a consequence, so too is the trained model. We introduce a method to provide a confidential proof of fairness for training, in the context of widely used decision trees, which we term Confidential-PROFITT. We propose novel fair decision tree learning algorithms along with customized zero-knowledge proof protocols to obtain a proof of fairness that can be audited by a third party. Using zero-knowledge proofs enables us to guarantee confidentiality of both the model and its training data. We show empirically that bounding the information gain of each node with respect to the sensitive attributes reduces the unfairness of the final tree. In extensive experiments on the COMPAS, Communities and Crime, Default Credit, and Adult datasets, we demonstrate that a company can use Confidential-PROFITT to certify the fairness of their decision tree to an auditor in less than 2 minutes, thus indicating the applicability of our approach. This is true for both the demographic parity and equalized odds definitions of fairness. Finally, we extend Confidential-PROFITT to apply to ensembles of trees.
https://openreview.net/pdf/20b12822064b2d7eb054021a0f1209e1dd066515.pdf
Near-optimal Coresets for Robust Clustering
https://openreview.net/forum?id=Nc1ZkRW8Vde
https://openreview.net/forum?id=Nc1ZkRW8Vde
Lingxiao Huang,Shaofeng H.-C. Jiang,Jianing Lou,Xuan Wu
ICLR 2023,Top 5%
We consider robust clustering problems in $\mathbb{R}^d$, specifically $k$-clustering problems (e.g., $k$-Median and $k$-Means) with $m$ \emph{outliers}, where the cost for a given center set $C \subset \mathbb{R}^d$ aggregates the distances from $C$ to all but the furthest $m$ data points, instead of all points as in classical clustering. We focus on the $\epsilon$-coreset for robust clustering, a small proxy of the dataset that preserves the clustering cost within $\epsilon$-relative error for all center sets. Our main result is an $\epsilon$-coreset of size $O(m + \mathrm{poly}(k \epsilon^{-1}))$ that can be constructed in near-linear time. This significantly improves previous results, which either suffer an exponential dependence on $(m + k)$ [Feldman and Schulman, SODA'12], or have a weaker bi-criteria guarantee [Huang et al., FOCS'18]. Furthermore, we show this dependence on $m$ is nearly optimal, and the fact that it is isolated from other factors may be crucial for dealing with a large number of outliers. We construct our coresets by adapting to the outlier setting a recent framework [Braverman et al., FOCS'22] which was designed for capacity-constrained clustering, overcoming a new challenge that the participating terms in the cost, particularly the excluded $m$ outlier points, are dependent on the center set $C$. We validate our coresets on various datasets, and we observe a superior size-accuracy tradeoff compared with popular baselines including uniform sampling and sensitivity sampling. We also achieve a significant speedup of existing approximation algorithms for robust clustering using our coresets.
https://openreview.net/pdf/697bd8e4cac416b91757762ed8f0209073062f6d.pdf
Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives
https://openreview.net/forum?id=0Ij9_q567Ma
https://openreview.net/forum?id=0Ij9_q567Ma
Shaokun Zhang,Feiran Jia,Chi Wang,Qingyun Wu
ICLR 2023,Top 5%
Motivated by various practical applications, we propose a novel and general formulation of targeted multi-objective hyperparameter optimization. Our formulation allows a clear specification of an automatable optimization goal using lexicographic preference over multiple objectives. We then propose a randomized directed search method named LexiFlow to solve this problem. We demonstrate the strong empirical performance of the proposed algorithm in multiple hyperparameter optimization tasks.
https://openreview.net/pdf/01544b5bcb68c0bc76fbffa2876dca9d12ec0f24.pdf
Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
https://openreview.net/forum?id=F61FwJTZhb
https://openreview.net/forum?id=F61FwJTZhb
Anton Bakhtin,David J Wu,Adam Lerer,Jonathan Gray,Athul Paul Jacob,Gabriele Farina,Alexander H Miller,Noam Brown
ICLR 2023,Top 5%
No-press Diplomacy is a complex strategy game involving both cooperation and competition that has served as a benchmark for multi-agent AI research. While self-play reinforcement learning has resulted in numerous successes in purely adversarial games like chess, Go, and poker, self-play alone is insufficient for achieving optimal performance in domains involving cooperation with humans. We address this shortcoming by first introducing a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy. We prove that this is a no-regret learning algorithm under a modified utility function. We then show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL that provides a model of human play while simultaneously training an agent that responds well to this human model. We used RL-DiL-piKL to train an agent we name Diplodocus. In a 200-game no-press Diplomacy tournament involving 62 human participants spanning skill levels from beginner to expert, two Diplodocus agents both achieved a higher average score than all other participants who played more than two games, and ranked first and third according to an Elo ratings model.
https://openreview.net/pdf/5355b9a9bc1eabd198a78654d7dbfa4e5f1664b0.pdf
Efficient Attention via Control Variates
https://openreview.net/forum?id=G-uNfHKrj46
https://openreview.net/forum?id=G-uNfHKrj46
Lin Zheng,Jianbo Yuan,Chong Wang,Lingpeng Kong
ICLR 2023,Top 5%
Random-feature-based attention (RFA) is an efficient approximation of softmax attention with linear runtime and space complexity. However, the approximation gap between RFA and conventional softmax attention is not well studied. Built upon previous progress of RFA, we characterize this gap through the lens of control variates and show that RFA can be decomposed into a sum of multiple control variate estimators for each element in the sequence. This new framework reveals that exact softmax attention can be recovered from RFA by manipulating each control variate. Besides, it allows us to develop a more flexible form of control variates, resulting in a novel attention mechanism that significantly reduces the approximation gap while maintaining linear complexity. Extensive experiments demonstrate that our model outperforms state-of-the-art efficient attention mechanisms on both vision and language tasks.
https://openreview.net/pdf/2d280a38a1ccefd5c4718511ab9b2b2571c6bd05.pdf
SAM as an Optimal Relaxation of Bayes
https://openreview.net/forum?id=k4fevFqSQcX
https://openreview.net/forum?id=k4fevFqSQcX
Thomas Möllenhoff,Mohammad Emtiyaz Khan
ICLR 2023,Top 5%
Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. Here, we establish SAM as a relaxation of the Bayes objective where the expected negative-loss is replaced by the optimal convex lower bound, obtained by using the so-called Fenchel biconjugate. The connection enables a new Adam-like extension of SAM to automatically obtain reasonable uncertainty estimates, while sometimes also improving its accuracy. By connecting adversarial and Bayesian methods, our work opens a new path to robustness.
https://openreview.net/pdf/9f7784562cd53ab7d908c93bc8ece8b40dcaa922.pdf
Learning on Large-scale Text-attributed Graphs via Variational Inference
https://openreview.net/forum?id=q0nmYciuuZN
https://openreview.net/forum?id=q0nmYciuuZN
Jianan Zhao,Meng Qu,Chaozhuo Li,Hao Yan,Qian Liu,Rui Li,Xing Xie,Jian Tang
ICLR 2023,Top 5%
This paper studies learning on text-attributed graphs (TAGs), where each node is associated with a text description. An ideal solution for such a problem would be integrating both the text and graph structure information with large language models and graph neural networks (GNNs). However, the problem becomes very challenging when graphs are large due to the high computational complexity brought by training large language models and GNNs together. In this paper, we propose an efficient and effective solution to learning on large text-attributed graphs by fusing graph structure and language learning with a variational Expectation-Maximization (EM) framework, called GLEM. Instead of simultaneously training large language models and GNNs on big graphs, GLEM proposes to alternately update the two modules in the E-step and M-step. Such a procedure allows training the two modules separately while still allowing the two modules to interact and mutually enhance each other. Extensive experiments on multiple data sets demonstrate the efficiency and effectiveness of the proposed approach.
https://openreview.net/pdf/d5933681412eb0329ac9f838744d30d98d4f8c3d.pdf
Extreme Q-Learning: MaxEnt RL without Entropy
https://openreview.net/forum?id=SJ0Lde3tRL
https://openreview.net/forum?id=SJ0Lde3tRL
Divyansh Garg,Joey Hejna,Matthieu Geist,Stefano Ermon
ICLR 2023,Top 5%
Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our \emph{Extreme Q-Learning} framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms, that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance in the D4RL benchmark, outperforming prior works by \emph{10+ points} on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website.
https://openreview.net/pdf/fe4a8907cc4cf7607754d21d04e1da5914902db2.pdf
Efficiently Computing Nash Equilibria in Adversarial Team Markov Games
https://openreview.net/forum?id=mjzm6btqgV
https://openreview.net/forum?id=mjzm6btqgV
Fivos Kalogiannis,Ioannis Anagnostides,Ioannis Panageas,Emmanouil-Vasileios Vlatakis-Gkaragkounis,Vaggos Chatziafratis,Stelios Andrew Stavroulakis
ICLR 2023,Top 5%
Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in general-sum games, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon \emph{adversarial team Markov games}, a natural and well-motivated class of games in which a team of identically-interested players---in the absence of any explicit coordination or communication---is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary $\epsilon$-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as $1/\epsilon$. The proposed algorithm is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers.
https://openreview.net/pdf/3e531dec92de6b02fcbeef7a63d114423e73b571.pdf
Simplified State Space Layers for Sequence Modeling
https://openreview.net/forum?id=Ai8Hw3AXqks
https://openreview.net/forum?id=Ai8Hw3AXqks
Jimmy T.H. Smith,Andrew Warrington,Scott Linderman
ICLR 2023,Top 5%
Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages $87.4\%$ on the long range arena benchmark, and $98.5\%$ on the most difficult Path-X task.
https://openreview.net/pdf/57b1a9f476230b4a6e75b745f2c8fe47c5fa8c5a.pdf
Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics
https://openreview.net/forum?id=vmjctNUSWI
https://openreview.net/forum?id=vmjctNUSWI
Kuo-Hao Zeng,Luca Weihs,Roozbeh Mottaghi,Ali Farhadi
ICLR 2023,Top 5%
A common assumption when training embodied agents is that the impact of taking an action is stable; for instance, executing the ``move ahead'' action will always move the agent forward by a fixed distance, perhaps with some small amount of actuator-induced noise. This assumption is limiting; an agent may encounter settings that dramatically alter the impact of actions: a move ahead action on a wet floor may send the agent twice as far as it expects, and using the same action with a broken wheel might transform the expected translation into a rotation. Instead of relying on the assumption that the impact of an action stably reflects its pre-defined semantic meaning, we propose to model the impact of actions on-the-fly using latent embeddings. By combining these latent action embeddings with a novel, transformer-based, policy head, we design an Action Adaptive Policy (AAP). We evaluate our AAP on two challenging visual navigation tasks in the AI2-THOR and Habitat environments and show that our AAP is highly performant even when faced, at inference time, with missing actions and previously unseen, perturbed action spaces. Moreover, we observe significant improvement in robustness against these actions when evaluating in real-world scenarios.
https://openreview.net/pdf/5fd307801a722f24990855f8235ae461cabf66fa.pdf
SimPer: Simple Self-Supervised Learning of Periodic Targets
https://openreview.net/forum?id=EKpMeEV0hOo
https://openreview.net/forum?id=EKpMeEV0hOo
Yuzhe Yang,Xin Liu,Jiang Wu,Silviu Borac,Dina Katabi,Ming-Zher Poh,Daniel McDuff
ICLR 2023,Top 5%
From human physiology to environmental evolution, important processes in nature often exhibit meaningful and strong periodic or quasi-periodic changes. Due to their inherent label scarcity, learning useful representations for periodic tasks with limited or no supervision is of great benefit. Yet, existing self-supervised learning (SSL) methods overlook the intrinsic periodicity in data, and fail to learn representations that capture periodic or frequency attributes. In this paper, we present SimPer, a simple contrastive SSL regime for learning periodic information in data. To exploit the periodic inductive bias, SimPer introduces customized augmentations, feature similarity measures, and a generalized contrastive loss for learning efficient and robust periodic representations. Extensive experiments on common real-world tasks in human behavior analysis, environmental sensing, and healthcare domains verify the superior performance of SimPer compared to state-of-the-art SSL methods, highlighting its intriguing properties including better data efficiency, robustness to spurious correlations, and generalization to distribution shifts.
https://openreview.net/pdf/efc783fea3d58e0bcea5f077e7756fc620f0d6c2.pdf
PaLI: A Jointly-Scaled Multilingual Language-Image Model
https://openreview.net/forum?id=mWVoBz4W0u
https://openreview.net/forum?id=mWVoBz4W0u
Xi Chen,Xiao Wang,Soravit Changpinyo,AJ Piergiovanni,Piotr Padlewski,Daniel Salz,Sebastian Goodman,Adam Grycner,Basil Mustafa,Lucas Beyer,Alexander Kolesnikov,Joan Puigcerver,Nan Ding,Keran Rong,Hassan Akbari,Gaurav Mishra,Linting Xue,Ashish V Thapliyal,James Bradbury,Weicheng Kuo,Mojtaba Seyedhosseini,Chao Jia,Burcu Karagol Ayan,Carlos Riquelme Ruiz,Andreas Peter Steiner,Anelia Angelova,Xiaohua Zhai,Neil Houlsby,Radu Soricut
ICLR 2023,Top 5%
Effective scaling and a flexible task interface enable large language models to excel at many tasks. We present PaLI, a model that extends this approach to the joint modeling of language and vision. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pretrained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and leverage the substantial cost of training them. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train a large, 4-billion parameter ViT (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks, based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art in multiple vision and language tasks (such as captioning, visual question-answering, scene-text understanding), while retaining a simple, modular, and scalable design.
https://openreview.net/pdf/1870a0455d0e7a6ed7d8f02e8e156cf63f5d6b6a.pdf
Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier
https://openreview.net/forum?id=OpC-9aBBVJe
https://openreview.net/forum?id=OpC-9aBBVJe
Pierluca D'Oro,Max Schwarzer,Evgenii Nikishin,Pierre-Luc Bacon,Marc G Bellemare,Aaron Courville
ICLR 2023,Top 5%
Increasing the replay ratio, the number of updates of an agent's parameters per environment interaction, is an appealing strategy for improving the sample efficiency of deep reinforcement learning algorithms. In this work, we show that fully or partially resetting the parameters of deep reinforcement learning agents causes better replay ratio scaling capabilities to emerge. We push the limits of the sample efficiency of carefully-modified algorithms by training them using an order of magnitude more updates than usual, significantly improving their performance in the Atari 100k and DeepMind Control Suite benchmarks. We then provide an analysis of the design choices required for favorable replay ratio scaling to be possible and discuss inherent limits and tradeoffs.
https://openreview.net/pdf/c891095f8e46b891138ef064f19d6b0e2d84dcb2.pdf
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness
https://openreview.net/forum?id=Wc5bmZZU9cy
https://openreview.net/forum?id=Wc5bmZZU9cy
Shuaichen Chang,Jun Wang,Mingwen Dong,Lin Pan,Henghui Zhu,Alexander Hanbo Li,Wuwei Lan,Sheng Zhang,Jiarong Jiang,Joseph Lilien,Steve Ash,William Yang Wang,Zhiguo Wang,Vittorio Castelli,Patrick Ng,Bing Xiang
ICLR 2023,Top 5%
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries. However, recent studies reveal that text-to-SQL models are vulnerable to task-specific perturbations. Previous curated robustness test sets usually focus on individual phenomena. In this paper, we propose a comprehensive robustness benchmark based on Spider, a cross-domain text-to-SQL benchmark, to diagnose the model robustness. We design 17 perturbations on databases, natural language questions, and SQL queries to measure the robustness from different angles. In order to collect more diversified natural question perturbations, we utilize large pretrained language models (PLMs) to simulate human behaviors in creating natural questions. We conduct a diagnostic study of the state-of-the-art models on the robustness set. Experimental results reveal that even the most robust model suffers from a 14.0% performance drop overall and a 50.7% performance drop on the most challenging perturbation. We also present a breakdown analysis regarding text-to-SQL model designs and provide insights for improving model robustness.
https://openreview.net/pdf/28dd8eb27d485f652c4874af1d995452557ae2b3.pdf
Temporal Domain Generalization with Drift-Aware Dynamic Neural Networks
https://openreview.net/forum?id=sWOsRj4nT1n
https://openreview.net/forum?id=sWOsRj4nT1n
Guangji Bai,Chen Ling,Liang Zhao
ICLR 2023,Top 5%
Temporal domain generalization is a promising yet extremely challenging area where the goal is to learn models under temporally changing data distributions and generalize to unseen data distributions following the trends of the change. The advancement of this area is challenged by: 1) characterizing data distribution drift and its impacts on models, 2) expressiveness in tracking the model dynamics, and 3) theoretical guarantee on the performance. To address them, we propose a Temporal Domain Generalization with Drift-Aware Dynamic Neural Network (DRAIN) framework. Specifically, we formulate the problem into a Bayesian framework that jointly models the relation between data and model dynamics. We then build a recurrent graph generation scenario to characterize the dynamic graph-structured neural networks learned across different time points. It captures the temporal drift of model parameters and data distributions and can predict models in the future without the presence of future data. In addition, we explore theoretical guarantees of the model performance under the challenging temporal DG setting and provide theoretical analysis, including uncertainty and generalization error. Finally, extensive experiments on several real-world benchmarks with temporal drift demonstrate the proposed method’s effectiveness and efficiency.
https://openreview.net/pdf/5951cadc6186425d767a2acdd1f92bd01ab49268.pdf
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
https://openreview.net/forum?id=SMa9EAovKMC
https://openreview.net/forum?id=SMa9EAovKMC
Albert Qiaochu Jiang,Sean Welleck,Jin Peng Zhou,Timothee Lacroix,Jiacheng Liu,Wenda Li,Mateja Jamnik,Guillaume Lample,Yuhuai Wu
ICLR 2023,Top 5%
The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and only accessible to a few experts. While previous studies to automate formalization focused on powerful search algorithms, no attempts were made to take advantage of available informal proofs. In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems. We investigate two relevant setups where informal proofs are either written by humans or generated by a language model. Our experiments and ablation studies show that large language models are able to produce well-structured formal sketches that follow the same reasoning steps as the informal proofs. Guiding an automated prover with these sketches enhances its performance from $20.9\%$ to $39.3\%$ on a collection of mathematical competition problems.
https://openreview.net/pdf/cfd03f19d20263d9c1d1cc026a2b3528392fc857.pdf
REVISITING PRUNING AT INITIALIZATION THROUGH THE LENS OF RAMANUJAN GRAPH
https://openreview.net/forum?id=uVcDssQff_
https://openreview.net/forum?id=uVcDssQff_
Duc N.M Hoang,Shiwei Liu,Radu Marculescu,Zhangyang Wang
ICLR 2023,Top 5%
Pruning neural networks at initialization (PaI) has received an upsurge of interest due to its end-to-end saving potential. PaI is able to find sparse subnetworks at initialization that can achieve comparable performance to the full networks. These methods can surpass the trivial baseline of random pruning but suffer from a significant performance gap compared to post-training pruning. Previous approaches firmly rely on weights, gradients, and sanity checks as primary signals when conducting PaI analysis. To better understand the underlying mechanism of PaI, we propose to interpret it through the lens of the Ramanujan Graph - a class of expander graphs that are sparse while being highly connected. It is often believed there should be a strong correlation between the Ramanujan graph and PaI since both are about finding sparse and well-connected neural networks. However, the finer-grained link relating highly sparse and connected networks to their relative performance (i.e., the ranking of different sparse structures at the same global sparsity) is still missing. We observe that not only does the Ramanujan property of sparse networks show no significant relationship to PaI's relative performance, but maximizing it can also lead to the formation of pseudo-random graphs with no structural meaning. We reveal the underlying cause to be the Ramanujan Graph's strong assumption on the upper bound of the largest nontrivial eigenvalue ($\hat{\mu}$) of layers belonging to highly sparse networks. We hence propose the Iterative Mean Difference of Bound (IMDB) as a means to relax the $\hat{\mu}$ upper bound. Likewise, we also show there exists a lower bound for $\hat{\mu}$, which we call the Normalized Random Coefficient (NaRC), that gives us an accurate assessment of when a sparse but highly connected structure degenerates into naive randomness. Finally, we systematically analyze the behavior of various PaI methods and demonstrate the utility of our proposed metrics in characterizing PaI performance. We show that subnetworks that better preserve the IMDB property achieve higher performance, while NaRC provides us with a possible means to locate the region where highly connected, highly sparse, and non-trivial Ramanujan expanders exist. Our code is available at: https://github.com/VITA-Group/ramanujan-on-pai.
https://openreview.net/pdf/f73064906e38441e21dd0a622065469ef3f5b5bd.pdf
Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
https://openreview.net/forum?id=5N0wtJZ89r9
https://openreview.net/forum?id=5N0wtJZ89r9
Chongyi Li,Chun-Le Guo,man zhou,Zhexin Liang,Shangchen Zhou,Ruicheng Feng,Chen Change Loy
ICLR 2023,Top 5%
Ultra-High-Definition (UHD) photo has gradually become the standard configuration in advanced imaging devices. The new standard unveils many issues in existing approaches for low-light image enhancement (LLIE), especially in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. Unlike existing methods that address the problem in the spatial domain, we propose a new solution, UHDFour, that embeds Fourier transform into a cascaded network. Our approach is motivated by a few unique characteristics in the Fourier domain: 1) most luminance information concentrates on amplitudes while noise is closely related to phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. Through embedding Fourier into our network, the amplitude and phase of a low-light image are separately processed to avoid amplifying noise when enhancing luminance. Besides, UHDFour is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We believe our new framework, coupled with the dataset, would push the frontier of LLIE towards UHD. The code and dataset are available at https://li-chongyi.github.io/UHDFour/.
https://openreview.net/pdf/4e2ab7acffc377a1981d0ed5d1e4310328115c82.pdf
A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification
https://openreview.net/forum?id=YnkGMIh0gvX
https://openreview.net/forum?id=YnkGMIh0gvX
Paul F Jaeger,Carsten Tim Lüth,Lukas Klein,Till J. Bungert
ICLR 2023,Top 5%
Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by means of assigning confidence scores. This confidence may be obtained by either quantifying the model's predictive uncertainty, learning explicit scoring functions, or assessing whether the input is in line with the training distribution. Curiously, while these approaches all claim to address the same eventual goal of detecting failures of a classifier upon real-world application, they currently constitute largely separated research fields with individual evaluation protocols, which either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal current pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study for the first time enabling benchmarking of confidence scoring functions w.r.t. all relevant methods and failure sources. The revelation of a simple softmax response baseline as the overall best performing method underlines the drastic shortcomings of current evaluation in the plethora of publicized research on confidence scoring. Code and trained models are at https://github.com/IML-DKFZ/fd-shifts
https://openreview.net/pdf/a5de8999d6fc1e463fee479f14b17ae999f6cbc2.pdf
Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search
https://openreview.net/forum?id=7JsGYvjE88d
https://openreview.net/forum?id=7JsGYvjE88d
Michał Zawalski,Michał Tyrolski,Konrad Czechowski,Tomasz Odrzygóźdź,Damian Stachura,Piotr Piękos,Yuhuai Wu,Łukasz Kuciński,Piotr Miłoś
ICLR 2023,Top 5%
Complex reasoning problems contain states that vary in the computational cost required to determine the right action plan. To take advantage of this property, we propose Adaptive Subgoal Search (AdaSubS), a search method that adaptively adjusts the planning horizon. To this end, AdaSubS generates diverse sets of subgoals at different distances. A verification mechanism is employed to filter out unreachable subgoals swiftly, making it possible to focus on feasible further subgoals. In this way, AdaSubS benefits from the efficiency of planning with longer-term subgoals and the fine control with shorter-term ones, and thus scales well to difficult planning problems. We show that AdaSubS significantly surpasses hierarchical planning algorithms on three complex reasoning tasks: Sokoban, the Rubik’s Cube, and the inequality-proving benchmark INT.
https://openreview.net/pdf/361fb386c64c303b0467dd1fb8d3946766d58d4c.pdf
Towards Open Temporal Graph Neural Networks
https://openreview.net/forum?id=N9Pk5iSCzAn
https://openreview.net/forum?id=N9Pk5iSCzAn
Kaituo Feng,Changsheng Li,Xiaolu Zhang,JUN ZHOU
ICLR 2023,Top 5%
Graph neural networks (GNNs) for temporal graphs have recently attracted increasing attention, where a common assumption is that the class set for nodes is closed. However, real-world scenarios often face the open-set problem, where the class set grows dynamically as time passes. This brings two big challenges to existing dynamic GNN methods: (i) How to dynamically propagate appropriate information in an open temporal graph, where new-class nodes are often linked to old-class nodes. This case leads to a sharp contradiction: typical GNNs are prone to make the embeddings of connected nodes become similar, while we expect the embeddings of these two interacting nodes to be distinguishable since they belong to different classes. (ii) How to avoid catastrophic forgetting of knowledge about old classes when learning new classes that occur in temporal graphs. In this paper, we propose a general and principled learning approach for open temporal graphs, called OTGNet, with the goal of addressing the above two challenges. We assume the knowledge of a node can be disentangled into class-relevant and class-agnostic components, and thus explore a new message passing mechanism by extending the information bottleneck principle to only propagate class-agnostic knowledge between nodes of different classes, avoiding aggregating conflicting information. Moreover, we devise a strategy to select both important and diverse triad sub-graph structures for effective class-incremental learning. Extensive experiments on three real-world datasets from different domains demonstrate the superiority of our method compared to the baselines.
https://openreview.net/pdf/50805c42deb9d452f3b80c28edbbd14aa21932f7.pdf
Relative representations enable zero-shot latent space communication
https://openreview.net/forum?id=SrC-nwieGJ
https://openreview.net/forum?id=SrC-nwieGJ
Luca Moschella,Valentino Maiorca,Marco Fumero,Antonio Norelli,Francesco Locatello,Emanuele Rodolà
ICLR 2023,Top 5%
Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations. Ideally, the distribution of the data points in the latent space should depend only on the task, the data, the loss, and other architecture-specific constraints. However, factors such as the random weights initialization, training hyperparameters, or other sources of randomness in the training phase may induce incoherent latent spaces that hinder any form of reuse. Nevertheless, we empirically observe that, under the same data and modeling choices, the angles between the encodings within distinct latent spaces do not change. In this work, we propose the latent similarity between each sample and a fixed set of anchors as an alternative data representation, demonstrating that it can enforce the desired invariances without any additional training. We show how neural architectures can leverage these relative representations to guarantee, in practice, invariance to latent isometries and rescalings, effectively enabling latent space communication: from zero-shot model stitching to latent space comparison between diverse settings. We extensively validate the generalization capability of our approach on different datasets, spanning various modalities (images, text, graphs), tasks (e.g., classification, reconstruction) and architectures (e.g., CNNs, GCNs, transformers).
https://openreview.net/pdf/2d9f62e22019d0d53476f0c4a9d760c6cc7895e2.pdf
Language Modelling with Pixels
https://openreview.net/forum?id=FkSp8VW8RjH
https://openreview.net/forum?id=FkSp8VW8RjH
Phillip Rust,Jonas F. Lotz,Emanuele Bugliarello,Elizabeth Salesky,Miryam de Lhoneux,Desmond Elliott
ICLR 2023,Top 5%
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, the Pixel-based Encoder of Language, which suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain the 86M parameter PIXEL model on the same English data as BERT and evaluate on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks on scripts that are not found in the pretraining data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust than BERT to orthographic attacks and linguistic code-switching, further confirming the benefits of modelling language with pixels.
https://openreview.net/pdf/5ade25a9134d48be86a9acbbebf941357365462c.pdf
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
https://openreview.net/forum?id=M95oDwJXayG
https://openreview.net/forum?id=M95oDwJXayG
Marius-Constantin Dinu,Markus Holzleitner,Maximilian Beck,Hoan Duc Nguyen,Andrea Huber,Hamid Eghbal-zadeh,Bernhard A. Moser,Sergei Pereverzyev,Sepp Hochreiter,Werner Zellinger
ICLR 2023,Top 5%
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution. We follow the strategy of computing several models using different hyper-parameters and subsequently computing a linear aggregation of the models. While several heuristics exist that follow this strategy, methods are still missing that rely on thorough theories for bounding the target error. To this end, we propose a method that extends weighted least squares to vector-valued functions, e.g., deep neural networks. We show that the target error of the proposed algorithm is asymptotically not worse than twice the error of the unknown optimal aggregation. We also perform a large scale empirical comparative study on several datasets, including text, images, electroencephalogram, body sensor signals and signals from mobile phones. Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees. We further study several competitive heuristics, all outperforming IWV and DEV on at least five datasets. However, our method outperforms each heuristic on at least five of seven datasets.
https://openreview.net/pdf/36c115dd350b35beffcf18cbfd0a6afd2ab5a0e7.pdf
Symbolic Physics Learner: Discovering governing equations via Monte Carlo tree search
https://openreview.net/forum?id=ZTK3SefE8_Z
https://openreview.net/forum?id=ZTK3SefE8_Z
Fangzheng Sun,Yang Liu,Jian-Xun Wang,Hao Sun
ICLR 2023,Top 5%
Nonlinear dynamics is ubiquitous in nature and commonly seen in various science and engineering disciplines. Distilling analytical expressions that govern nonlinear dynamics from limited data remains vital but challenging. To tackle this fundamental issue, we propose a novel Symbolic Physics Learner (SPL) machine to discover the mathematical structure of nonlinear dynamics. The key concept is to interpret mathematical operations and system state variables by computational rules and symbols, establish symbolic reasoning of mathematical formulas via expression trees, and employ a Monte Carlo tree search (MCTS) agent to explore optimal expression trees based on measurement data. The MCTS agent obtains an optimistic selection policy through the traversal of expression trees, featuring the one that maps to the arithmetic expression of underlying physics. Salient features of the proposed framework include search flexibility and enforcement of parsimony for discovered equations. The efficacy and superiority of the SPL machine are demonstrated by numerical examples, compared with state-of-the-art baselines.
https://openreview.net/pdf/0c815f206ac64432f9caf1f36b816f9e368dee15.pdf
Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only
https://openreview.net/forum?id=rFQfjDC9Mt
https://openreview.net/forum?id=rFQfjDC9Mt
Kangjie Chen,Xiaoxuan Lou,Guowen Xu,Jiwei Li,Tianwei Zhang
ICLR 2023,Top 5%
Multi-label models have been widely used in various applications including image annotation and object detection. The fly in the ointment is their inherent vulnerability to backdoor attacks due to the adoption of deep learning techniques. However, all existing backdoor attacks exclusively require modifying training inputs (e.g., images), which may be impractical in real-world applications. In this paper, we aim to break this wall and propose the first clean-image backdoor attack, which only poisons the training labels without touching the training samples. Our key insight is that in a multi-label learning task, the adversary can just manipulate the annotations of training samples consisting of a specific set of classes to activate the backdoor. We design a novel trigger exploration method to find covert and effective triggers to enhance the attack performance. We also propose three target label selection strategies to achieve different goals. Experimental results indicate that our clean-image backdoor can achieve a 98% attack success rate while preserving the model's functionality on the benign inputs. Besides, the proposed clean-image backdoor can evade existing state-of-the-art defenses.
https://openreview.net/pdf/6021cbdfd717a31730914f92bc2b1e9762135b65.pdf
Graph Neural Networks for Link Prediction with Subgraph Sketching
https://openreview.net/forum?id=m1oqEOAozQU
https://openreview.net/forum?id=m1oqEOAozQU
Benjamin Paul Chamberlain,Sergey Shirobokov,Emanuele Rossi,Fabrizio Frasca,Thomas Markovich,Nils Yannick Hammerla,Michael M. Bronstein,Max Hansmire
ICLR 2023,Top 5%
Many Graph Neural Networks (GNNs) perform poorly compared to simple heuristics on Link Prediction (LP) tasks. This is due to limitations in expressive power such as the inability to count triangles (the backbone of most LP heuristics) and because they cannot distinguish automorphic nodes (those having identical structural roles). Both expressiveness issues can be alleviated by learning link (rather than node) representations and incorporating structural features such as triangle counts. Since explicit link representations are often prohibitively expensive, recent works resorted to subgraph-based methods, which have achieved state-of-the-art performance for LP, but suffer from poor efficiency due to high levels of redundancy between subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link prediction. Based on our analysis, we propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction. ELPH is provably more expressive than Message Passing GNNs (MPNNs). It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster. However, it shares the common GNN limitation that it is only efficient when the dataset fits in GPU memory. Accordingly, we develop a highly scalable model, called BUDDY, which uses feature precomputation to circumvent this limitation without sacrificing predictive performance. Our experiments show that BUDDY also outperforms SGNNs on standard LP benchmarks while being highly scalable and faster than ELPH.
https://openreview.net/pdf/c24fea923ffff6f10becdc0da41b8e84eb3412a1.pdf
Image to Sphere: Learning Equivariant Features for Efficient Pose Prediction
https://openreview.net/forum?id=_2bDpAtr7PI
https://openreview.net/forum?id=_2bDpAtr7PI
David Klee,Ondrej Biza,Robert Platt,Robin Walters
ICLR 2023,Top 5%
Predicting the pose of objects from a single image is an important but difficult computer vision problem. Methods that predict a single point estimate do not predict the pose of objects with symmetries well and cannot represent uncertainty. Alternatively, some works predict a distribution over orientations in $\mathrm{SO}(3)$. However, training such models can be computation- and sample-inefficient. Instead, we propose a novel mapping of features from the image domain to the 3D rotation manifold. Our method then leverages $\mathrm{SO}(3)$ equivariant layers, which are more sample efficient, and outputs a distribution over rotations that can be sampled at arbitrary resolution. We demonstrate the effectiveness of our method at object orientation prediction, and achieve state-of-the-art performance on the popular PASCAL3D+ dataset. Moreover, we show that our method can model complex object symmetries, without any modifications to the parameters or loss function. Code is available at \url{https://dmklee.github.io/image2sphere}.
https://openreview.net/pdf/dc2578c49b3cfc78beece0602f3564947a512c18.pdf
MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting
https://openreview.net/forum?id=zt53IDUR1U
https://openreview.net/forum?id=zt53IDUR1U
Huiqiang Wang,Jian Peng,Feihu Huang,Jince Wang,Junhui Chen,Yifei Xiao
ICLR 2023,Top 5%
Recently, Transformer-based methods have achieved surprising performance in the field of long-term series forecasting, but the attention mechanism for computing global correlations entails high complexity. Moreover, they do not allow for targeted modeling of local features as CNN structures do. To solve the above problems, we propose to combine local features and global correlations to capture the overall view of time series (e.g., fluctuations, trends). To fully exploit the underlying information in the time series, a multi-scale branch structure is adopted to model different potential patterns separately. Each pattern is extracted with down-sampled convolution and isometric convolution for local features and global correlations, respectively. In addition to being more effective, our proposed method, termed as Multi-scale Isometric Convolution Network (MICN), is more efficient with linear complexity with respect to the sequence length when using suitable convolution kernels. Our experiments on six benchmark datasets show that compared with state-of-the-art methods, MICN yields 17.2% and 21.6% relative improvements for multivariate and univariate time series, respectively.
https://openreview.net/pdf/6e3044ae6e9494f027b7c011f97efa8f0ed029c0.pdf
Personalized Federated Learning with Feature Alignment and Classifier Collaboration
https://openreview.net/forum?id=SXZr8aDKia
https://openreview.net/forum?id=SXZr8aDKia
Jian Xu,Xinyi Tong,Shao-Lun Huang
ICLR 2023,Top 5%
Data heterogeneity is one of the most challenging issues in federated learning, which motivates a variety of approaches to learn personalized models for participating clients. One such approach for tasks based on deep neural networks is employing a shared feature representation and learning a customized classifier head for each client. However, previous works do not utilize the global knowledge during local representation learning and also neglect the fine-grained collaboration between local classifier heads, which limits the model generalization ability. In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation. Moreover, we quantify the benefit of classifier combination for each client as a function of the combining weights and derive an optimization problem for estimating optimal weights. Finally, extensive evaluation results on benchmark datasets with various heterogeneous data scenarios demonstrate the effectiveness of our proposed method.
https://openreview.net/pdf/7e45d7414cae758349f97df5277f8897ef7b8c04.pdf
From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data
https://openreview.net/forum?id=c7rM7F7jQjN
https://openreview.net/forum?id=c7rM7F7jQjN
Zichen Jeff Cui,Yibin Wang,Nur Muhammad Mahi Shafiullah,Lerrel Pinto
ICLR 2023,Top 5%
While large-scale sequence modelling from offline data has led to impressive performance gains in natural language generation and image generation, directly translating such ideas to robotics has been challenging. One critical reason for this is that uncurated robot demonstration data, i.e. play data, collected from non-expert human demonstrators are often noisy, diverse, and distributionally multi-modal. This makes extracting useful, task-centric behaviors from such data a difficult generative modelling problem. In this work, we present Conditional Behavior Transformers (C-BeT), a method that combines the multi-modal generation ability of Behavior Transformer with future-conditioned goal specification. On a suite of simulated benchmark tasks, we find that C-BeT improves upon prior state-of-the-art work in learning from play data by an average of 45.7%. Further, we demonstrate for the first time that useful task-centric behaviors can be learned on a real-world robot purely from play data without any task labels or reward information. Robot videos are best viewed on our project website: play-to-policy.github.io
https://openreview.net/pdf/2ac61e4b87940fa144ced394ae19abce9e89a184.pdf
Visual Classification via Description from Large Language Models
https://openreview.net/forum?id=jlAjNL8z5cs
https://openreview.net/forum?id=jlAjNL8z5cs
Sachit Menon,Carl Vondrick
ICLR 2023,Top 5%
Vision-language models such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what the model ``thinks'' it is seeing to make its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages beyond interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline.
https://openreview.net/pdf/d171255a976821dd4ebfacb7a012082c4b888b7a.pdf
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation
https://openreview.net/forum?id=w0QXrZ3N-s
https://openreview.net/forum?id=w0QXrZ3N-s
Zihui Xue,Zhengqi Gao,Sucheng Ren,Hang Zhao
ICLR 2023,Top 5%
Crossmodal knowledge distillation (KD) extends traditional knowledge distillation to the area of multimodal learning and demonstrates great success in various applications. To achieve knowledge transfer across modalities, a pretrained network from one modality is adopted as the teacher to provide supervision signals to a student network learning from the other modality. In contrast to the empirical success reported in prior works, the working mechanism of crossmodal KD remains a mystery. In this paper, we present a thorough understanding of crossmodal KD. We begin by providing two failure cases and demonstrate that KD is not a universal cure in crossmodal knowledge transfer. We then present the modality Venn diagram to understand modality relationships and the modality focusing hypothesis revealing the decisive factor in the efficacy of crossmodal KD. Experimental results on 6 multimodal datasets help justify our hypothesis, diagnose failure cases, and point directions to improve crossmodal knowledge transfer in the future.
https://openreview.net/pdf/741eead42fe714d67fac001285243a76fd4ad259.pdf
Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve
https://openreview.net/forum?id=OJ8aSjCaMNK
https://openreview.net/forum?id=OJ8aSjCaMNK
Juhan Bae,Michael R. Zhang,Michael Ruan,Eric Wang,So Hasegawa,Jimmy Ba,Roger Baker Grosse
ICLR 2023,Top 5%
Variational autoencoders (VAEs) are powerful tools for learning latent representations of data used in a wide range of applications. In practice, VAEs usually require multiple training rounds to choose the amount of information the latent variable should retain. This trade-off between the reconstruction error (distortion) and the KL divergence (rate) is typically parameterized by a hyperparameter $\beta$. In this paper, we introduce Multi-Rate VAE (MR-VAE), a computationally efficient framework for learning optimal parameters corresponding to various $\beta$ in a single training run. The key idea is to explicitly formulate a response function using hypernetworks that maps $\beta$ to the optimal parameters. MR-VAEs construct a compact response hypernetwork where the pre-activations are conditionally gated based on $\beta$. We justify the proposed architecture by analyzing linear VAEs and showing that it can represent response functions exactly for linear VAEs. With the learned hypernetwork, MR-VAEs can construct the rate-distortion curve without additional training and can be deployed with significantly less hyperparameter tuning. Empirically, our approach is competitive with and often exceeds the performance of training multiple $\beta$-VAEs, with minimal computation and memory overheads.
https://openreview.net/pdf/14a6477c29547f6a0e88be838a4bb2fe39d0bef6.pdf
Near-optimal Policy Identification in Active Reinforcement Learning
https://openreview.net/forum?id=3OR2tbtnYC-
https://openreview.net/forum?id=3OR2tbtnYC-
Xiang Li,Viraj Mehta,Johannes Kirschner,Ian Char,Willie Neiswanger,Jeff Schneider,Andreas Krause,Ilija Bogunovic
ICLR 2023,Top 5%
Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces. In cases where the expensive transition dynamics can be readily evaluated at specified states (e.g., via a simulator), agents can operate in what is often referred to as planning with a \emph{generative model}. We propose the AE-LSVI algorithm for best policy identification, a novel variant of the kernelized least-squares value iteration (LSVI) algorithm that combines optimism with pessimism for active exploration (AE). AE-LSVI provably identifies a near-optimal policy \emph{uniformly} over an entire state space and achieves polynomial sample complexity guarantees that are independent of the number of states. When specialized to the recently introduced offline contextual Bayesian optimization setting, our algorithm achieves improved sample complexity bounds. Experimentally, we demonstrate that AE-LSVI outperforms other RL algorithms in a variety of environments when robustness to the initial state is required.
https://openreview.net/pdf/3f2fd20ea112039f10550e677478e83b1f6260a7.pdf
Conditional Antibody Design as 3D Equivariant Graph Translation
https://openreview.net/forum?id=LFHFQbjxIiP
https://openreview.net/forum?id=LFHFQbjxIiP
Xiangzhe Kong,Wenbing Huang,Yang Liu
ICLR 2023,Top 5%
Antibody design is valuable for therapeutic usage and biological research. Existing deep-learning-based methods encounter several key issues: 1) incomplete context for Complementarity-Determining Regions (CDRs) generation; 2) incapability of capturing the entire 3D geometry of the input structure; 3) inefficient prediction of the CDR sequences in an autoregressive manner. In this paper, we propose Multi-channel Equivariant Attention Network (MEAN) to co-design 1D sequences and 3D structures of CDRs. To be specific, MEAN formulates antibody design as a conditional graph translation problem by importing extra components including the target antigen and the light chain of the antibody. Then, MEAN resorts to E(3)-equivariant message passing along with a proposed attention mechanism to better capture the geometrical correlation between different components. Finally, it outputs both the 1D sequences and 3D structure via a multi-round progressive full-shot scheme, which enjoys higher efficiency and precision than previous autoregressive approaches. Our method significantly surpasses state-of-the-art models in sequence and structure modeling, antigen-binding CDR design, and binding affinity optimization. Specifically, the relative improvement to baselines is about 23\% in antigen-binding CDR design and 34\% for affinity optimization.
https://openreview.net/pdf/3ad0b04b8a9b31f816c7c80ce0cf71fad13fa636.pdf
Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task
https://openreview.net/forum?id=DeG07_TcZvT
https://openreview.net/forum?id=DeG07_TcZvT
Kenneth Li,Aspen K Hopkins,David Bau,Fernanda Viégas,Hanspeter Pfister,Martin Wattenberg
ICLR 2023,Top 5%
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
https://openreview.net/pdf/70fb51a26cffdf3304e24f4d2e803b729904fe20.pdf
Tailoring Language Generation Models under Total Variation Distance
https://openreview.net/forum?id=VELL0PlWfc
https://openreview.net/forum?id=VELL0PlWfc
Haozhe Ji,Pei Ke,Zhipeng Hu,Rongsheng Zhang,Minlie Huang
ICLR 2023,Top 5%
The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method. From a distributional view, MLE in fact minimizes the Kullback-Leibler divergence (KLD) between the distribution of the real data and that of the model. However, this approach forces the model to distribute non-zero (sometimes large) probability mass to all training samples regardless of their quality. Moreover, in the attempt to cover the low-probability regions in the data distribution, the model systematically overestimates the probability of corrupted text sequences, which we conjecture is one of the main reasons for text degeneration during autoregressive decoding. To remedy this problem, we leverage the total variation distance (TVD) with its robustness to outliers, and develop practical bounds to apply it to language generation. Then, we introduce the TaiLr objective that balances the tradeoff of estimating TVD. Intuitively, TaiLr downweights real data samples that have low model probabilities with tunable penalization intensity. Experimental results show that our method alleviates the overestimation of degenerated sequences without sacrificing diversity and improves generation quality on a wide range of text generation tasks.
https://openreview.net/pdf/222b0c66b1d6e4c664fc67e8d5d1348ae37c505e.pdf
Transformers are Sample-Efficient World Models
https://openreview.net/forum?id=vhFu1Acb0xb
https://openreview.net/forum?id=vhFu1Acb0xb
Vincent Micheli,Eloi Alonso,François Fleuret
ICLR 2023,Top 5%
Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a new state of the art for methods without lookahead search. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our code and models at https://github.com/eloialonso/iris.
https://openreview.net/pdf/f23ea2080e754e26ad7f8a9f9a55865dd11f0a73.pdf
Statistical Efficiency of Score Matching: The View from Isoperimetry
https://openreview.net/forum?id=TD7AnQjNzR6
https://openreview.net/forum?id=TD7AnQjNzR6
Frederic Koehler,Alexander Heckett,Andrej Risteski
ICLR 2023,Top 5%
Deep generative models parametrized up to a normalizing constant (e.g. energy-based models) are difficult to train by maximizing the likelihood of the data because the likelihood and/or gradients thereof cannot be explicitly or efficiently written down. Score matching is a training method, whereby instead of fitting the likelihood $\log p(x)$ for the training data, we instead fit the score function $\nabla_x \log p(x)$ --- obviating the need to evaluate the partition function. Though this estimator is known to be consistent, it is unclear whether (and when) its statistical efficiency is comparable to that of maximum likelihood --- which is known to be (asymptotically) optimal. We initiate this line of inquiry in this paper, and show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated --- i.e. the Poincar\'e, log-Sobolev and isoperimetric constant --- quantities which govern the mixing time of Markov processes like Langevin dynamics. Roughly, we show that the score matching estimator is statistically comparable to maximum likelihood when the distribution has a small isoperimetric constant. Conversely, if the distribution has a large isoperimetric constant --- even for simple families of distributions like exponential families with rich enough sufficient statistics --- score matching will be substantially less efficient than maximum likelihood. We suitably formalize these results both in the finite sample regime, and in the asymptotic regime. Finally, we identify a direct parallel in the discrete setting, where we connect the statistical properties of pseudolikelihood estimation with approximate tensorization of entropy and the Glauber dynamics.
https://openreview.net/pdf/650e8b5c38872cf721fff2c0b10c3e5fa039579b.pdf
View Synthesis with Sculpted Neural Points
https://openreview.net/forum?id=0ypGZvm0er0
https://openreview.net/forum?id=0ypGZvm0er0
Yiming Zuo,Jia Deng
ICLR 2023,Top 5%
We address the task of view synthesis, generating novel views of a scene given a set of images as input. In many recent works such as NeRF (Mildenhall et al., 2020), the scene geometry is parameterized using neural implicit representations (i.e., MLPs). Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency. In this work, we propose a new approach that performs view synthesis using point clouds. It is the first point-based method that achieves better visual quality than NeRF while being 100× faster in rendering speed. Our approach builds on existing works on differentiable point-based rendering but introduces a novel technique we call “Sculpted Neural Points (SNP)”, which significantly improves the robustness to errors and holes in the reconstructed point cloud. We further propose to use view-dependent point features based on spherical harmonics to capture non-Lambertian surfaces, and new designs in the point-based rendering pipeline that further boost the performance. Finally, we show that our system supports fine-grained scene editing. Code is available at https://github.com/princeton-vl/SNP.
https://openreview.net/pdf/a844600e54c069b827ba8e0013a60b4a1193f97f.pdf
AutoGT: Automated Graph Transformer Architecture Search
https://openreview.net/forum?id=GcM7qfl5zY
https://openreview.net/forum?id=GcM7qfl5zY
Zizhao Zhang,Xin Wang,Chaoyu Guan,Ziwei Zhang,Haoyang Li,Wenwu Zhu
ICLR 2023,Top 5%
Although Transformer architectures have been successfully applied to graph data with the advent of Graph Transformer, current design of Graph Transformer still heavily relies on human labor and expert knowledge to decide proper neural architectures and suitable graph encoding strategies at each Transformer layer. In literature, there have been some works on automated design of Transformers focusing on non-graph data such as texts and images without considering graph encoding strategies, which fail to handle the non-Euclidean graph data. In this paper, we study the problem of automated graph Transformer design for the first time. Solving this problem poses the following challenges: i) how can we design a unified search space for graph Transformer, and ii) how to deal with the coupling relations between Transformer architectures and the graph encodings of each Transformer layer. To address these challenges, we propose Automated Graph Transformer (AutoGT), a neural architecture search framework that can automatically discover the optimal graph Transformer architectures by joint optimization of Transformer architecture and graph encoding strategies. Specifically, we first propose a unified graph Transformer formulation that can represent most of state-of-the-art graph Transformer architectures. Based upon the unified formulation, we further design the graph Transformer search space that includes both candidate architectures and various graph encodings. To handle the coupling relations, we propose a novel encoding-aware performance estimation strategy by gradually training and splitting the supernets according to the correlations between graph encodings and architectures. The proposed strategy can provide a more consistent and fine-grained performance prediction when evaluating the jointly optimized graph encodings and architectures. Extensive experiments and ablation studies show that our proposed AutoGT gains significant improvement over state-of-the-art hand-crafted baselines on all datasets, demonstrating its effectiveness and wide applicability.
https://openreview.net/pdf/ea1ae3473367dc3011d3f2b84c2b2192c39aee04.pdf
Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting
https://openreview.net/forum?id=vSVLM2j9eie
https://openreview.net/forum?id=vSVLM2j9eie
Yunhao Zhang,Junchi Yan
ICLR 2023,Top 5%
Recently many deep models have been proposed for multivariate time series (MTS) forecasting. In particular, Transformer-based models have shown great potential because they can capture long-term dependency. However, existing Transformer-based models mainly focus on modeling the temporal dependency (cross-time dependency) yet often omit the dependency among different variables (cross-dimension dependency), which is critical for MTS forecasting. To fill the gap, we propose Crossformer, a Transformer-based model utilizing cross-dimension dependency for MTS forecasting. In Crossformer, the input MTS is embedded into a 2D vector array through the Dimension-Segment-Wise (DSW) embedding to preserve time and dimension information. Then the Two-Stage Attention (TSA) layer is proposed to efficiently capture the cross-time and cross-dimension dependency. Utilizing DSW embedding and TSA layer, Crossformer establishes a Hierarchical Encoder-Decoder (HED) to use the information at different scales for the final forecasting. Extensive experimental results on six real-world datasets show the effectiveness of Crossformer against previous state-of-the-art methods.
https://openreview.net/pdf/1d793d6ba7c00ecfe98128614d58e2493255bd89.pdf
Betty: An Automatic Differentiation Library for Multilevel Optimization
https://openreview.net/forum?id=LV_MeMS38Q9
https://openreview.net/forum?id=LV_MeMS38Q9
Sang Keun Choe,Willie Neiswanger,Pengtao Xie,Eric Xing
ICLR 2023,Top 5%
Gradient-based multilevel optimization (MLO) has gained attention as a framework for studying numerous problems, ranging from hyperparameter optimization and meta-learning to neural architecture search and reinforcement learning. However, gradients in MLO, which are obtained by composing best-response Jacobians via the chain rule, are notoriously difficult to implement and memory/compute intensive. We take an initial step towards closing this gap by introducing Betty, a software library for large-scale MLO. At its core, we devise a novel dataflow graph for MLO, which allows us to (1) develop efficient automatic differentiation for MLO that reduces the computational complexity from $\mathcal{O}(d^3)$ to $\mathcal{O}(d^2)$, (2) incorporate systems support such as mixed-precision and data-parallel training for scalability, and (3) facilitate implementation of MLO programs of arbitrary complexity while allowing a modular interface for diverse algorithmic and systems design choices. We empirically demonstrate that Betty can be used to implement an array of MLO programs, while also observing up to 11% increase in test accuracy, 14% decrease in GPU memory usage, and 20% decrease in training wall time over existing implementations on multiple benchmarks. We also showcase that Betty enables scaling MLO to models with hundreds of millions of parameters. We open-source the code at https://github.com/leopard-ai/betty.
https://openreview.net/pdf/e92379cd67840d63d8a85743600bfe396bcdf7fb.pdf
Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization
https://openreview.net/forum?id=ueYYgo2pSSU
https://openreview.net/forum?id=ueYYgo2pSSU
Haoran Xu,Li Jiang,Jianxiong Li,Zhuoran Yang,Zhaoran Wang,Victor Wai Kin Chan,Xianyuan Zhan
ICLR 2023,Top 5%
Most offline reinforcement learning (RL) methods suffer from the trade-off between improving the policy to surpass the behavior policy and constraining the policy to limit the deviation from the behavior policy, as computing $Q$-values using out-of-distribution (OOD) actions will suffer from errors due to distributional shift. The recently proposed \textit{In-sample Learning} paradigm (i.e., IQL), which improves the policy by quantile regression using only data samples, shows great promise because it learns an optimal policy without querying the value function of any unseen actions. However, it remains unclear how this type of method handles the distributional shift in learning the value function. In this work, we make a key finding that the in-sample learning paradigm arises under the \textit{Implicit Value Regularization} (IVR) framework. This gives a deeper understanding of why the in-sample learning paradigm works, i.e., it applies implicit value regularization to the policy. Based on the IVR framework, we further propose two practical algorithms, Sparse $Q$-learning (SQL) and Exponential $Q$-learning (EQL), which adopt the same value regularization used in existing works, but in a complete in-sample manner. Compared with IQL, we find that our algorithms introduce sparsity in learning the value function, making them more robust in noisy data regimes. We also verify the effectiveness of SQL and EQL on D4RL benchmark datasets and show the benefits of in-sample learning by comparing them with CQL in small data regimes. Code is available at \url{https://github.com/ryanxhr/SQL}.
https://openreview.net/pdf/dbd2c001478b511324bdbec3a393c6f1552fbb3d.pdf
Win: Weight-Decay-Integrated Nesterov Acceleration for Adaptive Gradient Algorithms
https://openreview.net/forum?id=CPdc77SQfQ5
https://openreview.net/forum?id=CPdc77SQfQ5
Pan Zhou,Xingyu Xie,Shuicheng YAN
ICLR 2023,Top 5%
Training deep networks on large-scale datasets is computationally challenging. In this work, we explore the problem of ``\textit{how to accelerate adaptive gradient algorithms in a general manner}", and aim to provide practical efficiency-boosting insights. To this end, we propose an effective and general {Weight-decay-Integrated Nesterov acceleration} (Win) to accelerate adaptive algorithms. Taking AdamW and Adam as examples, we minimize a dynamical loss per iteration which combines the vanilla training loss and a dynamic regularizer inspired by the proximal point method (PPM) to improve the convexity of the problem. To introduce Nesterov-like acceleration into AdamW and Adam, we respectively use the first- and second-order Taylor approximations of the vanilla loss to update the variable twice. In this way, we arrive at our Win acceleration for AdamW and Adam, which uses a conservative step and a reckless step to update twice and then linearly combines these two updates for acceleration. Next, we extend Win acceleration to LAMB and SGD. Our transparent acceleration derivation could provide insights for other accelerated methods and their integration into adaptive algorithms. Besides, we prove the convergence of Win-accelerated adaptive algorithms and justify their convergence superiority over their non-accelerated counterparts by taking AdamW and Adam as examples. Experimental results testify to the faster convergence speed and superior performance of our Win-accelerated AdamW, Adam, LAMB and SGD over their non-accelerated counterparts on vision classification tasks and language modeling tasks with both CNN and Transformer backbones. We hope Win will become a default acceleration option for popular optimizers in the deep learning community to improve training efficiency. Code will be released at \url{https://github.com/sail-sg/win}.
https://openreview.net/pdf/b3453f304fc9650f5fcaa04d42bafe01e1c5bd1a.pdf
Towards Stable Test-time Adaptation in Dynamic Wild World
https://openreview.net/forum?id=g2YraF75Tj
https://openreview.net/forum?id=g2YraF75Tj
Shuaicheng Niu,Jiaxiang Wu,Yifan Zhang,Zhiquan Wen,Yaofo Chen,Peilin Zhao,Mingkui Tan
ICLR 2023,Top 5%
Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples. However, the online model updating of TTA may be unstable, and this is often a key obstacle preventing existing TTA methods from being deployed in the real world. Specifically, TTA may fail to improve or even harm the model performance when test data have: 1) mixed distribution shifts, 2) small batch sizes, and 3) online imbalanced label distribution shifts, which are quite common in practice. In this paper, we investigate the reasons for this instability and find that the batch norm layer is a crucial factor hindering TTA stability. Conversely, TTA can perform more stably with batch-agnostic norm layers, i.e., group or layer norm. However, we observe that TTA with group and layer norms does not always succeed and still suffers many failure cases. By digging into the failure cases, we find that certain noisy test samples with large gradients may disturb the model adaptation and result in collapsed trivial solutions, i.e., assigning the same class label to all samples. To address the above collapse issue, we propose a sharpness-aware and reliable entropy minimization method, called SAR, for further stabilizing TTA from two aspects: 1) removing partial noisy samples with large gradients, and 2) encouraging model weights to go to a flat minimum so that the model is robust to the remaining noisy samples. Promising results demonstrate that SAR performs more stably than prior methods and is computationally efficient under the above wild test scenarios.
https://openreview.net/pdf/4bf9a568654ef33fe83fe18f5e34b489be3ca06b.pdf
MocoSFL: enabling cross-client collaborative self-supervised learning
https://openreview.net/forum?id=2QGJXyMNoPz
https://openreview.net/forum?id=2QGJXyMNoPz
Jingtao Li,Lingjuan Lyu,Daisuke Iso,Chaitali Chakrabarti,Michael Spranger
ICLR 2023,Top 5%
Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Federated Learning (SFL) and Momentum Contrast (MoCo). In MocoSFL, the large backbone model is split into a small client-side model and a large server-side model, and only the small client-side model is processed locally on the client's local devices. MocoSFL has three key components: (i) vector concatenation, which enables the use of small batch sizes and reduces computation and memory requirements by orders of magnitude; (ii) feature sharing, which helps achieve high accuracy regardless of the quality and volume of local data; (iii) frequent synchronization, which helps achieve better non-IID performance because of smaller local model divergence. For a 1,000-client case with non-IID data (each client only has data from 2 random classes of CIFAR-10), MocoSFL can achieve over 84% accuracy with a ResNet-18 model. Next, we present the TAResSFL module, which significantly improves the resistance to privacy threats and the communication overhead with a small sacrifice in accuracy for a MocoSFL system. On a Raspberry Pi 4B device, the MocoSFL-based scheme requires less than 1MB of memory and less than 40MB of communication, and consumes less than 5W power. The code is available at https://github.com/SonyAI/MocoSFL.
https://openreview.net/pdf/e7d98a4942f9fa3e0236bec53218b97e0792f3ee.pdf
DaxBench: Benchmarking Deformable Object Manipulation with Differentiable Physics
https://openreview.net/forum?id=1NAzMofMnWl
https://openreview.net/forum?id=1NAzMofMnWl
Siwei Chen,Yiqing Xu,Cunjun Yu,Linfeng Li,Xiao Ma,Zhongwen Xu,David Hsu
ICLR 2023,Top 5%
Deformable object manipulation (DOM) is a long-standing challenge in robotics and has attracted significant interest recently. This paper presents DaXBench, a differentiable simulation framework for DOM. While existing work often focuses on a specific type of deformable objects, DaXBench supports fluid, rope, cloth ...; it provides a general-purpose benchmark to evaluate widely different DOM methods, including planning, imitation learning, and reinforcement learning. DaXBench combines recent advances in deformable object simulation with JAX, a high-performance computational framework. All DOM tasks in DaXBench are wrapped with the OpenAI Gym API for easy integration with DOM algorithms. We hope that DaXBench provides the research community with a comprehensive, standardized benchmark and a valuable tool to support the development and evaluation of new DOM methods. The code and video are available online.
https://openreview.net/pdf/3c5184bef72b67b8b06885038e921049f56dc94e.pdf
3D generation on ImageNet
https://openreview.net/forum?id=U2WjB9xxZ9q
https://openreview.net/forum?id=U2WjB9xxZ9q
Ivan Skorokhodov,Aliaksandr Siarohin,Yinghao Xu,Jian Ren,Hsin-Ying Lee,Peter Wonka,Sergey Tulyakov
ICLR 2023,Top 5%
All existing 3D-from-2D generators are designed for well-curated single-category datasets, where all the objects have (approximately) the same scale, 3D location, and orientation, and the camera always points to the center of the scene. This makes them inapplicable to diverse, in-the-wild datasets of non-alignable scenes rendered from arbitrary camera poses. In this work, we develop a 3D generator with Generic Priors (3DGP): a 3D synthesis framework with more general assumptions about the training data, and show that it scales to very challenging datasets, like ImageNet. Our model is based on three new ideas. First, we incorporate an inaccurate off-the-shelf depth estimator into 3D GAN training via a special depth adaptation module to handle the imprecision. Then, we create a flexible camera model and a regularization strategy for it to learn its distribution parameters during training. Finally, we extend the recent ideas of transferring knowledge from pretrained classifiers into GANs for patch-wise trained models by employing a simple distillation-based technique on top of the discriminator. It achieves more stable training than the existing methods and speeds up the convergence by at least 40%. We explore our model on four datasets: SDIP Dogs $256^2$, SDIP Elephants $256^2$, LSUN Horses $256^2$, and ImageNet $256^2$ and demonstrate that 3DGP outperforms the recent state-of-the-art in terms of both texture and geometry quality. Code and visualizations: https://snap-research.github.io/3dgp.
https://openreview.net/pdf/303cbc4bcfff52f24148569ddc61d7213ad090eb.pdf
Rethinking the Expressive Power of GNNs via Graph Biconnectivity
https://openreview.net/forum?id=r9hNv76KoT3
https://openreview.net/forum?id=r9hNv76KoT3
Bohang Zhang,Shengjie Luo,Liwei Wang,Di He
ICLR 2023,Top 5%
Designing expressive Graph Neural Networks (GNNs) is a central topic in learning graph-structured data. While numerous approaches have been proposed to improve GNNs with respect to the Weisfeiler-Lehman (WL) test, for most of them, there is still a lack of deep understanding of what additional power they can systematically and provably gain. In this paper, we take a fundamentally different perspective to study the expressive power of GNNs beyond the WL test. Specifically, we introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice. As biconnectivity can be easily calculated using simple algorithms that have linear computational costs, it is natural to expect that popular GNNs can learn it easily as well. However, after a thorough review of prior GNN architectures, we surprisingly find that most of them are not expressive for any of these metrics. The only exception is the ESAN framework (Bevilacqua et al., 2022), for which we give a theoretical justification of its power. We proceed to introduce a principled and more efficient approach, called the Generalized Distance Weisfeiler-Lehman (GD-WL), which is provably expressive for all biconnectivity metrics. Practically, we show GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability. A set of experiments on both synthetic and real datasets demonstrates that our approach can consistently outperform prior GNN architectures.
https://openreview.net/pdf/be0ebeff1b3c008481709874f052f374a1d68dec.pdf
Sparse Mixture-of-Experts are Domain Generalizable Learners
https://openreview.net/forum?id=RecZ9nB9Q4
https://openreview.net/forum?id=RecZ9nB9Q4
Bo Li,Yifei Shen,Jingkang Yang,Yezhen Wang,Jiawei Ren,Tong Che,Jun Zhang,Ziwei Liu
ICLR 2023,Top 5%
Human visual perception can easily generalize to out-of-distribution visual data, which is far beyond the capability of modern machine learning models. Domain generalization (DG) aims to close this gap, with existing DG methods mainly focusing on the loss function design. In this paper, we propose to explore an orthogonal direction, i.e., the design of the backbone architecture. It is motivated by an empirical finding that transformer-based models trained with empirical risk minimization (ERM) outperform CNN-based models employing state-of-the-art (SOTA) DG algorithms on multiple DG datasets. We develop a formal framework to characterize a network's robustness to distribution shifts by studying its architecture's alignment with the correlations in the dataset. This analysis guides us to propose a novel DG model built upon vision transformers, namely \emph{Generalizable Mixture-of-Experts (GMoE)}. Extensive experiments on DomainBed demonstrate that GMoE trained with ERM outperforms SOTA DG baselines by a large margin. Moreover, GMoE is complementary to existing DG methods and its performance is substantially improved when trained with DG algorithms.
https://openreview.net/pdf/7bdb46ea980861f27d1fc50dacde68ac444c5231.pdf
Token Merging: Your ViT But Faster
https://openreview.net/forum?id=JroZRaRw7Eu
https://openreview.net/forum?id=JroZRaRw7Eu
Daniel Bolya,Cheng-Yang Fu,Xiaoliang Dai,Peizhao Zhang,Christoph Feichtenhofer,Judy Hoffman
ICLR 2023,Top 5%
We introduce Token Merging (ToMe), a simple method to increase the throughput of existing ViT models without needing to train. ToMe gradually combines similar tokens in a transformer using a general and light-weight matching algorithm that is as fast as pruning while being more accurate. Off-the-shelf, ToMe can 2x the throughput of state-of-the-art ViT-L @ 512 and ViT-H @ 518 models on images and 2.2x the throughput of ViT-L on video with only a 0.2-0.3% accuracy drop in each case. ToMe can also easily be applied during training, in practice improving training speed by up to 2x for MAE fine-tuning on video. Training with ToMe further minimizes the accuracy drop, leading to 2x the throughput of ViT-B on audio for only a 0.4% mAP drop. Qualitatively, we find that ToMe merges object parts into one token, even over multiple frames of video. Overall, ToMe’s accuracy and speed are competitive with the state-of-the-art on images, video, and audio.
https://openreview.net/pdf/ef10c4387f0309b8f942d720fdb3ed5bc6ec5b30.pdf
Learnable Behavior Control: Breaking Atari Human World Records via Sample-Efficient Behavior Selection
https://openreview.net/forum?id=FeWvD0L_a4
https://openreview.net/forum?id=FeWvD0L_a4
Jiajun Fan,Yuzheng Zhuang,Yuecheng Liu,Jianye HAO,Bin Wang,Jiangcheng Zhu,Hao Wang,Shu-Tao Xia
ICLR 2023,Top 5%
The exploration problem is one of the main challenges in deep reinforcement learning (RL). Recent promising works tried to handle the problem with population-based methods, which collect samples with diverse behaviors derived from a population of different exploratory policies. Adaptive policy selection has been adopted for behavior control. However, the behavior selection space is largely limited by the predefined policy population, which further limits behavior diversity. In this paper, we propose a general framework called Learnable Behavioral Control (LBC) to address the limitation, which a) enables a significantly enlarged behavior selection space via formulating a hybrid behavior mapping from all policies; b) constructs a unified learnable process for behavior selection. We introduce LBC into distributed off-policy actor-critic methods and achieve behavior control via optimizing the selection of the behavior mappings with bandit-based meta-controllers. Our agents have achieved 10077.52% mean human normalized score and surpassed 24 human world records within 1B training frames in the Arcade Learning Environment, which demonstrates our significant state-of-the-art (SOTA) performance without degrading the sample efficiency.
https://openreview.net/pdf/6576875018fe482d865d62a571a8b8df3278b360.pdf
Image as Set of Points
https://openreview.net/forum?id=awnvqZja69
https://openreview.net/forum?id=awnvqZja69
Xu Ma,Yuqian Zhou,Huan Wang,Can Qin,Bin Sun,Chang Liu,Yun Fu
ICLR 2023,Top 5%
What is an image, and how to extract latent features? Convolutional Networks (ConvNets) consider an image as organized pixels in a rectangular shape and extract features via convolutional operations in a local region; Vision Transformers (ViTs) treat an image as a sequence of patches and extract features via an attention mechanism in a global range. In this work, we introduce a straightforward and promising paradigm for visual representation, called Context Clusters. Context Clusters (CoCs) view an image as a set of unorganized points and extract features via a simplified clustering algorithm. In detail, each point includes the raw feature (e.g., color) and positional information (e.g., coordinates), and a simplified clustering algorithm is employed to group and extract deep features hierarchically. Our CoCs are convolution- and attention-free, relying only on a clustering algorithm for spatial interaction. Owing to the simple design, we show that CoCs endow gratifying interpretability via the visualization of the clustering process. Our CoCs aim at providing a new perspective on images and visual representation, which may enjoy broad applications in different domains and exhibit profound insights. Even though we are not targeting SOTA performance, CoCs still achieve comparable or even better performance than ConvNets or ViTs on several benchmarks.
https://openreview.net/pdf/839da9c992ee84a8fa5be183d987fa55966e54ff.pdf
Human-Guided Fair Classification for Natural Language Processing
https://openreview.net/forum?id=N_g8TT9Cy7f
https://openreview.net/forum?id=N_g8TT9Cy7f
Florian E. Dorner,Momchil Peychev,Nikola Konstantinov,Naman Goel,Elliott Ash,Martin Vechev
ICLR 2023,Top 25%
Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification. Finally, we show how limited amounts of human feedback can be leveraged to learn a similarity specification that can be used to train downstream fairness-aware models.
https://openreview.net/pdf/09b5568016529de9fe0127852626c933cb6af627.pdf
Humanly Certifying Superhuman Classifiers
https://openreview.net/forum?id=X5ZMzRYqUjB
https://openreview.net/forum?id=X5ZMzRYqUjB
Qiongkai Xu,Christian Walder,Chenchen Xu
ICLR 2023,Top 25%
This paper addresses a key question in current machine learning research: if we believe that a model's predictions might be better than those given by human experts, how can we (humans) verify these beliefs? In some cases, this ``superhuman'' performance is readily demonstrated; for example by defeating top-tier human players in traditional two-player games. On the other hand, it can be challenging to evaluate classification models that potentially surpass human performance. Indeed, human annotations are often treated as a ground truth, which implicitly assumes the superiority of the human over any models trained on human annotations. In reality, human annotators are subjective and can make mistakes. Evaluating the performance with respect to a genuine oracle is more objective and reliable, even when querying the oracle is more expensive or sometimes impossible. In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is $\textit{unobserved}$. We develop a theory for estimating the accuracy compared to the oracle, using only imperfect human annotations for reference. Our analysis provides an executable recipe for detecting and certifying superhuman performance in this setting, which we believe will assist in understanding the state of current research on classification. We validate the convergence of the bounds and the assumptions of our theory on carefully designed toy experiments with known oracles. Moreover, we demonstrate the utility of our theory by meta-analyzing large-scale natural language processing tasks, for which an oracle does not exist, and show that under our mild assumptions a number of models from recent years have already achieved superhuman performance with high probability---suggesting that our new oracle based performance evaluation metrics are overdue as an alternative to the widely used accuracy metrics that are naively based on imperfect human annotations.
https://openreview.net/pdf/cd3013d0326b50c5c63ae8604d438ed46e8c664c.pdf
Few-Shot Domain Adaptation For End-to-End Communication
https://openreview.net/forum?id=4F1gvduDeL
https://openreview.net/forum?id=4F1gvduDeL
Jayaram Raghuram,Yijing Zeng,Dolores Garcia,Rafael Ruiz,Somesh Jha,Joerg Widmer,Suman Banerjee
ICLR 2023,Top 25%
The problem of end-to-end learning of a communication system using an autoencoder -- consisting of an encoder, channel, and decoder modeled using neural networks -- has recently been shown to be an effective approach. A challenge faced in the practical adoption of this learning approach is that under changing channel conditions (e.g. a wireless link), it requires frequent retraining of the autoencoder in order to maintain a low decoding error rate. Since retraining is both time consuming and requires a large number of samples, it becomes impractical when the channel distribution is changing quickly. We propose to address this problem using a fast and sample-efficient (few-shot) domain adaptation method that does not change the encoder and decoder networks. Different from conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution using only a small labeled dataset, and no unlabeled data. We focus on a generative channel model based on the Gaussian mixture density network (MDN), and propose a regularized, parameter-efficient adaptation of the MDN using a set of affine transformations. The learned affine transformations are then used to design an optimal transformation at the decoder input to compensate for the distribution shift, and effectively present to the decoder inputs close to the source distribution. Experiments on many simulated distribution changes common to the wireless setting, and a real mmWave FPGA testbed demonstrate the effectiveness of our method at adaptation using very few target domain samples~\footnote{Code for our work: \url{https://github.com/jayaram-r/domain-adaptation-autoencoder}}.
https://openreview.net/pdf/502da8335c25f515d1b0a7b57057ac446ce9f67b.pdf
Learning a Data-Driven Policy Network for Pre-Training Automated Feature Engineering
https://openreview.net/forum?id=688hNNMigVX
https://openreview.net/forum?id=688hNNMigVX
Liyao Li,Haobo Wang,Liangyu Zha,Qingyi Huang,Sai Wu,Gang Chen,Junbo Zhao
ICLR 2023,Top 25%
Feature engineering is widely acknowledged to be pivotal in tabular data analysis and prediction. Automated feature engineering (AutoFE) emerged to automate this process, conventionally managed by experienced data scientists and engineers. In this area, most — if not all — prior work adopted an identical framework from the neural architecture search (NAS) method. While feasible, we posit that the NAS framework very much contradicts the way human experts cope with the data, since the inherent Markov decision process (MDP) setup differs. We point out that its data-unobserved setup consequentially results in an inability to generalize across different datasets as well as high computational cost. This paper proposes a novel AutoFE framework, Feature Set Data-Driven Search (FETCH), a pipeline mainly for feature generation and selection. Notably, FETCH is built on a brand-new data-driven MDP setup using the tabular dataset as the state fed into the policy network. Further, we posit that the crucial merit of FETCH is its transferability, where the yielded policy network trained on a variety of datasets is indeed capable of enacting feature engineering on unseen data, without requiring additional exploration. To the best of our knowledge, this is a pioneering attempt to build a tabular data pre-training paradigm via AutoFE. Extensive experiments show that FETCH systematically surpasses the current state-of-the-art AutoFE methods and validates the transferability of AutoFE pre-training.
https://openreview.net/pdf/1c15c68dc3b8354cfb9326758f23b4ffaddbca2d.pdf
Learning Group Importance using the Differentiable Hypergeometric Distribution
https://openreview.net/forum?id=75O7S_L4oY
https://openreview.net/forum?id=75O7S_L4oY
Thomas M. Sutter,Laura Manduchi,Alain Ryser,Julia E Vogt
ICLR 2023,Top 25%
Partitioning a set of elements into subsets of a priori unknown sizes is essential in many applications. These subset sizes are rarely explicitly learned - be it the cluster sizes in clustering applications or the number of shared versus independent generative latent factors in weakly-supervised learning. Probability distributions over correct combinations of subset sizes are non-differentiable due to hard constraints, which prohibit gradient-based optimization. In this work, we propose the differentiable hypergeometric distribution. The hypergeometric distribution models the probability of different group sizes based on their relative importance. We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering. In both applications, we outperform previous approaches, which rely on suboptimal heuristics to model the unknown size of groups.
https://openreview.net/pdf/eaa362d272c28b62b383ba46f668c0058f49115c.pdf
Concept-level Debugging of Part-Prototype Networks
https://openreview.net/forum?id=oiwXWPDTyNk
https://openreview.net/forum?id=oiwXWPDTyNk
Andrea Bontempelli,Stefano Teso,Katya Tentori,Fausto Giunchiglia,Andrea Passerini
ICLR 2023,Top 25%
Part-prototype Networks (ProtoPNets) are concept-based classifiers designed to achieve the same performance as black-box models without compromising transparency. ProtoPNets compute predictions based on similarity to class-specific part-prototypes learned to recognize parts of training examples, making it easy to faithfully determine what examples are responsible for any target prediction and why. However, like other models, they are prone to picking up confounders and shortcuts from the data, thus suffering from compromised prediction accuracy and limited generalization. We propose ProtoPDebug, an effective concept-level debugger for ProtoPNets in which a human supervisor, guided by the model’s explanations, supplies feedback in the form of what part-prototypes must be forgotten or kept, and the model is fine-tuned to align with this supervision. Our experimental evaluation shows that ProtoPDebug outperforms state-of-the-art debuggers for a fraction of the annotation cost. An online experiment with laypeople confirms the simplicity of the feedback requested of users and the effectiveness of the collected feedback for learning confounder-free part-prototypes. ProtoPDebug is a promising tool for trustworthy interactive learning in critical applications, as suggested by a preliminary evaluation on a medical decision making task.
https://openreview.net/pdf/c62dc701dcd52c5bdceeac7478072e161f7d982d.pdf
Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery
https://openreview.net/forum?id=6BHlZgyPOZY
https://openreview.net/forum?id=6BHlZgyPOZY
Felix Chalumeau,Raphael Boige,Bryan Lim,Valentin Macé,Maxime Allard,Arthur Flajolet,Antoine Cully,Thomas PIERROT
ICLR 2023,Top 25%
Deep Reinforcement Learning (RL) has emerged as a powerful paradigm for training neural policies to solve complex control tasks. However, these policies tend to be overfit to the exact specifications of the task and environment they were trained on, and thus do not perform well when conditions deviate slightly or when composed hierarchically to solve even more complex tasks. Recent work has shown that training a mixture of policies, as opposed to a single one, that are driven to explore different regions of the state-action space can address this shortcoming by generating a diverse set of behaviors, referred to as skills, that can be collectively used to great effect in adaptation tasks or for hierarchical planning. This is typically realized by including a diversity term - often derived from information theory - in the objective function optimized by RL. However these approaches often require careful hyperparameter tuning to be effective. In this work, we demonstrate that less widely-used neuroevolution methods, specifically Quality Diversity (QD), are a competitive alternative to information-theory-augmented RL for skill discovery. Through an extensive empirical evaluation comparing eight state-of-the-art algorithms (four flagship algorithms from each line of work) on the basis of (i) metrics directly evaluating the skills' diversity, (ii) the skills' performance on adaptation tasks, and (iii) the skills' performance when used as primitives for hierarchical planning; QD methods are found to provide equal, and sometimes improved, performance whilst being less sensitive to hyperparameters and more scalable. As no single method is found to provide near-optimal performance across all environments, there is a rich scope for further research which we support by proposing future directions and providing optimized open-source implementations.
https://openreview.net/pdf/1c63093c2dc46ae51a5d9ec802a0d85f3455069d.pdf
Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data
https://openreview.net/forum?id=JpbLyEI5EwW
https://openreview.net/forum?id=JpbLyEI5EwW
Spencer Frei,Gal Vardi,Peter Bartlett,Nathan Srebro,Wei Hu
ICLR 2023,Top 25%
The implicit biases of gradient-based optimization algorithms are conjectured to be a major factor in the success of modern deep learning. In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations when the training data are nearly-orthogonal, a common property of high-dimensional data. For gradient flow, we leverage recent work on the implicit bias for homogeneous neural networks to show that asymptotically, gradient flow produces a neural network with rank at most two. Moreover, this network is an $\ell_2$-max-margin solution (in parameter space), and has a linear decision boundary that corresponds to an approximate-max-margin linear predictor. For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training. We provide experiments which suggest that a small initialization scale is important for finding low-rank neural networks with gradient descent.
https://openreview.net/pdf/aa62c3225873e9b019b0e053bf4f2ab35a42de9c.pdf
Guarded Policy Optimization with Imperfect Online Demonstrations
https://openreview.net/forum?id=O5rKg7IRQIO
https://openreview.net/forum?id=O5rKg7IRQIO
Zhenghai Xue,Zhenghao Peng,Quanyi Li,Zhihan Liu,Bolei Zhou
ICLR 2023,Top 25%
The Teacher-Student Framework (TSF) is a reinforcement learning setting where a teacher agent guards the training of a student agent by intervening and providing online demonstrations. Assuming the teacher policy is optimal, it has perfect timing and capability to intervene in the learning process of the student agent, providing a safety guarantee and exploration guidance. Nevertheless, in many real-world settings it is expensive or even impossible to obtain a well-performing teacher policy. In this work, we relax the assumption of a well-performing teacher and develop a new method that can incorporate arbitrary teacher policies with modest or inferior performance. We instantiate an Off-Policy Reinforcement Learning algorithm, termed Teacher-Student Shared Control (TS2C), which incorporates teacher intervention based on trajectory-based value estimation. Theoretical analysis validates that the proposed TS2C algorithm attains efficient exploration and a substantial safety guarantee without being affected by the teacher's own performance. Experiments on various continuous control tasks show that our method can exploit teacher policies at different performance levels while maintaining a low training cost. Moreover, the student policy surpasses the imperfect teacher policy in terms of higher accumulated reward in held-out testing environments. Code is available at https://metadriverse.github.io/TS2C.
https://openreview.net/pdf/e19dee281e43ab70ef8f8640d6ccb689bed45bd8.pdf
Learning with Logical Constraints but without Shortcut Satisfaction
https://openreview.net/forum?id=M2unceRvqhh
https://openreview.net/forum?id=M2unceRvqhh
Zenan Li,Zehua Liu,Yuan Yao,Jingwei Xu,Taolue Chen,Xiaoxing Ma,Jian L\"{u}
ICLR 2023,Top 25%
Recent studies have started to explore the integration of logical knowledge into deep learning via encoding logical constraints as an additional loss function. However, existing approaches tend to vacuously satisfy logical constraints through shortcuts, failing to fully exploit the knowledge. In this paper, we present a new framework for learning with logical constraints. Specifically, we address the shortcut satisfaction issue by introducing dual variables for logical connectives, encoding how the constraint is satisfied. We further propose a variational framework where the encoded logical constraint is expressed as a distributional loss that is compatible with the model's original training loss. The theoretical analysis shows that the proposed approach bears some nice properties, and the experimental evaluations demonstrate its superior performance in both model generalizability and constraint satisfaction.
https://openreview.net/pdf/172ef390502d417f43730d591512cda9247cb5fa.pdf

ICLR 2023 (International Conference on Learning Representations) Accepted Paper Meta Info Dataset

This dataset was collected from the ICLR 2023 OpenReview website (https://openreview.net/group?id=ICLR.cc/2023/Conference#tab-accept-oral) as well as the arxiv website DeepNLP paper arxiv (http://www.deepnlp.org/content/paper/iclr2023). Researchers interested in analyzing ICLR 2023 accepted papers and potential trends can use the cleaned-up JSON files directly. Each row contains the meta information of one paper from the ICLR 2023 conference. To explore more AI & Robotics papers (NeurIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to navigate the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine to find deployed AI apps and agents (http://www.deepnlp.org/search/agent) in your domain.

Meta Information of Json File

{
    "title": "Encoding Recurrence into Transformers",
    "url": "https://openreview.net/forum?id=7YfHla7IxBJ",
    "detail_url": "https://openreview.net/forum?id=7YfHla7IxBJ",
    "authors": "Feiqing Huang,Kexin Lu,Yuxi CAI,Zhen Qin,Yanwen Fang,Guangjian Tian,Guodong Li",
    "tags": "ICLR 2023,Top 5%",
    "abstract": "This paper novelly breaks down with ignorable loss an RNN layer into a sequence of simple RNNs, each of which can be further rewritten into a lightweight positional encoding matrix of a self-attention, named the Recurrence Encoding Matrix (REM). Thus, recurrent dynamics introduced by the RNN layer can be encapsulated into the positional encodings of a multihead self-attention, and this makes it possible to seamlessly incorporate these recurrent dynamics into a Transformer, leading to a new module, Self-Attention with Recurrence (RSA). The proposed module can leverage the recurrent inductive bias of REMs to achieve a better sample efficiency than its corresponding baseline Transformer, while the self-attention is used to model the remaining non-recurrent signals. The relative proportions of these two components are controlled by a data-driven gated mechanism, and the effectiveness of RSA modules are demonstrated by four sequential learning tasks.",
    "pdf": "https://openreview.net/pdf/70636775789b51f219cb29634cc7c794cc86577b.pdf"
}
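
The records can be loaded with the standard library alone, assuming the cleaned-up file is in JSON-lines format (one object per line, with the fields shown above). The filename below is a placeholder; substitute the actual file name from the dataset. A minimal sketch:

```python
import json
from collections import Counter

# Placeholder filename; replace with the actual JSON file from this dataset.
PATH = "iclr2023_accepted_papers.json"

def load_papers(path):
    """Load paper records from a JSON-lines file, one JSON object per line."""
    papers = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                papers.append(json.loads(line))
    return papers

def tag_counts(papers):
    """Count papers per acceptance tag, e.g. 'ICLR 2023,Top 5%'."""
    return Counter(p.get("tags", "") for p in papers)
```

For example, `tag_counts(load_papers(PATH))` gives the distribution of papers over the Top 5% / Top 25% / poster tiers, which is a natural starting point for the trend analysis mentioned above.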

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
