title | url | detail_url | authors | tags | abstract | pdf
---|---|---|---|---|---|---|
Language models are multilingual chain-of-thought reasoners | https://openreview.net/forum?id=fR3wGCk-IXp | https://openreview.net/forum?id=fR3wGCk-IXp | Freda Shi,Mirac Suzgun,Markus Freitag,Xuezhi Wang,Suraj Srivats,Soroush Vosoughi,Hyung Won Chung,Yi Tay,Sebastian Ruder,Denny Zhou,Dipanjan Das,Jason Wei | ICLR 2023,Poster | We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at AnonymousLink and in the supplementary material. | https://openreview.net/pdf/972d6eaf77336eece16b7ec5bdb9565b06423b8a.pdf |
Recitation-Augmented Language Models | https://openreview.net/forum?id=-cqvvvb-NkI | https://openreview.net/forum?id=-cqvvvb-NkI | Zhiqing Sun,Xuezhi Wang,Yi Tay,Yiming Yang,Denny Zhou | ICLR 2023,Poster | We propose a new paradigm to help Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus, called RECITation-augmented gEneration (RECITE). Different from retrieval-augmented language models that retrieve relevant documents before generating the outputs, given an input, RECITE first recites one or several relevant passages from LLMs’ own memory via sampling, and then produces the final answers. We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks. Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance in various closed-book question answering (CBQA) tasks. In experiments, we verify the effectiveness of RECITE on three pre-trained models (In-house LM, UL2, and OPT) and three CBQA tasks (Natural Questions, TriviaQA, and HotpotQA). Our code is available at "https://github.com/Edward-Sun/RECITE". | https://openreview.net/pdf/693f49dd101c5c13e74972b49546fdff73d91ac4.pdf |
KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals | https://openreview.net/forum?id=p0JSSa1AuV | https://openreview.net/forum?id=p0JSSa1AuV | Sandeep Silwal,Sara Ahmadian,Andrew Nystrom,Andrew McCallum,Deepak Ramachandran,Seyed Mehran Kazemi | ICLR 2023,Poster | The unprecedented rate at which the sizes of machine learning (ML) models are growing necessitates novel approaches to enable efficient and scalable solutions. We contribute to this line of work by studying a novel version of the Budgeted Correlation Clustering problem (BCC) where along with a limited number of queries to an expensive oracle for node similarities (e.g. a large ML model), we have unlimited access to a cheaper but less accurate second oracle. Our formulation is inspired by many practical scenarios where coarse approximations of the expensive similarity metric can be efficiently obtained via weaker models. We develop a theoretically motivated algorithm in this setting that leverages the cheap oracle to judiciously query the strong oracle while maintaining high clustering quality. We empirically demonstrate gains in query minimization and clustering metrics on a variety of datasets with diverse strong and cheap oracles. Most notably, we demonstrate a practical application in text clustering based on expensive cross-attention language models by showing that cheaper (but weaker) embedding-based models can be leveraged to substantially reduce the number of inference calls to the former. | https://openreview.net/pdf/405e4387799886dd6afb53d4b2fb1eaf9fea1ae8.pdf |
Reward Design with Language Models | https://openreview.net/forum?id=10uNUgI5Kl | https://openreview.net/forum?id=10uNUgI5Kl | Minae Kwon,Sang Michael Xie,Kalesha Bullard,Dorsa Sadigh | ICLR 2023,Poster | Reward design in reinforcement learning (RL) is challenging since specifying human notions of desired behavior may be difficult via reward functions or require many expert demonstrations. Can we instead cheaply design rewards using a natural language interface? This paper explores how to simplify reward design by using a large language model (LLM) such as GPT-3 as a proxy reward function, where the user provides a textual prompt containing a few examples (few-shot) or a description (zero-shot) of desired behavior. Our approach leverages this proxy reward function in an RL framework. Specifically, users specify a prompt once at the beginning of training. During training, the LLM evaluates an RL agent's behavior against the desired behavior described by the prompt and outputs a corresponding reward signal. The RL agent then uses this reward to update its behavior. We evaluate whether our approach can train agents aligned with user objectives in the Ultimatum Game, matrix games, and the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents trained with our framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via supervised learning. | https://openreview.net/pdf/696171827b35dfe4e639dfe0644bf0f279f84c75.pdf |
Calibrating the Rigged Lottery: Making All Tickets Reliable | https://openreview.net/forum?id=KdwnGErdT6 | https://openreview.net/forum?id=KdwnGErdT6 | Bowen Lei,Ruqi Zhang,Dongkuan Xu,Bani Mallick | ICLR 2023,Poster | Although sparse training has been successfully used in various deep learning tasks to save memory and reduce inference time, the reliability of the produced sparse models remains unexplored. Previous research has shown that deep neural networks tend to be over-confident, and we find that sparse training exacerbates this problem. Therefore, calibrating the sparse models is crucial for reliable prediction and decision making. In this paper, we propose a new sparse training method to produce sparse models with improved confidence calibration. In contrast to previous research that uses only one mask to control the sparse topology, our method utilizes two masks, including a deterministic mask and a random mask. The former efficiently searches and activates important weights by exploiting the magnitude of weights and gradients, while the latter brings better exploration and finds more appropriate weight values by random updates. Theoretically, we prove our method can be viewed as a hierarchical variational approximation of a probabilistic deep Gaussian process. Extensive experiments on multiple datasets, model architectures, and sparsities show that our method can reduce ECE values by up to 47.8\% and simultaneously maintain or even improve accuracy with only a slight increase in computational and storage burden. | https://openreview.net/pdf/7e2ec096c4cf8d171d97f1bc9ae39c2551137dd6.pdf |
A Statistical Framework for Personalized Federated Learning and Estimation: Theory, Algorithms, and Privacy | https://openreview.net/forum?id=FUiDMCr_W4o | https://openreview.net/forum?id=FUiDMCr_W4o | Kaan Ozkara,Antonious M. Girgis,Deepesh Data,Suhas Diggavi | ICLR 2023,Poster | A distinguishing characteristic of federated learning is that the (local) client data could have statistical heterogeneity. This heterogeneity has motivated the design of personalized learning, where individual (personalized) models are trained, through collaboration. There have been various personalization methods proposed in the literature, with seemingly very different forms and methods ranging from use of a single global model for local regularization and model interpolation, to use of multiple global models for personalized clustering, etc. In this work, we begin with a statistical framework that unifies several different algorithms as well as suggests new ones. We apply our framework to personalized estimation, and connect it to the classical empirical Bayes methodology. We develop novel private personalized estimation under this framework. We then use our statistical framework to propose new personalized learning algorithms, including AdaPeD based on information-geometry regularization, which numerically outperforms several known algorithms. We develop privacy for personalized learning methods with guarantees for user-level privacy and composition. We numerically evaluate the performance as well as the privacy for both the estimation and learning problems, demonstrating the advantages of our proposed methods. | https://openreview.net/pdf/1addd86b6341672a0897ffdf066611ae89dac3bc.pdf |
Subsampling in Large Graphs Using Ricci Curvature | https://openreview.net/forum?id=w9WUQkBvpI | https://openreview.net/forum?id=w9WUQkBvpI | Shushan Wu,Huimin Cheng,Jiazhang Cai,Ping Ma,Wenxuan Zhong | ICLR 2023,Poster | In the past decades, many large graphs with millions of nodes have been collected/constructed. The high computational cost and significant visualization difficulty hinder the analysis of large graphs. To overcome the difficulties, researchers have developed many graph subsampling approaches to provide a rough sketch that preserves global properties. By selecting representative nodes, these graph subsampling methods can help researchers estimate the graph statistics, e.g., the number of communities, of the large graph from the subsample. However, the available subsampling methods, e.g., degree node sampler and random walk sampler, tend to leave out minority communities because nodes with high degrees are more likely to be sampled. To overcome the shortcomings of the existing methods, we are motivated to apply the community information hidden in the graph to the subsampling method. Though the community structure is unavailable, community structure information can be obtained by applying geometric methods to a graph. An analog of the Ricci curvature on manifolds, the Ollivier-Ricci (OR) curvature, is defined for graphs. Based on asymptotic results about the OR curvature of within-community and between-community edges, we propose the Ollivier-Ricci curvature Gradient-based subsampling (ORG-sub) algorithm. The proposed ORG-sub algorithm makes two main contributions: first, ORG-sub provides a rigorous theoretical guarantee that the probability of ORG-sub taking all communities into the final subgraph converges to one; second, extensive experiments on synthetic and benchmark datasets demonstrate the advantages of our algorithm. | https://openreview.net/pdf/028be470cd7466da953f6e016e46cf5dfb5947b5.pdf |
Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization | https://openreview.net/forum?id=dNqxZgyjcYA | https://openreview.net/forum?id=dNqxZgyjcYA | Jihwan Jeong,Xiaoyu Wang,Michael Gimelfarb,Hyunwoo Kim,Baher abdulhai,Scott Sanner | ICLR 2023,Poster | Offline reinforcement learning (RL) addresses the problem of learning a performant policy from a fixed batch of data collected by following some behavior policy. Model-based approaches are particularly appealing in the offline setting since they can extract more learning signals from the logged dataset by learning a model of the environment. However, the performance of existing model-based approaches falls short of model-free counterparts, due to the compounding of estimation errors in the learned model. Driven by this observation, we argue that it is critical for a model-based method to understand when to trust the model and when to rely on model-free estimates, and how to act conservatively w.r.t. both. To this end, we derive an elegant and simple methodology called conservative Bayesian model-based value expansion for offline policy optimization (CBOP), that trades off model-free and model-based estimates during the policy evaluation step according to their epistemic uncertainties, and facilitates conservatism by taking a lower bound on the Bayesian posterior value estimate. On the standard D4RL continuous control tasks, we find that our method significantly outperforms previous model-based approaches: e.g., MOPO by $116.4\%$, MOReL by $23.2\%$ and COMBO by $23.7\%$. Further, CBOP achieves state-of-the-art performance on $11$ out of $18$ benchmark datasets while performing on par on the remaining datasets. | https://openreview.net/pdf/893cd27f203e1c4d6cd462eea1596210361ea469.pdf |
Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation | https://openreview.net/forum?id=PYbe4MoHf32 | https://openreview.net/forum?id=PYbe4MoHf32 | Linfeng Zhao,Huazhe Xu,Lawson L.S. Wong | ICLR 2023,Poster | Differentiable planning promises end-to-end differentiability and adaptivity. However, an issue prevents it from scaling up to larger-scale problems: differentiable planners need to differentiate through forward iteration layers to compute gradients, which couples forward computation and backpropagation and requires balancing forward planner performance against the computational cost of the backward pass. To alleviate this issue, we propose to differentiate through the Bellman fixed-point equation to decouple forward and backward passes for Value Iteration Network and its variants, which enables constant backward cost (in planning horizon) and flexible forward budget and helps scale up to large tasks. We study the convergence stability, scalability, and efficiency of the proposed implicit version of VIN and its variants and demonstrate their superiorities on a range of planning tasks: 2D navigation, visual navigation, and 2-DOF manipulation in configuration space and workspace. | https://openreview.net/pdf/c9ba511ff253534e8b5c8b381259eb8b04b6406a.pdf |
Score-based Continuous-time Discrete Diffusion Models | https://openreview.net/forum?id=BYWWwSY2G5s | https://openreview.net/forum?id=BYWWwSY2G5s | Haoran Sun,Lijun Yu,Bo Dai,Dale Schuurmans,Hanjun Dai | ICLR 2023,Poster | Score-based modeling through stochastic differential equations (SDEs) has provided a new perspective on diffusion models, and demonstrated superior performance on continuous data. However, the gradient of the log-likelihood function, i.e., the score function, is not properly defined for discrete spaces. This makes it non-trivial to adapt SDE with score functions to categorical data. In this paper, we extend diffusion models to discrete variables by introducing a stochastic jump process where the reverse process denoises via a continuous-time Markov chain. This formulation admits an analytical simulation during backward sampling. To learn the reverse process, we extend score matching to general categorical data, and show that an unbiased estimator can be obtained via simple matching of the conditional marginal distributions. We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks. | https://openreview.net/pdf/d96e90543a3d04a3e6248eacc3088b15b0907078.pdf |
Decision Transformer under Random Frame Dropping | https://openreview.net/forum?id=NmZXv4467ai | https://openreview.net/forum?id=NmZXv4467ai | Kaizhe Hu,Ray Chen Zheng,Yang Gao,Huazhe Xu | ICLR 2023,Poster | Controlling agents remotely with deep reinforcement learning (DRL) in the real world is yet to come. One crucial stepping stone is to devise RL algorithms that are robust in the face of dropped information from corrupted communication or malfunctioning sensors. Typical RL methods usually require considerable online interaction data that are costly and unsafe to collect in the real world. Furthermore, when applied to frame dropping scenarios, they perform unsatisfactorily even with moderate drop rates. To address these issues, we propose Decision Transformer under Random Frame Dropping (DeFog), an offline RL algorithm that enables agents to act robustly in frame dropping scenarios without online interaction. DeFog first randomly masks out data in the offline datasets and explicitly adds the time span of frame dropping as inputs. After that, a finetuning stage on the same offline dataset with a higher mask rate would further boost the performance. Empirical results show that DeFog outperforms strong baselines under severe frame drop rates like 90\%, while maintaining similar returns under non-frame-dropping conditions in the regular MuJoCo control benchmarks and the Atari environments. Our approach offers a robust and deployable solution for controlling agents in real-world environments with limited or unreliable data. | https://openreview.net/pdf/40b5d8cc3e6627b100c8764c3b83c1a43756e10f.pdf |
Adversarial Imitation Learning with Preferences | https://openreview.net/forum?id=bhfp5GlDtGe | https://openreview.net/forum?id=bhfp5GlDtGe | Aleksandar Taranovic,Andras Gabor Kupcsik,Niklas Freymuth,Gerhard Neumann | ICLR 2023,Poster | Designing an accurate and explainable reward function for many Reinforcement Learning tasks is a cumbersome and tedious process. Instead, learning policies directly from the feedback of human teachers naturally integrates human domain knowledge into the policy optimization process. However, different feedback modalities, such as demonstrations and preferences, provide distinct benefits and disadvantages. For example, demonstrations convey a lot of information about the task but are often hard or costly to obtain from real experts, while preferences typically contain less information but are in most cases cheap to generate. Moreover, existing methods centered around human feedback mostly focus on a single teaching modality, causing them to miss out on important training data while making them less intuitive to use. In this paper, we propose a novel method for policy learning that incorporates two different feedback types, namely demonstrations and preferences. To this end, we make use of the connection between discriminator training and density ratio estimation to incorporate preferences into the popular Adversarial Imitation Learning paradigm. This insight allows us to express loss functions over both demonstrations and preferences in a unified framework. Besides expert demonstrations, we are also able to learn from imperfect ones and combine them with preferences to achieve improved task performance. We experimentally validate the effectiveness of combining both preferences and demonstrations on common benchmarks and also show that our method can efficiently learn challenging robot manipulation tasks. | https://openreview.net/pdf/d28eb645545687a36d4398fab2466f674e553bb5.pdf |
Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function | https://openreview.net/forum?id=hNyJBk3CwR | https://openreview.net/forum?id=hNyJBk3CwR | Ruijie Zheng,Xiyao Wang,Huazhe Xu,Furong Huang | ICLR 2023,Poster | Probabilistic dynamics model ensemble is widely used in existing model-based reinforcement learning methods as it outperforms a single dynamics model in both asymptotic performance and sample efficiency. In this paper, we provide both practical and theoretical insights on the empirical success of the probabilistic dynamics model ensemble through the lens of Lipschitz continuity. We find that, for a value function, the stronger the Lipschitz condition is, the smaller the gap between the true dynamics- and learned dynamics-induced Bellman operators is, thus enabling the converged value function to be closer to the optimal value function. Hence, we hypothesize that the key functionality of the probabilistic dynamics model ensemble is to regularize the Lipschitz condition of the value function using generated samples. To validate this hypothesis, we devise two practical robust training mechanisms through computing the adversarial noise and regularizing the value network’s spectral norm to directly regularize the Lipschitz condition of the value functions. Empirical results show that combined with our mechanisms, model-based RL algorithms with a single dynamics model outperform those with an ensemble of probabilistic dynamics models. These findings not only support the theoretical insight, but also provide a practical solution for developing computationally efficient model-based RL algorithms. | https://openreview.net/pdf/4569591fe6e09e958df0e03906a9bb0411ab14e0.pdf |
Synthetic Data Generation of Many-to-Many Datasets via Random Graph Generation | https://openreview.net/forum?id=Q120_4COf-K | https://openreview.net/forum?id=Q120_4COf-K | Kai Xu,Georgi Ganev,Emile Joubert,Rees Davison,Olivier Van Acker,Luke Robinson | ICLR 2023,Poster | Synthetic data generation (SDG) has become a popular approach to release private datasets. In SDG, a generative model is fitted on the private real data, and samples drawn from the model are released as the protected synthetic data. While real-world datasets usually consist of multiple tables with potential many-to-many relationships (i.e., many-to-many datasets), recent research in SDG mostly focuses on modeling tables independently or only considers generating datasets with special cases of many-to-many relationships such as one-to-many. In this paper, we first study challenges of building faithful generative models for many-to-many datasets, identifying limitations of existing methods. We then present a novel factorization for many-to-many generative models, which leads to a scalable generation framework by combining recent results from random graph theory and representation learning. Finally, we extend the framework to establish the notion of $(\epsilon,\delta)$-differential privacy. Through a real-world dataset, we demonstrate that our method can generate synthetic datasets while preserving information within and across tables better than its closest competitor. | https://openreview.net/pdf/be7956b2e543e1b8e0ec80abee0a911606aee3cb.pdf |
Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets | https://openreview.net/forum?id=k9CF4h3muD | https://openreview.net/forum?id=k9CF4h3muD | Edo Cohen-Karlik,Itamar Menuhin-Gruman,Raja Giryes,Nadav Cohen,Amir Globerson | ICLR 2023,Poster | Overparameterization in deep learning refers to settings where a trained Neural Network (NN) has representational capacity to fit the training data in many ways, some of which generalize well, while others do not. In the case of Recurrent Neural Networks (RNNs) there exists an additional layer of overparameterization, in the sense that a model may exhibit many solutions that generalize well for sequence lengths seen in training, some of which extrapolate to longer sequences, while others do not. Numerous works studied the tendency of Gradient Descent (GD) to fit overparameterized NNs with solutions that generalize well. On the other hand, its tendency to fit overparameterized RNNs with solutions that extrapolate has been discovered only lately, and is far less understood. In this paper, we analyze the extrapolation properties of GD when applied to overparameterized linear RNNs. In contrast to recent arguments suggesting an implicit bias towards short-term memory, we provide theoretical evidence for learning low dimensional state spaces, which can also model long-term memory. Our result relies on a dynamical characterization showing that GD (with small step size and near zero initialization) strives to maintain a certain form of balancedness, as well as tools developed in the context of the moment problem from statistics (recovery of discrete probability distribution from its moments). Experiments corroborate our theory, demonstrating extrapolation via learning low dimensional state spaces with both linear and non-linear RNNs. | https://openreview.net/pdf/0959259bb8e8fb2220da502d1e842af14e4d80d5.pdf |
Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules | https://openreview.net/forum?id=ddad0PNUvV | https://openreview.net/forum?id=ddad0PNUvV | Kazuki Irie,Jürgen Schmidhuber | ICLR 2023,Poster | Work on fast weight programmers has demonstrated the effectiveness of key/value outer product-based learning rules for sequentially generating a weight matrix (WM) of a neural net (NN) by another NN or itself. However, the weight generation steps are typically not visually interpretable by humans, because the contents stored in the WM of an NN are not. Here we apply the same principle to generate natural images. The resulting fast weight painters (FPAs) learn to execute sequences of delta learning rules to sequentially generate images as sums of outer products of self-invented keys and values, one rank at a time, as if each image was a WM of an NN. We train our FPAs in the generative adversarial networks framework, and evaluate on various image datasets. We show how these generic learning rules can generate images with respectable visual quality without any explicit inductive bias for images. While the performance largely lags behind that of specialised state-of-the-art image generators, our approach allows for visualising how synaptic learning rules iteratively produce complex connection patterns, yielding human-interpretable meaningful images. Finally, we also show that an additional convolutional U-Net (now popular in diffusion models) at the output of an FPA can learn one-step "denoising" of FPA-generated images to enhance their quality. Our code is public. | https://openreview.net/pdf/aa76486a805137c091d1ab926caf23997d179f30.pdf |
Why (and When) does Local SGD Generalize Better than SGD? | https://openreview.net/forum?id=svCcui6Drl | https://openreview.net/forum?id=svCcui6Drl | Xinran Gu,Kaifeng Lyu,Longbo Huang,Sanjeev Arora | ICLR 2023,Poster | Local SGD is a communication-efficient variant of SGD for large-scale training, where multiple GPUs perform SGD independently and average the model parameters periodically. It has been recently observed that Local SGD can not only achieve the design goal of reducing the communication overhead but also lead to higher test accuracy than the corresponding SGD baseline (Lin et al., 2020b), though the training regimes for this to happen are still in debate (Ortiz et al., 2021). This paper aims to understand why (and when) Local SGD generalizes better based on Stochastic Differential Equation (SDE) approximation. The main contributions of this paper include (i) the derivation of an SDE that captures the long-term behavior of Local SGD in the small learning rate regime, showing how noise drives the iterate to drift and diffuse after it has reached close to the manifold of local minima, (ii) a comparison between the SDEs of Local SGD and SGD, showing that Local SGD induces a stronger drift term that can result in a stronger effect of regularization, e.g., a faster reduction of sharpness, and (iii) empirical evidence validating that having a small learning rate and long enough training time enables the generalization improvement over SGD but removing either of the two conditions leads to no improvement. | https://openreview.net/pdf/8471404cf45ec9df071bd14f618b77114982c086.pdf |
Function-space regularized Rényi divergences | https://openreview.net/forum?id=89GT-S49mGd | https://openreview.net/forum?id=89GT-S49mGd | Jeremiah Birrell,Yannis Pantazis,Paul Dupuis,Luc Rey-Bellet,Markos Katsoulakis | ICLR 2023,Poster | We propose a new family of regularized Rényi divergences parametrized not only by the order $\alpha$ but also by a variational function space. These new objects are defined by taking the infimal convolution of the standard Rényi divergence with the integral probability metric (IPM) associated with the chosen function space. We derive a novel dual variational representation that can be used to construct numerically tractable divergence estimators. This representation avoids risk-sensitive terms and therefore exhibits lower variance, making it well-behaved when $\alpha>1$; this addresses a notable weakness of prior approaches. We prove several properties of these new divergences, showing that they interpolate between the classical Rényi divergences and IPMs. We also study the $\alpha\to\infty$ limit, which leads to a regularized worst-case-regret and a new variational representation in the classical case. Moreover, we show that the proposed regularized Rényi divergences inherit features from IPMs such as the ability to compare distributions that are not absolutely continuous, e.g., empirical measures and distributions with low-dimensional support. We present numerical results on both synthetic and real datasets, showing the utility of these new divergences in both estimation and GAN training applications; in particular, we demonstrate significantly reduced variance and improved training performance. | https://openreview.net/pdf/2e551aaf5e4f3d5a15272f0f78bcaf860e9cb8bd.pdf |
Analogy-Forming Transformers for Few-Shot 3D Parsing | https://openreview.net/forum?id=SRIQZTh0IK | https://openreview.net/forum?id=SRIQZTh0IK | Nikolaos Gkanatsios,Mayank Singh,Zhaoyuan Fang,Shubham Tulsiani,Katerina Fragkiadaki | ICLR 2023,Poster | We present Analogical Networks, a model that segments 3D object scenes with analogical reasoning: instead of mapping a scene to part segments directly, our model first retrieves related scenes from memory and their corresponding part structures, and then predicts analogous part structures in the input object 3D point cloud, via an end-to-end learnable modulation mechanism. By conditioning on more than one retrieved memory, compositions of structures are predicted that mix and match parts across the retrieved memories. One-shot, few-shot or many-shot learning are treated uniformly in Analogical Networks, by conditioning on the appropriate set of memories, whether taken from a single, few or many memory exemplars, and inferring analogous parses. We show Analogical Networks are competitive with state-of-the-art 3D segmentation transformers in many-shot settings and outperform them and existing paradigms of meta-learning and few-shot learning in few-shot scenarios. Our model successfully parses instances of novel object categories simply by expanding its memory, without any weight updates. | https://openreview.net/pdf/a6250a5fdb062e80a357db6137516f335e3606ec.pdf |
Fake It Until You Make It : Towards Accurate Near-Distribution Novelty Detection | https://openreview.net/forum?id=QWQM0ZwZdRS | https://openreview.net/forum?id=QWQM0ZwZdRS | Hossein Mirzaei,Mohammadreza Salehi,Sajjad Shahabi,Efstratios Gavves,Cees G. M. Snoek,Mohammad Sabokrou,Mohammad Hossein Rohban | ICLR 2023,Poster | We aim for image-based novelty detection. Despite considerable progress, existing models either fail or face a dramatic drop under the so-called "near-distribution" setup, where the differences between normal and anomalous samples are subtle. We first demonstrate existing methods could experience up to 20\% decrease in their AUCs in the near-distribution setting. Next, we propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data. Our model is then fine-tuned to distinguish such data from the normal samples. We make a quantitative as well as qualitative evaluation of this strategy, and compare the results with a variety of GAN-based models. Effectiveness of our method for both near-distribution and standard novelty detection is assessed through extensive experiments on datasets in diverse applications such as medical images, object classification, and quality control. This reveals that our method significantly improves upon existing models, and consistently decreases the gap between the near-distribution and standard novelty detection AUCs by a considerable amount. | https://openreview.net/pdf/adb38cfa18f4064baa8532ba96fd48c4ad2cf87a.pdf |
DySR: Adaptive Super-Resolution via Algorithm and System Co-design | https://openreview.net/forum?id=Pgtn4l6eKjv | https://openreview.net/forum?id=Pgtn4l6eKjv | Syed Zawad,Cheng Li,Zhewei Yao,Elton Zheng,Yuxiong He,Feng Yan | ICLR 2023,Poster | Super resolution (SR) is a promising approach for improving the quality of low resolution streaming services on mobile devices. On mobile devices, the available computing and memory resources change dynamically depending on other running applications. Due to the high computation and memory demands of SR models, it is essential to adapt the model according to available resources to harvest the best possible model performance while maintaining quality of service (QoS), such as meeting a minimum framerate and avoiding interruptions. Nevertheless, there is no SR model or machine learning system that supports adaptive SR, and enabling an adaptive SR model on mobile devices is challenging because adapting the model can cause a significant framerate drop or even service interruption. To address this challenge, we take an algorithm and system co-design approach and propose DySR that maintains QoS while maximizing the model performance. During the training stage, DySR employs an adaption-aware one-shot Neural Architecture Search to produce sub-graphs that share kernel operation weights for low model adaption overhead while striking a balance between performance and framerate. During the inference stage, an incremental model adaption method is developed for further reducing the model adaption overhead. We evaluate on a diverse set of hardware and datasets to show that DySR can generate models close to the Pareto frontier while maintaining a steady framerate throughput with a memory footprint around 40\% smaller than that of baseline methods. | https://openreview.net/pdf/e471fee5a1aaadc48573eb8e9adf4d3a9b0f4499.pdf |
Integrating Symmetry into Differentiable Planning with Steerable Convolutions | https://openreview.net/forum?id=n7CPzMPKQl | https://openreview.net/forum?id=n7CPzMPKQl | Linfeng Zhao,Xupeng Zhu,Lingzhi Kong,Robin Walters,Lawson L.S. Wong | ICLR 2023,Poster | We draw inspiration from equivariant convolution networks and model the path planning problem as a set of signals over grids. We demonstrate that value iteration can be treated as a linear equivariant operator, which is effectively a steerable convolution. Building upon Value Iteration Networks (VIN), we propose a new Symmetric Planning (SymPlan) framework that incorporates rotation and reflection symmetry using steerable convolution networks. We evaluate our approach on four tasks: 2D navigation, visual navigation, 2 degrees of freedom (2-DOF) configuration space manipulation, and 2-DOF workspace manipulation. Our experimental results show that our symmetric planning algorithms significantly improve training efficiency and generalization performance compared to non-equivariant baselines, including VINs and GPPN. | https://openreview.net/pdf/2cd0e4ba1de33d3aab278a9a2a5639ec64349cc4.pdf |
Causal Reasoning in the Presence of Latent Confounders via Neural ADMG Learning | https://openreview.net/forum?id=dcN0CaXQhT | https://openreview.net/forum?id=dcN0CaXQhT | Matthew Ashman,Chao Ma,Agrin Hilmkil,Joel Jennings,Cheng Zhang | ICLR 2023,Poster | Latent confounding has been a long-standing obstacle for causal reasoning from observational data. One popular approach is to model the data using acyclic directed mixed graphs (ADMGs), which describe ancestral relations between variables using directed and bidirected edges. However, existing methods using ADMGs are based on either linear functional assumptions or a discrete search that is complicated to use and lacks computational tractability for large datasets. In this work, we further extend the existing body of work and develop a novel gradient-based approach to learning an ADMG with nonlinear functional relations from observational data. We first show that the presence of latent confounding is identifiable under the assumptions of bow-free ADMGs with nonlinear additive noise models. With this insight, we propose a novel neural causal model based on autoregressive flows. This not only enables us to model complex causal relationships behind the data, but also estimate their functional relationships (hence treatment effects) simultaneously. We further validate our approach via experiments on both synthetic and real-world datasets, and demonstrate the competitive performance against relevant baselines. | https://openreview.net/pdf/149ce81bce210b81430db7d28cdb51750814141c.pdf |
$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games | https://openreview.net/forum?id=VWqiPBB_EM | https://openreview.net/forum?id=VWqiPBB_EM | Yuepeng Yang,Cong Ma | ICLR 2023,Poster | We prove that the optimistic-follow-the-regularized-leader (OFTRL) algorithm, together with smooth value updates, finds an $O(T^{-1})$ approximate Nash equilibrium in $T$ iterations for two-player zero-sum Markov games with full information. This improves the $\tilde{O}(T^{-5/6})$ convergence rate recently shown by Zhang et al. (2022). The refined analysis hinges on two essential ingredients. First, the sum of the regrets of the two players, though not necessarily non-negative as in normal-form games, is approximately non-negative in Markov games. This property allows us to bound the second-order path lengths of the learning dynamics. Second, we prove a tighter algebraic inequality regarding the weights deployed by OFTRL that shaves an extra $\log T$ factor. This crucial improvement enables the inductive analysis that leads to the final $O(T^{-1})$ rate. | https://openreview.net/pdf/a35472137b52f0a6e8767cfa258493272a4c2699.pdf |
Bispectral Neural Networks | https://openreview.net/forum?id=xnsg4pfKb7 | https://openreview.net/forum?id=xnsg4pfKb7 | Sophia Sanborn,Christian A Shewmake,Bruno Olshausen,Christopher J. Hillar | ICLR 2023,Poster | We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete -- that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps purely from the symmetries implicit in data. Further, we demonstrate that the completeness property endows these networks with strong invariance-based adversarial robustness. This work establishes Bispectral Neural Networks as a powerful computational primitive for robust invariant representation learning. | https://openreview.net/pdf/e62b9fd3b63639f57c669991b7ad9f9b89065016.pdf |
Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD | https://openreview.net/forum?id=pOyi9KqE56b | https://openreview.net/forum?id=pOyi9KqE56b | Konstantinos Nikolakakis,Farzin Haddadpour,Amin Karbasi,Dionysios Kalogerias | ICLR 2023,Poster | We provide sharp path-dependent generalization and excess risk guarantees for the full-batch Gradient Descent (GD) algorithm on smooth losses (possibly non-Lipschitz, possibly nonconvex). At the heart of our analysis is an upper bound on the generalization error, which implies that average output stability and a bounded expected optimization error at termination lead to generalization. This result shows that a small generalization error occurs along the optimization path, and allows us to bypass Lipschitz or sub-Gaussian assumptions on the loss prevalent in previous works. For nonconvex, convex, and strongly convex losses, we show the explicit dependence of the generalization error in terms of the accumulated path-dependent optimization error, terminal optimization error, number of samples, and number of iterations. For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions. For smooth convex losses, we show that the generalization error is tighter than existing bounds for SGD (up to one order of error magnitude). Consequently, the excess risk matches that of SGD with quadratically fewer iterations. Lastly, for strongly convex smooth losses, we show that full-batch GD achieves essentially the same excess risk rate as compared with the state of the art on SGD, but with an exponentially smaller number of iterations (logarithmic in the dataset size). | https://openreview.net/pdf/3ca1646b10a5ac124c9577c056ea32c53da6db58.pdf |
Hyper-Decision Transformer for Efficient Online Policy Adaptation | https://openreview.net/forum?id=AatUEvC-Wjv | https://openreview.net/forum?id=AatUEvC-Wjv | Mengdi Xu,Yuchen Lu,Yikang Shen,Shun Zhang,Ding Zhao,Chuang Gan | ICLR 2023,Poster | Decision Transformers (DT) have demonstrated strong performances in offline reinforcement learning settings, but quickly adapting to unseen novel tasks remains challenging. To address this challenge, we propose a new framework, called Hyper-Decision Transformer (HDT), that can generalize to novel tasks from a handful of demonstrations in a data- and parameter-efficient manner. To achieve such a goal, we propose to augment the base DT with an adaptation module, whose parameters are initialized by a hyper-network. When encountering unseen tasks, the hyper-network takes a handful of demonstrations as inputs and initializes the adaptation module accordingly. This initialization enables HDT to efficiently adapt to novel tasks by only fine-tuning the adaptation module. We validate HDT's generalization capability on object manipulation tasks. We find that with a single expert demonstration and fine-tuning only 0.5% of DT parameters, HDT adapts faster to unseen tasks than fine-tuning the whole DT model. Finally, we explore a more challenging setting where expert actions are not available, and we show that HDT outperforms state-of-the-art baselines in terms of task success rates by a large margin. Demos are available on our project page: https://sites.google.com/view/hdtforiclr2023/home. | https://openreview.net/pdf/1f0b37a019fe4936d828e553d436ca30059a9540.pdf |
Solving Continuous Control via Q-learning | https://openreview.net/forum?id=U5XOGxAgccS | https://openreview.net/forum?id=U5XOGxAgccS | Tim Seyde,Peter Werner,Wilko Schwarting,Igor Gilitschenski,Martin Riedmiller,Daniela Rus,Markus Wulfmeier | ICLR 2023,Poster | While there has been substantial success for solving continuous control with actor-critic methods, simpler critic-only methods such as Q-learning find limited application in the associated high-dimensional action spaces. However, most actor-critic methods come at the cost of added complexity: heuristics for stabilisation, compute requirements and wider hyperparameter search spaces. We show that a simple modification of deep Q-learning largely alleviates these issues. By combining bang-bang action discretization with value decomposition, framing single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches performance of state-of-the-art continuous actor-critic methods when learning from features or pixels. We extend classical bandit examples from cooperative MARL to provide intuition for how decoupled critics leverage state information to coordinate joint optimization, and demonstrate surprisingly strong performance across a variety of continuous control tasks. | https://openreview.net/pdf/8785841c3d3960cea3b9230ca8db34e70e54e679.pdf |
Make-A-Video: Text-to-Video Generation without Text-Video Data | https://openreview.net/forum?id=nJfylDvgzlq | https://openreview.net/forum?id=nJfylDvgzlq | Uriel Singer,Adam Polyak,Thomas Hayes,Xi Yin,Jie An,Songyang Zhang,Qiyuan Hu,Harry Yang,Oron Ashual,Oran Gafni,Devi Parikh,Sonal Gupta,Yaniv Taigman | ICLR 2023,Poster | We propose Make-A-Video -- an approach for directly translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V). Our intuition is simple: learn what the world looks like and how it is described from paired text-image data, and learn how the world moves from unsupervised video footage. Make-A-Video has three advantages: (1) it accelerates training of the T2V model (it does not need to learn visual and multimodal representations from scratch), (2) it does not require paired text-video data, and (3) the generated videos inherit the vastness (diversity in aesthetic, fantastical depictions, etc.) of today's image generation models. We design a simple yet effective way to build on T2I models with novel and effective spatial-temporal modules. First, we decompose the full temporal U-Net and attention tensors and approximate them in space and time. Second, we design a spatial temporal pipeline to generate high resolution and frame rate videos with a video decoder, interpolation model and two super resolution models that can enable various applications besides T2V. In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation, as determined by both qualitative and quantitative measures. | https://openreview.net/pdf/89dbebc8608b7115596225380f7f411d9c944eaf.pdf |
Personalized Reward Learning with Interaction-Grounded Learning (IGL) | https://openreview.net/forum?id=wGvzQWFyUB | https://openreview.net/forum?id=wGvzQWFyUB | Jessica Maghakian,Paul Mineiro,Kishan Panaganti,Mark Rucker,Akanksha Saran,Cheng Tan | ICLR 2023,Poster | In an era of countless content offerings, recommender systems alleviate information overload by providing users with personalized content suggestions. Due to the scarcity of explicit user feedback, modern recommender systems typically optimize for the same fixed combination of implicit feedback signals across all users. However, this approach disregards a growing body of work highlighting that (i) implicit signals can be used by users in diverse ways, signaling anything from satisfaction to active dislike, and (ii) different users communicate preferences in different ways. We propose applying the recent Interaction Grounded Learning (IGL) paradigm to address the challenge of learning representations of diverse user communication modalities. Rather than requiring a fixed, human-designed reward function, IGL is able to learn personalized reward functions for different users and then optimize directly for the latent user satisfaction. We demonstrate the success of IGL with experiments using simulations as well as with real-world production traces. | https://openreview.net/pdf/f1866bfb98a0b15a96694c234259fcbf8f91e3b8.pdf |
Towards convergence to Nash equilibria in two-team zero-sum games | https://openreview.net/forum?id=4BPFwvKOvo5 | https://openreview.net/forum?id=4BPFwvKOvo5 | Fivos Kalogiannis,Ioannis Panageas,Emmanouil-Vasileios Vlatakis-Gkaragkounis | ICLR 2023,Poster | Contemporary applications of machine learning raise important and overlooked theoretical questions regarding optimization in two-team games. Formally, two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents, each experiencing a utility identical to that of their teammates and opposite to that of the opposing team. We focus on the solution concept of Nash equilibria and prove $\textrm{CLS}$-hardness of computing them in this class of games. To further examine the capabilities of online learning algorithms in games with full-information feedback, we propose a benchmark of a simple, yet nontrivial, family of such games. These games do not enjoy the properties used to prove convergence for relevant algorithms. In particular, we use a dynamical systems perspective to demonstrate that gradient descent-ascent, its optimistic variant, optimistic multiplicative weights update, and extra gradient fail to converge (even locally) to a Nash equilibrium. On a brighter note, we propose a first-order method that leverages control theory techniques and under some conditions enjoys last-iterate local convergence to a Nash equilibrium. We also believe our proposed method is of independent interest for general min-max optimization. | https://openreview.net/pdf/f089e06582d6aa8804f308b5b0d27d432cdb6b26.pdf |
Discovering Evolution Strategies via Meta-Black-Box Optimization | https://openreview.net/forum?id=mFDU0fP3EQH | https://openreview.net/forum?id=mFDU0fP3EQH | Robert Tjarko Lange,Tom Schaul,Yutian Chen,Tom Zahavy,Valentin Dalibard,Chris Lu,Satinder Singh,Sebastian Flennerhag | ICLR 2023,Poster | Optimizing functions without access to gradients is the remit of black-box methods such as evolution strategies. While highly general, their learning dynamics are oftentimes heuristic and inflexible — exactly the limitations that meta-learning can address. Hence, we propose to discover effective update rules for evolution strategies via meta-learning. Concretely, our approach employs a search strategy parametrized by a self-attention-based architecture, which guarantees the update rule is invariant to the ordering of the candidate solutions. We show that meta-evolving this system on a small set of representative low-dimensional analytic optimization problems is sufficient to discover new evolution strategies capable of generalizing to unseen optimization problems, population sizes and optimization horizons. Furthermore, the same learned evolution strategy can outperform established neuroevolution baselines on supervised and continuous control tasks. As additional contributions, we ablate the individual neural network components of our method; reverse engineer the learned strategy into an explicit heuristic form, which remains highly competitive; and show that it is possible to self-referentially train an evolution strategy from scratch, with the learned update rule used to drive the outer meta-learning loop. | https://openreview.net/pdf/09efdea6923af7e8edaae929d132c514d2ca4920.pdf |
DensePure: Understanding Diffusion Models for Adversarial Robustness | https://openreview.net/forum?id=p7hvOJ6Gq0i | https://openreview.net/forum?id=p7hvOJ6Gq0i | Chaowei Xiao,Zhongzhu Chen,Kun Jin,Jiongxiao Wang,Weili Nie,Mingyan Liu,Anima Anandkumar,Bo Li,Dawn Song | ICLR 2023,Poster | Diffusion models have been recently employed to improve certified robustness through the process of denoising. However, the theoretical understanding of why diffusion models are able to improve the certified robustness is still lacking, preventing further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to get multiple reversed samples, which are then passed through the classifier, followed by majority voting of inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process in a diffusion model is also high; thus sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with a high probability. By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high density region in the conditional distribution so that it can enhance certified robustness. We conduct extensive experiments to demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model via randomized smoothing. We show that DensePure is consistently better than existing methods on ImageNet, with a 7% improvement on average. | https://openreview.net/pdf/d8526f8f386272bc4382c378f94069337bd8be63.pdf |
Grounding Graph Network Simulators using Physical Sensor Observations | https://openreview.net/forum?id=jsZsEd8VEY | https://openreview.net/forum?id=jsZsEd8VEY | Jonas Linkerhägner,Niklas Freymuth,Paul Maria Scheikl,Franziska Mathis-Ullrich,Gerhard Neumann | ICLR 2023,Poster | Physical simulations that accurately model reality are crucial for many engineering disciplines such as mechanical engineering and robotic motion planning. In recent years, learned Graph Network Simulators produced accurate mesh-based simulations while requiring only a fraction of the computational cost of traditional simulators. Yet, the resulting predictors are confined to learning from data generated by existing mesh-based simulators and thus cannot include real world sensory information such as point cloud data. As these predictors have to simulate complex physical systems from only an initial state, they exhibit a high error accumulation for long-term predictions. In this work, we integrate sensory information to ground Graph Network Simulators on real world observations. In particular, we predict the mesh state of deformable objects by utilizing point cloud data. The resulting model allows for accurate predictions over longer time horizons, even under uncertainties in the simulation, such as unknown material properties. Since point clouds are usually not available for every time step, especially in online settings, we employ an imputation-based model. The model can make use of such additional information only when provided, and resorts to a standard Graph Network Simulator, otherwise. We experimentally validate our approach on a suite of prediction tasks for mesh-based interactions between soft and rigid bodies. Our method results in utilization of additional point cloud information to accurately predict stable simulations where existing Graph Network Simulators fail. | https://openreview.net/pdf/6cc29c715bf3520566713c94c35e439f84a99f80.pdf |
Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions | https://openreview.net/forum?id=osei3IzUia | https://openreview.net/forum?id=osei3IzUia | Raghav Singhal,Mark Goldstein,Rajesh Ranganath | ICLR 2023,Poster | Diffusion-based generative models (DBGMs) perturb data to a target noise distribution and reverse this process to generate samples. The choice of noising process, or inference diffusion process, affects both likelihoods and sample quality. For example, extending the inference process with auxiliary variables leads to improved sample quality. While there are many such multivariate diffusions to explore, each new one requires significant model-specific analysis, hindering rapid prototyping and evaluation. In this work, we study Multivariate Diffusion Models (MDMs). For any number of auxiliary variables, we provide a recipe for maximizing a lower bound on the MDM likelihood without requiring any model-specific analysis. We then demonstrate how to parameterize the diffusion for a specified target noise distribution; these two points together enable optimizing the inference diffusion process. Optimizing the diffusion expands easy experimentation from just a few well-known processes to an automatic search over all linear diffusions. To demonstrate these ideas, we introduce two new specific diffusions as well as learn a diffusion process on the MNIST, CIFAR10, and ImageNet32 datasets. We show that learned MDMs match or surpass the bits-per-dim (BPD) achieved by fixed choices of diffusions for a given dataset and model architecture. | https://openreview.net/pdf/f14263e154b403508bc9610cec0745bc42f8b9e7.pdf |
Contrastive Corpus Attribution for Explaining Representations | https://openreview.net/forum?id=eWKfMBL5to | https://openreview.net/forum?id=eWKfMBL5to | Chris Lin,Hugh Chen,Chanwoo Kim,Su-In Lee | ICLR 2023,Poster | Despite the widespread use of unsupervised models, very few methods are designed to explain them. Most explanation methods explain a scalar model output. However, unsupervised models output representation vectors, the elements of which are not good candidates to explain because they lack semantic meaning. To bridge this gap, recent works defined a scalar explanation output: a dot product-based similarity in the representation space to the sample being explained (i.e., an explicand). Although this enabled explanations of unsupervised models, the interpretation of this approach can still be opaque because similarity to the explicand's representation may not be meaningful to humans. To address this, we propose contrastive corpus similarity, a novel and semantically meaningful scalar explanation output based on a reference corpus and a contrasting foil set of samples. We demonstrate that contrastive corpus similarity is compatible with many post-hoc feature attribution methods to generate COntrastive COrpus Attributions (COCOA) and quantitatively verify that features important to the corpus are identified. We showcase the utility of COCOA in two ways: (i) we draw insights by explaining augmentations of the same image in a contrastive learning setting (SimCLR); and (ii) we perform zero-shot object localization by explaining the similarity of image representations to jointly learned text representations (CLIP). | https://openreview.net/pdf/32173e256e55a81e9c42abce0ae086cd3e08acf2.pdf |
Spatio-temporal point processes with deep non-stationary kernels | https://openreview.net/forum?id=PsIk0kO3hKd | https://openreview.net/forum?id=PsIk0kO3hKd | Zheng Dong,Xiuyuan Cheng,Yao Xie | ICLR 2023,Poster | Point process data are becoming ubiquitous in modern applications, such as social networks, health care, and finance. Despite the powerful expressiveness of the popular recurrent neural network (RNN) models for point process data, they may not successfully capture sophisticated non-stationary dependencies in the data due to their recurrent structures. Another popular type of deep model for point process data is based on representing the influence kernel (rather than the intensity function) by neural networks. We take the latter approach and develop a new deep non-stationary influence kernel that can model non-stationary spatio-temporal point processes. The main idea is to approximate the influence kernel with a novel and general low-rank decomposition, enabling efficient representation through deep neural networks, computational efficiency, and better performance. We also take a new approach to maintain the non-negativity constraint of the conditional intensity by introducing a log-barrier penalty. We demonstrate our proposed method's good performance and computational efficiency compared with the state-of-the-art on simulated and real data. | https://openreview.net/pdf/805a22ec2ddcfcaefd7076337990eb22fe609119.pdf
Federated Learning from Small Datasets | https://openreview.net/forum?id=hDDV1lsRV8 | https://openreview.net/forum?id=hDDV1lsRV8 | Michael Kamp,Jonas Fischer,Jilles Vreeken | ICLR 2023,Poster | Federated learning allows multiple parties to collaboratively train a joint model without having to share any local data. It enables applications of machine learning in settings where data is inherently distributed and undisclosable, such as in the medical domain. Joint training is usually achieved by aggregating local models. When local datasets are small, locally trained models can vary greatly from a globally good model. Bad local models can arbitrarily deteriorate the aggregate model quality, causing federated learning to fail in these settings. We propose a novel approach that avoids this problem by interleaving model aggregation and permutation steps. During a permutation step we redistribute local models across clients through the server, while preserving data privacy, to allow each local model to train on a daisy chain of local datasets. This enables successful training in data-sparse domains. Combined with model aggregation, this approach enables effective learning even if the local datasets are extremely small, while retaining the privacy benefits of federated learning. | https://openreview.net/pdf/00cf1af90ef46baaaee3333a1ca4303356378bd5.pdf
Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences | https://openreview.net/forum?id=lGz9u1ubUXE | https://openreview.net/forum?id=lGz9u1ubUXE | Lin Guan,Karthik Valmeekam,Subbarao Kambhampati | ICLR 2023,Poster | Generating complex behaviors that satisfy the preferences of non-expert users is a crucial requirement for AI agents. Interactive reward learning from trajectory comparisons (a.k.a. RLHF) is one way to allow non-expert users to convey complex objectives by expressing preferences over short clips of agent behaviors. Even though this parametric method can encode complex tacit knowledge present in the underlying tasks, it implicitly assumes that the human is unable to provide richer feedback than binary preference labels, leading to intolerably high feedback complexity and poor user experience. While providing a detailed symbolic closed-form specification of the objectives might be tempting, it is not always feasible even for an expert user. However, in most cases, humans are aware of how the agent should change its behavior along meaningful axes to fulfill their underlying purpose, even if they are not able to fully specify task objectives symbolically. Using this as motivation, we introduce the notion of Relative Behavioral Attributes, which allows the users to tweak the agent behavior through symbolic concepts (e.g., increasing the softness or speed of agents' movement). We propose two practical methods that can learn to model any kind of behavioral attributes from ordered behavior clips. We demonstrate the effectiveness of our methods on four tasks with nine different behavioral attributes, showing that once the attributes are learned, end users can produce desirable agent behaviors relatively effortlessly, by providing feedback just around ten times. This is over an order of magnitude less than that required by the popular learning-from-human-preferences baselines. The supplementary video and source code are available at: https://guansuns.github.io/pages/rba. | https://openreview.net/pdf/8ddd06f0bef80e495ca1650eb74874f576793c38.pdf |
Scalable Batch-Mode Deep Bayesian Active Learning via Equivalence Class Annealing | https://openreview.net/forum?id=GRZtigJljLY | https://openreview.net/forum?id=GRZtigJljLY | Renyu Zhang,Aly A Khan,Robert L. Grossman,Yuxin Chen | ICLR 2023,Poster | Active learning has demonstrated data efficiency in many fields. Existing active learning algorithms, especially in the context of batch-mode deep Bayesian active models, rely heavily on the quality of uncertainty estimations of the model, and are often challenging to scale to large batches. In this paper, we propose Batch-BALanCe, a scalable batch-mode active learning algorithm, which combines insights from decision-theoretic active learning, combinatorial information measure, and diversity sampling. At its core, Batch-BALanCe relies on a novel decision-theoretic acquisition function that facilitates differentiation among different equivalence classes. Intuitively, each equivalence class consists of hypotheses (e.g., posterior samples of deep neural networks) with similar predictions, and Batch-BALanCe adaptively adjusts the size of the equivalence classes as learning progresses. To scale up the computation of queries to large batches, we further propose an efficient batch-mode acquisition procedure, which aims to maximize a novel combinatorial information measure defined through the acquisition function. We show that our algorithm can effectively handle realistic multi-class classification tasks, and achieves compelling performance on several benchmark datasets for active learning under both low- and large-batch regimes. | https://openreview.net/pdf/f8c636a7941b10f719e111fe8c8ab85c3c68465c.pdf |
Semi-Parametric Inducing Point Networks and Neural Processes | https://openreview.net/forum?id=FE99-fDrWd5 | https://openreview.net/forum?id=FE99-fDrWd5 | Richa Rastogi,Yair Schiff,Alon Hacohen,Zhaozhi Li,Ian Lee,Yuntian Deng,Mert R. Sabuncu,Volodymyr Kuleshov | ICLR 2023,Poster | We introduce semi-parametric inducing point networks (SPIN), a general-purpose architecture that can query the training set at inference time in a compute-efficient manner. Semi-parametric architectures are typically more compact than parametric models, but their computational complexity is often quadratic. In contrast, SPIN attains linear complexity via a cross-attention mechanism between datapoints inspired by inducing point methods. Querying large training sets can be particularly useful in meta-learning, as it unlocks additional training signal, but often exceeds the scaling limits of existing models. We use SPIN as the basis of the Inducing Point Neural Process, a probabilistic model which supports large contexts in meta-learning and achieves high accuracy where existing models fail. In our experiments, SPIN reduces memory requirements, improves accuracy across a range of meta-learning tasks, and improves state-of-the-art performance on an important practical problem, genotype imputation. | https://openreview.net/pdf/01b53721eeac239645a3544209c2a57815014a5d.pdf |
DAG Learning on the Permutahedron | https://openreview.net/forum?id=m9LCdYgN8-6 | https://openreview.net/forum?id=m9LCdYgN8-6 | Valentina Zantedeschi,Luca Franceschi,Jean Kaddour,Matt Kusner,Vlad Niculae | ICLR 2023,Poster | We propose a continuous optimization framework for discovering a latent directed acyclic graph (DAG) from observational data. Our approach optimizes over the polytope of permutation vectors, the so-called Permutahedron, to learn a topological ordering. Edges can be optimized jointly, or learned conditional on the ordering via a non-differentiable subroutine. Compared to existing continuous optimization approaches, our formulation has a number of advantages including: 1. validity: optimizes over exact DAGs as opposed to other relaxations optimizing approximate DAGs; 2. modularity: accommodates any edge-optimization procedure, edge structural parameterization, and optimization loss; 3. end-to-end: either alternately iterates between node-ordering and edge-optimization, or optimizes them jointly. We demonstrate, on real-world data problems in protein-signaling and transcriptional network discovery, that our approach lies on the Pareto frontier of two key metrics, the SID and SHD. | https://openreview.net/pdf/88a68ed7951c36fd21393e3c996f2ebe7964a157.pdf
Explicitly Minimizing the Blur Error of Variational Autoencoders | https://openreview.net/forum?id=9krnQ-ue9M | https://openreview.net/forum?id=9krnQ-ue9M | Gustav Bredell,Kyriakos Flouris,Krishna Chaitanya,Ertunc Erdil,Ender Konukoglu | ICLR 2023,Poster | Variational autoencoders (VAEs) are powerful generative modelling methods, however they suffer from blurry generated samples and reconstructions compared to the images they have been trained on. Significant research effort has been spent to increase the generative capabilities by creating more flexible models but often flexibility comes at the cost of higher complexity and computational cost. Several works have focused on altering the reconstruction term of the evidence lower bound (ELBO), however, often at the expense of losing the mathematical link to maximizing the likelihood of the samples under the modeled distribution. Here we propose a new formulation of the reconstruction term for the VAE that specifically penalizes the generation of blurry images while at the same time still maximizing the ELBO under the modeled distribution. We show the potential of the proposed loss on three different data sets, where it outperforms several recently proposed reconstruction losses for VAEs. | https://openreview.net/pdf/13c9218efd3a171ed6da7c92777b14f0d597a58e.pdf
3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction | https://openreview.net/forum?id=kJqXEPXMsE0 | https://openreview.net/forum?id=kJqXEPXMsE0 | Jiaqi Guan,Wesley Wei Qian,Xingang Peng,Yufeng Su,Jian Peng,Jianzhu Ma | ICLR 2023,Poster | Rich data and powerful machine learning models allow us to design drugs for a specific protein target *in silico*. Recently, the inclusion of 3D structures during targeted drug design shows superior performance to other target-free models as the atomic interaction in the 3D space is explicitly modeled. However, current 3D target-aware models either rely on the voxelized atom densities or the autoregressive sampling process, which are not equivariant to rotation or easily violate geometric constraints resulting in unrealistic structures. In this work, we develop a 3D equivariant diffusion model to solve the above challenges. To achieve target-aware molecule design, our method learns a joint generative process of both continuous atom coordinates and categorical atom types with a SE(3)-equivariant network. Moreover, we show that our model can serve as an unsupervised feature extractor to estimate the binding affinity under proper parameterization, which provides an effective way for drug screening. To evaluate our model, we propose a comprehensive framework to evaluate the quality of sampled molecules from different dimensions. Empirical studies show our model could generate molecules with more realistic 3D structures and better affinities towards the protein targets, and improve binding affinity ranking and prediction without retraining. | https://openreview.net/pdf/7d2f5804455f227aebb0b02ed88a71d50f3ad49a.pdf
How gradient estimator variance and bias impact learning in neural networks | https://openreview.net/forum?id=EBC60mxBwyw | https://openreview.net/forum?id=EBC60mxBwyw | Arna Ghosh,Yuhan Helena Liu,Guillaume Lajoie,Konrad Kording,Blake Aaron Richards | ICLR 2023,Poster | There is growing interest in understanding how real brains may approximate gradients and how gradients can be used to train neuromorphic chips. However, neither real brains nor neuromorphic chips can perfectly follow the loss gradient, so parameter updates would necessarily use gradient estimators that have some variance and/or bias. Therefore, there is a need to understand better how variance and bias in gradient estimators impact learning dependent on network and task properties. Here, we show that variance and bias can impair learning on the training data, but some degree of variance and bias in a gradient estimator can be beneficial for generalization. We find that the ideal amount of variance and bias in a gradient estimator are dependent on several properties of the network and task: the size and activity sparsity of the network, the norm of the gradient, and the curvature of the loss landscape. As such, whether considering biologically-plausible learning algorithms or algorithms for training neuromorphic chips, researchers can analyze these properties to determine whether their approximation to gradient descent will be effective for learning given their network and task properties. | https://openreview.net/pdf/78403b3ae7bbdc9e73614cc0b261982b15a6a1f0.pdf |
Evaluating Representations with Readout Model Switching | https://openreview.net/forum?id=Fsd-6ax4T1m | https://openreview.net/forum?id=Fsd-6ax4T1m | Yazhe Li,Jorg Bornschein,Marcus Hutter | ICLR 2023,Poster | Although much of the success of Deep Learning builds on learning good representations, a rigorous method to evaluate their quality is lacking. In this paper, we treat the evaluation of representations as a model selection problem and propose to use the Minimum Description Length (MDL) principle to devise an evaluation metric. Contrary to the established practice of limiting the capacity of the readout model, we design a hybrid discrete and continuous-valued model space for the readout models and employ a switching strategy to combine their predictions. The MDL score takes model complexity, as well as data efficiency into account. As a result, the most appropriate model for the specific task and representation will be chosen, making it a unified measure for comparison. The proposed metric can be efficiently computed with an online method and we present results for pre-trained vision encoders of various architectures (ResNet and ViT) and objective functions (supervised and self-supervised) on a range of downstream tasks. We compare our methods with accuracy-based approaches and show that the latter are inconsistent when multiple readout models are used. Finally, we discuss important properties revealed by our evaluations such as model scaling, preferred readout model, and data efficiency. | https://openreview.net/pdf/e6852d6a9b749e54b195047d6910e5c27fa9474a.pdf |
Augmentation with Projection: Towards an Effective and Efficient Data Augmentation Paradigm for Distillation | https://openreview.net/forum?id=kPPVmUF6bM_ | https://openreview.net/forum?id=kPPVmUF6bM_ | Ziqi Wang,Yuexin Wu,Frederick Liu,Daogao Liu,Le Hou,Hongkun Yu,Jing Li,Heng Ji | ICLR 2023,Poster | Knowledge distillation is one of the primary methods of transferring knowledge from large to small models. However, it requires massive task-specific data, which may not be plausible in many real-world applications. Data augmentation methods such as representation interpolation, token replacement, or augmentation with models are applied to tackle this problem. However, these data augmentation methods either potentially cause shifts in decision boundaries (representation interpolation), are not expressive enough (token replacement), or introduce too much computational overhead (augmentation with models). To this end, we propose AugPro (Augmentation with Projection), an effective and efficient data augmentation method for distillation. Our method builds on top of representation interpolation augmentation methods to maintain the diversity of expressions and converts the augmented data to tokens to avoid shifting decision boundaries. It uses simple operations that come with little computational overhead. The results on multiple GLUE tasks show that our methods can improve distillation performance by a large margin at a low time cost. | https://openreview.net/pdf/fde269800382d4ef28ff4cff9a8757680b4210fa.pdf |
Pseudoinverse-Guided Diffusion Models for Inverse Problems | https://openreview.net/forum?id=9_gsMA8MRKQ | https://openreview.net/forum?id=9_gsMA8MRKQ | Jiaming Song,Arash Vahdat,Morteza Mardani,Jan Kautz | ICLR 2023,Poster | Diffusion models have become competitive candidates for solving various inverse problems. Models trained for specific inverse problems work well but are limited to their particular use cases, whereas methods that use problem-agnostic models are general but often perform worse empirically. To address this dilemma, we introduce Pseudoinverse-guided Diffusion Models ($\Pi$GDM), an approach that uses problem-agnostic models to close the gap in performance. $\Pi$GDM directly estimates conditional scores from the measurement model of the inverse problem without additional training. It can address inverse problems with noisy, non-linear, or even non-differentiable measurements, in contrast to many existing approaches that are limited to noiseless linear ones. We illustrate the empirical effectiveness of $\Pi$GDM on several image restoration tasks, including super-resolution, inpainting and JPEG restoration. On ImageNet, $\Pi$GDM is competitive with state-of-the-art diffusion models trained on specific tasks, and is the first to achieve this with problem-agnostic diffusion models. $\Pi$GDM can also solve a wider set of inverse problems where the measurement processes are composed of several simpler ones. | https://openreview.net/pdf/210093330709030207aa90dbfe2a1f525ac5fb7d.pdf |
Planning with Sequence Models through Iterative Energy Minimization | https://openreview.net/forum?id=cVFD6qE8gnY | https://openreview.net/forum?id=cVFD6qE8gnY | Hongyi Chen,Yilun Du,Yiye Chen,Joshua B. Tenenbaum,Patricio A. Vela | ICLR 2023,Poster | Recent works have shown that language modeling can be effectively used to train reinforcement learning (RL) policies. However, the success of applying existing language models to planning, in which we wish to obtain a trajectory of actions to reach some goal, is less straightforward. The typical autoregressive generation procedures of language models preclude sequential refinement of earlier steps, which limits the effectiveness of a predicted plan. In this paper, we suggest an approach towards integrating planning with language models based on the idea of iterative energy minimization, and illustrate how such a procedure leads to improved RL performance across different tasks. We train a masked language model to capture an implicit energy function over trajectories of actions, and formulate planning as finding a trajectory of actions with minimum energy. We illustrate how this procedure enables improved performance over recent approaches across BabyAI and Atari environments. We further demonstrate unique benefits of our iterative optimization procedure, involving new task generalization, test-time constraints adaptation, and the ability to compose plans together. Project webpage: https://hychen-naza.github.io/projects/LEAP/index.html | https://openreview.net/pdf/b6ee4b3ab28ce8f9c2f94ec81b64cf338bfdfafe.pdf |
Verifying the Union of Manifolds Hypothesis for Image Data | https://openreview.net/forum?id=Rvee9CAX4fi | https://openreview.net/forum?id=Rvee9CAX4fi | Bradley CA Brown,Anthony L. Caterini,Brendan Leigh Ross,Jesse C Cresswell,Gabriel Loaiza-Ganem | ICLR 2023,Poster | Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there was no hidden low-dimensional structure in data of interest; this existence is posited by the manifold hypothesis, which states that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in image data. Assuming that data lies on a single manifold implies intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation. To address this deficiency, we consider the union of manifolds hypothesis, which states that data lies on a disjoint union of manifolds of varying intrinsic dimensions. We empirically verify this hypothesis on commonly-used image datasets, finding that indeed, observed data lies on a disconnected set and that intrinsic dimension is not constant. We also provide insights into the implications of the union of manifolds hypothesis in deep learning, both supervised and unsupervised, showing that designing models with an inductive bias for this structure improves performance across classification and generative modelling tasks. Our code is available at https://github.com/layer6ai-labs/UoMH. | https://openreview.net/pdf/f6ede6f446eb6a74ef9b335623e27b10682272ea.pdf |
Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning | https://openreview.net/forum?id=zlbci7019Z3 | https://openreview.net/forum?id=zlbci7019Z3 | Fahad Sarfraz,Elahe Arani,Bahram Zonooz | ICLR 2023,Poster | Humans excel at lifelong learning, as the brain has evolved to be robust to distribution shifts and noise in our ever-changing environment. Deep neural networks (DNNs), however, exhibit catastrophic forgetting and the learned representations drift drastically as they encounter a new task. This alludes to a different error-based learning mechanism in the brain. Unlike DNNs, where learning scales linearly with the magnitude of the error, the sensitivity to errors in the brain decreases as a function of their magnitude. To this end, we propose "ESMER" which employs a principled mechanism to modulate error sensitivity in a dual-memory rehearsal-based system. Concretely, it maintains a memory of past errors and uses it to modify the learning dynamics so that the model learns more from small consistent errors compared to large sudden errors. We also propose "Error-Sensitive Reservoir Sampling" to maintain episodic memory, which leverages the error history to pre-select low-loss samples as candidates for the buffer, which are better suited for retaining information. Empirical results show that ESMER effectively reduces forgetting and abrupt drift in representations at the task boundary by gradually adapting to the new task while consolidating knowledge. Remarkably, it also enables the model to learn under high levels of label noise, which is ubiquitous in real-world data streams. | https://openreview.net/pdf/1fbbc169fd74408bb6e6db0a1a2363e7161ae239.pdf |
Don’t forget the nullspace! Nullspace occupancy as a mechanism for out of distribution failure | https://openreview.net/forum?id=39z0zPZ0AvB | https://openreview.net/forum?id=39z0zPZ0AvB | Daksh Idnani,Vivek Madan,Naman Goyal,David J. Schwab,Shanmukha Ramakrishna Vedantam | ICLR 2023,Poster | Out of distribution (OoD) generalization has received considerable interest in recent years. In this work, we identify a particular failure mode of OoD generalization for discriminative classifiers that is based on test data (from a new domain) lying in the nullspace of features learnt from source data. We demonstrate the existence of this failure mode across multiple networks trained across RotatedMNIST, PACS, TerraIncognita, DomainNet and ImageNet-R datasets. We then study different choices for characterizing the feature space and show that projecting intermediate representations onto the span of directions that obtain maximum training accuracy provides consistent improvements in OoD performance. Finally, we show that such nullspace behavior also provides an insight into neural networks trained on poisoned data. We hope our work galvanizes interest in the relationship between the nullspace occupancy failure mode and generalization. | https://openreview.net/pdf/db9b6c631e14f77256d03c03af1728ef41406450.pdf |
ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond | https://openreview.net/forum?id=SM7XkJouWHm | https://openreview.net/forum?id=SM7XkJouWHm | Xiaojun Guo,Yifei Wang,Tianqi Du,Yisen Wang | ICLR 2023,Poster | Oversmoothing is a common phenomenon in a wide range of Graph Neural Networks (GNNs) and Transformers, where performance degenerates as the network goes deeper. Instead of characterizing oversmoothing from the view of complete collapse, in which representations converge to a single point, we dive into a more general perspective, dimensional collapse, in which representations lie in a narrow cone. Accordingly, inspired by the power of contrastive learning in preventing dimensional collapse, we propose a novel normalization layer, ContraNorm. Intuitively, ContraNorm implicitly shatters representations in the embedding space, leading to a more uniform distribution and less severe dimensional collapse. In our theoretical analysis, we prove that ContraNorm can alleviate both complete collapse and dimensional collapse under some conditions. Our proposed normalization layer can be easily inserted into GNNs and Transformers with negligible parameter overhead. Experiments on various real-world datasets verify the effectiveness of our method. | https://openreview.net/pdf/b307a595509ff0168e796ba3dbcfad8f1810f630.pdf
Accelerated Single-Call Methods for Constrained Min-Max Optimization | https://openreview.net/forum?id=HRwN7IQLUKA | https://openreview.net/forum?id=HRwN7IQLUKA | Yang Cai,Weiqiang Zheng | ICLR 2023,Poster | We study first-order methods for constrained min-max optimization. Existing methods either require two gradient calls or two projections in each iteration, which may be costly in some applications. In this paper, we first show that a variant of the \emph{Optimistic Gradient (OG)} method, a \emph{single-call single-projection} algorithm, has $O(\frac{1}{\sqrt{T}})$ best-iterate convergence rate for inclusion problems with operators that satisfy the weak Minty variation inequality (MVI). Our second result is the first single-call single-projection algorithm -- the \emph{Accelerated Reflected Gradient (ARG)} method that achieves the \emph{optimal $O(\frac{1}{T})$} last-iterate convergence rate for inclusion problems that satisfy negative comonotonicity. Both the weak MVI and negative comonotonicity are well-studied assumptions and capture a rich set of non-convex non-concave min-max optimization problems. Finally, we show that the \emph{Reflected Gradient (RG)} method, another \emph{single-call single-projection} algorithm, has $O(\frac{1}{\sqrt{T}})$ last-iterate convergence rate for constrained convex-concave min-max optimization, answering an open problem of [Hsieh et al., 2019]. Our convergence rates hold for standard measures such as the tangent residual and the natural residual. | https://openreview.net/pdf/153c2c1ec443965f298c8215dc024f0a9ff33102.pdf |
Distributed Extra-gradient with Optimal Complexity and Communication Guarantees | https://openreview.net/forum?id=b3itJyarLM0 | https://openreview.net/forum?id=b3itJyarLM0 | Ali Ramezani-Kebrya,Kimon Antonakopoulos,Igor Krawczuk,Justin Deschenaux,Volkan Cevher | ICLR 2023,Poster | We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local stochastic dual vectors. This setting includes a broad range of important problems from distributed convex minimization to min-max and games. Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient. To this end, we propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs. We provide an adaptive step-size rule that adapts to the respective noise profiles at hand, achieving a fast rate of ${\cal O}(1/T)$ under relative noise and an order-optimal ${\cal O}(1/\sqrt{T})$ under absolute noise, and show that distributed training accelerates convergence. Finally, we validate our theoretical results by providing real-world experiments and training generative adversarial networks on multiple GPUs. | https://openreview.net/pdf/cb71d6b7493a97893dd096fc88a312dd84ad2216.pdf
Performance Bounds for Model and Policy Transfer in Hidden-parameter MDPs | https://openreview.net/forum?id=20gBzEzgtiI | https://openreview.net/forum?id=20gBzEzgtiI | Haotian Fu,Jiayu Yao,Omer Gottesman,Finale Doshi-Velez,George Konidaris | ICLR 2023,Poster | In the Hidden-Parameter MDP (HiP-MDP) framework, a family of reinforcement learning tasks is generated by varying hidden parameters specifying the dynamics and reward function for each individual task. HiP-MDP is a natural model for families of tasks in which meta- and lifelong-reinforcement learning approaches can succeed. Given a learned context encoder that infers the hidden parameters from previous experience, most existing algorithms fall into two categories: $\textit{model transfer}$ and $\textit{policy transfer}$, depending on which function the hidden parameters are used to parameterize. We characterize the robustness of model and policy transfer algorithms with respect to hidden parameter estimation error. We first show that the value function of HiP-MDPs is Lipschitz continuous under certain conditions. We then derive regret bounds for both settings through the lens of Lipschitz continuity. Finally, we empirically corroborate our theoretical analysis by experimentally varying the hyper-parameters governing the Lipschitz constants of two continuous control problems; the resulting performance is consistent with our predictions. | https://openreview.net/pdf/60d3574f06987c12eb924a57883c12320d9eec12.pdf |
Composing Task Knowledge With Modular Successor Feature Approximators | https://openreview.net/forum?id=DrtSx1z40Ib | https://openreview.net/forum?id=DrtSx1z40Ib | Wilka Torrico Carvalho,Angelos Filos,Richard Lewis,Honglak Lee,Satinder Singh | ICLR 2023,Poster | Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing and transferring predictive knowledge and behavior. SF&GPI works by having an agent learn predictive representations (SFs) that can be combined for transfer to new tasks with GPI. However, to be effective this approach requires state features that are useful to predict, and these state-features are typically hand-designed. In this work, we present a novel neural network architecture, “Modular Successor Feature Approximators” (MSFA), where modules both discover what is useful to predict, and learn their own predictive representations. We show that MSFA is able to better generalize compared to baseline architectures for learning SFs and a modular network that discovers factored state representations. | https://openreview.net/pdf/f6e1ffd51a2415a8b0d1b98c2fabdf7bc677dfeb.pdf
DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics | https://openreview.net/forum?id=LIV7-_7pYPl | https://openreview.net/forum?id=LIV7-_7pYPl | Sizhe Li,Zhiao Huang,Tao Chen,Tao Du,Hao Su,Joshua B. Tenenbaum,Chuang Gan | ICLR 2023,Poster | In this work, we aim to learn dexterous manipulation of deformable objects using multi-fingered hands. Reinforcement learning approaches for dexterous rigid object manipulation would struggle in this setting due to the complexity of physics interaction with deformable objects. At the same time, previous trajectory optimization approaches with differentiable physics for deformable manipulation would suffer from local optima caused by the explosion of contact modes from hand-object interactions. To address these challenges, we propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstration, and refines the learned skills with differentiable physics. Concretely, we first collect a small set of human demonstrations using teleoperation. And we then train a skill model using demonstrations for planning over action abstractions in imagination. To explore the goal space, we further apply augmentations to the existing deformable shapes in demonstrations and use a gradient optimizer to refine the actions planned by the skill model. Finally, we adopt the refined trajectories as new demonstrations for finetuning the skill model. To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks. Compared with baselines, DexDeform is able to better explore and generalize across novel goals unseen in the initial human demonstrations. Additional materials can be found at our project website: https://sites.google.com/view/dexdeform. | https://openreview.net/pdf/40f58e1097499c5bdbba2b9dea60f73decfcf1b9.pdf |
Effective passive membership inference attacks in federated learning against overparameterized models | https://openreview.net/forum?id=QsCSLPP55Ku | https://openreview.net/forum?id=QsCSLPP55Ku | Jiacheng Li,Ninghui Li,Bruno Ribeiro | ICLR 2023,Poster | This work considers the challenge of performing membership inference attacks in a federated learning setting ---for image classification--- where an adversary can only observe the communication between the central node and a single client (a passive white-box attack). Passive attacks are one of the hardest-to-detect attacks, since they can be performed without modifying the behavior of the central server or its clients, and assume *no access to private data instances*. The key insight of our method is empirically observing that, near parameters that generalize well at test time, the gradients of large overparameterized neural network models statistically behave like high-dimensional independent isotropic random vectors. Using this insight, we devise two attacks that are often little impacted by existing and proposed defenses. Finally, we validated the hypothesis that our attack depends on the overparametrization by showing that increasing the level of overparametrization (without changing the neural network architecture) positively correlates with our attack effectiveness. | https://openreview.net/pdf/954032a07615cd8bfbd1828c3a300c03f31a286c.pdf
Optimizing Bi-Encoder for Named Entity Recognition via Contrastive Learning | https://openreview.net/forum?id=9EAQVEINuum | https://openreview.net/forum?id=9EAQVEINuum | Sheng Zhang,Hao Cheng,Jianfeng Gao,Hoifung Poon | ICLR 2023,Poster | We present a bi-encoder framework for named entity recognition (NER), which applies contrastive learning to map candidate text spans and entity types into the same vector representation space. Prior work predominantly approaches NER as sequence labeling or span classification. We instead frame NER as a representation learning problem that maximizes the similarity between the vector representations of an entity mention and its type. This makes it easy to handle nested and flat NER alike, and can better leverage noisy self-supervision signals. A major challenge to this bi-encoder formulation for NER lies in separating non-entity spans from entity mentions. Instead of explicitly labeling all non-entity spans as the same class $\texttt{Outside}$ ($\texttt{O}$) as in most prior methods, we introduce a novel dynamic thresholding loss, learned in conjunction with the standard contrastive loss. Experiments show that our method performs well in both supervised and distantly supervised settings, for nested and flat NER alike, establishing new state of the art across standard datasets in the general domain (e.g., ACE2004, ACE2005, CoNLL2003) and high-value verticals such as biomedicine (e.g., GENIA, NCBI, BC5CDR, JNLPBA). We release the code at https://github.com/microsoft/binder. | https://openreview.net/pdf/6dca640907739533f2dc7a6e7b7d2e4104fdfc43.pdf |
Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks | https://openreview.net/forum?id=p_jIy5QFB7 | https://openreview.net/forum?id=p_jIy5QFB7 | Zhen Lin,Shubhendu Trivedi,Jimeng Sun | ICLR 2023,Poster | Deep neural network (DNN) classifiers are often overconfident, producing miscalibrated class probabilities. In high-risk applications like healthcare, practitioners require fully calibrated probability predictions for decision-making. That is, conditioned on the prediction vector, every class’ probability should be close to the predicted value. Most existing calibration methods either lack theoretical guarantees for producing calibrated outputs, reduce classification accuracy in the process, or only calibrate the predicted class. This paper proposes a new Kernel-based calibration method called KCal. Unlike existing calibration procedures, KCal does not operate directly on the logits or softmax outputs of the DNN. Instead, KCal learns a metric space on the penultimate-layer latent embedding and generates predictions using kernel density estimates on a calibration set. We first analyze KCal theoretically, showing that it enjoys a provable full calibration guarantee. Then, through extensive experiments across a variety of datasets, we show that KCal consistently outperforms baselines as measured by the calibration error and by proper scoring rules like the Brier Score. | https://openreview.net/pdf/f98704c521e5df54883c99e029e7e14b042f444b.pdf |
SemPPL: Predicting Pseudo-Labels for Better Contrastive Representations | https://openreview.net/forum?id=TAVBJ4aHsWt | https://openreview.net/forum?id=TAVBJ4aHsWt | Matko Bošnjak,Pierre Harvey Richemond,Nenad Tomasev,Florian Strub,Jacob C Walker,Felix Hill,Lars Holger Buesing,Razvan Pascanu,Charles Blundell,Jovana Mitrovic | ICLR 2023,Poster | Learning from large amounts of unsupervised data and a small amount of supervision is an important open problem in computer vision. We propose a new semi-supervised learning method, Semantic Positives via Pseudo-Labels (SEMPPL), that combines labelled and unlabelled data to learn informative representations. Our method extends self-supervised contrastive learning—where representations are shaped by distinguishing whether two samples represent the same underlying datum (positives) or not (negatives)—with a novel approach to selecting positives. To enrich the set of positives, we leverage the few existing ground-truth labels to predict the missing ones through a k-nearest neighbors classifier by using the learned embeddings of the labelled data. We thus extend the set of positives with datapoints having the same pseudo-label and call these semantic positives. We jointly learn the representation and predict bootstrapped pseudo-labels. This creates a reinforcing cycle. Strong initial representations enable better pseudo-label predictions which then improve the selection of semantic positives and lead to even better representations. SEMPPL outperforms competing semi-supervised methods, setting new state-of-the-art performance of 76% and 68.5% top-1 accuracy when using a ResNet-50 and training on 10% and 1% of labels on ImageNet, respectively. Furthermore, when using selective kernels, SEMPPL significantly outperforms the previous state of the art, achieving 72.3% and 78.3% top-1 accuracy on ImageNet with 1% and 10% labels, respectively, which is an absolute improvement of +7.8% and +6.2% over previous work. SEMPPL also exhibits state-of-the-art performance over larger ResNet models as well as strong robustness, out-of-distribution and transfer performance. We release the checkpoints and the evaluation code at https://github.com/deepmind/semppl. | https://openreview.net/pdf/d50c85ce057666d1d57860e1bd9919146f56b21f.pdf
Differentially Private Adaptive Optimization with Delayed Preconditioners | https://openreview.net/forum?id=j1zQGmQQOX1 | https://openreview.net/forum?id=j1zQGmQQOX1 | Tian Li,Manzil Zaheer,Ken Liu,Sashank J. Reddi,Hugh Brendan McMahan,Virginia Smith | ICLR 2023,Poster | Privacy costs may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4× relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data. | https://openreview.net/pdf/66a6dccdd07b9a1103315a658487355c6e004885.pdf |
Phenaki: Variable Length Video Generation from Open Domain Textual Descriptions | https://openreview.net/forum?id=vOEXS39nOF | https://openreview.net/forum?id=vOEXS39nOF | Ruben Villegas,Mohammad Babaeizadeh,Pieter-Jan Kindermans,Hernan Moraldo,Han Zhang,Mohammad Taghi Saffar,Santiago Castro,Julius Kunze,Dumitru Erhan | ICLR 2023,Poster | We present Phenaki, a model capable of realistic video synthesis given a sequence of textual prompts. Generating videos from text is particularly challenging due to the computational cost, limited quantities of high quality text-video data and variable length of videos. To address these issues, we introduce a new causal model for learning video representations that compresses the video to a small representation of discrete tokens. This tokenizer is auto-regressive in time, which allows it to work with video representations of different lengths. To generate video tokens from text, we use a bidirectional masked transformer conditioned on pre-computed text tokens. The generated video tokens are subsequently de-tokenized to create the actual video. To address data issues, we demonstrate how joint training on a large corpus of image-text pairs as well as a smaller number of video-text examples can result in generalization beyond what is available in the video datasets. Compared to previous video generation methods, Phenaki can generate arbitrarily long videos conditioned on a sequence of prompts (i.e. time-variable text or story) in the open domain. To the best of our knowledge, this is the first time a paper studies generating videos from time-variable prompts. | https://openreview.net/pdf/fe8e106a2746992c9c2e658bdc8cb9c89cc5a39a.pdf
Long Range Language Modeling via Gated State Spaces | https://openreview.net/forum?id=5MkYIYCbva | https://openreview.net/forum?id=5MkYIYCbva | Harsh Mehta,Ankit Gupta,Ashok Cutkosky,Behnam Neyshabur | ICLR 2023,Poster | State space models have been shown to be effective at modeling long range dependencies, especially on sequence classification tasks. In this work we focus on autoregressive sequence modeling over English books, Github source code and ArXiv mathematics articles. Based on recent developments around the effectiveness of gated activation functions, we propose a new layer named \textit{Gated State Space} (GSS) and show that it trains significantly faster than the diagonal version of S4 (i.e. DSS) on TPUs, is fairly competitive with several well-tuned Transformer-based baselines and exhibits zero-shot generalization to longer inputs while being straightforward to implement. Finally, we show that leveraging self-attention to model local dependencies improves the performance of GSS even further. | https://openreview.net/pdf/5c337ba7d563872f1a5d203061c5984c0059509e.pdf
Bayes-MIL: A New Probabilistic Perspective on Attention-based Multiple Instance Learning for Whole Slide Images | https://openreview.net/forum?id=_geIwiOyUhZ | https://openreview.net/forum?id=_geIwiOyUhZ | Yufei CUI,Ziquan Liu,Xiangyu Liu,Xue Liu,Cong Wang,Tei-Wei Kuo,Chun Jason Xue,Antoni B. Chan | ICLR 2023,Poster | Multiple instance learning (MIL) is a popular weakly-supervised learning model on the whole slide image (WSI) for AI-assisted pathology diagnosis. The recent advance in attention-based MIL allows the model to find its region-of-interest (ROI) for interpretation by learning the attention weights for image patches of WSI slides. However, we empirically find that the interpretability of some related methods is either untrustworthy as the principle of MIL is violated or unsatisfactory as the high-attention regions are not consistent with experts' annotations. In this paper, we propose Bayes-MIL to address the problem from a probabilistic perspective. The induced patch-level uncertainty is proposed as a new measure of MIL interpretability, which outperforms previous methods in matching doctors' annotations. We design a slide-dependent patch regularizer (SDPR) for the attention, imposing constraints derived from the MIL assumption on the attention distribution. SDPR explicitly constrains the model to generate correct attention values. The spatial information is further encoded by an approximate convolutional conditional random field (CRF), for better interpretability. Experimental results show Bayes-MIL outperforms the related methods in patch-level and slide-level metrics and provides much more interpretable ROIs on several large-scale WSI datasets. | https://openreview.net/pdf/2fc22942117b5645f3be532399184ac1fdedaa89.pdf
Investigating Multi-task Pretraining and Generalization in Reinforcement Learning | https://openreview.net/forum?id=sSt9fROSZRO | https://openreview.net/forum?id=sSt9fROSZRO | Adrien Ali Taiga,Rishabh Agarwal,Jesse Farebrother,Aaron Courville,Marc G Bellemare | ICLR 2023,Poster | Deep reinforcement learning (RL) has achieved remarkable successes in complex single-task settings. However, designing RL agents that can learn multiple tasks and leverage prior experience to quickly adapt to a related new task remains challenging. Despite previous attempts to improve on these areas, our understanding of multi-task training and generalization in RL remains limited. To fill this gap, we investigate the generalization capabilities of a popular actor-critic method, IMPALA. Specifically, we build on previous work that has advocated for the use of modes and difficulties of Atari 2600 games as a challenging benchmark for transfer learning in RL. We do so by pretraining an agent on multiple variants of the same Atari game before fine-tuning on the remaining never-before-seen variants. This protocol simplifies the multi-task pretraining phase by limiting negative interference between tasks and allows us to better understand the dynamics of multi-task training and generalization. We find that, given a fixed amount of pretraining data, agents trained with more variations are able to generalize better. Surprisingly, we also observe that this advantage can still be present after fine-tuning for 200M environment frames, compared to zero-shot transfer. This highlights the potential effect of a good learned representation. We also find that, even though small networks have remained popular for solving Atari 2600 games, increasing the capacity of the value and policy networks is critical to achieving good performance as we increase the number of pretraining modes and difficulties. Overall, our findings emphasize key points that are essential for efficient multi-task training and generalization in reinforcement learning. | https://openreview.net/pdf/dc00572cbf1ba37d6927b5663d6ca68300b6678e.pdf
FIT: A Metric for Model Sensitivity | https://openreview.net/forum?id=PDG4-Y3aboN | https://openreview.net/forum?id=PDG4-Y3aboN | Ben Zandonati,Adrian Alan Pol,Maurizio Pierini,Olya Sirkin,Tal Kopetz | ICLR 2023,Poster | Model compression is vital to the deployment of deep learning on edge devices. Low precision representations, achieved via quantization of weights and activations, can reduce inference time and memory requirements. However, quantifying and predicting the response of a model to the changes associated with this procedure remains challenging. This response is non-linear and heterogeneous throughout the network. Understanding which groups of parameters and activations are more sensitive to quantization than others is a critical stage in maximizing efficiency. For this purpose, we propose FIT. Motivated by an information geometric perspective, FIT combines the Fisher information with a model of quantization. We find that FIT can estimate the final performance of a network without retraining. FIT effectively fuses contributions from both parameter and activation quantization into a single metric. Additionally, FIT is fast to compute when compared to existing methods, demonstrating favourable convergence properties. These properties are validated experimentally across hundreds of quantization configurations, with a focus on layer-wise mixed-precision quantization. | https://openreview.net/pdf/e2969454c67149d7a4865ba3bd2d0d4f7978ce21.pdf |
Transfer Learning with Deep Tabular Models | https://openreview.net/forum?id=b0RuGUYo8pA | https://openreview.net/forum?id=b0RuGUYo8pA | Roman Levin,Valeriia Cherepanova,Avi Schwarzschild,Arpit Bansal,C. Bayan Bruss,Tom Goldstein,Andrew Gordon Wilson,Micah Goldblum | ICLR 2023,Poster | Recent work on deep learning for tabular data demonstrates the strong performance of deep tabular models, often bridging the gap between gradient boosted decision trees and neural networks. Accuracy aside, a major advantage of neural models is that they are easily fine-tuned in new domains and learn reusable features. This property is often exploited in computer vision and natural language applications, where transfer learning is indispensable when task-specific training data is scarce. In this work, we explore the benefits that representation learning provides for knowledge transfer in the tabular domain. We conduct experiments in a realistic medical diagnosis test bed with limited amounts of downstream data and find that transfer learning with deep tabular models provides a definitive advantage over gradient boosted decision tree methods. We further compare the supervised and self-supervised pretraining strategies and provide practical advice on transfer learning with tabular models. Finally, we propose a pseudo-feature method for cases where the upstream and downstream feature sets differ, a tabular-specific problem widespread in real-world applications. | https://openreview.net/pdf/fffc8ea6e63cf3db729b9a0289bd08c7eee4e8e5.pdf |
CrAM: A Compression-Aware Minimizer | https://openreview.net/forum?id=_eTZBs-yedr | https://openreview.net/forum?id=_eTZBs-yedr | Alexandra Peste,Adrian Vladu,Eldar Kurtic,Christoph H Lampert,Dan Alistarh | ICLR 2023,Poster | Deep neural networks (DNNs) often have to be compressed, via pruning and/or quantization, before they can be deployed in practical settings. In this work we propose a new compression-aware minimizer dubbed CrAM that modifies the optimization step in a principled way, in order to produce models whose local loss behavior is stable under compression operations such as pruning. Thus, dense models trained via CrAM should be compressible post-training, in a single step, without significant accuracy loss. Experimental results on standard benchmarks, such as residual networks for ImageNet classification and BERT models for language modelling, show that CrAM produces dense models that can be more accurate than the standard SGD/Adam-based baselines, but which are stable under weight pruning: specifically, we can prune models in one-shot to 70-80% sparsity with almost no accuracy loss, and to 90% with reasonable (∼ 1%) accuracy loss, which is competitive with gradual compression methods. Additionally, CrAM can produce sparse models which perform well for transfer learning, and it also works for semi-structured 2:4 pruning patterns supported by GPU hardware. The code for reproducing the results is available at: https://github.com/IST-DASLab/CrAM . | https://openreview.net/pdf/2cfdf569529c2c31505b56782cfe6ccfe97b5e49.pdf
Understanding Train-Validation Split in Meta-Learning with Neural Networks | https://openreview.net/forum?id=JVlyfHEEm0k | https://openreview.net/forum?id=JVlyfHEEm0k | Xinzhe Zuo,Zixiang Chen,Huaxiu Yao,Yuan Cao,Quanquan Gu | ICLR 2023,Poster | The goal of meta-learning is to learn a good prior model from a collection of tasks such that the learned prior is able to adapt quickly to new tasks without accessing much data from the new tasks. A common practice in meta-learning is to perform a train-validation split on each task, where the training set is used for adapting the model parameter to that specific task and the validation set is used for learning a prior model that is shared across all tasks. Despite its success and popularity in multitask learning and few-shot learning, the understanding of the train-validation split is still limited, especially when neural network models are used. In this paper, we study the benefit of the train-validation split for classification problems with neural network models trained by gradient descent. We prove that the train-validation split is necessary to learn a good prior model when the noise in the training sample is large, while the train-train method fails. We validate our theory by conducting experiments on both synthetic and real datasets. To the best of our knowledge, this is the first work towards the theoretical understanding of the train-validation split in meta-learning with neural networks. | https://openreview.net/pdf/b36628c00cf099f06c22fb730529a978598fddfa.pdf
Revisiting Robustness in Graph Machine Learning | https://openreview.net/forum?id=h1o7Ry9Zctm | https://openreview.net/forum?id=h1o7Ry9Zctm | Lukas Gosch,Daniel Sturm,Simon Geisler,Stephan Günnemann | ICLR 2023,Poster | Many works show that node-level predictions of Graph Neural Networks (GNNs) are unrobust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear if the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results suggest: $i)$ for a majority of nodes the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged semantics assumption; $ii)$ surprisingly, all assessed GNNs show over-robustness - that is robustness beyond the point of semantic change. We find this to be a complementary phenomenon to adversarial examples and show that including the label-structure of the training graph into the inference process of GNNs significantly reduces over-robustness, while having a positive effect on test accuracy and adversarial robustness. Theoretically, leveraging our new semantics-aware notion of robustness, we prove that there is no robustness-accuracy tradeoff for inductively classifying a newly added node. | https://openreview.net/pdf/4c93ca03fa1343f82eccf0cb81fed0e4c04ee6ad.pdf |
Variational Information Pursuit for Interpretable Predictions | https://openreview.net/forum?id=77lSWa-Tm3Z | https://openreview.net/forum?id=77lSWa-Tm3Z | Aditya Chattopadhyay,Kwan Ho Ryan Chan,Benjamin David Haeffele,Donald Geman,Rene Vidal | ICLR 2023,Poster | There is a growing interest in the machine learning community in developing predictive algorithms that are interpretable by design. To this end, recent work proposes to sequentially ask interpretable queries about data until a high confidence prediction can be made based on the answers obtained (the history). To promote short query-answer chains, a greedy procedure called Information Pursuit (IP) is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of query-answers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit (V-IP), a variational characterization of IP which bypasses the need to learn generative models. V-IP is based on finding a query selection strategy and a classifier that minimize the expected cross-entropy between true and predicted labels. We prove that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finite-dimensional parameterization of our strategy and classifier using deep networks and train them end-to-end using our objective. Empirically, V-IP is 10-100x faster than IP on different Vision and NLP tasks with competitive performance. Moreover, V-IP finds much shorter query chains when compared to reinforcement learning which is typically used in sequential-decision-making problems. Finally, we demonstrate the utility of V-IP on challenging tasks like medical diagnosis where the performance is far superior to the generative modeling approach. | https://openreview.net/pdf/e4a7da7aba80912163ac2cc81f64add6ca7960ba.pdf |
Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints | https://openreview.net/forum?id=T5nUQDrM4u | https://openreview.net/forum?id=T5nUQDrM4u | Aran Komatsuzaki,Joan Puigcerver,James Lee-Thorp,Carlos Riquelme Ruiz,Basil Mustafa,Joshua Ainslie,Yi Tay,Mostafa Dehghani,Neil Houlsby | ICLR 2023,Poster | Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget. | https://openreview.net/pdf/c037cbccf13c2380ece6d1296d30d8e07d64b943.pdf |
Lossless Adaptation of Pretrained Vision Models For Robotic Manipulation | https://openreview.net/forum?id=5IND3TXJRb- | https://openreview.net/forum?id=5IND3TXJRb- | Mohit Sharma,Claudio Fantacci,Yuxiang Zhou,Skanda Koppula,Nicolas Heess,Jon Scholz,Yusuf Aytar | ICLR 2023,Poster | Recent works have shown that large models pretrained on common visual learning tasks can provide useful representations for a wide range of specialized perception problems, as well as a variety of robotic manipulation tasks. While prior work on robotic manipulation has predominantly used frozen pretrained features, we demonstrate that in robotics this approach can fail to reach optimal performance, and that fine-tuning of the full model can lead to significantly better results. Unfortunately, fine-tuning disrupts the pretrained visual representation, and causes representational drift towards the fine-tuned task thus leading to a loss of the versatility of the original model. We introduce a method for lossless adaptation to address this shortcoming of classical fine-tuning. We demonstrate that appropriate placement of our parameter efficient adapters can significantly reduce the performance gap between frozen pretrained representations and full end-to-end fine-tuning without changes to the original representation and thus preserving original capabilities of the pretrained model. We perform a comprehensive investigation across three major model architectures (ViTs, NFNets, and ResNets), supervised (ImageNet-1K classification) and self-supervised pretrained weights (CLIP, BYOL, Visual MAE) in three manipulation task domains and 35 individual tasks, and demonstrate that our claims are strongly validated in various settings. Please see real world videos at https://sites.google.com/view/robo-adapters | https://openreview.net/pdf/28f4471b307550d3ccc1dd10e23ed088114a0109.pdf |
Logical Message Passing Networks with One-hop Inference on Atomic Formulas | https://openreview.net/forum?id=SoyOsp7i_l | https://openreview.net/forum?id=SoyOsp7i_l | Zihao Wang,Yangqiu Song,Ginny Wong,Simon See | ICLR 2023,Poster | Complex Query Answering (CQA) over Knowledge Graphs (KGs) has attracted a lot of attention to potentially support many applications. Given that KGs are usually incomplete, neural models are proposed to answer the logical queries by parameterizing set operators with complex neural networks. However, such methods usually train neural set operators, along with a large number of entity and relation embeddings, from scratch, so it remains unclear whether and how the embeddings or the neural set operators contribute to the performance. In this paper, we propose a simple framework for complex query answering that decouples the KG embeddings from the neural set operators. We propose to represent complex queries as query graphs. On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning for complex query answering. We leverage existing effective KG embeddings to conduct one-hop inferences on atomic formulas, the results of which are regarded as the messages passed in LMPNN. The reasoning process over the overall logical formulas is turned into the forward pass of LMPNN, which incrementally aggregates local information to finally predict the answers' embeddings. The complex logical inference across different types of queries is then learned from training examples based on the LMPNN architecture. Theoretically, our query-graph representation is more general than the prevailing operator-tree formulation, so our approach applies to a broader range of complex KG queries. Empirically, our approach yields a new state-of-the-art neural CQA model. Our research bridges the gap between complex KG query answering tasks and the long-standing achievements of knowledge graph representation learning. Our implementation can be found at https://github.com/HKUST-KnowComp/LMPNN. | https://openreview.net/pdf/714e7393366837991b2475fa6fe9f5536896ae83.pdf |
Noise-Robust De-Duplication at Scale | https://openreview.net/forum?id=bAz2DBS35i | https://openreview.net/forum?id=bAz2DBS35i | Emily Silcock,Luca D'Amico-Wong,Jinglin Yang,Melissa Dell | ICLR 2023,Poster | Identifying near duplicates within large, noisy text corpora has a myriad of applications that range from de-duplicating training datasets, reducing privacy risk, and evaluating test set leakage, to identifying reproduced news articles and literature within large corpora. Across these diverse applications, the overwhelming majority of work relies on $N$-grams. Limited efforts have been made to evaluate how well $N$-gram methods perform, in part because it is unclear how one could create an unbiased evaluation dataset for a massive corpus. This study uses the unique timeliness of historical news wires to create a 27,210 document dataset, with 122,876 positive duplicate pairs, for studying noise-robust de-duplication. The time-sensitivity of news makes comprehensive hand labelling feasible - despite the massive overall size of the corpus - as duplicates occur within a narrow date range. The study then develops and evaluates a range of de-duplication methods: hashing and $N$-gram overlap (which predominate in the literature), a contrastively trained bi-encoder, and a ``re-rank'' style approach combining a bi- and cross-encoder. The neural approaches significantly outperform hashing and $N$-gram overlap. We show that the bi-encoder scales well, de-duplicating a 10 million article corpus on a single GPU card in a matter of hours. We also apply our pre-trained model to the RealNews and patent portions of C4 (Colossal Clean Crawled Corpus), illustrating that a neural approach can identify many near duplicates missed by hashing, in the presence of various types of noise. The public release of our NEWS-COPY de-duplication dataset, codebase, and the pre-trained models will facilitate further research and applications. | https://openreview.net/pdf/8b9428278fecebbf54dd7baa215439f4ffb8f5f8.pdf |
Few-shot Backdoor Attacks via Neural Tangent Kernels | https://openreview.net/forum?id=a70lGJ-rwy | https://openreview.net/forum?id=a70lGJ-rwy | Jonathan Hayase,Sewoong Oh | ICLR 2023,Poster | In a backdoor attack, an attacker injects corrupted examples into the training set. The goal of the attacker is to cause the final trained model to predict the attacker's desired target label when a predefined trigger is added to test inputs. Central to these attacks is the trade-off between the success rate of the attack and the number of corrupted training examples injected. We pose this attack as a novel bilevel optimization problem: construct strong poison examples that maximize the attack success rate of the trained model. We use neural tangent kernels to approximate the training dynamics of the model being attacked and automatically learn strong poison examples. We experiment on subclasses of CIFAR-10 and ImageNet with WideResNet-34 and ConvNeXt architectures on periodic and patch trigger attacks and show that NTBA-designed poisoned examples achieve, for example, an attack success rate of 90% with a ten times smaller number of injected poison examples compared to the baseline. We provide an interpretation of the NTBA-designed attacks using the analysis of kernel linear regression. We further demonstrate a vulnerability in overparametrized deep neural networks, which is revealed by the shape of the neural tangent kernel. | https://openreview.net/pdf/e0eed4dce64a1fba0b2095181b6f7486910402e0.pdf |
Hyperparameter Optimization through Neural Network Partitioning | https://openreview.net/forum?id=nAgdXgfmqj | https://openreview.net/forum?id=nAgdXgfmqj | Bruno Kacper Mlodozeniec,Matthias Reisser,Christos Louizos | ICLR 2023,Poster | Well-tuned hyperparameters are crucial for obtaining good generalization behavior in neural networks. They can enforce appropriate inductive biases, regularize the model and improve performance --- especially in the presence of limited data. In this work, we propose a simple and efficient way for optimizing hyperparameters inspired by the marginal likelihood, an optimization objective that requires no validation data. Our method partitions the training data and a neural network model into $K$ data shards and parameter partitions, respectively. Each partition is associated with and optimized only on specific data shards. Combining these partitions into subnetworks allows us to define the "out-of-training-sample" loss of a subnetwork, i.e., the loss on data shards unseen by the subnetwork, as the objective for hyperparameter optimization. We demonstrate that we can apply this objective to optimize a variety of different hyperparameters in a single training run while being significantly computationally cheaper than alternative methods aiming to optimize the marginal likelihood for neural networks. Lastly, we also focus on optimizing hyperparameters in federated learning, where retraining and cross-validation are particularly challenging. | https://openreview.net/pdf/c737a202fd8cca6c64afefee284c5754e3676139.pdf |
Symmetries, Flat Minima, and the Conserved Quantities of Gradient Flow | https://openreview.net/forum?id=9ZpciCOunFb | https://openreview.net/forum?id=9ZpciCOunFb | Bo Zhao,Iordan Ganev,Robin Walters,Rose Yu,Nima Dehmamy | ICLR 2023,Poster | Empirical studies of the loss landscape of deep networks have revealed that many local minima are connected through low-loss valleys. Yet, little is known about the theoretical origin of such valleys. We present a general framework for finding continuous symmetries in the parameter space, which carve out low-loss valleys. Our framework uses equivariances of the activation functions and can be applied to different layer architectures. To generalize this framework to nonlinear neural networks, we introduce a novel set of nonlinear, data-dependent symmetries. These symmetries can transform a trained model such that it performs similarly on new samples, which allows ensemble building that improves robustness under certain adversarial attacks. We then show that conserved quantities associated with linear symmetries can be used to define coordinates along low-loss valleys. The conserved quantities help reveal that using common initialization methods, gradient flow only explores a small part of the global minimum. By relating conserved quantities to convergence rate and sharpness of the minimum, we provide insights on how initialization impacts convergence and generalizability. | https://openreview.net/pdf/5e9f00255cce370f6775a942b9adf4fbcb67ca31.pdf |
Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees | https://openreview.net/forum?id=ooxDOe7ZtBe | https://openreview.net/forum?id=ooxDOe7ZtBe | Swarnadeep Saha,Shiyue Zhang,Peter Hase,Mohit Bansal | ICLR 2023,Poster | Current abstractive summarization models either suffer from a lack of clear interpretability or provide incomplete rationales by only highlighting parts of the source document. To this end, we propose the Summarization Program (SP), an interpretable modular framework consisting of an (ordered) list of binary trees, each encoding the step-by-step generative process of an abstractive summary sentence from the source document. A Summarization Program contains one root node per summary sentence, and a distinct tree connects each summary sentence (root node) to the document sentences (leaf nodes) from which it is derived, with the connecting nodes containing intermediate generated sentences. Edges represent different modular operations involved in summarization such as sentence fusion, compression, and paraphrasing. We first propose an efficient best-first search method over neural modules, SP-Search that identifies SPs for human summaries by directly optimizing for ROUGE scores. Next, using these programs as automatic supervision, we propose seq2seq models that generate Summarization Programs, which are then executed to obtain final summaries. We demonstrate that SP-Search effectively represents the generative process behind human summaries using modules that are typically faithful to their intended behavior. We also conduct a simulation study to show that Summarization Programs improve the interpretability of summarization models by allowing humans to better simulate model reasoning. Summarization Programs constitute a promising step toward interpretable and modular abstractive summarization, a complex task previously addressed primarily through blackbox end-to-end neural systems. | https://openreview.net/pdf/fbaeabb25bb85d42efe7fa1e96b443c0ef45056c.pdf |
Planning with Large Language Models for Code Generation | https://openreview.net/forum?id=Lr8cOOtYbfL | https://openreview.net/forum?id=Lr8cOOtYbfL | Shun Zhang,Zhenfang Chen,Yikang Shen,Mingyu Ding,Joshua B. Tenenbaum,Chuang Gan | ICLR 2023,Poster | Existing large language model-based code generation pipelines typically use beam search or sampling algorithms during the decoding process. Although the programs they generate achieve high token-matching-based scores, they often fail to compile or generate incorrect outputs. The main reason is that conventional Transformer decoding algorithms may not be the best choice for code generation. In this work, we propose a novel Transformer decoding algorithm, Planning-Guided Transformer Decoding (PG-TD), that uses a planning algorithm to do lookahead search and guide the Transformer to generate better programs. Specifically, instead of simply optimizing the likelihood of the generated sequences, the Transformer makes use of a planner that generates candidate programs and tests them on public test cases. The Transformer can therefore make more informed decisions and generate tokens that will eventually lead to higher-quality programs. We also design a mechanism that shares information between the Transformer and the planner to make our algorithm computationally efficient. We empirically evaluate our framework with several large language models as backbones on public coding challenge benchmarks, showing that 1) it can generate programs that consistently achieve higher performance compared with competing baseline methods; 2) it enables controllable code generation, such as concise code and highly-commented code, by optimizing a modified objective. | https://openreview.net/pdf/5f8b793197851829ddf2e08915b38f1549cb5b9d.pdf |
Equivariance-aware Architectural Optimization of Neural Networks | https://openreview.net/forum?id=a6rCdfABJXg | https://openreview.net/forum?id=a6rCdfABJXg | Kaitlin Maile,Dennis George Wilson,Patrick Forré | ICLR 2023,Poster | Incorporating equivariance to symmetry groups as a constraint during neural network training can improve performance and generalization for tasks exhibiting those symmetries, but such symmetries are often not perfectly nor explicitly present. This motivates algorithmically optimizing the architectural constraints imposed by equivariance. We propose the equivariance relaxation morphism, which preserves functionality while reparameterizing a group equivariant layer to operate with equivariance constraints on a subgroup, as well as the $[G]$-mixed equivariant layer, which mixes layers constrained to different groups to enable within-layer equivariance optimization. We further present evolutionary and differentiable neural architecture search (NAS) algorithms that utilize these mechanisms respectively for equivariance-aware architectural optimization. Experiments across a variety of datasets show the benefit of dynamically constrained equivariance to find effective architectures with approximate equivariance. | https://openreview.net/pdf/40adb285350cd61721cd51be17e7bc6e709c8404.pdf |
Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time | https://openreview.net/forum?id=FbRY1XVfwK | https://openreview.net/forum?id=FbRY1XVfwK | Jun-Kun Wang,Andre Wibisono | ICLR 2023,Poster | Hamiltonian Monte Carlo (HMC) is a popular sampling method. While quite a few works have studied various aspects of this method, an interesting question is how to choose its integration time to achieve acceleration. In this work, we consider accelerating the process of sampling from a distribution $\pi(x) \propto \exp(-f(x))$ via HMC with time-varying integration time. When the potential $f$ is $L$-smooth and $m$-strongly convex, i.e. for sampling from a log-smooth and strongly log-concave target distribution $\pi$, it is known that under a constant integration time, the number of iterations that ideal HMC takes to get within $\epsilon$ Wasserstein-2 distance of the target $\pi$ is $O( \kappa \log \frac{1}{\epsilon} )$, where $\kappa := \frac{L}{m}$ is the condition number. We propose a scheme of time-varying integration time based on the roots of Chebyshev polynomials. We show that in the case of a quadratic potential $f$, i.e. when the target $\pi$ is a Gaussian distribution, ideal HMC with this choice of integration time only takes $O( \sqrt{\kappa} \log \frac{1}{\epsilon} )$ iterations to reach Wasserstein-2 distance less than $\epsilon$; this improvement in the dependence on the condition number is akin to acceleration in optimization. The design and analysis of HMC with the proposed integration time is built on the tools of Chebyshev polynomials. Experiments find the advantage of adopting our scheme of time-varying integration time even for sampling from distributions with smooth strongly convex potentials that are not quadratic. | https://openreview.net/pdf/4edd646eadffbd394fd642e6eb387dc6be646d76.pdf |
Order Matters: Agent-by-agent Policy Optimization | https://openreview.net/forum?id=Q-neeWNVv1 | https://openreview.net/forum?id=Q-neeWNVv1 | Xihuai Wang,Zheng Tian,Ziyu Wan,Ying Wen,Jun Wang,Weinan Zhang | ICLR 2023,Poster | While multi-agent trust region algorithms have achieved great success empirically in solving coordination tasks, most of them suffer from a non-stationarity problem since agents update their policies simultaneously. In contrast, a sequential scheme that updates policies agent-by-agent provides another perspective and shows strong performance. However, sample inefficiency and the lack of monotonic improvement guarantees for each agent are still two significant challenges for the sequential scheme. In this paper, we propose the Agent-by-agent Policy Optimization (A2PO) algorithm to improve the sample efficiency and retain the guarantees of monotonic improvement for each agent during training. We justify the tightness of the monotonic improvement bound compared with other trust region algorithms. From the perspective of sequentially updating agents, we further consider the effect of agent updating order and extend the theory of non-stationarity into the sequential update scheme. To evaluate A2PO, we conduct a comprehensive empirical study on four benchmarks: StarCraftII, Multi-agent MuJoCo, Multi-agent Particle Environment, and Google Research Football full game scenarios. A2PO consistently outperforms strong baselines. | https://openreview.net/pdf/b7c35e63818d65e4523a6ae4314674a0eeb7bb36.pdf |
On the Convergence of AdaGrad(Norm) on $\mathbb{R}^d$: Beyond Convexity, Non-Asymptotic Rate and Acceleration | https://openreview.net/forum?id=ULnHxczCBaE | https://openreview.net/forum?id=ULnHxczCBaE | Zijian Liu,Ta Duy Nguyen,Alina Ene,Huy Nguyen | ICLR 2023,Poster | Existing analysis of AdaGrad and other adaptive methods for smooth convex optimization is typically for functions with bounded domain diameter. In unconstrained problems, previous works guarantee an asymptotic convergence rate without an explicit constant factor that holds true for the entire function class. Furthermore, in the stochastic setting, only a modified version of AdaGrad, different from the one commonly used in practice, in which the latest gradient is not used to update the stepsize, has been analyzed. Our paper aims at bridging these gaps and developing a deeper understanding of AdaGrad and its variants in the standard setting of smooth convex functions as well as the more general setting of quasar convex functions. First, we demonstrate new techniques to explicitly bound the convergence rate of the vanilla AdaGrad for unconstrained problems in both deterministic and stochastic settings. Second, we propose a variant of AdaGrad for which we can show the convergence of the last iterate, instead of the average iterate. Finally, we give new accelerated adaptive algorithms and their convergence guarantee in the deterministic setting with explicit dependency on the problem parameters, improving upon the asymptotic rate shown in previous works. | https://openreview.net/pdf/a8b5702662d7f1559c45e854549715043449d6fd.pdf |
SP2: A Second Order Stochastic Polyak Method | https://openreview.net/forum?id=5mqFra2ZSuf | https://openreview.net/forum?id=5mqFra2ZSuf | Shuang Li,William Joseph Swartworth,Martin Takáč,Deanna Needell,Robert M. Gower | ICLR 2023,Poster | Recently, the SP (Stochastic Polyak step size) method has emerged as a competitive adaptive method for setting the step sizes of SGD. SP can be interpreted as a method specialized to interpolated models, since it solves the interpolation equations. SP solves these equations using local linearizations of the model. We take a step further and develop a method for solving the interpolation equations that uses the local second-order approximation of the model. Our resulting method SP2 uses Hessian-vector products to speed up the convergence of SP. Furthermore, and rather uniquely among second-order methods, the design of SP2 in no way relies on positive definite Hessian matrices or convexity of the objective function. We show SP2 is competitive both in theory and in experiments: it performs well on matrix completion, non-convex test problems, and logistic regression, and we provide a convergence theory on sums of quadratics. | https://openreview.net/pdf/b8bbe368f67c756d1289de0bf523893e220e8eca.pdf |
Making Better Decision by Directly Planning in Continuous Control | https://openreview.net/forum?id=r8Mu7idxyF | https://openreview.net/forum?id=r8Mu7idxyF | Jinhua Zhu,Yue Wang,Lijun Wu,Tao Qin,Wengang Zhou,Tie-Yan Liu,Houqiang Li | ICLR 2023,Poster | By properly utilizing the learned environment model, model-based reinforcement learning methods can improve the sample efficiency for decision-making problems. Beyond using the learned environment model to train a policy, the success of MCTS-based methods shows that directly incorporating the learned environment model as a planner to make decisions might be more effective. However, when the action space is high-dimensional and continuous, directly planning according to the learned model is costly and non-trivial because of two challenges: (1) the infinite number of candidate actions and (2) the temporal dependency between actions at different timesteps. To address these challenges, inspired by Differential Dynamic Programming (DDP) in optimal control theory, we design a novel Policy Optimization with Model Planning (POMP) algorithm, which incorporates a carefully designed Deep Differential Dynamic Programming (D3P) planner into the model-based RL framework. In the D3P planner, (1) to effectively plan in the continuous action space, we construct a locally quadratic programming problem that uses a gradient-based optimization process to replace search. (2) To take the temporal dependency of actions at different timesteps into account, we leverage the updated and latest actions of previous timesteps (i.e., step $1, \cdots, h-1$) to update the action of the current step (i.e., step $h$), instead of updating all actions simultaneously. We theoretically prove the convergence rate for our D3P planner and analyze the effect of the feedback term. In practice, to effectively apply the neural network based D3P planner in reinforcement learning, we leverage the policy network to initialize the action sequence and keep the action update conservative in the planning process. Experiments demonstrate that POMP consistently improves sample efficiency on widely used continuous control tasks. Our code is released at https://github.com/POMP-D3P/POMP-D3P. | https://openreview.net/pdf/d5862528e904b0999da130078b73a4adb326c44d.pdf |
HiT-MDP: Learning the SMDP option framework on MDPs with Hidden Temporal Embeddings | https://openreview.net/forum?id=VuuDXDgujAc | https://openreview.net/forum?id=VuuDXDgujAc | Chang Li,Dongjin Song,Dacheng Tao | ICLR 2023,Poster | The standard option framework is developed on the Semi-Markov Decision Process (SMDP) which is unstable to optimize and sample inefficient. To this end, we propose the Hidden Temporal MDP (HiT-MDP) and prove that the option-induced HiT-MDP is homomorphic equivalent to the option-induced SMDP. A novel transformer-based framework is introduced to learn options' embedding vectors (rather than conventional option tuples) on HiT-MDPs. We then derive a stable and sample efficient option discovering method under the maximum-entropy policy gradient framework. Extensive experiments on challenging Mujoco environments demonstrate HiT-MDP's efficiency and effectiveness: under widely used configurations, HiT-MDP achieves competitive, if not better, performance compared to the state-of-the-art baselines on all finite horizon and transfer learning environments. Moreover, HiT-MDP significantly outperforms all baselines on infinite horizon environments while exhibiting smaller variance, faster convergence, and better interpretability. Our work potentially sheds light on the theoretical ground of extending the option framework into a large-scale foundation model. | https://openreview.net/pdf/61698cdc4d4f3090830fd86c25540ed087f4ae78.pdf |
(Certified!!) Adversarial Robustness for Free! | https://openreview.net/forum?id=JLg5aHHv7j | https://openreview.net/forum?id=JLg5aHHv7j | Nicholas Carlini,Florian Tramer,Krishnamurthy Dj Dvijotham,Leslie Rice,Mingjie Sun,J Zico Kolter | ICLR 2023,Poster | In this paper we show how to achieve state-of-the-art certified adversarial robustness to 2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within a 2-norm of 0.5, an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters. | https://openreview.net/pdf/ffd8346924f2b5c29c98d9eebd37fde97ba3461a.pdf |
Heterogeneous Neuronal and Synaptic Dynamics for Spike-Efficient Unsupervised Learning: Theory and Design Principles | https://openreview.net/forum?id=QIRtAqoXwj | https://openreview.net/forum?id=QIRtAqoXwj | Biswadeep Chakraborty,Saibal Mukhopadhyay | ICLR 2023,Poster | This paper shows that the heterogeneity in neuronal and synaptic dynamics reduces the spiking activity of a Recurrent Spiking Neural Network (RSNN) while improving prediction performance, enabling spike-efficient (unsupervised) learning. We analytically show that the diversity in neurons' integration/relaxation dynamics improves an RSNN's ability to learn more distinct input patterns (higher memory capacity), leading to improved classification and prediction performance. We further prove that heterogeneous Spike-Timing-Dependent-Plasticity (STDP) dynamics of synapses reduce spiking activity but preserve memory capacity. The analytical results motivate Heterogeneous RSNN design using Bayesian optimization to determine heterogeneity in neurons and synapses to improve $\mathcal{E}$, defined as the ratio of spiking activity and memory capacity. The empirical results on time series classification and prediction tasks show that optimized HRSNN increases performance and reduces spiking activity compared to a homogeneous RSNN. | https://openreview.net/pdf/9f7c495d032385cfaeae5a10554a9593efeb5a33.pdf |
MMVAE+: Enhancing the Generative Quality of Multimodal VAEs without Compromises | https://openreview.net/forum?id=sdQGxouELX | https://openreview.net/forum?id=sdQGxouELX | Emanuele Palumbo,Imant Daunhawer,Julia E Vogt | ICLR 2023,Poster | Multimodal VAEs have recently gained attention as efficient models for weakly-supervised generative learning with multiple modalities. However, all existing variants of multimodal VAEs are affected by a non-trivial trade-off between generative quality and generative coherence. In particular mixture-based models achieve good coherence only at the expense of sample diversity and a resulting lack of generative quality. We present a novel variant of the mixture-of-experts multimodal variational autoencoder that improves its generative quality, while maintaining high semantic coherence. We model shared and modality-specific information in separate latent subspaces, proposing an objective that overcomes certain dependencies on hyperparameters that arise for existing approaches with the same latent space structure. Compared to these existing approaches, we show increased robustness with respect to changes in the design of the latent space, in terms of the capacity allocated to modality-specific subspaces. We show that our model achieves both good generative coherence and high generative quality in challenging experiments, including more complex multimodal datasets than those used in previous works. | https://openreview.net/pdf/629389d94e1def86f254ed563a21b9df2af23304.pdf |
In-Situ Text-Only Adaptation of Speech Models with Low-Overhead Speech Imputations | https://openreview.net/forum?id=T2Ncx_PN2K | https://openreview.net/forum?id=T2Ncx_PN2K | Ashish Mittal,Sunita Sarawagi,Preethi Jyothi | ICLR 2023,Poster | Fast and accurate adaptation of automatic speech recognition (ASR) systems using only text data in the target domain is a problem of long-standing practical relevance. Text-only adaptation was easy in traditional cascaded ASR systems with completely decoupled acoustic and language models. Recently, the RNN-Transducer (RNN-T) has emerged as a default ASR model because of its high accuracy, low latency, and capability of supporting streaming input. However, text-only adaptation of the RNN-T model is significantly more challenging due to its tight integration of acoustic and language models and end-to-end training. Existing recent approaches for text-only adaptation of RNN-Ts either entail significant modification to the network or introduce high latency during decoding. We propose a new approach (TOLSTOI) that imputes speech representations internal to a baseline RNN-T, starting from text-only inputs, and performs in-situ adaptation that results in higher adaptation accuracy without any runtime overheads during decoding. Our imputation model is a function of the labeled data and trained parameters of the ASR model and, as we show, is more effective in controlling catastrophic forgetting compared to existing methods. We establish the effectiveness of TOLSTOI using three target domains and two ASR models of varying complexity. We yield up to 35% relative reduction in word error rate with text-only adaptation while forgetting the least compared to existing adaptation approaches. Our method is easy to implement and can be harnessed on existing RNN-T models without requiring ASR model training from scratch. | https://openreview.net/pdf/54c647f24fd2b8654ec28bcd6943439a635a84f8.pdf |
Scaling Laws For Deep Learning Based Image Reconstruction | https://openreview.net/forum?id=op-ceGueqc4 | https://openreview.net/forum?id=op-ceGueqc4 | Tobit Klug,Reinhard Heckel | ICLR 2023,Poster | Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image perform excellently for a variety of linear inverse problems. Current methods are trained on only a few hundred or thousand images, as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution, and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Interpolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance. | https://openreview.net/pdf/534317d02440ff84e78965fc326e08dc42b25be7.pdf |
Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning | https://openreview.net/forum?id=3oWo92cQyxL | https://openreview.net/forum?id=3oWo92cQyxL | Ivona Najdenkoska,Xiantong Zhen,Marcel Worring | ICLR 2023,Poster | Multimodal few-shot learning is challenging due to the large domain gap between vision and language modalities. Existing methods are trying to communicate visual concepts as prompts to frozen language models, but rely on hand-engineered task induction to reduce the hypothesis space. To make the whole process learnable, we introduce a multimodal meta-learning approach. Specifically, our approach decomposes the training of the model into a set of related multimodal few-shot tasks. We define a meta-mapper network, acting as a meta-learner, to efficiently bridge frozen large-scale vision and language models and leverage their already learned capacity. By updating the learnable parameters only of the meta-mapper, it learns to accrue shared meta-knowledge among these tasks. Thus, it can rapidly adapt to newly presented samples with only a few gradient updates. Importantly, it induces the task in a completely data-driven manner, with no need for a hand-engineered task induction. We evaluate our approach on recently proposed multimodal few-shot benchmarks, measuring how rapidly the model can bind novel visual concepts to words and answer visual questions by observing only a limited set of labeled examples. The experimental results show that our meta-learning approach outperforms the baseline across multiple datasets and various training settings while being computationally more efficient. | https://openreview.net/pdf/0e9bd6133a3659d2a5883ce2063de3dfff12c275.pdf |
SoftZoo: A Soft Robot Co-design Benchmark For Locomotion In Diverse Environments | https://openreview.net/forum?id=Xyme9p1rpZw | https://openreview.net/forum?id=Xyme9p1rpZw | Tsun-Hsuan Wang,Pingchuan Ma,Andrew Everett Spielberg,Zhou Xian,Hao Zhang,Joshua B. Tenenbaum,Daniela Rus,Chuang Gan | ICLR 2023,Poster | While significant research progress has been made in robot learning for control, unique challenges arise when simultaneously co-optimizing morphology. Existing work has typically been tailored for particular environments or representations. In order to more fully understand inherent design and performance tradeoffs and accelerate the development of new breeds of soft robots, a comprehensive virtual platform — with well-established tasks, environments, and evaluation metrics — is needed. In this work, we introduce SoftZoo, a soft robot co-design platform for locomotion in diverse environments. SoftZoo supports an extensive, naturally-inspired material set, including the ability to simulate environments such as flat ground, desert, wetland, clay, ice, snow, shallow water, and ocean. Further, it provides a variety of tasks relevant for soft robotics, including fast locomotion, agile turning, and path following, as well as differentiable design representations for morphology and control. Combined, these elements form a feature-rich platform for analysis and development of soft robot co-design algorithms. We benchmark prevalent representations and co-design algorithms, and shed light on (1) the interplay between environment, morphology, and behavior, (2) the importance of design space representations, (3) the ambiguity in muscle formation and controller synthesis, and (4) the value of differentiable physics. We envision that SoftZoo will serve as a standard platform and template an approach toward the development of novel representations and algorithms for co-designing soft robots’ behavioral and morphological intelligence. Demos are available on our project page. | https://openreview.net/pdf/9aebabf194749af0a4efc5f28bb6630bcd7f9917.pdf |
Improved Learning-augmented Algorithms for k-means and k-medians Clustering | https://openreview.net/forum?id=dCSFiAl_VO3 | https://openreview.net/forum?id=dCSFiAl_VO3 | Thy Dinh Nguyen,Anamay Chaturvedi,Huy Nguyen | ICLR 2023,Poster | We consider the problem of clustering in the learning-augmented setting. We are given a data set in $d$-dimensional Euclidean space, and a label for each data point given by a predictor indicating what subsets of points should be clustered together. This setting captures situations where we have access to some auxiliary information about the data set relevant for our clustering objective, for instance the labels output by a neural network. Following prior work, we assume that there are at most an $\alpha \in (0,c)$ for some $c<1$ fraction of false positives and false negatives in each predicted cluster, in the absence of which the labels would attain the optimal clustering cost $\mathrm{OPT}$. For a dataset of size $m$, we propose a deterministic $k$-means algorithm that produces centers with an improved bound on the clustering cost compared to the previous randomized state-of-the-art algorithm while preserving the $O( d m \log m)$ runtime. Furthermore, our algorithm works even when the predictions are not very accurate, i.e., our cost bound holds for $\alpha$ up to $1/2$, an improvement from $\alpha$ being at most $1/7$ in previous work. For the $k$-medians problem we again improve upon prior work by achieving a biquadratic improvement in the dependence of the approximation factor on the accuracy parameter $\alpha$ to get a cost of $(1+O(\alpha))\mathrm{OPT}$, while requiring essentially just $O(md \log^3 m/\alpha)$ runtime. | https://openreview.net/pdf/fc031fcffcb2f6b9903ade5988a8eaad0dafbce5.pdf |
Neural Implicit Shape Editing using Boundary Sensitivity | https://openreview.net/forum?id=CMPIBjmhpo | https://openreview.net/forum?id=CMPIBjmhpo | Arturs Berzins,Moritz Ibing,Leif Kobbelt | ICLR 2023,Poster | Neural fields are receiving increased attention as a geometric representation due to their ability to compactly store detailed and smooth shapes and easily undergo topological changes. Compared to classic geometry representations, however, neural representations do not allow the user to exert intuitive control over the shape. Motivated by this, we leverage boundary sensitivity to express how perturbations in parameters move the shape boundary. This allows us to interpret the effect of each learnable parameter and study achievable deformations. With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation. Prescribing the deformation only locally allows the rest of the shape to change according to some prior, such as semantics or deformation rigidity. Our method is agnostic to the model and its training and updates the NN in-place. Furthermore, we show how boundary sensitivity helps to optimize and constrain objectives (such as surface area and volume), which are difficult to compute without first converting to another representation, such as a mesh. | https://openreview.net/pdf/2bb2869a2fc20265557dcaa1d8fb95e2369b2d06.pdf |
Amortised Invariance Learning for Contrastive Self-Supervision | https://openreview.net/forum?id=nXOhmfFu5n | https://openreview.net/forum?id=nXOhmfFu5n | Ruchika Chavhan,Jan Stuehmer,Calum Heggan,Mehrdad Yaghoobi,Timothy Hospedales | ICLR 2023,Poster | Contrastive self-supervised learning methods famously produce high quality transferable representations by learning invariances to different data augmentations. Invariances established during pre-training can be interpreted as strong inductive biases. However these may or may not be helpful, depending on if they match the invariance requirements of downstream tasks or not. This has led to several attempts to learn task-specific invariances during pre-training, however, these methods are highly compute intensive and tedious to train. We introduce the notion of amortized invariance learning for contrastive self supervision. In the pre-training stage, we parameterize the feature extractor by differentiable invariance hyper-parameters that control the invariances encoded by the representation. Then, for any downstream task, both linear readout and task-specific invariance requirements can be efficiently and effectively learned by gradient-descent. We evaluate the notion of amortized invariances for contrastive learning over two different modalities: vision and audio, on two widely-used contrastive learning methods in vision: SimCLR and MoCo-v2 with popular architectures like ResNets and Vision Transformers, and SimCLR with ResNet-18 for audio. We show that our amortized features provide a reliable way to learn diverse downstream tasks with different invariance requirements, while using a single feature and avoiding task-specific pre-training. This provides an exciting perspective that opens up new horizons in the field of general purpose representation learning. | https://openreview.net/pdf/62b125792eabd26dabe695ba7886d7df4261dfa1.pdf |
Revisiting Populations in multi-agent Communication | https://openreview.net/forum?id=n-UHRIdPju | https://openreview.net/forum?id=n-UHRIdPju | Paul Michel,Mathieu Rita,Kory Wallace Mathewson,Olivier Tieleman,Angeliki Lazaridou | ICLR 2023,Poster | Despite evidence from cognitive sciences that larger groups of speakers tend to develop more structured languages in human communication, scaling up to populations has failed to yield significant benefits in emergent multi-agent communication. In this paper we advocate for an alternate population-level training paradigm for referential games based on the idea of "partitioning" the agents into sender-receiver pairs and limiting co-adaptation across pairs. We show that this results in optimizing a different objective at the population level, where agents maximize (1) their respective "internal" communication accuracy and (2) some measure of alignment between agents. In experiments, we find that this leads to the emergence of languages that are significantly more compositional. Moreover, when agents are trained in populations that are not fully connected (i.e., not all agent pairs interact at training time), this approach reduces multi-linguality and improves zero-shot communication with new agents (i.e., agents are able to communicate successfully with other agents outside their training partners). | https://openreview.net/pdf/670a147872e92b070c64ecc75a028560548072a4.pdf |