title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian | https://openreview.net/forum?id=ZsvWb6mJnMv | https://openreview.net/forum?id=ZsvWb6mJnMv | Paria Rashidinejad,Hanlin Zhu,Kunhe Yang,Stuart Russell,Jiantao Jiao | ICLR 2023,Top 25% | Offline reinforcement learning (RL), which aims at learning good policies from historical data, has received significant attention over the past years. Much effort has focused on improving offline RL practicality by addressing the prevalent issue of partial data coverage through various forms of conservative policy learning. While the majority of algorithms do not have finite-sample guarantees, several provable conservative offline RL algorithms are designed and analyzed within the single-policy concentrability framework that handles partial coverage. Yet, in the nonlinear function approximation setting where confidence intervals are difficult to obtain, existing provable algorithms suffer from computational intractability, prohibitively strong assumptions, and suboptimal statistical rates. In this paper, we leverage the marginalized importance sampling (MIS) formulation of RL and present the first set of offline RL algorithms that are statistically optimal and practical under general function approximation and single-policy concentrability, bypassing the need for uncertainty quantification. We identify that the key to successfully solving the sample-based approximation of the MIS problem is ensuring that certain occupancy validity constraints are nearly satisfied. We enforce these constraints by a novel application of the augmented Lagrangian method and prove the following result: with the MIS formulation, augmented Lagrangian is enough for statistically optimal offline RL. In stark contrast to prior algorithms that induce additional conservatism through methods such as behavior regularization, our approach provably eliminates this need and reinterprets regularizers as "enforcers of occupancy validity" rather than "promoters of conservatism." | https://openreview.net/pdf/1e66fdaab805cebe5a84c568baa5b2a817e6b6f3.pdf |
DocPrompting: Generating Code by Retrieving the Docs | https://openreview.net/forum?id=ZTCxT2t2Ru | https://openreview.net/forum?id=ZTCxT2t2Ru | Shuyan Zhou,Uri Alon,Frank F. Xu,Zhengbao Jiang,Graham Neubig | ICLR 2023,Top 25% | Publicly available source-code libraries are continuously growing and changing. This makes it impossible for models of code
to keep current with all available APIs by simply training these models on existing code repositories. Thus, existing models inherently cannot generalize to using unseen functions and libraries, because these would never appear in the training data. In contrast, when human programmers use functions and libraries for the first time, they frequently refer to textual resources such as code manuals and documentation, to explore and understand the available functionality. Inspired by this observation, we introduce DocPrompting: a natural-language-to-code generation approach that explicitly leverages documentation by (1) retrieving the relevant documentation pieces given an NL intent, and (2) generating code based on the NL intent and the retrieved documentation. DocPrompting is general: it can be applied to any programming language and is agnostic to the underlying neural model. We demonstrate that DocPrompting consistently improves NL-to-code models: DocPrompting improves strong base models such as CodeT5 by 2.85% in pass@1 (52% relative gain) and 4.39% in pass@10 (30% relative gain) in execution-based evaluation on the popular Python CoNaLa benchmark; on a new Bash dataset tldr, DocPrompting improves CodeT5 and GPT-Neo1.3B by up to absolute 6.9% exact match. | https://openreview.net/pdf/c9881a374e0bce9d005809d63e83dfdae53d9d40.pdf |
A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation | https://openreview.net/forum?id=HcUf-QwZeFh | https://openreview.net/forum?id=HcUf-QwZeFh | Hiroki Furuta,Yusuke Iwasawa,Yutaka Matsuo,Shixiang Shane Gu | ICLR 2023,Top 25% | The rise of generalist large-scale models in natural language and vision has made us expect that a massive data-driven approach could achieve broader generalization in other domains such as continuous control. In this work, we explore a method for learning a single policy that manipulates various forms of agents to solve various tasks by distilling a large amount of proficient behavioral data. In order to align input-output (IO) interface among multiple tasks and diverse agent morphologies while preserving essential 3D geometric relations, we introduce morphology-task graph, which treats observations, actions and goals/task in a unified graph representation. We also develop MxT-Bench for fast large-scale behavior generation, which supports procedural generation of diverse morphology-task combinations with a minimal blueprint and hardware-accelerated simulator. Through efficient representation and architecture selection on MxT-Bench, we find out that a morphology-task graph representation coupled with Transformer architecture improves the multi-task performances compared to other baselines including recent discrete tokenization, and provides better prior knowledge for zero-shot transfer or sample efficiency in downstream multi-task imitation learning. Our work suggests large diverse offline datasets, unified IO representation, and policy representation and architecture selection through supervised learning form a promising approach for studying and advancing morphology-task generalization. | https://openreview.net/pdf/184fcbb9f9a73128759c56558e7ad476b59fa452.pdf |
Progress measures for grokking via mechanistic interpretability | https://openreview.net/forum?id=9XFSbDPmdW | https://openreview.net/forum?id=9XFSbDPmdW | Neel Nanda,Lawrence Chan,Tom Lieberum,Jess Smith,Jacob Steinhardt | ICLR 2023,Top 25% | Neural networks often exhibit emergent behavior in which qualitatively new capabilities arise from scaling up the number of parameters, training data, or even the number of steps. One approach to understanding emergence is to find the continuous \textit{progress measures} that underlie the seemingly discontinuous qualitative changes. In this work, we argue that progress measures can be found via mechanistic interpretability---that is, by reverse engineering learned models into components and measuring the progress of each component over the course of training. As a case study, we study small transformers trained on a modular arithmetic task with emergent grokking behavior. We fully reverse engineer the algorithm learned by these networks, which uses discrete Fourier transforms and trigonometric identities to convert addition to rotation about a circle. After confirming the algorithm via ablation, we then use our understanding of the algorithm to define progress measures that precede the grokking phase transition on this task. We see our result as demonstrating both that it is possible to fully reverse engineer trained networks, and that doing so can be invaluable to understanding their training dynamics. | https://openreview.net/pdf/4a139897d29f8bd1c37ac9483d9e6ac2fa5ec8fb.pdf |
PiFold: Toward effective and efficient protein inverse folding | https://openreview.net/forum?id=oMsN9TYwJ0j | https://openreview.net/forum?id=oMsN9TYwJ0j | Zhangyang Gao,Cheng Tan,Stan Z. Li | ICLR 2023,Top 25% | How can we design protein sequences folding into the desired structures effectively and efficiently? AI methods for structure-based protein design have attracted increasing attention in recent years; however, few methods can simultaneously improve the accuracy and efficiency due to the lack of expressive features and autoregressive sequence decoder. To address these issues, we propose PiFold, which contains a novel residue featurizer and PiGNN layers to generate protein sequences in a one-shot way with improved recovery. Experiments show that PiFold could achieve 51.66\% recovery on CATH 4.2, while the inference speed is 70 times faster than the autoregressive competitors. In addition, PiFold achieves 58.72\% and 60.42\% recovery scores on TS50 and TS500, respectively. We conduct comprehensive ablation studies to reveal the role of different types of protein features and model designs, inspiring further simplification and improvement. The PyTorch code is available at \href{https://github.com/A4Bio/PiFold}{GitHub}. | https://openreview.net/pdf/e1a0ac295d7e905fb72e78968e396731ad364a0b.pdf |
Planning Goals for Exploration | https://openreview.net/forum?id=6qeBuZSo7Pr | https://openreview.net/forum?id=6qeBuZSo7Pr | Edward S. Hu,Richard Chang,Oleh Rybkin,Dinesh Jayaraman | ICLR 2023,Top 25% | Dropped into an unknown environment, what should an agent do to quickly learn about the environment and how to accomplish diverse tasks within it? We address this question within the goal-conditioned reinforcement learning paradigm, by identifying how the agent should set its goals at training time to maximize exploration. We propose "Planning Exploratory Goals" (PEG), a method that sets goals for each training episode to directly optimize an intrinsic exploration reward. PEG first chooses goal commands such that the agent's goal-conditioned policy, at its current level of training, will end up in states with high exploration potential. It then launches an exploration policy starting at those promising states. To enable this direct optimization, PEG learns world models and adapts sampling-based planning algorithms to "plan goal commands". In challenging simulated robotics environments including a multi-legged ant robot in a maze, and a robot arm on a cluttered tabletop, PEG exploration enables more efficient and effective training of goal-conditioned policies relative to baselines and ablations. Our ant successfully navigates a long maze, and the robot arm successfully builds a stack of three blocks upon command. Website: https://sites.google.com/view/exploratory-goals | https://openreview.net/pdf/b28237bb9e4d96d5f02a9d0639565db68727d08c.pdf |
Learning Sparse Group Models Through Boolean Relaxation | https://openreview.net/forum?id=Do9MOlwWHu0 | https://openreview.net/forum?id=Do9MOlwWHu0 | Yijie Wang,Yuan Zhou,Xiaoqing Huang,Kun Huang,Jie Zhang,Jianzhu Ma | ICLR 2023,Top 25% | We introduce an efficient algorithmic framework for learning sparse group models formulated as the natural convex relaxation of a cardinality-constrained program with Boolean variables. We provide theoretical techniques to characterize the equivalent condition when the relaxation achieves the exact integral optimal solution, as well as a rounding algorithm to produce a feasible integral solution once the optimal relaxation solution is fractional. We demonstrate the power of our equivalent condition by applying it to two ensembles of random problem instances that are challenging and popularly used in literature and prove that our method achieves exactness with overwhelming probability and nearly optimal sample complexity. Empirically, we use synthetic datasets to demonstrate that our proposed method significantly outperforms the state-of-the-art group sparse learning models in terms of individual and group support recovery when the number of samples is small. Furthermore, we show the out-performance of our method in cancer drug response prediction. | https://openreview.net/pdf/0760530295a66fdff783489bb9ee1628a6ed3880.pdf |
MeshDiffusion: Score-based Generative 3D Mesh Modeling | https://openreview.net/forum?id=0cpM2ApF9p6 | https://openreview.net/forum?id=0cpM2ApF9p6 | Zhen Liu,Yao Feng,Michael J. Black,Derek Nowrouzezahrai,Liam Paull,Weiyang Liu | ICLR 2023,Top 25% | We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation. Compared to other 3D representations like voxels and point clouds, meshes are more desirable in practice, because (1) they enable easy and arbitrary manipulation of shapes for relighting and simulation, and (2) they can fully leverage the power of modern graphics pipelines which are mostly optimized for meshes. Previous scalable methods for generating meshes typically rely on sub-optimal post-processing, and they tend to produce overly-smooth or noisy surfaces without fine-grained geometric details. To overcome these shortcomings, we take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes. Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parameterization. We demonstrate the effectiveness of our model on multiple generative tasks. | https://openreview.net/pdf/f4b27531cf7771c608830f2a184f9c5ef06eab1c.pdf |
Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms | https://openreview.net/forum?id=n05upKp02kQ | https://openreview.net/forum?id=n05upKp02kQ | Fan Chen,Yu Bai,Song Mei | ICLR 2023,Top 25% | Partial Observability---where agents can only observe partial information about the true underlying state of the system---is ubiquitous in real-world applications of Reinforcement Learning (RL). Theoretically, learning a near-optimal policy under partial observability is known to be hard in the worst case due to an exponential sample complexity lower bound. Recent work has identified several tractable subclasses that are learnable with polynomial samples, such as Partially Observable Markov Decision Processes (POMDPs) with certain revealing or decodability conditions. However, this line of research is still in its infancy, where (1) unified structural conditions enabling sample-efficient learning are lacking; (2) existing sample complexities for known tractable subclasses are far from sharp; and (3) fewer sample-efficient algorithms are available than in fully observable RL.
This paper advances all three aspects above for Partially Observable RL in the general setting of Predictive State Representations (PSRs). First, we propose a natural and unified structural condition for PSRs called \emph{B-stability}. B-stable PSRs encompass the vast majority of known tractable subclasses such as weakly revealing POMDPs, low-rank future-sufficient POMDPs, decodable POMDPs, and regular PSRs. Next, we show that any B-stable PSR can be learned with polynomial samples in relevant problem parameters. When instantiated in the aforementioned subclasses, our sample complexities improve substantially over the current best ones. Finally, our results are achieved by three algorithms simultaneously: Optimistic Maximum Likelihood Estimation, Estimation-to-Decisions, and Model-Based Optimistic Posterior Sampling. The latter two algorithms are new for sample-efficient learning of POMDPs/PSRs.
We additionally design a variant of the Estimation-to-Decisions algorithm to perform sample-efficient \emph{all-policy model estimation} for B-stable PSRs, which also yields guarantees for reward-free learning as an implication. | https://openreview.net/pdf/72aa8e574f037ca63f878f9b771e8fdb68841877.pdf |
Domain Generalization via Heckman-type Selection Models | https://openreview.net/forum?id=fk7RbGibe1 | https://openreview.net/forum?id=fk7RbGibe1 | Hyungu Kahng,Hyungrok Do,Judy Zhong | ICLR 2023,Top 25% | The domain generalization (DG) setup considers the problem where models are trained on data sampled from multiple domains and evaluated on test domains unseen during training. In this paper, we formulate DG as a sample selection problem where each domain is sampled from a common underlying population through non-random sampling probabilities that correlate with both the features and the outcome. Under this setting, the fundamental iid assumption of the empirical risk minimization (ERM) is violated, so it often performs worse on test domains whose non-random sampling probabilities differ from the domains in the training dataset. We propose a Selection-Guided DG (SGDG) framework to learn the selection probability of each domain and the joint distribution of the outcome and domain selection variables. The proposed SGDG is domain generalizable as it intends to minimize the risk under the population distribution. We theoretically proved that, under certain regular conditions, SGDG can achieve smaller risk than ERM. Furthermore, we present a class of parametric SGDG (HeckmanDG) estimators applicable to continuous, binary, and multinomial outcomes. We also demonstrated its efficacy empirically through simulations and experiments on a set of benchmark datasets comparing with other well-known DG methods. | https://openreview.net/pdf/44f6a3958bfee5b8302b55dd30335b9c8be982eb.pdf |
A CMDP-within-online framework for Meta-Safe Reinforcement Learning | https://openreview.net/forum?id=mbxz9Cjehr | https://openreview.net/forum?id=mbxz9Cjehr | Vanshaj Khattar,Yuhao Ding,Bilgehan Sel,Javad Lavaei,Ming Jin | ICLR 2023,Top 25% | Meta-reinforcement learning has widely been used as a learning-to-learn framework to solve unseen tasks with limited experience. However, the aspect of constraint violations has not been adequately addressed in the existing works, making their application restricted in real-world settings. In this paper, we study the problem of meta-safe reinforcement learning (meta-SRL) through the CMDP-within-online framework. We obtain task-averaged regret guarantees for the reward maximization (optimality gap) and constraint violations using gradient-based meta-learning and show that the task-averaged optimality gap and constraint satisfaction improve with task-similarity in the static environment, or task-relatedness in the changing environment. Several technical challenges arise when making this framework practical while still having strong theoretical guarantees. To address these challenges, we propose a meta-algorithm that performs inexact online learning on the upper bounds of intra-task optimality gap and constraint violations estimated by off-policy stationary distribution corrections. Furthermore, we enable the learning rates to be adapted for every task and extend our approach to settings with the dynamically changing task environments. Finally, experiments are conducted to demonstrate the effectiveness of our approach. The proposed theoretical framework is the first to handle the nonconvexity and stochastic nature of within-task CMDPs, while exploiting inter-task dependency for multi-task safe learning.
| https://openreview.net/pdf/a0814d04508ed834d5ecec6097573946c1f8b619.pdf |
Effects of Graph Convolutions in Multi-layer Networks | https://openreview.net/forum?id=P-73JPgRs0R | https://openreview.net/forum?id=P-73JPgRs0R | Aseem Baranwal,Kimon Fountoulakis,Aukosh Jagannath | ICLR 2023,Top 25% | Graph Convolutional Networks (GCNs) are one of the most popular architectures that are used to solve classification problems accompanied by graphical information. We present a rigorous theoretical understanding of the effects of graph convolutions in multi-layer networks. We study these effects through the node classification problem of a non-linearly separable Gaussian mixture model coupled with a stochastic block model. First, we show that a single graph convolution expands the regime of the distance between the means where multi-layer networks can classify the data by a factor of at least $1/\sqrt[4]{\rm deg}$, where ${\rm deg}$ denotes the expected degree of a node. Second, we show that with a slightly stronger graph density, two graph convolutions improve this factor to at least $1/\sqrt[4]{n}$, where $n$ is the number of nodes in the graph. Finally, we provide both theoretical and empirical insights into the performance of graph convolutions placed in different combinations among the layers of a neural network, concluding that the performance is mutually similar for all combinations of the placement. We present extensive experiments on both synthetic and real-world data that illustrate our results. | https://openreview.net/pdf/d210fced5bf1ca06dc521b5bd8088e97ffbdc31e.pdf |
Post-hoc Concept Bottleneck Models | https://openreview.net/forum?id=nA5AZ8CEyow | https://openreview.net/forum?id=nA5AZ8CEyow | Mert Yuksekgonul,Maggie Wang,James Zou | ICLR 2023,Top 25% | Concept Bottleneck Models (CBMs) map the inputs onto a set of interpretable concepts (``the bottleneck'') and use the concepts to make predictions. A concept bottleneck enhances interpretability since it can be investigated to understand what concepts the model "sees" in an input and which of these concepts are deemed important. However, CBMs are restrictive in practice as they require dense concept annotations in the training data to learn the bottleneck. Moreover, CBMs often do not match the accuracy of an unrestricted neural network, reducing the incentive to deploy them in practice. In this work, we address these limitations of CBMs by introducing Post-hoc Concept Bottleneck models (PCBMs). We show that we can turn any neural network into a PCBM without sacrificing model performance while still retaining the interpretability benefits. When concept annotations are not available on the training data, we show that PCBM can transfer concepts from other datasets or from natural language descriptions of concepts via multimodal models. A key benefit of PCBM is that it enables users to quickly debug and update the model to reduce spurious correlations and improve generalization to new distributions. PCBM allows for global model edits, which can be more efficient than previous works on local interventions that fix a specific prediction. Through a model-editing user study, we show that editing PCBMs via concept-level feedback can provide significant performance gains without using data from the target domain or model retraining. | https://openreview.net/pdf/bd9522b16fb6b3a1e89ec20c6aa411c7a84f0fb3.pdf |
When Source-Free Domain Adaptation Meets Learning with Noisy Labels | https://openreview.net/forum?id=u2Pd6x794I | https://openreview.net/forum?id=u2Pd6x794I | Li Yi,Gezheng Xu,Pengcheng Xu,Jiaqi Li,Ruizhi Pu,Charles Ling,Ian McLeod,Boyu Wang | ICLR 2023,Top 25% | Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, which have succeeded in adapting the knowledge from source domain to unlabeled target domain without accessing the private source data. However, existing methods rely on the pseudo-labels generated by source models that can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We also prove that such a difference makes existing LLN methods that rely on their distribution assumptions unable to address the label noise in SFDA. Empirical evidence suggests that only marginal improvements are achieved when applying the existing LLN methods to solve the SFDA problem. On the other hand, although there exists a fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), which has been previously observed in conventional label noise settings, can also be observed in the SFDA problem. Extensive experiments demonstrate significant improvements to existing SFDA algorithms by leveraging ETP to address the label noise in SFDA. | https://openreview.net/pdf/3132194ea43e68910cd7e90e9be2141425b45f39.pdf |
Neural Networks Efficiently Learn Low-Dimensional Representations with SGD | https://openreview.net/forum?id=6taykzqcPD | https://openreview.net/forum?id=6taykzqcPD | Alireza Mousavi-Hosseini,Sejun Park,Manuela Girotti,Ioannis Mitliagkas,Murat A Erdogdu | ICLR 2023,Top 25% | We study the problem of training a two-layer neural network (NN) of arbitrary width using stochastic gradient descent (SGD) where the input $\boldsymbol{x}\in \mathbb{R}^d$ is Gaussian and the target $y \in \mathbb{R}$ follows a multiple-index model, i.e., $y=g(\langle\boldsymbol{u_1},\boldsymbol{x}\rangle,...,\langle\boldsymbol{u_k},\boldsymbol{x}\rangle)$ with a noisy link function $g$. We prove that the first-layer weights in the NN converge to the $k$-dimensional principal subspace spanned by the vectors $\boldsymbol{u_1},...,\boldsymbol{u_k}$ of the true model, when online SGD with weight decay is used for training. This phenomenon has several important consequences when $k \ll d$. First, by employing uniform convergence on this smaller subspace, we establish a generalization error bound of $\mathcal{O}(\sqrt{{kd}/{T}})$ after $T$ iterations of SGD, which is independent of the width of the NN. We further demonstrate that, by recovering the principal direction, SGD-trained ReLU NNs can learn a single-index target of the form $y=f(\langle\boldsymbol{u},\boldsymbol{x}\rangle) + \epsilon$ with a sample complexity linear in $d$ (up to log factors), where $f$ is a monotonic function with at most polynomial growth, and $\epsilon$ is the noise. This is in contrast to the known $d^{\Omega(p)}$ samples required to learn any degree $p$ polynomial in the kernel regime, and shows that SGD-trained NNs can outperform the Neural Tangent Kernel at initialization. Finally, we establish compressibility guarantees for NNs using that SGD produces an approximately rank-$k$ first-layer weight matrix. | https://openreview.net/pdf/1240f349d491e95499f0c82e7a0de39047d53f8e.pdf |
Does Zero-Shot Reinforcement Learning Exist? | https://openreview.net/forum?id=MYEap_OcQI | https://openreview.net/forum?id=MYEap_OcQI | Ahmed Touati,Jérémy Rapin,Yann Ollivier | ICLR 2023,Top 25% | A zero-shot RL agent is an agent that can solve any RL task in a given environment, instantly with no additional planning or learning, after an initial reward-free learning phase. This marks a shift from the reward-centric RL paradigm towards controllable agents that can follow arbitrary instructions in an environment. Current RL agents can solve families of related tasks at best, or require planning anew for each task. Strategies for approximate zero-shot RL have been suggested using successor features (SFs) (Borsa et al., 2018) or forward-backward (FB) representations (Touati & Ollivier, 2021), but testing has been limited.
After clarifying the relationships between these schemes, we introduce improved losses and new SF models, and test the viability of zero-shot RL schemes systematically on tasks from the Unsupervised RL benchmark (Laskin et al., 2021). To disentangle universal representation learning from exploration, we work in an offline setting and repeat the tests on several existing replay buffers.
SFs appear to suffer from the choice of the elementary state features. SFs with Laplacian eigenfunctions do well, while SFs based on auto-encoders, inverse curiosity, transition models, low-rank transition matrix, contrastive learning, or diversity (APS) perform inconsistently. In contrast, FB representations jointly learn the elementary and successor features from a single, principled criterion. They perform best and consistently across the board, reaching $85\%$ of supervised RL performance with a good replay buffer, in a zero-shot manner. | https://openreview.net/pdf/63a8b5a5af811abc3b027de6cccef1854dbedc3c.pdf |
Hyperbolic Deep Reinforcement Learning | https://openreview.net/forum?id=TfBHFLgv77 | https://openreview.net/forum?id=TfBHFLgv77 | Edoardo Cetin,Benjamin Paul Chamberlain,Michael M. Bronstein,Jonathan J Hunt | ICLR 2023,Top 25% | In deep reinforcement learning (RL), useful information about the state is inherently tied to its possible future successors. Consequently, encoding features that capture the hierarchical relationships between states into the model's latent representations is often conducive to recovering effective policies. In this work, we study a new class of deep RL algorithms that promote encoding such relationships by using hyperbolic space to model latent representations. However, we find that a naive application of existing methodology from the hyperbolic deep learning literature leads to fatal instabilities due to the non-stationarity and variance characterizing common gradient estimators in RL. Hence, we design a new general method that directly addresses such optimization challenges and enables stable end-to-end learning with deep hyperbolic representations. We empirically validate our framework by applying it to popular on-policy and off-policy RL algorithms on the Procgen and Atari 100K benchmarks, attaining near universal performance and generalization benefits. Given its natural fit, we hope this work will inspire future RL research to consider hyperbolic representations as a standard tool. | https://openreview.net/pdf/9fac2de989afbbe7a7767c39f7d03bdb640f3016.pdf |
Learning Controllable Adaptive Simulation for Multi-resolution Physics | https://openreview.net/forum?id=PbfgkZ2HdbE | https://openreview.net/forum?id=PbfgkZ2HdbE | Tailin Wu,Takashi Maruyama,Qingqing Zhao,Gordon Wetzstein,Jure Leskovec | ICLR 2023,Top 25% | Simulating the time evolution of physical systems is pivotal in many scientific and engineering problems. An open challenge in simulating such systems is their multi-resolution dynamics: a small fraction of the system is extremely dynamic, and requires very fine-grained resolution, while a majority of the system is changing slowly and can be modeled by coarser spatial scales. Typical learning-based surrogate models use a uniform spatial scale, which needs to resolve to the finest required scale and can waste huge amounts of compute to achieve the required accuracy. In this work, we introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions that devote more compute to the highly dynamic regions. LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening. We introduce learning techniques that optimize LAMP with a weighted sum of error and computational cost as the objective, allowing LAMP to adapt to the varying relative importance of the error vs. computation tradeoff at inference time. We evaluate our method in a 1D benchmark of nonlinear PDEs and a challenging 2D mesh-based simulation. We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error: it achieves an average of 33.7% error reduction for 1D nonlinear PDEs, and outperforms MeshGraphNets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations. Project website with data and code can be found at: http://snap.stanford.edu/lamp. | https://openreview.net/pdf/6c5914ffbccd510000a84a0f0aad7cb9fdfbf835.pdf |
Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning | https://openreview.net/forum?id=Mpa3tRJFBb | https://openreview.net/forum?id=Mpa3tRJFBb | John Nguyen,Jianyu Wang,Kshitiz Malik,Maziar Sanjabi,Michael Rabbat | ICLR 2023,Top 25% | An oft-cited challenge of federated learning is the presence of heterogeneity. \emph{Data heterogeneity} refers to the fact that data from different clients may follow very different distributions. \emph{System heterogeneity} refers to client devices having different system capabilities. A considerable number of federated optimization methods address this challenge. In the literature, empirical evaluations usually start federated training from random initialization. However, in many practical applications of federated learning, the server has access to proxy data for the training task that can be used to pre-train a model before starting federated training. Using four standard federated learning benchmark datasets, we empirically study the impact of starting from a pre-trained model in federated learning. Unsurprisingly, starting from a pre-trained model reduces the training time required to reach a target error rate and enables the training of more accurate models (up to 40\%) than is possible when starting from random initialization. Surprisingly, we also find that starting federated learning from a pre-trained initialization reduces the effect of both data and system heterogeneity. We recommend future work proposing and evaluating federated optimization methods to evaluate the performance when starting from random and pre-trained initializations. This study raises several questions for further work on understanding the role of heterogeneity in federated optimization. | https://openreview.net/pdf/270568c6d80daef0fdc3934838d90aba2eb3610c.pdf |
Parametrizing Product Shape Manifolds by Composite Networks | https://openreview.net/forum?id=F_EhNDSamN | https://openreview.net/forum?id=F_EhNDSamN | Josua Sassen,Klaus Hildebrandt,Martin Rumpf,Benedikt Wirth | ICLR 2023,Top 25% | Parametrizations of data manifolds in shape spaces can be computed using the rich toolbox of Riemannian geometry. This, however, often comes with high computational costs, which raises the question if one can learn an efficient neural network approximation. We show that this is indeed possible for shape spaces with a special product structure, namely those smoothly approximable by a direct sum of low-dimensional manifolds. Our proposed architecture leverages this structure by separately learning approximations for the low-dimensional factors and a subsequent combination. After developing the approach as a general framework, we apply it to a shape space of triangular surfaces. Here, typical examples of data manifolds are given through datasets of articulated models and can be factorized, for example, by a Sparse Principal Geodesic Analysis (SPGA). We demonstrate the effectiveness of our proposed approach with experiments on synthetic data as well as manifolds extracted from data via SPGA. | https://openreview.net/pdf/2832887c23c3957ac23c919a6f7a43abde5a7ef2.pdf |
Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning? | https://openreview.net/forum?id=zKvm1ETDOq | https://openreview.net/forum?id=zKvm1ETDOq | Rui Wen,Zhengyu Zhao,Zhuoran Liu,Michael Backes,Tianhao Wang,Yang Zhang | ICLR 2023,Top 25% | Indiscriminate data poisoning can decrease the clean test accuracy of a deep learning model by slightly perturbing its training samples.
There is a consensus that such poisons can hardly harm adversarially-trained (AT) models when the adversarial training budget is no less than the poison budget, i.e., $\epsilon_\mathrm{adv}\geq\epsilon_\mathrm{poi}$. This consensus, however, is challenged in this paper based on our new attack strategy that induces \textit{entangled features} (EntF). The existence of entangled features makes the poisoned data become less useful for training a model, no matter if AT is applied or not. We demonstrate that for attacking a CIFAR-10 AT model under a reasonable setting with $\epsilon_\mathrm{adv}=\epsilon_\mathrm{poi}=8/255$, our EntF yields an accuracy drop of $13.31\%$, which is $7\times$ better than existing methods and equal to discarding $83\%$ training data. We further show the generalizability of EntF to more challenging settings, e.g., higher AT budgets, partial poisoning, unseen model architectures, and stronger (ensemble or adaptive) defenses. We finally provide new insights into the distinct roles of non-robust vs. robust features in poisoning standard vs. AT models and demonstrate the possibility of using a hybrid attack to poison standard and AT models simultaneously. Our code is available at~\url{https://github.com/WenRuiUSTC/EntF}. | https://openreview.net/pdf/bcce19ed68bdb0a34957207b9b69cebedeab384c.pdf |
Learning with Stochastic Orders | https://openreview.net/forum?id=P3PJokAqGW | https://openreview.net/forum?id=P3PJokAqGW | Carles Domingo-Enrich,Yair Schiff,Youssef Mroueh | ICLR 2023,Top 25% | Learning high-dimensional distributions is often done with explicit likelihood modeling or implicit modeling via minimizing integral probability metrics (IPMs). In this paper, we expand this learning paradigm to stochastic orders, namely, the convex or Choquet order between probability measures. Towards this end, exploiting the relation between convex orders and optimal transport, we introduce the Choquet-Toland distance between probability measures, that can be used as a drop-in replacement for IPMs. We also introduce the Variational Dominance Criterion (VDC) to learn probability measures with dominance constraints, that encode the desired stochastic order between the learned measure and a known baseline. We analyze both quantities and show that they suffer from the curse of dimensionality and propose surrogates via input convex maxout networks (ICMNs), that enjoy parametric rates. We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. Finally, our ICMNs class of convex functions and its derived Rademacher Complexity are of independent interest beyond their application in convex orders. Code to reproduce experimental results is available at https://github.com/yair-schiff/stochastic-orders-ICMN. | https://openreview.net/pdf/69bf232a5f31365934fcdc570925118eede29e06.pdf |
MEDFAIR: Benchmarking Fairness for Medical Imaging | https://openreview.net/forum?id=6ve2CkeQe5S | https://openreview.net/forum?id=6ve2CkeQe5S | Yongshuo Zong,Yongxin Yang,Timothy Hospedales | ICLR 2023,Top 25% | A multitude of work has shown that machine learning-based medical diagnosis systems can be biased against certain subgroups of people. This has motivated a growing number of bias mitigation algorithms that aim to address fairness issues in machine learning. However, it is difficult to compare their effectiveness in medical imaging for two reasons. First, there is little consensus on the criteria to assess fairness. Second, existing bias mitigation algorithms are developed under different settings, e.g., datasets, model selection strategies, backbones, and fairness metrics, making a direct comparison and evaluation based on existing results impossible. In this work, we introduce MEDFAIR, a framework to benchmark the fairness of machine learning models for medical imaging. MEDFAIR covers eleven algorithms from various categories, ten datasets from different imaging modalities, and three model selection criteria. Through extensive experiments, we find that the under-studied issue of model selection criterion can have a significant impact on fairness outcomes; while in contrast, state-of-the-art bias mitigation algorithms do not significantly improve fairness outcomes over empirical risk minimization (ERM) in both in-distribution and out-of-distribution settings. We evaluate fairness from various perspectives and make recommendations for different medical application scenarios that require different ethical principles. Our framework provides a reproducible and easy-to-use entry point for the development and evaluation of future bias mitigation algorithms in deep learning. Code is available at https://github.com/ys-zong/MEDFAIR. | https://openreview.net/pdf/75dab3d31898ee627528af860910801000bfc9c1.pdf |
Neural Design for Genetic Perturbation Experiments | https://openreview.net/forum?id=TUBpc5rqGA | https://openreview.net/forum?id=TUBpc5rqGA | Aldo Pacchiano,Drausin Wulsin,Robert A Barton,Luis Voloch | ICLR 2023,Top 25% | The problem of how to genetically modify cells in order to maximize a certain cellular phenotype has taken center stage in drug development over the last few years (with, for example, genetically edited CAR-T, CAR-NK, and CAR-NKT cells entering cancer clinical trials). Exhausting the search space for all possible genetic edits (perturbations) or combinations thereof is infeasible due to cost and experimental limitations. This work provides a theoretically sound framework for iteratively exploring the space of perturbations in pooled batches in order to maximize a target phenotype under an experimental budget. Inspired by this application domain, we study the problem of batch query bandit optimization and introduce the Optimistic Arm Elimination ($\mathrm{OAE}$) principle designed to find an almost optimal arm under different functional relationships between the queries (arms) and the outputs (rewards). We analyze the convergence properties of $\mathrm{OAE}$ by relating it to the Eluder dimension of the algorithm's function class and validate that $\mathrm{OAE}$ outperforms other strategies in finding optimal actions in experiments on simulated problems, public datasets well-studied in bandit contexts, and in genetic perturbation datasets when the regression model is a deep neural network. OAE also outperforms the benchmark algorithms in 3 of 4 datasets in the GeneDisco experimental planning challenge. | https://openreview.net/pdf/f56ee4a8d1d86d0f8b8aba389f8502186aeab60b.pdf |
Efficient Discrete Multi Marginal Optimal Transport Regularization | https://openreview.net/forum?id=R98ZfMt-jE | https://openreview.net/forum?id=R98ZfMt-jE | Ronak Mehta,Jeffery Kline,Vishnu Suresh Lokhande,Glenn Fung,Vikas Singh | ICLR 2023,Top 25% | Optimal transport has emerged as a powerful tool for a variety of problems in machine learning, and it is frequently used to enforce distributional constraints. In this context, existing methods often use either a Wasserstein metric, or else they apply concurrent barycenter approaches when more than two distributions are considered. In this paper, we leverage multi-marginal optimal transport (MMOT), where we take advantage of a procedure that computes a generalized earth mover's distance as a sub-routine. We show that not only is our algorithm computationally more efficient compared to other barycentric-based distance methods, but it has the additional advantage that gradients used for backpropagation can be efficiently computed during the forward pass computation itself, which leads to substantially faster model training. We provide technical details about this new regularization term and its properties, and we present experimental demonstrations of faster runtimes when compared to standard Wasserstein-style methods. Finally, on a range of experiments designed to assess effectiveness at enforcing fairness, we demonstrate our method compares well with alternatives. | https://openreview.net/pdf/751b7f72b933e8842e1162601b80445c8fa2b7c7.pdf |
Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? | https://openreview.net/forum?id=xSsW2Am-ukZ | https://openreview.net/forum?id=xSsW2Am-ukZ | Mansheej Paul,Feng Chen,Brett W. Larsen,Jonathan Frankle,Surya Ganguli,Gintare Karolina Dziugaite | ICLR 2023,Top 25% | As neural networks get larger and costlier, it is important to find sparse networks that require less compute and memory but can be trained to the same accuracy as the full network (i.e. matching). Iterative magnitude pruning (IMP) is a state of the art algorithm that can find such highly sparse matching subnetworks, known as winning tickets. IMP iterates through cycles of training, pruning a fraction of smallest magnitude weights, rewinding unpruned weights back to an early training point, and repeating. Despite its simplicity, the principles underlying when and how IMP finds winning tickets remain elusive. In particular, what useful information does an IMP mask found at the end of training convey to a rewound network near the beginning of training? How does SGD allow the network to extract this information? And why is iterative pruning needed, i.e. why can't we prune to very high sparsities in one shot? We investigate these questions through the lens of the geometry of the error landscape. First, we find that—at higher sparsities—pairs of pruned networks at successive pruning iterations are connected by a linear path with zero error barrier if and only if they are matching. This indicates that masks found at the end of training convey to the rewind point the identity of an axial subspace that intersects a desired linearly connected mode of a matching sublevel set. Second, we show SGD can exploit this information due to a strong form of robustness: it can return to this mode despite strong perturbations early in training. Third, we show how the flatness of the error landscape at the end of training limits the fraction of weights that can be pruned at each iteration of IMP. This analysis yields a new quantitative link between IMP performance and the Hessian eigenspectrum. Finally, we show that the role of retraining in IMP is to find a network with new small weights to prune. Overall, these results make progress toward demystifying the existence of winning tickets by revealing the fundamental role of error landscape geometry in the algorithms used to find them. | https://openreview.net/pdf/4dbbc1d35dfd048e01a703f7058ecec7d030cfea.pdf |
Quantifying Memorization Across Neural Language Models | https://openreview.net/forum?id=TatRHT_1cK | https://openreview.net/forum?id=TatRHT_1cK | Nicholas Carlini,Daphne Ippolito,Matthew Jagielski,Katherine Lee,Florian Tramer,Chiyuan Zhang | ICLR 2023,Top 25% | Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim. This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes complicated when generalizing these results across model families. On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations. | https://openreview.net/pdf/6b4201e769d9dc79c8462750821d94951ee50a84.pdf |
Powderworld: A Platform for Understanding Generalization via Rich Task Distributions | https://openreview.net/forum?id=AWZgXGmsbA | https://openreview.net/forum?id=AWZgXGmsbA | Kevin Frans,Phillip Isola | ICLR 2023,Top 25% | One of the grand challenges of reinforcement learning is the ability to generalize to new tasks. However, general agents require a set of rich, diverse tasks to train on. Designing a `foundation environment' for such tasks is tricky -- the ideal environment would support a range of emergent phenomena, an expressive task space, and fast runtime. To take a step towards addressing this research bottleneck, this work presents Powderworld, a lightweight yet expressive simulation environment running directly on the GPU. Within Powderworld, two motivating task distributions are presented, one for world-modelling and one for reinforcement learning. Each contains hand-designed test tasks to examine generalization. Experiments indicate that increasing the environment's complexity improves generalization for world models, yet causes reinforcement learning agents to struggle. Powderworld aims to support the study of generalization by providing a source of diverse tasks arising from the same core rules. | https://openreview.net/pdf/2fadf0fbbebe9f361cc99785d5c8977657738d68.pdf |
Out-of-Distribution Detection and Selective Generation for Conditional Language Models | https://openreview.net/forum?id=kJUS5nD0vPB | https://openreview.net/forum?id=kJUS5nD0vPB | Jie Ren,Jiaming Luo,Yao Zhao,Kundan Krishna,Mohammad Saleh,Balaji Lakshminarayanan,Peter J Liu | ICLR 2023,Top 25% | Machine learning algorithms typically assume independent and identically distributed samples in training and at test time (IID). Much work has shown that high-performing ML classifiers can degrade significantly and provide overly-confident, wrong classification predictions, particularly for out-of-distribution (OOD) inputs. Conditional language models (CLMs) are predominantly trained to classify the next token in an output sequence, and may suffer even worse degradation on OOD inputs as the prediction is done auto-regressively over many steps. Furthermore, the space of potential low-quality outputs is larger as arbitrary text can be generated and it is important to know when to trust the generated output. We present a highly accurate and lightweight OOD detection method for CLMs, and demonstrate its effectiveness on abstractive summarization and translation. We also show how our method can be used under the common and realistic setting of distribution shift for selective generation (analogous to selective prediction for classification) of high-quality outputs, while automatically abstaining from low-quality ones, enabling safer deployment of generative language models. | https://openreview.net/pdf/f47874745e38526618ae5e9fd6012d1584ed30a1.pdf |
Differentially Private $L_2$-Heavy Hitters in the Sliding Window Model | https://openreview.net/forum?id=3UHoYrglYkG | https://openreview.net/forum?id=3UHoYrglYkG | Jeremiah Blocki,Seunghoon Lee,Tamalika Mukherjee,Samson Zhou | ICLR 2023,Top 25% | The data management of large companies often prioritizes more recent data as a source of higher-accuracy predictions than outdated data. For example, the Facebook data policy retains user search histories for $6$ months while the Google data retention policy states that browser information may be stored for up to $9$ months. These policies are captured by the sliding window model, in which only the most recent $W$ statistics form the underlying dataset. In this paper, we consider the problem of privately releasing the $L_2$-heavy hitters in the sliding window model, which include $L_p$-heavy hitters for $p\le 2$ and in some sense are the strongest possible guarantees that can be achieved using polylogarithmic space, but cannot be handled by existing techniques due to the sub-additivity of the $L_2$ norm. Moreover, existing non-private sliding window algorithms use the smooth histogram framework, which has high sensitivity. To overcome these barriers, we introduce the first differentially private algorithm for $L_2$-heavy hitters in the sliding window model by initiating a number of $L_2$-heavy hitter algorithms across the stream with a significantly lower threshold. Similarly, we augment the algorithms with an approximate frequency tracking algorithm with significantly higher accuracy. We then use smooth sensitivity and statistical distance arguments to show that we can add noise proportional to an estimation of the $L_2$ norm. To the best of our knowledge, our techniques are the first to privately release statistics that are related to a sub-additive function in the sliding window model, and may be of independent interest to future differentially private algorithmic design in the sliding window model. | https://openreview.net/pdf/7a0d8905677bac47b853d1dfdaa37542861101da.pdf |
NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning | https://openreview.net/forum?id=ApF0dmi1_9K | https://openreview.net/forum?id=ApF0dmi1_9K | Ruiqi Ni,Ahmed H Qureshi | ICLR 2023,Top 25% | Neural Motion Planners (NMPs) have emerged as a promising tool for solving robot navigation tasks in complex environments. However, these methods often require expert data for learning, which limits their application to scenarios where data generation is time-consuming. Recent developments have also led to physics-informed deep neural models capable of representing complex dynamical Partial Differential Equations (PDEs). Inspired by these developments, we propose Neural Time Fields (NTFields) for robot motion planning in cluttered scenarios. Our framework represents a wave propagation model generating continuous arrival time to find path solutions informed by a nonlinear first-order PDE called Eikonal Equation. We evaluate our method in various cluttered 3D environments, including the Gibson dataset, and demonstrate its ability to solve motion planning problems for 4-DOF and 6-DOF robot manipulators where the traditional grid-based Eikonal planners often face the curse of dimensionality. Furthermore, the results show that our method exhibits high success rates and significantly lower computational times than the state-of-the-art methods, including NMPs that require training data from classical planners. | https://openreview.net/pdf/483cf04b31ae0d71d5a838b5ba85e6273a018d60.pdf |
ZiCo: Zero-shot NAS via inverse Coefficient of Variation on Gradients | https://openreview.net/forum?id=rwo-ls5GqGn | https://openreview.net/forum?id=rwo-ls5GqGn | Guihong Li,Yuedong Yang,Kartikeya Bhardwaj,Radu Marculescu | ICLR 2023,Top 25% | Neural Architecture Search (NAS) is widely used to automatically obtain the neural network with the best performance among a large number of candidate architectures. To reduce the search time, zero-shot NAS aims at designing training-free proxies that can predict the test performance of a given architecture. However, as shown recently, none of the zero-shot proxies proposed to date can actually work consistently better than a naive proxy, namely, the number of network parameters (#Params). To improve this state of affairs, as the main theoretical contribution, we first reveal how some specific gradient properties across different samples impact the convergence rate and generalization capacity of neural networks. Based on this theoretical analysis, we propose a new zero-shot proxy, ZiCo, the first proxy that works consistently better than #Params. We demonstrate that ZiCo works better than State-Of-The-Art (SOTA) proxies on several popular NAS-Benchmarks (NASBench101, NATSBench-SSS/TSS, TransNASBench-101) for multiple applications (e.g., image classification/reconstruction and pixel-level prediction). Finally, we demonstrate that the optimal architectures found via ZiCo are as competitive as the ones found by one-shot and multi-shot NAS methods, but with much less search time. For example, ZiCo-based NAS can find optimal architectures with 78.1%, 79.4%, and 80.4% test accuracy under inference budgets of 450M, 600M, and 1000M FLOPs, respectively, on ImageNet within 0.4 GPU days. Our code is available at https://github.com/SLDGroup/ZiCo.
| https://openreview.net/pdf/0ed7119196bfabbdc248d6add738ac67510f7662.pdf |
Pink Noise Is All You Need: Colored Noise Exploration in Deep Reinforcement Learning | https://openreview.net/forum?id=hQ9V5QN27eS | https://openreview.net/forum?id=hQ9V5QN27eS | Onno Eberhard,Jakob Hollenstein,Cristina Pinneri,Georg Martius | ICLR 2023,Top 25% | In off-policy deep reinforcement learning with continuous action spaces, exploration is often implemented by injecting action noise into the action selection process. Popular algorithms based on stochastic policies, such as SAC or MPO, inject white noise by sampling actions from uncorrelated Gaussian distributions. In many tasks, however, white noise does not provide sufficient exploration, and temporally correlated noise is used instead. A common choice is Ornstein-Uhlenbeck (OU) noise, which is closely related to Brownian motion (red noise). Both red noise and white noise belong to the broad family of colored noise. In this work, we perform a comprehensive experimental evaluation on MPO and SAC to explore the effectiveness of other colors of noise as action noise. We find that pink noise, which is halfway between white and red noise, significantly outperforms white noise, OU noise, and other alternatives on a wide range of environments. Thus, we recommend it as the default choice for action noise in continuous control.
| https://openreview.net/pdf/9eb6698653898299e855964e9b4950f0e56ab28c.pdf |
STaSy: Score-based Tabular data Synthesis | https://openreview.net/forum?id=1mNssCWt_v | https://openreview.net/forum?id=1mNssCWt_v | Jayoung Kim,Chaejeong Lee,Noseong Park | ICLR 2023,Top 25% | Tabular data synthesis is a long-standing research topic in machine learning. Many different methods have been proposed over the past decades, ranging from statistical methods to deep generative methods. However, these methods have not always been successful due to the complicated nature of real-world tabular data. In this paper, we present a new model named $\textbf{S}$core-based $\textbf{Ta}$bular data $\textbf{Sy}$nthesis ($\texttt{STaSy}$) and its training strategy based on the paradigm of score-based generative modeling. Despite the fact that score-based generative models have resolved many issues in generative models, there still exists room for improvement in tabular data synthesis. Our proposed training strategy includes a self-paced learning technique and a fine-tuning strategy, which further increases the sampling quality and diversity by stabilizing the denoising score matching training. Furthermore, we also conduct rigorous experimental studies in terms of the generative task trilemma: sampling quality, diversity, and time. In our experiments with 15 benchmark tabular datasets and 7 baselines, our method outperforms existing methods in terms of task-dependent evaluations and diversity.
| https://openreview.net/pdf/7cc08c44de490f3e79794b5827aa36b84f99c4c3.pdf |
A Unified Algebraic Perspective on Lipschitz Neural Networks | https://openreview.net/forum?id=k71IGLC8cfc | https://openreview.net/forum?id=k71IGLC8cfc | Alexandre Araujo,Aaron J Havens,Blaise Delattre,Alexandre Allauzen,Bin Hu | ICLR 2023,Top 25% | Important research efforts have focused on the design and training of neural networks with a controlled Lipschitz constant. The goal is to increase and sometimes guarantee the robustness against adversarial attacks. Recent promising techniques draw inspiration from different backgrounds to design 1-Lipschitz neural networks, to name a few: convex potential layers derive from the discretization of continuous dynamical systems, while the Almost-Orthogonal-Layer proposes a tailored method for matrix rescaling. However, it is now important to consider the recent and promising contributions in the field under a common theoretical lens to better design new and improved layers. This paper introduces a novel algebraic perspective unifying various types of 1-Lipschitz neural networks, including the ones previously mentioned, along with methods based on orthogonality and spectral methods. Interestingly, we show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition. We also prove that AOL biases the scaled weights toward matrices that are close, in a certain mathematical sense, to the set of orthogonal matrices. Moreover, our algebraic condition, combined with the Gershgorin circle theorem, readily leads to new and diverse parameterizations for 1-Lipschitz network layers. Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers. Finally, the comprehensive set of experiments on image classification shows that SLLs outperform previous approaches on certified robust accuracy. Code is available at https://github.com/araujoalexandre/Lipschitz-SLL-Networks. | https://openreview.net/pdf/0db46e14af869da0146f16c3a0b546c42c16ac4a.pdf |
The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks | https://openreview.net/forum?id=nZ2NtpolC5- | https://openreview.net/forum?id=nZ2NtpolC5- | Blake Bordelon,Cengiz Pehlevan | ICLR 2023,Top 25% | It is unclear how changing the learning rule of a deep neural network alters its learning dynamics and representations. To gain insight into the relationship between learned features, function approximation, and the learning rule, we analyze infinite-width deep networks trained with gradient descent (GD) and biologically-plausible alternatives including feedback alignment (FA), direct feedback alignment (DFA), and error modulated Hebbian learning (Hebb), as well as gated linear networks (GLN). We show that, for each of these learning rules, the evolution of the output function at infinite width is governed by a time varying effective neural tangent kernel (eNTK). In the lazy training limit, this eNTK is static and does not evolve, while in the rich mean-field regime this kernel's evolution can be determined self-consistently with dynamical mean field theory (DMFT). This DMFT enables comparisons of the feature and prediction dynamics induced by each of these learning rules. In the lazy limit, we find that DFA and Hebb can only learn using the last layer features, while full FA can utilize earlier layers with a scale determined by the initial correlation between feedforward and feedback weight matrices. In the rich regime, DFA and FA utilize a temporally evolving and depth-dependent NTK. Counterintuitively, we find that FA networks trained in the rich regime exhibit more feature learning if initialized with smaller correlation between the forward and backward pass weights. GLNs admit a very simple formula for their lazy limit kernel and preserve conditional Gaussianity of their preactivations under gating functions. Error modulated Hebb rules show very small task-relevant alignment of their kernels and perform most task relevant learning in the last layer. | https://openreview.net/pdf/48d0ec8e5e188584424c803e8b24556739d8fa4d.pdf |
Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning | https://openreview.net/forum?id=sCYXJr3QJM8 | https://openreview.net/forum?id=sCYXJr3QJM8 | Arnab Kumar Mondal,Piyush Tiwary,Parag Singla,Prathosh AP | ICLR 2023,Top 25% | In this work, our objective is to adapt a deep generative model trained on a large-scale source dataset to multiple target domains with scarce data. Specifically, we focus on adapting a pre-trained Generative Adversarial Network (GAN) to a target domain without re-training the generator. Our method draws motivation from the fact that out-of-distribution samples can be `embedded' onto the latent space of a pre-trained source-GAN. We propose to train a small latent-generation network during the inference stage, each time a batch of target samples is to be generated. These target latent codes are fed to the source-generator to obtain novel target samples. Despite using the same small set of target samples and the source generator, multiple independent training episodes of the latent-generation network result in diversity among the generated target samples. Our method, albeit simple, can be used to generate data from multiple target distributions using a generator trained on a single source distribution. We demonstrate the efficacy of our surprisingly simple method in generating multiple target datasets with only a single source generator and a few target samples. | https://openreview.net/pdf/acc5eab2f3488d4a16e1e9bdc1b8836b5ebccdfe.pdf |
RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch | https://openreview.net/forum?id=DJEEqoAq7to | https://openreview.net/forum?id=DJEEqoAq7to | Yiqin Tan,Pihe Hu,Ling Pan,Jiatai Huang,Longbo Huang | ICLR 2023,Top 25% | Training deep reinforcement learning (DRL) models usually requires high computation costs. Therefore, compressing DRL models possesses immense potential for training acceleration and model deployment. However, existing methods that generate small models mainly adopt the knowledge distillation-based approach by iteratively training a dense network. As a result, the training process still demands massive computing resources. Indeed, sparse training from scratch in DRL has not been well explored and is particularly challenging due to non-stationarity in bootstrap training. In this work, we propose a novel sparse DRL training framework, “the Rigged Reinforcement Learning Lottery” (RLx2), which builds upon gradient-based topology evolution and is capable of training a sparse DRL model based entirely on a sparse network. Specifically, RLx2 introduces a novel multi-step TD target mechanism with a dynamic-capacity replay buffer to achieve robust value learning and efficient topology exploration in sparse models. It also reaches state-of-the-art sparse training performance in several tasks, showing $7.5\times$-$20\times$ model compression with less than $3\%$ performance degradation and up to $20\times$ and $50\times$ FLOPs reduction for training and inference, respectively. | https://openreview.net/pdf/095bd7aea3382d53f48388a8b9051db4f9ed8f31.pdf |
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! | https://openreview.net/forum?id=J6F3lLg4Kdp | https://openreview.net/forum?id=J6F3lLg4Kdp | Shiwei Liu,Tianlong Chen,Zhenyu Zhang,Xuxi Chen,Tianjin Huang,AJAY KUMAR JAISWAL,Zhangyang Wang | ICLR 2023,Top 25% | Sparse Neural Networks (SNNs) have received voluminous attention predominantly due to the growing computational and memory footprints of the consistently exploding parameter counts in large-scale models. Similar to their dense counterparts, recent SNNs generalize just as well and are equipped with numerous favorable benefits (e.g., low complexity, high scalability, and robustness), sometimes even better than the original dense networks. As research effort is focused on developing increasingly sophisticated sparse algorithms, it is startling that a comprehensive benchmark to evaluate the effectiveness of these algorithms has been highly overlooked. In the absence of a carefully crafted evaluation benchmark, most, if not all, sparse algorithms are evaluated against fairly simple and naive tasks (e.g., CIFAR-10/100, ImageNet, GLUE, etc.), which can potentially camouflage many advantages as well as unexpected predicaments of SNNs. In pursuit of a more general evaluation and unveiling the true potential of sparse algorithms, we introduce the “Sparsity May Cry” Benchmark (SMC-Bench), a collection of 4 carefully curated, diverse tasks with 10 datasets that captures a wide range of domain-specific and sophisticated knowledge. Our systematic evaluation of the most representative sparse algorithms reveals an important obscured observation: the state-of-the-art magnitude- and/or gradient-based sparse algorithms seemingly fail to perform on SMC-Bench when applied out-of-the-box, sometimes even at trivial sparsity levels as low as 5%. These observations call for the immediate attention of the sparsity research community to reconsider the highly proclaimed benefits of SNNs. We further conduct a thorough investigation into the reasons for the failure of common SNNs. Our analysis points out that such failure is intimately related to the “lazy regime” of large model training, which points us toward stronger pruning recipes that alleviate the failure on SMC-Bench (though performance still suffers to some degree). By incorporating these well-thought-out and diverse tasks, SMC-Bench is designed to favor and encourage the development of more scalable and generalizable sparse algorithms. We open-source SMC-Bench to assist researchers in building next-generation sparse algorithms that scale and generalize: https://github.com/VITA-Group/SMC-Bench. | https://openreview.net/pdf/60f68324a3ec40c50412c32d7f1d7ee813d44b35.pdf |
Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers | https://openreview.net/forum?id=w1hwFUb_81 | https://openreview.net/forum?id=w1hwFUb_81 | Tianlong Chen,Zhenyu Zhang,AJAY KUMAR JAISWAL,Shiwei Liu,Zhangyang Wang | ICLR 2023,Top 25% | Despite their remarkable achievement, gigantic transformers encounter significant drawbacks, including exorbitant computational and memory footprints during training, as well as severe collapse evidenced by a high degree of parameter redundancy. Sparsely-activated Mixture-of-Experts (SMoEs) have shown promise to mitigate the issue of training efficiency, yet they are prone to (1) $\textit{redundant experts}$ due to representational collapse; and (2) $\textit{poor expert scalability for inference and downstream fine-tuning}$, primarily due to overfitting of the learned routing policy to the number of activated experts during training. As recent research efforts are predominantly focused on improving routing policies to encourage expert specializations, this work focuses on $\textit{exploring the overlooked scalability bottleneck of SMoEs}$ and leveraging it to effectively $\textbf{scale dense transformers}$. To this end, we propose a new plug-and-play training framework, $\textbf{SMoE-Dropout}$, to enable scaling transformers to better accuracy in their full capacity without collapse. Specifically, SMoE-Dropout consists of a $\textit{randomly initialized and fixed}$ router network to activate experts and gradually increases the activated expert number as training progresses over time. Transformers trained by SMoE-Dropout naturally exhibit a $\textbf{``self-slimmable”}$ property subject to resource availability, offering smooth and consistent performance boosts with an increase in activated experts during inference or fine-tuning. Our extensive experiments across diverse transformer architectures on a variety of tasks demonstrate the superior performance and substantial computation savings of SMoE-Dropout, compared to dense training baselines with equivalent parameter counts. In particular, our trained BERT outperforms its densely trained counterpart with consistent improvements of {$1.03\%$, $0.78\%$, $1.09\%$} on challenging reasoning tasks {$\texttt{ASDiv-A}$, $\texttt{MAWPS}$, $\texttt{SVAMP}$}, respectively. Codes and models are available in https://github.com/VITA-Group/Random-MoE-as-Dropout. | https://openreview.net/pdf/9a22d737856844ae4058be999052c67e4e975671.pdf |
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks | https://openreview.net/forum?id=LfdEuhjR5GV | https://openreview.net/forum?id=LfdEuhjR5GV | Zhiyuan Cheng,James Chenhao Liang,Guanhong Tao,Dongfang Liu,Xiangyu Zhang | ICLR 2023,Top 25% | Monocular Depth Estimation (MDE) is a critical component in applications such as autonomous driving. There are various attacks against MDE networks. These attacks, especially the physical ones, pose a great threat to the security of such systems. Traditional adversarial training methods require ground-truth labels and hence cannot be directly applied to self-supervised MDE that does not have depth ground truth. Some self-supervised model hardening techniques (e.g., contrastive learning) ignore the domain knowledge of MDE and can hardly achieve optimal performance. In this work, we propose a novel adversarial training method for self-supervised MDE models based on view synthesis without using the depth ground truth. We improve adversarial robustness against physical-world attacks using $L_0$-norm-bounded perturbation in training. We compare our method with supervised learning-based and contrastive learning-based methods that are tailored for MDE. Results on two representative MDE networks show that we achieve better robustness against various adversarial attacks with nearly no benign performance degradation. | https://openreview.net/pdf/2adac83c94d065c20230762eac3aad072f2424ef.pdf |
Sparsity-Constrained Optimal Transport | https://openreview.net/forum?id=yHY9NbQJ5BP | https://openreview.net/forum?id=yHY9NbQJ5BP | Tianlin Liu,Joan Puigcerver,Mathieu Blondel | ICLR 2023,Top 25% | Regularized optimal transport (OT) is now increasingly used as a loss or as a matching layer in neural networks. Entropy-regularized OT can be computed using the Sinkhorn algorithm but it leads to fully-dense transportation plans, meaning that all sources are (fractionally) matched with all targets. To address this issue, several works have investigated quadratic regularization instead. This regularization preserves sparsity and leads to unconstrained and smooth (semi) dual objectives that can be solved with off-the-shelf gradient methods. Unfortunately, quadratic regularization does not give direct control over the cardinality (number of nonzeros) of the transportation plan. We propose in this paper a new approach for OT with explicit cardinality constraints on the transportation plan. Our work is motivated by an application to sparse mixture of experts, where OT can be used to match input tokens such as image patches with expert models such as neural networks. Cardinality constraints ensure that at most $k$ tokens are matched with an expert, which is crucial for computational performance reasons. Despite the nonconvexity of cardinality constraints, we show that the corresponding (semi) dual problems are tractable and can be solved with first-order gradient methods. Our method can be thought of as a middle ground between unregularized OT (recovered in the limit case $k=1$) and quadratically-regularized OT (recovered when $k$ is large enough). The smoothness of the objectives increases as $k$ increases, giving rise to a trade-off between convergence speed and sparsity of the optimal plan. | https://openreview.net/pdf/01b19a43fd4282f5a55738b35ee52c3bb7236a0d.pdf |
Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection | https://openreview.net/forum?id=mMNimwRb7Gr | https://openreview.net/forum?id=mMNimwRb7Gr | Shuyang Yu,Junyuan Hong,Haotao Wang,Zhangyang Wang,Jiayu Zhou | ICLR 2023,Top 25% | Deep neural networks have witnessed huge successes in many challenging prediction tasks and yet they often suffer from out-of-distribution (OoD) samples, misclassifying them with high confidence. Recent advances show promising OoD detection performance for centralized training; however, OoD detection in federated learning (FL) is largely overlooked, even though many security-sensitive applications such as autonomous driving and voice recognition authorization are commonly trained using FL for data privacy concerns. The main challenge that prevents previous state-of-the-art OoD detection methods from being incorporated into FL is that they require a large amount of real OoD samples. However, in real-world scenarios, such large-scale OoD training data can be costly or even infeasible to obtain, especially for resource-limited local devices. On the other hand, a notorious challenge in FL is data heterogeneity where each client collects non-identically and independently distributed (non-iid) data. We propose to take advantage of such heterogeneity and turn the curse into a blessing that facilitates OoD detection in FL. The key is that for each client, non-iid data from other clients (unseen external classes) can serve as an alternative to real OoD samples. Specifically, we propose a novel Federated Out-of-Distribution Synthesizer (FOSTER), which learns a class-conditional generator to synthesize virtual external-class OoD samples, and maintains data confidentiality and communication efficiency required by FL. Experimental results show that our method outperforms the state-of-the-art by 2.49%, 2.88%, 1.42% AUROC, and 0.01%, 0.89%, 1.74% ID accuracy, on CIFAR-10, CIFAR-100, and STL10, respectively. | https://openreview.net/pdf/3943d2638f2adc77b54de624c3d14c17ee8615f8.pdf |
DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion | https://openreview.net/forum?id=j6zUzrapY3L | https://openreview.net/forum?id=j6zUzrapY3L | Qitian Wu,Chenxiao Yang,Wentao Zhao,Yixuan He,David Wipf,Junchi Yan | ICLR 2023,Top 25% | Real-world data generation often involves complex inter-dependencies among instances, violating the IID-data hypothesis of standard learning paradigms and posing a challenge for uncovering the geometric structures for learning desired instance representations. To this end, we introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states that progressively incorporate other instances' information by their interactions. The diffusion process is constrained by descent criteria w.r.t. a principled energy function that characterizes the global consistency of instance representations over latent structures. We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs, which gives rise to a new class of neural encoders, dubbed as DIFFormer (diffusion-based Transformers), with two instantiations: a simple version with linear complexity for prohibitive instance numbers, and an advanced version for learning complex structures. Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks, such as node classification on large graphs, semi-supervised image/text classification, and spatial-temporal dynamics prediction. The codes are available at https://github.com/qitianwu/DIFFormer. | https://openreview.net/pdf/2c274286ca9d89f558de1d9abc67d9b0a429bc4d.pdf |
Neural Lagrangian Schrödinger Bridge: Diffusion Modeling for Population Dynamics | https://openreview.net/forum?id=d3QNWD_pcFv | https://openreview.net/forum?id=d3QNWD_pcFv | Takeshi Koshizuka,Issei Sato | ICLR 2023,Top 25% | Population dynamics is the study of temporal and spatial variation in the size of populations of organisms and is a major part of population ecology. One of the main difficulties in analyzing population dynamics is that we can only obtain observation data with coarse time intervals from fixed-point observations due to experimental costs or measurement constraints. Recently, modeling population dynamics by using continuous normalizing flows (CNFs) and dynamic optimal transport has been proposed to infer the sample trajectories from a fixed-point observed population. While the sample behavior in CNFs is deterministic, the actual sample in biological systems moves in an essentially random yet directional manner. Moreover, when a sample moves from point A to point B in dynamical systems, its trajectory typically follows the principle of least action in which the corresponding action has the smallest possible value. To satisfy these requirements of the sample trajectories, we formulate the Lagrangian Schrödinger bridge (LSB) problem and propose to solve it approximately by modeling the advection-diffusion process with regularized neural SDE. We also develop a model architecture that enables faster computation of the loss function. Experimental results show that the proposed method can efficiently approximate the population-level dynamics even for high-dimensional data and that using the prior knowledge introduced by the Lagrangian enables us to estimate the sample-level dynamics with stochastic behavior. | https://openreview.net/pdf/4f5dfd7d5e9825029e736d0ace01eda002efdcb8.pdf |
Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent | https://openreview.net/forum?id=QC10RmRbZy9 | https://openreview.net/forum?id=QC10RmRbZy9 | Ping-yeh Chiang,Renkun Ni,David Yu Miller,Arpit Bansal,Jonas Geiping,Micah Goldblum,Tom Goldstein | ICLR 2023,Top 25% | It is commonly believed that the implicit regularization of optimizers is needed for neural networks to generalize in the overparameterized regime. In this paper, we observe experimentally that this implicit regularization behavior is {\em generic}, i.e. it does not depend strongly on the choice of optimizer. We demonstrate this by training neural networks using several gradient-free optimizers, which do not benefit from properties that are often attributed to gradient-based optimizers. This includes a guess-and-check optimizer that generates uniformly random parameter vectors until finding one that happens to achieve perfect train accuracy, and a zeroth-order Pattern Search optimizer that uses no gradient computations. In the low sample and few-shot regimes, where zeroth order optimizers are most computationally tractable, we find that these non-gradient optimizers achieve test accuracy comparable to SGD. The code to reproduce results can be found at https://github.com/Ping-C/optimizer . | https://openreview.net/pdf/2a88b78329da070f92f565b0cde765a1fb20d3d9.pdf |
Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning | https://openreview.net/forum?id=h5OpjGd_lo6 | https://openreview.net/forum?id=h5OpjGd_lo6 | Jiahui Gao,Renjie Pi,LIN Yong,Hang Xu,Jiacheng Ye,Zhiyong Wu,WEIZHONG ZHANG,Xiaodan Liang,Zhenguo Li,Lingpeng Kong | ICLR 2023,Top 25% | There is a rising interest in further exploring the zero-shot learning potential of large pre-trained language models (PLMs). A new paradigm called data-generation-based zero-shot learning has achieved impressive success. In this paradigm, the synthesized data from the PLM acts as the carrier of knowledge, which is used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and efficiency than prompt-based zero-shot learning methods on PLMs. The main hurdle of this approach is that the synthesized data from the PLM usually contains a significant portion of low-quality samples. Fitting on such data will greatly hamper the performance of the task-specific model, making it unreliable for deployment. Previous methods remedy this issue mainly by filtering synthetic data using heuristic metrics (e.g., output confidence), or refining the data with the help of a human expert, which comes with excessive manual tuning or expensive costs. In this paper, we propose a novel noise-robust re-weighting framework SunGen to automatically construct high-quality data for zero-shot classification problems. Our framework features the ability to learn the sample weights indicating data quality without requiring any human annotation. We theoretically and empirically verify the ability of our method to help construct good-quality synthetic datasets. Notably, SunGen-LSTM yields a 9.8% relative improvement over the baseline in average accuracy across eight different established text classification tasks. | https://openreview.net/pdf/82812310fbf1dff5ce1f72fe99e2d46523ca8d5a.pdf |
D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory | https://openreview.net/forum?id=aBWnqqsuot7 | https://openreview.net/forum?id=aBWnqqsuot7 | Tianbo Li,Min Lin,Zheyuan Hu,Kunhao Zheng,Giovanni Vignale,Kenji Kawaguchi,A.H. Castro Neto,Kostya S. Novoselov,Shuicheng YAN | ICLR 2023,Top 25% | Kohn-Sham Density Functional Theory (KS-DFT) has been traditionally solved by the Self-Consistent Field (SCF) method. Behind the SCF loop is the physics intuition of solving a system of non-interacting single-electron wave functions under an effective potential. In this work, we propose a deep learning approach to KS-DFT. First, in contrast to the conventional SCF loop, we propose to directly minimize the total energy by reparameterizing the orthogonal constraint as a feed-forward computation. We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity from $O(N^4)$ to $O(N^3)$. Second, the numerical integration, which involves a summation over the quadrature grids, can be amortized over the optimization steps. At each step, stochastic gradient descent (SGD) is performed with a sampled minibatch of the grids. Extensive experiments are carried out to demonstrate the advantage of our approach in terms of efficiency and stability. In addition, we show that our approach enables us to explore more complex neural-based wave functions. | https://openreview.net/pdf/2224ef90a640f03ebd92a397a2ffd6bc277a8b16.pdf |
Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning | https://openreview.net/forum?id=kPLzOfPfA2l | https://openreview.net/forum?id=kPLzOfPfA2l | Do-Yeon Kim,Dong-Jun Han,Jun Seo,Jaekyun Moon | ICLR 2023,Top 25% | Class-incremental few-shot learning, where new sets of classes are provided sequentially with only a few training samples, presents a great challenge due to catastrophic forgetting of old knowledge and overfitting caused by lack of data. During finetuning on new classes, the performance on previous classes deteriorates quickly even when only a small fraction of parameters are updated, since the previous knowledge is broadly associated with most of the model parameters in the original parameter space. In this paper, we introduce WaRP, the \textit{weight space rotation process}, which transforms the original parameter space into a new space so that we can push most of the previous knowledge compactly into only a few important parameters. By properly identifying and freezing these key parameters in the new weight space, we can finetune the remaining parameters without affecting the knowledge of previous classes. As a result, WaRP provides an additional room for the model to effectively learn new classes in future incremental sessions. Experimental results confirm the effectiveness of our solution and show the improved performance over the state-of-the-art methods. | https://openreview.net/pdf/36973a131d3dce27cb038d510a98686e3e24a480.pdf |
Pre-training via Denoising for Molecular Property Prediction | https://openreview.net/forum?id=tYIMtogyee | https://openreview.net/forum?id=tYIMtogyee | Sheheryar Zaidi,Michael Schaarschmidt,James Martens,Hyunjik Kim,Yee Whye Teh,Alvaro Sanchez-Gonzalez,Peter Battaglia,Razvan Pascanu,Jonathan Godwin | ICLR 2023,Top 25% | Many important problems involving molecular property prediction from 3D structures have limited data, posing a generalization challenge for neural networks. In this paper, we describe a pre-training technique based on denoising that achieves a new state-of-the-art in molecular property prediction by utilizing large datasets of 3D molecular structures at equilibrium to learn meaningful representations for downstream tasks. Relying on the well-known link between denoising autoencoders and score-matching, we show that the denoising objective corresponds to learning a molecular force field -- arising from approximating the Boltzmann distribution with a mixture of Gaussians -- directly from equilibrium structures. Our experiments demonstrate that using this pre-training objective significantly improves performance on multiple benchmarks, achieving a new state-of-the-art on the majority of targets in the widely used QM9 dataset. Our analysis then provides practical insights into the effects of different factors -- dataset sizes, model size and architecture, and the choice of upstream and downstream datasets -- on pre-training. | https://openreview.net/pdf/5124e3ed078b69949b650fc3e97fcc328fafe4ff.pdf |
Martingale Posterior Neural Processes | https://openreview.net/forum?id=-9PVqZ-IR_ | https://openreview.net/forum?id=-9PVqZ-IR_ | Hyungi Lee,Eunggu Yun,Giung Nam,Edwin Fong,Juho Lee | ICLR 2023,Top 25% | A Neural Process (NP) estimates a stochastic process implicitly defined with neural networks given a stream of data, rather than pre-specifying priors already known, such as Gaussian processes. An ideal NP would learn everything from data without any inductive biases, but in practice, we often restrict the class of stochastic processes for the ease of estimation. One such restriction is the use of a finite-dimensional latent variable accounting for the uncertainty in the functions drawn from NPs. Some recent works show that this can be improved with more “data-driven” source of uncertainty such as bootstrapping. In this work, we take a different approach based on the martingale posterior, a recently developed alternative to Bayesian inference. For the martingale posterior, instead of specifying prior-likelihood pairs, a predictive distribution for future data is specified. Under specific conditions on the predictive distribution, it can be shown that the uncertainty in the generated future data actually corresponds to the uncertainty of the implicitly defined Bayesian posteriors. Based on this result, instead of assuming any form of the latent variables, we equip a NP with a predictive distribution implicitly defined with neural networks and use the corresponding martingale posteriors as the source of uncertainty. The resulting model, which we name as Martingale Posterior Neural Process (MPNP), is demonstrated to outperform baselines on various tasks. | https://openreview.net/pdf/5806c8aefe1a560e8eb99dbcad6143cd7e30f31d.pdf |
On the Usefulness of Embeddings, Clusters and Strings for Text Generation Evaluation | https://openreview.net/forum?id=bvpkw7UIRdU | https://openreview.net/forum?id=bvpkw7UIRdU | Tiago Pimentel,Clara Isabel Meister,Ryan Cotterell | ICLR 2023,Top 25% | A good automatic evaluation metric for language generation ideally correlates highly with human judgements of text quality. Yet, there is a dearth of such metrics, which inhibits the rapid and efficient progress of language generators. One exception is the recently proposed Mauve. In theory, Mauve measures an information-theoretic divergence between two probability distributions over strings: one representing the language generator under evaluation; the other representing the true natural language distribution. Mauve's authors argue that its success comes from the qualitative properties of their proposed divergence. Yet in practice, as this divergence is uncomputable, Mauve approximates it by measuring the divergence between multinomial distributions over clusters instead, where cluster assignments are attained by grouping strings based on a pretrained language model's embeddings. As we show, however, this is not a tight approximation---in either theory or practice. This begs the question: why does Mauve work so well? In this work, we show that Mauve was right for the wrong reasons, and that its newly proposed divergence is not necessary for its high performance. In fact, classical divergences paired with its proposed cluster-based approximation may actually serve as better evaluation metrics. We finish the paper with a probing analysis; this analysis leads us to conclude that---by encoding syntactic- and coherence-level features of text, while ignoring surface-level features---such cluster-based approximations to string distributions may simply be better for evaluating state-of-the-art language generators. | https://openreview.net/pdf/ecbd3cf3099fb64e9d4d2614aa66862f601c3328.pdf |
DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated and Musculoskeletal Systems | https://openreview.net/forum?id=C-xa_D3oTj6 | https://openreview.net/forum?id=C-xa_D3oTj6 | Pierre Schumacher,Daniel Haeufle,Dieter Büchler,Syn Schmitt,Georg Martius | ICLR 2023,Top 25% | Muscle-actuated organisms are capable of learning an unparalleled diversity of dexterous movements despite their vast number of muscles. Reinforcement learning (RL) on large musculoskeletal models, however, has not been able to show similar performance. We conjecture that ineffective exploration in large overactuated action spaces is a key problem. This is supported by the finding that common exploration noise strategies are inadequate in synthetic examples of overactuated systems. We identify differential extrinsic plasticity (DEP), a method from the domain of self-organization, as being able to induce state-space covering exploration within seconds of interaction. By integrating DEP into RL, we achieve fast learning of reaching and locomotion in musculoskeletal systems, outperforming current approaches in all considered tasks in sample efficiency and robustness. | https://openreview.net/pdf/e3ebc4afb3c3051ac2670b1f21a54881897fe728.pdf |
The Symmetric Generalized Eigenvalue Problem as a Nash Equilibrium | https://openreview.net/forum?id=PEgBEB74JjB | https://openreview.net/forum?id=PEgBEB74JjB | Ian Gemp,Charlie Chen,Brian McWilliams | ICLR 2023,Top 25% | The symmetric generalized eigenvalue problem (SGEP) is a fundamental concept in numerical linear algebra. It captures the solution of many classical machine learning problems such as canonical correlation analysis, independent components analysis, partial least squares, linear discriminant analysis, principal components and others. Despite this, most general solvers are prohibitively expensive when dealing with *streaming data sets* (i.e., minibatches) and research has instead concentrated on finding efficient solutions to specific problem instances. In this work, we develop a game-theoretic formulation of the top-$k$ SGEP whose Nash equilibrium is the set of generalized eigenvectors. We also present a parallelizable algorithm with guaranteed asymptotic convergence to the Nash. Current state-of-the-art methods require $\mathcal{O}(d^2k)$ runtime complexity per iteration which is prohibitively expensive when the number of dimensions ($d$) is large. We show how to modify this parallel approach to achieve $\mathcal{O}(dk)$ runtime complexity. Empirically we demonstrate that this resulting algorithm is able to solve a variety of SGEP problem instances including a large-scale analysis of neural network activations. | https://openreview.net/pdf/fa439d55119aeee9e8abbf9fc9998c806d8d9320.pdf |
EA-HAS-Bench: Energy-aware Hyperparameter and Architecture Search Benchmark | https://openreview.net/forum?id=n-bvaLSCC78 | https://openreview.net/forum?id=n-bvaLSCC78 | Shuguang Dou,XINYANG JIANG,Cai Rong Zhao,Dongsheng Li | ICLR 2023,Top 25% | The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality. Energy consumption is an especially pressing issue for AutoML algorithms because they usually require repeatedly training large numbers of computationally intensive deep models to search for optimal configurations. This paper takes one of the most essential steps in developing energy-aware (EA) NAS methods, by providing a benchmark that makes EA-NAS research more reproducible and accessible. Specifically, we present the first large-scale energy-aware benchmark that allows studying AutoML methods to achieve better trade-offs between performance and search energy consumption, named EA-HAS-Bench. EA-HAS-Bench provides a large-scale architecture/hyperparameter joint search space, covering diversified configurations related to energy consumption. Furthermore, we propose a novel surrogate model specially designed for the large joint search space, which uses a Bezier curve-based model to predict learning curves of unlimited shape and length. Based on the proposed dataset, we develop a new energy-aware AutoML method that equips existing AutoML algorithms to take search energy consumption into account, and our experiments show that the modified energy-aware AutoML methods achieve a better trade-off between energy consumption and model performance. | https://openreview.net/pdf/9106b5730cee2cbeca225886e09cd6befa802419.pdf |
MARS: Meta-learning as Score Matching in the Function Space | https://openreview.net/forum?id=WAgXmT8BeRj | https://openreview.net/forum?id=WAgXmT8BeRj | Krunoslav Lehman Pavasovic,Jonas Rothfuss,Andreas Krause | ICLR 2023,Top 25% | Meta-learning aims to extract useful inductive biases from a set of related datasets. In Bayesian meta-learning, this is typically achieved by constructing a prior distribution over neural network parameters. However, specifying families of computationally viable prior distributions over the high-dimensional neural network parameters is difficult. As a result, existing approaches resort to meta-learning restrictive diagonal Gaussian priors, severely limiting their expressiveness and performance. To circumvent these issues, we approach meta-learning through the lens of functional Bayesian neural network inference which views the prior as a stochastic process and performs inference in the function space. Specifically, we view the meta-training tasks as samples from the data-generating process and formalize meta-learning as empirically estimating the law of this stochastic process. Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process marginals instead of parameter space priors. In a comprehensive benchmark, we demonstrate that our method achieves state-of-the-art performance in terms of predictive accuracy and substantial improvements in the quality of uncertainty estimates. | https://openreview.net/pdf/1844049a3a5915d7c96f3e7a03be2fe5f82a0e4b.pdf |
Faster Gradient-Free Methods for Escaping Saddle Points | https://openreview.net/forum?id=KDhFkA6MQsW | https://openreview.net/forum?id=KDhFkA6MQsW | Hualin Zhang,Bin Gu | ICLR 2023,Top 25% | Escaping from saddle points has become an important research topic in non-convex optimization. In this paper, we study the case when calculations of explicit gradients are expensive or even infeasible, and only function values are accessible. Currently, two types of gradient-free (zeroth-order) methods, based on random perturbation and on negative curvature finding, have been proposed to escape saddle points efficiently and converge to an $\epsilon$-approximate second-order stationary point. Nesterov's accelerated gradient descent (AGD) method can escape saddle points faster than gradient descent (GD), as has been verified in first-order algorithms. However, whether AGD could accelerate the gradient-free methods is still unstudied. To unfold this mystery, in this paper, we propose accelerated variants of the two types of gradient-free methods for escaping saddle points. We show that our algorithms can find an $\epsilon$-approximate second-order stationary point with $\tilde{\mathcal{O}}(1/\epsilon^{1.75})$ iteration complexity and $\tilde{\mathcal{O}}(d/\epsilon^{1.75})$ oracle complexity, where $d$ is the problem dimension. Thus, our methods achieve a convergence rate comparable to their first-order counterparts and a lower oracle complexity compared to prior derivative-free methods for finding second-order stationary points. | https://openreview.net/pdf/98601f415eedff1073917a2b7eeacd6ce9a0031f.pdf |
VA-DepthNet: A Variational Approach to Single Image Depth Prediction | https://openreview.net/forum?id=xjxUjHa_Wpa | https://openreview.net/forum?id=xjxUjHa_Wpa | Ce Liu,Suryansh Kumar,Shuhang Gu,Radu Timofte,Luc Van Gool | ICLR 2023,Top 25% | We introduce VA-DepthNet, a simple, effective, and accurate deep neural network approach for the single-image depth prediction (SIDP) problem. The proposed approach advocates using classical first-order variational constraints for this problem. While state-of-the-art deep neural network methods for SIDP learn the scene depth from images in a supervised setting, they often overlook the invaluable invariances and priors in the rigid scene space, such as the regularity of the scene. The paper's main contribution is to reveal the benefit of classical and well-founded variational constraints in the neural network design for the SIDP task. It is shown that imposing first-order variational constraints in the scene space together with popular encoder-decoder-based network architecture design provides excellent results for the supervised SIDP task. The imposed first-order variational constraint makes the network aware of the depth gradient in the scene space, i.e., regularity. The paper demonstrates the usefulness of the proposed approach via extensive evaluation and ablation analysis over several benchmark datasets, such as KITTI, NYU Depth V2, and SUN RGB-D. The VA-DepthNet at test time shows considerable improvements in depth prediction accuracy compared to the prior art and is accurate also at high-frequency regions in the scene space. At the time of writing this paper, our method---labeled as VA-DepthNet, when tested on the KITTI depth-prediction evaluation set benchmarks, shows state-of-the-art results, and is the top-performing published approach. | https://openreview.net/pdf/583bb302f77492085cedcf40f241f19b02f4e775.pdf |
Prompt-to-Prompt Image Editing with Cross-Attention Control | https://openreview.net/forum?id=_CDixzkzeyb | https://openreview.net/forum?id=_CDixzkzeyb | Amir Hertz,Ron Mokady,Jay Tenenbaum,Kfir Aberman,Yael Pritch,Daniel Cohen-or | ICLR 2023,Top 25% | Recent large-scale text-driven synthesis diffusion models have attracted much attention thanks to their remarkable capabilities of generating highly diverse images that follow given text prompts. Therefore, it is only natural to build upon these synthesis models to provide text-driven image editing capabilities. However, editing is challenging for these generative models, since an innate property of an editing technique is to preserve some content from the original image, while in the text-based models, even a small modification of the text prompt often leads to a completely different outcome. State-of-the-art methods mitigate this by requiring the users to provide a spatial mask to localize the edit, hence ignoring the original structure and content within the masked region. In this paper, we pursue an intuitive prompt-to-prompt editing framework, where the edits are controlled by text only. We analyze a text-conditioned model in depth and observe that the cross-attention layers are the key to controlling the relation between the spatial layout of the image and each word in the prompt. With this observation, we propose to control the attention maps along the diffusion process. Our approach enables us to monitor the synthesis process by editing the textual prompt only, paving the way to a myriad of caption-based editing applications such as localized editing by replacing a word, global editing by adding a specification, and even controlling the extent to which a word is reflected in the image. We present our results over diverse images and prompts with different text-to-image models, demonstrating high-quality synthesis and fidelity to the edited prompts. | https://openreview.net/pdf/a6e78444f28f4790c2b8eb24364ced3ce736feb0.pdf |
DiffEdit: Diffusion-based semantic image editing with mask guidance | https://openreview.net/forum?id=3lge0p5o-M- | https://openreview.net/forum?id=3lge0p5o-M- | Guillaume Couairon,Jakob Verbeek,Holger Schwenk,Matthieu Cord | ICLR 2023,Top 25% | Image generation has recently seen tremendous advances, with diffusion models making it possible to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require the user to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is the ability to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. | https://openreview.net/pdf/3d837329e3740d349726e77482e1be2f69278a1b.pdf |
Rarity Score: A New Metric to Evaluate the Uncommonness of Synthesized Images | https://openreview.net/forum?id=JTGimap_-F | https://openreview.net/forum?id=JTGimap_-F | Jiyeon Han,Hwanil Choi,Yunjey Choi,Junho Kim,Jung-Woo Ha,Jaesik Choi | ICLR 2023,Top 25% | Evaluation metrics in image synthesis play a key role in measuring the performance of generative models. However, most metrics mainly focus on image fidelity. Existing diversity metrics are derived by comparing distributions, and thus they cannot quantify the diversity or rarity degree of each generated image. In this work, we propose a new evaluation metric, called `rarity score', to measure both image-wise uncommonness and model-wise diversified generation performance. We first make the empirical observation that typical samples are close to each other and distinctive samples are far from each other in terms of nearest-neighbor distances in latent spaces represented by feature extractor networks such as VGG16. We then show that one can effectively filter typical or distinctive samples with the proposed metric. We also use our metric to demonstrate that the extent to which different generative models produce rare images can be effectively compared. Further, our metric can be used to compare rarities between datasets that share the same concept, such as CelebA-HQ and FFHQ. Finally, we analyze the use of the metric with different designs of feature extractors to better understand the relationship between feature spaces and resulting high-rarity images. Code will be publicly available for the research community. | https://openreview.net/pdf/dfcead3ef1a1fcc3a124e59886764e4d93b824a7.pdf |
Corrupted Image Modeling for Self-Supervised Visual Pre-Training | https://openreview.net/forum?id=09hVcSDkea | https://openreview.net/forum?id=09hVcSDkea | Yuxin Fang,Li Dong,Hangbo Bao,Xinggang Wang,Furu Wei | ICLR 2023,Top 25% | We introduce Corrupted Image Modeling (CIM) for self-supervised visual pre-training. CIM uses an auxiliary generator with a small trainable BEiT to corrupt the input image instead of using artificial [MASK] tokens, where some patches are randomly selected and replaced with plausible alternatives sampled from the BEiT output distribution. Given this corrupted image, an enhancer network learns to either recover all the original image pixels, or predict whether each visual token is replaced by a generator sample or not. The generator and the enhancer are simultaneously trained and synergistically updated. After pre-training, the enhancer can be used as a high-capacity visual encoder for downstream tasks. CIM is a general and flexible visual pre-training framework that is suitable for various network architectures. For the first time, CIM demonstrates that both ViT and CNN can learn rich visual representations using a unified, non-Siamese framework. Experimental results show that our approach achieves compelling results in vision benchmarks, such as ImageNet classification and ADE20K semantic segmentation. | https://openreview.net/pdf/4f86e1c43a4f5b420e19c75c8be820279b0b46a9.pdf |
Semi-Implicit Variational Inference via Score Matching | https://openreview.net/forum?id=sd90a2ytrt | https://openreview.net/forum?id=sd90a2ytrt | Longlin Yu,Cheng Zhang | ICLR 2023,Top 25% | Semi-implicit variational inference (SIVI) greatly enriches the expressiveness of variational families by considering implicit variational distributions defined in a hierarchical manner. However, due to the intractable densities of variational distributions, current SIVI approaches often use surrogate evidence lower bounds (ELBOs) or employ expensive inner-loop MCMC runs for unbiased ELBOs for training. In this paper, we propose SIVI-SM, a new method for SIVI based on an alternative training objective via score matching. Leveraging the hierarchical structure of semi-implicit variational families, the score matching objective allows a minimax formulation where the intractable variational densities can be naturally handled with denoising score matching. We show that SIVI-SM closely matches the accuracy of MCMC and outperforms ELBO-based SIVI methods in a variety of Bayesian inference tasks. | https://openreview.net/pdf/0d0ccdad3898dc31ee34f2593f76a3a9d2a77512.pdf |
Exploring Temporally Dynamic Data Augmentation for Video Recognition | https://openreview.net/forum?id=fxjzKOdw9wb | https://openreview.net/forum?id=fxjzKOdw9wb | Taeoh Kim,Jinhyung Kim,Minho Shim,Sangdoo Yun,Myunggu Kang,Dongyoon Wee,Sangyoun Lee | ICLR 2023,Top 25% | Data augmentation has recently emerged as an essential component of modern training recipes for visual recognition tasks. However, data augmentation for video recognition has rarely been explored despite its effectiveness. The few existing augmentation recipes for video recognition naively extend image augmentation methods by applying the same operations to all video frames. Our main idea is that the magnitude of augmentation operations for each frame needs to be changed over time to capture the real-world video's temporal variations. These variations should be generated as diversely as possible while using few additional hyper-parameters during training. Motivated by this, we propose a simple yet effective video data augmentation framework, DynaAugment. The magnitude of augmentation operations on each frame is changed by an effective mechanism, Fourier Sampling, that parameterizes diverse, smooth, and realistic temporal variations. DynaAugment also includes an extended search space for automatic data augmentation methods that is suitable for video. Our experiments demonstrate that there is additional room for improvement over static augmentations on diverse video models, which DynaAugment captures. Specifically, we show the effectiveness of DynaAugment on various video datasets and tasks: large-scale video recognition (Kinetics-400 and Something-Something-v2), small-scale video recognition (UCF-101 and HMDB-51), fine-grained video recognition (Diving-48 and FineGym), video action segmentation on Breakfast, video action localization on THUMOS'14, and video object detection on MOT17Det. | https://openreview.net/pdf/0c7e421612ae6fa1ce1a6ff3fc3b73e0fef95830.pdf |
A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning | https://openreview.net/forum?id=dqITIpZ5Z4b | https://openreview.net/forum?id=dqITIpZ5Z4b | Zixiang Chen,Chris Junchi Li,Huizhuo Yuan,Quanquan Gu,Michael Jordan | ICLR 2023,Top 25% | With the increasing need for handling large state and action spaces, general function approximation has become a key technique in reinforcement learning (RL). In this paper, we propose a general framework that unifies model-based and model-free RL, and an Admissible Bellman Characterization (ABC) class that subsumes nearly all Markov decision process (MDP) models in the literature for tractable RL. We propose a novel estimation function with decomposable structural properties for optimization-based exploration and the functional Eluder dimension as a complexity measure of the ABC class. Under our framework, a new sample-efficient algorithm namely OPtimization-based ExploRation with Approximation (OPERA) is proposed, achieving regret bounds that match or improve over the best-known results for a variety of MDP models. In particular, for MDPs with low Witness rank, under a slightly stronger assumption, OPERA improves the state-of-the-art sample complexity results by a factor of $dH$. Our framework provides a generic interface to design and analyze new RL models and algorithms. | https://openreview.net/pdf/78f90f35e722c4fb344bd1556ce84379181cd92a.pdf |
Adversarial Attacks on Adversarial Bandits | https://openreview.net/forum?id=bBpT6dEjeRG | https://openreview.net/forum?id=bBpT6dEjeRG | Yuzhe Ma,Zhijin Zhou | ICLR 2023,Top 25% | We study a security threat to adversarial multi-armed bandits, in which an attacker perturbs the loss or reward signal to control the behavior of the victim bandit player. We show that the attacker is able to mislead any no-regret adversarial bandit algorithm into selecting a suboptimal target action in all but a sublinear number of rounds (i.e., in $T-o(T)$ rounds), while incurring only a sublinear ($o(T)$) cumulative attack cost. This result implies a critical security concern in real-world bandit-based systems: in online recommendation, for example, an attacker might be able to hijack the recommender system and promote a desired product. Our proposed attack algorithms require knowledge of only the regret rate, and are thus agnostic to the concrete bandit algorithm employed by the victim player. We also derive a theoretical lower bound on the cumulative attack cost that any victim-agnostic attack algorithm must incur. The lower bound matches the upper bound achieved by our attack, which shows that our attack is asymptotically optimal. | https://openreview.net/pdf/082ae9856f805df99b2abe0f422e94e79c2f5733.pdf |
Ensuring DNN Solution Feasibility for Optimization Problems with Linear Constraints | https://openreview.net/forum?id=QVcDQJdFTG | https://openreview.net/forum?id=QVcDQJdFTG | Tianyu Zhao,Xiang Pan,Minghua Chen,Steven Low | ICLR 2023,Top 25% | We propose preventive learning as the first framework to guarantee Deep Neural Network (DNN) solution feasibility for optimization problems with linear constraints without post-processing, upon satisfying a mild condition on constraint calibration. Without loss of generality, we focus on problems with only inequality constraints. We systematically calibrate the inequality constraints used in training, thereby anticipating DNN prediction errors and ensuring the obtained solutions remain feasible. We characterize the calibration rate and a critical DNN size, based on which we can directly construct a DNN with provable solution feasibility guarantee. We further propose an Adversarial-Sample Aware training algorithm to improve its optimality performance. We apply the framework to develop DeepOPF+ for solving essential DC optimal power flow problems in grid operation. Simulation results over IEEE test cases show that it outperforms existing strong DNN baselines in ensuring 100\% feasibility and attaining consistent optimality loss (<0.19%) and speedup (up to x228) in both light-load and heavy-load regimes, as compared to a state-of-the-art solver. We also apply our framework to a non-convex problem and show its performance advantage over existing schemes. | https://openreview.net/pdf/e48b4b7a07d7810a1f1175bb20762f88e7436ae8.pdf |
LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation | https://openreview.net/forum?id=FKXVK9dyMM | https://openreview.net/forum?id=FKXVK9dyMM | Xuheng Cai,Chao Huang,Lianghao Xia,Xubin Ren | ICLR 2023,Top 25% | Graph neural networks (GNNs) are a powerful learning approach for graph-based recommender systems. Recently, GNNs integrated with contrastive learning have shown superior performance in recommendation thanks to their data augmentation schemes, which aim to deal with highly sparse data. Despite their success, most existing graph contrastive learning methods either perform stochastic augmentation (e.g., node/edge perturbation) on the user-item interaction graph, or rely on heuristic-based augmentation techniques (e.g., user clustering) for generating contrastive views. We argue that these methods cannot preserve the intrinsic semantic structures well and are easily biased by noise perturbation. In this paper, we propose a simple yet effective graph contrastive learning paradigm, LightGCL, that mitigates these issues impairing the generality and robustness of CL-based recommenders. Our model exclusively utilizes singular value decomposition for contrastive augmentation, which enables unconstrained structural refinement with global collaborative relation modeling. Experiments conducted on several benchmark datasets demonstrate the significant performance improvement of our model over state-of-the-art methods. Further analyses demonstrate LightGCL's superior robustness against data sparsity and popularity bias. The source code of our model is available at https://github.com/HKUDS/LightGCL. | https://openreview.net/pdf/83b56b5d44ab3126d8b47ac750cd92cb0c6475dc.pdf |
MIMT: Masked Image Modeling Transformer for Video Compression | https://openreview.net/forum?id=j9m-mVnndbm | https://openreview.net/forum?id=j9m-mVnndbm | Jinxi Xiang,Kuan Tian,Jun Zhang | ICLR 2023,Top 25% | Deep learning video compression outperforms its hand-crafted counterparts with enhanced flexibility and capacity. One key component of the learned video codec is the autoregressive entropy model conditioned on spatial and temporal priors. Operating autoregressively in raster-scan order naively treats the context as unidirectional. This is neither efficient nor optimal, considering that the conditional information may lie at the end of the sequence. We thus introduce an entropy model based on a masked image modeling transformer (MIMT) to learn the spatial-temporal dependencies. Video frames are first encoded into sequences of tokens and then processed with the transformer encoder as priors. The transformer decoder learns the probability mass functions (PMFs) \emph{conditioned} on the priors and masked inputs. It is then capable of selecting optimal decoding orders without a fixed direction. During training, MIMT aims to predict the PMFs of randomly masked tokens by attending to tokens in all directions. This allows MIMT to capture the temporal dependencies from encoded priors and the spatial dependencies from the unmasked tokens, i.e., decoded tokens. At inference time, the model begins by generating PMFs of all masked tokens in parallel and then decodes the frame iteratively from the previously selected (i.e., high-confidence) decoded tokens. In addition, we improve the overall performance with further techniques, e.g., manifold conditional priors that accumulate long-range information and shifted-window attention that reduces complexity. Extensive experiments demonstrate that the proposed MIMT framework, equipped with the new transformer entropy model, achieves state-of-the-art performance on the HEVC, UVG, and MCL-JCV datasets, generally outperforming VVC in terms of PSNR and SSIM. | https://openreview.net/pdf/77a1b3484f4a2e2c214313bd3f9964508a65d42a.pdf |
Hungry Hungry Hippos: Towards Language Modeling with State Space Models | https://openreview.net/forum?id=COZDy0WYGg | https://openreview.net/forum?id=COZDy0WYGg | Daniel Y Fu,Tri Dao,Khaled Kamal Saab,Armin W Thomas,Atri Rudra,Christopher Re | ICLR 2023,Top 25% | State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 2.4$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 2.7B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark. | https://openreview.net/pdf/b3774a7e6b7bda0783528bf1dc8e2600707d797f.pdf |
ACMP: Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks | https://openreview.net/forum?id=4fZc_79Lrqs | https://openreview.net/forum?id=4fZc_79Lrqs | Yuelin Wang,Kai Yi,Xinliang Liu,Yu Guang Wang,Shi Jin | ICLR 2023,Top 25% | Neural message passing is a basic feature extraction unit for graph-structured data that considers neighboring node features in network propagation from one layer to the next. We model this process by an interacting particle system with attractive and repulsive forces and the Allen-Cahn force arising in the modeling of phase transitions. The dynamics of the system is a reaction-diffusion process that can separate particles without blowing up. This induces Allen-Cahn message passing (ACMP) for graph neural networks, where the numerical iteration of the particle system solution constitutes the message passing propagation. ACMP, which has a simple implementation with a neural ODE solver, can propel the network depth up to one hundred layers with a theoretically proven, strictly positive lower bound on the Dirichlet energy. It thus provides a deep GNN model that circumvents the common GNN problem of oversmoothing. GNNs with ACMP achieve state-of-the-art performance for real-world node classification tasks on both homophilic and heterophilic datasets. Code is available at https://github.com/ykiiiiii/ACMP | https://openreview.net/pdf/d58ae8ad07cd24feb44b22279a901a3b7fbf5279.pdf |
Relational Attention: Generalizing Transformers for Graph-Structured Tasks | https://openreview.net/forum?id=cFuMmbWiN6 | https://openreview.net/forum?id=cFuMmbWiN6 | Cameron Diao,Ricky Loynd | ICLR 2023,Top 25% | Transformers flexibly operate over sets of real-valued vectors representing task-specific entities and their attributes, where each vector might encode one word-piece token and its position in a sequence, or some piece of information that carries no position at all. As set processors, transformers are at a disadvantage in reasoning over more general graph-structured data where nodes represent entities and edges represent relations between entities. To address this shortcoming, we generalize transformer attention to consider and update edge vectors in each transformer layer. We evaluate this relational transformer on a diverse array of graph-structured tasks, including the large and challenging CLRS Algorithmic Reasoning Benchmark. There, it dramatically outperforms state-of-the-art graph neural networks expressly designed to reason over graph-structured data. Our analysis demonstrates that these gains are attributable to relational attention's inherent ability to leverage the greater expressivity of graphs over sets. | https://openreview.net/pdf/49232cd55923175bab0a33ca81d281c76edcfaad.pdf |
Distilling Model Failures as Directions in Latent Space | https://openreview.net/forum?id=99RpBVpLiX | https://openreview.net/forum?id=99RpBVpLiX | Saachi Jain,Hannah Lawrence,Ankur Moitra,Aleksander Madry | ICLR 2023,Top 25% | Existing methods for isolating hard subpopulations and spurious correlations in datasets often require human intervention. This can make these methods labor-intensive and dataset-specific. To address these shortcomings, we present a scalable method for automatically distilling a model's failure modes. Specifically, we harness linear classifiers to identify consistent error patterns, and, in turn, induce a natural representation of these failure modes as directions within the feature space. We demonstrate that this framework allows us to discover and automatically caption challenging subpopulations within the training dataset. Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model, and thus can be used to perform synthetic data augmentation that helps remedy the model's failure modes. | https://openreview.net/pdf/c9daa261ea96d95a6dee52da157a59e14333cf07.pdf |
Combinatorial-Probabilistic Trade-Off: P-Values of Community Properties Test in the Stochastic Block Models | https://openreview.net/forum?id=8qjSA5QACb40 | https://openreview.net/forum?id=8qjSA5QACb40 | Shuting Shen,Junwei Lu | ICLR 2023,Top 25% | We propose an inferential framework for testing general combinatorial community properties of the stochastic block model. We aim to test the hypothesis of whether a certain community property is satisfied, e.g., whether a given set of nodes belongs to the same community, and provide p-values for uncertainty quantification. Our framework is applicable to all symmetric community properties. To ease the challenges caused by the combinatorial nature of community properties, we develop a novel shadowing bootstrap method. By utilizing the symmetry, our method can find a shadowing representative of the true assignment, and the number of tested assignments in the alternative is largely reduced. In theory, we introduce a combinatorial distance between two community classes and show a combinatorial-probabilistic trade-off phenomenon. Our test is honest as long as the product of the combinatorial distance between two communities and the probabilistic distance between two connection probabilities is sufficiently large. Moreover, we show that such a trade-off also exists in the information-theoretic lower bound. We also conduct numerical experiments to demonstrate the validity of our method. | https://openreview.net/pdf/5c689940c93924b24ea5f66a7b1ea95007134c04.pdf |
Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization | https://openreview.net/forum?id=yYbhKqdi7Hz | https://openreview.net/forum?id=yYbhKqdi7Hz | Jun-Kun Wang,Andre Wibisono | ICLR 2023,Top 25% | Quasar convexity is a condition that allows some first-order methods to efficiently minimize a function even when the optimization landscape is non-convex. Previous works develop near-optimal accelerated algorithms for minimizing this class of functions, however, they require a subroutine of binary search which results in multiple calls to gradient evaluations in each iteration, and consequently the total number of gradient evaluations does not match a known lower bound. In this work, we show that a recently proposed continuized Nesterov acceleration can be applied to minimizing quasar convex functions and achieves the optimal bound with a high probability. Furthermore, we find that the objective functions of training generalized linear models (GLMs) satisfy quasar convexity, which broadens the applicability of the relevant algorithms, while known practical examples of quasar convexity in non-convex learning are sparse in the literature. We also show that if a smooth and one-point strongly convex, Polyak-Lojasiewicz, or quadratic-growth function satisfies quasar convexity, then attaining an accelerated linear rate for minimizing the function is possible under certain conditions, while acceleration is not known in general for these classes of functions. | https://openreview.net/pdf/1c5f7418978dd32dfc6351a734e73fa6cc98583e.pdf |
Learning Soft Constraints From Constrained Expert Demonstrations | https://openreview.net/forum?id=8sSnD78NqTN | https://openreview.net/forum?id=8sSnD78NqTN | Ashish Gaurav,Kasra Rezaee,Guiliang Liu,Pascal Poupart | ICLR 2023,Top 25% | Inverse reinforcement learning (IRL) methods assume that the expert data is generated by an agent optimizing some reward function. However, in many settings, the agent may optimize a reward function subject to some constraints, where the constraints induce behaviors that may be otherwise difficult to express with just a reward function. We consider the setting where the reward function is given, and the constraints are unknown, and propose a method that is able to recover these constraints satisfactorily from the expert data. While previous work has focused on recovering hard constraints, our method can recover cumulative soft constraints that the agent satisfies on average per episode. In IRL fashion, our method solves this problem by adjusting the constraint function iteratively through a constrained optimization procedure, until the agent behavior matches the expert behavior. We demonstrate our approach on synthetic environments, robotics environments and real world highway driving scenarios. | https://openreview.net/pdf/8fcf77a080574ee36abb6525663524292f7b5217.pdf |
Learning to Grow Pretrained Models for Efficient Transformer Training | https://openreview.net/forum?id=cDYRS5iZ16f | https://openreview.net/forum?id=cDYRS5iZ16f | Peihao Wang,Rameswar Panda,Lucas Torroba Hennigen,Philip Greengard,Leonid Karlinsky,Rogerio Feris,David Daniel Cox,Zhangyang Wang,Yoon Kim | ICLR 2023,Top 25% | Scaling transformers has led to significant breakthroughs in many domains, leading to a paradigm in which larger versions of existing models are trained and released on a periodic basis. New instances of such models are typically trained completely from scratch, despite the fact that they are often just scaled-up versions of their smaller counterparts. How can we use the implicit knowledge in the parameters of smaller, extant models to enable faster training of newer, larger models? This paper describes an approach for accelerating transformer training by learning to grow pretrained transformers, where we learn to linearly map the parameters of the smaller model to initialize the larger model. For tractable learning, we factorize the linear transformation as a composition of (linear) width- and depth-growth operators, and further employ a Kronecker factorization of these growth operators to encode architectural knowledge. Extensive experiments across both language and vision transformers demonstrate that our learned Linear Growth Operator (LiGO) can save up to 50% computational cost of training from scratch, while also consistently outperforming strong baselines that also reuse smaller pretrained models to initialize larger models. | https://openreview.net/pdf/043fba8d0ed8251ba2eb757665721e7fc496d839.pdf |
InCoder: A Generative Model for Code Infilling and Synthesis | https://openreview.net/forum?id=hQwb-lbM6EL | https://openreview.net/forum?id=hQwb-lbM6EL | Daniel Fried,Armen Aghajanyan,Jessy Lin,Sida Wang,Eric Wallace,Freda Shi,Ruiqi Zhong,Scott Yih,Luke Zettlemoyer,Mike Lewis | ICLR 2023,Top 25% | Code is seldom written in a single left-to-right pass and is instead repeatedly edited and refined. We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via masking and infilling). InCoder is trained to generate code files from a large corpus of permissively licensed code, where regions of code have been randomly masked and moved to the end of each file, allowing code infilling with bidirectional context. Our model is the first large generative code model that is able to infill arbitrary regions of code, which we evaluate in a zero-shot setting on challenging tasks such as type inference, comment generation, and variable re-naming. We find that the ability to condition on bidirectional context substantially improves performance on these tasks, while still performing comparably on standard program synthesis benchmarks in comparison to left-to-right only models pretrained at similar scale. Our models and code will be publicly released. | https://openreview.net/pdf/be45f53a1cdce7b55fea4d2ed6ba734f27dea87f.pdf |
UNIFIED-IO: A Unified Model for Vision, Language, and Multi-modal Tasks | https://openreview.net/forum?id=E01k9048soZ | https://openreview.net/forum?id=E01k9048soZ | Jiasen Lu,Christopher Clark,Rowan Zellers,Roozbeh Mottaghi,Aniruddha Kembhavi | ICLR 2023,Top 25% | We propose Unified-IO, a model that performs a large variety of AI tasks spanning classical computer vision tasks, including pose estimation, object detection, depth estimation and image generation, vision-and-language tasks such as region captioning and referring expression, to natural language processing tasks such as question answering and paraphrasing. Developing a single unified model for such a large variety of tasks poses unique challenges due to the heterogeneous inputs and outputs pertaining to each task, including RGB images, per-pixel maps, binary masks, bounding boxes, and language. We achieve this unification by homogenizing every supported input and output into a sequence of discrete vocabulary tokens. This common representation across all tasks allows us to train a single transformer-based architecture, jointly on over 90 diverse datasets in the vision and language fields. Unified-IO is the first model capable of performing all 7 tasks on the GRIT benchmark and produces strong results across 16 diverse benchmarks like NYUv2-Depth, ImageNet, VQA2.0, OK-VQA, Swig, VizWizGround, BoolQ, and SciTail, with no task-specific fine-tuning. Code and pre-trained models will be made publicly available. | https://openreview.net/pdf/4f576a5041215d0298e9540a8c23041533da1724.pdf |
Benchmarking Offline Reinforcement Learning on Real-Robot Hardware | https://openreview.net/forum?id=3k5CUGDLNdd | https://openreview.net/forum?id=3k5CUGDLNdd | Nico Gürtler,Sebastian Blaes,Pavel Kolev,Felix Widmaier,Manuel Wuthrich,Stefan Bauer,Bernhard Schölkopf,Georg Martius | ICLR 2023,Top 25% | Learning policies from previously recorded data is a promising direction for real-world robotics tasks, as online learning is often infeasible. Dexterous manipulation in particular remains an open problem in its general form. The combination of offline reinforcement learning with large diverse datasets, however, has the potential to lead to a breakthrough in this challenging domain analogously to the rapid progress made in supervised learning in recent years. To coordinate the efforts of the research community toward tackling this problem, we propose a benchmark including: i) a large collection of data for offline learning from a dexterous manipulation platform on two tasks, obtained with capable RL agents trained in simulation; ii) the option to execute learned policies on a real-world robotic system and a simulation for efficient debugging. We evaluate prominent open-sourced offline reinforcement learning algorithms on the datasets and provide a reproducible experimental setup for offline reinforcement learning on real systems. | https://openreview.net/pdf/67dcc1b0cfc87e5d6aeaf0391094380da9c1897b.pdf |
CUDA: Curriculum of Data Augmentation for Long-tailed Recognition | https://openreview.net/forum?id=RgUPdudkWlN | https://openreview.net/forum?id=RgUPdudkWlN | Sumyeong Ahn,Jongwoo Ko,Se-Young Yun | ICLR 2023,Top 25% | Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on imbalanced training datasets. To mitigate this problem, many approaches have aimed to balance among given classes by re-weighting or re-sampling training samples. These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models. However, the extracted representations may be of poor quality owing to the limited number of minority samples. To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples. Despite extensive recent studies, no deep analysis has been conducted on determining which classes to augment and how strongly to augment them. In this study, we first investigate the correlation between the degree of augmentation and class-wise performance, and find that the proper degree of augmentation must be allocated for each class to mitigate class imbalance problems. Motivated by this finding, we propose a simple and efficient novel curriculum, which is designed to find the appropriate per-class strength of data augmentation, called CUDA: CUrriculum of Data Augmentation for long-tailed recognition. CUDA can simply be integrated into existing long-tailed recognition methods. We present the results of experiments showing that CUDA effectively achieves better generalization performance compared to the state-of-the-art method on various imbalanced datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. | https://openreview.net/pdf/653fc91920f2396c3eec7c4aab421dc95ba6ccf5.pdf |
Learning to Estimate Shapley Values with Vision Transformers | https://openreview.net/forum?id=5ktFNz_pJLK | https://openreview.net/forum?id=5ktFNz_pJLK | Ian Connick Covert,Chanwoo Kim,Su-In Lee | ICLR 2023,Top 25% | Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem. Current explanation approaches rely on attention values or input gradients, but these provide a limited view of a model’s dependencies. Shapley values offer a theoretically sound alternative, but their computational cost makes them impractical for large, high-dimensional models. In this work, we aim to make Shapley values practical for vision transformers (ViTs). To do so, we first leverage an attention masking approach to evaluate ViTs with partial information, and we then develop a procedure to generate Shapley value explanations via a separate, learned explainer model. Our experiments compare Shapley values to many baseline methods (e.g., attention rollout, GradCAM, LRP), and we find that our approach provides more accurate explanations than existing methods for ViTs. | https://openreview.net/pdf/63a91ca98681923ceee596aa7d3254f49445c743.pdf |
A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet | https://openreview.net/forum?id=Iuubb9W6Jtk | https://openreview.net/forum?id=Iuubb9W6Jtk | Ido Galil,Mohammed Dabbah,Ran El-Yaniv | ICLR 2023,Top 25% | When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper, we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet and benchmark 525 pretrained, publicly available ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models, is available at https://github.com/mdabbah/COOD_benchmarking. The usefulness of the proposed framework and its advantage over alternative existing benchmarks are demonstrated by analyzing the results obtained for these models, which reveals numerous novel observations, including: (1) knowledge distillation consistently improves class-out-of-distribution (C-OOD) detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language–vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming 96% of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated with C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published in ICLR 2023 (What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers), examines the uncertainty estimation performance (ranking, calibration, and selective prediction performance) of these classifiers in an in-distribution setting. | https://openreview.net/pdf/973b16b739dacc7aaa862e3a74f9469f31742eb0.pdf |
Retrieval-based Controllable Molecule Generation | https://openreview.net/forum?id=vDFA1tpuLvk | https://openreview.net/forum?id=vDFA1tpuLvk | Zichao Wang,Weili Nie,Zhuoran Qiao,Chaowei Xiao,Richard Baraniuk,Anima Anandkumar | ICLR 2023,Top 25% | Generating new molecules with specified chemical and biological properties via generative models has emerged as a promising direction for drug discovery. However, existing methods require extensive training/fine-tuning with a large dataset, often unavailable in real-world generation tasks. In this work, we propose a new retrieval-based framework for controllable molecule generation. We use a small set of exemplar molecules, i.e., those that (partially) satisfy the design criteria, to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria. We design a retrieval mechanism that retrieves and fuses the exemplar molecules with the input molecule, which is trained by a new self-supervised objective that predicts the nearest neighbor of the input molecule. We also propose an iterative refinement process to dynamically update the generated molecules and retrieval database for better generalization. Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning. On various tasks ranging from simple design criteria to a challenging real-world scenario for designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate our approach extrapolates well beyond the retrieval database, and achieves better performance and wider applicability than previous methods. | https://openreview.net/pdf/22616ca08340c5ed2e05df70269bcf7e3ebf5592.pdf |
Stochastic Multi-Person 3D Motion Forecasting | https://openreview.net/forum?id=_s1N-DnxdyT | https://openreview.net/forum?id=_s1N-DnxdyT | Sirui Xu,Yu-Xiong Wang,Liangyan Gui | ICLR 2023,Top 25% | This paper aims to deal with the ignored real-world complexities in prior work on human motion forecasting, emphasizing the social properties of multi-person motion, the diversity of motion and social interactions, and the complexity of articulated motion. To this end, we introduce a novel task of stochastic multi-person 3D motion forecasting. We propose a dual-level generative modeling framework that separately models independent individual motion at the local level and social interactions at the global level. Notably, this dual-level modeling mechanism can be achieved within a shared generative model, through introducing learnable latent codes that represent intents of future motion and switching the codes' modes of operation at different levels. Our framework is general; we instantiate it with different generative models, including generative adversarial networks and diffusion models, and various multi-person forecasting models. Extensive experiments on CMU-Mocap, MuPoTS-3D, and SoMoF benchmarks show that our approach produces diverse and accurate multi-person predictions, significantly outperforming the state of the art. | https://openreview.net/pdf/cd1fe46c26063b4d564a5f4fa721d062014dd432.pdf |
Sign and Basis Invariant Networks for Spectral Graph Representation Learning | https://openreview.net/forum?id=Q-UHqMorzil | https://openreview.net/forum?id=Q-UHqMorzil | Derek Lim,Joshua David Robinson,Lingxiao Zhao,Tess Smidt,Suvrit Sra,Haggai Maron,Stefanie Jegelka | ICLR 2023,Top 25% | We introduce SignNet and BasisNet---new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if v is an eigenvector then so is -v; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that under certain conditions our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. When used with Laplacian eigenvectors, our networks are provably more expressive than existing spectral methods on graphs; for instance, they subsume all spectral graph convolutions, certain spectral graph invariants, and previously proposed graph positional encodings as special cases. Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet. | https://openreview.net/pdf/dcdf0914e050b658395e5e8f1bafe2c5d8f6dffc.pdf |
Sequential Latent Variable Models for Few-Shot High-Dimensional Time-Series Forecasting | https://openreview.net/forum?id=7C9aRX2nBf2 | https://openreview.net/forum?id=7C9aRX2nBf2 | Xiajun Jiang,Ryan Missel,Zhiyuan Li,Linwei Wang | ICLR 2023,Top 25% | Modern applications increasingly require learning and forecasting latent dynamics from high-dimensional time-series. Compared to univariate time-series forecasting, this adds a new challenge of reasoning about the latent dynamics of an unobserved abstract state. Sequential latent variable models (LVMs) present an attractive solution, although existing works either struggle with long-term forecasting or have difficulty learning across diverse dynamics. In this paper, we first present a conceptual framework of sequential LVMs to unify existing works, contrast their fundamental limitations, and identify an intuitive solution to long-term forecasting for diverse dynamics via meta-learning. We then present the first framework of few-shot forecasting for high-dimensional time-series: instead of learning a single dynamic function, we leverage data of diverse dynamics and learn to adapt latent dynamic functions to few-shot support series. This is realized via Bayesian meta-learning underpinned by: 1) a latent dynamic function conditioned on knowledge derived from few-shot support series, and 2) a meta-model that learns to extract such dynamic-specific knowledge via feed-forward embedding of support set. We compared the presented framework with a comprehensive set of baseline models trained 1) globally on the large meta-training set with diverse dynamics, and 2) individually on single dynamics, both with and without fine-tuning to k-shot support series used by the meta-models. We demonstrated that the presented framework is agnostic to the latent dynamic function of choice and, at meta-test time, is able to forecast for new dynamics given variable-shot of support series. | https://openreview.net/pdf/2a624e3ea737a57633593e32152635c30eaf25d6.pdf |
Code Translation with Compiler Representations | https://openreview.net/forum?id=XomEU3eNeSQ | https://openreview.net/forum?id=XomEU3eNeSQ | Marc Szafraniec,Baptiste Roziere,Hugh James Leather,Patrick Labatut,Francois Charton,Gabriel Synnaeve | ICLR 2023,Top 25% | In this paper, we leverage low-level compiler intermediate representations (IR) to improve code translation. Traditional transpilers rely on syntactic information and handcrafted rules, which limits their applicability and produces unnatural-looking code. Applying neural machine translation (NMT) approaches to code has successfully broadened the set of programs on which one can get a natural-looking translation. However, these approaches treat code as sequences of text tokens and still do not differentiate well enough between similar pieces of code that have different semantics in different languages. The consequence is low-quality translation, reducing the practicality of NMT and stressing the need for an approach that significantly increases its accuracy. Here we propose to augment code translation with IRs, specifically LLVM IR, with results on the C++, Java, Rust, and Go languages. Our method improves upon the state of the art for unsupervised code translation, increasing the number of correct translations by 11% on average, and up to 79% for the Java → Rust pair with greedy decoding. With beam search, it increases the number of correct translations by 5.5% on average. We extend previous test sets for code translation by adding hundreds of Go and Rust functions. Additionally, we train models with high performance on the problem of IR decompilation, generating programming source code from IR, and study the use of IRs as an intermediary pivot for translation. | https://openreview.net/pdf/e6271eb661d7f4d7cff1993ad01d1e6dcaa983e0.pdf |
Omnigrok: Grokking Beyond Algorithmic Data | https://openreview.net/forum?id=zDiHoIWa0q1 | https://openreview.net/forum?id=zDiHoIWa0q1 | Ziming Liu,Eric J Michaud,Max Tegmark | ICLR 2023,Top 25% | Grokking, the unusual phenomenon for algorithmic datasets where generalization happens long after overfitting the training data, has remained elusive. We aim to understand grokking by analyzing the loss landscapes of neural networks, identifying the mismatch between training and test losses as the cause for grokking. We refer to this as the "LU mechanism" because training and test losses (against model weight norm) typically resemble "L" and "U", respectively. This simple mechanism can nicely explain many aspects of grokking: data size dependence, weight decay dependence, the emergence of representations, etc. Guided by the intuitive picture, we are able to induce grokking on tasks involving images, language and molecules, although the grokking signals are sometimes less dramatic. We attribute the dramatic nature of grokking for algorithmic datasets to representation learning. | https://openreview.net/pdf/4f07c7b6fea42534db8640c840402f6d066a8bd5.pdf |
Flow Annealed Importance Sampling Bootstrap | https://openreview.net/forum?id=XCTVFJwS9LJ | https://openreview.net/forum?id=XCTVFJwS9LJ | Laurence Illing Midgley,Vincent Stimper,Gregor N. C. Simm,Bernhard Schölkopf,José Miguel Hernández-Lobato | ICLR 2023,Top 25% | Normalizing flows are tractable density models that can approximate complicated target distributions, e.g. Boltzmann distributions of physical systems. However, current methods for training flows either suffer from mode-seeking behavior, use samples from the target generated beforehand by expensive MCMC methods, or use stochastic losses that have high variance. To avoid these problems, we augment flows with annealed importance sampling (AIS) and minimize the mass-covering $\alpha$-divergence with $\alpha=2$, which minimizes importance weight variance. Our method, Flow AIS Bootstrap (FAB), uses AIS to generate samples in regions where the flow is a poor approximation of the target, facilitating the discovery of new modes. We apply FAB to multimodal targets and show that we can approximate them very accurately where previous methods fail. To the best of our knowledge, we are the first to learn the Boltzmann distribution of the alanine dipeptide molecule using only the unnormalized target density, without access to samples generated via Molecular Dynamics (MD) simulations: FAB produces better results than training via maximum likelihood on MD samples while using 100 times fewer target evaluations. After reweighting the samples, we obtain unbiased histograms of dihedral angles that are almost identical to the ground truth. | https://openreview.net/pdf/b982ed337b6c3ff43fb3fa4e63f9492b31f03e06.pdf |
Continual Unsupervised Disentangling of Self-Organizing Representations | https://openreview.net/forum?id=ih0uFRFhaZZ | https://openreview.net/forum?id=ih0uFRFhaZZ | Zhiyuan Li,Xiajun Jiang,Ryan Missel,Prashnna Kumar Gyawali,Nilesh Kumar,Linwei Wang | ICLR 2023,Top 25% | Limited progress has been made in continual unsupervised learning of representations, especially in reusing, expanding, and continually disentangling learned semantic factors across data environments. We argue that this is because existing approaches treat continually-arrived data independently, without considering how they are related based on the underlying semantic factors. We address this by a new generative model describing a topologically-connected mixture of spike-and-slab distributions in the latent space, learned end-to-end in a continual fashion via principled variational inference. The learned mixture is able to automatically discover the active semantic factors underlying each data environment and to accumulate their relational structure based on that. This distilled knowledge of different data environments can further be used for generative replay and guiding continual disentangling of new semantic factors. We tested the presented method on a split version of 3DShapes to provide the first quantitative disentanglement evaluation of continually learned representations, and further demonstrated its ability to continually disentangle new representations in benchmark datasets. | https://openreview.net/pdf/fdb507165ec3efc7233824c93b345e73bef5cd31.pdf |
LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence | https://openreview.net/forum?id=5VBBA91N6n | https://openreview.net/forum?id=5VBBA91N6n | Zhihao Shi,Xize Liang,Jie Wang | ICLR 2023,Top 25% | Message passing-based graph neural networks (GNNs) have achieved great success in many real-world applications. However, training GNNs on large-scale graphs suffers from the well-known neighbor explosion problem, i.e., the exponentially increasing dependencies of nodes with the number of message passing layers. Subgraph-wise sampling methods---a promising class of mini-batch training techniques---discard messages outside the mini-batches in backward passes to avoid the neighbor explosion problem at the expense of gradient estimation accuracy. This poses significant challenges to their convergence analysis and convergence speeds, which seriously limits their reliable real-world applications. To address this challenge, we propose a novel subgraph-wise sampling method with a convergence guarantee, namely Local Message Compensation (LMC). To the best of our knowledge, LMC is the {\it first} subgraph-wise sampling method with provable convergence. The key idea of LMC is to retrieve the discarded messages in backward passes based on a message passing formulation of backward passes. By efficient and effective compensations for the discarded messages in both forward and backward passes, LMC computes accurate mini-batch gradients and thus accelerates convergence. We further show that LMC converges to first-order stationary points of GNNs. Experiments on large-scale benchmark tasks demonstrate that LMC significantly outperforms state-of-the-art subgraph-wise sampling methods in terms of efficiency. | https://openreview.net/pdf/afdfd7b6a07fae9bc742768d872aaea1ea7526a3.pdf |
Programmatically Grounded, Compositionally Generalizable Robotic Manipulation | https://openreview.net/forum?id=rZ-wylY5VI | https://openreview.net/forum?id=rZ-wylY5VI | Renhao Wang,Jiayuan Mao,Joy Hsu,Hang Zhao,Jiajun Wu,Yang Gao | ICLR 2023,Top 25% | Robots operating in the real world require both rich manipulation skills and the ability to semantically reason about when to apply those skills. Towards this goal, recent works have integrated semantic representations from large-scale pretrained vision-language (VL) models into manipulation models, imparting them with more general reasoning capabilities. However, we show that the conventional {\it pretraining-finetuning} pipeline for integrating such representations entangles the learning of domain-specific action information and domain-general visual information, leading to less data-efficient training and poor generalization to unseen objects and tasks. To this end, we propose a {\it modular} approach to better leverage pretrained VL models by exploiting the syntactic and semantic structures of language instructions. Our framework uses a semantic parser to recover an executable program, composed of functional modules grounded on vision and action across different modalities. Each functional module is realized as a combination of deterministic computation and learnable neural networks. Program execution produces parameters to general manipulation primitives for a robotic end-effector. The entire modular network can be trained with end-to-end imitation learning objectives. Experiments show that our model successfully disentangles action and perception, translating to improved zero-shot and compositional generalization in a variety of manipulation behaviors. Project webpage at: \url{https://progport.github.io}. | https://openreview.net/pdf/3f955a8cf103d552d11f9329dde46d6173f574aa.pdf |
SketchKnitter: Vectorized Sketch Generation with Diffusion Models | https://openreview.net/forum?id=4eJ43EN2g6l | https://openreview.net/forum?id=4eJ43EN2g6l | Qiang Wang,Haoge Deng,Yonggang Qi,Da Li,Yi-Zhe Song | ICLR 2023,Top 25% | We show vectorized sketch generation can be identified as a reversal of the stroke deformation process. This relationship was established by means of a diffusion model that learns data distributions over the stroke-point locations and pen states of real human sketches. Given randomly scattered stroke-points, sketch generation becomes a process of deformation-based denoising, where the generator rectifies positions of stroke points at each timestep to converge at a recognizable sketch. A key innovation was to embed recognizability into the reverse time diffusion process. It was observed that the estimated noise during the reversal process is strongly correlated with sketch classification accuracy. An auxiliary recurrent neural network (RNN) was consequently used to quantify recognizability during data sampling. It follows that, based on the recognizability scores, a sampling shortcut function can also be devised that renders better quality sketches with fewer sampling steps. Finally it is shown that the model can be easily extended to a conditional generation framework, where given incomplete and unfaithful sketches, it yields one that is more visually appealing and with higher recognizability. | https://openreview.net/pdf/aa51f28767b5d95ceced7af0c79780b06d2fd1e0.pdf |
A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning | https://openreview.net/forum?id=S07feAlQHgM | https://openreview.net/forum?id=S07feAlQHgM | Da-Wei Zhou,Qi-Wei Wang,Han-Jia Ye,De-Chuan Zhan | ICLR 2023,Top 25% | Real-world applications require the classification model to adapt to new classes without forgetting old ones. Correspondingly, Class-Incremental Learning (CIL) aims to train a model with limited memory size to meet this requirement. Typical CIL methods tend to save representative exemplars from former classes to resist forgetting, while recent works find that storing models from history can substantially boost the performance. However, the stored models are not counted into the memory budget, which implicitly results in unfair comparisons. We find that when counting the model size into the total budget and comparing methods with aligned memory size, saving models do not consistently work, especially for the case with limited memory budgets. As a result, we need to holistically evaluate different CIL methods at different memory scales and simultaneously consider accuracy and memory size for measurement. On the other hand, we dive deeply into the construction of the memory buffer for memory efficiency. By analyzing the effect of different layers in the network, we find that shallow and deep layers have different characteristics in CIL. Motivated by this, we propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel. MEMO extends specialized layers based on the shared generalized representations, efficiently extracting diverse representations with modest cost and maintaining representative exemplars. Extensive experiments on benchmark datasets validate MEMO's competitive performance. Code is available at: https://github.com/wangkiw/ICLR23-MEMO | https://openreview.net/pdf/1b652f4ee2aba1681f8d0e268557ff8ce743d37d.pdf |
Toeplitz Neural Network for Sequence Modeling | https://openreview.net/forum?id=IxmWsm4xrua | https://openreview.net/forum?id=IxmWsm4xrua | Zhen Qin,Xiaodong Han,Weixuan Sun,Bowen He,Dong Li,Dongxu Li,Yuchao Dai,Lingpeng Kong,Yiran Zhong | ICLR 2023,Top 25% | Sequence modeling has important applications in natural language processing and computer vision. Recently, transformer-based models, which rely on attention to capture pairwise token relations and on position embeddings to inject positional information, have shown strong performance on various sequence modeling tasks. While showing good performance, transformer models scale inefficiently to long input sequences, mainly due to the quadratic space-time complexity of attention. To overcome this inefficiency, we propose to model sequences with a relative-position-encoded Toeplitz matrix and use a Toeplitz matrix-vector product trick to reduce the space-time complexity of sequence modeling to log-linear. A lightweight sub-network called the relative position encoder is proposed to generate relative position coefficients with a fixed budget of parameters, enabling the proposed Toeplitz neural network to deal with varying sequence lengths. In addition, despite being trained on 512-token sequences, our model can extrapolate to input sequences of up to 14K tokens at inference time with consistent performance. Extensive experiments on autoregressive and bidirectional language modeling, image modeling, and the challenging Long-range Arena Benchmark show that our method achieves better performance than its competitors on most downstream tasks while being significantly faster. | https://openreview.net/pdf/2a7e1fcbcfe67f92df33295ecde966d4a9095dda.pdf |
QuAnt: Quantum Annealing with Learnt Couplings | https://openreview.net/forum?id=isiQ5KIXbjj | https://openreview.net/forum?id=isiQ5KIXbjj | Marcel Seelbach Benkner,Maximilian Krahn,Edith Tretschk,Zorah Lähner,Michael Moeller,Vladislav Golyanik | ICLR 2023,Top 25% | Modern quantum annealers can find high-quality solutions to combinatorial optimisation objectives given as quadratic unconstrained binary optimisation (QUBO) problems. Unfortunately, obtaining suitable QUBO forms in computer vision remains challenging and currently requires problem-specific analytical derivations. Moreover, such explicit formulations impose tangible constraints on solution encodings. In stark contrast to prior work, this paper proposes to learn QUBO forms from data through gradient backpropagation instead of deriving them. As a result, the solution encodings can be chosen flexibly and compactly. Furthermore, our methodology is general and virtually independent of the specifics of the target problem type. We demonstrate the advantages of learnt QUBOs on the diverse problem types of graph matching, 2D point cloud alignment and 3D rotation estimation. Our results are competitive with the previous quantum state of the art while requiring much fewer logical and physical qubits, enabling our method to scale to larger problems. The code and the new dataset are available at https://4dqv.mpi-inf.mpg.de/QuAnt/. | https://openreview.net/pdf/abe676c41d571977f05ff1eee049ec4fb86d0301.pdf |
Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective | https://openreview.net/forum?id=q3F0UBAruO | https://openreview.net/forum?id=q3F0UBAruO | Yiming Gao,Feiyu Liu,Liang Wang,Zhenjie Lian,Weixuan Wang,Siqin Li,Xianliang Wang,Xianhan Zeng,Rundong Wang,jiawei wang,QIANG FU,Yang Wei,Lanxiao Huang,Wei Liu | ICLR 2023,Top 25% | MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as the testbed for the recent AI research on games, and various AI systems have been developed at the human level so far. However, these AI systems mainly focus on how to compete with humans, less on exploring how to collaborate with humans. To this end, this paper makes the first attempt to investigate human-agent collaboration in MOBA games. In this paper, we propose to enable humans and agents to collaborate through explicit communication by designing an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, for accomplishing effective human-agent collaboration in MOBA games. The MCC framework consists of two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, to bridge the communication gap between humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command Selector, to select a valuable meta-command for each agent to achieve effective human-agent collaboration. Experimental results in Honor of Kings demonstrate that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates. Videos are available at https://sites.google.com/view/mcc-demo. | https://openreview.net/pdf/05be94f6c3da0d1f97a06aaecf42515ddc07d159.pdf |
On the complexity of nonsmooth automatic differentiation | https://openreview.net/forum?id=uqg3FhRZaq | https://openreview.net/forum?id=uqg3FhRZaq | Jerome Bolte,Ryan Boustany,Edouard Pauwels,Béatrice Pesquet-Popescu | ICLR 2023,Top 25% | Using the notion of conservative gradient, we provide a simple model to estimate the computational costs of the backward and forward modes of algorithmic differentiation for a wide class of nonsmooth programs. The complexity overhead of the backward mode turns out to be independent of the dimension when using programs with locally Lipschitz semi-algebraic or definable elementary functions. This considerably extends the Baur-Strassen smooth cheap gradient principle. We illustrate our results by establishing fast backpropagation results for conservative gradients through feedforward neural networks with standard activation and loss functions. Nonsmooth backpropagation's cheapness contrasts with concurrent forward approaches, which have, to this day, dimension-dependent worst-case overhead estimates. We provide further results suggesting the superiority of backward propagation of conservative gradients. Indeed, we relate the complexity of computing a large number of directional derivatives to that of matrix multiplication, and we show that finding two subgradients in the Clarke subdifferential of a function is an NP-hard problem. | https://openreview.net/pdf/81b5f1858aeb447b1d248391272b9e30a7ceb511.pdf |
Diffusion Posterior Sampling for General Noisy Inverse Problems | https://openreview.net/forum?id=OnD9zGAGT0k | https://openreview.net/forum?id=OnD9zGAGT0k | Hyungjin Chung,Jeongsol Kim,Michael Thompson Mccann,Marc Louis Klasky,Jong Chul Ye | ICLR 2023,Top 25% | Diffusion models have been recently studied as powerful generative inverse problem solvers, owing to their high quality reconstructions and the ease of combining existing iterative solvers. However, most works focus on solving simple linear inverse problems in noiseless settings, which significantly under-represents the complexity of real-world problems. In this work, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via the Laplace approximation of the posterior sampling. Interestingly, the resulting posterior sampling scheme is a blended version of diffusion sampling with the manifold constrained gradient without a strict measurement consistency projection step, yielding a more desirable generative path in noisy settings compared to the previous studies. Our method demonstrates that diffusion models can incorporate various measurement noise statistics such as Gaussian and Poisson, and also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring. | https://openreview.net/pdf/dd7f2e1f5581d91eb4c1ff34fec78b93d3dfa599.pdf |