Columns: title, url, authors, tags, abstract, pdf
Mass-Editing Memory in a Transformer
https://openreview.net/forum?id=MkbcAHIYgyS
Kevin Meng,Arnab Sen Sharma,Alex J Andonian,Yonatan Belinkov,David Bau
ICLR 2023,Top 25%
Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by an order of magnitude. Our code and data will be open-sourced upon publication.
https://openreview.net/pdf/5d2ff18d2f074c0f0b7bda40d118bb08e13bcd43.pdf
Learning the Positions in CountSketch
https://openreview.net/forum?id=iV9Cs8s8keU
Yi Li,Honghao Lin,Simin Liu,Ali Vakilian,David Woodruff
ICLR 2023,Top 25%
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then use the sketch to quickly solve an optimization problem, e.g., low-rank approximation and regression. In the learning-based sketching paradigm proposed by Indyk et al., the sketch matrix is found by choosing a random sparse matrix, e.g., CountSketch, and then updating the values of its non-zero entries by running gradient descent on a training data set. Despite the growing body of work on this paradigm, a noticeable omission is that the locations of the non-zero entries in previous algorithms were fixed, and only their values were learned. In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries. Our first proposed algorithm is based on a greedy search. However, one drawback of the greedy algorithm is its slow training time. We fix this issue and propose approaches for learning a sketching matrix for both low-rank approximation and Hessian approximation for second-order optimization. The latter is helpful for a range of constrained optimization problems, such as LASSO and matrix estimation with a nuclear norm constraint. Both approaches achieve good accuracy with a fast running time. Moreover, our experiments suggest that our algorithm can still reduce the error significantly even if we only have a very limited number of training matrices.
https://openreview.net/pdf/cc22b9e7a5b739383f6f17c1c3f51a0cf1f79b66.pdf
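For readers unfamiliar with the sketch in question, here is a minimal numpy sketch (our illustration, not the authors' code) applying a classical CountSketch matrix, in which each column has a single random ±1 entry at a random row; the learning-based variants above additionally optimize the values and, in this paper, the locations of those entries.

```python
import numpy as np

def countsketch(A, m, rng):
    """Apply a CountSketch S (m x n) to A (n x d): each column of S has one
    nonzero entry, a random sign at a random row, so S @ A costs O(nnz(A))."""
    n = A.shape[0]
    rows = rng.integers(0, m, size=n)        # location of the nonzero in each column
    signs = rng.choice([-1.0, 1.0], size=n)  # value of the nonzero in each column
    SA = np.zeros((m, A.shape[1]))
    for i in range(n):
        SA[rows[i]] += signs[i] * A[i]
    return SA, rows, signs

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
SA, rows, signs = countsketch(A, m=50, rng=rng)  # compressed from 1000 to 50 rows
```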
Outcome-directed Reinforcement Learning by Uncertainty & Temporal Distance-Aware Curriculum Goal Generation
https://openreview.net/forum?id=v69itrHLEu
Daesol Cho,Seungjae Lee,H. Jin Kim
ICLR 2023,Top 25%
Current reinforcement learning (RL) often struggles with challenging exploration problems where the desired outcomes or high rewards are rarely observed. Even though curriculum RL, a framework that solves complex tasks by proposing a sequence of surrogate tasks, shows reasonable results, most previous works still have difficulty proposing a curriculum due to the absence of a mechanism for obtaining calibrated guidance toward the desired outcome states without any prior domain knowledge. To alleviate this, we propose an uncertainty & temporal distance-aware curriculum goal generation method for outcome-directed RL via solving a bipartite matching problem. It not only provides precisely calibrated guidance of the curriculum toward the desired outcome states but also brings much better sample efficiency and geometry-agnostic curriculum goal proposal capability compared to previous curriculum RL methods. We demonstrate that our algorithm significantly outperforms these prior methods on a variety of challenging navigation and robotic manipulation tasks, both quantitatively and qualitatively.
https://openreview.net/pdf/2c6db8a4ff69ca681456fe5e686ec86f942d3e12.pdf
A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation
https://openreview.net/forum?id=Mvetq8DO05O
Yingda Yin,Yang Wang,He Wang,Baoquan Chen
ICLR 2023,Top 25%
Estimating the 3DoF rotation from a single RGB image is an important yet challenging problem. Probabilistic rotation regression has attracted increasing attention for its ability to express uncertainty information along with the prediction. Though modeling noise using the Gaussian-resembling Bingham distribution and matrix Fisher distribution is natural, these distributions are sensitive to outliers owing to their quadratic penalty on deviations. In this paper, we draw inspiration from the multivariate Laplace distribution and propose a novel Rotation Laplace distribution on SO(3). The Rotation Laplace distribution is robust to the disturbance of outliers and applies large gradients to the low-error region, resulting in better convergence. Our extensive experiments show that our proposed distribution achieves state-of-the-art performance for rotation regression tasks over both probabilistic and non-probabilistic baselines. Our project page is at pku-epic.github.io/RotationLaplace.
https://openreview.net/pdf/40fe2e0a65fc37b47b6e6110a4872759e47e20f0.pdf
HiViT: A Simpler and More Efficient Design of Hierarchical Vision Transformer
https://openreview.net/forum?id=3F6I-0-57SC
Xiaosong Zhang,Yunjie Tian,Lingxi Xie,Wei Huang,Qi Dai,Qixiang Ye,Qi Tian
ICLR 2023,Top 25%
There has been a debate on the choice of plain vs. hierarchical vision transformers, where researchers often believe that the former (e.g., ViT) has a simpler design but the latter (e.g., Swin) enjoys higher recognition accuracy. Recently, the emergence of masked image modeling (MIM), a self-supervised visual pre-training method, raised a new challenge for vision transformers in terms of flexibility, i.e., part of the image patches or tokens are to be discarded, which seems to favor plain vision transformers. In this paper, we delve deep into the comparison between ViT and Swin, revealing that (i) the performance gain of Swin is mainly brought by a deepened backbone and relative positional encoding, (ii) the hierarchical design of Swin can be simplified into hierarchical patch embedding (proposed in this work), and (iii) other designs such as shifted-window attentions can be removed. By removing the unnecessary operations, we come up with a new architecture named HiViT (short for hierarchical ViT), which is simpler and more efficient than Swin yet further improves its performance on fully-supervised and self-supervised visual representation learning. In particular, after pre-training with a masked autoencoder (MAE) on ImageNet-1K, HiViT-B reports an 84.6% accuracy on ImageNet-1K classification, a 53.3% box AP on COCO detection, and a 52.8% mIoU on ADE20K segmentation, significantly surpassing the baseline. Code is available at https://github.com/zhangxiaosong18/hivit.
https://openreview.net/pdf/7835ef364a3e5f77397911e7f2f90b3aa3630f8b.pdf
A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics
https://openreview.net/forum?id=kIPyTuEZuAK
Qing Li,Siyuan Huang,Yining Hong,Yixin Zhu,Ying Nian Wu,Song-Chun Zhu
ICLR 2023,Top 25%
Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, HINT, to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with chain-of-thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependencies and semantics. Models exhibit a considerable gap from human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, chain-of-thought prompting exhibits impressive results and significantly boosts test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community working on systematic generalization.
https://openreview.net/pdf/09b973c9c84fd934195e0c087cb7af065e9c6829.pdf
Unsupervised Model Selection for Time Series Anomaly Detection
https://openreview.net/forum?id=gOZ_pKANaPW
Mononito Goswami,Cristian Ignacio Challu,Laurent Callot,Lenon Minorics,Andrey Kan
ICLR 2023,Top 25%
Anomaly detection in time series has a wide range of practical applications. While numerous anomaly detection methods have been proposed in the literature, a recent survey concluded that no single method is the most accurate across various datasets. To make matters worse, anomaly labels are scarce and rarely available in practice. The practical problem of selecting the most accurate model for a given dataset without labels has received little attention in the literature. This paper answers this question, i.e., given an unlabeled dataset and a set of candidate anomaly detectors, how can we select the most accurate model? To this end, we identify three classes of surrogate (unsupervised) metrics, namely, prediction error, model centrality, and performance on injected synthetic anomalies, and show that some metrics are highly correlated with standard supervised anomaly detection performance metrics such as the $F_1$ score, but to varying degrees. We formulate metric combination with multiple imperfect surrogate metrics as a robust rank aggregation problem. We then provide theoretical justification behind the proposed approach. Large-scale experiments on multiple real-world datasets demonstrate that our proposed unsupervised approach is as effective as selecting the most accurate model based on partially labeled data.
https://openreview.net/pdf/b9338f8e0cd4d78c188aa60e26ced6737232b2a8.pdf
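As a toy illustration of the rank-aggregation idea (a simple Borda-style average, not the robust aggregator the paper proposes), one can rank the candidate detectors under each surrogate metric and pick the detector with the best aggregate rank:

```python
import numpy as np
from scipy.stats import rankdata

# scores[i, j]: surrogate metric i's score for candidate detector j
# (toy numbers; higher is assumed better).
scores = np.array([[0.9, 0.4, 0.7],
                   [0.6, 0.5, 0.8],
                   [0.8, 0.3, 0.9]])
ranks = np.vstack([rankdata(-row) for row in scores])  # rank 1 = best under a metric
aggregate = ranks.mean(axis=0)                         # Borda-style aggregation
best_detector = int(np.argmin(aggregate))              # detector 2 for these scores
```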
AANG: Automating Auxiliary Learning
https://openreview.net/forum?id=vtVDI3w_BLL
Lucio M. Dery,Paul Michel,Mikhail Khodak,Graham Neubig,Ameet Talwalkar
ICLR 2023,Top 25%
Auxiliary objectives, supplementary learning signals introduced to aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning. Whilst much work has been done to formulate useful auxiliary objectives, their construction is still an art which proceeds by slow and tedious hand-design. Intuition for how and when these objectives improve end-task performance has also had limited theoretical backing. In this work, we present an approach for automatically generating a suite of auxiliary objectives. We achieve this by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure. Next, we theoretically formalize widely-held intuitions about how auxiliary learning improves generalization on the end-task. This leads us to a principled and efficient algorithm for searching the space of generated objectives to find those most useful to a specified end-task. With natural language processing (NLP) as our domain of study, we demonstrate that our automated auxiliary learning pipeline leads to strong improvements over competitive baselines across continued-training experiments on a pre-trained model on five NLP end-tasks.
https://openreview.net/pdf/89801dac56ce056d438ce9105f85d897747fa081.pdf
NeRN: Learning Neural Representations for Neural Networks
https://openreview.net/forum?id=9gfir3fSy3J
Maor Ashkenazi,Zohar Rimon,Ron Vainshtein,Shir Levi,Elad Richardson,Pinchas Mintz,Eran Treister
ICLR 2023,Top 25%
Neural Representations have recently been shown to effectively reconstruct a wide range of signals from 3D meshes and shapes to images and videos. We show that, when adapted correctly, neural representations can be used to directly represent the weights of a pre-trained convolutional neural network, resulting in a Neural Representation for Neural Networks (NeRN). Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network based on its position in the architecture, and optimize a predictor network to map coordinates to their corresponding weights. Similarly to the spatial smoothness of visual scenes, we show that incorporating a smoothness constraint over the original network's weights aids NeRN towards a better reconstruction. In addition, since slight perturbations in pre-trained model weights can result in a considerable accuracy loss, we employ techniques from the field of knowledge distillation to stabilize the learning process. We demonstrate the effectiveness of NeRN in reconstructing widely used architectures on CIFAR-10, CIFAR-100, and ImageNet. Finally, we present two applications using NeRN, demonstrating the capabilities of the learned representations.
https://openreview.net/pdf/7a5e2aeccac1ea354d122a24d739db89c51f2599.pdf
Formal Mathematics Statement Curriculum Learning
https://openreview.net/forum?id=-P7G-8dmSh4
Stanislas Polu,Jesse Michael Han,Kunhao Zheng,Mantas Baksys,Igor Babuschkin,Ilya Sutskever
ICLR 2023,Top 25%
We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that, at the same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search alone. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we surpass the previous state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads.
https://openreview.net/pdf/f13db9db8ab1cc1e6f7db9c1754c276c7c7601ed.pdf
Multifactor Sequential Disentanglement via Structured Koopman Autoencoders
https://openreview.net/forum?id=6fuPIe9tbnC
Nimrod Berman,Ilan Naiman,Omri Azencot
ICLR 2023,Top 25%
Disentangling complex data into its latent factors of variation is a fundamental task in representation learning. Existing work on sequential disentanglement mostly provides two-factor representations, i.e., it separates the data into time-varying and time-invariant factors. In contrast, we consider multifactor disentanglement in which multiple (more than two) semantically disentangled components are generated. Key to our approach is a strong inductive bias: we assume that the underlying dynamics can be represented linearly in the latent space. Under this assumption, it becomes natural to exploit the recently introduced Koopman autoencoder models. However, disentangled representations are not guaranteed in Koopman approaches, and thus we propose a novel spectral loss term which leads to structured Koopman matrices and disentanglement. Overall, we propose a new deep model that is simple, easy to code, fully unsupervised, and supports multifactor disentanglement. We showcase new disentangling abilities such as swapping individual static factors between characters, and an incremental swap of disentangled factors from the source to the target. Moreover, we evaluate our method extensively on standard two-factor benchmark tasks, where we significantly improve over competing unsupervised approaches, and we perform competitively in comparison to weakly- and self-supervised state-of-the-art approaches. The code is available at https://github.com/azencot-group/SKD.
https://openreview.net/pdf/80996ea72234008065b9f90cd4275bc159fa8565.pdf
Packed Ensembles for efficient uncertainty estimation
https://openreview.net/forum?id=XXTyv1zD9zD
Olivier Laurent,Adrien Lafage,Enzo Tartaglione,Geoffrey Daniel,Jean-marc Martinez,Andrei Bursuc,Gianni Franchi
ICLR 2023,Top 25%
Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection. However, hardware limitations of real-world systems constrain them to smaller ensembles and lower-capacity networks, significantly deteriorating their performance and properties. We introduce Packed-Ensembles (PE), a strategy to design and train lightweight structured ensembles by carefully modulating the dimension of their encoding space. We leverage grouped convolutions to parallelize the ensemble into a single shared backbone and forward pass, improving training and inference speeds. PE is designed to operate within the memory limits of a standard neural network. Our extensive research indicates that PE accurately preserves the properties of DE, such as diversity, and performs equally well in terms of accuracy, calibration, out-of-distribution detection, and robustness to distribution shift. We make our code available at https://github.com/ENSTA-U2IS/torch-uncertainty.
https://openreview.net/pdf/ca8af472cc5062b34c4e52f4fcb5b8591d4474c9.pdf
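The grouped-convolution trick at the heart of Packed-Ensembles can be illustrated in a few lines of PyTorch; this is a minimal sketch of the fused forward pass only (the actual PE design also modulates the encoding-space width with additional hyper-parameters):

```python
import torch
import torch.nn as nn

M = 4  # ensemble size
# M independent conv members fused into one layer via grouped convolution,
# so all members run in a single forward pass through a shared backbone:
packed = nn.Conv2d(3 * M, 16 * M, kernel_size=3, padding=1, groups=M)

x = torch.randn(8, 3, 32, 32)
x_rep = x.repeat(1, M, 1, 1)           # feed the same input to every member
y = packed(x_rep)                      # (8, 16*M, 32, 32)
members = y.view(8, M, 16, 32, 32)     # per-member feature maps, e.g., for averaging
```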
Hidden Markov Transformer for Simultaneous Machine Translation
https://openreview.net/forum?id=9y0HFvaAYD6
Shaolei Zhang,Yang Feng
ICLR 2023,Top 25%
Simultaneous machine translation (SiMT) outputs the target sequence while receiving the source sequence, and hence learning when to start translating each target token is the core challenge of the SiMT task. However, it is non-trivial to learn the optimal moment among the many possible moments of starting translating, as these moments are hidden inside the model and can only be supervised with the observed target sequence. In this paper, we propose a Hidden Markov Transformer (HMT), which treats the moments of starting translating as hidden events and the target sequence as the corresponding observed events, thereby organizing them as a hidden Markov model. HMT explicitly models multiple moments of starting translating as the candidate hidden events, and then selects one to generate the target token. During training, by maximizing the marginal likelihood of the target sequence over multiple moments of starting translating, HMT learns to start translating at the moments when target tokens can be generated more accurately. Experiments on multiple SiMT benchmarks show that HMT outperforms strong baselines and achieves state-of-the-art performance.
https://openreview.net/pdf/fcf9747a3df24a2f10acd861765126ce790b5424.pdf
Multi-domain image generation and translation with identifiability guarantees
https://openreview.net/forum?id=U2g8OGONA_V
Shaoan Xie,Lingjing Kong,Mingming Gong,Kun Zhang
ICLR 2023,Top 25%
Multi-domain image generation and unpaired image-to-image translation are two important and related computer vision problems. The common technique for the two tasks is learning a joint distribution from multiple marginal distributions. However, it is well known that there can be infinitely many joint distributions that derive the same marginals. Hence, it is necessary to formulate suitable constraints to address this highly ill-posed problem. Inspired by recent advances in nonlinear Independent Component Analysis (ICA) theory, we propose a new method to learn the joint distribution from the marginals by enforcing a specific type of minimal change across domains. We report one of the first results connecting multi-domain generative models to identifiability, and show why identifiability is essential and how to achieve it theoretically and practically. We apply our method to five multi-domain image generation and six image-to-image translation tasks. The superior performance of our model supports our theory and demonstrates the effectiveness of our method. The training code is available at https://github.com/Mid-Push/i-stylegan.
https://openreview.net/pdf/51f8278f376fd961504ae802f4d2f35deeb936d7.pdf
Continual evaluation for lifelong learning: Identifying the stability gap
https://openreview.net/forum?id=Zy350cRstc6
Matthias De Lange,Gido M van de Ven,Tinne Tuytelaars
ICLR 2023,Top 25%
Time-dependent data-generating distributions have proven to be difficult for gradient-based training of neural networks, as the greedy updates result in catastrophic forgetting of previously learned knowledge. Despite the progress in the field of continual learning to overcome this forgetting, we show that a set of common state-of-the-art methods still suffers from substantial forgetting upon starting to learn new tasks, although this forgetting is temporary and followed by a phase of performance recovery. We refer to this intriguing but potentially problematic phenomenon as the stability gap. The stability gap has likely remained under the radar due to the standard practice of evaluating continual learning models only after each task. Instead, we establish a framework for continual evaluation that uses per-iteration evaluation, and we define a new set of metrics to quantify worst-case performance. Empirically, we show that experience replay, constraint-based replay, knowledge distillation, and parameter regularization methods are all prone to the stability gap, and that the stability gap can be observed in class-, task-, and domain-incremental learning benchmarks. Additionally, a controlled experiment shows that the stability gap increases when tasks are more dissimilar. Finally, by disentangling gradients into plasticity and stability components, we propose a conceptual explanation for the stability gap.
https://openreview.net/pdf/913d2396e313fdd690c50b875b3c31efaa2e05a5.pdf
Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
https://openreview.net/forum?id=pxStyaf2oJ5
Zihao Xu,Guang-Yuan Hao,Hao He,Hao Wang
ICLR 2023,Top 25%
Previous studies have shown that leveraging "domain index" can significantly boost domain adaptation performance (Wang et al., 2020; Xu et al., 2022). However, such domain indices are not always available. To address this challenge, we first provide a formal definition of domain index from the probabilistic perspective, and then propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data, thereby providing additional insight on domain relations and improving domain adaptation performance. Our theoretical analysis shows that our adversarial variational Bayesian framework finds the optimal domain index at equilibrium. Empirical results on both synthetic and real data verify that our model can produce interpretable domain indices which enable us to achieve superior performance compared to state-of-the-art domain adaptation methods. Code is available at https://github.com/Wang-ML-Lab/VDI.
https://openreview.net/pdf/4340ecd2eb1d6cddf23d0257a4ab36cd01fba41e.pdf
One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks
https://openreview.net/forum?id=p7G8t5FVn2h
Shutong Wu,Sizhe Chen,Cihang Xie,Xiaolin Huang
ICLR 2023,Top 25%
Unlearnable examples (ULEs) aim to protect data from unauthorized use for training DNNs. Existing work adds $\ell_\infty$-bounded perturbations to the original samples so that the trained model generalizes poorly. Such perturbations, however, are easy to eliminate with adversarial training and data augmentations. In this paper, we resolve this problem from a novel perspective by perturbing only one pixel in each image. Interestingly, such a small modification can effectively degrade model accuracy to almost that of an untrained counterpart. Moreover, our produced One-Pixel Shortcut (OPS) cannot be erased by adversarial training or strong augmentations. To generate OPS, we perturb in-class images at the same position to the same target value, chosen to deviate most stably from all the original images. Since this generation is based only on images, OPS needs significantly less computation than previous methods using DNN generators. Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 by humans but drives the trained model to extremely low accuracy. Even under adversarial training, a ResNet-18 trained on CIFAR-10-S reaches only 10.61% accuracy, compared to 83.02% for the existing error-minimizing method.
https://openreview.net/pdf/b69561625d5ce4388db999c205fdb5a8b988725e.pdf
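A minimal numpy sketch of the perturbation pattern described above (illustrative only: the positions and target values here are random placeholders, whereas the paper searches for the position/value that deviates most stably from all the original images of a class):

```python
import numpy as np

def apply_ops(images, labels, positions, targets):
    """For every image of class c, overwrite one fixed pixel with the same
    position and target value shared across the whole class."""
    out = images.copy()
    for c, ((i, j), value) in enumerate(zip(positions, targets)):
        out[labels == c, i, j] = value
    return out

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32, 3)).astype(np.float32)
labels = rng.integers(0, 10, size=100)
positions = [tuple(rng.integers(0, 32, size=2)) for _ in range(10)]   # placeholder
targets = [rng.random(3).astype(np.float32) for _ in range(10)]       # placeholder
poisoned = apply_ops(images, labels, positions, targets)
```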
Deterministic training of generative autoencoders using invertible layers
https://openreview.net/forum?id=g8wBdhnstYz
Gianluigi Silvestri,Daan Roos,Luca Ambrogioni
ICLR 2023,Top 25%
In this work, we provide a deterministic alternative to the stochastic variational training of generative autoencoders. We refer to these new generative autoencoders as AutoEncoders within Flows (AEF), since the encoder and decoder are defined as affine layers of an overall invertible architecture. This results in a deterministic encoding of the data, as opposed to the stochastic encoding of VAEs. The paper introduces two related families of AEFs. The first family relies on a partition of the ambient space and is trained by exact maximum-likelihood. The second family exploits a deterministic expansion of the ambient space and is trained by maximizing the log-probability in this extended space. This latter case leaves complete freedom in the choice of encoder, decoder and prior architectures, making it a drop-in replacement for the training of existing VAEs and VAE-style models. We show that these AEFs can have strikingly higher performance than architecturally identical VAEs in terms of log-likelihood and sample quality, especially for low dimensional latent spaces. Importantly, we show that AEF samples are substantially sharper than VAE samples.
https://openreview.net/pdf/78c7fb939078a784f02006f7272c92a758e1e9c7.pdf
A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond
https://openreview.net/forum?id=aFzaXRImWE
LIN Yong,Renjie Pi,WEIZHONG ZHANG,Xiaobo Xia,Jiahui Gao,Xiao Zhou,Tongliang Liu,Bo Han
ICLR 2023,Top 25%
In this paper, we explore learning statistically consistent classifiers under label noise by estimating the noise transition matrix T. We first provide a holistic view of existing T-estimation methods, including those with or without anchor point assumptions. We unify them into the Minimum Geometric Envelope Operator (MGEO) framework, which tries to find the smallest T (in terms of a certain metric) that elicits a convex hull enclosing the posteriors of all the training data. Although MGEO methods show appealing theoretical properties and empirical results, we find them prone to failure when the noisy posterior estimation is imperfect, which is inevitable in practice. Specifically, we show that MGEO methods are inconsistent even with infinite samples if the noisy posterior is not estimated accurately. In view of this, we make the first effort to address this issue by proposing a novel T-estimation framework through the lens of bilevel optimization, termed RObust Bilevel OpTimization (ROBOT). ROBOT paves a new road beyond the MGEO framework and enjoys strong theoretical properties: identifiability, consistency, and finite-sample generalization guarantees. Notably, ROBOT neither requires perfect posterior estimation nor assumes the existence of anchor points. We further theoretically demonstrate that ROBOT is more robust in the cases where MGEO methods fail. Experimentally, our framework also shows superior performance across multiple benchmarks.
https://openreview.net/pdf/405fab9c74f731a957a6e9ee24c23a06a6809b77.pdf
Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle
https://openreview.net/forum?id=ZTMuZ68B1g
Jae Oh Woo
ICLR 2023,Top 25%
Acquiring labeled data is challenging in many machine learning applications with limited budgets. Active learning provides a procedure to select the most informative data points and improve data efficiency by reducing the cost of labeling. The info-max learning principle of maximizing mutual information, exemplified by BALD, has been successful and widely adopted in various active learning applications. However, this pool-based objective inherently introduces redundant selection and further requires a high computational cost for batch selection. In this paper, we design and propose a new uncertainty measure, Balanced Entropy Acquisition (BalEntAcq), which captures the information balance between the uncertainty of the underlying softmax probability and the label variable. To do this, we approximate each marginal distribution by a Beta distribution. The Beta approximation enables us to formulate BalEntAcq as a ratio between an augmented entropy and the marginalized joint entropy. The closed-form expression of BalEntAcq facilitates parallelization by estimating the two parameters of each marginal Beta distribution. BalEntAcq is a purely standalone measure that requires no relational computations with other data points. Nevertheless, BalEntAcq captures a well-diversified selection near the decision boundary with a margin, unlike other existing uncertainty measures such as BALD, Entropy, or Mean Standard Deviation (MeanSD). Finally, we demonstrate that our balanced entropy learning principle with BalEntAcq consistently outperforms well-known linearly scalable active learning methods, including the recently proposed PowerBALD, a simple but diversified version of BALD, in experiments on MNIST, CIFAR-100, SVHN, and TinyImageNet.
https://openreview.net/pdf/4919425fc00a999aa99ff64ba1e275ca945e9f6f.pdf
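The Beta approximation step in the abstract admits a simple moment-matching implementation; the sketch below is our assumption of how one such fit could look (not necessarily the authors' estimator), fitting Beta parameters to Monte Carlo samples of a marginal softmax probability:

```python
import numpy as np

def fit_beta(p_samples):
    """Moment-matched Beta(alpha, beta) for probability samples in (0, 1),
    e.g., MC-dropout samples of one class's softmax output."""
    m, v = p_samples.mean(), p_samples.var()
    common = m * (1.0 - m) / v - 1.0   # valid when v < m * (1 - m)
    return m * common, (1.0 - m) * common

rng = np.random.default_rng(0)
alpha, beta = fit_beta(rng.beta(2.0, 5.0, size=1000))  # recovers roughly (2, 5)
```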
Near-Optimal Adversarial Reinforcement Learning with Switching Costs
https://openreview.net/forum?id=i9ogGQHYbkY
Ming Shi,Yingbin Liang,Ness Shroff
ICLR 2023,Top 25%
Switching costs, which capture the cost of changing policies, are regarded as a critical metric in reinforcement learning (RL), in addition to the standard metric of losses (or rewards). However, existing studies on switching costs (with a coefficient that is strictly positive and independent of the time horizon) have mainly focused on static RL, where the loss distribution is assumed to be fixed during the learning process; practical scenarios where the loss distribution could be non-stationary or even adversarial are thus not considered. While adversarial RL better models these practical scenarios, an open problem remains: how to develop a provably efficient algorithm for adversarial RL with switching costs? This paper makes the first effort towards solving this problem. First, we provide a regret lower bound showing that the regret of any algorithm must be larger than $\tilde{\Omega}( ( H S A )^{1/3} T^{2/3} )$, where $T$, $S$, $A$ and $H$ are the number of episodes, states, actions and layers in each episode, respectively. Our lower bound indicates that, due to the fundamental challenge of switching costs in adversarial RL, the best regret achieved in static RL with switching costs (as well as in adversarial RL without switching costs), whose dependency on $T$ is $\tilde{O}(\sqrt{T})$, is no longer achievable. Moreover, we propose two novel switching-reduced algorithms whose regrets match our lower bound when the transition function is known, and match it within a small factor of $\tilde{O}( H^{1/3} )$ when the transition function is unknown. Our regret analysis demonstrates their near-optimal performance.
https://openreview.net/pdf/c49c1d1fb9288fba31814bc7cccd62fe483bf469.pdf
GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation
https://openreview.net/forum?id=IowKt5rYWsK
Chenhongyi Yang,Jiarui Xu,Shalini De Mello,Elliot J. Crowley,Xiaolong Wang
ICLR 2023,Top 25%
We present the Group Propagation Vision Transformer (GPViT): a novel non-hierarchical (i.e. non-pyramidal) transformer model designed for general visual recognition with high-resolution features. High-resolution features (or tokens) are a natural fit for tasks that involve perceiving fine-grained details such as detection and segmentation, but exchanging global information between these features is expensive in memory and computation because of the way self-attention scales. We provide a highly efficient alternative, the Group Propagation Block (GP Block), to exchange global information. In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation, where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned back to the image features through a transformer decoder. We evaluate GPViT on a variety of visual recognition tasks including image classification, semantic segmentation, object detection, and instance segmentation. Our method achieves significant performance gains over previous works across all tasks, especially on tasks that require high-resolution outputs; for example, our GPViT-L3 outperforms Swin Transformer-B by 2.0 mIoU on ADE20K semantic segmentation with only half as many parameters. Code and pre-trained models are available at https://github.com/ChenhongyiYang/GPViT.
https://openreview.net/pdf/9542365fc4380de76797ec856ed324fe9acf8f79.pdf
Neural Optimal Transport
https://openreview.net/forum?id=d8CBRlWNkqH
Alexander Korotin,Daniil Selikhanovych,Evgeny Burnaev
ICLR 2023,Top 25%
We present a novel neural-network-based algorithm to compute optimal transport maps and plans for strong and weak transport costs. To justify the usage of neural networks, we prove that they are universal approximators of transport plans between probability distributions. We evaluate the performance of our optimal transport algorithm on toy examples and on unpaired image-to-image translation.
https://openreview.net/pdf/b137a1ea32ff2b3c00faafef118b83c3223bc3eb.pdf
Dirichlet-based Uncertainty Calibration for Active Domain Adaptation
https://openreview.net/forum?id=4WM4cy42B81
Mixue Xie,Shuang Li,Rui Zhang,Chi Harold Liu
ICLR 2023,Top 25%
Active domain adaptation (DA) aims to maximally boost model adaptation on a new target domain by actively selecting limited target data to annotate, whereas traditional active learning methods may be less effective since they do not consider the domain shift issue. Although active DA methods address this by further proposing targetness to measure the representativeness of target domain characteristics, their predictive uncertainty is usually based on the predictions of deterministic models, which can easily be miscalibrated on data with distribution shift. Considering this, we propose a Dirichlet-based Uncertainty Calibration (DUC) approach for active DA, which simultaneously achieves the mitigation of miscalibration and the selection of informative target samples. Specifically, we place a Dirichlet prior on the prediction and interpret the prediction as a distribution on the probability simplex, rather than a point estimate like deterministic models. This enables us to consider all possible predictions, mitigating the miscalibration of unilateral prediction. Then a two-round selection strategy based on different uncertainty origins is designed to select target samples that are both representative of the target domain and conducive to discriminability. Extensive experiments on cross-domain image classification and semantic segmentation validate the superiority of DUC.
https://openreview.net/pdf/d263b9c4283973a09247e2e1effd05f8d9bd7652.pdf
Accurate Image Restoration with Attention Retractable Transformer
https://openreview.net/forum?id=IloMJ5rqfnt
Jiale Zhang,Yulun Zhang,Jinjin Gu,Yongbing Zhang,Linghe Kong,Xin Yuan
ICLR 2023,Top 25%
Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower the computational cost, existing works generally limit self-attention computation to non-overlapping windows. However, each group of tokens is then always drawn from a dense area of the image. This can be considered a dense attention strategy, since the interactions of tokens are restrained to dense regions. Obviously, this strategy results in restricted receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which presents both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, the alternating application of dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at https://github.com/gladzhang/ART.
https://openreview.net/pdf/aa567c400b76e1249b3186bd548cfc118ad0f339.pdf
Neural Episodic Control with State Abstraction
https://openreview.net/forum?id=C2fsSj3ZGiU
Zhuo Li,Derui Zhu,Yujing Hu,Xiaofei Xie,Lei Ma,YAN ZHENG,Yan Song,Yingfeng Chen,Jianjun Zhao
ICLR 2023,Top 25%
Existing Deep Reinforcement Learning (DRL) algorithms suffer from sample inefficiency. Generally, episodic control-based approaches are solutions that leverage highly rewarded past experiences to improve the sample efficiency of DRL algorithms. However, previous episodic control-based approaches fail to utilize the latent information in historical behaviors (e.g., state transitions, topological similarities, etc.) and lack scalability during DRL training. This work introduces Neural Episodic Control with State Abstraction (NECSA), a simple but effective state abstraction-based episodic control containing a more comprehensive episodic memory, a novel state evaluation, and a multi-step state analysis. We evaluate our approach on MuJoCo and Atari tasks in OpenAI Gym domains. The experimental results indicate that NECSA achieves higher sample efficiency than state-of-the-art episodic control-based approaches. Our data and code are available at the project website: https://sites.google.com/view/drl-necsa.
https://openreview.net/pdf/48c93f23de2f99bd3c38419a3f4bf1aba384c134.pdf
The Role of ImageNet Classes in Fréchet Inception Distance
https://openreview.net/forum?id=4oXTQ6m_ws8
Tuomas Kynkäänniemi,Tero Karras,Miika Aittala,Timo Aila,Jaakko Lehtinen
ICLR 2023,Top 25%
Fréchet Inception Distance (FID) is the primary metric for ranking models in data-driven generative modeling. While remarkably successful, the metric is known to sometimes disagree with human judgement. We investigate a root cause of these discrepancies, and visualize what FID "looks at" in generated images. We show that the feature space in which FID is (typically) computed is so close to the ImageNet classifications that aligning the histograms of Top-$N$ classifications between sets of generated and real images can reduce FID substantially, without actually improving the quality of results. Thus, we conclude that FID is prone to intentional or accidental distortions. As a practical example of an accidental distortion, we discuss a case where an ImageNet pre-trained FastGAN achieves an FID comparable to StyleGAN2, while being worse in terms of human evaluation.
https://openreview.net/pdf/0e0f4c80c56d0d57f3f758fa07e6f2226ddefea8.pdf
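A small diagnostic inspired by this finding (our sketch, not the paper's procedure) compares the Top-N classification histograms of real and generated sets; the paper shows that shrinking this kind of mismatch can lower FID without improving image quality:

```python
import numpy as np

def topn_histogram(logits, n_classes=1000, topn=1):
    """Normalized histogram of Top-N classifications over a set of images."""
    top = np.argsort(-logits, axis=1)[:, :topn]
    hist = np.bincount(top.ravel(), minlength=n_classes).astype(float)
    return hist / hist.sum()

# logits_real, logits_gen: (num_images, 1000) outputs of an ImageNet classifier
# (e.g., the same Inception network used for FID); random placeholders here.
rng = np.random.default_rng(0)
logits_real = rng.standard_normal((5000, 1000))
logits_gen = rng.standard_normal((5000, 1000))
# total-variation distance between the two class histograms
mismatch = 0.5 * np.abs(topn_histogram(logits_real) - topn_histogram(logits_gen)).sum()
```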
Diffusion Models Already Have A Semantic Latent Space
https://openreview.net/forum?id=pd1P2eUBVfq
Mingi Kwon,Jaeseok Jeong,Youngjung Uh
ICLR 2023,Top 25%
Diffusion models achieve outstanding generative performance in various domains. Despite their great success, they lack a semantic latent space, which is essential for controlling the generative process. To address the problem, we propose the asymmetric reverse process (Asyrp), which discovers a semantic latent space in frozen pretrained diffusion models. Our semantic latent space, named h-space, has nice properties for accommodating semantic image manipulation: homogeneity, linearity, robustness, and consistency across timesteps. In addition, we measure the editing strength and quality deficiency of the generative process at each timestep to provide a principled design of the process for versatility and quality improvements. Our method is applicable to various architectures (DDPM++, iDDPM, and ADM) and datasets (CelebA-HQ, AFHQ-dog, LSUN-church, LSUN-bedroom, and METFACES).
https://openreview.net/pdf/0d48a82a332a9c1fbc68f65e41a9b16eb9efa537.pdf
Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
https://openreview.net/forum?id=mRieQgMtNTQ
Yinhuai Wang,Jiwen Yu,Jian Zhang
ICLR 2023,Top 25%
Most existing Image Restoration (IR) models are task-specific and cannot be generalized to different degradation operators. In this work, we propose the Denoising Diffusion Null-Space Model (DDNM), a novel zero-shot framework for arbitrary linear IR problems, including but not limited to image super-resolution, colorization, inpainting, compressed sensing, and deblurring. DDNM only needs a pre-trained off-the-shelf diffusion model as the generative prior, without any extra training or network modifications. By refining only the null-space contents during the reverse diffusion process, it yields diverse results satisfying both data consistency and realness. We further propose an enhanced and robust version, dubbed DDNM+, to support noisy restoration and improve restoration quality on hard tasks. Our experiments on several IR tasks reveal that DDNM outperforms other state-of-the-art zero-shot IR methods. We also demonstrate that DDNM+ can solve complex real-world applications, e.g., old photo restoration.
https://openreview.net/pdf/e31de23cacc50c8cddef5c6e559520cdd3a62b0c.pdf
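The range-null space decomposition that DDNM builds on can be checked in a few lines of numpy. This is a noiseless sketch of the identity only, not the full diffusion sampler: any candidate's null-space content can be combined with $A^\dagger y$ while preserving exact data consistency.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16
A = rng.standard_normal((m, n))   # known linear degradation operator
x_true = rng.standard_normal(n)
y = A @ x_true                    # observed degraded measurement

A_pinv = np.linalg.pinv(A)
x0 = rng.standard_normal(n)       # any candidate (e.g., a diffusion sample)
# Range-null space decomposition: keep A^+ y (data consistency) and let the
# candidate contribute only its null-space component.
x_hat = A_pinv @ y + (np.eye(n) - A_pinv @ A) @ x0
assert np.allclose(A @ x_hat, y)  # data consistency holds exactly
```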
Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities
https://openreview.net/forum?id=CrfhZAsJDsZ
Samuel Lanthaler,Roberto Molinaro,Patrik Hadorn,Siddhartha Mishra
ICLR 2023,Top 25%
Discontinuous solutions arise in a large class of hyperbolic and advection-dominated PDEs. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step (e.g. DeepONets or PCA-Nets) fail to efficiently approximate the solution operator of such PDEs. In contrast, we show that certain methods employing a non-linear reconstruction mechanism can overcome these fundamental lower bounds and approximate the underlying operator efficiently. The latter class includes Fourier Neural Operators and a novel extension of DeepONets termed shift-DeepONets. Our theoretical findings are confirmed by empirical results for advection equations, inviscid Burgers’ equation and the compressible Euler equations of gas dynamics.
https://openreview.net/pdf/995dfba9492244e0ce9642782af6ec9a55816279.pdf
Learning Label Encodings for Deep Regression
https://openreview.net/forum?id=k60XE_b0Ix6
Deval Shah,Tor M. Aamodt
ICLR 2023,Top 25%
Deep regression networks are widely used to tackle the problem of predicting a continuous value for a given input. Task-specialized approaches for training regression networks have shown significant improvement over generic approaches, such as direct regression. More recently, a generic approach based on regression by binary classification using binary-encoded labels has shown significant improvement over direct regression. The space of label encodings for regression is large, yet automated approaches for finding a good label encoding for a given application have heretofore been lacking. This paper introduces Regularized Label Encoding Learning (RLEL) for end-to-end training of an entire network and its label encoding. RLEL provides a generic approach for tackling regression. Underlying RLEL is our observation that the search space of label encodings can be constrained and efficiently explored by using a continuous search space of real-valued label encodings combined with a regularization function designed to encourage encodings with certain properties. These properties balance the probability of classification error in individual bits against error-correction capability. Label encodings found by RLEL result in lower or comparable errors to manually designed label encodings. Applying RLEL results in $10.9\%$ and $12.4\%$ improvements in Mean Absolute Error (MAE) over direct regression and multiclass classification, respectively. Our evaluation demonstrates that RLEL can be combined with off-the-shelf feature extractors and is suitable across different architectures, datasets, and tasks. Code is available at https://github.com/ubc-aamodt-group/RLEL_regression.
https://openreview.net/pdf/3af8d03ffcf4536fc86a2416e751d3d4282af4d0.pdf
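For context, the fixed binary label encoding that RLEL generalizes (regression by binary classification) can be sketched as follows; RLEL instead learns real-valued encodings end-to-end, so this is only the baseline scheme, with hypothetical quantization choices:

```python
import numpy as np

def binary_encode(y, n_bits, y_min, y_max):
    """Quantize continuous targets into n_bits binary-coded classification labels."""
    q = np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)
    idx = np.round(q * (2 ** n_bits - 1)).astype(int)
    bits = (idx[:, None] >> np.arange(n_bits)) & 1   # least-significant bit first
    return bits.astype(np.float32)                   # one binary target per bit

y = np.array([0.1, 3.7, 9.9])
codes = binary_encode(y, n_bits=4, y_min=0.0, y_max=10.0)
# a regression network would then predict each bit with a binary classifier head
```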
Multi-skill Mobile Manipulation for Object Rearrangement
https://openreview.net/forum?id=Z3IClM_bzvP
Jiayuan Gu,Devendra Singh Chaplot,Hao Su,Jitendra Malik
ICLR 2023,Top 25%
We study a modular approach to tackle long-horizon mobile manipulation tasks for object rearrangement, which decomposes a full task into a sequence of subtasks. To tackle the entire task, prior work chains multiple stationary manipulation skills with a point-goal navigation skill, each learned individually on subtasks. Although more effective than monolithic end-to-end RL policies, this framework suffers from compounding errors in skill chaining, e.g., navigating to a bad location from which a stationary manipulation skill cannot reach its target. To this end, we propose that the manipulation skills should include mobility, giving them the flexibility to interact with the target object from multiple locations, and that the navigation skill should allow multiple end points that lead to successful manipulation. We operationalize these ideas by implementing mobile manipulation skills rather than stationary ones and by training the navigation skill with a region goal instead of a point goal. We evaluate our multi-skill mobile manipulation method M3 on three challenging long-horizon mobile manipulation tasks in the Home Assistant Benchmark (HAB), and show superior performance compared to the baselines.
https://openreview.net/pdf/826efb580419b89e9ce1db3a7c676c7010ebe04b.pdf
Single-shot General Hyper-parameter Optimization for Federated Learning
https://openreview.net/forum?id=3RhuF8foyPW
Yi Zhou,Parikshit Ram,Theodoros Salonidis,Nathalie Baracaldo,Horst Samulowitz,Heiko Ludwig
ICLR 2023,Top 25%
We address the problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO). We introduce Federated Loss SuRface Aggregation (FLoRA), a general FL-HPO solution framework that can address use cases with tabular data and any Machine Learning (ML) model, including gradient-boosting training algorithms, SVMs, and neural networks, thereby further expanding the scope of FL-HPO. FLoRA enables single-shot FL-HPO: identifying a single set of good hyper-parameters that are subsequently used in a single FL training run. Thus, it enables FL-HPO solutions with minimal additional communication overhead compared to FL training without HPO. Utilizing standard smoothness assumptions, we theoretically characterize the optimality gap of FLoRA for convex and non-convex loss functions, which explicitly accounts for the heterogeneous nature of the parties' local data distributions, a dominant characteristic of FL systems. Our empirical evaluation of FLoRA for multiple FL algorithms on seven OpenML datasets demonstrates significant model accuracy improvements over the baselines, and robustness to an increasing number of parties involved in FL-HPO training.
https://openreview.net/pdf/05e0c917ee8caab80ea9a21a831a3c9589442e99.pdf
Simplicial Embeddings in Self-Supervised Learning and Downstream Classification
https://openreview.net/forum?id=RWtGreRpovS
Samuel Lavoie,Christos Tsirigotis,Max Schwarzer,Ankit Vani,Michael Noukhovitch,Kenji Kawaguchi,Aaron Courville
ICLR 2023,Top 25%
Simplicial Embeddings (SEM) are representations learned through self-supervised learning (SSL), wherein a representation is projected into $L$ simplices of $V$ dimensions each using a softmax operation. This procedure conditions the representation onto a constrained space during pretraining and imparts an inductive bias for group sparsity. For downstream classification, we formally prove that the SEM representation leads to better generalization than an unnormalized representation. Furthermore, we empirically demonstrate that SSL methods trained with SEMs have improved generalization on natural image datasets such as CIFAR-100 and ImageNet. Finally, when used in a downstream classification task, we show that SEM features exhibit emergent semantic coherence where small groups of learned features are distinctly predictive of semantically-relevant classes.
https://openreview.net/pdf/fadb3c6b6bc3ec01fc5bf271cf08a4eb1e0f6fc1.pdf
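The projection described above is nearly a one-liner in PyTorch; a minimal sketch, assuming the input representation is an $L \times V$-dimensional vector as in the abstract:

```python
import torch
import torch.nn.functional as F

def simplicial_embedding(z, L, V):
    """Project a (B, L*V) representation onto L simplices of V dims each."""
    B = z.shape[0]
    return F.softmax(z.view(B, L, V), dim=-1).view(B, L * V)

z = torch.randn(8, 10 * 13)            # e.g., L=10 simplices of V=13 dims
sem = simplicial_embedding(z, L=10, V=13)
# each V-dim block is now nonnegative and sums to one:
assert torch.allclose(sem.view(8, 10, 13).sum(-1), torch.ones(8, 10))
```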
Vision Transformer Adapter for Dense Predictions
https://openreview.net/forum?id=plKu2GByCNW
Zhe Chen,Yuchen Duan,Wenhai Wang,Junjun He,Tong Lu,Jifeng Dai,Yu Qiao
ICLR 2023,Top 25%
This work investigates a simple yet powerful dense prediction task adapter for the Vision Transformer (ViT). Unlike recently advanced variants that incorporate vision-specific inductive biases into their architectures, the plain ViT suffers from inferior performance on dense predictions due to weak prior assumptions. To address this issue, we propose the ViT-Adapter, which allows plain ViT to achieve comparable performance to vision-specific transformers. Specifically, the backbone in our framework is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter is used to introduce the image-related inductive biases into the model, making it suitable for these tasks. We verify ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L yields state-of-the-art 60.9 box AP and 53.0 mask AP on COCO test-dev. We hope that the ViT-Adapter could serve as an alternative for vision-specific transformers and facilitate future research. Code and models will be released at https://github.com/czczup/ViT-Adapter.
https://openreview.net/pdf/a1a7cac48a3e0fa0d2a12b5a46c5b2463fe22a38.pdf
Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors
https://openreview.net/forum?id=hVrXUps3LFA
Jianfei Yang,Xiangyu Peng,Kai Wang,Zheng Zhu,Jiashi Feng,Lihua Xie,Yang You
ICLR 2023,Top 25%
Domain Adaptation of Black-box Predictors (DABP) aims to learn a model on an unlabeled target domain supervised by a black-box predictor trained on a source domain. It does not require access to either the source-domain data or the predictor parameters, thus addressing the data privacy and portability issues of standard domain adaptation methods. Existing DABP approaches mostly rely on knowledge distillation (KD) from the black-box predictor, i.e., training the model with its noisy target-domain predictions, which, however, inevitably introduces confirmation bias accumulated from the prediction noise and degrades performance. To mitigate such bias, we propose a new strategy, divide-to-adapt, which purifies cross-domain knowledge distillation through proper domain division. This is inspired by an observation we make for the first time in domain adaptation: the target domain usually contains easy-to-adapt and hard-to-adapt samples with different levels of domain discrepancy w.r.t. the source domain, and deep models tend to fit easy-to-adapt samples first. Leveraging easy-to-adapt samples with less noise can help KD alleviate the negative effect of prediction noise from black-box predictors. In this sense, the target domain can be divided into an easy-to-adapt subdomain with less noise and a hard-to-adapt subdomain at the early stage of training. The adaptation is then achieved by semi-supervised learning. We further reduce the distribution discrepancy between subdomains and develop a weak-strong augmentation strategy to filter the predictor errors progressively. As such, our method is a simple yet effective solution to reduce error accumulation in cross-domain knowledge distillation for DABP. Moreover, we prove that the target error of DABP is bounded by the noise ratio of the two subdomains, i.e., the confirmation bias, which provides the theoretical justification for our method. Extensive experiments demonstrate that our method achieves state-of-the-art results on all DABP benchmarks, outperforming the existing best approach by 7.0% on VisDA-17, and is even comparable with standard domain adaptation methods that use the source-domain data.
https://openreview.net/pdf/f6acdba3a448c8c49b38089c9fcca2175f862634.pdf
PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
https://openreview.net/forum?id=zqwryBoXYnh
Guangyi Chen,Weiran Yao,Xiangchen Song,Xinyue Li,Yongming Rao,Kun Zhang
ICLR 2023,Top 25%
With the increasing attention to large vision-language models such as CLIP, there has been a significant amount of effort dedicated to building efficient prompts. Unlike conventional methods that learn only a single prompt, we propose to learn multiple comprehensive prompts to describe diverse characteristics of categories, such as intrinsic attributes or extrinsic contexts. However, directly matching each prompt to the same visual feature is problematic, as it pushes the prompts to converge to one point. To solve this problem, we propose to apply optimal transport to match the vision and text modalities. Specifically, we first model the images and the categories with visual and textual feature sets. Then, we apply a two-stage optimization strategy to learn the prompts. In the inner loop, we optimize the optimal transport distance to align visual features and prompts via the Sinkhorn algorithm, while in the outer loop, we learn the prompts by this distance from the supervised data. Extensive experiments are conducted on the few-shot recognition task, and the improvement demonstrates the superiority of our method. The code is available at https://github.com/CHENGY12/PLOT.
https://openreview.net/pdf/ddf150416bd1ce46f5512042c2aaa162c8ad10b7.pdf
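To make the inner-loop alignment in the PLOT abstract concrete, here is a minimal NumPy sketch of the Sinkhorn iteration for entropic-regularized optimal transport between a set of visual features and a set of prompt features. The feature sizes, regularization strength, and iteration count are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropic-regularized OT with uniform marginals; returns the transport plan."""
    m, n = cost.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(m)
    for _ in range(n_iters):             # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy usage: 5 visual features vs. 4 prompt features, unit-normalized.
rng = np.random.default_rng(0)
vis = rng.normal(size=(5, 64)); vis /= np.linalg.norm(vis, axis=1, keepdims=True)
txt = rng.normal(size=(4, 64)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
cost = 1.0 - vis @ txt.T                 # cosine cost between modalities
plan = sinkhorn(cost)
ot_distance = float((plan * cost).sum()) # the distance the outer loop would minimize
```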
DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity
https://openreview.net/forum?id=VA1YpcNr7ul
https://openreview.net/forum?id=VA1YpcNr7ul
Alexander Tyurin,Peter Richtárik
ICLR 2023,Top 25%
We develop and analyze DASHA: a new family of methods for nonconvex distributed optimization problems. When the local functions at the nodes have a finite-sum or an expectation form, our new methods, DASHA-PAGE, DASHA-MVR and DASHA-SYNC-MVR, improve the theoretical oracle and communication complexity of the previous state-of-the-art method MARINA by Gorbunov et al. (2020). In particular, to achieve an $\varepsilon$-stationary point, and considering the random sparsifier Rand$K$ as an example, our methods compute the optimal number of gradients $\mathcal{O}\left(\frac{\sqrt{m}}{\varepsilon\sqrt{n}}\right)$ and $\mathcal{O}\left(\frac{\sigma}{\varepsilon^{3/2}n}\right)$ in finite-sum and expectation form cases, respectively, while maintaining the SOTA communication complexity $\mathcal{O}\left(\frac{d}{\varepsilon \sqrt{n}}\right)$. Furthermore, unlike MARINA, the new methods DASHA, DASHA-PAGE and DASHA-MVR send compressed vectors only, which makes them more practical for federated learning. We extend our results to the case when the functions satisfy the Polyak-Lojasiewicz condition. Finally, our theory is corroborated in practice: we see a significant improvement in experiments with nonconvex classification and training of deep learning models.
https://openreview.net/pdf/b48aa1d65ec14a3d0d8248dd7e332ea750bdee69.pdf
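As a companion to the DASHA abstract above, a minimal sketch of the Rand$K$ sparsifier it uses as its running example: keep $K$ uniformly chosen coordinates and rescale by $d/K$ so the compressor is unbiased. The scaling convention follows common usage in the compression literature; this is an illustration, not the authors' implementation.

```python
import numpy as np

def rand_k(x, k, rng):
    d = x.size
    idx = rng.choice(d, size=k, replace=False)  # the k coordinates to transmit
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)                 # rescale so that E[out] = x
    return out

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
est = np.mean([rand_k(g, 100, rng) for _ in range(2000)], axis=0)
print(np.abs(est - g).mean())                   # small: the compressor is unbiased
```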
LAVA: Data Valuation without Pre-Specified Learning Algorithms
https://openreview.net/forum?id=JJuP86nBl4q
https://openreview.net/forum?id=JJuP86nBl4q
Hoang Anh Just,Feiyang Kang,Tianhao Wang,Yi Zeng,Myeongseob Ko,Ming Jin,Ruoxi Jia
ICLR 2023,Top 25%
Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis, and the choice of learning algorithm is still undetermined at that point. Another side effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computational burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. $\textbf{(1)}$ We develop a proxy for the validation performance associated with a training set based on a non-conventional $\textit{class-wise}$ $\textit{Wasserstein distance}$ between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. $\textbf{(2)}$ We develop a novel method to value individual data based on the sensitivity analysis of the $\textit{class-wise}$ Wasserstein distance. Importantly, these values can be directly obtained $\textit{for free}$ from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. $\textbf{(3)}$ We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a $\textit{significant improvement}$ over the state-of-the-art performance while being $\textit{orders of magnitude faster}$.
https://openreview.net/pdf/8a4a49f404d172df902842781f95ef52ed70433e.pdf
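A rough sketch of a class-wise Wasserstein distance in the spirit of LAVA, using the POT library: compute an exact OT cost between same-class subsets of a training and a validation set, then average over classes. The uniform weights and squared-Euclidean ground cost are simplifying assumptions; LAVA's actual construction (and its sensitivity-based per-point values) is richer.

```python
import numpy as np
import ot  # POT: pip install pot

def classwise_wasserstein(X_tr, y_tr, X_val, y_val):
    dists = []
    for c in np.unique(y_val):
        A, B = X_tr[y_tr == c], X_val[y_val == c]
        a = np.full(len(A), 1.0 / len(A))   # uniform weights (simplification)
        b = np.full(len(B), 1.0 / len(B))
        M = ot.dist(A, B)                   # pairwise squared Euclidean costs
        dists.append(ot.emd2(a, b, M))      # exact OT cost for class c
    return float(np.mean(dists))

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(60, 8)), rng.integers(0, 3, 60)
X_val, y_val = rng.normal(size=(30, 8)), rng.integers(0, 3, 30)
print(classwise_wasserstein(X_tr, y_tr, X_val, y_val))
```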
Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets
https://openreview.net/forum?id=SEh5SfEQtqB
https://openreview.net/forum?id=SEh5SfEQtqB
Hayeon Lee,Sohyun An,Minseon Kim,Sung Ju Hwang
ICLR 2023,Top 25%
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture that obtains the best performance and/or efficiency when distilling knowledge from a given teacher model. Previous DaNAS methods have mostly tackled the search for a fixed dataset and teacher; they do not generalize well to a new task consisting of an unseen dataset and an unseen teacher, and thus need to perform a costly search for every new combination of dataset and teacher. For standard NAS tasks without KD, meta-learning-based, computationally efficient NAS methods have been proposed, which learn a generalized search process over multiple tasks (datasets) and transfer the knowledge obtained over those tasks to a new task. However, since they assume learning from scratch without KD from a teacher, they might not be ideal for DaNAS scenarios. To eliminate the excessive computational cost of DaNAS methods and the sub-optimality of rapid NAS methods, we propose a distillation-aware meta-accuracy prediction model, DaSS (Distillation-aware Student Search), which can predict a given architecture's final performance on a dataset when performing KD with a given teacher, without actually having to train it on the target task. The experimental results demonstrate that our proposed meta-prediction model successfully generalizes to multiple unseen datasets for DaNAS tasks, largely outperforming existing meta-NAS methods and rapid NAS baselines. Code is available at https://github.com/CownowAn/DaSS.
https://openreview.net/pdf/80703f68458650e155bc0dd7dd6c988a91fbc1be.pdf
Denoising Diffusion Error Correction Codes
https://openreview.net/forum?id=rLwC0_MG-4w
https://openreview.net/forum?id=rLwC0_MG-4w
Yoni Choukroun,Lior Wolf
ICLR 2023,Top 25%
Error correction code (ECC) is an integral part of the physical communication layer, ensuring reliable data transfer over noisy channels. Recently, neural decoders have demonstrated their advantage over classical decoding techniques. However, recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders. In this work, we propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths. Our framework models the forward channel corruption as a series of diffusion steps that can be reversed iteratively. Three contributions are made: (i) a diffusion process suitable for the decoding setting is introduced, (ii) the neural diffusion decoder is conditioned on the number of parity errors, which indicates the level of corruption at a given step, (iii) a line search procedure based on the code's syndrome obtains the optimal reverse diffusion step size. The proposed approach demonstrates the power of diffusion models for ECC and is able to achieve state-of-the-art accuracy, outperforming the other neural decoders by sizable margins, even for a single reverse diffusion step.
https://openreview.net/pdf/1ba7d8f5e235d93b8db4a40b633bb42c9494223e.pdf
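The abstract above conditions the neural decoder on the number of parity errors, which is readable from the code's syndrome. A minimal sketch for a binary linear code with parity-check matrix H; the (7,4) Hamming matrix below is a standard textbook example, not one of the codes evaluated in the paper.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def parity_errors(hard_bits):
    """Number of unsatisfied parity checks for a hard-decision word."""
    syndrome = H @ hard_bits % 2
    return int(syndrome.sum())

codeword = np.zeros(7, dtype=int)        # the all-zeros word is always a codeword
noisy = codeword.copy(); noisy[2] ^= 1   # flip one bit
print(parity_errors(codeword), parity_errors(noisy))  # 0, then > 0
```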
Exploring Active 3D Object Detection from a Generalization Perspective
https://openreview.net/forum?id=2RwXVje1rAh
https://openreview.net/forum?id=2RwXVje1rAh
Yadan Luo,Zhuoxiao Chen,Zijian Wang,Xin Yu,Zi Huang,Mahsa Baktashmotlagh
ICLR 2023,Top 25%
To alleviate the high annotation cost in LiDAR-based 3D object detection, active learning is a promising solution that learns to select only a small portion of unlabeled data to annotate, without compromising model performance. Our empirical study, however, suggests that mainstream uncertainty-based and diversity-based active learning policies are not effective when applied to the 3D detection task, as they fail to balance the trade-off between point cloud informativeness and box-level annotation costs. To overcome this limitation, we jointly investigate three novel criteria in our framework CRB for point cloud acquisition - label conciseness, feature representativeness and geometric balance - which hierarchically filters out the point clouds of redundant 3D bounding box labels, latent features and geometric characteristics (e.g., point cloud density) from the unlabeled sample pool and greedily selects informative ones with fewer objects to annotate. Our theoretical analysis demonstrates that the proposed criteria align the marginal distributions of the selected subset with the prior distributions of the unseen test set, and minimize the upper bound of the generalization error. To validate the effectiveness and applicability of CRB, we conduct extensive experiments on the two benchmark 3D object detection datasets of KITTI and Waymo and examine both one-stage (i.e., Second) and two-stage 3D detectors (i.e., PV-RCNN). Experiments show that the proposed approach outperforms existing active learning strategies and achieves fully supervised performance while requiring only $1\%$ of the bounding box annotations and $8\%$ of the point clouds.
https://openreview.net/pdf/cbdf54e075523d503dc1b31538bc70e029256b15.pdf
Neuro-Symbolic Procedural Planning with Commonsense Prompting
https://openreview.net/forum?id=iOc57X9KM54
https://openreview.net/forum?id=iOc57X9KM54
Yujie Lu,Weixi Feng,Wanrong Zhu,Wenda Xu,Xin Eric Wang,Miguel Eckstein,William Yang Wang
ICLR 2023,Top 25%
Procedural planning aims to implement complex high-level goals by decomposing them into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural planning knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, which impair the model's generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural planning knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from commonsense knowledge bases as a causal intervention toward the Structural Causal Model. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.
https://openreview.net/pdf/3af66a16e02e6ec05187d765b1d2da8cabae2719.pdf
Generative Augmented Flow Networks
https://openreview.net/forum?id=urF_CBK5XC0
https://openreview.net/forum?id=urF_CBK5XC0
Ling Pan,Dinghuai Zhang,Aaron Courville,Longbo Huang,Yoshua Bengio
ICLR 2023,Top 25%
The Generative Flow Network (GFlowNet) is a probabilistic framework in which an agent learns a stochastic policy for object generation such that the probability of generating an object is proportional to a given reward function. GFlowNets have been shown to discover high-quality and diverse solutions, compared to reward-maximizing reinforcement learning-based methods. Nonetheless, GFlowNets learn only from the rewards of terminal states, which can limit their applicability. Indeed, intermediate rewards play a critical role in learning; intrinsic motivation, for example, provides intermediate feedback even in particularly challenging sparse-reward tasks. Inspired by this, we propose Generative Augmented Flow Networks (GAFlowNets), a novel learning framework that incorporates intermediate rewards into GFlowNets. We specify intermediate rewards via intrinsic motivation to tackle the exploration problem in sparse-reward environments. GAFlowNets can leverage edge-based and state-based intrinsic rewards jointly to improve exploration. Based on extensive experiments on the GridWorld task, we demonstrate the effectiveness and efficiency of GAFlowNets in terms of convergence, performance, and diversity of solutions. We further show that GAFlowNets scale to the more complex, large-scale molecule generation domain, where they achieve consistent and significant performance improvements.
https://openreview.net/pdf/6f7969e92eef7ad5bb4561e7dbd141decf138128.pdf
The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning
https://openreview.net/forum?id=rvsbw2YthH_
https://openreview.net/forum?id=rvsbw2YthH_
Zhenmei Shi,Jiefeng Chen,Kunyang Li,Jayaram Raghuram,Xi Wu,Yingyu Liang,Somesh Jha
ICLR 2023,Top 25%
Pre-training representations (a.k.a. foundation models) has recently become a prevalent learning paradigm, where one first pre-trains a representation using large-scale unlabeled data, and then learns simple predictors on top of the representation using small labeled data from the downstream tasks. There are two key desiderata for the representation: label efficiency (the ability to learn an accurate classifier on top of the representation with a small amount of labeled data) and universality (usefulness across a wide range of downstream tasks). In this paper, we focus on one of the most popular instantiations of this paradigm: contrastive learning with linear probing, i.e., learning a linear predictor on the representation pre-trained by contrastive learning. We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously. Specifically, we provide an analysis using a theoretical data model and show that, while more diverse pre-training data result in more diverse features for different tasks (improving universality), they put less emphasis on task-specific features, giving rise to larger sample complexity for downstream supervised tasks and thus worse prediction performance. Guided by this analysis, we propose a contrastive regularization method to improve the trade-off. We validate our analysis and method empirically with systematic experiments using real-world datasets and foundation models.
https://openreview.net/pdf/043603199adc5b3a50a0bd4a9a36f0faea6f3b13.pdf
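A hedged PyTorch sketch of an InfoNCE-style contrastive term that could be added to a downstream fine-tuning loss, in the spirit of the paper's contrastive regularization; the exact regularizer in the paper may differ, and the temperature and weighting below are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (N, d) embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical usage during fine-tuning:
# total_loss = supervised_ce + lam * info_nce(f(aug1(x)), f(aug2(x)))
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z1, z2).item())
```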
CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations
https://openreview.net/forum?id=FUORz1tG8Og
https://openreview.net/forum?id=FUORz1tG8Og
Peter Yichen Chen,Jinxu Xiang,Dong Heon Cho,Yue Chang,G A Pershing,Henrique Teles Maia,Maurizio M Chiaramonte,Kevin Thomas Carlberg,Eitan Grinspun
ICLR 2023,Top 25%
The long runtime of high-fidelity partial differential equation (PDE) solvers makes them unsuitable for time-critical applications. We propose to accelerate PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches reduce the dimensionality of discretized vector fields, our continuous reduced-order modeling (CROM) approach builds a low-dimensional embedding of the continuous vector fields themselves, not their discretization. We represent this reduced manifold using continuously differentiable neural fields, which may train on any and all available numerical solutions of the continuous system, even when they are obtained using diverse methods or discretizations. We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79$\times$ and 49$\times$ better accuracy, and 39$\times$ and 132$\times$ smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109$\times$ and 89$\times$ wall-clock speedups over unreduced models on CPUs and GPUs, respectively. Videos and codes are available on the project page: https://crom-pde.github.io
https://openreview.net/pdf/0b88dbf1d08790a512a587e20d57a19dea822933.pdf
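A minimal PyTorch neural field of the kind CROM builds on: an MLP that maps a spatial coordinate plus a low-dimensional latent code (the reduced state) to the continuous field's value at that coordinate, so the field can be queried at arbitrary points regardless of the training discretization. Layer sizes and activations are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, coord_dim=2, latent_dim=16, out_dim=1, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, width), nn.SiLU(),
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, out_dim),
        )

    def forward(self, x, z):
        # x: (N, coord_dim) query points; z: (latent_dim,) reduced state.
        z = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z], dim=-1))

field = NeuralField()
u = field(torch.rand(100, 2), torch.zeros(16))  # evaluate at any 100 points
```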
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
https://openreview.net/forum?id=G2Q2Mh3avow
https://openreview.net/forum?id=G2Q2Mh3avow
Andy Zeng,Maria Attarian,brian ichter,Krzysztof Marcin Choromanski,Adrian Wong,Stefan Welker,Federico Tombari,Aveek Purohit,Michael S Ryoo,Vikas Sindhwani,Johnny Lee,Vincent Vanhoucke,Pete Florence
ICLR 2023,Top 25%
We investigate how multimodal prompt engineering can use language as the intermediate representation to combine complementary knowledge from different pretrained (potentially multimodal) language models for a variety of tasks. This approach is both distinct from and complementary to the dominant paradigm of joint multimodal training. It also recalls a traditional systems-building view as in classical NLP pipelines, but with prompting large pretrained multimodal models. We refer to these as Socratic Models (SMs): a modular class of systems in which multiple pretrained models may be composed zero-shot via multimodal-informed prompting to capture new multimodal capabilities, without additional finetuning. We show that these systems provide competitive state-of-the-art performance for zero-shot image captioning and video-to-text retrieval, and also enable new applications such as (i) answering free-form questions about egocentric video, (ii) engaging in multimodal assistive dialogue with people (e.g., for cooking recipes), and (iii) robot perception and planning. We hope this work provides (a) results for stronger zero-shot baseline performance with analysis also highlighting their limitations, (b) new perspectives for building multimodal systems powered by large pretrained models, and (c) practical application advantages in certain regimes limited by data scarcity, training compute, or model access.
https://openreview.net/pdf/92b6e024f8a9e971e8041aa14e06de2802245730.pdf
Multi-lingual Evaluation of Code Generation Models
https://openreview.net/forum?id=Bo7eeXm6An8
https://openreview.net/forum?id=Bo7eeXm6An8
Ben Athiwaratkun,Sanjay Krishna Gouda,Zijian Wang,Xiaopeng Li,Yuchen Tian,Ming Tan,Wasi Uddin Ahmad,Shiqi Wang,Qing Sun,Mingyue Shang,Sujan Kumar Gonugondla,Hantian Ding,Varun Kumar,Nathan Fulton,Arash Farahani,Siddhartha Jain,Robert Giaquinto,Haifeng Qian,Murali Krishna Ramanathan,Ramesh Nallapati,Baishakhi Ray,Parminder Bhatia,Sudipta Sengupta,Dan Roth,Bing Xiang
ICLR 2023,Top 25%
We present two new benchmarks, MBXP and Multilingual HumanEval, designed to evaluate code completion models in over 10 programming languages. These datasets are generated using a conversion framework that transpiles prompts and test cases from the original MBPP and HumanEval datasets into the corresponding data in the target language. Using these benchmarks, we assess the performance of code generation models in a multi-lingual fashion and discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities. In addition, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks.
https://openreview.net/pdf/c2ba4659e44c45ec67969ec9a74097a37184ad62.pdf
GRACE-C: Generalized Rate Agnostic Causal Estimation via Constraints
https://openreview.net/forum?id=B_pCIsX8KL_
https://openreview.net/forum?id=B_pCIsX8KL_
Mohammadsajad Abavisani,David Danks,Sergey Plis
ICLR 2023,Top 25%
Graphical structures estimated by causal learning algorithms from time series data can provide highly misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data. Existing algorithms provide limited resources to respond to this challenge, and so researchers must either use models that they know are likely misleading, or else forego causal learning entirely. Existing methods face up to four distinct shortfalls, as they might a) require that the difference between causal and measurement timescales be known; b) handle only a very small number of random variables when the timescale difference is unknown; c) apply only to pairs of variables (albeit with fewer assumptions about prior knowledge); or d) be unable to find a solution given statistical noise in the data. This paper addresses these challenges. We present an approach that combines constraint programming with both theoretical insights into the problem structure and prior information about admissible causal interactions to achieve a speedup of multiple orders of magnitude. The resulting system scales to significantly larger sets of random variables ($>100$) without knowledge of the timescale difference while maintaining theoretical guarantees. This method is also robust to edge misidentification and can use parametric connection strengths, while optionally finding the optimal among many possible solutions.
https://openreview.net/pdf/c2afd6ed2baef8ba0d051c56c6a88ec4fdfd0cd9.pdf
Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
https://openreview.net/forum?id=KwmPfARgOTD
https://openreview.net/forum?id=KwmPfARgOTD
Yi-Lun Liao,Tess Smidt
ICLR 2023,Top 25%
Despite their widespread success in various domains, Transformer networks have yet to perform well across datasets in the domain of 3D atomistic graphs such as molecules, even when 3D-related inductive biases like translational invariance and rotational equivariance are considered. In this paper, we demonstrate that Transformers can generalize well to 3D atomistic graphs and present Equiformer, a graph neural network leveraging the strength of Transformer architectures and incorporating SE(3)/E(3)-equivariant features based on irreducible representations (irreps). First, we propose a simple and effective architecture by only replacing original operations in Transformers with their equivariant counterparts and including tensor products. Using equivariant operations enables encoding equivariant information in channels of irreps features without complicating graph structures. With minimal modifications to Transformers, this architecture has already achieved strong empirical results. Second, we propose a novel attention mechanism called equivariant graph attention, which improves upon typical attention in Transformers by replacing dot product attention with multi-layer perceptron attention and including non-linear message passing. With these two innovations, Equiformer achieves results competitive with previous models on the QM9, MD17 and OC20 datasets.
https://openreview.net/pdf/adc86be91e22b350b3f22fb21d5124250509a935.pdf
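A toy sketch contrasting dot-product attention scores with the MLP attention the Equiformer abstract describes. This non-equivariant version is for intuition only; the real model computes scores from irreps features with equivariant operations, and the MLP width here is an arbitrary choice.

```python
import torch
import torch.nn as nn

d = 32
score_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.SiLU(), nn.Linear(d, 1))

def dot_product_scores(q, k):
    return (q @ k.t()) / d ** 0.5           # standard Transformer scores

def mlp_scores(q, k):
    n = q.size(0)
    pairs = torch.cat([q.unsqueeze(1).expand(n, n, d),
                       k.unsqueeze(0).expand(n, n, d)], dim=-1)
    return score_mlp(pairs).squeeze(-1)      # learned, non-linear pairwise scores

q, k = torch.randn(5, d), torch.randn(5, d)
attn = torch.softmax(mlp_scores(q, k), dim=-1)   # rows sum to 1, as usual
```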
MPCFORMER: FAST, PERFORMANT AND PRIVATE TRANSFORMER INFERENCE WITH MPC
https://openreview.net/forum?id=CWmvjOEhgH-
https://openreview.net/forum?id=CWmvjOEhgH-
Dacheng Li,Hongyi Wang,Rulin Shao,Han Guo,Eric Xing,Hao Zhang
ICLR 2023,Top 25%
Enabling private inference is crucial for many cloud inference services that are based on Transformer models. However, existing private inference solutions can increase the inference latency by more than 60$\times$ or significantly compromise the inference quality. In this paper, we design the framework MPCFORMER as a practical solution, using Secure Multi-Party Computation (MPC) and Knowledge Distillation (KD). Through extensive evaluations, we show that MPCFORMER significantly speeds up Transformer inference in MPC settings while achieving similar ML performance to the input model. On the IMDb dataset, it achieves similar performance to $\text{BERT}_\text{BASE}$, while being 5.3$\times$ faster. On the GLUE benchmark, it achieves 97% performance of $\text{BERT}_\text{BASE}$ with a 2.2$\times$ speedup. MPCFORMER remains effective with different trained Transformer weights such as $\text{ROBERTA}_\text{BASE}$ and larger models including $\text{BERT}_\text{LARGE}$. Code is available at https://github.com/MccRee177/MPCFormer.
https://openreview.net/pdf/f2f107f5dbed42ef3523a9abb2677e2c00c61c31.pdf
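The MPCFORMER abstract attributes its speedup to combining MPC with KD; a key ingredient in MPC-friendly Transformers is replacing nonlinearities that are expensive under secret sharing (GELU, softmax) with cheap polynomial surrogates. The quadratic surrogates below follow that general recipe and are stated as illustrative assumptions, not necessarily the paper's exact approximations.

```python
import torch

def quad_gelu(x):
    # Polynomial stand-in for GELU: additions and multiplications only,
    # which are the cheap operations in MPC protocols.
    return 0.125 * x ** 2 + 0.25 * x + 0.5

def quad_softmax(x, dim=-1):
    # Shifted square in place of exp; normalization keeps rows summing to 1.
    z = (x + 5.0) ** 2
    return z / z.sum(dim=dim, keepdim=True)

scores = torch.randn(2, 4)
print(quad_softmax(scores).sum(dim=-1))  # each row sums to 1, like softmax
```

Such surrogates change the function the network computes, which is why a KD stage against the original model is needed to recover accuracy.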
Disparate Impact in Differential Privacy from Gradient Misalignment
https://openreview.net/forum?id=qLOaeRvteqbx
https://openreview.net/forum?id=qLOaeRvteqbx
Maria S. Esipova,Atiyeh Ashari Ghomi,Yaqiao Luo,Jesse C Cresswell
ICLR 2023,Top 25%
As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.
https://openreview.net/pdf/6101c980f75602f74261af8068d5b04eb74e1476.pdf
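A minimal NumPy sketch of the per-example clipping step in DPSGD that the abstract identifies as the source of gradient misalignment: examples with large gradients (often those from minority groups) are scaled down more, rotating the averaged update away from their direction. Clip norm and noise multiplier are illustrative.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]      # large gradients shrink the most
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped), size=avg.shape)
    return avg + noise

# Two examples with very different gradient norms (think majority vs. minority):
grads = [np.array([0.1, 0.0]), np.array([0.0, 10.0])]
print(np.mean(grads, axis=0))  # unclipped average: dominated by [0, 10]
print(dpsgd_step(grads))       # clipped + noised: that direction is suppressed
```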
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second
https://openreview.net/forum?id=cp5PvcI6w8_
https://openreview.net/forum?id=cp5PvcI6w8_
Noah Hollmann,Samuel Müller,Katharina Eggensperger,Frank Hutter
ICLR 2023,Top 25%
We present TabPFN, a trained Transformer that can do supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning and is competitive with state-of-the-art classification methods. TabPFN is fully entailed in the weights of our network, which accepts training and test samples as a set-valued input and yields predictions for the entire test set in a single forward pass. TabPFN is a Prior-Data Fitted Network (PFN) and is trained offline once, to approximate Bayesian inference on synthetic datasets drawn from our prior. This prior incorporates ideas from causal reasoning: It entails a large space of structural causal models with a preference for simple structures. On the $18$ datasets in the OpenML-CC18 suite that contain up to 1000 training data points, up to 100 purely numerical features without missing values, and up to 10 classes, we show that our method clearly outperforms boosted trees and performs on par with complex state-of-the-art AutoML systems with up to $230\times$ speedup. This increases to a $5\,700\times$ speedup when using a GPU. We also validate these results on an additional 67 small numerical datasets from OpenML. We provide all our code, the trained TabPFN, an interactive browser demo and a Colab notebook at https://github.com/automl/TabPFN.
https://openreview.net/pdf/a14bada70718d8e2f05879f7f5dd162a0adbe28c.pdf
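Since the abstract advertises scikit-learn-style, tuning-free use, here is a hedged usage sketch of the released TabPFN. The classifier name and fit/predict interface follow the linked repository at the time of writing; constructor options are deliberately left at defaults, as they may differ across versions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # pip install tabpfn

# A small numerical dataset (569 samples, 30 features) within TabPFN's regime.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()   # no hyperparameter tuning needed
clf.fit(X_tr, y_tr)        # fitting is cheap; prediction is a single forward pass
print((clf.predict(X_te) == y_te).mean())
```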
Human Motion Diffusion Model
https://openreview.net/forum?id=SJ1kSyO2jwu
https://openreview.net/forum?id=SJ1kSyO2jwu
Guy Tevet,Sigal Raab,Brian Gordon,Yoni Shafir,Daniel Cohen-or,Amit Haim Bermano
ICLR 2023,Top 25%
Natural and expressive human motion generation is the holy grail of computer animation. It is a challenging task, due to the diversity of possible motion, human perceptual sensitivity to it, and the difficulty of accurately describing it. Therefore, current generative solutions are either low-quality or limited in expressiveness. Diffusion models are promising candidates for the human motion domain owing to their demonstrated generative capabilities in other domains and their many-to-many nature. In this paper, we introduce the Motion Diffusion Model (MDM), a carefully adapted classifier-free diffusion-based generative model for human motion data. MDM is transformer-based, combining insights from the motion generation literature. A notable design choice is that it predicts the sample itself rather than the noise at each step, which facilitates the use of established geometric losses on the locations and velocities of the motion, such as the foot contact loss. As we demonstrate, MDM is a generic approach, enabling different modes of conditioning and different generation tasks. We show that our model is trained with lightweight resources and yet achieves state-of-the-art results on leading benchmarks for text-to-motion, action-to-motion, and unconditioned motion generation.
https://openreview.net/pdf/f0e30bdff6d93fdd5a01526aaea18c2fec384fc0.pdf
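A schematic PyTorch training step under the design choice the MDM abstract highlights: the network predicts the clean sample x0 rather than the noise, so geometric losses (here a velocity term) can act on motion coordinates directly. The noise schedule, placeholder model, and tensor shapes below are illustrative assumptions, not MDM's actual components.

```python
import torch

def sample_prediction_step(model, x0, t, alpha_bar, w_geo=1.0):
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # standard forward diffusion
    x0_hat = model(x_t, t)                        # predict the sample, not the noise
    simple = ((x0_hat - x0) ** 2).mean()
    # A geometric loss applies directly because x0_hat lives in motion space:
    vel = ((x0_hat[:, 1:] - x0_hat[:, :-1]) - (x0[:, 1:] - x0[:, :-1])) ** 2
    return simple + w_geo * vel.mean()

alpha_bar = torch.linspace(0.99, 0.01, 100)
model = lambda x, t: x                            # placeholder network, shape check only
x0 = torch.randn(4, 60, 22 * 3)                   # (batch, frames, joints*xyz), illustrative
loss = sample_prediction_step(model, x0, torch.randint(0, 100, (4,)), alpha_bar)
```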
Visual Recognition with Deep Nearest Centroids
https://openreview.net/forum?id=CsKwavjr7A
https://openreview.net/forum?id=CsKwavjr7A
Wenguan Wang,Cheng Han,Tianfei Zhou,Dongfang Liu
ICLR 2023,Top 25%
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition, by revisiting Nearest Centroids, one of the most classic and simple classifiers. Current deep models learn the classifier in a fully parametric manner, ignoring the latent data structure and lacking simplicity and explainability. DNC instead conducts nonparametric, case-based reasoning; it utilizes sub-centroids of training samples to describe class distributions and clearly explains the classification as the proximity of test data and the class sub-centroids in the feature space. Due to the distance-based nature, the network output dimensionality is flexible, and all the learnable parameters are only for data embedding. That means all the knowledge learnt for ImageNet classification can be completely transferred for pixel recognition learning, under the ‘pre-training and fine-tuning’ paradigm. Apart from its nested simplicity and intuitive decision-making mechanism, DNC can even possess ad-hoc explainability when the sub-centroids are selected as actual training images that humans can view and inspect. Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel recognition (ADE20K, Cityscapes), with improved transparency and fewer learnable parameters, using various network architectures (ResNet, Swin) and segmentation models (FCN, DeepLabV3, Swin). We feel this work brings fundamental insights into related fields. Our code is available at https://github.com/ChengHan111/DNC.
https://openreview.net/pdf/cda95db26061bc6fad92b050d82b6ff54e19d475.pdf
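The core decision rule in the DNC abstract, classify by proximity to class centroids in feature space, reduces to a few lines. This tiny NumPy sketch uses one centroid per class for brevity; DNC maintains several sub-centroids per class and learns the embedding that produces the features.

```python
import numpy as np

def dnc_predict(features, centroids):
    # features: (N, d); centroids: (C, d). Negative squared distance acts as a logit.
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return (-d2).argmax(axis=1)

rng = np.random.default_rng(0)
centroids = rng.normal(size=(10, 64))            # 10 classes in a 64-d feature space
feats = centroids[3] + 0.01 * rng.normal(size=(5, 64))
print(dnc_predict(feats, centroids))             # -> all predicted as class 3
```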
Continuous PDE Dynamics Forecasting with Implicit Neural Representations
https://openreview.net/forum?id=B73niNjbPs
https://openreview.net/forum?id=B73niNjbPs
Yuan Yin,Matthieu Kirchmeyer,Jean-Yves Franceschi,Alain Rakotomamonjy,patrick gallinari
ICLR 2023,Top 25%
Effective data-driven PDE forecasting methods often rely on fixed spatial and / or temporal discretizations. This raises limitations in real-world applications like weather prediction where flexible extrapolation at arbitrary spatiotemporal locations is required. We address this problem by introducing a new data-driven approach, DINo, that models a PDE's flow with continuous-time dynamics of spatially continuous functions. This is achieved by embedding spatial observations independently of their discretization via Implicit Neural Representations in a small latent space temporally driven by a learned ODE. This separate and flexible treatment of time and space makes DINo the first data-driven model to combine the following advantages. It extrapolates at arbitrary spatial and temporal locations; it can learn from sparse irregular grids or manifolds; at test time, it generalizes to new grids or resolutions. DINo outperforms alternative neural PDE forecasters in a variety of challenging generalization scenarios on representative PDE systems.
https://openreview.net/pdf/7870a5e000f8b6cb07adaf5eaf38552eecc48b6a.pdf
No Reason for No Supervision: Improved Generalization in Supervised Models
https://openreview.net/forum?id=3Y5Uhf5KgGK
https://openreview.net/forum?id=3Y5Uhf5KgGK
Mert Bülent Sarıyıldız,Yannis Kalantidis,Karteek Alahari,Diane Larlus
ICLR 2023,Top 25%
We consider the problem of training a deep neural network on a given classification task, e.g., ImageNet-1K (IN1K), so that it excels at both the training task as well as at other (future) transfer tasks. These two seemingly contradictory properties impose a trade-off between improving the model’s generalization and maintaining its performance on the original task. Models trained with self-supervised learning tend to generalize better than their supervised counterparts for transfer learning; yet, they still lag behind supervised models on IN1K. In this paper, we propose a supervised learning setup that leverages the best of both worlds. We extensively analyze supervised training using multi-scale crops for data augmentation and an expendable projector head, and reveal that the design of the projector allows us to control the trade-off between performance on the training task and transferability. We further replace the last layer of class weights with class prototypes computed on the fly using a memory bank and derive two models: t-ReX that achieves a new state of the art for transfer learning and outperforms top methods such as DINO and PAWS on IN1K, and t-ReX* that matches the highly optimized RSB-A1 model on IN1K while performing better on transfer tasks. Code and pretrained models: https://europe.naverlabs.com/t-rex
https://openreview.net/pdf/ebca9fb533c934341267ac07467bc1bb652f422e.pdf
EVA3D: Compositional 3D Human Generation from 2D Image Collections
https://openreview.net/forum?id=g7U9jD_2CUr
https://openreview.net/forum?id=g7U9jD_2CUr
Fangzhou Hong,Zhaoxi Chen,Yushi LAN,Liang Pan,Ziwei Liu
ICLR 2023,Top 25%
Inverse graphics aims to recover 3D models from 2D observations. Utilizing differentiable rendering, recent 3D-aware generative models have shown impressive results of rigid object generation using 2D images. However, it remains challenging to generate articulated objects, like human bodies, due to their complexity and diversity in poses and appearances. In this work, we propose EVA3D, an unconditional 3D human generative model learned from 2D image collections only. EVA3D can sample 3D humans with detailed geometry and render high-quality images (up to 512x256) without bells and whistles (e.g., super resolution). At the core of EVA3D is a compositional human NeRF representation, which divides the human body into local parts. Each part is represented by an individual volume. This compositional representation enables 1) inherent human priors, 2) adaptive allocation of network parameters, 3) efficient training and rendering. Moreover, to accommodate the characteristics of sparse 2D human image collections (e.g., imbalanced pose distribution), we propose a pose-guided sampling strategy for better GAN learning. Extensive experiments validate that EVA3D achieves state-of-the-art 3D human generation performance regarding both geometry and texture quality. Notably, EVA3D demonstrates great potential and scalability to "inverse-graphics" diverse human bodies with a clean framework.
https://openreview.net/pdf/554f7af511783002653244a77c3f8ac31ae45c7c.pdf
Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction
https://openreview.net/forum?id=DSy8tP4WctmZ
https://openreview.net/forum?id=DSy8tP4WctmZ
Tong Wu,Jiaqi Wang,Xingang Pan,Xudong XU,Christian Theobalt,Ziwei Liu,Dahua Lin
ICLR 2023,Top 25%
Neural surface reconstruction aims to reconstruct accurate 3D surfaces based on multi-view images. Previous methods based on neural volume rendering mostly train a fully implicit model with MLPs, which typically require hours of training for a single scene. Recent efforts explore the explicit volumetric representation to accelerate the optimization via memorizing significant information with learnable voxel grids. However, existing voxel-based methods often struggle in reconstructing fine-grained geometry, even when combined with an SDF-based volume rendering scheme. We reveal that this is because 1) the voxel grids tend to break the color-geometry dependency that facilitates fine-geometry learning, and 2) the under-constrained voxel grids lack spatial coherence and are vulnerable to local minima. In this work, we present Voxurf, a voxel-based surface reconstruction approach that is both efficient and accurate. Voxurf addresses the aforementioned issues via several key designs, including 1) a two-stage training procedure that attains a coherent coarse shape and recovers fine details successively, 2) a dual color network that maintains color-geometry dependency, and 3) a hierarchical geometry feature to encourage information propagation across voxels. Extensive experiments show that Voxurf achieves high efficiency and high quality at the same time. On the DTU benchmark, Voxurf achieves higher reconstruction quality with a 20x training speedup compared to previous fully implicit methods. Our code is publicly available at https://github.com/wutong16/Voxurf/.
https://openreview.net/pdf/8385b49620c0d807cbd7621fce00e5a6302d95ef.pdf
Generating Diverse Cooperative Agents by Learning Incompatible Policies
https://openreview.net/forum?id=UkU05GOH7_6
https://openreview.net/forum?id=UkU05GOH7_6
Rujikorn Charakorn,Poramate Manoonpong,Nat Dilokthanakul
ICLR 2023,Top 25%
Training a robust cooperative agent requires diverse partner agents. However, obtaining those agents is difficult. Previous works aim to learn diverse behaviors by changing the state-action distribution of agents. But, without information about the task's goal, the diversified agents are not guided to find other important, albeit sub-optimal, solutions: the agents might learn only variations of the same solution. In this work, we propose to learn diverse behaviors via policy compatibility. Conceptually, policy compatibility measures whether policies of interest can coordinate effectively. We theoretically show that incompatible policies are not similar. Thus, policy compatibility—which has been used exclusively as a measure of robustness—can be used as a proxy for learning diverse behaviors. Then, we incorporate the proposed objective into a population-based training scheme to allow concurrent training of multiple agents. Additionally, we use state-action information to induce local variations of each policy. Empirically, the proposed method consistently discovers more solutions than baseline methods across various multi-goal cooperative environments. Finally, in multi-recipe Overcooked, we show that our method produces populations of behaviorally diverse agents, which enables generalist agents trained with such a population to be more robust. See our project page at https://bit.ly/marl-lipo
https://openreview.net/pdf/ac9e4f47a8a7afc2d31fe69575bb97700dd88071.pdf
PEER: A Collaborative Language Model
https://openreview.net/forum?id=KbYevcLjnc
https://openreview.net/forum?id=KbYevcLjnc
Timo Schick,Jane A. Yu,Zhengbao Jiang,Fabio Petroni,Patrick Lewis,Gautier Izacard,Qingfei You,Christoforos Nalmpantis,Edouard Grave,Sebastian Riedel
ICLR 2023,Top 25%
Textual content is often the output of a collaborative writing process: We start with an initial draft, ask for suggestions, and repeatedly make changes. Agnostic of this process, today’s language models are trained to generate only the final result. As a consequence, they lack several abilities crucial for collaborative writing: They are unable to update existing texts, difficult to control and incapable of verbally planning or explaining their actions. To address these shortcomings, we introduce PEER, a collaborative language model that is trained to imitate the entire writing process itself. PEER can write drafts, add suggestions, propose edits and provide explanations for its actions. Crucially, we train multiple instances of PEER able to infill various parts of the writing process, enabling the use of self-training techniques for increasing the quality, amount and diversity of training data. This unlocks PEER's full potential by making it applicable in domains for which no edit histories are available and improving its ability to follow instructions, to write useful comments, and to explain its actions. We show that PEER achieves strong performance across various domains and editing tasks.
https://openreview.net/pdf/e50eaf58c25ddb7ed0ec57bcc796b131b7046154.pdf
ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation
https://openreview.net/forum?id=GMRodZ8OlVr
https://openreview.net/forum?id=GMRodZ8OlVr
Zhengzhe Liu,Peng Dai,Ruihui Li,XIAOJUAN QI,Chi-Wing Fu
ICLR 2023,Top 25%
Text-guided 3D shape generation remains challenging due to the absence of a large paired text-shape dataset, the substantial semantic gap between these two modalities, and the structural complexity of 3D shapes. This paper presents a new framework called Image as Stepping Stone (ISS) for the task by introducing a 2D image as a stepping stone to connect the two modalities and to eliminate the need for paired text-shape data. Our key contribution is a two-stage feature-space-alignment approach that maps CLIP features to shapes by harnessing a pre-trained single-view reconstruction (SVR) model with multi-view supervision: first map the CLIP image feature to the detail-rich shape space in the SVR model, then map the CLIP text feature to the shape space and optimize the mapping by encouraging CLIP consistency between the input text and the rendered images. Further, we formulate a text-guided shape stylization module to dress up the output shapes with novel structures and textures. Beyond existing works on 3D shape generation from text, our new approach is general for creating shapes in a broad range of categories, without requiring paired text-shape data. Experimental results show that our approach outperforms the state of the art and our baselines in terms of fidelity and consistency with text. Further, our approach can stylize the generated shapes with both realistic and fantasy structures and textures. Codes are available at https://github.com/liuzhengzhe/ISS-Image-as-Stepping-Stone-for-Text-Guided-3D-Shape-Generation.
https://openreview.net/pdf/ed35cc59666dab7baf7735f5b066cdda25ff209c.pdf
STREET: A MULTI-TASK STRUCTURED REASONING AND EXPLANATION BENCHMARK
https://openreview.net/forum?id=1C_kSW1-k0
https://openreview.net/forum?id=1C_kSW1-k0
Danilo Neves Ribeiro,Shen Wang,Xiaofei Ma,Henghui Zhu,Rui Dong,Deguang Kong,Juliette Burger,Anjelica Ramos,zhiheng huang,William Yang Wang,George Karypis,Bing Xiang,Dan Roth
ICLR 2023,Top 25%
We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark. Unlike most existing question-answering (QA) datasets, we expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer. We perform extensive evaluation with popular language models such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models still lag behind human performance when producing such structured reasoning steps. We believe this work will provide a way for the community to better train and test systems on multi-step reasoning and explanations in natural language.
https://openreview.net/pdf/1b74d54ce93b0d4d1558e20806f96d4b743468ea.pdf
Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning
https://openreview.net/forum?id=y5W8tpojhtJ
https://openreview.net/forum?id=y5W8tpojhtJ
Yibo Yang,Haobo Yuan,Xiangtai Li,Zhouchen Lin,Philip Torr,Dacheng Tao
ICLR 2023,Top 25%
Few-shot class-incremental learning (FSCIL) has been a challenging problem as only a few training samples are accessible for each novel class in the new sessions. Finetuning the backbone or adjusting the classifier prototypes trained in the prior sessions would inevitably cause a misalignment between the feature and classifier of old classes, which explains the well-known catastrophic forgetting problem. In this paper, we deal with this misalignment dilemma in FSCIL inspired by the recently discovered phenomenon named neural collapse, which reveals that the last-layer features of the same class will collapse into a vertex, and the vertices of all classes are aligned with the classifier prototypes, which are formed as a simplex equiangular tight frame (ETF). This structure corresponds to an optimal geometric configuration for classification due to the maximized Fisher Discriminant Ratio. We propose a neural collapse inspired framework for FSCIL. A group of classifier prototypes are pre-assigned as a simplex ETF for the whole label space, including the base session and all the incremental sessions. During training, the classifier prototypes are not learnable, and we adopt a novel loss function that drives the features into their corresponding prototypes. Theoretical analysis shows that our method holds the neural collapse optimality and does not break the feature-classifier alignment in an incremental fashion. Experiments on the miniImageNet, CUB-200, and CIFAR-100 datasets demonstrate that our proposed framework outperforms state-of-the-art methods. Code address: https://github.com/NeuralCollapseApplications/FSCIL
https://openreview.net/pdf/0ffbc09764bcd3fed340d49d2404429bae5277f5.pdf
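The fixed classifier in the framework above is a simplex equiangular tight frame. The standard construction below (K prototypes in d >= K-1 dimensions) yields unit-norm vectors whose pairwise inner products are all -1/(K-1); the dimensions chosen here are illustrative, not the paper's settings.

```python
import numpy as np

def simplex_etf(K, d, seed=0):
    assert d >= K - 1
    # U: d x K matrix with orthonormal columns (any such U works).
    U, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(d, K)))
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M.T                                   # (K, d) classifier prototypes

P = simplex_etf(K=10, d=64)
G = P @ P.T                                      # Gram matrix of the prototypes
print(np.allclose(np.diag(G), 1.0))              # unit norms
print(np.allclose(G[0, 1], -1 / 9))              # equal angles: -1/(K-1)
```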
Neural Networks and the Chomsky Hierarchy
https://openreview.net/forum?id=WbxHAzkeQcn
https://openreview.net/forum?id=WbxHAzkeQcn
Gregoire Deletang,Anian Ruoss,Jordi Grau-Moya,Tim Genewein,Li Kevin Wenliang,Elliot Catt,Chris Cundy,Marcus Hutter,Shane Legg,Joel Veness,Pedro A Ortega
ICLR 2023,Top 25%
Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (20'910 models, 15 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results where even extensive amounts of data and training time never lead to any non-trivial generalization, despite models having sufficient capacity to fit the training data perfectly. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
https://openreview.net/pdf/e3f8464e2b508de864e993df3d7e0162aa25d7ff.pdf
Neural ePDOs: Spatially Adaptive Equivariant Partial Differential Operator Based Networks
https://openreview.net/forum?id=D1Iqfm7WTkk
https://openreview.net/forum?id=D1Iqfm7WTkk
Lingshen He,Yuxuan Chen,Zhengyang Shen,Yibo Yang,Zhouchen Lin
ICLR 2023,Top 25%
Endowing deep learning models with symmetry priors can lead to considerable performance improvements. As an interesting bridge between physics and deep learning, equivariant partial differential operators (PDOs) have recently drawn much attention from researchers. However, to ensure the PDOs' translation equivariance, previous works have to require the coefficient matrices to be constant and spatially shared for linearity, which can lead to sub-optimal feature learning at each position. In this work, we propose a novel nonlinear PDO scheme that is both spatially adaptive and translation equivariant. The coefficient matrices are obtained from local features through a generator rather than being spatially shared. Besides, we establish a new theory on incorporating more equivariance, such as rotations, for such PDOs. Based on our theoretical results, we efficiently implement the generator with an equivariant multilayer perceptron (EMLP). As such equivariant PDOs are generated by neural networks, we call them Neural ePDOs. In experiments, we show that our method can significantly improve on previous works with a smaller model size on various datasets. In particular, we achieve state-of-the-art performance on the MNIST-rot dataset with only half the parameters of the previous best model.
https://openreview.net/pdf/c4b5cb80999f0dac523cd50129e5c768bdbbcaf9.pdf
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
https://openreview.net/forum?id=NAQvF08TcyG
https://openreview.net/forum?id=NAQvF08TcyG
Rinon Gal,Yuval Alaluf,Yuval Atzmon,Or Patashnik,Amit Haim Bermano,Gal Chechik,Daniel Cohen-or
ICLR 2023,Top 25%
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn *our* cat into a painting, or imagine a new product based on *our* favorite toy? Here we present a simple approach that allows such creative freedom. Using only $3$-$5$ images of a user-provided concept, like an object or a style, we learn to represent it through new ``words" in the embedding space of a frozen text-to-image model. These ``words" can be composed into natural language sentences, guiding *personalized* creation in an intuitive way. Notably, we find evidence that a *single* word embedding is sufficient for capturing unique and varied concepts. We compare our approach to a wide range of baselines, and demonstrate that it can more faithfully portray the concepts across a range of applications and tasks. Our code, data and new words will be available.
https://openreview.net/pdf/dd5c5803a1a63bd0d148c2be26a9ee612d1615f8.pdf
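Textual inversion's core mechanic, stripped to a toy: freeze the generative model and optimize a single new "word" embedding so that the frozen model, conditioned on it, reproduces the user's concept. Everything below is a stand-in (a frozen linear layer for the generator, a target vector for the reconstruction objective); the real method optimizes the embedding through the frozen text-to-image model's denoising loss.

```python
import torch

torch.manual_seed(0)
d = 64
frozen = torch.nn.Linear(d, d)
frozen.requires_grad_(False)                  # stands in for the frozen text-to-image model
target = torch.randn(d)                       # stands in for "reconstruct the user's 3-5 images"

v_star = torch.randn(d, requires_grad=True)   # the single new "word" embedding
opt = torch.optim.Adam([v_star], lr=1e-2)
for _ in range(500):
    loss = ((frozen(v_star) - target) ** 2).mean()   # gradients reach only v_star
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())                            # decreases: the new word encodes the concept
```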
IS SYNTHETIC DATA FROM GENERATIVE MODELS READY FOR IMAGE RECOGNITION?
https://openreview.net/forum?id=nUmCcZ5RKF
https://openreview.net/forum?id=nUmCcZ5RKF
Ruifei He,Shuyang Sun,Xin Yu,Chuhui Xue,Wenqing Zhang,Philip Torr,Song Bai,XIAOJUAN QI
ICLR 2023,Top 25%
Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images. Though the results are astonishing to human eyes, how applicable these generated images are for recognition tasks remains under-explored. In this work, we extensively study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks, focusing on two perspectives: synthetic data for improving classification models in data-scarce settings (i.e., zero-shot and few-shot), and synthetic data for large-scale model pre-training for transfer learning. We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks. Code: https://github.com/CVMI-Lab/SyntheticData.
https://openreview.net/pdf/530478d6bc03dbd80ae4d0e00c93647edd522adc.pdf
MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction
https://openreview.net/forum?id=k7p_YAO7yE
https://openreview.net/forum?id=k7p_YAO7yE
Bencheng Liao,Shaoyu Chen,Xinggang Wang,Tianheng Cheng,Qian Zhang,Wenyu Liu,Chang Huang
ICLR 2023,Top 25%
High-definition (HD) maps provide abundant and precise environmental information about the driving scene, serving as a fundamental and indispensable component for planning in autonomous driving systems. We present MapTR, a structured end-to-end Transformer for efficient online vectorized HD map construction. We propose a unified permutation-equivalent modeling approach, i.e., modeling each map element as a point set with a group of equivalent permutations, which accurately describes the shape of a map element and stabilizes the learning process. We design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. MapTR achieves the best performance and efficiency with only camera input among existing vectorized map construction approaches on the nuScenes dataset. In particular, MapTR-nano runs at real-time inference speed ($25.1$ FPS) on an RTX 3090, $8\times$ faster than the existing state-of-the-art camera-based method while achieving $5.0$ higher mAP. Even compared with the existing state-of-the-art multi-modality method, MapTR-nano achieves $0.7$ higher mAP and $8\times$ faster inference speed, and MapTR-tiny achieves $13.5$ higher mAP and $3\times$ faster inference speed. Abundant qualitative results show that MapTR maintains stable and robust map construction quality in complex and varied driving scenes. MapTR is of great application value in autonomous driving. Code and more demos are available at https://github.com/hustvl/MapTR.
https://openreview.net/pdf/f0aa5f3818d2d071eed47bfd84263b7b217b437a.pdf
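The permutation-equivalent modeling in the MapTR abstract treats a map element as a point set with a group of equivalent orderings; for a closed polygon, for instance, cyclic shifts and reversal describe the same shape. A sketch of taking the loss as the minimum over that group; the L1 cost and the toy square are illustrative choices.

```python
import numpy as np

def equivalent_orderings(pts):
    for rev in (pts, pts[::-1]):             # both traversal directions
        for s in range(len(pts)):
            yield np.roll(rev, s, axis=0)    # every cyclic starting point

def permutation_equivalent_loss(pred, gt):
    return min(np.abs(pred - p).mean() for p in equivalent_orderings(gt))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = np.roll(square, 2, axis=0)         # same polygon, different start point
print(permutation_equivalent_loss(shifted, square))  # 0.0: orderings are equivalent
```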
Minimax Optimal Kernel Operator Learning via Multilevel Training
https://openreview.net/forum?id=zEn1BhaNYsC
https://openreview.net/forum?id=zEn1BhaNYsC
Jikai Jin,Yiping Lu,Jose Blanchet,Lexing Ying
ICLR 2023,Top 25%
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we study the statistical limit of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish the information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm and show that a regularization that learns the spectral components below the bias contour and ignores the ones above the variance contour can achieve the optimal learning rate. At the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. Based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
https://openreview.net/pdf/5133e05de4997ce07895732d586909263b656b92.pdf
Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling
https://openreview.net/forum?id=NRxydtWup1S
https://openreview.net/forum?id=NRxydtWup1S
Keyu Tian,Yi Jiang,qishuai diao,Chen Lin,Liwei Wang,Zehuan Yuan
ICLR 2023,Top 25%
We identify and overcome two key obstacles in extending the success of BERT-style pre-training, or masked image modeling, to convolutional networks (convnets): (i) convolution operation cannot handle irregular, randomly masked input images; (ii) the single-scale nature of BERT pre-training is inconsistent with convnet’s hierarchical structure. For (i), we treat unmasked pixels as sparse voxels of 3D point clouds and use sparse convolution to encode. This is the first use of sparse convolution for 2D masked modeling. For (ii), we develop a hierarchical decoder to reconstruct images from multi-scale encoded features. Our method, called Sparse masKed modeling (SparK), is general: it can be used directly on any convolutional model without backbone modifications. We validate it on both classical (ResNet) and modern (ConvNeXt) models: on three downstream tasks, it surpasses both state-of-the-art contrastive learning and transformer-based masked modeling by similarly large margins (around +1.0%). The improvements on object detection and instance segmentation are more significant (up to +3.5%), validating the strong transferability of features learned. We also find SparK’s favorable scaling behavior by observing more gains on larger networks. All of these findings support the promising future of generative pre-training on convnets. Both codes and pre-trained models have been released at https://github.com/keyu-tian/SparK.
https://openreview.net/pdf/1f583ce7b466371efb133c5c74c8283ffc7fb6f7.pdf
Quantifying and Mitigating the Impact of Label Errors on Model Disparity Metrics
https://openreview.net/forum?id=RUzSobdYy0V
https://openreview.net/forum?id=RUzSobdYy0V
Julius Adebayo,Melissa Hall,Bowen Yu,Bobbie Chern
ICLR 2023,Poster
Errors in labels obtained via human annotation adversely affect a trained model's performance. Existing approaches propose ways to mitigate the effect of label error on a model's downstream accuracy, yet little is known about its impact on a model's group-based disparity metrics\footnote{Group-based disparity metrics like subgroup calibration, false positive rate, false negative rate, equalized odds, and equal opportunity are more often known, colloquially, as \textit{fairness metrics} in the literature. We use the term group-based disparity metrics in this work.}. Here we study the effect of label error on a model's group-based disparity metrics like group calibration. We empirically characterize how varying levels of label error, in both training and test data, affect these disparity metrics. We find that group calibration and other metrics are sensitive to train-time and test-time label error---particularly for minority groups. For the same level of label error, the percentage change in group calibration error for the minority group is on average 1.5 times larger than the change for the majority group. Towards mitigating the impact of training-time label error, we present an approach to estimate how changing a single training input's label affects a model's group disparity metric on a test set. We empirically assess the proposed approach on a variety of datasets and find a 10-40\% improvement, compared to alternative approaches, in identifying training inputs that improve a model's disparity metric. The proposed approach can help surface training inputs that may need to be corrected for improving a model's group-based disparity metrics.
https://openreview.net/pdf/8fa4751c3b6bc13a0eefd3b9a9dd75dc9359f20f.pdf
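To make the sensitivity claim above concrete, here is a small numpy sketch that injects label flips and measures a per-group expected calibration error. The binning ECE and the synthetic data are illustrative; the paper's exact metrics and protocol may differ.

```python
# Illustrative experiment: label flips move group calibration error.
import numpy as np

def group_ece(probs, labels, groups, group_id, n_bins=10):
    """Expected calibration error restricted to one subgroup."""
    m = groups == group_id
    p, y = probs[m], labels[m]
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.any():
            ece += in_bin.mean() * abs(p[in_bin].mean() - y[in_bin].mean())
    return ece

rng = np.random.default_rng(0)
n = 5000
groups = (rng.random(n) < 0.2).astype(int)        # group 1 is the minority
labels = rng.integers(0, 2, n)
probs = np.clip(labels * 0.7 + rng.normal(0.15, 0.1, n), 0, 1)

flip = rng.random(n) < 0.1                        # inject 10% label error
noisy = np.where(flip, 1 - labels, labels)
for g in (0, 1):
    clean = group_ece(probs, labels, groups, g)
    dirty = group_ece(probs, noisy, groups, g)
    print(f"group {g}: ECE {clean:.3f} -> {dirty:.3f}")
```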
Factorized Fourier Neural Operators
https://openreview.net/forum?id=tmIiMPl4IPa
https://openreview.net/forum?id=tmIiMPl4IPa
Alasdair Tran,Alexander Mathews,Lexing Xie,Cheng Soon Ong
ICLR 2023,Poster
We propose the Factorized Fourier Neural Operator (F-FNO), a learning-based approach for simulating partial differential equations (PDEs). Starting from a recently proposed Fourier representation of flow fields, the F-FNO bridges the performance gap between pure machine learning approaches and the best numerical or hybrid solvers. This is achieved with new representations – separable spectral layers and improved residual connections – and a combination of training strategies such as the Markov assumption, Gaussian noise, and cosine learning rate decay. On several challenging benchmark PDEs on regular grids, structured meshes, and point clouds, the F-FNO can scale to deeper networks and outperform both the FNO and the geo-FNO, reducing the error by 83% on the Navier-Stokes problem, 31% on the elasticity problem, 57% on the airfoil flow problem, and 60% on the plastic forging problem. Compared to the state-of-the-art pseudo-spectral method, the F-FNO can take a step size that is an order of magnitude larger in time and achieve an order of magnitude speedup to produce the same solution quality.
https://openreview.net/pdf/c381fdf1b7600bdbaba7b4a98c1679006ec61c83.pdf
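A minimal sketch of what "separable spectral layers" could look like: mixing Fourier modes along each spatial axis with its own weight tensor rather than one joint weight, plus a residual connection. Shapes, mode truncation, and the residual form here are simplifying assumptions, not the official F-FNO code.

```python
# Sketch of a factorized (per-axis) 2D spectral layer.
import torch
import torch.nn as nn

class FactorizedSpectralLayer(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        scale = 1.0 / channels
        self.wx = nn.Parameter(scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))
        self.wy = nn.Parameter(scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))
        self.modes = modes

    def forward(self, u):                         # u: (batch, channels, H, W)
        m = self.modes
        u_ft = torch.fft.rfft2(u)                 # (batch, channels, H, W//2+1)
        out = torch.zeros_like(u_ft)
        # mix the lowest m modes, with a separate channel-mixing weight per axis
        out[:, :, :m, :m] = (
            torch.einsum("bixy,iox->boxy", u_ft[:, :, :m, :m], self.wx)
            + torch.einsum("bixy,ioy->boxy", u_ft[:, :, :m, :m], self.wy)
        )
        return torch.fft.irfft2(out, s=u.shape[-2:]) + u   # residual connection

layer = FactorizedSpectralLayer(channels=8, modes=12)
print(layer(torch.randn(4, 8, 64, 64)).shape)     # torch.Size([4, 8, 64, 64])
```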
DFPC: Data flow driven pruning of coupled channels without data.
https://openreview.net/forum?id=mhnHqRqcjYU
https://openreview.net/forum?id=mhnHqRqcjYU
Tanay Narshana,Chaitanya Murti,Chiranjib Bhattacharyya
ICLR 2023,Poster
Modern, multi-branched neural network architectures often possess complex interconnections between layers, which we call coupled channels (CCs). Structured pruning of CCs in these multi-branch networks is an under-researched problem, as most existing works are typically designed for pruning single-branch models like VGG-nets. While these methods yield accurate subnetworks, the improvements in inference times when applied to multi-branch networks are comparatively modest, as these methods do not prune CCs, which we observe contribute significantly to inference time. For instance, layers with CCs as input or output take more than 66% of the inference time in ResNet-50. Moreover, pruning in the data-free regime, where data is not used for pruning, is gaining traction owing to privacy concerns and computational costs associated with fine-tuning. Motivated by this, we study the problem of pruning CCs in the data-free regime. To facilitate the development of algorithms to prune CCs, we define Data Flow Couplings (DFCs) to enumerate the layers that constitute coupled connections and the associated transformation. Additionally, saliencies for pruning CCs cannot be gauged in isolation, as there may be discrepancies among the layerwise importance of CCs using conventional scoring strategies. This necessitates finding grouped saliencies to gauge the importance of all corresponding coupled elements in a network. We thus propose the Backwards Graph-based Saliency Computation (BGSC) algorithm, a data-free method that computes saliencies by estimating an upper bound to the reconstruction error of intermediate layers; we call this pruning strategy Data Flow driven Pruning of Coupled channels (DFPC). Finally, we show the efficacy of DFPC for models trained on standard datasets. Since we prune coupled channels, we achieve up to 1.66x improvements in inference time for ResNet-101 trained on CIFAR-10 with a 5% accuracy drop without fine-tuning. With access to the ImageNet training set, we achieve significant improvements over the data-free method and see an improvement of at least 47.1% in speedup for a 2.3% accuracy drop for ResNet-50 against our baselines.
https://openreview.net/pdf/a04d739740d3a54486c4a47bf7d26dd24b41732d.pdf
TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning
https://openreview.net/forum?id=sZI1Oj9KBKy
https://openreview.net/forum?id=sZI1Oj9KBKy
Chaitanya Murti,Tanay Narshana,Chiranjib Bhattacharyya
ICLR 2023,Poster
Achieving structured, data-free sparsity of deep neural networks (DNNs) remains an open area of research. In this work, we address the challenge of pruning filters without access to the original training set or loss function. We propose the discriminative filters hypothesis, that well-trained models possess discriminative filters, and any non-discriminative filters can be pruned without impacting the predictive performance of the classifier. Based on this hypothesis, we propose a new paradigm for pruning neural networks: distributional pruning, wherein we only require access to the distributions that generated the original datasets. We formalise and quantify the discriminating ability of filters through the total variation (TV) distance between the class-conditional distributions of the filter outputs. We present empirical results that, using this definition of discriminability, support our hypothesis on a variety of datasets and architectures. Next, we define the LDIFF score, a heuristic to quantify the extent to which a layer possesses a mixture of discriminative and non-discriminative filters. We empirically demonstrate that the LDIFF score is indicative of the performance of random pruning for a given layer, and thereby indicates the extent to which a layer may be pruned. Our main contribution is a novel one-shot pruning algorithm, called TVSPrune, that identifies non-discriminative filters for pruning. We extend this algorithm to IterTVSPrune, wherein we iteratively apply TVSPrune, thereby enabling us to achieve greater sparsity. Last, we demonstrate the efficacy of TVSPrune on a variety of datasets, and show that in some cases, we can prune up to 60% of parameters with only a 2% loss of accuracy without any fine-tuning of the model, beating the nearest baseline by almost 10%.
https://openreview.net/pdf/54b7911797398691422146138209e69d0674e5de.pdf
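The discriminability score above can be illustrated with a histogram estimate of the TV distance between class-conditional activation distributions of a filter. This is a toy sketch on synthetic activations; TVSPrune's exact estimator and thresholds may differ.

```python
# Sketch: score a filter by TV distance between its per-class activations.
import numpy as np

def tv_distance(a, b, bins=30):
    """Histogram estimate of TV distance between two 1-D samples."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    return 0.5 * np.abs(pa / pa.sum() - pb / pb.sum()).sum()

rng = np.random.default_rng(0)
# activations of two hypothetical filters on two classes
discriminative = (rng.normal(0, 1, 1000), rng.normal(2, 1, 1000))
uninformative = (rng.normal(0, 1, 1000), rng.normal(0, 1, 1000))
print("discriminative filter TV:", tv_distance(*discriminative))  # near 0.7
print("uninformative filter TV: ", tv_distance(*uninformative))   # near 0
# Filters with low TV across all class pairs are candidates for pruning.
```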
Finding Actual Descent Directions for Adversarial Training
https://openreview.net/forum?id=I3HCE7Ro78H
https://openreview.net/forum?id=I3HCE7Ro78H
Fabian Latorre,Igor Krawczuk,Leello Tadesse Dadi,Thomas Pethick,Volkan Cevher
ICLR 2023,Poster
Adversarial Training using a strong first-order adversary (PGD) is the gold standard for training Deep Neural Networks that are robust to adversarial examples. We show that, contrary to the general understanding of the method, the gradient at an optimal adversarial example may increase, rather than decrease, the adversarially robust loss. This holds independently of the learning rate. More precisely, we provide a counterexample to a corollary of Danskin's Theorem presented in the seminal paper of Madry et al. (2018), which states that a solution of the inner maximization problem can yield a descent direction for the adversarially robust loss. Based on a correct interpretation of Danskin's Theorem, we propose Danskin's Descent Direction (DDi) and we verify experimentally that it provides better directions than those obtained by a PGD adversary. Using the CIFAR10 dataset, we further provide a real-world example showing that our method achieves a steeper increase in robustness levels in the early stages of training, and is more stable than the PGD baseline. As a limitation, PGD training of ReLU+BatchNorm networks still performs better, but current theory is unable to explain this.
https://openreview.net/pdf/b2c8d8ffd230a816fdb5106370cd0dc65865737b.pdf
Learning Continuous Normalizing Flows For Faster Convergence To Target Distribution via Ascent Regularizations
https://openreview.net/forum?id=6iEoTr-jeB7
https://openreview.net/forum?id=6iEoTr-jeB7
Shuangshuang Chen,Sihao Ding,Yiannis Karayiannidis,Mårten Björkman
ICLR 2023,Poster
Normalizing flows (NFs) have been shown to be advantageous in modeling complex distributions and improving sampling efficiency for unbiased sampling. In this work, we propose a new class of continuous NFs, ascent continuous normalizing flows (ACNFs), that makes a base distribution converge faster to a target distribution. As solving such a flow is non-trivial and barely possible, we propose a practical implementation to learn flexibly parametric ACNFs via ascent regularization and apply it in two learning cases: maximum likelihood learning for density estimation and minimizing reverse KL divergence for unbiased sampling and variational inference. The learned ACNFs demonstrate faster convergence towards the target distributions, therefore, achieving better density estimations, unbiased sampling and variational approximation at lower computational costs. Furthermore, the flows are shown to stabilize themselves to mitigate performance deterioration and are less sensitive to the choice of training flow length $T$.
https://openreview.net/pdf/fd0e07fc837555ae3a7254d18b51bd091b998332.pdf
Softened Symbol Grounding for Neuro-symbolic Systems
https://openreview.net/forum?id=HTJE5Krui0g
https://openreview.net/forum?id=HTJE5Krui0g
Zenan Li,Yuan Yao,Taolue Chen,Jingwei Xu,Chun Cao,Xiaoxing Ma,Jian Lü
ICLR 2023,Poster
Neuro-symbolic learning generally consists of two separate worlds, i.e., neural network training and symbolic constraint solving, whose success hinges on symbol grounding, a fundamental problem in AI. This paper presents a novel, softened symbol grounding process, bridging the gap between the two worlds, and resulting in an effective and efficient neuro-symbolic learning framework. Technically, the framework features (1) modeling of symbol solution states as a Boltzmann distribution, which avoids expensive state searching and facilitates mutually beneficial interactions between network training and symbolic reasoning; (2) a new MCMC technique leveraging projection and SMT solvers, which efficiently samples from disconnected symbol solution spaces; (3) an annealing mechanism that can escape from sub-optimal symbol groundings. Experiments with three representative neuro-symbolic learning tasks demonstrate that, owing to its superior symbol grounding capability, our framework successfully solves problems well beyond the frontier of the existing proposals.
https://openreview.net/pdf/4c780471a39291acbdf144086610ee0081f60947.pdf
Mini-batch $k$-means terminates within $O(d/\epsilon)$ iterations
https://openreview.net/forum?id=jREF4bkfi_S
https://openreview.net/forum?id=jREF4bkfi_S
Gregory Schwartzman
ICLR 2023,Poster
We answer the question: "Does \emph{local} progress (on batches) imply \emph{global} progress (on the entire dataset) for mini-batch $k$-means?". Specifically, we consider mini-batch $k$-means which terminates only when the improvement in the quality of the clustering on the sampled batch is below some threshold. Although at first glance it appears that this algorithm might execute forever, we answer the above question in the affirmative and show that if the batch is of size $\tilde{\Omega}((d/\epsilon)^2)$, it must terminate within $O(d/\epsilon)$ iterations with high probability, where $d$ is the dimension of the input, and $\epsilon$ is a threshold parameter for termination. This is true \emph{regardless} of how the centers are initialized. When the algorithm is initialized with the $k$-means++ initialization scheme, it achieves an approximation ratio of $O(\log k)$ (the same as the full-batch version). Finally, we show the applicability of our results to the mini-batch $k$-means algorithm implemented in the scikit-learn (sklearn) python library.
https://openreview.net/pdf/5a52186b24476b8d4da37309da8a8f4682166127.pdf
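The termination rule analyzed above is easy to state in code: stop as soon as the improvement in the batch clustering cost drops below the threshold $\epsilon$. A minimal sketch follows; the running-mean center update mirrors the standard mini-batch scheme, while the initialization and step sizes are simplified relative to scikit-learn's `MiniBatchKMeans`.

```python
# Sketch of mini-batch k-means with a local-progress termination rule.
import numpy as np

def minibatch_kmeans(X, k, batch_size, eps, rng):
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    counts = np.zeros(k)
    prev_cost = np.inf
    while True:
        batch = X[rng.choice(len(X), batch_size, replace=False)]
        d2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        cost = d2[np.arange(len(batch)), assign].mean()
        if prev_cost - cost < eps:          # local progress below threshold
            return centers
        prev_cost = cost
        for j in range(k):                  # per-center running-mean update
            pts = batch[assign == j]
            if len(pts):
                counts[j] += len(pts)
                centers[j] += (len(pts) / counts[j]) * (pts.mean(0) - centers[j])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (500, 2)) for c in (0, 3, 6)])
print(minibatch_kmeans(X, k=3, batch_size=256, eps=1e-4, rng=rng))
```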
Learning Uncertainty for Unknown Domains with Zero-Target-Assumption
https://openreview.net/forum?id=pWVASryOyFw
https://openreview.net/forum?id=pWVASryOyFw
Yu Yu,Hassan Sajjad,Jia Xu
ICLR 2023,Poster
We introduce our Maximum-Entropy Rewarded Reinforcement Learning (MERRL) framework that selects training data for more accurate Natural Language Processing (NLP). Because conventional data selection methods select training samples based on the test domain knowledge and not on real life data, they frequently fail in unknown domains like patents and Twitter. Our approach selects training samples that maximize information uncertainty measured by entropy, including observation entropy like empirical Shannon entropy, Min-entropy, R\'enyi entropy, and prediction entropy using mutual information, to cover more possible queries that may appear in unknown worlds. Our MERRL, using regularized A2C and SAC, achieves a perplexity reduction of up to 99.7 points (43.4\% relative) in language modeling, an accuracy gain of up to 25.0 points (40.0\% relative) in sentiment analysis, and an F1 gain of 5.0 points (30.8\% relative) in named entity recognition over various domains, demonstrating strong generalization power on unknown test sets.
https://openreview.net/pdf/51a6f57de280cd08d584ebf7d65e42f4a9832852.pdf
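One of the uncertainty signals named above, empirical Shannon entropy, is simple to compute per sample. Here is a toy sketch that ranks candidate sentences by token entropy; the scoring and the corpus are illustrative, and the RL selection policy itself is omitted.

```python
# Sketch: rank candidate training sentences by empirical Shannon entropy.
import math
from collections import Counter

def shannon_entropy(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

corpus = [
    "the cat sat on the mat",
    "patent claims recite a method comprising a step of receiving data",
    "lol lol lol lol lol",
]
for s in sorted(corpus, key=lambda s: shannon_entropy(s.split()), reverse=True):
    print(f"{shannon_entropy(s.split()):.2f}  {s}")
# High-entropy samples cover more distinct tokens and, under the argument
# above, better hedge against queries from unknown domains.
```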
Transformer-based model for symbolic regression via joint supervised learning
https://openreview.net/forum?id=ULzyv9M1j5
https://openreview.net/forum?id=ULzyv9M1j5
Wenqiang Li,Weijun Li,Linjun Sun,Min Wu,Lina Yu,Jingyi Liu,Yanjie Li,Songsong Tian
ICLR 2023,Poster
Symbolic regression (SR) is an important technique for discovering hidden mathematical expressions from observed data. Transformer-based approaches have been widely used for machine translation due to their high performance, and have recently attracted interest for SR. They input the data points, then output the expression skeleton, and finally optimize the coefficients. However, recent transformer-based methods for SR focus on large-scale training data and ignore an ill-posed problem: the lack of sufficient supervision, i.e., completely different expressions can receive the same supervision because they share the same skeleton, which makes it challenging to deal with data that may come from the same expression skeleton but with different coefficients. Therefore, we present a transformer-based model for SR with the ability to alleviate this problem. Specifically, we leverage a feature extractor based on pure residual MLP networks to obtain more information about data points. Furthermore, the core idea is a joint learning mechanism combining supervised contrastive learning, which makes features of data points from expressions with the same skeleton more similar, so as to effectively alleviate the ill-posed problem. The benchmark results show that the proposed method achieves a skeleton recovery rate up to 25% higher than that of typical transformer-based methods. Moreover, our method outperforms state-of-the-art SR methods based on reinforcement learning and genetic programming in terms of the coefficient of determination ($R^2$).
https://openreview.net/pdf/9f75235bc383f592ff6dce2f44b927b805abc762.pdf
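The supervised contrastive idea above can be sketched as a standard SupCon-style loss in which the "class" of a data-point encoding is its skeleton id, so encodings of datasets generated from the same skeleton are pulled together. The loss form below is the generic supervised contrastive loss under that assumption, not necessarily the paper's exact joint objective.

```python
# Sketch: supervised contrastive loss with skeleton ids as labels.
import torch
import torch.nn.functional as F

def supcon_loss(z, skeleton_ids, temperature=0.1):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # (n, n) similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    pos = (skeleton_ids[:, None] == skeleton_ids[None, :]) & ~eye
    logits = sim.masked_fill(eye, float("-inf"))        # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability over each anchor's positives
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -(per_anchor[pos.any(1)]).mean()

z = torch.randn(8, 64, requires_grad=True)              # encoder outputs
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])            # skeleton classes
loss = supcon_loss(z, ids)
loss.backward()
print(float(loss))
```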
QAID: Question Answering Inspired Few-shot Intent Detection
https://openreview.net/forum?id=gNI4_85Cyve
https://openreview.net/forum?id=gNI4_85Cyve
Asaf Yehudai,Matan Vetzler,Yosi Mass,Koren Lazar,Doron Cohen,Boaz Carmeli
ICLR 2023,Poster
Intent detection with semantically similar fine-grained intents is a challenging task. To address it, we reformulate intent detection as a question-answering retrieval task by treating utterances and intent names as questions and answers. To that end, we utilize a question-answering retrieval architecture and adopt a two-stage training scheme with batch contrastive loss. In the pre-training stage, we improve query representations through self-supervised training. Then, in the fine-tuning stage, we increase contextualized token-level similarity scores between queries and answers from the same intent. Our results on three few-shot intent detection benchmarks achieve state-of-the-art performance.
https://openreview.net/pdf/be84c220209d03546af019e5ae2253495baa3fb9.pdf
Solving stochastic weak Minty variational inequalities without increasing batch size
https://openreview.net/forum?id=ejR4E1jaH9k
https://openreview.net/forum?id=ejR4E1jaH9k
Thomas Pethick,Olivier Fercoq,Puya Latafat,Panagiotis Patrinos,Volkan Cevher
ICLR 2023,Poster
This paper introduces a family of stochastic extragradient-type algorithms for a class of nonconvex-nonconcave problems characterized by the weak Minty variational inequality (MVI). Unlike existing results on extragradient methods in the monotone setting, employing diminishing stepsizes is no longer possible in the weak MVI setting. This has led to approaches such as increasing batch sizes per iteration, which can, however, be prohibitively expensive. In contrast, our proposed method involves two stepsizes and only requires one additional oracle evaluation per iteration. We show that it is possible to keep one stepsize fixed while only the second stepsize is taken to be diminishing, making the method interesting even in the monotone setting. Almost sure convergence is established, and we provide a unified analysis for this family of schemes which contains a nonlinear generalization of the celebrated primal-dual hybrid gradient algorithm.
https://openreview.net/pdf/ccf6939924d9e260d3e36c3d6454d5db89ad6027.pdf
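The two-stepsize structure described above is easy to picture on a toy problem. Below is a sketch of a stochastic extragradient step with a fixed extrapolation stepsize `alpha` and a diminishing update stepsize `beta`, on the bilinear saddle problem min_x max_y xy; the operator, noise model, and schedules are illustrative, and the paper's schemes and conditions are more general.

```python
# Sketch: stochastic extragradient with one fixed and one diminishing stepsize.
import numpy as np

rng = np.random.default_rng(0)

def F(z, noise=0.1):
    """Stochastic operator for min_x max_y x*y: F(x, y) = (y, -x) + noise."""
    x, y = z
    return np.array([y, -x]) + noise * rng.normal(size=2)

z = np.array([1.0, 1.0])
alpha = 0.3                                  # fixed extrapolation stepsize
for t in range(1, 2001):
    beta = 0.5 / np.sqrt(t)                  # only this stepsize diminishes
    z_bar = z - alpha * F(z)                 # extrapolation (one extra oracle call)
    z = z - beta * F(z_bar)                  # update from the extrapolated point
print(z)                                     # approaches the solution (0, 0)
```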
Curriculum-based Co-design of Morphology and Control of Voxel-based Soft Robots
https://openreview.net/forum?id=r9fX833CsuN
https://openreview.net/forum?id=r9fX833CsuN
Yuxing Wang,Shuang Wu,Haobo Fu,QIANG FU,Tiantian Zhang,Yongzhe Chang,Xueqian Wang
ICLR 2023,Poster
Co-design of morphology and control of a Voxel-based Soft Robot (VSR) is challenging due to the notorious bi-level optimization. In this paper, we present a Curriculum-based Co-design (CuCo) method for learning to design and control VSRs through an easy-to-difficult process. Specifically, we expand the design space from a small size to the target size gradually through a predefined curriculum. At each learning stage of the curriculum, we use reinforcement learning to simultaneously train the design policy and the control policy, which is enabled by incorporating the design process into the environment and using differentiable policy representations. The converged morphology and the learned policies from the last stage are inherited and then serve as the starting point for the next stage. In empirical studies, we show that CuCo is more efficient in creating larger robots with better performance by reusing the practical design and control patterns learned within each stage, in comparison to prior approaches that learn from scratch in the space of target size.
https://openreview.net/pdf/13ac2fc1c4bb6b380af8507b9524a58b2144432e.pdf
WiNeRT: Towards Neural Ray Tracing for Wireless Channel Modelling and Differentiable Simulations
https://openreview.net/forum?id=tPKKXeW33YU
https://openreview.net/forum?id=tPKKXeW33YU
Tribhuvanesh Orekondy,Pratik Kumar,Shreya Kadambi,Hao Ye,Joseph Soriaga,Arash Behboodi
ICLR 2023,Poster
In this paper, we work towards a neural surrogate to model wireless electro-magnetic propagation effects in indoor environments. Such neural surrogates provide a fast, differentiable, and continuous representation of the environment and enable end-to-end optimization for downstream tasks (e.g., network planning). Specifically, the goal of the paper is to render the wireless signal (e.g., time of flight, power of each path) in an environment as a function of the sensor's spatial configuration (e.g., placement of transmit and receive antennas). NeRF-based approaches have shown promising results in the visual setting (RGB image signal, with a camera sensor), where the key idea is to algorithmically evaluate the 'global' signal (e.g., using volumetric rendering) by breaking it down into a sequence of 'local' evaluations (e.g., using co-ordinate neural networks). In a similar spirit, we model the time-angle channel impulse response (the global wireless signal) as a superposition of multiple paths. The wireless characteristics (e.g., power) of each path are the result of multiple evaluations of a neural network that learns implicit ray-surface interaction properties. We evaluate our approach in multiple indoor scenarios and demonstrate that our model achieves strong performance (e.g., $<$0.33ns error in time-of-flight predictions). Furthermore, we demonstrate that our neural surrogate whitens the `black-box' wireless simulators, and thus enables inverse rendering applications (e.g., user localization).
https://openreview.net/pdf/6d23a255602a38d6d7163f16454dc1a88ad31db9.pdf
LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning
https://openreview.net/forum?id=o3Q4m8jg4BR
https://openreview.net/forum?id=o3Q4m8jg4BR
Firas Al-Hafez,Davide Tateo,Oleg Arenz,Guoping Zhao,Jan Peters
ICLR 2023,Poster
Recent methods for imitation learning directly learn a $Q$-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization to improve stability and often mistreat absorbing states. Previous works show that a squared norm regularization on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that using this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded $\chi^2$-Divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and properly treat absorbing states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.
https://openreview.net/pdf/b623965b13d278d6941aa06a425e84985098cecf.pdf
Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning
https://openreview.net/forum?id=oJpVVGXu9i
https://openreview.net/forum?id=oJpVVGXu9i
Zebang Shen,Jiayuan Ye,Anmin Kang,Hamed Hassani,Reza Shokri
ICLR 2023,Poster
Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy. Mitigating the risk of this information leakage, using state-of-the-art differentially private algorithms, also does not come for free. Randomized mechanisms can prevent models from converging on even the useful representation functions, especially if there is more disagreement between local models on the classification functions (due to data heterogeneity). In this paper, we consider a representation federated learning objective that encourages various parties to collaboratively refine the consensus part of the model, with differential privacy guarantees, while separately allowing sufficient freedom for local personalization (without releasing it). We prove that in the linear representation setting, while the objective is non-convex, our proposed new algorithm DP-FedRep converges to a ball centered around the \emph{global optimal} solution at a linear rate, and the radius of the ball is proportional to the reciprocal of the privacy budget. With this novel utility analysis, we improve the SOTA utility-privacy trade-off for this problem by a factor of $\sqrt{d}$, where $d$ is the input dimension. We empirically evaluate our method with the image classification task on CIFAR10, CIFAR100, and EMNIST, and observe a significant performance improvement over the prior work under the same small privacy budget. The code can be found at https://github.com/shenzebang/CENTAUR-Privacy-Federated-Representation-Learning.
https://openreview.net/pdf/65d25b717d0c0bbcfc88e898afc2ffee03b7d15e.pdf
EquiMod: An Equivariance Module to Improve Visual Instance Discrimination
https://openreview.net/forum?id=eDLwjKmtYFt
https://openreview.net/forum?id=eDLwjKmtYFt
Alexandre DEVILLERS,Mathieu Lefort
ICLR 2023,Poster
Recent self-supervised visual representation methods are closing the gap with supervised learning performance. Most of these successful methods rely on maximizing the similarity between embeddings of related synthetic inputs created through data augmentations. This can be seen as a task that encourages embeddings to leave out factors modified by these augmentations, i.e. to be invariant to them. However, this only considers one side of the trade-off in the choice of the augmentations: they need to strongly modify the images to avoid simple solution shortcut learning (e.g. using only color histograms), but on the other hand, augmentations-related information may be lacking in the representations for some downstream tasks (e.g. literature shows that color is important for bird and flower classification). Few recent works proposed to mitigate this problem of using only an invariance task by exploring some form of equivariance to augmentations. This has been performed by learning additional embedding space(s), where some augmentation(s) cause embeddings to differ, yet in a non-controlled way. In this work, we introduce EquiMod, a generic equivariance module that structures the learned latent space, in the sense that our module learns to predict the displacement in the embedding space caused by the augmentations. We show that applying that module to state-of-the-art invariance models, such as BYOL and SimCLR, improves performance on the usual CIFAR10 and ImageNet datasets. Moreover, while our model could collapse to a trivial equivariance, i.e. invariance, we observe that it instead automatically learns to keep some augmentations-related information beneficial to the representations.
https://openreview.net/pdf/86ebacd324e18555c29ba7483c276055948f3c1c.pdf
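The "predict the displacement" idea above can be sketched as a small predictor that takes the embedding of one view plus the parameters of the augmentation and outputs the expected embedding of the augmented view. The module shapes, the augmentation encoding, and the cosine objective below are illustrative assumptions, not EquiMod's exact architecture.

```python
# Sketch of an equivariance predictor in embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivariancePredictor(nn.Module):
    def __init__(self, embed_dim=128, aug_dim=6, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + aug_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, z, aug_params):
        return self.net(torch.cat([z, aug_params], dim=1))

pred = EquivariancePredictor()
z1 = torch.randn(32, 128)          # embedding of the first view
z2 = torch.randn(32, 128)          # embedding of the augmented view
aug = torch.randn(32, 6)           # e.g. crop box + color-jitter strengths
loss = -F.cosine_similarity(pred(z1, aug), z2.detach(), dim=1).mean()
loss.backward()
```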
Task-Aware Information Routing from Common Representation Space in Lifelong Learning
https://openreview.net/forum?id=-M0TNnyWFT5
https://openreview.net/forum?id=-M0TNnyWFT5
Prashant Shivaram Bhat,Bahram Zonooz,Elahe Arani
ICLR 2023,Poster
Intelligent systems deployed in the real world suffer from catastrophic forgetting when exposed to a sequence of tasks. Humans, on the other hand, acquire, consolidate, and transfer knowledge between tasks that rarely interfere with the consolidated knowledge. Accompanied by self-regulated neurogenesis, continual learning in the brain is governed by a rich set of neurophysiological processes that harbor different types of knowledge, which are then integrated by conscious processing. Thus, inspired by the Global Workspace Theory of conscious information access in the brain, we propose TAMiL, a continual learning method that entails task-attention modules to capture task-specific information from the common representation space. We employ simple, undercomplete autoencoders to create a communication bottleneck between the common representation space and the global workspace, allowing only the task-relevant information to the global workspace, thereby greatly reducing task interference. Experimental results show that our method outperforms state-of-the-art rehearsal-based and dynamic sparse approaches and bridges the gap between fixed capacity and parameter isolation approaches while being scalable. We also show that our method effectively mitigates catastrophic forgetting while being well-calibrated with reduced task-recency bias.
https://openreview.net/pdf/1de7ad7060651b8d4abeed5bc573cc6a83a35dfe.pdf
CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code
https://openreview.net/forum?id=htL4UZ344nF
https://openreview.net/forum?id=htL4UZ344nF
Nadezhda Chirkova,Sergey Troshin
ICLR 2023,Poster
Recent works have widely adopted large language model pretraining for source code, suggested source code-specific pretraining objectives, and investigated the applicability of various Transformer-based language model architectures for source code. This work investigates another important aspect of such models, the effect of different subtokenization options, and aims at identifying the most effective and length-efficient subtokenizations, taking into account source code specifics. We propose a subtokenization that reduces average length by 17--40% without a downstream performance drop, and show that a carefully chosen subtokenization may improve quality by 0.5--2%, possibly with some length increase.
https://openreview.net/pdf/c3138a16b1e95192c50eacb849b3a42ecf8a6999.pdf
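The length-efficiency comparison above is easy to reproduce in miniature: train a BPE subtokenizer on code and count the tokens an encoded snippet uses. This sketch uses the HuggingFace `tokenizers` library; the vocabulary size and toy corpus are illustrative, and the paper studies many richer options (e.g. treatment of identifiers and whitespace).

```python
# Sketch: train a BPE subtokenizer on code and measure encoded length.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

corpus = [
    "def get_user_name(user_id):",
    "    return db.lookup(user_id).name",
    "for user_id in active_users: print(get_user_name(user_id))",
] * 100

tok = Tokenizer(models.BPE(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=300, special_tokens=["[UNK]"])
tok.train_from_iterator(corpus, trainer)

enc = tok.encode("print(get_user_name(user_id))")
print(len(enc.tokens), enc.tokens)   # fewer tokens => more length-efficient
```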
FairGBM: Gradient Boosting with Fairness Constraints
https://openreview.net/forum?id=x-mXzBgCX3a
https://openreview.net/forum?id=x-mXzBgCX3a
André Cruz,Catarina G Belém,João Bravo,Pedro Saleiro,Pedro Bizarro
ICLR 2023,Poster
Tabular data is prevalent in many high-stakes domains, such as financial services or public policy. Gradient Boosted Decision Trees (GBDT) are popular in these settings due to their scalability, performance, and low training cost. While fairness in these domains is a foremost concern, existing in-processing Fair ML methods are either incompatible with GBDT, or incur significant performance losses while taking considerably longer to train. We present FairGBM, a dual ascent learning framework for training GBDT under fairness constraints, with little to no impact on predictive performance when compared to unconstrained GBDT. Since observational fairness metrics are non-differentiable, we propose smooth convex error rate proxies for common fairness criteria, enabling gradient-based optimization using a ``proxy-Lagrangian'' formulation. Our implementation shows an order of magnitude speedup in training time relative to related work, a pivotal aspect to foster the widespread adoption of FairGBM by real-world practitioners.
https://openreview.net/pdf/cb64783a7e1648699755d4be53dff6bcdb2e0ca3.pdf
Online Bias Correction for Task-Free Continual Learning
https://openreview.net/forum?id=18XzeuYZh_
https://openreview.net/forum?id=18XzeuYZh_
Aristotelis Chrysakis,Marie-Francine Moens
ICLR 2023,Poster
Task-free continual learning is the machine-learning setting where a model is trained online with data generated by a nonstationary stream. Conventional wisdom suggests that, in this setting, models are trained using an approach called experience replay, where the risk is computed both with respect to current stream observations and to a small subset of past observations. In this work, we explain both theoretically and empirically how experience replay biases the outputs of the model towards recent stream observations. Moreover, we propose a simple approach to mitigate this bias online, by changing how the output layer of the model is optimized. We show that our approach significantly improves the learning performance of experience-replay approaches across different datasets. Our findings suggest that, when performing experience replay, the output layer of the model should be optimized separately from the preceding layers.
https://openreview.net/pdf/d93cf42c88023cfea940ad6527e02004c76890e0.pdf
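Mechanically, "optimize the output layer separately from the preceding layers" can be as simple as giving the head its own optimizer. The sketch below shows that split; the concrete learning rates and the two-optimizer design are illustrative assumptions, not the paper's exact bias-correction procedure.

```python
# Sketch: backbone and output layer optimized by separate optimizers.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
head = nn.Linear(256, 10)

opt_backbone = torch.optim.SGD(backbone.parameters(), lr=0.01)
opt_head = torch.optim.SGD(head.parameters(), lr=0.001)   # its own schedule

def replay_step(x, y):
    loss = nn.functional.cross_entropy(head(backbone(x)), y)
    opt_backbone.zero_grad(); opt_head.zero_grad()
    loss.backward()
    opt_backbone.step(); opt_head.step()
    return float(loss)

x = torch.randn(64, 784)                       # mixed current + replayed batch
y = torch.randint(0, 10, (64,))
print(replay_step(x, y))
```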
Don’t fear the unlabelled: safe semi-supervised learning via debiasing
https://openreview.net/forum?id=TN9gQ4x0Ep3
https://openreview.net/forum?id=TN9gQ4x0Ep3
Hugo Schmutz,Olivier HUMBERT,Pierre-Alexandre Mattei
ICLR 2023,Poster
Semi-supervised learning (SSL) provides an effective means of leveraging unlabelled data to improve a model’s performance. Even though the domain has received a considerable amount of attention in the past years, most methods present the common drawback of lacking theoretical guarantees. Our starting point is to notice that the estimate of the risk that most discriminative SSL methods minimise is biased, even asymptotically. This bias impedes the use of standard statistical learning theory and can hurt empirical performance. We propose a simple way of removing the bias. Our debiasing approach is straightforward to implement and applicable to most deep SSL methods. We provide simple theoretical guarantees on the trustworthiness of these modified methods, without having to rely on the strong assumptions on the data distribution that SSL theory usually requires. In particular, we provide generalisation error bounds for the proposed methods. We evaluate debiased versions of different existing SSL methods, such as the Pseudo-label method and Fixmatch, and show that debiasing can compete with classic deep SSL techniques in various settings by providing better calibrated models. Additionally, we provide a theoretical explanation of the intuition behind popular SSL methods. An implementation of a debiased version of Fixmatch is available at https://github.com/HugoSchmutz/DeFixmatch
https://openreview.net/pdf/bb8cb58cb71312b4eb0ae8c65988dbe094f7094f.pdf
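One way to picture the debiasing idea above: if the SSL surrogate term is also evaluated on labelled inputs and subtracted, its systematic bias can cancel in expectation. The sketch below is schematic, with a pseudo-label-style surrogate; the estimator actually analysed in the paper may differ in form and weighting.

```python
# Schematic sketch of a debiased semi-supervised risk estimate.
import torch
import torch.nn.functional as F

def debiased_risk(model, x_lab, y_lab, x_unl, lam=1.0):
    sup = F.cross_entropy(model(x_lab), y_lab)
    def surrogate(x):                      # pseudo-label consistency term
        logits = model(x)
        pseudo = logits.detach().argmax(1)
        return F.cross_entropy(logits, pseudo)
    # biased SSL risk:    sup + lam * surrogate(x_unl)
    # debiased variant:   subtract the same surrogate computed on labelled inputs
    return sup + lam * (surrogate(x_unl) - surrogate(x_lab))

model = torch.nn.Linear(20, 5)
x_lab, y_lab = torch.randn(16, 20), torch.randint(0, 5, (16,))
x_unl = torch.randn(64, 20)
debiased_risk(model, x_lab, y_lab, x_unl).backward()
```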
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
https://openreview.net/forum?id=bjPPypbLre
https://openreview.net/forum?id=bjPPypbLre
Qizhang Li,Yiwen Guo,Wangmeng Zuo,Hao Chen
ICLR 2023,Poster
The transferability of adversarial examples across deep neural networks (DNNs) is the crux of many black-box attacks. Many prior efforts have been devoted to improving the transferability via increasing the diversity in inputs of some substitute models. In this paper, by contrast, we opt for the diversity in substitute models and advocate to attack a Bayesian model for achieving desirable transferability. Deriving from the Bayesian formulation, we develop a principled strategy for possible finetuning, which can be combined with many off-the-shelf Gaussian posterior approximations over DNN parameters. Extensive experiments have been conducted to verify the effectiveness of our method, on common benchmark datasets, and the results demonstrate that our method outperforms recent state-of-the-art methods by large margins (roughly 19% absolute increase in average attack success rate on ImageNet), and, by combining with these recent methods, further performance gain can be obtained. Our code: https://github.com/qizhangli/MoreBayesian-attack.
https://openreview.net/pdf/dd61c1b2c781b395b41e61d1b5657c90bcdd75e9.pdf
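Attacking "a Bayesian model" amounts to averaging gradients over sampled substitute weights before taking the attack step. The sketch below uses an isotropic Gaussian perturbation of the weights as a stand-in posterior; the paper's finetuned posterior approximation is more refined.

```python
# Sketch: average input gradients over sampled substitute-model weights.
import copy
import torch
import torch.nn.functional as F

def bayesian_grad(model, x, y, n_samples=10, sigma=0.01):
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        m = copy.deepcopy(model)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(sigma * torch.randn_like(p))      # sample weights
        x_adv = x.clone().requires_grad_(True)
        F.cross_entropy(m(x_adv), y).backward()
        grad += x_adv.grad
    return grad / n_samples

model = torch.nn.Linear(32, 10)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_adv = (x + 0.03 * bayesian_grad(model, x, y).sign()).clamp(-3, 3)  # one FGSM-like step
```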
Cross-Layer Retrospective Retrieving via Layer Attention
https://openreview.net/forum?id=pvgEL1yS3Ql
https://openreview.net/forum?id=pvgEL1yS3Ql
Yanwen Fang,Yuxi CAI,Jintai Chen,Jingyu Zhao,Guangjian Tian,Guodong Li
ICLR 2023,Poster
More and more evidence has shown that strengthening layer interactions can enhance the representation power of a deep neural network, while self-attention excels at learning interdependencies by retrieving query-activated information. Motivated by this, we devise a cross-layer attention mechanism, called multi-head recurrent layer attention (MRLA), that sends a query representation of the current layer to all previous layers to retrieve query-related information from different levels of receptive fields. A lightweight version of MRLA is also proposed to reduce the quadratic computation cost. The proposed layer attention mechanism can enrich the representation power of many state-of-the-art vision networks, including CNNs and vision transformers. Its effectiveness has been extensively evaluated in image classification, object detection and instance segmentation tasks, where improvements can be consistently observed. For example, our MRLA improves Top-1 accuracy on ResNet-50 by 1.6%, while only introducing 0.16M parameters and 0.07B FLOPs. Surprisingly, it can boost performance by a large margin of 3-4% in box AP and mask AP in dense prediction tasks. Our code is available at https://github.com/joyfang1106/MRLA.
https://openreview.net/pdf/cae8de5d49145465335e2585c7808cfe0dbea268.pdf
Decision S4: Efficient Sequence-Based RL via State Spaces Layers
https://openreview.net/forum?id=kqHkCVS7wbj
https://openreview.net/forum?id=kqHkCVS7wbj
Shmuel Bar David,Itamar Zimerman,Eliya Nachmani,Lior Wolf
ICLR 2023,Poster
Recently, sequence learning methods have been applied to the problem of off-policy Reinforcement Learning, including the seminal work on Decision Transformers, which employs transformers for this task. Since transformers are parameter-heavy, cannot benefit from history longer than a fixed window size, and are not computed using recurrence, we set out to investigate the suitability of the S4 family of models, which are based on state-space layers and have been shown to outperform transformers, especially in modeling long-range dependencies. In this work, we present two main algorithms: (i) an off-policy training procedure that works with trajectories, while still maintaining the training efficiency of the S4 model; and (ii) an on-policy training procedure that is trained in a recurrent manner, benefits from long-range dependencies, and is based on a novel stable actor-critic mechanism. Our results indicate that our method outperforms multiple variants of decision transformers, as well as the other baseline methods on most tasks, while reducing the latency, number of parameters, and training time by several orders of magnitude, making our approach more suitable for real-world RL.
https://openreview.net/pdf/e4218de49caaa090bb46ce1bdd439e9d6d6029fa.pdf
Unveiling the sampling density in non-uniform geometric graphs
https://openreview.net/forum?id=mnVf1W6ipGm
https://openreview.net/forum?id=mnVf1W6ipGm
Raffaele Paolino,Aleksandar Bojchevski,Stephan Günnemann,Gitta Kutyniok,Ron Levie
ICLR 2023,Poster
A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which the sampling density and the neighborhood radius can both vary over the latent space. For instance, in a social network communities can be modeled as densely sampled areas, and hubs as nodes with larger neighborhood radius. In this work, we first perform a rigorous mathematical analysis of this (more general) class of models, including derivations of the resulting graph shift operators. The key insight is that graph shift operators should be corrected in order to avoid potential distortions introduced by the non-uniform sampling. Then, we develop methods to estimate the unknown sampling density in a self-supervised fashion.  Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks. Our experimental findings support our theory and provide strong evidence for our model.
https://openreview.net/pdf/69faac947ab545cf16568ea7a205f86a264b43b5.pdf
Boosting Causal Discovery via Adaptive Sample Reweighting
https://openreview.net/forum?id=LNpMtk15AS4
https://openreview.net/forum?id=LNpMtk15AS4
An Zhang,Fangfu Liu,Wenchang Ma,Zhibo Cai,Xiang Wang,Tat-Seng Chua
ICLR 2023,Poster
Under stringent model type and variable distribution assumptions, score-based causal discovery methods learn the directed acyclic graph (DAG) from observational data by evaluating candidate graphs over an averaged score function. Despite the great success in low-dimensional linear systems, it has been observed that these approaches overly exploit easier-to-fit samples, thus inevitably learning spurious edges. Worse still, the common homogeneity assumption of most causal discovery methods can be easily violated due to the widespread existence of heterogeneous data in the real world, resulting in performance vulnerability when noise distributions vary. We propose a simple yet effective model-agnostic framework to boost causal discovery performance by dynamically learning adaptive weights for the Reweighted Score function, ReScore for short, where the learned weights adapt quantitatively to the importance of each sample. Intuitively, we leverage the bilevel optimization scheme to alternately train a standard DAG learner, then upweight the samples that the DAG learner fails to fit well and downweight the samples that the DAG learner easily extracts the causation information from. Extensive experiments on both synthetic and real-world datasets are carried out to validate the effectiveness of ReScore. We observe consistent and significant boosts in structure learning performance. We further visualize that ReScore concurrently mitigates the influence of spurious edges and generalizes to heterogeneous data. Finally, we perform theoretical analysis to guarantee the structure identifiability and the weight adaptive properties of ReScore. Our codes are available at https://github.com/anzhang314/ReScore.
https://openreview.net/pdf/490a8e5885f74912244f797f7afd7060d7d2bbe9.pdf
Iterative Circuit Repair Against Formal Specifications
https://openreview.net/forum?id=SEcSahl0Ql
https://openreview.net/forum?id=SEcSahl0Ql
Matthias Cosler,Frederik Schmitt,Christopher Hahn,Bernd Finkbeiner
ICLR 2023,Poster
We present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (LTL). Given a defective circuit and its formal specification, we train Transformer models to output circuits that satisfy the corresponding specification. We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit. We introduce a data generation algorithm that enables generalization to more complex specifications and out-of-distribution datasets. In addition, our proposed repair mechanism significantly improves the automated synthesis of circuits from LTL specifications with Transformers. It improves the state-of-the-art by $6.8$ percentage points on held-out instances and $11.8$ percentage points on an out-of-distribution dataset from the annual reactive synthesis competition.
https://openreview.net/pdf/836416358c35826ddb12f100d55e28a66973ef30.pdf
Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study
https://openreview.net/forum?id=UazgYBMS9-W
https://openreview.net/forum?id=UazgYBMS9-W
Mingxu Tao,Yansong Feng,Dongyan Zhao
ICLR 2023,Poster
Large pre-trained language models have helped achieve state-of-the-art results on a variety of NLP tasks; nevertheless, they still suffer from forgetting when incrementally learning a series of sequential tasks. To alleviate this problem, recent works propose several models enhanced by sparse experience replay and local adaption, which yield satisfactory performance. However, in this paper we find that pre-trained language models like BERT have a potential ability to learn sequentially, even without any sparse memory replay. To verify the ability of BERT to maintain old knowledge, we adopt and re-finetune single-layer probe networks with the parameters of BERT fixed. We investigate the models on two typical kinds of NLP tasks, text classification and extractive question answering. Our experiments reveal that BERT can actually generate high-quality representations for previous tasks over the long term, under extremely sparse replay or even no replay. We further introduce a series of methods to interpret the mechanism of forgetting and how memory rehearsal plays a significant role in task incremental learning, which bridges the gap between our new discovery and previous studies about catastrophic forgetting. Additionally, we provide both quantified and visualized results demonstrating that the representation space of BERT is always topologically organised, which guarantees its performance.
https://openreview.net/pdf/004c5b63bfdd7dc3e0577f31c9ce5ac302b1bc68.pdf
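The probing setup described above (frozen BERT, single-layer probe) can be sketched in a few lines with the `transformers` library. The model name, tiny batch, and single optimizer step are illustrative; the paper re-finetunes such probes after each task in a sequence.

```python
# Sketch: freeze BERT and train a single linear probe on [CLS] representations.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)
for p in bert.parameters():
    p.requires_grad = False                      # BERT stays fixed

probe = torch.nn.Linear(bert.config.hidden_size, 2)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

texts = ["great movie", "terrible plot"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    cls = bert(**batch).last_hidden_state[:, 0]  # [CLS] token representation
loss = torch.nn.functional.cross_entropy(probe(cls), labels)
opt.zero_grad(); loss.backward(); opt.step()
# Re-finetuning such probes after later tasks tests whether the frozen
# representations for earlier tasks remain linearly decodable.
```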