Dataset schema (column: type, observed value statistics)
title: string, 16-162 characters
url: string, 108 characters
authors: string, 7-427 characters
detail_url: string, 108 characters
tags: string, 1 distinct value
Bibtex: string, 54 characters
Paper: string, 104 characters
Supplemental: string, 111 characters
abstract: string, 1-2.47k characters
Paper_Errata: string, 1 distinct value
Supplemental_Errata: string, 1 distinct value
Intermediate Prototype Mining Transformer for Few-Shot Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2022/hash/f7fef21d1fb3e950b12b50ad7f395e31-Abstract-Conference.html
YUANWEI LIU, Nian Liu, Xiwen Yao, Junwei Han
https://papers.nips.cc/paper_files/paper/2022/hash/f7fef21d1fb3e950b12b50ad7f395e31-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18156-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f7fef21d1fb3e950b12b50ad7f395e31-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f7fef21d1fb3e950b12b50ad7f395e31-Supplemental-Conference.pdf
Few-shot semantic segmentation aims to segment the target objects in a query image given only a few annotated support images. Most previous works strive to mine more effective category information from the support to match the corresponding objects in the query. However, they all ignore the category information gap between query and support images. If the objects in them show large intra-class diversity, forcibly migrating the category information from the support to the query is ineffective. To solve this problem, we are the first to introduce an intermediate prototype for mining both deterministic category information from the support and adaptive category knowledge from the query. Specifically, we design an Intermediate Prototype Mining Transformer (IPMT) to learn the prototype in an iterative way. In each IPMT layer, we propagate the object information in both support and query features to the prototype and then use it to activate the query feature map. By conducting this process iteratively, both the intermediate prototype and the query feature can be progressively improved. At last, the final query feature is used to yield precise segmentation prediction. Extensive experiments on both PASCAL-5i and COCO-20i datasets clearly verify the effectiveness of our IPMT and show that it outperforms previous state-of-the-art methods by a large margin. Code is available at https://github.com/LIUYUANWEI98/IPMT
null
null
Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning
https://papers.nips.cc/paper_files/paper/2022/hash/f8290ccc2905538be1a7f7914ccef629-Abstract-Conference.html
Yuchong Sun, Hongwei Xue, Ruihua Song, Bei Liu, Huan Yang, Jianlong Fu
https://papers.nips.cc/paper_files/paper/2022/hash/f8290ccc2905538be1a7f7914ccef629-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19099-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f8290ccc2905538be1a7f7914ccef629-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f8290ccc2905538be1a7f7914ccef629-Supplemental-Conference.pdf
Large-scale video-language pre-training has shown significant improvement in video-language understanding tasks. Previous studies of video-language pretraining mainly focus on short-form videos (i.e., within 30 seconds) and sentences, leaving long-form video-language pre-training rarely explored. Directly learning representation from long-form videos and language may benefit many long-form video-language understanding tasks. However, it is challenging due to the difficulty of modeling long-range relationships and the heavy computational burden caused by more frames. In this paper, we introduce a Long-Form VIdeo-LAnguage pre-training model (LF-VILA) and train it on a large-scale long-form video and paragraph dataset constructed from an existing public dataset. To effectively capture the rich temporal dynamics and to better align video and language in an efficient end-to-end manner, we introduce two novel designs in our LF-VILA model. We first propose a Multimodal Temporal Contrastive (MTC) loss to learn the temporal relation across different modalities by encouraging fine-grained alignment between long-form videos and paragraphs. Second, we propose a Hierarchical Temporal Window Attention (HTWA) mechanism to effectively capture long-range dependency while reducing computational cost in Transformer. We fine-tune the pre-trained LF-VILA model on seven downstream long-form video-language understanding tasks of paragraph-to-video retrieval and long-form video question-answering, and achieve new state-of-the-art performances. Specifically, our model achieves 16.1% relative improvement on ActivityNet paragraph-to-video retrieval task and 2.4% on How2QA task, respectively. We release our code, dataset, and pre-trained models at https://github.com/microsoft/XPretrain.
null
null
Model Preserving Compression for Neural Networks
https://papers.nips.cc/paper_files/paper/2022/hash/f8928b073ccbec15d35f2a9d39430bfd-Abstract-Conference.html
Jerry Chee, Megan Flynn (née Renz), Anil Damle, Christopher M. De Sa
https://papers.nips.cc/paper_files/paper/2022/hash/f8928b073ccbec15d35f2a9d39430bfd-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17664-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f8928b073ccbec15d35f2a9d39430bfd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f8928b073ccbec15d35f2a9d39430bfd-Supplemental-Conference.zip
After training complex deep learning models, a common task is to compress the model to reduce compute and storage demands. When compressing, it is desirable to preserve the original model's per-example decisions (e.g., to go beyond top-1 accuracy or preserve robustness), maintain the network's structure, automatically determine per-layer compression levels, and eliminate the need for fine tuning. No existing compression methods simultaneously satisfy these criteria---we introduce a principled approach that does by leveraging interpolative decompositions. Our approach simultaneously selects and eliminates channels (analogously, neurons), then constructs an interpolation matrix that propagates a correction into the next layer, preserving the network's structure. Consequently, our method achieves good performance even without fine tuning and admits theoretical analysis. Our theoretical generalization bound for a one layer network lends itself naturally to a heuristic that allows our method to automatically choose per-layer sizes for deep networks. We demonstrate the efficacy of our approach with strong empirical performance on a variety of tasks, models, and datasets---from simple one-hidden-layer networks to deep networks on ImageNet.
null
null
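A minimal sketch of the interpolative-decomposition idea described in the abstract above, assuming a plain two-layer network and column-pivoted QR for the decomposition; the layer sizes, calibration data, and function names are illustrative and not taken from the paper or its code.

```python
import numpy as np
from scipy.linalg import qr

def interpolative_decomposition(A, k):
    """Select k columns of A (m x n) and an interpolation matrix T such that
    A is approximated by A[:, idx] @ T, via column-pivoted QR."""
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    idx = piv[:k]                                   # indices of the kept columns
    R11, R12 = R[:k, :k], R[:k, k:]
    T = np.zeros((k, A.shape[1]))
    T[:, piv[:k]] = np.eye(k)
    T[:, piv[k:]] = np.linalg.solve(R11, R12)       # coefficients for dropped columns
    return idx, T

# toy example: prune hidden units of y = W2 @ relu(W1 @ x) without fine-tuning
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 32))   # redundant hidden layer
W2 = rng.normal(size=(10, 64))
X = rng.normal(size=(32, 500))                      # calibration inputs
H = np.maximum(W1 @ X, 0)                           # hidden activations (64 x 500)
idx, T = interpolative_decomposition(H.T, k=16)     # keep 16 of 64 hidden units
W1_small = W1[idx]                                  # pruned first layer
W2_small = W2 @ T.T                                 # correction folded into the next layer
H_small = np.maximum(W1_small @ X, 0)
err = np.linalg.norm(W2 @ H - W2_small @ H_small) / np.linalg.norm(W2 @ H)
print(f"relative output error after pruning 64 -> 16 units: {err:.3f}")
```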
Neural Conservation Laws: A Divergence-Free Perspective
https://papers.nips.cc/paper_files/paper/2022/hash/f8d39584f87944e5dbe46ec76f19e20a-Abstract-Conference.html
Jack Richter-Powell, Yaron Lipman, Ricky T. Q. Chen
https://papers.nips.cc/paper_files/paper/2022/hash/f8d39584f87944e5dbe46ec76f19e20a-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17350-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f8d39584f87944e5dbe46ec76f19e20a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f8d39584f87944e5dbe46ec76f19e20a-Supplemental-Conference.pdf
We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that any solution of the continuity equation can be represented as a divergence-free vector field. We hence propose building divergence-free neural networks through the concept of differential forms, and with the aid of automatic differentiation, realize two practical constructions. As a result, we can parameterize pairs of densities and vector fields that always satisfy the continuity equation by construction, foregoing the need for extra penalty methods or expensive numerical simulation. Furthermore, we prove these models are universal and so can be used to represent any divergence-free vector field. Finally, we experimentally validate our approaches by computing neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
null
null
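The divergence-free construction described above can be illustrated with automatic differentiation: the row-wise divergence of an antisymmetric matrix field is a vector field whose divergence vanishes identically. The sketch below shows only this principle, not the authors' architecture; the small MLP and parameter shapes are assumptions.

```python
import jax
import jax.numpy as jnp

def antisymmetric_matrix(params, x):
    """A small MLP producing a 3x3 field, antisymmetrized as A = U - U^T."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)
    u = (W2 @ h + b2).reshape(3, 3)
    return u - u.T                                   # antisymmetric by construction

def vector_field(params, x):
    """v_i(x) = sum_j dA_ij/dx_j is divergence-free whenever A is antisymmetric."""
    J = jax.jacfwd(lambda y: antisymmetric_matrix(params, y))(x)   # shape (3, 3, 3)
    return jnp.einsum("ijj->i", J)

def divergence(params, x):
    Jv = jax.jacfwd(lambda y: vector_field(params, y))(x)
    return jnp.trace(Jv)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (16, 3)), jnp.zeros(16),
          jax.random.normal(k2, (9, 16)), jnp.zeros(9))
x = jnp.array([0.3, -1.2, 0.7])
print(vector_field(params, x))
print(divergence(params, x))      # ~0 up to floating-point error, for any params
```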
Towards Effective Multi-Modal Interchanges in Zero-Resource Sounding Object Localization
https://papers.nips.cc/paper_files/paper/2022/hash/f8de10c9ff056ae3d1eef43ad1762351-Abstract-Conference.html
Yang Zhao, Chen Zhang, Haifeng Huang, Haoyuan Li, Zhou Zhao
https://papers.nips.cc/paper_files/paper/2022/hash/f8de10c9ff056ae3d1eef43ad1762351-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17142-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f8de10c9ff056ae3d1eef43ad1762351-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f8de10c9ff056ae3d1eef43ad1762351-Supplemental-Conference.pdf
Aiming to locate the object that emits a specified sound in complex scenes, the task of sounding object localization bridges two perception-oriented modalities of vision and acoustics, and brings enormous research value to the comprehensive perceptual understanding of machine intelligence. Although there are massive training data collected in this field, few of them contain accurate bounding box annotations, hindering the learning process and further application of proposed models. In order to address this problem, we try to explore an effective multi-modal knowledge transfer strategy to obtain precise knowledge from other similar tasks and transfer it through well-aligned multi-modal data to deal with this task in a zero-resource manner. Concretely, we design and propose a novel \textit{Two-stream Universal Referring localization Network} (TURN), which is composed of a localization stream and an alignment stream to carry out different functions. The former is utilized to extract the knowledge related to referring object localization from the image grounding task, while the latter is devised to learn a universal semantic space shared between texts and audios. Moreover, we further develop an adaptive sampling strategy to automatically identify the overlap between different data domains, thus boosting the performance and stability of our model. The extensive experiments on various publicly-available benchmarks demonstrate that TURN can achieve competitive performance compared with the state-of-the-art approaches without using any data in this field, which verifies the feasibility of our proposed mechanisms and strategies.
null
null
On the Convergence of Stochastic Multi-Objective Gradient Manipulation and Beyond
https://papers.nips.cc/paper_files/paper/2022/hash/f91bd64a3620aad8e70a27ad9cb3ca57-Abstract-Conference.html
Shiji Zhou, Wenpeng Zhang, Jiyan Jiang, Wenliang Zhong, Jinjie GU, Wenwu Zhu
https://papers.nips.cc/paper_files/paper/2022/hash/f91bd64a3620aad8e70a27ad9cb3ca57-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18586-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f91bd64a3620aad8e70a27ad9cb3ca57-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f91bd64a3620aad8e70a27ad9cb3ca57-Supplemental-Conference.pdf
The conflicting gradients problem is one of the major bottlenecks for the effective training of machine learning models that deal with multiple objectives. To resolve this problem, various gradient manipulation techniques, such as PCGrad, MGDA, and CAGrad, have been developed, which directly alter the conflicting gradients to refined ones with alleviated or even no conflicts. However, the existing design and analysis of these techniques are mainly conducted under the full-batch gradient setting, ignoring the fact that they are primarily applied with stochastic mini-batch gradients. In this paper, we illustrate that the stochastic gradient manipulation algorithms may fail to converge to Pareto optimal solutions. Firstly, we show that these different algorithms can be summarized into a unified algorithmic framework, where the descent direction is given by the composition of the gradients of the multiple objectives. Then we provide an explicit two-objective convex optimization instance to explicate the non-convergence issue under the unified framework, which suggests that the non-convergence results from the determination of the composite weights solely by the instantaneous stochastic gradients. To fix the non-convergence issue, we propose a novel composite weights determination scheme that exponentially averages the past calculated weights. Finally, we show that the resulting new variant of stochastic gradient manipulation converges to Pareto optimal or critical solutions and yields comparable or improved empirical performance.
null
null
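A toy sketch of the fix described above, exponentially averaging the composite weights produced by a min-norm (MGDA-style) combination rule on a two-objective quadratic problem; the closed-form two-objective weight, step sizes, and noise model are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

def mgda_weight(g1, g2):
    """Min-norm weight w in [0, 1] such that w*g1 + (1-w)*g2 has the smallest norm
    (closed form for two objectives)."""
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-12:
        return 0.5
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

# two-objective toy problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rng = np.random.default_rng(0)
x = np.array([2.0, 2.0])
lam, beta, lr = 0.5, 0.1, 0.05                  # beta: rate of the exponential averaging

for t in range(500):
    noise = rng.normal(scale=0.5, size=(2, 2))  # stochastic mini-batch noise
    g1 = 2 * (x - a) + noise[0]
    g2 = 2 * (x - b) + noise[1]
    w = mgda_weight(g1, g2)                     # instantaneous weight (very noisy)
    lam = (1 - beta) * lam + beta * w           # exponentially averaged composite weight
    x = x - lr * (lam * g1 + (1 - lam) * g2)

print(x)   # should land near the Pareto set, the segment between a and b
```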
Decentralized Local Stochastic Extra-Gradient for Variational Inequalities
https://papers.nips.cc/paper_files/paper/2022/hash/f9379afacdbabfdc6b060972b60f9ab8-Abstract-Conference.html
Aleksandr Beznosikov, Pavel Dvurechenskii, Anastasiia Koloskova, Valentin Samokhin, Sebastian U. Stich, Alexander Gasnikov
https://papers.nips.cc/paper_files/paper/2022/hash/f9379afacdbabfdc6b060972b60f9ab8-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17401-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9379afacdbabfdc6b060972b60f9ab8-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9379afacdbabfdc6b060972b60f9ab8-Supplemental-Conference.pdf
We consider distributed stochastic variational inequalities (VIs) on unbounded domains with the problem data that is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized calculations with time-varying networks and centralized topologies commonly used in Federated Learning. Moreover, multiple local updates on the workers can be made for reducing the communication frequency between the workers. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPP), e.g., to the training of Deep Generative Adversarial Networks (GANs) for which decentralized training has been reported to be extremely challenging. In experiments for the decentralized training of GANs we demonstrate the effectiveness of our proposed approach.
null
null
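A toy sketch of local stochastic extragradient steps combined with a gossip (mixing) round, in the spirit of the setting above, on a synthetic strongly-monotone variational inequality; the complete-graph mixing matrix, linear operators, and step sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 4, 3                                        # number of workers, problem dimension
# heterogeneous strongly-monotone local operators F_m(z) = A_m z - b_m
A = [np.eye(d) + 0.1 * rng.normal(size=(d, d)) for _ in range(M)]
A = [0.5 * (Am + Am.T) + d * np.eye(d) for Am in A]
b = [rng.normal(size=d) for _ in range(M)]
z_star = np.linalg.solve(sum(A), sum(b))           # solution of the global VI: sum_m F_m(z) = 0

W = np.full((M, M), 1.0 / M)                       # mixing matrix (complete graph here)
Z = np.zeros((M, d))                               # one local iterate per worker
gamma, local_steps = 0.02, 3

def stoch_F(m, z):
    return A[m] @ z - b[m] + 0.1 * rng.normal(size=d)   # noisy local operator call

for rnd in range(300):
    for m in range(M):                             # local extragradient updates
        z = Z[m]
        for _ in range(local_steps):
            z_half = z - gamma * stoch_F(m, z)     # extrapolation step
            z = z - gamma * stoch_F(m, z_half)     # update step
        Z[m] = z
    Z = W @ Z                                      # communication round: mix the iterates

print(np.linalg.norm(Z.mean(axis=0) - z_star))     # small residual due to gradient noise
```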
Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation
https://papers.nips.cc/paper_files/paper/2022/hash/f959b05dd74ba8a735276c3df4ae8b71-Abstract-Conference.html
Peihao Chen, Dongyu Ji, Kunyang Lin, Runhao Zeng, Thomas Li, Mingkui Tan, Chuang Gan
https://papers.nips.cc/paper_files/paper/2022/hash/f959b05dd74ba8a735276c3df4ae8b71-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18278-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f959b05dd74ba8a735276c3df4ae8b71-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f959b05dd74ba8a735276c3df4ae8b71-Supplemental-Conference.pdf
We address a practical yet challenging problem of training robot agents to navigate in an environment following a path described by some language instructions. The instructions often contain descriptions of objects in the environment. To achieve accurate and efficient navigation, it is critical to build a map that accurately represents both spatial location and the semantic information of the environment objects. However, enabling a robot to build a map that well represents the environment is extremely challenging as the environment often involves diverse objects with various attributes. In this paper, we propose a multi-granularity map, which contains both object fine-grained details (e.g., color, texture) and semantic classes, to represent objects more comprehensively. Moreover, we propose a weakly-supervised auxiliary task, which requires the agent to localize instruction-relevant objects on the map. Through this task, the agent not only learns to localize the instruction-relevant objects for navigation but also is encouraged to learn a better map representation that reveals object information. We then feed the learned map and instruction to a waypoint predictor to determine the next navigation goal. Experimental results show our method outperforms the state-of-the-art by 4.0% and 4.6% in success rate in seen and unseen environments, respectively, on the VLN-CE dataset. The code is available at https://github.com/PeihaoChen/WS-MGMap.
null
null
Thinned random measures for sparse graphs with overlapping communities
https://papers.nips.cc/paper_files/paper/2022/hash/f9668d223e713943634dce9c66e8f2c1-Abstract-Conference.html
Federica Zoe Ricci, Michele Guindani, Erik Sudderth
https://papers.nips.cc/paper_files/paper/2022/hash/f9668d223e713943634dce9c66e8f2c1-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18308-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9668d223e713943634dce9c66e8f2c1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9668d223e713943634dce9c66e8f2c1-Supplemental-Conference.pdf
Network models for exchangeable arrays, including most stochastic block models, generate dense graphs with a limited ability to capture many characteristics of real-world social and biological networks. A class of models based on completely random measures like the generalized gamma process (GGP) have recently addressed some of these limitations. We propose a framework for thinning edges from realizations of GGP random graphs that models observed links via nodes' overall propensity to interact, as well as the similarity of node memberships within a large set of latent communities. Our formulation allows us to learn the number of communities from data, and enables efficient Monte Carlo methods that scale linearly with the number of observed edges, and thus (unlike dense block models) sub-quadratically with the number of entities or nodes. We compare to alternative models for both dense and sparse networks, and demonstrate effective recovery of latent community structure for real-world networks with thousands of nodes.
null
null
Fine-tuning language models to find agreement among humans with diverse preferences
https://papers.nips.cc/paper_files/paper/2022/hash/f978c8f3b5f399cae464e85f72e28503-Abstract-Conference.html
Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, Christopher Summerfield
https://papers.nips.cc/paper_files/paper/2022/hash/f978c8f3b5f399cae464e85f72e28503-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17047-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f978c8f3b5f399cae464e85f72e28503-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f978c8f3b5f399cae464e85f72e28503-Supplemental-Conference.pdf
Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs ($>70\%$) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions ($>65\%$). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
null
null
What is a Good Metric to Study Generalization of Minimax Learners?
https://papers.nips.cc/paper_files/paper/2022/hash/f9b8853ea81731f9bfc11820b064de96-Abstract-Conference.html
Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang
https://papers.nips.cc/paper_files/paper/2022/hash/f9b8853ea81731f9bfc11820b064de96-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18527-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9b8853ea81731f9bfc11820b064de96-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9b8853ea81731f9bfc11820b064de96-Supplemental-Conference.pdf
Minimax optimization has served as the backbone of many machine learning problems. Although the convergence behavior of optimization algorithms has been extensively studied in minimax settings, their generalization guarantees, i.e., how the model trained on empirical data performs on the unseen testing data, have been relatively under-explored. A fundamental question remains elusive: What is a good metric to study generalization of minimax learners? In this paper, we aim to answer this question by first showing that primal risk, a universal metric to study generalization in minimization problems, fails in simple examples of minimax problems. Furthermore, another popular metric, the primal-dual risk, also fails to characterize the generalization behavior for minimax problems with nonconvexity, due to non-existence of saddle points. We thus propose a new metric to study generalization of minimax learners: the primal gap, to circumvent these issues. Next, we derive generalization bounds for the primal gap in nonconvex-concave settings. As byproducts of our analysis, we also solve two open questions: establishing generalization bounds for primal risk and primal-dual risk in this setting, and in the strong sense, i.e., without assuming that the maximization and expectation can be interchanged. Finally, we leverage this new metric to compare the generalization behavior of two popular algorithms - gradient descent-ascent (GDA) and gradient descent-max (GDMax) in minimax optimization.
null
null
Sequencer: Deep LSTM for Image Classification
https://papers.nips.cc/paper_files/paper/2022/hash/f9d7d6c695bc983fcfb5b70a5fbdfd2f-Abstract-Conference.html
Yuki Tatsunami, Masato Taki
https://papers.nips.cc/paper_files/paper/2022/hash/f9d7d6c695bc983fcfb5b70a5fbdfd2f-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17888-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9d7d6c695bc983fcfb5b70a5fbdfd2f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9d7d6c695bc983fcfb5b70a5fbdfd2f-Supplemental-Conference.pdf
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using simple multi-layer perceptrons. In contrast, several studies have also suggested that carefully redesigned convolutional neural networks (CNNs) can achieve advanced performance comparable to ViT without resorting to these new ideas. Against this background, there is growing interest in what inductive bias is suitable for computer vision. Here we propose Sequencer, a novel and competitive architecture alternative to ViT that provides a new perspective on these issues. Unlike ViTs, Sequencer models long-range dependencies using LSTMs rather than self-attention layers. We also propose a two-dimensional version of Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Despite its simplicity, several experiments demonstrate that Sequencer performs impressively well: Sequencer2D-L, with 54M parameters, realizes 84.6% top-1 accuracy on only ImageNet-1K. Not only that, we show that it has good transferability and the robust resolution adaptability on double resolution-band. Our source code is available at https://github.com/okojoalg/sequencer.
null
null
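A minimal sketch of the two-dimensional LSTM mixing described above: one bidirectional LSTM runs over each row of the token grid and another over each column, and their outputs are fused back to the channel dimension. Layer sizes, normalization placement, and everything around the block are illustrative and differ from the released Sequencer code.

```python
import torch
import torch.nn as nn

class Sequencer2DBlock(nn.Module):
    """Token mixing with vertical and horizontal bidirectional LSTMs on a 2D grid."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.lstm_h = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.lstm_v = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(4 * hidden, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                            # x: (B, H, W, C) patch grid
        B, H, W, C = x.shape
        y = self.norm(x)
        h_in = y.reshape(B * H, W, C)                # each row is a horizontal sequence
        h_out, _ = self.lstm_h(h_in)
        h_out = h_out.reshape(B, H, W, -1)
        v_in = y.permute(0, 2, 1, 3).reshape(B * W, H, C)   # each column is a sequence
        v_out, _ = self.lstm_v(v_in)
        v_out = v_out.reshape(B, W, H, -1).permute(0, 2, 1, 3)
        return x + self.proj(torch.cat([h_out, v_out], dim=-1))   # residual connection

x = torch.randn(2, 14, 14, 192)                      # e.g. a 14x14 patch grid, 192 channels
block = Sequencer2DBlock(dim=192, hidden=48)
print(block(x).shape)                                # torch.Size([2, 14, 14, 192])
```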
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination
https://papers.nips.cc/paper_files/paper/2022/hash/f9e2800a251fa9107a008104f47c45d1-Abstract-Conference.html
Jiafei Lyu, Xiu Li, Zongqing Lu
https://papers.nips.cc/paper_files/paper/2022/hash/f9e2800a251fa9107a008104f47c45d1-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17600-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9e2800a251fa9107a008104f47c45d1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9e2800a251fa9107a008104f47c45d1-Supplemental-Conference.pdf
The learned policy of model-free offline reinforcement learning (RL) methods is often constrained to stay within the support of datasets to avoid possible dangerous out-of-distribution actions or states, making it challenging to handle out-of-support region. Model-based RL methods offer a richer dataset and benefit generalization by generating imaginary trajectories with either trained forward or reverse dynamics model. However, the imagined transitions may be inaccurate, thus downgrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with double check. We introduce conservatism by trusting samples that the forward model and backward model agree on. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores against baseline methods.
null
null
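A conceptual sketch of the double-check idea above: an imagined transition is kept only when the forward and backward dynamics models agree on it. The models, rollout policy, and tolerance below are toy stand-ins, not the paper's trained components or its confidence measure.

```python
import numpy as np

def double_check_rollout(s0, policy, fwd_model, bwd_model, horizon, tol):
    """Imagine transitions with the forward model and keep only those for which the
    backward model reconstructs the current state within tolerance tol."""
    kept, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        s_next = fwd_model(s, a)                 # forward imagination
        s_back = bwd_model(s_next, a)            # reverse prediction of the current state
        if np.linalg.norm(s - s_back) <= tol:    # confidence check: both models agree
            kept.append((s.copy(), a, s_next.copy()))
        s = s_next
    return kept

# toy deterministic dynamics s' = s + a, with slightly mismatched "learned" models
rng = np.random.default_rng(0)
fwd = lambda s, a: s + a + 0.01 * rng.normal(size=s.shape)
bwd = lambda s_next, a: s_next - a + 0.05 * rng.normal(size=s_next.shape)
pol = lambda s: np.clip(-0.1 * s, -1, 1)
transitions = double_check_rollout(np.ones(3), pol, fwd, bwd, horizon=20, tol=0.05)
print(len(transitions), "of 20 imagined transitions kept")
```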
Learning on Arbitrary Graph Topologies via Predictive Coding
https://papers.nips.cc/paper_files/paper/2022/hash/f9f54762cbb4fe4dbffdd4f792c31221-Abstract-Conference.html
Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, Tianyi Bao, Rafal Bogacz, Thomas Lukasiewicz
https://papers.nips.cc/paper_files/paper/2022/hash/f9f54762cbb4fe4dbffdd4f792c31221-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19274-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/f9f54762cbb4fe4dbffdd4f792c31221-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/f9f54762cbb4fe4dbffdd4f792c31221-Supplemental-Conference.pdf
Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex is potentially fundamental for its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and comparing against simple baselines trained with BP.
null
null
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
https://papers.nips.cc/paper_files/paper/2022/hash/fa0126bb7ebad258bf4ffdbbac2dd787-Abstract-Conference.html
Khoa D Doan, Yingjie Lao, Ping Li
https://papers.nips.cc/paper_files/paper/2022/hash/fa0126bb7ebad258bf4ffdbbac2dd787-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17389-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa0126bb7ebad258bf4ffdbbac2dd787-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa0126bb7ebad258bf4ffdbbac2dd787-Supplemental-Conference.pdf
In recent years, machine learning models have been shown to be vulnerable to backdoor attacks. Under such attacks, an adversary embeds a stealthy backdoor into the trained model such that the compromised models will behave normally on clean inputs but will misclassify according to the adversary's control on maliciously constructed input with a trigger. While these existing attacks are very effective, the adversary's capability is limited: given an input, these attacks can only cause the model to misclassify toward a single pre-defined or target class. In contrast, this paper exploits a novel backdoor attack with a much more powerful payload, denoted as Marksman, where the adversary can arbitrarily choose which target class the model will misclassify given any input during inference. To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model. Given the learned trigger-generation function, during inference, the adversary can specify an arbitrary backdoor attack target class, and an appropriate trigger causing the model to classify toward this target class is created accordingly. We show empirically that the proposed framework achieves high attack performance (e.g., 100% attack success rates in several experiments) while preserving the clean-data performance in several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImageNet. The proposed Marksman backdoor attack can also easily bypass existing backdoor defenses that were originally designed against backdoor attacks with a single target class. Our work takes another significant step toward understanding the extensive risks of backdoor attacks in practice.
null
null
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
https://papers.nips.cc/paper_files/paper/2022/hash/fa0509f4dab6807e2cb465715bf2d249-Abstract-Conference.html
Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, Armen Aghajanyan
https://papers.nips.cc/paper_files/paper/2022/hash/fa0509f4dab6807e2cb465715bf2d249-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18962-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa0509f4dab6807e2cb465715bf2d249-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa0509f4dab6807e2cb465715bf2d249-Supplemental-Conference.pdf
Despite their wide adoption, the underlying training and memorization dynamics of very large language models are not well understood. We empirically study exact memorization in causal and masked language modeling, across model sizes and throughout the training process. We measure the effects of dataset size, learning rate, and model size on memorization, finding that larger language models memorize training data faster across all settings. Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process. We also analyze the memorization dynamics of different parts of speech and find that models memorize nouns and numbers first; we hypothesize and provide empirical evidence that nouns and numbers act as a unique identifier for memorizing individual training examples. Together, these findings present another piece of the broader puzzle of trying to understand what actually improves as models get bigger.
null
null
Bandit Theory and Thompson Sampling-Guided Directed Evolution for Sequence Optimization
https://papers.nips.cc/paper_files/paper/2022/hash/fa3c139cf8084de7bfd944f1c90c8695-Abstract-Conference.html
Hui Yuan, Chengzhuo Ni, Huazheng Wang, Xuezhou Zhang, Le Cong, Csaba Szepesvari, Mengdi Wang
https://papers.nips.cc/paper_files/paper/2022/hash/fa3c139cf8084de7bfd944f1c90c8695-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18493-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa3c139cf8084de7bfd944f1c90c8695-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa3c139cf8084de7bfd944f1c90c8695-Supplemental-Conference.pdf
Directed Evolution (DE), a landmark wet-lab method that originated in the 1960s, enables the discovery of novel protein designs via evolving a population of candidate sequences. Recent advances in biotechnology have made it possible to collect high-throughput data, allowing the use of machine learning to map out a protein's sequence-to-function relation. There is a growing interest in machine learning-assisted DE for accelerating protein optimization. Yet the theoretical understanding of DE, as well as the use of machine learning in DE, remains limited. In this paper, we connect DE with bandit learning theory and make a first attempt to study regret minimization in DE. We propose a Thompson Sampling-guided Directed Evolution (TS-DE) framework for sequence optimization, where the sequence-to-function mapping is unknown and querying a single value is subject to costly and noisy measurements. TS-DE updates a posterior of the function based on collected measurements. It uses a posterior-sampled function estimate to guide the crossover recombination and mutation steps in DE. In the case of a linear model, we show that TS-DE enjoys a Bayesian regret of order $\tilde O(d^{2}\sqrt{MT})$, where $d$ is the feature dimension, $M$ is the population size, and $T$ is the number of rounds. This regret bound is nearly optimal, confirming that bandit learning can provably accelerate DE. It may have implications for more general sequence optimization and evolutionary algorithms.
null
null
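A toy sketch of a Thompson sampling-guided directed evolution loop under a linear sequence-to-function model, as described above: sample a model from the Bayesian posterior, use it to guide selection before crossover and mutation, then update the posterior with noisy measurements of the new population. Population size, mutation rate, and the prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, T, sigma = 20, 30, 40, 0.5
theta_true = rng.normal(size=d)                    # unknown sequence-to-function weights
pop = rng.integers(0, 2, size=(M, d)).astype(float)

V_inv, Xty = np.eye(d), np.zeros(d)                # Bayesian linear regression, prior N(0, I)

for t in range(T):
    mu = np.linalg.solve(V_inv, Xty)
    theta_s = rng.multivariate_normal(mu, np.linalg.inv(V_inv))   # Thompson sample
    scores = pop @ theta_s
    parents = pop[np.argsort(scores)[-M // 2:]]    # selection guided by the sampled model
    children = []
    for _ in range(M):                             # crossover recombination + mutation
        p, q = parents[rng.integers(len(parents), size=2)]
        mask = rng.integers(0, 2, size=d).astype(bool)
        child = np.where(mask, p, q)
        flip = rng.random(d) < 0.02
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children, dtype=float)
    y = pop @ theta_true + sigma * rng.normal(size=M)   # noisy measurements
    V_inv += pop.T @ pop / sigma**2                # posterior update
    Xty += pop.T @ y / sigma**2

print("best true fitness in final population:", (pop @ theta_true).max())
print("optimum of the linear model:          ", np.maximum(theta_true, 0).sum())
```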
Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy
https://papers.nips.cc/paper_files/paper/2022/hash/fa5617c176e76fee83f3f9947fdf9f3f-Abstract-Conference.html
Zhiqi Bu, Jialin Mao, Shiyun Xu
https://papers.nips.cc/paper_files/paper/2022/hash/fa5617c176e76fee83f3f9947fdf9f3f-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18675-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa5617c176e76fee83f3f9947fdf9f3f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa5617c176e76fee83f3f9947fdf9f3f-Supplemental-Conference.zip
Large convolutional neural networks (CNN) can be difficult to train in the differentially private (DP) regime, since the optimization algorithms require a computationally expensive operation, known as per-sample gradient clipping. We propose an efficient and scalable implementation of this clipping on convolutional layers, termed mixed ghost clipping, that significantly eases the private training in terms of both time and space complexities, without affecting the accuracy. The improvement in efficiency is rigorously studied through the first complexity analysis for the mixed ghost clipping and existing DP training algorithms. Extensive experiments on vision classification tasks, with large ResNet, VGG, and Vision Transformers (ViT), demonstrate that DP training with mixed ghost clipping adds $1\sim 10\%$ memory overhead and $<2\times$ slowdown to the standard non-private training. Specifically, when training VGG19 on CIFAR10, the mixed ghost clipping is $3\times$ faster than state-of-the-art Opacus library with $18\times$ larger maximum batch size. To emphasize the significance of efficient DP training on convolutional layers, we achieve 96.7\% accuracy on CIFAR10 and 83.0\% on CIFAR100 at $\epsilon=1$ using BEiT, while the previous best results are 94.8\% and 67.4\%, respectively. We open-source a privacy engine (\url{https://github.com/woodyx218/private_vision}) that implements DP training of CNN (including convolutional ViT) with a few lines of code.
null
null
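For context, the sketch below shows the naive per-sample gradient clipping step that DP training requires and that ghost-clipping-style methods are designed to make efficient; it is the slow baseline operation, not the paper's mixed ghost clipping, and the model and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def dp_sgd_step(model, loss_fn, xb, yb, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """Naive DP-SGD step: clip each example's gradient, sum, add Gaussian noise, update.
    One backward pass per example is exactly the cost that efficient clipping avoids."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)    # per-sample clipping
        for s, g in zip(summed, grads):
            s += scale * g
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = noise_mult * clip_norm * torch.randn_like(p)   # Gaussian mechanism
            p -= lr * (s + noise) / len(xb)

model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
xb, yb = torch.randn(8, 1, 8, 8), torch.randint(0, 10, (8,))
dp_sgd_step(model, nn.CrossEntropyLoss(), xb, yb)
print("one DP-SGD step applied")
```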
Weakly supervised causal representation learning
https://papers.nips.cc/paper_files/paper/2022/hash/fa567e2b2c870f8f09a87b6e73370869-Abstract-Conference.html
Johann Brehmer, Pim de Haan, Phillip Lippe, Taco S. Cohen
https://papers.nips.cc/paper_files/paper/2022/hash/fa567e2b2c870f8f09a87b6e73370869-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17115-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa567e2b2c870f8f09a87b6e73370869-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa567e2b2c870f8f09a87b6e73370869-Supplemental-Conference.pdf
Learning high-level causal representations together with a causal model from unstructured low-level data such as pixels is impossible from observational data alone. We prove under mild assumptions that this representation is however identifiable in a weakly supervised setting. This involves a dataset with paired samples before and after random, unknown interventions, but no further labels. We then introduce implicit latent causal models, variational autoencoders that represent causal variables and causal structure without having to optimize an explicit discrete graph structure. On simple image data, including a novel dataset of simulated robotic manipulation, we demonstrate that such models can reliably identify the causal structure and disentangle causal variables.
null
null
Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
https://papers.nips.cc/paper_files/paper/2022/hash/fa5ddd6bac0d665c72969d79221b680a-Abstract-Conference.html
Hualin Zhang, Huan Xiong, Bin Gu
https://papers.nips.cc/paper_files/paper/2022/hash/fa5ddd6bac0d665c72969d79221b680a-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19301-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa5ddd6bac0d665c72969d79221b680a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa5ddd6bac0d665c72969d79221b680a-Supplemental-Conference.zip
We consider escaping saddle points of nonconvex problems where only the function evaluations can be accessed. Although a variety of works have been proposed, the majority of them require either second or first-order information, and only a few of them have exploited zeroth-order methods, particularly the technique of negative curvature finding with zeroth-order methods which has been proven to be the most efficient method for escaping saddle points. To fill this gap, in this paper, we propose two zeroth-order negative curvature finding frameworks that can replace Hessian-vector product computations without increasing the iteration complexity. We apply the proposed frameworks to ZO-GD, ZO-SGD, ZO-SCSG, ZO-SPIDER and prove that these ZO algorithms can converge to $(\epsilon,\delta)$-approximate second-order stationary points with less query complexity compared with prior zeroth-order works for finding local minima.
null
null
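A sketch of the generic zeroth-order negative-curvature idea referenced above: approximate Hessian-vector products from finite differences of zeroth-order gradient estimates and run a shifted power iteration to find a direction of negative curvature. The paper's two frameworks and their query complexities differ; everything below is a simplified illustration.

```python
import numpy as np

def zo_grad(f, x, h=1e-5):
    """Coordinate-wise central-difference gradient estimate using only function values."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def zo_hvp(f, x, v, h=1e-4):
    """Zeroth-order Hessian-vector product via differences of estimated gradients."""
    return (zo_grad(f, x + h * v) - zo_grad(f, x - h * v)) / (2 * h)

def zo_negative_curvature(f, x, iters=50, shift=10.0, seed=0):
    """Power iteration on (shift*I - H) using only function evaluations; returns a unit
    direction of (approximately) most negative curvature at x."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=x.shape)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = shift * v - zo_hvp(f, x, v)
        v = w / np.linalg.norm(w)
    return v

# saddle-point example: f(z) = z0^2 - z1^2 has negative curvature along the z1 axis at 0
f = lambda z: z[0] ** 2 - z[1] ** 2
x = np.zeros(2)
v = zo_negative_curvature(f, x)
print(v, "curvature along v:", v @ zo_hvp(f, x, v))   # v ~ ±[0, 1], curvature ~ -2
```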
Exposing and Exploiting Fine-Grained Block Structures for Fast and Accurate Sparse Training
https://papers.nips.cc/paper_files/paper/2022/hash/fa69e968b7319fd42524febd41475fb3-Abstract-Conference.html
Peng Jiang, Lihan Hu, Shihui Song
https://papers.nips.cc/paper_files/paper/2022/hash/fa69e968b7319fd42524febd41475fb3-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16725-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa69e968b7319fd42524febd41475fb3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa69e968b7319fd42524febd41475fb3-Supplemental-Conference.pdf
Sparse training is a popular technique to reduce the overhead of training large models. Although previous work has shown promising results for nonstructured sparse models, it is still unclear whether a sparse model with structural constraints can be trained from scratch to high accuracy. In this work, we study the dynamic sparse training for a class of sparse models with shuffled block structures. Compared to nonstructured models, such fine-grained structured models are more hardware-friendly and can effectively accelerate the training process. We propose an algorithm that keeps adapting the sparse model while maintaining the active parameters in shuffled blocks. We conduct experiments on a variety of networks and datasets and obtain positive results. In particular, on ImageNet, we achieve dense accuracy for ResNet50 and ResNet18 at 0.5 sparsity. On CIFAR10/100, we show that dense accuracy can be recovered at 0.6 sparsity for various models. At higher sparsity, our algorithm can still match the accuracy of nonstructured sparse training in most cases, while reducing the training time by up to 5x due to the fine-grained block structures in the models.
null
null
Operator Splitting Value Iteration
https://papers.nips.cc/paper_files/paper/2022/hash/fa809df3ec53cc5781e5078b7d500a5d-Abstract-Conference.html
Amin Rakhsha, Andrew Wang, Mohammad Ghavamzadeh, Amir-massoud Farahmand
https://papers.nips.cc/paper_files/paper/2022/hash/fa809df3ec53cc5781e5078b7d500a5d-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17866-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa809df3ec53cc5781e5078b7d500a5d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa809df3ec53cc5781e5078b7d500a5d-Supplemental-Conference.zip
We introduce new planning and reinforcement learning algorithms for discounted MDPs that utilize an approximate model of the environment to accelerate the convergence of the value function. Inspired by the splitting approach in numerical linear algebra, we introduce \emph{Operator Splitting Value Iteration} (OS-VI) for both Policy Evaluation and Control problems. OS-VI achieves a much faster convergence rate when the model is accurate enough. We also introduce a sample-based version of the algorithm called OS-Dyna. Unlike the traditional Dyna architecture, OS-Dyna still converges to the correct value function in the presence of model approximation error.
null
null
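One plausible reading of the splitting idea for policy evaluation, sketched below: solve exactly with respect to the approximate model and correct with the residual under the true dynamics, so the fixed point is still the true value function. The exact OS-VI and OS-Dyna updates in the paper may differ; the MDP and constants here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 20, 0.95
P = rng.dirichlet(np.ones(n), size=n)              # true transitions of a fixed policy
r = rng.normal(size=n)
P_hat = 0.9 * P + 0.1 * rng.dirichlet(np.ones(n), size=n)   # approximate model
V_star = np.linalg.solve(np.eye(n) - gamma * P, r)

def vi(V, steps):                                  # standard value iteration
    for _ in range(steps):
        V = r + gamma * P @ V
    return V

def os_vi(V, steps):                               # splitting: solve exactly w.r.t. P_hat,
    A = np.eye(n) - gamma * P_hat                  # correct with the residual under P
    for _ in range(steps):
        V = np.linalg.solve(A, r + gamma * (P - P_hat) @ V)
    return V

V0 = np.zeros(n)
for k in (5, 10, 20):
    print(k, np.linalg.norm(vi(V0, k) - V_star), np.linalg.norm(os_vi(V0, k) - V_star))
```

Both iterations share the fixed point V_star, but the splitting iteration contracts at a rate governed by the model error gamma*(P - P_hat) rather than by gamma alone, which is the sense in which an accurate model accelerates convergence.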
Enhanced Latent Space Blind Model for Real Image Denoising via Alternative Optimization
https://papers.nips.cc/paper_files/paper/2022/hash/fa93d7bfb48450e1af63c8fa647d317f-Abstract-Conference.html
Chao Ren, Yizhong Pan, Jie Huang
https://papers.nips.cc/paper_files/paper/2022/hash/fa93d7bfb48450e1af63c8fa647d317f-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16761-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fa93d7bfb48450e1af63c8fa647d317f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fa93d7bfb48450e1af63c8fa647d317f-Supplemental-Conference.zip
Motivated by the achievements in model-based methods and the advances in deep networks, we propose a novel enhanced latent space blind model based deep unfolding network, namely ScaoedNet, for complex real image denoising. It is derived by introducing latent space, noise information, and guidance constraint into the denoising cost function. A self-correction alternative optimization algorithm is proposed to split the novel cost function into three alternative subproblems, i.e., guidance representation (GR), degradation estimation (DE) and reconstruction (RE) subproblems. Finally, we implement the optimization process by a deep unfolding network consisting of GR, DE and RE networks. For higher performance of the DE network, a novel parameter-free noise feature adaptive enhancement (NFAE) layer is proposed. To synchronously and dynamically realize internal-external feature information mining in the RE network, a novel feature multi-modulation attention (FM2A) module is proposed. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled denoising provided by the classical model-based formulation. To the best of our knowledge, our enhanced latent space blind model, optimization scheme, NFAE and FM2A have not been reported in the previous literature. Experimental results show the promising performance of ScaoedNet on real image denoising. Code is available at https://github.com/chaoren88/ScaoedNet.
null
null
Self-Explaining Deviations for Coordination
https://papers.nips.cc/paper_files/paper/2022/hash/faa6276ea12d7afeb3e42b210c86f688-Abstract-Conference.html
Hengyuan Hu, Samuel Sokota, David Wu, Anton Bakhtin, Andrei Lupu, Brandon Cui, Jakob Foerster
https://papers.nips.cc/paper_files/paper/2022/hash/faa6276ea12d7afeb3e42b210c86f688-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19114-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/faa6276ea12d7afeb3e42b210c86f688-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/faa6276ea12d7afeb3e42b210c86f688-Supplemental-Conference.pdf
Fully cooperative, partially observable multi-agent problems are ubiquitous in the real world. In this paper, we focus on a specific subclass of coordination problems in which humans are able to discover self-explaining deviations (SEDs). SEDs are actions that deviate from the common understanding of what reasonable behavior would be in normal circumstances. They are taken with the intention of causing another agent or other agents to realize, using theory of mind, that the circumstance must be abnormal. We motivate this idea with a real world example and formalize its definition. Next, we introduce an algorithm for improvement maximizing SEDs (IMPROVISED). Lastly, we evaluate IMPROVISED both in an illustrative toy setting and the popular benchmark setting Hanabi, where we show that it can produce so called finesse plays.
null
null
Communication Efficient Federated Learning for Generalized Linear Bandits
https://papers.nips.cc/paper_files/paper/2022/hash/faa8be9311811ba7c36fa1ceec13b862-Abstract-Conference.html
Chuanhao Li, Hongning Wang
https://papers.nips.cc/paper_files/paper/2022/hash/faa8be9311811ba7c36fa1ceec13b862-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18737-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/faa8be9311811ba7c36fa1ceec13b862-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/faa8be9311811ba7c36fa1ceec13b862-Supplemental-Conference.zip
Contextual bandit algorithms have been recently studied under the federated learning setting to satisfy the demand of keeping data decentralized and pushing the learning of bandit models to the client side. But limited by the required communication efficiency, existing solutions are restricted to linear models to exploit their closed-form solutions for parameter estimation. Such a restricted model choice greatly hampers these algorithms' practical utility. In this paper, we take the first step to addressing this challenge by studying generalized linear bandit models under the federated learning setting. We propose a communication-efficient solution framework that employs online regression for local update and offline regression for global update. We rigorously prove that, although the setting is more general and challenging, our algorithm attains a sub-linear rate in both regret and communication cost, which is also validated by our extensive empirical evaluations.
null
null
Active Learning for Multiple Target Models
https://papers.nips.cc/paper_files/paper/2022/hash/faacb7a4827b4d51e201666b93ab5fa7-Abstract-Conference.html
Ying-Peng Tang, Sheng-Jun Huang
https://papers.nips.cc/paper_files/paper/2022/hash/faacb7a4827b4d51e201666b93ab5fa7-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19171-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/faacb7a4827b4d51e201666b93ab5fa7-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/faacb7a4827b4d51e201666b93ab5fa7-Supplemental-Conference.pdf
We describe and explore a novel setting of active learning (AL), where there are multiple target models to be learned simultaneously. In many real applications, the machine learning system is required to be deployed on diverse devices with varying computational resources (e.g., workstation, mobile phone, edge devices, etc.), which leads to the demand of training multiple target models on the same labeled dataset. However, it is generally believed that AL is model-dependent and untransferable, i.e., the data queried by one model may be less effective for training another model. This phenomenon naturally raises a question "Does there exist an AL method that is effective for multiple target models?" In this paper, we answer this question by theoretically analyzing the label complexity of active and passive learning under the setting with multiple target models, and conclude that AL does have potential to achieve better label complexity under this novel setting. Based on this insight, we further propose an agnostic AL sampling strategy to select the examples located in the joint disagreement regions of different target models. The experimental results on the OCR benchmarks show that the proposed method can significantly surpass the traditional active and passive learning methods under this challenging setting.
null
null
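A toy pool-based sketch of querying in the disagreement region of several target models, as motivated above; the committee, the disagreement proxy, and the batch size are illustrative and far simpler than the paper's agnostic sampling strategy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(rng.choice(len(X), size=30, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]
models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=5),
          KNeighborsClassifier()]

for rnd in range(10):
    for m in models:                                   # all target models share the labels
        m.fit(X[labeled], y[labeled])
    preds = np.array([m.predict(X[pool]) for m in models])
    disagree = (preds != preds[0]).any(axis=0)         # proxy for the joint disagreement region
    candidates = np.where(disagree)[0]
    if len(candidates) == 0:
        candidates = np.arange(len(pool))
    picked = rng.choice(candidates, size=min(10, len(candidates)), replace=False)
    for idx in sorted(picked, reverse=True):           # move the queried points to the labeled set
        labeled.append(pool.pop(int(idx)))

accs = [m.fit(X[labeled], y[labeled]).score(X, y) for m in models]
print("labels used:", len(labeled), "accuracies:", np.round(accs, 3))
```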
Descent Steps of a Relation-Aware Energy Produce Heterogeneous Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2022/hash/facaa170287a034cf99cf0489a7f8430-Abstract-Conference.html
Hongjoon Ahn, Yongyi Yang, Quan Gan, Taesup Moon, David P Wipf
https://papers.nips.cc/paper_files/paper/2022/hash/facaa170287a034cf99cf0489a7f8430-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18088-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/facaa170287a034cf99cf0489a7f8430-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/facaa170287a034cf99cf0489a7f8430-Supplemental-Conference.pdf
Heterogeneous graph neural networks (GNNs) achieve strong performance on node classification tasks in a semi-supervised learning setting. However, as in the simpler homogeneous GNN case, message-passing-based heterogeneous GNNs may struggle to balance between resisting the oversmoothing that may occur in deep models, and capturing long-range dependencies of graph structured data. Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types. To address these issues, we propose a novel heterogeneous GNN architecture in which layers are derived from optimization steps that descend a novel relation-aware energy function. The corresponding minimizer is fully differentiable with respect to the energy function parameters, such that bilevel optimization can be applied to effectively learn a functional form whose minimum provides optimal node representations for subsequent classification tasks. In particular, this methodology allows us to model diverse heterophily relationships between different node types while avoiding oversmoothing effects. Experimental results on 8 heterogeneous graph benchmarks demonstrate that our proposed method can achieve competitive node classification accuracy.
null
null
Multi-agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents
https://papers.nips.cc/paper_files/paper/2022/hash/fad7c708dda11f3e72cc1629bb130379-Abstract-Conference.html
Qiang LI, Chung-Yiu Yau, Hoi-To Wai
https://papers.nips.cc/paper_files/paper/2022/hash/fad7c708dda11f3e72cc1629bb130379-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17880-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fad7c708dda11f3e72cc1629bb130379-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fad7c708dda11f3e72cc1629bb130379-Supplemental-Conference.pdf
We consider a scenario where multiple agents are learning a common decision vector from data which can be influenced by the agents’ decisions. This leads to the problem of multi-agent performative prediction (Multi-PfD). In this paper, we formulate Multi-PfD as a decentralized optimization problem that minimizes a sum of loss functions, where each loss function is based on a distribution influenced by the local decision vector. We first prove the necessary and sufficient condition for the Multi-PfD problem to admit a unique multi-agent performative stable (Multi-PS) solution. We show that enforcing consensus leads to a laxer condition for the existence of a Multi-PS solution with respect to the distributions’ sensitivities, compared to the single agent case. Then, we study a decentralized extension to the greedy deployment scheme [Mendler-Dünner et al., 2020], called the DSGD-GD scheme. We show that DSGD-GD converges to the Multi-PS solution and analyze its non-asymptotic convergence rate. Numerical results validate our analysis.
null
null
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fadec8f2e65f181d777507d1df69b92f-Abstract-Conference.html
Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun
https://papers.nips.cc/paper_files/paper/2022/hash/fadec8f2e65f181d777507d1df69b92f-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19038-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fadec8f2e65f181d777507d1df69b92f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fadec8f2e65f181d777507d1df69b92f-Supplemental-Conference.pdf
In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models. Although this precludes the need to access clients' data directly, the global model's convergence often suffers from data heterogeneity. This study starts from an analogy to continual learning and suggests that forgetting could be the bottleneck of federated learning. We observe that the global model forgets the knowledge from previous rounds, and the local training induces forgetting the knowledge outside of the local distribution. Based on our findings, we hypothesize that tackling forgetting will relieve the data heterogeneity problem. To this end, we propose a novel and effective algorithm, Federated Not-True Distillation (FedNTD), which preserves the global perspective on locally available data only for the not-true classes. In the experiments, FedNTD shows state-of-the-art performance on various setups without compromising data privacy or incurring additional communication costs.
null
null
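A sketch of one way to implement a not-true-class distillation term like the one described above: the true-class logit is dropped before the softmax, and the local model is regularized toward the global model's distribution over the remaining classes. The exact loss, temperature, and weighting used by FedNTD may differ.

```python
import torch
import torch.nn.functional as F

def not_true_distillation_loss(student_logits, teacher_logits, targets, tau=1.0):
    """KL divergence between the local (student) and global (teacher) predictive
    distributions restricted to the not-true classes."""
    B, C = student_logits.shape
    keep = ~F.one_hot(targets, num_classes=C).bool()       # drop the true class
    s = student_logits[keep].view(B, C - 1) / tau
    t = teacher_logits[keep].view(B, C - 1) / tau
    return F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1),
                    reduction="batchmean") * tau ** 2

# toy usage inside a local client update
student_logits = torch.randn(8, 10, requires_grad=True)    # local model outputs
teacher_logits = torch.randn(8, 10)                        # frozen global model outputs
targets = torch.randint(0, 10, (8,))
loss = F.cross_entropy(student_logits, targets) \
       + 1.0 * not_true_distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
print(float(loss))
```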
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
https://papers.nips.cc/paper_files/paper/2022/hash/fb23cf87a9e04d7677b73c47acd060ef-Abstract-Conference.html
Tianyuan Jin, Pan Xu, Xiaokui Xiao, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2022/hash/fb23cf87a9e04d7677b73c47acd060ef-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18634-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb23cf87a9e04d7677b73c47acd060ef-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb23cf87a9e04d7677b73c47acd060ef-Supplemental-Conference.pdf
We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution is from a one-dimensional exponential family, which covers many common reward distributions including Bernoulli, Gaussian, Gamma, Exponential, etc. We propose a Thompson sampling algorithm, termed ExpTS, which uses a novel sampling distribution to avoid the under-estimation of the optimal arm. We provide a tight regret analysis for ExpTS, which simultaneously yields both the finite-time regret bound as well as the asymptotic regret bound. In particular, for a $K$-armed bandit with exponential family rewards, ExpTS over a horizon $T$ is sub-UCB (a strong criterion for the finite-time regret that is problem-dependent), minimax optimal up to a factor $\sqrt{\log K}$, and asymptotically optimal, for exponential family rewards. Moreover, we propose ExpTS$^+$, by adding a greedy exploitation step in addition to the sampling distribution used in ExpTS, to avoid the over-estimation of sub-optimal arms. ExpTS$^+$ is an anytime bandit algorithm and achieves the minimax optimality and asymptotic optimality simultaneously for exponential family reward distributions. Our proof techniques are general and conceptually simple and can be easily applied to analyze standard Thompson sampling with specific reward distributions.
null
null
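ExpTS's novel sampling distribution is not given in the abstract, so the sketch below shows only standard Thompson sampling with Beta posteriors for Bernoulli rewards (one member of the exponential family mentioned above) as background for the class of algorithms being analyzed; it is not the ExpTS algorithm itself, and the prior and horizon are illustrative choices.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, rng=None):
    """Standard Thompson sampling with Beta(1, 1) priors for Bernoulli bandits.
    Illustrative background only; ExpTS/ExpTS+ replace the plain posterior sample
    with a specially designed sampling distribution for exponential families."""
    rng = rng or np.random.default_rng(0)
    k = len(true_means)
    successes = np.ones(k)   # Beta alpha parameters
    failures = np.ones(k)    # Beta beta parameters
    regret = 0.0
    best = max(true_means)
    for _ in range(horizon):
        samples = rng.beta(successes, failures)        # one posterior sample per arm
        arm = int(np.argmax(samples))
        reward = rng.random() < true_means[arm]        # Bernoulli reward
        successes[arm] += reward
        failures[arm] += 1 - reward
        regret += best - true_means[arm]
    return regret

print(thompson_sampling_bernoulli([0.2, 0.5, 0.55], horizon=5000))
```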
Graph Reordering for Cache-Efficient Near Neighbor Search
https://papers.nips.cc/paper_files/paper/2022/hash/fb44a668c2d4bc984e9d6ca261262cbb-Abstract-Conference.html
Benjamin Coleman, Santiago Segarra, Alexander J. Smola, Anshumali Shrivastava
https://papers.nips.cc/paper_files/paper/2022/hash/fb44a668c2d4bc984e9d6ca261262cbb-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16990-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb44a668c2d4bc984e9d6ca261262cbb-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb44a668c2d4bc984e9d6ca261262cbb-Supplemental-Conference.zip
Graph search is one of the most successful algorithmic trends in near neighbor search. Several of the most popular and empirically successful algorithms are, at their core, a greedy walk along a pruned near neighbor graph. However, graph traversal applications often suffer from poor memory access patterns, and near neighbor search is no exception to this rule. Our measurements show that popular search indices such as the hierarchical navigable small-world graph (HNSW) can have high cache miss rates. To address this issue, we formulate the graph traversal problem as a cache hit maximization task and propose graph reordering as a solution. Graph reordering is a memory layout optimization that groups commonly-accessed nodes together in memory. We mathematically formalize the connection between the graph layout and the cache complexity of search. We present exhaustive experiments applying several reordering algorithms to a leading graph-based near neighbor method based on the HNSW index. We find that reordering improves the query time by up to 40%, we present analysis and improvements for existing graph layout methods, and we demonstrate that the time needed to reorder the graph is negligible compared to the time required to construct the index.
null
null
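As a rough illustration of what "graph reordering" means in the abstract above, the sketch below relabels nodes in BFS order so that the adjacency lists of nodes visited together are stored contiguously. This is a generic reordering heuristic, assuming a simple adjacency-list representation; the paper itself evaluates several dedicated reordering algorithms on HNSW-style indices.

```python
from collections import deque

def bfs_reorder(adjacency):
    """Return adjacency lists relabelled in BFS order, so that nodes visited
    close together during traversal are also close together in memory.
    Illustrative only; not one of the paper's specific reordering algorithms."""
    n = len(adjacency)
    order, seen = [], [False] * n
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adjacency[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
    new_id = {old: new for new, old in enumerate(order)}
    # Rebuild adjacency lists under the new, cache-friendlier labelling.
    return [[new_id[v] for v in adjacency[old]] for old in order]

graph = [[1, 4], [0, 2], [1, 3], [2], [0]]
print(bfs_reorder(graph))
```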
MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fb575ab4d882a4c734641155a5f30911-Abstract-Conference.html
Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong
https://papers.nips.cc/paper_files/paper/2022/hash/fb575ab4d882a4c734641155a5f30911-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18670-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb575ab4d882a4c734641155a5f30911-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb575ab4d882a4c734641155a5f30911-Supplemental-Conference.pdf
As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategy and architecture design, it still suffers from two persistent defects: the interference of task-irrelevant information and sample inefficiency, which are related to the recurring existence of trivial constant solutions. From the perspective of dimensional analysis, we find that the dimensional redundancy and dimensional confounder are the intrinsic issues behind the phenomena, and provide experimental evidence to support our viewpoint. We further propose a simple yet effective approach, MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations against dimensional redundancy and confounder. MetaMask adopts the redundancy-reduction technique to tackle the dimensional redundancy issue and innovatively introduces a dimensional mask to reduce the gradient effects of specific dimensions containing the confounder, which is trained by employing a meta-learning paradigm with the objective of improving the performance of masked representations on a typical self-supervised task. We provide solid theoretical analyses to prove MetaMask can obtain tighter risk bounds for downstream classification compared to typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
null
null
On Feature Learning in the Presence of Spurious Correlations
https://papers.nips.cc/paper_files/paper/2022/hash/fb64a552feda3d981dbe43527a80a07e-Abstract-Conference.html
Pavel Izmailov, Polina Kirichenko, Nate Gruver, Andrew G. Wilson
https://papers.nips.cc/paper_files/paper/2022/hash/fb64a552feda3d981dbe43527a80a07e-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19262-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb64a552feda3d981dbe43527a80a07e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb64a552feda3d981dbe43527a80a07e-Supplemental-Conference.pdf
Deep classifiers are known to rely on spurious features — patterns which are correlated with the target on the training data but not inherently relevant to the learning problem, such as the image backgrounds when classifying the foregrounds. In this paper we evaluate the amount of information about the core (non-spurious) features that can be decoded from the representations learned by standard empirical risk minimization (ERM) and specialized group robustness training. Following recent work on Deep Feature Reweighting (DFR), we evaluate the feature representations by re-training the last layer of the model on a held-out set where the spurious correlation is broken. On multiple vision and NLP problems, we show that the features learned by simple ERM are highly competitive with the features learned by specialized group robustness methods targeted at reducing the effect of spurious correlations. Moreover, we show that the quality of learned feature representations is greatly affected by the design decisions beyond the training method, such as the model architecture and pre-training strategy. On the other hand, we find that strong regularization is not necessary for learning high-quality feature representations. Finally, using insights from our analysis, we significantly improve upon the best results reported in the literature on the popular Waterbirds, CelebA hair color prediction and WILDS-FMOW problems, achieving 97\%, 92\% and 50\% worst-group accuracies, respectively.
null
null
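The evaluation protocol described above (Deep Feature Reweighting) amounts to refitting only the last linear layer on a group-balanced held-out split while keeping the learned features frozen. A minimal sketch, assuming scikit-learn and precomputed embeddings; the variable names and the balancing scheme are illustrative, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def last_layer_reweighting(features_heldout, labels_heldout, groups_heldout,
                           features_test, seed=0):
    """Refit only the last linear layer on a group-balanced held-out split.
    `features_*` are frozen embeddings from an already-trained backbone."""
    rng = np.random.default_rng(seed)
    # Subsample every group down to the size of the smallest group.
    _, counts = np.unique(groups_heldout, return_counts=True)
    per_group = counts.min()
    idx = []
    for g in np.unique(groups_heldout):
        members = np.flatnonzero(groups_heldout == g)
        idx.extend(rng.choice(members, size=per_group, replace=False))
    idx = np.array(idx)
    clf = LogisticRegression(max_iter=1000, C=1.0)
    clf.fit(features_heldout[idx], labels_heldout[idx])
    return clf.predict(features_test)

# Toy usage with random "embeddings" standing in for real features.
X_val = np.random.randn(200, 16); y_val = np.random.randint(0, 2, 200)
g_val = np.random.randint(0, 4, 200); X_test = np.random.randn(10, 16)
print(last_layer_reweighting(X_val, y_val, g_val, X_test))
```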
Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection
https://papers.nips.cc/paper_files/paper/2022/hash/fb71332951af4ae27fbd457daadc5341-Abstract-Conference.html
Tianyu Wang, Xiaowei Hu, Zhengzhe LIU, Chi-Wing Fu
https://papers.nips.cc/paper_files/paper/2022/hash/fb71332951af4ae27fbd457daadc5341-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17960-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb71332951af4ae27fbd457daadc5341-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb71332951af4ae27fbd457daadc5341-Supplemental-Conference.pdf
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the art.
null
null
Exploring Length Generalization in Large Language Models
https://papers.nips.cc/paper_files/paper/2022/hash/fb7451e43f9c1c35b774bcfad7a5714b-Abstract-Conference.html
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur
https://papers.nips.cc/paper_files/paper/2022/hash/fb7451e43f9c1c35b774bcfad7a5714b-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18909-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb7451e43f9c1c35b774bcfad7a5714b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb7451e43f9c1c35b774bcfad7a5714b-Supplemental-Conference.pdf
The ability to extrapolate from short problem instances to longer ones is an important form of out-of-distribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading/summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' in-context learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We run careful failure analyses on each of the learning modalities and identify common sources of mistakes that highlight opportunities in equipping language models with the ability to generalize to longer problems.
null
null
Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
https://papers.nips.cc/paper_files/paper/2022/hash/fb8fe6b79288f3d83696a5d276f4fc9d-Abstract-Conference.html
Yunwen Lei, Rong Jin, Yiming Ying
https://papers.nips.cc/paper_files/paper/2022/hash/fb8fe6b79288f3d83696a5d276f4fc9d-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19188-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fb8fe6b79288f3d83696a5d276f4fc9d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fb8fe6b79288f3d83696a5d276f4fc9d-Supplemental-Conference.pdf
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds by balancing the optimization and generalization via early-stopping. As compared to existing analysis on GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key to the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD by providing a refined estimation of their iterates.
null
null
ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation
https://papers.nips.cc/paper_files/paper/2022/hash/fbb10d319d44f8c3b4720873e4177c65-Abstract-Conference.html
Yufei Xu, Jing Zhang, Qiming ZHANG, Dacheng Tao
https://papers.nips.cc/paper_files/paper/2022/hash/fbb10d319d44f8c3b4720873e4177c65-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18599-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fbb10d319d44f8c3b4720873e4177c65-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fbb10d319d44f8c3b4720873e4177c65-Supplemental-Conference.pdf
Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.
null
null
Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
https://papers.nips.cc/paper_files/paper/2022/hash/fbc9981dd6316378aee7fd5975250f21-Abstract-Conference.html
Oren Mangoubi, Nisheeth Vishnoi
https://papers.nips.cc/paper_files/paper/2022/hash/fbc9981dd6316378aee7fd5975250f21-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17060-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fbc9981dd6316378aee7fd5975250f21-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fbc9981dd6316378aee7fd5975250f21-Supplemental-Conference.zip
Given a symmetric matrix $M$ and a vector $\lambda$, we present new bounds on the Frobenius-distance utility of the Gaussian mechanism for approximating $M$ by a matrix whose spectrum is $\lambda$, under $(\varepsilon,\delta)$-differential privacy. Our bounds depend on both $\lambda$ and the gaps in the eigenvalues of $M$, and hold whenever the top $k+1$ eigenvalues of $M$ have sufficiently large gaps. When applied to the problems of private rank-$k$ covariance matrix approximation and subspace recovery, our bounds yield improvements over previous bounds. Our bounds are obtained by viewing the addition of Gaussian noise as a continuous-time matrix Brownian motion. This viewpoint allows us to track the evolution of eigenvalues and eigenvectors of the matrix, which are governed by stochastic differential equations discovered by Dyson. These equations allow us to bound the utility as the square-root of a sum-of-squares of perturbations to the eigenvectors, as opposed to a sum of perturbation bounds obtained via Davis-Kahan-type theorems.
null
null
ASPiRe: Adaptive Skill Priors for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fbd8e65962da06f83f3f28b52774ffd0-Abstract-Conference.html
Mengda Xu, Manuela Veloso, Shuran Song
https://papers.nips.cc/paper_files/paper/2022/hash/fbd8e65962da06f83f3f28b52774ffd0-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16967-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fbd8e65962da06f83f3f28b52774ffd0-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fbd8e65962da06f83f3f28b52774ffd0-Supplemental-Conference.pdf
We introduce ASPiRe (Adaptive Skill Prior for RL), a new approach that leverages prior experience to accelerate reinforcement learning. Unlike existing methods that learn a single skill prior from a large and diverse dataset, our framework learns a library of distinct skill priors (i.e., behavior priors) from a collection of specialized datasets, and learns how to combine them to solve a new task. This formulation allows the algorithm to acquire a set of specialized skill priors that are more reusable for downstream tasks; however, it also brings up additional challenges of how to effectively combine these unstructured sets of skill priors to form a new prior for new tasks. Specifically, it requires the agent not only to identify which skill prior(s) to use but also how to combine them (either sequentially or concurrently) to form a new prior. To achieve this goal, ASPiRe includes an Adaptive Weight Module (AWM) that learns to infer an adaptive weight assignment between different skill priors and uses them to guide policy learning for downstream tasks via weighted Kullback-Leibler divergences. Our experiments demonstrate that ASPiRe can significantly accelerate the learning of new downstream tasks in the presence of multiple priors and show improvement over competitive baselines.
null
null
Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
https://papers.nips.cc/paper_files/paper/2022/hash/fc09b26b85ab3abb2832bd555a2e4215-Abstract-Conference.html
Kazuki Irie, Francesco Faccio, Jürgen Schmidhuber
https://papers.nips.cc/paper_files/paper/2022/hash/fc09b26b85ab3abb2832bd555a2e4215-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19423-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fc09b26b85ab3abb2832bd555a2e4215-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fc09b26b85ab3abb2832bd555a2e4215-Supplemental-Conference.zip
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time counterparts of deep residual neural networks (NNs), and numerous extensions for recurrent NNs have been proposed. Since the 1980s, ODEs have also been used to derive theoretical results for NN learning rules, e.g., the famous connection between Oja's rule and principal component analysis. Such rules are typically expressed as additive iterative update processes which have straightforward ODE counterparts. Here we introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets that learn to manipulate short-term memory in rapidly changing synaptic connections of other nets. This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their fundamental scalability limitations. Our code is public.
null
null
MEMO: Test Time Robustness via Adaptation and Augmentation
https://papers.nips.cc/paper_files/paper/2022/hash/fc28053a08f59fccb48b11f2e31e81c7-Abstract-Conference.html
Marvin Zhang, Sergey Levine, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2022/hash/fc28053a08f59fccb48b11f2e31e81c7-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18576-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fc28053a08f59fccb48b11f2e31e81c7-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fc28053a08f59fccb48b11f2e31e81c7-Supplemental-Conference.pdf
While deep neural networks can attain good accuracy on in-distribution test points, many applications require robustness even in the face of unexpected perturbations in the input, changes in the domain, or other sources of distribution shift. We study the problem of test time robustification, i.e., using the test input to improve model robustness. Recent prior works have proposed methods for test time adaptation; however, they each introduce additional assumptions, such as access to multiple test points, that prevent widespread adoption. In this work, we aim to study and devise methods that make no assumptions about the model training process and are broadly applicable at test time. We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and then adapt (all of) the model parameters by minimizing the entropy of the model's average, or marginal, output distribution across the augmentations. Intuitively, this objective encourages the model to make the same prediction across different augmentations, thus enforcing the invariances encoded in these augmentations, while also maintaining confidence in its predictions. In our experiments, we evaluate two baseline ResNet models, two robust ResNet-50 models, and a robust vision transformer model, and we demonstrate that this approach achieves accuracy gains of 1-8% over standard model evaluation and also generally outperforms prior augmentation and adaptation strategies. For the setting in which only one test point is available, we achieve state-of-the-art results on the ImageNet-C, ImageNet-R, and, among ResNet-50 models, ImageNet-A distribution shift benchmarks.
null
null
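The adaptation objective described above, minimizing the entropy of the model's marginal prediction over augmented copies of a single test point, can be sketched in a few lines of PyTorch. The augmentation callables, optimizer settings, and toy model below are assumptions for illustration; the paper uses specific augmentation pipelines and pretrained models.

```python
import copy
import torch
import torch.nn.functional as F

def memo_adapt(model, x, augmentations, steps=1, lr=1e-3):
    """Single-test-point adaptation: minimize the entropy of the model's
    average (marginal) prediction over several augmented copies of x.
    Sketch of the marginal-entropy objective under assumed augmentations."""
    adapted = copy.deepcopy(model)          # adapt a copy, keep the original intact
    adapted.train()
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        views = torch.stack([aug(x) for aug in augmentations])   # (A, C, H, W)
        probs = F.softmax(adapted(views), dim=1)                  # (A, num_classes)
        marginal = probs.mean(dim=0)                               # average over views
        entropy = -(marginal * torch.log(marginal + 1e-12)).sum()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(x.unsqueeze(0)).argmax(dim=1)

# Toy usage with a tiny CNN and additive-noise "augmentations".
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.AdaptiveAvgPool2d(1),
                            torch.nn.Flatten(), torch.nn.Linear(8, 10))
augs = [lambda im: im + 0.05 * torch.randn_like(im) for _ in range(8)]
print(memo_adapt(model, torch.randn(3, 32, 32), augs))
```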
Learning-Augmented Algorithms for Online Linear and Semidefinite Programming
https://papers.nips.cc/paper_files/paper/2022/hash/fc5a1845bee1f5405ef99ba25c2d44e1-Abstract-Conference.html
Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson Zhou
https://papers.nips.cc/paper_files/paper/2022/hash/fc5a1845bee1f5405ef99ba25c2d44e1-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17498-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fc5a1845bee1f5405ef99ba25c2d44e1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fc5a1845bee1f5405ef99ba25c2d44e1-Supplemental-Conference.zip
Semidefinite programming (SDP) is a unifying framework that generalizes both linear programming and quadratically-constrained quadratic programming, while also yielding efficient solvers, both in theory and in practice. However, there exist known impossibility results for approximating the optimal solution when constraints for covering SDPs arrive in an online fashion. In this paper, we study online covering linear and semidefinite programs in which the algorithm is augmented with advice from a possibly erroneous predictor. We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency. On the other hand, if the predictor is inaccurate, under some technical conditions, we achieve results that match both the classical optimal upper bounds and the tight lower bounds up to constant factors, i.e., robustness. More broadly, we introduce a framework that extends both (1) the online set cover problem augmented with machine-learning predictors, studied by Bamas, Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem, initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general online learning-augmented algorithms for covering linear programs with fractional advice and constraints, and initiate the study of learning-augmented algorithms for covering SDP problems. Our techniques are based on the primal-dual framework of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) and can be further adjusted to handle constraints where the variables lie in a bounded region, i.e., box constraints.
null
null
Text-Adaptive Multiple Visual Prototype Matching for Video-Text Retrieval
https://papers.nips.cc/paper_files/paper/2022/hash/fc65fab891d83433bd3c8d966edde311-Abstract-Conference.html
Chengzhi Lin, Ancong Wu, Junwei Liang, Jun Zhang, Wenhang Ge, Wei-Shi Zheng, Chunhua Shen
https://papers.nips.cc/paper_files/paper/2022/hash/fc65fab891d83433bd3c8d966edde311-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16767-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fc65fab891d83433bd3c8d966edde311-Paper-Conference.pdf
null
Cross-modal retrieval between videos and texts has gained increasing interest because of the rapid emergence of videos on the web. Generally, a video contains rich instance and event information and the query text only describes a part of the information. Thus, a video can have multiple different text descriptions and queries. We call it the Video-Text Correspondence Ambiguity problem. Current techniques mostly concentrate on mining local or multi-level alignment between contents of video and text (e.g., object to entity and action to verb). It is difficult for these methods to alleviate video-text correspondence ambiguity by describing a video using only one feature, which is required to be matched with multiple different text features at the same time. To address this problem, we propose a Text-Adaptive Multiple Visual Prototype Matching Model. It automatically captures multiple prototypes to describe a video by adaptive aggregation on video token features. Given a query text, the similarity is determined by the most similar prototype to find correspondence in the video, which is called text-adaptive matching. To learn diverse prototypes for representing the rich information in videos, we propose a variance loss to encourage different prototypes to attend to different contents of the video. Our method outperforms the state-of-the-art methods on four public video retrieval datasets.
null
null
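A minimal sketch of the text-adaptive matching rule described above, together with one plausible reading of the variance loss, assuming PyTorch and precomputed prototype/text embeddings; the prototype aggregation network is not shown, and the exact loss form here is an assumption rather than the authors' formulation.

```python
import torch
import torch.nn.functional as F

def text_adaptive_similarity(prototypes, text_emb):
    """Video-text similarity: the video is represented by several prototypes,
    and the score is the similarity of the prototype closest to the query text."""
    sims = F.cosine_similarity(prototypes, text_emb.unsqueeze(0), dim=1)  # (P,)
    return sims.max()

def variance_loss(prototypes):
    """Encourage the prototypes of one video to differ from each other by
    penalizing low variance across prototypes (illustrative reading only)."""
    return -prototypes.var(dim=0).mean()

protos = torch.randn(4, 64)   # 4 prototypes per video, assumed embedding size 64
text = torch.randn(64)
print(float(text_adaptive_similarity(protos, text)), float(variance_loss(protos)))
```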
Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm
https://papers.nips.cc/paper_files/paper/2022/hash/fc9f83d9925e6885e8f1ae1e17b3c44b-Abstract-Conference.html
HuiYang Shao, Qianqian Xu, Zhiyong Yang, Shilong Bao, Qingming Huang
https://papers.nips.cc/paper_files/paper/2022/hash/fc9f83d9925e6885e8f1ae1e17b3c44b-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18249-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fc9f83d9925e6885e8f1ae1e17b3c44b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fc9f83d9925e6885e8f1ae1e17b3c44b-Supplemental-Conference.pdf
The Partial Area Under the ROC Curve (PAUC), typically including One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC), measures the average performance of a binary classifier within a specific false positive rate and/or true positive rate interval, which is a widely adopted measure when decision constraints must be considered. Consequently, PAUC optimization has naturally attracted increasing attention in the machine learning community within the last few years. Nonetheless, most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable. Fortunately, a recent work presents an unbiased formulation of the PAUC optimization problem via distributional robust optimization. However, it is based on the pair-wise formulation of AUC, which suffers from limited scalability w.r.t. sample size and a slow convergence rate, especially for TPAUC. To address this issue, we present a simpler reformulation of the problem in an asymptotically unbiased and instance-wise manner. For both OPAUC and TPAUC, we come to a nonconvex strongly concave min-max regularized problem of instance-wise functions. On top of this, we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time complexity of $O(\epsilon^{-1/3})$ to reach an $\epsilon$-stationary point. Furthermore, we find that the min-max reformulation also facilitates the theoretical analysis of generalization error as a byproduct. Compared with the existing results, we present new error bounds that are much easier to prove and could deal with hypotheses with real-valued outputs. Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method.
null
null
Universality of Group Convolutional Neural Networks Based on Ridgelet Analysis on Groups
https://papers.nips.cc/paper_files/paper/2022/hash/fcc3dc27672a12510babe448d665e152-Abstract-Conference.html
Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
https://papers.nips.cc/paper_files/paper/2022/hash/fcc3dc27672a12510babe448d665e152-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16723-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fcc3dc27672a12510babe448d665e152-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fcc3dc27672a12510babe448d665e152-Supplemental-Conference.pdf
We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on the ridgelet theory. Despite widespread use in applications, the approximation property of (G)CNNs has not been well investigated. The universality of (G)CNNs has been shown since the late 2010s. Yet, our understanding of how (G)CNNs represent functions is incomplete because the past universality theorems have been shown in a case-by-case manner by manually/carefully assigning the network parameters depending on the variety of convolution layers, and in an indirect manner by converting/modifying the (G)CNNs into other universal approximators such as invariant polynomials and fully-connected networks. In this study, we formulate a versatile depth-2 continuous GCNN $S[\gamma]$ as a nonlinear mapping between group representations, and directly obtain an analysis operator, called the ridgelet transform, that maps a given function $f$ to the network parameter $\gamma$ so that $S[\gamma]=f$. The proposed GCNN covers typical GCNNs such as the cyclic convolution on multi-channel images, networks on permutation-invariant inputs (Deep Sets), and $\mathrm{E}(n)$-equivariant networks. The closed-form expression of the ridgelet transform can describe how the network parameters are organized to represent a function. While it has been known only for fully-connected networks, this study is the first to obtain the ridgelet transform for GCNNs. By discretizing the closed-form expression, we can systematically generate a constructive proof of the $cc$-universality of finite GCNNs. In other words, our universality proofs are more unified and constructive than previous proofs.
null
null
Error Correction Code Transformer
https://papers.nips.cc/paper_files/paper/2022/hash/fcd3909db30887ce1da519c4468db668-Abstract-Conference.html
Yoni Choukroun, Lior Wolf
https://papers.nips.cc/paper_files/paper/2022/hash/fcd3909db30887ce1da519c4468db668-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17673-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fcd3909db30887ce1da519c4468db668-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fcd3909db30887ce1da519c4468db668-Supplemental-Conference.pdf
Error correction code is a major part of the physical communication layer, ensuring the reliable transfer of data over noisy channels. Recently, neural decoders were shown to outperform classical decoding techniques. However, the existing neural approaches present strong overfitting, due to the exponential training complexity, or a restrictive inductive bias, due to reliance on Belief Propagation. Recently, Transformers have become methods of choice in many applications, thanks to their ability to represent complex interactions between elements. In this work, we propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths. We encode each channel's output dimension to a high dimension for a better representation of the bits' information to be processed separately. The element-wise processing allows the analysis of channel output reliability, while the algebraic code and the interaction between the bits are inserted into the model via an adapted masked self-attention module. The proposed approach demonstrates the power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins, at a fraction of their time complexity.
null
null
Towards Improving Calibration in Object Detection Under Domain Shift
https://papers.nips.cc/paper_files/paper/2022/hash/fcd812a51b8f8d05cfea22e3c9c4b369-Abstract-Conference.html
Muhammad Akhtar Munir, Muhammad Haris Khan, M. Sarfraz, Mohsen Ali
https://papers.nips.cc/paper_files/paper/2022/hash/fcd812a51b8f8d05cfea22e3c9c4b369-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17996-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fcd812a51b8f8d05cfea22e3c9c4b369-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fcd812a51b8f8d05cfea22e3c9c4b369-Supplemental-Conference.pdf
With deep neural network based solutions being more readily incorporated in real-world applications, it has become a pressing requirement that predictions by such models, especially in safety-critical environments, be highly accurate and well-calibrated. Although some techniques addressing DNN calibration have been proposed, they are only limited to visual classification applications and in-domain predictions. Unfortunately, very little to no attention is paid towards addressing calibration of DNN-based visual object detectors, which occupy a similar space and importance in many decision making systems as their visual classification counterparts. In this work, we study the calibration of DNN-based object detection models, particularly under domain shift. To this end, we first propose a new, plug-and-play, train-time calibration loss for object detection (coined as TCD). It can be used with various application-specific loss functions as an auxiliary loss function to improve detection calibration. Second, we devise a new implicit technique for improving calibration in self-training based domain adaptive detectors, featuring a new uncertainty quantification mechanism for object detection. We demonstrate TCD is capable of enhancing calibration with notable margins (1) across different DNN-based object detection paradigms both in in-domain and out-of-domain predictions, and (2) in different domain-adaptive detectors across challenging adaptation scenarios. Finally, we empirically show that our implicit calibration technique can be used in tandem with TCD during adaptation to further boost calibration in diverse domain shift scenarios.
null
null
Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fcdffb372c9fa2ce757cf457415c7aab-Abstract-Conference.html
Jiachen T. Wang, Saeed Mahloujifar, Shouda Wang, Ruoxi Jia, Prateek Mittal
https://papers.nips.cc/paper_files/paper/2022/hash/fcdffb372c9fa2ce757cf457415c7aab-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16835-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fcdffb372c9fa2ce757cf457415c7aab-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fcdffb372c9fa2ce757cf457415c7aab-Supplemental-Conference.zip
Propose-Test-Release (PTR) is a differential privacy framework that works with local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as median or trimmed mean in a differentially private manner. While PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD where we need many adaptive robust queries is challenging. This is mainly due to the lack of Rényi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\varepsilon, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants for Byzantine-robust training algorithms that use robust statistics for gradients aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that the PTR-based private and robust training algorithm significantly improves the utility compared with the baseline.
null
null
A Transformer-Based Object Detector with Coarse-Fine Crossing Representations
https://papers.nips.cc/paper_files/paper/2022/hash/fcfad93e2f30ab4c22f9ec5edfbb5cc0-Abstract-Conference.html
Zhishan Li, Ying Nie, Kai Han, Jianyuan Guo, Lei Xie, Yunhe Wang
https://papers.nips.cc/paper_files/paper/2022/hash/fcfad93e2f30ab4c22f9ec5edfbb5cc0-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18558-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fcfad93e2f30ab4c22f9ec5edfbb5cc0-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fcfad93e2f30ab4c22f9ec5edfbb5cc0-Supplemental-Conference.pdf
Transformer-based object detectors have shown competitive performance recently. Compared with convolutional neural networks limited by the relatively small receptive fields, the advantage of transformer for visual tasks is the capacity to perceive long-range dependencies among all image patches, while the deficiency is that the local fine-grained information is not fully excavated. In this paper, we introduce the Coarse-grained and Fine-grained crossing representations to build an efficient Detection Transformer (CFDT). Specifically, we propose a local-global cross fusion module to establish the connection between local fine-grained features and global coarse-grained features. Besides, we propose a coarse-fine aware neck which enables detection tokens to interact with both coarse-grained and fine-grained features. Furthermore, an efficient feature integration module is presented for fusing multi-scale representations from different stages. Experimental results on the COCO dataset demonstrate the effectiveness of the proposed method. For instance, our CFDT achieves 48.1 AP with 173G FLOPs, which possesses higher accuracy and less computation compared with the state-of-the-art transformer-based detector ViDT. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CFDT.
null
null
Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection
https://papers.nips.cc/paper_files/paper/2022/hash/fd5013ea0c3f96931dec77174eaf9d80-Abstract-Conference.html
Wael Alghamdi, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalak, Shahab Asoodeh, Flavio Calmon
https://papers.nips.cc/paper_files/paper/2022/hash/fd5013ea0c3f96931dec77174eaf9d80-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17444-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd5013ea0c3f96931dec77174eaf9d80-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd5013ea0c3f96931dec77174eaf9d80-Supplemental-Conference.pdf
We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks. We formulate this problem in terms of ``projecting'' a pre-trained (and potentially unfair) classifier onto the set of models that satisfy target group-fairness requirements. The new, projected model is given by post-processing the outputs of the pre-trained classifier by a multiplicative factor. We provide a parallelizable, iterative algorithm for computing the projected classifier and derive both sample complexity and convergence guarantees. Comprehensive numerical comparisons with state-of-the-art benchmarks demonstrate that our approach maintains competitive performance in terms of accuracy-fairness trade-off curves, while achieving favorable runtime on large datasets. We also evaluate our method at scale on an open dataset with multiple classes, multiple intersectional groups, and over 1M samples.
null
null
Explicit Tradeoffs between Adversarial and Natural Distributional Robustness
https://papers.nips.cc/paper_files/paper/2022/hash/fd62b65606f0f0d2af2c01623a224258-Abstract-Conference.html
Mazda Moayeri, Kiarash Banihashem, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2022/hash/fd62b65606f0f0d2af2c01623a224258-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17575-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd62b65606f0f0d2af2c01623a224258-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd62b65606f0f0d2af2c01623a224258-Supplemental-Conference.zip
Several existing works study either adversarial or natural distributional robustness of deep neural networks separately. In practice, however, models need to enjoy both types of robustness to ensure reliability. In this work, we bridge this gap and show that in fact, {\it explicit tradeoffs} exist between adversarial and natural distributional robustness. We first consider a simple linear regression setting on Gaussian data with disjoint sets of \emph{core} and \emph{spurious} features. In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model reliance on spurious features; (ii) For $\ell_\infty$ adversarial training, spurious reliance only occurs when the scale of the spurious features is larger than that of the core features; (iii) adversarial training can have {\it an unintended consequence} in reducing distributional robustness, specifically when spurious correlations are changed in the new test domain. Next, we present extensive empirical evidence, using a test suite of twenty adversarially trained models evaluated on five benchmark datasets (ObjectNet, RIVAL10, Salient ImageNet-1M, ImageNet-9, Waterbirds), that adversarially trained classifiers rely on backgrounds more than their standardly trained counterparts, validating our theoretical results. We also show that spurious correlations in training data (when preserved in the test domain) can {\it improve} adversarial robustness, revealing that previous claims that adversarial vulnerability is rooted in spurious correlations are incomplete.
null
null
Learning Latent Seasonal-Trend Representations for Time Series Forecasting
https://papers.nips.cc/paper_files/paper/2022/hash/fd6613131889a4b656206c50a8bd7790-Abstract-Conference.html
Zhiyuan Wang, Xovee Xu, Weifeng Zhang, Goce Trajcevski, Ting Zhong, Fan Zhou
https://papers.nips.cc/paper_files/paper/2022/hash/fd6613131889a4b656206c50a8bd7790-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17680-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd6613131889a4b656206c50a8bd7790-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd6613131889a4b656206c50a8bd7790-Supplemental-Conference.pdf
Forecasting complex time series is ubiquitous and vital in a range of applications, but it remains challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we aim to infer a pair of representations that depict the seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on the time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github.
null
null
Capturing Graphs with Hypo-Elliptic Diffusions
https://papers.nips.cc/paper_files/paper/2022/hash/fd7f43f8689988f4ef056f192ec0589b-Abstract-Conference.html
Csaba Toth, Darrick Lee, Celia Hacker, Harald Oberhauser
https://papers.nips.cc/paper_files/paper/2022/hash/fd7f43f8689988f4ef056f192ec0589b-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17484-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd7f43f8689988f4ef056f192ec0589b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd7f43f8689988f4ef056f192ec0589b-Supplemental-Conference.zip
Convolutional layers within graph neural networks operate by aggregating information about local neighbourhood structures; one common way to encode such substructures is through random walks. The distribution of these random walks evolves according to a diffusion equation defined using the graph Laplacian. We extend this approach by leveraging classic mathematical results about hypo-elliptic diffusions. This results in a novel tensor-valued graph operator, which we call the hypo-elliptic graph Laplacian. We provide theoretical guarantees and efficient low-rank approximation algorithms. In particular, this gives a structured approach to capture long-range dependencies on graphs that is robust to pooling. Besides the attractive theoretical properties, our experiments show that this method competes with graph transformers on datasets requiring long-range reasoning but scales only linearly in the number of edges as opposed to quadratically in nodes.
null
null
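The tensor-valued hypo-elliptic Laplacian goes beyond what the abstract specifies, but its starting point, random-walk distributions evolving under a diffusion equation built from the ordinary graph Laplacian, can be illustrated directly (assuming NumPy/SciPy); the paper's operator additionally tracks the sequence of visited nodes, which this sketch does not.

```python
import numpy as np
from scipy.linalg import expm

def heat_diffusion(adjacency, x0, t=1.0):
    """Evolve node features under the standard graph heat equation
    dx/dt = -L x, where L = D - A is the combinatorial graph Laplacian.
    Illustrative scalar diffusion only, not the hypo-elliptic analogue."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return expm(-t * L) @ x0

# Path graph on 4 nodes, with all mass initially on node 0.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(heat_diffusion(A, np.array([1.0, 0.0, 0.0, 0.0]), t=0.5))
```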
A Spectral Approach to Item Response Theory
https://papers.nips.cc/paper_files/paper/2022/hash/fd88ea50ca8c1973db037462f116ff99-Abstract-Conference.html
Duc Nguyen, Anderson Ye Zhang
https://papers.nips.cc/paper_files/paper/2022/hash/fd88ea50ca8c1973db037462f116ff99-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18395-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd88ea50ca8c1973db037462f116ff99-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd88ea50ca8c1973db037462f116ff99-Supplemental-Conference.zip
The Rasch model is one of the most fundamental models in item response theory and has wide-ranging applications from education testing to recommendation systems. In a universe with $n$ users and $m$ items, the Rasch model assumes that the binary response $X_{li} \in \{0,1\}$ of a user $l$ with parameter $\theta^*_l$ to an item $i$ with parameter $\beta^*_i$ (e.g., a user likes a movie, a student correctly solves a problem) is distributed as $\mathbb{P}(X_{li}=1) = 1/(1 + \exp(-(\theta^*_l - \beta^*_i)))$. In this paper, we propose a new item estimation algorithm for this celebrated model (i.e., to estimate $\beta^*$). The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph. We complement our algorithmic contributions with finite-sample error guarantees, the first of their kind in the literature, showing that our algorithm is consistent and enjoys favorable optimality properties. We discuss practical modifications to accelerate and robustify the algorithm that practitioners can adopt. Experiments on synthetic and real-life datasets, ranging from small education testing datasets to large recommendation systems datasets show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature.
null
null
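The Rasch response model quoted above is fully specified, so it can be sketched directly; the code below simulates responses from that model (the paper's spectral item-estimation algorithm, based on the stationary distribution of an item-item Markov chain, is not reproduced here, and the parameter values are toy choices).

```python
import numpy as np

def rasch_prob(theta, beta):
    """P(X_{li} = 1) = 1 / (1 + exp(-(theta_l - beta_i))) for user l, item i."""
    theta = np.asarray(theta)[:, None]
    beta = np.asarray(beta)[None, :]
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

def simulate_responses(theta, beta, rng=None):
    """Draw a binary user-by-item response matrix from the Rasch model."""
    rng = rng or np.random.default_rng(0)
    p = rasch_prob(theta, beta)
    return (rng.random(p.shape) < p).astype(int)

theta = np.random.default_rng(1).normal(size=100)   # user abilities
beta = np.array([-1.0, 0.0, 1.0, 2.0])               # item difficulties
X = simulate_responses(theta, beta)
print(X.mean(axis=0))  # easier items (smaller beta) get higher average response rates
```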
FedSR: A Simple and Effective Domain Generalization Method for Federated Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fd946a6c99541fddc3d64a3ea39a1bc2-Abstract-Conference.html
A. Tuan Nguyen, Philip Torr, Ser Nam Lim
https://papers.nips.cc/paper_files/paper/2022/hash/fd946a6c99541fddc3d64a3ea39a1bc2-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16895-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fd946a6c99541fddc3d64a3ea39a1bc2-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fd946a6c99541fddc3d64a3ea39a1bc2-Supplemental-Conference.zip
Federated Learning (FL) refers to the decentralized and privacy-preserving machine learning framework in which multiple clients collaborate (with the help of a central server) to train a global model without sharing their data. However, most existing FL methods only focus on maximizing the model's performance on the source clients' data (e.g., mobile users) without considering its generalization ability to unknown target data (e.g., a new user). In this paper, we incorporate the problem of Domain Generalization (DG) into Federated Learning to tackle the aforementioned issue. However, virtually all existing DG methods require a centralized setting where data is shared across the domains, which violates the principles of decentralized FL; hence, they are not applicable. To this end, we propose a simple yet novel representation learning framework, namely FedSR, which enables domain generalization while still respecting the decentralized and privacy-preserving nature of this FL setting. Motivated by classical machine learning algorithms, we aim to learn a simple representation of the data for better generalization. In particular, we enforce an L2-norm regularizer on the representation and a conditional mutual information (between the representation and the data given the label) regularizer to encourage the model to only learn essential information (while ignoring spurious correlations such as the background). Furthermore, we provide theoretical connections between the above two objectives and representation alignment in domain generalization. Extensive experimental results suggest that our method significantly outperforms relevant baselines in this particular problem.
null
null
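A minimal sketch of the first regularizer described above, an L2-norm penalty on the learned representation added to the local training objective, assuming PyTorch; the conditional mutual information regularizer is deliberately omitted, and the coefficient value, function name, and toy modules are placeholders rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def fedsr_style_local_loss(encoder, classifier, x, y, l2_coef=0.01):
    """Local training objective with an L2-norm penalty on the representation,
    as in FedSR's first regularizer; the conditional-mutual-information term
    used by the paper is omitted in this sketch."""
    z = encoder(x)                                   # representation
    task_loss = F.cross_entropy(classifier(z), y)
    l2_reg = z.pow(2).sum(dim=1).mean()              # E[||z||^2]
    return task_loss + l2_coef * l2_reg

# Toy usage with linear encoder/classifier modules.
enc = torch.nn.Linear(20, 8)
clf = torch.nn.Linear(8, 3)
x, y = torch.randn(16, 20), torch.randint(0, 3, (16,))
loss = fedsr_style_local_loss(enc, clf, x, y)
loss.backward()
print(float(loss))
```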
SIXO: Smoothing Inference with Twisted Objectives
https://papers.nips.cc/paper_files/paper/2022/hash/fddc79681b2df2734c01444f9bc2a17e-Abstract-Conference.html
Dieterich Lawson, Allan Raventós, Andrew Warrington, Scott Linderman
https://papers.nips.cc/paper_files/paper/2022/hash/fddc79681b2df2734c01444f9bc2a17e-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18030-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fddc79681b2df2734c01444f9bc2a17e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fddc79681b2df2734c01444f9bc2a17e-Supplemental-Conference.pdf
Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions. The target distributions are often chosen to be the filtering distributions, but these ignore information from future observations, leading to practical and theoretical limitations in inference and model learning. We introduce SIXO, a method that instead learns target distributions that approximate the smoothing distributions, incorporating information from all observations. The key idea is to use density ratio estimation to fit functions that warp the filtering distributions into the smoothing distributions. We then use SMC with these learned targets to define a variational objective for model and proposal learning. SIXO yields provably tighter log marginal lower bounds and offers more accurate posterior inferences and parameter estimates in a variety of domains.
null
null
Explicable Policy Search
https://papers.nips.cc/paper_files/paper/2022/hash/fdff3c4130c24c40c88aa41eb52d2a27-Abstract-Conference.html
Ze Gong, Yu ("Tony") Zhang
https://papers.nips.cc/paper_files/paper/2022/hash/fdff3c4130c24c40c88aa41eb52d2a27-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19116-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fdff3c4130c24c40c88aa41eb52d2a27-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fdff3c4130c24c40c88aa41eb52d2a27-Supplemental-Conference.pdf
Human teammates often form conscious and subconscious expectations of each other during interaction. Teaming success is contingent on whether such expectations can be met. Similarly, for an intelligent agent to operate beside a human, it must consider the human’s expectation of its behavior. Disregarding such expectations can lead to the loss of trust and degraded team performance. A key challenge here is that the human’s expectation may not align with the agent’s optimal behavior, e.g., due to the human’s partial or inaccurate understanding of the task domain. Prior work on explicable planning described the ability of agents to respect their human teammate’s expectations by trading off task performance for more expected or “explicable” behaviors. In this paper, we introduce Explicable Policy Search (EPS) to significantly extend such an ability to stochastic domains in a reinforcement learning (RL) setting with continuous state and action spaces. Furthermore, in contrast to the traditional RL methods, EPS must at the same time infer the human’s hidden expectations. Such inferences require information about the human’s belief about the domain dynamics and her reward model but directly querying them is impractical. We demonstrate that such information can be necessarily and sufficiently encoded by a surrogate reward function for EPS, which can be learned based on the human’s feedback on the agent’s behavior. The surrogate reward function is then used to reshape the agent’s reward function, which is shown to be equivalent to searching for an explicable policy. We evaluate EPS in a set of navigation domains with synthetic human models and in an autonomous driving domain with a user study. The results suggest that our method can generate explicable behaviors that reconcile task performance with human expectations intelligently and has real-world relevance in human-agent teaming domains.
null
null
Exploring evolution-aware & -free protein language models as protein function predictors
https://papers.nips.cc/paper_files/paper/2022/hash/fe066022bab2a6c6a3c57032a1623c70-Abstract-Conference.html
Mingyang Hu, Fajie Yuan, Kevin Yang, Fusong Ju, Jin Su, Hui Wang, Fei Yang, Qiuyang Ding
https://papers.nips.cc/paper_files/paper/2022/hash/fe066022bab2a6c6a3c57032a1623c70-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18083-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe066022bab2a6c6a3c57032a1623c70-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe066022bab2a6c6a3c57032a1623c70-Supplemental-Conference.zip
Large-scale Protein Language Models (PLMs) have improved performance in protein prediction tasks, ranging from 3D structure prediction to various function predictions. In particular, AlphaFold, a ground-breaking AI system, could potentially reshape structural biology. However, the utility of the PLM module in AlphaFold, Evoformer, has not been explored beyond structure prediction. In this paper, we investigate the representation ability of three popular PLMs: ESM-1b (single sequence), MSA-Transformer (multiple sequence alignment), and Evoformer (structural), with a special focus on Evoformer. Specifically, we aim to answer the following key questions: (1) Does the Evoformer trained as part of AlphaFold produce representations amenable to predicting protein function? (2) If yes, can Evoformer replace ESM-1b and MSA-Transformer? (3) How much do these PLMs rely on evolution-related protein data? In this regard, are they complementary to each other? We compare these models by empirical study along with new insights and conclusions. All code and datasets for reproducibility are available at https://github.com/elttaes/Revisiting-PLMs .
null
null
Fair and Optimal Decision Trees: A Dynamic Programming Approach
https://papers.nips.cc/paper_files/paper/2022/hash/fe248e22b241ae5a9adf11493c8c12bc-Abstract-Conference.html
Jacobus van der Linden, Mathijs de Weerdt, Emir Demirović
https://papers.nips.cc/paper_files/paper/2022/hash/fe248e22b241ae5a9adf11493c8c12bc-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19373-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe248e22b241ae5a9adf11493c8c12bc-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe248e22b241ae5a9adf11493c8c12bc-Supplemental-Conference.pdf
Interpretable and fair machine learning models are required for many applications, such as credit assessment and criminal justice. Decision trees offer this interpretability, especially when they are small. Optimal decision trees are of particular interest because they offer the best performance possible for a given size. However, state-of-the-art algorithms for fair and optimal decision trees have scalability issues, often requiring several hours to find such trees even for small datasets. Previous research has shown that dynamic programming (DP) performs well for optimizing decision trees because it can exploit the tree structure. However, adding a global fairness constraint to a DP approach is not straightforward, because the global constraint violates the condition that subproblems should be independent. We show how such a constraint can be incorporated by introducing upper and lower bounds on final fairness values for partial solutions of subproblems, which enables early comparison and pruning. Our results show that our model can find fair and optimal trees several orders of magnitude faster than previous methods, and now also for larger datasets that were previously beyond reach. Moreover, we show that with this substantial improvement our method can find the full Pareto front in the trade-off between accuracy and fairness.
null
null
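The pruning idea in the abstract above can be made concrete with a small sketch. The snippet below bounds the demographic-parity difference achievable by any completion of a partial tree and prunes when even the best case violates a fairness threshold; the specific parity metric, counts, and function names are assumptions chosen for illustration, not the paper's actual DP formulation.

```python
def disparity_bounds(pos_a, pos_b, open_a, open_b, n_a, n_b):
    """Bounds on |P(pred=1 | group A) - P(pred=1 | group B)| over completions.

    pos_a / pos_b : positives already fixed in labelled leaves, per group.
    open_a / open_b: group counts still sitting in unsolved subproblems.
    n_a / n_b      : total group sizes.
    The interval below treats each open sample independently, which relaxes
    the real problem, so the lower bound is always valid for pruning.
    """
    lo = pos_a / n_a - (pos_b + open_b) / n_b      # most negative achievable difference
    hi = (pos_a + open_a) / n_a - pos_b / n_b      # most positive achievable difference
    best = 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))
    worst = max(abs(lo), abs(hi))
    return best, worst

def can_prune(pos_a, pos_b, open_a, open_b, n_a, n_b, delta):
    """Prune the partial tree if no completion can satisfy disparity <= delta."""
    best, _ = disparity_bounds(pos_a, pos_b, open_a, open_b, n_a, n_b)
    return best > delta
```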
Recommender Forest for Efficient Retrieval
https://papers.nips.cc/paper_files/paper/2022/hash/fe2fe749d329627f161484876630c689-Abstract-Conference.html
Chao Feng, Wuchao Li, Defu Lian, Zheng Liu, Enhong Chen
https://papers.nips.cc/paper_files/paper/2022/hash/fe2fe749d329627f161484876630c689-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17503-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe2fe749d329627f161484876630c689-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe2fe749d329627f161484876630c689-Supplemental-Conference.pdf
Recommender systems (RS) have to select the top-N items from a massive item set. For the sake of efficient recommendation, RS usually represent users and items as latent embeddings and rely on approximate nearest neighbour search (ANNs) to retrieve the recommendation results. Despite the reduction in running time, the representation learning is independent of the ANNs index construction; thus, the two operations can be incompatible, which results in a potential loss of recommendation accuracy. To overcome this problem, we propose the Recommender Forest (RecForest), which jointly learns the latent embeddings and the index for efficient and high-fidelity recommendation. RecForest consists of multiple k-ary trees, each of which is a partition of the item set via hierarchical balanced clustering such that each item is uniquely represented by a path from the root to a leaf. Given such a data structure, an encoder-decoder based routing network is developed: it first encodes the context, i.e., user information, into hidden states; then, leveraging a transformer-based decoder, it identifies the top-N items via beam search. Compared with existing methods, RecForest brings the following advantages: 1) the false partitioning of boundary items can be effectively alleviated by the use of multiple trees; 2) the routing operation becomes much more accurate thanks to the powerful transformer decoder; 3) the tree parameters are shared across tree levels, making the index extremely memory-efficient. The experimental studies are performed on five popular recommendation datasets: with a significantly simplified training cost, RecForest outperforms competitive baseline approaches in terms of both recommendation accuracy and efficiency.
null
null
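As a rough illustration of the retrieval step described above, the sketch below runs beam search over length-`depth` codes with branching factor `k`, where each completed code identifies one leaf (item). The `score_next` callable stands in for RecForest's transformer decoder conditioned on the user; the toy scorer and all parameter names are assumptions of this sketch.

```python
import numpy as np

def beam_search(score_next, k, depth, beam_size):
    """Generic beam search over length-`depth` codes with branching factor `k`.

    score_next(prefix) -> array of k log-scores for extending `prefix`.
    """
    beams = [((), 0.0)]
    for _ in range(depth):
        candidates = []
        for prefix, logp in beams:
            scores = score_next(prefix)
            for branch in range(k):
                candidates.append((prefix + (branch,), logp + float(scores[branch])))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams  # top paths, each uniquely identifying an item (leaf)

# toy usage: a fixed random "decoder" that assigns log-probabilities per prefix
rng = np.random.default_rng(0)
table = {}
def toy_scorer(prefix):
    if prefix not in table:
        table[prefix] = np.log(rng.dirichlet(np.ones(4)))
    return table[prefix]

print(beam_search(toy_scorer, k=4, depth=3, beam_size=5))
```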
Graph Few-shot Learning with Task-specific Structures
https://papers.nips.cc/paper_files/paper/2022/hash/fe47dd3fd8e7eb43187d42d65083e383-Abstract-Conference.html
Song Wang, Chen Chen, Jundong Li
https://papers.nips.cc/paper_files/paper/2022/hash/fe47dd3fd8e7eb43187d42d65083e383-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18327-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe47dd3fd8e7eb43187d42d65083e383-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe47dd3fd8e7eb43187d42d65083e383-Supplemental-Conference.pdf
Graph few-shot learning is of great importance among various graph learning tasks. Under the few-shot scenario, models are often required to conduct classification given limited labeled samples. Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks. Nevertheless, these methods generally rely on the original graph (i.e., the graph that the meta-task is sampled from) to learn node representations. Consequently, the learned representations for the same nodes are identical in all meta-tasks. Since the class sets are different across meta-tasks, node representations should be task-specific to promote classification performance. Therefore, to adaptively learn node representations across meta-tasks, we propose a novel framework that learns a task-specific structure for each meta-task. To handle the variety of nodes across meta-tasks, we extract relevant nodes and learn task-specific structures based on node influence and mutual information. In this way, we can learn node representations with the task-specific structure tailored for each meta-task. We further conduct extensive experiments on five node classification datasets under both single- and multiple-graph settings to validate the superiority of our framework over the state-of-the-art baselines.
null
null
DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2022/hash/fe51de4e7baf52e743b679e3bdba7905-Abstract-Conference.html
Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
https://papers.nips.cc/paper_files/paper/2022/hash/fe51de4e7baf52e743b679e3bdba7905-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/16888-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe51de4e7baf52e743b679e3bdba7905-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe51de4e7baf52e743b679e3bdba7905-Supplemental-Conference.pdf
In offline RL, constraining the learned policy to remain close to the data is essential to prevent the policy from outputting out-of-distribution (OOD) actions with erroneously overestimated values. In principle, generative adversarial networks (GANs) can provide an elegant solution to do so, with the discriminator directly providing a probability that quantifies distributional shift. However, in practice, GAN-based offline RL methods have not outperformed alternative approaches, perhaps because the generator is trained to both fool the discriminator and maximize return - two objectives that are often at odds with each other. In this paper, we show that the issue of conflicting objectives can be resolved by training two generators: one that maximizes return, with the other capturing the "remainder" of the data distribution in the offline dataset, such that the mixture of the two is close to the behavior policy. We show that having two generators not only enables an effective GAN-based offline RL method, but also approximates a support constraint, where the policy does not need to match the entire data distribution, but only the slice of the data that leads to high long-term performance. We name our method DASCO, for Dual-Generator Adversarial Support Constrained Offline RL. On benchmark tasks that require learning from sub-optimal data, DASCO significantly outperforms prior methods that enforce a distribution constraint.
null
null
Beyond L1: Faster and Better Sparse Models with skglm
https://papers.nips.cc/paper_files/paper/2022/hash/fe5c31e525e9a26a1426ab0b589f42fe-Abstract-Conference.html
Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias
https://papers.nips.cc/paper_files/paper/2022/hash/fe5c31e525e9a26a1426ab0b589f42fe-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18287-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe5c31e525e9a26a1426ab0b589f42fe-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe5c31e525e9a26a1426ab0b589f42fe-Supplemental-Conference.zip
We propose a new, fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm is able to solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets, and Anderson acceleration. It handles previously unaddressed models and is extensively shown to improve on state-of-the-art algorithms. We provide a flexible, scikit-learn compatible package that easily handles customized datafits and penalties.
null
null
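To make the coordinate-descent building block concrete, here is a minimal NumPy sketch of cyclic proximal coordinate descent for the Lasso. It deliberately omits the working-set and Anderson-acceleration machinery that gives skglm its speed, and it does not use the package's actual API; for a non-convex penalty such as MCP one would swap the soft-thresholding step for the corresponding proximal operator.

```python
import numpy as np

def soft_threshold(x, thresh):
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def lasso_cd(X, y, alpha, n_iters=100):
    """Cyclic coordinate descent for min_w 0.5/n * ||y - Xw||^2 + alpha * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    residual = y.copy()                       # residual = y - Xw
    lipschitz = (X ** 2).sum(axis=0) / n      # per-coordinate Lipschitz constants
    for _ in range(n_iters):
        for j in range(d):
            if lipschitz[j] == 0.0:
                continue
            old = w[j]
            grad_j = -X[:, j] @ residual / n
            w[j] = soft_threshold(old - grad_j / lipschitz[j], alpha / lipschitz[j])
            if w[j] != old:
                residual -= (w[j] - old) * X[:, j]
    return w

# quick check on random data with a 5-sparse ground truth
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.nonzero(lasso_cd(X, y, alpha=0.1))[0])
```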
You Can’t Count on Luck: Why Decision Transformers and RvS Fail in Stochastic Environments
https://papers.nips.cc/paper_files/paper/2022/hash/fe90657b12193c7b52a3418bdc351807-Abstract-Conference.html
Keiran Paster, Sheila McIlraith, Jimmy Ba
https://papers.nips.cc/paper_files/paper/2022/hash/fe90657b12193c7b52a3418bdc351807-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18522-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe90657b12193c7b52a3418bdc351807-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe90657b12193c7b52a3418bdc351807-Supplemental-Conference.zip
Recently, methods such as Decision Transformer that reduce reinforcement learning to a prediction task and solve it via supervised learning (RvS) have become popular due to their simplicity, robustness to hyperparameters, and strong overall performance on offline RL tasks. However, simply conditioning a probabilistic model on a desired return and taking the predicted action can fail dramatically in stochastic environments, since trajectories that result in a return may have only achieved that return due to luck. In this work, we describe the limitations of RvS approaches in stochastic environments and propose a solution. Rather than simply conditioning on returns, as is standard practice, our proposed method, ESPER, conditions on learned average returns, which are independent of environment stochasticity. Doing so allows ESPER to achieve strong alignment between target return and expected performance in real environments. We demonstrate this in several challenging stochastic offline-RL tasks, including the puzzle game 2048 and Connect Four played against a stochastic opponent. In all tested domains, ESPER achieves significantly better alignment between the target return and the achieved return than simply conditioning on returns. ESPER also achieves higher maximum performance than even the value-based baselines.
null
null
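The core relabeling idea, conditioning on an expected rather than an observed return, can be sketched in a few lines. The snippet assumes trajectory clusters are already available (ESPER learns them so that they are independent of environment stochasticity); it only shows how observed returns are replaced by cluster-average returns before training the return-conditioned policy.

```python
import numpy as np

def relabel_with_average_returns(cluster_ids, returns):
    """Replace each trajectory's observed return with its cluster's average return,
    so the conditioning target reflects expected rather than lucky outcomes.
    `cluster_ids` is assumed to come from some trajectory clustering."""
    cluster_ids = np.asarray(cluster_ids)
    returns = np.asarray(returns, dtype=float)
    targets = np.empty_like(returns)
    for c in np.unique(cluster_ids):
        mask = cluster_ids == c
        targets[mask] = returns[mask].mean()
    return targets

# three trajectories: the first two behaved identically but got lucky/unlucky returns
print(relabel_with_average_returns([0, 0, 1], [1.0, 0.0, 5.0]))  # -> [0.5, 0.5, 5.0]
```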
Cost-efficient Gaussian tensor network embeddings for tensor-structured inputs
https://papers.nips.cc/paper_files/paper/2022/hash/fe91414cdc6348bcb5710e81bcb72c08-Abstract-Conference.html
Linjian Ma, Edgar Solomonik
https://papers.nips.cc/paper_files/paper/2022/hash/fe91414cdc6348bcb5710e81bcb72c08-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/19342-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe91414cdc6348bcb5710e81bcb72c08-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe91414cdc6348bcb5710e81bcb72c08-Supplemental-Conference.pdf
This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such as the Kronecker product or Khatri-Rao product, such that the computational cost for calculating $Sx$ is efficient. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low. We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings and derive sketching asymptotic cost lower bounds using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal sketching asymptotic cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms. We provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show the optimality of an existing algorithm for tensor train rounding.
null
null
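For the simplest structured input, a Kronecker product of two vectors, the saving described above can be verified directly: a Kronecker-structured Gaussian sketch can be applied factor by factor without ever materialising the full input. The dimensions below are arbitrary, and this covers only the Kronecker special case rather than the paper's general tensor-network construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m1, m2 = 50, 60, 8, 8                  # input dims and per-factor sketch sizes

a, b = rng.normal(size=n1), rng.normal(size=n2)
S1 = rng.normal(size=(m1, n1)) / np.sqrt(m1)   # Gaussian sketching matrices
S2 = rng.normal(size=(m2, n2)) / np.sqrt(m2)

# The structured input x = a kron b never needs to be materialised:
# (S1 kron S2)(a kron b) = (S1 a) kron (S2 b)  by the mixed-product property.
cheap = np.kron(S1 @ a, S2 @ b)                # O(m1*n1 + m2*n2 + m1*m2) work
expensive = np.kron(S1, S2) @ np.kron(a, b)    # O(m1*m2*n1*n2) work
assert np.allclose(cheap, expensive)

# The Gaussian embedding preserves norms in expectation.
print(np.linalg.norm(cheap), np.linalg.norm(np.kron(a, b)))
```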
Neural Transmitted Radiance Fields
https://papers.nips.cc/paper_files/paper/2022/hash/fe989bb038b5dcc44181255dd6913e43-Abstract-Conference.html
Chengxuan Zhu, Renjie Wan, Boxin Shi
https://papers.nips.cc/paper_files/paper/2022/hash/fe989bb038b5dcc44181255dd6913e43-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18353-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/fe989bb038b5dcc44181255dd6913e43-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/fe989bb038b5dcc44181255dd6913e43-Supplemental-Conference.pdf
Neural radiance fields (NeRF) have brought tremendous progress to novel view synthesis. Though NeRF enables the rendering of subtle details in a scene by learning from a dense set of images, it also reconstructs the undesired reflections when we capture images through glass. As a commonly observed interference, the reflection would undermine the visibility of the desired transmitted scene behind glass by occluding the transmitted light rays. In this paper, we aim at addressing the problem of rendering novel transmitted views given a set of reflection-corrupted images. By introducing the transmission encoder and recurring edge constraints as guidance, our neural transmitted radiance fields can resist such reflection interference during rendering and reconstruct high-fidelity results even under sparse views. The proposed method achieves superior performance from the experiments on a newly collected dataset compared with state-of-the-art methods.
null
null
Unsupervised Skill Discovery via Recurrent Skill Training
https://papers.nips.cc/paper_files/paper/2022/hash/ff6b031d5bdc552b795175a0f3b35a50-Abstract-Conference.html
Zheyuan Jiang, Jingyue Gao, Jianyu Chen
https://papers.nips.cc/paper_files/paper/2022/hash/ff6b031d5bdc552b795175a0f3b35a50-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17057-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/ff6b031d5bdc552b795175a0f3b35a50-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/ff6b031d5bdc552b795175a0f3b35a50-Supplemental-Conference.zip
Being able to discover diverse useful skills without external reward functions is beneficial in reinforcement learning research. Previous unsupervised skill discovery approaches mainly train different skills in parallel. Although impressive results have been achieved, we find that the parallel training procedure can sometimes block exploration when the states visited by different skills overlap, which leads to poor state coverage and restricts the diversity of the learned skills. In this paper, we take a deeper look into this phenomenon and propose a novel framework to address this issue, which we call Recurrent Skill Training (ReST). Instead of training all the skills in parallel, ReST trains different skills one after another recurrently, along with a state-coverage-based intrinsic reward. We conduct experiments on a number of challenging 2D navigation environments and robotic locomotion environments. Evaluation results show that our proposed approach outperforms previous parallel training approaches in terms of state coverage and skill diversity. Videos of the discovered skills are available at https://sites.google.com/view/neurips22-rest.
null
null
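A minimal stand-in for the state-coverage intrinsic reward and the recurrent (one-after-another) training schedule is sketched below; the 1/sqrt(count) form and the helper names in the commented loop are assumptions, not the paper's exact design.

```python
from collections import defaultdict

class CoverageReward:
    """Count-based stand-in for a state-coverage intrinsic reward."""

    def __init__(self):
        self.counts = defaultdict(int)

    def __call__(self, state_bin):
        # `state_bin` is any hashable discretisation of the state
        self.counts[state_bin] += 1
        return self.counts[state_bin] ** -0.5

# Recurrent schedule: skills are updated one after another, each seeing the
# visitation counts accumulated by the skills trained before it, e.g.:
# reward = CoverageReward()
# for round_ in range(n_rounds):
#     for skill in skills:
#         train_skill(skill, intrinsic_reward=reward)   # hypothetical helpers
```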
Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport
https://papers.nips.cc/paper_files/paper/2022/hash/ff7373914a96956f2a7cacbdf3b0b8d8-Abstract-Conference.html
Matthias Bitzer, Mona Meister, Christoph Zimmer
https://papers.nips.cc/paper_files/paper/2022/hash/ff7373914a96956f2a7cacbdf3b0b8d8-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18418-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/ff7373914a96956f2a7cacbdf3b0b8d8-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/ff7373914a96956f2a7cacbdf3b0b8d8-Supplemental-Conference.zip
Despite recent advances in automated machine learning, model selection is still a complex and computationally intensive process. For Gaussian processes (GPs), selecting the kernel is a crucial task, often done manually by an expert. Additionally, evaluating the model selection criteria for Gaussian processes typically scales cubically in the sample size, rendering kernel search particularly computationally expensive. We propose a novel, efficient search method through a general, structured kernel space. Previous methods solved this task via Bayesian optimization and relied on measuring the distance between GPs directly in function space to construct a kernel-kernel. We present an alternative approach by defining a kernel-kernel over the symbolic representation of the statistical hypothesis that is associated with a kernel. We empirically show that this leads to a computationally more efficient way of searching through a discrete kernel space.
null
null
Robust Models are less Over-Confident
https://papers.nips.cc/paper_files/paper/2022/hash/ff887781480973bd3cb6026feb378d1e-Abstract-Conference.html
Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
https://papers.nips.cc/paper_files/paper/2022/hash/ff887781480973bd3cb6026feb378d1e-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/18225-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/ff887781480973bd3cb6026feb378d1e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/ff887781480973bd3cb6026feb378d1e-Supplemental-Conference.pdf
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustnessconfidencesevaluation
null
null
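The kind of confidence analysis described above can be reproduced on any classifier's logits with a short script: mean top-1 softmax confidence plus a simple expected calibration error (ECE). The binning scheme here is a common default and may differ from the paper's exact protocol.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_stats(logits, labels, n_bins=15):
    """Return (mean top-1 confidence, expected calibration error).
    Comparing these numbers on clean data for a robust vs. a non-robust model
    mirrors the kind of analysis discussed above."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=1)
    correct = probs.argmax(axis=1) == np.asarray(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return conf.mean(), ece
```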
Near-Optimal No-Regret Learning Dynamics for General Convex Games
https://papers.nips.cc/paper_files/paper/2022/hash/ffa1301939cc707d6e986e6c4124340b-Abstract-Conference.html
Gabriele Farina, Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, Tuomas Sandholm
https://papers.nips.cc/paper_files/paper/2022/hash/ffa1301939cc707d6e986e6c4124340b-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17332-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/ffa1301939cc707d6e986e6c4124340b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/ffa1301939cc707d6e986e6c4124340b-Supplemental-Conference.pdf
A recent line of work has established uncoupled learning dynamics such that, when employed by all players in a game, each player's regret after $T$ repetitions grows polylogarithmically in $T$, an exponential improvement over the traditional guarantees within the no-regret framework. However, so far these results have been limited to certain classes of games with structured strategy spaces---such as normal-form and extensive-form games. Whether $O(\mathrm{polylog} T)$ regret bounds can be obtained for general convex and compact strategy sets---as is the case in many fundamental models in economics and multiagent systems---while retaining efficient strategy updates has remained an important open question. In this paper, we answer it in the positive by establishing the first uncoupled learning algorithm with $O(\log T)$ per-player regret in general convex games, that is, games with concave utility functions supported on arbitrary convex and compact strategy sets. Our learning dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space using a self-concordant regularizer that is peculiarly not a barrier for the feasible region. Our learning dynamics are efficiently implementable given access to a proximal oracle for the convex strategy set, leading to $O(\log\log T)$ per-iteration complexity; we also give extensions for the case where only a linear optimization oracle is available. Finally, we adapt our dynamics to guarantee $O(\sqrt{T})$ regret in the adversarial regime. Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds, either in terms of the dependence on the number of iterations or on the dimension of the strategy sets.
null
null
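For intuition about the optimistic updates, the sketch below specialises optimistic FTRL to the probability simplex with an entropy regularizer, where the update has a closed multiplicative form. The paper's dynamics instead use a self-concordant (non-barrier) regularizer over general convex sets and a proximal oracle, so this is only a simplified instance of the same family, with all names and the feedback model assumed for illustration.

```python
import numpy as np

def optimistic_mwu(utility_fns, n_actions, T, eta=0.1):
    """Optimistic multiplicative-weights updates on the simplex.

    utility_fns[t](x) returns the utility gradient observed at round t
    (in a game this would depend on the other players' strategies)."""
    x = np.full(n_actions, 1.0 / n_actions)
    cumulative = np.zeros(n_actions)
    prediction = np.zeros(n_actions)   # optimistic guess = last observed utility
    plays = []
    for t in range(T):
        logits = eta * (cumulative + prediction)
        x = np.exp(logits - logits.max())
        x /= x.sum()
        plays.append(x)
        u = utility_fns[t](x)
        cumulative += u
        prediction = u
    return plays
```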
OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport
https://papers.nips.cc/paper_files/paper/2022/hash/ffdb280e7c7b4c4af30e04daf5a84b98-Abstract-Conference.html
Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming Huang
https://papers.nips.cc/paper_files/paper/2022/hash/ffdb280e7c7b4c4af30e04daf5a84b98-Abstract-Conference.html
NIPS 2022
https://papers.nips.cc/paper_files/paper/17651-/bibtex
https://papers.nips.cc/paper_files/paper/2022/file/ffdb280e7c7b4c4af30e04daf5a84b98-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/2022/file/ffdb280e7c7b4c4af30e04daf5a84b98-Supplemental-Conference.pdf
Multi-modal knowledge graph embeddings (KGE) have attracted increasing attention for learning representations of entities and relations for link prediction tasks. Unlike previous uni-modal KGE approaches, multi-modal KGE can leverage expressive knowledge from a wealth of modalities (image, text, etc.), leading to more comprehensive representations of real-world entities. However, a critical challenge is that the multi-modal embedding spaces are usually heterogeneous, so direct fusion can destroy the inherent spatial structure of the different modal embeddings. To overcome this challenge, we revisit multi-modal KGE from a distributional-alignment perspective and propose optimal transport knowledge graph embeddings (OTKGE). Specifically, we model the multi-modal fusion procedure as a transport plan that moves the different modal embeddings to a unified space by minimizing the Wasserstein distance between the multi-modal distributions. Theoretically, we show that by minimizing the Wasserstein distance between the individual modalities and the unified embedding space, the final results are guaranteed to maintain consistency and comprehensiveness. Moreover, experimental results on well-established multi-modal knowledge graph completion benchmarks show that our OTKGE achieves state-of-the-art performance.
null
null
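The transport-plan view of fusion can be illustrated with generic entropic optimal transport: compute a Sinkhorn plan between two modality embedding sets and map one modality into the unified space via the plan's barycentric projection. The cost normalisation, regularisation value, and uniform marginals are assumptions of this sketch, not OTKGE's training objective.

```python
import numpy as np

def sinkhorn_plan(X, Y, reg=0.1, n_iters=200):
    """Entropic-regularised OT plan between point clouds X (n,d) and Y (m,d)
    with uniform weights; a generic building block, not OTKGE's exact loss."""
    n, m = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
    C = C / max(C.max(), 1e-12)                          # normalise to avoid underflow
    K = np.exp(-C / reg)
    a, b = np.ones(n) / n, np.ones(m) / m                # uniform marginals
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                   # transport plan P

def barycentric_map(Y, P):
    """Map each source point into the target space via P's barycentric projection,
    e.g. moving image embeddings toward a unified embedding space."""
    return (P @ Y) / P.sum(axis=1, keepdims=True)
```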