,Clean_Title,Clean_Text,Clean_Summary
0,Adaptive Loss Scaling for Mixed Precision Training,"Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs. MPT is typically used in combination with a technique called loss scaling, which works by scaling up the loss value before the start of backpropagation in order to minimize the impact of numerical underflow on training. Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per model, and a single scale cannot be adapted to different layers at different training stages. We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter. We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods. We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point.",We devise adaptive loss scaling to improve mixed precision training, surpassing state-of-the-art results. Proposal for an adaptive loss scaling method during backpropagation for mixed precision training, where the scale is decided automatically to reduce underflow. The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically.
1,Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks,"Many real-world problems, e.g. object detection, have outputs that are naturally expressed as sets of entities. This creates a challenge for traditional deep neural networks, which naturally deal with structured outputs such as vectors, matrices or tensors. We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks. Specifically, in our formulation we incorporate the permutation as an unobservable variable and estimate its distribution during the learning process using alternating optimization. We demonstrate the validity of this new formulation on two relevant vision problems: object detection, for which our formulation outperforms state-of-the-art detectors such as Faster R-CNN and YOLO, and a complex CAPTCHA test, where we observe that, surprisingly, our set-based network acquired the ability of mimicking arithmetic without any rules being coded.",We present a novel approach for learning to predict sets with unknown permutation and cardinality using feed-forward deep neural networks. A formulation to learn the distribution over unobservable permutation variables based on deep networks for the set prediction problem.
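
To make the loss-scaling mechanism described in entry 0 concrete, the following is a minimal sketch of static loss scaling in PyTorch; the fixed scale value and the helper name are illustrative assumptions, and the paper's layer-wise, automatically computed scales are not implemented here.

import torch

def scaled_backward_step(model, loss, optimizer, loss_scale=1024.0):
    # Scale the loss up before backpropagation so that small FP16 gradients
    # do not underflow to zero (static loss scaling, the baseline that
    # adaptive loss scaling improves on).
    (loss * loss_scale).backward()
    # Unscale the gradients before the optimizer step so the update
    # magnitude matches unscaled training.
    for p in model.parameters():
        if p.grad is not None:
            p.grad.div_(loss_scale)
    optimizer.step()
    optimizer.zero_grad()

In this baseline the scale is a single tuned hyperparameter; entry 0 replaces it with per-layer scales computed automatically during training.
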
2,Foveated Downsampling Techniques,"Foveation is an important part of human vision, and a number of deep networks have also used foveation. However, there have been few systematic comparisons between foveating and non-foveating deep networks, and between different variable-resolution downsampling methods. Here we define several such methods, and compare their performance on ImageNet recognition with a DenseNet-121 network. The best variable-resolution method slightly outperforms uniform downsampling. Thus in our experiments, foveation does not substantially help or hinder object recognition in deep networks.",We compare object recognition performance on images that are downsampled uniformly and with three different foveation schemes.
3,Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability,"We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4-13x speedup in verification times. An important feature of our methodology is its ""universality,"" in the sense that it can be used with a broad range of training procedures and verification approaches.","We develop methods to train deep neural models that are both robust to adversarial perturbations and whose robustness is significantly easier to verify. The paper presents several ways to regularize plain ReLU networks to optimize the adversarial robustness, provable adversarial robustness, and the verification speed. This paper proposes methods to train robust neural networks that can be verified faster, using pruning methods to encourage weight sparsity and regularization to encourage ReLU stability."
4,Towards an Adversarially Robust Normalization Approach,"Batch Normalization has been shown to be effective for improving and accelerating the training of deep neural networks. However, it has recently been shown that it is also vulnerable to adversarial perturbations. In this work, we aim to investigate the cause of the adversarial vulnerability of the BatchNorm layer. We hypothesize that the use of different normalization statistics during training and inference is the main cause of this adversarial vulnerability in the BatchNorm layer. We verify this empirically through experiments on various neural network architectures and datasets. Furthermore, we introduce Robust Normalization and experimentally show that it is not only resilient to adversarial perturbations but also inherits the benefits of BatchNorm.","Investigation of how BatchNorm causes adversarial vulnerability and how to avoid it. This paper addresses vulnerability to adversarial perturbations in BatchNorm, and proposes an alternative called RobustNorm, using min-max rescaling instead of normalization. This paper investigates the reason behind the vulnerability of BatchNorm and proposes Robust Normalization, a normalization method that achieves significantly better results under a variety of attack methods."
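
The RobustNorm entry above (entry 4) says it replaces standardization with min-max rescaling. The snippet below is only a minimal sketch of that idea for NCHW activations, under the assumption of per-channel batch statistics; the paper's exact formulation (e.g., affine parameters or inference-time statistics) is not specified here and the function name is made up for illustration.

import torch

def min_max_rescale(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Rescale activations toward [0, 1] per channel using batch min/max,
    # rather than standardizing with batch mean/variance as BatchNorm does.
    dims = (0, 2, 3)  # reduce over batch and spatial dimensions of an NCHW tensor
    x_min = x.amin(dim=dims, keepdim=True)
    x_max = x.amax(dim=dims, keepdim=True)
    return (x - x_min) / (x_max - x_min + eps)
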
5,Uncertainty-aware Variational-Recurrent Imputation Network for Clinical Time Series,"Electronic Health Records comprise longitudinal clinical observations characterized by sparsity, irregularity, and high dimensionality, which become major obstacles in drawing reliable downstream outcomes. Although a great number of imputation methods have been proposed to tackle these issues, most of the existing methods ignore correlated features or temporal dynamics and entirely put aside the uncertainty. In particular, since estimates of missing values risk being imprecise, we are motivated to treat reliable and less certain information differently. In this work, we propose a novel variational-recurrent imputation network, which unifies the imputation and prediction networks by taking into account the correlated features and temporal dynamics, and further utilizes the uncertainty to alleviate the risk of biased missing-value estimates. Specifically, we leverage a deep generative model to estimate the missing values based on the distribution among variables, and a recurrent imputation network to exploit the temporal relations in conjunction with the uncertainty. We validated the effectiveness of our proposed model on a publicly available real-world EHR dataset, PhysioNet Challenge 2012, and compared the results with other state-of-the-art competing methods in the literature.","Our variational-recurrent imputation network (V-RIN) takes into account the correlated features and temporal dynamics, and further utilizes the uncertainty to alleviate the risk of biased missing-value estimates. A missing data imputation network that incorporates correlation, temporal relationships, and data uncertainty for the problem of data sparsity in EHRs, which yields higher AUC on mortality rate classification tasks. The paper presented a method that combines a VAE and an uncertainty-aware GRU for sequential missing data imputation and outcome prediction."
6,Adaptive Quantization of Neural Networks,"Despite the state-of-the-art accuracy of Deep Neural Networks (DNNs) in various classification problems, their deployment onto resource-constrained edge computing devices remains challenging due to their large size and complexity. Several recent studies have reported remarkable results in reducing this complexity through quantization of DNN models. However, these studies usually do not consider the changes in the loss function when performing quantization, nor do they take the differing importance of DNN model parameters for accuracy into account. We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each network parameter such that the increase in loss is minimized. The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each parameter and assigns it a precision accordingly. Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution. Experiments on MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rates. Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy.","An adaptive method for fixed-point quantization of neural networks based on theoretical analysis rather than heuristics. Proposes a method for quantizing neural networks that allows weights to be quantized with different precision depending on their importance, taking into account the loss. The paper proposes a technique for quantizing the weights of a neural network with bit-depth/precision varying on a per-parameter basis."
7,Representation Learning with Multisets,"We study the problem of learning permutation invariant representations that can capture containment relations. We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object. With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations. We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations. We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations. Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.","Based on fuzzy set theory, we propose a model that, given only the sizes of symmetric differences between pairs of multisets, learns representations of such multisets and their elements. This paper proposes a new task of set learning, predicting the size of the symmetric difference between multisets, and gives a method to solve the task based on fuzzy set theory."
8,"Credible Sample Elicitation by Deep Learning, for Deep Learning","It is important to collect credible training samples for building data-intensive learning systems. In the literature, there is a line of studies on eliciting distributional information from self-interested agents who hold relevant information. Asking people to report a complex distribution, though theoretically viable, is challenging in practice. This is primarily due to the heavy cognitive loads required for human agents to reason about and report this high-dimensional information. Consider the example where we are interested in building an image classifier via first collecting a certain category of high-dimensional image data. While classical elicitation results apply to eliciting a complex and generative distribution for this image data, we are interested in eliciting samples from agents. This paper introduces a deep learning aided method to incentivize credible sample contributions from selfish and rational agents. The challenge in doing so is to design an incentive-compatible score function to score each reported sample to induce truthful reports, instead of an arbitrary or even adversarial one. We show that with accurate estimation of a certain f-divergence function we are able to achieve approximate incentive compatibility in eliciting truthful samples. We then present an efficient estimator with a theoretical guarantee by studying the variational forms of the f-divergence function. Our work complements the literature of information elicitation by introducing the problem of sample elicitation. We also show a connection between this sample elicitation problem and f-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.","This paper proposes a deep learning aided method to elicit credible samples from self-interested agents.
The authors propose a sample elicitation framework for the problem of eliciting credible samples from agents for complex distributions, suggest that deep neural frameworks can be applied in this framework, and connect sample elicitation and f-GAN.This paper studies the sample elicitation problem, proposing a deep learning approach that relies on the dual expression of the f-divergence which writes as a maximum over a set of functions t." 9,Graph2Seq: Graph to Sequence Learning with Attention-Based Neural Networks,"The celebrated Sequence to Sequence learning technique and its numerous variants achieve excellent performance on many tasks.However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence.To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs.Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.","Graph to Sequence Learning with Attention-Based Neural NetworksA graph2seq architecture that combines a graph encoder mixing GGNN and GCN components with an attentional sequence encoder, and that shows improvements over baselines.This work proposes an end-to-end graph encoder to sequence decoder models with an attention mechanism in between." 10,Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories,"We address the problem of learning to discover 3D parts for objects in unseen categories.Being able to learn the geometry prior of parts and transfer this prior to unseen categories pose fundamental challenges on data-driven shape segmentation approaches.Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion.At the core of our approach is to restrict the local context for extracting part-level features, which encourages the generalizability to novel categories.On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples.Quantitative comparisons against four strong shape segmentation baselines show that we achieve the state-of-the-art performance.","A zero-shot segmentation framework for 3D object part segmentation. 
Model the segmentation as a decision-making process and solve it as a contextual bandit problem. A method for segmenting 3D point clouds of objects into component parts, focused on generalizing part groupings to novel object categories unseen during training, that shows strong performance relative to baselines. This paper proposes a method for part segmentation in object point clouds."
11,Beyond Classical Diffusion: Ballistic Graph Neural Network,"This paper presents the ballistic graph neural network. The ballistic graph neural network tackles the weight distribution from a transportation perspective and has many different properties compared to the traditional graph neural network pipeline. The ballistic graph neural network does not require calculating any eigenvalues. The filters propagate exponentially faster compared to traditional graph neural networks. We use a perturbed coin operator to perturb and optimize the diffusion rate. Our results show that by selecting the diffusion speed, the network can reach a similar accuracy with fewer parameters. We also show that the perturbed filters act as better representations compared to pure ballistic ones. We provide a new perspective on training graph neural networks: by adjusting the diffusion rate, the network's performance can be improved.",A new perspective on how to capture the correlation between nodes based on diffusion properties. A new diffusion operation for graph neural networks that does not require eigenvalue calculation and can propagate exponentially faster compared to traditional graph neural networks. The paper proposes to cope with the speed-of-diffusion problem by introducing a ballistic walk.
12,Passage Ranking with Weak Supervision,"In this paper, we propose a framework for neural ranking tasks based on the data programming paradigm, which enables us to leverage multiple weak supervision signals from different sources. Empirically, we consider two sources of weak supervision signals, unsupervised ranking functions and semantic feature similarities. We train a BERT-based passage-ranking model in our weak supervision framework. Without using ground-truth training labels, BERT-PR models outperform the BM25 baseline by a large margin on all three datasets and even beat the previous state-of-the-art results with full supervision on two of the datasets.","We propose a weak supervision training pipeline based on the data programming framework for ranking tasks, in which we train a BERT-base ranking model and establish the new SOTA. The authors propose a combination of BERT and the weak supervision framework to tackle the problem of passage ranking, obtaining results better than the fully supervised state-of-the-art."
13,Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks,"We study the training process of Deep Neural Networks from the Fourier analysis perspective.We demonstrate a very universal Frequency Principle --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets, such as MNIST/CIFAR10, and deep networks, such as VGG16.This F-Principle of DNNs is opposite to the learning behavior of most conventional iterative numerical schemes, which exhibits faster convergence for higher frequencies, for various scientific computing problems.With a naive theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions.The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function.This understanding provides an explanation of good generalization of DNNs on most real datasets and bad generalization of DNNs on parity function or randomized dataset.","In real problems, we found that DNNs often fit target functions from low to high frequencies during the training process.This paper analyzes the loss of neural networks in the Fourier domain and finds that DNNs tend to learn low-frequency components before high-frequency ones.The paper studies the training process of NNs through Fourier analysis, concluding that NNs learn low frequency components before high frequency components." 14,Hierarchical Graph-to-Graph Translation for Molecules,"The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them with better biochemical properties.Our work in this paper substantially extends prior state-of-the-art on graph-to-graph translation methods for molecular optimization.In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph.Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule.We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines.","We propose a multi-resolution, hierarchically coupled encoder-decoder for graph-to-graph translation.A hierarchical graph-to-graph translation model to generate molecular graphs using chemical substructures as building blocks that is fully autoregressive and learns coherent multi-resolution representations, outperforming previous models.The authors present a hierarchical graph-to-graph translation method for generating novel organic molecules." 
15,Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data,"Equivariance is a nice property to have, as it produces much more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear, current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. In contrast, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data, and generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.","We utilize attention to restrict equivariant neural networks to the set of co-occurring transformations in data. This paper combines attention with group equivariance, specifically looking at the p4m group of rotations, translations, and flips, and derives a form of self-attention that doesn't destroy the equivariance property. The authors propose a self-attention mechanism for rotation-equivariant neural nets that improves classification performance over regular rotation-equivariant nets."
16,Fully differentiable full-atom protein backbone generation,"The fast generation and refinement of protein backbones would constitute a major advancement to current methodology for the design and development of de novo proteins. In this study, we train Generative Adversarial Networks to generate fixed-length full-atom protein backbones, with the goal of sampling from the distribution of realistic 3-D backbone fragments. We represent protein structures by pairwise distances between all backbone atoms, and present a method for directly recovering and refining the corresponding backbone coordinates in a differentiable manner. We show that interpolations in the latent space of the generator correspond to smooth deformations of the output backbones, and that test set structures not seen by the generator during training exist in its image. Finally, we perform sequence design, relaxation, and ab initio folding of a subset of generated structures, and show that in some cases we can recover the generated folds after forward-folding. Together, these results suggest a mechanism for fast protein structure refinement and folding using external energy functions.","We train a GAN to generate and recover full-atom protein backbones, and we show that in select cases we can recover the generated proteins after sequence design and ab initio forward-folding. A generative model for protein backbones which uses a GAN, an autoencoder-like network, and a refinement process, and a set of qualitative evaluations suggesting positive results. This paper presents an end-to-end approach for generating protein backbones using generative adversarial networks."
17,Meta-Learning with Domain Adaptation for Few-Shot Learning under Domain Shift,"Few-Shot Learning aims to overcome the limitations of traditional machine learning approaches which require thousands of labeled examples to train an effective model.Considered as a hallmark of human intelligence, the community has recently witnessed several contributions on this topic, in particular through meta-learning, where a model learns how to learn an effective model for few-shot learning.The main idea is to acquire prior knowledge from a set of training tasks, which is then used to perform test tasks.Most existing work assumes that both training and test tasks are drawn from the same distribution, and a large amount of labeled data is available in the training tasks.This is a very strong assumption which restricts the usage of meta-learning strategies in the real world where ample training tasks following the same distribution as test tasks may not be available.In this paper, we propose a novel meta-learning paradigm wherein a few-shot learning model is learnt, which simultaneously overcomes domain shift between the train and test tasks via adversarial domain adaptation.We demonstrate the efficacy the proposed method through extensive experiments.","Meta Learning for Few Shot learning assumes that training tasks and test tasks are drawn from the same distribution. What do you do if they are not? Meta Learning with task-level Domain Adaptation!This paper proposes a model combining unsupervised adversarial domain adaptation with prototypical networks that performs better than few-shot learning baselines on few-shot learning tasks with domain shift.The authors proposed meta domain adaptation to address domain shift scenario in meta learning setup, demonstrating performance improvements in several experiments." 18,"Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support","Universal probabilistic programming systems provide a powerful framework for specifying rich and complex probabilistic models.However, this expressiveness comes at the cost of substantially complicating the process of drawing inferences from the model.In particular, inference can become challenging when the support of the model varies between executions.Though general-purpose inference engines have been designed to operate in such settings, they are typically inefficient, often relying on proposing from the prior to make transitions.To address this, we introduce a new inference framework: Divide, Conquer, and Combine.DCC divides the program into separate straight-line sub-programs, each of which has a fixed support allowing more powerful inference algorithms to be run locally, before recombining their outputs in a principled fashion.We show how DCC can be implemented as an automated and general-purpose PPS inference engine, and empirically confirm that it can provide substantial performance improvements over previous approaches.","Divide, Conquer, and Combine is a new inference scheme that can be performed on the probabilistic programs with stochastic support, i.e. the very existence of variables is stochastic." 
19,COMMUNITY PRESERVING NODE EMBEDDING,"Detecting communities or the modular structure of real-life networks is an important task because the way a network functions is often determined by its communities. The traditional approaches to community detection involve modularity-based approaches, which, generally speaking, construct partitions based on heuristics that seek to maximize the ratio of the edges within the partitions to those between them. Node embedding approaches, which represent each node in a graph as a real-valued vector, transform the problem of community detection in a graph to that of clustering a set of vectors. Existing node embedding approaches are primarily based on first initiating uniform random walks from each node to construct a context of a node, and then seek to make the vector representation of the node close to its context. However, standard node embedding approaches do not directly take into account the community structure of a network while constructing the context around each node. To alleviate this, we explore two different threads of work. First, we investigate the use of biased random walks to obtain a more centrality-preserving embedding of nodes, which we hypothesize may lead to more effective clusters in the embedded space. Second, we propose a community-structure-aware node embedding approach where we incorporate modularity-based partitioning heuristics into the objective function of node embedding. We demonstrate that our proposed approach for community detection outperforms a number of modularity-based baselines as well as K-means on a standard node-embedded vector space on a wide range of real-life networks of different sizes and densities.",A community-preserving node embedding algorithm that results in more effective detection of communities with a clustering on the embedded space.
20,PointGrow: Autoregressively Learned Point Cloud Generation with Self-Attention,"A point cloud is an agile 3D representation, efficiently modeling an object's surface geometry. However, these surface-centric properties also pose challenges for designing tools to recognize and synthesize point clouds. This work presents a novel autoregressive model, PointGrow, which generates realistic point cloud samples from scratch or conditioned on given semantic contexts. Our model operates recurrently, with each point sampled according to a conditional distribution given its previously-generated points. Since point cloud object shapes are typically encoded by long-range interpoint dependencies, we augment our model with dedicated self-attention modules to capture these relations. Extensive evaluation demonstrates that PointGrow achieves satisfactory performance on both unconditional and conditional point cloud generation tasks, with respect to fidelity, diversity and semantic preservation. Further, conditional PointGrow learns a smooth manifold of given images, inside which 3D shape interpolation and arithmetic calculation can be performed.",An autoregressive deep learning model for generating diverse point clouds. An approach for generating 3D shapes as point clouds which considers the lexicographic ordering of points according to coordinates and trains a model to predict points in order. The paper introduces a generative model for point clouds using a PixelRNN-like auto-regressive model and an attention model to handle longer-range interactions.
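
As a worked form of the autoregressive factorization described in the PointGrow entry above (each point sampled conditionally on the previously generated points), the joint distribution over an ordered point cloud of n points can be written as below; the notation is illustrative rather than the paper's exact formulation:

\[ p(S) \;=\; \prod_{i=1}^{n} p\big(s_i \mid s_1, \ldots, s_{i-1}\big), \qquad s_i = (x_i, y_i, z_i). \]

The summary's mention of a lexicographic ordering corresponds to fixing the order in which the points s_1, ..., s_n are generated.
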
21,Challenges of Explaining Control,"Reinforcement learning and evolutionary algorithms can be used to create sophisticated control solutions. Unfortunately, explaining how these solutions work can be difficult due to their ""black box"" nature. In addition, the time-extended nature of control algorithms often prevents direct application of explainability techniques used for standard supervised learning algorithms. This paper attempts to address the explainability of black-box control algorithms through six different techniques: 1) Bayesian rule lists, 2) Function analysis, 3) Single time step integrated gradients, 4) Grammar-based decision trees, 5) Sensitivity analysis combined with temporal modeling with LSTMs, and 6) Explanation templates. These techniques are tested on a simple 2D domain, where a simulated rover attempts to navigate through obstacles to reach a goal. For control, this rover uses an evolved multi-layer perceptron that maps an 8D field of obstacle and goal sensors to an action determining where it should go in the next time step. Results show that some simple insights in explaining the neural network are possible, but that good explanations are difficult.","Describes a series of explainability techniques applied to a simple neural network controller used for navigation. This paper provides insights and explanations for the problem of providing explanations for a multilayer perceptron used as an inverse controller for rover movement, and ideas on how to explain a black-box model."
22,Self-Monitoring Navigation Agent via Auxiliary Progress Estimation,"The Vision-and-Language Navigation task entails an agent following navigational instructions in photo-realistic unknown environments. This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal. In this paper, we introduce a self-monitoring agent with two complementary components: a visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images, and a progress monitor to ensure the grounded instruction correctly reflects the navigation progress. We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components. Using our proposed method, we set the new state of the art by a significant margin. Code is available at https://github.com/chihyaoma/selfmonitoring-agent.","We propose a self-monitoring agent for the Vision-and-Language Navigation task. A method for vision+language navigation which tracks progress on the instruction using a progress monitor and a visual-textual co-grounding module, and performs well on standard benchmarks. This paper describes a model for vision-and-language navigation with panoramic visual attention and an auxiliary progress monitoring loss, giving state-of-the-art results."
23,Event Discovery for History Representation in Reinforcement Learning,"Environments in Reinforcement Learning are usually only partially observable. To address this problem, a possible solution is to provide the agent with information about past observations. While common methods represent this history using a Recurrent Neural Network, in this paper we propose an alternative representation which is based on the record of the past events observed in a given episode. Inspired by human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower. Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.","Event discovery to represent the history for the agent in RL. The authors study the problem of RL under partially observed settings, and propose a solution that uses an FFNN but provides a history representation, outperforming PPO. This paper proposes a new way to represent past history as input to an RL agent, shown to perform better than PPO and an RNN variant of PPO."
24,GENERATING HIGH FIDELITY IMAGES WITH SUBSCALE PIXEL NETWORKS AND MULTIDIMENSIONAL UPSCALING,"The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders. Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem. Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail. To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation. To address the latter challenge, we propose to use multidimensional upscaling to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs. We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 128. We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets.","We show that autoregressive models can generate high fidelity images. An architecture utilizing decoder, size-upscaling decoder, and depth-upscaling decoder components to tackle the problem of learning long-range dependencies in images in order to obtain high fidelity images. This paper addresses the problem of generating high fidelity images, successfully showing convincing ImageNet samples at 128x128 resolution for a likelihood density model."
25,Relational State-Space Model for Stochastic Multi-Object Systems,"Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other.Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents.This paper introduces the relational state-space model, a sequential hierarchical latent variable model that makes use of graph neural networks to simulate the joint state transitions of multiple correlated objects.By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics.We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning.The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.",A deep hierarchical state-space model in which the state transitions of correlated objects are coordinated by graph neural networks.A hierarchical latent variable model of sequential dynamic processes of multiple objects when each object exhibits significant stochasticity.The paper presents a relational state-space model that simulates the joint state transitions of correlated objects which are hierarchically coordinated in a graph structure. 26,Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks,"Natural language is hierarchically structured: smaller units are nested within larger units.When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed.While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents.This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated.Our novel recurrent architecture, ordered neurons LSTM, achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.","We introduce a new inductive bias that integrates tree structures in recurrent neural networks.This paper proposes ON-LSTM, a new RNN unit that integrates the latent tree structure into recurrent models and that has good results on language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference." 27,Skip Connections Eliminate Singularities,"Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures.A completely satisfactory explanation for their success remains elusive.Here, we present a novel explanation for the benefits of skip connections in training very deep networks.The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model.Several such singularities have been identified in previous works: overlap singularities caused by the permutation symmetry of nodes in a given layer, elimination singularities corresponding to the elimination, i.e. 
consistent deactivation, of nodes, singularities generated by the linear dependence of the nodes.These singularities cause degenerate manifolds in the loss landscape that slow down learning.We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent.Moreover, for typical initializations, skip connections move the network away from the ""ghosts"" of these singularities and sculpt the landscape around them to alleviate the learning slow-down.These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.","Degenerate manifolds arising from the non-identifiability of the model slow down learning in deep networks; skip connections help by breaking degeneracies.The authors show that elimination singularities and overlap singularities impede learning in deep neural networks, and demonstrate that skip connections can reduce the prevalence of these singularities, speeding up learning.Paper examines the use of skip connections in deep networks as a way of alleviating singularities in the Hessian matrix during training." 28,Learning Actionable Representations with Goal Conditioned Policies,"Representation learning is a central challenge across a range of machine learning areas.In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems.Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner.In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are ""actionable"".These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction.We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks.We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.",Learning state representations which capture factors necessary for controlAn approach to representation learning in the context of reinforcement learning that distinguishes two stages functionally in terms of the actions that are needed to reach them.The paper presents a method to learn representations where proximity in euclidean distance represents states that are achieved by similar policies. 29,Scaling characteristics of sequential multitask learning: Networks naturally learn to learn,"We explore the behavior of a standard convolutional neural net in a setting that introduces classification tasks sequentially and requires the net to master new tasks while preserving mastery of previously learned tasks. 
This setting corresponds to that which human learners face as they acquire domain expertise, for example, as an individual reads a textbook chapter-by-chapter. Through simulations involving sequences of 10 related tasks, we find reason for optimism that nets will scale well as they advance from having a single skill to becoming domain experts. We observed two key phenomena. First, forward facilitation---the accelerated learning of task n+1 having learned n previous tasks---grows with n. Second, backward interference---the forgetting of the n previous tasks when learning task n+1---diminishes with n. Forward facilitation is the goal of research on metalearning, and reduced backward interference is the goal of research on ameliorating catastrophic forgetting. We find that both of these goals are attained simply through broader exposure to a domain.",We study the behavior of a CNN as it masters new tasks while preserving mastery of previously learned tasks.
30,MORTY Embedding: Improved Embeddings without Supervision,"We demonstrate a low-effort method that constructs task-optimized embeddings from existing word embeddings in an unsupervised manner to gain performance on a supervised end-task. This avoids additional labeling or building more complex model architectures by instead providing specialized embeddings better fit for the end-task. Furthermore, the method can be used to roughly estimate whether a specific kind of end-task can be learned from, or is represented in, a given unlabeled dataset, e.g. using publicly available probing tasks. We evaluate our method for diverse word embedding probing tasks and by size of embedding training corpus -- i.e. to explore its use in reduced settings.","Morty refits pretrained word embeddings to either improve overall embedding performance (for multi-task settings) or improve single-task performance, while requiring only minimal effort."
31,Efficient Augmentation via Data Subsampling,"Data augmentation is commonly used to encode invariances in learning methods. However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set. The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply. In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset. We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation.",Selectively augmenting difficult-to-classify points results in efficient training. The authors study the problem of identifying subsampling strategies for data augmentation and propose strategies based on model influence and loss, as well as empirical benchmarking of the proposed methods. The authors propose to use influence or loss-based methods to select a subset of points to use in augmenting data sets for training models where the loss is additive over data points.
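
Entry 31 above proposes subsampling policies based on model influence and loss for choosing which training points to augment. The following is only a minimal sketch of a loss-based policy under assumptions not stated in the abstract (keeping a fixed fraction of the highest-loss points); the exact scoring and thresholding in the paper may differ.

import numpy as np

def select_augmentation_subset(per_example_losses, fraction=0.1):
    # Keep the indices of the highest-loss (hardest) training points as
    # augmentation candidates instead of augmenting the whole training set.
    losses = np.asarray(per_example_losses)
    k = max(1, int(fraction * len(losses)))
    return np.argsort(losses)[-k:]
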
32,Generating Molecules via Chemical Reactions,"Over the last few years, exciting work in deep generative models has produced models able to suggest new organic molecules by generating strings, trees, and graphs representing their structure. While such models are able to generate molecules with desirable properties, their utility in practice is limited due to the difficulty in knowing how to synthesize these molecules. We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules. More specifically, our generative model proposes a bag of initial reactants and uses a reaction model to predict how they react together to generate new molecules. Modeling the entire process of constructing a molecule during generation offers a number of advantages. First, we show that such a model has the ability to generate a wide, diverse set of valid and unique molecules due to the useful inductive biases of modeling reactions. Second, modeling synthesis routes rather than final molecules offers practical advantages to chemists, who are not only interested in new molecules but also in suggestions for stable and safe synthetic routes. Third, we demonstrate the capabilities of our model to also solve one-step retrosynthesis problems, predicting a set of reactants that can produce a target product.","A deep generative model for organic molecules that first generates reactant building blocks before combining these using a reaction predictor. A molecular generative model that generates molecules via a two-step process that provides synthesis routes of the generated molecules, allowing users to examine the synthetic accessibility of generated compounds."
33,Relevant-features based Auxiliary Cells for Robust and Energy Efficient Deep Learning,"Deep neural networks are complex non-linear models used as predictive analytics tools and have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions might go wrong. There have been several efforts in the recent past to detect natural errors, i.e. misclassified inputs, but these mechanisms pose additional energy requirements. To address this issue, we present a novel post-hoc framework to detect natural errors in an energy-efficient way. We achieve this by appending relevant-features based linear classifiers per class, referred to as Relevant-features based Auxiliary Cells (RACs). The proposed technique makes use of the consensus between RACs appended at a few selected hidden layers to distinguish the correctly classified inputs from misclassified inputs. The combined confidence of RACs is utilized to determine if classification should terminate at an early stage. We demonstrate the effectiveness of our technique on various image classification datasets such as CIFAR10, CIFAR100 and Tiny-ImageNet. Our results show that for the CIFAR100 dataset trained on a VGG16 network, RACs can detect 46% of the misclassified examples along with a 12% reduction in energy compared to the baseline network, while 69% of the examples are correctly classified.",Improve the robustness and energy efficiency of a deep neural network using the hidden representations. This paper aims to reduce the misclassifications of deep neural networks in an energy-efficient way by adding Relevant-features based Auxiliary Cells after one or more hidden layers to decide whether to end classification early.
34,On Understanding Knowledge Graph Representation,"Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred. To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping. Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained. We build on a recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities. For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions.","Understanding the structure of knowledge graph representation using insight from word embeddings. This paper attempts to understand the latent structure underlying knowledge graph embedding methods, and demonstrates that a model's ability to represent a relation type depends on the model architecture's limitations with respect to relation conditions. This paper proposes a detailed study on the explainability of link prediction (LP) models by utilizing a recent interpretation of word embeddings to provide a better understanding of LP models' performance."
35,"Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation","Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements. For example, an air quality monitoring system records PM2.5, CO, etc. The resulting time-series data often has missing values due to device outages or communication errors. In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks, which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps. Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNNs because the relationship between any two time stamps can be modeled explicitly. In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data. In order to jointly capture the self-attention across different dimensions while keeping the size of attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention to process each dimension sequentially, yet in an order-independent manner. On three real-world datasets, including our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks.","A novel self-attention mechanism for multivariate, geo-tagged time series imputation. This paper addresses the problem of applying the transformer network to spatiotemporal data in a computationally efficient way, and investigates ways of implementing 3D attention. This paper empirically studies the effectiveness of transformer models for time series data imputation across dimensions of the input."
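
The cross-dimensional approach in entry 35 builds on standard scaled dot-product self-attention, applied along each dimension of the data in turn. For reference, the standard building block is the formula below (this is the general mechanism, not the paper's specific cross-dimensional construction):

\[ \mathrm{Attention}(Q, K, V) \;=\; \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V, \]

where Q, K, and V are query, key, and value matrices computed from the input and d_k is the key dimension.
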
36,Document Enhancement System Using Auto-encoders,The conversion of scanned documents to digital forms is performed using Optical Character Recognition (OCR) software. This work focuses on improving the quality of scanned documents in order to improve the OCR output. We create an end-to-end document enhancement pipeline which takes in a set of noisy documents and produces clean ones. Deep neural network-based denoising auto-encoders are trained to improve the OCR quality. We train a blind model that works on different noise levels of scanned text documents. Results are shown for blurring and watermark noise removal from noisy scanned documents.,"We designed and tested a REDNET (ResNet Encoder-Decoder) with 8 skip connections to remove noise from documents, including blurring and watermarks, resulting in a high-performance deep network for document image cleanup."
37,A Perturbation Analysis of Input Transformations for Adversarial Attacks,"The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today. Ironically, many new defenses are based on a simple observation - the adversarial inputs themselves are not robust, and small perturbations to the attacking input often recover the desired prediction. While the intuition is somewhat clear, a detailed understanding of this phenomenon is missing from the research literature. This paper presents a comprehensive experimental analysis of when and why perturbation defenses work and potential mechanisms that could explain their effectiveness in different settings.","We identify a family of defense techniques and show that both deterministic lossy compression and randomized perturbations to the input lead to similar gains in robustness. This paper discusses ways of destabilizing a given adversarial attack, what makes adversarial images non-robust, and whether it is possible for attackers to use a universal model of perturbations to make their adversarial examples robust against such perturbations. The paper studies the robustness of adversarial attacks to transformations of their input."
38,On the Tunability of Optimizers in Deep Learning,"There is no consensus yet on the question of whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD. In this work, we fill in the important yet ambiguous concept of ‘ease-of-use’ by defining an optimizer’s tunability: How easy is it to find good hyperparameter configurations using automatic random hyperparameter search? We propose a practical and universal quantitative measure for optimizer tunability that can form the basis for a fair optimizer benchmark. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, we find that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning.","We provide a method to benchmark optimizers that is cognizant of the hyperparameter tuning process. Introduction of a novel metric to capture the tunability of an optimizer, and a comprehensive empirical comparison of deep learning optimizers under different amounts of hyper-parameter tuning. This paper introduces a simple measure of tunability that allows comparing optimizers under resource constraints, finding that it is easiest to find well-performing hyperparameter configurations by tuning Adam's learning rate."
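
Entry 38 measures tunability via automatic random hyperparameter search. The snippet below is a minimal sketch of such a search over only the learning rate, with an assumed log-uniform range and a hypothetical train_and_eval callable that returns a validation score; the paper's actual search space and tunability metric are richer than this.

import random

def random_search(train_and_eval, budget=20):
    # Sample learning rates log-uniformly and keep the best-scoring one;
    # a larger budget corresponds to more hyperparameter tuning effort.
    best_lr, best_score = None, float("-inf")
    for _ in range(budget):
        lr = 10 ** random.uniform(-5, -1)
        score = train_and_eval(lr)
        if score > best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score
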
39,Y-net: A Physics-constrained and Semi-supervised Learning Approach to the Phase Problem in Computational Electron Imaging,"The phase problem in diffraction physics is one of the oldest inverse problems in all of science.The central difficulty that any approach to solving this inverse problem must overcome is that half of the information, namely the phase of the diffracted beam, is always missing.In the context of electron microscopy, the phase problem is generally non-linear and solutions provided by phase-retrieval techniques are known to be poor approximations to the physics of electrons interacting with matter.Here, we show that a semi-supervised learning approach can effectively solve the phase problem in electron microscopy/scattering.In particular, we introduce a new Deep Neural Network, Y-net, which simultaneously learns a reconstruction algorithm via supervised training in addition to learning a physics-based regularization via unsupervised training.We demonstrate that this constrained, semi-supervised approach is an order of magnitude more data-efficient and accurate than the same model trained in a purely supervised fashion.In addition, the architecture of the Y-net model provides for a straightforward evaluation of the consistency of the model's prediction during inference and is generally applicable to the phase problem in other settings.",We introduce a semi-supervised deep neural network to approximate the solution of the phase problem in electron microscopy 40,Word2net: Deep Representations of Language,"Word embeddings extract semantic features of words from large datasets of text.Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.Here we propose word2net, a method that replaces their linear parametrization with neural networks.For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence.Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model.For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information.We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches.Quantitatively, we found that word2net outperforms popular embedding methods on predicting held-out words and that sharing parameters based on part of speech further boosts performance.Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information.","Word2net is a novel method for learning neural network representations of words that can use syntactic information to learn better semantic features.This paper extends SGNS with an architectural change from a bag-of-words model to a feedforward model, and contributes a new form of regularization by tying a subset of layers between different associated networks.A method to use a non-linear combination of context vectors for learning vector representations of words, where the main idea is to replace each word embedding by a neural network." 
41,Functional Annotation of Human Cognitive States using Graph Convolution Networks,"A key goal in neuroscience is to understand brain mechanisms of cognitive functions.An emerging approach is to study “brain states” dynamics using functional magnetic resonance imaging.So far in the literature, brain states have typically been studied using 30 seconds of fMRI data or more, and it is unclear to which extent brain states can be reliably identified from very short time series.In this project, we applied graph convolutional networks to decode brain activity over short time windows in a task fMRI dataset, i.e. associate a given window of fMRI time series with the task used.Starting with a populational brain graph with nodes defined by a parcellation of cerebral cortex and the adjacent matrix extracted from functional connectome, GCN takes a short series of fMRI volumes as input, generates high-level domain-specific graph representations, and then predicts the corresponding cognitive state.We investigated the performance of this GCN ""cognitive state annotation"" in the Human Connectome Project database, which features 21 different experimental conditions spanning seven major cognitive domains, and high temporal resolution task fMRI data.Using a 10-second window, the 21 cognitive states were identified with an excellent average test accuracy of 89%.As the HCP task battery was designed to selectively activate a wide range of specialized functional networks, we anticipate the GCN annotation to be applicable as a base model for other transfer learning applications, for instance, adapting to new task domains.","Using a 10s window of fMRI signals, our GCN model identified 21 different task conditions from HCP dataset with a test accuracy of 89%." 42,Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification,"Modern deep neural networks require high memory consumption and large computational loads. 
In order to deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including the line of works on factorization methods.Factorization methods approximate the weight matrix of a DNN layer with multiplication of two or multiple low-rank matrices.However, it is hard to measure the ranks of DNN layers during the training process.Previous works mainly induce low-rank through implicit approximations or via costly singular value decomposition process on every training step.The former approach usually induces a high accuracy loss while the latter prevents DNN factorization from efficiently reaching a high compression rate.In this work, we propose SVD training, which first applies SVD to decompose DNNs layers and then performs training on the full-rank decomposed weights.To improve the training quality and convergence, we add orthogonality regularization to the singular vectors, which ensure the valid form of SVD and avoid gradient vanishing/exploding.Low-rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer.Singular value pruning is applied at the end to reach a low-rank model.We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve higher reduction on computation load under the same accuracy, comparing to not only previous factorization methods but also state-of-the-art filter pruning methods.","Efficiently inducing low-rank deep neural networks via SVD training with sparse singular values and orthogonal singular vectors.This paper introduces an approach to network compression by encouraging the weight matrix in each layer to have a low rank and explicitly factorizing the weight matrices into an SVD-like factorization for treatment as new parameters.Proposal to parametrize each layer of a deep neural network, before training, with a low-rank matrix decomposition, accordingly replace convolutions with two consecutive convolutions, and then train the decomposed method." 43,Few-Shot Regression via Learned Basis Functions,"The recent rise in popularity of few-shot learning algorithms has enabled models to quickly adapt to new tasks based on only a few training samples.Previous few-shot learning works have mainly focused on classification and reinforcement learning.In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of appropriate basis functions.This enables a few labelled samples to approximate the function.We design a Feature Extractor network to encode basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task.We show that our model outperforms the current state of the art meta-learning methods in various regression tasks.",We propose a few-shot learning model that is tailored specifically for regression tasksThis paper proposes a novel shot-learning method for small sample regression problems.A method that learns a regression model with a few samples and outperforms other methods. 
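For the SVD training scheme summarized in entry 42 above, a layer is stored directly in factored form and regularized toward orthogonal singular vectors and sparse singular values, with small singular values pruned at the end. The sketch below is a hedged illustration under those assumptions; the class name, penalty weights, and pruning step are placeholders rather than the paper's implementation.

```python
# Minimal sketch of an SVD-parameterized linear layer with orthogonality and
# singular-value sparsity regularizers; names and weights are illustrative.
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer stored as U * diag(s) * V^T instead of a dense W."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.01)
        self.s = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(in_features, rank) * 0.01)

    def forward(self, x):                        # y = x V diag(s) U^T
        return ((x @ self.V) * self.s) @ self.U.t()

    def orthogonality_penalty(self):             # keep U and V near-orthonormal
        eye = torch.eye(self.U.shape[1])
        return ((self.U.t() @ self.U - eye) ** 2).sum() + \
               ((self.V.t() @ self.V - eye) ** 2).sum()

    def sparsity_penalty(self):                  # L1 pushes singular values to zero
        return self.s.abs().sum()

layer = SVDLinear(128, 64, rank=32)
x = torch.randn(8, 128)
task_loss = layer(x).pow(2).mean()               # stand-in for the real task loss
loss = task_loss + 1e-3 * layer.orthogonality_penalty() + 1e-4 * layer.sparsity_penalty()
loss.backward()
# After training, small entries of layer.s would be pruned to obtain a low-rank layer.
```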
44,Discriminative out-of-distribution detection for semantic segmentation,"Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes.However, such an assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution input.These failures are bound to happen in most real-life applications since current visual ontologies are far from being comprehensive.We propose to address this issue by discriminative detection of OOD pixels in input data.Different from recent approaches, we avoid making decisions by observing only the training dataset of the primary model trained to solve the desired computer vision task.Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger ""background"" dataset which approximates the variety of the visual world.We perform our experiments on high resolution natural images in a dense prediction setup.We use several road driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset.We evaluate our approach on WildDash test, which is currently the only public test dataset with out-of-distribution images.The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.","We present a novel approach for detecting out-of-distribution pixels in semantic segmentation.This paper addresses out-of-distribution detection for helping the segmentation process, and proposes an approach of training a binary classifier that distinguishes image patches from a known set of classes from those of unknown classes.This paper aims to detect out-of-distribution pixels for semantic segmentation, and this work utilizes data from other domains to detect undetermined classes to model uncertainty better." 
45,AutoQ: Automated Kernel-Wise Neural Network Quantization ,"Network quantization is one of the most hardware friendly techniques to enable the deployment of convolutional neural networks on low-power mobile devices.Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.The quantization bitwidth or bit number directly decides the inference accuracy, latency, energy and hardware overhead.To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs.However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large.The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results.It is difficult for even deep reinforcement learning DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy.In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer.Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.","Accurate, Fast and Automated Kernel-Wise Neural Network Quantization with Mixed Precision using Hierarchical Deep Reinforcement LearningA method for quantizing neural network weights and activations that uses deep reinforcement learning to select bitwidth for individual kernels in a layer and that achieves better performance, or latency, than prior approaches.This paper proposes to automatically search quantization schemes for each kernel in the neural network, using hierarchial RL to guide the search. 
" 46,Gaggle: Visual Analytics for Model Space Navigation,"Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems.However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models.Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from.This poses complexity to navigate this model space to find the right model for the data and the task.In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space.Further translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks.Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task.The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users.","Gaggle, an interactive visual analytic system to help users interactively navigate model space for classification and ranking tasks.A new visual analytic system which aims to enable non-expert users to interactively navigate a model space by using a demonstration-based approach.A visual analytics system that helps novice analysts navigate model space in performing classification and ranking tasks." 47,Extractor-Attention Network: A New Attention Network with Hybrid Encoders for Chinese Text Classification,"Chinese text classification has received more and more attention today.However, the problem of Chinese text representation still hinders the improvement of Chinese text classification, especially the polyphone and the homophone in social media.To cope with it effectively, we propose a new structure, the Extractor, based on attention mechanisms and design novel attention networks named Extractor-attention network.Unlike most of previous works, EAN uses a combination of a word encoder and a Pinyin character encoder instead of a single encoder.It improves the capability of Chinese text representation.Moreover, compared with the hybrid encoder methods, EAN has more complex combination architecture and more reducing parameters structures.Thus, EAN can take advantage of a large amount of information that comes from multi-inputs and alleviates efficiency issues.The proposed model achieves the state of the art results on 5 large datasets for Chinese text classification.","We propose a novel attention networks with the hybird encoder to solve the text representation issue of Chinese text classification, especially the language phenomena about pronunciations such as the polyphone and the homophone.This paper proposes an attention-based model consisting of the word encoder and Pinyin encoder for the Chinese text classification task, and extends the architecture for the Pinyin character encoder.Proposal for an attention network where both word and pinyin are considered for Chinese representation, with improved results shown in several datasets for text classification." 
48,Imitation Learning from Visual Data with Multiple Intentions,"Recent advances in learning from demonstrations with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs.LfD algorithms generally assume learning from single task demonstrations.In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering.Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior.In this paper we present an LfD approach for learning multiple modes of behavior from visual data.Our approach is based on a stochastic deep neural network, which represents the underlying intention in the demonstration as a stochastic activation in the network.We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module.We demonstrate our method on real robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data.Video results are available at https://vimeo.com/240212286/fd401241b9.","Multi-modal imitation learning from unstructured demonstrations using a stochastic neural network modeling intention. A new sampling-based approach for inference in latent variable models that applies to multi-modal imitation learning and works better than deterministic neural networks and stochastic neural networks for a real visual robotics task.This paper shows how to learn several modalities using imitation learning from visual data using stochastic Neural Networks, and a method for learning from demonstrations where several modalities of the same task are given." 
49,Building Hierarchical Interpretations in Natural Language via Feature Interaction Detection,"The interpretability of neural networks has become crucial for their applications in real world with respect to the reliability and trustworthiness.Existing explanation generation methods usually provide important features by scoring their individual contributions to the model prediction and ignore the interactions between features, which eventually provide a bag-of-words representation as explanation.In natural language processing, this type of explanations is challenging for human user to understand the meaning of an explanation and draw the connection between explanation and model prediction, especially for long texts.In this work, we focus on detecting the interactions between features, and propose a novel approach to build a hierarchy of explanations based on feature interactions.The proposed method is evaluated with three neural classifiers, LSTM, CNN, and BERT, on two benchmark text classification datasets.The generated explanations are assessed by both automatic evaluation measurements and human evaluators.Experiments show the effectiveness of the proposed method in providing explanations that are both faithful to models, and understandable to humans.","A novel approach to construct hierarchical explanations for text classification by detecting feature interactions.A novel method for providing explanations for predicitions made by text classifiers that outperforms baselines on word level importance scores, and a new metric, cohesion loss, to evaluate span-level importance.An interpretation method based on feature interactions and feature importance score as compared to independent feature contributions." 50,Dynamic Channel Pruning: Feature Boosting and Suppression,"Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression, a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time.FBS introduces small auxiliary connections to existing convolutional layers.In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels.FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs.We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification.Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.",We make convolutional layers run faster by dynamically boosting and suppressing channels in feature computation.A feature boosting and suppression method for dynamic channel pruning that predicts the importance of each channel and then uses an affine function to amplify/suppress channel importance.Proposal for a channel pruning method for dynamically selecting channels during testing. 
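The feature boosting and suppression idea in entry 50 above (a tiny auxiliary predictor scores channels per input; the top-k are amplified and the rest skipped) can be illustrated roughly as follows. This is a minimal sketch under stated assumptions: the layer sizes, the global-average-pooled saliency predictor, and the keep ratio are invented for the example, not taken from the paper.

```python
# Rough, assumption-laden sketch of "predict channel saliency, keep only the
# top-k channels"; sizes and the keep ratio are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FBSConv(nn.Module):
    def __init__(self, c_in, c_out, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.saliency = nn.Linear(c_in, c_out)    # tiny auxiliary predictor
        self.k = max(1, int(c_out * keep_ratio))

    def forward(self, x):
        g = F.relu(self.saliency(x.mean(dim=(2, 3))))      # per-example channel scores
        kth = torch.topk(g, self.k, dim=1).values[:, -1:]   # k-th largest score
        mask = (g >= kth).float() * g                       # boost kept, suppress rest
        return self.conv(x) * mask[:, :, None, None]

y = FBSConv(16, 32)(torch.randn(2, 16, 8, 8))
print(y.shape)                                              # torch.Size([2, 32, 8, 8])
```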
51,Characterizing Sparse Connectivity Patterns in Neural Networks,"We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density.We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers.Based on our sparsifying technique, we introduce the 'scatter' metric to characterize the quality of a particular connection pattern.As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.","Neural networks can be pre-defined to have sparse connectivity without performance degradation.This paper examines sparse connection patterns in upper layers of convolutional image classification networks, and introduces heuristics for distributing connections among windows/groups and a measure called scatter to construct connectivity masks.Proposal to reduce the number of parameters learned by a deep network by setting up sparse connection weights in classification layers, and introduction of a concept of ""scatter.""" 52,Benchmarking Adversarial Robustness,"Deep neural networks are vulnerable to adversarial examples, which has become one of the most important problems in the development of deep learning.While a lot of efforts have been made in recent years, it is of great significance to perform correct and complete evaluations of the adversarial attack and defense algorithms.In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.After briefly reviewing a wide range of representative attack and defense methods, we perform large-scale experiments with two robustness curves as the fair-minded evaluation criteria to fully understand the performance of these methods.Based on the evaluation results, we draw several important findings and provide insights for future research.","We provide a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness of deep learning models.This paper presents an evaluation of different kinds of classification models under various adversarial attack methods.A large-scale empirical study comparing different adversarial attack and defense techniques, and use of accuracy vs. perturbation budget and accuracy vs. attack strength curves to evaluate attacks and defenses." 53,Context Dependent Modulation of Activation Function,"We propose a modification to traditional Artificial Neural Networks, which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron changes firing modes according to peripheral factors as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities at run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. 
This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks.","We propose a modification to traditional Artificial Neural Networks motivated by the biology of neurons to enable the shape of the activation function to be context dependent.A method to scale the activations of a layer of neurons in an ANN depending on inputs to that layer that reports improvements above the baselines.Introduction of an architectural change for basic neurons in a neural network, and the idea to multiply the neuron's linear combination output by a modulator prior to feeding it into the activation function." 54,Mix-review: Alleviate Forgetting in the Pretrain-Finetune Framework for Neural Language Generation Models,"In this work, we study how the large-scale pretrain-finetune framework changes the behavior of a neural language generator.We focus on the transformer encoder-decoder model for the open-domain dialogue response generation task.We find that after standard fine-tuning, the model forgets important language generation skills acquired during large-scale pre-training.We demonstrate the forgetting phenomenon through a detailed behavior analysis from the perspectives of context sensitivity and knowledge transfer.Adopting the concept of data mixing, we propose an intuitive fine-tuning strategy named ""mix-review"".We find that mix-review effectively regularizes the fine-tuning process, and the forgetting problem is largely alleviated.Finally, we discuss interesting behavior of the resulting dialogue model and its implications.","We identify the forgetting problem in fine-tuning of pre-trained NLG models, and propose the mix-review strategy to address it.This paper analyzes the forgetting problem in the pretraining-finetuning framework from the perspective of context sensitivity and knowledge transfer, and proposes a fine-tuning strategy which outperforms the weight decay method.Study of the forgetting problem in the pretrain-finetune framework, specifically in dialogue response generation tasks, and proposal of a mix-review strategy to alleviate the forgetting issue." 55,Improved Modeling of Complex Systems Using Hybrid Physics/Machine Learning/Stochastic Models,"Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and invoking decorrelation objective functions, we create models which can predict more complex systems.The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity. 
Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.","Improved modeling of complex systems uses hybrid neural/domain model composition, new decorrelation loss functions and extrapolative test sets This paper conducts experiments to compare the extrapolative predictions of various hybrid models which compose physical models, neural networks and stochastic models, and tackles the challenge of unmodeled dynamics being a bottleneck.This paper presents approaches for combining neural network with non-NN models to predict behavior of complex physical systems." 56,Scoring-Aggregating-Planning: Learning task-agnostic priors from interactions and sparse rewards for zero-shot generalization,"Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning.In this paper, we propose Scoring-Aggregating-Planning, a framework that can learn task-agnostic semantics and dynamics priors from arbitrary quality interactions as well as the corresponding sparse rewards and then plan on unseen tasks in zero-shot condition.The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning.Many of previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data while we can achieve a similar goal with pure off-policy imperfect data.Instantiating our framework results in a generalizable policy to unseen tasks.Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.","We learn dense scores and dynamics model as priors from exploration data and use them to induce a good policy in new tasks in zero-shot condition.This paper discusses zero shot generalization into new environments, and proposes an approach with results on Grid-World, Super Mario Bros, and 3D Robotics.A method aiming to learn task-agnostic priors for zero-shot generalization, with the idea to employ a modeling approach on top of the model-based RL framework." 57,Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior,"We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network, and one with an effective temperature interpretation. 
Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.","Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behaviorThe authors suggest that statistical mechanics ideas will help to understand generalization properties of deep neural networks, and give an approach that provides strong qualitative descriptions of empirical results regarding deep neural networks and learning algorithms.A set of ideas related to theoretical understanding generalization properties of multilayer neural networks, and a qualitative analogy between behaviours in deep learning and results from quantitative statistical physics analysis of single and two-layer neural networks." 58,Improving Sample Complexity with Observational Supervision,"Supervised machine learning models for high-value computer vision applications such as medical image classification often require large datasets labeled by domain experts, which are slow to collect, expensive to maintain, and static with respect to changes in the data distribution.In this context, we assess the utility of observational supervision, where we take advantage of passively-collected signals such as eye tracking or “gaze” data, to reduce the amount of hand-labeled data needed for model training.Specifically, we leverage gaze information to directly supervise a visual attention layer by penalizing disagreement between the spatial regions the human labeler looked at the longest and those that most heavily influence model output.We present evidence that constraining the model in this way can reduce the number of labeled examples required to achieve a given performance level by as much as 50%, and that gaze information is most helpful on more difficult tasks.","We explore using passively collected eye-tracking data to reduce the amount of labeled data needed during training.A method to use gaze information to reduce the sample complexity of a model and the needed labeling effort to get a target performance, with improved results in middle-sized samples and harder tasks.A method to incorporate gaze signals into standard CNNs for image classification, adding a loss function term based in the difference between the model's Class Activation Map and the map constructed from eye tracking information." 
59,Empowering Graph Representation Learning with Paired Training and Graph Co-Attention,"Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions.The setup of prediction tasks over entire graphs, however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction.Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs.Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them.We first show that such a setup provides natural benefits on a pairwise graph classification task, and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark.Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.","We use graph co-attention in a paired graph training system for graph classification and regression.This paper injects a multi-head co-attention mechanism in GCN that allows one drug to attend to another drug during drug side effect prediction.A method to extend graph-based learning with a co-attentional layer, which outperforms other previous ones on a pairwise graph classification task." 60,Improved Adversarial Image Captioning,"In this paper we study image captioning as a conditional GAN training, proposing both a context-aware LSTM captioner and co-attentive discriminator, which enforces semantic alignment between images and captions.We investigate the viability of two discrete GAN training methods: Self-critical Sequence Training and Gumbel Straight-Through and demonstrate that SCST shows more stable gradient behavior and improved results over Gumbel ST.","Image captioning as a conditional GAN training with novel architectures, also study two discrete GAN training methods. An improved GAN model for image captioning that proposes a context-aware LSTM captioner, introduces a stronger co-attentive discriminator with better performance, and uses SCST for GAN training." 
61,Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks,"Deep neural networks have achieved great success in classification tasks in recent years.However, one major problem on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions and therefore, most existing classification algorithms assume that all classes are known prior to the training stage.In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution examples without compromising much of its classification accuracy on the test examples from known classes.Based on the Outlier Exposure technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classification tasks.Additionally, the way this method was constructed makes it suitable for training any classification algorithm that is based on Maximum Likelihood methods.","We propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with Outlier Exposure both on image and text classification tasks.This paper tackles the problems of out-of-distribution detection and model calibration by adapting the loss function of the Outlier Exposure technique, with results demonstrating increased performance over OE on vision and text benchmarks and improved model calibration.Proposal for a new loss function to train the network with Outlier Exposure which leads to better OOD detection compared to simple loss functions using KL divergence." 62,Implementing Inductive bias for different navigation tasks through diverse RNN attractors,"Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map.The precise form of this representation is often considered to be a metric representation of space.An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks.Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment.To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output.We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state.These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities.Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods.Furthermore, we demonstrate that non-metric representations are useful for navigation tasks.","Task-agnostic pre-training can shape an RNN's attractor landscape and form diverse inductive biases for different navigation tasks.This paper studies the internal representations of recurrent neural networks trained on navigation tasks, and finds that RNNs pre-trained to use path integration contain 2D continuous attractors while RNNs pre-trained for landmark memory contain discrete attractors.This paper explores how pre-training recurrent networks on different navigational objectives confers different benefits for solving downstream tasks, and shows how different pretraining manifests as different dynamical structures in the networks after pre-training." 63,Scalable Neural Learning for Verifiable Consistency with Temporal Specifications,"Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features.In this context, it has also been observed that folding the verification procedure into training makes it easier to train verifiably robust models.In this paper, we extend the applicability of verified training by extending it to recurrent neural network architectures and complex specifications that go beyond simple adversarial robustness, particularly specifications that capture temporal properties like requiring that a robot periodically visits a charging station or that a language model always produces sentences of bounded length.Experiments show that while models trained using standard training often violate desired specifications, our verified training method produces models that both perform well and can be shown to be provably consistent with specifications.","Neural Network Verification for Temporal Properties and Sequence Generation Models.This paper extends interval bound propagation to recurrent computation and auto-regressive models, introduces and extends Signal Temporal Logic for specifying temporal constraints, and provides proof that STL with bound propagation can ensure neural models conform to temporal specifications.A way to train time-series regressors verifiably with respect to a set of rules defined by signal temporal logic, and work in deriving bound propagation rules for the STL language." 
64,TabNN: A Universal Neural Network Solution for Tabular Data,"Neural Network has achieved state-of-the-art performances in many tasks within image, speech, and text domains.Such great success is mainly due to special structure design to fit the particular data patterns, such as CNN capturing spatial locality and RNN modeling sequential dependency.Essentially, these specific NNs achieve good performance by leveraging the prior knowledge over corresponding domain data.Nevertheless, there are many applications with all kinds of tabular data in other domains.Since there are no shared patterns among these diverse tabular data, it is hard to design specific structures to fit them all.Without careful architecture design based on domain knowledge, it is quite challenging for NN to reach satisfactory performance in these tabular data domains.To fill the gap of NN in tabular data learning, we propose a universal neural network solution, called TabNN, to derive effective NN architectures for tabular data in all kinds of tasks automatically.Specifically, the design of TabNN follows two principles: and .Since GBDT has empirically proven its strength in modeling tabular data, we use GBDT to power the implementation of TabNN.Comprehensive experimental analysis on a variety of tabular datasets demonstrate that TabNN can achieve much better performance than many baseline solutions.","We propose a universal neural network solution to derive effective NN architectures for tabular data automatically.A new Neural Network training procedure, designed for tabular data, that seeks to leverage feature clusters extracted from GBDTs.Proposal for a hybrid machine learning algorithm using Gradient Boosted Decision Trees and Deep Neural Networks, with intended research direction on tabular data." 
65,Scalable Rule Learning in Probabilistic Knowledge Bases,"Knowledge Bases are becoming increasingly large, sparse and probabilistic.These KBs are typically used to perform query inferences and rule mining.But their efficacy is only as high as their completeness.Efficiently utilizing incomplete KBs remains a major challenge as the current KB completion techniques either do not take into account the inherent uncertainty associated with each KB tuple or do not scale to large KBs.Probabilistic rule learning not only considers the probability of every KB tuple but also tackles the problem of KB completion in an explainable way.For any given probabilistic KB, it learns probabilistic first-order rules from its relations to identify interesting patterns.But, the current probabilistic rule learning techniques perform grounding to do probabilistic inference for evaluation of candidate rules.It does not scale well to large KBs as the time complexity of inference using grounding is exponential over the size of the KB.In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference -- as faster approach instead of grounding.We compared SafeLearner to the state-of-the-art probabilistic rule learner ProbFOIL+ and to its deterministic contemporary AMIE+ on standard probabilistic KBs of NELL and Yago.Our results demonstrate that SafeLearner scales as good as AMIE+ when learning simple rules and is also significantly faster than ProbFOIL+.",Probabilistic Rule Learning system using Lifted InferenceA model for probabilistic rule learning to automate the completion of probabilistic databases that uses AMIE+ and lifted inference to help computational efficiency. 66,Non-Autoregressive Dialog State Tracking,"Recent efforts in Dialogue State Tracking for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches where the models can generate slot value candidates from the dialogue history itself.These approaches have shown good performance gain, especially in complicated dialogue domains with dynamic slot values.However, they fall short in two aspects: they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among pairs; and existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns.In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also detect dependencies among slots at token level in addition to slot and domain level.Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than the previous state of the art as the dialogue history extends over time.","We propose the first non-autoregressive neural model for Dialogue State Tracking (DST), achieving the SOTA accuracy (49.04%) on MultiWOZ2.1 benchmark, and reducing inference latency by an order of magnitude.A new model for the DST task that reduces 
inference time complexity with a non-autoregressive decoder, obtains competitive DST accuracy, and shows improvements over other baselines.Proposal for a model that is capable of tracking dialogue states in a non-recursive fashion." 67,Deep 3D-Zoom Net: Unsupervised Learning of Photo-Realistic 3D-Zoom,"The 3D-zoom operation is the positive translation of the camera in the Z-axis, perpendicular to the image plane.In contrast, the optical zoom changes the focal length and the digital zoom is used to enlarge a certain region of an image to the original image size.In this paper, we are the first to formulate an unsupervised 3D-zoom learning problem where images with an arbitrary zoom factor can be generated from a given single image.An unsupervised framework is convenient, as it is a challenging task to obtain a 3D-zoom dataset of natural scenes due to the need for special equipment to ensure camera movement is restricted to the Z-axis.Besides, the objects in the scenes should not move when being captured, which hinders the construction of a large dataset of outdoor scenes.We present a novel unsupervised framework to learn how to generate arbitrarily 3D-zoomed versions of a single image, not requiring a 3D-zoom ground truth, called the Deep 3D-Zoom Net.The Deep 3D-Zoom Net incorporates the following features: transfer learning from a pre-trained disparity estimation network via a back re-projection reconstruction loss; a fully convolutional network architecture that models depth-image-based rendering, taking into account high-frequency details without the need for estimating the intermediate disparity; and incorporating a discriminator network that acts as a no-reference penalty for unnaturally rendered areas.Even though there is no baseline to fairly compare our results, our method outperforms previous novel view synthesis research in terms of realistic appearance on large camera baselines.We performed extensive experiments to verify the effectiveness of our method on the KITTI and Cityscapes datasets.","A novel network architecture to perform Deep 3D Zoom or close-ups.A method for creating a ""zoomed image"" for a given input image,and a novel back re-projection reconstruction loss that allows the network to learn underlying 3D structure and maintain a natural appearance.An algorithm for synthesizing 3D-zoom behavior when the camera is moving forward, a network structure incorporating disparity estimation in a GANs framework to synthesize novel views, and a proposed new computer vision task." 
68,A closer look at the approximation capabilities of neural networks,"The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions σ, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold ε, if and only if σ is non-polynomial.In this paper, we give a direct algebraic proof of the theorem.Furthermore we shall explicitly quantify the number of hidden units required for approximation.Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with hidden units, can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions.In the general case that f is any continuous function, we show there exists some N in O, such that N hidden units would suffice to approximate f.We also show that this uniform approximation property still holds even under seemingly strong conditions imposed on the weights.We highlight several consequences: For any δ > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < δ. There exists some λ>0, such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>λ. If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1.","A quantitative refinement of the universal approximation theorem via an algebraic approach.The authors derive the universal approximation property proofs algebraically and assert that the results are general to other kinds of neural networks and similar learners.A new proof of Leshno's version of the universal approximation property for neural networks, and new insights into the universal approximation property." 69,Robust Text Classifier on Test-Time Budgets,"In this paper, we design a generic framework for learning a robust text classification model that achieves accuracy comparable to standard full models under test-time budget constraints.We take a different approach from existing methods and learn to dynamically delete a large fraction of unimportant words by a low-complexity selector such that the high-complexity classifier only needs to process a small fraction of important words.In addition, we propose a new data aggregation method to train the classifier, allowing it to make accurate predictions even on fragmented sequences of words.Our end-to-end method achieves state-of-the-art performance while its computational complexity scales linearly with the small fraction of important words in the whole corpus.Besides, a single deep neural network classifier trained by our framework can be dynamically tuned to different budget levels at inference time.","A modular framework for document classification and a data aggregation technique for making the framework robust to various distortions and noise while focusing only on the important words. The authors consider training an RNN-based text classifier where there is a resource restriction on test-time prediction, and provide an approach using a masking mechanism to reduce words/phrases/sentences used in prediction followed by a classifier to handle those components." 
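A rough sketch of the selector-plus-classifier pipeline described in entry 69 above: a low-cost scorer keeps only a fraction of the words, and only that fragment reaches the expensive classifier. Everything below (the hard top-k selection, the LSTM classifier, the keep ratio) is an illustrative assumption; in particular, the paper trains the selector and uses its data aggregation method in ways this toy does not capture.

```python
# Toy budgeted pipeline: cheap per-word scorer selects a fragment, the
# expensive classifier only sees that fragment. Placeholders throughout.
import torch
import torch.nn as nn

class BudgetedClassifier(nn.Module):
    def __init__(self, vocab, dim, n_classes, keep=0.3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.selector = nn.Linear(dim, 1)              # low-cost word scorer
        self.classifier = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_classes)
        self.keep = keep

    def forward(self, tokens):                         # tokens: (batch, seq)
        e = self.emb(tokens)
        scores = self.selector(e).squeeze(-1)          # (batch, seq)
        k = max(1, int(tokens.shape[1] * self.keep))
        idx = scores.topk(k, dim=1).indices.sort(dim=1).values   # keep word order
        kept = torch.gather(e, 1, idx.unsqueeze(-1).expand(-1, -1, e.shape[-1]))
        _, (h, _) = self.classifier(kept)              # classify the fragment only
        return self.out(h[-1])

logits = BudgetedClassifier(1000, 64, 2)(torch.randint(0, 1000, (4, 50)))
print(logits.shape)                                    # torch.Size([4, 2])
```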
70,PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search,"Differentiable architecture search provided a fast solution for finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-net and searching for an optimal architecture.In this paper, we present a novel approach, namely Partially-Connected DARTS, by sampling a small part of the super-net to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising the performance.In particular, we perform operation search in a subset of channels while bypassing the held out part in a shortcut.This strategy may suffer from an undesired inconsistency in selecting the edges of the super-net caused by sampling different channels.We solve it by introducing edge normalization, which adds a new set of edge-level hyper-parameters to reduce uncertainty in search.Thanks to the reduced memory cost, PC-DARTS can be trained with a larger batch size and, consequently, enjoys both faster speed and higher training stability.Experimental results demonstrate the effectiveness of the proposed method.Specifically, we achieve an error rate of 2.57% on CIFAR10 within merely 0.1 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.2% on ImageNet within 3.8 GPU-days for search.Our code has been made available at https://www.dropbox.com/sh/on9lg3rpx1r6dkf/AABG5mt0sMHjnEJyoRnLEYW4a?dl=0.","Allowing partial channel connection in super-networks to regularize and accelerate differentiable architecture searchAn extension of the neural architecture search method DARTS that addresses its shortcoming of immense memory cost by using a random subset of channels and a method to normalize edges.This paper proposes to improve DARTS in terms of training efficiency, from large memory and computing overheads, and proposes a partially-connected DARTS with partial channel connection and edge normalization." 71,I love your chain mail! 
Making knights smile in a fantasy game world,"Dialogue research tends to distinguish between chit-chat and goal-oriented tasks.While the former is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a more straightforward learning signal.Humans effortlessly combine the two, and engage in chit-chat for example with the goal of exchanging information or eliciting a specific response.Here, we bridge the divide between these two domains in the setting of a rich multi-player text-based fantasy environment where agents and humans engage in both actions and dialogue.Specifically, we train a goal-oriented model with reinforcement learning via self-play against an imitation-learned chit-chat model with two new approaches: the policy either learns to pick a topic or learns to pick an utterance given the top-k utterances.We show that both models outperform a strong inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.","Agents interact (speak, act) and can achieve goals in a rich world with diverse language, bridging the gap between chit-chat and goal-oriented dialogue.This paper studies a multiagent dialog task in which the learning agent aims to generate natural language actions that elicit a particular action from the other agent, and shows RL-agents can achieve higher task completion levels than imitation learning baselines.This paper explores the goal-oriented dialogue setting with reinforcement learning in a Fantasy Text Adventure Game and observes that the RL approaches outperform supervised learning models." 72,Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies,"We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies.Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation.We propose estimated mixture policy, a novel class of partially policy-agnostic methods to accurately estimate those quantities.With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction while it also offers a useful induction bias for estimating the state-action stationary distribution correction.In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.","A new partially policy-agnostic method for infinite-horizon off-policy policy evalution with multiple known or unknown behavior policies.An estimated mixture policy which takes ideas from off-policy policy evaluation infinite horizon estimators and regression importance sampling for importance weight, and extends them to many policies and unknown policies.An algorithm to solve infinite horizon off policy evaluation with multiple behavior policies by estimating a mixed policy under regression, and theoretical proof that an estimated policy ratio can reduce variance." 
73,Efficient Inference Amortization in Graphical Models using Structured Continuous Conditional Normalizing Flows,"We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of structure.Our gradient flow derives its sparsity pattern from the minimally faithful inverse of its underlying graphical model.We find that this factorization reduces the necessary numbers both of parameters in the neural network and of adaptive integration steps in the ODE solver.Consequently, the throughput at training time and inference time is increased, without decreasing performance in comparison to unconstrained flows.By expressing the structural inversion and the flow construction as compilation passes of a probabilistic programming language, we demonstrate their applicability to the stochastic inversion of realistic models such as convolutional neural networks.","We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of sparsity structure." 74,Reinforcement Learning with Chromatic Networks,"We present a neural architecture search algorithm to construct compact reinforcement learning policies, by combining ENAS and ES in a highly scalable and intuitive way.By defining the combinatorial search space of NAS to be the set of different edge-partitionings into same-weight classes, we represent compact architectures via efficient learned edge-partitionings.For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90 % compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward.We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.","We show that ENAS with ES-optimization in RL is highly scalable, and use it to compactify neural network policies by weight sharing.The authors construct reinforcement learning policies with very few parameters by compressing a feed-forward neural network, forcing it to share weights, and using a reinforcement learning method to learn the mapping of shared weights.This paper combines ideas from ENAS and ES methods for optimisation, and introduces the chromatic network architecture, which partitions weights of the RL network into tied sub-groups." 75,Deep Semi-Supervised Anomaly Detection,"Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.Typically anomaly detection is treated as an unsupervised learning problem.In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. 
a subset verified by some domain expert as being normal or anomalous.Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples.Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific.In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection.Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution.We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par with or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.","We introduce Deep SAD, a deep method for general semi-supervised anomaly detection that especially takes advantage of labeled anomalies.A new method to find anomaly data, when some labeled anomalies are given, that applies information theory-derived loss based on normal data usually having lower entropy than abnormal data.Proposal for an abnormal detection framework under settings where unlabeled data, labeled positive data, and labeled negative data are available, and proposal to approach semi-supervised AD from an information theoretic perspective." 76,Toward Understanding Generalization of Over-parameterized Deep ReLU network trained with SGD in Student-teacher Setting,"To analyze deep ReLU network, we adopt a student-teacher setting in which an over-parameterized student network learns from the output of a fixed teacher network of the same depth, with Stochastic Gradient Descent.Our contributions are two-fold.First, we prove that when the gradient is zero at every data point in training, there exists a many-to-one alignment between student and teacher nodes in the lowest layer under mild conditions.This suggests that generalization on unseen data is achievable, even though the same condition often leads to zero training error.Second, analysis of noisy recovery and training dynamics in 2-layer network shows that strong teacher nodes are learned first and subtle teacher nodes are left unlearned until late stage of training.As a result, it could take a long time to converge into these small-gradient critical points.Our analysis shows that over-parameterization plays two roles: it is a necessary condition for alignment to happen at the critical points, and in training dynamics, it helps student nodes cover more teacher nodes with fewer iterations.Both improve generalization.Experiments justify our findings.","This paper analyzes training dynamics and critical points of training deep ReLU network via SGD in the teacher-student setting. Study of over-parametrization in student-teacher multilayer ReLU networks, a theoretical part about SGD critical points for the teacher-student setting, and a heuristic and empirical part on dynamics of the SGD algorithm as a function of teacher networks."
77,On the Global Convergence of Training Deep Linear ResNets,"We study the convergence of gradient descent and stochastic gradient descent for training deep linear residual networks.We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss.Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets.Compared with the global convergence result of GD for training standard deep linear networks, our condition on the neural network width is sharper by a factor of the condition number of the covariance matrix of the training data.In addition, for the first time we establish the global convergence of SGD for training deep linear ResNets and prove a linear convergence rate under an additional condition on the global minimum of the training loss.","Under certain conditions on the input and output linear transformations, both GD and SGD can achieve global convergence for training deep linear ResNets.The authors study the convergence of gradient descent in training deep linear residual networks, and establish a global convergence of GD/SGD and linear convergence rates of GD/SGD.Study of convergence properties of GD and SGD on deep linear resnets, and proof that under certain conditions on the input and output transformations and with zero initialization, GD and SGD converge to global minima." 78,Do deep neural networks learn shallow learnable examples first?,"In this paper, we empirically investigate the training journey of deep neural networks relative to fully trained shallow machine learning models.We observe that the deep neural networks train by learning to correctly classify shallow-learnable examples in the early epochs before learning the harder examples.We build on this observation to suggest a way for partitioning the dataset into hard and easy subsets that can be used for improving the overall training process.Incidentally, we also found evidence of a subset of intriguing examples across all the datasets we considered, that were shallow learnable but not deep-learnable.In order to aid reproducibility, we also duly release our code for this work at https://github.com/karttikeya/Shallow_to_Deep/",We analyze the training process for Deep Networks and show that they start from rapidly learning shallow classifiable examples and slowly generalize to harder data points.
79,Learning Deep Latent-variable MRFs with Amortized Bethe Free Energy Minimization,"While much recent work has targeted learning deep discrete latent variable models with variational inference, this setting remains challenging, and it is often necessary to make use of potentially high-variance gradient estimators in optimizing the ELBO.As an alternative, we propose to optimize a non-ELBO objective derived from the Bethe free energy approximation to an MRF's partition function.This objective gives rise to a saddle-point learning problem, which we train inference networks to approximately optimize.The derived objective requires no sampling, and can be efficiently computed for many MRFs of interest.We evaluate the proposed approach in learning high-order neural HMMs on text, and find that it often outperforms other approximate inference schemes in terms of true held-out log likelihood.At the same time, we find that all the approximate inference-based approaches to learning high-order neural HMMs we consider underperform learning with exact inference by a significant margin.","Learning deep latent variable MRFs with a saddle-point objective derived from the Bethe partition function approximation.A method for learning deep latent-variable MRF with an optimization objective that utilizes Bethe free energy, that also solves the underlying constraints of Bethe free energy optimizations.An objective for learning latent variable MRFs based on Bethe free energy and amortized inference, different from optimizing the standard ELBO." 80,A General Logic-based Approach for Explanation Generation,"In an explanation generation problem, an agent needs to identify and explain the reasons for its decisions to another agent.Existing work in this area is mostly confined to planning-based systems that use automated planning approaches to solve the problem.In this paper, we approach this problem from a new perspective, where we propose a general logic-based framework for explanation generation.In particular, given a knowledge base that entails a formula and a second knowledge base that does not entail it, we seek to identify an explanation that is a subset of the first knowledge base such that its union with the second knowledge base entails the formula.We define two types of explanations, model- and proof-theoretic explanations, and use cost functions to reflect preferences between explanations.Further, we present our algorithm implemented for propositional logic that computes such explanations and empirically evaluate it on random knowledge bases and a planning domain.","A general framework for explanation generation using Logic.This paper researches explanation generation from a KR point of view and conducts experiments measuring explanation size and runtime on random formulas and formulas from a Blocksworld instance.This paper provides a perspective on explanations between two knowledge bases, and runs parallel to work on model reconciliation in planning literature."
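To make the entailment-based definition of an explanation in the entry above concrete, here is a minimal propositional-logic sketch: it brute-forces the smallest subset of the first knowledge base whose union with the second entails the query, using SymPy's satisfiability check (a knowledge base entails a formula iff the knowledge base together with the formula's negation is unsatisfiable). Subset size stands in for the paper's cost functions, and the toy propositions are hypothetical.

    from itertools import combinations
    from sympy import symbols, And, Or, Not
    from sympy.logic.inference import satisfiable

    def entails(kb_formulas, query):
        # KB |= query  iff  KB AND NOT(query) is unsatisfiable.
        return not satisfiable(And(*kb_formulas, Not(query)))

    def explain(kb_a, kb_b, query):
        # Smallest subset e of kb_a such that kb_b united with e entails query.
        for size in range(len(kb_a) + 1):
            for subset in combinations(kb_a, size):
                if entails(list(kb_b) + list(subset), query):
                    return list(subset)
        return None

    # Toy usage with hypothetical propositions.
    p, q, r = symbols("p q r")
    kb_a = [p, Or(Not(p), q)]   # knowledge base that entails q
    kb_b = [r]                  # knowledge base that does not entail q
    print(explain(kb_a, kb_b, q))  # both formulas of kb_a are needed here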
81,Collapse of deep and narrow neural nets,"Recent theoretical work has demonstrated that deep neural networks have superior performance over shallow networks, but their training is more difficult, e.g., they suffer from the vanishing gradient problem.This problem can typically be resolved by the rectified linear unit activation.However, here we show that even for such activation, deep and narrow neural networks will converge to erroneous mean or median states of the target function depending on the loss with high probability.Deep and narrow NNs are encountered in solving partial differential equations with high-order derivatives.We demonstrate this collapse of such NNs both numerically and theoretically, and provide estimates of the probability of collapse.We also construct a diagram of a safe region for designing NNs that avoid the collapse to erroneous states.Finally, we examine different ways of initialization and normalization that may avoid the collapse problem.Asymmetric initializations may reduce the probability of collapse but do not totally eliminate it.","Deep and narrow neural networks will converge to erroneous mean or median states of the target function depending on the loss with high probability.This paper studies failure modes of deep and narrow networks, focusing on as small as possible models for which the undesired behavior occurs.This paper shows that the training of deep ReLU neural networks will converge to a constant classifier with high probability over random initialization if hidden layer widths are too small." 82,MMA Training: Direct Input Space Margin Maximization through Adversarial Training,"We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the ""shortest successful perturbation"", demonstrating a close connection between adversarial losses and the margins.We propose Max-Margin Adversarial training to directly maximize the margins to achieve adversarial robustness.Instead of adversarial training with a fixed perturbation bound, MMA offers an improvement by enabling adaptive selection of the ""correct"" perturbation bound as the margin individually for each datapoint.In addition, we rigorously analyze adversarial training from the perspective of margin maximization, and provide an alternative interpretation for adversarial training, maximizing either a lower or an upper bound of the margins.Our experiments empirically confirm our theory and demonstrate MMA training's efficacy on the MNIST and CIFAR10 datasets w.r.t. robustness.","We propose MMA training to directly maximize input space margin in order to improve adversarial robustness primarily by removing the requirement of specifying a fixed distortion bound.An adaptive margin-based adversarial training approach to train robust DNNs, by maximizing the shortest margin of inputs to the decision boundary, that makes adversarial training with large perturbation possible.A method for robust learning against adversarial attacks where the input space margin is directly maximized and a softmax variant of the max-margin is introduced."
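For intuition about the margin definition in the MMA entry above, the following sketch computes the margin and the shortest successful perturbation in closed form for a linear binary classifier; for deep networks the paper instead searches for this perturbation, so this is only an illustrative special case with made-up weights.

    import numpy as np

    def margin(w, b, x):
        # Signed L2 distance from x to the decision boundary {x : w.x + b = 0}.
        return (np.dot(w, x) + b) / np.linalg.norm(w)

    def shortest_successful_perturbation(w, b, x):
        # Smallest L2 perturbation that moves x onto the decision boundary
        # (in practice one steps slightly past it to actually flip the label).
        d = margin(w, b, x)
        return -d * w / np.linalg.norm(w)

    w, b = np.array([3.0, -4.0]), 1.0
    x = np.array([2.0, 1.0])
    delta = shortest_successful_perturbation(w, b, x)
    print(margin(w, b, x), np.linalg.norm(delta))  # |margin| equals the perturbation size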
83,Anomaly Detection with Generative Adversarial Networks,"Many anomaly detection methods exist that perform well on low-dimensional problems; however, there is a notable lack of effective methods for high-dimensional spaces, such as images.Inspired by recent successes in deep learning we propose a novel approach to anomaly detection using generative adversarial networks.Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous. We achieve state-of-the-art performance on standard image benchmark datasets and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.","We propose a method for anomaly detection with GANs by searching the generator's latent space for good sample representations.The authors propose using GAN for anomaly detection, a gradient-descent based method to iteratively update latent representations, and a novel parameter update to the generators.A GAN based approach to doing anomaly detection for image data where the generator's latent space is explored to find a representation for a test image." 84,Langevin Dynamics as Nonparametric Variational Inference,"Variational inference and Markov chain Monte Carlo are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.In this paper, we analyze gradient-based MCMC and VI procedures and find theoretical and empirical evidence that these procedures are not as different as one might think.In particular, a close examination of the Fokker-Planck equation that governs the Langevin dynamics MCMC procedure reveals that LD implicitly follows a gradient flow that corresponds to a variational inference procedure based on optimizing a nonparametric normalizing flow.This result suggests that the transient bias of LD may track that of VI, up to differences due to VI’s parameterization and asymptotic bias.Empirically, we find that the transient biases of these algorithms do evolve similarly.This suggests that practitioners with a limited time budget may get more accurate results by running an MCMC procedure than a VI procedure, as long as the variance of the MCMC estimator can be dealt with.","The transient behavior of gradient-based MCMC and variational inference algorithms is more similar than one might think, calling into question the claim that variational inference is faster than MCMC."
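A minimal sketch of the (unadjusted) Langevin dynamics update analyzed in the entry above, run on a toy one-dimensional standard normal target; the step size, chain length, and initialization are arbitrary choices for illustration.

    import numpy as np

    def grad_log_p(z):
        # Gradient of log N(0, 1).
        return -z

    rng = np.random.default_rng(0)
    eta, n_steps = 0.1, 5000
    z = 5.0                      # deliberately poor initialization
    samples = []
    for _ in range(n_steps):
        # Langevin update: half-step along the score plus Gaussian noise.
        z = z + 0.5 * eta * grad_log_p(z) + np.sqrt(eta) * rng.normal()
        samples.append(z)

    print(np.mean(samples[1000:]), np.var(samples[1000:]))  # roughly 0 and 1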
85,Composition-based Multi-Relational Graph Convolutional Networks,"Graph Convolutional Networks have recently been shown to be quite successful in modeling graph-structured data.However, the primary focus has been on handling simple undirected graphs.Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it.Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only.In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph.CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations.It also generalizes several of the existing multi-relational GCN methods.We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results.We make the source code of CompGCN available to foster reproducible research.","A Composition-based Graph Convolutional framework for multi-relational graphs.The authors develop GCN on multi-relational graphs and propose CompGCN, which leverages insights from knowledge graph embeddings and learns node and relation representations to alleviate the problem of over-parameterization.This paper introduces a GCN framework for multi-relational graphs and generalizes several existing approaches to Knowledge Graph embedding into one framework." 86,Fully Quantized Transformer for Improved Translation,"State-of-the-art neural machine translation methods employ massive amounts of parameters.Drastically reducing computational costs of such methods without affecting performance has been up to this point unsolved.In this work, we propose a quantization strategy tailored to the Transformer architecture.We evaluate our method on the WMT14 EN-FR and WMT14 EN-DE translation tasks and achieve state-of-the-art quantization results for the Transformer, obtaining no loss in BLEU scores compared to the non-quantized baseline.We further compress the Transformer by showing that, once the model is trained, a good portion of the nodes in the encoder can be removed without causing any loss in BLEU.","We fully quantize the Transformer to 8-bit and improve translation quality compared to the full precision model.An 8-bit quantization method to quantize the machine translation model Transformer, proposing to use uniform min-max quantization during inference and bucketing weights before quantization to reduce quantization error.A method for reducing the required memory space by a quantization technique, focused on reducing it for Transformer architecture."
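The uniform min-max quantization mentioned in the summary above can be sketched generically as follows; the paper applies a scheme tailored to particular Transformer tensors with bucketing, so this helper is a simplified stand-in rather than their exact method.

    import numpy as np

    def quantize_minmax(x, num_bits=8):
        # Map x to num_bits integer levels spread uniformly between min and max,
        # then back to floats (fake quantization for simulating low precision).
        lo, hi = x.min(), x.max()
        levels = 2 ** num_bits - 1
        scale = (hi - lo) / levels if hi > lo else 1.0
        q = np.round((x - lo) / scale)
        return q * scale + lo

    w = np.random.randn(4, 4).astype(np.float32)
    w_q = quantize_minmax(w)
    print(np.abs(w - w_q).max())  # quantization error is bounded by scale / 2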
87,Meta-Learning with Latent Embedding Optimization,"Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems.However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes.We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.The resulting approach, latent embedding optimization, decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters.Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks.Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.","Latent Embedding Optimization (LEO) is a novel gradient-based meta-learner with state-of-the-art performance on the challenging 5-way 1-shot and 5-shot miniImageNet and tieredImageNet classification tasks.A new meta-learning framework that learns data-dependent latent space, performs fast adaptation in the latent space, is effective for few-shot learning, has task-dependent initialization for adaptation, and works well for multimodal task distribution.This paper proposes a latent embedding optimization method for meta-learning, and claims the contribution is to decouple optimization-based meta-learning techniques from high-dimensional space of model parameters." 88,Deep reinforcement learning with relational inductive biases,"We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability.Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene.In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four.In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training.Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions.The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases.Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.","Relational inductive biases improve out-of-distribution generalization capacities in model-free reinforcement learning agents.A shared relational network architecture for parameterizing the actor and critic network, focused on distributed advantage actor-critic algorithms, that enhances model-free deep reinforcement techniques with relational knowledge about the environment so agents can learn interpretable state representations.A quantitative
and qualitative analysis and evaluation of the self-attention mechanism combined with relation network in the context of model-free RL." 89,Statestream: A toolbox to explore layerwise-parallel deep neural networks,"Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture.The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components.Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units.For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner.In contrast, all elements of a biological neural network are processed in parallel.In this paper, we define a class of networks between these two extreme cases.These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing.Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections.We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity.We layout basic properties and discuss major challenges for layerwise-parallel networks.Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.","We define a concept of layerwise model-parallel deep neural networks, for which layers operate in parallel, and provide a toolbox to design, train, evaluate, and on-line interact with these networks.A GPU-accelerated toolbox for parallel neuron updating, written in Theano, that supports different update orders in recurrent networks and networks with connections that skip layers. A new toolbox for deep neural networks learning and evaluation, and proposal for a paradigm switch from layerwise-sequential networks to layer-wise parallel networks." 
90,Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability,"Deep neural networks are known to be vulnerable to adversarial perturbations.In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems.From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets.This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust.The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently.Experiments show that our method effectively improves deep models' adversarial robustness.","An adversarial defense method bridging robustness of deep neural nets with Lyapunov stability.The authors formulate training NNs as finding an optimal controller for a discrete dynamical system, allowing them to use the method of successive approximations to train a NN in a way to be more robust to adversarial attacks.This paper uses the theoretical view of a neural network as a discretized ODE to develop a robust control theory aimed at training the network while enforcing robustness." 91,Dimensional Reweighting Graph Convolution Networks,"In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks, to tackle the problem of variance between dimensional information in the node representations of GCNs.We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field.However, practically, we find that the degree to which DrGCNs help varies severely across different datasets.We revisit the problem and develop a new measure K to quantify the effect.This measure guides when we should use dimensional reweighting in GCNs and how much it can help.Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs.The dimensional reweighting block is lightweight and highly flexible and can be built on most of the GCN variants.Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performances of DrGCNs over the existing state-of-the-art approaches.Significant improvements can also be observed on a large scale industrial dataset.","We propose a simple yet effective reweighting scheme for GCNs, theoretically supported by the mean field theory.A method, known as DrGCN, for reweighting the different dimensions of the node representations in graph convolutional networks by reducing variance between dimensions."
92,Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue,"Knowledge-grounded dialogue is a task of generating an informative response based on both discourse context and external knowledge.As we focus on better modeling the knowledge selection in the multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this matter.The model named sequential knowledge transformer can keep track of the prior and posterior distribution over knowledge; as a result, it can not only reduce the ambiguity caused from the diversity in knowledge selection of conversation but also better leverage the response information for proper choice of knowledge.Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation.We achieve the new state-of-the-art performance on Wizard of Wikipedia as one of the most large-scale and challenging benchmarks.We further validate the effectiveness of our model over existing conversation methods in another knowledge-based dialogue Holl-E dataset.","Our approach is the first attempt to leverage a sequential latent variable model for knowledge selection in the multi-turn knowledge-grounded dialogue. It achieves the new state-of-the-art performance on Wizard of Wikipedia benchmark.A sequential latent variable model for knowledge selection in dialogue generation that extends the posterior attention model to the latent knowledge selection problem and achieves higher performances than previous state-of-the-art models.A novel architecture for selecting knowledge-grounded multi-turn dialogue that yields state of the art on relevant benchmarks datasets, and scores higher in human evaluations." 93,Amortized Bayesian Meta-Learning,"Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data.State-of-the-art solutions involve learning an initialization and/or learning algorithm using a set of training episodes so that the meta learner can generalize to an evaluation episode quickly.These methods perform well but often lack good quantification of uncertainty, which can be vital to real-world applications when data is lacking.We propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of Bayes by Backprop will produce a good task-specific approximate posterior.We show that our method produces good uncertainty estimates on contextual bandit and few-shot learning benchmarks.","We propose a meta-learning method which efficiently amortizes hierarchical variational inference across training episodes.An adaptation to MAML-type models that accounts for posterior uncertainty in task specific latent variables by employing variational inference for task-specific parameters in a hierarchical Bayesian view of MAML.The authors consider meta-learning to learn a prior over neural network weights, done via amortized variational inference." 
94,Contrastive Representation Distillation," Often we wish to transfer representational knowledge from one neural network to another.Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator.Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network.We demonstrate that this objective ignores important structural knowledge of the teacher network.This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data.We formulate this objective as contrastive learning.Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer.When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.","Representation/knowledge distillation by maximizing mutual information between teacher and student.This paper combines a contrastive objective measuring the mutual information between the representations learned by teacher and student networks for model distillation, and proposes a model with improvement over existing alternatives on distillation tasks." 95,Learning to Learn with Feedback and Local Plasticity,"Developing effective biologically plausible learning rules for deep neural networks is important for advancing connections between deep learning and neuroscience.To date, local synaptic learning rules like those employed by the brain have failed to match the performance of backpropagation in deep networks.In this work, we employ meta-learning to discover networks that learn using feedback connections and local, biologically motivated learning rules.Importantly, the feedback connections are not tied to the feedforward weights, avoiding any biologically implausible weight transport.It can be shown mathematically that this approach has sufficient expressivity to approximate any online learning algorithm.Our experiments show that the meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.Moreover, we demonstrate empirically that this model outperforms a state-of-the-art gradient-based meta-learning algorithm for continual learning on regression and classification benchmarks.This approach represents a step toward biologically plausible learning mechanisms that can not only match gradient descent-based learning, but also overcome its limitations.",Networks that learn with feedback connections and local plasticity rules can be optimized using meta-learning.
96,Convolutional neural networks with extra-classical receptive fields,"In the visual system, neurons respond to a patch of the input known as their classical receptive field, and can be modulated by stimuli in the surround.These interactions are often mediated by lateral connections, giving rise to extra-classical RFs.We use supervised learning via backpropagation to learn feedforward connections, combined with an unsupervised learning rule to learn lateral connections between units within a convolutional neural network.These connections allow each unit to integrate information from its surround, generating extra-classical receptive fields for the units in our new proposed model.We demonstrate that these connections make the network more robust and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets.Although the image statistics of MNIST and CIFAR-10 differ greatly, the same unsupervised learning rule generalized to both datasets.Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable.",CNNs with biologically-inspired lateral connections learned in an unsupervised manner are more robust to noisy inputs. 97,A Neural-Symbolic Approach to Natural Language Tasks,"Deep learning has in recent years been widely used in natural language processing applications due to its superior performance.However, while natural languages are rich in grammatical structure, DL has not been able to explicitly represent and enforce such structures.This paper proposes a new architecture to bridge this gap by exploiting tensor product representations, a structured neural-symbolic framework developed in cognitive science over the past 20 years, with the aim of integrating DL with explicit language structures and rules.We call it the Tensor Product Generation Network, and apply it to image captioning.The key ideas of TPGN are: 1) unsupervised learning of role-unbinding vectors of words via a TPR-based deep neural network, and 2) integration of TPR with typical DL architectures including Long Short-Term Memory models.The novelty of our approach lies in its ability to generate a sentence and extract partial grammatical structure of the sentence by using role-unbinding vectors, which are obtained in an unsupervised manner.Experimental results demonstrate the effectiveness of the proposed approach.",This paper is intended to develop a tensor product representation approach for deep-learning-based natural language processing applications.
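The tensor-product-representation machinery behind TPGN in the entry above rests on binding filler vectors to role vectors and later unbinding them; a toy sketch with orthonormal roles is given below, with all dimensions and vectors chosen arbitrarily for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    d_filler, n_roles = 8, 4
    fillers = rng.normal(size=(n_roles, d_filler))               # word/filler vectors
    roles, _ = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))  # orthonormal role vectors

    # Bind: T = sum_i outer(f_i, r_i) stores all filler/role pairs in one matrix.
    T = sum(np.outer(fillers[i], roles[i]) for i in range(n_roles))

    # Unbind role j: T r_j recovers f_j because the roles are orthonormal.
    j = 2
    recovered = T @ roles[j]
    print(np.allclose(recovered, fillers[j]))  # True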
98,Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing,"It is well-known that classifiers are vulnerable to adversarial perturbations.To defend against adversarial perturbations, various certified robustness results have been derived.However, existing certified robustnesses are limited to top-1 predictions.In many real-world applications, top-k predictions are more relevant.In this work, we aim to derive certified robustness for top-k predictions.In particular, our certified robustness is based on randomized smoothing, which turns any classifier into a new classifier via adding noise to an input example.We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier.We derive a tight robustness bound in L_2 norm for top-k predictions when using randomized smoothing with Gaussian noise.We find that generalizing the certified robustness from top-1 to top-k predictions faces significant technical challenges.We also empirically evaluate our method on CIFAR10 and ImageNet.For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the L_2-norms of the adversarial perturbations are less than 0.5.Our code is publicly available at: url.","We study the certified robustness for top-k predictions via randomized smoothing under Gaussian noise and derive a tight robustness bound in L_2 norm.This paper extends work on deducing a certified radius using randomized smoothing, and shows the radius at which a smoothed classifier under Gaussian perturbations is certified for the top k predictions.This paper builds upon the random smoothing technique for top-1 prediction, and aims to provide certification on top-k predictions." 99,ISA-VAE: Independent Subspace Analysis with Variational Autoencoders,"Recent work has shown increased interest in using the Variational Autoencoder framework to discover interpretable representations of data in an unsupervised way.These methods have focussed largely on modifying the variational cost function to achieve this goal.However, we show that methods like beta-VAE amplify the tendency of variational inference to underfit, causing pathological over-pruning and over-orthogonalization of learned components.In this paper we take a complementary approach: to modify the probabilistic model to encourage structured latent variable representations to be discovered.Specifically, the standard VAE probabilistic model is unidentifiable: the likelihood of the parameters is invariant under rotations of the latent space.This means there is no pressure to identify each true factor of variation with a latent variable.We therefore employ a rich prior distribution, akin to the ICA model, that breaks the rotational symmetry.Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off introduced by modified cost functions like beta-VAE and TCVAE between reconstruction loss and disentanglement.The proposed prior allows these approaches to be improved with respect to both disentanglement and reconstruction quality significantly over the state of the art.","We present structured priors for unsupervised learning of disentangled representations in VAEs that significantly mitigate the trade-off between disentanglement and reconstruction loss.A general framework to use the family of L^p-nested distributions as the prior for the code vector of VAE, demonstrating a higher MIG.The authors point out issues in current VAE approaches and
provide a new perspective on the tradeoff between reconstruction and orthogonalization for VAE, beta-VAE, and beta-TCVAE." 100,Tandem Blocks in Deep Convolutional Neural Networks,"Due to the success of residual networks and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks.The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory.We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers.We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small image classification networks.Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts.Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth.","We generalize residual blocks to tandem blocks, which use arbitrary linear maps instead of shortcuts, and improve performance over ResNets.This paper performs an analysis of shortcut connections in ResNet-like architectures, and proposes to substitute the identity shortcuts with an alternative convolutional one referred to as tandem block.This paper investigates the effect of replacing identity skip connections with trainable convolutional skip connections in ResNet and finds that performance improves." 101,AdamT: A Stochastic Optimization with Trend Correction Scheme,"Adam-typed optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning.Such methods are appealing for capability on large-scale sparse datasets.On top of that, they are computationally efficient and insensitive to the hyper-parameter settings.In this paper, we present a new framework for adapting Adam-typed methods, namely AdamT.Instead of applying a simple exponential weighted average, AdamT also includes the trend information when updating the parameters with the adaptive step size and gradients.The newly added term is expected to efficiently capture the non-horizontal moving patterns on the cost surface, and thus converge more rapidly.We show empirically the importance of the trend component, where AdamT outperforms the conventional Adam method constantly in both convex and non-convex settings.","We present a new framework for adapting Adam-typed methods, namely AdamT, to include the trend information when updating the parameters with the adaptive step size and gradients.A new type of Adam variant that uses Holt's linear method to compute the smoothed first order and second order momentum instead of using exponential weighted average." 
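The trend-correction idea in the AdamT entry above comes from Holt's linear exponential smoothing, sketched below on a drifting sequence standing in for gradients; the paper applies this smoothing inside Adam's moment estimates, which the sketch does not reproduce, and the smoothing coefficients are arbitrary.

    import numpy as np

    def holt_smooth(series, alpha=0.9, beta=0.5):
        # Holt's linear method: track a level and a trend, and return the
        # trend-corrected smoothed value (level + trend) at each step.
        level, trend = series[0], 0.0
        out = []
        for x in series[1:]:
            prev_level = level
            level = alpha * x + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
            out.append(level + trend)
        return np.array(out)

    grads = np.cumsum(np.random.default_rng(0).normal(size=100))  # drifting "gradients"
    print(holt_smooth(grads)[-5:])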
102,Explanation by Progressive Exaggeration,"As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical.Classical approaches that assess feature importance do not explain how and why a particular region of an image is relevant to the prediction.We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class.Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation.These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a tuning knob to traverse a data manifold while crossing the decision boundary.Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.","A method to explain a classifier, by generating visual perturbation of an image by exaggerating or diminishing the semantic features that the classifier associates with a target label.A model that when given a query input to a black-box, aims to explain the outcome by providing plausible and progressive variations to the query that can result in a change to the output.A method for explaining the output of black box classification of images, that generates gradual perturbation of outputs in response to gradually perturbed input queries." 103,Influence-Directed Explanations for Deep Convolutional Networks,"We study the problem of explaining a rich class of behavioral properties of deep neural networks.Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent.We evaluate our approach by training convolutional neural networks on Pubfig, ImageNet, and Diabetic Retinopathy datasets. Our evaluation demonstrates that influence-directed explanations localize features used by the network, isolate features distinguishing related instances, help extract the essence of what the network learned about the class, and assist in debugging misclassifications.","We present an influence-directed approach to constructing explanations for the behavior of deep convolutional networks, and show how it can be used to answer a broad set of questions that could not be addressed by prior work.A way to measure influence that satisfies certain axioms, and a notion of influence that can be used to identify what input part is most influential for the output of a neuron in a deep neural network.This paper proposes to measure the influence of single neurons with regard to a quantity of interest represented by another neuron."
104,One-shot and few-shot learning of word embeddings,"Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily.By contrast, humans have an incredible ability to do one-shot or few-shot learning.For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us.Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data.This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.","We highlight a technique by which natural language processing systems can learn a new word from context, allowing them to be much more flexible.A technique for exploiting prior knowledge to learn embedding representations for new words with minimal data." 105,MEMO: A Deep Network for Flexible Combination of Episodic Memories,"Recent research developing neural network architectures with external memory have often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning.Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures.This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories.Surprisingly, we found that current architectures struggle to reason over long distance associations.Similar results were obtained on a more complex task involving finding the shortest path between nodes in a path.We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances.This was accomplished with the addition of two novel components.First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory.Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced.MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.","A memory architecture that support inferential reasoning.This paper proposes changes to the End2End Memory Network architecture, introduces a new Paired Associative Inference task that most existing models struggle to solve, and shows that their proposed architecture solves the task better.A new task (paired associate inference) drawn from cognitive psychology, and proposal for a new memory architecture with features that allow for better performance on the paired associate task." 
106,Depthwise Separable Convolutions for Neural Machine Translation,"Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency.They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count and considerably reducing the number of parameters required to perform at a given level.Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results.In this work, we study how depthwise separable convolutions can be applied to neural machine translation.We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results.In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation.We also introduce a new super-separable convolution operation that further reduces the number of parameters and computational cost of the models.","Depthwise separable convolutions improve neural machine translation: the more separable the better.This paper proposes to use depthwise separable convolution layers in a fully convolutional neural machine translation model, and introduces a new super-separable convolution layer which further reduces computational cost." 107,The divergences minimized by non-saturating GAN training,"Interpreting generative adversarial network training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs.For both classic GANs and f-GANs, there is an original variant of training and a ""non-saturating"" variant which uses an alternative form of generator gradient.The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice.The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence.We also develop a number of theoretical tools to help compare and classify f-divergences.We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.","Non-saturating GAN training effectively minimizes a reverse KL-like f-divergence.This paper proposes a useful expression of the class of f-divergences, investigates theoretical properties of popular f-divergences from newly developed tools, and investigates GANs with the non-saturating training scheme."
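The saturating versus non-saturating generator objectives discussed in the entry above can be compared numerically as functions of the discriminator's logit for a generated sample, as in the sketch below; it only illustrates why the non-saturating variant keeps a usable gradient when fakes are easily detected, not the paper's f-divergence analysis.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def saturating_loss(a):       # generator minimizes log(1 - D), with D = sigmoid(a)
        return np.log(1.0 - sigmoid(a))

    def non_saturating_loss(a):   # generator minimizes -log(D)
        return -np.log(sigmoid(a))

    for a in (-6.0, 0.0, 6.0):    # logit of D(G(z)); -6 means the fake is easily detected
        eps = 1e-6
        g_sat = (saturating_loss(a + eps) - saturating_loss(a)) / eps
        g_ns = (non_saturating_loss(a + eps) - non_saturating_loss(a)) / eps
        print(f"logit={a:+.1f}  grad(saturating)={g_sat:+.4f}  grad(non-saturating)={g_ns:+.4f}")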
108,A novel text representation which enables image classifiers to perform text classification,"We introduce a novel method for converting text data into abstract image representations, which allows image-based processing techniques to be applied to text-based comparison problems.We apply the technique to entity disambiguation of inventor names in US patents.The method involves converting text from each pairwise comparison between two inventor name records into a 2D RGB image representation.We then train an image classification neural network to discriminate between such pairwise comparison images, and use the trained network to label each pair of records as either matched or non-matched, obtaining highly accurate results.Our new text-to-image representation method could potentially be used more broadly for other NLP comparison problems, such as disambiguation of academic publications, or for problems that require simultaneous classification of both text and images.","We introduce a novel text representation method which enables image classifiers to be applied to text classification problems, and apply the method to inventor name disambiguation.A method to map a pair of textual information into a 2D RGB image that can be fed to 2D convolutional neural networks (image classifiers).The authors consider the problem of name disambiguation for patent inventor names and propose to build an image page representation of the two name strings to compare and to apply an image classifier." 109,Difference-Seeking Generative Adversarial Network,"We propose a novel algorithm, Difference-Seeking Generative Adversarial Network, developed from traditional GAN.DSGAN considers the scenario that the training samples of the target distribution are difficult to collect.Suppose there are two auxiliary distributions such that the density of the target distribution can be expressed as the difference between their densities.We show how to learn the target distribution using only samples from these two distributions.DSGAN has the flexibility to produce samples from various target distributions.Two key applications, semi-supervised learning and adversarial training, are taken as examples to validate the effectiveness of DSGAN.We also provide theoretical analyses about the convergence of DSGAN.","We propose the ""Difference-Seeking Generative Adversarial Network"" (DSGAN) model to learn a target distribution for which it is hard to collect training data.This paper presents DS-GAN, which aims to learn the difference between any two distributions whose samples are difficult or impossible to collect, and shows its effectiveness on semi-supervised learning and adversarial training tasks.This paper considers the problem of learning a GAN to capture a target distribution with only very few training samples from that distribution available."
110,GUIDEGAN: ATTENTION BASED SPATIAL GUIDANCE FOR IMAGE-TO-IMAGE TRANSLATION,"Recently, Generative Adversarial Network and numbers of its variants have been widely used to solve the image-to-image translation problem and achieved extraordinary results in both a supervised and unsupervised manner.However, most GAN-based methods suffer from the imbalance problem between the generator and discriminator in practice.Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients.To tackle this problem, we propose a GuideGAN based on attention mechanism.More specifically, we arm the discriminator with an attention mechanism so not only it estimates the probability that its input is real, but also does it create an attention map that highlights the critical features for such prediction.This attention map then assists the generator to produce more plausible and realistic images.We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks.Both qualitative results and quantitative comparison demonstrate the superiority of our proposed approach.","A general method that improves the image translation performance of GAN framework by using an attention embedded discriminatorA feedback mechanism in the GAN framework which improves the quality of generated images in image-to-image translation, and whose discriminator outputs a map indicating where the generator should focus to make its results more convincing.Proposal for a GAN with an attention-based discriminator for I2I translation which provides the probability of real/fake and an attention map which reflects salience for image generation." 111,TabFact: A Large-scale Dataset for Table-based Fact Verification,"The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation.However, existing studies are mainly restricted to dealing with unstructured evidence, while verification under structured evidence, such as tables, graphs, and databases, remains unexplored.This paper specifically aims to study the fact verification given semi-structured data as evidence.To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning.To address these reasoning challenges, we design two different models: Table-BERT and Latent Program Algorithm.Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification.LPA parses statements into LISP-like programs and executes them against the tables to obtain the returned binary value for verification.Both methods achieve similar accuracy but still lag far behind human performance.We also perform a comprehensive analysis to demonstrate great future opportunities.","We propose a new dataset to investigate the entailment problem under semi-structured table as premiseThis paper proposes a new dataset for table-based fact verification and introduces methods for the task.The authors propose the problem of fact verification with semi-structured data sources such as tables, create a new dataset, and evaluate baseline models with variations." 
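The Table-BERT baseline in the TabFact entry above relies on linearizing a table into text before encoding it together with the statement; the sketch below shows one plausible linearization template, which is a guess for illustration rather than the exact format used in the paper.

    def linearize_table(header, rows):
        # Flatten a table into "row i: column is value; ..." sentences.
        parts = []
        for i, row in enumerate(rows, start=1):
            cells = "; ".join(f"{col} is {val}" for col, val in zip(header, row))
            parts.append(f"row {i}: {cells}.")
        return " ".join(parts)

    header = ["player", "team", "points"]                      # hypothetical example table
    rows = [["alice", "red", "31"], ["bob", "blue", "28"]]
    statement = "alice scored more points than bob."

    encoder_input = f"[CLS] {statement} [SEP] {linearize_table(header, rows)} [SEP]"
    print(encoder_input)  # fed to a pre-trained encoder to predict ENTAILED / REFUTED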
112,Deep Graph Matching Consensus,"This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs.First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes.Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs.We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process.Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently.We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.",We develop a deep graph matching architecture which refines initial correspondences in order to reach neighborhood consensus.A framework for answering graph matching questions consisting of local node embeddings with a message passing refinement step.A two-stage GNN-based architecture to establish correspondences between two graphs that performs well on real-world tasks of image matching and knowledge graph entity alignment. 113,Approximation capability of neural networks on sets of probability measures and tree-structured data,"This paper extends the proof of density of neural networks in the space of continuous functions on Euclidean spaces to functions on compact sets of probability measures.By doing so, the work parallels results, more than a decade old, on mean-map embedding of probability measures in reproducing kernel Hilbert spaces.The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions.The result is then extended to Cartesian products, yielding a universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer.This has important practical implications, as it enables the automatic creation of neural network architectures for processing structured data, as demonstrated by an accompanying library for the JSON format.","This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures. This paper investigates the approximation properties of a family of neural networks designed to address multi-instance learning problems, and shows that results for standard one-layer architectures extend to these models.This paper generalizes the universal approximation theorem to real functions on the space of measures." 114,Can I trust you more? 
Model-Agnostic Hierarchical Explanations,"Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models.We propose Mahé, a novel approach to provide Model-Agnostic Hierarchical Explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances.Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors.Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are context-free.","A new framework for context-dependent and context-free explanations of predictionsThe authors extend the linear local attribution method LIME for interpreting black box models, and propose a method to discern between context-dependent and context-free interactions.A method that can provide hierarchical explanations for a model, including both context-dependent and context-free explanations by a local interpretation algorithm." 115,Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference,"To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models.We also demonstrate ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks -- the highest scores to date.Surprisingly, the weights of the low-precision networks are very close to the weights of the corresponding baseline networks, making training from scratch unnecessary.We find that gradient noise due to quantization during training increases with reduced precision, and seek ways to overcome this noise.The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of the distance of the initial solution from the final plus the maximum variance of the gradient estimates. By drawing inspiration from this observation, we reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. 
Sensitivity analysis indicates that these techniques, coupled with proper activation function range calibration, offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.","Finetuning after quantization matches or exceeds full-precision state-of-the-art networks at both 8- and 4-bit quantization.This paper proposes to improve the performance of low-precision models by doing quantization on pre-trained models, using large batches size, and using proper learning rate annealing with longer training time.A method for low bit quantization to enable inference on efficient hardware that achieves full accuracy on ResNet50 with 4-bit weights and activations, based on observations that fine-tuning at low precision introduces noise in the gradient." 116,Correlating neural and symbolic representations of language," Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP.Here we present two methods based on Representational Similarity Analysis and Tree Kernels which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees.We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results.We then apply our methods to correlate neural representations of English sentences with their constituency parse trees.",Two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which directly quantify how strongly information encoded in neural activation patterns corresponds to information represented by symbolic structures. 
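The entry above on correlating neural and symbolic representations rests on Representational Similarity Analysis: pairwise similarities computed from neural activations are correlated with pairwise similarities computed from symbolic structures such as tree kernels. A minimal sketch follows; the cosine similarity and Spearman correlation are common RSA defaults assumed here, and the random matrices merely stand in for real sentence encodings and tree-kernel values.

```python
# Minimal sketch of Representational Similarity Analysis (RSA): correlate the
# similarity structure of neural activations with the similarity structure of
# symbolic representations (e.g., tree-kernel values between parse trees).
import numpy as np
from scipy.stats import spearmanr

def pairwise_cosine(X):
    """N x D representation matrix -> N x N cosine similarity matrix."""
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return X @ X.T

def rsa_score(neural_sim, symbolic_sim):
    """Spearman correlation of the two similarity matrices' upper triangles."""
    iu = np.triu_indices_from(neural_sim, k=1)
    rho, _ = spearmanr(neural_sim[iu], symbolic_sim[iu])
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    activations = rng.normal(size=(20, 64))                    # stand-in sentence encodings
    tree_kernel = pairwise_cosine(rng.normal(size=(20, 16)))   # stand-in for tree-kernel similarities
    print(rsa_score(pairwise_cosine(activations), tree_kernel))
```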
117,Efficient Deep Representation Learning by Adaptive Latent Space Sampling,"Supervised deep learning requires a large amount of training samples with annotations, which are expensive and time-consuming to obtain.During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded of equal importance.However, some of the samples may become less informative during training, as the magnitude of the gradient start to vanish for these samples.In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation.To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process.The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model.To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for image classification task and a medical image dataset IVUS for biophysical simulation task.On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of our framework.","This paper introduces a framework for data-efficient representation learning by adaptive sampling in latent space.A method for sequential and adaptive selection of training examples to be presented to the training algorithm, where selection happens in the latent space based on choosing samples in the direction of the gradient of the loss.A method to efficiently select hard samples during neural network training, achieved via a variational auto-encoder that encodes samples into a latent space." 118,Disentangling Style and Content in Anime Illustrations,"Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists.We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variations when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular.Training such model is challenging, since given a style, various content data may exist but not the other way round.Our approach is divided into two stages, one that encodes an input image into a style independent content, and one based on a dual-conditional generator.We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer.We show this unique capability as well as superior output to the current state-of-the-art.","An adversarial training-based method for disentangling two complementary sets of variations in a dataset where only one of them is labelled, tested on style vs. content in anime illustrations.An image generation method combining conditional GANs and conditional VAEs that generates high fidelity anime images with various styles from various artists. Proposal for a method to learn disentangled style (artist) and content representations in anime." 
119,Unsupervised Learning of Entailment-Vector Word Embeddings,"Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. Using simple entailment-based models of the semantics of words in text, we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.","We train word embeddings based on entailment instead of similarity, successfully predicting lexical entailment.The paper presents a word embedding algorithm for lexical entailment which follows the work of Henderson and Popa (ACL, 2016)." 120,Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play,"We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner.Our scheme pits two versions of the same agent, Alice and Bob, against one another.Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: reversible environments and environments that can be reset.Alice will ""propose"" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent.When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward.","Unsupervised learning for reinforcement learning using an automatic curriculum of self-playA new formulation for exploring the environment in an unsupervised way to aid a specific task later, where one agent proposes increasingly difficult tasks and the learning agent tries to accomplish them.A self-play model where one agent learns to propose tasks that are easy for them but difficult for an opponent, creating a moving target of self-play objectives and learning curriculum. 
" 121,Adaptive Structural Fingerprints for Graph Attention Networks,"Many real-world data sets are represented as graphs, such as citation links, social media, and biological interaction.The volatile graph structure makes it non-trivial to employ convolutional neural networks for graph data processing.Recently, graph attention network has proven a promising attempt by combining graph neural networks with attention mechanism, so as to achieve massage passing in graphs with arbitrary structures.However, the attention in GAT is computed mainly based on the similarity between the node content, while the structures of the graph remains largely unemployed.In this paper, we propose an `""ADaptive Structural Fingerprint"" model to fully exploit both topological details of the graph and content features of the nodes.The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures.By doing this, structural interactions between the nodes can be inferred accurately, thus improving subsequent attention layer as well as the convergence of learning.Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to cross-talk with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. "", Encouraging performance is observed on a number of benchmark data sets in node classification.","Exploiting rich strucural details in graph-structued data via adaptive ""strucutral fingerprints\'\'A graph structure based methodology to augment the attention mechanism of graph neural networks, with the main idea to explore interactions between different types of nodes of the local neighborhood of a root node.This paper extends the idea of self-attention in graph NNs, which is typically based on feature similarity between nodes, to include structural similarity." 
122,Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts,"Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people.We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes.While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces.We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve.We split the challenge into two simpler components.First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy.Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty.Our algorithm, Bayesian Residual Policy Optimization, imports the scalability of policy gradient methods as well as the initialization from prior models.BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.","We propose a scalable Bayesian Reinforcement Learning algorithm that learns a Bayesian correction over an ensemble of clairvoyant experts to solve problems with complex latent rewards and dynamics.This paper considers the Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs) by making decisions with experts.In this paper, the authors motivate and propose a learning algorithm, called Bayesian Residual Policy Optimization (BRPO), for Bayesian reinforcement learning problems." 123,Gradient Descent Provably Optimizes Over-parameterized Neural Networks,"One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth.This paper demystifies this surprising phenomenon for two-layer fully connected ReLU-activated neural networks.For a shallow neural network with ReLU activation trained on a finite data set, we show that as long as the number of hidden nodes is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function.Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum.We believe these insights are also useful in analyzing deep models and other first-order methods.","We prove gradient descent achieves zero training loss with a linear rate on over-parameterized neural networks.This work considers optimizing a two-layer over-parameterized ReLU network with the squared loss, given a data set with arbitrary labels.This paper studies one-hidden-layer neural networks with square loss, where they show that in the over-parameterized setting, random initialization and gradient descent get to zero loss."
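The Gradient Descent Provably Optimizes Over-parameterized Neural Networks entry above analyses plain gradient descent on a wide two-layer ReLU network with quadratic loss. The sketch below mimics that setting (fixed random-sign output layer, trained hidden layer); the width, step size and synthetic data are illustrative choices, not the paper's.

```python
# Minimal sketch of the analysed setting: a wide two-layer ReLU network trained
# with full-batch gradient descent on the quadratic loss. Hyperparameters and
# data here are illustrative assumptions; the theory trains only the hidden
# layer and keeps random +/-1 output weights fixed, which we mimic.
import torch

torch.manual_seed(0)
n, d, m = 50, 10, 4096                   # samples, input dim, hidden width (over-parameterised)
X = torch.randn(n, d)
X = X / X.norm(dim=1, keepdim=True)      # unit-norm inputs, no two parallel (w.h.p.)
y = torch.randn(n)

W = torch.randn(m, d, requires_grad=True)        # trained hidden-layer weights
a = torch.randint(0, 2, (m,)).float() * 2 - 1    # fixed random +/-1 output weights

lr = 1e-2
for step in range(2000):
    h = torch.relu(X @ W.t())             # n x m hidden activations
    pred = h @ a / (m ** 0.5)             # standard 1/sqrt(m) scaling
    loss = 0.5 * ((pred - y) ** 2).sum()  # quadratic loss from the abstract
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad
        W.grad.zero_()
    if step % 500 == 0:
        print(step, loss.item())          # loss should steadily decrease toward zero
```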
124,Analyzing Inverse Problems with Invertible Neural Networks,"For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements.Often, the forward process from parameter- to measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement.To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined.We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks.Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost.Due to invertibility, a model of the corresponding inverse process is learned implicitly.Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space.We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.","To analyze inverse problems with Invertible Neural Networks.The authors propose to use invertible networks to solve ambiguous inverse problems and suggest training not only the forward model but also the inverse model with an MMD critic.The research paper proposes an invertible network with observations for the posterior probability of complex input distributions, with a theoretically valid bidirectional training scheme."
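Invertible neural networks like those in the entry above are typically built from coupling layers that are invertible by construction, so the inverse pass needed to read off the posterior comes for free. Below is a minimal affine-coupling sketch; the tiny scale/shift MLPs and dimensions are illustrative, and a real INN would stack many such layers and train them with the paper's bidirectional objective.

```python
# Minimal sketch of an affine coupling layer, the standard building block of
# invertible neural networks: the forward map is easy to evaluate and the
# inverse is exact by construction. Sizes and sub-networks are illustrative.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.scale = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim - self.half), nn.Tanh())
        self.shift = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim - self.half))

    def forward(self, x):                 # forward pass, e.g. parameters -> (measurement, latent)
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y2 = x2 * torch.exp(self.scale(x1)) + self.shift(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):                 # exact inverse pass
        y1, y2 = y[:, :self.half], y[:, self.half:]
        x2 = (y2 - self.shift(y1)) * torch.exp(-self.scale(y1))
        return torch.cat([y1, x2], dim=1)

if __name__ == "__main__":
    layer = AffineCoupling(dim=6)
    x = torch.randn(4, 6)
    print(torch.allclose(layer.inverse(layer(x)), x, atol=1e-5))  # True: invertible by construction
```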
125,Hidden incentives for self-induced distributional shift,"Decisions made by machine learning systems have increasing influence on the world.Yet it is common for machine learning algorithms to assume that no such influence exists.An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users.Generally speaking, it is possible for an algorithm to change the distribution of its own inputs.We introduce the term self-induced distributional shift to describe this phenomenon.A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline.Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift, and aim to diagnose and prevent problems associated with hidden incentives.We design a simple environment as a ""unit test"" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS.We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.","Performance metrics are incomplete specifications; the ends don't always justify the means.The authors show how meta-learning reveals the hidden incentives for distributional shift and propose an approach based on swapping learners between environments to reduce self-induced distributional shift.The paper generalizes the inherent incentive for the learner to win by making the task easier in meta-learning to a larger class of problems." 126,Consistency-based anomaly detection with adaptive multiple-hypotheses predictions,"In one-class-learning tasks, only the normal case can be modeled with data, whereas the variation of all possible anomalies is too large to be described sufficiently by samples.Thus, due to the lack of representative data, the wide-spread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the normal cases, are used.However, generative models suffer from a large input dimensionality and are typically inefficient learners.We propose to learn the data distribution more efficiently with a multi-hypotheses autoencoder.Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses.This consistency-based anomaly detection framework allows the reliable identification of out-of-distribution samples.For anomaly detection on CIFAR-10, it yields up to 3.9% points improvement over previously reported results.On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.",We propose an anomaly-detection approach that combines modeling the foreground class via multiple local densities with adversarial training.The paper proposes a technique to make generative models more robust by making them consistent with the local density.
127,Point Cloud GAN,"Generative Adversarial Networks can achieve promising performance on learning complex data distributions on different types of data.In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.We propose a two fold modification to a GAN algorithm to be able to generate point clouds.First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process.A key component of our method is that we train a posterior inference network for the hidden variables.Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms.We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN.We validate our claims on the ModelNet40 benchmark dataset and observe that PC- GAN trained by the sandwiching objective achieves better results on test data than existing methods.We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point clouds transformation, to demonstrate the versatility of the proposed PC-GAN algorithm.","We propose a GAN variant which learns to generate point clouds. Different studies have been explores, including tighter Wasserstein distance estimate, conditional generation, generalization to unseen point clouds and image to point cloud.This paper proposes using GAN to generate 3D point cloud and introduces a sandwiching objective, averaging the upper and lower bound of Wasserstein distance between distributions.This paper proposes a new generative model for unordered data, with a particular application to point clouds, which includes an inference method and a novel objective function. 
" 128,Area Attention,"Existing attention mechanisms, are mostly item-based in that a model is trained to attend to individual items in a collection where each item has a predefined, fixed granularity, e.g., a character or a word.Intuitively, an area in the memory consisting of multiple items can be worth attending to as a whole.We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences.Importantly, the size of an area, i.e., the number of items in an area or the level of aggregation, is dynamically determined via learning, which can vary depending on the learned coherence of the adjacent items.By giving the model the option to attend to an area of items, instead of only individual items, a model can attend to information with varying granularity.Area attention can work along multi-head attention for attending to multiple areas in the memory.We evaluate area attention on two tasks: neural machine translation and image captioning, and improve upon strong baselines in all the cases.These improvements are obtainable with a basic form of area attention that is parameter free.In addition to proposing the novel concept of area attention, we contribute an efficient way for computing it by leveraging the technique of summed area tables.","The paper presents a novel approach for attentional mechanisms that can benefit a range of tasks such as machine translation and image captioning.This paper extends the current attention models from word level to the combination of adjacent words, by applying the models to items made from merged adjacent words." 129,Overcoming Multi-model Forgetting,"We identify a phenomenon, which we refer to as *multi-model forgetting*, that occurs when sequentially training multiple deep networks with partially-shared parameters; the performance of previously-trained models degrades as one optimizes a subsequent one, due to the overwriting of shared parameters.To overcome this, we introduce a statistically-justified weight plasticity loss that regularizes the learning of a models shared parameters according to their importance for the previous models, and demonstrate its effectiveness when training two models sequentially and for neural architecture search.Adding weight plasticity in neural architecture search preserves the best models to the end of the search and yields improved results in both natural language processing and computer vision tasks.","We identify a phenomenon, neural brainwashing, and introduce a statistically-justified weight plasticity loss to overcome this.This paper discusses the phenomena of “neural brainwashing”, which refers to that the performance of one model is affected via another model sharing model parameters." 
130,Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning,"Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery.However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations.To address this issue we introduce Morpho-MNIST, a framework that aims to answer: ""to what extent has my model learned to represent specific factors of variation in the data?""We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity.We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation.","This paper introduces Morpho-MNIST, a collection of shape metrics and perturbations, in a step towards quantitative evaluation of representation learning.This paper discusses the problem of evaluating and diagnosing the represenatations learnt using a generative model.Authors present a set of criteria to categorize MNISt digists and a set of interesting perturbations to modify MNIST dataset." 131,Learning to Control Visual Abstractions for Structured Exploration in Deep Reinforcement Learning,"Exploration in environments with sparse rewards is a key challenge for reinforcement learning.How do we design agents with generic inductive biases so that they can explore in a consistent manner instead of just using local exploration schemes like epsilon-greedy?We propose an unsupervised reinforcement learning agent which learns a discrete pixel grouping model that preserves spatial geometry of the sensors and implicitly of the environment as well.We use this representation to derive geometric intrinsic reward functions, like centroid coordinates and area, and learn policies to control each one of them with off-policy learning.These policies form a basis set of behaviors which allows us explore in a consistent way and use them in a hierarchical reinforcement learning setup to solve for extrinsically defined rewards.We show that our approach can scale to a variety of domains with competitive performance, including navigation in 3D environments and Atari games with sparse rewards.","structured exploration in deep reinforcement learning via unsupervised visual abstraction discovery and controlThe paper introduces visual abstractions that are used for reinforcement learning, where an algorithm learns to ""control"" each abstraction as well as select the options to achieve the overall task." 
132,The Cakewalk Method,"Combinatorial optimization is a common theme in computer science.While in general such problems are NP-Hard, from a practical point of view, locally optimal solutions can be useful.In some combinatorial problems however, it can be hard to define meaningful solution neighborhoods that connect large portions of the search space, thus hindering methods that search this space directly.We suggest to circumvent such cases by utilizing a policy gradient algorithm that transforms the problem to the continuous domain, and to optimize a new surrogate objective that renders the former as generic stochastic optimizer.This is achieved by producing a surrogate objective whose distribution is fixed and predetermined, thus removing the need to fine-tune various hyper-parameters in a case by case manner.Since we are interested in methods which can successfully recover locally optimal solutions, we use the problem of finding locally maximal cliques as a challenging experimental benchmark, and we report results on a large dataset of graphs that is designed to test clique finding algorithms.Notably, we show in this benchmark that fixing the distribution of the surrogate is key to consistently recovering locally optimal solutions, and that our surrogate objective leads to an algorithm that outperforms other methods we have tested in a number of measures.","A new policy gradient algorithm designed to approach black-box combinatorial optimization problems. The algorithm relies only on function evaluations, and returns locally optimal solutions with high probability.The paper proposes an approach to construct surrogate objectives for the application of policy gradient methods to combinatorial optimization with the goal of reducing the need of hyper-parameter tuning.The paper propose to replace the reward term in the policy gradient algorithm with its centered empirical cumulative distribution. 
133,Deep Evidential Uncertainty,"Deterministic neural networks are increasingly being deployed in safety-critical domains, where calibrated, robust and efficient measures of uncertainty are crucial.While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions.In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target.We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution.We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output.Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters.We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations.","Fast, calibrated uncertainty estimation for neural networks without sampling.This paper proposes a novel approach to estimate the confidence of predictions in a regression setting, opening the door to online applications with fully integrated uncertainty estimates.This paper proposes deep evidential regression, a method for training neural networks to not only estimate the output but also the associated evidence in support of that output." 134,Winning the Lottery with Continuous Sparsification,"The Lottery Ticket Hypothesis from Frankle & Carbin conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance compared with their original counterparts.The proposed algorithm to search for such sub-networks, Iterative Magnitude Pruning, consistently finds sub-networks with 90-95% fewer parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies.We show empirically that our method is capable of finding tickets that outperform the ones learned by Iterative Magnitude Pruning, while at the same time providing up to 5 times faster search, when measured in number of training epochs.","We propose a new algorithm that quickly finds winning tickets in neural networks.This paper proposes a novel objective function that can be used to jointly optimize a classification objective while encouraging sparsification in a network that performs with high accuracy.This work proposes a new iterative pruning method named Continuous Sparsification, which continuously prunes the current weights until it reaches the target ratio."
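The Continuous Sparsification entry above removes parameters with gradient-based learning rather than discrete pruning. One plausible minimal form of that idea, sketched below, gates each weight with a sigmoid mask whose temperature is annealed so the mask becomes effectively binary; the penalty weight, annealing schedule and toy regression task are assumptions for illustration, not the paper's exact training recipe.

```python
# Minimal sketch of a continuous-sparsification-style soft mask: weights are
# multiplied by sigmoid(beta * s), the mask is penalised toward zero, and beta
# is annealed so the mask approaches a binary sub-network ("ticket").
import torch
import torch.nn as nn

class SoftMaskedLinear(nn.Module):
    def __init__(self, in_f, out_f, s_init=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_f))
        self.s = nn.Parameter(torch.full((out_f, in_f), s_init))  # mask logits
        self.beta = 1.0                                           # temperature, annealed upward

    def mask(self):
        return torch.sigmoid(self.beta * self.s)

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask(), self.bias)

    def sparsity_penalty(self):
        return self.mask().sum()              # drives mask entries toward 0

layer = SoftMaskedLinear(20, 10)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(64, 20), torch.randn(64, 10)
for epoch in range(50):
    loss = ((layer(x) - y) ** 2).mean() + 1e-3 * layer.sparsity_penalty()
    opt.zero_grad()
    loss.backward()
    opt.step()
    layer.beta *= 1.1                          # exponential temperature annealing
ticket = (layer.s.detach() > 0).float()        # surviving sub-network structure
print("kept fraction:", ticket.mean().item())
```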
135,Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints,"In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence.However, the growing complexity of machine learning datasets and models may violate such assumptions.Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints.Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training.We analyze the following problem: ""given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?""We focus on the number of optimization iterations as the representative resource.Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget.Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing.We support our claim through extensive experiments with state-of-the-art models on ImageNet, Kinetics, MS COCO, and Cityscapes.We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget.We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under budgeted training setting.",Introduce a formal setting for budgeted training and propose a budget-aware linear learning rate scheduleThis work presents a technique for tuning the learning rate for Neural Network training when under a fixed number of epochs.This paper analyzed which learning rate schedule should be used when the number of iteration is limited using an introduced concept of BAS (Budget-Aware Schedule). 136,Novelty Search in representational space for sample efficient exploration,"We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives.Our approach uses intrinsic rewards that are based on a weighted distance of nearest neighbors in the low dimensional representational space to gauge novelty.We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space.One key element of our approach is that we perform more gradient steps in-between every environment step in order to ensure the model accuracy.We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines.","We conduct exploration using intrinsic rewards that are based on a weighted distance of nearest neighbors in representational space.This paper proposes a method for efficient exploration in tabular MDPs as well as a simple control environment, using deterministic encoders to learn a low dimensional representation of the environment dynamics.This paper proposes a method of sample-efficient exploration for RL agent using a combination of model-based and model-free approaches with a novelty metric." 137,Ranking Policy Gradient,"Sample inefficiency is a long-lasting problem in reinforcement learning. 
The state of the art uses the action-value function to derive a policy, which usually involves an extensive search over the state-action space and unstable optimization.Towards sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of the return and imitating a near-optimal policy without accessing any oracles.These results lead to a general off-policy learning framework, which preserves optimality, reduces variance, and improves sample efficiency.We conduct extensive experiments showing that when consolidated with the off-policy learning framework, RPG substantially reduces the sample complexity compared to the state-of-the-art.","We propose ranking policy gradient, which learns the optimal rank of actions to maximize return. We propose a general off-policy learning framework with the properties of optimality preservation, variance reduction, and sample efficiency.This paper proposes to reparameterize the policy using a form of ranking to convert the RL problem into a supervised learning problem.This paper presents a new view on policy gradient methods from the perspective of ranking. " 138,MultiGrain: a unified image embedding for classes and instances,"We introduce MultiGrain, a neural network architecture that generates compact image embedding vectors that solve multiple tasks of different granularity: class, instance, and copy recognition.MultiGrain is trained jointly for classification by optimizing the cross-entropy loss and for instance/copy recognition by optimizing a self-supervised ranking loss.The self-supervised loss only uses data augmentation and thus does not require additional labels.Remarkably, the unified embeddings are not only much more compact than using several specialized embeddings, but they also have the same or better accuracy.When fed to a linear classifier, MultiGrain using ResNet-50 achieves 79.4% top-1 accuracy on ImageNet, a +1.8% absolute improvement over the current state-of-the-art AutoAugment method.The same embeddings perform on par with state-of-the-art instance retrieval with images of moderate resolution.An ablation study shows that our approach benefits from the self-supervision, the pooling method and the mini-batches with repeated augmentations of the same image.","Combining classification and image retrieval in a neural network architecture, we obtain an improvement for both tasks.This paper proposes a unified embedding for image classification and instance retrieval to enhance the performance for both tasks.The paper proposes to jointly train a deep neural net for image classification, instance, and copy recognition." 139,Mapping the hyponymy relation of wordnet onto vector Spaces," In this paper, we investigate mapping the hyponymy relation of wordnet to feature vectors. We aim to model lexical knowledge in such a way that it can be used as input in generic machine-learning models, such as phrase entailment predictors. We propose two models.The first one leverages an existing mapping of words to feature vectors, and attempts to classify such vectors as within or outside of each class.The second model is fully supervised, using solely wordnet as a ground truth.It maps each concept to an interval or a disjunction thereof.
On the first model, we approach, but not quite attain state of the art performance.The second model can achieve near-perfect accuracy.",We investigate mapping the hyponymy relation of wordnet to feature vectorsThis paper studies how hyponymy between words can be mapped to feature representations.This paper explores the notion of hyponymy in word vector representations and describes a method of organizing WordNet relations into a tree structure to define hyponymy. 140,Learning to Write by Learning the Objective,"Recurrent Neural Networks are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims.In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation.Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.",We build a stronger natural language generator by discriminatively training scoring functions that rank candidate generations with respect to various qualities of good writing.This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding and proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. This paper combines RNN language model with several discriminatively trained models to improve the language generation.This paper proposes to improve RNN language model generation using augmented objectives inspired by Grice's maxims of communication. 
141,Load Balancing in Large-Scale Heterogeneous Systems with Multiple Dispatchers,"In recent years, the efficiency and even the feasibility of traditional load-balancing policies are challenged by the rapid growth of cloud infrastructure with increasing levels of server heterogeneity and increasing size of cloud services and applications.In such heterogeneous systems with many software load balancers, traditional solutions, such as JSQ, incur an increasing communication overhead, whereas low-communication alternatives, such as JSQ and the recently proposed JIQ scheme, are either unstable or provide poor performance.We argue that a better low-communication load balancing scheme can be established by allowing each dispatcher to have a different view of the system and keep using JSQ, rather than greedily trying to avoid starvation on a per-decision basis.Accordingly, we introduce the Loosely-Shortest-Queue family of load balancing algorithms.Roughly speaking, in Loosely-Shortest-Queue, each dispatcher keeps a different approximation of the server queue lengths and routes jobs to the shortest among them.Communication is used only to update the approximations and make sure that they are not too far from the real queue lengths in expectation.We formally establish the strong stability of any Loosely-Shortest-Queue policy and provide an easy-to-verify sufficient condition for verifying that a policy is Loosely-Shortest-Queue.We further demonstrate that the Loosely-Shortest-Queue approach allows constructing throughput-optimal policies with an arbitrarily low communication budget.Finally, using extensive simulations that consider homogeneous, heterogeneous and highly skewed heterogeneous systems in scenarios with a single dispatcher as well as with multiple dispatchers, we show that the examined Loosely-Shortest-Queue example policies are always stable, as dictated by theory.Moreover, they exhibit an appealing performance and significantly outperform well-known low-communication policies, such as JSQ and JIQ, while using a similar communication budget.",A scalable and low-communication load balancing solution for heterogeneous-server multi-dispatcher systems with strong theoretical guarantees and promising empirical results.
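A rough discrete-time simulation of the Loosely-Shortest-Queue idea described above is sketched below: every dispatcher routes to the shortest queue according to its own, possibly stale, local view, and communication is spent only on occasionally refreshing single entries of that view. The arrival and service rates and the refresh probability are illustrative assumptions, not the paper's experimental settings.

```python
# Minimal sketch: per-dispatcher local views of heterogeneous server queues,
# shortest-queue routing against the local view, and infrequent refreshes of
# single view entries as the only communication.
import random

random.seed(0)
NUM_SERVERS, NUM_DISPATCHERS, STEPS = 8, 3, 10_000
REFRESH_PROB = 0.05
service_rate = [random.uniform(0.3, 0.95) for _ in range(NUM_SERVERS)]  # heterogeneous servers

queues = [0] * NUM_SERVERS                                    # true queue lengths
views = [[0] * NUM_SERVERS for _ in range(NUM_DISPATCHERS)]   # per-dispatcher estimates

for _ in range(STEPS):
    for d in range(NUM_DISPATCHERS):
        # one arrival per dispatcher per step, routed with the local view only
        shortest = min(range(NUM_SERVERS), key=lambda s: views[d][s])
        queues[shortest] += 1
        views[d][shortest] += 1             # optimistic local update
        if random.random() < REFRESH_PROB:  # cheap, infrequent communication
            s = random.randrange(NUM_SERVERS)
            views[d][s] = queues[s]         # sync one entry with the true length
    for s in range(NUM_SERVERS):            # servers work off their queues
        if queues[s] > 0 and random.random() < service_rate[s]:
            queues[s] -= 1

print("mean queue length:", sum(queues) / NUM_SERVERS)
```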
142,Counting the Paths in Deep Neural Networks as a Performance Predictor,"We propose a novel quantitative measure to predict the performance of a deep neural network classifier, where the measure is derived exclusively from the graph structure of the network.We expect that this measure is a fundamental first step in developing a method to evaluate new network architectures and reduce the reliance on the computationally expensive trial and error or ""brute force"" optimisation processes involved in model selection.The measure is derived in the context of multi-layer perceptrons, but the definitions are shown to be useful also in the context of deep convolutional neural networks, where it is able to estimate and compare the relative performance of different types of neural networks, such as VGG, ResNet, and DenseNet.Our measure is also used to study the effects of some important ""hidden"" hyper-parameters of the DenseNet architecture, such as number of layers, growth rate and the dimension of 1x1 convolutions in DenseNet-BC.Ultimately, our measure facilitates the optimisation of the DenseNet design, which shows improved results compared to the baseline.",A quantitative measure to predict the performances of deep neural network models.The paper proposes a novel quantity that counts the number of paths in the neural network, which is predictive of the performance of neural networks with the same number of parameters.The paper presents a method for counting paths in deep neural networks that arguably can be used to measure the performance of the network. 143,Rethinking learning rate schedules for stochastic optimization,"There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation.Recent results, such as in the super-convergence methods which use oscillating learning rates, serve to emphasize this point even more.One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the ""cut the learning rate every constant number of epochs"" method; note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be optimal for classes of convex optimization problems.The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show that other learning rate schemes can be far more effective.In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is sub-optimal compared to the statistical minimax rate; in contrast, the ""cut the learning rate every constant number of epochs"" scheme provides an exponential improvement compared to any polynomial decay scheme.
"", Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization?Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.","This paper presents a rigorous study of why practically used learning rate schedules (for a given computational budget) offer significant advantages even though these schemes are not advocated by the classical theory of Stochastic Approximation.This paper presents a theoretical study of different learning rate schedules that resulted in statistical minimax lower bounds for both polynomial and constant-and-cut schemes.The paper studies the effect of learning-rate choices for stochastic optimization, focusing on least-mean-squares with decaying stepsizes" 144,Value Propagation Networks,"We present Value Propagation, a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input.","We present planners based on convnets that are sample-efficient and that generalize to larger instances of navigation and pathfinding problems.Proposes methods, which can be seen as modifications of Value Iteration Networks (VIN), with some improvements aimed at improving sample efficiency and generalization to large environment sizes.The paper presents an extension of the original value iteration networks (VIN) by considering a state-dependent transition function." 
145,Lifelong Word Embedding via Meta-Learning,"Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks.On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks.On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings.We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings.In this paper, we formulate the learning of word embeddings as a lifelong learning process.Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain level.Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledge.We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.",Learning better domain embeddings via lifelong learning and meta-learning.Presents a lifelong learning method for learning word embeddings.This paper proposes an approach to learn embeddings in new domains and significantly beats the baseline on an aspect extraction task. 146,Structured Pruning for Efficient ConvNets via Incremental Regularization,"Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large and constant regularization factors, which neglects the fact that the expressiveness of CNNs is fragile and needs a more gentle way of regularization for the networks to adapt during pruning.To solve this problem, we propose a new regularization-based pruning method to incrementally assign different regularization factors to different weight groups based on their relative importance, whose effectiveness is proved on popular CNNs compared with state-of-the-art methods.", We propose a new regularization-based pruning method (named IncReg) to incrementally assign different regularization factors to different weight groups based on their relative importance.This paper proposes a regularization-based pruning method to incrementally assign different regularization factors to different weight groups based on their relative importance.
147,On the insufficiency of existing momentum schemes for Stochastic Optimization,"Momentum-based stochastic gradient methods such as heavy ball and Nesterov's accelerated gradient descent method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent.Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact.In the stochastic case, the popular explanation for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain.This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of their parameters.These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances.These results suggest that HB's or NAG's practical performance gains are a by-product of minibatching.Furthermore, this work provides a viable alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance.This algorithm, referred to as Accelerated Stochastic Gradient Descent, is a simple-to-implement stochastic algorithm, based on a relatively less popular variant of Nesterov's acceleration.Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD.The code for implementing the ASGD algorithm can be found at https://github.com/rahulkidambi/AccSGD.","Existing momentum/acceleration schemes such as the heavy ball method and Nesterov's acceleration employed with stochastic gradients do not improve over vanilla stochastic gradient descent, especially when employed with small batch sizes." 148,A∗ Search and Bound-Sensitive Heuristics for Oversubscription Planning,"Oversubscription planning is the problem of finding plans that maximize the utility value of their end state while staying within a specified cost bound.Recently, it has been shown that OSP problems can be reformulated as classical planning problems with multiple cost functions but no utilities.
Here we take advantage of this reformulation to show that OSP problems can be solved optimally using the A* search algorithm, in contrast to previous approaches that have used variations on branch-and-bound search.This allows many powerful techniques developed for classical planning to be applied to OSP problems.We also introduce novel bound-sensitive heuristics, which are able to reason about the primary cost of a solution while taking into account secondary cost functions and bounds, to provide superior guidance compared to heuristics that do not take these bounds into account.We implement two such bound-sensitive variants of existing classical planning heuristics, and show experimentally that the resulting search is significantly more informed than comparable heuristics that do not consider bounds.",We show that oversubscription planning tasks can be solved using A* and introduce novel bound-sensitive heuristics for oversubscription planning tasks.Presents an approach to solve oversubscription planning (OSP) tasks optimally by using a translation to classical planning with multiple cost functions.The paper proposes modifications to admissible heuristics to make them better informed in a multi-criteria setting. 149,Robust Few-Shot Learning with Adversarially Queried Meta-Learners,"Previous work on adversarially robust neural networks requires large training sets and computationally expensive training procedures. On the other hand, few-shot learning methods are highly vulnerable to adversarial examples. The goal of our work is to produce networks which both perform well at few-shot tasks and are simultaneously robust to adversarial examples. We adapt adversarial training for meta-learning, we adapt robust architectural features to small networks for meta-learning, we test pre-processing defenses as an alternative to adversarial training for meta-learning, and we investigate the advantages of robust meta-learning over robust transfer-learning for few-shot tasks. This work provides a thorough analysis of adversarially robust methods in the context of meta-learning, and we lay the foundation for future work on defenses for few-shot tasks.",We develop meta-learning methods for adversarially robust few-shot learning.This paper presents a method that enhances the robustness of few-shot learning by introducing an adversarial attack on query data in the inner-task fine-tuning phase of a meta-learning algorithm.The authors of this paper propose a novel approach for training a robust few-shot model.
150,Pooling Is Neither Necessary nor Sufficient for Appropriate Deformation Stability in CNNs,"Many of our core assumptions about how neural networks operate remain empirically untested.One common assumption is that convolutional neural networks need to be stable to small translations and deformations to solve image recognition tasks.For many years, this stability was baked into CNN architectures by incorporating interleaved pooling layers.Recently, however, interleaved pooling has largely been abandoned.This raises a number of questions: Are our intuitions about deformation stability right at all?Is it important?Is pooling necessary for deformation invariance?If not, how is deformation invariance achieved in its absence?In this work, we rigorously test these questions, and find that deformation stability in convolutional networks is more nuanced than it first appears: Deformation invariance is not a binary property; rather, different tasks require different degrees of deformation stability at different layers. Deformation stability is not a fixed property of a network and is heavily adjusted over the course of training, largely through the smoothness of the convolutional filters. Interleaved pooling layers are neither necessary nor sufficient for achieving the optimal form of deformation stability for natural image classification. Pooling confers deformation stability for image classification at initialization, and during training, networks have to learn this inductive bias.Together, these findings provide new insights into the role of interleaved pooling and deformation invariance in CNNs, and demonstrate the importance of rigorous empirical testing of even our most basic assumptions about the working of neural networks.",We find that pooling alone does not determine deformation stability in CNNs and that filter smoothness plays an important role in determining stability.
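The deformation-stability entry above (record 150) attributes much of the learned stability to the smoothness of convolutional filters. Below is a small NumPy sketch of one plausible smoothness measure, the total squared difference between neighboring filter taps; the specific metric is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def filter_smoothness(kernel):
    """Sum of squared differences between horizontally and vertically
    adjacent taps of a 2-D convolutional filter; lower = smoother."""
    dh = np.diff(kernel, axis=0)
    dw = np.diff(kernel, axis=1)
    return float(np.sum(dh ** 2) + np.sum(dw ** 2))

rng = np.random.default_rng(0)
rough = rng.standard_normal((5, 5))
smooth = np.outer(np.hanning(5), np.hanning(5))   # smoothly varying filter
print(filter_smoothness(rough), filter_smoothness(smooth))
```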
151,SELF: Learning to Filter Noisy Labels with Self-Ensembling,"Deep neural networks have been shown to over-fit a dataset when being trained with noisy labels for a long enough time.To overcome this problem, we present a simple and effective method, self-ensemble label filtering (SELF), to progressively filter out the wrong labels during training.Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy labels and stops learning on the filtered noisy labels.For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs.We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch.While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss.We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios.It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.","We propose a self-ensemble framework to train more robust deep learning models under noisy labeled datasets.This paper proposes ""self-ensemble label filtering"" for learning with noisy labels where the label noise is instance-independent, which yields more accurate identification of inconsistent predictions. This paper proposes an algorithm for learning from data with noisy labels which alternates between updating the model and removing samples that look like they have noisy labels."
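A NumPy sketch of the filtering step described for SELF above: a running average of per-sample predictions across epochs is maintained, and samples whose ensembled prediction disagrees with the given label are moved out of the supervised loss (and can be used in an unsupervised loss instead). The momentum value and the exact agreement test are illustrative assumptions.

```python
import numpy as np

def update_ensemble(ensemble_probs, epoch_probs, momentum=0.9):
    """Exponential moving average of softmax outputs over the whole training set."""
    return momentum * ensemble_probs + (1.0 - momentum) * epoch_probs

def filter_noisy(ensemble_probs, labels):
    """Keep samples whose ensembled prediction agrees with the given label;
    the rest are handed to the semi-supervised/unsupervised loss."""
    agree = ensemble_probs.argmax(axis=1) == labels
    return np.where(agree)[0], np.where(~agree)[0]

# toy usage: 5 samples, 3 classes
probs = np.full((5, 3), 1 / 3)
labels = np.array([0, 1, 2, 0, 1])
epoch_out = np.eye(3)[[0, 1, 0, 2, 1]]            # this epoch's predictions
probs = update_ensemble(probs, epoch_out)
print(filter_noisy(probs, labels))                # clean ids, suspect ids
```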
152,NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks,"Long training times of deep neural networks are a bottleneck in machine learning research.The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth.Recently, training a priori sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low.However, the choice of which sparse topology should be used in these networks is unclear.In this work, we provide a theoretical foundation for the choice of intra-layer topology.First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks.Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy.To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on.We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them.",We investigate pruning DNNs before training and provide an answer to which topology should be used for training a priori sparse networks.The authors propose to replace dense layers with sparsely-connected linear layers and an approach to finding the best topology by measuring how well the sparse layers approximate random weights of their dense counterparts.The paper proposes a sparse cascade architecture that is a multiplication of several sparse matrices and a specific connectivity pattern that outperforms other provided considerations.
153,Transfer Learning to Learn with Multitask Neural Model Search,"Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task.The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices.Neural Architecture Search is a Reinforcement Learning approach that has been proposed to automate architecture design.NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures.However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures.This procedure needs to be executed from scratch for each new task.The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks.In this paper, we present the Multitask Neural Model Search controller.Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks.We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task.We then show that pre-trained MNMS controllers can transfer learning to new tasks.By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models.","We present Multitask Neural Model Search, a Meta-learner that can design models for multiple tasks simultaneously and transfer learning to unseen tasks.This paper extends Neural Architecture Search to the multi-task learning problem where a task conditioned model search controller is learned to handle multiple tasks simultaneously.In this paper, the authors summarize their work on building a framework, called the Multitask Neural Model Search controller, for automated neural network construction across multiple tasks simultaneously." 154,Linearizing Visual Processes with Deep Generative Models,"This work studies the problem of modeling non-linear visual processes by leveraging deep generative architectures for learning linear, Gaussian models of observed sequences.We propose a joint learning framework, combining a multivariate autoregressive model and deep convolutional generative networks.After justification of theoretical assumptions of linearization, we propose an architecture that allows Variational Autoencoders and Generative Adversarial Networks to simultaneously learn the non-linear observation as well as the linear state-transition model from a sequence of observed frames.Finally, we demonstrate our approach on conceptual toy examples and dynamic textures.","We model non-linear visual processes as autoregressive noise via generative deep learning.Proposes a new method that models non-linear visual process with a deep version of a linear process (Markov process).This paper proposes a new deep generative model for sequences, particularly image sequences and video, which uses a linear structure in part of the model."
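For the "Linearizing Visual Processes" entry above (record 154), the core modeling idea is a linear (Gaussian) state-transition model sitting underneath a learned non-linear observation model. The sketch below fits such a first-order transition matrix to a sequence of latent codes by least squares; using plain least squares (rather than the paper's joint VAE/GAN training) is an assumption made to keep the example self-contained.

```python
import numpy as np

def fit_linear_transition(latents):
    """Least-squares fit of A in z_{t+1} ~ A z_t for a sequence of latent codes
    (latents has shape [T, d]); returns A in the z_{t+1} = A z_t convention."""
    z_t, z_next = latents[:-1], latents[1:]
    A_ls, *_ = np.linalg.lstsq(z_t, z_next, rcond=None)  # solves z_t @ A_ls ~ z_next
    return A_ls.T

rng = np.random.default_rng(0)
true_A = np.array([[0.9, -0.2], [0.1, 0.95]])
z = [rng.standard_normal(2)]
for _ in range(200):
    z.append(true_A @ z[-1] + 0.01 * rng.standard_normal(2))
print(fit_linear_transition(np.array(z)).round(2))   # close to true_A
```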
155,PDE-Net: Learning PDEs from Data,"Partial differential equations play a prominent role in many disciplines such as applied mathematics, physics, chemistry, material science, computer science, etc.PDEs are commonly derived based on physical laws or empirical observations.However, the governing equations for many complex systems in modern applications are still not fully known.With the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored.Such a vast quantity of data offers new opportunities for data-driven discovery of hidden physical laws.Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDE-Net, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models.The basic idea of the proposed PDE-Net is to learn differential operators by learning convolution kernels, and apply neural networks or other machine learning methods to approximate the unknown nonlinear responses.Compared with existing approaches, which either assume the form of the nonlinear response is known or fix certain finite difference approximations of differential operators, our approach has the most flexibility by learning both differential operators and the nonlinear responses.A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network.These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters.We also discuss relations of the PDE-Net with some existing networks in computer vision such as Network-In-Network and Residual Neural Network.Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.","This paper proposes a new feed-forward network, called PDE-Net, to learn PDEs from data. The paper explores the use of deep learning machinery for the purpose of identifying dynamical systems specified by PDEs.The paper proposes a neural network based algorithm for learning from data that arises from dynamical systems with governing equations that can be written as partial differential equations.This paper addresses complex dynamical systems modelling through nonparametric Partial Differential Equations using neural architectures, with the most important idea of the paper (PDE-Net) to learn both differential operators and the function that governs the PDE." 156,Discrete flow posteriors for variational inference in discrete dynamical systems,"Each training step for a variational autoencoder requires us to sample from the approximate posterior, so we usually choose simple approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism. However, such simple approximate posteriors are often insufficient, as they eliminate statistical dependencies in the posterior.
While it is possible to use normalizing flow approximate posteriors for continuous latents, there is nothing analogous for discrete latents.The most natural approach to model discrete dependencies is an autoregressive distribution, but sampling from such distributions is inherently sequential and thus slow. We develop a fast, parallel sampling procedure for autoregressive distributions based on fixed-point iterations, which enables efficient and accurate variational inference in discrete state-space models. To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic latent variables and interpreting the K-step fixed-point iterations as a normalizing flow. We found that converting to continuous latent variables gave considerable additional scope for mismatch between the true and approximate posteriors, which resulted in biased inferences; we thus used the former approach. We tested our approach on the neuroscience problem of inferring discrete spiking activity from noisy calcium-imaging data, and found that it gave accurate connectivity estimates in an order of magnitude less time.",We give a fast normalising-flow like sampling procedure for discrete latent variable models.This paper uses an autoregressive filtering variational approximation for parameter estimation in discrete dynamical systems by using fixed point iterations.The authors posit a general autoregressive posterior family for discrete variables or their continuous relaxations. This paper has two main contributions: it extends normalizing flows to discrete settings and presents an approximate fixed-point update rule for autoregressive time-series that can exploit GPU parallelism.
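A minimal NumPy illustration of the fixed-point idea described for record 156 above: every position of an autoregressive chain is updated in parallel from the previous iterate, and each sweep propagates the dependency structure one step further, eventually matching the sequential sample. The Bernoulli chain is a deliberately simplified stand-in for the paper's relaxed discrete state-space model.

```python
import numpy as np

def sequential_sample(u, w=2.0, b=-1.0):
    """Slow, inherently sequential sampling of an autoregressive Bernoulli chain."""
    x = np.zeros_like(u)
    for t in range(1, len(u)):
        p = 1.0 / (1.0 + np.exp(-(w * x[t - 1] + b)))
        x[t] = float(u[t] < p)
    return x

def fixed_point_sample(u, n_iters, w=2.0, b=-1.0):
    """Parallel sampling: all positions updated simultaneously from the previous
    iterate; each sweep propagates dependencies one step further along the chain."""
    x = np.zeros_like(u)
    for _ in range(n_iters):
        prev = np.concatenate(([0.0], x[:-1]))
        p = 1.0 / (1.0 + np.exp(-(w * prev + b)))
        x = (u < p).astype(float)
        x[0] = 0.0                        # keep the same initial condition
    return x

u = np.random.default_rng(0).uniform(size=50)   # shared base noise
print(np.allclose(sequential_sample(u), fixed_point_sample(u, n_iters=50)))
```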
157,LEARNING TO ORGANIZE KNOWLEDGE WITH N-GRAM MACHINES,"Deep neural networks have had great success on NLP tasks such as language modeling, machine translation and certain question answering tasks.However, the success is limited on more knowledge-intensive tasks such as QA over a large corpus.Existing end-to-end deep QA models need to read the entire text after observing the question, and therefore their complexity in responding to a question is linear in the text size.This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web.We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size.More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge.We apply our approach, called the N-Gram Machine, to the bAbI tasks and a special version of them which has stories of up to 10 million sentences.Our experiments show that NGM can successfully solve both of these tasks accurately and efficiently.Unlike fully differentiable memory models, NGM’s time complexity and answering quality are not affected by the story length.The whole system of NGM is trained end-to-end with REINFORCE.To avoid high variance in gradient estimation, which is typical in discrete latent variable models, we use beam search instead of sampling.To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.","We propose a framework that learns to encode knowledge symbolically and generate programs to reason about the encoded knowledge.The authors propose the N-Gram machine to answer questions over long documents.This paper presents the n-gram machine, a model that encodes sentences into simple symbolic representations which can be queried efficiently." 158,A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms,"We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge.In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions.We explain when this can work, using the assumption that the changes in distributions are localized.We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e.
that need to be adapted.We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs.Finally, motivated by the AI agent point of view, we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning.Experiments in the two-variable case validate the proposed ideas and theoretical results.","This paper proposes a meta-learning objective based on speed of adaptation to transfer distributions to discover a modular decomposition and causal variables.The paper shows that a model with the correct underlying structure will adapt faster to a causal intervention than a model with the incorrect structure.In this work, the authors propose a general and systematic meta-transfer objective that incorporates causal structure learning under unknown interventions." 159,Minimizing Change in Classifier Likelihood to Mitigate Catastrophic Forgetting,"Continual learning is a longstanding goal of artificial intelligence, but is often confounded by catastrophic forgetting that prevents neural networks from learning tasks sequentially.Previous methods in continual learning have demonstrated how to mitigate catastrophic forgetting, and learn new tasks while retaining performance on the previous tasks.We analyze catastrophic forgetting from the perspective of change in classifier likelihood and propose a simple L1 minimization criterion which can be adapted to different use cases.We further investigate two ways to minimize forgetting as quantified by this criterion and propose strategies to achieve finer control over forgetting.Finally, we evaluate our strategies on 3 datasets of varying difficulty and demonstrate improvements over previously known L2 strategies for mitigating catastrophic forgetting.","Another perspective on catastrophic forgetting.This paper introduces a framework for combating catastrophic forgetting based upon changing the loss term to minimise changes in classifier likelihood, obtained via a Taylor series approximation.This paper tries to solve the continual learning problem by focusing on regularization approaches, and it proposes an L1 strategy to mitigate the problem."
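A small sketch of the kind of criterion the catastrophic-forgetting entry above (record 159) describes: penalizing the L1 change in the classifier's predicted likelihoods on previous-task inputs. Storing the old outputs explicitly (rather than the Taylor-series approximation mentioned in the summary) is a simplification made for illustration.

```python
import numpy as np

def likelihood_change_penalty(new_probs, old_probs, strength=1.0):
    """L1 penalty on the change in predicted class likelihoods for a batch
    of samples drawn from previously learned tasks."""
    return strength * np.mean(np.abs(new_probs - old_probs))

old = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])   # cached old-task outputs
new = np.array([[0.5, 0.3, 0.2], [0.1, 0.8, 0.1]])   # current model outputs
print(likelihood_change_penalty(new, old))            # added to the new task's loss
```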
160,Part-Based 3D Face Morphable Model with Anthropometric Local Control,"We propose an approach to construct realistic 3D facial morphable models that allows an intuitive facial attribute editing workflow.Current face modeling methods using 3DMM suffer from the lack of local control.We thus create a 3DMM by combining local part-based 3DMM for the eyes, nose, mouth, ears, and facial mask regions.Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive while allowing accurate reconstruction.The editing controls we provide to the user are intuitive as they are extracted from anthropometric measurements found in the literature.Out of a large set of possible anthropometric measurements, we filter the ones that have meaningful generative power given the face data set.We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans.Our part-based 3DMM is compact yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control.We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation.The results show that our part-based 3DMM approach has excellent generative properties and allows intuitive local control to the user.",We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow by selecting the best sets of eigenvectors and anthropometric measurements.Proposes a piecewise morphable model for human face meshes and also proposes a mapping between anthropometric measurements of the face and the parameters of the model in order to synthesize and edit faces with desired attributes. This paper describes a method of part-based morphable facial model allowing for localized user control. 161,Emergent Communication through Negotiation,"Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems.In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction.We introduce two communication protocols - one grounded in the semantics of the game, and one which is a priori ungrounded. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded, cheap talk channel to do the same.
However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge.We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.","We teach agents to negotiate using only reinforcement learning; selfish agents can do so, but only using a trustworthy communication channel, and prosocial agents can negotiate using cheap talk.The authors describe a variant of the negotiation game with the consideration of a secondary communication channel for cheap talk, finding that the secondary channel improves negotiation outcomes.This paper explores how agents can learn to communicate to solve a negotiation task and finds that prosocial agents are able to learn to ground symbols using RL, but self-interested agents are not.Examines problems of how agents can use communication to maximise their rewards in a simple negotiation game." 162,LEARNING TO PROPAGATE LABELS: TRANSDUCTIVE PROPAGATION NETWORK FOR FEW-SHOT LEARNING,"The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class.The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task.Yet, even with such meta-learning, the low-data problem in the novel classification task still remains.In this paper, we propose Transductive Propagation Network, a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data.TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results.","We propose a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.This paper proposes to address few-shot learning in a transductive way by learning a label propagation model in an end-to-end manner, the first to learn label propagation for transductive few-shot learning and produced effective empirical results. This paper proposes a meta-learning framework that leverages unlabeled data by learning the graph-based label propagation in an end-to-end manner.Studies few-shot learning in a transductive setting: using meta learning to learn to propagate labels from training samples to test samples.
" 163,Automated Science Scheduling for the ECOSTRESS Mission,"We describe the use of an automated scheduling system for observation policy design and to schedule operations of the NASA ECOSystem Spaceborne Thermal Radiometer Experiment on Space Station.We describe the adaptation of the Compressed Large-scale Activity Scheduler and Planner scheduling system to the ECOSTRESS scheduling problem, highlighting multiple use cases for automated scheduling and several challenges for the scheduling technology: handling long-term campaigns with changing information, Mass Storage Unit Ring Buffer operations challenges, and orbit uncertainty.The described scheduling system has been used for operations of the ECOSTRESS instrument since its nominal operations start July 2018 and is expected to operate until mission end in Summer 2019.","We describe the use of an automated scheduling system for observation policy design and to schedule operations of NASA's ECOSTRESS mission."", 'This paper presents an adaptation of an automated scheduling system, CLASP, to target an EO experiment (ECOSTRESS) on the ISS. " 164,Explaining Adversarial Examples with Knowledge Representation,"Adversarial examples are modified samples that preserve original image structures but deviate classifiers.Researchers have put efforts into developing methods for generating adversarial examples and finding out origins.Past research put much attention on decision boundary changes caused by these methods.This paper, in contrast, discusses the origin of adversarial examples from a more underlying knowledge representation point of view.Human beings can learn and classify prototypes as well as transformations of objects.While neural networks store learned knowledge in a more hybrid way of combining all prototypes and transformations as a whole distribution.Hybrid storage may lead to lower distances between different classes so that small modifications can mislead the classifier.A one-step distribution imitation method is designed to imitate distribution of the nearest different class neighbor.Experiments show that simply by imitating distributions from a training set without any knowledge of the classifier can still lead to obvious impacts on classification results from deep networks.It also implies that adversarial examples can be in more forms than small perturbations.Potential ways of alleviating adversarial examples are discussed from the representation point of view.The first path is to change the encoding of data sent to the training step.Training data that are more prototypical can help seize more robust and accurate structural knowledge.The second path requires constructing learning frameworks with improved representations.",Hybird storage and representation of learned knowledge may be a reason for adversarial examples. 
165,CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY,"Unlike the popular Deep Q-Network learning, Alternating Q-learning does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient.Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance.Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before.In this paper, we first provide a solid exploration of how well AltQ performs with Adam.We then take a further step to improve the implementation by adopting the technique of parameter restart.More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance to the DQN learning method.The convergence rate of the slightly modified version of the proposed algorithms is characterized under linear function approximation.To the best of our knowledge, this is the first theoretical study on Adam-type algorithms in Q-learning.",New Experiments and Theory for Adam Based Q-Learning.This paper provides a convergence result for traditional Q-learning with linear function approximation when using an Adam-like update. This paper describes a method to improve the AltQ algorithm by using a combination of an Adam optimizer and regularly restarting the internal parameters of the Adam optimizer. 166,Spectral Capsule Networks,"In search of more accurate predictive models, we customize capsule networks for the learning to diagnose problem.We also propose Spectral Capsule Networks, a novel variation of capsule networks, which converges faster than capsule networks with EM routing.Spectral capsule networks consist of spatial coincidence filters that detect entities based on the alignment of extracted features on a one-dimensional linear subspace.Experiments on a public benchmark learning to diagnose dataset not only show the success of capsule networks on this task, but also confirm the faster convergence of the spectral capsule networks.","A new capsule network that converges faster on our healthcare benchmark experiments.Presents a variant of capsule networks that instead of using EM routing employs a linear subspace spanned by the dominant eigenvector of the weighted votes matrix from the previous capsule.The paper proposes an improved routing method, which employs tools of eigendecomposition to find capsule activation and pose."
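For the AltQ entry above (record 165), the reported improvement comes from running the Q-update with Adam and periodically restarting Adam's internal state. Below is a minimal NumPy Adam with such a restart; the restart interval and the choice to reset both moment estimates and the step counter are assumptions made for illustration, not the paper's exact schedule.

```python
import numpy as np

class AdamWithRestart:
    def __init__(self, dim, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, restart_every=1000):
        self.lr, self.b1, self.b2, self.eps = lr, *betas, eps
        self.restart_every = restart_every
        self.dim = dim
        self._reset()

    def _reset(self):
        # parameter restart: wipe first/second moment estimates and step count
        self.m = np.zeros(self.dim)
        self.v = np.zeros(self.dim)
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        params = params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
        if self.t % self.restart_every == 0:
            self._reset()
        return params

# toy usage: minimize ||w||^2 with periodic optimizer restarts
w, opt = np.ones(3), AdamWithRestart(dim=3, restart_every=200)
for _ in range(1000):
    w = opt.step(w, grad=2 * w)
print(w)
```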
167,Distributed Fine-tuning of Language Models on Private Data,"One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm.In language modeling, users’ language could change in a year and be completely different from what we observe in publicly available data.At the same time, public data can be used for obtaining general knowledge.We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining the quality on the general data and minimization of communication costs.We propose a novel technique that significantly improves prediction quality on users’ language compared to a general model and outperforms gradient compression methods in terms of communication efficiency.The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts.Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.",We propose a method of distributed fine-tuning of language models on user devices without collection of private data.This paper deals with improving language models on mobile devices based on a small portion of text that the user has entered, by employing a linearly interpolated objective between user-specific text and general English. 168,Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models,"We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior which we refer to as pseudo-posterior.Unlike standard variational inference which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions with the pseudo-posterior.Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.",This paper utilizes the analysis of Lipschitz loss on a bounded hypothesis space to derive new ERM-type algorithms with strong performance guarantees that can be applied to the non-conjugate sparse GP model.
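The distributed fine-tuning entry above (record 167) is summarized as linearly interpolating between an objective on the user's private text and one on general data; a minimal sketch of that interpolation is below, with the mixing weight as an assumed hyperparameter rather than the paper's tuned value.

```python
def interpolated_loss(user_nll, general_nll, mix=0.7):
    """Linear interpolation between the loss on user-specific text and the
    loss on general-domain text, to adapt without degrading general quality."""
    return mix * user_nll + (1.0 - mix) * general_nll

print(interpolated_loss(user_nll=3.2, general_nll=4.1))
```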
169,RotationOut as a Regularization Method for Neural Network,"In this paper, we propose a novel regularization method, RotationOut, for neural networks.Different from Dropout that handles each neuron/channel independently, RotationOut regards its input layer as an entire vector and introduces regularization by randomly rotating the vector.RotationOut can also be used in convolutional layers and recurrent layers with a small modification.We further use a noise analysis method to interpret the difference between RotationOut and Dropout in co-adaptation reduction.Using this method, we also show how to use RotationOut/Dropout together with Batch Normalization.Extensive experiments in vision and language tasks are conducted to show the effectiveness of the proposed method.Code will be available.","We propose a regularization method for neural networks and a noise analysis method.This paper proposes a new regularization method to mitigate the overfitting issue of deep neural networks by rotating features with a random rotation matrix to reduce co-adaptation.This paper proposes a novel regularization method for training neural networks, which adds noise to neurons in an inter-dependent fashion." 170,Probabilistic View of Multi-agent Reinforcement Learning: A Unified Approach,"Formulating the reinforcement learning problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train.While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup.In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model.We model the environment, as seen by each of the agents, using separate but related Markov decision processes.We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic for performing approximate inference in the proposed model using variational inference.MA-SAC can be employed in both cooperative and competitive settings.Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios.While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.",A probabilistic framework for multi-agent reinforcement learning.This paper proposes a new algorithm named Multi-Agent Soft Actor-Critic (MA-SAC) based on the off-policy maximum-entropy actor critic algorithm Soft Actor-Critic (SAC). 171,Stochastic Optimization of Sorting Networks via Continuous Relaxations,"Sorting input objects is an important step in many machine learning pipelines.However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization.In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax.This relaxation permits straight-through optimization of any computational graph involving a sorting operation.Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a
reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations.We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm.","We provide a continuous relaxation to the sorting operator, enabling end-to-end, gradient-based stochastic optimization.The paper considers how to sort a number of items without explicitly necessarily learning their actual meanings or values and proposes a method to perform the optimization via a continuous relaxation.This work builds on a sum(top k) identity to derive a pathwise differentiable sampler of 'unimodal row stochastic' matrices.Introduces a continuous relaxation of the sorting operator in order to construct an end-to-end gradient-based optimization and introduces a stochastic extension of its method using Plackett-Luce distributions and Monte Carlo." 172,Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization,"Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the area of global optimization algorithms.Readily available algorithms are typically designed to be universal optimizers and, thus, often suboptimal for specific tasks.We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes.Using reinforcement learning to meta-train an acquisition function on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency.We present experiments on a sim-to-real transfer task as well as on several simulated functions and two hyperparameter search problems.The results show that our algorithm automatically identifies structural properties of objective functions from available source tasks or simulations, performs favourably in settings with both scarce and abundant source data, and falls back to the performance level of general AFs if no structure is present.","We perform efficient and flexible transfer learning in the framework of Bayesian optimization through meta-learned neural acquisition functions.The authors present MetaBO which uses reinforcement learning to meta-learn the acquisition function for Bayesian Optimization, showing increased sample efficiency on new tasks.The authors propose a meta-learning based alternative to standard acquisition functions (AFs), whereby a pretrained neural network outputs acquisition values as a function of hand-chosen features."
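A NumPy sketch of a temperature-controlled, unimodal row-stochastic relaxation of the sort operator of the kind the NeuralSort entry above (record 171) describes: each row softly selects the i-th largest element, and a hard permutation matrix is recovered as the temperature goes to zero. This follows the published NeuralSort construction as best I recall it, so the exact operator in the paper may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relaxed_sort(s, tau=0.1):
    """Unimodal row-stochastic relaxation of the (descending) sort operator:
    row i softly points at the i-th largest entry of s; tau -> 0 recovers a
    hard permutation matrix."""
    n = len(s)
    pairwise = np.abs(s[:, None] - s[None, :]).sum(axis=1)   # |s_i - s_j| summed over j
    rows = []
    for i in range(1, n + 1):
        rows.append(softmax(((n + 1 - 2 * i) * s - pairwise) / tau))
    return np.stack(rows)

s = np.array([0.3, 2.0, -1.0, 0.7])
P = relaxed_sort(s, tau=0.05)
print(P.round(2))        # rows are approximately one-hot
print(P @ s)             # approximately s sorted in descending order
```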
173,Estimating Information Flow in DNNs,"We study the evolution of internal representations during deep neural network training, aiming to demystify the compression aspect of the information bottleneck theory.The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases.Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant or infinite.This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works.To this end, we introduce an auxiliary DNN framework for which I(X;T) is a meaningful quantity that depends on the network's parameters.This noisy framework is shown to be a good proxy for the original DNN both in terms of performance and the learned representations.We then develop a rigorous estimator for I(X;T) in noisy DNNs and observe compression in various models.By relating I(X;T) in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class.Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the T space.Finally, we return to the estimator of I(X;T) employed in past works, and demonstrate that while it fails to capture the true mutual information, it does serve as a measure for clustering.This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.","Deterministic deep neural networks do not discard information, but they do cluster their inputs.This paper provides a principled way to examine the compression phase in deep neural networks by providing a theoretically sound entropy estimator to estimate mutual information.
" 174,Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning,"A central challenge in multi-agent reinforcement learning is the induction of coordination between agents of a team.In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues respectively based on inter-agent modelling and synchronized sub-policy selection.We test each approach in four challenging continuous control tasks with sparse rewards and compare them against three baselines including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm.To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allows a fixed hyper-parameter search budget for each algorithm and environment.We consequently assess both the hyper-parameter sensitivity, sample-efficiency and asymptotic performance of each learning method.Our experiments show that the proposed methods lead to significant improvements on cooperative problems.We further analyse the effects of the proposed regularizations on the behaviors learned by the agents.",We propose regularization objectives for multi-agent RL algorithms that foster coordination on cooperative tasks.This paper proposes two methods of biasing agents towards learning coordinated behaviours and evaluates both rigorously across multi-agent domains of suitable complexity.This paper proposes two methods building upon MADDPG to encourage collaboration amongst decentralized MARL agents. 175,Learning Robust Joint Representations for Multimodal Sentiment Analysis,"Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities.The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities.However, existing work learns joint representations using multiple modalities as input and may be sensitive to noisy or missing modalities at test time.With the recent success of sequence to sequence models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time.In this paper, we propose a method to learn robust joint representations by translating between modalities.Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities.Once our translation model is trained with paired multimodal data, we only need data from the source modality at test-time for prediction.This ensures that our model remains robust from perturbations or missing target modalities.We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube.Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to perturbations of all other modalities.",We present a model that learns robust joint representations by performing hierarchical cyclic translations between multiple modalities.This paper presents the Multimodal Cyclic Translation Network 
(MCTN) and evaluates it for multimodal sentiment analysis. 176,Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods,"The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning.The Hessian is often used to understand these geometric properties.We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian.Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian.We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16.To perform these experiments, we propose a framework for spectral visualization, based on GPU accelerated stochastic Lanczos quadrature.This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.","Understanding the neural network Hessian eigenvalues under the data generating distribution.This paper analyzes the spectrum of the Hessian matrix of large neural networks, with an analysis of max/min eigenvalues and visualization of spectra using a Lanczos quadrature approach.This paper uses random matrix theory to study the spectrum distribution of the empirical Hessian and true Hessian for deep learning, and proposes an efficient spectrum visualization method." 177,Structured Neural Summarization,"Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input.Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text.In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.","One simple trick to improve sequence models: Compose them with a graph model.This paper presents a structural summarization model with a graph-based encoder extended from RNN.This work combines Graph Neural Networks with a sequential approach to abstractive summarization, effective across all datasets in comparison to external baselines."
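Record 176 above relies on (GPU-accelerated, stochastic) Lanczos quadrature to visualize Hessian spectra. The NumPy sketch below shows the basic building block, a Lanczos tridiagonalization driven only by matrix-vector products, applied to an explicit symmetric matrix as a stand-in for Hessian-vector products; it is a minimal CPU sketch without reorthogonalization, not the paper's estimator.

```python
import numpy as np

def lanczos(matvec, dim, num_steps, rng=np.random.default_rng(0)):
    """Lanczos tridiagonalization using only matrix-vector products; the
    eigenvalues of the small tridiagonal matrix approximate the extreme
    eigenvalues of the (implicit) symmetric operator."""
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    vectors, alphas, betas = [v], [], []
    for j in range(num_steps):
        w = matvec(vectors[-1])
        alpha = vectors[-1] @ w
        w = w - alpha * vectors[-1] - (betas[-1] * vectors[-2] if betas else 0)
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if j + 1 < num_steps:
            betas.append(beta)
            vectors.append(w / beta)
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)

# stand-in for a Hessian-vector product oracle
H = np.diag(np.linspace(-1.0, 5.0, 200))
print(lanczos(lambda x: H @ x, dim=200, num_steps=30)[[0, -1]])  # approx. [-1, 5]
```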
178,SDGM: Sparse Bayesian Classifier Based on a Discriminative Gaussian Mixture Model,"In probabilistic classification, a discriminative model based on a Gaussian mixture exhibits flexible fitting capability.Nevertheless, it is difficult to determine the number of components.We propose a sparse classifier based on a discriminative Gaussian mixture model, which is named sparse discriminative Gaussian mixture.In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning.This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components.The SDGM can be embedded into neural networks such as convolutional NNs and can be trained in an end-to-end manner.Experimental results indicated that the proposed method prevented overfitting by obtaining sparsity.Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.","A sparse classifier based on a discriminative Gaussian mixture model, which can also be embedded into a neural network.The paper presents a Gaussian mixture model trained via gradient descent arguments which allows for inducing sparsity and reducing the trainable model layer parameters.This paper proposes a classifier, called SDGM, based on discriminative Gaussian mixture and its sparse parameter estimation." 179,CyCADA: Cycle-Consistent Adversarial Domain Adaptation,"Domain adaptation is critical for success in new, unseen environments.Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts.Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs.We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model.CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings.We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.","An unsupervised domain adaptation approach which adapts at both the pixel and feature levels.This paper proposes a domain adaptation approach by extending the CycleGAN with task specific loss functions and loss imposed over both pixels and features. This paper proposes the use of CycleGANs for Domain Adaptation.This paper makes a novel extension to the previous work on CycleGAN by coupling it with adversarial adaptation approaches, including a new feature and semantic loss in the overall objective of the CycleGAN, with clear benefits."
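For the CyCADA entry above (record 179), the abstract combines pixel-level adversarial adaptation and cycle-consistency with a task loss. The sketch below only assembles that combined objective from precomputed pieces; the weighting terms (and the choice of an L1 cycle error) are assumed hyperparameters, not the paper's published settings.

```python
import numpy as np

def cycada_style_loss(task_loss, gan_loss, cycle_error, feat_loss,
                      w_gan=1.0, w_cyc=10.0, w_feat=1.0):
    """Weighted combination of a task loss on adapted images, adversarial (GAN)
    losses at the pixel/feature level, and an L1 cycle-consistency error."""
    cyc = np.mean(np.abs(cycle_error))
    return task_loss + w_gan * gan_loss + w_cyc * cyc + w_feat * feat_loss

# toy usage: cycle_error is the gap between an image and its round-trip translation
gap = np.random.default_rng(0).normal(0, 0.1, size=(8, 32, 32, 3))
print(cycada_style_loss(task_loss=0.9, gan_loss=0.4, cycle_error=gap, feat_loss=0.2))
```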
180,Amharic Light Stemmer,"Stemming is the process of removing affixes from words, which improves the accuracy and performance of information retrieval systems.This paper presents the reduction of Amharic words to their corresponding stems in a way that preserves semantic information.The proposed approach efficiently removes affixes from an Amharic word.The process of reducing a word to its base form by removing such affixes is called stemming.While many stemmers exist for dominant languages such as English, under-resourced languages such as Amharic lack such powerful tool support.In this paper, we design a rule-based light Amharic stemmer that receives an Amharic word, matches the beginning of the word against possible prefixes and its ending against possible suffixes, and finally checks whether it contains an infix.The final result is the stem if any prefix, infix and/or suffix is found; otherwise the word remains in one of the earlier states.The technique does not rely on any additional resource to verify the generated stem.The performance of the generated stemmer is evaluated using manually annotated Amharic words.The result is compared with the current state-of-the-art stemmer for Amharic, showing an increase of 7% in stemmer correctness.","Amharic Light Stemmer is designed for improving performance of Amharic Sentiment Classification.This paper studies stemming for morphologically rich languages with a light stemmer that only removes affixes to the extent that the original semantic information in the word is kept.This paper proposes a technique for Amharic light stemming using a cascade of transformations that standardize the form, remove suffixes, prefixes, and infixes." 181,Do deep neural networks possess concept space grid cells?,"Place and grid-cells are known to aid navigation in animals and humans.Together with concept cells, they allow humans to form an internal representation of the external world, namely the concept space.We investigate the presence of such a space in deep neural networks by plotting the activation profile of its hidden layer neurons.Although place cell and concept-cell like properties are found, grid-cell like firing patterns are absent, thereby indicating a lack of path integration or feature transformation functionality in trained networks.Overall, we present a plausible inadequacy in current deep learning practices that restricts deep networks from performing analogical reasoning and memory retrieval tasks.",We investigate whether simple deep networks possess grid cell-like artificial neurons during memory retrieval in the learned concept space. 182,Toward predictive machine learning for active vision,"We develop a comprehensive description of the active inference framework, as proposed by Friston, under a machine-learning compliant perspective.Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies.
Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data.The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression.Though optimizing the future posterior entropy over the action set is shown to be enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown to be better, as it saves processing costs by pre-computing the saccade pathways, with a negligible effect on the recognition/compression rates.",Pros and cons of saccade-based computer vision under a predictive coding perspective.Presents a computational framework for the active vision problem and explains how the control policy can be learned to reduce the entropy of the posterior belief. 183,"Study of a Simple, Expressive and Consistent Graph Feature Representation","Graphs possess exotic features like variable size and absence of natural ordering of the nodes that make them difficult to analyze and compare.To circumvent this problem and learn on graphs, graph feature representation is required.Main difficulties with feature extraction lie in the trade-off between expressiveness, consistency and efficiency, i.e. the capacity to extract features that represent the structural information of the graph while being deformation-consistent and isomorphism-invariant.While state-of-the-art methods enhance expressiveness with powerful graph neural-networks, we propose to leverage natural spectral properties of graphs to study a simple graph feature: the graph Laplacian spectrum.We analyze the representational power of this object that satisfies both isomorphism-invariance, expressiveness and deformation-consistency.In particular, we propose a theoretical analysis based on graph perturbation to understand what kind of comparison between graphs we do when comparing GLS.To do so, we derive bounds for the distance between GLS that are related to the divergence to isomorphism, a standard computationally expensive graph divergence.Finally, we evaluate GLS as a graph representation through consistency tests and classification tasks, and show that it is a strong graph feature representation baseline.",We theoretically study the consistency of the Laplacian spectrum and use it as a whole-graph embedding.This paper focuses on the Laplacian spectrum of a graph as a means to generate a representation to be used to compare graphs and classify them.This work proposes to use the graph Laplacian spectrum to learn graph representations. 184,Fast is better than free: Revisiting adversarial training,"Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent. In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method, when combined with random initialization, is as effective as PGD-based training but has significantly lower cost.
Furthermore, we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy at epsilon=8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at epsilon=2/255 in 12 hours, in comparison to past work based on free adversarial training which took 10 and 50 hours to reach the same respective thresholds.","FGSM-based adversarial training, with randomization, works just as well as PGD-based adversarial training: we can use this to train a robust classifier in 6 minutes on CIFAR10, and 12 hours on ImageNet, on a single machine.This paper revisits the Random+FGSM method to train robust models against strong PGD evasion attacks faster than previous methods.The main claim of this paper is that a simple strategy of randomization plus fast gradient sign method (FGSM) adversarial training yields robust neural networks." 185,DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures,"In seeking sparse and efficient neural network models, many previous works investigated enforcing L1 or L0 regularizers to encourage weight sparsity during training.The L0 regularizer measures the parameter sparsity directly and is invariant to the scaling of parameter values.But it cannot provide useful gradients and therefore requires complex optimization techniques.The L1 regularizer is almost everywhere differentiable and can be easily optimized with gradient descent.Yet it is not scale-invariant and applies the same shrinking rate to all parameters, which is inefficient in increasing sparsity.Inspired by the Hoyer measure used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.Our experiments show that enforcing DeepHoyer regularizers can produce even sparser neural network models than previous works, under the same accuracy level.We also show that DeepHoyer can be applied to both element-wise and structural pruning.","We propose almost everywhere differentiable and scale invariant regularizers for DNN pruning, which can lead to superior sparsity through standard SGD training.The paper proposes a scale-invariant regularizer (DeepHoyer) inspired by the Hoyer measure to enforce sparsity in neural networks.
" 186,Learning Through Limited Self-Supervision: Improving Time-Series Classification Without Additional Data via Auxiliary Tasks,"Self-supervision, in which a target task is improved without external supervision, has primarily been explored in settings that assume the availability of additional data.However, in many cases, particularly in healthcare, one may not have access to additional data.In such settings, we hypothesize that self-supervision based solely on the structure of the data at-hand can help.We explore a novel self-supervision framework for time-series data, in which multiple auxiliary tasks are included to improve overall performance on a sequence-level target task without additional training data.We call this approach limited self-supervision, as we limit ourselves to only the data at-hand.We demonstrate the utility of limited self-supervision on three sequence-level classification tasks, two pertaining to real clinical data and one using synthetic data.Within this framework, we introduce novel forms of self-supervision and demonstrate their utility in improving performance on the target task.Our results indicate that limited self-supervision leads to a consistent improvement over a supervised baseline, across a range of domains.In particular, for the task of identifying atrial fibrillation from small amounts of electrocardiogram data, we observe a nearly 13% improvement in the area under the receiver operating characteristics curve relative to the baseline.Limited self-supervision applied to sequential data can aid in learning intermediate representations, making it particularly applicable in settings where data collection is difficult.","We show that extra unlabeled data is not required for self-supervised auxiliary tasks to be useful for time series classification, and present new and effective auxiliary tasks.This paper proposes a self-supervised method for learning from time series data in healthcare settings via designing auxilliary tasks based on data's internal structure to create more labeled auxilliary training tasks."", 'This paper propose an approach for self-supervised learning on time series." 
187,A Fine-Grained Spectral Perspective on Neural Networks,"Are neural networks biased toward simple functions?Does depth always help learn more complex features?Is training the last layer of a network as good as training all layers?These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective.We will study the spectra of the *Conjugate Kernel, CK,*, and the *Neural Tangent Kernel, NTK*.Roughly, the CK and the NTK tell us respectively ""what a network looks like at initialization"" and ""what a network looks like during and after training.""Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks.By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks.We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks.We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra.","Eigenvalues of Conjugate (aka NNGP) and Neural Tangent Kernel can be computed in closed form over the Boolean cube and reveal the effects of hyperparameters on neural network inductive bias, training, and generalization.This paper gives a spectral analysis on neural networks' conjugate kernel and neural tangent kernel on boolean cube to resolve why deep networks are biased towards simple functions." 188,Visual Imitation with a Minimal Adversary,"High-dimensional sparse reward tasks present major challenges for reinforcement learning agents. In this work we use imitation learning to address two of these challenges: how to learn a useful representation of the world e.g. from pixels, and how to explore efficiently given the rarity of a reward signal?We show that adversarial imitation can work well even in this high dimensional observation space.Surprisingly the adversary itself, acting as the learned reward function, can be tiny, comprising as few as 128 parameters, and can be easily trained using the most basic GAN formulation.Our approach removes limitations present in most contemporary imitation approaches: requiring no demonstrator actions, no special initial conditions or warm starts, and no explicit tracking of any single demo.The proposed agent can solve a challenging robot manipulation task of block stacking from only video demonstrations and sparse reward, in which the non-imitating agents fail to learn completely. Furthermore, our agent learns much faster than competing approaches that depend on hand-crafted, staged dense reward functions, and also better compared to standard GAIL baselines.Finally, we develop a new adversarial goal recognizer that in some cases allows the agent to learn stacking without any task reward, purely from imitation.","Imitation from pixels, with sparse or no reward, using off-policy RL and a tiny adversarially-learned reward function.The paper proposes to use a ""minimal adversary"" in generative adversarial imitation learning under high-dimensional visual spaces.This paper aims at solving the problem of estimating sparse rewards in a high-dimensional input setting." 
189,TequilaGAN: How To Easily Identify GAN Samples,"In this paper we show strategies to easily identify fake samples generated with the Generative Adversarial Network framework.One strategy is based on the statistical analysis and comparison of raw pixel values and features extracted from them.The other strategy learns formal specifications from the real data and shows that fake samples violate the specifications of the real data.We show that fake samples produced with GANs have a universal signature that can be used to identify fake samples.We provide results on MNIST, CIFAR10, music and speech data.",We show strategies to easily identify fake samples generated with the Generative Adversarial Network framework.Show that fake samples created with common generative adversarial network (GAN) implementations are easily identified using various statistical techniques. The paper proposes statistics to identify fake data generated using GANs based on simple marginal statistics or formal specifications automatically generated from real data. 190,Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks,"Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence.The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices.This imposes an upper-bound on the reduction of complexity of multiply-accumulate units.We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training.Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation.We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet.In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline.We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds.Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems.",We present an analytical framework to determine accumulation bit-width requirements in all three deep learning training GEMMs and verify the validity and tightness of our method via benchmarking experiments.The authors propose an analytical method to predict the number of mantissa bits needed for partial summations for convolutional and fully connected layersThe authors conduct a thorough analysis of the numeric precision required for the accumulation operations in neural network training and show the theoretical impact of reducing number of bits in the floating point accumulator. 
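Entry 190 above rests on the observation that accumulating many small partial products in a narrow floating-point format loses information once the running sum dwarfs the individual addends ("swamping"). The snippet below is only a toy illustration of that failure mode under an assumed inner-product length; it is not the paper's variance analysis or bit-width formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16  # accumulation length of a large inner product (assumed for the demo)
a = rng.standard_normal(n).astype(np.float16)
b = rng.standard_normal(n).astype(np.float16)

# Reference: accumulate the fp16 products in a wide (fp64) accumulator.
ref = np.sum(a.astype(np.float64) * b.astype(np.float64))

# Narrow accumulator: keep the running sum in fp16, as a reduced-precision MAC would.
acc = np.float16(0.0)
for x, y in zip(a, b):
    acc = np.float16(acc + np.float16(x * y))

print(f"wide accumulator : {ref:.4f}")
print(f"fp16 accumulator : {float(acc):.4f}  (error {abs(float(acc) - ref):.4f})")
```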
191,Unsupervised Domain Adaptation for Distance Metric Learning,"Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain.However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space.This paper addresses the above scenario by learning a representation space that retains discriminative power on both the source and target domains while keeping representations for the two domains well-separated.Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one.To handle both within and cross domain verifications, we propose a Feature Transfer Network to separate the target feature space from the original source space while aligning it with a transformed source space.Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain.In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.",A new theory of unsupervised domain adaptation for distance metric learning and its application to face recognition across diverse ethnicity variations.Proposes a novel feature transfer network that optimizes domain adversarial loss and domain separation loss. 192,ProxSGD: Training Structured Neural Networks under Regularization and Constraints,"In this paper, we consider the problem of training neural networks.To promote an NN with specific structures, we explicitly take into consideration the nonsmooth regularization and constraints.This is formulated as a constrained nonsmooth nonconvex optimization problem, and we propose a convergent proximal-type stochastic gradient descent algorithm.We show that under properly selected learning rates, momentum eventually resembles the unknown real gradient and thus is crucial in analyzing the convergence.We establish that with probability 1, every limit point of the sequence generated by the proposed Prox-SGD is a stationary point.Then the Prox-SGD is tailored to train a sparse neural network and a binary neural network, and the theoretical analysis is also supported by extensive numerical tests.","We propose a convergent proximal-type stochastic gradient descent algorithm for constrained nonsmooth nonconvex optimization problems.This paper proposes Prox-SGD, a theoretical framework for stochastic optimization algorithms shown to converge asymptotically to stationarity for smooth non-convex loss + convex constraint/regularizer.The paper proposes a new gradient-based stochastic optimization algorithm with gradient averaging by adapting theory for proximal algorithms to the non-convex setting."
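Entry 192 alternates a stochastic gradient step (with momentum) on the smooth loss with a proximal step on the nonsmooth regularizer. Below is a minimal sketch for the special case of an l1 regularizer, whose proximal operator is soft-thresholding; the momentum form, step sizes, and regularization weight are assumptions for illustration, not the paper's exact update rule.

```python
import torch

def soft_threshold(x: torch.Tensor, t: float) -> torch.Tensor:
    # Proximal operator of t * ||x||_1.
    return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)

def prox_sgd_step(params, grads, states, lr=0.05, beta=0.9, l1=1e-2):
    """One proximal-SGD step: momentum on the smooth part, prox on the l1 part."""
    for p, g, m in zip(params, grads, states):
        m.mul_(beta).add_(g, alpha=1.0 - beta)   # momentum estimate of the gradient
        p.sub_(lr * m)                            # gradient step on the smooth loss
        p.copy_(soft_threshold(p, lr * l1))       # proximal step on the regularizer

# Toy usage on an l1-regularized least-squares problem.
torch.manual_seed(0)
A, y = torch.randn(64, 32), torch.randn(64)
w = torch.zeros(32, requires_grad=True)
state = [torch.zeros_like(w)]
for _ in range(200):
    loss = 0.5 * ((A @ w - y) ** 2).mean()
    g, = torch.autograd.grad(loss, w)
    with torch.no_grad():
        prox_sgd_step([w], [g], state)
print("nonzero coordinates:", int((w.abs() > 1e-8).sum()))
```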
193,The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit,"The loss of a few neurons in a brain rarely results in any visible loss of function.However, the insight into what “few” means in this context is unclear.How many random neuron failures will it take to lead to a visible loss of function?In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and the error in the output it produces.We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.We give provable guarantees on the robustness of the network to these crashes.Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar.The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks.We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture.We design an algorithm achieving fault tolerance using a reasonable number of neurons.In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem.","We give a bound for NNs on the output error in case of random weight failures using a Taylor expansion in the continuous limit where nearby neurons are similarThis paper considers the problem of dropping neurons from a neural network, showing that if the goal is to become robust to randomly dropped neurons during evaluation, then it is sufficient to just train with dropout.This contribution studies the impact of deletions of random neurons on prediction accuracy of trained architecture, with the application to failure analysis and the specific context of neuromorphic hardware." 194,Swoosh! Rattle! Thump! - Actions that Sound,"Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world.In robotics, we have seen tremendous progress in using visual and tactile perception; however we have often ignored a key sense: sound.This is primarily due to lack of data that captures the interplay of action and sound.In this work, we perform the first large-scale study of the interactions between sound and robotic action.To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot.By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information.Using this data, we explore the synergies between sound and action, and present three key insights.First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench.Second, sound also contains information about the causal effects of an action, i.e. 
given the sound produced, we can predict what action was applied to the object.Finally, object representations derived from audio embeddings are indicative of implicit physical properties.We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings.","We explore and study the synergies between sound and action.This paper explores the connections between action and sound by building a sound-action-vision dataset with a tilt-bot.This paper studies the role of audio in object and action perception, as well as how auditory information can help learn forward and inverse dynamics models." 195,Hierarchical Complement Objective Training,"Hierarchical label structures widely exist in many machine learning tasks, ranging from those with explicit label hierarchies such as image classification to the ones that have latent label hierarchies such as semantic segmentation.Unfortunately, state-of-the-art methods often utilize cross-entropy loss, which implicitly assumes independence among class labels.Motivated by the fact that class members from the same hierarchy need to be similar to each other, we design a new training paradigm called Hierarchical Complement Objective Training.In HCOT, in addition to maximizing the probability of the ground truth class, we also neutralize the probabilities of the rest of the classes in a hierarchical fashion, making the model take advantage of the label hierarchy explicitly.We evaluate our method on both image classification and semantic segmentation.Results show that HCOT outperforms state-of-the-art models on CIFAR100, Imagenet, and PASCAL-context.Our experiments also demonstrate that HCOT can be applied to tasks with latent label hierarchies, which is a common characteristic in many machine learning tasks.","We propose Hierarchical Complement Objective Training, a novel training paradigm to effectively leverage category hierarchy in the labeling space on both image classification and semantic segmentation.A method that regularizes the entropy of the posterior distribution over classes which can be useful for image classification and segmentation tasks" 196,Improving One-Shot NAS By Suppressing The Posterior Fading,"There is a growing interest in automated neural architecture search.To improve the efficiency of NAS, previous approaches adopt a weight sharing method to force all models to share the same set of weights.
However, it has been observed that a model performing better with shared weights does not necessarily perform better when trained alone.In this paper, we analyse existing weight sharing one-shot NAS approaches from a Bayesian point of view and identify the posterior fading problem, which compromises the effectiveness of shared weights.To alleviate this problem, we present a practical approach to guide the parameter posterior towards its true distribution.Moreover, a hard latency constraint is introduced during the search so that the desired latency can be achieved.The resulting method, namely Posterior Convergent NAS, achieves state-of-the-art performance under a standard GPU latency constraint on ImageNet.In our small search space, our model PC-NAS-S attains 76.8% top-1 accuracy, 2.1% higher than MobileNetV2 with the same latency.When applied to our large search space, PC-NAS-L achieves 78.1% top-1 accuracy within 11ms.The discovered architecture also transfers well to other computer vision applications such as object detection and person re-identification.","Our paper identifies an issue with the existing weight sharing approach in neural architecture search and proposes a practical method, achieving strong results.Author identifies an issue with NAS called posterior fading and introduces Posterior Convergent NAS to mitigate this effect" 197,Prestopping: How Does Early Stopping Help Generalization Against Label Noise?,"Noisy labels are very common in real-world training data, which lead to poor generalization on test data because of overfitting to the noisy labels.In this paper, we claim that such overfitting can be avoided by ""early stopping"" training a deep neural network before the noisy labels are severely memorized.Then, we resume training the early stopped network using a ""maximal safe set,"" which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point.Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use.Extensive experiments using four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.4–8.2 percentage points under the existence of real-world noise.","We propose a novel two-phase training approach based on ""early stopping"" for robust training on noisy labels.Paper proposes to study how early stopping in optimization helps find confident examples.This paper proposes a two-phase training method for learning with label noise."
198,Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks,"Learning when to communicate and doing that effectively is essential in multi-agent tasks.Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative tasks.In this paper, we present the Individualized Controlled Continuous Communication Model, which has better training efficiency than a simple continuous communication model, and can be applied to semi-cooperative and competitive settings along with the cooperative settings.IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues.Using a variety of tasks including StarCraft BroodWars explore and combat scenarios, we show that our network yields better performance and convergence rates than the baselines as the scale increases.Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.","We introduce IC3Net, a single network which can be used to train agents in cooperative, competitive and mixed scenarios. We also show that agents can learn when to communicate using our model.Author proposes a new architecture for multi-agent reinforcement learning that uses several LSTM controllers with tied weights that transmit a continuous vector to each other.The authors propose an interesting gating scheme allowing agents to communicate in a multi-agent RL setting. " 199,CATS: Customizable Abstractive Topic-based Summarization,"Neural sequence-to-sequence models are a recently proposed family of approaches used in abstractive summarization of text documents, useful for producing condensed versions of source text narratives without being restricted to using only words from the original text.Despite the advances in abstractive summarization, custom generation of summaries remains unexplored.In this paper, we present CATS, an abstractive neural summarization model that summarizes content in a sequence-to-sequence fashion but also introduces a new mechanism to control the underlying latent topic distribution of the produced summaries.Our experimental results on the well-known CNN/DailyMail dataset show that our model achieves state-of-the-art performance.",We present the first neural abstractive summarization model capable of customization of generated summaries.
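Entry 198's IC3Net controls a continuous communication channel with a per-agent gate. The sketch below shows one plausible shape such gating could take: each agent's hidden state is broadcast only when its gate fires, and every agent receives the mean of the gated messages. The straight-through binary gate, the averaging rule, and the module sizes are assumptions for illustration; the actual model uses LSTM controllers and individualized rewards not shown here.

```python
import torch

class GatedCommLayer(torch.nn.Module):
    """Agents exchange the mean of gated hidden states (illustrative sketch)."""
    def __init__(self, hidden: int):
        super().__init__()
        self.gate = torch.nn.Linear(hidden, 1)       # per-agent "speak or stay silent"
        self.mix = torch.nn.Linear(2 * hidden, hidden)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (n_agents, hidden)
        p = torch.sigmoid(self.gate(h))               # gate probability
        g = (p > 0.5).float() + p - p.detach()        # straight-through binary gate
        msg = (g * h).sum(dim=0, keepdim=True) / g.sum().clamp(min=1.0)
        comm = msg.expand_as(h)                       # every agent hears the same vector
        return torch.tanh(self.mix(torch.cat([h, comm], dim=-1)))

layer = GatedCommLayer(hidden=8)
out = layer(torch.randn(4, 8))   # 4 agents
print(out.shape)                 # torch.Size([4, 8])
```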
200,"A Flexible, Extensible Software Framework for Neural Net Compression","We propose a software framework based on ideas of the Learning-Compression algorithm , that allows one to compress any neural network by different compression mechanisms.By design, the learning of the neural net is decoupled from the compression of its parameters, so that the framework can be easily extended to handle different combinations of neural net and compression type.In addition, it has other advantages, such as easy integration with deep learning frameworks, efficient training time, competitive practical performance in the loss-compression tradeoff, and reasonable convergence guarantees.Our toolkit is written in Python and Pytorch and we plan to make it available by the workshop time, and eventually open it for contributions from the community.","We propose a software framework based on ideas of the Learning-Compression algorithm , that allows one to compress any neural network by different compression mechanisms (pruning, quantization, low-rank, etc.).This paper presents the design of a software library that makes it easier for the user to compress their networks by hiding away the details of the compression methods." 201,From Inference to Generation: End-to-end Fully Self-supervised Generation of Human Face from Speech,"This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations.To this end, we propose a multi-modal learning framework that links the inference stage and generation stage.First, the inference networks are trained to match the speaker identity between the two different modalities.Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice.",This paper proposes a method of end-to-end multi-modal generation of human face from speech based on a self-supervised learning framework.This paper presents a multi-modal learning framework that links the inference stage and generation stage for seeking the possibility of generating the human face from voice solely.This work aims to build one conditional face image generation framework from the audio signal. 202,Top-Down Neural Model For Formulae,"We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true.The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner.The results of this network are then processed by two recurrent neural networks.One of the interesting aspects of our model is how propositional atoms are treated.For example, the model is insensitive to their names, it only matters whether they are the same or distinct.","A top-down approach how to recursively represent propositional formulae by neural networks is presented.This paper provides a new neural-net model of logical formulae that gathers information about a given formula by traversing its parse tree top-down.The paper pursues the path of a tree-structured network isomorphic to the parse tree of a propositional-calculus formula, but by passing information top-down rather than bottom-up." 
203,Towards Consistent Performance on Atari using Expert Demonstrations,"Despite significant advances in the field of deep Reinforcement Learning, today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games.We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games.A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of 0.999, extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states.When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyperparameters.","Ape-X DQfD = Distributed (many actors + one learner + prioritized replay) DQN with demonstrations optimizing the unclipped 0.999-discounted return on Atari.The paper proposes three extensions (Bellman update, temporal consistency loss, and expert demonstration) to DQN to improve the learning performance on Atari games, outperforming the state-of-the-art results for Atari games. This paper proposes a transformed Bellman operator that aims to solve sensitivity to unclipped reward, robustness to the value of the discount factor, and the exploration problem." 204,On Incorporating Semantic Prior Knowledge in Deep Learning Through Embedding-Space Constraints,"The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels.While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances.We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances.We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions.Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective.Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations.The resulting model encodes relations that better generalize across instances.In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer.We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently of the amount of supervised data used.It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone.","Training method to enforce strict constraints on learned embeddings during supervised training.
Applied to visual question answering.The authors propose a framework to incorporate additional semantic prior knowledge into the traditional training of deep learning models to regularize the embedding space instead of the parameter space.The paper argues for encoding external knowledge in the linguistic embedding layer of a multimodal neural network, as a set of hard constraints." 205,AlgoNet: $C^\infty$ Smooth Algorithmic Neural Networks for Solving Inverse Problems,"Artificial neural networks revolutionized many areas of computer science in recent years since they provide solutions to a number of previously unsolved problems.On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks.To combine these two concepts, we present a new kind of neural networks—algorithmic neural networks.These networks integrate smooth versions of classic algorithms into the topology of neural networks.Our novel reconstructive adversarial network enables solving inverse problems without or with only weak supervision.",Solving inverse problems by using smooth approximations of the forward algorithms to train the inverse models. 206,Min-max Entropy for Weakly Supervised Pointwise Localization,"Pointwise localization allows more precise localization and accurate interpretability, compared to bounding boxes, in applications where objects are highly unstructured, such as in the medical domain.In this work, we focus on weakly supervised localization where a model is trained to classify an image and localize regions of interest at pixel-level using only global image annotation.Typical convolutional attention maps are prone to high false positive regions.To alleviate this issue, we propose a new deep learning method for WSL, composed of a localizer and a classifier, where the localizer is constrained to determine relevant and irrelevant regions using conditional entropy with the aim of reducing false positive regions.Experimental results on a public medical dataset and two natural datasets, using the Dice index, show that, compared to state-of-the-art WSL methods, our proposal can provide significant improvements in terms of image-level classification and pixel-level localization with robustness to overfitting.A public reproducible PyTorch implementation is provided.",A deep learning method for weakly-supervised pointwise localization that learns using image-level labels only. It relies on conditional entropy to localize relevant and irrelevant regions aiming to minimize false positive regions.This work explores the problem of WSL using a novel design of regularization terms and a recursive erasing algorithm.This paper presents a new weakly supervised approach for learning object segmentation with image-level class labels.
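Entry 203 above mentions a transformed Bellman operator that lets one set of hyperparameters handle rewards of very different scales. A short sketch of the commonly used squashing function and its closed-form inverse follows; the specific functional form h(z) = sign(z)(sqrt(|z|+1)-1) + eps*z and the eps value are assumptions about that line of work, given only to make the idea concrete.

```python
import torch

EPS = 1e-2  # keeps the transform invertible and roughly linear near the origin (assumed value)

def h(z: torch.Tensor) -> torch.Tensor:
    """Squash large targets: sign(z) * (sqrt(|z| + 1) - 1) + EPS * z."""
    return torch.sign(z) * (torch.sqrt(z.abs() + 1.0) - 1.0) + EPS * z

def h_inv(z: torch.Tensor) -> torch.Tensor:
    """Closed-form inverse of h, used when forming the TD target."""
    return torch.sign(z) * (((torch.sqrt(1.0 + 4.0 * EPS * (z.abs() + 1.0 + EPS)) - 1.0) / (2.0 * EPS)) ** 2 - 1.0)

# Transformed TD target: q_target = h(r + gamma * h_inv(max_a Q(s', a))).
r, gamma = torch.tensor(250.0), 0.999
next_q = torch.tensor(12.0)            # network output already lives in the transformed space
target = h(r + gamma * h_inv(next_q))
print(float(target))
```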
207,Frequency-based Search-control in Dyna,"Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency.In particular, the Dyna architecture, an elegant model-based architecture integrating learning and planning, provides great flexibility in using a model.One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences.Search-control is critical for improving learning efficiency.In this work, we propose a simple and novel search-control strategy that searches the high-frequency region of the value function.Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct.We empirically show that a high frequency function is more difficult to approximate.This suggests a search-control strategy: we should use states in the high-frequency region of the value function to query the model to acquire more samples.We develop a simple strategy to locally measure the frequency of a function by its gradient norm, and provide theoretical justification for this approach.We then apply our strategy to search-control in Dyna, and conduct experiments to show its properties and effectiveness on benchmark domains.",Acquire states from the high-frequency region for search-control in Dyna.The authors propose to do sampling in the high-frequency domain to increase the sample efficiency.This paper proposes a new way to select states from which to do transitions in the Dyna algorithm. 208,DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression,"We propose a new architecture for distributed image compression from a group of distributed data sources.The work is motivated by practical needs of data-driven codec design, low power consumption, robustness, and data privacy.The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression, is able to train distributed encoders and one joint decoder on correlated data sources.Its compression capability is much better than the method of training codecs separately.Meanwhile, for 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio of that of a single codec trained with all data sources.We experiment with distributed sources of different correlations and show how our methodology matches the Slepian-Wolf Theorem in Distributed Source Coding.Our method is also shown to be robust to the absence of encoded data from a number of distributed sources.Moreover, it is scalable in the sense that codes can be decoded simultaneously at more than one compression quality level.To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.","We introduce a data-driven Distributed Source Coding framework based on Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC).The paper proposes a distributed recurrent auto-encoder for image compression that uses a ConvLSTM to learn binary codes that are constructed progressively from residuals of previously encoded information.The authors propose a method to train image compression models on multiple sources, with a separate encoder on each source, and a shared decoder.
" 209,Reproducibility in Machine Learning for Health,"Machine learning algorithms designed to characterize, monitor, and intervene on human health are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision.This requirement warrants a stricter attention to issues of reproducibility than other fields of machine learning.In this work, we conduct a systematic evaluation of over 100 recently published ML4H research papers along several dimensions related to reproducibility we identified.We find that the field of ML4H compares poorly to more established machine learning fields, particularly concerning data accessibility and code accessibility. Finally, drawing from success in other fields of science, we propose recommendations to data providers, academic publishers, and the ML4H research community in order to promote reproducible research moving forward.","By analyzing more than 300 papers in recent machine learning conferences, we found that Machine Learning for Health (ML4H) applications lag behind other machine learning fields in terms of reproducibility metrics.This paper conducts a quantitative and qualitative review of the state of the reproducibility for ML healthcare applications and proposes reccomendations to make research more reproducible." 210,Neural Arithmetic Unit by reusing many small pre-trained networks,"We propose a solution for evaluation of mathematical expression.However, instead of designing a single end-to-end model we propose a Lego bricks style architecture.In this architecture instead of training a complex end-to-end neural network, many small networks can be trained independently each accomplishing one specific operation and acting a single lego brick.More difficult or complex task can then be solved using a combination of these smaller network.In this work we first identify 8 fundamental operations that are commonly used to solve arithmetic operations.These fundamental operations are then learned using simple feed forward neural networks.We then shows that different operations can be designed simply by reusing these smaller networks.As an example we reuse these smaller networks to develop larger and a more complex network to solve n-digit multiplication, n-digit division, and cross product.This bottom-up strategy not only introduces reusability, we also show that it allows to generalize for computations involving n-digits and we show results for up to 7 digit numbers.Unlike existing methods, our solution also generalizes for both positive as well as negative numbers.","We train many small networks each for a specific operation, these are then combined to perform complex operationsThis paper proposes to use neural networks to evaluate the mathematical expressions by designing 8 small building blocks for 8 fundamental operations, e.g., addition, subtraction, etc and then designing multi-digit multiplication and division using these small blocks.The paper proposes a method to design a NN based mathematical expression evaluation engine." 
211,The relativistic discriminator: a key element missing from standard GAN,"In a standard generative adversarial network, the discriminator estimates the probability that the input data is real.The generator is trained to increase the probability that fake data is real.We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric GANs.We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than randomly sampled fake data.We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average.We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs and Relativistic average GANs.We show that IPM-based GANs are a subset of RGANs which use the identity function.Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update, and 3) RaGANs are able to generate plausible high-resolution images from a very small sample, while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.The code is freely available at https://github.com/AlexiaJM/RelativisticGAN.","Improving the quality and stability of GANs using a relativistic discriminator; IPM GANs (such as WGAN-GP) are a special case.The paper proposes a “relativistic discriminator”, which helps in some settings, although a bit sensitive to hyperparameters, architectures, and datasets.In this work, the authors consider a variation of GAN by simultaneously decreasing the probability that real data is real for the generator."
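Entry 211 replaces the absolute "is this sample real?" decision with a relative one. A minimal sketch of the relativistic average losses is below, built on the standard cross-entropy GAN formulation; the critic C and the batch shapes are placeholders, so treat this as one plausible reading of the RaGAN objective rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def ragan_d_loss(c_real: torch.Tensor, c_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator: real samples should look *more* realistic than fake ones on average."""
    real_vs_fake = c_real - c_fake.mean()
    fake_vs_real = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(real_vs_fake, torch.ones_like(real_vs_fake)) +
            F.binary_cross_entropy_with_logits(fake_vs_real, torch.zeros_like(fake_vs_real)))

def ragan_g_loss(c_real: torch.Tensor, c_fake: torch.Tensor) -> torch.Tensor:
    """Generator: symmetric objective, also pushing real data to look *less* real than fake."""
    real_vs_fake = c_real - c_fake.mean()
    fake_vs_real = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(fake_vs_real, torch.ones_like(fake_vs_real)) +
            F.binary_cross_entropy_with_logits(real_vs_fake, torch.zeros_like(real_vs_fake)))

# c_real / c_fake are the critic's raw (pre-sigmoid) scores for a mini-batch.
c_real, c_fake = torch.randn(16, 1), torch.randn(16, 1)
print(float(ragan_d_loss(c_real, c_fake)), float(ragan_g_loss(c_real, c_fake)))
```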
212,V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control,"Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting.However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse.As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization that performs policy iteration based on a learned state-value function.We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters.On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported.V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.","A state-value function-based version of MPO that achieves good results in a wide range of tasks in discrete and continuous control.This paper presents an algorithm for on-policy reinforcement learning that can handle both continuous/discrete control, single/multi-task learning and use both low dimensional states and pixels.The paper proposes an online variant of MPO, V-MPO, which learns the V-function and updates the non-parametric distribution towards the advantages." 213,NEURAL EXECUTION ENGINES,"Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence.There has been a significant body of work studying neural networks that mimic general computation, but these networks fail to generalize to data distributions that are outside of their training set.We study this problem through the lens of fundamental computer science problems: sorting and graph processing.We modify the masking mechanism of a transformer in order to allow it to implement rudimentary functions with strong generalization.We call this model the Neural Execution Engine, and show that it learns, through supervision, to numerically compute the basic subroutines comprising these algorithms with near perfect accuracy.Moreover, it retains this level of accuracy while generalizing to unseen data and long sequences outside of the training distribution.","We propose neural execution engines (NEEs), which leverage a learned mask and supervised execution traces to mimic the functionality of subroutines and demonstrate strong generalization.This paper investigates the problem of building a program execution engine with neural networks and proposes a transformer-based model to learn basic subroutines and apply them in several standard algorithms.This paper deals with the problem of designing neural network architectures that can learn and implement general programs."
214,Continuous Meta-Learning without Tasks,"Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks.However, the meta-learning literature thus far has focused on the task-segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task.In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.We present meta-learning via online changepoint analysis, an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme.The framework allows both training and testing directly on time series data without segmenting it into discrete tasks.We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.","Bayesian changepoint detection enables meta-learning directly from time series data.The paper considers meta-learning in the task-unsegmented setting and applies Bayesian online changepoint detection with meta-learning.This paper pushes meta-learning towards task-unsegmented settings, where the MOCA framework adopts a Bayesian changepoint estimation scheme for task change detection." 215,FRICATIVE PHONEME DETECTION WITH ZERO DELAY,"People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms.These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition.Fricative phonemes have an important part of their content concentrated in high frequency bands.It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times.Therefore, timely and accurate fricative phoneme detection is a key problem for high quality hearing aids.In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus.All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.",A deep learning based approach for zero delay fricative phoneme detection.This paper applies supervised deep learning methods to detect the exact duration of a fricative phoneme in order to improve practical frequency lowering algorithms.
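Entry 214's MOCA hinges on a Bayesian changepoint scheme that maintains a belief over how long the current task has been running. The numpy sketch below implements the classic run-length recursion (Adams–MacKay style) for a Gaussian stream with known observation noise; the hazard rate and this toy predictive model are assumptions, and the actual framework wraps a meta-learner rather than this conjugate Gaussian predictor.

```python
import numpy as np

def run_length_filter(xs, hazard=0.05, sigma=1.0, prior_var=4.0):
    """Online posterior over run length r_t (time since the last task change)."""
    log_p = np.array([0.0])                              # log P(r_0 = 0) = 1
    mu, var = np.array([0.0]), np.array([prior_var])     # per-run-length posterior over the mean
    map_run_lengths = []
    for x in xs:
        # Predictive log-likelihood of x under each run-length hypothesis.
        pred_var = var + sigma ** 2
        ll = -0.5 * (np.log(2 * np.pi * pred_var) + (x - mu) ** 2 / pred_var)
        grow = log_p + ll + np.log(1.0 - hazard)                     # run continues
        change = np.logaddexp.reduce(log_p + ll) + np.log(hazard)    # new task starts
        log_p = np.concatenate(([change], grow))
        log_p -= np.logaddexp.reduce(log_p)                          # normalize
        # Conjugate Gaussian update of the per-hypothesis mean estimate.
        k = var / pred_var
        mu = np.concatenate(([0.0], mu + k * (x - mu)))
        var = np.concatenate(([prior_var], var * (1.0 - k)))
        map_run_lengths.append(int(np.argmax(log_p)))
    return map_run_lengths

# The mean jumps at t = 30, so the MAP run length should reset shortly afterwards.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 30), rng.normal(5, 1, 30)])
print(run_length_filter(data)[25:35])
```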
216,Monotonic Chunkwise Attention,"Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction.To address these issues, we propose Monotonic Chunkwise Attention, which adaptively splits the input sequence into small chunks over which soft attention is computed.We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time.When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism.In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.",An online and linear-time attention mechanism that performs soft attention over adaptively-located chunks of the input sequence.This paper proposes a small modification to the monotonic attention in [1] by adding a soft attention to the segment predicted by the monotonic attention.The paper proposes an extension to a previous monotonic attention model (Raffel et al 2017) to attend to a fixed-sized window up to the alignment position. 217,EXPLORING DEEP LEARNING USING INFORMATION THEORY TOOLS AND PATCH ORDERING,"We present a framework for automatically ordering image patches that enables in-depth analysis of dataset relationship to learnability of a classification task using convolutional neural network.An image patch is a group of pixels residing in a continuous area contained in the sample.Our preliminary experimental results show that an informed smart shuffling of patches at a sample level can expedite training by exposing important features at early stages of training.In addition, we conduct systematic experiments and provide evidence that CNN’s generalization capabilities do not correlate with human recognizable features present in training samples.We utilized the framework not only to show that spatial locality of features within samples do not correlate with generalization, but also to expedite convergence while achieving similar generalization performance.Using multiple network architectures and datasets, we show that ordering image regions using mutual information measure between adjacent patches, enables CNNs to converge in a third of the total steps required to train the same network without patch ordering.",Develop new techniques that rely on patch reordering to enable detailed analysis of data-set relationship to training and generalization performances. 
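Entry 217 orders image patches by the mutual information between adjacent patches before feeding them to the CNN. A small sketch of one way to score and chain patches follows: mutual information is estimated from a joint intensity histogram, and patches are greedily ordered so that consecutive patches share high MI. The bin count, patch size, and greedy rule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def mutual_information(p: np.ndarray, q: np.ndarray, bins: int = 16) -> float:
    """MI between the intensity distributions of two equally sized patches."""
    joint, _, _ = np.histogram2d(p.ravel(), q.ravel(), bins=bins, range=[[0, 256], [0, 256]])
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def greedy_patch_order(image: np.ndarray, patch: int = 8):
    """Greedily chain patches so that consecutive patches share high mutual information."""
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch]
               for i in range(0, h, patch) for j in range(0, w, patch)]
    order, remaining = [0], set(range(1, len(patches)))
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda k: mutual_information(patches[last], patches[k]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

img = (np.random.default_rng(0).random((32, 32)) * 255).astype(np.uint8)
print(greedy_patch_order(img)[:8])
```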
218,Robust Domain Randomization for Reinforcement Learning,"Producing agents that can generalize to a wide range of environments is a significant challenge in reinforcement learning.One method for overcoming this issue is domain randomization, whereby at the start of each training episode some parameters of the environment are randomized so that the agent is exposed to many possible variations.However, domain randomization is highly inefficient and may lead to policies with high variance across domains.In this work, we formalize the domain randomization problem, and show that minimizing the policy's Lipschitz constant with respect to the randomization parameters leads to low variance in the learned policies.We propose a method where the agent only needs to be trained on one variation of the environment, and its learned state representations are regularized during training to minimize this constant.We conduct experiments that demonstrate that our technique leads to more efficient and robust learning than standard domain randomization, while achieving equal generalization scores.","We produce reinforcement learning agents that generalize well to a wide range of environments using a novel regularization technique.The paper introduces the challenge of high-variance policies in domain randomization for reinforcement learning and mainly focuses on the problem of visual randomization, where the different randomized domains differ only in state space and the underlying rewards and dynamics are the same.To improve the generalization ability of deep RL agents across tasks with different visual patterns, this paper proposes a simple regularization technique for domain randomization." 219,Alexandria: Unsupervised High-Precision Knowledge Base Construction using a Probabilistic Program,"Creating a knowledge base that is accurate, up-to-date and complete remains a significant challenge despite substantial efforts in automated knowledge base construction. In this paper, we present Alexandria -- a system for unsupervised, high-precision knowledge base construction.Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. Using probabilistic inference, we can invert this program and so retrieve facts, schemas and entities from web text.The use of a probabilistic program allows uncertainty in the text to be propagated through to the retrieved facts, which increases accuracy and helps merge facts from multiple sources.Because Alexandria does not require labelled training data, knowledge bases can be constructed with the minimum of manual input.We demonstrate this by constructing a high precision knowledge base for people from a single seed fact.","This paper presents a system for unsupervised, high-precision knowledge base construction using a probabilistic program to define a process of converting knowledge base facts into unstructured text.An overview of an existing knowledge base constructed with a probabilistic model, with the knowledge base construction approach evaluated against other knowledge base approaches YAGO2, NELL, Knowledge Vault, and DeepDive.This paper uses a probabilistic program describing the process by which facts describing entities can be realised in text and a large number of web pages, to learn to perform fact extraction about people using a single seed fact."
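Entry 218 trains on a single reference environment and regularizes the learned state representation so that it varies little across visual randomizations, as a practical proxy for bounding the policy's Lipschitz constant with respect to the randomization parameters. A minimal sketch of such a feature-matching penalty added to a policy loss follows; the encoder, the randomization function, and the penalty weight are placeholders, not the paper's exact regularizer.

```python
import torch

def randomization_penalty(encoder, obs, randomize, n_variants: int = 2, weight: float = 1.0):
    """Penalize how much the encoding moves when only the visual randomization changes."""
    ref = encoder(obs)
    penalty = 0.0
    for _ in range(n_variants):
        penalty = penalty + ((encoder(randomize(obs)) - ref) ** 2).mean()
    return weight * penalty / n_variants

# Toy usage with stand-ins: a linear encoder and a "randomizer" that perturbs the observation.
encoder = torch.nn.Linear(12, 4)
obs = torch.randn(5, 12)
recolor = lambda o: o + torch.randn_like(o) * 0.1   # placeholder for a visual randomizer
loss = randomization_penalty(encoder, obs, recolor)
loss.backward()
print(float(loss))
```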
220,Towards Scalable Imitation Learning for Multi-Agent Systems with Graph Neural Networks,"We propose an implementation of a GNN that predicts and imitates motion behaviors from observed swarm trajectory data.The network’s ability to capture interaction dynamics in swarms is demonstrated through transfer learning.We finally discuss the inherent availability and challenges in the scalability of GNNs, and propose a method to improve it with layer-wise tuning and mixing of data enabled by padding.",Improve the scalability of graph neural networks on imitation learning and prediction of swarm motion.The paper proposes a new time series model for learning a sequence of graphs.This work considers sequence prediction problems in a multi-agent system. 221,Learning Compact Embedding Layers via Differentiable Product Quantization,"Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge for memory and storage constraints.In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization.We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning.Our method can readily serve as a drop-in alternative for any existing embedding layer.Empirically, DPQ offers significant compression ratios at negligible or no performance cost on 10 datasets across three different language tasks.","We propose a differentiable product quantization framework that can reduce the size of the embedding layer in end-to-end training at no performance cost.This paper works on methods for compressing embedding layers for low memory inference, where compressed embeddings are learned together with the task-specific models in a differentiable end-to-end fashion." 222,An implicit function learning approach for parametric modal regression,"For multi-valued functions---such as when the conditional distribution on targets given the inputs is multi-modal---standard regression approaches are not always desirable because they provide the conditional mean.Modal regression approaches aim to instead find the conditional mode, but are restricted to nonparametric approaches.Such approaches can be difficult to scale, and make it difficult to benefit from parametric function approximation, like neural networks, which can learn complex relationships between inputs and targets.In this work, we propose a parametric modal regression algorithm, by using the implicit function theorem to develop an objective for learning a joint parameterized function over inputs and targets.We empirically demonstrate on several synthetic problems that our method can learn multi-valued functions and produce the conditional modes, scales well to high-dimensional inputs and is even more effective for certain unimodal problems, particularly for high frequency data where the joint function over inputs and targets can better capture the complex relationship between them.We conclude by showing that our method provides small improvements on two regression datasets that have asymmetric distributions over the targets.",We introduce a simple and novel modal regression algorithm which is easy to scale to large problems.
The paper proposes an implicit function approach to learning the modes of multimodal regression.The present work proposes a parametric approach to estimate the conditional mode using the Implicit Function Theorem for multi-modal distributions. 223,Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables,"Deep reinforcement learning algorithms require large amounts of experience to learn an individual task.While in principle meta-reinforcement learning algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality.Current methods rely heavily on on-policy experience, limiting their sample efficiency.They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems.In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control.In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.This probabilistic interpretation enables posterior sampling for structured and efficient exploration.We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency.Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.",Sample efficient meta-RL by combining variational inference of probabilistic task variables with off-policy RL This paper proposes using off-policy RL during the meta-training time to greatly improve sample efficiency of Meta-RL methods. 224,MIDAS: Finding the Right Web Sources to Fill Knowledge Gaps,"Knowledge bases, massive collections of facts on diverse topics, support vital modern applications.However, existing knowledge bases contain very little data compared to the wealth of information on the Web.This is because the industry standard in knowledge base creation and augmentation suffers from a serious bottleneck: they rely on domain experts to identify appropriate web sources to extract data from.Efforts to fully automate knowledge extraction have failed to improve this standard: these automated systems are able to retrieve much more data and from a broader range of sources, but they suffer from very low precision and recall.As a result, these large-scale extractions remain unexploited.In this paper, we present MIDAS, a system that harnesses the results of automated knowledge extraction pipelines to repair the bottleneck in industrial knowledge creation and augmentation processes.MIDAS automates the suggestion of good-quality web sources and describes what to extract with respect to augmenting an existing knowledge base.We make three major contributions.First, we introduce a novel concept, web source slices, to describe the contents of a web source.Second, we define a profit function to quantify the value of a web source slice with respect to augmenting an existing knowledge base.Third, we develop effective and highly-scalable algorithms to derive high-profit web source slices.We demonstrate that MIDAS produces high-profit results and outperforms the baselines significantly on both real-word and synthetic datasets.",This paper focuses on identifying high quality web sources for industrial knowledge base augmentation pipeline. 
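One generic way to make a product-quantized embedding layer differentiable, in the spirit of entry 221 above (this is an illustrative sketch with assumed names and sizes, not the paper's exact instantiation): split the embedding dimension into groups, select a codebook entry per group with a softmax, and use a straight-through estimator so the hard selection is used in the forward pass while gradients flow through the soft weights.

import torch
import torch.nn as nn

class ProductQuantizedEmbedding(nn.Module):
    """Generic sketch of a differentiable product-quantized embedding layer.
    Each symbol is represented by D codebook indices, one per subspace of K codes."""
    def __init__(self, vocab_size, dim, num_groups=4, codebook_size=32):
        super().__init__()
        assert dim % num_groups == 0
        self.query = nn.Parameter(torch.randn(vocab_size, num_groups, codebook_size))
        self.codebooks = nn.Parameter(torch.randn(num_groups, codebook_size,
                                                  dim // num_groups))

    def forward(self, token_ids):
        logits = self.query[token_ids]                 # (..., D, K)
        soft = torch.softmax(logits, dim=-1)
        hard = torch.zeros_like(soft).scatter_(-1, soft.argmax(-1, keepdim=True), 1.0)
        # Straight-through: forward uses the hard one-hot, backward uses the softmax.
        onehot = hard + soft - soft.detach()
        groups = torch.einsum('...dk,dkc->...dc', onehot, self.codebooks)
        return groups.flatten(start_dim=-2)            # concatenate the subspaces

emb = ProductQuantizedEmbedding(vocab_size=10000, dim=128)
vecs = emb(torch.tensor([[1, 5, 42]]))                 # shape (1, 3, 128)

Only the codebooks and the per-symbol index logits need to be stored, which is where the compression over a full vocab-by-dim table comes from.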
225,Match prediction from group comparison data using neural networks,"We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data.Challenges arise in practice.As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios.Worse yet, we have no prior knowledge on the underlying model for a given scenario.These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances.To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common.This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand.Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms.It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets.Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well.","We investigate the merits of employing neural networks in the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data.This paper proposes a deep neural network solution to the set ranking problem and designs a architecture for this task inspired by previous manually designed algorithms.This paper provides a technique to solve the match prediction problem using a deep learning architecture." 226,Orthogonal Recurrent Neural Networks with Scaled Cayley Transform,"Recurrent Neural Networks are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks have been used to address this issue and in some cases, exceed the capabilities of Long Short-Term Memory networks. We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices without using complex valued matrices.This is done by parametrizing with a skew-symmetric matrix using the Cayley transform.Such a parametrization is unable to represent matrices with negative one eigenvalues, but this limitation is overcome by scaling the recurrent weight matrix by a diagonal matrix consisting of ones and negative ones. The proposed training scheme involves a straightforward gradient calculation and update step.In several experiments, the proposed scaled Cayley orthogonal recurrent neural network achieves superior results with fewer trainable parameters than other unitary RNNs.",A novel approach to maintain orthogonal recurrent weight matrices in a RNN.Introduces a scheme for learning the recurrent parameter matrix in a neural network that uses the Cayley transform and a scaling weight matrix. 
This paper suggests an RNN reparametrization of the recurrent weights with a skew-symmetric matrix using Cayley transform to keep the recurrent weight matrix orthogonal.Novel parametrization of RNNs allows representing orthogonal weight matrices relatively easily. 227,Generalizing Natural Language Analysis through Span-relation Representations,"A large number of natural language processing tasks exist to analyze syntax, semantics, and information content of human language.These seemingly very different tasks are usually solved by specially designed architectures.In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks.We perform extensive experiments to test this insight on 10 disparate tasks as broad as dependency parsing, semantic role labeling, relation extraction, aspect based sentiment analysis, and many others, achieving comparable performance as state-of-the-art specialized models.We further demonstrate benefits in multi-task learning.We convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.",We use a single model to solve a great variety of natural language analysis tasks by formulating them in a unified span-relation format.This paper generalizes a wide range of natural language processing tasks as a single span-based framework and proposes a general architecture to solve all these problems.This work presents a unified formulation of various phrase and token level NLP tasks. 228,Variational Gaussian Process Models without Matrix Inverses,"Large matrix inversions have often been cited as a major impediment to scaling Gaussian process models.With the use of GPs as building blocks for ever more sophisticated Bayesian deep learning models, removing these impediments is a necessary step for achieving large scale results.We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be performed at each optimisation step.Our bound instead directly parameterises a free matrix, which is an additional variational parameter.At the local maxima of the bound, this matrix is equal to the matrix inverse.We prove that our bound gives the same guarantees as earlier variational approximations.We demonstrate some beneficial properties of the bound experimentally, although significant wall clock time speed improvements will require future improvements in optimisation and implementation.","We present a variational lower bound for GP models that can be optimised without computing expensive matrix operations like inverses, while providing the same guarantees as existing variational approximations." 
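The scaled Cayley parametrization in entry 226 above can be checked in a few lines of numpy (illustrative sketch): a skew-symmetric matrix A and a diagonal matrix D of +/-1 entries yield an exactly orthogonal recurrent matrix W = (I + A)^{-1} (I - A) D.

import numpy as np

def scaled_cayley(a_params, d_signs):
    """Build an orthogonal recurrent matrix from a skew-symmetric parametrization:
    W = (I + A)^{-1} (I - A) D, with A skew-symmetric and D = diag(+/-1)."""
    n = d_signs.shape[0]
    A = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    A[iu] = a_params                  # free parameters fill the upper triangle
    A = A - A.T                       # enforce skew-symmetry: A^T = -A
    I = np.eye(n)
    return np.linalg.solve(I + A, (I - A) @ np.diag(d_signs))

n = 6
rng = np.random.default_rng(0)
W = scaled_cayley(rng.normal(size=n * (n - 1) // 2),
                  np.array([1, 1, 1, -1, -1, -1]))
print(np.allclose(W.T @ W, np.eye(n)))   # True: W is orthogonal

The diagonal scaling is what lets the parametrization reach orthogonal matrices with -1 eigenvalues, which the plain Cayley transform cannot represent.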
229,Mixed-curvature Variational Autoencoders,"It has been shown that using geometric spaces with non-zero curvature instead of plain Euclidean spaces with zero curvature improves performance on a range of Machine Learning tasks for learning representations.Recent work has leveraged these geometries to learn latent variable models like Variational Autoencoders in spherical and hyperbolic spaces with constant curvature.While these approaches work well on particular kinds of data that they were designed for e.g.~tree-like data for a hyperbolic VAE, there exists no generic approach unifying all three models.We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature can be learned.This generalizes the Euclidean VAE to curved latent spaces, as the model essentially reduces to the Euclidean VAE if curvatures of all latent space components go to 0.",Variational Autoencoders with latent spaces modeled as products of constant curvature Riemannian manifolds improve on image reconstruction over single-manifold variants.This paper introduces a general formulation of the notion of a VAE with a latent space composed by a curved manifold.This paper is about developing VAEs in non-Euclidean spaces. 230,Black Box Recursive Translations for Molecular Optimization,"Machine learning algorithms for generating molecular structures offer a promising new approach to drug discovery.We cast molecular optimization as a translation problem, where the goal is to map an input compound to a target compound with improved biochemical properties.Remarkably, we observe that when generated molecules are iteratively fed back into the translator, molecular compound attributes improve with each step.We show that this finding is invariant to the choice of translation model, making this a ""black box"" algorithm.We call this method Black Box Recursive Translation, a new inference method for molecular property optimization.This simple, powerful technique operates strictly on the inputs and outputs of any translation model.We obtain new state-of-the-art results for molecular property optimization tasks using our simple drop-in replacement with well-known sequence and graph-based models.Our method provides a significant boost in performance relative to its non-recursive peers with just a simple ""for"" loop.Further, BBRT is highly interpretable, allowing users to map the evolution of newly discovered compounds from known starting points.","We introduce a black box algorithm for repeated optimization of compounds using a translation framework.The authors frame molecule optimization as a sequence-to-sequence problem, and extend existing methods for improving molecules, showing that it is beneficial for optimizing logP but not QED.The paper builds on existing translation models developed for molecular optimization, making an iterative use of sequence to sequence or graph to graph translation models." 
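The "black box" recursive inference in entry 230 above is essentially a loop around any molecule-to-molecule translation model. A minimal sketch, assuming user-supplied translate and score callables (names are assumptions): feed the translator's output back in as its next input and keep the best-scoring compound seen so far.

def black_box_recursive_translation(seed_molecule, translate, score, n_steps=10):
    """Sketch of the recursive inference loop: repeatedly re-translate the current
    compound and track the best candidate according to a property scorer."""
    current = seed_molecule
    best, best_score = current, score(current)
    trajectory = [current]
    for _ in range(n_steps):
        current = translate(current)        # any seq2seq or graph2graph model
        trajectory.append(current)
        s = score(current)                  # e.g. penalized logP or QED
        if s > best_score:
            best, best_score = current, s
    return best, best_score, trajectory

Because the loop touches only inputs and outputs, the same wrapper applies unchanged to sequence-based and graph-based translators, and the trajectory gives the interpretable path from the starting compound.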
231,BlackMarks: Black-box Multi-bit Watermarking for Deep Neural Networks,"Deep Neural Networks are increasingly deployed in cloud servers and autonomous agents due to their superior performance.The deployed DNN is either leveraged in a white-box setting or a black-box setting depending on the application.A practical concern in the rush to adopt DNNs is protecting the models against Intellectual Property infringement.We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario.BlackMarks takes the pre-trained unmarked model and the owner’s binary signature as inputs.The output is the corresponding marked model with specific keys that can be later used to trigger the embedded watermark.To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit ‘0’ and bit ‘1’.Given the owner’s watermark signature, a set of key image and label pairs is designed using targeted adversarial attacks.The watermark is then encoded in the distribution of output activations of the DNN by fine-tuning the model with a WM-specific regularized loss.To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.We perform a comprehensive evaluation of BlackMarks’ performance on MNIST, CIFAR-10, ImageNet datasets and corroborate its effectiveness and robustness.BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding overhead as low as 2.054%.",Proposing the first watermarking framework for multi-bit signature embedding and extraction using the outputs of the DNN. Proposes a method for multi-bit watermarking of neural networks in a black-box setting and demonstrate that the predictions of existing models can carry a multi-bit string that can later be used to verify ownership.The paper proposes an approach for model watermarking where the watermark is a bit string embedded in the model as part of a fine-tuning procedure 232,Learning to Defense by Learning to Attack,"Adversarial training provides a principled approach for training robust neural networks.From an optimization perspective, the adversarial training is essentially solving a minmax robust optimization problem.The outer minimization is trying to learn a robust classifier, while the inner maximization is trying to generate adversarial samples.Unfortunately, such a minmax problem is very difficult to solve due to the lack of convex-concave structure.This work proposes a new adversarial training method based on a general learning-to-learn framework.Specifically, instead of applying the existing hand-design algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network.At the same time, a robust classifier is learned to defense the adversarial attack generated by the learned optimizer.From the perspective of generative learning, our proposed method can be viewed as learning a deep generative model for generating adversarial samples, which is adaptive to the robust classification.Our experiments demonstrate that our proposed method significantly outperforms existing adversarial training methods on CIFAR-10 and CIFAR-100 datasets.","Don't know how to optimize? 
Then just learn to optimize!This paper proposes a way to train image classification models to be resistant to L-infinity perturbation attacks.This paper proposes using the learning-to-learn framework to learn an attacker." 233,Prediction Under Uncertainty with Error Encoding Networks,"In this work we introduce a new framework for performing temporal predictions in the presence of uncertainty.It is based on a simple idea of disentangling components of the future state which are predictable from those which are inherently unpredictable, and encoding the unpredictable components into a low-dimensional latent variable which is fed into the forward model.Our method uses a simple supervised training objective which is fast and easy to train.We evaluate it in the context of video prediction on multiple datasets and show that it is able to consistently generate diverse predictions without the need for alternating minimization over a latent space or adversarial training.",A simple and easy-to-train method for multimodal prediction in time series. This paper introduces a time-series prediction model that learns a deterministic mapping and trains another net to predict future frames given the input and the residual error from the first network.The paper proposes a model for prediction under uncertainty where they separate out deterministic component prediction and uncertain component prediction. 234,simple_rl: Reproducible Reinforcement Learning in Python,"Conducting reinforcement-learning experiments can be a complex and time-consuming process.A full experimental pipeline will typically consist of a simulation of an environment, an implementation of one or many learning algorithms, a variety of additional components designed to facilitate the agent-environment interplay, and any requisite analysis, plotting, and logging thereof.In light of this complexity, this paper introduces simple_rl, a new open source library for carrying out reinforcement learning experiments in Python 2 and 3 with a focus on simplicity.The goal of simple_rl is to support seamless, reproducible methods for running reinforcement learning experiments.This paper gives an overview of the core design philosophy of the package, how it differs from existing libraries, and showcases its central features.","This paper introduces and motivates simple_rl, a new open source library for carrying out reinforcement learning experiments in Python 2 and 3 with a focus on simplicity."
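A simplified sketch of the supervised objective behind the error-encoding idea in entry 233 above (module names and the joint training of both networks are assumptions made for brevity): one network predicts the predictable part of the future, the residual error is compressed into a low-dimensional latent z, and a second network makes the final prediction conditioned on x and z.

import torch
import torch.nn.functional as F

def een_training_step(g_net, f_net, error_encoder, x, y, optimizer):
    """Sketch of one training step: g_net handles the predictable component,
    the residual is encoded into z, and f_net makes a z-conditioned prediction."""
    y_det = g_net(x)                             # deterministic prediction
    z = error_encoder((y - y_det).detach())      # encode the unpredictable part
    y_hat = f_net(x, z)                          # latent-conditioned prediction
    loss = F.mse_loss(y_det, y) + F.mse_loss(y_hat, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At test time, sampling z (for example from the empirical distribution of encoded residuals) produces diverse predictions without adversarial training or latent-space optimization.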
235,Local Stability and Performance of Simple Gradient Penalty $\mu$-Wasserstein GAN,"Wasserstein GAN is a model that minimizes the Wasserstein distance between a data distribution and sample distribution.Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint.In this study, we prove the local stability of optimizing the simple gradient penalty-WGAN under suitable assumptions regarding the equilibrium and penalty measure.The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support.Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty.Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.",This paper deals with stability of simple gradient penalty-WGAN optimization by introducing a concept of measure valued differentiation.WGAN with a squared zero centered gradient penalty term w.r.t. to a general measure is studied.Characterizes the convergence of gradient penalized Wasserstein GAN. 236,Random Partition Relaxation for Training Binary and Ternary Weight Neural Network,"We present Random Partition Relaxation, a method for strong quantization of the parameters of convolutional neural networks to binary and ternary values.Starting from a pretrained model, we first quantize the weights and then relax random partitions of them to their continuous values for retraining before quantizing them again and switching to another weight partition for further adaptation. We empirically evaluate the performance of RPR with ResNet-18, ResNet-50 and GoogLeNet on the ImageNet classification task for binary and ternary weight networks.We show accuracies beyond the state-of-the-art for binary- and ternary-weight GoogLeNet and competitive performance for ResNet-18 and ResNet-50 using a SGD-based training method that can easily be integrated into existing frameworks.","State-of-the-art training method for binary and ternary weight networks based on alternating optimization of randomly relaxed weight partitionsThe paper proposes a new training scheme of optimizing a ternary neural network.Authors propose RPR, a way to randomly partition and quantize weights and train the remaining parameters followed by relaxation in alternate cycles to train quantized models." 237,Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses,"Learning long-term dependencies is a key long-standing challenge of recurrent neural networks.Hierarchical recurrent neural networks have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy.Yet, the memory requirements of Truncated Backpropagation Through Time still prevent training them on very long sequences.In this paper, we empirically show that in HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks.This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.","We replace some gradients paths in hierarchical RNN's by an auxiliary loss. 
We show that this can reduce the memory cost while preserving performance.The paper introduces a hierarchical RNN architecture that could be trained more memory-efficiently.The paper proposes decoupling the different layers of hierarchy in an RNN using auxiliary losses." 238,Using effective dimension to analyze feature transformations in deep neural networks,"In a typical deep learning approach to a computer vision task, Convolutional Neural Networks are used to extract features at varying levels of abstraction from an image and compress a high dimensional input into a lower dimensional decision space through a series of transformations.In this paper, we investigate how a class of input images is eventually compressed over the course of these transformations.In particular, we use singular value decomposition to analyze the relevant variations in feature space.These variations are formalized as the effective dimension of the embedding.We consider how the effective dimension varies across layers within class.We show that across datasets and architectures, the effective dimension of a class increases before decreasing further into the network, suggesting some sort of initial whitening transformation.Further, the decrease rate of the effective dimension deeper in the network corresponds with training performance of the model.",Neural networks that do a good job of classification project points into more spherical shapes before compressing them into fewer dimensions. 239,Learning from Between-class Examples for Deep Sound Recognition,"Deep learning methods have achieved high performance in sound recognition tasks.Deciding how to feed the training data is important for further performance improvement.We propose a novel learning method for deep sound recognition: Between-Class learning.Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds.We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio.We then input the mixed sound to the model and train the model to output the mixing ratio.The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes.The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial.Furthermore, we construct a new deep sound recognition network and train it with BC learning.As a result, we achieved a performance that surpasses the human level.","We propose a novel learning method for deep sound recognition named BC learning.Authors defined a new learning task that requires a DNN to predict the mixing ratio between sounds from two different classes to increase the discriminative power of the final learned network.Proposes a method to improve the performance of a generic learning method by generating ""in between class"" training samples and presents the basic intuition and necessity of the proposed technique."
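The between-class example construction in entry 239 above reduces to a few lines. A toy sketch (the paper additionally accounts for the sound pressure levels of the two clips when mixing; that adjustment is omitted here): mix two waveforms from different classes with a random ratio and use the ratio as a soft label.

import numpy as np

def make_bc_example(sound_a, label_a, sound_b, label_b, num_classes):
    """Mix two sounds from different classes with a random ratio r and build a
    soft target that assigns probability r to one class and 1 - r to the other."""
    r = np.random.uniform(0.0, 1.0)
    mixed = r * sound_a + (1.0 - r) * sound_b
    target = np.zeros(num_classes)
    target[label_a] += r
    target[label_b] += 1.0 - r
    return mixed, target   # train with KL divergence between softmax output and target

x, t = make_bc_example(np.random.randn(16000), 3, np.random.randn(16000), 7, num_classes=10)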
240,Automatically Inferring Data Quality for Spatiotemporal Forecasting,"Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on.While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable.Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results.The problem could be exacerbated in black-box prediction models, such as deep neural networks.In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels.Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures.We evaluate our proposed method on forecasting temperatures in Los Angeles.","We propose a method that infers the time-varying data quality level for spatiotemporal forecasting without explicitly assigned labels.Introduces a new definition of data quality that relies on the notion of local variation defined in (Zhou and Scholkopf) and extends it to multiple heterogenous data sources.This work proposed a new way to evaluate the quality of different data sources with the time-vary graph model, with the quality level used as a regularization term in the objective function" 241,Learning to Infer and Execute 3D Shape Programs,"Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.In contrast, recent advances in 3D shape sensing focus more on low-level geometry but less on these higher-level relationships.In this paper, we propose 3D shape programs, integrating bottom-up recognition systems with top-down, symbolic program structure to capture both low-level geometry and high-level structural priors for 3D shapes.Because there are no annotations of shape programs for real shapes, we develop neural modules that not only learn to infer 3D shape programs from raw, unannotated shapes, but also to execute these programs for shape reconstruction.After initial bootstrapping, our end-to-end differentiable model learns 3D shape programs by reconstructing shapes in a self-supervised manner.Experiments demonstrate that our model accurately infers and executes 3D shape programs for highly complex shapes from various categories.It can also be integrated with an image-to-shape module to infer 3D shape programs directly from an RGB image, leading to 3D shape reconstructions that are both more accurate and more physically plausible.","We propose 3D shape programs, a structured, compositional shape representation. Our model learns to infer and execute shape programs to explain 3D shapes.An approach to infer shape programs given 3D models, with architecture consisting of a recurrent network that encodes a 3D shape and outputs instructions, and a second module that renders the program to 3D.This paper introduces a high-level semantic description for 3D shapes, given by the ShapeProgram." 
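A toy numpy sketch of the idea in entry 240 above (this is not the paper's exact estimator; the functional form of the score is an assumption): score each data source by the local variation of its signal relative to its graph neighbours, and turn low variation into a high data-quality weight that can then regularize or reweight a forecasting model.

import numpy as np

def data_quality_weights(signals, adjacency, eps=1e-6):
    """signals: (num_sources, num_timesteps); adjacency: (num_sources, num_sources).
    Sources whose signals deviate strongly from their neighbours get low weight."""
    deg = adjacency.sum(axis=1, keepdims=True) + eps
    neighbour_mean = (adjacency @ signals) / deg
    local_variation = np.mean((signals - neighbour_mean) ** 2, axis=1)
    weights = 1.0 / (local_variation + eps)
    return weights / weights.sum()           # normalized quality levels

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.random.randn(3, 24)
print(data_quality_weights(X, A))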
242,Regularization Matters in Policy Optimization,"Deep Reinforcement Learning has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks.Yet, conventional regularization techniques in training neural networks have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment.In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvements in task performance, and the improvement is typically more significant when the task is more difficult.We also compare with the widely used entropy regularization and find conventional regularization is generally better.Our findings are further confirmed to be robust against the choice of training hyperparameters.We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough.We hope our study provides guidance for future practices in regularizing policy optimization algorithms.","We show that conventional regularization methods (e.g., dropout), which have been largely ignored in RL methods, can be very effective in policy optimization.The authors study a set of existing direct policy optimization methods in the field of reinforcement learning and provide a detailed investigation of the effect of regularization on the performance and behavior of agents following these methods.This paper provides a study on the effect of regularization on performance in training environments in policy optimization methods in multiple continuous control tasks."
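The recipe studied in entry 242 above amounts to adding a conventional penalty to whatever policy-gradient objective is already in use, applied only to the policy network. A minimal sketch with an L2 penalty (the coefficient is an illustrative value, not one from the paper):

import torch

def regularized_policy_loss(policy_net, pg_loss, l2_coef=1e-4):
    """Add a conventional L2 penalty on the policy network's weights to any
    policy-gradient loss; per the study, regularizing only the policy is enough."""
    l2 = sum((p ** 2).sum() for p in policy_net.parameters())
    return pg_loss + l2_coef * l2

In practice the same effect is often obtained by setting weight decay on the optimizer for the policy parameters only, leaving the value network unregularized.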
243,FigureQA: An Annotated Figure Dataset for Visual Reasoning,"We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images.The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts.We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection.Resolving such questions often requires reference to multiple plot elements and synthesis of information distributed spatially throughout a figure.To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives.In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements.We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as a strong baseline.Preliminary results indicate that the task poses a significant machine learning challenge.We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.","We present a question-answering dataset, FigureQA, as a first step towards developing models that can intuitively recognize patterns from visual representations of data.This paper introduces a dataset of templated question answering on figures, involving reasoning about figure elements.The paper introduces a new visual reasoning dataset called FigureQA which consists of 140K figure images and 1.55M QA pairs, which can help in developing models that can extract useful information from visual representations of data." 244,Varieties of Explainable Agency,"In this paper, I discuss some varieties of explanation that can arise in intelligent agents.I distinguish between process accounts, which address the detailed decisions made during heuristic search, and preference accounts, which clarify the ordering of alternatives independent of how they were generated.I also hypothesize which types of users will appreciate which types of explanation.In addition, I discuss three facets of multi-step decision making -- conceptual inference, plan generation, and plan execution -- in which explanations can arise.I also consider alternative ways to present questions to agents and for them to provide their answers.","This position paper analyzes different types of self explanation that can arise in planning and related systems. Discusses different aspects of explanations, particularly in the context of sequential decision making. 
" 245,HighRes-net: Multi-Frame Super-Resolution by Recursive Fusion,"Generative deep learning has sparked a new wave of Super-Resolution algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details.Multi-frame Super-Resolution offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views.This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery.To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: co-registration, fusion, up-sampling, and registration-at-the-loss.Co-registration of low-res views is learned implicitly through a reference-frame channel, with no explicit registration mechanism.We learn a global fusion operator that is applied recursively on an arbitrary number of low-res pairs.We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet.We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth observation data at scale.Our approach recently topped the European Space Agencys MFSR competition on real-world satellite imagery.","The first deep learning approach to MFSR to solve registration, fusion, up-sampling in an end-to-end manner.This paper proposes an end-to-end multi-frame super-resolution algorithm, that relies on a pair-wise co-registrations and fusing blocks (convolutional residual blocks), embedded in a encoder-decoder network 'HighRes-net' that estimates the super-resolution image."", 'This paper proposes a framework including recursive fusion to co-registration loss to solve the problem of super-resolution results and high-resolution labels not being pixel aligned." 
246,Stochastic Gradient Push for Distributed Deep Learning,"Large mini-batch parallel SGD is commonly used for distributed training of deep networks.Approaches that use tightly-coupled exact distributed averaging based on AllReduce are sensitive to slow nodes and high-latency communication.In this work we show the applicability of Stochastic Gradient Push for distributed training.SGP uses a gossip algorithm called PushSum for approximate distributed averaging, allowing for much more loosely coupled communications which can be beneficial in high-latency or high-variability scenarios.The tradeoff is that approximate distributed averaging injects additional noise in the gradient which can affect the train and test accuracies.We prove that SGP converges to a stationary point of smooth, non-convex objective functions.Furthermore, we validate empirically the potential of SGP.For example, using 32 nodes with 8 GPUs per node to train ResNet-50 on ImageNet, where nodes communicate over 10Gbps Ethernet, SGP completes 90 epochs in around 1.5 hours while AllReduce SGD takes over 5 hours, and the top-1 validation accuracy of SGP remains within 1.2% of that obtained using AllReduce SGD.","For distributed training over high-latency networks, use gossip-based approximate distributed averaging instead of exact distributed averaging like AllReduce.The authors propose using gossip algorithms as a general method of computing approximate averages over a set of workers.The paper proves the convergence of SGP for nonconvex smooth functions and shows that SGP can achieve a significant speed-up in high-latency environments without sacrificing too much predictive performance. " 247,An Adversarial Learning Framework for a Persona-based Multi-turn Dialogue Model,"In this paper, we extend the persona-based sequence-to-sequence neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, speaker sentiments and so on.The proposed system, phredGAN, has a persona-based HRED generator and a conditional discriminator.We also explore two approaches to accomplish the conditional discriminator: a system that passes the attribute representation as an additional input into a traditional adversarial discriminator, and a dual discriminator system which, in addition to the adversarial discriminator, collaboratively predicts the attribute that generated the input utterance.To demonstrate the superior performance of phredGAN over the persona Seq2Seq model, we experiment with two conversational datasets, the Ubuntu Dialogue Corpus and TV series transcripts from the Big Bang Theory and Friends.Performance comparison is made with respect to a variety of quantitative measures as well as crowd-sourced human evaluation.We also explore the trade-offs from using either variant on datasets with many but weak attribute modalities and ones with few but strong attribute modalities.",This paper develops an adversarial learning framework for neural conversation models with persona.This paper proposes an extension to hredGAN to simultaneously learn a set of attribute embeddings that represent the persona of each speaker and generate persona-based responses. 248,Biologically-Inspired Spatial Neural Networks,"We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions.To simulate properties of biological systems we add
the costs penalizing long connections and the proximity of neurons in a two-dimensional space.Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task.This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning.","Bio-inspired artificial neural networks, consisting of neurons positioned in a two-dimensional space, are capable of forming independent groups for performing different tasks." 249,Discrete Transformer,"The transformer has become a central model for many NLP tasks from translation to language modeling to representation learning.Its success demonstrates the effectiveness of stacked attention as a replacement for recurrence for many tasks.In theory attention also offers more insights into the model’s internal decisions; however, in practice when stacked it quickly becomes nearly as fully-connected as recurrent models.In this work, we propose an alternative transformer architecture, discrete transformer, with the goal of better separating out internal model decisions.The model uses hard attention to ensure that each step only depends on a fixed context.Additionally, the model uses a separate “syntactic” controller to separate out network structure from decision making.Finally we show that this approach can be further sparsified with direct regularization.Empirically, this approach is able to maintain the same level of performance on several datasets, while discretizing reasoning decisions over the data.","Discrete transformer which uses hard attention to ensure that each step only depends on a fixed context.This paper presents modifications to the standard transformer architecture with the goal of improving interpretability while retaining performance in NLP tasks.This paper proposes three Discrete Transformers: a discrete and stochastic Gumbel-softmax based attention module, a two-stream syntactic and semantic transformer, and sparsity regularization." 250,Learning to predict visual brain activity by predicting future sensory states,"Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states.We build upon the PredNet implementation by Lotter, Kreiman, and Cox to investigate if predictive coding representations are useful to predict brain activity in the visual cortex.We use representational similarity analysis to compare PredNet representations to functional magnetic resonance imaging and magnetoencephalography data from the Algonauts Project.In contrast to previous findings in the literature, we report empirical data suggesting that unsupervised models trained to predict frames of videos without further fine-tuning may outperform supervised image classification baselines in terms of correlation to spatial and temporal data.",We show empirical evidence that predictive coding models yield representations more correlated to brain data than supervised image recognition models. 
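A toy sketch of the spatial penalty in entry 248 above (the exact form of the cost is an assumption; the proximity-of-neurons term mentioned in the abstract is omitted here): give each neuron a 2-D position and penalize strong weights between distant neurons, so long connections become expensive.

import torch

def wiring_cost(weight, pos_in, pos_out):
    """weight: (n_out, n_in); pos_in: (n_in, 2); pos_out: (n_out, 2).
    Distance-weighted L1 penalty on the connection weights."""
    dist = torch.cdist(pos_out, pos_in)        # pairwise Euclidean distances
    return (weight.abs() * dist).sum()

W = torch.randn(8, 16, requires_grad=True)
cost = wiring_cost(W, torch.rand(16, 2), torch.rand(8, 2))

Adding such a cost to the task loss is what encourages neurons working on different tasks to settle into spatially separate clusters.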
251,Joint autoencoders: a flexible meta-learning framework,"The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples.Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest.Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion.We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task.Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations.The method deals with meta-learning in a unified fashion, and can easily deal with data arising from different types of sources.Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.","A generic framework for handling transfer and multi-task learning using pairs of autoencoders with task-specific and shared weights.Proposes a generic framework for end-to-end transfer learning / domain adaptation with deep neural networks. This paper proposes a model for allowing deep neural network architectures to share parameters across different datasets, and applies it to transfer learning.The paper focuses on learning common features from multiple domains data and ends up with a general architecture for multi-task, semi-supervised and transfer learning" 252,Adaptive Neural Trees,"Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures.We unite the two via adaptive neural trees, a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules.ANTs allow increased interpretability via hierarchical clustering, e.g., learning meaningful class associations, such as separating natural vs. 
man-made objects.We demonstrate this on classification and regression tasks, achieving over 99% and 90% accuracy on the MNIST and CIFAR-10 datasets, and outperforming standard neural networks, random forests and gradient boosted trees on the SARCOS dataset.Furthermore, ANT optimisation naturally adapts the architecture to the size and complexity of the training data.","We propose a framework to combine decision trees and neural networks, and show on image classification tasks that it enjoys the complementary benefits of the two approaches, while addressing the limitations of prior work.The authors proposed a new model, Adaptive Neural Trees, by combining the representation learning and gradient optimization of neural networks with architecture learning of decision treesThis paper proposes the Adaptive Neural Trees approach to combine the two learning paradigms of deep neural nets and decision trees" 253,XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering,"While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages.We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language.XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference benchmark.With improvements of up to 4.8, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu.XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language.On the SQuAD question answering task, we see that XLDA provides a 1.0 performance increase on the English evaluation set.Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.","Translating portions of the input during training can improve cross-lingual performance.The paper proposes a cross-lingual data augmentation method to improve the language inference and question answering tasks.This paper proposes to augment crosslingual data with heuristic swaps using aligned translations, like bilingual humans do in code-switching." 254,Mitigating Posterior Collapse in Strongly Conditioned Variational Autoencoders,"Training conditional generative latent-variable models is challenging in scenarios where the conditioning signal is very strong and the decoder is expressive enough to generate a plausible output given only the condition; the generative model tends to ignore the latent variable, suffering from posterior collapse. We find, and empirically show, that one of the major reasons behind posterior collapse is rooted in the way that generative models are conditioned, i.e., through concatenation of the latent variable and the condition. To mitigate this problem, we propose to explicitly make the latent variables depend on the condition by unifying the conditioning and latent variable sampling, thus coupling them so as to prevent the model from discarding the root of variations. To achieve this, we develop a conditional Variational Autoencoder architecture that learns a distribution not only of the latent variables, but also of the condition, the latter acting as prior on the former. 
Our experiments on the challenging tasks of conditional human motion prediction and image captioning demonstrate the effectiveness of our approach at avoiding posterior collapse. Video results of our approach are anonymously provided in http://bit.ly/iclr2020","We propose a conditional variational autoencoder framework that mitigates posterior collapse in scenarios where the conditioning signal is strong enough for an expressive decoder to generate a plausible output from it.This paper considers strongly conditioned generative models, and proposes an objective function and a parameterisation of the variational distribution such that latent variables explicitly depend on input conditions.This paper argues that when the decoder is conditioned on the concatenation of latent variables and auxiliary information, then posterior collapse is more likely than in a vanilla VAE." 255,Reproducibility and Stability Analysis in Metric-Based Few-Shot Learning,"We propose a study of the stability of several few-shot learning algorithms subject to variations in the hyper-parameters and optimization schemes while controlling the random seed. We propose a methodology for testing for statistical differences in model performances under several replications.To study this specific design, we attempt to reproduce results from three prominent papers: Matching Nets, Prototypical Networks, and TADAM.We evaluate on the miniImagenet dataset, using the standard classification task in the 5-ways, 5-shots learning setting at test time.We find that the selected implementations exhibit stability across random seeds and repeats.",We propose a study of the stability of several few-shot learning algorithms subject to variations in the hyper-parameters and optimization schemes while controlling the random seed.This paper studies reproducibility for few-shot learning. 256,Near-Optimal Representation Learning for Hierarchical Reinforcement Learning,"We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach.Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial.To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation.We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice.Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods.",We translate a bound on sub-optimality of representations to a practical training objective in the context of hierarchical reinforcement learning.The authors propose a novel approach to learning a representation for HRL and state an intriguing connection between representation learning and bounding the sub-optimality, which results in a gradient-based algorithm.This paper proposes a way to handle sub-optimality in the context of learning representations, which refers to the sub-optimality of the hierarchical policy with respect to the task reward.
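The cross-lingual augmentation in entry 253 above boils down to swapping one segment of the input for its translation. A hedged sketch for an NLI pair (the translation lookup and the choice to swap one of the two segments are assumptions about how such a pipeline might be wired, not the paper's code):

import random

def xlda_augment(premise, hypothesis, translations, languages):
    """Replace one segment of the pair with its translation into a randomly
    chosen language; the other segment stays in the original language."""
    lang = random.choice(languages)
    if random.random() < 0.5:
        premise = translations[lang][premise]        # pre-computed translation lookup
    else:
        hypothesis = translations[lang][hypothesis]
    return premise, hypothesis

The augmented pair is then fed to the multilingual model exactly like an ordinary training example.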
257,Beyond Cost-to-go Estimates in Situated Temporal Planning,"Heuristic search research often deals with finding algorithms for offline planning which aim to minimize the number of expanded nodes or planning time.In online planning, algorithms for real-time search or deadline-aware search have been considered before.However, in this paper, we are interested in the problem of situated temporal planning, in which an agent's plan can depend on exogenous events in the external world, and thus it becomes important to take the passage of time into account during the planning process. Previous work on situated temporal planning has proposed simple pruning strategies, as well as complex schemes for a simplified version of the associated metareasoning problem.In this paper, we propose a simple metareasoning technique, called the crude greedy scheme, which can be applied in a situated temporal planner.Our empirical evaluation shows that the crude greedy scheme outperforms standard heuristic search based on cost-to-go estimates.","Metareasoning in a Situated Temporal Planner.This paper addresses the problem of situated temporal planning, proposing a further simplification on greedy strategies previously proposed by Shperberg." 258,STRUCTURED ALIGNMENT NETWORKS," Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity.Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence.When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors.We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures.Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective.We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset.We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures.",Matching sentences by learning the latent constituency tree structures with a variant of the inside-outside algorithm embedded as a neural network layer.This paper introduces a structured attention mechanism to compute alignment scores among all possible spans in two given sentences.This paper proposes a model of structured alignments between sentences as a means of comparing sentences by matching their latent structures.
259,Disentangled Representation Learning with Information Maximizing Autoencoder,Learning disentangled representations from unlabelled data is a non-trivial problem.In this paper we propose Information Maximising Autoencoder where the encoder learns powerful disentangled representations through maximizing the mutual information between the representation and given information in an unsupervised fashion.We have evaluated our model on the MNIST dataset and achieved approximately 98.9 % test accuracy while using completely unsupervised training.,"Learn disentangled representations in an unsupervised manner.The authors present a framework in which an autoencoder (E, D) is regularized such that its latent representation shares mutual information with a generated latent space representation." 260,Data Augmentation Generative Adversarial Networks,"Effective training of neural networks requires much data.In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly.Data Augmentation alleviates this by using existing data more effectively.However standard data augmentation produces only limited plausible alternative data.Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation.The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items.As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data.We show that a Data Augmentation Generative Adversarial Network augments standard vanilla classifiers well.We also show a DAGAN can enhance few-shot learning systems such as Matching Networks.We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data.In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot, EMNIST and VGG-Face; in Matching Networks for Omniglot we observe an increase of 0.5% and an increase of 1.8% in EMNIST.",Conditional GANs trained to generate augmented samples of their conditional inputs used to enhance vanilla classification and one-shot learning systems such as matching networks and pixel distance.The authors propose a method to conduct data augmentation where the cross-class transformations are mapped to a low-dimensional latent space using a conditional GAN. 261,Model-Agnostic Feature Selection with Additional Mutual Information,"Answering questions about data can require understanding what parts of an input X influence the response Y. Finding such an understanding can be built by testing relationships between variables through a machine learning model.For example, conditional randomization tests help determine whether a variable relates to the response given the rest of the variables.However, randomization tests require users to specify test statistics.We formalize a class of proper test statistics that are guaranteed to select a feature when it provides information about the response even when the rest of the features are known.We show that f-divergences provide a broad class of proper test statistics.In the class of f-divergences, the KL-divergence yields an easy-to-compute proper test statistic that relates to the AMI.Questions of feature importance can be asked at the level of an individual sample.
We show that estimators from the same AMI test can also be used to find important features in a particular instance.We provide an example to show that perfect predictive models are insufficient for instance-wise feature selection.We evaluate our method on several simulation experiments, on a genomic dataset, a clinical dataset for hospital readmission, and on a subset of classes in ImageNet.Our method outperforms several baselines in various simulated datasets, is able to identify biologically significant genes, can select the most important predictors of a hospital readmission event, and is able to identify distinguishing features in an image-classification task.","We develop a simple regression-based model-agnostic feature selection method to interpret data generating processes with FDR control, and outperform several popular baselines on several simulated, medical, and image datasets.This paper proposes a practical improvement of the conditional randomization test and a new test statistic, proves f-divergence is one possible choice, and shows that KL-divergence cancels out some conditional distributions.This paper addresses the problem of finding useful features in an input that are dependent on a response variable even when conditioning on all other input variables.A model-agnostic method to provide interpretation on the influence of input features on the response of a machine learning model down to the instance level, and proper test statistics for model-agnostic feature selection." 262,Learning From Noisy Singly-labeled Data,"Supervised learning depends on annotated examples, which are taken to be the ground truth.But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk.Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise.Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples.This raises two fundamental questions: How can we best learn from noisy workers? How should we allocate our labeling budget to maximize the performance of a classifier?We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data.The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality.Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality.We establish a generalization error bound for models learned with our algorithm and establish theoretically that it is better to label many examples once when worker quality exceeds a threshold.Experiments conducted on both ImageNet and MS-COCO confirm our algorithm's benefits.","A new approach for learning a model from noisy crowdsourced annotations.This paper proposes a method for learning from noisy labels, focusing on the case when data isn't redundantly labeled, with theoretical and experimental validation.This paper focuses on the learning-from-crowds problem, where jointly updating the classifier weights and the confusion matrices of workers can help on the estimation problem with rare crowdsourced labels.Proposes a supervised learning algorithm for modeling label and worker quality and utilizes the algorithm to study how much redundancy is required in crowdsourcing and whether low redundancy with abundant noisy examples leads to better labels."
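For the "Learning From Noisy Singly-labeled Data" entry above (262), a minimal sketch of the alternating scheme its abstract describes: estimate each worker's quality from agreement with the current model, then reweight the training loss by that estimate. This is a generic illustration under stated assumptions, not the paper's exact algorithm; the function names, the default quality of 0.5, and the weighting rule are hypothetical.

```python
import numpy as np

def estimate_worker_quality(model_probs, labels, worker_ids, n_workers):
    """Estimate each worker's quality as agreement with current model predictions.
    model_probs: (n_examples, n_classes) current model predictions.
    labels: (n_examples,) single noisy label per example.
    worker_ids: (n_examples,) which worker produced each label."""
    preds = model_probs.argmax(axis=1)
    quality = np.full(n_workers, 0.5)  # hypothetical default for unseen workers
    for w in range(n_workers):
        mask = worker_ids == w
        if mask.any():
            quality[w] = (preds[mask] == labels[mask]).mean()
    return quality

def weighted_nll(model_probs, labels, worker_ids, quality, eps=1e-8):
    """Negative log-likelihood where each example is weighted by the estimated
    quality of the worker who labelled it (an illustrative weighting rule)."""
    weights = quality[worker_ids]
    picked = model_probs[np.arange(len(labels)), labels]
    return -(weights * np.log(picked + eps)).sum() / (weights.sum() + eps)

# Toy usage: alternate quality estimation and (here, imaginary) model updates.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=10)
labels = rng.integers(0, 3, size=10)
workers = rng.integers(0, 4, size=10)
q = estimate_worker_quality(probs, labels, workers, n_workers=4)
print("worker quality:", q, "loss:", weighted_nll(probs, labels, workers, q))
```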
263,Explaining the Mistakes of Neural Networks with Latent Sympathetic Examples,"Neural networks make mistakes.The reason why a mistake is made often remains a mystery.As such neural networks often are considered a black box.It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified.In this paper we develop a method for explaining the mistakes of a classifier model by visually showing what must be added to an image such that it is correctly classified.Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create a technique that creates explanations of why an image is misclassified.In this paper we explain our method and demonstrate it on MNIST and CelebA.This approach could aid in demystifying neural networks for a user.",New way of explaining why a neural network has misclassified an image.This paper proposes a method for explaining the classification mistakes of neural networks. Aims to better understand the classification of neural networks and explores the latent space of a variational auto encoder and considers the perturbations of the latent space in order to obtain the correct classification. 264,Branched Multi-Task Networks: Deciding What Layers To Share,"In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand.Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers.Understandably, as the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome.Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive.In this paper, we go beyond these limitations and propose a principled approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities.Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific.Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters.",A method for the automated construction of branched multi-task networks with strong experimental evaluation on diverse multi-tasking datasets.This paper proposes a novel soft parameter sharing Multi-task Learning framework based on a tree-like structure.This paper presents a method to infer multi-task network architectures to determine which part of the network should be shared among different tasks.
265,Training Structured Efficient Convolutional Layers,"Typical recent neural network designs are primarily convolutional layers, but the tricks enabling structured efficient linear layers have not yet been adapted to the convolutional setting.We present a method to express the weight tensor in a convolutional layer using diagonal matrices, discrete cosine transforms and permutations that can be optimised using standard stochastic gradient methods.A network composed of such structured efficient convolutional layers outperforms existing low-rank networks and demonstrates competitive computational efficiency.","It's possible to substitute the weight matrix in a convolutional layer to train it as a structured efficient layer, performing as well as low-rank decomposition.This work applies previous Structured Efficient Linear Layers to conv layers and proposes Structured Efficient Convolutional Layers as a substitution for the original conv layers." 266,SVDocNet: Spatially Variant U-Net for Blind Document Deblurring,"Blind document deblurring is a fundamental task in the field of document processing and restoration, having wide enhancement applications in optical character recognition systems, forensics, etc.Since this problem is highly ill-posed, supervised and unsupervised learning methods are well suited for this application.Using various techniques, extensive work has been done on natural-scene deblurring.However, these extracted features are not suitable for document images.We present SVDocNet, an end-to-end trainable U-Net based spatial recurrent neural network for blind document deblurring where the weights of the RNNs are determined by different convolutional neural networks.This network achieves state of the art performance in terms of both quantitative measures and qualitative results.","We present SVDocNet, an end-to-end trainable U-Net based spatial recurrent neural network (RNN) for blind document deblurring." 267,Novelty Detection Via Blurring," Conventional out-of-distribution detection schemes based on variational autoencoder or Random Network Distillation are known to assign lower uncertainty to the OOD data than the target distribution.In this work, we discover that such conventional novelty detection schemes are also vulnerable to blurred images.Based on the observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training.Our detector is simple, efficient in test time, and outperforms baseline OOD detectors in various domains.Further results show that SVD-RND learns a better target distribution representation than the baselines.Finally, SVD-RND combined with geometric transform achieves near-perfect detection accuracy in CelebA domain.",We propose a novel OOD detector that employs blurred images as adversarial examples. Our model achieves significant OOD detection performance in various domains.This paper presents the idea to use blurred images as regularizing examples to improve out-of-distribution detection performance based on Random Network Distillation.This paper tackles out-of-data distribution by leveraging RND applied to data augmentations by training a model to match the outputs of a random network with an augmentation as input.
268,Large Batch Optimization for Deep Learning: Training BERT in 76 minutes,"Training large deep neural networks on massive datasets is computationally very challenging.There has been a recent surge of interest in using large batch stochastic optimization methods to tackle this issue.The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes.However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks.In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches.Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings.Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning.In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance.By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes.","A fast optimizer for general applications and large-batch training.In this paper, the authors made a study on large-batch training for BERT, and successfully trained a BERT model in 76 minutes.This paper develops a layerwise adaptation strategy that allows training BERT models with large 32k mini-batches vs baseline 512." 269,Role of two learning rates in convergence of model-agnostic meta-learning,"Model-agnostic meta-learning is known as a powerful meta-learning method.However, MAML is notorious for being hard to train because of the existence of two learning rates.Therefore, in this paper, we derive the conditions that the inner learning rate α and the meta-learning rate β must satisfy for MAML to converge to minima with some simplifications.We find that the upper bound of β depends on α, in contrast to the case of using the normal gradient descent method.Moreover, we show that the threshold of β increases as α approaches its own upper bound.This result is verified by experiments on various few-shot tasks and architectures; specifically, we perform sinusoid regression and classification of Omniglot and MiniImagenet datasets with a multilayer perceptron and a convolutional neural network.Based on this outcome, we present a guideline for determining the learning rates: first, search for the largest possible α; next, tune β based on the chosen value of α.",We analyzed the role of two learning rates in model-agnostic meta-learning in convergence.The authors tackled the optimization instability problem in MAML by investigating the two learning rates.This paper studies a method to help tune the two learning rates used in the MAML training algorithm. 270,Learning Function-Specific Word Representations,"We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object structures.Our model induces a joint function-specific word vector space, where vectors of e.g. 
plausible SVO compositions lie close together.The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure.We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference and event similarity.The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%.The proposed framework is versatile and holds promise to support learning function-specific representations beyond the SVO structures.","Task-independent neural model for learning associations between interrelated groups of words.The paper proposed a method for training function-specific word vectors, in which each word is represented with three vectors each in a different category (Subject-Verb-Object).This paper proposes a neural network to learn function-specific word representations and demonstrates the advantage over alternatives." 271,Automatic Measurement on Etched Structure in Semiconductor Using Deep Learning Approach,"The fabrication of semiconductor involves etching process to remove selected areas from wafers.However, the measurement of etched structure in micro-graph heavily relies on time-consuming manual routines.Traditional image processing usually demands a large amount of annotated data and the performance is still poor.We treat this challenge as segmentation problem and use deep learning approach to detect masks of objects in etched structure of wafer.Then, we use simple image processing to carry out automatic measurement on the objects.We employ a Generative Adversarial Network to generate more data to overcome the problem of a very limited dataset.We download 10 SEM images of 4 types from the Internet, based on which we carry out our experiments.Our deep learning based method demonstrates superiority over image processing approach with mean accuracy reaching over 96% for the measurements, compared with the ground truth.To the best of our knowledge, it is the first time that deep learning has been applied in semiconductor industry for automatic measurement.",Using deep learning method to carry out automatic measurement of SEM images in semiconductor industry 272,Scheduling with Complex Consumptive Resources for a Planetary Rover,"Generating and scheduling activities is particularly challenging when considering both consumptive resources and complex resource interactions such as time-dependent resource usage.We present three methods of determining valid temporal placement intervals for an activity in a temporally grounded plan in the presence of such constraints.We introduce the Max Duration and Probe algorithms which are sound, but incomplete, and the Linear algorithm which is sound and complete for linear rate resource consumption.We apply these techniques to the problem of scheduling awakes for a planetary rover where the awake durations are affected by existing activities.We demonstrate how the Probe algorithm performs competitively with the Linear algorithm given an advantageous problem space and well-defined heuristics.We show that the Probe and Linear algorithms outperform the Max Duration algorithm empirically.We then empirically present the runtime differences between the three algorithms.The Probe algorithm is currently baselined for use in the onboard scheduler for NASA’s next
planetary rover, the Mars 2020 rover.",This paper describes and analyzes three methods to schedule non-fixed duration activities in the presence of consumptive resources.The paper presents three approaches for on-board scheduling of activities in a planetary rover under reservoir resource constraints. 273,Classification-Based Anomaly Detection for General Data,"Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence.Recently, classification-based methods were shown to achieve superior results on this task.In this work, we present a unifying view and propose an open-set method to relax current generalization assumptions.Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations.Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types.The strong performance of our method is extensively validated on multiple datasets from different domains.",An anomaly detection method that uses random-transformation classification for generalizing to non-image data.This paper proposes a deep method for anomaly detection that unifies recent deep one-class classification and transformation-based classification approaches.This paper proposes an approach to classification-based anomaly detection for general data by using the affine transformation y = Wx+b. 274,Reducing Sentiment Bias in Language Models via Counterfactual Evaluation,"Recent improvements in large-scale language models have driven progress on automatic generation of syntactically and semantically consistent text for many real-world applications.Many of these advances leverage the availability of large corpora.While training on such corpora encourages the model to understand long-range dependencies in text, it can also result in the models internalizing the social biases present in the corpora.This paper aims to quantify and reduce biases exhibited by language models.Given a conditioning context and a language model, we analyze if the sentiment of the generated text is affected by changes in values of sensitive attributes in the conditioning context, a.k.a. counterfactual evaluation.We quantify these biases by adapting individual and group fairness metrics from the fair machine learning literature.Extensive evaluation on two different corpora shows that state-of-the-art Transformer-based language models exhibit biases learned from data.We propose embedding-similarity and sentiment-similarity regularization methods that improve both individual and group fairness metrics without sacrificing perplexity and semantic similarity---a positive step toward development and deployment of fairer language models for real-world applications.","We reduce sentiment biases based on counterfactual evaluation of text generation using language models.This paper measures sentiment bias in language models as reflected by text generated by the models, and adds other objective terms to the usual language modeling objective to reduce bias.This paper proposes to evaluate bias in pre-trained language models by using a fixed sentiment system and tests several different prefix templates.A method based on semantic similarity and a method based on sentiment similarity to debias the neural language models trained from large datasets."
275,A Bayesian Nonparametric Topic Model with Variational Auto-Encoders,"Topic modeling of text documents is one of the most important tasks in representation learning.In this work, we propose iTM-VAE, which is a Bayesian nonparametric topic model with variational auto-encoders.On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to data automatically.On the other hand, different from other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which has rich representation capacity and can be computed in a simple feed-forward manner.Two variants of iTM-VAE are also proposed in this paper, where iTM-VAE-Prod models the generative process in products-of-experts fashion for better performance and iTM-VAE-G places a prior over the concentration parameter such that the model can adapt a suitable concentration parameter to data automatically.Experimental results on 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-arts in terms of perplexity, topic coherence and document retrieval tasks.Moreover, the ability of adjusting the concentration parameter to data is also confirmed by experiments.","A Bayesian Nonparametric Topic Model with Variational Auto-Encoders which achieves the state-of-the-arts on public benchmarks in terms of perplexity, topic coherence and retrieval tasks.This paper constructs an infinite Topic Model with Variational Auto-Encoders by combining Nalisnick & Smith's stick-breaking variational auto-encoder with latent Dirichlet allocation and several inference techniques used in Miao." 276,One Generation Knowledge Distillation by Utilizing Peer Samples,"Knowledge Distillation is a widely used technique in recent deep learning research to obtain small and simple models whose performance is on a par with their large and complex counterparts.Standard Knowledge Distillation tends to be time-consuming because of the training time spent to obtain a teacher model that would then provide guidance for the student model.It might be possible to cut short the time by training a teacher model on the fly, but it is not trivial to have such a high-capacity teacher that gives quality guidance to student models this way.To improve this, we present a novel framework of Knowledge Distillation exploiting dark knowledge from the whole training set.In this framework, we propose a simple and effective implementation named Distillation by Utilizing Peer Samples in one generation.We verify our algorithm on numerous experiments.Compared with standard training on modern architectures, DUPS achieves an average improvement of 1%-2% on various tasks with nearly zero extra cost.Considering some typical Knowledge Distillation methods which are much more time-consuming, we also get comparable or even better performance using DUPS.","We present a novel framework of Knowledge Distillation utilizing peer samples as the teacher.Proposes a method for improving the effectiveness of knowledge distillation by softening the labels used and employing a dataset instead of a single sample.This paper proposes to address the extra computational cost of training with knowledge distillation, building on the recently proposed Snapshot Distillation technique."
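As background for the knowledge-distillation entry above (276), a minimal sketch of the standard temperature-softened distillation objective that such methods build on. The DUPS peer-sample mechanism itself is not reproduced here; the temperature T=4 and weight alpha=0.7 are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD objective: (1-alpha)*CE(hard labels) + alpha*T^2*KL(teacher || student),
    with the KL computed on temperature-softened distributions."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[np.arange(len(labels)), labels] + 1e-12).mean()
    p_t_T = softmax(teacher_logits, T)
    p_s_T = softmax(student_logits, T)
    kl = (p_t_T * (np.log(p_t_T + 1e-12) - np.log(p_s_T + 1e-12))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

# Toy usage with random logits for a 5-class problem.
rng = np.random.default_rng(0)
s, t = rng.normal(size=(8, 5)), rng.normal(size=(8, 5))
y = rng.integers(0, 5, size=8)
print(distillation_loss(s, t, y))
```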
277,META LEARNING SHARED HIERARCHIES,"We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps.Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies.We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks.We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies.We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes.We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.",learn hierarchical sub-policies through end-to-end training over a distribution of tasksThe authors consider the problem of learning a useful set of ‘sub policies’ that can be shared between tasks so as to jump start learning on new tasks drawn from the task distribution. This paper proposes a novel method for inducing temporal hierarchical structure in a specialized multi-task setting. 278,Learning Document Embeddings With CNNs,This paper proposes a new model for document embedding.Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize.We take a different route and use recent advances in language modeling to develop a convolutional neural network embedding model.This allows us to train deeper architectures that are fully parallelizable.Stacking layers together increases the receptive field allowing each successive layer to model increasingly longer range semantic dependencies within the document.Empirically we demonstrate superior results on two publicly available benchmarks.Full code will be released with the final version of this paper.,Convolutional neural network model for unsupervised document embedding.Introduces a new model for the general task of inducing document representations (embeddings) which uses a CNN architecture to improve computational efficiency.This paper proposes using CNNs with a skip-gram like objective as a fast way to output document embeddings 279,Ternary MobileNets via Per-Layer Hybrid Filter Banks,"MobileNets family of computer vision neural networks have fueled tremendous progress in the design and organization of resource-efficient architectures in recent years.New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already compute-efficient networks.Model quantization is a widely used technique to compress and accelerate neural network inference and prior works have quantized MobileNets to 4 − 6 bits albeit with a modest to significant drop in accuracy.While quantization to sub-byte values has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs.Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization,
we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets.The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets.Using this proposed quantization method, we quantized a substantial portion of weight filters of MobileNets to ternary values resulting in 27.98% savings in energy, and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.","2x savings in model size, 28% energy reduction for MobileNets on ImageNet at no loss in accuracy using hybrid layers composed of conventional full-precision filters and ternary filters.Focuses on quantizing the MobileNets architecture to ternary values, lowering the required space and computation in order to make neural networks more energy efficient.The paper proposes a layer-wise hybrid filter bank which quantizes only a fraction of convolutional filters to ternary values for the MobileNets architecture." 280,Synthetic vs Real: Deep Learning on Controlled Noise,"Performing controlled experiments on noisy data is essential in thoroughly understanding deep learning across a spectrum of noise levels.Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting.To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels.As real-world noise possesses unique properties, to understand the difference, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings.Our study shows that: Deep Neural Networks generalize much better on real-world noise. DNNs may not learn patterns first on real-world noisy data. When networks are fine-tuned, ImageNet architectures generalize well on noisy data. Real-world noise appears to be less harmful, yet it is more difficult for robust DNN methods to improve. Robust learning methods that work well on synthetic noise may not work as well on real-world noise, and vice versa.We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data.","We establish a benchmark of controlled real noise and reveal several interesting findings about real-world noisy data.This paper compares 6 existing noisy label learning methods in two training settings: from scratch, and finetuning.The authors establish a large dataset and benchmark of controlled real-world noise for performing controlled experiments on noisy data in deep learning."
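For the Ternary MobileNets entry above (279), a sketch of what quantizing a filter to ternary values can look like, in the style of generic ternary-weight quantization: weights are mapped to {-alpha, 0, +alpha} with a threshold proportional to the mean absolute weight. This is not the paper's per-layer hybrid filter-bank construction; the 0.7 threshold factor is an assumption carried over from prior ternary-quantization work.

```python
import numpy as np

def ternarize(weights, delta_factor=0.7):
    """Quantize a weight tensor to {-alpha, 0, +alpha}.
    delta_factor scales the mean |w| to get the pruning threshold (assumed value)."""
    w = np.asarray(weights, dtype=np.float64)
    delta = delta_factor * np.abs(w).mean()                  # weights below this become 0
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0    # per-tensor scaling factor
    return alpha * np.sign(w) * mask, alpha

# Toy usage on one 3x3 convolutional filter.
rng = np.random.default_rng(0)
filt = rng.normal(scale=0.1, size=(3, 3))
q, alpha = ternarize(filt)
print("alpha:", alpha)
print(q)
```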
281,Learning to Design RNA,"Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints.Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA.LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure.By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks.Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process.Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitude faster in reaching the previous state-of-the-art performance.In an ablation study, we analyze the importance of our method's different components.","We learn to solve the RNA Design problem with reinforcement learning using meta learning and autoML approaches.Used policy gradient optimization for generating RNA sequences which fold into a target secondary structure, resulting in clear accuracy and runtime improvements. " 282,Pruning neural networks: is it time to nip it in the bud?,"Pruning is a popular technique for compressing a neural network: a large pre-trained network is fine-tuned while connections are successively removed.However, the value of pruning has largely evaded scrutiny.In this extended abstract, we examine residual networks obtained through Fisher-pruning and make two interesting observations.First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network.Second, it is the architectures obtained through the pruning process --- not the learnt weights --- that prove valuable.Such architectures are powerful when trained from scratch.Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.","Training small networks beats pruning, but pruning finds good small networks to train that are easy to copy."
283,Aggregating Crowdsourced Labels in Subjective Domains,"Supervised learning problems---particularly those involving social data---are often subjective.That is, human readers, looking at the same data, might come to legitimate but completely different conclusions based on their personal experiences.Yet in machine learning settings feedback from multiple human annotators is often reduced to a single ground truth label, thus hiding the true, potentially rich and diverse interpretations of the data found across the social spectrum.We explore the rewards and challenges of discovering and learning representative distributions of the labeling opinions of a large human population.A major, critical cost to this approach is the number of humans needed to provide enough labels not only to obtain representative samples but also to train a machine to predict representative distributions on unlabeled data.We propose aggregating label distributions over, not just individuals, but also data items, in order to maximize the costs of humans in the loop.We test different aggregation approaches on state-of-the-art deep learning models.Our results suggest that careful label aggregation methods can greatly reduce the number of samples needed to obtain representative distributions.",We study the problem of learning to predict the underlying diversity of beliefs present in supervised learning domains. 284,Deep Generative Inpainting with Comparative Sample Augmentation,"Recent advancements in deep learning techniques such as Convolutional Neural Networks and Generative Adversarial Networks have achieved breakthroughs in the problem of semantic image inpainting, the task of reconstructing missing pixels in given images.While much more effective than conventional approaches, deep learning models require large datasets and great computational resources for training, and inpainting quality varies considerably when training data vary in size and diversity.To address these problems, we present in this paper an inpainting strategy of comparative sample augmentation, which enhances the quality of the training set by filtering out irrelevant images and constructing additional images using information about the surrounding regions of the images to be inpainted.Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying sizes, while maintaining inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific consideration.",We introduced a strategy which enables inpainting models on datasets of various sizes.Helps image inpainting using GANs by using a comparative augmenting filter and adding random noise to each pixel.
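For the crowdsourced-labels entry above (283), a minimal sketch of training against an aggregated label distribution instead of a single ground-truth label: annotator votes for each item are turned into an empirical distribution and the model is fit with cross-entropy against it. The paper's aggregation over data items is not shown; all names and the smoothing constant are hypothetical.

```python
import numpy as np

def votes_to_distribution(votes, n_classes, smoothing=1e-3):
    """Turn a list of annotator votes for one item into an empirical label distribution."""
    counts = np.bincount(votes, minlength=n_classes).astype(float) + smoothing
    return counts / counts.sum()

def soft_cross_entropy(pred_probs, target_dist, eps=1e-12):
    """Cross-entropy of the model's predictions against the aggregated label distributions."""
    return -(target_dist * np.log(pred_probs + eps)).sum(axis=-1).mean()

# Toy usage: three items, each with a handful of (possibly disagreeing) annotators.
votes_per_item = [[0, 0, 1], [2, 2, 2, 1], [1, 0, 1, 1]]
targets = np.stack([votes_to_distribution(np.array(v), n_classes=3) for v in votes_per_item])
preds = np.full((3, 3), 1.0 / 3.0)            # a uniform-prediction placeholder model
print(targets)
print("loss:", soft_cross_entropy(preds, targets))
```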
285,Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step,"Generative adversarial networks are a family of generative models that do not minimize a single training criterion.Unlike other generative models, the data distribution is learned via a game between a generator and a discriminator that each minimize their own cost.GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players’ parameters.One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium.Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence.We show that this view is overly restrictive.During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful.We provide empirical counterexamples to the view of GAN training as divergence minimization.Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail.We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful.This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.","We find evidence that divergence minimization may not be an accurate characterization of GAN training.The submission aims to present empirical evidence that the theory of divergence minimization is more a tool to understand the outcome of training GANs than a necessary condition to be enforced during training itself.This paper studies non-saturating GANs and the effect of two penalized gradient approaches, considering several thought experiments to demonstrate observations and validate them on real data experiments."
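The entry above (285) contrasts divergence-minimization views of GAN training with how non-saturating GANs are trained in practice, and discusses gradient penalties. Below is a small generic sketch of those two ingredients: the saturating versus non-saturating generator losses, and a gradient-penalty term on the discriminator. The penalty coefficient and unit-norm target follow common WGAN-GP-style defaults and are assumptions, not the paper's specific choices.

```python
import numpy as np

def generator_losses(d_on_fake, eps=1e-12):
    """Two generator objectives from the GAN literature:
    the minimax ('saturating') loss and the non-saturating loss used in practice."""
    saturating = np.log(1.0 - d_on_fake + eps).mean()       # min_G E[log(1 - D(G(z)))]
    non_saturating = -np.log(d_on_fake + eps).mean()        # min_G -E[log D(G(z))]
    return saturating, non_saturating

def gradient_penalty(grad_norms, target=1.0, coeff=10.0):
    """Generic gradient penalty: push the norm of the discriminator's input gradients
    toward a target value (coeff and target are illustrative defaults)."""
    return coeff * ((grad_norms - target) ** 2).mean()

# Toy usage: discriminator outputs on fake samples and gradient norms at sample points.
rng = np.random.default_rng(0)
d_fake = rng.uniform(0.01, 0.3, size=64)        # D thinks fakes are unlikely to be real
norms = rng.uniform(0.5, 1.5, size=64)
print(generator_losses(d_fake))
print("penalty:", gradient_penalty(norms))
```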
286,A Data-Efficient Mutual Information Neural Estimator for Statistical Dependency Testing,"Measuring Mutual Information between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications.Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation.In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator that can provide a tight lower confidence interval of MI under limited data, through adding cross-validation to the MINE lower bound.Hyperparameter search is employed and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparameters, reduce overfitting and improve accuracy.With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes.We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real world fMRI dataset, with an application to inter-subject correlation analysis.","A new & practical statistical test of dependency using neural networks, benchmarked on synthetic benchmarks and a real fMRI dataset.Proposes a neural-network-based estimation of mutual information which can reliably work with small datasets, reducing the sample complexity by decoupling the network learning problem and the estimation problem." 287,LEAP: Learning Embeddings for Adaptive Pace,"Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem.However, choosing a non-trivial scheduling method may drastically improve convergence.In this paper, we propose a Self-Paced Learning-fused Deep Metric Learning framework, which we call Learning Embeddings for Adaptive Pace.Our method parameterizes mini-batches dynamically based on the easiness and true diverseness of the sample within a salient feature representation space.In LEAP, we train a Convolutional Neural Network to learn an expressive representation space by adaptive density discrimination using the Magnet Loss.The CNN classifier dynamically selects samples to form a mini-batch based on the easiness from cross-entropy losses and the true diverseness of examples from the representation space sculpted by the CNN.We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN.We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets.","LEAP combines the strength of adaptive sampling with that of mini-batch online learning and adaptive representation learning to formulate a representative self-paced strategy in an end-to-end DNN training protocol.Introduces a method for creating mini-batches for a student network by using a second learned representation space to dynamically select examples by their 'easiness and true diverseness'.Evaluates classification accuracy on MNIST, FashionMNIST, and CIFAR-10 datasets to learn a representation with curriculum-learning-style minibatch selection in an end-to-end framework."
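The DEMINE entry above (286) builds on neural lower bounds for mutual information. Below is a minimal sketch of the Donsker-Varadhan bound that MINE-style estimators optimize, evaluated with a fixed statistics function on toy data; the cross-validated confidence interval and meta-learning parts of DEMINE are not reproduced, and stat_fn is a hypothetical stand-in for a trained critic network.

```python
import numpy as np

def dv_lower_bound(stat_fn, x, y, rng):
    """Donsker-Varadhan bound: I(X;Y) >= E_joint[T(x,y)] - log E_marginal[exp(T(x,y'))].
    The marginal term shuffles y to break the dependence between x and y."""
    joint_term = stat_fn(x, y).mean()
    y_shuffled = y[rng.permutation(len(y))]
    marginal_term = np.log(np.exp(stat_fn(x, y_shuffled)).mean())
    return joint_term - marginal_term

# Toy usage: correlated Gaussians and a simple, untrained quadratic critic.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)
stat_fn = lambda a, b: 0.2 * a * b   # hypothetical critic, scaled to keep exp() well-behaved
# The true MI here is about 0.5 nats, so this untrained critic gives a loose but valid bound.
print("DV lower bound estimate:", dv_lower_bound(stat_fn, x, y, rng))
```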
288,Construction of Macro Actions for Deep Reinforcement Learning,"Conventional deep reinforcement learning typically determines an appropriate primitive action at each timestep, which requires an enormous amount of time and effort for learning an effective policy, especially in large and complex environments.To deal with the issue fundamentally, we incorporate macro actions, defined as sequences of primitive actions, into the primitive action space to form an augmented action space.The problem lies in how to find an appropriate macro action to augment the primitive action space. The agent using a proper augmented action space is able to jump to a farther state and thus speed up the exploration process as well as facilitate the learning procedure.In previous research, macro actions are developed by mining the most frequently used action sequences or repeating previous actions.However, the most frequently used action sequences are extracted from a past policy, which may only reinforce the original behavior of that policy.On the other hand, repeating actions may limit the diversity of behaviors of the agent.Instead, we propose to construct macro actions by a genetic algorithm, which eliminates the dependency of the macro action derivation procedure from the past policies of the agent. Our approach appends a macro action to the primitive action space one at a time and evaluates whether the augmented action space leads to promising performance or not. We perform extensive experiments and show that the constructed macro actions are able to speed up the learning process for a variety of deep reinforcement learning methods.Our experimental results also demonstrate that the macro actions suggested by our approach are transferable among deep reinforcement learning methods and similar environments.We further provide a comprehensive set of ablation analysis to validate our methodology.","We propose to construct macro actions by a genetic algorithm, which eliminates the dependency of the macro action derivation procedure from the past policies of the agent.This paper proposes a genetic algorithm for constructing macro actions for deep reinforcement learning by appending a macro action to the primitive action space." 289,Inferring hierarchies of latent features in calcium imaging data,"A key problem in neuroscience and life sciences more generally is that the data generation process is often best thought of as a hierarchy of dynamic systems.One example of this is in-vivo calcium imaging data, where observed calcium transients are driven by a combination of electro-chemical kinetics and hypothesized trajectories around manifolds that determine the frequency of these transients.A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamic structure of reaching behaviour from spiking data modelled as a Poisson process.Here we extend this approach using a ladder method to infer the spiking events driving calcium transients along with the deeper latent dynamic system.We show strong performance of this approach on a benchmark synthetic dataset against a number of alternatives.",We propose an extension to LFADS capable of inferring spike trains to reconstruct calcium fluorescence traces using hierarchical VAEs.
290,Unsupervised Neural Machine Translation,"In spite of the recent success of neural machine translation in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs.There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal.In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora.Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation.Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation.The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively.Our implementation is released as an open source project.","We introduce the first successful method to train neural machine translation in an unsupervised manner, using nothing but monolingual corpora.The authors present a model for unsupervised NMT which requires no parallel corpora between the two languages of interest. This is a paper on unsupervised MT which trains a standard architecture using word embeddings in a shared embedding space only with bilingual word pairs and an encoder-decoder trained using monolingual data." 291,"Progressive Growing of GANs for Improved Quality, Stability, and Variation","We describe a new training methodology for generative adversarial networks.The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2.We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10.Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator.Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation.As an additional contribution, we construct a higher-quality version of the CelebA dataset.","We train generative adversarial networks in a progressive fashion, enabling us to generate high-resolution images with high quality.Introduces progressive growing and a simple parameter-free minibatch summary statistic feature for use in GAN training to enable synthesis of high-resolution images."
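For the progressive-growing entry above (291), a small sketch of the fade-in mechanism that makes adding a new resolution stable: the output of the newly added higher-resolution block is blended with an upsampled copy of the previous output, with the blend weight alpha ramped from 0 to 1 during training. The nearest-neighbour upsampling and the linear alpha schedule are illustrative assumptions.

```python
import numpy as np

def upsample_nearest(img):
    """Nearest-neighbour 2x upsampling of an (H, W, C) image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def faded_output(old_block_out, new_block_out, alpha):
    """Blend the upsampled old-resolution output with the new block's output.
    alpha is ramped from 0 to 1 while the new layers are faded in."""
    return (1.0 - alpha) * upsample_nearest(old_block_out) + alpha * new_block_out

# Toy usage: fading in an 8x8 block on top of a 4x4 output.
rng = np.random.default_rng(0)
low = rng.normal(size=(4, 4, 3))
high = rng.normal(size=(8, 8, 3))
for alpha in (0.0, 0.5, 1.0):
    print(alpha, faded_output(low, high, alpha).shape)
```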
292,DeepSphere: a graph-based spherical CNN,"Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance.DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata.This contribution is twofold.First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors.Second, we evaluate DeepSphere on relevant problems.Experiments show state-of-the-art performance and demonstrate the efficiency and flexibility of this formulation.Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay.",A graph-based spherical CNN that strikes an interesting balance of trade-offs for a wide variety of applications.Combines existing CNN frameworks based on the discretization of a sphere as a graph to show a convergence result which is related to the rotation equivariance on a sphere.The authors use the existing graph CNN formulation and a pooling strategy that exploits hierarchical pixelations of the sphere to learn from the discretized sphere. 293,State-Denoised Recurrent Neural Networks,"Recurrent neural networks are difficult to train on sequence processing tasks, not only because input noise may be amplified through feedback, but also because any inaccuracy in the weights has similar consequences as input noise.We describe a method for denoising the hidden state during training to achieve more robust representations thereby improving generalization performance.Attractor dynamics are incorporated into the hidden state to 'clean up' representations at each step of a sequence.The attractor dynamics are trained through an auxiliary denoising loss to recover previously experienced hidden states from noisy versions of those states.This state-denoised recurrent neural network performs multiple steps of internal processing for each external sequence step.On a range of tasks, we show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN with attractor dynamics on the hidden state but without the auxiliary loss.We argue that attractor dynamics---and corresponding connectivity constraints---are an essential component of the deep learning arsenal and should be invoked not only for recurrent networks but also for improving deep feedforward nets and intertask transfer.",We propose a mechanism for denoising the internal state of an RNN to improve generalization performance.
294,Variance Reduction for Reinforcement Learning in Input-Driven Environments,"We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system.Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking.Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns.Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training.We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines.We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs.Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.","For environments dictated partially by external input processes, we derive an input-dependent baseline that provably reduces the variance for policy gradient methods and improves the policy performance in a wide range of RL tasks.The authors consider the problem of learning in input-driven environments, show how the PG theorem still applies for an input-aware critic, and show that input-dependent baselines are the best to use in conjunction with that critic.This paper introduces the notion of input-dependent baselines in Policy Gradient Methods in RL, and proposes different methods to train the input-dependent baseline function to help remove the variance from external factor perturbations."
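For the input-dependent baseline entry above (294), a minimal sketch of where such a baseline enters the policy-gradient estimator: the advantage at each step subtracts a baseline that may condition on the exogenous input sequence as well as the state. The lambda baseline below is a hypothetical stand-in for the meta-learned baseline network the abstract describes.

```python
import numpy as np

def policy_gradient_advantages(rewards, states, input_seq, baseline, gamma=0.99):
    """Compute per-step advantages R_t - b(s_t, input) for a single trajectory.
    baseline(state, input_seq, t) may depend on the exogenous input process."""
    T = len(rewards)
    returns = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):                     # discounted return-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = np.array([returns[t] - baseline(states[t], input_seq, t) for t in range(T)])
    return returns, advantages

# Toy usage: a 5-step trajectory with an exogenous input sequence.
rewards = [1.0, 0.0, 0.5, 0.0, 1.0]
states = [0, 1, 1, 2, 0]
inputs = [3.0, 1.0, 0.0, 2.0, 4.0]
baseline = lambda s, inp, t: 0.2 * s + 0.1 * inp[t]  # hypothetical input-dependent baseline
print(policy_gradient_advantages(rewards, states, inputs, baseline))
```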
295,Style Memory: Making a Classifier Network Generative,"Deep networks have shown great performance in classification tasks.However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification.We introduce a network that has the capacity to do both classification and reconstruction by adding a ""style memory"" to the output layer of the network.We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses.The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yields good reconstructions of the inputs when the classification is correct.We further investigate the nature of the style memory, and how it relates to composing digits and letters.","Augmenting the top layer of a classifier network with a style memory enables it to be generative.This paper proposes to train a classifier neural network not just to classify, but also to reconstruct a representation of its input, in order to factorize the class information from the appearance.The paper proposes training an autoencoder such that the middle layer representation consists of the class label of the input and a hidden vector representation" 296,Diversity and Depth in Per-Example Routing Models,"Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works.Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions.Both architectural diversity and routing depth can increase the representational power of a routing network.In this work, we address both of these deficiencies.We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth.In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup.However, when scaling up routing depth, we find that modern routing techniques struggle with optimization.We conclude by discussing both the positive and negative results, and suggest directions for future research.","Per-example routing models benefit from architectural diversity, but still struggle to scale to a large number of routing decisions.Adds diversity to the type of architectural unit available for the router at each decision and scales to deeper networks, achieving state of the art performance on Omniglot.
This work extends routing networks to use diverse architectures across routed modules" 297,Scaling up Deep Learning for PDE-based Models,"Across numerous applications, forecasting relies on numerical solvers for partial differential equations.Although the use of deep-learning techniques has been proposed, the uses have been restricted by the fact that the training data are obtained using PDE solvers.Thereby, the uses were limited to domains where the PDE solver was applicable, but no further.We present methods for training on small domains, while applying the trained models on larger domains, with consistency constraints ensuring the solutions are physically meaningful even at the boundary of the small domains.We demonstrate the results on an air-pollution forecasting model for Dublin, Ireland.","We present RNNs for training surrogate models of PDEs, wherein consistency constraints ensure the solutions are physically meaningful, even when the training uses much smaller domains than the trained model is applied to." 298,Training GANs with Optimism,"We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Descent for training Wasserstein GANs.Recent theoretical results have shown that optimistic mirror descent can enjoy faster regret rates in the context of zero-sum games.Training WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs.We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle.We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum.We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences.We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants.Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam.We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.","We propose the use of optimistic mirror descent to address cycling problems in the training of GANs. We also introduce the Optimistic Adam algorithm.This paper proposes the use of optimistic mirror descent to train WGANs.The paper proposes to use optimistic gradient descent for GAN training that avoids the cycling behavior observed with SGD and its variants and provides promising results in GAN training.This paper proposes a simple modification of standard gradient descent, claiming to improve the convergence of GANs and other minimax optimization problems."
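For the optimism entry above (298), a small sketch of the optimistic gradient step the paper builds on, theta_{t+1} = theta_t - 2*eta*g_t + eta*g_{t-1}, compared with plain simultaneous gradient descent on a bilinear min-max toy game. The toy game, step size, and iteration count are illustrative assumptions, not the paper's WGAN experiments.

```python
import numpy as np

def simulate_bilinear_game(step_fn, eta=0.1, steps=2000):
    """min_x max_y x*y: descent direction for x is y, and for y's ascent it is -x."""
    x, y = 1.0, 1.0
    gx_prev, gy_prev = 0.0, 0.0
    for _ in range(steps):
        gx, gy = y, -x
        x, y, gx_prev, gy_prev = step_fn(x, y, gx, gy, gx_prev, gy_prev, eta)
    return np.hypot(x, y)                   # distance from the equilibrium (0, 0)

def gd_step(x, y, gx, gy, gx_prev, gy_prev, eta):
    # Plain simultaneous gradient step: spirals away from the equilibrium.
    return x - eta * gx, y - eta * gy, gx, gy

def omd_step(x, y, gx, gy, gx_prev, gy_prev, eta):
    # Optimistic update: theta - 2*eta*g_t + eta*g_{t-1}; last iterate converges here.
    return x - 2 * eta * gx + eta * gx_prev, y - 2 * eta * gy + eta * gy_prev, gx, gy

print("GD  distance from equilibrium:", simulate_bilinear_game(gd_step))
print("OMD distance from equilibrium:", simulate_bilinear_game(omd_step))
```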
299,Rethinking Generalized Matrix Factorization for Recommendation: The Importance of Multi-hot Encoding,"Learning good representations of users and items is crucially important to recommendation with implicit feedback.Matrix factorization is the basic idea to derive the representations of users and items by decomposing the given interaction matrix.However, existing matrix factorization based approaches share the limitation in that the interaction between user embedding and item embedding is only weakly enforced by fitting the given individual rating value, which may lose potentially useful information.In this paper, we propose a novel Augmented Generalized Matrix Factorization approach that is able to incorporate the historical interaction information of users and items for learning effective representations of users and items.Despite the simplicity of our proposed approach, extensive experiments on four public implicit feedback datasets demonstrate that our approach outperforms state-of-the-art counterparts.Furthermore, the ablation study demonstrates that by using multi-hot encoding to enrich user embedding and item embedding for Generalized Matrix Factorization, better performance, faster convergence, and lower training loss can be achieved.",A simple extension of generalized matrix factorization can outperform state-of-the-art approaches for recommendation.The work presents a matrix factorization framework for enforcing the effect of historical data when learning user preferences in collaborative filtering settings. 300,Representing dynamically: An active process for describing sequential data,"We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions.The method simultaneously acquires representations of input data and its dynamics.It is based on a hierarchical generative model composed of two levels.In the first level, a model learns representations to generate observed data.In the second level, representational states encode the dynamics of the lower one.The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models.The method actively explores the latent space guided by its knowledge and the uncertainty about it.That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space.So, no encoder or inference models are used since the generators also serve as their inverse transformations.The method is evaluated in two scenarios, with static images and with videos.The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders.With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.","A method that build representations of sequential data and its dynamics through generative models with an active processCombines neural networks and Gaussian distributions to create an architecture and generative model for images and video which minimizes the error between generated and supplied images.The paper proposes a Bayesian network model, realized as a neural network, that learns different data in the form of a linear dynamical system" 301,POLYNOMIAL ACTIVATION FUNCTIONS,"Activation is a nonlinearity function that plays a predominant role in the convergence and performance of deep neural networks.While 
Rectified Linear Unit is the most successful activation function, its derivatives have shown superior performance on benchmark datasets.In this work, we explore the polynomials as activation functions that can approximate continuous real valued function within a given interval.Leveraging this property, the main idea is to learn the nonlinearity, accepting that the ensuing function may not be monotonic.While having the ability to learn more suitable nonlinearity, we cannot ignore the fact that it is a challenge to achieve stable performance due to exploding gradients - which is prominent with the increase in order.To handle this issue, we introduce dynamic input scaling, output scaling, and lower learning rate for the polynomial weights.Moreover, lower learning rate will control the abrupt fluctuations of the polynomials between weight updates.In experiments on three public datasets, our proposed method matches the performance of prior activation functions, thus providing insight into a network’s nonlinearity preference.",We propose polynomial as activation functions.The authors introduce learnable activation functions that are parameterized by polynomial functions and show results slightly better than ReLU. 302,Curiosity-driven Exploration by Bootstrapping Features,"We introduce CBF, an exploration method that works in the absence of rewards or end of episode signal.CBF is based on intrinsic reward derived from the error of a dynamics model operating in feature space.It was inspired by, is easy to implement, and can achieve results such as passing four levels of Super Mario Bros, navigating VizDoom mazes and passing two levels of SpaceInvaders.We investigated the effect of combining the method with several auxiliary tasks, but find inconsistent improvements over the CBF baseline.",A simple intrinsic motivation method using forward dynamics model error in feature space of the policy. 303,Disentangling Improves VAEs' Robustness to Adversarial Attacks,"This paper is concerned with the robustness of VAEs to adversarial attacks.We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as β-TCVAE improve robustness, as demonstrated through a variety of previously proposed adversarial attacks; Gondim-Ribeiro et al.; Kos et al.).This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.","We show that disentangled VAEs are more robust than vanilla VAEs to adversarial attacks that aim to trick them into decoding the adversarial input to a chosen target. We then develop an even more robust hierarchical disentangled VAE, Seatbelt-VAE.The authors propose a new VAE model called seatbelt-VAE, showing to be more robust for latent attack than benchmarks." 
304,Function changes in the Backpropagation equation is equivalent to an implicit learning rate,"The backpropagation algorithm is the de-facto standard for credit assignment in artificial neural networks due to its empirical results.Since its conception, variants of the backpropagation algorithm have emerged.More specifically, variants that leverage function changes in the backpropagation equations to satisfy their specific requirements.Feedback Alignment is one such example, which replaces the weight transpose matrix in the backpropagation equations with a random matrix in search of a more biologically plausible credit assignment algorithm.In this work, we show that function changes in the backpropagation procedure is equivalent to adding an implicit learning rate to an artificial neural network.Furthermore, we learn activation function derivatives in the backpropagation equations to demonstrate early convergence in these artificial neural networks.Our work reports competitive performances with early convergence on MNIST and CIFAR10 on sufficiently large deep neural network architectures.",We demonstrate that function changes in the backpropagation is equivalent to an implicit learning rate 305,"RL-ST: Reinforcing Style, Fluency and Content Preservation for Unsupervised Text Style Transfer","Unsupervised text style transfer is the task of re-writing text of a given style into a target style without using a parallel corpus of source style and target style sentences for training.Style transfer systems are evaluated on their ability to generate sentences that1) possess the target style,2) are fluent and natural sounding, and3) preserve the non-stylistic parts of the source sentence.We train a reinforcement learning based unsupervised style transfer system that incorporates rewards for the above measures, and describe novel rewards shaping methods for the same.Our approach does not attempt to disentangle style and content, and leverages the power of massively pre-trained language models as well as the Transformer.Our system significantly outperforms existing state-of-art systems based on human as well as automatic evaluations on target style, fluency and content preservation as well as on overall success of style transfer, on a variety of datasets.","A reinforcement learning approach to text style transferIntroduces an RL-based method which leverages a pre-trained language model to transfer text style, without a disentanglement objective, while using style-transfer generations from another model.The authors propose a combination reward composed of fluency, content, and style for text style transfer." 
306,Semantic Hierarchy Emerges in the Deep Generative Representations for Scene Synthesis,"Despite the success of Generative Adversarial Networks in image synthesis, there lacks enough understanding on what networks have learned inside the deep generative representations and how photo-realistic images are able to be composed from random noises.In this work, we show that highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes.By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image.Such a quantification identifies the human-understandable variation factors learned by GANs to compose scenes.The qualitative and quantitative results suggest that the generative representations learned by GAN are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as color scheme.Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation.","We show that highly-structured semantic hierarchy emerges in the deep generative representations as a result for synthesizing scenes.Paper investigates the aspects encoded by the latent variables input into different layers in StyleGAN.The paper presents a visually-guided interpretation of activations of the convolution layers in the generator of StyleGAN on layout, scene category, scene attributes, and color." 307,All SMILES Variational Autoencoder for Molecular Property Prediction and Optimization,"Variational autoencoders defined over SMILES string and graph-based representations of molecules promise to improve the optimization of molecular properties, thereby revolutionizing the pharmaceuticals and materials industries.However, these VAEs are hindered by the non-unique nature of SMILES strings and the computational cost of graph convolutions.To efficiently pass messages along all paths through the molecular graph, we encode multiple SMILES strings of a single molecule using a set of stacked recurrent neural networks, harmonizing hidden representations of each atom between SMILES representations, and use attentional pooling to build a final fixed-length latent representation.By then decoding to a disjoint set of SMILES strings of the molecule, our All SMILES VAE learns an almost bijective mapping between molecules and latent representations near the high-probability-mass subspace of the prior.Our SMILES-derived but molecule-based latent representations significantly surpass the state-of-the-art in a variety of fully- and semi-supervised property regression and molecular property optimization tasks.","We pool messages amongst multiple SMILES strings of the same molecule to pass information along all paths through the molecular graph, producing latent representations that significantly surpass the state-of-the-art in a variety of tasks.Method uses multiple inputs of SMILES strings, character-wise feature fusion across those strings, and network training through multiple output targets of SMILES strings, creating a robust fixed-length latent representation independent of SMILES variation. 
The authors describe a novel variational autoencoder like method for molecules which encode molecules as strings to reduce the operations needed to share information across atoms in the molecule." 308,Diversity-Sensitive Conditional Generative Adversarial Networks,"We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network.Although conditional distributions are multi-modal in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in latent code.To address such issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes.The proposed regularization is simple, general, and can be easily integrated into most conditional GAN objectives.Additionally, explicit regularization on generator allows our method to control a balance between visual quality and diversity.We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction.We show that simple addition of our regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task.","We propose a simple and general approach that avoids a mode collapse problem in various conditional GANs.The paper proposes a regularization term for the conditional GAN objective in order to promote diverse multimodal generation and prevent mode collapse.The paper proposes a method for generating diverse outputs for various conditional GAN frameworks including image-to-image translation, image-inpainting, and video prediction, which can be applied to various conditional synthesis frameworks for various tasks. " 309,Addressing the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts,"The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context.Lexical features are fed into the first layer and propagated through a deep network of hidden layers.We argue that the need to represent and propagate lexical features in each layer limits the model’s capacity for learning and representing other information relevant to the task.To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder.This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states.We show that the proposed modification yields consistent improvements on standard WMT translation tasks and reduces the amount of lexical information passed along the hidden layers.We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of proposed shortcuts on model behavior.",Equipping the transformer model with shortcuts to the embedding layer frees up model capacity for learning novel information. 
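A minimal PyTorch sketch of the gated shortcut idea in entry 309 (an assumed form for illustration, not the paper's exact parameterization): each layer re-reads the token embeddings through a learned sigmoid gate instead of devoting hidden-state capacity to carrying lexical content.

```python
# Hypothetical gated lexical shortcut: mix layer output with the raw token embeddings.
import torch
import torch.nn as nn

class GatedLexicalShortcut(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
        # hidden, embeddings: (batch, seq_len, d_model)
        g = torch.sigmoid(self.gate(torch.cat([hidden, embeddings], dim=-1)))
        return g * embeddings + (1.0 - g) * hidden  # per-dimension mix of lexical and contextual content

layer_out = torch.randn(2, 5, 512)
token_emb = torch.randn(2, 5, 512)
mixed = GatedLexicalShortcut(512)(layer_out, token_emb)  # (2, 5, 512)
```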
310,Understanding the (Un)interpretability of Natural Image Distributions Using Generative Models,"Probability density estimation is a classical and well studied problem, but standard density estimation methods have historically lacked the power to model complex and high-dimensional image distributions. More recent generative models leverage the power of neural networks to implicitly learn and represent probability models over complex images. We describe methods to extract explicit probability density estimates from GANs, and explore the properties of these image density functions. We perform sanity check experiments to provide evidence that these probabilities are reasonable. However, we also show that density functions of natural images are difficult to interpret and thus limited in use. We study reasons for this lack of interpretability, and suggest that we can get better interpretability by doing density estimation on latent representations of images. ",We examine the relationship between probability density values and image content in non-invertible GANs.The authors try to estimate the probability distribution of the image with the help of GAN and develop a proper approximation to the PDFs in the latent space. 311,Incorporating Horizontal Connections in Convolution by Spatial Shuffling,"Convolutional Neural Networks are composed of multiple convolution layers and show elegant performance in vision tasks.The design of the regular convolution is based on the Receptive Field where the information within a specific region is processed.In the view of the regular convolutions RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF.As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information.However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons.In this work, we extend the regular convolution and propose spatially shuffled convolution.In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation.We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs.","We propose spatially shuffled convolution that the regular convolution incorporates the information from outside of its receptive field.Proposes SS convulation which uses information outside of its RF, showing improved results when tested on multiple CNN models.The authors proposed a shuffle strategy for convolution layers in convolution layers in convolutional neural networks." 312,SpaMHMM: Sparse Mixture of Hidden Markov Models for Graph Connected Entities,"We propose a framework to model the distribution of sequential data coming froma set of entities connected in a graph with a known topology.The method isbased on a mixture of shared hidden Markov models, which are trainedin order to exploit the knowledge of the graph structure and in such a way that theobtained mixtures tend to be sparse.Experiments in different application domainsdemonstrate the effectiveness and versatility of the method.",A method to model the generative distribution of sequences coming from graph connected entities.The authors propose a method to model sequential data from multiple interconnected sources using a mixture of common pool of HMM's. 
313,Sample-efficient policy learning in multi-agent Reinforcement Learning via meta-learning,"To gain high rewards in multi-agent scenes, it is sometimes necessary to understand other agents and make corresponding optimal decisions.We can solve these tasks by first building models for other agents and then finding the optimal policy with these models.To get an accurate model, many observations are needed and this can be sample-inefficient.What's more, the learned model and policy can overfit to the current agents and cannot generalize if the other agents are replaced by new agents.In many practical situations, each agent we face can be considered as a sample from a population with a fixed but unknown distribution.Thus we can treat the task against some specific agents as a task sampled from a task distribution.We apply a meta-learning method to build models and learn policies.Therefore, when new agents come, we can adapt to them efficiently.Experiments on grid games show that our method can quickly get high rewards.",Our work applies meta-learning to multi-agent Reinforcement Learning to help our agent efficiently adapt to newly arriving opponents.This paper focuses on fast adaptation to new behaviour of the other agents of the environment using a method based on MAML.The paper presents an approach to multi-agent learning based on the framework of model-agnostic meta learning for the task of opponent modeling for multi-agent RL. 314,VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning,"Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning.A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment.Computing a Bayes-optimal policy is however intractable for all but the smallest tasks.In this paper, we introduce variational Bayes-Adaptive Deep RL, a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection.In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty.We also evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher return during training than existing methods.","VariBAD opens a path to tractable approximate Bayes-optimal exploration for deep RL using ideas from meta-learning, Bayesian RL, and approximate variational inference.This paper presents a new deep reinforcement learning method that can efficiently trade off exploration and exploitation by combining meta-learning, variational inference, and Bayesian RL." 
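Both entries above rely on meta-learning for fast adaptation; the sketch below shows the first-order MAML-style inner/outer loop that entry 313 builds on for opponent modelling. The quadratic "loss" is only a stand-in for the RL objective, chosen to keep the sketch self-contained; all names and constants are illustrative assumptions.

```python
# First-order MAML-style meta-training loop on a toy quadratic objective.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(5)                      # shared initialization (meta-parameters)
inner_lr, outer_lr = 0.1, 0.01

def loss_grad(params, task_center):
    return params - task_center          # gradient of 0.5 * ||params - center||^2

for _ in range(500):
    task = rng.standard_normal(5)        # a newly sampled opponent / task
    adapted = theta - inner_lr * loss_grad(theta, task)   # inner adaptation step
    theta = theta - outer_lr * loss_grad(adapted, task)   # first-order outer update

print(theta)   # stays near the mean of the task distribution (0 here)
```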
315,Better Knowledge Retention through Metric Learning,"In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories.While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes.This makes deep neural nets ill-suited to continual learning.In this paper, we propose a new model that can both leverage the expressive power of deep neural nets and be resilient to forgetting when new categories are introduced.We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net.","We show metric learning can help reduce catastrophic forgetting.This paper applies metric learning to reduce catastrophic forgetting on neural networks by improving the expressiveness of the final layer, leading to better results in continual learning." 316,NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction,"Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but automated biomedical knowledge base construction remains challenging.In this paper, we consider the problem of disease entity normalization, an essential task in constructing a biomedical knowledge base. We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document.NormCo models entity mentions using a simple semantic model which composes phrase representations from word embeddings, and treats coherence as a disease concept co-mention sequence using an RNN rather than modeling the joint probability of all concepts in a document, which requires NP-hard inference. To overcome the issue of data sparsity, we used distantly supervised data and synthetic data generated from priors derived from the BioASQ dataset. Our experimental results show that NormCo outperforms state-of-the-art baseline methods on two disease normalization corpora in terms of prediction quality and efficiency, and is at least as performant in terms of accuracy and F1 score on tagged documents.","We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document to perform disease entity normalization.Uses a GRU autoencoder to represent the ""context"" (related entities of a given disease within the span of a sentence), solving the BioNLP task with significant improvements over the best-known methods." 
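For the metric-learning idea in entry 315, a minimal sketch of the kind of embedding objective involved: a standard triplet loss that keeps same-class embeddings close and different-class embeddings separated by a margin. The margin, embedding size, and pairing scheme here are arbitrary illustrative choices, not the paper's exact setup.

```python
# Standard triplet loss over embedding vectors (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    d_pos = F.pairwise_distance(anchor, positive)   # distance to same-class example
    d_neg = F.pairwise_distance(anchor, negative)   # distance to different-class example
    return F.relu(d_pos - d_neg + margin).mean()

emb = torch.randn(16, 64, requires_grad=True)                       # anchor embeddings
loss = triplet_loss(emb, emb + 0.1 * torch.randn_like(emb), torch.randn(16, 64))
loss.backward()
```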
317,Multiplicative Interactions and Where to Find Them,"We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others.Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this often not emphasized and thus under-appreciated.We begin by showing that such layers strictly enrich the representable function classes of neural networks.We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required.We therefore argue that they should be considered in many situation where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation.Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.","We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others.Presents multiplicative interaction as a unified characterization for representing commonly used model architecture design components, showing empirical proof of superior performance on tasks like RL and sequence modeling.The paper explores different types of multiplicative interactions and finds MI models able to achieve a state-of-the-art performance on language modeling and reinforcement learning problems." 318,TFGAN: Improving Conditioning for Text-to-Video Synthesis,"Developing conditional generative models for text-to-video synthesis is an extremely challenging yet an important topic of research in machine learning.In this work, we address this problem by introducing Text-Filter conditioning Generative Adversarial Network, a GAN model with novel conditioning scheme that aids improving the text-video associations.With a combination of this conditioning scheme and a deep GAN architecture, TFGAN generates photo-realistic videos from text on very challenging real-world video datasets.In addition, we construct a benchmark synthetic dataset of moving shapes to systematically evaluate our conditioning scheme.Extensive experiments demonstrate that TFGAN significantly outperforms the existing approaches, and can also generate videos of novel categories not seen during training.","An effective text-conditioning GAN framework for generating videos from textThis paper presents a GAN-based method for video generation conditioned on text description, with a new conditioning method that generates convolution filters from the encoded text, and uses them for a convolution in the discriminator.This paper proposes conditional GAN models for text-to-video synthesis: developing text-feature-conditioned CNN filters and constructing moving-shape dataset with improved performance on video/image generation." 
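Entry 317's central object, and one way to read TFGAN's text-conditioned filters in entry 318, is a layer whose weights are generated from a conditioning input. Below is a minimal PyTorch sketch of such a multiplicative-interaction layer (my own illustration, not code from either paper); it strictly generalizes concatenating the two inputs and applying a linear layer.

```python
# Multiplicative-interaction layer: weights acting on x are generated from z.
import torch
import torch.nn as nn

class MultiplicativeInteraction(nn.Module):
    def __init__(self, x_dim: int, z_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(z_dim, out_dim, x_dim) * 0.02)  # 3D interaction tensor
        self.U = nn.Linear(z_dim, out_dim)   # bias path generated from z
        self.V = nn.Linear(x_dim, out_dim)   # ordinary linear path in x

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        W_z = torch.einsum('bz,zox->box', z, self.W)      # per-example weights conditioned on z
        return torch.einsum('box,bx->bo', W_z, x) + self.U(z) + self.V(x)

out = MultiplicativeInteraction(64, 16, 32)(torch.randn(8, 64), torch.randn(8, 16))  # (8, 32)
```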
319,Split LBI for Deep Learning: Structural Sparsity via Differential Inclusion Paths,"Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.However, compressive networks are desired in many real world applications and direct training of small networks may be trapped in local optima.In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on , that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity.It has a simple discretization, called the Split Linearized Bregman Iteration, whose global convergence analysis in deep learning is established that from any initializations, algorithmic iterations converge to a critical point of empirical risks.Experimental evidence shows that SplitLBI may achieve state-of-the-art performance in large scale training on ImageNet-2012 dataset etc., while with it unveils effective subnet architecture with comparable test accuracies to dense models after retraining instead of pruning well-trained ones.","SplitLBI is applied to deep learning to explore model structural sparsity, achieving state-of-the-art performance in ImageNet-2012 and unveiling effective subnet architecture.Proposes an optimization based algorithm for finding important sparse structures of large-scale neural networks by coupling the learning of weight matrix and sparsity constraints, offering guaranteed convergence on nonconvex optimization problems." 320,Sparse Coding with Gated Learned ISTA,"In this paper, we study the learned iterative shrinkage thresholding algorithm for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimations may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced.Specific design of the gates is inspired by convergence analyses of the mechanism and hence its effectiveness can be formally guaranteed.In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA.Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method.","We propose gated mechanisms to enhance learned ISTA for sparse coding, with theoretical guarantees on the superiority of the method. Proposes extensions to LISTA which address underestimation by introducing ""gain gates"" and including momentum with ""overshoot gates"", showing improved convergence rates.This paper is focused on solving sparse coding problems using LISTA-type networks by proposing a ""gain gating function"" to mitigate the weakness of the ""no false positive"" assumption." 
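Entry 320 modifies the learned ISTA iteration; as background, here is a plain numpy ISTA loop with an optional multiplicative gain applied to the code estimate. The gain here is only a rough stand-in for the paper's learned gain gates, and the constants are assumptions for exposition.

```python
# ISTA for sparse coding, with an optional gain to illustrate counteracting code underestimation.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_with_gain(A, y, lam=0.1, n_iter=100, gain=1.0):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
        x = gain * x                         # gain > 1 would compensate shrinkage-induced underestimation
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
print(np.linalg.norm(ista_with_gain(A, y) - x_true))   # recovery error of the sparse code
```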
321,I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively,"The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training.On the other hand, the trained classifiers have traditionally been evaluated on a handful of test images, which are deemed to be extremely sparsely distributed in the space of all natural images.It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations.In addition, studies on adversarial learning show that it is effortless to construct adversarial examples that fool nearly all image classifiers, adding more complications to relative performance comparison of existing models.This work presents an efficient framework for comparing image classifiers, which we name the MAximum Discrepancy competition.Rather than comparing image classifiers on fixed test sets, we adaptively sample a test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy.Human labeling on the resulting small and model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights on potential ways to improve them.We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition.","We present an efficient and adaptive framework for comparing image classifiers to maximize the discrepancies between the classifiers, in place of comparing on fixed test sets.Error spotting mechanism which compares image classifiers by sampling their ""most disagreed"" test set, measuring disagreement through a semantics-aware distance derived form WordNet ontology." 322,RANDOM MASK: Towards Robust Convolutional Neural Networks,"Robustness of neural networks has recently been highlighted by the adversarial examples, i.e., inputs added with well-designed perturbations which are imperceptible to humans but can cause the network to give incorrect outputs.In this paper, we design a new CNN architecture that by itself has good robustness.We introduce a simple but powerful technique, Random Mask, to modify existing CNN structures.We show that CNN with Random Mask achieves state-of-the-art performance against black-box adversarial attacks without applying any adversarial training.We next investigate the adversarial examples which “fool” a CNN with Random Mask.Surprisingly, we find that these adversarial examples often “fool” humans as well.This raises fundamental questions on how to define adversarial examples and robustness properly.","We propose a technique that modifies CNN structures to enhance robustness while keeping high test accuracy, and raise doubt on whether current definition of adversarial examples is appropriate by generating adversarial examples able to fool humans.This paper proposes a simple technique for improving the robustness of neural networks against black-box attacks.The authors propose a simple method for increasing the robustness of convolutional neural networks against adversarial examples, with surprisingly good results." 
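Entry 322's Random Mask can be pictured as a binary mask over feature-map locations that is sampled once and then kept fixed for both training and testing; a minimal PyTorch sketch of that reading (an assumption for illustration, not the authors' implementation):

```python
# Convolution followed by a fixed, randomly sampled binary mask over output locations.
import torch
import torch.nn as nn

class RandomMaskedConv(nn.Module):
    def __init__(self, in_ch, out_ch, spatial_size, drop_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        mask = (torch.rand(1, out_ch, spatial_size, spatial_size) > drop_ratio).float()
        self.register_buffer("mask", mask)   # fixed for the lifetime of the model

    def forward(self, x):
        return self.conv(x) * self.mask      # masked locations are always zero

y = RandomMaskedConv(3, 16, spatial_size=32)(torch.randn(4, 3, 32, 32))  # (4, 16, 32, 32)
```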
323,Unifying semi-supervised and robust learning by mixup,"Supervised deep learning methods require cleanly labeled large-scale datasets, but collecting such data is difficult and sometimes impossible.There exist two popular frameworks to alleviate this problem: semi-supervised learning and robust learning to label noise.Although these frameworks relax the restriction of supervised learning, they are studied independently.Hence, the training scheme that is suitable when only small cleanly-labeled data are available remains unknown.In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt.Under this framework, we compare recent algorithms for semi-supervised and robust learning.The results suggest that semi-supervised learning outperforms robust learning with noisy labels.We also propose a training strategy for mixing mixup techniques to learn from such bi-quality data effectively.",We propose to compare semi-supervised learning and robust learning to noisy labels under a shared setting.The authors propose a strategy based on mixup for training a model in a formal setting that includes the semi-supervised and the robust learning tasks as special cases. 324,Effect of top-down connections in Hierarchical Sparse Coding,"Hierarchical Sparse Coding is a powerful model to efficiently represent multi-dimensional, structured data such as images.The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems.However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding theory, which adds top-down connections between consecutive layers.In this study, a new model called Sparse Deep Predictive Coding is introduced to assess the impact of this inter-layer feedback connection.In particular, the SDPC is compared with a Hierarchical Lasso network made out of a sequence of Lasso layers.A 2-layered SDPC and a Hi-La network are trained on 3 different databases and with different sparsity parameters on each layer.First, we show that the overall prediction error generated by SDPC is lower thanks to the feedback mechanism as it transfers prediction error between layers.Second, we demonstrate that the inference stage of the SDPC is faster to converge than for the Hi-La model.Third, we show that the SDPC also accelerates the learning process.Finally, the qualitative analysis of both models' dictionaries, supported by their activation probability, shows that the SDPC features are more generic and informative.","This paper experimentally demonstrates the beneficial effect of top-down connections in the Hierarchical Sparse Coding algorithm.This paper presents a study that compares techniques for Hierarchical Sparse Coding, showing that the top-down term is beneficial in reducing predictive error and can learn faster." 325,Why do These Match? Explaining the Behavior of Image Similarity Models,"Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings.Recent work has primarily focused on explaining models for tasks like image classification or visual question answering.In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. 
"", In this task, an explanation depends on both of the input images, so standard methods do not apply.We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition.Our approachs ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.",A black box approach for explaining the predictions of an image similarity model.Introduces method for image similarity model explanation which identifies attributes that contribute positively to the similarity score and pairs them with a generated saliency map.The paper proposes an explanation mechanism that pairs the typical saliency map regions together with attributes for similarity matching deep neural networks. 326,On Meaning-Preserving Adversarial Perturbations for Sequence-to-Sequence Models,"Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence models, by applying perturbations to the input of a model leading to large degradation in performance.However, these perturbations are only indicative of a weakness in the model if they do not change the semantics of the input in a way that would change the expected output.Using the example of machine translation, we propose a new evaluation framework for adversarial attacks on seq2seq models taking meaning preservation into account and demonstrate that existing methods may not preserve meaning in general.Based on these findings, we propose new constraints for attacks on word-based MT systems and show, via human and automatic evaluation, that they produce more semantically similar adversarial inputs.Furthermore, we show that performing adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness without hurting test performance.","How you should evaluate adversarial attacks on seq2seqThe authors investigate ways of generating adversarial examples, showing that adversarial training with the attack most consistent with the introduced meaning-preservation criteria results in improved robustness to this type of attack without degradation in the non-adversarial setting.The paper is about meaning-preserving adversarial perturbations in the context of Seq2Seq models" 327,Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations,"We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a fixed length vector.In particular, this distribution is supported over the contexts which co-occur with the entity and are embedded in a suitable low-dimensional space.This enables us to consider the problem of representation learning with a perspective from Optimal Transport and take advantage of its numerous tools such as Wasserstein distance and Wasserstein barycenters.We elaborate how the method can be applied for obtaining unsupervised representations of text and illustrate the performance quantitatively as well as qualitatively on tasks such as measuring sentence similarity and word entailment, where we empirically observe significant gains.The key benefits of the proposed approach include: capturing uncertainty and polysemy via 
modeling the entities as distributions, utilizing the underlying geometry of the particular task, simultaneously providing interpretability with the notion of optimal transport between contexts and easy applicability on top of existing point embedding methods.In essence, the framework can be useful for any unsupervised or supervised problem; and only requires a co-occurrence structure inherent to many problems.The code, as well as pre-built histograms, are available under https://github.com/context-mover.","Represent each entity as a probability distribution over contexts embedded in a ground space.Proposes to construct word embeddings from a histogram over context words, instead of as point vectors, which allows for measuring distances between two words in terms of optimal transport between the histograms through a method that augments representation of an entity from standard ""point in a vector space"" to a histogram with bins located at some points in that vector space. " 328,Adversarial Examples Are a Natural Consequence of Test Error in Noise," Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input.At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise.In this work, we show that these are two manifestations of the same underlying phenomenon.We establish this connection in several ways.First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images.Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions.Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance and show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error.All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions.This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions.","Small adversarial perturbations should be expected given observed error rates of models outside the natural data distribution.This paper proposes an alternative view for adversarial examples in high dimension spaces by considering the ""error rate"" in a Gaussian distribution centered at each test point." 
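One of the concrete interventions discussed in entry 328 is Gaussian data augmentation, i.e., training on randomly corrupted copies of each image, which the paper links to robustness against small adversarial perturbations. A minimal sketch of what that amounts to in training code (the noise scale and the value range are illustrative assumptions):

```python
# Gaussian-noise data augmentation for images assumed to lie in [0, 1].
import torch

def gaussian_augment(images: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    return torch.clamp(images + sigma * torch.randn_like(images), 0.0, 1.0)  # keep pixels valid

noisy = gaussian_augment(torch.rand(8, 3, 32, 32))
```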
329,Well-Read Students Learn Better: On the Importance of Pre-training Compact Models,"Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed.However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked.In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work.Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation.The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements.Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data.One surprising observation is that they have a compound effect even when sequentially applied on the same data.To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.","Studies how self-supervised learning and knowledge distillation interact in the context of building compact models.Investigates training compact pre-trained language models via distillation and shows that using a teacher for distilling a compact student model performs better than directly pre-training the model.This submission shows that pre-training a student directly on masked language modeling is better than distillation, and the best is to combine both and distill from that pre-trained student model." 330,Universal Deep Neural Network Compression,"In this paper, we investigate lossy compression of deep neural networks by weight quantization and lossless source coding for memory-efficient deployment.Whereas the previous work addressed non-universal scalar quantization and entropy coding of DNN weights, we for the first time introduce universal DNN compression by universal vector quantization and universal source coding.In particular, we examine universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution.Moreover, we present a method of fine-tuning vector quantized DNNs to recover the performance loss after quantization.Our experimental results show that the proposed universal DNN compression scheme compresses the 32-layer ResNet and the AlexNet with compression ratios of and, respectively.","We introduce the universal deep neural network compression scheme, which is applicable universally for compression of any models and can perform near-optimally regardless of their weight distribution.Introduces a pipeline for network compression that is similar to deep compression and uses randomized lattice quantization instead of the classical vector quantization, and uses universal source coding (bzip2) instead of Huffman coding." 
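The randomized (dithered) quantization at the core of entry 330 can be illustrated in a few lines of numpy: a uniform dither shared between encoder and decoder is added before rounding to the lattice and subtracted after reconstruction, making the quantization error uniform and independent of the source distribution. The scalar, one-dimensional lattice below is a simplification of the paper's vector lattices.

```python
# Uniform randomized (dithered) scalar quantization of weights.
import numpy as np

def dithered_quantize(w, step, rng):
    u = rng.uniform(-step / 2, step / 2, size=w.shape)   # dither shared with the decoder
    q = step * np.round((w + u) / step)                  # quantize to the dithered lattice
    return q - u                                         # subtract the dither at reconstruction

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000)
recon = dithered_quantize(weights, step=0.05, rng=rng)
print(np.abs(recon - weights).max())                     # error is bounded by step / 2
```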
331,Preliminary theoretical troubleshooting in Variational Autoencoder,"What would be learned by a variational autoencoder, and what influences the disentanglement of a VAE?This paper tries to preliminarily address the VAE's intrinsic dimension, real factor, disentanglement and indicator issues theoretically in the idealistic situation, and the implementation issue practically through a noise modeling perspective in the realistic case. On the intrinsic dimension issue, due to information conservation, the idealistic VAE learns, and only learns, the intrinsic factor dimension.Besides, as suggested by the mutual information separation property, the constraint induced by the Gaussian prior in the VAE objective encourages information sparsity in dimension.On the disentanglement issue, subsequently, inspired by the information conservation theorem, a clarification on disentanglement is made in this paper.On the real factor issue, due to factor equivalence, the idealistic VAE possibly learns any factor set in the equivalence class. On the indicator issue, the behavior of the current disentanglement metric is discussed, and several performance indicators regarding disentanglement and generating influence are subsequently raised to evaluate the performance of the VAE model and to supervise the used factors.On the implementation issue, the experiments under noise modeling and constraints empirically support the theoretical analysis and also show their own characteristics in pursuing disentanglement.",This paper tries to preliminarily address disentanglement theoretically in the idealistic situation and practically through a noise modelling perspective in the realistic case.Studies the importance of noise modelling in the Gaussian VAE and proposes to train the noise in an Empirical-Bayes-like fashion.Modifying how noise factors are treated when developing VAE models 332,Three Mechanisms of Weight Decay Regularization,"Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of L2 regularization.Literal weight decay has been shown to outperform L2 regularization for optimizers for which they differ.We empirically investigate weight decay for three optimization algorithms and a variety of network architectures.We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: increasing the effective learning rate, approximately regularizing the input-output Jacobian norm, and reducing the effective damping coefficient for second-order optimization.Our results provide insight into how to improve the regularization of neural networks.",We investigate weight decay regularization for different optimizers and identify three distinct mechanisms by which weight decay improves generalization.Discusses the effect of weight decay on the training of deep network models with and without batch normalization and when using first/second order optimization methods and hypothesizes that a larger learning rate has a regularization effect. 
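A one-step numerical sketch of why literal weight decay and L2 regularization can differ, the setting of entry 332: with plain SGD the two updates coincide, but once gradients pass through a per-parameter preconditioner, as in Adam-style optimizers, the L2 penalty is rescaled while decoupled weight decay is not. The numbers below are arbitrary and only illustrate the algebra.

```python
# One preconditioned update under L2 regularization vs. decoupled weight decay.
import numpy as np

w = np.array([1.0, 1.0])
grad = np.array([0.5, 0.5])
precond = np.array([10.0, 0.1])   # stand-in for Adam's per-parameter scaling
lr, wd = 0.1, 0.01

l2_update    = w - lr * (grad + wd * w) / precond      # penalty passes through the preconditioner
decay_update = w - lr * grad / precond - lr * wd * w   # decay applied directly to the weights

print(l2_update, decay_update)   # the two differ whenever precond != 1
```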
333,Variational lower bounds on mutual information based on nonextensive statistical mechanics,"This paper aims to address the limitations of mutual information estimators based on variational optimization.By redefining the cost using generalized functions from nonextensive statistical mechanics we raise the upper bound of previous estimators and enable the control of the bias variance trade off.Variational based estimators outperform previous methods especially in high dependence high dimensional scenarios found in machine learning setups.Despite their performance, these estimators either exhibit a high variance or are upper bounded by log.Our approach inspired by nonextensive statistical mechanics uses different generalizations for the logarithm and the exponential in the partition function.This enables the estimator to capture changes in mutual information over a wider range of dimensions and correlations of the input variables whereas previous estimators saturate them.","Mutual information estimator based nonextensive statistical mechanicsThis paper tries to establish novel variational lower bounds for mutual information by introducing parameter q and defining q-algebra, showing that the lower bounds have smaller variance and achieves high values." 334,SGD Learns One-Layer Networks in WGANs,"Generative adversarial networks are a widely used framework for learning generative models.Wasserstein GANs, one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent.In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity.",We show that stochastic gradient descent ascent converges to a global optimum for WGAN with one-layer generator network.Attempts to prove that the Stochastic Gradient Decent-Ascent could converge to a global solution for the min-max problem of WGAN. 
335,"Universality, Robustness, and Detectability of Adversarial Perturbations under Adversarial Training","Classifiers such as deep neural networks have been shown to be vulnerable against adversarial perturbations on problems with high-dimensional input space.While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs.We argue that there are two different kinds of adversarial perturbations: shared perturbations which fool a classifier on many inputs and singular perturbations which only fool the classifier on a small fraction of the data.We find that adversarial training increases the robustness of classifiers against shared perturbations.Moreover, it is particularly effective in removing universal perturbations, which can be seen as an extreme form of shared perturbations.Unfortunately, adversarial training does not consistently increase the robustness against singular perturbations on unseen inputs.However, we find that adversarial training decreases robustness of the remaining perturbations against image transformations such as changes to contrast and brightness or Gaussian blurring.It thus makes successful attacks on the classifier in the physical world less likely.Finally, we show that even singular perturbations can be easily detected and must thus exhibit generalizable patterns even though the perturbations are specific for certain inputs.","We empirically show that adversarial training is effective for removing universal perturbations, makes adversarial examples less robust to image transformations, and leaves them detectable for a detection approach.Analyses adversarial training and its effect on universal adversarial examples as well as standard (basic iteration) adversarial examples and how adversarial training affects detection. The authors show that adversarial training is effective in protecting against ""shared"" adversarial perturbation, in particular against universal perturbation, but less effective to protect against singular perturbations." 
336,Once for All: Train One Network and Specialize it for Efficient Deployment,"We address the challenging problem of efficient deep learning model deployment, where the goal is to design neural network architectures that can fit different hardware platform constraints.Most of the traditional approaches either manually design or use Neural Architecture Search to find a specialized neural network and train it from scratch for each case, which is computationally expensive and unscalable.Our key idea is to decouple model training from architecture search to save the cost.To this end, we propose to train a once-for-all network that supports diverse architectural settings.Given a deployment scenario, we can then quickly get a specialized sub-network by selecting from the OFA network without additional training.To prevent interference between many sub-networks during training, we also propose a novel progressive shrinking algorithm, which can train a surprisingly large number of sub-networks simultaneously.Extensive experiments on various hardware platforms show that OFA consistently outperforms SOTA NAS methods while reducing orders of magnitude GPU hours and emission.In particular, OFA achieves a new SOTA 80.0% ImageNet top1 accuracy under the mobile setting.Code and pre-trained models are released at https://github.com/mit-han-lab/once-for-all.","We introduce techniques to train a single once-for-all network that fits many hardware platforms.Method results in a network from which one can extract sub-networks for various resouce constraints (latency, memory) which perform well without a need for retraining.This paper tries to tackle the problem of searching best architectures for specialized resource constraint deployment scenarios with a prediction based NAS method." 337,Boosting Generative Models by Leveraging Cascaded Meta-Models,"A deep generative model is a powerful method of learning a data distribution, which has achieved tremendous success in numerous scenarios.However, it is nontrivial for a single generative model to faithfully capture the distributions of the complex data such as images with complicate structures.In this paper, we propose a novel approach of cascaded boosting for boosting generative models, where meta-models are cascaded together to produce a stronger model.Any hidden variable meta-model can be leveraged as long as it can support the likelihood evaluation.We derive a decomposable variational lower bound of the boosted model, which allows each meta-model to be trained separately and greedily.We can further improve the learning power of the generative models by combing our cascaded boosting framework with the multiplicative boosting framework.",Propose an approach for boosting generative models by cascading hidden variable modelsThis paper proposed a novel approach of cascaded boosting for boosting generative models which allows each each meta-model to be trained separately and greedily. 338,What do you learn from context? 
Probing for sentence structure in contextualized word representations,"Contextualized representation models such as ELMo and BERT have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks.Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline.We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena.We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.","We probe for sentence structure in ELMo and related contextual embedding models. We find existing models efficiently encode syntax and show evidence of long-range dependencies, but only offer small improvements on semantic tasks.Proposes the ""edge probing"" method and focuses on the relationship between spans rather than individual words, enabling the authors to look at syntactic constituency, dependencies, entity labels, and semantic role labeling.Provides new insights on what is captured contextualized word embeddings by compiling a set of “edge probing” tasks. " 339,Discriminative Particle Filter Reinforcement Learning for Complex Partial observations,"Deep reinforcement learning has succeeded in sophisticated games such as Atari, Go, etc.Real-world decision making, however, often requires reasoning with partial information extracted from complex visual observations.This paper presents Discriminative Particle Filter Reinforcement Learning, a new reinforcement learning framework for partial and complex observations.DPFRL encodes a differentiable particle filter with learned transition and observation models in a neural network, which allows for reasoning with partial observations over multiple time steps.While a standard particle filter relies on a generative observation model, DPFRL learns a discriminatively parameterized model that is training directly for decision making.We show that the discriminative parameterization results in significantly improved performance, especially for tasks with complex visual observations, because it circumvents the difficulty of modelling observations explicitly.In most cases, DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark that we introduce.We further show that DPFRL performs well for visual navigation with real-world data.","We introduce DPFRL, a framework for reinforcement learning under partial and complex observations with a fully differentiable discriminative particle filterIntroduces ideas for training DLR agents with latent state variables, modeled as a belief distribution, so they can handle partially observed environments.This paper introduces a principled method for POMDP RL: Discriminative Particle Filter Reinforcement Learning that allows for reasoning with partial observations over multiple time steps, achieving state-of-the-art on benchmarks." 
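Record 339's DPFRL keeps a belief as a set of weighted particles updated by a learned transition model and a discriminative observation score. A minimal belief-update sketch is below; `transition` and `obs_score` are hypothetical learned modules, and the soft resampling step used in differentiable particle filters is omitted.

```python
import torch

def particle_belief_update(particles, log_weights, obs_feat, transition, obs_score):
    """One step of a discriminative particle filter belief update.

    particles: (K, state_dim) latent particles; log_weights: (K,) log weights.
    obs_score returns a log-score for each particle given observation features,
    so no generative observation model is needed.
    """
    particles = transition(particles)                          # propagate particles
    log_weights = log_weights + obs_score(particles, obs_feat) # discriminative reweighting
    log_weights = log_weights - torch.logsumexp(log_weights, dim=0)  # normalize
    return particles, log_weights
```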
340,Revisiting Auxiliary Latent Variables in Generative Models,"Extending models with auxiliary latent variables is a well-known technique to increase model expressivity.Bachman & Precup; Naesseth et al.; Cremer et al.; Domke & Sheldon show that Importance Weighted Autoencoders can be viewed as extending the variational family with auxiliary latent variables.Similarly, we show that this view encompasses many of the recent developments in variational bounds.The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model.We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm, while being substantially easier to implement.Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation and Contrastive Predictive Coding.","Monte Carlo Objectives are analyzed using auxiliary variable variational inference, yielding a new analysis of CPC and NCE as well as a new generative model.Proposes a different view on improving variational bounds with auxiliary latent variable models and explores the use of those models in the generative model." 341,LSH-SAMPLING BREAKS THE COMPUTATIONAL CHICKEN-AND-EGG LOOP IN ADAPTIVE STOCHASTIC GRADIENT ESTIMATION,"Stochastic Gradient Descent or SGD is the most popular optimization algorithm for large-scale problems.SGD estimates the gradient by uniform sampling with sample size one.There have been several other works that suggest faster epoch-wise convergence by using weighted non-uniform sampling for better gradient estimates.Unfortunately, the per-iteration cost of maintaining this adaptive distribution for gradient estimation is more than calculating the full gradient.As a result, the false impression of faster convergence in iterations leads to slower convergence in time, which we call a chicken-and-egg loop.In this paper, we break this barrier by providing the first demonstration of a sampling scheme, which leads to superior gradient estimation, while keeping the sampling cost per iteration similar to that of the uniform sampling.Such an algorithm is possible due to the sampling view of Locality Sensitive Hashing, which came to light recently.As a consequence of superior and fast estimation, we reduce the running time of all existing gradient descent algorithms.We demonstrate the benefits of our proposal on both SGD and AdaGrad.",We improve the running time of all existing gradient descent algorithms.Authors propose sampling stochastic gradients from a monotonic function proportional to gradient magnitudes by using LSH. Considers SGD over an objective of the form of a sum over examples of a quadratic loss. 
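Record 341 replaces uniform SGD sampling with weighted sampling whose per-iteration cost stays low thanks to LSH. The core of any such scheme is the unbiased reweighting below; the LSH machinery that makes the per-example scores cheap to maintain is not shown, and the scores array is an assumed input.

```python
import numpy as np

def weighted_sgd_sample(scores, rng):
    """Sample one example index with probability proportional to `scores`
    (e.g. estimated per-example gradient magnitudes) and return the
    importance weight that keeps the resulting gradient estimate unbiased.
    """
    p = scores / scores.sum()
    i = rng.choice(len(scores), p=p)
    weight = 1.0 / (len(scores) * p[i])  # divide the sampled gradient by n * p_i
    return i, weight

# Usage sketch:
# i, w = weighted_sgd_sample(score_estimates, np.random.default_rng(0))
# params -= lr * w * per_example_grad(params, x[i], y[i])
```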
342,"Convolutionary, Evolutionary, Revolutionary: What’s next for Bodies, Brains and AI?","In recent years we have made significant progress identifying computational principles that underlie neural function.While not yet complete, we have sufficient evidence that a synthesis of these ideas could result in an understanding of how neural computation emerges from a combination of innate dynamics and plasticity, and which could potentially be used to construct new AI technologies with unique capabilities.I discuss the relevant principles, the advantages they have for computation, and how they can benefit AI.Limitations of current AI are generally recognized, but fewer people are aware that we understand enough about the brain to immediately offer novel AI formulations.","Limitations of current AI are generally recognized, but fewer people are aware that we understand enough about the brain to immediately offer novel AI formulations." 343,Probing Emergent Semantics in Predictive Agents via Question Answering,"Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling – action-conditional CPC and SimCore.After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic questions, without backpropagating gradients from the question-answering decoder into the agent.The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment.Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach.By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives.","We use question-answering to evaluate how much knowledge about the environment can agents learn by self-supervised prediction.Proposes QA as a tool to investigate what agents learn about in the world, arguing this as an intuitive method for humans which allows for arbitrary complexity.The authors propose a framework to assess representations built by predictive models that contain sufficient information to answer questions about the environment they are trained on, showing those by SimCore contained sufficient information for the LSTM to answer questions accurately." 
344,Imbalanced Classification via Adversarial Minority Over-sampling,"In most real-world scenarios, training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion.In this paper, we explore a novel yet simple way to alleviate this issue via synthesizing less-frequent classes with adversarial examples of other classes.Surprisingly, we found this counter-intuitive method can effectively learn generalizable features of minority classes by transferring and leveraging the diversity of the majority information.Our experimental results on various types of class-imbalanced datasets in image classification and natural language processing show that the proposed method not only improves the generalization of minority classes significantly compared to other re-sampling or re-weighting methods, but also surpasses other methods of state-of-art level for the class-imbalanced classification.","We develop a new method for imbalanced classification using adversarial examplesProposes a new optimization objective that generates synthetic samples by over-sampling the majority classes instead of minority classes, solving the problem of overfitting minority classes.The authors propose to tackle imbalance classification using re-sampling methods, showing that adversarial examples in the minority class would help to train a new model that generalizes better." 345,Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks,"Active matter consists of active agents which transform energy extracted from surroundings into momentum, producing a variety of collective phenomena.A model, synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase.Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules creating motile topological defects and turbulent fluid flows.Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified.Measuring defects dynamics can yield fundamental insights into active nematics, a class of materials that include bacterial films and animal cells.Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate.It is expected to significantly increase the amount of video data that can be processed.",An interesting application of CNN in soft condensed matter physics experiments.The authors demonstrate that a deep learning approach offers improvement to both the identification accuracy and rate at which defects can be identified of nematic liquid crystals.Apply a well known neural model (YOLO) to detect bounding boxes of objects in images. 346,Locality and Compositionality in Zero-Shot Learning,"In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning.In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets is performed.The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. 
how well the learned representations can be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.","An analysis of the effects of compositionality and locality on representation learning for zero-shot learning.Proposes evaluation framework for ZSL where the model is not allowed to be pretrained and instead, model parameters are randomly initialized for better understanding of what's happening in ZSL." 347,Intriguing Properties of Adversarial Examples,"It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples.In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear.Here we show that distributions of logit differences have a universal functional form.This functional form is independent of architecture, dataset, and training protocol; nor does it change during training.This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation.We show that this universality holds for a broad range of datasets, models, and attacks.Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness.Finally, we study the effect of network architectures on adversarial sensitivity.To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10.Our resulting architecture is more robust to white and black box attacks compared to previous attempts.","Adversarial error has similar power-law form for all datasets and models studied, and architecture matters." 348,Learning Good Policies By Learning Good Perceptual Models," Reinforcement learning has led to increasingly complex looking behavior in recent years.However, such complexity can be misleading and hides over-fitting.We find that visual representations may be a useful metric of complexity, and both correlate well with objective optimization and causally affect reward optimization.We then propose curious representation learning which allows us to use better visual representation learning algorithms to correspondingly increase visual representation quality in the policy through an intrinsic objective, on both simulated environments and transfer to real images.Finally, we show that the better visual representations induced by CRL allow us to obtain better performance on Atari, without any reward, than other curiosity objectives.","We present a formulation of curiosity as a visual representation learning problem and show that it allows good visual representations in agents.This paper formulates curiosity based RL training as learning a visual representation model, arguing that focusing on better LR and maximising model loss for novel scenes will get better overall performance." 
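Record 347 studies the distribution of logit differences and ties it to how adversarial error scales with perturbation size. Computing that quantity is straightforward; the snippet below is a sketch in which `logits` is assumed to be an (n_examples, n_classes) array produced by the user's classifier.

```python
import numpy as np

def logit_differences(logits):
    """Difference between the top logit and the runner-up for each input.

    Small margins mean a small perturbation suffices to flip the prediction,
    which is what links this distribution to the scaling of adversarial error
    with the perturbation size.
    """
    sorted_logits = np.sort(logits, axis=1)
    return sorted_logits[:, -1] - sorted_logits[:, -2]

# Usage sketch: histogram logit_differences(model(x_val)) and compare its
# shape across architectures, datasets, and training stages.
```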
349,3D-SIC: 3D Semantic Instance Completion for RGB-D Scans,"This paper introduces the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we aim to detect the individual object instances comprising the scene and infer their complete object geometry.This enables a semantically meaningful decomposition of a scanned scene into individual, complete 3D objects, including hidden and unobserved object parts.This will open up new possibilities for interactions with objects in a scene, for instance for virtual or robotic agents.To address this task, we propose 3D-SIC, a new data-driven approach that jointly detects object instances and predicts their completed geometry.The core idea of 3D-SIC is a novel end-to-end 3D neural network architecture that leverages joint color and geometry feature learning.The fully-convolutional nature of our 3D network enables efficient inference of semantic instance completion for 3D scans at the scale of large indoor environments in a single forward pass.In a series of evaluations on both real and synthetic scan benchmark data, we outperform state-of-the-art approaches by over 15 in mAP@0.5 on ScanNet, and over 18 in mAP@0.5 on SUNCG.","From an incomplete RGB-D scan of a scene, we aim to detect the individual object instances comprising the scene and infer their complete object geometry.Proposes an end-to-end 3D CNN structure which combines color features and 3D features to predict the missing 3D structure of a scene from RGB-D scans.The authors propose a novel end-to-end 3D convolutional network which predicts 3D semantic instance completion as object bounding boxes, class labels and complete object geometry." 350,XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings,"Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter.Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains.We introduce XGAN, a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. 
We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space.We report promising qualitative results for the task of face-to-cartoon translation.The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.","XGAN is an unsupervised model for feature-level image-to-image translation applied to semantic style transfer problems such as the face-to-cartoon task, for which we introduce a new dataset.This paper proposes a new GAN-based model for unpaired image-to-image translation similar to DTN" 351,signSGD with Majority Vote is Communication Efficient and Fault Tolerant,"Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines.As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable.The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults.We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD.Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote.This algorithm uses 32x less communication per iteration than full-precision, distributed SGD.Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct.Aggregating sign gradients by majority vote means that no individual worker has too much power.We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially.The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate.On the practical side, we built our distributed training system in Pytorch.Benchmarking against the state of the art collective communications library, our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines.","Workers send gradient signs to the server, and the update is decided by majority vote. We show that this algorithm is convergent, communication efficient and fault tolerant, both in theory and in practice.Presents a distributed implementation of signSGD with majority vote as aggregation." 
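Record 351's signSGD with majority vote is simple enough to state directly: workers transmit only gradient signs and the server updates in the direction of the elementwise majority. The sketch below assumes gradients arrive as plain NumPy arrays and omits the distributed communication layer.

```python
import numpy as np

def sign_sgd_majority_vote(param, worker_grads, lr):
    """One parameter update of signSGD with majority vote.

    Each worker sends only the elementwise sign of its gradient (1 bit per
    coordinate); the server sums the signs and the majority decides the
    direction of the update, so no single worker can dominate.
    """
    votes = np.sum([np.sign(g) for g in worker_grads], axis=0)
    return param - lr * np.sign(votes)

# Usage sketch with three hypothetical workers:
# new_param = sign_sgd_majority_vote(param, [g1, g2, g3], lr=1e-3)
```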
352,Correcting Nuisance Variation using Wasserstein Distance,"Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells.One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified.The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images.An important known issue for such methods is separating relevant biological signal from nuisance variation.For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases.In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information.We develop a general framework for adjusting the image embeddings in order to ""forget"" domain-specific information while preserving relevant biological information.To do this, we minimize a loss function based on distances between marginal distributions of embeddings across domains for each replicated treatment.For the dataset presented, the replicated treatment is the negative control.We find that for our transformed embeddings the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal, and less domain-specific information is present.","We correct nuisance variation for image embeddings across different domains, preserving only relevant information.Discusses a method for adjusting image embeddings in order to tease apart technical variation from biological signal.The authors present a method to remove domain-specific information while preserving the relevant biological information by training a network that minimizes the Wasserstein distance between distributions." 353,Combination of Supervised and Reinforcement Learning For Vision-Based Autonomous Control," Reinforcement learning methods have recently achieved impressive results on a wide range of control problems.However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution.This limitation largely prohibits their usage for complex input spaces such as video signals, and it is still impossible to use them for a number of complex problems in real-world environments, including many of those for video based control.Supervised learning, on the contrary, is capable of learning on a relatively small number of samples, however it does not take into account reward-based control policies and is not capable of providing independent control policies. 
In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy based control in real world environments.We use the SpeedDreams/TORCS video game to demonstrate that our approach requires far fewer samples compared to state-of-the-art reinforcement learning techniques on similar data, and at the same time outperforms both supervised and reinforcement learning approaches in terms of quality.Additionally, we demonstrate the applicability of the method to MuJoCo control problems.","The new combination of reinforcement and supervised learning, dramatically decreasing the number of required samples for training on video.This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy" 354,FAST LEARNING VIA EPISODIC MEMORY: A PERSPECTIVE FROM ANIMAL DECISION-MAKING,"A typical experiment to study cognitive function is to train animals to perform tasks, while the researcher records the electrical activity of the animals' neurons.The main obstacle faced, when using this type of electrophysiological experiment to uncover the circuit mechanisms underlying complex behaviors, is our incomplete access to relevant circuits in the brain.One promising approach is to model neural circuits using an artificial neural network, which can provide complete access to the “neural circuits” responsible for a behavior.More recently, reinforcement learning models have been adopted to understand the functions of cortico-basal ganglia circuits as reward-based learning has been found in the mammalian brain.In this paper, we propose a Biologically-plausible Actor-Critic with Episodic Memory framework to model a prefrontal cortex-basal ganglia-hippocampus circuit, which is verified to capture the behavioral findings from a well-known perceptual decision-making task, i.e., random dots motion discrimination.This B-ACEM framework links neural computation to behaviors, on which we can explore how episodic memory should be considered to govern future decisions.Experiments are conducted using different settings of the episodic memory and results show that all patterns of episodic memories can speed up learning.In particular, salient events are prioritized to propagate reward information and guide decisions.Our B-ACEM framework and the built-on experiments give inspiration to both the design of more standard decision-making models in biological systems and more biologically-plausible ANNs.",Fast learning via episodic memory verified by a biologically plausible framework for prefrontal cortex-basal ganglia-hippocampus (PFC-BG) circuit 355,Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem,"Understanding the representational power of Deep Neural Networks and how their structural properties affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory.In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error.Even though Telgarsky’s work reveals the limitations of shallow neural networks, it doesn’t inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths.In this work, we point to a 
new connection between DNN expressivity and Sharkovsky’s Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points.Motivated by our observation that the triangle waves used in Telgarsky’s work contain points of period 3 – a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke – we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth.Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.","In this work, we point to a new connection between DNN expressivity and Sharkovsky’s Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks.Shows how the expressive power of NN depends on its depth and width, furthering the understanding of the benefit of deep nets for representing certain function classes.The authors derive depth-width tradeoff conditions for when ReLU networks are able to represent periodic functions using dynamical systems analysis." 356,Low-bit quantization and quantization-aware training for small-footprint keyword spotting,"We investigate low-bit quantization to reduce the computational cost of deep neural network based keyword spotting.We propose approaches to further reduce quantization bits via integrating quantization into keyword spotting model training, which we refer to as quantization-aware training.Our experimental results on a large dataset indicate that quantization-aware training can recover the performance of models quantized to lower-bit representations.By combining quantization-aware training and weight matrix factorization, we are able to significantly reduce model size and computation for small-footprint keyword spotting, while maintaining performance.",We investigate quantization-aware training in very low-bit quantized keyword spotters to reduce the cost of on-device keyword spotting.This submission proposes a combination of low-rank decomposition and quantization to compress DNN models for keyword spotting. 
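Quantization-aware training as described in record 356 typically inserts a "fake" quantizer into the forward pass while letting gradients flow through unchanged. The sketch below shows one common realization with a straight-through estimator; the bit width, scale rule, and symmetric grid are assumptions rather than the paper's exact scheme.

```python
import torch

class FakeQuantize(torch.autograd.Function):
    """Uniform fake-quantization with a straight-through gradient."""

    @staticmethod
    def forward(ctx, w, num_bits=4):
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max() / qmax + 1e-8
        # round weights to the low-bit grid, then map back to float
        return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through estimator: treat quantization as the identity
        return grad_output, None

# Usage sketch inside a layer's forward pass:
# w_q = FakeQuantize.apply(self.weight, 4)
```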
357,Enhancing experimental signals in single-cell RNA-sequencing data using graph signal processing,"Single-cell RNA-sequencing is a powerful tool for analyzing biological systems.However, due to biological and technical noise, quantifying the effects of multiple experimental conditions presents an analytical challenge.To overcome this challenge, we developed MELD: Manifold Enhancement of Latent Dimensions.MELD leverages tools from graph signal processing to learn a latent dimension within the data scoring the prototypicality of each datapoint with respect to experimental or control conditions.We call this dimension the Enhanced Experimental Signal.MELD learns the EES by filtering the noisy categorical experimental label in the graph frequency domain to recover a smooth signal with continuous values.This method can be used to identify signature genes that vary between conditions and identify which cell types are most affected by a given perturbation.We demonstrate the advantages of MELD analysis in two biological datasets, including T-cell activation in response to antibody-coated beads and treatment of human pancreatic islet cells with interferon gamma.","A novel graph signal processing framework for quantifying the effects of experimental perturbations in single cell biomedical data.This paper introduces several methods to process experimental results on biological cells and proposes a MELD algorithm mapping hard group assignments to soft assignments, allowing relevant groups of cells to be clustered." 358,Interpretable User Models via Decision-rule Gaussian Processes: Preliminary Results on Energy Storage,"Models of user behavior are critical inputs in many prescriptive settings and can be viewed as decision rules that transform state information available to the user into actions.Gaussian processes, as well as nonlinear extensions thereof, provide a flexible framework to learn user models in conjunction with approximate Bayesian inference.However, the resulting models may not be interpretable in general.We propose decision-rule GPs that apply GPs in a transformed space defined by decision rules that have immediate interpretability to practitioners.We illustrate this modeling tool on a real application and show that structural variational inference techniques can be used with DRGPs.We find that DRGPs outperform the direct use of GPs in terms of out-of-sample performance.",We propose a class of user models based on using Gaussian processes applied to a transformed space defined by decision rules 359,Continuous-fidelity Bayesian Optimization with Knowledge Gradient,"While Bayesian optimization has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search and multi-fidelity BO) that exploit cheap approximations, e.g. 
training on a smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations.In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations.cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost.Furthermore, cfKG can be generalized, following Wu et al., to settings where derivatives are available in the optimization process, e.g. large-scale kernel learning, and where more than one point can be evaluated simultaneously.Numerical experiments show that cfKG outperforms state-of-art algorithms when optimizing synthetic functions, tuning convolutional neural networks on CIFAR-10 and SVHN, and in large-scale kernel learning.","We propose a Bayes-optimal Bayesian optimization algorithm for hyperparameter tuning by exploiting cheap approximations.Studies hyperparameter-optimization by Bayesian optimization, using the Knowledge Gradient framework and allowing the Bayesian optimizer to tune fidelity against cost." 360,Evaluating Robustness of Neural Networks with Mixed Integer Programming,"Neural networks trained only to optimize for training accuracy can often be fooled by adversarial examples --- slightly perturbed inputs misclassified with high confidence.Verification of networks enables us to gauge their vulnerability to such adversarial examples.We formulate verification of piecewise-linear neural networks as a mixed integer program.On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art.We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available.The computational speedup allows us to verify properties on convolutional and residual networks with over 100,000 ReLUs --- several orders of magnitude more than networks previously verified by any complete verifier.In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l-∞ norm ε=0.1: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder.Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.","We efficiently verify the robustness of deep neural models with over 100,000 ReLUs, certifying more samples than the state-of-the-art and finding more adversarial examples than a strong first-order attack.Performs a careful study of mixed integer linear programming approaches for verifying robustness of neural networks to adversarial perturbations and proposes three enhancements to MILP formulations of neural network verification." 
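Record 360's verifier relies on a presolve step that derives bounds on pre-activations, so that provably active or inactive ReLUs need no binary variable in the MILP. The snippet below sketches the simplest such bound computation, interval arithmetic through one linear layer; the paper's actual presolve is tighter, so treat this as an illustration of the idea only.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through a linear layer.

    Splitting W into positive and negative parts gives valid pre-activation
    bounds; ReLUs whose lower bound is >= 0 (always active) or whose upper
    bound is <= 0 (always inactive) can be encoded linearly in the verifier.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

# Usage sketch for an l-infinity ball of radius eps around input x:
# l1, u1 = interval_bounds(W1, b1, x - eps, x + eps)
# stable_active, stable_inactive = (l1 >= 0), (u1 <= 0)
```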
361,Training-Free Uncertainty Estimation for Neural Networks,"Uncertainty estimation is an essential step in the evaluation of the robustness of deep learning models in computer vision, especially when applied in risk-sensitive areas.However, most state-of-the-art deep learning models either fail to obtain uncertainty estimation or need significant modification to obtain it.None of the previous methods are able to take an arbitrary model off the shelf and generate uncertainty estimation without retraining or redesigning it.To address this gap, we perform the first systematic exploration into training-free uncertainty estimation.We propose three simple and scalable methods to analyze the variance of output from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout.They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by other state-of-the-art uncertainty estimation methods.Surprisingly, even without involving such perturbations in training, our methods produce comparable or even better uncertainty estimation when compared to other training-required state-of-the-art methods.Last but not least, we demonstrate that the uncertainty from our proposed methods can be used to improve the neural network training.","A set of methods to obtain uncertainty estimation of any given model without re-designing, re-training, or fine-tuning it.Describes several approaches for measuring uncertainty in arbitrary neural networks when there is an absence of distortion during training." 362,Learnable Higher-order Representation for Action Recognition,"Capturing spatiotemporal dynamics is an essential topic in video recognition.In this paper, we present learnable higher-order operations as a generic family of building blocks for capturing higher-order correlations from the high dimensional input video space.We prove that several successful architectures for visual classification tasks are in the family of higher-order neural networks; theoretical and experimental analysis demonstrates that their underlying mechanism is higher-order. On the task of video recognition, even using RGB only without fine-tuning with other video datasets, our higher-order models can achieve results on par with or better than the existing state-of-the-art methods on both Something-Something and Charades datasets.","Proposed higher order operation for context learning.Proposes a new 3D convolutional block which convolves video input with its context, based on the assumption that relevant context is present around the image's object." 
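The training-free methods in record 361 score uncertainty by the variance of a trained model's outputs under tolerable inference-time perturbations. A sketch of the infer-noise variant is below; the noise level, sample count, and the use of plain output variance as the score are illustrative choices.

```python
import numpy as np

def infer_noise_uncertainty(model, x, sigma=0.01, n_samples=20):
    """Training-free uncertainty in the spirit of infer-noise: perturb the
    input with small Gaussian noise at inference time and use the spread of
    the outputs as an uncertainty score.  No retraining or redesign needed.
    """
    outputs = np.stack([model(x + sigma * np.random.randn(*x.shape))
                        for _ in range(n_samples)])
    return outputs.mean(axis=0), outputs.var(axis=0)

# Usage sketch: mean_pred, uncertainty = infer_noise_uncertainty(model, image)
```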
363,There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average,"Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures.The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data.Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging, a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule.We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule.With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data.For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.","Consistency-based models for semi-supervised learning do not converge to a single point but continue to explore a diverse set of plausible solutions on the perimeter of a flat region. Weight averaging helps improve generalization performance.The paper proposes to apply Stochastic Weight Averaging to the semi-supervised learning context, arguing that the semi-supervised MT/Pi models are especially amenable to SWA and propose fast SWA to speed up training." 364,Tracking Loss: Converting Object Detector to Robust Visual Tracker,"In this paper, we find that by designing a novel loss function entitled, tracking loss, Convolutional Neural Network based object detectors can be successfully converted to well-performed visual trackers without any extra computational cost.This property is preferable to visual tracking where annotated video sequences for training are always absent, because rich features learned by detectors from still images could be utilized by dynamic trackers.It also avoids extra machinery such as feature engineering and feature aggregation proposed in previous studies.Tracking loss achieves this property by exploiting the internal structure of feature maps within the detection network and treating different feature points discriminatively.Such structure allows us to simultaneously consider discrimination quality and bounding box accuracy which is found to be crucial to the success.We also propose a network compression method to accelerate tracking speed without performance reduction.That also verifies tracking loss will remain highly effective even if the network is drastically compressed.Furthermore, if we employ a carefully designed tracking loss ensemble, the tracker would be much more robust and accurate.Evaluation results show that our trackers, outperform all state-of-the-art methods on VOT 2016 Challenge in terms of Expected Average Overlap and robustness.We will make the code publicly available.",We successfully convert a popular detector RPN to a well-performed tracker from the viewpoint of loss function. 
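The weight averaging used in record 363 (SWA, and fast-SWA within a learning-rate cycle) reduces to maintaining a running mean of model weights collected at chosen points of training. A minimal sketch, assuming the weights live in a dict of floating-point arrays or tensors:

```python
import copy

def update_swa(swa_state, model_state, n_averaged):
    """Incrementally average model weights.

    Call at each averaging point (end of a learning-rate cycle for SWA, or
    several points within a cycle for fast-SWA).  Assumes every entry of the
    state dict is a floating-point array/tensor.
    """
    if swa_state is None:
        return copy.deepcopy(model_state), 1
    for name, w in model_state.items():
        # running mean: swa <- swa + (w - swa) / (n + 1)
        swa_state[name] += (w - swa_state[name]) / (n_averaged + 1)
    return swa_state, n_averaged + 1
```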
365,Semantic Code Repair using Neuro-Symbolic Transformation Networks,"We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated.In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program.Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs.Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete.Specifically, the architecture generates a shared encoding of the source code using an RNN over the abstract syntax tree, scores each candidate repair using specialized network modules, and then normalizes these scores together so they can compete against one another in comparable probability space.We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs.Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.","A neural architecture for scoring and ranking program repair candidates to perform semantic program repair statically without access to unit tests.Presents a neural network architecture consisting of the share, specialize and compete parts for repairing code in four cases." 366,"Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference","Deep networks were recently suggested to face the odds between accuracy and robustness.Such a dilemma is shown to be rooted in the inherently higher sample complexity and/or model capacity, for learning a high-accuracy and robust classifier.In view of that, give a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications.Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins?This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a “sweet point"" in co-optimizing model accuracy, robustness, and efficiency.Our proposed solution, dubbed Robust Dynamic Inference Networks, allows for each input to adaptively choose one of the multiple output layers to output its prediction.That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematical investigation.We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.","Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? 
Yes!Exploits input-adaptive multiple early-exits for the field of adversarial attack and defense, reducing the average inference complexity without conflicting the larger capacity assumption." 367,Natural Language Detectors Emerge in Individual Neurons,"Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.Especially, little is known about how they represent language in their intermediate layers.In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.",We show that individual units in CNN representations learned in NLP tasks are selectively responsive to specific natural language concepts.Uses grammatical units of natural language that preserve meanings to show that the units of deep CNNs learned in NLP tasks could act as a natural language concept detector. 368,Challenges in Disentangling Independent Factors of Variation,"We study the problem of building models that disentangle independent factors of variation.Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis.As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown.This labeling is of particular interest as it may be readily available without annotation costs.We introduce an autoencoder model and train it through constraints on image pairs and triplets.We show the role of feature dimensionality and adversarial training theoretically and experimentally.We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used.The numerical value of a factor has different meaning in different reference frames.When the reference depends on other factors, transferring that factor becomes ambiguous.We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.","It is a mostly theoretical paper that describes the challenges in disentangling factors of variation, using autoencoders and GAN.This paper considers disentangling factors of variation in images, shows that in general, without further assumptions, one cannot tell apart two different variation factors, and suggests a novel AE+GAN architecture to try and disentangle variation factors.This paper studies the challenges of disentangling independent factors of variation under weakly labeled data and introduces the term reference ambiguity for data point mapping." 
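Record 366's RDI-Nets attach several exits to one backbone and let each input leave early once a prediction is confident enough. The efficiency side of that idea can be sketched as below; the softmax-confidence threshold and batch-size-1 control flow are simplifications, and the paper's adversarial attack/defense machinery is not shown.

```python
import torch

def early_exit_forward(blocks, exit_heads, x, threshold=0.9):
    """Input-adaptive inference with multiple exits.

    blocks and exit_heads are matched lists of modules; the input leaves
    through the first exit whose softmax confidence clears the threshold,
    saving computation on easy inputs (batch size 1 for simplicity).
    """
    h = x
    for block, head in zip(blocks, exit_heads):
        h = block(h)
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:
            return pred, conf
    return pred, conf  # fall back to the final exit
```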
369,AN ATTENTION-BASED DEEP NET FOR LEARNING TO RANK,"In information retrieval, learning to rank constructs a machine-based ranking model which given a query, sorts the search results by their degree of relevance or importance to the query.Neural networks have been successfully applied to this problem, and in this paper, we propose an attention-based deep neural network which better incorporates different embeddings of the queries and search results with an attention-based mechanism.This model also applies a decoder mechanism to learn the ranks of the search results in a listwise fashion.The embeddings are trained with convolutional neural networks or the word2vec model.We demonstrate the performance of this model with image retrieval and text querying data sets.",learning to rank with several embeddings and attentionsProposes to use attention to combine multiple input representations for both query and search results in the learning to rank task. 370,Data-Driven Discovery of Functional Cell Types that Improve Models of Neural Activity,"Computational neuroscience aims to fit reliable models of in vivo neural activity and interpret them as abstract computations.Recent work has shown that functional diversity of neurons may be limited to that of relatively few cell types; other work has shown that incorporating constraints into artificial neural networks can improve their ability to mimic neural data.Here we develop an algorithm that takes as input recordings of neural activity and returns clusters of neurons by cell type and models of neural activity constrained by these clusters.The resulting models are both more predictive and more interpretable, revealing the contributions of functional cell types to neural computation and ultimately informing the design of future ANNs.",We developed an algorithm that takes as input recordings of neural activity and returns clusters of neurons by cell type and models of neural activity constrained by these clusters. 371,Neural Execution of Graph Algorithms,"Graph Neural Networks are a powerful representational tool for solving problems on graph-structured inputs.In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving.Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel as well as sequential.As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically.We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.","We supervise graph neural networks to imitate intermediate and step-wise outputs of classical graph algorithms, recovering highly favourable insights.Suggests training neural networks to imitate graph algorithms by learning primitives and subroutines rather than the final output." 
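Record 371 supervises a GNN on the intermediate states of classical graph algorithms rather than only on their final output. For shortest paths, the per-step target is one Bellman-Ford relaxation, sketched below with a dense weight matrix (np.inf marking absent edges); the GNN architecture itself is not shown.

```python
import numpy as np

def bellman_ford_step(dist, weights):
    """One relaxation step of Bellman-Ford.

    dist: (n,) current distance estimates; weights[u, v]: edge weight from u
    to v, np.inf if there is no edge.  The per-step output is the kind of
    intermediate target used to supervise a GNN step by step, and the min
    over predecessors mirrors max/min message aggregation in the network.
    """
    n = dist.shape[0]
    new_dist = dist.copy()
    for v in range(n):
        new_dist[v] = min(dist[v], np.min(dist + weights[:, v]))
    return new_dist

# The GNN is trained so that its output after step t matches new_dist at step t.
```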
372,Learning to Imagine Manipulation Goals for Robot Task Planning,"Prospection is an important part of how humans come up with new task plans, but has not been explored in depth in robotics.Predicting multiple task-level outcomes is a challenging problem that involves capturing both task semantics and continuous variability over the state of the world.Ideally, we would combine the ability of machine learning to leverage big data for learning the semantics of a task, while using techniques from task planning to reliably generalize to new environments.In this work, we propose a method for learning a model encoding just such a representation for task planning.We learn a neural net that encodes the k most likely outcomes from high-level actions in a given world.Our approach creates comprehensible task plans that allow us to predict changes to the environment many time steps into the future.We demonstrate this approach via application to a stacking task in a cluttered environment, where the robot must select between different colored blocks while avoiding obstacles, in order to perform a task.We also show results on a simple navigation task.Our algorithm generates realistic image and pose predictions at multiple points in a given task.",We describe an architecture for generating diverse hypotheses for intermediate goals during robotic manipulation tasks.Evaluates the quality of a proposed generative predictive model to generate plans for robot execution.This paper proposes a method for learning a high-level transition function that is useful for task planning. 373,Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets,"Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks.While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear.In this paper, we aim at bridging this gap from both theoretical and empirical perspectives.First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in prior work for solving a class of non-convex non-concave min-max problems and establish the complexity of finding an approximate first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method.Then we propose an adaptive variant of OSG named Optimistic Adagrad and reveal an adaptive complexity, expressed in terms of ε and a parameter α with 0 ≤ α ≤ 1/2.To the best of our knowledge, this is the first work establishing adaptive complexity in non-convex non-concave min-max optimization.Empirically, our experiments show that indeed adaptive gradient algorithms outperform their non-adaptive counterparts in GAN training.Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.","This paper provides novel analysis of adaptive gradient algorithms for solving non-convex non-concave min-max problems such as GANs, and explains why adaptive gradient methods outperform their non-adaptive counterparts by empirical studies.Develops algorithms for the solution of variational inequalities in the stochastic setting, proposing a variation of the extragradient method." 
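Record 373 analyzes an optimistic stochastic gradient method that needs only one stochastic oracle call per iteration. One common form of such an optimistic update is sketched below; the exact OSG and Optimistic Adagrad updates analyzed in the paper may differ, for instance in per-coordinate adaptive step sizes.

```python
import numpy as np

def optimistic_sgd_step(params, grad, prev_grad, lr):
    """One optimistic (extrapolated) stochastic gradient step.

    The previous stochastic gradient is reused to anticipate the next move,
    so only a single fresh gradient evaluation is needed per iteration.
    """
    return params - lr * (2.0 * grad - prev_grad)

# For a min-max problem such as GAN training, the minimizing player uses the
# step above and the maximizing player uses the analogous step with -lr.
```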
374,The Gaussian Process Prior VAE for Interpretable Latent Dynamics from Pixels,"We consider the problem of unsupervised learning of a low dimensional, interpretable, latent state of a video containing a moving object.The problem of distilling dynamics from pixels has been extensively considered through the lens of graphical/state space models that exploit Markov structure for cheap computation and structured graphical model priors for enforcing interpretability on latent representations.We take a step towards extending these approaches by discarding the Markov structure; instead, repurposing the recently proposed Gaussian Process Prior Variational Autoencoder for learning sophisticated latent trajectories.We describe the model and perform experiments on a synthetic dataset and see that the model reliably reconstructs smooth dynamics exhibiting U-turns and loops.We also observe that this model may be trained without any beta-annealing or freeze-thaw of training parameters.Training is performed purely end-to-end on the unmodified evidence lower bound objective.This is in contrast to previous works, albeit for slightly different use cases, where application specific training tricks are often required.",We learn sohpisticated trajectories of an object purely from pixels with a toy video dataset by using a VAE structure with a Gaussian process prior. 375,Unravelling the neural signatures of dream recall in EEG: a deep learning approach,"Dreams and our ability to recall them are among the most puzzling questions in sleep research.Specifically, putative differences in brain network dynamics between individuals with high versus low dream recall rates, are still poorly understood.In this study, we addressed this question as a classification problem where we applied deep convolutional networks to sleep EEG recordings to predict whether subjects belonged to the high or low dream recall group.Our model achieves significant accuracy levels across all the sleep stages, thereby indicating subtle signatures of dream recall in the sleep microstructure.We also visualized the feature space to inspect the subject-specificity of the learned features, thus ensuring that the network captured population level differences.Beyond being the first study to apply deep learning to sleep EEG in order to classify HDR and LDR, guided backpropagation allowed us to visualize the most discriminant features in each sleep stage.The significance of these findings and future directions are discussed.","We investigate the neural basis of dream recall using convolutional neural network and feature visualization techniques, like tSNE and guided-backpropagation." 
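Record 374 replaces the Markovian latent dynamics of earlier state-space VAEs with a Gaussian process prior over the whole latent trajectory. The snippet below samples such a trajectory from an RBF-kernel GP to illustrate the kind of smooth paths this prior favors; the kernel and its hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sample_gp_trajectory(times, lengthscale=1.0, variance=1.0, dim=2, seed=0):
    """Sample a smooth latent trajectory from a GP prior with an RBF kernel,
    one independent GP per latent dimension."""
    rng = np.random.default_rng(seed)
    d = times[:, None] - times[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-6 * np.eye(len(times))
    return rng.multivariate_normal(np.zeros(len(times)), K, size=dim).T

# traj = sample_gp_trajectory(np.linspace(0, 10, 100))  # shape (100, 2)
```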
376,Multi-agent Reinforcement Learning for Networked System Control,"This paper considers multi-agent reinforcement learning in networked system control.Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors.We formulate such a networked MARL problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent.Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL.Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.","This paper proposes a new formulation and a new communication protocol for networked multi-agent control problems.Concerned with N-MARL's where agents update their policy based only on messages from neighboring nodes, showing that introducing a spatial discount factor stabilizes learning." 377,The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks,"Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance.In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization.For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence.This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the model's performance.Furthermore, we find that such factorized parameterizations improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.","Mean field VB uses twice as many parameters; we tie variance parameters in mean field VB without any loss in ELBO, gaining speed and lower variance gradients." 378,Training Neural Networks for Aspect Extraction Using Descriptive Keywords Only,"Aspect extraction in online product reviews is a key task in sentiment analysis and opinion mining.Training supervised neural networks for aspect extraction is not possible when ground truth aspect labels are not available, while the unsupervised neural topic models fail to capture the particular aspects of interest.In this work, we propose a weakly supervised approach for training neural networks for aspect extraction in cases where only a small set of seed words, i.e., keywords that describe an aspect, are available.Our main contributions are as follows.First, we show that current weakly supervised networks fail to leverage the predictive power of the available seed words by comparing them to a simple bag-of-words classifier.
Second, we propose a distillation approach for aspect extraction where the seed words are considered by the bag-of-words classifier and distilled to the parameters of a neural network.Third, we show that regularization encourages the student to consider non-seed words for classification and, as a result, the student outperforms the teacher, which only considers the seed words.Finally, we empirically show that our proposed distillation approach outperforms previous weakly supervised approaches for aspect extraction in six domains of Amazon product reviews.","We effectively leverage a few keywords as weak supervision for training neural networks for aspect extraction.Discusses a variant of knowledge distillation which uses a ""teacher"" based on a bag-of-words classifier with seed words and a ""student"" which is an embedding-based neural network." 379,Disentangling neural mechanisms for perceptual grouping,"Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence.This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons.However, the relative contributions of these connections to perceptual grouping are poorly understood.We address this question by systematically evaluating neural network architectures featuring combinations of these connections on two synthetic visual tasks, which stress low-level ""Gestalt"" vs. high-level object cues for perceptual grouping.We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up processing.Horizontal connections resolve this limitation on tasks with Gestalt cues by supporting incremental spatial propagation of activities, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object.Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.","Horizontal and top-down feedback connections are responsible for complementary perceptual grouping strategies in biological and recurrent vision systems.Using neural networks as a computational model of the brain, examines the efficiency of different strategies for solving two visual challenges." 380,High Fidelity Speech Synthesis with Adversarial Networks,"Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images.However, their application in the audio domain has received limited attention,and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech.To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech.Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes.The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. 
To measure the performance of GAN-TTS, we employ both subjective human evaluation, as well as novel quantitative metrics, which we find to be well correlated with MOS.We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator.Listen to GAN-TTS reading this abstract at http://tiny.cc/gantts.","We introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech, which achieves Mean Opinion Score (MOS) 4.2.Solves the GAN challenge in raw waveform synthesis and begins to close the existing performance gap between autoregressive models and GANs for raw audio." 381,PRUNING IN TRAINING: LEARNING AND RANKING SPARSE CONNECTIONS IN DEEP CONVOLUTIONAL NETWORKS,"This paper proposes a Pruning in Training framework for learning to reduce the parameter size of networks.Different from existing works, our PiT framework employs sparse penalties to train networks and thus helps rank the importance of weights and filters.Our PiT algorithms can directly prune the network without any fine-tuning.The pruned networks can still achieve comparable performance to the original networks.In particular, we introduce the Lasso-type Penalty and the Split LBI Penalty to regularize the networks, and a proposed pruning strategy is used to help prune the network.We conduct extensive experiments on MNIST, Cifar-10, and miniImageNet.The results validate the efficacy of our proposed methods.Remarkably, on the MNIST dataset, our PiT framework can save 17.5% of the parameter size of LeNet-5, which still achieves 98.47% recognition accuracy.",we propose an algorithm for learning to prune networks by enforcing structured sparsity penalties.This paper introduces an approach to pruning while training a network using lasso and split LBI penalties 382,Unsupervised Continual Learning and Self-Taught Associative Memory Hierarchies,"We first pose the Unsupervised Continual Learning problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time.Given limited labeled data just before inference, those representations can also be associated with specific object types to perform classification.To solve the UCL problem, we propose an architecture that involves a single module, called Self-Taught Associative Memory, which loosely models the function of a cortical column in the mammalian brain.Hierarchies of STAM modules learn based on a combination of Hebbian learning, online clustering, detection of novel patterns and forgetting outliers, and top-down predictions.We illustrate the operation of STAMs in the context of learning handwritten digits in a continual manner with only 3-12 labeled examples per class.STAMs suggest a promising direction to solve the UCL problem without catastrophic forgetting.","We introduce unsupervised continual learning (UCL) and a neuro-inspired architecture that solves the UCL problem.Proposes using hierarchies of STAM modules to solve the UCL problem, providing evidence that the representations the modules learn are well-suited for few-shot classification."
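A minimal numpy sketch of the Lasso-type penalty and norm-based filter ranking described for entry 381; the sparsity strength, the pruning budget, and the use of a plain L1 penalty on filter weights are assumptions for illustration (the Split LBI variant is not shown).

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy convolutional weight tensor: (out_channels, in_channels, k, k).
    W = rng.normal(size=(16, 8, 3, 3))

    lam = 0.01   # sparsity strength (assumed)

    # Lasso-type penalty added to the task loss during training.
    l1_penalty = lam * np.abs(W).sum()

    # After training, filters can be ranked by their norm and the weakest ones
    # pruned directly, without fine-tuning.
    filter_norms = np.sqrt((W ** 2).sum(axis=(1, 2, 3)))   # one norm per output filter
    keep_ratio = 0.5                                        # assumed pruning budget
    threshold = np.quantile(filter_norms, 1.0 - keep_ratio)
    mask = filter_norms >= threshold                        # filters to keep

    print("penalty:", round(float(l1_penalty), 3))
    print("kept filters:", int(mask.sum()), "of", len(mask))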
383,Retrieving Signals in the Frequency Domain with Deep Complex Extractors,"Recent advances have made it possible to create deep complex-valued neural networks.Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems.Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain.As a case study, we perform audio source separation in the Fourier domain.Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation and a signal averaging operation.We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram.Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss.When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters.Our proposed mechanism significantly improves the performance of deep complex-valued networks and we demonstrate the usefulness of its regularizing effect.",New Signal Extraction Method in the Fourier Domain.Contributes a complex-valued convolutional version of the Feature-Wise Linear Modulation which allows parameter optimization and designs a loss which takes into account magnitude and phase. 384,Disentangling Content and Style via Unsupervised Geometry Distillation,"It is challenging to disentangle an object into two orthogonal spaces of content and style since each can influence the visual observation in a different and unpredictable way.It is rare for one to have access to a large amount of data to help separate the influences.In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner.We address this problem in a two-branch Autoencoder framework.For the structural content branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge.This encourages the branch to distill geometry information.Another branch learns the complementary style information.The two branches form an effective framework that can disentangle an object's content-style representation without any human annotation.We evaluate our approach on four image datasets, on which we demonstrate the superior disentanglement and visual analogy quality both in synthesized and real-world data.We are able to generate photo-realistic images with 256x256 resolution that are clearly disentangled in content and style.","We present a novel framework to learn the disentangled representation of content and style in a completely unsupervised manner.Proposes a model based on an autoencoder framework to disentangle an object's representation; results show that the model can produce representations capturing content and style."
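A minimal numpy sketch of a complex-valued feature-wise linear modulation of the kind described for entry 383; here the modulation parameters are per-channel complex scalars, which is an assumption for illustration (in practice they would be produced by a conditioning network, and the operation would be convolutional).

    import numpy as np

    def complex_film(x, gamma, beta):
        """Complex-valued FiLM: scale each channel of the complex feature map by a
        complex gamma and shift it by a complex beta."""
        return gamma[:, None, None] * x + beta[:, None, None]

    rng = np.random.default_rng(0)
    C, F, T = 8, 64, 100   # channels, frequency bins, time frames (toy sizes)
    x = rng.normal(size=(C, F, T)) + 1j * rng.normal(size=(C, F, T))   # complex spectrogram features
    gamma = rng.normal(size=C) + 1j * rng.normal(size=C)               # per-channel complex scale
    beta = rng.normal(size=C) + 1j * rng.normal(size=C)                # per-channel complex shift

    y = complex_film(x, gamma, beta)
    print(y.shape, y.dtype)    # (8, 64, 100) complex128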
385,Estimating Heterogeneous Treatment Effects Using Neural Networks With The Y-Learner,"We develop the Y-learner for estimating heterogeneous treatment effects in experimental and observational studies.The Y-learner is designed to leverage the abilities of neural networks to optimize multiple objectives and continually update, which allows for better pooling of underlying feature information between treatment and control groups.We evaluate the Y-learner on three test problems: (1) a set of six simulated data benchmarks from the literature; (2) a real-world large-scale experiment on voter persuasion; and (3) a task from the literature that estimates artificially generated treatment effects on MNIST digits.The Y-learner achieves state of the art results on two of the three tasks.On the MNIST task, it gets the second best results.","We develop a CATE estimation strategy that takes advantage of some of the intriguing properties of neural networks. Shows improvements to X-learner by modeling the treatment response function, the control response function, and the mapping from imputed treatment effect to the conditional average treatment effect, as neural networks.The authors propose the Y-learner to estimate conditional average treatment effects (CATE), which simultaneously updates the parameters of the outcome functions and the CATE estimator." 386,Deep Neural Forests: An Architecture for Tabular Data,"Deep neural models, such as convolutional and recurrent networks, achieve phenomenal results over spatial data such as images and text.However, when considering tabular data, gradient boosting of decision trees remains the method of choice.Aiming to bridge this gap, we propose Deep Neural Forests (DNFs) -- a novel architecture that combines elements from decision trees as well as dense residual connections.We present the results of an extensive empirical study in which we examine the performance of GBDTs, DNFs and fully-connected networks.These results indicate that DNFs achieve comparable results to GBDTs on tabular data, and open the door to end-to-end neural modeling of multi-modal data.To this end, we present a successful application of DNFs as part of a hybrid architecture for a multi-modal driving scene understanding classification task.","An architecture for tabular data, which emulates branches of decision trees and uses dense residual connectivity.This paper proposes Deep Neural Forests, an algorithm which targets tabular data and integrates strong points of gradient boosting of decision trees.A novel neural network architecture mimicking how decision forests work to tackle the general problem of training deep models for tabular data and showcasing effectiveness on par with GBDT."
387,YellowFin and the Art of Momentum Tuning,"Hyperparameter tuning is one of the most time-consuming workloads in deep learning.State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable.Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better results.Motivated by this trend, we ask: can simple adaptive methods based on SGD perform as well or better?We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.We then analyze its robustness to learning rate misspecification and objective curvature variation.Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD.YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly.We empirically show YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup in both synchronous and asynchronous settings.","YellowFin is an SGD based optimizer with both momentum and learning rate adaptivity.Proposes a method to automatically tune the momentum parameter in momentum SGD methods, which achieves better results and faster convergence than the state-of-the-art Adam algorithm." 388,LatentPoison -- Adversarial Attacks On The Latent Space,"Robustness and security of machine learning systems are intertwined, wherein a non-robust ML system can be subject to attacks using a wide variety of exploits.With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms.Here, we study the robustness of the latent space of a deep variational autoencoder, an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack.This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.",Adversarial attacks on the latent space of variational autoencoders to change the semantic meaning of inputs.This paper concerns security and machine learning and proposes a man-in-the-middle attack that alters the VAE encoding of input data so that decoded output will be misclassified.
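An illustrative sketch of the kind of latent-space attack described in entry 388: an additive perturbation of the encoding is chosen so that a downstream classifier flips its prediction while its confidence magnitude is unchanged. The linear classifier on the latent code and the reflection-based construction are assumptions for the sketch, not the paper's procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setting: a latent code z (e.g. a VAE encoding) and a linear classifier
    # acting on the latent space with logit = w.z + b.
    z = rng.normal(size=16)
    w = rng.normal(size=16)
    b = 0.1

    def logit(v):
        return float(w @ v + b)

    # Reflect z across the decision boundary: the logit changes sign but keeps the
    # same magnitude, so the predicted class flips while the classifier's confidence
    # is approximately unchanged -- the property highlighted in the abstract.
    delta = -2.0 * logit(z) / float(w @ w) * w
    z_adv = z + delta

    print("original logit: ", logit(z))
    print("perturbed logit:", logit(z_adv))
    print("perturbation norm:", np.linalg.norm(delta))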
389,An Empirical Study of Encoders and Decoders in Graph-Based Dependency Parsing,"Graph-based dependency parsing consists of two steps: first, an encoder produces a feature representation for each parsing substructure of the input sentence, which is then used to compute a score for the substructure; and second, a decoder finds the parse tree whose substructures have the largest total score.Over the past few years, powerful neural techniques have been introduced into the encoding step which substantially increases parsing accuracies.However, advanced decoding techniques, in particular high-order decoding, have seen a decline in usage.It is widely believed that contextualized features produced by neural encoders can help capture high-order decoding information and hence diminish the need for a high-order decoder.In this paper, we empirically evaluate the combinations of different neural and non-neural encoders with first- and second-order decoders and provide a comprehensive analysis of the effectiveness of these combinations with varied training data sizes.We find that: first, when there is large training data, a strong neural encoder with first-order decoding is sufficient to achieve high parsing accuracy and only slightly lags behind the combination of neural encoding and second-order decoding; second, with small training data, a non-neural encoder with a second-order decoder outperforms the other combinations in most cases. ","An empirical study that examines the effectiveness of different encoder-decoder combinations for the task of dependency parsing.Empirically analyzes various encoders, decoders, and their dependencies for graph-based dependency parsing." 390,Train Neural Network by Embedding Space Probabilistic Constraint ,"Using higher order knowledge to reduce training data has become a popular research topic.However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks will be biased to certain labels.Based on this observation, we consider constraining the output probability distribution as higher order domain knowledge.We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Because directly applying the probability constraint is not effective, users need to provide additional, very weak supervision: marking some batches whose output distribution differs greatly from the target probability distribution.We use experiments to empirically prove that our model can converge to an accuracy higher than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.","We introduce an embedding space approach to constrain neural network output probability distribution.This paper introduces a method to perform semi-supervised learning with deep neural networks, and the model achieves relatively high accuracy, given a small training size.This paper incorporates label distribution into model learning when a limited number of training instances is available, and proposes two techniques for handling the problem of output label distribution being wrongly biased." 391,Deep contextualized word representations,"We introduce a new type of deep contextualized word representation that models both complex characteristics of word use, and how these uses vary across linguistic contexts.
Our word vectors are learned functions of the internal states of a deep bidirectional language model, which is pretrained on a large text corpus.We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.",We introduce a new type of deep contextualized word representation that significantly improves the state of the art for a range of challenging NLP tasks. 392,SoftLoc: Robust Temporal Localization under Label Misalignment,"This work addresses the long-standing problem of robust event localization in the presence of temporally misaligned labels in the training data.We propose a novel versatile loss function that generalizes a number of training regimes from standard fully-supervised cross-entropy to count-based weakly-supervised learning.Unlike classical models which are constrained to strictly fit the annotations during training, our soft localization learning approach instead relaxes the reliance on the exact position of labels.Training with this new loss function exhibits strong robustness to temporal misalignment of labels, thus alleviating the burden of precise annotation of temporal sequences.We demonstrate state-of-the-art performance against standard benchmarks in a number of challenging experiments and further show that robustness to label noise is not achieved at the expense of raw performance.",This work introduces a novel loss function for the robust training of temporal localization DNNs in the presence of misaligned labels.A new loss for training models that predict where events occur in a training sequence with noisy labels by comparing smoothed label and prediction sequences.
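A minimal sketch of the "soft" localization idea summarized for entry 392: both the possibly misaligned label sequence and the prediction sequence are smoothed before being compared, so small temporal offsets are penalized only mildly. The Gaussian kernel width, the mean-squared comparison, and the 1-D toy sequences are assumptions, not the paper's exact loss.

    import numpy as np

    def gaussian_kernel(width=7, sigma=1.5):
        t = np.arange(width) - width // 2
        k = np.exp(-0.5 * (t / sigma) ** 2)
        return k / k.sum()

    def soft_localization_loss(pred, label, sigma=1.5):
        """Compare smoothed event sequences instead of exact frame-level labels."""
        k = gaussian_kernel(sigma=sigma)
        pred_s = np.convolve(pred, k, mode="same")
        label_s = np.convolve(label, k, mode="same")
        return float(((pred_s - label_s) ** 2).mean())

    T = 40
    label = np.zeros(T); label[10] = 1.0                 # annotated event position
    pred_aligned = np.zeros(T); pred_aligned[10] = 1.0
    pred_shifted = np.zeros(T); pred_shifted[12] = 1.0   # prediction off by two frames

    print("aligned loss:", soft_localization_loss(pred_aligned, label))
    print("shifted loss:", soft_localization_loss(pred_shifted, label))
    # The shifted prediction is penalized, but far less than under an exact
    # frame-wise comparison of the un-smoothed sequences.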
393,Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions,"The driving force behind deep networks is their ability to compactly represent rich classes of functions.The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another.To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones.In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways.We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks.By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency.In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not.Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy.This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design.","We introduce the notion of mixed tensor decompositions, and use it to prove that interconnecting dilated convolutional networks boosts their expressive power.This paper theoretically validates that interconnecting networks with different dilations can lead to expressive efficiency using mixed tensor decomposition.The authors study dilated convolutional networks and show that intertwining two dilated convolutional networks A and B at various stages is more expressively efficient than not intertwining.Shows that the WaveNet's structural assumption of a single perfect binary tree is hindering its performance and that WaveNet-like architectures with more complex mixed tree structures perform better." 
394,Multi-task Learning on MNIST Image Datasets,"We apply multi-task learning to image classification tasks on MNIST-like datasets.The MNIST dataset has been referred to as the drosophila of machine learning and has been the testbed of many learning theories.The NotMNIST dataset and the FashionMNIST dataset have been created with the MNIST dataset as reference.In this work, we exploit these MNIST-like datasets for multi-task learning.The datasets are pooled together for learning the parameters of joint classification networks.Then the learned parameters are used as the initial parameters to retrain disjoint classification networks.The baseline recognition models are all-convolutional neural networks.Without multi-task learning, the recognition accuracies for MNIST, NotMNIST and FashionMNIST are 99.56%, 97.22% and 94.32% respectively.With multi-task learning to pre-train the networks, the recognition accuracies are respectively 99.70%, 97.46% and 95.25%.The results re-affirm that the multi-task learning framework, even with data of different genres, does lead to significant improvement.",multi-task learning works.This paper presents a multi-task neural network for classification on MNIST-like datasets 395,Towards Deep Learning Models Resistant to Adversarial Attacks,"Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network.To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization.This approach provides us with a broad and unifying view on much prior work on this topic.Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries.These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks.They also suggest robustness against a first-order adversary as a natural security guarantee.We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.","We provide a principled, optimization-based re-look at the notion of adversarial examples, and develop methods that produce models that are adversarially robust against a wide range of adversaries.Investigates a minimax formulation of deep network learning to increase their robustness, using projected gradient descent as the main adversary. This paper proposes to look at making neural networks resistant to adversarial loss through the framework of saddle-point problems. " 396,Understanding Isomorphism Bias in Graph Data Sets ,"In recent years there has been a rapid increase in classification methods on graph structured data.Both in graph kernels and graph neural networks, one of the implicit assumptions of successful state-of-the-art models was that incorporating graph isomorphism features into the architecture leads to better empirical performance.However, as we discover in this work, commonly used data sets for graph classification have repeating instances which cause the problem of isomorphism bias, i.e.
artificially increasing the accuracy of the models by memorizing target information from the training set.This prevents fair competition of the algorithms and raises a question of the validity of the obtained results.We analyze 54 data sets, previously extensively used for graph-related tasks, on the existence of isomorphism bias, give a set of recommendations to machine learning practitioners to properly set up their models, and open source new data sets for future experiments.","Many graph classification data sets have duplicates, thus raising questions about generalization abilities and fair comparison of the models. The authors discuss isomorphism bias in graph datasets, the overfitting effect in learning networks whenever graph isomorphism features are incorporated within the model, theoretically analogous to data leakage effects." 397,Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling,"Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently.However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy.We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction.We design an algorithm, Value Iteration with Negative Sampling (VINS), that practically learns such value functions with conservative extrapolation.We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks.We also propose using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.","We introduce a notion of conservatively-extrapolated value functions, which provably lead to policies that can self-correct to stay close to the demonstration states, and learn them with a novel negative sampling technique.An algorithm called value iteration with negative sampling to address the covariate shift problem in imitation learning."
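A minimal networkx sketch of the deduplication check implied by entry 396: repeated (isomorphic) instances shared between train and test splits can be detected before evaluation. The toy graphs and the brute-force pairwise check are assumptions; exact isomorphism testing this way is expensive for large data sets.

    import networkx as nx

    # Toy "data set": the second test graph is isomorphic to the first training graph,
    # so a model that memorizes training targets gets it for free (isomorphism bias).
    train = [nx.cycle_graph(5), nx.path_graph(4)]
    test = [nx.star_graph(3),
            nx.relabel_nodes(nx.cycle_graph(5), {i: i + 10 for i in range(5)})]

    leaks = [
        (i, j)
        for i, g_test in enumerate(test)
        for j, g_train in enumerate(train)
        if nx.is_isomorphic(g_test, g_train)
    ]
    print("test graphs isomorphic to a training graph:", leaks)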
398,Contrastive Learning of Structured World Models,"A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition.Learning such a structured world model from raw sensory data remains a challenge.As a step towards this goal, we introduce Contrastively-trained Structured World Models.C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure.We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network.This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process.We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation.Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.",Contrastively-trained Structured World Models (C-SWMs) learn object-oriented state representations and a relational model of an environment from raw pixel input.The authors overcome the problem of using pixel-based losses in the construction and learning of structured world models by using a contrastive latent space. 399,Identifying and Controlling Important Neurons in Neural Machine Translation,"Neural machine translation models learn representations containing substantial linguistic information.However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons.We develop unsupervised methods for discovering important neurons in NMT models.Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision.We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena.Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.","Unsupervised methods for finding, analyzing, and controlling important neurons in NMTThis work proposes finding ""meaningful"" neurons in Neural Machine Translation models by ranking based on correlation between pairs of models, different epochs, or different datasets, and proposes a controlling mechanism for the models." 400,Layer rotation: a surprisingly simple indicator of generalization in deep networks?,"Our work presents empirical evidence that layer rotation, i.e. 
the evolution across training of the cosine distance between each layer's weight vector and its initialization, constitutes an impressively consistent indicator of generalization performance.Compared to previously studied indicators of generalization, we show that layer rotation has the additional benefit of being easily monitored and controlled, as well as having a network-independent optimum: the training procedures during which all layers' weights reach a cosine distance of 1 from their initialization consistently outperform other configurations, by up to 20% test accuracy.Finally, our results also suggest that the study of layer rotation can provide a unified framework to explain the impact of weight decay and adaptive gradient methods on generalization.",This paper presents empirical evidence supporting the discovery of an indicator of generalization: the evolution across training of the cosine distance between each layer's weight vector and its initialization. 401,Global Relational Models of Source Code,"Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program.Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein, which are precise and abundantly available for code.This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models.Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer.In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers, which bias traditional Transformers with relational information from graph edge types.By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.",Models of source code that combine global and structural features learn more powerful representations of programs.A new method to model the source code for the bug repairing task using a sandwich model like [RNN GNN RNN] which significantly improves localization and repair accuracy.
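A minimal sketch of the layer-rotation indicator from entry 400: the cosine distance between each layer's current weight vector and its value at initialization, monitored during training. Flattening each layer to a single vector and the toy two-layer weights are assumptions for illustration.

    import numpy as np

    def layer_rotation(current_weights, initial_weights):
        """Cosine distance between each layer's flattened weights and its initialization."""
        distances = []
        for w, w0 in zip(current_weights, initial_weights):
            w, w0 = w.ravel(), w0.ravel()
            cos = float(w @ w0 / (np.linalg.norm(w) * np.linalg.norm(w0)))
            distances.append(1.0 - cos)
        return distances

    rng = np.random.default_rng(0)
    init = [rng.normal(size=(128, 64)), rng.normal(size=(10, 128))]
    # Pretend training moved the weights by some update.
    trained = [w + 0.5 * rng.normal(size=w.shape) for w in init]

    print(layer_rotation(trained, init))
    # Values near 0 mean a layer barely rotated away from its initialization;
    # values near 1 correspond to the configurations reported as generalizing best.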
402,RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?,"Recurrent neural networks are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate.While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN, where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatts continuous-time RNNs.iRNN exhibits identity gradients and is able to account for long-term dependencies.We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation.We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.",Incremental-RNNs resolves exploding/vanishing gradient problem by updating state vectors based on difference between previous state and that predicted by an ODE.The authors address the problem of signal propagation in recurrent neural networks by building an attractor system for the signal transition and checking whether it converges to an equilibrium. 403,A Modern Take on the Bias-Variance Tradeoff in Neural Networks,"Recent empirical results on over-parameterized deep networks are marked by a striking absence of the classic U-shaped test error curve: test error keeps decreasing in wider networks.Researchers are actively working on bridging this discrepancy by proposing better complexity measures.Instead, we directly measure prediction bias and variance for four classification and regression tasks on modern deep networks.We find that both bias and variance can decrease as the number of parameters grows.Qualitatively, the phenomenon persists over a number of gradient-based optimizers.To better understand the role of optimization, we decompose the total variance into variance due to training set sampling and variance due to initialization.Variance due to initialization is significant in the under-parameterized regime.In the over-parameterized regime, total variance is much lower and dominated by variance due to sampling.We provide theoretical analysis in a simplified setting that is consistent with our empirical findings.",We provide evidence against classical claims about the bias-variance tradeoff and propose a novel decomposition for variance. 
404,A PRIVACY-PRESERVING IMAGE CLASSIFICATION FRAMEWORK WITH A LEARNABLE OBFUSCATOR,"Real world images often contain large amounts of private / sensitive information that should be carefully protected without reducing their utility.In this paper, we propose a privacy-preserving deep learning framework with a learnable obfuscator for the image classification task.Our framework consists of three models: learnable obfuscator, classifier and reconstructor.The learnable obfuscator is used to remove the sensitive information in the images and extract the feature maps from them.The reconstructor plays the role of an attacker, which tries to recover the image from the feature maps extracted by the obfuscator.In order to best protect users’ privacy in images, we design an adversarial training methodology for our framework to optimize the obfuscator.Through extensive evaluations on real world datasets, both the numerical metrics and the visualization results demonstrate that our framework is qualified to protect users’ privacy and achieve a relatively high accuracy on the image classification task.","We proposed a novel deep learning image classification framework that can both accurately classify images and protect users' privacy.This paper proposes a framework which preserves the private information in the image and doesn’t compromise the usability of the image.This current work suggests using adversarial networks to obfuscate images and thus allow collecting them without privacy concerns to use them for training machine learning models." 405,Address2vec: Generating vector embeddings for blockchain analytics,"Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority.All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security its original structure does not allow for direct analysis of address transactions.Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate.We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms.We compare our approach against Multi Level Sequence Learners, one of the best performing models on bitcoin address data.","a 2vec model for cryptocurrency transaction graphs.The paper proposes to use an autoencoder, networkX, and node2Vec to predict whether a Bitcoin address will become empty after a year, but the results are worse than an existing baseline."
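The adversarial training described for entry 404 can be summarized by a single min-max style objective: the obfuscator O and classifier C keep classification accuracy high while making even the best reconstructor R fail to recover the input. The weighting term lambda and the particular loss functions are assumptions for illustration, not taken from the paper.

    \[
    \min_{O,\,C}\;\Big[\;\mathbb{E}_{(x,y)}\,\mathcal{L}_{\mathrm{cls}}\big(C(O(x)),\,y\big)
    \;-\;\lambda\,\min_{R}\;\mathbb{E}_{x}\,\mathcal{L}_{\mathrm{rec}}\big(R(O(x)),\,x\big)\;\Big]
    \]

In practice the inner minimization is approximated by alternating updates: the reconstructor is trained to minimize the reconstruction loss, while the obfuscator and classifier are trained against the reconstructor's current parameters.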
406,ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs,"Despite remarkable empirical success, the training dynamics of generative adversarial networks, which involves solving a minimax game using stochastic gradients, is still poorly understood.In this work, we analyze last-iterate convergence of simultaneous gradient descent and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations.First, we show that simGD, as is, converges with stochastic sub-gradients under strict convexity in the primal variable.Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients.Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients.","Convergence proof of stochastic sub-gradients method and variations on convex-concave minimax problemsAn anaysis of simultaneous stochastic subgradient, simultaneous gradient with optimism, and simultaneous gradient with anchoring in the context of minmax convex concave games.This paper analyzes the dynamics of stochastic gradient descent when applied to convex-concave games, as well as GD with optimism and a new anchored GD algorithm that converges under weaker assumptions than SGD or SGD with optimism." 407,Autonomous Scheduling of Agile Spacecraft Constellations with Delay Tolerant Networking for Reactive Imaging,"Small spacecraft now have precise attitude control systems available commercially, allowing them to slew in 3 degrees of freedom, and capture images within short notice.When combined with appropriate software, this agility can significantly increase response rate, revisit time and coverage.In prior work, we have demonstrated an algorithmic framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile, small spacecraft in a constellation.The proposed schedule optimization would run at the ground station autonomously, and the resultant schedules uplinked to the spacecraft for execution.The algorithm is generalizable over small steerable spacecraft, control capability, sensor specs, imaging requirements, and regions of interest.In this article, we modify the algorithm to run onboard small spacecraft, such that the constellation can make time-sensitive decisions to slew and capture images autonomously, without ground control.We have developed a communication module based on Delay/Disruption Tolerant Networking for onboard data management and routing among the satellites, which will work in conjunction with the other modules to optimize the schedule of agile communication and steering.We then apply this preliminary framework on representative constellations to simulate targeted measurements of episodic precipitation events and subsequent urban floods.The command and control efficiency of our agile algorithm is compared to non-agile and non-DTN constellations.","We propose an algorithmic framework to schedule constellations of small spacecraft with 3-DOF re-orientation capabilities, networked with inter-sat links.This paper proposes a communication module to optimize the schedule of communication for the problem of spacecraft constellations, and compares the algorithm in distributed and centralized settings." 
408,"Ridge Regression: Structure, Cross-Validation, and Sketching","We study the following three fundamental problems about ridge regression: what is the structure of the estimator? how to correctly use cross-validation to choose the regularization parameter?and how to accelerate computation without losing too much accuracy?We consider the three problems in a unified large-data linear model.We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise.We study the bias of-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction.We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate.Our results are illustrated by simulations and by analyzing empirical data.","We study the structure of ridge regression in a high-dimensional asymptotic framework, and get insights about cross-validation and sketching.A theoretical study of ridge regression by exploiting a new asymptotic characterisation of the ridge regression estimator." 409,Understanding Attention Mechanisms,"Attention mechanisms have advanced the state of the art in several machine learning tasks.Despite significant empirical gains, there is a lack of theoretical analyses on understanding their effectiveness.In this paper, we address this problem by studying the landscape of population and empirical loss functions of attention-based neural networks.Our results show that, under mild assumptions, every local minimum of a two-layer global attention model has low prediction error, and attention models require lower sample complexity than models not employing attention.We then extend our analyses to the popular self-attention model, proving that they deliver consistent predictions with a more expressive class of functions.Additionally, our theoretical results provide several guidelines for designing attention mechanisms.Our findings are validated with satisfactory experimental results on MNIST and IMDB reviews dataset.",We analyze the loss landscape of neural networks with attention and explain why attention is helpful in training neural networks to achieve good performance.This paper proves from the theoretical perspective that attention networks can generalize better than non-attention baselines for fixed-attention (single-layer and multi-layer) and self-attention in the single layer setting. 
410,Adaptive Estimators Show Information Compression in Deep Neural Networks,"To improve how neural networks function it is crucial to understand their learning process.The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task.However, empirical evidence for this theory is conflicting, as compression was only observed when networks used saturating activation functions.In contrast, networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression.In this paper we developed more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded functions.Using these adaptive estimation techniques, we explored compression in networks with a range of different activation functions.With two improved methods of estimation, we first show that saturation of the activation function is not required for compression, and the amount of compression varies between different activation functions.We also find that there is a large amount of variation in compression between different network initializations.Second, we see that L2 regularization leads to significantly increased compression, while preventing overfitting.Finally, we show that only compression of the last layer is positively correlated with generalization.",We developed robust mutual information estimates for DNNs and used them to observe compression in networks with non-saturating activation functions.This paper studied the popular belief that deep neural networks do information compression for supervised tasks.This paper proposes a method for the estimation of mutual information for networks with unbounded activation functions and the use of L2 regularization to induce more compression.
411,TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer,"In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness.In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation.We introduce TimbreTron, a method for musical timbre transfer which applies “image” domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer.We show that the Constant Q Transform representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance.Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples.We made an accompanying demo video here: https://www.cs.toronto.edu/~huang/TimbreTron/index.html which we strongly encourage you to watch before reading the paper.","We present the TimbreTron, a pipeline for perfoming high-quality timbre transfer on musical waveforms using CQT-domain style transfer.A method for converting recordings of a specific musical instrument to another by applying CycleGAN, developed for image style transfer, to transfer spectrograms.The authors use multiple techniques/tools to enable neural timbre transfer (converting music from one instrument to another) without paired training examples. Describes a model for musical timbre transfer with the results indicating that the proposed system is effective for pitch and tempo transfer, as well as timbre adaptation." 
412,Deep Rewiring: Training very sparse deep networks,"Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on it.Generic hardware and software implementations of deep learning also run more efficiently for sparse networks.Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints.We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network.DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while their total number remains strictly bounded at all times.We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance.DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.","The paper presents Deep Rewiring, an algorithm that can be used to train deep neural networks when the network connectivity is severely constrained during training.An approach to implement deep learning directly on sparsely connected graphs, allowing networks to be trained efficiently online and for fast and flexible learning.The authors provide a simple algorithm capable of training with limited memory" 413,Self-Supervised GAN Compression,"Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters.These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements.Some model compression methods have been successfully applied on image classification and detection or language models, but there has been very little work compressing generative adversarial networks performing complex tasks.In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods.We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator.We show that this framework has compelling performance at high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities.","Existing pruning methods fail when applied to GANs tackling complex tasks, so we present a simple and robust method to prune generators that works well for a wide variety of networks and tasks.The authors propose a modification to the classic distillation method for the task of compressing a network to address the failure of previous solutions when applied to generative adversarial networks."
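A simplified sketch of the rewiring step described for entry 412: after each update, connections whose parameter crossed zero are deactivated, and the same number of dormant connections are reactivated at random, so the total number of active connections stays fixed. The sign convention, the random gradients, and the omission of DEEP R's noise and prior terms are assumptions; this is a toy illustration, not the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    n_params = 1000
    k_active = 100                      # hard bound on the number of active connections

    theta = np.zeros(n_params)          # underlying parameters
    active = np.zeros(n_params, dtype=bool)
    active[rng.choice(n_params, size=k_active, replace=False)] = True
    theta[active] = np.abs(rng.normal(size=k_active))   # active connections start positive

    for step in range(50):
        grad = rng.normal(size=n_params)            # stand-in for a task gradient
        theta[active] -= 0.1 * grad[active]          # update only active connections

        # Rewiring: connections whose parameter dropped below zero become dormant,
        # and an equal number of dormant connections are re-activated at random,
        # keeping the total connection count strictly bounded.
        died = active & (theta < 0)
        n_dead = int(died.sum())
        active[died] = False
        theta[died] = 0.0
        dormant = np.flatnonzero(~active)
        revived = rng.choice(dormant, size=n_dead, replace=False)
        active[revived] = True
        theta[revived] = 1e-3                        # re-activated connections start small

    print("active connections:", int(active.sum()))  # stays equal to k_active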
414,Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training,"Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure.The situation gets even worse with distributed training on mobile devices, which suffers from higher latency, lower throughput, and intermittent poor connections.In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression to greatly reduce the communication bandwidth.To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training.We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus.On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB.Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",we find 99.9% of the gradient exchange in distributed SGD is redundant; we reduce the communication bandwidth by two orders of magnitude without losing accuracy. This paper proposes additional improvement over gradient dropping to improve communication efficiency 415,Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency,"Image-to-image translation has recently received significant attention due to advances in deep learning.Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way.However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations.To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation network which conditions the translation process on an exemplar image in the target domain.We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain.Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain.To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels.Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.",We propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain.Discusses a core failing and need for I2I translation models.The paper explores the idea that an image has two components and applies an attention model where the feature masks that steer the 
translation process do not require semantic labels 416,Graph Spectral Regularization For Neural Network Interpretability,"Deep neural networks can learn meaningful representations of data.However, these representations are hard to interpret.For example, visualizing a latent layer is generally only possible for at most three dimensions.Neural networks are able to learn and benefit from much higher dimensional representations but these are not visually interpretable because nodes have arbitrary ordering within a layer.Here, we utilize the ability of the human observer to identify patterns in structured representations to visualize higher dimensions.To do so, we propose a class of regularizations, which we call Graph Spectral Regularization, that impose graph structure on latent layers.This is achieved by treating activations as signals on a predefined graph and constraining those activations using graph filters, such as low pass and wavelet-like filters.This framework allows for any kind of graph as well as filter to achieve a wide range of structured regularizations depending on the inference needs of the data.First, we show on a synthetic example that the graph-structured layer can reveal topological features of the data.Next, we show that a smoothing regularization can impose semantically consistent ordering of nodes when applied to capsule nets.Further, we show that the graph-structured layer, using wavelet-like spatially localized filters, can form localized receptive fields for improved image and biomedical data interpretation.In other words, the mapping between latent layer, neurons and the output space becomes clear due to the localization of the activations.Finally, we show that when structured as a grid, the representations create coherent images that allow for image-processing techniques such as convolutions.",Imposing graph structure on neural network layers for improved visual interpretability.A novel regularizer to impose graph structure upon hidden layers of a Neural Network to improve the interpretability of hidden representations.Highlights the contribution of graph spectral regularizer to the interpretability of neural networks. 417,Residual Energy-Based Models for Text Generation,"Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation.The dominant parametric approach is based on locally normalized models which predict one word at a time.While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process.In this work, we investigate un-normalized energy-based models which operate not at the token but at the sequence level.In order to make training tractable, we first work in the residual of a pretrained locally normalized language model, and second, we train using noise contrastive estimation.Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa.Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines.Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation.","We show that Energy-Based models when trained on the residual of an auto-regressive language model can be used effectively and efficiently to generate text. 
A proposed Residual Energy-based Model (EBM) for text generation which operates at the sentence level, and can therefore leverage BERT, and achieves lower perplexity and is preferred by human evaluation." 418,Improving the robustness of ImageNet classifiers using elements of human visual cognition,"We investigate the robustness properties of image recognition models equipped with two features inspired by human vision, an explicit episodic memory and a shape bias, at the ImageNet scale.As reported in previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.It does not, however, improve the robustness against more natural, and typically larger, perturbations.Learning more robust features during training appears to be necessary for robustness in this second sense.We show that features derived from a model that was encouraged to learn global, shape-based representations not only improve the robustness against natural perturbations, but when used in conjunction with an episodic memory, they also provide additional robustness against adversarial perturbations.Finally, we address three important design choices for the episodic memory: memory size, dimensionality of the memories and the retrieval method.We show that to make the episodic memory more compact, it is preferable to reduce the number of memories by clustering them, instead of reducing their dimensionality.","A systematic study of large-scale cache-based image recognition models, focusing particularly on their robustness properties. This paper proposed to use a memory cache to improve robustness against adversarial image examples, and concluded that using a large continuous cache is not superior to hard attention." 
419,B-Spline CNNs on Lie groups,"Group convolutional neural networks can be used to improve classical CNNs by equipping them with the geometric structure of groups.Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations is guaranteed via group theory.Currently, however, the practical implementations of G-CNNs are limited to either discrete groups or continuous compact groups such as rotations.In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups.In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra.This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions.The impact and potential of our approach are studied on two benchmark datasets: cancer detection in histopathology slides in which rotation equivariance plays a key role and facial landmark localization in which scale equivariance is important.In both cases, G-CNN architectures outperform their classical 2D counterparts and the added value of atrous and localized group convolutions is studied in detail.","The paper describes a flexible framework for building CNNs that are equivariant to a large class of transformation groups.A framework for building group CNNs with an arbitrary Lie group G, which shows superiority over a CNN in tumor classification and landmark localization. " 420,On Global Feature Pooling for Fine-grained Visual Categorization,"Global feature pooling is a modern variant of feature pooling providing better interpretability and regularization.Although alternative pooling methods exist, the averaging operation is still the dominating global pooling scheme in popular models.As fine-grained recognition requires learning subtle, discriminative features, we consider the question: is average pooling the optimal strategy? We first ask: is there a difference between features learned by global average and max pooling? Visualization and quantitative analysis show that max pooling encourages learning features of different spatial scales. We then ask: is there a single global feature pooling variant that is most suitable for fine-grained recognition? A thorough evaluation of nine representative pooling algorithms finds that max pooling outperforms average pooling consistently across models, datasets, and image resolutions; it does so by reducing the generalization gap; and generalized pooling's performance increases almost monotonically as it changes from average to max. We finally ask: what is the best way to combine two heterogeneous pooling schemes? Common strategies struggle because of potential gradient conflict, but the freeze-and-train trick works best.We also find that post-global batch normalization helps with faster convergence and improves model performance consistently.","A benchmark of nine representative global pooling schemes reveals some interesting findings.For fine-grained classification tasks, this paper validated that maxpooling would encourage sparser feature maps than avgpooling and outperform it.
" 421,When Does Self-supervision Improve Few-shot Learning?,"We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions.Although recent research has shown benefits of self-supervised learning on large unlabeled datasets, its utility on small datasets is unknown.We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only utilizing images within the datasets.The improvements are greater when the training set is smaller or the task is more challenging.Though the benefits of SSL may increase with larger training sets, we observe that SSL can have a negative impact on performance when there is a domain shift between the distributions of images used for meta-learning and SSL.Based on this analysis we present a technique that automatically selects images for SSL from a large, generic pool of unlabeled images for a given dataset using a domain classifier that provides further improvements.We present results using several meta-learners and self-supervised tasks across datasets with varying degrees of domain shifts and label sizes to characterize the effectiveness of SSL for few-shot learning.","Self-supervision improves few-shot recognition on small and challenging datasets without relying on extra data; Extra data helps only when it is from the same or similar domain.An empirical study of different self-supervised learning (SSL) methods, showing SSL helps more when the dataset is harder, that domain matters for training, and a method to choose samples from an unlabeled dataset. " 422,Online abstraction with MDP homomorphisms for Deep Learning,"Abstraction of Markov Decision Processes is a useful tool for solving complex problems, as it can ignore unimportant aspects of an environment, simplifying the process of learning an optimal policy.In this paper, we propose a new algorithm for finding abstract MDPs in environments with continuous state spaces.It is based on MDP homomorphisms, a structure-preserving mapping between MDPs.We demonstrate our algorithm's ability to learn abstractions from collected experience and show how to reuse the abstractions to guide exploration in new tasks the agent encounters.Our novel task transfer method beats a baseline based on a deep Q-network.",We create abstract models of environments from experience and use them to learn new tasks faster.A methodology that uses the idea of MDP homomorphisms to transform a complex MDP with a continuous state space to a simpler one. 423,Examining Interpretable Feature Relationships in Deep Networks for Action recognition,"A number of recent methods to understand neural networks have focused on quantifying the role of individual features. One such method, NetDissect, identifies interpretable features of a model using the Broden dataset of visual semantic labels. Given the recent rise of a number of action recognition datasets, we propose extending the Broden dataset to include actions to better analyze learned action models. We describe the annotation process, results from interpreting action recognition models on the extended Broden dataset and examine interpretable feature paths to help us understand the conceptual hierarchy used to classify an action.",We expand Network Dissection to include action interpretation and examine interpretable feature paths to understand the conceptual hierarchy used to classify an action. 
424,Melody Generation for Pop Music via Word Representation of Musical Properties,"Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians.However, learning to generate euphonious melody has turned out to be highly challenging due to a number of factors.Representation of the multivariate properties of notes has been one of the primary challenges.It is also difficult to remain in the permissible spectrum of musical variety, outside of which would be perceived as a plain random play without auditory pleasantness.Observing the conventional structure of pop music poses further challenges.In this paper, we propose to represent each note and its properties as a unique ‘word,’ thus lessening the prospect of misalignments between the properties, as well as reducing the complexity of learning.We also enforce regularization policies on the range of notes, thus encouraging the generated melody to stay close to what humans would find easy to follow.Furthermore, we generate melody conditioned on song part information, thus replicating the overall structure of a full song.Experimental results demonstrate that our model can generate auditorily pleasant songs that are more indistinguishable from human-written ones than previous models.","We propose a novel model to represent notes and their properties, which can enhance the automatic melody generation.This paper proposes a generative model of symbolic (MIDI) melody in western popular music which jointly encodes note symbols with timing and duration information to form musical ""words"".The paper proposes to facilitate generation of melody by representing notes as ""words"", representing all of the note's properties and thus allowing the generation of musical ""sentences""." 425,AutoGrow: Automatic Layer Growing in Deep Convolutional Networks,"Depth is a key component of Deep Neural Networks, however, designing depth is heuristic and requires much human effort.We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, it stops growing and thus discovers the depth.We propose robust growing and stopping policies to generalize to different network architectures and datasets.Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets of MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet.For example, in terms of accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts.Our AutoGrow is efficient.It discovers depth within a similar time to training a single DNN.",A method that automatically grows layers in neural networks to discover optimal depth.A framework to interleave training a shallower network and adding new layers which provides insights into the paradigm of 'growing networks'. 
426,In-Domain Representation Learning For Remote Sensing,"Given the importance of remote sensing, surprisingly little attention has been paid to it by the representation learning community.To address it and to speed up innovation in this domain, we provide simplified access to 5 diverse remote sensing datasets in a standardized form.We specifically explore in-domain representation learning and address the question of ""what characteristics should a dataset have to be a good source for remote sensing representation learning"".The established baselines achieve state-of-the-art performance on these datasets.",Exploration of in-domain representation learning for remote sensing datasets.This paper provided several standardized remote sensing data sets and showed that in-domain representation could produce better baseline results for remote sensing compared to fine-tuning on ImageNet or learning from scratch. 427,Classification as Decoder: Trading Flexibility for Control in Multi Domain Dialogue,"Generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred.They can learn from large unlabeled conversation datasets, build a deep understanding of conversational context, and generate a wide variety of responses.This flexibility comes at the cost of control.Undesirable responses in the training data will be reproduced by the model at inference time, and longer generations often don’t make sense.Instead of generating responses one word at a time, we train a classifier to choose from a predefined list of full responses.The classifier is trained on pairs, where each response class is a noisily labeled group of interchangeable responses.At inference, we generate the exemplar response associated with the predicted response class.Experts can edit and improve these exemplar responses over time without retraining the classifier or invalidating old training data.Human evaluation of 775 unseen doctor/patient conversations shows that this tradeoff improves responses.Only 12% of our discriminative approach’s responses are worse than the doctor’s response in the same conversational context, compared to 18% for the generative model.A discriminative model trained without any manual labeling of response classes achieves equal performance to the generative model.",Avoid generating responses one word at a time by using weak supervision to training a classifier to pick a full response.A way to generate responses for medical dialog using a classifier to select from expert-curated responses based on the conversation context. 
428,Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes,"There is a previously identified equivalence between wide fully connected neural networks and Gaussian processes.This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP.In this work, we derive an analogous equivalence for multi-layer convolutional neural networks both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels.We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible.Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical.As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent, is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case.We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation.",Finite-width SGD trained CNNs vs. infinitely wide fully Bayesian CNNs. Who wins?The paper establishes a connection between infinite channel Bayesian convolutional neural network and Gaussian processes. 
429,Bayesian Inference for Large Scale Image Classification,"Bayesian inference promises to ground and improve the performance of deep neural networks.It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness.Markov Chain Monte Carlo methods enable Bayesian inference by generating samples from the posterior distribution over model parameters.Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks.We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network.ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients.We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood.We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.","We scale Bayesian Inference to ImageNet classification and achieve competitive results accuracy and uncertainty calibration.An adaptive noise MCMC algorithm for image classification that dynamically adjusts the momentum and noise applied to each parameter update, and is robust to overfitting and provides an uncertainty measure with predictions. " 430,Real or Fake: An Empirical Study and Improved Model for Fake Face Detection,"Now GANs can generate more and more realistic face images that can easily fool human beings. In contrast, a common convolutional neural network, e.g. ResNet-18, can achieve more than 99.9% accuracy in discerning fake/real faces if training and testing faces are from the same source.In this paper, we performed both human studies and CNN experiments, which led us to two important findings.One finding is that the textures of fake faces are substantially different from real ones.CNNs can capture local image texture information for recognizing fake/real face, while such cues are easily overlooked by humans.The other finding is that global image texture information is more robust to image editing and generalizable to fake faces from different GANs and datasets.Based on the above findings, we propose a novel architecture coined as Gram-Net, which incorporates “Gram Block” in multiple semantic levels to extract global image texture representations.Experimental results demonstrate that our Gram-Net performs better than existing approaches for fake face detection. Especially, our Gram-Net is more robust to image editing, e.g. downsampling, JPEG compression, blur, and noise. More importantly, our Gram-Net generalizes significantly better in detecting fake faces from GAN models not seen in the training phase.",An empirical study on fake images reveals that texture is an important cue that current fake images differ from real images. 
Our improved model capturing global texture statistics shows better cross-GAN fake image detection performance.The paper proposes a way to improve model performance for fake face detection in images generated by a GAN to be more generalizable based on texture information. 431,The Cramer Distance as a Solution to Biased Wasserstein Gradients,"The Wasserstein probability metric has received much attention from the machine learning community.Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes.The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning.In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients.The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third.We provide empirical evidence suggesting this is a serious issue in practice.Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance.We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences.We give empirical results on a number of domains comparing these three divergences.To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network, and show that it has a number of desirable properties over the related Wasserstein GAN.","The Wasserstein distance is hard to minimize with stochastic gradient descent, while the Cramer distance can be optimized easily and works just as well.The manuscript proposes to use the Cramer distance to act as a loss when optimizing an objective function using stochastic gradient descent because it has unbiased sample gradients.The contribution of the article is related to performance criteria, in particular to the Wasserstein/Mallows metric" 432,Learning the Arrow of Time for Problems in Reinforcement Learning,"We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment.Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov Process.We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal.Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator.Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto.","We learn the arrow of time for MDPs and use it to measure reachability, detect side-effects and obtain a curiosity reward signal. This work proposes the h-potential as a solution to an objective that measures state-transition asymmetry in an MDP." 
433,A unified theory of adaptive stochastic gradient descent as Bayesian filtering,"We formulate stochastic gradient descent as a novel factorised Bayesian filtering problem, in which each parameter is inferred separately, conditioned on the corresponding backpropagated gradient. Inference in this setting naturally gives rise to BRMSprop and BAdam: Bayesian variants of RMSprop and Adam. Remarkably, the Bayesian approach recovers many features of state-of-the-art adaptive SGD methods, including amongst others root-mean-square normalization, Nesterov acceleration and AdamW. As such, the Bayesian approach provides one explanation for the empirical effectiveness of state-of-the-art adaptive SGD algorithms. Empirically comparing BRMSprop and BAdam with naive RMSprop and Adam on MNIST, we find that Bayesian methods have the potential to considerably reduce test loss and classification error.","We formulate SGD as a Bayesian filtering problem, and show that this gives rise to RMSprop, Adam, AdamW, NAG and other features of state-of-the-art adaptive methods.The paper analyzes stochastic gradient descent through Bayesian filtering as a framework for analyzing adaptive methods.The authors attempt to unify existing adaptive gradient methods under the Bayesian filtering framework with the dynamical prior" 434,Adversarial AutoAugment,"Data augmentation has been widely utilized to improve generalization in training deep neural networks.Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policies.Through finding the best policy in a well-designed search space of data augmentation, AutoAugment can significantly improve validation accuracy on image classification tasks.However, this approach is not computationally practical for large-scale problems.In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss.The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization.In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network.Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet.We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art.On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model.On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.",We introduce the idea of adversarial learning into automatic data augmentation to improve the generalization of a target network.A technique called Adversarial AutoAugment which dynamically learns good data augmentation policies during training using an adversarial approach. 
435,Enhancing Generalization of First-Order Meta-Learning,"In this study we focus on first-order meta-learning algorithms that aim to learn a parameter initialization of a network which can quickly adapt to new concepts, given a few examples.We investigate two approaches to enhance generalization and speed of learning of such algorithms, particularly expanding on the Reptile algorithm.We introduce a novel regularization technique called meta-step gradient pruning and also investigate the effects of increasing the depth of network architectures in first-order meta-learning.We present an empirical evaluation of both approaches, where we match benchmark few-shot image classification results with 10 times fewer iterations using Mini-ImageNet dataset and with the use of deeper networks, we attain accuracies that surpass the current benchmarks of few-shot image classification using Omniglot dataset.","The study introduces two approaches to enhance generalization of first-order meta-learning and presents empirical evaluation on few-shot image classification.The paper presents an empirical study of the first-order meta-learning Reptile algorithm, investigating a proposed regularization technique and deeper networks" 436,In-training Matrix Factorization for Parameter-frugal Neural Machine Translation,"In this paper, we propose the use of in-training matrix factorization to reduce the model size for neural machine translation.Using in-training matrix factorization, parameter matrices may be decomposed into the products of smaller matrices, which can compress large machine translation architectures by vastly reducing the number of learnable parameters.We apply in-training matrix factorization to different layers of standard neural architectures and show that in-training factorization is capable of reducing nearly 50% of learnable parameters without any associated loss in BLEU score.Further, we find that in-training matrix factorization is especially powerful on embedding layers, providing a simple and effective method to curtail the number of parameters with minimal impact on model performance, and, at times, an increase in performance.","This paper proposes using matrix factorization at training time for neural machine translation, which can reduce model size and decrease training time without harming performance.This paper proposes to compress models using matrix factorization during training for deep neural networks of machine translation." 
437,Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs,"Though state-of-the-art sentence representation models can perform tasks requiring significant knowledge of grammar, it is an open question how best to evaluate their grammatical knowledge.We explore five experimental methods inspired by prior work evaluating pretrained sentence representation models.We use a single linguistic phenomenon, negative polarity item licensing, as a case study for our experiments.NPIs like any are grammatical only if they appear in a licensing environment like negation.This phenomenon is challenging because of the variety of NPI licensing environments that exist.We introduce an artificially generated dataset that manipulates key features of NPI licensing for the experiments.We find that BERT has significant knowledge of these features, but its success varies widely across different experimental methods.We conclude that a variety of methods is necessary to reveal all relevant aspects of a models grammatical knowledge in a given domain.",Different methods for analyzing BERT suggest different (but compatible) conclusions in a case study on NPIs. 438,V1Net: A computational model of cortical horizontal connections,"The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc.A key architectural feature combating such environmental irregularities is ‘long-range horizontal connections’ that aid the perception of the global form of objects.In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity.We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing.Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images.We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500.Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision.","In this work, we present V1Net -- a novel recurrent neural network modeling cortical horizontal connections that give rise to robust visual representations through perceptual grouping.The authors propose to modify a convolutional variant of LSTM to include horizontal connections inspired by known interactions in visual cortex." 
439,Permutation Equivariant Models for Compositional Generalization in Language,"Humans understand novel sentences by composing meanings and roles of core language components.In contrast, neural network models for natural language modeling fail when such compositional generalization is required.The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance.Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models.Through a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.","We propose a link between permutation equivariance and compositional generalization, and provide equivariant language models.This work focuses on learning locally equivariant representations and functions over input/output words for the purposes of the SCAN task." 440,Refining the variational posterior through iterative optimization,"Variational inference is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive.In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it.Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families.We demonstrate theoretically that our method always improves a bound on the approximation and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.","The paper proposes an algorithm to increase the flexibility of the variational posterior in Bayesian neural networks through iterative optimization.A method for training flexible variational posterior distributions, applied to Bayesian neural nets to perform variational inference (VI) over the weights." 
441,Residual Non-local Attention Networks for Image Restoration,"In this paper, we propose a residual non-local attention network for high-quality image restoration.Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features.To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts.Specifically, we design a trunk branch and a local mask branch in each local attention block.The trunk branch is used to extract hierarchical features.Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions.The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map.Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhances the representation ability of the network.Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution.Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually.","New state-of-the-art framework for image restoration.The paper proposes a convolutional neural network architecture that includes blocks for local and non-local attention mechanisms, which are claimed to be responsible for achieving excellent results in four image restoration applications.This paper proposes a residual non-local attention network for image restoration" 442,One-shot learning: From domain knowledge to action models,"Most approaches to learning action planning models heavily rely on a significantly large volume of training samples or plan observations.In this paper, we adopt a different approach based on deductive learning from domain-specific knowledge, specifically from logic formulae that specify constraints about the possible states of a given domain.The minimal input observability required by our approach is a single example composed of a full initial state and a partial goal state.We will show that exploiting specific domain knowledge enables us to constrain the space of possible action models as well as to complete partial observations, both of which turn out to be helpful for learning good-quality action models.",Hybrid approach to model acquisition that compensates for a lack of available data with domain specific knowledge provided by experts.A domain acquisition approach that considers using a different representation for the partial domain model by using schematic mutex relations in place of pre/post conditions. 443,{COMPANYNAME}11K: An Unsupervised Representation Learning Dataset for Arrhythmia Subtype Discovery,"We release the largest public ECG dataset of continuous raw signals for representation learning containing over 11k patients and 2 billion labelled beats.Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events.To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. 
Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery.","We release a dataset constructed from single-lead ECG data from 11,000 patients who were prescribed to use the {DEVICENAME}(TM) device.This paper describes a large-scale ECG dataset the authors intend to publish and provides unsupervised analysis and visualization of the dataset." 444,Context-Gated Convolution,"As the basic building block of Convolutional Neural Networks, the convolutional layer is designed to extract local patterns and lacks the ability to model global context in its nature.Many efforts have been recently made to complement CNNs with the global modeling ability, especially by a family of works on global feature interaction.In these works, the global context information is incorporated into local features before they are fed into convolutional layers.However, research on neuroscience reveals that, besides influences changing the inputs to our neurons, the neurons ability of modifying their functions dynamically according to context is essential for perceptual tasks, which has been overlooked in most of CNNs.Motivated by this, we propose one novel Context-Gated Convolution to explicitly modify the weights of convolutional layers adaptively under the guidance of global context.As such, being aware of the global context, the modulated convolution kernel of our proposed CGC can better extract representative local patterns and compose discriminative features.Moreover, our proposed CGC is lightweight, amenable to modern CNN architectures, and consistently improves the performance of CNNs according to extensive experiments on image classification, action recognition, and machine translation.","A novel Context-Gated Convolution which incorporates global context information into CNNs by explicitly modulating convolution kernels, and thus captures more representative local patterns and extract discriminative features.This paper uses global context to modulate the weights of convolutional layers and help CNNs capture more discriminative features with high performance and fewer parameters than feature map modulating." 445,ACIQ: Analytical Clipping for Integer Quantization of neural networks,"We analyze the trade-off between quantization noise and clipping distortion in low precision networks.We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping.By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping.For example, just by choosing the accurate clipping values, more than 40% accuracy improvement is obtained for the quantization of VGG-16 to 4-bits of precision.Our results have many applications for the quantization of neural networks at both training and inference time.","We analyze the trade-off between quantization noise and clipping distortion in low precision networks, and show marked improvements over standard quantization schemes that normally avoid clippingDerives a formula for finding the minimum and maximum clipping values for uniform quantization which minimize the square error resulting from quantization, for either a Laplace or Gaussian distribution over pre-quantized value." 
446,Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization,"Batch Normalization is one of the most widely used techniques in the Deep Learning field.But its performance can degrade badly with insufficient batch size.This weakness limits the usage of BN on many computer vision tasks like detection or segmentation, where batch size is usually small due to the constraint of memory consumption.Therefore many modified normalization techniques have been proposed, which either fail to restore the performance of BN completely, or have to introduce additional nonlinear operations in the inference procedure and greatly increase consumption.In this paper, we reveal that there are two extra batch statistics involved in the backward propagation of BN, which have never been well discussed before.The extra batch statistics associated with gradients can also severely affect the training of deep neural networks.Based on our analysis, we propose a novel normalization method, named Moving Average Batch Normalization.MABN can completely restore the performance of vanilla BN in small batch cases, without introducing any additional nonlinear operations in the inference procedure.We prove the benefits of MABN by both theoretical analysis and experiments.Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.The code has been released at https://github.com/megvii-model/MABN.",We propose a novel normalization method to handle small batch size cases.A method to deal with the small batch size problem of BN which applies a moving average operation without too much overhead and reduces the number of statistics of BN for better stability. 447,A Simple Geometric Proof for the Benefit of Depth in ReLU Networks,"We present a simple proof for the benefit of depth in multi-layer feedforward networks with rectified activation.Specifically, we present a sequence of classification problems f_i such that for any fixed depth rectified network we can find an index m such that problems with index > m require exponential network width to fully represent the function f_m; and for any problem f_m in the family, we present a concrete neural network with linear depth and bounded width that fully represents it.While there are several previous works showing similar results, our proof uses substantially simpler tools and techniques, and should be accessible to undergraduate students in computer science and people with similar backgrounds.",ReLU MLP depth separation proof with geometric arguments.A proof that deeper networks need fewer units than shallower ones for a family of problems. 
448,$\textrm{D}^2$GAN: A Few-Shot Learning Approach with Diverse and Discriminative Feature Synthesis,"The rich and accessible labeled data fuel the revolutionary success of deep learning.Nonetheless, massive supervision remains a luxury for many real applications, boosting great interest in label-scarce techniques such as few-shot learning.An intuitively feasible approach to FSL is to conduct data augmentation via synthesizing additional training samples.The key to this approach is how to guarantee both discriminability and diversity of the synthesized samples.In this paper, we propose a novel FSL model, called $\textrm{D}^2$GAN, which synthesizes Diverse and Discriminative features based on Generative Adversarial Networks.$\textrm{D}^2$GAN secures discriminability of the synthesized features by constraining them to have high correlation with real features of the same classes while low correlation with those of different classes. Based on the observation that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to feature space, $\textrm{D}^2$GAN incorporates a novel anti-collapse regularization term, which encourages feature diversity by penalizing the ratio of the logarithmic similarity of two synthesized features and the logarithmic similarity of the latent codes generating them.Experiments on three common benchmark datasets verify the effectiveness of $\textrm{D}^2$GAN by comparing with the state-of-the-art.",A new GAN based few-shot learning algorithm by synthesizing diverse and discriminative features.A meta-learning method that learns a generative model that can augment the support set of a few-shot learner which optimizes a combination of losses. 449,Modelling the influence of data structure on learning in neural networks,"The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks.Here, we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: an unstructured synthetic data set containing random i.i.d. inputs, and a simple canonical data set such as MNIST images.Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets.Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold.We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.",We demonstrate how structure in data sets impacts neural networks and introduce a generative model for synthetic data sets that reproduces this impact.The paper studies how different settings of data structure affect learning of neural networks and how to mimic behavior on real datasets when learning on a synthetic one. 
450,Understanding and Training Deep Diagonal Circulant Neural Networks,"In this paper, we study deep diagonal circulant neural networks, that is, deep neural networks in which weight matrices are the product of diagonal and circulant ones.Besides making a theoretical analysis of their expressivity, we introduce principled techniques for training these models: we devise an initialization scheme and propose a smart use of non-linearity functions in order to train deep diagonal circulant networks.Furthermore, we show that these networks outperform recently introduced deep networks with other types of structured layers.We conduct a thorough experimental study to compare the performance of deep diagonal circulant networks with state of the art models based on structured matrices and with dense models.We show that our models achieve better accuracy than other structured approaches while requiring 2x fewer weights than the next best approach.Finally, we train deep diagonal circulant networks to build compact and accurate models on a real world video classification dataset with over 3.8 million training examples.","We train deep neural networks based on diagonal and circulant matrices, and show that this type of network is both compact and accurate on real world applications.The authors provide a theoretical analysis of the expressive power of diagonal circulant neural networks (DCNN) and propose an initialization scheme for deep DCNNs." 451,Learning Global Additive Explanations for Neural Nets Using Model Distillation,"Interpretability has largely focused on local explanations, i.e. explaining why a model made a particular prediction for a sample.These explanations are appealing due to their simplicity and local fidelity.However, they do not provide information about the general behavior of the model.We propose to leverage model distillation to learn global additive explanations that describe the relationship between input features and model predictions.These global explanations take the form of feature shapes, which are more expressive than feature attributions.Through careful experimentation, we show qualitatively and quantitatively that global additive explanations are able to describe model behavior and yield insights about models such as neural nets.A visualization of our approach applied to a neural net as it is trained is available at https://youtu.be/ErQYwNqzEdc",We propose to leverage model distillation to learn global additive explanations in the form of feature shapes (that are more expressive than feature attributions) for models such as neural nets trained on tabular data.This paper incorporates Generalized Additive Models (GAMs) with model distillation to provide global explanations of neural nets. 
452,Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning,"A lot of the recent success in natural language processing has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner.These representations are typically used as general purpose features for words across a range of NLP problems.However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem.Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations.In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.We train this model on several data sources with multiple training objectives on over 100 million sentences.Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods.We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.","A large-scale multi-task learning framework with diverse training objectives to learn fixed-length sentence representationsThis paper is about learning sentence embeddings by combining several training signals: skip-thought, predicting translation, classifying entailment relationships, and predicting the constituent parse." 453,Generating Biased Datasets for Neural Natural Language Processing,"In a time where neural networks are increasingly adopted in sensitive applications, algorithmic bias has emerged as an issue with moral implications.While there are myriad ways that a system may be compromised by bias, systematically isolating and evaluating existing systems on such scenarios is non-trivial, i.e., bias may be subtle, natural and inherently difficult to quantify.To this end, this paper proposes the first systematic study of benchmarking state-of-the-art neural models against biased scenarios.More concretely, we postulate that the bias annotator problem can be approximated with neural models, i.e., we propose generative models of latent bias to deliberately and unfairly associate latent features to a specific class.All in all, our framework provides a new way for principled quantification and evaluation of models against biased datasets.Consequently, we find that state-of-the-art NLP models are readily compromised by biased data.","We propose a neural bias annotator to benchmark models on their robustness to biased text datasets.A method to generate biased datasets for NLP, relying on a conditional adversarially regularized autoencoder (CARA)." 454,WEAKLY SEMI-SUPERVISED NEURAL TOPIC MODELS,"We consider the problem of topic modeling in a weakly semi-supervised setting.In this scenario, we assume that the user knows a priori a subset of the topics she wants the model to learn and is able to provide a few exemplar documents for those topics.In addition, while each document may typically consist of multiple topics, we do not assume that the user will identify all its topics exhaustively. 
Recent state-of-the-art topic models such as NVDM, referred to herein as Neural Topic Models, fall under the variational autoencoder framework.We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.After analyzing the effect of informative priors, we propose a simple modification of the NVDM model using a logit-normal posterior that we show achieves better alignment to user-desired topics versus other NTM models.",We propose supervising VAE-style topic models by intelligently adjusting the prior on a per document basis. We find a logit-normal posterior provides the best performance.A flexible method of weakly supervising a topic model to achieve better alignment with user intuition. 455,Information Plane Analysis of Deep Neural Networks via Matrix--Based Renyi's Entropy and Tensor Kernels,"Analyzing deep neural networks via information plane theory has gained tremendous attention recently as a tool to gain insight into, among others, their generalization ability.However, it is by no means obvious how to estimate mutual information between each hidden layer and the input/desired output, to construct the IP.For instance, hidden layers with many neurons require MI estimators with robustness towards the high dimensionality associated with such layers.MI estimators should also be able to naturally handle convolutional layers, while at the same time being computationally tractable to scale to large networks.None of the existing IP methods to date have been able to study truly deep Convolutional Neural Networks, such as the e.g. VGG-16.In this paper, we propose an IP analysis using the new matrix--based Renyis entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data.The obtained results shed new light on the previous literature concerning small-scale DNNs, however using a completely new approach.Importantly, the new framework enables us to provide the first comprehensive IP analysis of contemporary large-scale DNNs and CNNs, investigating the different training phases and providing new insights into the training dynamics of large-scale neural networks.",First comprehensive information plane analysis of large scale deep neural networks using matrix based entropy and tensor kernels.The authors propose a tensor-kernel based estimator for mutual information estimation between high-dimensional layers in a neural network. 
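As a rough illustration of the estimator family used in entry 455, the sketch below computes the matrix-based Renyi alpha-order entropy of a batch of activations from a trace-normalized RBF Gram matrix, so no density has to be estimated in the activation dimension. The paper's tensor-kernel treatment of convolutional layers is not shown, and the kernel width and alpha values are assumptions for illustration only.

import numpy as np

def renyi_entropy(x, sigma=1.0, alpha=1.01):
    # RBF Gram matrix over the batch of activations x with shape (N, d)
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    k = np.exp(-sq / (2.0 * sigma ** 2))
    # trace-normalize so the eigenvalues form a probability distribution
    a = k / np.trace(k)
    lam = np.clip(np.linalg.eigvalsh(a), 0.0, None)
    # matrix-based Renyi alpha-entropy from the eigenvalue spectrum
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

x = np.random.default_rng(0).standard_normal((64, 10))
print(renyi_entropy(x))

In the matrix-based framework, the mutual information needed for the information plane is then assembled from such entropies of individual and joint Gram matrices rather than from explicit densities.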
456,Program Guided Agent,"Developing agents that can learn to follow natural language instructions has been an emerging research direction.While being accessible and flexible, natural language instructions can sometimes be ambiguous even to humans.To address this, we propose to utilize programs, structured in a formal language, as a precise and expressive way to specify tasks.We then devise a modular framework that learns to perform a task specified by a program – as different circumstances give rise to diverse ways to accomplish the task, our framework can perceive which circumstance it is currently under, and instruct a multitask policy accordingly to fulfill each subtask of the overall task.Experimental results on a 2D Minecraft environment not only demonstrate that the proposed framework learns to reliably accomplish program instructions and achieves zero-shot generalization to more complex instructions but also verify the efficiency of the proposed modulation mechanism for learning the multitask policy.We also conduct an analysis comparing various models which learn from programs and natural language instructions in an end-to-end fashion.","We propose a modular framework that can accomplish tasks specified by programs and achieve zero-shot generalization to more complex tasks.This paper investigates training RL agents with instructions and task decompositions formalized as programs, proposing a model for a program guided agent that interprets a program and proposes subgoals to an action module." 457,When is a Convolutional Filter Easy to Learn?,"We analyze the convergence of gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit activation function.Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input.We show that gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches.To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions.Our theory also justifies the two-stage learning rate strategy in deep neural networks.While our focus is theoretical, we also present experiments that justify our theoretical findings.","We prove randomly initialized (stochastic) gradient descent learns a convolutional filter in polynomial time.Studies the problem of learning a single convolutional filter using SGD and shows that under certain conditions, SGD learns a single convolutional filter.This paper extends the Gaussian distribution assumption to a more general angular smoothness assumption, which covers a wider family of input distributions" 458,Bamboo: Ball-Shape Data Augmentation Against Adversarial Attacks from All Directions,"Deep neural networks are widely adopted in real-world cognitive applications because of their high accuracy.The robustness of DNN models, however, has been recently challenged by adversarial attacks where small disturbance on input samples may result in misclassification.State-of-the-art defending algorithms, such as adversarial training or robust optimization, improve DNNs resilience to adversarial attacks by paying high computational costs.Moreover, these approaches are usually designed to defend one or a few known attacking techniques only.The 
effectiveness to defend other types of attacking methods, especially those that have not yet been discovered or explored, cannot be guaranteed.This work aims for a general approach of enhancing the robustness of DNN models under adversarial attacks.In particular, we propose Bamboo -- the first data augmentation method designed for improving the general robustness of DNN without any hypothesis on the attacking algorithms.Bamboo augments the training data set with a small amount of data uniformly sampled on a fixed radius ball around each training data and hence, effectively increase the distance between natural data points and decision boundary.Our experiments show that Bamboo substantially improve the general robustness against arbitrary types of attacks and noises, achieving better results comparing to previous adversarial training methods, robust optimization methods and other data augmentation methods with the same amount of data points.","The first data augmentation method specially designed for improving the general robustness of DNN without any hypothesis on the attacking algorithms.Proposes a data augmentation training method to gain model robustness against adversarial perturbations, by augmenting uniformly random samples from a fixed-radius sphere centered at training data. " 459,Synthesizing realistic neural population activity patterns using Generative Adversarial Networks,"The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing.Here we used the Generative Adversarial Networks framework to simulate the concerted activity of a population of neurons.We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain.We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that match accurately the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics.We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks.Importantly, Spike-GAN does not require to specify a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches.Finally, we show how to exploit a trained Spike-GAN to construct importance maps to detect the most relevant statistical structures present in a spike train.Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.",Using Wasserstein-GANs to generate realistic neural activity and to detect the most relevant features present in neural population patterns.A method for simulating spike trains from populations of neurons which match empirical data using a semi-convolutional GAN.The paper proposes to use GANs for synthesizing realistic neural activity patterns 460,Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives,"Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by.These approaches maximize a variational lower bound on the intractable log likelihood of the observed data.Burda et al. 
introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases.Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases.Roeder et al. propose an improved gradient estimator; however, they are unable to show that it is unbiased.We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick.The doubly reparameterized gradient estimator does not suffer as the number of samples increases, resolving the previously raised issues.The same idea can be used to improve many recently introduced training techniques for latent variable models.In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update, and the jackknife variational inference gradient.Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks.",Doubly reparameterized gradient estimators provide unbiased variance reduction which leads to improved performance.The authors experimentally find that the estimator of the existing work (STL) is biased and propose to reduce the bias to improve the gradient estimator of the ELBO. 461,Gradientless Descent: High-Dimensional Zeroth-Order Optimization,"Zeroth-order optimization is the process of minimizing an objective, given oracle access to evaluations at adaptively chosen inputs.In this paper, we present two simple yet powerful GradientLess Descent algorithms that do not rely on an underlying gradient estimate and are numerically stable.We analyze our algorithm from a novel geometric perspective and we show that, for a smooth and strongly convex objective with latent dimension, our novel analysis shows convergence within a ball of the optimum in evaluations, where the input dimension is, is the diameter of the input space and is the condition number.Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations.We further leverage our geometric perspective to show that our analysis is optimal.Both monotone invariance and its ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks.",Gradientless Descent is a provably efficient gradient-free algorithm that is monotone-invariant and fast for high-dimensional zeroth-order optimization.This paper proposes stable GradientLess Descent (GLD) algorithms that do not rely on a gradient estimate. 462,Goal-Conditioned Video Prediction,"Many processes can be concisely represented as a sequence of events leading from a starting state to an end state.Given raw ingredients and a finished cake, an experienced chef can surmise the recipe.Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors.Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames.
Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: https://sites.google.com/view/video-gcp","We propose a new class of visual generative models: goal-conditioned predictors. We show experimentally that conditioning on the goal allows to reduce uncertainty and produce predictions over much longer horizons.This paper reformulates video prediction problem as interpolation instead of extrapolation by conditioning the prediction on the start and end (goal) frame, resulting in higher quality predictions." 463,Relational Multi-Instance Learning for Concept Annotation from Medical Time Series,"Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data.Most of the medical time series lack annotations or even when the annotations are available they could be subjective and prone to human errors.Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes.However, these approaches are slow and do not use the accompanying medical time series data.To address this issue, we introduce the problem of concept annotation for the medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input.We propose Relational Multi-Instance Learning - a deep Multi Instance Learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks.Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models.","We propose a deep Multi Instance Learning framework based on recurrent neural networks which uses pooling functions and attention mechanisms for the concept annotation tasks.The paper addresses the classification of medical time-series data and proposes to model the temporal relationship between the instances in each series using a recurrent neural network architecture. Proposes a novel Multiple Instance Learning (MIL) formulation called Relation MIL (RMIL), and discussed a number of its variants with LSTM, Bi-LSTM, S2S, etc. and explores integrating RMIL with various attention mechanisms, and demonstrates its usage on medical concept prediction from time series data. 
" 464,Tensorized Embedding Layers for Efficient Model Compression,"The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing.However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting.We introduce a novel way of parametrizing embedding layers based on the Tensor Train decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.","Embedding layers are factorized with Tensor Train decomposition to reduce their memory footprint.This paper proposes a low-rank tensor decomposition model to parameterize the embedding matrix in Natural Language Processing (NLP), which compresses the network and sometimes increases test accuracy." 465,Fixing Weight Decay Regularization in Adam,"We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively but by an additive constant factor.We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function.We provide empirical evidence that our proposed modificationdecouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and, "" substantially improves Adams generalization performance, allowing it to compete with SGD with momentum on image classification datasets.We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence.Finally, we propose a version of Adam with warm restarts that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32.Our source code will become available after the review process.",Fixing weight decay regularization in adaptive gradient methods such as AdamProposes idea to decouple the weight decay from the number of steps taken by the optimization process.The paper presents an alternative way to implement weight decay in Adam with empirical results shownInvestigates weight decay issues lied in the SGD variants and proposes the decoupling method between weight decay and the gradient-based update. 
466,Lifelong Generative Modeling,"Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning.It is essential to the development of intelligent machines that can adapt to their surroundings.In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model.We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain either the past data or the past models.Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now.The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data.We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.",Lifelong distributional learning through a student-teacher architecture coupled with a cross model posterior regularizer. 467,Learning Representations and Generative Models for 3D Point Clouds,"Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling.In this paper, we look at geometric data represented as point clouds.We introduce a deep autoencoder network with excellent reconstruction quality and generalization ability.The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation.We also perform a thorough study of different generative models, including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models.Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity.To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimal matching between sets of point clouds.",Deep autoencoders to learn a good representation for geometric 3D point-cloud data; Generative models for point clouds.Approaches to learn GAN-type generative models using PointNet architecture and latent-space GAN.
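For point-cloud autoencoders such as the one in entry 467, reconstruction quality is typically measured with a set-to-set loss; the Chamfer distance below is one standard choice. The entry itself does not fix the loss here, so treat this purely as an illustrative NumPy sketch.

import numpy as np

def chamfer_distance(p, q):
    # symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    # each point is matched to its nearest neighbour in the other set,
    # and the squared distances are averaged over both directions
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
print(chamfer_distance(rng.random((128, 3)), rng.random((128, 3))))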
468,Adversarial Neural Pruning,"Despite the remarkable performance of deep neural networks on various tasks, they are susceptible to adversarial perturbations which makes it difficult to deploy them in real-world safety-critical applications.In this paper, we aim to obtain robust networks by sparsifying DNNs latent features sensitive to adversarial perturbation.Specifically, we define vulnerability at the latent feature space and then propose a Bayesian framework to prioritize/prune features based on their contribution to both the original and adversarial loss.We also suggest regularizing the features vulnerability during training to improve robustness further.While such network sparsification has been primarily studied in the literature for computational efficiency and regularization effect of DNNs, we confirm that it is also useful to design a defense mechanism through quantitative evaluation and qualitative analysis.We validate our method, on multiple benchmark datasets, which results in an improvement in test accuracy and leads to state-of-the-art robustness.ANP also tackles the practical problem of obtaining sparse and robust networks at the same time, which could be crucial to ensure adversarial robustness on lightweight networks deployed to computation and memory-limited devices.","We propose a novel method for suppressing the vulnerability of latent feature space to achieve robust and compact networks.This paper proposes ""adversarial neural pruning"" method of training a pruning mask and a new vulnerability suppression loss to improve accuracy and adversarial robustness." 469,Deep Variational Semi-Supervised Novelty Detection,"In anomaly detection, one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders,for unsupervised learning of the normal data distribution.In semi-supervised AD, the data also includes a small sample of labeled anomalies.In this work,we propose two variational methods for training VAEs for SSAD.The intuitive idea in both methods is to train the encoder to ‘separate’ between latent vectors for normal and outlier data.We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture.When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.","We proposed two VAE modifications that account for negative data examples, and used them for semi-supervised anomaly detection.The papers propose two methods of VAE-like approaches for semi-supervised novelty detection, MML-VAE and DP-VAE." 
470,Dynamic Instance Hardness,"We introduce dynamic instance hardness to facilitate the training of machine learning models.DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history.We use DIH to evaluate how well a model retains knowledge about each training sample over time.We find that for deep neural nets, the DIH of a sample in relatively early training stages reflects its DIH in later stages and, as a result, DIH can be effectively used to reduce the set of training samples in future epochs.Specifically, during each epoch, only samples with high DIH are trained while samples with low DIH can be safely ignored.DIH is updated each epoch only for the selected samples, so it does not require additional computation.Hence, using DIH during training leads to an appreciable speedup.Also, since the model is focused on the historically more challenging samples, resultant models are more accurate.The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning.The advantages of DIHCL, compared to other curriculum learning approaches, are that DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, and that the dynamic instance hardness, compared to static instance hardness, is more stable as it integrates information over the entire training history up to the present time.Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function, and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum.Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets.","New understanding of training dynamics and metrics of memorization hardness lead to efficient and provable curriculum learning.This paper formulates DIH as a curriculum learning problem that can more effectively utilize the data to train DNNs, and derives theory on the approximation bound." 471,Connections Between Optimization in Machine Learning and Adaptive Control,"This paper explores many immediate connections between adaptive control and machine learning, both through common update laws as well as common concepts.Adaptive control as a field has focused on mathematical rigor and guaranteed convergence.The rapid advances in machine learning, on the other hand, have brought about a plethora of new techniques and problems for learning.This paper elucidates many of the numerous common connections between both fields such that results from both may be leveraged together to solve new problems.In particular, a specific problem related to higher order learning is solved through insights obtained from these intersections.",History of parallel developments in update laws and concepts between adaptive control and optimization in machine learning.
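A minimal sketch of the selection-and-update loop described in entry 470: DIH is kept as a running mean of each sample's instantaneous hardness (here, its loss), only high-DIH samples are trained in an epoch, and only their DIH values are refreshed. The decay factor, selection fraction and placeholder losses are assumptions for illustration, not the paper's settings.

import numpy as np

def select_by_dih(dih, frac=0.5):
    # train only the fraction of samples with the highest dynamic instance hardness
    k = max(1, int(frac * len(dih)))
    return np.argsort(-dih)[:k]

def update_dih(dih, idx, losses, gamma=0.9):
    # running-mean update, applied only to the samples selected this epoch
    dih[idx] = gamma * dih[idx] + (1.0 - gamma) * losses
    return dih

rng = np.random.default_rng(0)
dih = np.ones(1000)                  # optimistic start so every sample is visited early on
for epoch in range(5):
    idx = select_by_dih(dih)
    losses = rng.random(len(idx))    # stand-in for the per-sample training losses of this epoch
    dih = update_dih(dih, idx, losses)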
472,Recurrent Convolutions: A Model Compression Point of View,"Recurrent convolution shares the same convolutional kernels and unrolls them multiple times, and was originally proposed to model time-space signals.We suggest that RC can be viewed as a model compression strategy for deep convolutional neural networks.RC reduces the redundancy across layers and is complementary to most existing model compression approaches.However, the performance of an RC network cannot match the performance of its corresponding standard one, i.e. with the same depth but independent convolutional kernels. This reduces the value of RC for model compression.In this paper, we propose a simple variant which improves RC networks: the batch normalization layers of an RC module are learned independently for different unrolling steps.We provide insights on why this works.Experiments on CIFAR show that unrolling a convolutional layer several steps can improve the performance, and thus indirectly plays a role in model compression.","Recurrent convolution for model compression and a trick for training it, that is, learning independent BN layers over steps.The author modifies the recurrent convolution neural network (RCNN) with independent batch normalization, with experimental results on RCNN comparable with the ResNet architecture when it contains the same number of layers." 473,Efficient Receptive Field Learning by Dynamic Gaussian Structure,"The visual world is vast and varied, but its variations divide into structured and unstructured factors.Structured factors, such as scale and orientation, admit clear theories and efficient representation design.Unstructured factors, such as what it is that makes a cat look like a cat, are too complicated to model analytically, and so require free-form representation learning.We compose structured Gaussian filters and free-form filters, optimized end-to-end, to factorize the representation for efficient yet general learning.Our experiments on dynamic structure, in which the structured filters vary with the input, equal the accuracy of dynamic inference with more degrees of freedom while improving efficiency.","Dynamic receptive fields with spatial Gaussian structure are accurate and efficient.This paper proposes a structured convolution operator to model deformations of local regions of an image, which significantly reduces the number of parameters."
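The variant in entry 472 amounts to sharing one convolution kernel across unrolling steps while giving each step its own batch-normalization statistics. A small PyTorch sketch of such a block follows; the channel count, kernel size and number of steps are arbitrary choices for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvBlock(nn.Module):
    # one convolution unrolled `steps` times with shared weights,
    # but a separate BatchNorm layer learned per unrolling step
    def __init__(self, channels, steps=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bns = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(steps)])

    def forward(self, x):
        for bn in self.bns:          # same kernel every step, step-specific BN statistics
            x = F.relu(bn(self.conv(x)))
        return x

block = RecurrentConvBlock(16, steps=3)
print(block(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])

Parameter count grows only with the extra BN layers, which is why unrolling can still act as compression relative to stacking independent convolutions.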
474,LabelFool: A Trick in the Label Space,"It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes.However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack.In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers.To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for this target label.We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label.Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms.",A trick on adversarial samples so that the mis-classified labels are imperceptible in the label space to human observersA method for constructing adversarial attacks that are less detectable by humans without cost in image space by changing the target class to be similar to the original class of the image. 475,Classification of Building Noise Type/Position via Supervised Learning,"This paper presents noise type/position classification of various impact noises generated in a building which is a serious conflict issue in apartment complexes.For this study, a collection of floor impact noise dataset is recorded with a single microphone.Noise types/positions are selected based on a report by the Floor Management Center under Korea Environmental Corporation.Using a convolutional neural networks based classifier, the impact noise signals converted to log-scaled Mel-spectrograms are classified into noise types or positions.Also, our model is evaluated on a standard environmental sound dataset ESC-50 to show extensibility on environmental sound classification.",This paper presents noise type/position classification of various impact noises generated in a building which is a serious conflict issue in apartment complexesThis work describes the use of convolutional neural networks in a novel application area of building noise type and noise position classification. 
476,Recurrent neural networks learn robust representations by dynamically balancing compression and expansion,"Recordings of neural circuits in the brain reveal extraordinary dynamical richness and high variability.At the same time, dimensionality reduction techniques generally uncover low-dimensional structures underlying these dynamics.What determines the dimensionality of activity in neural circuits?What is the functional role of dimensionality in behavior and task learning?In this work we address these questions using recurrent neural network models.We find that, depending on the dynamics of the initial network, RNNs learn to increase and reduce dimensionality in a way that matches task demands.These findings shed light on fundamental dynamical mechanisms by which neural networks solve tasks with robust representations that generalize to new cases.","Recurrent Neural Networks learn to increase and reduce the dimensionality of their internal representation in a way that matches the task, depending on the dynamics of the initial network." 477,Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment,"Domain adaptation addresses the common problem in which the target distribution generating our test data drifts from the source distribution.While domain adaptation is impossible absent assumptions, strict conditions, e.g. covariate or label shift, enable principled algorithms.Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two terms in a theoretical bound on target error.Unfortunately, this minimization can cause arbitrary increases in the third term, e.g. these approaches can break down under shifting label distributions.We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms.Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.","Instead of strict distribution alignments in traditional deep domain adaptation objectives, which fail when the target label distribution shifts, we propose to optimize a relaxed objective with new analysis, new algorithms, and experimental validation.This paper suggests relaxed metrics for domain adaptation which give new theoretical bounds on the target error."
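Entry 476 turns on measuring the effective dimensionality of recurrent network activity. One standard proxy in this literature, not necessarily the exact measure used by the authors, is the participation ratio of the activity covariance spectrum, sketched below in NumPy.

import numpy as np

def participation_ratio(activity):
    # effective dimensionality of activity (T, N): (sum eig)^2 / sum(eig^2)
    # of the covariance spectrum, i.e. how many directions the dynamics occupy
    cov = np.cov(activity, rowvar=False)
    lam = np.linalg.eigvalsh(cov)
    return lam.sum() ** 2 / np.sum(lam ** 2)

rng = np.random.default_rng(0)
low_d = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 100))
print(participation_ratio(low_d))                            # small: bounded by the true rank of 2
print(participation_ratio(rng.standard_normal((500, 100))))  # much larger for unstructured activity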
478,Hierarchical Summary-to-Article Generation,"In this paper, we explore the task of generating long articles given a short summary, which provides finer-grained content control for the generated text.To prevent sequence-to-sequence models from degenerating into language models and to better control the long text to be generated, we propose a hierarchical generation approach which first generates a sketch of intermediate length based on the summary and then completes the article by enriching the generated sketch.To mitigate the discrepancy between the oracle sketch used during training and the noisy sketch generated during inference, we propose an end-to-end joint training framework based on multi-agent reinforcement learning.For evaluation, we use text summarization corpora by reversing their inputs and outputs, and introduce a novel evaluation method that employs a summarization system to summarize the generated article and test its match with the original input summary.Experiments show that our proposed hierarchical generation approach can generate a coherent and relevant article based on the given summary, yielding significant improvements upon conventional seq2seq models.","We explore the task of summary-to-article generation and propose a hierarchical generation scheme together with a jointly trained end-to-end reinforcement learning framework to train the hierarchical model.To address the issue of degeneration in summary-to-article generation, this paper proposes a hierarchical generation approach which first generates an intermediate sketch of the article and then the full article." 479,Learning to Learn via Gradient Component Corrections,"Gradient-based meta-learning algorithms require several steps of gradient descent to adapt to newly incoming tasks.This process becomes more costly as the number of samples increases. Moreover, the gradient updates suffer from several sources of noise leading to degraded performance. In this work, we propose a meta-learning algorithm equipped with GradiEnt Component COrrections, a GECCO cell for short, which generates a multiplicative corrective low-rank matrix that corrects the estimated gradients. GECCO contains a simple decoder-like network with learnable parameters, an attention module and a so-called context input parameter. The context parameter of GECCO is updated to generate a low-rank corrective term for the network gradients. As a result, meta-learning requires only a few gradient updates to absorb a new task. While previous approaches address this problem by altering the learning rates, factorising network parameters or directly learning feature corrections from features and/or gradients, GECCO is an off-the-shelf generator-like unit that performs element-wise gradient corrections without the need to ‘observe’ the features and/or the gradients directly. We show that our GECCO accelerates learning, performs robust corrections of gradients corrupted by noise, and leads to notable improvements over existing gradient-based meta-learning algorithms.","We propose a meta-learner that adapts quickly to multiple tasks, even in one step, in a few-shot setting.This paper proposes a method to meta-learn a gradient correction module in which preconditioning is parameterized by a neural network, and builds in a two-stage gradient update process during adaptation.
" 480,Generative Question Answering: Learning to Answer the Whole Question,"Discriminative question answering models can overfit to superficial biases in datasets, because their loss function saturates when any clue makes the answer likely. We introduce generative models of the joint distribution of questions and answers, which are trained to explain the whole question, not just to answer it.Our question answering model is implemented by learning a prior over answers, and a conditional language model to generate the question given the answer—allowing scalable and interpretable many-hop reasoning as the question is generated word-by-word. Our model achieves competitive performance with specialised discriminative models on the SQUAD and CLEVR benchmarks, indicating that it is a more general architecture for language understanding and reasoning than previous work.The model greatly improves generalisation both from biased training data and to adversarial testing data, achieving a new state-of-the-art on ADVERSARIAL SQUAD.We will release our code.","Question answering models that model the joint distribution of questions and answers can learn more than discriminative modelsThis paper proposes a generative approach to textual and visual QA, where a joint distribution over the question and answer space given the context is learned, which captures more complex relationships.This paper introduces a generative model for question answering and proposes to model p(q,a|c), factorized as p(a|c) * p(q|a,c). The authors proposes a generative QA model, which optimizes jointly the distribution of questions and answering given a document/context. " 481,Enhancing Batch Normalized Convolutional Networks using Displaced Rectifier Linear Units: A Systematic Comparative Study,"In this paper, we turn our attention to the interworking between the activation functions and the batch normalization, which is a virtually mandatory technique to train deep networks currently.We propose the activation function Displaced Rectifier Linear Unit by conjecturing that extending the identity function of ReLU to the third quadrant enhances compatibility with batch normalization.Moreover, we used statistical tests to compare the impact of using distinct activation functions on the learning speed and test accuracy performance of standardized VGG and Residual Networks state-of-the-art models.These convolutional neural networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets.The results showed DReLU speeded up learning in all models and datasets.Besides, statistical significant performance assessments showed DReLU enhanced the test accuracy presented by ReLU in all scenarios.Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance.Therefore, this work demonstrates that it is possible to increase performance replacing ReLU by an enhanced activation function.","A new activation function called Displaced Rectifier Linear Unit is proposed. It is showed to enhance the training and inference performance of batch normalized convolutional neural networks.The paper compares and suggests against the usage of batch normalization after using rectifier linear unitsThis paper proposes an activation function, called displaced ReLU, to improve the performance of CNNs that use batch normalization." 
482,Scale-Equivariant Neural Networks with Decomposed Convolutional Filters,"Encoding the input scale information explicitly into the representation learned by a convolutional neural network is beneficial for many vision tasks especially when dealing with multiscale input signals.We study, in this paper, a scale-equivariant CNN architecture with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations.To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components.A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation.Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.",We construct scale-equivariant convolutional neural networks in the most general form with both computational efficiency and proved deformation robustness.The authors propose a CNN architecture that is theoretically equivariant to isotropic scalings and translations by adding an extra scale-dimension to activation tensors. 483,Utility Analysis of Network Architectures for 3D Point Cloud Processing,"In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures.We propose a number of hypotheses on the effects of specific network architectures on the representation capacity of DNNs.In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives, information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency.We conduct comparative studies based on such metrics to verify the hypotheses, which may shed new lights on the architectural design of neural networks.Experiments demonstrated the effectiveness of our method.The code will be released when this paper is accepted.","We diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures. The paper investigates different neural network architectures for 3D point cloud processing and proposes metrics for adversarial robustness, rotational robustness, and neighborhood consistency." 
484,Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks,"Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards.However, it is very difficult to achieve similar success without relying on expert demonstrations.Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior.To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for hard-exploration tasks.We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima.In particular, we report a state-of-the-art score of more than 20,000 points on Montezuma's Revenge without using expert demonstrations or resetting to arbitrary states.","Self-imitation learning of diverse trajectories with trajectory-conditioned policy.This paper addresses hard exploration tasks by applying self-imitation to a diverse selection of trajectories from past experience, to drive more efficient exploration in sparse-reward problems, achieving SOTA results." 485,Batch-shaping for learning conditional channel gated networks,"We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost.This is achieved by gating the deep-learning architecture on a fine-grained level.Individual convolutional maps are turned on/off conditionally on features in the network.To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner.We also introduce a generally applicable tool, batch-shaping, that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution.We use this novel technique to force gates to be more conditional on the data.We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation.Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy.In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity.We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.","A method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost.A method to train a network with large capacity, only parts of which are used at inference time dependent on input, using fine-grained conditional selection and a new method of regularization, ""batch shaping.""" 486,An Explicitly Relational Neural Network 
Architecture,"With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data.In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity.We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures.The workings of a successfully trained model are visualised to shed some light on how the architecture functions.","We present an end-to-end differentiable architecture that learns to map pixels to predicates, and evaluate it on a suite of simple relational reasoning tasksA network architecture based on the multi-head self-attention module to learn a new form of relational representations, which improves data efficiency and generalization ability on curriculum learning." 487,Training individually fair ML models with sensitive subspace robustness,"We propose an approach to training machine learning models that are fair in the sense that their performance is invariant under certain perturbations to the features.For example, the performance of a resume screening system should be invariant under changes to the name of the applicant.We formalize this intuitive notion of fairness by connecting it to the original notion of individual fairness put forth by Dwork et al and show that the proposed approach achieves this notion of fairness.We also demonstrate the effectiveness of the approach on two machine learning tasks that are susceptible to gender and racial biases.",Algorithm for training individually fair classifier using adversarial robustnessThis paper proposes a new definition of algorithmic fairness and an algorithm to provably find an ML model that satisfies the fairness contraint. 488,A Seed-Augment-Train Framework for Universal Digit Classification ,"In this paper, we propose a Seed-Augment-Train/Transfer framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.This seed dataset of images is then augmented to create a purely synthetic training dataset, which is in turn used to train a deep neural network and test on held-out real world handwritten digits dataset spanning five Indic scripts, Kannada, Tamil, Gujarati, Malayalam, and Devanagari.We showcase the efficacy of this approach both qualitatively, by training a Boundary-seeking GAN that generates realistic digit images in the five languages, and also qualitatively by testing a CNN trained on the synthetic data on the real-world datasets.This establishes not only an interesting nexus between the font-datasets-world and transfer learning but also provides a recipe for universal-digit classification in any script.",Is seeding and augmentation all you need for classifying digits in any language?This paper presents new datasets for five languages and proposes a new framework (SAT) for font image datasets generation for universal digit classification. 489,Rapid Learning or Feature Reuse? 
Towards Understanding the Effectiveness of MAML,"An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning.An especially successful algorithm has been Model Agnostic Meta-Learning, a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks.Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning or due to feature reuse, with the meta-initialization already containing high quality features?We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor.This leads to the ANIL algorithm, a simplification of MAML where we remove the inner loop for all but the head of the underlying neural network.ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML.We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network.We conclude with a discussion of the rapid learning vs. feature reuse question for meta-learning algorithms more broadly.","The success of MAML relies on feature reuse from the meta-initialization, which also yields a natural simplification of the algorithm, with the inner loop removed for the network body, as well as other insights on the head and body.The paper finds that feature reuse is the dominant factor in the success of MAML, and proposes new algorithms which require much less computation than MAML." 490,Mixed Setting Training Methods for Incremental Slot-Filling Tasks,"Model training remains a dominant financial cost and time investment in machine learning applications.Developing and debugging models often involve iterative training, further exacerbating this issue.With growing interest in increasingly complex models, there is a need for techniques that help to reduce overall training effort.While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for determining when incremental training provides adequate model performance versus full retraining.We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train.We call this setting of non-deterministic full- or incremental training ""Mixed Setting Training"".Upon evaluation in slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.",We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train; it provides a significant speedup over fully training and avoids catastrophic forgetting.This paper proposes an approach for deciding when to incrementally vs. fully retrain a model in the setting of iterative model development in slot filling tasks.
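The ANIL simplification in entry 489 keeps the inner loop only for the network head. A compact PyTorch sketch of a head-only inner loop is below; the toy feature extractor, step count and inner learning rate are placeholders, and a full implementation would also run an outer loop that differentiates through these steps to update the meta-initialization of both body and head.

import torch
import torch.nn.functional as F

def anil_adapt(body, head, xs, ys, steps=5, inner_lr=0.1):
    # the body stays at its meta-initialization in the inner loop;
    # only the linear head (weight, bias) is adapted to the support set
    feats = body(xs)
    w, b = head
    for _ in range(steps):
        loss = F.cross_entropy(feats @ w.t() + b, ys)
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb
    return w, b  # task-specific head; the outer loop can backprop through these updates

# toy 5-way task on random support data
torch.manual_seed(0)
body = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU())
w0 = torch.zeros(5, 64, requires_grad=True)
b0 = torch.zeros(5, requires_grad=True)
xs, ys = torch.randn(25, 20), torch.randint(0, 5, (25,))
w, b = anil_adapt(body, (w0, b0), xs, ys)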
491,What Can Neural Networks Reason About?,"Neural networks have succeeded in many reasoning tasks.Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks perform well on many such tasks, while less structured networks fail.Theoretically, there is limited understanding of why and when a network structure generalizes better than other equally expressive ones.We develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its structure aligns with the algorithmic structure of the relevant reasoning procedure.We formally define algorithmic alignment and derive a sample complexity bound that decreases with better alignment.This framework explains the empirical success of popular reasoning models and suggests their limitations.We unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming.We show that GNNs can learn DP and thus solve these tasks.On several reasoning tasks, our theory aligns with empirical results.","We develop a theoretical framework to characterize which reasoning tasks a neural network can learn well.The paper proposes a measure of classes of algorithmic alignment that measure how ""close"" neural networks are to known algorithms, proving the link between several classes of known algorithms and neural network architectures." 492,Exploring Cellular Protein Localization Through Semantic Image Synthesis,"Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses.As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies.A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight, allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images.To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention.By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels.Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods.Our model, cell-cell interaction GAN, outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics.To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis.","We explore cell-cell interactions across tumor environment contexts observed in highly multiplexed images, by image synthesis using a novel attention GAN architecture.A new method to model the data generated by multiplexed ion beam imaging by time-of-flight (MIBI-TOF) by learning the many-to-many mapping between cell types and protein markers' expression levels." 
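Entry 491's algorithmic-alignment argument uses shortest paths as a running example: the Bellman-Ford relaxation below has the same aggregate-over-neighbours-then-update shape as a round of GNN message passing, which is the correspondence the framework formalizes. The tiny graph is only for illustration.

def bellman_ford(n, edges, source=0):
    # DP update d[v] = min over incoming edges (u, v, w) of d[u] + w,
    # repeated n-1 times: one "round" mirrors one round of message passing
    inf = float("inf")
    dist = [inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        dist = [min([dist[v]] + [dist[u] + w for u, v2, w in edges if v2 == v])
                for v in range(n)]
    return dist

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
print(bellman_ford(3, edges))  # [0.0, 1.0, 3.0]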
493,On the Effectiveness of Minimal Context Selection for Robust Question Answering,"Machine learning models for question-answering, where given a question and a passage, the learner must select some span in the passage as an answer, are known to be brittle.By inserting a single nuisance sentence into the passage, an adversary can fool the model into selecting the wrong span.A promising new approach for QA decomposes the task into two stages: select relevant sentences from the passage; and select a span among those sentences.Intuitively, if the sentence selector excludes the offending sentence, then the downstream span selector will be robust.While recent work has hinted at the potential robustness of two-stage QA, these methods have never, to our knowledge, been explicitly combined with adversarial training.This paper offers a thorough empirical investigation of adversarial robustness, demonstrating that although the two-stage approach lags behind single-stage span selection, adversarial training improves its performance significantly, leading to an improvement of over 22 points in F1 score over the adversarially-trained single-stage model.",A two-stage approach consisting of sentence selection followed by span selection can be made more robust to adversarial attacks in comparison to a single-stage model trained on full context.This paper investigates an existing model and finds that a two-stage trained QA method is not more robust to adversarial attacks compared to other methods. 494,Towards Provably Correct Driver Assistance Systems through Stochastic Cognitive Modeling,"The aim of this study is to introduce a formal framework for analysis and synthesis of driver assistance systems.It applies formal methods to the verification of a stochastic human driver model built using the cognitive architecture ACT-R, and then bootstraps safety in semi-autonomous vehicles through the design of provably correct Advanced Driver Assistance Systems.The main contributions include the integration of probabilistic ACT-R models in the formal analysis of semi-autonomous systems and an abstraction technique that enables a finite representation of a large dimensional, continuous system in the form of a Markov model.The effectiveness of the method is illustrated in several case studies under various conditions.",Verification of a human driver model based on a cognitive architecture and synthesis of a correct-by-construction ADAS from it. 
495,Composing RNNs and FSTs for Small Data: Recovering Missing Characters in Old Hawaiian Text,"In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops.These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation.However, transliterating between older and newer texts is a laborious task when performed manually.We introduce two related methods to help solve this transliteration problem automatically, given that there were not enough data to train an end-to-end deep learning model.One approach is implemented, end-to-end, using finite state transducers.The other is a hybrid deep learning approach which approximately composes an FST with a recurrent neural network.We find that the hybrid approach outperforms the end-to-end FST by partitioning the original problem into one part that can be modelled by hand, using an FST, and into another part, which is easily solved by an RNN trained on the available data.","A novel, hybrid deep learning approach provides the best solution to a limited-data problem (which is important to the conservation of the Hawaiian language)" 496,Out-of-distribution Detection in Few-shot Classification,"In many real-world settings, a learning model must perform few-shot classification: learn to classify examples from unseen classes using only a few labeled examples per class.Additionally, to be safely deployed, it should have the ability to detect out-of-distribution inputs: examples that do not belong to any of the classes.While both few-shot classification and out-of-distribution detection are popular topics,their combination has not been studied.In this work, we propose tasks for out-of-distribution detection in the few-shot setting and establish benchmark datasets, based on four popular few-shot classification datasets. Then, we propose two new methods for this task and investigate their performance.In sum, we establish baseline out-of-distribution detection results using standard metrics on new benchmark datasets and show improved results with our proposed methods.","We quantitatively study out-of-distribution detection in few-shot setting, establish baseline results with ProtoNet, MAML, ABML, and improved upon them.The paper proposes two new confidence scores which are more suitable for out-of-distribution detection of few-shot classification and shows that a distance metric-based approach improves performance." 
497,Progressive Knowledge Distillation For Generative Modeling,"While modern generative models are able to synthesize high-fidelity, visually appealing images, successfully generating examples that are useful for recognition tasks remains an elusive goal.To this end, our key insight is that the examples should be synthesized to recover classifier decision boundaries that would be learned from a large amount of real examples.More concretely, we treat a classifier trained on synthetic examples as student and a classifier trained on real examples as teacher.By introducing knowledge distillation into a meta-learning framework, we encourage the generative model to produce examples in a way that enables the student classifier to mimic the behavior of the teacher.To mitigate the potential gap between student and teacher classifiers, we further propose to distill the knowledge in a progressive manner, either by gradually strengthening the teacher or weakening the student.We demonstrate the use of our model-agnostic distillation approach to deal with data scarcity, significantly improving few-shot learning performance on miniImageNet and ImageNet1K benchmarks.",This paper introduces progressive knowledge distillation for learning generative models that are recognition task orientedThis paper demonstrates easy-to-hard curriculum learning to train a generative model to improve few-shot classification. 498,Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient,"Deep neural networks provide state-of-the-art performance for many applications of interest.Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs.Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models.Consequently the adversary can leverage it to attack against the deployed black-box systems.In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability.Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient which approximates the data-dependent component.Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically.We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks.","We propose a new method for enhancing the transferability of adversarial examples by using the noise-reduced gradient.This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.This paper focuses on enhancing the transferability of adversarial examples from one model to another model." 
499,Lipschitz constant estimation of Neural Networks via sparse polynomial optimization,"We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.The underlying optimization problems boil down to either linear or semidefinite programming.We show how to use the sparse connectivity of a network to significantly reduce the complexity of computation.This is especially useful for convolutional as well as pruned neural networks.We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.","LP-based upper bounds on the Lipschitz constant of Neural Networks. The authors study the problem of estimating the Lipschitz constant of a deep neural network with ELU activation functions, formulating it as a polynomial optimisation problem." 500,Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators,"Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most of the research assumed that all meta-training and meta-testing examples came from a single domain.We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains, including previously unseen ones during meta-training.The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning.This simplifies task-specific adaptation over a complex task distribution to a simple selection problem, rather than modifying the model with a number of parameters at meta-testing time.Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way.This architecture allows the pool to maintain representational diversity and each model to have a domain-invariant representation as well.Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks could come from many different domains.They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains, showing the effectiveness of our framework.",We address multi-domain few-shot classification by building multiple models to represent this complex task distribution in a collective way and simplifying task-specific adaptation as a selection problem from these pre-trained models.This paper tackles few-shot classification with many different domains by building a pool of embedding models to capture domain-invariant and domain-specific features without a significant increase in the number of parameters.
501,DeepErase: Weakly Supervised Ink Artifact Removal in Document Text Images,"Still in 2019, many scanned documents come into businesses in non-digital format.Text to be extracted from real world documents is often nestled inside rich formatting, such as tabular structures or forms with fill-in-the-blank boxes or underlines whose ink often touches or even strikes through the ink of the text itself.Such ink artifacts can severely interfere with the performance of recognition algorithms or other downstream processing tasks.In this work, we propose DeepErase, a neural preprocessor to erase ink artifacts from text images.We devise a method to programmatically augment text images with real artifacts, and use them to train a segmentation network in an weakly supervised manner.In additional to high segmentation accuracy, we show that our cleansed images achieve a significant boost in downstream recognition accuracy by popular OCR software such as Tesseract 4.0.We test DeepErase on out-of-distribution datasets of scanned IRS tax return forms and achieve double-digit improvements in recognition accuracy over baseline for both printed and handwritten text.","Neural-based removal of document ink artifacts (underlines, smudges, etc.) using no manually annotated training data" 502,BayesOpt Adversarial Attack,"Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries.Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost.We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction.We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.","We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction. The authors propose to use Bayesian optimization with a GP surrogate for adversarial image generation, by exploiting additive structure and using Bayesian model selection to determine an optimal dimensionality reduction." 
503,Learning Factorized Multimodal Representations,"Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information.Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data:1) models must learn the complex intra-modal and cross-modal interactions for prediction and2) models must be robust to unexpected missing or noisy modalities during testing.In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels.We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors.Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction.Modality-specific generative factors are unique for each modality and contain the information required for generating data.Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets.Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance.Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.","We propose a model to learn factorized multimodal representations that are discriminative, generative, and interpretable.This paper presents 'Multimodal Factorization model' that factorizes representations into shared multimodal discriminative factors and modality specific generative factors. 
" 504,Compositional Transfer in Hierarchical Reinforcement Learning,"The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency.To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time.To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning.We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference.Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions.Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives.We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting.Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data data-efficiency over competitive baselines.","We develop a hierarchical, actor-critic algorithm for compositional transfer by sharing policy components and demonstrate component specialization and related direct benefits in multitask domains as well as its adaptation for single tasks.A combination of different learning techniques for acquiring structure and learning with asymmetric data, used to train an HRL policy.The authors introduce a hierarchical policy structure for use in both single task and multitask reinforcement learning, and assess the structure's usefulness on complex robotic tasks." 505,Bounding and Counting Linear Regions of Deep Neural Networks,"In this paper, we study the representational power of deep neural networks that belong to the family of piecewise-linear functions, based on PWL activation units such as rectifier or maxout.We investigate the complexity of such networks by studying the number of linear regions of the PWL function.Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions.We directly build upon the work of Mont´ufar et al., Mont´ufar, and Raghu et al. by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks.In addition to achieving tighter bounds, we also develop a novel method to perform exact numeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output.We use this new capability to visualize how the number of linear regions change while training DNNs. ",We empirically count the number of linear regions of rectifier networks and refine upper and lower bounds.This paper presents improved bounds for counting the number of linear regions in ReLU networks. 
506,Déjà Vu: An Empirical Evaluation of the Memorization Properties of Convnets,"Convolutional neural networks memorize part of their training data, which is why strategies such as data augmentation and dropout are employed to mitigate overfitting.This paper considers the related question of “membership inference”, where the goal is to determine if an image was used during training.We consider membership tests over either ensembles of samples or individual samples.First, we show how to detect if a dataset was used to train a model, and in particular whether some validation images were used at train time.Then, we introduce a new approach to infer membership when a few of the top layers are not available or have been fine-tuned, and show that lower layers still carry information about the training samples.To support our findings, we conduct large-scale experiments on Imagenet and subsets of YFCC-100M with modern architectures such as VGG and Resnet.",We analyze the memorization properties of a convnet with respect to its training set and propose several use-cases where we can extract some information about the training set. Illuminates the generalization/memorization properties of large and deep ConvNets and tries to develop procedures related to identifying whether an input to a trained ConvNet has actually been used to train the network. 507,Approximability of Discriminators Implies Diversity in GANs,"While Generative Adversarial Networks have empirically produced impressive results on learning complex real-world distributions, recent works have shown that they suffer from lack of diversity or mode collapse.The theoretical work of Arora et al. suggests a dilemma about GANs’ statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse.By contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class.For various generator classes such as mixtures of Gaussians, exponential families, and invertible and injective neural network generators, we design corresponding discriminators such that the Integral Probability Metric induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence.This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes.Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence or the Wasserstein distance, indicating that the lack of diversity in GANs may be caused by the sub-optimality in optimization instead of statistical inefficiency.","GANs can in principle learn distributions sample-efficiently, if the discriminator class is compact and has strong distinguishing power against the particular generator class.Proposes the notion of restricted approximability, and provides a sample complexity bound, polynomial in the dimension, which is useful in investigating lack of diversity in GANs.Analyzes that the Integral Probability Metric can be a good approximation of Wasserstein distance under some mild assumptions."
508,The Break-Even Point on the Optimization Trajectories of Deep Neural Networks,"Understanding the optimization trajectory is critical to understand training of deep neural networks.We show how the hyperparameters of stochastic gradient descent influence the covariance of the gradients and the Hessian of the training loss along this trajectory.Based on a theoretical model, we predict that using a high learning rate or a small batch size in the early phase of training leads SGD to regions of the parameter space with reduced spectral norm of K, and improved conditioning of K and H. We show that the point on the trajectory after which these effects hold, which we refer to as the break-even point, is reached early during training.We demonstrate these effects empirically for a range of deep neural networks applied to multiple different tasks.Finally, we apply our analysis to networks with batch normalization layers and find that it is necessary to use a high learning rate to achieve loss smoothing effects attributed previously to BN alone.","In the early phase of training of deep neural networks there exists a ""break-even point"" which determines properties of the entire optimization trajectory.This work analyzes the optimization of deep neural networks by considering how the batch size and step-size hyper-parameters modify learning trajectories." 509,Higher-order Weighted Graph Convolutional Networks,"Graph Convolution Network has been recognized as one of the most effective graph models for semi-supervised learning, but it extracts merely the first-order or few-order neighborhood information through information propagation, which suffers performance drop-off for deeper structure.Existing approaches that deal with the higher-order neighbors tend to take advantage of adjacency matrix power.In this paper, we assume a seemly trivial condition that the higher-order neighborhood information may be similar to that of the first-order neighbors.Accordingly, we present an unsupervised approach to describe such similarities and learn the weight matrices of higher-order neighbors automatically through Lasso that minimizes the feature loss between the first-order and higher-order neighbors, based on which we formulate the new convolutional filter for GCN to learn the better node representations.Our model, called higher-order weighted GCN, has achieved the state-of-the-art results on a number of node classification tasks over Cora, Citeseer and Pubmed datasets.","We propose HWGCN to mix the relevant neighborhood information at different orders to better learn node representations.The authors propose a variant of GCN, HWGCN, to consider convolution beyond 1-step neighbors, which is comparable to state-of-the-art methods." 
510,"Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks","The performance of deep neural networks is often attributed to their automated, task-related feature construction.It remains an open question, though, why this leads to solutions with good generalization, even in cases where the number of parameters is larger than the number of samples.Back in the 90s, Hochreiter and Schmidhuber observed that flatness of the loss surface around a local minimum correlates with low generalization error.For several flatness measures, this correlation has been empirically validated.However, it has recently been shown that existing measures of flatness cannot theoretically be related to generalization: if a network uses ReLU activations, the network function can be reparameterized without changing its output in such a way that flatness is changed almost arbitrarily.This paper proposes a natural modification of existing flatness measures that results in invariance to reparameterization.The proposed measures imply a robustness of the network to changes in the input and the hidden layers.Connecting this feature robustness to generalization leads to a generalized definition of the representativeness of data.With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure.","We introduce a novel measure of flatness at local minima of the loss surface of deep neural networks which is invariant with respect to layer-wise reparameterizations and we connect flatness to feature robustness and generalization.The authors propose a notion of feature robustness which is invariant with respect to rescaling the weight and discuss the notion's relationship to generalization."", 'This paper defines a notion of feature-robustness and combines it with epsilon representativeness of a function to describe a connection between flatness of minima and generalization in deep neural networks." 511,Bayesian Sparsification of Gated Recurrent Neural Networks,"Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structure units from the networks, e.g.neurons.We apply and further develop this approach for gated recurrent architectures.Specifically, in addition to sparsification of individual weights and neurons, we propose to sparsify preactivations of gates and information flow in LSTM.It makes some gates and information flow components constant, speeds up forward pass and improves compression.Moreover, the resulting structure of gate sparsity is interpretable and depends on the task.",We propose to sparsify preactivations of gates and information flow in LSTM to make them constant and boost the neuron sparsity levelThis paper proposed a sparsification method for recurrent neural networks by eliminating neurons with zero preactivations to obtain compact networks. 
512,Learning Time-Aware Assistance Functions for Numerical Fluid Solvers,"Improving the accuracy of numerical methods remains a central challenge in many disciplines and is especially important for nonlinear simulation problems.A representative example of such problems is fluid flow, which has been thoroughly studied to arrive at efficient simulations of complex flow phenomena.This paper presents a data-driven approach that learns to improve the accuracy of numerical solvers.The proposed method utilizes an advanced numerical scheme with a fine simulation resolution to acquire reference data.We then employ a neural network that infers a correction to move a coarse, and thus quickly obtainable, result closer to the reference data.We provide insights into the targeted learning problem with different learning approaches: fully supervised learning methods with a naive and an optimized data acquisition as well as an unsupervised learning method with a differentiable Navier-Stokes solver.While our approach is very general and applicable to arbitrary partial differential equation models, we specifically highlight gains in accuracy for fluid flow simulations.",We introduce a neural network approach to assist partial differential equation solvers.The authors aim at improving the accuracy of numerical solvers by training a neural network on simulated reference data which corrects the numerical solver. 513,CONFEDERATED MACHINE LEARNING ON HORIZONTALLY AND VERTICALLY SEPARATED MEDICAL DATA FOR LARGE-SCALE HEALTH SYSTEM INTELLIGENCE,"A patient’s health information is generally fragmented across silos.Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization.Machine learning can be conducted in a federated manner on patient datasets with the same set of variables, but separated across sites of care.But federated learning cannot handle the situation where different data types for a given patient are separated vertically across different organizations.We call methods that enable machine learning model training on data separated by two or more degrees “confederated machine learning.” We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.","a confederated learning method that trains models from horizontally and vertically separated medical data. A ""confederated"" machine learning method that learns across divides in medical data separated both horizontally and vertically."
514,Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training,"Existing neural networks are vulnerable to ""adversarial examples""---created by adding maliciously designed small perturbations in inputs to induce a misclassification by the networks.The most investigated defense strategy is adversarial training which augments training data with adversarial examples.However, applying single-step adversaries in adversarial training does not support the robustness of the networks, instead, they will even make the networks to be overfitted.In contrast to the single-step, multi-step training results in the state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time.Therefore, we propose a method, Stochastic Quantized Activation that solves overfitting problems in single-step adversarial training and fastly achieves the robustness comparable to the multi-step.SQA attenuates the adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training.Throughout the experiment, our method demonstrates the state-of-the-art robustness against one of the strongest white-box attacks as PGD training, but with much less computational cost.Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which is different from existing methods.",This paper proposes Stochastic Quantized Activation that solves overfitting problems in FGSM adversarial training and fastly achieves the robustness comparable to multi-step training.The paper proposes a model to improve adversarial training by introducing random perturbations in the activations of one of the hidden layers 515,Does the neuronal noise in cortex help generalization?,"Neural activity is highly variable in response to repeated stimuli.We used an open dataset, the Allen Brain Observatory, to quantify the distribution of responses to repeated natural movie presentations.A large fraction of responses are best fit by log-normal distributions or Gaussian mixtures with two components.These distributions are similar to those from units in deep neural networks with dropout.Using a separate set of electrophysiological recordings, we constructed a population coupling model as a control for state-dependent activity fluctuations and found that the model residuals also show non-Gaussian distributions.We then analyzed responses across trials from multiple sections of different movie clips and observed that the noise in cortex aligns better with in-clip versus out-of-clip stimulus variations.We argue that noise is useful for generalization when it moves along representations of different exemplars in-class, similar to the structure of cortical noise.",We study the structure of noise in the brain and find it may help generalization by moving representations along in-class stimulus variations. 
516,Open-Set Domain Adaptation with Category-Agnostic Clusters,"Unsupervised domain adaptation has received significant attention in recent years.Most of existing works tackle the closed-set scenario, assuming that the source and target domains share the exactly same categories.In practice, nevertheless, a target domain often contains samples of classes unseen in source domain.The extension of domain adaptation from closed-set to such open-set situation is not trivial since the target samples in unknown class are not expected to align with the source.In this paper, we address this problem by augmenting the state-of-the-art domain adaptation technique, Self-Ensembling, with category-agnostic clusters in target domain.Specifically, we present Self-Ensembling with Category-agnostic Clusters --- a novel architecture that steers domain adaptation with the additional guidance of category-agnostic clusters that are specific to target domain.These clustering information provides domain-specific visual cues, facilitating the generalization of Self-Ensembling for both closed-set and open-set scenarios.Technically, clustering is firstly performed over all the unlabeled target samples to obtain the category-agnostic clusters, which reveal the underlying data space structure peculiar to target domain.A clustering branch is capitalized on to ensure that the learnt representation preserves such underlying structure by matching the estimated assignment distribution over clusters to the inherent cluster distribution for each target sample.Furthermore, SE-CC enhances the learnt representation with mutual information maximization.Extensive experiments are conducted on Office and VisDA datasets for both open-set and closed-set domain adaptation, and superior results are reported when comparing to the state-of-the-art approaches.","We present a new design, i.e., Self-Ensembling with Category-agnostic Clusters, for both closed-set and open-set domain adaptation.A new approach to open set domain adaptation, where the source domain categories are contained in the target domain categories in order to filter out outlier categories and enable adaptation within the shared classes." 
517,Spectral Inference Networks: Unifying Deep and Spectral Learning,"We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization.Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics.As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data.We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions.We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets.Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner.","We show how to learn spectral decompositions of linear operators with deep learning, and use it for unsupervised learning without a generative model.The authors propose to use a deep learning framework to solve the computation of the largest eigenvectors.This paper presents a framework to learn eigenfunctions via a stochastic process and proposes to tackle the challenge of computing eigenfunctions in a large-scale context by approximating then using a two-phase stochastic optimization process." 518,Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks,"The Tensor-Train factorization is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks.However, high Tensor-Train ranks for all the core tensors of parameters need to be element-wise fixed, which results in an unnecessary redundancy of model parameters.This work applies Riemannian stochastic gradient descent to train core tensors of parameters in the Riemannian Manifold before finding vectors of lower Tensor-Train ranks for parameters.The paper first presents the RSGD algorithm with a convergence analysis and then tests it on more advanced Tensor-Train RNNs such as bi-directional GRU/LSTM and Encoder-Decoder RNNs with a Tensor-Train attention model.The experiments on digit recognition and machine translation tasks suggest the effectiveness of the RSGD algorithm for Tensor-Train RNNs.","Applying the Riemannian SGD (RSGD) algorithm for training Tensor-Train RNNs to further reduce model parameters.The paper proposes to use Riemannian stochastic gradient algorithm for low-rank tensor train learning in deep networks.Proposes an algorithm for optimizing neural networks parametrized by Tensor Train decomposition based on the Riemannian optimization and rank adaptation, and designs a bidirectional TT LSTM architecture." 
519,Projection Based Constrained Policy Optimization,"In this paper, we consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs.We propose a new algorithm - Projection Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set.We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update.We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence.Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.","We propose a new algorithm that learns constraint-satisfying policies, and provide theoretical analysis and empirical demonstration in the context of reinforcement learning with constraints.This paper introduces a constrained policy optimization algorithm using a two-step optimization process, where policies that do not satisfy the constraint can be projected back into the constraint set." 520,Characterizing Missing Information in Deep Networks Using Backpropagated Gradients,"Deep networks face challenges of ensuring their robustness against inputs that cannot be effectively represented by information learned from training data.We attribute this vulnerability to the limitations inherent to activation-based representation.To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.In addition, we propose a directional constraint on the gradients as an objective during training to improve the characterization of missing information.To validate the effectiveness of the proposed approach, we compare the anomaly detection performance of gradient-based and activation-based representations.We show that the gradient-based representation outperforms the activation-based representation by 0.093 in CIFAR-10 and 0.361 in CURE-TSR datasets in terms of AUROC averaged over all classes.Also, we propose an anomaly detection algorithm that uses the gradient-based representation, denoted as GradCon, and validate its performance on three benchmarking datasets.The proposed method outperforms the majority of the state-of-the-art algorithms in CIFAR-10, MNIST, and fMNIST datasets with an average AUROC of 0.664, 0.973, and 0.934, respectively.",We propose a gradient-based representation for characterizing information that deep networks have not learned.The authors present creating representations based on gradients with respect to the weights to supplement information missing from the training dataset for deep networks.
521,Zero-Shot Medical Image Artifact Reduction,"Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients’ characteristics, surrounding environment, etc.However, existing deep learning based artifact reduction methods are restricted by their training set with specific predetermined artifact type and pattern.As such, they have limited clinical adoption.In this paper, we introduce a “Zero-Shot” medical image Artifact Reduction framework, which leverages the power of deep learning but without using general pre-trained networks or any clean image reference.Specifically, we utilize the low internal visual entropy of an image and train a light-weight image-specific artifact reduction network to reduce artifacts in an image at test-time.We use Computed Tomography and Magnetic Resonance Imaging as vehicles to show that ZSAR can reduce artifacts better than state-of-the-art both qualitatively and quantitatively, while using shorter execution time.To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using a priori training set.","We introduce a “Zero-Shot” medical image Artifact Reduction framework, which leverages the power of deep learning but without using general pre-trained networks or any clean image reference. " 522,Restricting the Flow: Information Bottlenecks for Attribution,"Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks.For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image.In this work we adapt the information bottleneck concept for attribution.By adding noise to intermediate feature maps we restrict the flow of information and can quantify how much information image regions provide.We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings.The method’s information-theoretic foundation provides an absolute frame of reference for attribution values and a guarantee that regions scored close to zero are not necessary for the networks decision.","We apply the informational bottleneck concept to attribution.The paper proposes a novel perturbation-based method for computing attribution/saliency maps for deep neural network based image classifiers, by injecting crafted noise into an early layer of the network." 
523,Block-Sparse Recurrent Neural Networks,"Recurrent Neural Networks are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling.Sparsity is a technique to reduce compute and memory requirements of deep learning models.Sparse RNNs are easier to deploy on devices and high-end server processors.Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms.In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros.Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy.This technique allows us to reduce the model size by roughly 10x.Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count.Our technique works with a variety of block sizes up to 32x32.Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.","We show the RNNs can be pruned to induce block sparsity which improves speedup for sparse operations on existing hardwareThe authors propose a block sparsity pruning approach to compress RNNs, using group LASSO to promote sparsity and to prune, but with a very specialized schedule as to the pruning and pruning weight." 524,Soft Value Iteration Networks for Planetary Rover Path Planning,"Value iteration networks are an approximation of the value iteration algorithm implemented with convolutional neural networks to make VI fully differentiable.In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers.The key challenging task in learning-based motion planning is to learn a transformation from terrain observations to a suitable navigation reward function.In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to as the soft value iteration network.SVIN is designed to produce more effective training gradients through the value iteration network.It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action.We demonstrate the effectiveness of the proposed method in robot motion planning scenarios.In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.","We propose an improvement to value iteration networks, with applications to planetary rover path planning.This paper learns a reward function based on expert trajectories using a Value Iteration Module to make the planning step differentiable" 525,Augmenting Self-attention with Persistent Memory,"Transformer networks have lead to important progress in language modeling and machine translation.These models include two consecutive modules, a feed-forward layer and a self-attention layer.The latter allows the network to capture long term dependencies and are often regarded as the key 
ingredient in the success of Transformers.Building upon this intuition, we propose a new model that solely consists of attention layers.More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer.Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer.Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.","A novel attention layer that combines self-attention and feed-forward sublayers of Transformer networks.This paper proposes a modification to the Transformer model by incorporating attention over ""persistent"" memory vectors into the self-attention layer, resulting in performance on par with existing models while using fewer parameters." 526,Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning,"This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs.More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces Subset Scanning methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: ""Which subset of inputs have larger-than-expected activations at which subset of nodes?"" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are ""weird together"". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack.","We efficiently find a subset of images that have higher than expected activations for some subset of nodes. These images appear more anomalous and easier to detect when viewed as a group. The paper proposed a scheme to detect the presence of anomalous inputs based on a ""subset scanning"" approach to detect anomalous activations in the deep learning network." 
527,Stable Recurrent Models,"Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks.In this work, we conduct a thorough investigation of stable recurrent models.Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent.Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks.Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime.Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.",Stable recurrent models can be approximated by feed-forward networks and empirically perform as well as unstable models on benchmark tasks.Studies the stability of RNNs and investigation of spectral normalization to sequential predictions. 528,Balanced and Deterministic Weight-sharing Helps Network Performance,"Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network.But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively.Chen et al. proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression.We generalize this method into a framework that allows for efficient arbitrary weight-sharing, and use it to study the role of weight-sharing in neural networks.We show that common neural networks can be expressed as ArbNets with different hash functions.We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.","Studied the role of weight sharing in neural networks using hash functions, found that a balanced and deterministic hash function helps network performance.Proposing ArbNets to study weight sharing in a more systematic way by defining the weight sharing function as a hash function." 
529,Neural Markov Logic Networks,"We introduce Neural Markov Logic Networks, a statistical relational learning system that borrows ideas from Markov logic.Like Markov Logic Networks, NMLNs are an exponential-family model for modelling distributions over possible worlds, but unlike MLNs, they do not rely on explicitly specified first-order logic rules.Instead, NMLNs learn an implicit representation of such rules as a neural network that acts as a potential function on fragments of the relational structure.Interestingly, any MLN can be represented as an NMLN.Similarly to recently proposed Neural theorem provers, NMLNs can exploit embeddings of constants but, unlike NTPs, NMLNs work well also in their absence.This is extremely important for predicting in settings other than the transductive one.We showcase the potential of NMLNs on knowledge-base completion tasks and on generation of molecular data.", We introduce a statistical relational learning system that borrows ideas from Markov logic but learns an implicit representation of rules as a neural network.The paper provides an extension to Markov Logic Networks by removing their dependency on pre-defined first-order logic rules to model more domains in knowledge-base completion tasks. 530,Uncertainty in Multitask Transfer Learning,"Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks.This results in a rich prior capable of few-shot learning on new tasks.The posterior can go beyond the mean field approximation and yields good uncertainty on the performed experiments.Analysis on toy tasks show that it can learn from significantly different tasks while finding similarities among them.Experiments on Mini-Imagenet reach state of the art with 74.5% accuracy on 5 shot learning.Finally, we provide two new benchmarks, each showing a failure mode of existing meta learning algorithms such as MAML and prototypical Networks.",A scalable method for learning an expressive prior over neural networks across multiple tasks.The paper presents a method for training a probabilistic model for Multitasks Transfer Learning by introducing a latent variable per task to capture the commonality in the task instances.The work proposes a variational approach to meta-learning that employs latent variables corresponding to task-specific datasets.Aims to learn a prior over neural networks for multiple tasks. 531,DISENTANGLED STATE SPACE MODELS: UNSUPERVISED LEARNING OF DYNAMICS ACROSS HETEROGENEOUS ENVIRONMENTS,"Sequential data often originates from diverse environments.Across them exist both shared regularities and environment specifics.To learn robust cross-environment descriptions of sequences we introduce disentangled state space models.In the latent space of DSSM environment-invariant state dynamics is explicitly disentangled from environment-specific information governing that dynamics.We empirically show that such separation enables robust prediction, sequence manipulation and environment characterization.We also propose an unsupervised VAE-based training procedure to learn DSSM as Bayesian filters.In our experiments, we demonstrate state-of-the-art performance in controlled generation and prediction of bouncing ball video sequences across varying gravitational influences.",DISENTANGLED STATE SPACE MODELSThe paper presents a generative state space model using a global latent variable E to capture environment-specific information. 
532,Emergent Structures and Lifetime Structure Evolution in Artificial Neural Networks,"Motivated by the flexibility of biological neural networks whose connectivity structure changes significantly during their lifetime, we introduce the Unrestricted Recursive Network and demonstrate that it can exhibit similar flexibility during training via gradient descent.We show empirically that many of the different neural network structures commonly used in practice today can emerge dynamically from the same URN.These different structures can be derived using gradient descent on a single general loss function where the structure of the data and the relative strengths of various regulator terms determine the structure of the emergent network.We show that this loss function and the regulators arise naturally when considering the symmetries of the network as well as the geometric properties of the input data.",We introduce a network framework which can modify its structure during training and show that it can converge to various ML network archetypes such as MLPs and LCNs. 533,Generalizing Across Domains via Cross-Gradient Training,"We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains.CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain.Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training.In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains.We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances.CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives.This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions.Empirical evaluation on three different applications where this setting is natural establishes that domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and data augmentation is a more stable and accurate method than domain adversarial training.","Domain-guided augmentation of data provides a robust and stable method of domain generalization. This paper proposes a domain generalization approach by domain-dependent data augmentation. The authors introduce the CrossGrad method, which trains both a label classification task and a domain classification task." 534,A Neural Representation of Sketch Drawings,"We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects.The model is trained on a dataset of human-drawn images representing many different classes.We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format.","We investigate alternatives to traditional pixel image modelling approaches, and propose a generative model for vector images.This paper introduces a neural network architecture for generating sketch drawings inspired by the variational autoencoder." 535,Combining adaptive algorithms and hypergradient method: a performance and robustness study,"Wilson et al.
showed that, when the stepsize schedule is properly designed, stochastic gradient generalizes better than ADAM.In light of recent work on hypergradient methods, we revisit these claims to see if such methods close the gap between the most popular optimizers.As a byproduct, we analyze the true benefit of these hypergradient methods compared to more classical schedules, such as the fixed decay of Wilson et al..In particular, we observe they are of marginal help since their performance varies significantly when tuning their hyperparameters.Finally, as robustness is a critical quality of an optimizer, we provide a sensitivity analysis of these gradient based optimizers to assess how challenging their tuning is.","We provide a study trying to see how the recent online learning rate adaptation extends the conclusion made by Wilson et al. 2018 about adaptive gradient methods, along with comparison and sensitivity analysis.Reports the results of testing several stepsize adjustment related methods including vanilla SGD, SGD with Neserov momentum, and ADAM and compares those methods with hypergradient and without. " 536,Unsupervised one-to-many image translation,"We perform completely unsupervised one-sided image to image translation between a source domain and a target domain such that we preserve relevant underlying shared semantics.In particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are far"" enough that reconstruction-style or pixel-wise approaches fail.We argue that transferring said relevant information should involve both discarding source domain-specific information while incorporate target domain-specific information, the latter of which we model with a noisy prior distribution.In order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution.We discover that the architectural choices are an important factor to consider in order to preserve the shared semantic between and.We show state of the art results on the MNIST to SVHN task for unsupervised image to image translation.",We train an image to image translation network that take as input the source image and a sample from a prior distribution to generate a sample from the target distributionThis paper formalizes the problem of unsupervised translation and proposes an augmented GAN framework which uses the mutual information to avoid the degenerate caseThis paper formulates the problem of unsupervised one-to-many image translation and addresses the problem by minimizing the mutual information. 537,Neural Outlier Rejection for Self-Supervised Keypoint Learning,"Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms.Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. 
However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators.We introduce IO-Net, a novel proxy task for the self-supervision of keypoint detection, description and matching.By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that are able to simultaneously self-supervise keypoint description and improve keypoint matching.Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description.We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution.Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.","Learning to extract distinguishable keypoints from a proxy task, outlier rejection.This paper is devoted to the self-supervised learning of local features using Neural Guided RANSAC as an additional auxillary loss provider for improving descriptor interpolation." 538,Intrinsic Motivation for Encouraging Synergistic Behavior,"We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually.Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.Thus, we propose to incentivize agents to take actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent.We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy.While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken.We validate our approach in robotic bimanual manipulation tasks with sparse rewards; we find that our approach yields more efficient learning than both1) training with only the sparse reward and2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior.Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.","We propose a formulation of intrinsic motivation that is suitable as an exploration bias in multi-agent sparse-reward synergistic tasks, by encouraging agents to affect the world in ways that would not be achieved if they were acting individually.The paper focuses on using intrinsic motivation to improve the exploration process of reinforcement learning agents in tasks that require multi-agent to achieve." 
539,Policy Message Passing: A New Algorithm for Probabilistic Graph Inference,"A general graph-structured neural network architecture operates on graphs through two core components: complex enough message functions; a fixed information aggregation process.In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as stochastic sequential processes.The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges.We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.","An probabilistic inference algorithm driven by neural network for graph-structured modelsThis paper introduces policy message passing, a graph neural network with an inference mechanism that assigns messages to edges in a recurrent fashion, indicating competitive performance on visual reasoning tasks." 540,GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks,"Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts.Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks.We present a novel gradient normalization technique which automatically balances the multitask loss function by directly tuning the gradients to equalize task training rates.We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques.GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter.Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks.Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",We show how you can boost performance in a multitask network by tuning an adaptive multitask loss function that is learned through directly balancing network gradients.This work proposes a dynamic weight update scheme that updates weights for different task losses during training time by making use of the loss ratios of different tasks. 
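The GradNorm entry above (540) describes balancing tasks by tuning per-task loss weights so that gradient norms at a shared layer are equalized according to relative training rates. Below is a minimal PyTorch-style sketch of that idea; the names (`task_weights`, `shared_param`, `alpha`) and the exact weighting details are illustrative assumptions, not the authors' implementation.

```python
import torch

def gradnorm_step(task_losses, initial_losses, task_weights, shared_param, alpha=1.5):
    """One GradNorm-style reweighting step (illustrative sketch).

    task_losses:    list of scalar losses L_i(t) for the current batch
    initial_losses: list of floats L_i(0) recorded at the start of training
    task_weights:   learnable per-task weights w_i (requires_grad=True)
    shared_param:   a shared-layer parameter whose gradient norms are balanced
    alpha:          the single asymmetry hyperparameter mentioned in the abstract
    """
    # Gradient norm of each weighted task loss w.r.t. the shared parameter.
    norms = []
    for w_i, L_i in zip(task_weights, task_losses):
        g = torch.autograd.grad(w_i * L_i, shared_param,
                                retain_graph=True, create_graph=True)[0]
        norms.append(g.norm())
    norms = torch.stack(norms)

    # Relative inverse training rates r_i = (L_i(t)/L_i(0)) / mean_j(L_j(t)/L_j(0)).
    with torch.no_grad():
        loss_ratios = torch.tensor([float(L) / L0
                                    for L, L0 in zip(task_losses, initial_losses)])
        inv_rate = loss_ratios / loss_ratios.mean()
        target = norms.mean() * inv_rate ** alpha   # desired gradient norm per task

    # L1 distance between actual and target gradient norms; backpropagate this
    # only into the task weights, not into the network parameters.
    return (norms - target).abs().sum()
```

In training, the returned value would be backpropagated into the task weights alone, after which the weights are typically renormalized so they sum to the number of tasks.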
541,Do Deep Neural Networks for Segmentation Understand Insideness?,"Image segmentation aims at grouping pixels that belong to the same object or region.At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the ""insideness"" problem.Many Deep Neural Networks variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness?How do architectural choices affect the learning of these representations?In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. determining the inside of closed curves.We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve.Yet, only recurrent networks could learn these general solutions when the training enforced a specific ""routine"" capable of breaking down the long-range relationships.Our results highlights the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.","DNNs for image segmentation can implement solutions for the insideness problem but only some recurrent nets could learn them with a specific type of supervision.This paper introduces insideness to study semantic segmentation in deep learning era, and the results can help models generalize better." 542,Gradients as Features for Deep Representation Learning,"We address the challenging problem of deep representation learning--the efficient adaption of a pre-trained deep network to different tasks.Specifically, we propose to explore gradient-based features.These features are gradients of the model parameters with respect to a task-specific loss given an input sample.Our key innovation is the design of a linear model that incorporates both gradient features and the activation of the network.We show that our model provides a local linear approximation to a underlying deep model, and discuss important theoretical insight.Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients.Our method is evaluated across a number of representation learning tasks on several datasets and using different network architectures.We demonstrate strong results in all settings.And our results are well-aligned with our theoretical insight.","Given a pre-trained model, we explored the per-sample gradients of the model parameters relative to a task-specific loss, and constructed a linear model that combines gradients of model parameters and the activation of the model.This paper proposes to use the gradients of specific layers of convolutional networks as features in a linearized model for transfer learning and fast adaptation." 
543,Semi-supervised 3D Face Reconstruction with Nonlinear Disentangled Representations,"Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, which is also a typical ill-posed problem.In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models learned from limited scan data are often adopted to the reconstruction process.However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicity, expressions, poses, and lightings.Recent methods aim to learn a nonlinear parametric model using convolutional neural networks to regress the face shape and texture directly.However, the models were only trained on a dataset that is generated from a linear 3DMM.Moreover, the identity and expression representations are entangled in these models, which hurdles many facial editing applications.In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections.A novel center loss is introduced to make sure that different facial images from the same person have the same identity shape and albedo.Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer.Comprehensive experiments demonstrate that our model produces high-quality reconstruction compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions.","We train our face reconstruction model with adversarial loss in semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections.This paper proposes a semi-supervised and adversarial training process to exact nonlinear disentangled representations from a face image with loss functions, achieving state-of-the-art performance in face reconstruction." 
544,Conversation Generation with Concept Flow,"Human conversations naturally evolve around related entities and connected concepts, while they may also shift from topic to topic.This paper presents ConceptFlow, which leverages commonsense knowledge graphs to explicitly model such conversation flows for better conversation response generation.ConceptFlow grounds the conversation inputs to the latent concept space and represents the potential conversation flow as a concept flow along the commonsense relations.The concept flow is guided by a graph attention mechanism that models the possibility of the conversation evolving towards different concepts.The conversation response is then decoded using the encodings of both utterance texts and concept flows, integrating the learned conversation structure in the concept space.Our experiments on Reddit conversations demonstrate the advantage of ConceptFlow over previous commonsense-aware dialog models and fine-tuned GPT-2 models, while using far fewer parameters but with explicit modeling of conversation structures.",This paper presents ConceptFlow, which explicitly models the conversation flow in a commonsense knowledge graph for better conversation generation.The paper proposes a system for generating a single-turn response to a posted utterance in an open-domain dialog setting using the diffusion into the neighbors of the grounded concepts. 545,Flexible degrees of connectivity under synaptic weight constraints,"Biological neural networks face homeostatic and resource constraints that restrict the allowed configurations of connection weights.If a constraint is tight it defines a very small solution space, and the size of these constraint spaces determines their potential overlap with the solutions for computational tasks.We study the geometry of the solution spaces for constraints on neurons' total synaptic weight and on individual synaptic weights, characterizing the connection degrees that maximize the size of these solution spaces.We then hypothesize that the size of constraints' solution spaces could serve as a cost function governing neural circuit development.We develop analytical approximations and bounds for the model evidence of the maximum entropy degree distributions under these cost functions.We test these on a published electron microscopic connectome of an associative learning center in the fly brain, finding evidence for a developmental progression in circuit structure.","We examine the hypothesis that the entropy of solution spaces for constraints on synaptic weights (the ""flexibility"" of the constraint) could serve as a cost function for neural circuit development."
546,Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework,"Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks.There are two main paradigms for learning such representations: alignment, which maps different independently trained monolingual representations into a shared space, and joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly.In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks.Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent.Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches.Extensive experiments on various tasks demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction benchmark.We further show that our proposed framework can generalize to contextualized representations and achieves state-of-the-art results on the CoNLL cross-lingual NER benchmark.","We conduct a comparative study of cross-lingual alignment vs joint training methods and unify these two previously exclusive paradigms in a new framework. This paper compares approaches to bilingual lexicon induction and shows which method performs better on lexicon, induction, and NER and MT tasks." 547,Compression of Deep Neural Networks by combining pruning and low rank decomposition,"Large number of weights in deep neural networks make the models difficult to be deployed in low memory environments such as, mobile phones, IOT edge devices as well as ""inferencing as a service"" environments on the cloud.Prior work has considered reduction in the size of the models, through compression techniques like weight pruning, filter pruning, etc. or through low-rank decomposition of the convolution layers.In this paper, we demonstrate the use of multiple techniques to achieve not only higher model compression but also reduce the compute resources required during inferencing.We do filter pruning followed by low-rank decomposition using Tucker decomposition for model compression.We show that our approach achieves upto 57% higher model compression when compared to either Tucker Decomposition or Filter pruning alone at similar accuracy for GoogleNet.Also, it reduces the Flops by upto 48% thereby making the inferencing faster.",Combining orthogonal model compression techniques to get significant reduction in model size and number of flops required during inferencing.This paper proposes combining Tucker Decomposition with Filter pruning. 
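Entry 547 above combines filter pruning with Tucker decomposition of convolution kernels. The numpy sketch below illustrates that two-stage pipeline under simplifying assumptions (L1-norm filter scores, HOSVD-style factors over the two channel modes, arbitrary ranks and keep ratio); it is not the paper's implementation.

```python
import numpy as np

def prune_filters(conv_weight, keep_ratio=0.5):
    """Magnitude-based filter pruning: keep the filters with the largest L1 norms.
    conv_weight has shape (out_channels, in_channels, kH, kW)."""
    n_keep = max(1, int(conv_weight.shape[0] * keep_ratio))
    scores = np.abs(conv_weight).sum(axis=(1, 2, 3))
    keep = np.argsort(scores)[-n_keep:]
    return conv_weight[keep], keep

def unfold(tensor, mode):
    """Mode-n unfolding of the kernel tensor into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def partial_tucker_2mode(conv_weight, rank_out, rank_in):
    """HOSVD-style Tucker decomposition over the two channel modes only,
    leaving the spatial kernel dimensions untouched."""
    U0 = np.linalg.svd(unfold(conv_weight, 0), full_matrices=False)[0][:, :rank_out]
    U1 = np.linalg.svd(unfold(conv_weight, 1), full_matrices=False)[0][:, :rank_in]
    core = np.einsum('oihw,or,is->rshw', conv_weight, U0, U1)
    return core, U0, U1

# Pipeline from the abstract: prune filters first, then decompose the survivor.
w = np.random.randn(64, 32, 3, 3)
w_pruned, kept = prune_filters(w, keep_ratio=0.5)
core, U_out, U_in = partial_tucker_2mode(w_pruned, rank_out=16, rank_in=16)
```

The pruned-then-decomposed kernel is stored as a small core plus two thin factor matrices, which is where the reported reductions in parameters and FLOPs would come from.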
548,GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation,"This paper presents a new Graph Neural Network type using feature-wise linear modulation.Many standard GNN variants propagate information along the edges of a graph by computing messages based only on the representation of the source of each edge.In GNN-FiLM, the representation of the target node of an edge is additionally used to compute a transformation that can be applied to all incoming messages, allowing feature-wise modulation of the passed information.Results of experiments comparing different GNN architectures on three tasks from the literature are presented, based on re-implementations of baseline methods.Hyperparameters for all methods were found using extensive search, yielding somewhat surprising results: differences between baseline models are smaller than reported in the literature.Nonetheless, GNN-FiLM outperforms baseline methods on a regression task on molecular graphs and performs competitively on other tasks.",new GNN formalism + extensive experiments; showing differences between GGNN/GCN/GAT are smaller than thoughtThe paper proposes a new Graph Neural Network architecture that uses Feature-wise Linear Modulation to condition the source-to-target node message-passing based on the target node representation. 549,SIMULTANEOUS ATTRIBUTED NETWORK EMBEDDING AND CLUSTERING,"To deal simultaneously with both, the attributed network embedding and clustering, we propose a new model.It exploits both content and structure information, capitalising on their simultaneous use.The proposed model relies on the approximation of the relaxed continuous embedding solution by the true discrete clustering one.Thereby, we show that incorporating an embedding representation provides simpler and more interpretable solutions.Experiment results demonstrate that the proposed algorithm performs better, in terms of clustering and embedding, than the state-of-art algorithms, including deep learning methods devoted to similar tasks for attributed network datasets with different proprieties.",This paper propose a novel matrix decomposition framework for simultaneous attributed network data embedding and clustering.This paper proposes an algorithm to perform jointly attribute network embedding and clustering together. 
550,Image-guided Neural Object Rendering,"We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis.The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications.A core component of our work is the handling of view-dependent effects.Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object.As input data we are using an RGB video of the object.This video is used to reconstruct a proxy geometry of the object via multi-view stereo.Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.This warping assumes diffuse surfaces, in case of view-dependent effects, such as specular highlights, it leads to artifacts.To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects.Based on these estimations, we are able to convert observed images to diffuse images.These diffuse images can be projected into other views.In the target view, our pipeline reinserts the new view-dependent effects.To composite multiple reprojected images to a final output, we learn a composition network that outputs photo-realistic results.Using this image-guided approach, the network does not have to allocate capacity on remembering object appearance, instead it learns how to combine the appearance of captured images.We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.","We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis while considering view-dependent effects.This submission proposes a method to handle view-dependent effects in neural rendering, which improves the robustness of existing neural rendering methods." 551,Step Size Optimization,"This paper proposes a new approach for step size adaptation in gradient methods.The proposed method called step size optimization formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients.Then, the step size is optimized based on alternating direction method of multipliers.SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement.Furthermore, we also introduce stochastic SSO for stochastic learning environments.In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.","We propose an efficient and effective step size adaptation method for the gradient methods.A new step size adaptation in first-order gradient methods that establishes a new optimization problem with the first-order expansion of the loss function and regularization, where step size is treated as variable." 
552,Universality Theorems for Generative Models,"Despite the fact that generative models are extremely successful in practice, the theory underlying this phenomenon is only starting to catch up with practice.In this work we address the question of the universality of generative models: is it true that neural networks can approximate any data manifold arbitrarily well?We provide a positive answer to this question and show that under mild assumptions on the activation function one can always find a feedforward neural network that maps the latent space onto a set located within the specified Hausdorff distance from the desired data manifold.We also prove similar theorems for the case of multiclass generative models and cycle generative models, trained to map samples from one manifold to another and vice versa.","We shot that a wide class of manifolds can be generated by ReLU and sigmoid networks with arbitrary precision.This paper provides certain basic guarantees on when manifolds can be written as the image of a map approximated by a neural net, and stitches together theorems from manifold geometry and standard universal approximation results.This paper theoretically shows that neural-network-based generative models can approximate data manifolds, and proves that under mild assumptions neural networks can map a latent space onto a set close to the given data manifold within a small Hausdorff distance." 553,Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees,"Model-based reinforcement learning is considered to be a promising approach to reduce the sample complexity that hinders model-free RL.However, the theoretical understanding of such methods has been rather limited.This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees.We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward.The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model.The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification.Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization.Experiments demonstrate that SLBO achieves the state-of-the-art performance when only 1M or fewer samples are permitted on a range of continuous control benchmark tasks.",We design model-based reinforcement learning algorithms with theoretical guarantees and achieve state-of-the-art results on Mujuco benchmark tasks when one million or fewer samples are permitted.The paper proposed a framework to design model-based RL algorithms based on OFU that achieves SOTA performance on MuJoCo tasks. 
554,On Compressing U-net Using Knowledge Distillation,"We study the use of knowledge distillation to compress the U-net architecture.We show that, while standard distillation is not sufficient to reliably train a compressed U-net, introducing other regularization methods, such as batch normalization and class re-weighting, in knowledge distillation significantly improves the training process.This allows us to compress a U-net by over 1000x, i.e., to 0.1% of its original number of parameters, at a negligible decrease in performance.",We present additional techniques to use knowledge distillation to compress U-net by over 1000x.The authors introduced a modified distillation strategy to compress a U-net architecture by over 1000x while retaining an accuracy close to the original U-net. 555,Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates,"Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks.This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, both of which limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix, changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network.This requires calculating the Hessian around a mode, which makes learning tractable.In this paper, we introduce Hessian-free curvature estimates as an alternative to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector product around the surface that is relevant for the current task.Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work.","This paper provides an approach to address catastrophic forgetting via Hessian-free curvature estimates.The paper proposes an approximate Laplace's method in neural network training in the continual learning setting with a low space complexity."
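The Hessian-free curvature entry above (555) replaces an explicit Hessian with Hessian-vector products obtained by differentiating twice. A minimal PyTorch sketch of that mechanism follows, paired with an EWC-style quadratic penalty whose exact form and names are assumptions rather than the paper's algorithm.

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute (d^2 loss / d params^2) @ vec by double backprop, without ever
    materialising the Hessian. `params` is a list of tensors requiring grad and
    `vec` is a flat tensor with the same total number of elements."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_vec_dot = (flat_grad * vec).sum()
    hvp = torch.autograd.grad(grad_vec_dot, params)
    return torch.cat([h.reshape(-1) for h in hvp])

def curvature_penalty(loss_old_task, params, anchor, scale=1.0):
    """EWC-style penalty where curvature is probed only along the displacement
    from the old-task solution (one HVP instead of a full Hessian or Fisher).
    `loss_old_task` must still carry its graph; `anchor` is the flattened copy
    of the parameters at the end of the previous task (an assumed convention)."""
    flat = torch.cat([p.reshape(-1) for p in params])
    delta = flat - anchor                        # how far we moved from the old solution
    hvp = hessian_vector_product(loss_old_task, params, delta.detach())
    # Treat the HVP as a constant so the penalty's gradient acts like H @ delta.
    return 0.5 * scale * (delta * hvp.detach()).sum()
```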
556,Leveraging Simple Model Predictions for Enhancing its Performance,"There has been recent interest in improving performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory constrained settings as well as environmental considerations.In this paper, we propose a novel method SRatio that can utilize information from high performing complex models to reweight a training dataset for a potentially low performing simple model such as a decision tree or a shallow network enhancing its performance.Our method also leverages the per sample hardness estimate of the simple model which is not the case with the prior works which primarily consider the complex models confidences/predictions and is thus conceptually novel.Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work p, to other commonly used classifiers and incorporate this into our method.The benefit of these contributions is witnessed in the experiments where on 6 UCI datasets and CIFAR-10 we outperform competitors in a majority of the cases and tie for best performance in the remaining cases.In fact, in a couple of cases, we even approach the complex models performance.We also conduct further experiments to validate assertions and intuitively understand why our method works.Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.",Method to improve simple models performance given a (accurate) complex model.The paper proposes a means of improving the predictions of a low-capacity model which shows benefits over existing approaches. 557,Learning Deep Mean Field Games for Modeling Large Population Behavior,"We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space.A discrete time mean field game is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions.We achieve a synthesis of MFG and Markov decision processes by showing that a special MFG is reducible to an MDP.This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning.Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.",Inference of a mean field game (MFG) model of large population behavior via a synthesis of MFG and Markov decision processes.The authors deal with inference in models of collective behavior by using inverse reinforcement learning to learn the reward functions of agents in the model. 558,Generating Multi-Agent Trajectories using Programmatic Weak Supervision,"We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. 
Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulable way.We present a hierarchical framework that can effectively learn such sequential generative models. Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime.In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods.We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.",We blend deep generative models with programmatic weak supervision to generate coordinated multi-agent trajectories of significantly higher quality than previous baselines.Proposes multi-agent sequential generative models.The paper proposes training generative models that produce multi-agent trajectories using heuristic functions that label variables that would otherwise be latent in training data 559,Learning to Rank Learning Curves,"Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations.In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training.In contrast to existing methods, we consider this task as a ranking and transfer learning problem.We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves.We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture.In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model.","Learn to rank learning curves in order to stop unpromising training jobs early. Novelty: use of pairwise ranking loss to directly model the probability of improving and transfer learning across data sets to reduce required training data.The paper proposes a method to rank learning curves of neural networks that can model learning curves across different datasets, achieving higher speed-ups on image classification tasks." 
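Entry 559 above frames early termination as ranking partially observed learning curves with a pairwise loss. The PyTorch sketch below shows one way such a scorer and loss could look; the LSTM encoder, the logistic pairwise loss, and the toy shapes are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CurveScorer(nn.Module):
    """Scores a partially observed learning curve (one metric value per epoch)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, curve):                  # curve: (batch, timesteps, 1)
        _, (h, _) = self.rnn(curve)
        return self.head(h[-1]).squeeze(-1)    # one scalar score per curve

def pairwise_ranking_loss(score_a, score_b, a_better):
    """Logistic pairwise loss: push score_a above score_b whenever curve A
    eventually outperforms curve B (a_better is 1.0 in that case, else 0.0)."""
    return nn.functional.binary_cross_entropy_with_logits(score_a - score_b, a_better)

# Toy usage with random partial curves.
scorer = CurveScorer()
a, b = torch.rand(8, 10, 1), torch.rand(8, 10, 1)
labels = torch.randint(0, 2, (8,)).float()
loss = pairwise_ranking_loss(scorer(a), scorer(b), labels)
loss.backward()
```

Because only the ordering of configurations matters for early stopping, a pairwise objective of this kind sidesteps having to predict the final accuracy itself.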
560,Experience replay for continual learning,"Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster.Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions.This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills.We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience.While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution - that of using experience replay buffers for all past events - with a mixture of on- and off-policy learning, leveraging behavioral cloning.We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities.When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.","We show that, in continual learning settings, catastrophic forgetting can be avoided by applying off-policy RL to a mixture of new and replay experience, with a behavioral cloning loss.Proposes a particular variant of experience replay with behavior cloning as a method for continual learning." 561,Stochastic Prediction of Multi-Agent Interactions from Partial Observations,"We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents.Our method is based on a graph-structured variational recurrent neural network, which is trained end-to-end to infer the current state of the world, as well as to forecast future states.We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine.",We present a method which learns to integrate temporal information and ambiguous visual information in the context of interacting agents.The authors propose Graph VRNN which models the interaction of multiple agents by deploying a VRNN for each agentThis paper presents a graph neural network based architecture that is trained to locate and model the interactions of agents in an environment directly from pixels and show advantage of model for tracking tasks and forecasting agent locations. 
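The experience-replay entry above (560) relies on a bounded buffer over all past experience, with random discarding when storage is constrained. A minimal Python sketch of such a buffer using reservoir-style replacement is given below; the behavioral-cloning component of that method is not shown, and the class is an illustration rather than the authors' code.

```python
import random

class ReplayBuffer:
    """Bounded replay buffer; once full, incoming items randomly overwrite old
    ones so the buffer approximates a uniform sample of everything seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        elif random.random() < self.capacity / self.seen:   # reservoir sampling
            self.data[random.randrange(self.capacity)] = item

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))
```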
562,End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition,In this paper we study the problem of learning the weights of a deep convolutional neural network.We consider a network where convolutions are carried out over non-overlapping patches with a single kernel in each layer.We develop an algorithm for simultaneously learning all the kernels from the training data.Our approach dubbed Deep Tensor Decomposition is based on a rank-1 tensor decomposition.We theoretically investigate DeepTD under a realizable model for the training data where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted convolutional kernels.We show that DeepTD is data-efficient and provably works as soon as the sample size exceeds the total number of convolutional weights in the network.Our numerical experiments demonstrate the effectiveness of DeepTD and verify our theoretical findings.,"We consider a simplified deep convolutional neural network model. We show that all layers of this network can be approximately learned with a proper application of tensor decomposition.Provides theoretical guarantees for learning deep convolutional neural networks using rank-one tensor decomposition.This paper proposes a learning method for a restricted case of deep convolutional networks, where the layers are limited to the non-overlapping case and have only one output channel per layerAnalyzes the problem of learning a very special class of CNNs: each layers consists of a single filter, applied to non-overlapping patches of the input." 563,"The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks","Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy.However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively.Based on these results, we articulate the ""lottery ticket hypothesis:"" dense, randomly-initialized, feed-forward networks contain subnetworks that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations.The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations.We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10.Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.","Feedforward neural networks that can have weights pruned after training could have had the same weights pruned before trainingShows that there exists sparse subnetworks that can be trained from scratch with good generalization performance and proposes a unpruned, randomly initialized NNs contain subnetworks that can be trained from scratch with similar generalization accuracy.The paper examines the hypothesis that randomly initialized neural networks 
contain sub-networks that converge equally fast or faster and can reach the same or better classification accuracy" 564,The Difficulty of Training Sparse Neural Networks,"We investigate the difficulties of training sparse neural networks and make new observations about optimization dynamics and the energy landscape within the sparse regime.Recent work has shown that sparse ResNet-50 architectures trained on the ImageNet-2012 dataset converge to solutions that are significantly worse than those found by pruning.We show that, despite the failure of optimizers, there is a linear path with a monotonically decreasing objective from the initialization to the good solution.Additionally, our attempts to find a decreasing objective path from bad solutions to the good ones in the sparse subspace fail.However, if we allow the path to traverse the dense subspace, then we consistently find a path between the two solutions.These findings suggest that traversing extra dimensions may be needed to escape stationary points found in the sparse subspace.",In this paper we highlight the difficulty of training sparse neural networks by doing interpolation experiments in the energy landscape 565,Weight-space symmetry in neural network loss landscapes revisited,"Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers.In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima.In a network of hidden layers with neurons in layers, we construct continuous paths between equivalent global minima that lead through a ""permutation point"" where the input and output weight vectors of two neurons in the same hidden layer collide and interchange.We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape.We also find that a permutation point for the exchange of neurons transits into a flat high-dimensional plateau that enables all permutations of neurons in a given layer at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of such higher-order permutation points is much larger than the number of equivalent global minima -- at least by a polynomial factor. In two tasks, we demonstrate numerically with our path-finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations.",Weight-space symmetry in neural network landscapes gives rise to a large number of saddles and flat high-dimensional subspaces.The paper presented a low-loss method for studying the loss function with respect to parameters in a neural network from the perspective of weight-space symmetry.
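Entry 565 above builds on the permutation symmetry of hidden neurons. The small numpy check below illustrates the basic fact the construction relies on: permuting the hidden units of a one-hidden-layer network (rows of the first weight matrix together with the matching columns of the second) leaves the network function, and hence the loss, unchanged. Sizes and the activation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 5, 8, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2 = rng.normal(size=(d_out, d_hidden))
x = rng.normal(size=(10, d_in))

def forward(W1, b1, W2, x):
    h = np.tanh(x @ W1.T + b1)         # hidden activations
    return h @ W2.T                    # network output

perm = rng.permutation(d_hidden)       # relabel the hidden neurons
out_original = forward(W1, b1, W2, x)
out_permuted = forward(W1[perm], b1[perm], W2[:, perm], x)

# Outputs agree to machine precision: the two weight configurations are
# equivalent global minima connected by the permutation symmetry.
assert np.allclose(out_original, out_permuted)
```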
566,Critical initialisation in continuous approximations of binary neural networks,"The training of stochastic neural network models with binary weights and activations via continuous surrogate networks is investigated.We derive, using mean field theory, a set of scalar equations describing how input signals propagate through surrogate networks.The equations reveal that, depending on the choice of surrogate model, the networks may or may not exhibit an order-to-chaos transition, and the presence of depth scales that limit the maximum trainable depth.Specifically, in solving the equations for edge-of-chaos conditions, we show that surrogates derived using the Gaussian local reparameterisation trick have no critical initialisation, whereas deterministic surrogates based on analytic Gaussian integration do.The theory is applied to a range of binary neuron and weight design choices, such as different neuron noise models, allowing the categorisation of algorithms in terms of their behaviour at initialisation.Moreover, we predict theoretically, and confirm numerically, that common weight initialization schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance.This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to for deeper networks to be trainable.","signal propagation theory applied to continuous surrogates of binary nets; counter-intuitive initialisation; reparameterisation trick not helpful.The authors investigate the training dynamics of binary neural networks when using continuous surrogates, study what properties networks should have at initialization to best train, and provide concrete advice about stochastic weights at initialization.An in-depth exploration of stochastic binary networks, continuous surrogates, and their training dynamics, with insights on how to initialize weights for best performance." 567,Semi-Supervised Semantic Dependency Parsing Using CRF Autoencoders,"Semantic dependency parsing, which aims to find rich bi-lexical relationships, allows words to have multiple dependency heads, resulting in graph-structured representations.We propose an approach to semi-supervised learning of semantic dependency parsers based on the CRF autoencoder framework.Our encoder is a discriminative neural semantic dependency parser that predicts the latent parse graph of the input sentence.Our decoder is a generative neural model that reconstructs the input sentence conditioned on the latent parse graph.Our model is arc-factored and therefore parsing and learning are both tractable.Experiments show our model achieves significant and consistent improvement over the supervised baseline.","We propose an approach to semi-supervised learning of semantic dependency parsers based on the CRF autoencoder framework.This paper focuses on semi-supervised semantic dependency parsing, using the CRF autoencoder to train the model in a semi-supervised style, indicating effectiveness on low-resource labeled-data tasks."
568,DeFINE: Deep Factorized Input Word Embeddings for Neural Sequence Modeling,"For sequence models with large word-level vocabularies, a majority of network parameters lie in the input and output layers.In this work, we describe a new method, DeFINE, for learning deep word-level representations efficiently.Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods.DeFINE can be incorporated easily in new or existing sequence models.Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity.On WikiText-103, DeFINE reduces total parameters of Transformer-XL by half with minimal impact on performance.On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters.For machine translation, DeFINE improves a Transformer model by 2% while simultaneously reducing total parameters by 26%","DeFINE uses a deep, hierarchical, sparse network with new skip connections to learn better word embeddings efficiently. This paper describes a new method for learning deep word-level representations efficiently by using a hierarchical structure with skip-connections for the use of low dimensional input and output layers." 569,Reproducing Meta-learning with differentiable closed-form solvers,"In this paper, we present a reproduction of the paper of Bertinetto et al. [2019] ""Meta-learning with differentiable closed-form solvers"" as part of the ICLR 2019 Reproducibility Challenge.In successfully reproducing the most crucial part of the paper, we reach a performance that is comparable with or superior to the original paper on two benchmarks for several settings.We evaluate new baseline results, using a new dataset presented in the paper.Yet, we also provide multiple remarks and recommendations about reproducibility and comparability. After we brought our reproducibility work to the authors’ attention, they have updated the original paper on which this work is based and released code as well.Our contributions mainly consist in reproducing the most important results of their original paper, in giving insight in the reproducibility and in providing a first open-source implementation.",We successfully reproduce and give remarks on the comparison with baselines of a meta-learning approach for few-shot classification that works by backpropagating through the solution of a closed-form solver. 
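The reproduction entry above (569) concerns meta-learning with a differentiable closed-form ridge-regression solver. The PyTorch sketch below shows a generic version of such a head, solved in closed form inside the graph so gradients flow back to the feature extractor; the episode sizes, feature dimension, and regularisation strength are illustrative, and this is not the reproduced paper's code.

```python
import torch

def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
    """Solve W = (X^T X + lam*I)^{-1} X^T Y on the support set in closed form,
    then apply it to the query features; everything stays differentiable."""
    d = support_feats.shape[1]
    gram = support_feats.T @ support_feats + lam * torch.eye(d)
    W = torch.linalg.solve(gram, support_feats.T @ support_onehot)
    return query_feats @ W                      # query logits

# Toy episode: 5-way, 5-shot support with 64-dimensional features.
feats_s = torch.randn(25, 64, requires_grad=True)
labels_s = torch.nn.functional.one_hot(torch.arange(5).repeat(5), 5).float()
feats_q = torch.randn(15, 64)
logits = ridge_head(feats_s, labels_s, feats_q)
loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, 5, (15,)))
loss.backward()     # gradients reach the (here, toy) support features
```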
570,Dynamic parameter reallocation improves trainability of deep convolutional networks,"Network pruning has emerged as a powerful technique for reducing the size of deep neural networks.Pruning uncovers high-performance subnetworks by taking a trained dense network and gradually removing unimportant connections.Recently, alternative techniques have emerged for training sparse networks directly without having to train a large dense model beforehand, thereby achieving small memory footprints during both training and inference.These techniques are based on dynamic reallocation of non-zero parameters during training.Thus, they are in effect executing a training-time search for the optimal subnetwork.We investigate one of the most recent of these techniques and conduct additional experiments to elucidate its behavior in training sparse deep convolutional networks.Dynamic parameter reallocation converges early during training to a highly trainable subnetwork.We show that neither the structure nor the initialization of the discovered high-performance subnetwork is sufficient to explain its good performance.Rather, it is the dynamics of parameter reallocation that are responsible for successful learning.Dynamic parameter reallocation thus improves the trainability of deep convolutional networks, playing a similar role to overparameterization, without incurring the memory and computational cost of the latter.","Dynamic parameter reallocation enables the successful direct training of compact sparse networks, and it plays an indispensable role even when we know the optimal sparse network a priori" 571,TOWARDS ROBOT VISION MODULE DEVELOPMENT WITH EXPERIENTIAL ROBOT LEARNING,"In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques.The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks.The second and the third are supervised networks that learn basic concepts of spatial locality and quantity respectively using Convolutional Neural Networks.The three thrusts together are based on the approach of Experiential Robot Learning, introduced in a previous publication.While the results are unripe for implementation, we believe they constitute a stepping stone towards autonomous development of robotic visual modules.",Three thrusts serving as stepping stones for robot experiential learning of a vision module.Investigates the performance of existing image classifiers and object detectors. 572,How transferable are features in convolutional neural network acoustic models across languages?,"Characterization of the representations learned in intermediate layers of deep networks can provide valuable insight into the nature of a task and can guide the development of well-tailored learning strategies.Here we study convolutional neural network-based acoustic models in the context of automatic speech recognition.Adapting a method proposed by Yosinski et al.
[2014], we measure the transferability of each layer between German and English to assess their language-specificity.We observe three distinct regions of transferability: the first two layers are entirely transferable between languages, layers 2–8 are also highly transferable but we find evidence of some language specificity, and the subsequent fully connected layers are more language-specific but can be successfully finetuned to the target language.To further probe the effect of weight freezing, we performed follow-up experiments using freeze training [Raghu et al., 2017].Our results are consistent with the observation that CNNs converge bottom-up during training and demonstrate the benefit of freeze training, especially for transfer learning.","All but the first two layers of our CNN-based acoustic models demonstrated some degree of language-specificity but freeze training enabled successful transfer between languages.The paper measures the transferability of features for each layer in CNN-based acoustic models across languages, concluding that AMs trained with the freeze-training technique outperformed other transferred models." 573,Diffusing Policies: Towards Wasserstein Policy Gradient Flows,"Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence.We derive policy gradients where the change in policy is limited to a small Wasserstein distance.This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation.We show that in the small steps limit with respect to the Wasserstein distance, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result.This means that policies undergo diffusion and advection, concentrating near actions with high reward.This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.","Linking Wasserstein trust-region entropic policy gradients and the heat equation.The paper explores the connections between reinforcement learning and the theory of quadratic optimal transport.The authors studied policy gradient with change of policies limited by a trust region of Wasserstein distance in the multi-armed bandit setting, showing that in the small steps limit, the policy dynamics are governed by the heat equation (Fokker-Planck equation)."
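For the Diffusing Policies entry above (573), the Fokker-Planck behaviour it describes can be written out explicitly. The following is a hedged sketch of the standard Jordan-Kinderlehrer-Otto form, where the reward enters as a negative potential and the entropy-regularisation strength plays the role of a temperature; the notation (r, beta) is chosen here for illustration and is not necessarily the paper's.

```latex
% Wasserstein gradient flow of the entropy-regularised objective
% F[\rho] = \int V(a)\,\rho(a)\,da + \beta^{-1} \int \rho(a)\log\rho(a)\,da,
% with the reward as a negative potential V(a) = -r(a):
\[
\partial_t \rho_t(a)
  \;=\; \nabla_a \!\cdot\! \big( \rho_t(a)\, \nabla_a V(a) \big)
  \;+\; \beta^{-1} \Delta_a \rho_t(a),
\qquad V(a) = -r(a).
\]
% The first term is advection toward high-reward actions, the second is the
% diffusion (heat-equation) term described in the abstract.
```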
574,Softmax Supervision with Isotropic Normalization,"The softmax function is widely used to train deep neural networks for multi-class classification.Despite its outstanding performance in classification tasks, the features derived from softmax supervision are usually sub-optimal in scenarios where Euclidean distances apply in the feature space.To address this issue, we propose a new loss, dubbed the isotropic loss, which regularizes the overall distribution of data points to approach an isotropic normal distribution.Combined with the vanilla softmax, we formalize a novel criterion called the isotropic softmax, or isomax for short, for supervised learning of deep neural networks.By virtue of the isomax, intra-class features are penalized by the isotropic loss while inter-class distances are well preserved by the original softmax loss.Moreover, the isomax loss does not require any additional modifications to the network, mini-batches, or the training process.Extensive experiments on classification and clustering are performed to demonstrate the superiority and robustness of the isomax loss.",The discriminative capability of softmax for learning feature vectors of objects is effectively enhanced by virtue of isotropic normalization of the global distribution of data points. 575,Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP,"A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient.Recently, Jin et al. proposed a Q-learning algorithm with a UCB exploration policy and proved that it has a nearly optimal regret bound for finite-horizon episodic MDPs.In this paper, we adapt Q-learning with a UCB exploration bonus to infinite-horizon MDPs with discounted rewards, accessing a generative model.We bound the sample complexity of our algorithm; this bound improves on the previously best known result in this setting, achieved by delayed Q-learning, and matches the lower bound up to logarithmic factors in the relevant problem parameters.","We adapt Q-learning with a UCB exploration bonus to infinite-horizon MDPs with discounted rewards without accessing a generative model, improving on the previously best known result.This paper considered a Q-learning algorithm with a UCB exploration policy for infinite-horizon MDPs."
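As a rough illustration of the idea in record 574 of combining the softmax loss with an isotropy penalty on the feature distribution, the sketch below adds a term that pushes batch features toward zero mean and identity covariance; this is one plausible instantiation written as a PyTorch function, not the authors' exact isotropic loss.

import torch
import torch.nn.functional as F

def isomax_like_loss(features, logits, targets, lam=0.1):
    # Vanilla softmax (cross-entropy) term keeps inter-class separation.
    ce = F.cross_entropy(logits, targets)
    # Isotropy penalty (assumed form): push the batch feature distribution
    # toward zero mean and identity covariance.
    z = features - features.mean(dim=0, keepdim=True)
    cov = z.t() @ z / (features.size(0) - 1)
    eye = torch.eye(cov.size(0), device=cov.device)
    iso = features.mean(dim=0).pow(2).sum() + (cov - eye).pow(2).sum()
    return ce + lam * iso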
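For record 575, the following is a minimal tabular sketch of Q-learning with a UCB-style exploration bonus in a discounted MDP; the bonus form, step-size schedule, optimistic initialization, and the hypothetical env interface (reset/step) are illustrative assumptions, not the exact quantities analysed in the paper.

import numpy as np

def q_learning_ucb(env, n_states, n_actions, gamma=0.99, c=1.0, steps=100_000):
    v_max = 1.0 / (1.0 - gamma)                       # upper bound on discounted return
    Q = np.full((n_states, n_actions), v_max)         # optimistic initialization
    counts = np.zeros((n_states, n_actions))
    s = env.reset()
    for _ in range(steps):
        a = int(np.argmax(Q[s]))                      # act greedily w.r.t. optimistic Q
        s_next, r, done = env.step(a)
        counts[s, a] += 1
        n = counts[s, a]
        alpha = (v_max + 1) / (v_max + n)             # assumed step-size schedule
        bonus = c * np.sqrt(np.log(steps) / n)        # UCB-style exploration bonus
        target = r + bonus + gamma * np.max(Q[s_next])
        Q[s, a] = min((1 - alpha) * Q[s, a] + alpha * target, v_max)
        s = env.reset() if done else s_next
    return Q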
576,Learning to solve the credit assignment problem,"Backpropagation is driving today's artificial neural networks.However, despite extensive research, it remains unclear whether the brain implements this algorithm.Among neuroscientists, reinforcement learning algorithms are often seen as a realistic alternative: neurons can introduce random changes and use unspecific feedback signals to observe their effect on the cost, and thus approximate their gradient.However, the convergence rate of such learning scales poorly with the number of neurons involved.Here we propose a hybrid learning approach.Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide.We prove that our approach converges to the true gradient for certain classes of networks.In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient and can match the performance of gradient-based learning.Learning feedback weights provides a biologically plausible mechanism for achieving good performance, without the need for precise, pre-specified learning rules.","Perturbations can be used to train feedback weights to learn in fully connected and convolutional neural networks.This paper proposes a method that addresses the ""weight transport"" problem by estimating the weights for the backward pass using a noise-based estimator." 577,Universality Patterns in the Training of Neural Networks,"This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses evaluated for models arising during a training run.This pattern is universal in the sense that this one-to-one relationship is identical across architectures, algorithms, and training loss functions.","We identify some universal patterns (i.e., holding across architectures) in the behavior of different surrogate losses (CE, MSE, 0-1 loss) while training neural networks and present supporting empirical evidence."
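To illustrate the idea in record 576 of learning feedback weights from a noise-based estimate of the gradient, here is a toy NumPy sketch for a single hidden layer: the hidden activity is perturbed, the resulting change in loss gives a crude estimate of dL/dh, and the feedback weights are trained so that the feedback signal matches that estimate; the perturbation estimator, update rules, and layer sizes are all illustrative assumptions, not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (20, 10))         # forward weights, input -> hidden
W2 = rng.normal(0, 0.1, (1, 20))          # forward weights, hidden -> output
B = rng.normal(0, 0.1, (20, 1))           # learned feedback weights (replace W2.T)

def loss(y, t):
    return 0.5 * float(np.sum((y - t) ** 2))

x, t = rng.normal(size=(10, 1)), np.array([[1.0]])
sigma = 1e-3                              # perturbation scale
for _ in range(1000):
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - t                                         # output error
    # Noise-based estimate of dL/dh: perturb h, correlate loss change with noise.
    xi = sigma * rng.normal(size=h.shape)
    dL = loss(W2 @ (h + xi), t) - loss(y, t)
    g_est = (dL / sigma**2) * xi                      # ~ dL/dh up to sampling noise
    # Train the feedback weights so that B @ e approximates the estimated gradient.
    B -= 0.01 * (B @ e - g_est) @ e.T
    # Use the learned feedback signal in place of W2.T for the forward updates.
    delta_h = (B @ e) * (1 - h ** 2)
    W2 -= 0.05 * e @ h.T
    W1 -= 0.05 * delta_h @ x.T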