id | paper_text | review
---|---|---
iclr_2018_rJGY8GbR- | A recent line of work has studied the statistical properties of neural networks to great success from a mean field theory perspective, making and verifying very precise predictions of neural network behavior and test time performance. In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm). The first method is width variation (WV), i.e. varying the widths of layers as a function of depth. We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network. The second method is variance variation (VV), i.e. changing the initialization variances of weights and biases over depth. We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from exp(Θ(√L)) and exp(Θ(L)) respectively to constant Θ(1). A complete phase diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms. In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors. Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in Yang and Schoenholz (2017)), a measure of expansion in a random neural network. Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles. | Mean field theory is an approach to analysing complex systems where correlations between highly dependent random variables are ignored, thus making the problem analytically tractable. It is hoped that analytical insights gained in this idealised setting might translate back to the original (and far messier) problem. The authors use a mean field theory approach to study how varying certain network hyperparameters with depth can affect gradient and activation statistics. A correlation between the behaviour of these statistics and training performance on MNIST is noted.
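For readers outside this subfield, a representative example of the kind of quantity such a mean field analysis tracks (taken from prior work in this line, e.g. Poole et al. 2016 and Schoenholz et al. 2017, and not necessarily the exact recursion used in this paper) is the depth evolution of the pre-activation variance q^l of a random fully connected layer:

```latex
% Illustrative mean-field recursion from prior work (notation mine):
q^{l} = \sigma_w^2 \, \mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\left[ \phi\!\left(\sqrt{q^{l-1}}\, z\right)^{2} \right] + \sigma_b^2,
% where \phi is the activation and \sigma_w^2, \sigma_b^2 are the weight/bias
% initialization variances.
```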
As someone asked to conduct an 'emergency' review of this paper, I would have greatly appreciated the authors making more of an effort to present their results clearly. Some general comments in this regard:
Clarity issues:
- the authors appear to have ignored the ICLR style guidelines
- the references are all written in green, making them difficult to read
- figures are either missing color maps or make a poor choice of colors
- the figure captions are difficult to understand in isolation from the main text
- the authors themselves appear to muddle their 'zigs' and 'zags' (first line of discussion)
Now to get to the actual content of the paper. The authors do not properly place their work in context. Mean field theory has been studied in the context of neural networks at least since the 80's. Entire books have been written on the statistical mechanics of neural networks. It seems wrong that the authors only cite papers on this matter going back to 2016.
With that said, the main thrust of the paper is very interesting. The authors derive recurrence relations for mean activations and gradients. They show how scaling layer width and initialisation variance with depth can better control the propagation of these means. The results of their calculations appear to match their random network simulations, and this part of the work seems strong.
What is not clear is what effect we should expect these quantities to have on learning? The authors claim there is a tradeoff between expressivity and exploding gradients. This seems quite speculative since it is not clear to me what effect either of these things will have on training. For one, how expressive does a model need to be to correctly classify MNIST? And are exploding gradients necessarily a bad thing? Provided they do not reach infinity, can we not just choose a smaller learning rate?
I'm open to reevaluating the review if the issues of clarity and missing literature review are fixed. |
iclr_2018_BkabRiQpb | Published as a conference paper at ICLR 2018 CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION
Social dilemmas, where mutual cooperation can lead to high payoffs but participants face incentives to cheat, are ubiquitous in multi-agent interaction. We wish to construct agents that cooperate with pure cooperators, avoid exploitation by pure defectors, and incentivize cooperation from the rest. However, often the actions taken by a partner are (partially) unobserved or the consequences of individual actions are hard to predict. We show that in a large class of games good strategies can be constructed by conditioning one's behavior solely on outcomes (i.e. one's past rewards). We call this consequentialist conditional cooperation. We show how to construct such strategies using deep reinforcement learning techniques and demonstrate, both analytically and experimentally, that they are effective in social dilemmas beyond simple matrix games. We also show the limitations of relying purely on consequences and discuss the need for understanding both the consequences of and the intentions behind an action. | This paper studies learning to play two-player general-sum games with state (Markov games) with imperfect information. The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains. Generally, in repeated prisoner's dilemma, one can punish one's opponent for noncooperation. In this paper, they design an approach to learn to cooperate in a more complex game, like a hybrid pong meets prisoner's dilemma game. This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view.
From a game-theoretic point of view, the paper begins with a game-theoretic analysis of a cooperative strategy for these Markov games with imperfect information. It is basically a straightforward generalization of the idea of punishing, which is common in "folk theorems" from game theory, to give a particular equilibrium for cooperating in Markov games. Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do. Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be "the natural" solution but in general it is far from clear why all players would want to maximize the total payoff.
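For concreteness, a minimal sketch of the kind of outcome-conditioned ("consequentialist") switching rule being discussed, written as plain Python; the policies, baseline and thresholds here are my own illustrative stand-ins, not the authors' construction:

```python
class ConsequentialistAgent:
    """Cooperate by default; punish for a fixed number of steps whenever recent
    rewards fall below what mutual cooperation should yield. Illustrative sketch:
    `cooperate_policy`/`punish_policy` are assumed pretrained policies and
    `cooperative_baseline` is a hypothetical per-step reward threshold."""

    def __init__(self, cooperate_policy, punish_policy, cooperative_baseline,
                 window=50, punish_steps=20):
        self.cooperate_policy = cooperate_policy
        self.punish_policy = punish_policy
        self.cooperative_baseline = cooperative_baseline
        self.window = window
        self.punish_steps = punish_steps
        self.rewards = []
        self.punish_left = 0

    def observe(self, reward):
        self.rewards.append(reward)

    def act(self, obs):
        if self.punish_left > 0:
            self.punish_left -= 1
            return self.punish_policy(obs)
        recent = self.rewards[-self.window:]
        if recent and sum(recent) / len(recent) < self.cooperative_baseline:
            # Outcomes alone suggest the partner is defecting: switch to punishment.
            self.punish_left = self.punish_steps
            return self.punish_policy(obs)
        return self.cooperate_policy(obs)
```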
The paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling. It is perhaps interesting that one can make deep learning learn to cooperate with imperfect information, but one could have illustrated the game theory equally well with other techniques.
In contrast, the paper "Coco-Q: Learning in Stochastic Games with Side Payments" by Sodomka et al. is an example where they took a well-motivated game theoretic cooperative solution concept and explored how to implement that with reinforcement learning. I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.
It should also be noted that I was asked to review another ICLR submission entitled "MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING" which amazingly introduced the same "Pong Player’s Dilemma" game as in this paper.
Notice the following suspiciously similar paragraphs from the two papers:
From "MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING":
We also look at an environment where strategies must be learned from raw pixels. We use the method of Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a point they receive a reward of 1 and the other player receives −2. We refer to this game as the Pong Player’s Dilemma (PPD). In the PPD the only (jointly) winning move is not to play. However, a fully cooperative agent can be exploited by a defector.
From "CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION":
To demonstrate this we follow the method of Tampuu et al. (2017) to construct a version of Atari Pong which makes the game into a social dilemma. In what we call the Pong Player’s Dilemma (PPD) when an agent scores they gain a reward of 1 but the partner receives a reward of −2. Thus, in the PPD the only (jointly) winning move is not to play, but selfish agents are again tempted to defect and try to score points even though this decreases total social reward. We see that CCC is a successful, robust, and simple strategy in this game. |
iclr_2018_rJTutzbA- | ON THE INSUFFICIENCY OF EXISTING MOMENTUM SCHEMES FOR STOCHASTIC OPTIMIZATION
Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, "fast gradient" methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanation for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of their parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances. These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of mini-batching. Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. The code implementing the ASGD algorithm can be found here. | I like the idea of the paper. Momentum and acceleration have proved to be very useful both in deterministic and stochastic optimization. It is natural that it is understood better in the deterministic case. However, this comes quite naturally, as the deterministic case is a bit easier ;) Indeed, just recently people have started looking at acceleration in stochastic formulations. There is already accelerated SVRG, Jain et al. 2017, or even Richtarik et al. (arXiv: 1706.01108, arXiv:1710.10737).
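For reference, the textbook stochastic forms of the two momentum methods under discussion (standard formulations, not copied from the paper) are:

```latex
% Heavy ball (HB): step size \eta, momentum \mu, stochastic gradient \hat{g}
w_{t+1} = w_t - \eta\, \hat{g}(w_t) + \mu\, (w_t - w_{t-1}),
% Nesterov's accelerated gradient (NAG), a common stochastic variant:
v_{t+1} = w_t - \eta\, \hat{g}(w_t), \qquad w_{t+1} = v_{t+1} + \mu\, (v_{t+1} - v_t).
```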
I would somehow split the contributions into two parts:
1) Theoretical contribution: Proposition 3 (+ proofs in appendix)
2) Experimental comparison.
I like the experimental part (it is written clearly, and all experiments are described in a lot of detail).
I really like Proposition 3, as this is the most important contribution of the paper. (Indeed, Algorithms 1 and 2 are for reference and Algorithm 3 was basically described in Jain et al., right?)
Significance: I think that this paper is important because it shows that the classical HB method cannot achieve acceleration in a stochastic regime.
Clarity: It was easy to read the paper and understand it.
Few minor comments:
1. Page 1, Paragraph 1: It is not known only for smooth problems; it is also true for simple non-smooth problems (see e.g. https://link.springer.com/article/10.1007/s10107-012-0629-5)
2. In the abstract, Line 6: not completely true; there is an accelerated SVRG method, i.e. the gradient is not exact there. Also see Recht (https://arxiv.org/pdf/1701.03863.pdf) or Richtarik et al. (arXiv: 1706.01108, arXiv:1710.10737) for some examples where acceleration can be proved when you do not have an exact gradient.
3. Page 2, block "4" missing "." in "SGD We validate"....
4. Section 2. I think you are missing 1/2 in the definition of the function. Otherwise, you would have a constant "2" in the Hessian, i.e. H = 2 E[xx^T]. So please define the function as f_i(w) = 1/2 (y_i - <w, x_i>)^2. The same applies to Section 3.
5. Page 6, last line, .... was downloaded from "pre". I know it is a link, but when printed, it looks weird. |
iclr_2018_By-7dz-AZ | A FRAMEWORK FOR THE QUANTITATIVE EVALUATION OF DISENTANGLED REPRESENTATIONS
Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation metric. While various desiderata have been implied in recent definitions, it is currently unclear what exactly makes one disentangled representation better than another. In this work we propose a framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available. Three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis. To illustrate the appropriateness of the framework, we employ it to compare quantitatively the representations learned by recent state-of-the-art models. | The paper addresses the problem of devising a quantitative benchmark to evaluate the capability of algorithms to disentangle factors of variation in the data.
*Quality*
The problem addressed is surely relevant in general terms. However, the contributed framework did not account for previously proposed metrics (such as equivariance, invariance and equivalence). Within the experimental results, only two methods are considered: although Info-GAN is a reliable competitor, PCA seems a little too basic to compete against. The choice of using noise-free data only is a limiting constraint (in [Chen et al. 2016], Info-GAN is applied to real-world data).
Finally, in order to corroborate the quantitative results, the authors should have reported some visual experiments in order to assess whether a change in c_j really corresponds to a change in the corresponding factor of variation z_i according to the learnt monomial matrix.
*Clarity*
The explanation of the theoretical framework is not clear. In fact, Figure 1 is straightforward in identifying disentanglement and completeness as a deviation from an ideal bijective mapping. But, then, the authors fail to clarify how the definitions of D_i and C_j translate this requirement into math.
Also, the criterion of informativeness of Section 2 is split into two sub-criteria in Section 3.3, namely test set NRMSE and Zero-Shot NRMSE: this shift needs to be smoothed and better explained, possibly by introducing it in Section 2.
*Originality*
The paper does not make it possible to judge whether the three proposed criteria are original or not with respect to the previously proposed ones of [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015].
*Significance*
The significance of the proposed evaluation framework is not fully clear. The initial assumption of considering factors of variation related to graphics-generated data undermines the relevance of the work. Actually, the authors only consider synthetic (noise-free) data belonging to one class only, thus not including the factors of variation related to noise and/or different classes.
PROS:
The problem faced by the authors is interesting.
CONS:
The criteria of disentanglement, informativeness & completeness are not fully clear as they are presented.
The proposed criteria are not compared with previously proposed ones - equivariance, invariance and equivalence [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015]. Thus, it is not possible to elicit from the paper to what extent they are novel or how they are related.
The dataset considered is noise-free and contains one class only. Thus, several factors of variation are excluded a priori and this undermines the significance of the analysis.
The experimental evaluation only considers two methods, comparing Info-GAN, a state-of-the-art method, with a very basic PCA.
**FINAL EVALUATION**
The reviewer rates this paper with a weak reject due to the following points.
1) The novel criteria are not compared with existing ones [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015].
2) There are two flaws in the experimental validation:
2.1) The number of methods in comparison (InfoGAN and PCA) is limited.
2.2) A synthetic dataset is only considered.
The reviewer is favorable to raising the rating towards acceptance if points 1 and 2 are fixed.
**EVALUATION AFTER AUTHORS' REBUTTAL**
The reviewer has read the responses provided by the authors during the rebuttal period. In particular, with respect to the highlighted points 1 and 2, point 1 has been thoroughly answered and the novelty with respect to previous work is now clearly stated in the paper. Although the same level of clarification has not been reached for point 2, the proposed framework (although still limited in relevance due to the lack of more realistic settings) can be useful for the community as a benchmark to verify the level of disentanglement that newly proposed deep architectures can achieve. Finally, by also taking into account the positive evaluation provided by the fellow reviewers, the rating of the paper has been raised towards acceptance. |
iclr_2018_Hy8hkYeRb | It has been argued that the brain is a prediction machine that continuously learns how to make better predictions about the stimuli received from the external environment. It builds a model of the world around us and uses this model to infer the external stimulus. Predictive coding has been proposed as a mechanism through which the brain might be able to build such a model of the external environment. However, it is not clear how predictive coding can be used to build deep neural network models of the brain while complying with the architectural constraints imposed by the brain. In this paper, we describe an algorithm to build a deep generative model using predictive coding that can be used to infer latent representations about the stimuli received from the external environment. Specifically, we used predictive coding to train a deep neural network on real-world images in an unsupervised learning paradigm. To understand the capacity of the network with regards to modeling the external environment, we studied the latent representations generated by the model on images of objects that are never presented to the model during training. Despite the novel features of these objects, the model is able to infer the latent representations for them. Furthermore, the reconstructions of the original images obtained from these latent representations preserve the important details of these objects. | The paper "A Deep Predictive Coding Network for Learning Latent Representations" considers learning of a generative neural network. The network learns unsupervised using a predictive coding setup. A subset of the CIFAR-10 image database (1000 images of horses and ships) is used for training. Then images generated using the latent representations inferred on these images, on translated images, and on images of other objects are shown. It is then claimed that the generated images show that the network has learned good latent representations.
I have some concerns about the paper, maybe most notably about the experimental results and the conclusions drawn from them. The numerical experiments are motivated as a way to "understand the capacity of the network with regards to modeling the external environment" (abstract). And it is concluded in the final three sentences of the paper that the presented network "can infer effective latent representations for images of other objects" (i.e., of objects that have not been used for training); and further, that "in this regards, the network is better than most existing algorithms [...]".
I expected the numerical experiments to show results instructive about what representations or what abstractions are learned in the different layers of the network using the learning algorithm and objectives suggested. Also some at least quantifiable (if not benchmarked) outcomes should have been presented given the rather strong claims/conclusions in abstract and discussion/conclusion sections. As a matter of fact, all images shown (including those in the appendix) are blurred versions of the original images, except for one single image: Fig. 4 last row, 2nd image (and that is not commented on). If these are the generated images, then some reconstruction is done by the network, fine, but this is also not surprising as the network was told to do so by the used objective function. What precisely do we learn here? I would have expected the presentation of experimental results to facilitate the development of an understanding of the computations going on in the trained network. How can the reader conclude any functioning from these images? Using the right objective function, reconstructions can also be obtained using random (not learned) generative fields and relatively basic models. The claim that image reconstruction for shifted images or new images is evidence for a sophisticated latent representation is, to my mind, not at all substantiated here. What would be a good measure for an "effective latent representation" that substantiates the claims made? The reconstruction of unseen images is claimed central but as far as I could see, Figures 2, 3, and 4 are not even referred to in the text, nor is there any objective measure discussed. Studying the relation between predictive coding and deep learning makes sense, but I do not come to the same (strong) conclusions as the author(s) by considering the experimental results - and I do not see evidence for a sophisticated latent representation learned by the network. I am not saying that there is none, but I do not see how the presented experimental results show evidence for this.
Furthermore, the authors stress that a main distinguishing feature of their approach (top of page 3) is that in their network information flows from latent space to observed space (e.g. in contrast to CNNs). That is a true statement but also one which is true for basically all generative models, e.g., standard directed graphical models such as wake-sleep approaches (Hinton et al., 1995), deep SBNs and more recent generative models used in GANs (Goodfellow et al., 2014). Any of these references would have made a lot of sense.
With my evaluation I do not want to be discouraging about the general approach. But I cannot at all give a good evaluation given the current experimental results (unless substantial new evidence which makes me evaluate these results differently is provided in a discussion).
Minor:
- no legend for Fig. 1
- notes -> noted
have focused |
iclr_2018_S1TgE7WR- | COVARIANT COMPOSITIONAL NETWORKS FOR LEARNING GRAPHS
Most existing neural networks for learning graphs address permutation invariance by conceiving of the network as a message passing scheme, where each node sums the feature vectors coming from its neighbors. We argue that this imposes a limitation on their representation power, and instead propose a new general architecture for representing objects consisting of a hierarchy of parts, which we call covariant compositional networks (CCNs). Here, covariance means that the activation of each neuron must transform in a specific way under permutations, similarly to steerability in CNNs. We achieve covariance by making each activation transform according to a tensor representation of the permutation group, and derive the corresponding tensor aggregation rules that each neuron must implement. Experiments show that CCNs can outperform competing methods on standard graph learning benchmarks. | Thank you for your contribution to ICLR. The paper covers a very interesting topic and presents some though-provoking ideas.
The paper introduces "covariant compositional networks" with the purpose of learning graph representations. An example application also covered in the experimental section is graph classification.
Given a finite set S, a compositional network is simply a partially ordered set P where each element of P is a subset of S and where P contains all sets of cardinality 1 and the set S itself. Unfortunately, the presentation of the approach is extremely verbose and introduces old concepts (e.g., partially ordered set) under new names. The basic idea (which is not new) of this work is that we need to impose some sort of hierarchical order on the nodes of the graph so as to learn hierarchical feature representations. Moreover, the hierarchical order of the nodes should be invariant to valid permutations of the nodes, that is, two isomorphic graphs should have the same hierarchical order on their nodes and the same feature representations. Since this is the case for graph embedding methods that collect feature representations from their neighbors in the graph (and where the feature aggregation functions are symmetric) it makes sense that "compositional networks" generalize graph convolutional networks (and the more general message passing neural networks framework).
The most challenging problem, however, namely the problem of finding a concrete and suitable permutation invariant hierarchical decomposition of the nodes plus some aggregation/pooling functions to compute the feature representations is not addressed in sufficient detail. The paper spends a lot of time on some theoretical definitions and (trivial) proofs but then fails to make the connection to an approach that works in practice. The description of the experiments and which compositional network is chosen and how it is chosen seems to be missing. The only part hinting at the model that was actually used in the experiments is the second paragraph of the section 'Experimental Setup', consisting of one long sentence that is incomprehensible to me.
Instead of spending a lot of effort on the definitions and (somewhat trivial) propositions in the first half of the paper, the authors should spend much more time on detailing the experiments and the actual model that they used. In an effort to make the framework as general as possible, you ended up making the paper highly verbose and difficult to follow.
Please address the following points or clarify in your rebuttal if I misunderstood something:
- what precisely is the novel contribution of your work (it cannot be "compositional networks" and the propositions concerning those because these are just old concepts under new names)?
- explain precisely (and/or more directly, less convolutedly) what your model used in the experiments looks like; why do you think it is better than the other methods?
- given that a compositional network is a very general concept (a partially ordered set imposed on subsets of the graph vertices), what is the principled set of steps one has to follow to arrive at such a compositional network tailored to a particular graph collection? Isn't that (or shouldn't that be) the contribution of this work? Am I missing something?
In general, you should write the paper much more to the point and leave out unnecessary math (or move it to an appendix). The paper is currently highly inaccessible. |
iclr_2018_H1uP7ebAW | The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice. | The paper proposes to combine the recently proposed DenseNet architecture with LSTMs to tackle the problem of predicting different pathologic patterns from chest x-rays. In particular, the use of LSTMs helps take into account interdependencies between pattern labels.
Strengths:
- The paper is very well written. Contextualization with respect to previous work is adequate. Explanations are clear. Novelties are clearly identified by the authors.
- Quantitative improvement with respect to the state of the art.
Weaknesses:
- The paper does not introduce strong technical novelties -- mostly, it seems to apply previous techniques to the medical domain. It could have been interesting to know if there are more insights / lessons learned in this process. This could be of interest for a broader audience. For instance, what are the implications of using higher-resolution images as input to DenseNet / decreasing the number of layers? How do the features learned at different layers compare to the ones of the original network trained for image classification? How do features of networks pre-trained on ImageNet, and then fine-tuned for the medical domain, compare to features learned from medical images from scratch?
- The impact of the proposed approach on medical diagnostics is unclear. The authors could better discuss how the approach could be adopted in practice. Also, it could be interesting also to discuss how the results in Table 2 and 3 compare to human classification capabilities, and if that performance would be already enough for building a computer-aided diagnosis system.
Finally -- is it expected that the ordering of the factorization in Eq. 3 does not matter much (results in Table 3)? As a non-expert in the field, I'd expect that the ordering between pathologic patterns matters more. |
iclr_2018_rkLyJl-0- | NEUMANN OPTIMIZER: A PRACTICAL OPTIMIZATION ALGORITHM FOR DEEP NEURAL NETWORKS
Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster than widely used algorithms for fixed amounts of computation, and also scales up substantially better as more computational resources become available. Our algorithm implicitly computes the inverse Hessian of each mini-batch to produce descent directions; we do so without either an explicit approximation to the Hessian or Hessian-vector products. We demonstrate the effectiveness of our algorithm by successfully training large ImageNet models (Inception-V3, Resnet-50, Resnet-101 and Inception-Resnet-V2) with mini-batch sizes of up to 32000 with no loss in validation error relative to current baselines, and no increase in the total number of steps. At smaller mini-batch sizes, our optimizer improves the validation error in these models by 0.8-0.9%. Alternatively, we can trade off this accuracy to reduce the number of training steps needed by roughly 10-30%. Our work is practical and easily usable by others - only one hyperparameter (learning rate) needs tuning, and furthermore, the algorithm is as computationally cheap as the commonly used Adam optimizer. | This paper presents a new 2nd-order algorithm that implicitly uses curvature information, and it shows the intuition behind the approximation schemes in the algorithms and also validates the heuristics in various experiments. The method involves using a Neumann series and Richardson iteration to avoid Hessian-vector products in a second-order method for neural networks. In terms of actual performance, the paper presents both practical efficiency and better generalization error in different deep neural networks for image classification tasks, and the authors also show differences according to different settings, e.g., batch size, regularization. The numerical examples are relatively clear, and it is easy to figure out the details.
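For readers less familiar with the trick, the identity behind the Neumann-series view of the inverse Hessian is the following (my notation, given only as background; the paper's actual update differs in its details):

```latex
% If the eigenvalues of \eta H lie in (0, 2), the series converges and
H^{-1} g \;=\; \eta \sum_{k=0}^{\infty} (I - \eta H)^{k} g
\;\approx\; \eta \sum_{k=0}^{K} (I - \eta H)^{k} g,
% which is exactly the K-th iterate of the Richardson iteration
% d_{k+1} = (I - \eta H)\, d_k + \eta g, started from d_0 = \eta g.
```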
1. While the paper presents the algorithm as an optimization algorithm, and although it achieves better learning performance, it would be interesting to see how well it performs purely as an optimizer. For example, one simple experiment would be showing how it works for convex problems, e.g., logistic regression. Realistic DNN systems are very complex, and evaluating the method in a simple setting would help a lot in determining what if anything is novel about the method.
2. Also, for deep learning problems, it would be more convincing to see how different initializations can affect the performance.
3. Although the authors present their algorithm as a second order method at the beginning, the final algorithm is kind of like a complex momentum SGD with limited memory. Rather than simply throwing out a new method with a new name, it would be helpful to understand what the steps of this method are implicitly doing. Please explain more about this.
4. It is said that the algorithm is hyperparameter-free except for the learning rate. However, it is hard to see why there is no need to tune other hyperparameters, e.g., the Cubic Regularizer and the Repulsive Regularizer. The effect/sensitivity of hyperparameters for second order methods is quite different from that for first order methods, and it is of interest to know how hyperparameters for implicit second order methods perform.
5. For Section 4.2, the well-known benefit of using a large batch size to train models is that it could reduce training time and epochs. However, Table 3 shows no such phenomenon. Please explain. |
iclr_2018_r1SuFjkRW | It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces. Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time. Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions. With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately). On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG. We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks. | The paper describes a new RL technique for high dimensional action spaces. It discretizes each dimension of the action space, but to avoid an exponential blowup, it selects the action for each dimension in sequence. This is an interesting approach. The paper reformulates the MDP with a high dimensional action space into an equivalent MDP with more time steps (one per dimension) that each selects the action in one dimension. This makes sense.
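To make the construction concrete, here is a minimal sketch of how an (approximate) argmax over a discretized multi-dimensional action can be computed one dimension at a time; the `q_network` interface is a hypothetical stand-in of mine, not the authors' architecture:

```python
import numpy as np

def sequential_argmax(q_network, state, num_dims, bins):
    """Greedily pick one discretized sub-action per dimension, conditioning each
    choice on the sub-actions chosen so far (a sketch of the per-dimension MDP).
    `q_network(state, prefix, d)` is assumed to return Q-values over the bins of
    dimension d given the partial action `prefix`."""
    prefix = []
    for d in range(num_dims):
        q_values = q_network(state, prefix, d)   # shape: (len(bins),)
        best_bin = int(np.argmax(q_values))
        prefix.append(bins[best_bin])
    return np.array(prefix)                      # the full continuous action
```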
While I do very much like the model, I am perplexed about the training technique. The lower MDP is precisely the new proposed model with unidimensional actions and therefore it should be sufficient. However, the paper also describes an upper MDP that seems to be superfluous. The two MDPs are mathematically equivalent, but their Q-values are obtained differently (TD-0 for the upper MDP and Q-learning for the lower MDP) and yet the paper tries to minimize the Euclidean distance between them. This is really puzzling since the different training algorithms suggest that the Q-values should be different while minimizing the Euclidean distance between them tries to make them equal. The paper suggests that divergence occurs without the upper MDP. This is really suspicious. The approach feels like a band-aid solution to cover a problem that the authors could not identify. While the empirical results are good, I don't think the paper should be published until the authors figure out a principled way of training.
The proposed approach reformulates the MDP with high dimensional actions into an equivalent one with uni dimensional actions. There is a catch. This approach effectively hides the exponential action space into the state space which becomes exponential. Since u contains all the actions of the previous dimensions, we are effectively increasing the state space by an exponential factor. The paper should discuss this and explain what are the consequences in practice. In the end, the MDP does not become simpler.
Overall, this is an interesting paper with a good idea, but the training technique is not mature enough for publication. |
iclr_2018_Bk6qQGWRb | We propose Bayesian Deep Q-Network (BDQN), a practical Thompson sampling based Reinforcement Learning (RL) Algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling, but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and its samples can be drawn efficiently through the Gaussian distribution. We apply our method to a wide range of Atari Arcade Learning Environments. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, DDQN. | The authors describe how to use Bayesian neural networks with Thompson sampling
for efficient exploration in q-learning. The Bayesian neural networks are only
Bayesian in the last layer. That is, the authors learn all the previous layers
by finding point estimates. The Bayesian learning of the last layer is then
tractable since it consists of a linear Gaussian model. The resulting method is
called BDQL. The experiments performed show that the proposed approach, after
hyper-parameter tuning, significantly outperforms the epsilon-greedy
exploration approaches such as DDQN.
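For readers unfamiliar with the construction, a minimal sketch of the Bayesian-last-layer idea (standard Bayesian linear regression on last-layer features plus Thompson sampling; the variable names and the noise/prior variances are mine, and BDQN maintains one such posterior per action, which is simplified away here):

```python
import numpy as np

def blr_posterior(Phi, y, noise_var=1.0, prior_var=1.0):
    """Closed-form Gaussian posterior over last-layer weights, given features
    Phi (n x d) and regression targets y (n,). Standard Bayesian linear regression."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def thompson_action(features_per_action, mean, cov, rng=np.random):
    """Draw one weight sample from the posterior and act greedily with respect
    to the sampled Q-values (Thompson sampling)."""
    w = rng.multivariate_normal(mean, cov)
    q_values = [phi @ w for phi in features_per_action]
    return int(np.argmax(q_values))
```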
Quality:
I am very concerned about the authors stating on page 1 "we sample from the
posterior on the set of Q-functions". I believe this statement is not correct.
The Bayesian posterior distribution is obtained by combining an assumed
generative model for the data, data sampled from that model and some prior
assumptions. In this paper there is no generative model for the data and the
data obtained is not actually sampled from the model. The data are just targets
obtained by the q-learning rule. This means that the authors are adapting
Q-learning methods so that they look Bayesian, but in no way they are dealing
with a principled posterior distribution over Q-functions. At least this is my
opinion, I would like to encourage the authors to be more precise and show in
the paper what is the exact posterior distribution over Q-functions and show
how they approximate that distribution, taking into account that a posterior
distribution is obtained as $p(theta|D) \propto p(D|theta)p(\theta)$. In the
case addressed in the paper, what is the likelihood $p(D|\theta)$ and what are
the modeling assumptions that explain how $D$ is generated by sampling from a
model parameterized by \theta?
I am also concerned about the hyper-parameter tuning for the baselines. In
section 5 (choice of hyper-parameters) the authors describe a quite exhaustive
hyper-parameter tuning procedure for BDQL. However, they do not mention whether
they perform a similar hyper-parameter tuning for DDQN, in particular for the
parameter epsilon which will determine the amount of exploration. This makes me
wonder if the comparison in table 2 is fair. Especially, because the authors
tune the amount of data from the replay-buffer that is used to update their
posterior distribution. This will have the effect of tuning the width of their
posterior approximation which is directly related to the amount of exploration
performed by Thompson sampling. You can, therefore, conclude that the authors are
tuning the amount of exploration that they perform on each specific problem.
Is that also being done for the baseline DDQN, for example, by tuning epsilon in
each problem?
The authors also report in table 2 the scores obtained for DDQN by Osband et
al. 2016. What is the purpose of including two rows in table 2 with the same
method? It feels a bit that the authors want to hide the fact that they only
compare with a single epsilon-greedy baseline (DDQN). Epsilon-greedy methods
have already been shown to be less efficient than Bayesian methods with
Thompson sampling for exploration in q learning (Lipton et al. 2016).
The authors do not compare with variational approaches to Bayesian learning
(Lipton et al. 2016). They indicate that since Lipton et al. "do not
investigate the Atari games, we are not able to have their method as an
additional baseline". This statement seems completely unjustified. The authors
should clearly include a description of why Lipton's approach cannot be applied
to the Atari games or include it as a baseline.
The method proposed by the authors is very similar to Lipton's approach. The
only difference is that Lipton et al. use variational inference with a
factorized Gaussian distribution to approximate the posterior on all the
network weights. The authors by contrast, perform exact Bayesian inference, but
only on the last layer of their neural network. It would be very useful to know
whether the exact linear Gaussian model in the last layer proposed by the
authors has advantages with respect to a variational approximation on all the
network weights. If Lipton's method would be expensive to apply to large-scale
settings such as the Atari games, the authors could also compare with that
method in smaller and simpler problems.
The plots in Figure 2 include performance in terms of episodes. However, it
would also be useful to know how much extra computational cost is incurred by
the proposed method. One could imagine that computing the posterior
approximation in equation 6 has some additional cost. How do BDQN and DDQN
compare when one takes running time rather than episode count into
account?
Clarity:
The paper is clearly written. However, I found a lack of motivation for the
specific design choices made to obtain equations 9 and 10. What is a_t in
equation 9? The parameters \theta are updated just after equation 10 by
following the gradient of the loss in which the weights of the last layer are
fixed to a posterior sample, instead of the posterior mean. Is this update rule
guaranteed to produce convergence of \theta? I could imagine that at different
times, different posterior samples of the weights will be used to compute the
gradients. Does this create any instability in learning?
I found the paragraph just above section 5 describing the maze-like
deterministic game confusing and not very useful. The authors should improve
this paragraph.
Originality:
The proposed approach in which the weights in the last layer of the neural
network are the only Bayesian ones is not new. The same method was proposed in
Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., ... &
Adams, R. (2015, June). Scalable Bayesian optimization using deep neural
networks. In International Conference on Machine Learning (pp. 2171-2180).
which the authors fail to cite. The use of Thompson sampling for efficient
exploration in deep Q learning is also not new since it has been proposed by
Lipton et al. 2016. The main contribution of the paper is to combine these two
methods (equations 6-10) and evaluate the results in the large-scale setting of
ATARI games, showing that it works in practice.
Significance:
It is hard to determine how significant the work is since the authors only
compare with a single baseline and leave aside previous work on efficient
exploration with Thompson sampling based on variational approximations.
As far as the method is described, I believe it would be impossible to
reproduce their results because of the complexity of the hyper-parameter tuning
performed by the authors. I would encourage the authors to release code that can
directly generate Figure 2 and table 2. |
iclr_2018_r1HhRfWRZ | LEARNING AWARENESS MODELS
We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body (touch sensors, proprioception and vestibular information) leads to learning of dynamic models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV. | Summary:
The paper describes a system which creates an internal representation of the scene given observations, this internal representation being advantageous over raw sensory input for object classification and control. The internal representation comes from a recurrent network (more specifically, a sequence2sequence net) trained to maximize the likelihood of the observations from training
Positive aspects:
The authors suggest an interesting hypothesis: an internal representation of the world which is useful for control could be obtained just by forcing the agent to be able to predict the outcome of its actions in the world. This would enable robots to learn such a representation in a self-supervised manner, which would be extremely valuable.
Negative aspects:
Although the premise of the paper is interesting, its execution is not ideal. The formulation of the problem is unclear and difficult to follow, with a number of important terms left undefined. Moreover, the experimental task is too simplistic; from the results, it is not clear whether the representation is anything more than a trivial accumulation of sensory input.
- Lack of clarity:
-- what exactly is the "generic cost" C in section 7.1?
-- why are both f and z parameters of C? f is directly a function of z. Given that the form of C is not explained, seems like f could be directly computing as part of C.
-- what is the relation between actions a in section 7.1 and u in section 4?
-- How is the minimization problem of u_{1:T} solved?
-- Are the authors sure that they perform information gathering through "maximizing uncertainty" (section 7.1)? This sounds profoundly counterintuitive. Maximizing the uncertainty in the world state should result in minimum information about the world's state. I would assume this is a serious typo, but cannot confirm given that the relation between the minimized cost C and the Rényi entropy H is not explicitly stated.
-- When the authors state that "The learner trains the model by maximum likelihood" in section 7.1, do they refer to the prediction model or the control model? It would seem that it is the control model, but the objective being "the same as in section 6" points in the opposite direction
-- What is the method for classifying and/or regressing given the features and internal representation? This is important because, if the method was a recurrent net with memory, the differences between the two representations would probably be minimal.
- Simplistic experimental task:
My main takeaway from the experiments is that having a recurrent network processing the sensory input provides some "memory" to the system which reduces uncertainty when sensory data is ambiguous. This is visible from the fact that the performance of both systems is comparable at the beginning, but degrades for sensory input when the hand is open. This could be achieved in many simple ways, like modeling the classification/regression problem directly with an LSTM, for example. Simpler ways of providing a memory to the system should be used as baselines.
Conclusion:
Although the idea of learning an internal representation of the world by being able to predict its state from observations is interesting, the presented paper is a) too simplistic in its experimental evaluation and b) too unclear about its implementation. Consequently, I believe the authors should improve these aspects for the article to be valuable to the community |
iclr_2018_Skz_WfbCZ | A PAC-BAYESIAN APPROACH TO SPECTRALLY-NORMALIZED MARGIN BOUNDS FOR NEURAL NETWORKS
We present a generalization bound for feedforward neural networks with ReLU activations in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The key ingredient is a bound on the changes in the output of a network with respect to perturbation of its weights, thereby bounding the sharpness of the network. We combine this perturbation bound with the PAC-Bayes analysis to derive the generalization bound. | This paper combines a simple PAC-Bayes argument with a simple perturbation analysis (Lemma 2) to get a margin based generalization error bound for ReLU neural networks (Theorem 1) which depends on the product of the spectral norms of the layer parameters as well as their Frobenius norm. The main contribution of the paper is the simple proof technique used to derive Theorem 1, much simpler than the one used in the very interesting work [Bartlett et al. 2017] (appearing at NIPS 2017) which got an analogous bound but with a dependence on the l1-norm of the layers instead of the Frobenius norm. The authors make a useful comparison between these bounds in Section 3 showing that none is dominating the others, but still analyzing their properties in terms of structural properties of the weight matrices.
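For reference, the bound under discussion has, up to constants and logarithmic factors, roughly the following spectrally-normalized form (reproduced here only as a reminder from my reading of the result; the exact constants and log factors should be checked against the paper):

```latex
L_0(f_w) \;\le\; \hat{L}_\gamma(f_w) \;+\;
\tilde{O}\!\left( \sqrt{ \frac{ B^2 d^2 h \, \prod_{i=1}^{d} \|W_i\|_2^2 \,
\sum_{i=1}^{d} \frac{\|W_i\|_F^2}{\|W_i\|_2^2} \;+\; \ln\frac{d m}{\delta} }{ \gamma^2 m } } \right),
% d = depth, h = max layer width, B = bound on the input norm,
% \gamma = margin, m = number of training samples.
```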
I enjoyed reading this paper. One could think that it makes a somewhat incremental contribution with respect to the more complete work (both theory and practice) from [Bartlett et al. 2017]. Nevertheless, the simplicity and elegance of the proof as well as the result might be useful for the community to make progress on the theoretical analysis of NNs.
The paper is well written, though I make some suggestions for the camera ready version below to improve clarity.
I verified most of the math.
== Detailed suggestions ==
1) The authors should specify in the abstract and in the introduction that they are analyzing feedforward neural networks *with ReLU activation functions* so that the current context of the result is more transparent. It is quite unclear how one could generalize Theorem 1 to arbitrary activation functions phi given the crucial use of the homogeneity of the ReLU at the beginning of p.4. The proof of Lemma 2, though, only appears to be using the 1-Lipschitzness property of phi as well as phi(0) = 0. (Unless they can generalize further; I also suggest that they explicitly state in the (interesting) Lemma 2 that it is for ReLU activations (as they did in Theorem 1).)
2) A footnote (or citation) could be useful to give a hint on how the inequality 1/e beta^(d-1) <= tilde{beta}^(d-1) <= e beta^(d-1) is proven from the property |beta-tilde{beta}|<= 1/d beta (middle of p.4).
3) Equation (3) -- put the missing 2 subscript for the l2 norm of |f_(w+u)(x) - f_w(x)|_2 on the LHS (for clarity).
4) One extra line of derivation would be helpful for the reader to rederive the bound |w|^2 / (2 sigma^2) <= O(...) just above equation (4). I.e. first doing the expansion keeping the beta terms and the Frobenius norm sum, and then going directly to the current O(...) term.
5) bottom of p.4: use hat{L}_gamma = 1 instead of L_gamma =1 for more clarity.
6) Top of p.5: the sentence "Since we need tilde{beta} to satisfy (...)" is currently awkwardly stated. I suggest instead to say that "|tilde{beta}- beta| <= 1/d (gamma/2B)^(1/d) is a sufficient condition to have the needed condition |tilde{beta}-beta| <= 1/d beta over this range, thus we can use a cover of size dm^(1/2d)."
7) Typo below (6): citetbarlett2017...
8) Last paragraph p.5: "Recalling that W_i is *at most* a hxh matrix" (as your result does not require constant-size layers and covers the rectangular case). |
iclr_2018_BywyFQlAW | MINIMAX CURRICULUM LEARNING: MACHINE TEACHING WITH DESIRABLE DIFFICULTIES AND SCHEDULED DIVERSITY
We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee. | This paper introduces MiniMax Curriculum Learning, an approach for adaptively training models by providing them with different subsets of data. The authors formulate the learning problem as a minimax problem which tries to choose diverse and "hard" examples, where the diversity is captured via a submodular loss function and the hardness is captured via the loss function. The authors formulate the problem as an iterative technique which involves solving a minimax objective at every iteration. The authors argue the convergence results on the minimax objective subproblem, but do not seem to give results on the general problem. The ideas for this paper are built on existing work in curriculum learning, which attempts to provide the learner easy examples followed by harder examples later on. The belief is that this learning style mimics human learners.
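As a reading aid, here is my own high-level sketch of the alternating scheme described above (greedy submodular selection of a hard-but-diverse subset, followed by continuous model updates, with a schedule on the subset size k and the diversity weight lam); all function names are hypothetical and details differ from the actual algorithm:

```python
def minimax_curriculum(model, data, diversity_fn, schedule, inner_steps=100):
    """Sketch of MCL-style training: at each stage, greedily pick the subset that
    (approximately) maximizes loss + diversity, then train the model on it."""
    for k, lam in schedule:                       # growing k, shrinking lam
        # Discrete step: greedy submodular maximization of hardness + diversity.
        selected, remaining = [], list(range(len(data)))
        for _ in range(k):
            gains = [model.loss(data[i])
                     + lam * (diversity_fn(selected + [i]) - diversity_fn(selected))
                     for i in remaining]
            best_pos = max(range(len(gains)), key=gains.__getitem__)
            selected.append(remaining.pop(best_pos))
        # Continuous step: minimize the loss on the selected hard, diverse subset.
        for _ in range(inner_steps):
            model.gradient_step([data[i] for i in selected])
    return model
```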
Pros:
- The analysis of the minimax objective is novel and the proof technique introduces several interesting ideas.
- This is a very interesting application of joint convex and submodular optimization, and uses properties of both to show the final convergence results
- Even though the submodular objective is only approximately solvable, it still translates into a convergence result
- The experimental results seem to be complete for the most part. They argue how the submodular optimization does not really affect the performance and diversity seems to empirically bring improvement on the datasets tried.
Cons:
- The main algorithm MCL is only a heuristic. Though the MiniMax subproblem can converge, the authors use it in a somewhat heuristic manner.
- The way the authors describe the hyperparameters of MCL seems somewhat hand-wavy, and it is unclear when the algorithm converges and how to increase/decrease the hyperparameters over iterations.
- The objective function also seems somewhat non-intuitive. Though the experimental results seem to indicate that the idea works, I think the paper does not motivate the loss function and the algorithm well.
- It seems to me the authors have experimented with smaller datasets (CIFAR, MNIST, 20NewsGroups). This being mainly an empirical paper, I would have expected results on a few larger datasets (e.g. ImageNet, CelebFaces etc.), particularly to see if the idea also scales to these more real world larger datasets.
Overall, I would have liked the paper to be stronger empirically. Nevertheless, I do think there are some interesting ideas theoretically and algorithmically. For this reason, I vote for a borderline accept.
iclr_2018_SJky6Ry0W | Independent causal mechanisms are a central concept in the study of causality with implications for machine learning tasks. In this work we develop an algorithm to recover a set of (inverse) independent mechanisms relating a distribution transformed by the mechanisms to a reference distribution. The approach is fully unsupervised and based on a set of experts that compete for data to specialize and extract the mechanisms. We test and analyze the proposed method on a series of experiments based on image transformations. Each expert successfully maps a subset of the transformed data to the original domain, and the learned mechanisms generalize to other domains. We discuss implications for domain transfer and links to recent trends in generative modeling. | This paper describes a setting in which a system learns collections of inverse-mapping functions that transform altered inputs to their unaltered "canonical" counterparts, while only needing unassociated and separate sets of examples of each at training time. Each inverse map is an "expert" E akin to a MoE expert, but instead of using a feed-forward gating on the input, an expert is selected (for training or inference) based on the value of a distribution-modeling function c applied to the output of all experts: The expert with maximum value c(E(x)) is selected. When c is an adversarially trained discriminator network, the experts learn to model the different transformations that map altered images back to unaltered ones. This is demonstrated using MNIST with a small set of synthetic translations and noise.
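To fix notation for my comments below, the selection mechanism as I understand it is roughly the following (a PyTorch-style sketch with my own stand-in modules and names, not the authors' code; the paper selects per example, I batch it here only for brevity):

    import torch, torch.nn as nn

    experts = [nn.Linear(784, 784) for _ in range(10)]    # stand-ins for the expert networks E_i
    c = nn.Linear(784, 1)                                  # stand-in for the scoring function / discriminator
    x = torch.randn(32, 784)                               # a batch of transformed inputs

    outputs = [E(x) for E in experts]                      # each expert proposes an inverse-mapped batch
    scores = torch.stack([c(o).mean() for o in outputs])   # how "canonical" each proposal looks under c
    winner = int(scores.argmax())                          # only the winning expert receives the gradient update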
The fact that these different inverse maps arise under these conditions is interesting --- and Figure 5 is quite convincing in showing how each expert generalizes. However, I think the experimental conditions are very limited: Only one collection of transformations is studied, and on MNIST digits only. In particular, I found the fact that only one of ten transformations can be applied at a time (as opposed to a series of multiple transforms) to be restrictive. This is touched on in the conclusion, but to me it seems fundamental, as any real-world new example will undergo significantly more complex processes with many different variables all applied at once.
Another direction I think would be interesting, is how few examples are needed in the canonical distribution? For example, in MNIST, could the canonical distribution P be limited to just one example per digit (or just one example per mode / style of digit, e.g. "2" with loop, and without loop)? The different handwriters of the digits, and sampling and scanning process, may themselves constitute in-the-wild transformations that might be inverted to single (or few) canonical examples --- Is this possible with this mechanism?
Overall, it is nice to see the different inverse maps arise naturally in this setting. But I find the single setting limiting, and think the investigation could be pushed further into less restricted settings, a couple of which I mention above.
Other comments:
- c is first described to be any distribution model, e.g. the autoencoder described on p.5. But it seems that using such a fixed, predefined c like the autoencoder may lead to collapse: What is preventing an expert from learning a single constant mode that has high c value? The adversarially trained c doesn't suffer from this, because presumably the discriminator will be able to learn the difference between a single constant mode output and the distribution P. But if this is the case, it seems a critical part of the system, not a simple implementation choice as the text seems to say.
- The single-net baseline is good, but I'd like to get a clearer picture of its results. p.8 says this didn't manage to "learn more than one inverse mechanism" --- Does that mean it learns to invert a single mechanism (that it always translates up, for example, when presented an image)? Or that it learned some mix of transforms that didn't seem to generalize as well? Or does it have some other behavior? Also, I'm not entirely clear on how it was trained wrt c --- is argmax(c(E(x))) always just the single expert? Is c also trained adversarially? And if so, is the approximate identity initialization used?
iclr_2018_SkwAEQbAb | Determining the number of latent dimensions is a ubiquitous problem in machine learning. In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions. The general principle behind the method is to compare the curve of singular values of the SVD decomposition of a data set with the randomized data set curve. The inferred number of latent dimensions corresponds to the crossing point of the two curves. To evaluate our methodology, we compare it with competing methods such as Kaiser's eigenvalue-greater-than-one rule (K1), Parallel Analysis (PA), Velicer's MAP test (Minimum Average Partial). We also compare our method with the Silhouette Width (SW) technique which is used in different clustering methods to determine the optimal number of clusters.
The result on synthetic data shows that Parallel Analysis and our method have similar results and are more accurate than the other methods, and that our method gives slightly better results than the Parallel Analysis method for the sparse data sets. | The authors propose the use of bootstrapping the data (randomly sampling entries with replacement) to form surrogate data, for which they can compare the singular value spectrum of the SVD of the matrix to the singular values of the bootstrapped data, thereby determining the number of latent dimensions in PCA as the point at which the singular values are no greater than the bootstrapped values. The procedure is contrasted to some existing methods for determining the number of latent components and found to perform similarly to another procedure based on bootstrapping correlation matrices, the PA procedure.
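For the record, my reading of the proposed rule amounts to something like the following sketch (variable names are mine, and the paper may average over several bootstrap draws rather than using a single draw as I do here):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 100))   # toy data with 50 latent dimensions

    s_data = np.linalg.svd(X, compute_uv=False)
    X_boot = rng.choice(X.ravel(), size=X.shape, replace=True)            # resample entries with replacement
    s_boot = np.linalg.svd(X_boot, compute_uv=False)

    k = int(np.argmax(s_data <= s_boot))   # first crossing point = inferred number of latent dimensions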
Pros:
Determining the number of components is an important problem that the authors here address.
Cons:
I find the paper poorly written and the methodology not sufficiently rooted in the existing literature. There are many approaches to determining the number of latent components in PCA that need to be discussed and contrasted, including:
Cross-validation:
http://scholar.google.dk/scholar_url?url=http%3A%2F%2Fwww.academia.edu%2Fdownload%2F43416804%2FGeneralizable_Patterns_in_Neuroimaging_H20160306-9605-1xf9c9h.pdf&hl=da&sa=T&oi=gga&ct=gga&cd=0&ei=rjkXWrzKKImMmAH-xo7gBw&scisig=AAGBfm2iRQhmI2EHEO7Cl6UZoRbfAxDRng&nossl=1&ws=1728x1023
Variational Bayesian PCA:
https://www.microsoft.com/en-us/research/publication/variational-principal-components/
Furthermore, the idea of bootstrapping for the SVD has been discussed in prior publications and the present work needs to be related to these prior works. These include:
Milan, Luis, and Joe Whittaker. “Application of the Parametric Bootstrap to Models That Incorporate a Singular Value Decomposition.” Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 44, no. 1, 1995, pp. 31–49. JSTOR, JSTOR, www.jstor.org/stable/2986193.
Fisher A, Caffo B, Schwartz B, Zipunnikov V. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million. Journal of the American Statistical Association. 2016;111(514):846-860. doi:10.1080/01621459.2015.1062383.
Including the following package in R for performing bootstrapped SVD: https://cran.r-project.org/web/packages/bootSVD/bootSVD.pdf
The novelty of the present approach is therefore unclear given prior works on bootstrapping SVD/PCA.
Furthermore, for sparse data with missing entries there are specialized algorithms that handle sparsity using either imputation or marginalization, which would be a more principled way to estimate the PCA parameters.
Finally, the performance appears almost identical to that of the PA procedure. In fact, it seems bootstrapping the correlation matrix has a very similar effect to the proposed bootstrapping procedure. Thus, the proposed procedure, which is very similar in spirit to PA, does not seem to have much benefit over it.
Minor comments:
Explain what SW stands for when it is first introduced.
We will see that it PA a close relationship with BSVD-> We will see that PA is closely related to BSVD
more effective than SVD under certain conditions (?). – please provide reference instead of ?
But table 4 that shows -> But table 4 shows that
We can sum up with that the result seems ->To summarize, the result seems |
iclr_2018_SJvu-GW0b | Neural networks are increasingly used as a general purpose approach to learning algorithms over graph structured data. However, techniques for representing graphs as real-valued vectors are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but as we show in this paper, these methods have difficulty generalizing to large graphs. In this paper we propose GRAPH2SEQ, an embedding framework that represents graphs as an infinite time-series. By not limiting the representation to a fixed dimension, GRAPH2SEQ naturally scales to graphs of arbitrary size. Moreover, through analysis of a formal computational model we show that an unbounded sequence is necessary for scalability. GRAPH2SEQ is also reversible, allowing full recovery of the graph structure from the sequence. Experimental evaluations of GRAPH2SEQ on a variety of combinatorial optimization problems show strong generalization and strict improvement over state of the art. | This paper proposes a novel way of embedding graph structure into a sequence that can have an unbounded length.
There has been a significant amount of prior work (e.g. graph convolutional neural networks) for signals supported on a specific graph. This paper, on the contrary, tries to encode the topology of a graph using a dynamical system created by the graph and randomization.
The main theorem is that the created dynamical system can be used to reverse engineer the graph topology for any digraph.
As far as I understood, the authors are essentially doing reverse directed graphical model learning. In classical learning of directed graphical models (or causal DAGs), one wants to learn the structure of a graph from observed data created by this graph inducing conditional independencies on the data. This procedure creates a dynamical system that (following previous work very closely) estimates conditional directed information for every pair of vertices u, v and can determine from the observed trajectory whether an edge is present.
The recovery algorithm is essentially previous work (but the application to graph recovery is new).
The authors state:
``Estimating conditional directed information efficiently from samples is itself an active area of research Quinn et al. (2011), but simple plug-in estimators with a standard kernel density estimator will be consistent.''
One thing that is missing here is that the number of samples needed could be exponential in the degrees of the graph. Therefore, it is not clear at all that high-dimensional densities or directed information can be estimated from a number of samples that is polynomial in the dimension (e.g. graph degree).
This is related to the second limitation: no sample complexity bounds are presented, only an asymptotic statement.
One remark is that there are many ways to represent a finite graph with a sequence that can be decoded back to the graph (and of course if there is no bound on the graph size, there will be no bound on the size of the sequence). For example, one could take the adjacency matrix and sequentially write down one row after the other (perhaps using a special symbol to indicate 'next row'). Many other simple methods can be obtained also, with a size of sequence being polynomial (in fact linear) in the size of the graph. I understand that such trivial representations might not work well with RNNs but they would satisfy stronger versions of Theorem 1 with optimal size.
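To illustrate the point, a trivial such encoding can be written in a couple of lines; this is only meant to show that a directly decodable sequence of length n^2 + n exists, not to suggest a practical representation:

    import numpy as np

    def adjacency_to_sequence(A, sep=2):
        # write the adjacency matrix row by row, with a 'next row' separator token
        return [int(v) for row in A for v in list(row) + [sep]]

    A = np.array([[0, 1], [1, 0]])
    print(adjacency_to_sequence(A))   # [0, 1, 2, 1, 0, 2]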
On the contrary, it was not clear how the proposed sequence will scale with the graph size.
Another remark is that it seems that GCNN and this paper solve different problems.
GCNNs want to represent graph-supported signals (on a fixed graph) while this paper tries to represent the topology of a graph, which seems different.
The experimental evaluation was somewhat limited, and that is the biggest problem from a practical standpoint. It is not clear why one would want to use these sequences for solving MVC. There are several graph classification tasks that try to use the graph structure (as well as possibly other features); see, e.g., bioinformatics and other applications. The literature includes, for example:
Graph Kernels by S.V.N. Vishwanathan et al.
Deep graph kernels (Yanardag & Vishwanathan) and graph invariant kernels (Orsini et al.),
which use counts of small substructures as features.
There are many benchmarks of graph classification tasks where the proposed representation could be useful, but significantly more validation work would be needed to make that case.
iclr_2018_H196sainb | Published as a conference paper at ICLR 2018 WORD TRANSLATION WITHOUT PARALLEL DATA
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available. | This paper presents a new method for obtaining a bilingual dictionary, without requiring any parallel data between the source and target languages. The method consists of an adversarial approach for aligning two monolingual word embedding spaces, followed by a refinement step using frequent aligned words (according to the adversarial mapping). The approach is evaluated on single word translation, cross-lingual word similarity, and sentence translation retrieval tasks.
The paper presents an interesting approach which achieves good performance. The work is presented clearly, the approach is well-motivated and related to previous studies, and a thorough evaluation is performed.
My one concern is that the supervised approach that the paper compares to is limited: it is trained on a small fixed number of anchor points, while the unsupervised method uses significantly more words. I think the paper's comparisons are valid, but the abstract and introduction make very strong claims about outperforming "state-of-the-art supervised approaches". I think either a stronger supervised baseline should be included (trained on comparable data as the unsupervised approach), or the language/claims in the paper should be softened. The same holds for statements like "... our method is a first step ...", which is very hard to justify. I also do not think it is necessary to over-sell, given the solid work in the paper.
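For reference, the supervised baseline in question reduces to orthogonal Procrustes on the anchor pairs, which has a closed-form solution; the following is my own sketch (rows of X and Y are the aligned source/target word vectors), not the authors' code:

    import numpy as np

    def procrustes(X, Y):
        # orthogonal W minimizing ||X W - Y||_F, given n aligned anchor pairs as rows of X and Y
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt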
Further comments, questions and suggestions:
- It might be useful to add more details of your actual approach in the Abstract, not just what it achieves.
- Given you use trained word embeddings, it is not a given that the monolingual word embedding spaces would be alignable in a linear way. The actual word embedding method, therefore, has a big influence on performance (as you show). Could you comment on how crucial it would be to train monolingual embedding spaces on similar domains/data with similar co-occurrence statistics, in order for your method to be appropriate?
- Would it be possible to add weights to the terms in eq. (6), or is this done implicitly?
- How were the 5k source words for the Procrustes supervised baseline selected?
- Have you considered non-linear mappings, or jointly training the monolingual word embeddings while attempting the linear mapping between embedding spaces?
- Do you think your approach would benefit from having a few parallel training points?
Some minor grammatical mistakes/typos (nitpicking):
- "gives a good performance" -> "gives good performance"
- "Recent works", "several works", "most works", etc. -> "recent studies", "several studies", etc.
- "i.e, the improvements" -> "i.e., the improvements"
The paper is well-written, relevant and interesting. I therefore recommend that the paper be accepted. |
iclr_2018_BJcAWaeCW | Inspired by the success of generative adversarial networks (GANs) in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs. The hierarchical architecture consisting of multiple GANs preserves both local and global topological features, and automatically partitions the input graph into representative stages for feature learning. The stages facilitate reconstruction and can be used as indicators of the importance of the associated topological structures. Experiments show that our method produces subgraphs retaining a wide range of topological features, even in early reconstruction stages. This paper contains original research on combining the use of GANs and graph topological analysis. | Quality: The work has too many gaps for the reader to fill in. The generator (reconstructed matrix) is supposed to generate a 0-1 matrix (adjacency matrix) and allow backpropagation of the gradients to the generator. I am not sure how this is achieved in this work. The matrix is not isomorphic invariant and the different clusters don’t share a common model. Even implicit models should be trained with some way to leverage graph isomorphisms and pattern similarities between clusters. How can such a limited technique be generalizing? There is no metric in the results showing how the model generalizes, it may be just overfitting the data.
Clarity: The paper organization needs work; there are also some missing pieces to put the NN training together. It is only in Section 2.3 that the nature of G_i^\prime becomes clear, although it is used in Section 2.2. Equation (3) is rather vague for a mathematical equation. From what I understood from the text, equation (3) creates a binary matrix from the softmax output using an indicator function. If the output is binary, how can the gradients backpropagate? Is it backpropagating with a trick like the Gumbel-Softmax trick of Jang, Gu, and Poole 2017 or Bengio’s path derivative estimator? This is a key point not discussed in the manuscript.
And if I misunderstood the sentence "turn re_G into a binary matrix" and the values are continuous, wouldn't the discriminator have an easy time distinguishing the generated data from the real data? And wouldn't the generator start working towards vanishing gradients in its quest to saturate the re_G output?
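If a binary output with usable gradients is what is intended, the usual tricks are Gumbel-Softmax or a straight-through estimator; a minimal sketch of the latter (PyTorch, my own names, not the authors' code) would be:

    import torch

    def binarize_ste(p):
        # forward pass uses hard 0/1 values, backward pass uses the gradient of the soft values
        hard = (p > 0.5).float()
        return hard + p - p.detach()

    p = torch.rand(4, 4, requires_grad=True)
    binarize_ste(p).sum().backward()   # gradients flow as if the output were p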
Originality: The work proposes an interesting approach: first cluster the network, then learn distinct GANs over each cluster. There are many such ideas now on ArXiv but it would be unfair to contrast this approach with unpublished work. There is no contribution in the GAN / neural network aspect. It is also unclear whether the model generalizes. I don't think this is a good fit for ICLR.
Significance: Generating graphs is an important task in in relational learning tasks, drug discovery, and in learning to generate new relationships from knowledge bases. The work itself, however, falls short of the goal. At best the generator seems to be working but I fear it is overfitting. The contribution for ICLR is rather minimal, unfortunately.
Minor:
GTI was not introduced before it is first mentioned in the intro.
Y. Bengio, N. Leonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432, 2013. |
iclr_2018_HJsjkMb0Z | Published as a conference paper at ICLR 2018 i-REVNET: DEEP INVERTIBLE NETWORKS
It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show via a one-to-one mapping that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for one, because the local inversion is ill-conditioned, we overcome this by providing an explicit inverse. An analysis of i-RevNets learned representations suggests an alternative explanation for the success of deep networks by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural image representations. | In this paper, the authors propose deep architecture that preserves mutual information between the input and the hidden representation and show that the loss of information can only occur at the final layer. They illustrate empirically that the loss of information can be avoided on large-scale classification such as ImageNet and propose to build an invertible deep network that is capable of retaining the information of the input signal through all the layers of the network until the last layer where the input could be reconstructed.
The authors demonstrate that progressive contraction and separation of the information can be obtained while at the same time allowing an exact reconstruction of the signal.
As it requires special care to design an invertible architecture, the authors' architecture is based on the recent reversible residual network (RevNet) introduced in (Gomez et al., 2017) and an invertible down-sampling operator introduced in (Shi et al., 2016). The inverse (classification) path of the network uses the same convolutions as the forward (reconstructing) one. It also uses subtraction operations instead of additions in the output computation in order to reconstruct intermediate and input layers.
To show the effectiveness of their approach on a large-scale classification problem, the authors report top-1 error rates on the validation set of ILSVRC-2012. The obtained result is competitive with the original ResNet and the RevNet models. However, the proposed approach is expensive in terms of parameter budget, as it requires almost 6.5 times more parameters than the RevNet and the ResNet architectures. Still, the classification and the reconstruction results are quite impressive, as the work is the first empirical evidence that learning an invertible representation that preserves information about the input is possible on large-scale classification tasks. It is worth noting that recently, Shwartz-Ziv and Tishby demonstrated, not on large-scale datasets but on small ones, that an optimal representation for a classification task must reduce as much uninformative variability as possible while maximizing the mutual information between the desired output and its representation, in order to discriminate as much as possible between classes. This is called the "information bottleneck principle". The submitted paper shows that this principle is not a necessary condition for large-scale classification.
The proposed approach is potentially of great benefit. It is also simple and easy to understand. The paper is well written and the authors position their work with respect to what has been done before. The spectral analysis of the differential operator in section 4.1 provides another motivation for the "hard-constrained" invertible architecture. Section 4.2 illustrates the ability of the network to reconstruct input signals. The visualization obtained suggests that the network performs linear separation between complex learned factors. Section 5 shows that even when using either an SVM or a Nearest Neighbor classifier on n extracted features from a layer in the network, both classifiers progressively improve with deeper layers. When the first d principal components are used to summarize the n extracted features, the SVM and NN classifiers perform better when d is bigger. This shows that the deeper the network gets, the more linearly separable and contracted the learned representations are.
In the conclusion, the authors state the following: "The absence of loss of information is surprising, given the wide believe, that discarding information is essential for learning representations that generalize well to unseen data". Indeed, the authors have succeeded in showing that this is not necessarily the case. However, the loss of information might be necessary to generalize well on unseen data and at the same time minimize the parameter budget for a given classification task.
iclr_2018_rybDdHe0Z | A fundamental challenge in designing brain-computer interfaces (BCIs) is decoding behavior from time-varying neural oscillations. In typical applications, decoders are constructed for individual subjects and with limited data leading to restrictions on the types of models that can be utilized. Currently, the best performing decoders are typically linear models capable of utilizing rigid timing constraints with limited training data. Here we demonstrate the use of Long Short-Term Memory (LSTM) networks to take advantage of the temporal information present in sequential neural data collected from subjects implanted with electrocorticographic (ECoG) electrode arrays performing a finger flexion task. Our constructed models are capable of achieving accuracies that are comparable to existing techniques while also being robust to variation in sample data size. Moreover, we utilize the LSTM networks and an affine transformation layer to construct a novel architecture for transfer learning. We demonstrate that in scenarios where only the affine transform is learned for a new subject, it is possible to achieve results comparable to existing state-of-the-art techniques. The notable advantage is the increased stability of the model during training on novel subjects. Relaxing the constraint of only training the affine transformation, we establish our model as capable of exceeding performance of current models across all training data sizes. Overall, this work demonstrates that LSTMs are a versatile model that can accurately capture temporal patterns in neural data and can provide a foundation for transfer learning in neural decoding. | The paper describes an approach that uses LSTMs for finger classification based on ECoG, and a transfer learning extension of which two variations exist. From the presented results, the LSTM model is not an improvement over a basic linear model. The transfer learning models perform better than subject-specific models on a subset of the subjects. Overall, I think the problem is interesting but the technical description and the evaluation can be improved. I am not confident in the analysis of the model. Additionally, the citations are not always correct and some related work is not referenced at all. For the reasons above, I am not willing to recommend the paper for acceptance at this point.
The paper tackles a problem that is challenging and interesting. Unfortunately, the dataset size is limited.
This is common for brain data and makes evaluation much more difficult.
The paper states that all hyper-parameters were optimized on 75% of subject B data.
The actual model training was done using cross-validation.
So far this approach seems more or less correct but in this case I would argue that subject B should not be considered for evaluation since its data is heavily used for hyper-parameter optimization and the results obtained on this subject are at risk of being biased.
Omitting subject B from the analysis, each non-transfer learning method performs best on one of the remaining subjects.
Therefore it is not clear that an LSTM model is an improvement.
For transfer learning (ignoring B again) only C and D are improved but it is unclear what the variance is.
In the BCI community there are many approaches that use transfer learning with linear models. I think it would be interesting to see how linear model transfer learning would fare in this task.
A second issue that might inflate the results is the fact that the data is shuffled randomly. While this is common practice for most machine learning tasks, it is dangerous when working with brain data due to changes in the signal over time. As a result, selecting random samples might inflate the accuracy compared to having a proper train and test set that are separated in time. Ideally the cross-validation should be done using contiguous folds.
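For example, with scikit-learn this would amount to something like the following (a sketch; X and y are assumed to be the trials kept in recording order):

    from sklearn.model_selection import KFold, TimeSeriesSplit

    cv_blocked = KFold(n_splits=5, shuffle=False)   # contiguous folds instead of random shuffling
    cv_forward = TimeSeriesSplit(n_splits=5)        # or a forward-chaining split
    # e.g. cross_val_score(model, X, y, cv=cv_blocked)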
I am not quite sure whether it should be possible to have an accuracy above chance level half a second before movement onset. How long does motor preparation take? I am not familiar with this specific subject, but a quick search gave me a reaction time for sprinters of .15 seconds. Is it possible that cue processing activity was used to obtain the classification result? Please discuss this effect, because I do not understand why it should be possible to get above chance level accuracy half a second before movement onset.
There are also several technical aspects that are not clear to me. I am confident that I am unable to re-implement the proposed method and their baseline given the information provided.
LDA baseline:
—————————
For the LDA baseline, how is the varying sequence length treated?
Ledoit-Wolf analytic regularization is used, but it is not referenced. If you use that method, cite the paper.
The claim that LDA works for structured experimental tasks but not in naturalistic scenarios and will not generalize when electrode count and trial duration increases is a statement that might be true. However, it is never empirically verified. Therefore this statement should not be in the paper.
HMM baseline
—————————
How are the 1 and the 2 state HMM used w.r.t. the 5 classes? It is unclear to me how they are used exactly. Is there a single HMM per class? Please be specific.
LSTM Model
—————
What is the random and language model initialization scheme? I can only find the sequence auto-encoder in the Dai and Le paper.
Model analysis
——————————-
It is widely accepted in the neuroimaging community that linear weight vectors should not be interpreted directly. It is actually impossible to do this. Therefore this section should be completely re-done. Please read the following paper on this subject.
Haufe, Stefan, et al. "On the interpretation of weight vectors of linear models in multivariate neuroimaging." Neuroimage 87 (2014): 96-110.
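Concretely, the correction proposed in that paper is to report activation patterns rather than filters; for a single linear decoding filter w this is, up to a scaling by the variance of the decoder output, a = cov(X) w. A minimal sketch in my own notation:

    import numpy as np

    def activation_pattern(X, w):
        # X: (n_trials, n_channels) data, w: (n_channels,) linear decoding filter
        return np.cov(X, rowvar=False) @ w   # this pattern, not w itself, is what can be interpreted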
References
————
Ledoit-Wolf regularization is used but not cited. Fix this.
There is no citation for the random/language model initialization of the LSTM model. I have no clue how to do this without proper citation.
Le et al. (2011) are referenced for auto-encoders. This is definitely not the right citation.
Rumelhart, Hinton, & Williams, 1986a; Bourlard & Kamp, 1988; Hinton & Zemel, 1994 and Bengio, Lamblin, Popovici, & Larochelle, 2007; Ranzato, Poultney, Chopra, & LeCun, 2007 are probably all more relevant.
Please cite the relevant work on affine transformations for transfer learning, especially the work by Morioka et al., who also learn an input transform.
Morioka, Hiroshi, et al. "Learning a common dictionary for subject-transfer decoding with resting calibration." NeuroImage 111 (2015): 167-178. |
iclr_2018_Hk91SGWR- | INVESTIGATING HUMAN PRIORS FOR PLAYING VIDEO GAMES
What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. | Overall:
I really enjoyed reading this paper and think the question is super important. I have some reservations about the execution of the experiments as well as some of the conclusions drawn. For this reason I am currently a weak reject (weak because I believe the question is very interesting). However, I believe that many of my criticisms can be assuaged during the rebuttal period.
Paper Summary:
For RL to play video games, it has to play many many many many times. In fact, many more times than a human where prior knowledge lets us learn quite fast in new (but related) environments. The authors study, using experiments, what aspects of human priors are the important parts.
The authors’ Main Claim appears to be: “While common wisdom might suggest that prior knowledge about game semantics such as ladders are to be climbed, jumping on spikes is dangerous or the agent must fetch the key before reaching the door are crucial to human performance, we find that instead more general and high-level priors such as the world is composed of objects, object like entities are used as subgoals for exploration, and things that look the same, act the same are more critical.”
Overall, I find this interesting. However, I am not completely convinced by some of the experimental demonstrations.
Issue 0: The experiments seem underpowered / not that well analyzed.
There are only 30 participants per condition and so it’s hard to tell whether the large differences in conditions are due to noise and what a stable ranking of conditions actually looks like. I would recommend that the authors triple the sample size and be more clear about reporting the outcomes in each of the conditions.
It’s not clear what the error bars in figure 1 represent, are they standard deviations of the mean? Are they standard deviations of the data? Are they confidence intervals for the mean effect?
Did you collect any extra data about participants? One potentially helpful example is asking how familiar participants are with platformer video games. This would give at least some proxy to study the importance of priors about “how video games are generally constructed” rather than priors like “objects are special”.
Issue 1: What do you mean by “objects”?
The authors interpret the fact that performance falls so much between conditions b and c to mean that human priors about “objects are special” are very important. However, an alternative explanation is that people explore things which look “different” (ie. Orange when everything else is black).
The problem here comes from an unclear definition of what the authors mean by an "object", so in revision I would like the authors to clarify what precisely they mean by a prior about "the world is composed of objects" and how this particular experiment differentiates "object" from a more general prior about "video games have clearly defined goals, there are 4 clearly defined boxes here, let me try touching them."
This is important because a clear definition will give us an idea for how to actually build this prior into AI systems.
Issue 2: Are the results here really about “high level” priors?
There are two ways to interpret the authors’ main claim: the strong version would maintain that semantic priors aren’t important at all.
There is no real evidence here for the strong version of the claim. A real test would be to reverse some of the expected game semantics and see if people perform just as well as in the “masked semantics” condition.
For example, suppose we had exactly the same game and N different types of objects in various places of the game where N-1 of them caused death but 1 of them opened the door (but it wasn’t the object that looked like a key). My hypothesis would be that performance would fall drastically as semantic priors would quickly lead people in that direction.
Thus, we could consider a weaker version of the claim: semantic priors are important but even in the absence of explicit semantic cues (note, this is different from having the wrong semantic cues as above) people can do a good job on the game. This is much more supported by the data, but still I think very particular to this situation. Imagine a slight twist on the game:
There is a sword (with a lock on it), a key, a slime and the door (and maybe some spikes). The player must do things in exactly this order: first the player must get the key, then they must touch the sword, then they must kill the slime, then they go to the door. Here without semantic priors I would hypothesize that human performance would fall quite far (whereas with semantics people would be able to figure it out quite well).
Thus, I think the authors’ claim needs to be qualified quite a bit. It’s also important to take into account how much work general priors about video game playing (games have goals, up jumps, there is basic physics) are doing here (the authors do this when they discuss versions of the game with different physics). |
iclr_2018_HJaDJZ-0W | Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros. Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy. This technique allows us to reduce the model size by roughly 10×. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32×32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity. | Thanks to the authors for their response.
Though the paper presents an interesting approach, it relies heavily on heuristics (such as those mentioned in the initial review) without a thorough investigation of scenarios in which these might not work. Also, it might be helpful to investigate if there are ways to better group the variables for group lasso regularization. The paper therefore needs further improvements towards following a more principled approach.
=====================================
This paper presents methods for inducing sparsity in terms of blocks of weights in neural networks, which aim to combine the benefits of sparsity and faster access on computing architectures. This is achieved by pruning blocks of weights and group lasso regularization. It is demonstrated empirically that model size can be reduced by up to 10 times with some loss in prediction accuracy.
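For concreteness, the block-pruning step as I understand it amounts to something like the following (my own sketch and names, with the maximum magnitude in each block acting as the block's representative; not the authors' code):

    import numpy as np

    def prune_blocks(W, block=4, tau=0.1):
        # W: (M, N) weight matrix with M, N divisible by `block`; zero out blocks whose max |weight| < tau
        M, N = W.shape
        mags = np.abs(W).reshape(M // block, block, N // block, block).max(axis=(1, 3))
        mask = (mags >= tau).astype(W.dtype)
        return W * np.kron(mask, np.ones((block, block)))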
Though the paper presents some interesting evaluations on the impact of block-based sparsity in RNNs, some of the shortcomings of the paper seem to be:
- The approach taken consists of several heuristics, such as taking the maximum of the weights in a block to represent that block and stopping pruning till 40% of training has been achieved, rather than following a more principled approach. Also, the algorithm for computing the pruning threshold is based on a new set of hyper-parameters. It is not clear under what conditions the above settings will (not) work.
- For the group lasso method, since there are many ways to group the variables, it is not clear how the variables are grouped. Is there a reasoning behind a particular grouping of the variables? Individually, group lasso does not seem to work, and gives much worse results. The reasons for worse performance could be investigated. It is possible that important weights are in different groups, and group sparsity is forcing some of them to be zero, and hence leading to worse results. It would be insightful to explain the kind of solver used for group lasso regularization, and whether it works for large-scale problems.
- The results for various kinds of sparsity are unclear, in the sense that it is not obvious how to set the block size a priori to obtain a minimal reduction in accuracy while still achieving significant sparsity, without having to repeat the process for various choices.
Overall, the paper does not seem to present novel ideas, and is mainly focused on evaluating the impact of block-based sparsity instead of the weight pruning of Han et al. As mentioned in Section 2, regularization has been used earlier to achieve sparsity in deep networks. In this view the significance over existing work is relatively narrow, and no explicit comparison with existing methods is provided. It is possible that an existing pruning method, such as that of Han et al., leads to an 8x decrease in model size while retaining the accuracy, while the proposed method leads to a 10x decrease while also decreasing the accuracy by 10%. Scenarios like these need to be evaluated to understand the impact of the method proposed in this paper.
iclr_2018_HJNGGmZ0Z | We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn 'distributional similarity' in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space. To validate our hypothesis, we focus on the 'image' side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant. We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis. We found that image captioning models (i) are capable of separating structure from noisy input representations; (ii) experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space; (iii) cluster images with similar visual and linguistic information together; (iv) are heavily reliant on test sets with a similar distribution as the training set; (v) repeatedly generate the same captions by matching images and 'retrieving' a caption in the joint visual-textual space. Our experiments all point to one fact: that our distributional similarity hypothesis holds. We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace. | This paper analyzes the effect of image features on image captioning. The authors propose to use a model similar to that of Vinyals et al., 2015 and change the image features it is conditioned on. The MSCOCO captioning and Flickr30K datasets are used for evaluation.
Introduction
- The introduction to the paper could be made clearer - the authors talk about the language of captioning datasets being repetitive, but that fact is neither used nor discussed later.
- The introduction also states that the authors will propose ways to improve image captioning. This is never discussed.
Captioning Model and Table 1
- The authors use greedy (argmax) decoding which is known to result in repetitive captions. In fact, Vinyals et al. note this very point in their paper. I understand this design choice was made to focus more on the image side, rather than the decoding (language) side, but I find it to be very limiting. In this regime of greedy decoding it is hard to see any difference between the different ConvNet features used for captioning - Table 1 shows meteor scores within 0.19 - 0.22 for all methods.
- Another effect (possibly due to greedy decoding + choice of model) is that the numbers in Table 1 are rather low compared to the COCO leaderboard. The top 50 entries have METEOR scores >= 0.25, while the maximum METEOR score reported by the authors is 0.22. A similar trend holds for other metrics like BLEU-4.
- The results of Table 5 need to be presented and interpreted in the light of this caveat of greedy decoding.
Experimental Setup and Training Details
- How was the model optimized? No training details are provided. Did you use dropout? Were hyperparameters fixed for training across different feature sizes of VGG19 and ResNet-152? What is the variance in the numbers for Table 1?
Main claim of the paper
Devlin et al., 2015 show a simple nearest neighbor baseline which in my opinion shows this more convincingly. Two more papers from the same group also make similar observations - tweaking the image representation makes image captioning better: (1) Fang et al., 2015: Multiple-instance Learning using bag-of-objects helps captioning; (2) Misra et al. 2016 (not cited): label noise can be modeled, which helps captioning. This claim has been both made and empirically demonstrated earlier.
Metrics for evaluation
- Anderson et al., 2016 (not cited) proposed the SPICE metric and also showed how current metrics including CIDEr may not be suitable for evaluating image captions. The COCO leaderboard also uses this metric as one of its evaluation metrics. If the authors are evaluating on the test set and reporting numbers, then it is odd that they `skipped' reporting SPICE numbers.
Choice of Datasets
- If we are thoroughly evaluating the effect of image features, doing so on other datasets is very important. Visual Genome (Krishnan et al., not cited) and SIND (Huang et al., not cited) are two datasets which are both larger than Flickr30k and have different image distributions from MSCOCO. These datasets should show whether using more general features (YOLO-9k) helps.
The authors should evaluate on these datasets to make their findings stronger and more valuable.
Minor comments
- Figure 1 is hard to read on paper. Please improve it.
- Figure 2 is hard to read even on screen. It is really interesting, so improving the quality of this figure will really help. |
iclr_2018_SJtChcgAW | Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy. However, most of the existing pruning schemes either have to be applied during training or require a costly retraining procedure after pruning to regain classification accuracy. In this paper we propose a cheap pruning algorithm based on difference of convex functions (DC) optimisation. We also provide theoretical analysis for the growth in the Generalization Error (GE) of the new pruned network. Our method can be used with any convex regulariser and allows for a controlled degradation in classification accuracy, while being orders of magnitude faster than competing approaches. Experiments on common feedforward neural networks show that for sparsity levels above 90% our method achieves 10% higher classification accuracy compared to Hard Thresholding. | This paper casts the pruning optimization problem of NetTrim as a difference of convex problems, and uses DCA to obtain the smaller weight matrix; this algorithm is also analyzed theoretically to provide a bound on the generalization error of the pruned network.
However, there are many questions that aren't answered in the paper that make it difficult to evaluate: in particular, some experimental results leave open more questions for performance analysis.
Quality: of good quality, but incomplete.
Clarity: clear with some typos
Originality: a new approach to the NetTrim algorithm, which is somewhat original, and a new generalization bound for the algorithm.
Significance: somewhat significant.
PROS
- A very efficient algorithm for pruning, which can run orders of magnitude faster than the approaches that were compared to on certain architectures.
- An interesting generalization bound for the pruned network which is in line experimentally with decreasing robustness to pruning on layers close to the input.
CONS
- Non-trivial loss of accuracy on the pruned network, which cannot be estimated for larger-scale pruning as the experiments only prune one layer.
- No in-depth analysis of the generalization bound.
Main questions:
- You mention you use a variant of DCA: could you detail what differences Alg. 2 has with classical DCA?
- Where do you use the 0-1 loss in Thm. 3.2?
- I think your result in Theorem 3.2 would be significantly stronger if you could provide an analysis of the bound you obtain: in which cases can we expect certain terms to be larger or smaller, etc.
- Your experiments in section 4.2 show a non-trivial degradation of the accuracy with FeTa. Although the time savings seem worth the tradeoff to prune *one* layer, have you run the same experiments when pruning multiple layers? Could you comment on how the accuracy evolves with multiple pruned layers?
- It would be nice to see the curves for NetTrim and/or LOBS in Fig. 2.
- Have you tried retraining the network after pruning? Did you observe the same behavior as mentioned in (Dong et al., 2017) and (Wolfe et al., 2017)?
- It would be interesting to plot the theoretical (pessimistic) GE bound as well as the experimental accuracy degradation.
Nitpicks:
-Ubiquitous (first paragraph)
-difference of convex problemS
- The references should be placed before the appendix.
- The amount of white space should be reduced (e.g. around Eq. (1)). |
iclr_2018_BkpXqwUTZ | In vanilla backpropagation (VBP), activation function matters considerably in terms of non-linearity and differentiability. Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning (DL). This work shows that a differentiable activation function is not necessary any more for error backpropagation. The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA). Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures. We don't claim that ITD works completely the same as the spike-time dependent plasticity (STDP) in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning. | - This paper is not well written and incomplete. There is no clear explanation of what exactly the authors want to achieve in the paper, what exactly is their approach/contribution, experimental setup, and analysis of their results.
- The paper is hard to read due to many abbreviations, e.g., the last paragraph in page 2.
- The format is inconsistent. Section 1 is numbered, but not the other sections.
- in page 2, what do the numbers mean at the end of each sentence? Probably the figures?
- in page 2, "in this figure": which figure is this referring to?
Comments on prior work:
p 1: authors write: "vanilla backpropagation (VBP)" "was proposed around 1987 Rumelhart et al. (1985)."
Not true. A main problem with the 1985 paper is that it does not cite the inventors of backpropagation. The VBP that everybody is using now is the one published by Linnainmaa in 1970, extending Kelley's work of 1960. The first to publish the application of VBP to NNs was Werbos in 1982. Please correct.
p 1: authors write: "Almost at the same time, biologically inspired convolutional networks was also introduced as well using VBP LeCun et al. (1989)."
Here one must cite the person who really invented this biologically inspired convolutional architecture (but did not apply backprop to it): Fukushima (1979). He is cited later, but in a misleading way. Please correct.
p 1: authors write: "Deep learning (DL) was introduced as an approach to learn deep neural network architecture using VBP LeCun et al. (1989; 2015); Krizhevsky et al. (2012)."
Not true. Deep Learning was introduced by Ivakhnenko and Lapa in 1965: the first working method for learning in multilayer perceptrons of arbitrary depth. Please correct. (The term "deep learning" was introduced to ML in 1986 by Dechter for something else.)
p1: authors write: "Extremely deep networks learning reached 152 layers of representation with residual and highway networks He et al. (2016); Srivastava et al. (2015)."
Highway networks were published half a year earlier than resnets, and reached many hundreds of layers before resnets. Please correct.
General recommendation: Clear rejection for now. But perhaps the authors want to resubmit this to another conference, taking into account the reviewer comments.
iclr_2018_Skw0n-W0Z | Published as a conference paper at ICLR 2018 TEMPORAL DIFFERENCE MODELS: MODEL-FREE DEEP RL FOR MODEL-BASED CONTROL
Model-free reinforcement learning (RL) is a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even with off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods. | This paper proposes a "temporal difference model learning", a method that aims to combine the benefits of model-based and model-free RL. The proposed method essentially learns a time-varying goal-conditional value function for a specific reward formulation, which acts as a surrogate for a model in an MPC-like setting. The authors show that the method outperforms some alternatives on three continuous control domains and real robot system.
I believe this paper to be borderline, but ultimately below the threshold for acceptance. On the positive side, there are certainly some interesting ideas here: the notion of goal-conditioned value functions as proxies for a model, and as a means of merging model-free and model-based approaches, is really interesting, and hints at a deeper structure to goal-conditioned value functions in general. Ultimately, though, I feel that there are two main issues that make this research feel as though it is still in its earlier stages: 1) the very large focus on the perspective that this approach is unifying model-based and model-free RL, when in fact this connection seems a bit tenuous; and 2) the rather lackluster experimental results, which show only marginal improvement over purely model-based methods (at the cost of much additional complexity), and which make me wonder if there's an issue with their implementation of prior work (namely the Hindsight Experience Replay algorithm).
To address the first point, although the paper stresses it to a very high degree, I can't help but feel that the claimed advance of "unifying model-based and model-free RL" is overstated. As far as I can tell, the connection is as follows: the learned quantity here is a time-varying goal-conditioned value function, and under some specific definition of reward, we can interpret the constraint that this value function equal zero as a proxy for the dynamics constraint in MPC. But the exact correspondence between this and the MPC formulation only occurs for a horizon of size zero: longer horizons require a multi-step MPC for the definition of the model-free and model-based correspondence. The fact that the action selection of a model-based method and this approach have some function which looks similar (but only under certain conditions) just seems like a fairly odd connection to highlight so heavily.
Rather, it seems to me that what's happening here is really quite simple: the authors are extending goal-conditioned value functions to the case of non-stationary finite horizon value functions (the claimed "key insight" in eq (5) is a completely standard finite-horizon MDP formulation). This seems to describe perfectly well what is happening here, and it does also seem intuitive that this provides an advantage over stationary goal-conditioned value functions: just as goal conditioned value functions offer the advantage of considering "every state as a goal", this method can consider "every state as a goal for every time horizon". This seems interesting enough on its own, and I admit I don't see the need for the method to be yet another claimed unification of model-free and model-based RL.
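For reference, my reading of that formulation is the standard finite-horizon backup (written schematically here; the paper's exact reward definition may differ):

Q(s_t, a_t, g, \tau) = \mathbb{E}_{s_{t+1}}\big[ -d(s_{t+1}, g)\,\mathbf{1}[\tau = 0] + \max_{a'} Q(s_{t+1}, a', g, \tau - 1)\,\mathbf{1}[\tau > 0] \big],

i.e., an ordinary finite-horizon Bellman backup in which the goal g and the remaining horizon \tau are simply additional inputs to Q.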
I would also suggest that the authors look into the literature on how TD methods implicitly learn models (see e.g. Boyan 1997 "Least-squares temporal difference learning", and Parr et al., 2007 "An analysis of linear models..."). In these works it has been shown that least squares TD methods (at least in the linear feature setting), implicitly learn a dynamics model in feature space, but only the "projection" of the reward function is actually needed to learn the TD weights. In building the proposed value functions, it seems like the authors are effectively solving for multiple rewards simultaneously, which would effectively preserve the learned dynamics model. I feel like this may be an interesting line of analysis for the paper if the authors _do_ want to stick with the notion of the method as unifying model-free and model-based RL.
All these points may ultimately just be a matter of interpretation, though, if not for the second issue with the paper, which is that the results seem quite lackluster, and the claimed performance of HER seems rather suspicious. The authors evaluate the algorithm on just three continuous control tasks (and a real robot, which is more impressive, but the task here is still so extremely simple for a real robot system that it really just qualifies as a real-world demonstration rather than an actual application). And in these three settings, a model-based approach seems to work just as well on two of the tasks, and may soon perform just as well after a few more episodes on the last task (it doesn't appear to have converged yet). And despite the HER paper showing improvement over traditional policy approaches, in these experiments plain DDPG consistently performs as well or better than HER. |
iclr_2018_H1cWzoxA- | BI-DIRECTIONAL BLOCK SELF-ATTENTION FOR FAST AND MEMORY-EFFICIENT SEQUENCE MODELING
Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN. | Pros:
The paper proposes a “bi-directional block self-attention network (Bi-BloSAN)” for sequence encoding, which inherits the advantages of the multi-head (Vaswani et al., 2017) and DiSAN (Shen et al., 2017) networks but is claimed to be more memory-efficient. The paper is written clearly and is easy to follow. The source code is released for reproducibility. The main originality is using block (or hierarchical) structures; i.e., the proposed model splits the entire sequence into blocks, applies an intra-block SAN to each block for modeling local context, and then applies an inter-block SAN to the outputs of all blocks to capture long-range dependency. The proposed model was tested on nine benchmarks and achieves a good efficiency-memory trade-off.
Cons:
- Methodology of the paper is very incremental compared with previous models.
- Many of the baselines listed in the paper are not competitive; e.g., for SNLI, state-of-the-art results are not included in the paper.
- The paper argues for advantages of the proposed model over CNNs by assuming the latter only capture local dependency, an assumption that is not supported by any discussion of, or comparison with, hierarchical CNNs.
- The block splitting (as detailed in the appendix) is rather arbitrary, in that it potentially splits coherent language segments apart. This is unnatural, e.g., compared with alternatives such as using linguistic segments as blocks.
- The main originality of the paper is the block structure. However, the paper doesn’t analyze how and why the blocks bring improvement.
- If we remove intra-block self-attention (and only keep token-level self-attention), will the performance be significantly worse? |
iclr_2018_BJ_wN01C- | DEEP REWIRING: TRAINING VERY SPARSE DEEP NETWORKS
Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But also generic hardware and software implementations of deep learning run more efficiently for sparse networks. Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to train directly a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while its total number is all the time strictly bounded. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior. | This paper presents an iterative approach to sparsify a network already during training. During the training process, the amount of connections in the network is guaranteed to stay under a specific threshold. This is a big advantage when training is performed on hardware with computational limitations, in comparison to "post-hoc" sparsification methods, that compress the network after training.
The method is derived by considering the "rewiring" of an (artificial) neural network as a stochastic process. This perspective is based on a recent model in computational biology but can also be interpreted as a (sequential) Monte Carlo sampling-based stochastic gradient descent approach. References to previous work in this area are missing, e.g.
[1] de Freitas et al., Sequential Monte Carlo Methods to Train Neural Network Models, Neural Computation 2000
[2] Welling et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics, ICML 2011
The stochastic gradient method in [2] in particular is strongly related to the proposed approach.
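For concreteness, the update in [2] (stochastic gradient Langevin dynamics) is, roughly,

\Delta\theta_t = \frac{\epsilon_t}{2}\Big(\nabla \log p(\theta_t) + \frac{N}{n}\sum_{i=1}^{n}\nabla \log p(x_{t_i} \mid \theta_t)\Big) + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \epsilon_t),

i.e., a minibatch gradient step on the log-posterior plus appropriately scaled Gaussian noise, which is the same kind of noisy parameter dynamics that the rewiring updates of the present paper rely on; this relation deserves an explicit discussion.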
Positive aspects
- The presented approach is well grounded in the theory of stochastic processes. The authors provide proofs of convergence by showing that the iterative updates converge to a fixpoint of the stochastic process
- By keeping the temperature parameter of the stochastic process high, it can be directly applied to online transfer learning.
- The method is specifically designed for online learning with limited hardware resources.
Negative aspects
- The presented approach is outperformed for moderate compression levels (by Han's pruning method for >5% connectivity on MNIST, Fig. 3 A, and by l1-shrinkage for >40% connectivity on CIFAR-10 and TIMIT, Fig. 3 B&C). The results on MNIST in particular suggest that this method is most advantageous for very high compression levels. However, in these cases the overall classification accuracy has already dropped significantly, which could limit the practical applicability.
- A detailed discussion of the relation to previously existing, very similar work is missing (see above)
Technical Remarks
Fig. 1, 2 and 3 are referenced on the pages following the page containing the figure. Readability could be slightly increased by putting the figures on the respective pages. |
iclr_2018_SyW4Gjg0W | Graph kernels have been successfully applied to many graph classification problems. Typically, a kernel is first designed, and then an SVM classifier is trained based on the features defined implicitly by this kernel. This two-stage approach decouples data representation from learning, which is suboptimal. On the other hand, Convolutional Neural Networks (CNNs) have the capability to learn their own features directly from the raw data during training. Unfortunately, they cannot handle irregular data such as graphs. We address this challenge by using graph kernels to embed meaningful local neighborhoods of the graphs in a continuous vector space. A set of filters is then convolved with these patches, pooled, and the output is then passed to a feedforward network. With limited parameter tuning, our approach outperforms strong baselines on 7 out of 10 benchmark datasets, and reaches comparable performance elsewhere. Code and data are publicly available 1 . | This paper proposes a graph classification method by integrating three techniques, community detection, graph kernels, and CNNs.
* This paper is clearly written and easy to follow. Thus the clarity is high.
* The originality is not high as the application of neural networks for graph classification has already been studied elsewhere and the proposed method is a direct combination of three existing methods, community detection, graph kernels, and CNNs.
* The quality and the significance of this paper are not high due to the following reasons:
- The motivation is misleading in two respects.
First, the authors say that the graph kernel + SVM approach has a drawback due to two independent processes of graph representation and learning.
However, the parameters included in the respective graph kernels are usually optimized via the SVM classification, hence they are not independent of each other.
Second, the authors say that the proposed method addresses the above issue of independence between graph representation and learning.
However, it also uses a two-step procedure, as it first obtains the kernel matrix K via graph kernels and then applies a CNN for classification, which is fundamentally the same as the existing approach.
Although community detection is used before graph kernels, such a subgraph extraction process is already implicitly employed in various graph kernels.
I recommend revising and clarifying this point.
- In the experimental evaluation, why are several kernels, including SP, RW, and WL, not used on the latter five datasets?
This missing experiment significantly deteriorates the quality of the empirical evaluation, and I strongly recommend adding results for such kernels.
- It is mentioned that the parameter h is fixed to 5 in the WL kernel. However, it is known that the performance of the WL kernel depends on the parameter and it should be tuned by cross-validation.
In contrast, parameters (number of epochs and the learning rate) are tuned in the proposed method. Thus the current comparison is not fair.
- In addition to the above point, how are the parameters for GR and RW set?
- Runtime is shown in Table 4 but there is no comparison with other methods. Although it is mentioned in the main text that the proposed method is faster than Graph CNN and Deep Graph Kernels, there are no concrete values and this statement is questionable (runtime will easily vary due to the hardware configuration).
* Additional comment:
- Why is the community detection step needed? What will happen if K is directly constructed from the given N graphs, and what is the advantage of using not the original graphs but the extracted subgraphs?
- In the first step of finding characteristic subgraphs, frequent subgraph mining can be used instead of community detection.
Frequent subgraph mining is extensively used in various methods for classification of graph-structured data, for example:
* Tsuda, K., Entire regularization paths for graph data, ICML 2007.
* Thoma, M. et al., Discriminative frequent subgraph mining with optimality guarantees, Statistical Analysis and Data Mining, 2010
* Takigawa, I., Mamitsuka, H., Generalized Sparse Learning of Linear Models Over the Complete Subgraph Feature Set, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017
What is the advantage of using the community detection compared to frequent subgraph mining or other subgraph enumeration methods? |
iclr_2018_S1680_1Rb | The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks. | Summary: This paper proposes a new graph-convolution architecture, based on Cayley transform of the matrix. Succinctly, if L denotes the Laplacian of a graph, this filter corresponds to an operator that is a low degree polynomial of C(L) = (hL - i)/(hL+i), where h is a scalar and i denotes sqrt(-1). The authors contend that such filters are interesting because they can 'zoom' into a part of the spectrum, depending on the choice of h, and that C(L) is always a rotation matrix with eigenvalues with magnitude 1. The authors propose to compute them using Jacobi iteration (using the diagonal as a preconditioner), and present experimental results.
Opinion: Though the Cayley filters seem to have interesting properties, I find the authors' theoretical and experimental justification insufficient to conclude that they offer sufficient advantage over existing methods. I list my major criticisms below:
1. The comparison to Chebyshev filters (small degree polynomials in the Chebyshev basis) at several places is unconvincing. The results on CORA (Fig 5a) compare filters with the same order, though Cayley filters have twice the number of variables for the same order as Chebyshev filters. Similarly for Fig 1, order 3 Cayley should be compared to Order 6 Chebyshev (roughly).
2. Since Chebyshev polynomials blow up exponentially when applied to values larger than 1, applying Chebyshev filters to unnormalized Laplacians (Fig 5b) is an unfair comparison.
3. The authors basically apply Jacobi iteration (gradient descent using a diagonal preconditioner) to estimate the Cayley filters, and contend that a constant number of iterations of Jacobi are sufficient. This ignores the fact that their convergence rate scales quadratically in h and the max-degree of the graph. Moreover, this means that the filter is effectively a low degree polynomial in (D^(-1)A)^K, where A is the adjacency matrix of the graph, and K is the number of Jacobi iterations. It's unclear how (or why) a choice of K might be good, or why it makes sense to throw away all powers of D^(-1)A f, even though we're computing all of them.
Also, note that this means a K-fold increase in the runtime for each evaluation of the network, compared to the Chebyshev filter.
Among the other experimental results, the synthetic results do clearly convey a significant advantage at least over Chebyshev filters with the same number of parameters. The CORA results (table 2) do convey a small but clear advantage. The MNIST result seems a tie, and the comparison for MovieLens doesn't make it obvious that the number of parameters is the same.
Overall, this leads me to conclude that the paper presents insufficient justification to conclude that Cayley filters offer a significant advantage over existing work. |
iclr_2018_BkA7gfZAb | STABLE DISTRIBUTION ALIGNMENT USING THE DUAL OF THE ADVERSARIAL DISTANCE
Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time. | This paper proposes to re-formulate the GAN saddle point objective (for a logistic regression discriminator) as a minimization problem by dualizing the maximum likelihood objective for regularized logistic regression (where the dual function can be obtained in closed form when the discriminator is linear). They motivate their approach by repeating the previously made claim that the naive gradient approach is non-convergent for generic saddle point problems (Figure 1); while a gradient approach often works well for a minimization formulation.
The dual problem of regularized logistic regression is an entropy-regularized concave quadratic objective problem where the Hessian is y_i y_j <x_i, x_j>, thus highlighting the pairwise similarities between the points x_i & x_j; here the labels represent whether the point x comes from the samples A from the target distribution or B from the proposal distribution. This paper then compares this objective with the MMD distance between the samples A & B. It points out that the adversarial logistic distance can be viewed as an iteratively reweighted empirical estimator of the MMD distance, an interesting analogy (but also showing the limited power of the adversarial logistic distance for getting good generating distributions, given e.g. that the MMD has been observed in the past to perform poorly for face generation [Dziugaite et al. UAI 2015]). From this analogy, one could expect the method to improve over MMD, but not necessarily significantly in comparison to an approach which would use more powerful discriminators.
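For reference, the dual I believe is being used here (the standard one for l2-regularized logistic regression with a linear discriminator w) is, up to constants,

\max_{\alpha \in [0,1]^n} \; \sum_i \big( -\alpha_i \log \alpha_i - (1 - \alpha_i) \log(1 - \alpha_i) \big) \; - \; \frac{1}{2\lambda} \Big\| \sum_i \alpha_i y_i x_i \Big\|^2,

with the optimal primal weights recovered as w^*(\alpha) \propto \sum_i \alpha_i y_i x_i; expanding the squared norm exposes the pairwise y_i y_j <x_i, x_j> structure explicitly, and hence the MMD analogy.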
This paper then investigates the behavior of this adversarial logistic distance in the context of aligning distributions for domain adaptation, with experiments on a visual adaptation task. They observe better performance for their approach in comparison to ADDA, improved WGAN and MMD, when restricting the discriminators to be a linear classifier.
== Evaluation
I found this paper quite clear to read and enjoyed reading it. The observations are interesting, despite being on the toyish side. I am not an expert on GANs for domain adaptation, and thus I cannot judge the quality of the experimental comparison for this application, but it seemed reasonable, apart from the restriction to linear discriminators (which is required by the framework of this paper).
One concern about the paper (but this is an unfortunate common feature of most GAN papers) is that it ignores the vast knowledge on saddle point optimization coming from the optimization community. The instability of a gradient method on non-strongly convex-concave saddle point problems (like the bilinear form of Figure 1) is a well-known property, and many alternative *stable* gradient based algorithms have been proposed to solve saddle point problems which do not require transforming them to a minimization problem as suggested in this paper. Moreover, the transformation to the minimization form crucially required the closed form computation of the dual function (with w* just defined above equation (2)), and this is limited to linear discriminators, thus ruling out the use of the proposed approach to more powerful discriminators like deep neural nets. Thus the significance appears a bit limited to me.
== Other comments
1) Note that d(A, B'_theta) is *equal* to min_alpha max_w (...) above equation (2) (it is not just an upper bound). This is a standard result coming from the fact that the Fenchel dual problem to regularized maximum likelihood is the maximum entropy problem with a quadratic objective as (2). See e.g. Section 2.2 of [Collins et al. JMLR 2008] (this is for the more general multiclass logistic regression problem, but (2) is just the binary special case of equation (4) in the [Collins ... ] reference). And note that the "w(u)" defined in this reference is the lambda*w*(alpha) optimal relationship defined in this paper (but without the 1/lambda factor because of just slightly different writing; the point though is that strong duality holds there and thus one really has equality).
[Collins et al. JMLR 2008] Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, Peter L. Bartlett, Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks, JMLR 2008.
[Dziugaite et al. UAI 2015] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015 |
iclr_2018_rkcQFMZRb | VARIATIONAL IMAGE COMPRESSION WITH A SCALE HYPERPRIOR
We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics. | Authors propose a transform coding solution by extending the work in Balle 2016. They define an hyperprior for the entropy coder to model the spatial relation between the transformed coefficients.
The paper is well written, although I had trouble following some parts. The results of the proposal are state-of-the-art, and there is an extremely exhaustive comparison with many methods.
In my opinion the work is of good enough quality to be presented at ICLR. However, I think it could be excellent if some parts are improved. Below I detail some parts that I think could be improved.
*** MAIN ISSUES
I have two main concerns about motivation that are related. The first refers to hyperprior motivation. It is not clear why, if GDN was proposed to eliminate statistical dependencies between pixels in the image, the main motivation is that GDN coefficients are not independent. Perhaps this confusion could be resolved by broadening the explanation in Figure 2. My second concern is that it is not clear why it is better to modify the probability distribution for the entropy encoder than to improve the GDN model. I think this is a very interesting issue, although it may be outside the scope of this work. As far as I know, there is no theoretical solution to find the right balance between the complexity of transformation and the entropy encoder. However, it would be interesting to discuss this as it is the main novelty of the work compared to other methods of image compression based on deep learning.
*** OTHER ISSUES
INTRODUCTION
-"...because our models are optimized end-to-end, they minimize the total expected code length by learning to balance the amount of side information with the expected improvement of the entropy model."
I think this point is very interesting, it would be good to see some numbers of how this happens for the results presented, and also during the training procedure. For example, a simple comparison of the number of bits in the signal and side information depending on the compression rate or the number of iterations during model training.
COMPRESSION WITH VARIATIONAL MODELS
- There is something missing in the sentence: "...such as arithmetic coding () and transmitted..."
- Fig1. To me it is not clear how to read the left-hand schemes. Would it be possible to include the distributions explicitly? Also it is strange that there is a \tilde{y} in both schemes but with different conditional dependencies. Another thing is that the symbol ψ appears in this figure and is not used in section 2.
- It would be easier to follow if the symbols of the functions' parameters were changed to something like \theta_a and \theta_s.
- "Distortion is the expected difference between..." Why is the "expected" word used here?
- "...and substituting additive uniform noise..." is this phrase correct? Are the authors in Balle 2016 substituting additive uniform noise?
- In equation (1), is the first term zero or constant? when talking about equation (7) authors say "Again, the first term is constant,...".
- The sentence "Most previous work assumes..." sounds strange.
- The example in Fig. 2 is extremely important to understand the motivation behind the hyperprior but I think it needs a little more explanation. This example is so important that it may need to be explained at the beginning of the work. Is this a real example of a model trained with and without normalization? If so, please specify. Why is GDN not able to eliminate these spatial dependencies? Would these dependencies be eliminated if normalization were applied between spatial coefficients? Could you remove dependencies with more layers or different parameters in the GDN?
INTRODUCTION OF A SCALE HYPERPRIOR
- TYPO "...from the center pane of..."
- "...and propose the following extension of the model (figure 3):" there is nothing after the colon. Maybe there is something missing, or maybe it should be a dot instead of a colon. However to me there is a lack of explanation about the model.
RESULTS
- "...,the probability mass functions P_ŷi need to be constructed “on the fly”..."
How computationally costly is this?
- "...batch normalization or learning rate decay were found to have no beneficial effect (this may be due to the local normalization properties of GDN, which contain global normalization as a special case)."
This is extremely interesting. I see the connection for batch normalization, but not for decay of the learning rate. Please clarify it. Does this mean that when using GDN instead of a regular nonlinearity we no longer need to use batch normalization? Or in other words, do you think that batch normalization is useful only because it is a special case of GDN? It would be useful for the community to assess what the benefits of local normalization versus global normalization are.
- "...each of these combinations with 8 different values of λ in order to cover a range of rate–distortion tradeoffs."
Would it be possible with your method to include \lambda as an input and the model parameters as side information?
- I guess you included the side information when computing the total entropy (or number of bits), was there a different way of compressing the image and the side information?
- Using the same metrics to train and to evaluate is a little bit misleading. Evaluation plots using a different perceptual metric would be helpful.
-"Since MS-SSIM yields values between 0 (worst) and 1 (best), and most of the compared methods achieve values well above 0.9, we converted the quantity to decibels in order to improve legibility."
Are differences in MS-SSIM with this conversion significant? Is this transformation necessary? I lose the intuition. Besides, it is probably my fault, but I have not been able to "unconvert" the dB to MS-SSIM units; for instance 20*log10(1) = 0, but most curves surpass this value.
- "..., results differ substantially depending on which distortion metric is used in the loss function during training."
It would be informative to understand how the parameters change depending on the metric employed, or at least to get an intuition about which of the parameter sets g_a, g_s, h_a and h_s adapt the most.
- Figs 5, 8 and 9. How are the curves aggregated for different images? Is it the mean for each rate value? Note that depending on how this is done it could be totally misleading.
- It would be nice to include results from other methods (like the BPG and Rippel 2017) to compare with visually.
RELATED WORK
Balle et al. already published a work including a perceptual metric in the end-to-end training procedure, which I think is one of the main contributions of this work. Please include it in related work:
"End-to-end optimization of nonlinear transform codes for perceptual quality." J. Ballé, V. Laparra, and E.P. Simoncelli. PCS: Picture Coding Symposium, (2016)
DISCUSSION
The first paragraphs of the discussion section look more like a second "related work" section.
I think it is more interesting if the authors discuss the relevance of putting effort into modelling hyperprior or the distribution of images (or transformation). Are these things equivalent? Or is there some reason why we can't include hyperprior modeling in the g_a transformation? For me it is not clear why we should model the distribution of outputs as, in principle, the g_a transformation has to enforce (using the training procedure) that the transformed data follow the imposed distribution. Is it because the GDN is not powerful enough to make the outputs independent? or is it because it is beneficial in compression to divide the problem into two parts?
REFERENCES
- Balle 2016 and Theis 2017 seem to be published in the same conference the same year. Using different years for the references is confusing.
- There is something strange with these references
Ballé, J, V Laparra, and E P Simoncelli (2016). “Density Modeling of Images using a Generalized Normalization Transformation”. In: Int’l. Conf. on Learning Representations (ICLR2016). URL: https://arxiv.org/abs/1511.06281.
Ballé, Valero Laparra, and Eero P. Simoncelli (2015). “Density Modeling of Images Using a Generalized Normalization Transformation”. In: arXiv e-prints. Published as a conference paper at the 4th International Conference for Learning Representations, San Juan, 2016. arXiv: 1511.06281.
– (2016). “End-to-end Optimized Image Compression”. In: arXiv e-prints. 5th Int. Conf. for Learning Representations. |
iclr_2018_SJDJNzWAZ | Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequence dependence of human behaviors. In this work, we propose a set of methods for using time in sequence prediction. Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization. We also introduce two methods for using next event duration as regularization for training a sequence prediction model. We discuss these methods based on recurrent neural nets. We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks. The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings. | The authors present a model base on an RNN to predict marks and duration of events in a temporal point process. The main innovation of the paper is a new representation of a point process with duration (which could also be understood as marks), which allows them to use a "time mask", following the idea of word mask introduced by Choi et al, 2016. In Addition to the mask, the authors also propose a discretization of the duration using one hot encoding and using the event duration as a regularizer. They compare their method to several variations of their own method, two trivial baselines, and one state of the art method (RMTPP) using several real-world datasets and report small gains with respect to that state of the art method.
Overall, the technical contribution of the paper is minor, the gains in performance with respect to a single state-of-the-art method are minimal, and the authors oversell their contribution, especially in comparison with the related literature. More specifically, my concerns, which prevent me from recommending acceptance, are as follows:
- The authors assume the point process contains durations and intervals; however, point processes generally do not have a duration per event, but rather consist of discrete events localized at particular time points. Moreover, the duration in their representation (Figure 1) is sometimes an interevent time and sometimes a duration, which makes the whole construction inconsistent. Also, what happens to the representation depicted in Figure 1 when the duration is nonexistent or zero?
- The use of "time mask" is not properly justified and the authors are just extending the idea of word mask to their setting -- it is unclear why the duration of an event is going to provide context and in any case this seems like a minor technical contribution.
- The use of a time mask does not appear "more principled" than previous work (Du et al., Mei & Eisner, Xiao et al.). Previous work uses the framework of temporal point processes in a principled way; the current work does not. I would encourage the authors to tone down their language.
- The regularization proposed by the authors uses a Gaussian on the "prediction error" of the duration or just cross entropy on a discretization of the duration. Given the inconsistency in the definition of the duration (sometimes it is a duration, sometimes an interevent time), the resulting regularization may lead to unexpected/undesirable results. Moreover, it is unclear why the authors do not model the duration with an appropriate distribution (e.g., Weibull) and add the log-likelihood of the durations under that distribution as regularization.
- The difference in performance with respect to a single nontrivial baseline (the remaining baselines are trivial or versions of their own model) is minimal. Moreover, the authors fail to compare with other methods, e.g., the method by Mei & Eisner, which beats RMTPP. This is especially surprising since the authors mention such work in the related work and source code is available at https://github.com/HMEIatJHU/neurawkes. |
iclr_2018_SJD8YjCpW | Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network. But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively. Chen et al. (2015) proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression. We generalize this method into a framework (ArbNets) that allows for efficient arbitrary weightsharing, and use it to study the role of weight-sharing in neural networks. We show that common neural networks can be expressed as ArbNets with different hash functions. We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network. | The manuscript advocates to study the weight sharing in a more systematic way by proposing ArbNets which defines the weight sharing function as a hash function. In this framework, any existing neural network architectures, including CNN and RNN, could be incorporated into ArbNets.
The manuscript is not well written. There are multiple grammar errors and typos. Content-wise, it is already well known that CNNs and RNNs can be expressed as general MLPs with weight sharing. The introduction of ArbNets does not bring much value or insight to this area. So it seems that most content before the experimental section is common sense.
In the experimental section, it is interesting to see how different hash functions with different levels of entropy can affect the performance of neural nets. However, this single observation is not enough to sustain the whole manuscript. Two questions:
(1) What is the definition of sparsity here, and how is it controlled?
(2) There seems to be a step change in Figure 3. All the results are either between 10 and 20, or near 50. And the blue line goes up and down. Is this expected? |
iclr_2018_rJQDjk-0b | UNBIASED ONLINE RECURRENT OPTIMIZATION
The novel Unbiased Online Recurrent Optimization (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as Truncated Backpropagation Through Time (truncated BPTT), a widespread algorithm for online learning of recurrent networks Jaeger (2002). UORO is a modification of NoBackTrack Ollivier et al. (2015) that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. On the contrary, truncated BPTT does not provide this property, leading to possible divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is very significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
Current recurrent network learning algorithms are ill-suited to online learning via a single pass through long sequences of temporal data. Backpropagation Through Time (BPTT Jaeger (2002)), the current standard for training recurrent architectures, is well suited to many short training sequences. Treating long sequences with BPTT requires either storing all past inputs in memory and waiting for a long time between each learning step, or arbitrarily splitting the input sequence into smaller sequences, and applying BPTT to each of those short sequences, at the cost of losing long term dependencies. This paper introduces Unbiased Online Recurrent Optimization (UORO), an online and memoryless learning algorithm for recurrent architectures: UORO processes and learns from data samples sequentially, one sample at a time. Contrary to BPTT, UORO does not maintain a history of previous inputs and activations. Moreover, UORO is scalable: processing data samples with UORO comes at a similar computational and memory cost as just running the recurrent model on those data.
Like most neural network training algorithms, UORO relies on stochastic gradient optimization. The theory of stochastic gradient crucially relies on the unbiasedness of gradient estimates to provide convergence to a local optimum. To this end, in the footsteps of NoBackTrack (NBT) Ollivier et al. (2015), UORO provides provably unbiased gradient estimates, in a scalable, streaming fashion.
Unlike NBT, though, UORO can be easily implemented in a black-box fashion on top of an existing recurrent model in current machine learning software, without delving into the structure and code of the model.
The framework for recurrent optimization and UORO is introduced in Section 2. The final algorithm is reasonably simple (Alg. 1), but its derivation (Section 3) is more complex. In Section 6, UORO is shown to provide convergence on a set of synthetic experiments where truncated BPTT fails to display reliable convergence. An implementation of UORO is provided as supplementary material.
I am happy with the rebuttal and therefore I will keep the score of 7.
This is a very interesting paper. Training RNNs in an online fashion (with no backpropagation through time) is one of those problems which are not well explored in the research community. And I think this paper approaches this problem in a very principled manner. The authors propose to use a forward approach for the calculation of the gradients: they modify RTRL by maintaining a rank-one approximation of the Jacobian matrix (derivative of the state w.r.t. the parameters), as was done in the NoBackTrack paper. The way I think this paper differs from the NoBackTrack paper is that this version can be implemented in a black-box fashion and is hence easy to implement using current DL libraries like PyTorch.
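To make the rank-one idea concrete: RTRL propagates the full Jacobian G_t = ds_t/d\theta through G_{t+1} = (\partial F/\partial s) G_t + \partial F/\partial \theta, which is too large to store for networks of any reasonable size. As I understand it, UORO instead keeps two vectors \tilde{s}, \tilde{\theta} with \mathbb{E}[\tilde{s}\tilde{\theta}^\top] = G_t, refreshed at every step with a random sign vector \nu and variance-balancing scalars \rho_0, \rho_1, roughly

\tilde{s}_{t+1} = \rho_0 \, (\partial F/\partial s) \, \tilde{s}_t + \rho_1 \nu, \qquad \tilde{\theta}_{t+1} = (1/\rho_0) \, \tilde{\theta}_t + (1/\rho_1) \, \nu^\top (\partial F/\partial \theta),

so that each step costs about as much as an ordinary forward pass, at the price of the estimation variance I worry about below.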
Pros.
- It's an interesting paper, very easy to follow, and with a proper literature survey.
Cons:
- The results are quite preliminary. I'll note that this is a very difficult problem.
- "The proof of UORO’s convergence to a local optimum is soon to be published Masse & Ollivier (To appear)." I think this violates anonymity, so I'd encourage the authors to remove it.
Some Points:
- I find the argument about stochastic gradient descent wrong (I could be wrong though). RNNs follow the Markov property (w.r.t. the hidden state from the previous time step and the current input), so from time step t to t+1, if you change the parameters, the hidden state at time t (and all the time steps before) would carry stale information, unless you're using something like eligibility traces from the RL literature. I also don't know how to overcome this issue.
- I'd be worried about the variance in the estimate of the rank-one approximation. All the experiments carried out by the authors are small scale (hidden size = 64). I'm curious if the authors tried experimenting with larger networks; I'd guess it won't perform well due to the high variance in the approximation. I'd like to see an experiment with hidden size = 128/256/512/1024. My intuition is that because of high variance it would be difficult to train this network, but I could be wrong. I'm curious what the authors have to say about this.
- If the variance of the approximation is indeed high, can we use something to control the dynamics of the network which would result in less variance? Have the authors thought about this?
- I'd also like to see experiments on copying task/adding task (as these are standard experiments which are done for analysis of long term dependencies)
- I'd also like to see what effect the length of the sequence has on the approximation, as small errors in the approximation at each step can compound, giving rise to chaotic dynamics (a small change in input => a large change in output).
- I'd also like to know how using UORO changes the optimization as compared to backpropagation through time, in the sense of whether the two approaches would reach the same local minimum, or whether there is a possibility that the former can reach a smaller number of potential local minima as compared to BPTT.
I'm tempted to give a high score for this paper (score 7), as it is an unexplored direction in our research community, and I think this paper makes a very useful contribution by tackling this problem in a very principled way. But I'd like some more experiments to be done (which I have mentioned above); failing those experiments, I'd be forced to reduce the score (to score 5). |
iclr_2018_B1IDRdeCW | THE HIGH-DIMENSIONAL GEOMETRY OF BINARY NEURAL NETWORKS
Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of work to explain why one can effectively capture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Furthermore, the results and analysis used on BNNs are shown to generalize to neural networks with ternary weights and activations. Compared to previous research that demonstrated good classification performance with BNNs, our work explains why these BNNs work in terms of HD geometry. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks. | This paper investigates numerically and theoretically the reasons behind the empirical success of binarized neural networks. Specifically, they observe that:
(1) The angle between continuous vectors sampled from a spherically symmetric distribution and their binarized version is relatively small in high dimensions (proven to be about 37 degrees when the dimension goes to infinity), and this is demonstrated empirically to be true for the binarized weight matrices of a convnet.
(2) Except the first layer, the dot product of weights*activations in each layer is highly correlated with the dot product of (binarized weights)*activations in each layer. There is also a strong correlation between (binarized weights)*activations and (binarized weights)*(binarized activations). This is claimed to entail that the continuous weights of the binarized neural net approximate the continuous weights of a non-binarized neural net trained in the same manner.
(3) To correct the issue with the first layer in (2) it is suggested to use a random rotation, or simply use continuous weights in that layer.
The first observation is interesting, is explained clearly and convincingly, and is novel to the best of my knowledge.
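The limiting angle itself is easy to check numerically: for an i.i.d. Gaussian vector, the expected cosine with its sign vector approaches \sqrt{2/\pi}, i.e. an angle of roughly 37 degrees. A minimal numpy sketch (dimension chosen arbitrarily):

import numpy as np

rng = np.random.default_rng(0)
d = 100000                                   # dimension
w = rng.standard_normal(d)                   # continuous "ideal" vector
b = np.sign(w)                               # its binarized version

cos = w @ b / (np.linalg.norm(w) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos)))                      # roughly 37 degrees for large d
print(np.degrees(np.arccos(np.sqrt(2 / np.pi))))       # limiting value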
The second observation is much less clear to me. Specifically,
a. The authors claim that “A sufficient condition for \delta u to be the same in both cases is L’(x = f(u)) ~ L’(x = g(u))”. However, I’m not sure I see why this is true: in a binarized neural net, u also changes, since the previous layers are also binarized.
b. Related to the previous issue, it is not clear to me whether, in figures 3 and 5, the authors binarized the activations of that specific layer or of all the layers. If it is the former, I would be interested in the latter: it is possible that if all layers are binarized, then the differences between the binarized and non-binarized versions become more amplified.
c. For BNNs, where both the weights and activations are binarized, shouldn’t we compare weights*activations to (binarized weights)*(binarized activations)?
d. Just to make sure: in figure 4, was the permutation of the activations randomized (independently) for each data sample? If not, then C is not proportional to the identity matrix, as claimed in section 5.3.
e. It is not completely clear to me that batch-normalization takes care of the scale constant (if so, then why did XNOR-NET need an additional scale constant?); perhaps this should be further clarified.
The third observation seems less useful to me. Though a random rotation may improve angle preservation in certain cases (as demonstrated in Figure 4), it may hurt classification performance (e.g., distinguishing between 6 and 9 in MNIST). Furthermore, since it uses non-binary operations, it is not clear if this rotation may have some benefits (in terms of resource efficiency) over simply keeping the input layer non-binarized.
To summarize, the first part is interesting and nice, the second part was not clear to me, and the last part does not seem very useful.
%%% After Author's response %%%
a. My mistake. Perhaps it should be clarified in the text that u are the weights. I thought that g(u) is a forward propagation function, and therefore u is the neural input (i.e., pre-activation).
Following the author's response and revisions, I have raised my grade. |
iclr_2018_H1bM1fZCW | Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks. We present a novel gradient normalization (GradNorm) technique which automatically balances the multitask loss function by directly tuning the gradients to equalize task training rates. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning. | Paper summary:
Existing works on multi-task neural networks typically use hand-tuned weights for weighing losses across different tasks. This work proposes a dynamic weight update scheme that updates weights for different task losses during training time by making use of the loss ratios of different tasks. Experiments on two different network indicate that the proposed scheme is better than using hand-tuned weights for multi-task neural networks.
Paper Strengths:
- The proposed technique seems simple yet effective for multi-task learning.
- Experiments on two different network architectures showcasing the generality of the proposed method.
Major Weaknesses:
- The main weakness of this work is the unclear exposition of the proposed technique. The entire technique is explained in the short Section 3.1 with many important details missing. There is no clear basis for the main equations 1 and 2. How does Equation 2 follow from Equation 1? Where is the expectation coming from? What exactly does ‘F’ refer to? Why does ‘F’ appear on only one side of Equations 1 and 2? More importantly, how does the gradient normalization relate to the loss weight update? It is very difficult to decipher these details from the short descriptions given in the paper.
- Also, several details are missing in the toy experiments. What is the task here? What are the input and output distributions, and what is the relation between input and output? Are they just random noise? If so, is the network simply learning to overfit the data, since there is no relationship between input and output?
Minor Weaknesses:
- There are no training time comparisons between the proposed technique and the standard fixed loss learning.
- The authors claim that they operate directly on the gradients inside the network. But, as far as I understood, the authors only update loss weights in this paper. Did the authors also experiment with gradient normalization in the intermediate CNN layers?
- No comparison with state-of-the-art techniques on the experimented tasks and datasets.
Clarifications:
- See the above mentioned issues with the exposition of the technique.
- In the experiments, why are the input images downsampled to 320x320?
- What is meant by ‘unofficial dataset’ (page 4)? Any references here?
- Why is the 'task normalized' test-time loss a good measure for comparison between models in the toy example (Section 4)? The loss ratios depend on the initial loss, which is not important for the final performance of the system.
Suggestions:
- I strongly suggest that the authors clearly explain the proposed technique to bring this work into a publishable state.
- The term ‘GradNorm’ seems not to be defined anywhere in the paper.
Review Summary:
Despite promising results, the proposed technique is quite unclear from the paper. With its poor exposition of the technique, it is difficult to recommend this paper for publication. |
iclr_2018_rkdU7tCaZ | We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively. | The authors provide an improved implementation of the idea of dynamic evaluation, where the update of the parameters used in the last time step proposed in (Mikolov et al. 2010) is replaced with a back-propagation through the last few time steps, and uses RMSprop rather than vanilla SGD. The method is applied to word level and character level language modeling where it yields some gains in perplexity. The algorithm also appears able to perform domain adaptation, in a setting where a character-level language model trained mostly on English manages to quickly adapt to a Spanish test set.
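For concreteness, here is a minimal sketch of the evaluation-time adaptation loop as I understand it (PyTorch-flavored; the segment handling, learning rate and decay-to-prior step are illustrative simplifications, not the authors' exact settings):

import torch

def dynamic_eval(model, loss_fn, segments, lr=2e-4, decay=0.02):
    # segments: an ordered iterator over (input, target) chunks of the test stream
    start = [p.detach().clone() for p in model.parameters()]   # globally trained weights
    opt = torch.optim.RMSprop(model.parameters(), lr=lr)
    losses = []
    for x, y in segments:
        loss = loss_fn(model(x), y)          # evaluate the current segment first...
        losses.append(loss.item())
        opt.zero_grad()
        loss.backward()                      # ...then backprop through these few time steps
        opt.step()                           # and adapt the weights to recent history
        with torch.no_grad():                # optional decay back towards the global weights
            for p, p0 in zip(model.parameters(), start):
                p.add_(decay * (p0 - p))
    return sum(losses) / max(len(losses), 1)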
While the general idea is not novel, the implementation choices matter, and the authors provide one which appears to work well with recently proposed models. The character level experiments on the multiplicative LSTM make the most convincing point, providing a significant improvement over already good results on medium size data sets. Figure 2 also makes a strong case for the method's suitability for applications where domain adaptation is important.
The paper's weakest part is the word-level language modeling section. Given the small size of the data sets considered, the results provided are of limited use, especially since the development set is used to fit the RMSprop hyper-parameters. How sensitive are the final results to this choice? Comparing dynamic evaluation to neural cache models is a good idea, given how both depend on medium-term history: (Grave et al. 2017) provide results on the larger text8 and wiki103, so it would be useful to see results for dynamic evaluation at least on the former.
An indication of the actual additional evaluation time for word-level, char-level and sparse char-level dynamic evaluation would also be welcome.
Pros:
- Good new implementation of an existing idea
- Significant perplexity gains on character level language modeling
- Good at domain adaptation
Cons:
- Memory requirements of the method
- Word-level language modeling experiments need to be run on larger data sets
(Edit: the authors did respond satisfactorily to the original concern about the size of the word-level data set) |
iclr_2018_ryazCMbR- | COMMUNICATION ALGORITHMS VIA DEEP LEARNING
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programing and forward-backward algorithms). We show strong generalizations, i.e., we train at a specific signal to noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting. | In this paper the authors propose to use RNNs and LSTMs for channel coding. But I have the impression the authors completely miss the state of the art in channel coding and the results are completely useless for any current communication system. I believe that machine learning, in general, and deep learning, in particular, might be of useful for physical layer communications. I just do not see why it would be useful for channel coding over the AWGN channel. Let me explain.
If the decoder knows that the encoder is using a convolutional code, why does it need to learn the decoder instead of using the Viterbi or BCJR algorithms, which are known to be optimal for sequences and symbols, respectively? I cannot imagine a scenario in which the decoder does not know the convolutional code that is being used and the encoder sends 120,000 bits of training sequence (useless bits from an information standpoint) for the decoder to learn it. A more important question: do the authors envision that this learning is done every time there is a new connection, or that it is learnt once and for all? If it is learnt every time, that would only be ideal if we were discovering new channel codes every day, which is clearly not the case. If we learnt it once and for all and then incorporated it in the standard, that would only make sense if the GRU structure were computationally better than BCJR or Viterbi. I would be surprised if it is. If, instead of using 2 or 3 memory elements, we used 6-8, would 120,000 bits still be enough, or would we need to exponentially increase the training sequence? So the first result in the paper shows that a structure tailored to convolutional encoding can learn to decode it. Basically, the authors are solving a problem that does not need solving.
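To put the question about larger memories in perspective with a back-of-the-envelope count: a feed-forward convolutional code with memory m has 2^m trellis states, so the per-bit cost of Viterbi/BCJR grows roughly as O(2^m); the m = 2-3 codes used here correspond to 4-8 states, whereas m = 6-8 corresponds to 64-256 states, which is exactly the regime where one would want to know how the required training data and the size of the recurrent decoder scale.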
For the turbo codes the same principle as before applies. In this case the comments of the authors really show that they do not know anything about coding. On page 6, we can read: “Unlike the convolutional codes, the state of the art (message-passing) decoders for turbo codes are not the corresponding MAP decoders, so there is no contradiction in that our neural decoder would beat the message-passing ones”. This is indeed true, so I expected the DNN structure to be significantly better than turbo decoding. But actually, it is not. These results are in Figure 15 on page 6, and the turbo decoder and the DNN architecture perform equivalently. I am sure that the differences in the plots can be explained by the variability in the received sequence and not by the DNN being superior to the turbo decoder. Also in this case the training sequence is measured in megabits for extremely simple components. If the constituent convolutional encoders were larger (6-8 memory bits), we would be talking about significantly longer training sequences and more complicated NNs.
In the third set of experiments the NNs seem to be superior to the standard methods when bursty noise is used, but the authors seem to indicate that the NN is trained with more information about these bursts than the other methods have. My impression is that the authors would be better off focusing on this example and explaining it in a way that is reproducible. This experiment is clearly not well explained and it is hard to know if there is any merit to the proposed NN structure.
Finally, the last result would be the most interesting one, because it would show that we can learn a better channel coding and decoding mechanism than the ones humans have been able to come up with. In this sense, if NNs can solve this problem, that would be impressive and would change how channel coding is done nowadays. If this result were good enough, the authors should focus only on it and forget about the other 3 cases. The issue with this result is that it actually does not make sense. The main problem with the procedure is that the feedback proposal is unrealistic; this is easy to see in Figure 16, in which the neural encoder is proposed. It basically assumes that the received real-valued y_k can be sent (almost) noiselessly to the encoder with minimal delay and almost instantaneously. So the encoder knows the received error and is able to cancel it out. Even if this procedure could be implemented (which it cannot be), the code only uses 50 bits and it needed 10^7 iterations (500 Mb) to converge. The authors do not show how far they are from the Shannon limit, but I can imagine that with a 50-bit code it should be pretty far.
We know that with long enough LDPC codes we can (almost) reach the Shannon limit, so new structures are not needed. If we are focusing on shorter codes (e.g. for latency), then it would be good to understand why we need to learn the channel codes. A comparison to the state of the art would be needed, because the codes used here are clearly not close to the state of the art. For me the authors either do not know about coding or are assuming that we do not, which explains part of the tone of this review.
iclr_2018_HyY0Ff-AZ | Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other. We relate this result to the well-known convex duality of Shannon entropy and the softmax function. Such a result is also known as the Donsker-Varadhan formula. This provides a short proof of the equivalence. We then interpret this duality further, and use ideas of convex analysis to prove a new policy inequality relative to soft Q-learning.
• Policy gradients (V. ) look to maximize the expected reward by improving policies to favor high-reward actions. In general, the target loss function is regularized by the addition of an entropic functional for the policy. This makes policies more diffuse and less likely to yield degenerate results.
A critical step in the theoretical understanding of the field has been a smooth relaxation of the greedy max operation involved in selecting actions, turned into a Boltzmann softmax (O. Nachum & Schuurmans, 2017b). This new context has led to a breakthrough this year (J. Schulman & Abbeel, 2017) with the proof of the equivalence of the two methods of Q-learning and policy gradients. While that result is extremely impressive in its unification, we argue that it is critical to look additionally at the fundamental reasons as to why it occurs. We believe that the convexity of the entropy functional used for policy regularization is at the root of the phenomenon, and that (Lagrangian) duality can be exploited as well, either yielding faster proofs, or further understanding. The contributions of our paper are as follows:
1. We show how convex duality expedites the proof of the equivalence between soft Q-learning and softmax entropic policy gradients: heuristically in the general case, rigorously in the bandit case.
2. We introduce a transportation inequality that relates the expected optimality gap of any policy with its Kullback-Leibler divergence to the optimal policy.
We describe our notations here. Abusing notation heavily by identifying measures with their densities as in dπ(a|s) = π(a|s)da, if we note as either r(s, a) or r(a, s) the reward obtained by taking action a in state s, the expected reward expands as E_{a∼π(·|s)}[r(a, s)] = ∫ r(a, s) π(a|s) da, which is a linear functional of π. Adding Shannon entropic regularization improves numerical stability of the algorithm, and prevents early convergence to degenerate solutions. Noting the regularization strength β, the objective becomes a free energy functional J(π), named by analogy with a similar quantity in statistical mechanics.
Crucially, viewed as a functional of π, J is convex and is the sum of two parts
2 THE GIBBS VARIATIONAL PRINCIPLE FOR POLICY EVALUATION | Summary
*******
The paper provides a collection of existing results in statistics.
Comments
********
Page 1: references to Q-learning and Policy-gradients look awkwardly recent, given that these have been around for several decades.
I don't get what the novelty in this paper is. There is no doubt that all the tools that are detailed here are extremely useful and powerful results in mathematical statistics. But they are all known.
The Gibbs variational principle is folklore, Propositions 1 and 2 are available in all good textbooks on the topic,
and Proposition 4 is nothing but a transportation Lemma.
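For reference, the folklore statement in question is, in its standard Donsker-Varadhan form: for a reference distribution q and a bounded function f, log E_q[e^{f}] = sup_p { E_p[f] - KL(p || q) }, with the supremum attained at the Gibbs distribution dp* ∝ e^{f} dq; the duality between the log-partition function and the relative entropy that the paper exploits is exactly this identity.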
Now, Proposition 3 is about soft-Bellman operators. This perhaps is less standard because contraction property of soft-Bellman operator in infinite norm is more recent than for Bellman operators.
But as mentioned by the authors, this is not new either.
Also I don't really see the point of providing the proofs of these results in the main material, and not for instance in appendix, as there is no novelty either in the proof techniques.
I don't get the sentence "we have restricted so far the proof in the bandit setting": bandits are not even mentioned earlier.
Decision
********
I am sorry, but unless I missed something (which should then be clarified), this seems to be an empty paper: strong reject.
iclr_2018_Sk03Yi10Z | Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (namely a query) in a large conversational repository and return a reply that best matches the query. Generative approaches synthesize new replies. Both ways have certain advantages but suffer from their own disadvantages. We propose a novel ensemble of retrieval-based and generation-based conversation system. The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information. The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output. Experimental results show that such an ensemble system outperforms each single module by a large margin. | Summary:
The paper proposes a new dialog model combining both retrieval-based and generation-based modules. Answers are produced in three phases: a retrieval-based model extracts candidate answers; a generator model, conditioned on retrieved answers, produces an additional candidate; a reranker outputs the best among all candidates.
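In pseudocode, the pipeline under review is roughly the following (the function names are placeholders of mine, not the authors' API):

def reply(query, repository, retriever, generator, reranker, k=2):
    # 1) retrieval module: top-k candidate replies from the conversational repository
    candidates = retriever.top_k(query, repository, k)
    # 2) multi-seq2seq generator: conditioned on the query AND the retrieved candidates
    generated = generator.decode(query, candidates)
    # 3) re-ranker: outputs the best among retrieved and generated candidates
    return reranker.best(query, candidates + [generated])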
The approach is interesting: the proposed ensemble can improve on both the retrieval module and the generation module, since it does not restrict modeling power (e.g. the generator is not forced to be consistent with the candidates). I am not aware of similar approaches for this task. One work that comes to mind regarding the blend of retrieval and generation is Memory Networks (e.g. https://arxiv.org/pdf/1606.03126.pdf and references): given a query, a set of relevant memories is extracted from a KB using an inverted index and the memories are fed into the generator. However, the extracted items in the current work are candidate answers which are used both to feed the generator and to participate in reranking.
The experimental section focuses on the task of building conversational systems. The performance measures used are 1) a human evaluation score with three volunteers and 2) BLEU scores. While these measures are not very satisfying, effective evaluation of such systems is a known difficulty.
The results show that the ensemble outperforms the individual modules, indicating that the multi-seq2seq models have learned to use the new inputs as needed and that the ranker is correlated with the evaluation metrics.
However, the results themselves do not look impressive to me: the subjective evaluation is close to the "borderline" score; in the examples provided, one is good, the other is borderline/bad, and the baseline always provides something very short. Does the LSTM work particularly poorly on this dataset? Given that this is a novel dataset, I don't know what the state-of-the-art should be. Could you provide more insight? Have you considered adding a benchmark dataset (e.g. a QA dataset)?
Specific questions:
1. The paper motivates conditioning on the candidates in two ways. First, that the candidates bring additional information which the decoder can use (e.g. read from the candidates locations, actions, etc.). Second, that the probability of universal replies must decrease due to the additional condition. I think the second argument depends on how the conditioning is performed: if the candidates are simply appended to the input, the model can learn to ignore them.
2. The copy mechanism is a nice touch, encouraging the decoder to use the provided queries. Why not copy from the query too, e.g. with some answers reusing part of the query <"Where are you going?", "I'm going to the park">?
3. How often does the model select the generated answer vs. the extracted answers? In both examples provided the selected answer is the one merging the candidate answers.
Minor issues:
- Section 3.2: using and the state
- Section 3.2: more than one replies
- last sentence on page 3: what are the "following principles"? |
iclr_2018_SkFvV0yC- | In this research, we present a novel learning scheme called network iterative learning for deep neural networks. Different from traditional optimization algorithms that usually optimize directly on a static objective function, we propose in this work to optimize a dynamic objective function in an iterative fashion capable of adapting its function form when being optimized. The optimization is implemented as a series of intermediate neural net functions that is able to dynamically grow into the targeted neural net objective function. This is done via network morphism so that the network knowledge is fully preserved with each network growth. Experimental results demonstrate that the proposed network iterative learning scheme is able to significantly alleviate the degradation problem. Its effectiveness is verified on diverse benchmark datasets. | This submission develops a learning scheme for training deep neural networks with adoption of network morphism (Wei et al., 2016), which optimizes a dynamic objective function in an iterative fashion capable of adapting its function form when being optimized, instead of directly optimizing a static objective function. Overall, the idea looks interesting and the manuscript is well-written. The shown experimental results should be able to validate the effectiveness of the learning scheme to some extent.
It would be more convincing to include the performance evaluation of the learning scheme in some representative applications, since the optimality of the training objective function is not necessarily the same as that of the trained network in the application of interest.
Below are two minor issues:
- In page 2, it is stated that Fig. 2(e) illustrates the idea of the proposed network iterative learning scheme for deep neural networks based on network morphism. However, the idea seems not clear from Fig. 2(e).
- In page 4, “such network iterative learning process” should be “such a network iterative learning process”. |
iclr_2018_rywHCPkAW | NOISY NETWORKS FOR EXPLORATION
We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and ε-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance. | This paper introduces NoisyNets, which are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C. They obtain a substantial performance improvement over the baseline algorithms, without explaining clearly why.
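For reference, here is a minimal sketch of the factorised-Gaussian variant of a noisy linear layer as I understand it from the paper (the initialisation constants below are illustrative, not necessarily the authors' exact choices):

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    # y = (mu_w + sigma_w * eps_w) x + (mu_b + sigma_b * eps_b)
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        scale = in_features ** 0.5
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-1, 1) / scale)
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma0 / scale))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 / scale))
        self.in_features, self.out_features = in_features, out_features

    @staticmethod
    def _f(x):
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        # factorised noise: one noise vector per input unit and one per output unit
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        eps_w = eps_out.unsqueeze(1) * eps_in.unsqueeze(0)   # outer product
        w = self.mu_w + self.sigma_w * eps_w
        b = self.mu_b + self.sigma_b * eps_out
        return F.linear(x, w, b)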
The general concept is nice, the paper is well written and the experiments are convincing, so to me this paper should be accepted, despite a weak analysis.
Below are my comments for the authors.
---------------------------------
General, conceptual comments:
The second paragraph of the intro is rather nice, but it might be updated with recent work about exploration in RL.
Note that more than 30 papers have been submitted to ICLR 2018 mentioning this topic, and many things have happened since this paper was
posted on arxiv (see the "official comments" too).
p2: "our NoisyNet approach requires only one extra parameter per weight" Parameters in a NN are mostly weights and biases, so from this sentence
one may understand that you close-to-double the number of parameters, which is not so few! If this is not what you mean, you should reformulate...
p2: "Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent."
Two ideas seem to be collapsed here: the idea of diminishing noise over an experiment, exploring first and exploiting later, and the idea of
adapting the amount of noise to a specific problem. It should be made clearer whether NoisyNet can address both issues and whether other
algorithms do so too...
In particular, an algorithm may adapt noise along an experiment or from an experiment to the next.
From Fig.3, one can see that having the same initial noise in all environments is not a good idea, so the second mechanism may help much.
BTW, the short section in Appendix B about initialization of noisy networks should be moved into the main text.
p4: the presentation of NoisyNets is not so easy to follow and could be clarified in several respects:
- a picture could be given to better explain the structure of parameters, particularly in the case of factorised (factorized, factored?) Gaussian noise.
- I would start with the paragraph "Considering a linear layer [...] below)" and only after this I would introduce \theta and \xi as a more synthetic notation.
Later in the paper, you then have to state "...are now noted \xi" several times, which I found rather clumsy.
p5: Why do you use option (b) for DQN and Dueling and option (a) for A3C? The reason why (if any) should be made clear from the clearer presentation required above.
By the way, a wild question: if you wanted to use NoisyNets in an actor-critic architecture like DDPG, would you put noise both in the actor and the critic?
The paragraph above Fig3 raises important questions which do not get a satisfactory answer.
Why is it that, in deterministic environments, the network does not converge to a deterministic policy, which should be able to perform better?
Why is it that the adequate level of noise changes depending on the environment? By the way, are we sure that the curves of Fig3 correspond to some progress
in noise tuning (that is, is the level of noise really "better" through time with these curves, or do they show something poorly correlated with the true reasons of success?)?
Finally, I would be glad to see the effect of your technique on algorithms like TRPO and PPO which require a stochastic policy for exploration, and where I believe that the role of the KL divergence bound is mostly to prevent the level of stochasticity from collasping too quickly.
-----------------------------------
Local comments:
The first sentence may make the reader think you only know about 4-5 old works about exploration.
Pp. 1-2 : "the approach differs ... from variational inference. [...] It also differs variational inference..."
If you mean it differs from variational inference in two ways, the paragraph should be reorganized.
p2: "At a high level our algorithm induces a randomised network for exploration, with care exploration
via randomised value functions can be provably-efficient with suitable linear basis (Osband et al., 2014)"
=> I don't understand this sentence at all.
At the top of p3, you may update your list with PPO and ACKTR, which are now "classical" baselines too.
Appendices A1 and A2 are a lot redundant with the main text (some sentences and equations are just copy-pasted), this should be improved.
The best would be to need to reject nothing to the Appendix.
---------------------------------------
Typos, language issues:
p2
the idea ... the optimization process have been => has
p2
Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.
=> you should make a sentence...
p3
the the double-DQN
several times, an equation is cut over two lines, a line finishing with "=", which is inelegant
You should deal better with appendices: Every "Sec. Ax/By/Cz" should be replaced by "Appendix Ax/By/Cz".
Besides, the big table and the list of performance figures should themselves be put in two additional appendices
and you should refer to them as Appendix D or E rather than "the Appendix". |
iclr_2018_HktJec1RZ | TOWARDS NEURAL PHRASE-BASED MACHINE TRANSLATION
In this paper, we present Neural Phrase-based Machine Translation (NPMT).
Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages. | The authors propose a new neural-network based machine translation method that generates the target sentence by generating multiple partial segments in the target sentence from different positions in the source information. The model is based on the SWAN architecture, which was previously proposed, and an additional "local reordering" layer to reshuffle source information to adjust those positions to the target sentence.
Using the SWAN architecture looks more reasonable than the conventional attention mechanism when the ground-truth word alignment is monotone. Also, the local reordering mechanism looks like a sensible way to improve the basic SWAN model and adapt it to the situation of machine translation tasks.
The "window size" of the local reordering layer looks like the "distortion limit" used in traditional phrase-based statistical machine translation methods, and this hyperparameter may impose a similar issue with that of the distortion limit into the proposed model; small window sizes may drop information about long dependency. For example, verbs in German sentences sometimes move to the tail of the sentence and they introduce a dependency between some distant words in the sentence. Since reordering windows restrict the context of each position to a limited number of neighbors, it may not capture distant information enough. I expected that some observations about this point will be unveiled in the paper, but unfortunately, the paper described only a few BLEU scores with different window sizes which have not enough information about it. It is useful for all followers of this paper to provide some observations about this point.
In addition, it could be very meaningful to provide some experimental results on linguistically distant language pairs, such as Japanese and English, or to simply reverse the word order in either source or target sentences (this might work to simulate the case of distant reordering).
The authors discuss some differences between the conventional attention mechanism and the local reordering mechanism, but it is somewhat unclear which ones are the defining differences between those approaches.
A super interesting and mysterious point of the proposed method is that it achieves better BLEU than conventional methods despite using no global language model (Table 1 row 8), while the language model options (Table 1 row 9 and footnote 4) work not so effectively and may even reduce the model accuracy. This phenomenon definitely goes against the intuitions behind most conventional machine translation models. Specifically, it is unclear how the model correctly treats word connections between segments without any global language model. The authors should pay attention to providing a more detailed analysis of this point in the paper.
Eq. (1) is incorrect. According to Fig. 2, the conditional probability in the product operator should be revised to p(a_t | x_{1:t}, a_{1:t-1}), and the independence approximation to remove a_{1:t-1} from the conditions should also be noted in the paper.
Nevertheless, the condition x_{1:t} could not be reduced because the source position is always conditioned by all previous positions through an RNN. |
iclr_2018_SyVVXngRW | We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can learn deep representations shared across multiple tasks while effectively preventing negative transfer that may happen in the feature sharing process. Specifically, we introduce an asymmetric autoencoder term that allows reliable predictors for the easy tasks to have high contribution to the feature learning while suppressing the influences of unreliable predictors for more difficult tasks. This allows the learning of less noisy representations, and enables unreliable predictors to exploit knowledge from the reliable predictors via the shared latent features. Such asymmetric knowledge transfer through shared features is also more scalable and efficient than inter-task asymmetric transfer. We validate our Deep-AMTFL model on multiple benchmark datasets for multitask learning and image classification, on which it significantly outperforms existing symmetric and asymmetric multitask learning models, by effectively preventing negative transfer in deep feature learning. | Summary: The paper proposes a multi-task feature learning framework with a focus on avoiding negative transfer. The objective has two kinds of terms to minimise: (1) The reweighed per-task loss, and (2) Regularisation. The new contribution is an asymmetric reconstruction error in the regularisation term, and one parameter matrix in the regulariser influences the reweighing of the per-task loss.
Strength:
The method has some contribution in dealing with negative transfer. The experimental results are positive.
Weakness:
Several issues in terms of concept, methodology, experiments and analysis.
Details:
1. Overall conceptual issues.
1.1. Unclear motivation re prior work. The proposed approach is motivated by the claim that GO-MTL style models assume symmetric transfer where bad tasks can hurt good tasks. This assertion seems flawed. The point of grouping/overlap in “GO”-MTL is that a “noisy”, “hard”, or “unrelated” task can just take its own latent predictor that is disjoint from the pool of predictors shared by the good/related tasks.
Correspondingly, Fig 2 seems over-contrived. A good GO-MTL solution would assign the noisy task $w_3$ its own latent basis, and let the two good tasks share the other two latent bases.
1.2 Very unclear intuition of the algorithm. In the AMTFL, task asymmetry is driven by the per-task loss. The paper claims this is because transfer must go from easy=>hard to avoid negative transfer. But this logic relies on several questionable assumptions surrounding conflating the distinct issues of difficulty and relatedness: (i) There could be several easy tasks that are totally un-related. One could construct synthetic examples with data that are trivially separable (easy) but require unrelated or orthogonal classifiers. (ii) A task could appear to be “easy" just by severe overfitting, and therefore still be detrimental to transfer despite low loss. (iii) A task could be very "difficult" in the sense of high loss, but it could still be perfectly learned in the sense of finding the ideal "ground-truth” classifier, but for a dataset that is highly non-separable in the provided feature-space. Such a perfectly learned classifier may still be useful to transfer despite high loss. (iv) Analogous to point (i), there could be several “difficult” tasks that are indeed related and should share knowledge. (Since difficult/high loss != badly learned as mentioned before). Overall there are lots of holes in the intuitive justification of the algorithm.
2. Somewhat incremental method.
2.1 It’s a combination of AMTL (Lee 2016) and a vanilla autoencoder.
3. Methodology issues:
3.1 Most of the explanation (Sec 3-3.1) is given re: Matrix B in Eq.(4) (AMTL method’s objective function). However the final proposed model uses matrix A in Eq.(6) for the same purpose of measuring the amount of outgoing transfers from task $t$ to all other tasks. However in the reconstruction loss, they work in very different ways: matrix B is for the reconstruction of model parameters, while matrix A is for the reconstruction of latent features. This is a big change of paradigm without adequate explanation. Why is it still a valid approach?
3.2 Matrix B in the original paper of AMTL (Eq.(1) of Lee et al., 2016) has a constraint $B \geq 0$, should matrix A have the same constraint? If not, why?
3.3 Question re: the |W-WB| type assumption for task relatedness. A bad task could learn an all-zero vector of outgoing relatedness $b^0_t$ so it doesn’t directly influence other tasks in a feed-forward sense. But what about during training? Does training one task’s weights end up influencing other tasks’ weights via backprop? If a bad task is defined in terms of incoming relatedness from good tasks, then tuning the bad task with backprop will eventually also update the good tasks? (presumably detrimentally).
4. Experimental Results not very strong.
4.1 Tab 1: Neural Network NN and MT-NN beat the conventional shallow MTL approaches decisively for AWA and MNIST. The difference between MT-NN and AMTFL is not significant. The performance boost is more likely due to using NNs rather than the proposed MTL module. For School, there is no significant difference between the methods. For ImageNet-Room, AMTL and AMTFL have overlapping errors. Also, a variant of AMTL (AMTL-imbalance) was reported in Lee’2016, but not here, where the number is $40\pm1.71$.
4.2 Tab 2: The “real” experiments are missing state of the art competitors. Besides a deep GO-MTL alternative, which should be a minimum, there are lots of deep MTL state of the art: Misra CVPR’16 , Yang ICLR’17, Long arXiv/NIPS’17 Multilinear Relationship Nets, Ruder arXiv’17 Sluice Nets, etc.
5. Analysis
5.1 The proposed method revolves around the notion of “noisy”/“unrelated”/“difficult” tasks. Although the paper conflates them, it may still be a useful algorithm in practice. But in this case it should devise a much better analysis to provide insight and convince us that this is not a fatal oversimplification: What is the discovered relatedness matrix in some benchmarks? Does the discovered relatedness reflect expert knowledge where this is available? Is there a statistically significant correlation between relatedness and task difficulty in practice? Or between relatedness and degree of benefit from transfer, etc? But this is hard to do cleanly as, even if the results show a correlation between difficulty and relatedness, it may just be because that’s how relatedness is defined in the proposed algorithm.
iclr_2018_S1fHmlbCW | Designing neural networks for continuous-time stochastic processes is challenging, especially when observations are made irregularly. In this article, we analyze neural networks from a frame theoretic perspective to identify the sufficient conditions that enable smoothly recoverable representations of signals in L 2 (R). Moreover, we show that, under certain assumptions, these properties hold even when signals are irregularly observed. As we obtain a family of (convolutional) neural networks that satisfy these conditions, we show that we can optimize our convolution filters while constraining them so that they effectively compute a Discrete Wavelet Transform. Such a neural network can efficiently divide the time-axis of a signal into orthogonal sub-spaces of different temporal scale and localization. We evaluate the resulting neural network on an assortment of synthetic and real-world tasks: parsimonious auto-encoding, video classification, and financial forecasting. | Summary
This article considers neural networks over time-series, defined as a succession of convolutions and fully-connected layers with Leaky ReLU activations. The authors provide relatively general conditions for transformations described by such networks to admit a Lipschitz-continuous inverse. They extend these results to the case where the first layer is a convolution with irregular sampling. Finally, they show that the first convolutional filters can be chosen so as to represent a discrete wavelet transform, and provide some numerical experiments.
Main remarks
While the introduction seemed promising, and I enjoyed the writing style, I was disappointed with this article.
(1) There are many mistakes in the mathematical statements. First, in Theorem 1.1, I do not think that phi_L \circ ... \circ phi_1 \circ F is a non-linear frame, because I do not see why it should be of the form of Definition 1.2 (what would be the functions psi_n?). For the same reason, I also do not understand Theorem 1.2. In Proof 1.4, the line of equalities after « Also with the Plancherel formula » is, in my opinion, not true, because the L^2 norm of a product of functions is not the product of the L^2 norms of the functions. It also seems to me that Theorem 1.3, from [Benedetto, 1992], is incorrect: it is not the limit of t_n/n that must be larger than 2R, but the limit of N_n/n (with N_n the number of t_i's that belong to the interval [-n;n]), and there must probably be a compatibility condition between (t_n)_n and R_1, not only between (t_n)_n and R. In Proposition 1.6, I think that the equality should be a strict inequality. Additionally, I do not say that Proof 2.1 is not true, but the fact that the undersampling by a factor 2 does not prevent the operator from being a frame should be justified.
(2) The authors do not justify, in the introduction, why admitting a continuous inverse should be a crucial criterion of quality for the representation described by a neural network. Additionally, the existence of this continous inverse relies on the fact that the non-linearity that is used is a Leaky ReLU, which looks a bit like "cheating" to me, because the Lipschitz constant of the inverse of a Leaky ReLU, although finite, is large, so it seems to me that cascading several layers with Leaky ReLUs could encode a transformation with strictly positive, but still very poor frame bounds.
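To make this concern concrete with a small computation (the slope value is only illustrative): for LeakyReLU_α(x) = max(x, αx) with 0 < α < 1, the inverse is y ↦ min(y, y/α), whose Lipschitz constant is 1/α; conversely the forward map only guarantees |LeakyReLU_α(x) - LeakyReLU_α(x')| ≥ α |x - x'|, so, ignoring the linear layers, a cascade of L such nonlinearities can already shrink the lower frame bound by a factor α^L; for α = 0.2 and L = 10 this is about 10^{-7}, which is what I mean by poor frame bounds.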
(3) I also do not understand why having "orthogonal outputs", as in Section 2, is really desirable; I think that it should be better justified. Also, there are probably other ways to achieve orthogonality than using wavelets in the first layer, so the fact that wavelets achieve orthogonality does not really justify why using wavelets in the first layer is a good choice, compared to other filters.
(4) I had understood in the introduction that the authors would explain how to define a (good) deep representation for data of the form (x_n)_{n\in\N}, where each x_n would be the value of a time series at instant t_n, with the t_n non-uniformly spaced. But all the representations considered in the article seem to be applicable to functions in L^2(\R) only (like in Theorem 1.4 and Theorem 2.2), and not to sequences (x_n)_{n\in\N}. There is something that I did not get here.
Minor remarks
- Fourth paragraph, third line: "this generalization frames"?
- Last paragraph before "Contributions & Organization": "that that".
- Paragraph about notations: it seems to me that what is defined as l^2(R) is denoted as l^2(Z) after the introduction.
- Last line of this paragraph: R^d_1 should be R^{d_1}, and R^d_2 R^{d_2}.
- I think "smooth" could be replaced by "continuous" (smoothness implies a notion of differentiability).
- Paragraph before Proposition 1.1: \sqrt{s} is not defined, and "is supported" should be "are supported".
- Theorem 1.1: the f_k should be phi_k.
- Definition 1.4: "piece-linear" -> "piecewise linear"?
- Lemma 1.2 and Proof 1.4: there are indices missing to \tilde h and \tilde g.
- Proof 1.4: "and finally" -> "And finally".
- Proof 1.5: I do not understand the grammatical structure of the second sentence.
- Proposition 1.4: the definition of a RNN is the same as definition 1.2 (except for the frame bounds); I do not see why such transformations should model RNNs.
- Paragraph before Proposition 1.5: "in,formation".
- Proposition 1.6: it should be said on which space the frame is injective.
- On page 8, "Lipschitz" is erroneously written (twice).
- Proposition 1.7: "ProjW,l"?
- Definition 2.1: in the "nested" property, I think that the inclusion should be the other way around.
- Before Theorem 2.1, the sentence "Such Riesz basis is proven" is unclear to me.
- Theorem 2.1: "filters convolution filters".
- I think the architecture described in Theorem 2.2 could be clarified; I am not exactly sure where all the arrows start from.
- First line of Subsection 2.3: ". is always" -> "is always".
- First paragraph of Subsection 3.2: "the the".
- Paragraph 3.2: could the previous algorithms developed for this dataset be described in slightly more detail? I also do not understand the meaning of "must solely leverage the temporal structure".
- I think that the section about numerical experiments could be slightly rewritten, so that the architecture used in each experiment is clearer. In Paragraph 3.2 in particular, I did not get why the architecture presented in Figure 6 has far fewer parameters than the one in Figure 5; it would help if the authors clearly precised how many parameters each layer contains.
- Conclusion: "we can to" -> "we can".
- Definition 4.1: p_v(s) -> p_v(t). |
iclr_2018_HyPpD0g0Z | When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y . Following the notation of Gong et al. (2016), we can divide features broadly into the classes of (i) 'core' or 'conditionally invariant' features X ci whose distribution P (X ci |Y ) does not change substantially across domains and (ii) 'style' or 'orthogonal' features X ⊥ whose distribution P (X ⊥ |Y ) can change substantially across domains. These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons. We try to guard against future adversarial domain shifts by ideally just using the 'conditionally invariant' features for classification. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable. We might know, for example, that two images show the same person, with ID referring to the identity of the person. In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image. The method requires only a small fraction of images to have an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable. If two or more samples share the same class and identifier, (Y, ID) = (y, id), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features. Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture. We show links to questions of interpretability, fairness and transfer learning. | The paper discusses ways to guard against adversarial domain shifts with so-called counterfactual regularization. The main idea is that in several datasets there are many instances of images for the same object/person, and that taking this into account by learning a classifier that is invariant to the superficial changes (or “style” features, e.g. hair color, lighting, rotation etc.) can improve the robustness and prediction accuracy. The authors show the benefit of this approach, as opposed to the naive way of just using all images without any grouping, in several toy experimental settings.
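For readers unfamiliar with the setup, the core regularizer amounts, as far as I understand it, to something like the following sketch, where group_ids encodes which training images share the same (Y, ID) pair; the choice of penalizing the logits (rather than some intermediate layer) and the loss weighting are illustrative assumptions of mine:

import torch
import torch.nn.functional as F

def grouped_loss(model, x, y, group_ids, lam=1.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    penalty = 0.0
    for g in group_ids.unique():
        idx = (group_ids == g).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue                      # singleton groups contribute nothing
        grp = logits[idx]
        # penalize deviation from the group mean; up to a constant factor this is
        # a graph-Laplacian penalty on the fully connected graph within each group
        penalty = penalty + ((grp - grp.mean(dim=0, keepdim=True)) ** 2).mean()
    return ce + lam * penalty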
Although I really wanted to like the paper, I have several concerns. First and most importantly, the paper does not cite several important related works. In particular, I have the impression that the paper is focusing on a very similar setting (causally) to the one considered in [Gong et al. 2016] (http://proceedings.mlr.press/v48/gong16.html), as can be seen from Fig. 1. Although not focusing on classification directly, this paper also tries to find a function T(X) such that P(Y|T(X)) is invariant to domain change. Moreover, in that paper, the authors assume that even the distribution of the class can be changed in the different domains (or interventions in this paper).
Besides, there are also other less related papers, e.g. http://proceedings.mlr.press/v28/zhang13d.pdf, https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/10052/0, https://arxiv.org/abs/1707.09724, (or potentially https://arxiv.org/abs/1507.05333 and https://arxiv.org/abs/1707.06422), that I think may be mentioned for a more complete picture. Since there is some related work, it may also be worth comparing with it, or using the same datasets.
I’m also not very happy with the term “counterfactual”. As the authors mention in a footnote, this is not the correct use of the term, since counterfactual means “against the fact”. For example, a counterfactual query is “we gave the patient a drug and the patient died, what would have happened if we hadn’t given the drug?” In this case, these are just different interventions on possibly the same object. I’m not sure that in practical applications one can ensure that the noise variables stay the same, which, as the authors correctly mention, would make it a bit closer to counterfactuals. It may sound pedantic, but I don’t understand why one would use the wrong and confusing terminology for no specific reason, especially because in practice the paper reduces to the simple idea of finding a classifier that doesn’t vary too much across the different images of a single object.
**EDIT**: I was satisfied with the clarifications from the authors and I appreciated the changes that they did with respect to the related work and terminology, so I changed my evaluation from a 5 (marginally below threshold) to a 7 (good paper, accept). |
iclr_2018_SJ71VXZAZ | We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment. | This paper shows that an LSTM language model trained on a large corpus of Amazon product reviews can learn representations that are useful for sentiment analysis.
Given representations from the language model, a logistic regression classifier is trained with supervised data from the task of interest to produce the final model.
The authors evaluated their approach on six sentiment analysis datasets (MR, CR, SUBJ, MPQA, SST, and IMDB), and found that the proposed method is competitive with existing supervised methods.
The results are mixed, and they understandably are better for test datasets from similar domains to the Amazon product reviews dataset used to train the language model.
An interesting finding is that one of the neurons captures sentiment property and can be used to predict sentiment as a single unit.
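To fix ideas, the supervised part of the pipeline is essentially a linear probe on frozen language-model features, along the lines of the sketch below (the L1 penalty and regularization strength are illustrative; only the feature shapes matter here):

from sklearn.linear_model import LogisticRegression

def fit_sentiment_probe(feats_train, y_train, feats_test, y_test, C=0.5):
    # feats_*: final hidden state of the frozen byte-level LM for each review,
    # shape (n_examples, n_hidden); only this linear classifier is trained.
    clf = LogisticRegression(C=C, penalty="l1", solver="liblinear")
    clf.fit(feats_train, y_train)
    return clf.score(feats_test, y_test), clf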
I think the main result of the paper is not surprising and does not show much beyond the fact that we can do pretraining on unlabeled data from a domain similar to the domain of interest.
This semi-supervised approach has been known to improve performance in the low-data regime, and pretraining an expressive neural network model with a lot of unlabeled data has also been shown to help in the past.
There are a few unanswered questions in the paper:
- What are the performance of the sentiment unit on other datasets (e.g., SST, MR, CR)? Is it also competitive with the full model?
- How does this method compare to an approach that first pretrains a language model on the training set of each corpus without using the labels, and then trains a logistic regression while fixing the language model? Is the large amount of unlabeled data important to obtain good performance here? Or is similarity to the corpus of interest more important?
- I assume that the reason to use a byte-level LSTM is that it is cheaper than a word-level LSTM. Is this correct, or was there any performance issue with using words directly?
- More analysis on why the proposed method does well on the binary classification task of SST, but performs poorly on the fine-grained classification would be useful. If the model is capturing sentiment as is claimed by the authors, why does it only capture binary sentiment instead of a spectrum of sentiment level?
The paper is also poorly written. There are many typos (e.g., "This advantage is also its difficulty", "Much previous work on language modeling has evaluated ", "We focus in on the task", and others) so the writing needs to be significantly improved for it to be a conference paper, preferably with some help from a native English speaker. |
iclr_2018_H1I3M7Z0b | We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via ad hoc processing such as model pruning or filter factorization. Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces parameter sharing throughout the learning process. We demonstrate that such a novel weight sampling approach (and induced WSNet) promotes both weights and computation sharing favorably. By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the resulted models are up to 180× smaller and theoretically up to 16× faster than the well-established baselines, without noticeable performance drop. | In this work, the authors propose a technique to compress convolutional and fully-connected layers in a network by tying various weights in the convolutional filters: specifically within a single channel (weight sampling) and across channels (channel sampling). When combined with quantization, the proposed approach allows for large compression ratios with minimal loss in performance on various audio classification tasks. Although the results are interesting, I have a number of concerns about this work, which are listed below:
1. The idea of tying weights in the neural network in order to compress the model is not entirely new. This has been proposed previously in the context of feed-forward networks [1], and convolutional networks [2] where the choice of parameter tying is based on hash functions which ensure a random (but deterministic) mapping from a small set of “true” weights to a larger set of “virtual” weights. I think it would be more fair to compare against the HashedNet technique.
References:
[1] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15), Francis Bach and David Blei (Eds.), Vol. 37. JMLR.org 2285-2294.
[2] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2016. Compressing Convolutional Neural Networks in the Frequency Domain. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, New York, NY, USA, 1475-1484. DOI: https://doi.org/10.1145/2939672.2939839
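To make the comparison suggested in point 1 concrete, the HashedNet-style weight tying can be sketched in a few lines (the layer sizes are illustrative and the modular hash below is only a stand-in for a proper deterministic hash function):

import numpy as np

def hashed_layer_weights(n_in, n_out, n_true, seed=0):
    # A small vector of "true" trainable weights...
    rng = np.random.RandomState(seed)
    true_w = 0.05 * rng.randn(n_true)
    # ...deterministically mapped onto the full "virtual" weight matrix.
    rows, cols = np.meshgrid(np.arange(n_in), np.arange(n_out), indexing="ij")
    idx = (rows * 1000003 + cols * 7919 + seed) % n_true   # stand-in hash
    sign = 1 - 2 * ((rows * 31 + cols * 17) % 2)           # optional sign hash
    return sign * true_w[idx]                               # shape (n_in, n_out)

W = hashed_layer_weights(256, 128, n_true=1024)  # 32768 virtual entries backed by 1024 true weights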
2. Given that the experiments are conducted on tasks where there isn’t a large amount of training data, one concern is that the baseline model used by the authors might be overparameterized. It would be interesting to see how performance varies as a function of number of parameters for these tasks without any “compression”, i.e., just by reducing filter sizes, for example.
3. It seems somewhat surprising that repeating the filter weights across channels as is done in the channel sampling technique yields no loss in accuracy, especially for the deeper convolutional layers. Could this perhaps be a function of the tasks that these models are evaluated on, e.g., the binary “music detection” task? Do the authors have any comments on why this doesn't hurt performance?
4. In citing relevant previous work, the authors should also include student-teacher approaches [1, 2] and distillation [3], and work by Denil et al. [4] on compression.
References:
[1] C. Bucilua, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM, 2006
[2] J. Ba and R. Caruana. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662, 2014.
[3] G. Hinton, O. Vinyals, J. Dean. Distilling the Knowledge in a Neural Network, NIPS 2014 Deep Learning Workshop. 2014.
[4] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
5. Section 3, where the authors describe the proposed techniques is somewhat confusing to read, because of a lack of detailed mathematical explanations of the proposed techniques. This makes the paper harder to understand, in my view. Please re-write these sections in order to clearly express the parameter tying mechanism. In particular, I had the following questions:
- Are weights tied across layers i.e., are the “weight sharing” matrices shared across layers?
- There appears to be a typo in Equation 3: I believe it should be m = m* C.
- Filter augmentation/Weight quantization are applicable to all methods, including the baseline. It would therefore be interesting to examine how they affect the baseline, not just the proposed system.
- Section 3.5, on using the “Integral Image” to speed up computation was not clear to me. In particular, could the authors re-write to explain how the computation is computed efficiently with “two subtraction operations”. Could the authors also clarify the savings achieved by this technique?
6. Results are reported on the various test sets without any discussion of statistical significance. Could the authors describe whether the differences in performance on the various test sets are statistically significant?
7. On the ESC-50, UrbanSound8K, and DCASE tasks, it is a bit odd to compare against previous baselines which use different input features, use different model configurations, etc. It would be much better to use one of the previously published configurations as the baseline, and apply the proposed techniques to that configuration to examine performance. In particular, could the authors also use log-Mel filterbank energies as input features similar to (Piczak, 2015) and (Salomon and Bello, 2015), and apply the proposed techniques starting from those input features? Also, it would be useful when comparing against previously published baselines to indicate total number of independent parameters in the system in addition to accuracy numbers.
8. Minor Typographical Errors: There are a number of minor typographical/grammatical errors in the paper, some of which are listed below:
- Abstract: “Combining weight quantization ...” → “Combining with weight quantization ...”
- Sec 1: “... without sacrificing the loss of accuracy” → “... without sacrificing accuracy”
- Sec 1: “Above experimental results strongly evident the capability of WSNet …” → “Above experimental results strongly evidence the capability of WSNet …”
- Sec 2: “... deep learning based approaches has been recently proven ...” → “... deep learning based approaches have been recently proven ...”
- The work by Aytar et al., 2016 is repeated twice in the references. |
iclr_2018_SJCq_fZ0Z | A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. | This work proposes Sparse Attentive Backtracking, an attention-based approach to incorporating long-range dependencies into RNNs. Through time, a “macrostate” of previous hidden states is accumulated. An attention mechanism is used to select the states within the macro-state most relevant to the current timestep. A weighted combination of these previous states is then added to the hidden state as computed in the ordinary way. This construction allows gradients to flow backwards quickly across longer time scales via the macrostate. The proposed architecture is compared against LSTMs trained with both BPTT and truncated BPTT.
Pros:
- Novel combination of recurrent skip connections with attention.
- The paper is overall written clearly and structured well.
Cons:
- The proposed algorithm is compared against TBPTT but it is unclear the extent to which it is solving the same computational issues TBPTT is designed to solve.
- Design decisions, particularly regarding the attention computation, are not fully explained.
SAB, like TBPTT, allows for more frequent updates to the parameters. However, unlike TBPTT, activations for previous timesteps (even those far in the past) need to be maintained since gradients could flow backwards to them via the macrostate. Thus SAB seems to have higher memory requirements than TBPTT. The empirical results demonstrate that SAB performs slightly better than TBPTT for most tasks in terms of accuracy/CE, but there is no mention of comparing the memory requirements of each. Results demonstrating also whether SAB trains more quickly than the LSTM baselines would be helpful.
The proposed affine form of attention does not appear to actually represent the salience of a microstate at a given time. The second term of the RHS of equation 1 (w_2^T \hat{h}^{(t)}) is canceled out in the subtraction in equation 2, since this term is constant for all i. Thus the attention weights for a given microstate are constant throughout time, which seems undesirable.
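To make the cancellation concrete (writing out my reading of equations 1 and 2, which may not match the paper's exact notation): if the score of microstate i at time t is
a_i^{(t)} = w_1^T \hat{h}_i + w_2^T \hat{h}^{(t)},
and equation 2 subtracts a quantity of the same form computed at the same time step, then for any two microstates i and j
a_i^{(t)} - a_j^{(t)} = w_1^T (\hat{h}_i - \hat{h}_j),
which does not depend on the current hidden state \hat{h}^{(t)} at all.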
The related work discusses skip connections in the context of convolutional nets, but doesn’t mention previous works incorporating skip connections into RNN architectures, such as [1], [2], or [3].
Overall, the combination of recurrent skip connections and attention appears to be novel, but experimental comparisons to other skip connection RNN architectures are missing and thus it is not clear how this work is positioned relative to previous related work.
[1] Lin, Tsungnan, et al. "Learning long-term dependencies in NARX recurrent neural networks." IEEE Transactions on Neural Networks 7.6 (1996): 1329-1338.
[2] Koutnik, Jan, et al. "A clockwork rnn." International Conference on Machine Learning. 2014.
[3] Chang, Shiyu, et al. "Dilated recurrent neural networks." Advances in Neural Information Processing Systems. 2017.
EDIT: I have read the updated paper and the author's rebuttal. I am satisfied with the update to the attention weight formulation. Overall, I still feel that the proposed SAB approach represents a change to the model structure via skip connections. Therefore SAB should also be compared against other approaches that use skip connections, and not just BPTT / TBPTT, which operate on the standard LSTM. Thus to me the experiments are still lacking. However, I think the approach is quite interesting and as such I am revising my rating from 4 to 5. |
iclr_2018_B1l8BtlCb | NON-AUTOREGRESSIVE NEURAL MACHINE TRANSLATION
Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian. | This work proposes a non-autoregressive decoder for the encoder-decoder framework, in which the decision to generate a word does not depend on previously generated words. The key idea is to model the fertility of each source word so that copies of the source words, rather than the previously generated target words, are fed as input to the decoder. To achieve this, the authors investigate several techniques: for inference, sampling the fertility space to generate multiple possible translations; for training, applying knowledge distillation for better training, followed by fine-tuning with REINFORCE. Experiments on English/German and English/Romanian show comparable translation quality with a speedup from non-autoregressive decoding.
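To illustrate the fertility mechanism, here is a toy sketch in plain Python (the function and variable names are mine, not the paper's): each source token is simply repeated according to its sampled fertility, and the result forms the decoder input whose length fixes the target length.

def fertility_copy(source_tokens, fertilities):
    # e.g. ["we", "totally", "accept"] with fertilities [1, 2, 1]
    # -> ["we", "totally", "totally", "accept"]; its length (4) becomes the target length
    decoder_input = []
    for token, fertility in zip(source_tokens, fertilities):
        decoder_input.extend([token] * fertility)
    return decoder_input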
The motivation is clear and proposed methods are very sound. Experiments are carried out very carefully.
I have only minor concerns to this paper:
- The experiments are designed to achieve comparable BLEU with improved latency. I'd like to know whether any BLEU improvement might be possible under similar latency, for instance, by increasing the model size, given that inference is already fast enough.
- I'd also like to see other language pairs with distorted word alignment, e.g., Chinese/English, to further strengthen this work, though it might have little impact given that attention already captures a sort of alignment.
- What is the impact of the external word aligner quality? For instance, it would be possible to introduce a noise in the word alignment results or use smaller data to train a model for word aligner.
- The positional attention is rather unclear and it would be better to revise it. Note that equation 4 is simply mentioning attention computation, not the proposed positional attention. |
iclr_2018_S1jBcueAb | Published as a conference paper at ICLR 2018 DEPTHWISE SEPARABLE CONVOLUTIONS FOR NEURAL MACHINE TRANSLATION
Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency. They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results. In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new "super-separable" convolution operation that further reduces the number of parameters and computational cost of the models. | The paper proposes to use depthwise separable convolution layers in a fully convolutional neural machine translation model. The authors also introduce a new "super-separable" convolution layer, which further reduces the computational cost of depthwise separable convolutions. Results are presented on the WMT English to German translation task, where the method is shown to perform second-best behind the Transformer model.
The paper's greatest strength is in my opinion the quality of its exposition of the proposed method. The relationship between spatial convolutions, pointwise convolutions, depthwise convolutions, depthwise separable convolutions, grouped convolutions, and super-separable convolutions is explained very clearly, and the authors properly introduce each model component.
Perhaps as a consequence of this, the experimental section feels squeezed in comparison. Quantitative results are presented in two fairly dense tables (especially Table 2) which, although parsable after reading the paper carefully, could benefit from a little bit more information on how they should be read. The conclusions that are drawn in the text are stated without citing metrics or architectural configurations, leaving it up to the reader to connect the conclusions to the table contents.
Overall, I feel that the results presented make a compelling case both for the effectiveness of depthwise separable convolutions and larger convolution windows, as well as the overall performance achievable by such an architecture. I think the paper constitutes a good contribution, and adjustments to the experimental section could make it a great contribution. |
iclr_2018_Hyp3i2xRb | Plain recurrent networks greatly suffer from the vanishing gradient problem while Gated Neural Networks (GNNs) such as Long-short Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%-67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset. | Summary:
The authors present a simple variation of vanilla recurrent neural networks, which use ReLU hiddens and a fixed identity matrix that is added to the hidden-to-hidden weight matrix. This identity connection acts as a “surrogate memory” component, preserving hidden activations over time steps.
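For concreteness, a minimal sketch of the recurrence as I understand it (PyTorch; the bias placement and the initialization scale are my assumptions, not taken from the paper):

import torch
import torch.nn as nn

class RINCell(nn.Module):
    # plain RNN cell with ReLU units whose effective recurrent matrix is (W + I)
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.in_proj = nn.Linear(input_size, hidden_size)            # U x_t + b
        self.W = nn.Parameter(0.01 * torch.randn(hidden_size, hidden_size))
        self.register_buffer("I", torch.eye(hidden_size))            # fixed identity

    def forward(self, x, h):
        # h_t = ReLU((W + I) h_{t-1} + U x_t + b)
        return torch.relu(h @ (self.W + self.I).t() + self.in_proj(x))

With W initialized close to zero, the recurrence starts out approximately as an identity map, which is presumably what makes the ReLU network trainable over many time steps.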
The experiments demonstrate that this architecture reliably solves the addition task for up to 400 input frames. It also achieves a very good performance on sequential and permuted MNIST and achieves SOTA performance on bAbI.
The authors observe that the proposed recurrent identity network (RIN) is relatively robust to hyperparameter choices. After Le et al. (2015), the paper presents another convincing case for the application of ReLUs in RNNs.
Review:
I very much like the paper. The motivation and architecture is presented very clearly and I am happy to also see explorations of simpler recurrent architectures in parallel to research of gated architectures!
I have a few comments and questions:
1) Clarification: In Section 2.2, do you really mean bit-wise multiplication or element-wise? If bit-wise, can you elaborate why? I might have missed something.
2) Why does the learning curve of the IRNN stop around epoch 270 in Figure 2c? Also some curves in the appendix stop abruptly without visible explosions. Were these experiments run until completion? If so, would it be possible to plot the complete curves?
3) I think for a fair comparison with LSTMs and IRNNs a limited hyperparameter search should be performed separately on all three architectures at least for the addition task. Optimal hyperparameters are usually model-specific. Admittedly, the authors mention that they do not intend to make claims about superior performance to LSTMs, however the competitive performance of small RINs is mentioned a couple of times in the manuscript.
Le et al. (2015) for instance perform a coarse grid search for each model.
4) I wouldn't say that ResNets are Gated Neural Networks, as the branches are just summed up. There is no (multiplicative) gating as in Highway Networks.
5) I think what enables the training of very deep networks or LSTMs on long sequences is the presence of a (close-to-)identity component in forward/backward propagation, not the gating. The use of ReLU activations in IRNNs (with identity initialization of the hidden-to-hidden weights) and RINs (effectively initialized with identity plus some noise) makes the recurrence more linear than with squashing activation functions.
6) Regarding the absence of gating in RINs: What is your intuition on how the model would perform in tasks for which conditional forgetting is useful. Consider for example a task with long sequences, outputs at every time step and hidden activations not necessarily being encouraged to estimate last step hidden activations. Would RINs readily learn to reset parts of the hidden state?
7) Henaff et al. (2016) might be related, as they are also looking into the addition task with long sequences.
Overall, the presented idea is novel to the best of my knowledge and the manuscript is well-written. I would recommend it for acceptance, but would like to see the above points addressed (especially 1-3 and some comments on 4-6). After a revision I would consider to increase the score.
References:
Henaff, Mikael, Arthur Szlam, and Yann LeCun. "Recurrent orthogonal networks and long-memory tasks." In International Conference on Machine Learning, pp. 2034-2042. 2016.
Le, Quoc V., Navdeep Jaitly, and Geoffrey E. Hinton. "A simple way to initialize recurrent networks of rectified linear units." arXiv preprint arXiv:1504.00941 (2015). |
iclr_2018_SyhRVm-Rb | Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to accomplish, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment 1 . Our method can also learn to accomplish tasks with sparse rewards, which pose significant challenges for traditional RL methods. | Summary:
This paper proposes to use a GAN to generate goals to implement a form of curriculum learning. A goal is defined as a subset of the state space. The authors claim that this model can discover all "goals" in the environment and their 'difficulty', which can be measured by the success rate / reward of the policy. Hence the goal network could learn a form of curriculum, where a goal is 'good' if it is a state that the policy can reach after a (small) improvement of the current policy.
Training the goal GAN is done via labels, which are states together with the achieved reward by the policy that is being learned.
The benchmark problems are whether the GAN generates goals that allow the agent to reach the end of a U-maze, and a point-mass task.
Authors compare GAN goal generation vs uniformly choosing a goal and 2 other methods.
My overall impression is that this work addresses an interesting question, but the experimental setup / results are not clearly worked out. More broadly, the paper does not address how one can combine RL and training a goal GAN in a stable way.
Pro:
- Developing hierarchical learning methods to improve the sample complexity of RL is an important problem.
- The paper shows that the U-maze can be 'solved' using a variety of methods that generate goals in a non-uniform way.
Con:
- It is not clear to me how the asymmetric self-play and SAGG-RIAC are implemented and why they are natural baselines.
- It is not clear to me what the 'goals' are in the point mass experiment. This entire experiment should be explained much more clearly (+image).
- It is not clear how this method compares qualitatively vs baselines (differences in goals etc).
- This method doesn't seem to always outperform the asymm-selfplay baseline. The text mentions that baseline is less efficient, but this doesn't make the graph very interpretable.
- The curriculum in the maze-case consists of regions that just progress along the maze, and hence is a 1-dimensional space. Hence using a manually defined set of goals should work quite well. It would be better to include such a baseline as well.
- The experimental maze-setting and point-mass have a simple state / goal structure. How can this method generalize to harder problems?
-- The entire method is quite complicated (e.g. training GANs can be highly unstable). How do we stabilize / balance training the GAN vs the RL problem?
-- I don't see how this method could generalize to problems where the goals / subregions of space do not have a simple distribution as in the maze problem, e.g. if there are multiple ways of navigating a maze towards some final goal state. In that case, to discover a good solution, the generated goals should focus on one alternative and hence the GAN should have a unimodal distribution. How do you force the GAN in a principled way to focus on one goal in this case? How could you combine RL and training the GAN stably in that case?
Detailed:
- (2) is a bit strange: shouldn't the indicator say: 1( \exists t: s_t \in S^g )? Surely not all states in the rollout (s_0 ... s_t) are in the goal subspace: the indicator does not factorize over the union. Same for other formulas that use \union.
- Are goals overlapping or non-overlapping subsets of the state space?
Definition around (1) basically says it's non-overlapping, yet the goal GAN seems to predict goals in a 2d space, hence the predicted goals are overlapping?
- What are the goals that the non-uniform baselines predict? Does the GAN produce better goals?
- Generating goal labels is
- Paper should discuss literature on hierarchical methods that use goals learned from data and via variational methods:
1. Strategic Attentive Writer (STRAW), V. Mnih et al, NIPS 2016
2. Generating Long-term Trajectories Using Deep Hierarchical Networks. S. Zheng et al, NIPS 2016
iclr_2018_H1-oTz-Cb | It is commonly agreed that the use of relevant invariances as a good statistical bias is important in machine-learning. However, most approaches that explicitely incorporate invariances into a model architecture only make use of very simple transformations, such as translations and rotations. Hence, there is a need for methods to model and extract richer transformations that capture much higherlevel invariances. To that end, we introduce a tool allowing to parametrize the set of filters of a trained convolutional neural network with the latent space of a generative adversarial network. We then show that the method can capture highly non-linear invariances of the data by visualizing their effect in the data space. | Recent work on incorporating prior knowledge about invariances into neural networks suggests that the feature dimension in a stack of feature maps has some kind of group or manifold structure, similar to how the spatial axes form a plane. This paper proposes a method to uncover this structure from the filters of a trained ConvNet. The method uses an InfoGAN to learn the distribution of filters. By varying the latent variables of the GAN, one can traverse the manifold of filters. The effect of moving over the manifold can be visualized by optimizing an input image to produce the same activation profile when using a perturbed synthesized filter as when using an unperturbed synthesized filter.
The idea of empirically studying the manifold / topological / group structure in the space of filters is interesting. A priori, using a GAN to model a relatively small number of filters seems problematic due to overfitting, but the authors show that their InfoGAN approach seems to work well.
My main concerns are:
Controls
To generate the visualizations, two coordinates in the latent space are varied, and for each variation, a figure is produced. To figure out if the GAN is adding anything, it would be nice to see what would happen if you varied individual coordinates in the filter space ("x-space" of the GAN), or varied the magnitude of filters or filter planes. Since the visualizations are as much a function of the previous layers as they are a function of the filters in layer l which are modelled by the GAN, I would expect to see similar plots for these baselines.
Lack of new Insights
The visualizations produced in this paper are interesting to look at, but it is not clear what they tell us, other than "something non-trivial is going on in these networks". In fact, it is not even clear that the transformations being visualized are indeed non-linear in pixel space (note that even a 2D diffeomorphism, which is a non-linear map on R^2, is a linear operator on the space of *functions* on R^2, i.e. on the space of images). In any case, no attempt is made to analyze the results, or provide new insights into the computations performed by a trained ConvNet.
Interpretation
This is a minor point, but I would not say (as the paper does) that the method captures the invariances learned by the model, but rather that it aims to show the variability captured by the model. A ReLU net is only invariant to changes that are mapped to zero by the ReLU, or that end up in the kernel of one of the linear layers. The presented method does not consider this and hence does not analyze invariances.
Minor issues:
- In the last equation on page 2, the right-hand side is missing a "min max". |
iclr_2018_r1vuQG-CW | Published as a conference paper at ICLR 2018 HEXACONV
The effectiveness of convolutional neural networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other sources of invariance, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pretrained models. | The authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper, I think it would help readers understand better the advantage of the contribution.
----
This paper is based on the theory of group equivariant CNNs (G-CNNs), proposed by Cohen and Welling ICML'16.
Regular convolutions are translation-equivariant, meaning that if an image is translated, its convolution by any filter is also translated. They are however not rotation-invariant for example. G-CNN introduces G-convolutions, which are equivariant to a given transformation group G.
This paper proposes an efficient implementation of G-convolutions for 6-fold rotations (rotations by multiples of 60 degrees), using a hexagonal lattice. The approach is evaluated on CIFAR-10 and AID, a dataset for aerial scene classification. On AID, the approach outperforms G-convolutions implemented on a square lattice, which allow only 4-fold rotations, by a short margin. On CIFAR-10, the difference does not seem significant (according to Tables 1 and 2).
I guess this can be explained by the fact that rotation equivariance makes sense for aerial images, where the scene is mostly fronto-parallel, but less for CIFAR (especially in the upper layers), which exhibits 3D objects.
I like the general approach of explicitly building desired equivariances into convolutional networks. Using a hexagonal lattice is elegant, even if it is not new in computer vision (as noted in the paper). However, as the transformation group is limited to rotations, this is interesting in practice mostly for fronto-parallel scenes, as the experiments seem to show. It is not clear how the method can be extended to groups other than 2D rotations.
Moreover, I feel like the paper sometimes tries to mask the fact that the proposed method is limited to rotations. It is admittedly clearly stated in the abstract and introduction, but much less in the rest of the paper.
The second paragraph of Section 5.1 is difficult to keep in a paper. It says that "From a qualitative inspection of these hexagonal interpolations we conclude that no information is lost during the sampling procedure." "No information is lost" is a strong statement from a qualitative inspection, especially of a hexagonal image. This statement should probably be removed. One way to evaluate the information lost could be to iterate interpolation between hexagonal and squared lattices to see if the image starts degrading at some point. |
iclr_2018_HyDAQl-AW | In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed amount of time, or (ii) an indefinite period where the time limit is only used during training. In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases. In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input. In the second case, the time limits are not part of the environment and are only used to facilitate learning. We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode. To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains. Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms. | Summary: This paper explores how to handle two practical issues in reinforcement learning. The first is including time remaining in the state, for domains where episodes are cut-off before a terminal state is reached in the usual way. The second idea is to allow bootstrapping at episode boundaries, but cutting off episodes to facilitate exploration. The ideas are illustrated through several well-worked micro-world experiments.
Overall the paper is well written and polished. They slowly worked through a simple set of ideas trying to convey a better understanding to the reader, with a focus on performance of RL in practice.
My main issue with the paper is that these two topics are actually not new and are well covered by the existing RL formalisms. That is not to say that an empirical exploration of the practical implications is not of value, but that the paper would be much stronger if it was better positioned in the literature that exists.
The first idea of the paper is to include the time remaining in the state. This is of course always possible in the MDP formalism. If it were not done, as in your examples, the state would not be Markov and thus it would not be an MDP at all. In addition, the technical term for this is finite-horizon MDPs (in many cases the horizon is taken to be a constant, H). It is not surprising that algorithms that take this into account do better, as your examples and experiments illustrate. The paper should make this connection to the literature more clear and discuss what is missing in our existing understanding of this case, to motivate your work. See Dynamic Programming and Optimal Control and references to it.
The second idea is that episodes may terminate due to a time-out, but we should include the discounted value of the time-out termination state in the return. I could not tell from the text, but I assume the next transition to the start state is fully discounted to zero; otherwise the value function would link the values of S_T and the next state, which I assume you do not want. The impact of this choice is that S_T is no longer a termination state, and there is a direct, fully discounted transition to the start states. This is, in my view, how implementations of episodic tasks with a timeout should be done, and it is implemented this way in classic RL frameworks (e.g., RL-Glue). If we treat the value of S_T as zero or consider gamma on the transition into the time-out state as zero, then in cost-to-goal problems the agent will learn that these states are good and will seek them out, leading to suboptimal behavior. The literature might not be totally clear about this, but it is very well discussed in a recent ICML paper: White 2017 [1]
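To spell out the distinction in value-based terms (my notation, not the paper's): for a transition (s, a, r, s') where the episode was cut purely by the time limit, the update target should still bootstrap,
y = r + gamma * max_a' Q(s', a'),
whereas only a genuine environmental termination should drop the bootstrap term and use y = r. As I read it, this is exactly what the proposed partial-episode bootstrapping amounts to.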
Another way to pose and think about this problem is using the off-policy learning setting---perhaps best described in the Horde paper [2]. In this setting the behavior policy can have terminations and episodes in the classic sense (perhaps due to time outs). However, the agent's continuation function (gamma : S -> [0,1]) can specify weightings on states representing complex terminations (or not), completely independent of the behavior policy or actual state transition dynamics of the underlying MDP. To clearly establish your contributions, the authors must do a better job of relating their work to [1] and [2].
[1] White. Unifying task specification in reinforcement learning. Martha White. International Conference on Machine Learning (ICML), 2017.
[2] Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., & Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems: 2, 761--768.
Small comments that did not impact paper scoring:
1) eq 1 we usually don't use the superscript \gamma
2) eq 2: usually we talk about truncated n-step returns that include the value of the last state to correct the return. You should mention this
3) Last paragraph of page 2 should not be in the intro
4) in section 2.2 why is the behavior policy random instead of epsilon greedy?
5) It would be useful to discuss the average reward setting and how it relates to your work.
6) Fig 5. What does good performance look like in this domain? I have no reference point to understand these graphs
7) page 9, second par outlines alternative approaches but they are not presented as such. Confusing |
iclr_2018_Hy7EPh10W | The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection. | This paper proposed a GAN to unify classification and novelty detection. The technical difficulty is acceptable, but there are several issues. First of all, the motivation is clearly given in the 1st paragraph of the introduction: "In fact for such novel input the algorithm will produce erroneous output and classify it as one of the classes that were available to it during training. Ideally, we would like that the classifier, in addition to its generalization ability, be able to detect novel inputs, or in other words, we would like the classifier to say, 'I don't know.'" There is a logical gap between the ability of saying 'I don't know' and the necessity of novelty detection. Moreover, there are many papers known as "learning with abstention" and/or "learning with rejection" from NIPS, ICML, COLT, etc. (some are coauthored by Dr. Peter Bartlett or Dr. Corinna Cortes), but the current paper didn't cite those that are particularly designed to let the classifier be able to say 'I don't know'. All those abstention/rejection papers have solid theoretical guarantees.
The 3rd issue is that the novelty of the novelty-detection part of the proposed GAN seems quite incremental. As mentioned in the paper, there are already a few GANs, such that "If the 'real' data consists of K classes, then the output of the discriminator is K+1 class probabilities where K probabilities corresponds to K known classes, and the K+1 probability correspond to the 'fake' class." On the other hand, the idea in this paper is that "At test time, when the discriminator classifies a real example to the K+1th class, i.e., class which represented 'fake examples' during training, this the example is most likely a novel example and not from one of the K nominal classes." This is just a replacement of concepts, where the original one is the fake class in training and the new one is the novel class in test. Furthermore, the 4th issue also comes from this replacement. The proposed method makes a very strong distributional assumption, namely that the class-conditional density of the union of all novel classes at test time is very similar to the class-conditional density of the fake class at training time, where the choice of similarity depends on the divergence measure used for training the GAN. This assumption is too strong for the application of novelty detection, since novel data can be anything unseen during training.
This inconsistency leads to the last issue. As again mentioned in the 1st paragraph, "there are no requirements whatsoever on how the classifier should behave for new types of input that differ substantially from the data that are available during training". This evidences that novel data can be anything unseen during training (in my words). However, in all types of GANs the ultimate goal of the generator is to fool the discriminator by generating fake data as similar to the real data as possible. Therefore, it is conceptually and theoretically strange to apply a GAN to novelty detection, which is the major contribution of this paper.
Last but not least, there is an issue not quite directly related to this paper. Novelty detection sounds more like data mining than machine learning. It is fully unsupervised, without a clearly defined goal, which makes it sound like an art rather than a science. The experimental performance is promising indeed, but a lot of domain knowledge is involved in the experiment design. I am not sure these are really novelty detection tasks, because real novelty detection tasks should be fully exploratory.
BTW, there is a paper in IPMI 2017 entitled "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery", which is very closely related to the current paper but the authors seem not aware of it. |
iclr_2018_SJZ2Mf-0- | Real-world Question Answering (QA) tasks often consist of thousands of words that represent many facts and entities. Existing models based on LSTMs require a large number of parameters to support external memory and do not generalize efficiently for long sequence inputs. Memory networks address these limitations by storing information to an external memory module but must examine all inputs in the memory. Hence, for longer sequence inputs, the intermediate memory components proportionally scale in size, resulting in poor inference times. We present Adaptive Memory Networks (AMN) that process input-question pairs to dynamically construct a network architecture optimized for lower inference times. AMN creates multiple memory banks that store entities from the input story to answer the questions. The model learns to reason important entities from the input text based on the question and concentrates these entities within a single memory bank. At inference, one or few banks are used, creating a tradeoff between accuracy and performance. AMN is enabled by first, a novel bank controller that makes discrete decisions with high accuracy and second, the capabilities of a dynamic framework (such as PyTorch) that allow for dynamic network sizing and efficient variable mini-batching. In our results, we demonstrate that our model learns to construct a varying number of memory banks based on task complexity and achieves faster inference times for standard bAbI tasks, and modified bAbI tasks. We solve all bAbI tasks with an average of 48% fewer entities on tasks containing excess, unrelated information. | This paper offers a very promising approach to the processing of the type of sequences we find in dialogues, somewhat in between RNNs which have problem modeling memory, and memory networks whose explicit modeling of the memory is too rigid.
To achieve that, the starting point seems to be a strength GRU that has the ability to dynamically add memory banks to the original dialogue and question sentence representations, thanks to the use of imperative DNN programming. The use of the reparametrization trick to enable global differentiability is reminiscent of an ICLR'17 paper "Learning graphical state transitions". Compared to the latter, the current paper seems to offer a more tractable architecture and optimization problem that does not require strong supervision and should be much faster to train.
Unfortunately, this is the best understanding I got from this paper, as it seems to be in such a preliminary stage that the exact operations of the SGRU are not parsable. Maybe the authors have been taken off guard by the new review process where one can no longer improve the manuscript during this 2017 review (something that had enabled a few papers to pass the 2016 review).
After a nice introduction, everything seems to fall apart in section 4, as if the authors did not have time to finish their write-up.
- N is both the number of sentences and the number of words per sentence, which does not make sense.
- i iterates over both the sentences and the words.
The critical SGRU algorithm is impossible to parse
- The hidden vector sigma, which is usually denoted h in GRU notation, is not even defined
- The critical reset gate operation in Eq.(6) is not even explained, and modified in a way I do not understand compared to standard GRU.
- What is t? From algorithm 1 in Appendix A, it seems to correspond to looping over both sentences and words.
- The most novel and critical operation of this SGRU, to process the entities of the memory bank, is not even explained. All we get at the end of section 4.2 is " After these steps are finished, all entities are passed through the strength modified GRU (4.1) to recompute question relevance."
The algorithm in Appendix A does not help much. With PyTorch being so readable, I wish some source code had been made available.
Experiments reporting also contains unacceptable omissions and errors:
- The definition of 'failed task', essential for understanding, is not stated (more than 5% error)
- Reported numbers of failed tasks are erroneous: it should be 1 for DMN+ and 3 for MemN2N.
The reviewers corrections, while significant, do not seem enough to clarify the core of the paper.
Page 3: dynanet -> dynet |
iclr_2018_SyF7Erp6W | Machine learning algorithms for controlling devices will need to learn quickly, with few trials. Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories. Illustrations of this approach are presented on a cyberphysical system: the slot car game, and also on Atari 2600 games. | The authors argue that many machine learning systems need a large amount of data and long training times. To mend those shortcomings, their proposed algorithm takes the novel approach of combining mathematical category theory and continental philosophy. Instead of computation units, the concept of entities and a 'me' is introduced to solve reinforcement learning tasks on a cyber-physical system as well as the Atari environment. This allows for an AI that is understandable to humans at every step of the computation, in comparison to the 'black box' learning of a neural network.
Positives:
• Novel approach towards more explainable AI and shorter training times / less data
• Solid mathematical description in part 3.3
• Setup well explained
Negatives:
• Use of colloquial language (the first sentence of the abstract alone contains the word 'very' twice)
• Some paragraphs are strangely structured
• Incoherent abstract
• Only brief and shallow motivation given (No evidence to support the claim)
• Brief and therefore confusing mention of methods
• No mention of results
• Very short in general
• Many grammatical errors (wrong tense use, misuse of a/an,... )
• Related Work is either Background or an explanation of the two test systems. While related approaches in those systems are also provided, the section is mainly used to introduce the test beds
• No direct comparison between the algorithm and existing methods is given. It is stated that some extra measurements from other sources such as sensors are not used and that it learns to rank with a human in under a minute. However, many questions remain unanswered: How good is this? How long do other systems need? Is this a valid point to raise? What score functions do other papers use?
• 2.2: Title choice could have been more descriptive of the subsection. 'Video Games' indicates a broader analysis of RL in any game but the section mainly restricts itself to the Atari Environment
• While many methods are mentioned they are not set in context but only enumerated. Many concepts are only named without explanation or how they fit into the picture the authors are trying to paint.
• A clear statement of the hypothesis and reason/motivation behind pursuing this approach is missing. Information is indirectly given in the third section where the point is raised that the approach was chosen in contrast to 'black box NNs'. This seems to be a very crucial point that could have been highlighted more. The achieved results are by no means comparable to the NN approaches, but they are faster and explainable to a human.
• Dreyfus' criticism of AI is presented as the key initiator for this idea. Ideas by other authors that utilise this criticism as their foundation are conceptually similar; they could therefore have been mentioned in the related work section.
• The paper fails to mention the current movement in the AI community to make AI more explainable. One of their two key advantages seems to be that they develop a more intuitive explainable system. However, this movement is completely ignored and not given a single mention. The paper, therefore, does not set their approach in context and is not able to acknowledge related work in this area.
• The section about continental based philosophy is rather confusing
• Instead of explaining the philosophy, analytical philosophy is described in detail and continental philosophy is only described as not following analytical patterns. A clear introduction to this topic is missing.
• When described, it is stated that it's a mix of different German and French doctrines that are name-dropped but not explained, leaving the reader confused.
• Result section not well structured and results lack credibility:
• Long sections in the result section describe the actual algorithm. This should have been discussed before the results.
• Results for slot car are not convincing:
• Table 1 only shows the first, the last, and the best lap (and in most of them the human is better)
• Not even an average measure is given, only samples. This is very suspicious.
• Why the comparison with DQN and only DQN? How was this comparison initialised? Which parameters were used? Neither is the term DQN resolved as Deep Q-Network nor is any explanation given. There are many methods/method classes performing RL on the Atari Environment. The mention of only one comparison leaves reasonable doubt about the claim that the system learns faster.
SUMMARY: Reject. Even though the idea presented is a novel contribution and has potential, the paper itself is highly unstructured and confusing and lacks a proper grammar check. No clear hypothesis is formed until section 3. The concept of explainable AI, which could have been a good motivation, is not mentioned at all. Key concepts such as continental philosophy are not explained in a coherent way. The results are presented in a questionable way. As the idea is promising, it is recommended that the authors restructure the paper and conduct more experiments in order for it to be accepted.
iclr_2018_SkT5Yg-RZ | Published as a conference paper at ICLR 2018 INTRINSIC MOTIVATION AND AUTOMATIC CURRICULA VIA ASYMMETRIC SELF-PLAY
We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset. Alice will "propose" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward. | The paper presents a method for learning a curriculum for reinforcement learning tasks.The approach revolves around splitting the personality of the agent into two parts. The first personality learns to generate goals for other personality for which the second agent is just barely capable--much in the same way a teacher always pushes just past the frontier of a student’s ability. The second personality attempts to achieve the objectives set by the first as well as achieve the original RL task.
The novelty of the proposed method is the introduction of a teacher that learns to generate a curriculum for the agent. The formulation is simple and elegant, as the teacher is incentivised to widen the gap with Bob but pays a price for the time it takes, which balances the adversarial behavior.
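For reference, if I read the reward structure correctly (this is my paraphrase, with t_A and t_B the times Alice and Bob take and gamma a scaling constant), the rewards are
R_A = gamma * max(0, t_B - t_A) and R_B = -gamma * t_B,
so Alice is paid for proposing tasks that take Bob longer than they took her, while Bob is simply pushed to finish as quickly as possible.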
Prior and concurrent work on curriculum learning and intrinsic motivation in RL relies on GANs (e.g., automatic goal generation by Held et al.), adversarial agents (e.g., RARL by Pinto et al.), or algorithmic/heuristic methods (e.g., reverse curriculum by Florensa et al. and HER by Andrychowicz et al.). In the context of this work, the contribution is the insight that an agent can learn to explore the immediately reachable space that is just within its capabilities. HER and goal generation share the core insight of training to reach goals. However, HER does not generate goals beyond the reachable set; it instead relies on training on already-reached states, and does not explicitly consider the capabilities of the agent in reaching a goal. Goal generation, while learning to sample from the achievable frontier, does not ensure the goal is reachable and may not be as stable to train.
As noted by the authors, the above-mentioned prior work is closely related to the proposed approach. However, the paper only briefly mentions this corpus of work. A more thorough comparison with these techniques should be provided, even if they are somewhat concurrent with the proposed method. The authors should consider additional experiments in the same domains as this prior work to contrast performance.
Questions:
Do the plots track the combined iterations during which both Alice and Bob are in control of the environment, or just those for Bob?
iclr_2018_Hkn7CBaTW | LEARNING HOW TO EXPLAIN NEURAL NETWORKS: PATTERNNET AND PATTERNATTRIBUTION
DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks. | summary of article:
This paper organizes existing methods for understanding and explaining deep neural networks into three categories based on what they reveal about a network: functions, signals, or attribution. “The function extracts the signal from the data by removing the distractor. The attribution of output values to input dimensions shows how much an individual component of the signal contributes to the output…” (p. 5). The authors propose a novel quality criterion for signal estimators, inspired by the analysis of linear models. They also propose two new explanatory methods, PatternNet (for signal estimation) and PatternAttribution (for relevance attribution), based on optimizing their new quality criterion. They present quantitative and qualitative analyses comparing PatternNet and PatternAttribution to several existing explanation methods on VGG-19.
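To frame the comments below, my reading of the linear-model analysis (in my own notation, so details may differ from the paper): the input is decomposed as x = s + d, with output y = w^T x = w^T s and w^T d = 0. A signal estimator S is scored by the criterion
rho(S) = 1 - max_v corr(v^T (x - S(x)), y),
i.e. a good estimator leaves a residual (the distractor) that carries no information about the output. For a linear estimator S_a(x) = a y this is maximized by the pattern
a = cov(x, y) / sigma_y^2,
which is, as far as I understand, what PatternNet and PatternAttribution compute per neuron.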
* Quality: The claims of the paper are well supported by quantitative results and qualitative visualizations.
* Clarity: Overall the paper is clear and well organized. There are a few points that could benefit from clarification.
* Originality: The paper puts forth an original framing of the problem of explaining deep neural networks. Related work is appropriately cited and compared. The authors' quality criterion for signal estimators allows them to do a quantitative analysis for a problem that is often hard to quantify.
* Significance: This paper justifies PatternNet and PatternAttribution as good methods to explain predictions made by neural networks. These methods may now serve as an important tool for future work which may lead to new insights about how neural networks work.
Pros:
* Helps to organize existing methods for understanding neural networks in terms of the types of descriptions they provide: functions, signals or attribution.
* Creative quantitative analyses that evaluate their signal estimator at the level of single units and entire networks.
Cons:
* Experiments consider only the pre-trained VGG-19 model trained on ImageNet. Results may not generalize to other architectures/datasets.
* Limited visualizations are provided.
Comments:
* Most of the paper is dedicated to explaining these signal estimators and quality criterion in case of a linear model. Only one paragraph is given to explain how they are used to estimate the signal at each layer in VGG-19. On first reading, there are some ambiguities about how the estimators scale up to deep networks. It would help to clarify if you included the expression for the two-component estimator and maybe your quality criterion for an arbitrary hidden unit.
* The concept of signal is somewhat unclear. Is the signal
* (a) the part of the input image that led to a particular classification, as described in the introduction and suggested by the visualizations, in which case there is one signal per image for a given trained network?
* (b) the part of the input that led to activation of a particular unit, as your unit wise signal estimators are applied, in which case there is one signal for every unit of a trained network? You might benefit from two terms to separate the unit-level signal (what caused the activation of a particular unit?) from the total signal (what caused all activations in this network?).
* Assuming definition (b) I think the visualizations would be more convincing if you showed the signal for several output units. One would like to see that the signal estimation is doing more than separating foreground from background but is actually semantically specific. For instance, for the mailbox image, what does the signal look like if you propagate back from only the output unit for umbrella compared to the output unit for mailbox?
* Do you have any intuition about why your two-component estimator doesn’t seem to be working as well in the convolutional layers? Do you think it is related to the fact that you are averaging within feature maps? Is it strictly necessary to do this averaging? Can you imagine a signal estimator more specifically designed for convolutional layers?
Minor issues:
* The label "Figure 4" is missing. Only subcaptions (a) and (b) are present.
* Color scheme of figures: Why two oranges? It’s hard to see the difference. |
iclr_2018_ryCM8zWRb | RNNs have been shown to be excellent models for sequential data and in particular for session-based user behavior. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce a novel ranking loss function tailored for RNNs in recommendation settings. The better performance of such loss over alternatives, along with further tricks and improvements described in this work, allow to achieve an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 51% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly. | This paper presents a few modifications on top of some earlier work (GRU4Rec, Hidasi et al. 2016) for session-based recommendation using RNN. The first one is to include additional negative samples based on popularity raised to some power between 0 and 1. The second one is to mitigate the vanishing gradient problem for pairwise ranking loss, especially with the increased number of negative samples from the first modification. The basic idea is to weight all the negative examples by their “relevance”, since for the irrelevant negatives the gradients are vanishingly small. Experimentally these modifications prove to be effective compared with the original GRU4Rec paper.
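To make the vanishing-gradient point concrete, here is my own illustration (not taken from the paper), using a BPR-style pairwise loss between the target item's score r_i and a sampled negative's score r_j:

    L = -log sigma(r_i - r_j),    dL/dr_j = sigma(r_j - r_i)

When the negative is irrelevant and already scored far below the target (r_j << r_i), the factor sigma(r_j - r_i) is essentially zero, so such samples contribute almost nothing to the update; weighting or selecting negatives by relevance is meant to counteract exactly this.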
The writing could have been clearer, especially in terms of notations and definitions. I found myself sometimes having to infer the missing bits. For example, in Eq (4) and (5), and many that follow, the indices i and j are not defined (I can infer them from the later part), nor is N_s (which I take to be the number of negative examples). This is just one example, but I hope the authors could carefully check the paper and make sure all the notations/terminologies are properly defined or referenced with a citation when first introduced (e.g., pointwise, pairwise, and listwise loss functions). I consider myself very familiar with the RecSys literature, and yet sometimes I cannot follow the paper very well, not to mention the general ICLR audience.
Regarding the two main modifications, I found the negative sampling rather trivial (and I am surprised in Hidasi et al. (2016) the negatives are only from the same batch, which seems a huge computational compromise) with many existing work on related topic: Steck (Item popularity and recommendation accuracy, 2011) used the same “popularity to the power between 0 and 1” strategy (they weighted the positive by the inverse popularity to the power). More closely, the negative sampling distribution in word2vec is in fact a unigram raised to the power of 0.75, which is the same as the proposed strategy here. As for the gradient vanishing problem for pairwise ranking loss, it has been previously observed in Rendle & Freudenthaler (Improving Pairwise Learning for Item Recommendation from Implicit Feedback, 2014) for BPR and they proposed an adaptive negative sampling strategy (trying to sample more relevant negatives while still keeping the computational cost low), which is closely related to the ranking-max loss function proposed in this paper. Overall, I don’t think this paper adds much on top of the previous work, and I think a more RecSys-oriented venue might benefit more from the insights presented in this paper.
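For reference, the shared sampling scheme (popularity raised to a power, as in word2vec's unigram^0.75 negative sampler) can be sketched in a few lines; item_counts and alpha below are placeholder names of mine, not the authors' implementation:

    import numpy as np

    def build_sampler(item_counts, alpha=0.75):
        # probability of drawing item i as a negative is count_i^alpha / sum_j count_j^alpha
        p = np.asarray(item_counts, dtype=np.float64) ** alpha
        p /= p.sum()
        return lambda n: np.random.choice(len(p), size=n, p=p)

    sample_negatives = build_sampler(item_counts=[500, 120, 30, 5], alpha=0.75)
    negatives = sample_negatives(64)  # extra negatives shared across the mini-batch

This also makes clear why the contribution feels incremental: the sampling distribution itself is standard, and the interesting part is only how the extra negatives are combined with the ranking-max loss.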
I also have some high-level comments regarding using RNNs for session-based recommendation (this was also my initial reaction after reading Hidasi et al. 2016). As mentioned in this paper, when applying RNNs to RecSys datasets with longer time-spans (which means there can be more temporal dynamics in users’ preferences and item popularity), the results are not striking (e.g., Wu et al. 2017), with the proposed methods barely outperforming standard matrix factorization methods. It is puzzling how an RNN can work better for the session-based case, where a user’s preference can hardly change within such a short period of time. I wonder how a simple matrix factorization approach would work for session-based recommendation (which is an important baseline that is missing): regarding the claim that MF is not suited for the session-based setting because of the absence of the concept of a user, each session can simply be considered as a pseudo-user, and approaches like asymmetric matrix factorization (Paterek 2007, Improving regularized singular value decomposition for collaborative filtering) can even eliminate the need for learning user factors. ItemKNN is a pretty weak baseline and I wonder if a scalable version of SLIM (Ning & Karypis 2011, SLIM: Sparse Linear Methods for Top-N Recommender Systems) would give better results. Finally, my general experience with BPR-type pairwise ranking losses is that they are good at optimizing AUC, but not very well suited for head-heavy metrics (MRR, Recall, etc.). I wonder how the proposed loss would perform compared with more competitive baselines.
Regarding the page limit, given currently the paper is quite long (12 pages excluding references), I suggest the authors cutting down some space. For example, the part about fixing the cross entropy is not very relevant and can totally be put in the appendix.
Minor comment:
1. Section 3.3.1, “Part of the reasons lies in the rare occurrence…”, should r_j >> r_i be the other way around? |
iclr_2018_B1NGT8xCZ | We propose a probabilistic framework for domain adaptation that blends both generative and discriminative modeling in a principled way. Under this framework, generative and discriminative models correspond to specific choices of the prior over parameters. This provides us a very general way to interpolate between generative and discriminative extremes through different choices of priors. By maximizing both the marginal and the conditional log-likelihoods, models derived from this framework can use both labeled instances from the source domain as well as unlabeled instances from both source and target domains. Under this framework, we show that the popular reconstruction loss of autoencoder corresponds to an upper bound of the negative marginal log-likelihoods of unlabeled instances, where marginal distributions are given by proper kernel density estimations. This provides a way to interpret the empirical success of autoencoders in domain adaptation and semi-supervised learning. We instantiate our framework using neural networks, and build a concrete model, DAuto. Empirically, we demonstrate the effectiveness of DAuto on text, image and speech datasets, showing that it outperforms related competitors when domain adaptation is possible. | The authors propose a probabilistic framework for semi-supervised learning and domain adaptation. By varying the prior distribution, the framework can incorporate both generative and discriminative modeling. The authors emphasize on one particular form of constraint on the prior distribution, that is weight (parameter) sharing, and come up with a concrete model named Dauto for domain adaptation. A domain confusion loss is added to learn domain-invariant feature representations. The authors compared Dauto with several baseline methods on several datasets and showed improvement.
The paper is well-organized and easy to follow. The probabilistic framework itself is quite straight-forward. The paper will be more interesting if the authors are able to extend the discussion on different forms of prior instead of the simple parameter sharing scheme.
The proposed DAuto is essentially DANN+autoencoder. The minimax loss employed in DANN and DAuto is known to be prone to degenerate gradients for the generator. It would be interesting to see if the additional autoencoder part helps address the issue.
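Schematically, my understanding of the DAuto objective is a three-term combination (the weighting names lambda_d and lambda_r are mine, not the paper's):

    L = L_cls(labeled source data)
        + lambda_d * L_domain(gradient-reversed domain classifier on source + target features)
        + lambda_r * L_rec(autoencoder reconstruction on source + target)

i.e., essentially the DANN objective plus a reconstruction term, which is why an ablation isolating the reconstruction term's effect on the adversarial training dynamics would be informative.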
The experiments miss some of the more recent baselines in domain adaptation, such as Adversarial Discriminative Domain Adaptation (Tzeng et al., 2017).
It could be more meaningful to organize the pairs in the table by target domain instead of source; for example, grouping 9->9, 8->9, 7->9 and 3->9 in the same block. DAuto does seem to offer more of a boost in domain pairs that are less similar.
iclr_2018_B1spAqUp- | Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem. This is caused by the fact that no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL) to establish direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can be used to replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of original models. The proposed PixelDCL may result in slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations. | Paper summary:
This paper proposes a technique to generalize deconvolution operations used in standard CNN architectures. Traditional deconvolution operation uses independent filter weights to compute output features at adjacent pixels. This work proposes to do sequential prediction of adjacent pixel features (via intermediate feature maps) resulting in more spatially smooth outputs for deconvolution layer. This new layer is referred to as ‘pixel deconvolution layer’ and it is demonstrated on two tasks of semantic segmentation and face generation.
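As a rough pseudocode sketch of my reading of the proposed layer (for 2x up-sampling; the exact dependency pattern between the intermediate maps is one of the design choices discussed below): a standard deconvolution can be viewed as four independent convolutions whose outputs are interleaved into the up-sampled map, whereas PixelDCL makes the later maps depend on the earlier ones, e.g.

    F1 = conv(X)                      # first intermediate map, from the input only
    F2 = conv([X, F1])                # second map conditions on F1
    F3 = conv([X, F1, F2])            # and so on for the remaining sub-pixel positions
    F4 = conv([X, F1, F2, F3])
    out = interleave(F1, F2, F3, F4)  # place the four maps at the four sub-pixel positions

so adjacent output pixels are no longer produced by unrelated filters, which is what is meant to suppress the checkerboard artifact.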
Paper Strengths:
- Despite being simple technique, the proposed pixel deconvolution layer is novel and interesting.
- Experimental results on two different tasks demonstrating the general use of the proposed deconvolution layer.
Major Weaknesses:
- The main weakness of this paper lies in its weak experiments. Although the authors say that several possibilities exist for the dependencies between intermediate feature maps, there are no systematic ablation studies on what type of connectivities works best for the proposed layer. The authors experimented with two randomly chosen connectivities, which is not enough to understand what type of connectivities works best. This is important as this forms the main contribution of the paper.
- Also, several quantitative results seem incomplete. Why is the DeepLab-ResNet performance so low? A quick look at PascalVOC results indicate that DeepLab-ResNet has IoU of over 79 on this dataset, but the reported numbers in this paper are only around 73 IoU. There is no mention of IoU for base DeepLab-ResNet model and the standard DeepLab+CRF technique. And, there are no quantitative results on image generation.
Minor Weaknesses:
- Although the paper is easy to understand, several parts of the paper are poorly written. Several sentences are repeated multiple times across the paper. Some statements need corrections/refinements, such as “mean IoU is a more accuracy evaluation measure”. And it would be better to tone down some statements, for example changing “solving” to “tackling”.
- The illustration of checkerboard artifacts from standard deconvolution technique is not clear. For example, the results presented in Figure-4 indicate segmentation mistakes of the network rather than checkerboard artifacts.
Clarifications:
- Why do the authors choose to 'resize' the images for training semantic segmentation networks, instead of the generally used 'cropping' to create batches?
- I cannot see the ‘red’ in Figure-5. I see the later feature map more as a ‘pinkish’ color. It is probably due to my color vision. In any case, it would be better to use a different color scheme to distinguish them.
Suggestions:
- I strongly advise the authors to do some ablation studies on connectivities to make this a good paper. Also, it would be great if the authors could revise the writing thoroughly to make this a more enjoyable read.
Review Summary:
The proposed technique, despite being simple, is novel and interesting. But, the weak and incomplete experiments make this not yet ready for publication. |
iclr_2018_HkNGsseC- | Published as a conference paper at ICLR 2018 ON THE EXPRESSIVE POWER OF OVERLAPPING ARCHITECTURES OF DEEP LEARNING
Expressive efficiency refers to the relation between two architectures A and B, whereby any function realized by B could be replicated by A, but there exists functions realized by A, which cannot be replicated by B unless its size grows significantly larger. For example, it is known that deep networks are exponentially efficient with respect to shallow networks, in the sense that a shallow network must grow exponentially large in order to approximate the functions represented by a deep network of polynomial size. In this work, we extend the study of expressive efficiency to the attribute of network connectivity and in particular to the effect of "overlaps" in the convolutional process, i.e., when the stride of the convolution is smaller than its filter size (receptive field). To theoretically analyze this aspect of network's design, we focus on a well-established surrogate for ConvNets called Convolutional Arithmetic Circuits (ConvACs), and then demonstrate empirically that our results hold for standard ConvNets as well. Specifically, our analysis shows that having overlapping local receptive fields, and more broadly denser connectivity, results in an exponential increase in the expressive capacity of neural networks. Moreover, while denser connectivity can increase the expressive capacity, we show that the most common types of modern architectures already exhibit exponential increase in expressivity, without relying on fully-connected layers. | The paper studies convolutional neural networks where the stride is smaller than the convolutional filter size; the so called overlapping convolutional architectures. The main object of study is to quantify the benefits of overlap in convolutional architectures.
The main claim of the paper is Theorem 1, which is that overlapping convolutional architectures are efficient with respect to non-overlapping architectures, i.e., there exist functions in the overlapping architecture which require an exponential increase in size to be represented in the non-overlapping architecture, whereas the overlapping architecture can capture, within a linear size, the functions represented by the non-overlapping architectures. The main workhorse behind the paper is the notion of the rank of matricized grid tensors, following a paper of Cohen and Shashua, which captures the relationship between the inputs and the outputs, i.e., the function implemented by the neural network.
(1) The results of the paper hold only for product pooling and linear activation function except for the representation layer, which allows general functions. It is unclear why the generalized convolutional networks are stated with such generality when the results apply only to this special case. That this is the case should be made clear in the title and abstract. The paper makes a point that generalized tensor decompositions can be potentially applied to solve the more general case, but since it is left as future work, the paper should make it clear throughout.
(2) The experiment is minimal and even the given experiment is not described well. What data augmentation was used for the CIFAR-10 dataset? It is only mentioned that the data is augmented with translations and horizontal flips. What is the factor of augmentation? How much translation? These are important because there may be a much simpler explanation for the benefit of overlap: it is able to detect these translated patterns easily. Indeed, this simple intuition seems to be why the authors chose to construct the problem by introducing translations and flips.
(3) It is unclear if the paper resolves the mystery that they set out to solve, which is a reconciliation of the following two observations (a) why are non-overlapping architectures so common? (b) why only slight overlap is used in practice? The paper seems to claim that since overlapping architectures have higher expressivity that answers (a). It appears that the paper does not answer (b) well: it points out that since there is exponential increase, there is no reason to increase it beyond a particular point. It seems the right resolution will be to show that after the overlap is set to a certain small value, there will be *only* linear increase with increasing overlap; i.e., the paper should show that small overlap networks are efficient with respect to *large* overlap networks; a comparison that does not seem to be made in the paper.
(4) Small typo: the dimensions seem to be wrong in the line below the equation in page 3.
The paper makes important progress on a highly relevant problem using a new methodology (borrowed from a previous paper). However, the writing is hurried and the high-level conclusions are not fully supported by theory and experiments. |
iclr_2018_SyUkxxZ0b | State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input. In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image. Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved. We hypothesize that this counter intuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold. As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres. For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error. In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size
O(1/√d). Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound. As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples. | The idea of analyzing a simple synthetic data set to get insights into open issues about adversarial examples has merit. However, the results reported here are not sufficiently significant for ICLR.
The authors make a big deal throughout the paper about how close to training data the adversarial examples they can find on the data manifold are. E.g.: “Despite being extremely rare, these misclassifications appear close to randomly sampled points on the sphere.” They report that the mean distance to the nearest errors on the data manifold is 0.18, whereas the mean distance between two random points on the inner sphere is 1.41. However, the distance between two random points on the sphere is not the right comparison. The mean distance between nearest neighbors among the training samples would be much more appropriate.
They also stress in the Conclusions their Conjecture 5.1 that under some assumptions “the average distance to nearest error may decrease on the order of O(1 / d) as the input dimension grows large.” However, earlier they admitted that “Whether or not a similar conjecture holds for image manifolds is unclear and should be investigated in future work.” So, the practical significance of this conjecture is unclear. Furthermore, it is well known that in high dimensions, the distances between pairs of training samples tend towards a large constant (e.g. making nearest neighbor search using triangular inequality pruning infeasible), so extreme care must be taken not to over-generalize any results from these sorts of synthetic high dimensional experiments.
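The concentration-of-distances point is easy to check numerically; a quick sketch of my own (n and d are arbitrary choices) for points drawn uniformly on the unit d-sphere:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 2000, 500
    pts = rng.standard_normal((n, d))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # points on the unit d-sphere
    sq = np.maximum(2.0 - 2.0 * (pts @ pts.T), 0.0)      # squared distances, since ||a-b||^2 = 2 - 2 a.b for unit vectors
    dists = np.sqrt(sq)
    np.fill_diagonal(dists, np.inf)
    print(dists[np.isfinite(dists)].mean())              # mean random-pair distance: concentrates near sqrt(2) ~ 1.41
    print(dists.min(axis=1).mean())                      # mean nearest-neighbor distance among the samples

The point is simply that the informative baseline is the nearest-neighbor distance among training samples computed at the same n and d the authors use, not the random-pair value of ~1.41, and that both of these quantities concentrate in high dimension.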
The authors note that for higher dimensional spheres, adversarial examples on the manifold (sphere shell) could be found, but not for smaller d: “In our experiments the highest dimension we were able to train the ReLU net without adversarial examples seems to be around d = 60.” Yet, in their later statement in that same paragraph, “We did not investigate if larger networks will work for larger d.”, it is unclear what is meant by “will work”; presumably it would be HARDER for larger networks (with more weights) to avoid adversarial examples being found on the data manifold, so larger networks should be less likely “to work”, if “work” means avoiding adversarial examples. In any case, their apparent use of only h=1000 unit networks (for both ReLU and quadratic cases) is disappointing, because it is not clear whether the phenomena observed would be qualitatively similar for different fully-separable discriminants (e.g. different h values with different regularization costs even if all such networks had zero classification errors).
The authors repeat the following exact same phrase in both the Introduction and the Conclusion:
“Our results highlight the fact that the epsilon norm ball adversarial examples often studied in defence papers are not the real problem but are rather a tractable research problem. “
But it is not clear exactly what the authors meant by this. Also, the term “epsilon norm ball” is not commonly used in adversarial literature, and the only reference to such papers is Madry et al, (2017), which is only on ArXiv and not widely known — if these types of adversarial examples are “often studied” as claimed, there should be other / more established references to cite here.
In short, this work addresses the important problem of better understanding adversarial examples, but the simple setup has a higher burden to establish significance, which this paper as written has not met. |
iclr_2018_r1q7n9gAb | THE IMPLICIT BIAS OF GRADIENT DESCENT ON SEPARABLE DATA
We show that gradient descent on an unregularized logistic regression problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalization to the cross entropy loss. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods. | The paper offers a formal proof that gradient descent on the logistic
loss converges very slowly to the hard SVM solution in the case where the data are linearly separable. This result should be viewed in the context of recent attempts at trying to understand the generalization ability of neural networks, which have turned to trying to understand the implicit regularization bias that comes from the choice of optimizer. Since we do not even understand the regularization bias of optimizers for the simpler case of linear models, I consider the paper's topic very interesting and timely.
The overall discussion of the paper is well written, but on a more detailed level the paper gives an unpolished impression, and has many technical issues. Although I suspect that most (or even all) of these issues can be resolved, they interfere with checking the correctness of the results. Unfortunately, in its current state I therefore do not consider the paper ready for publication.
Technical Issues:
The statement of Lemma 5 has a trivial part and for the other part the proof is incorrect: Let x_u = ||nabla L(w(u))||^2.
- Then the statement sum_{u=0}^t x_u < infinity is trivial, because it follows directly from ||nabla L(w(u))||^2 < infinity for all u. I would expect the intended statement to be sum_{u=0}^infinity x_u < infinity, which actually follows from the proof of the lemma.
- The proof of the claim that t*x_t -> 0 is incorrect: sum_{u=0}^t x_u < infinity does not in itself imply that t*x_t -> 0, as claimed. For instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and x_t = 0 for all other t.
Definition of tilde{w} in Theorem 4:
- Why would tilde{w} be unique? In particular, if the support vectors do not span the space, because all data lie in the same lower-dimensional hyperplane, then this is not the case.
- The KKT conditions do not rule out the case that \hat{w}^top x_n = 1, but alpha_n = 0 (i.e. a support vector that touches the margin, but does not exert force against it). Such n are then included in cal{S}, but lead to problems in (2.7), because they would require tilde{w}^top x_n = infinity, which is not possible.
In the proof of Lemma 6, case 2. at the bottom of p.14:
- After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be C_0^2 t^{-epsilon_+}
- After the second inequality the part between brackets is missing an additional term C_0^2 t^{-\epsilon_+}.
- In addition, the label (1) should be on the previous inequality and it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0 (otherwise it might be false).
In the proof of Lemma 6, case 2 in the middle of p.15:
- In the line of inequality (1) there is a t^{-epsilon_-} missing. In the next line there is a factor t^{-epsilon_-} too much.
- In addition, the inequality e^x >= 1 + x holds for all x, so no need to mention that x > 0.
In Lemma 1:
- claim (3) should be lim_{t \to \infty} w(t)^\top x_n = infinity
- In the proof: w(t)^top x_n > 0 only holds for large enough t.
Remarks:
p.4 The claim that "we can expect the population (or test) misclassification error of w(t) to improve" because "the margin of w(t) keeps improving" is worded a little too strongly, because it presumes that the maximum margin solution will always have the best generalization error.
In the proof sketch (p.3):
- Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?
- "converges to some limit". Mention that you call this limit w_infinity
Minor Issues:
In (2.4): add "for all n".
p.10, footnote: Shouldn't "P_1 = X_s X_s^+" be something like "P_1 = (X_s^top X_s)^+"?
A.9: ell should be ell'
The paper needs a round of copy editing. For instance:
- top of p.4: "where tilde{w} A is the unique"
- p.10: "the solution tilde{w} to TO eq. A.2"
- p.10: "might BOT be unique"
- p.10: "penrose-moorse pseudo inverse" -> "Moore-Penrose
pseudoinverse"
In the bibliography, Kingma and Ba is cited twice, with different years. |
iclr_2018_r1kjEuHpZ | In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance. | The paper proposed a new regularization approach that simultaneously encourages the weight vectors (W) to be sparse and orthogonal to each other. The argument is that the sparsity helps to eliminate the irrelevant feature vectors by making the corresponding weights zero. Nearly orthogonal sparse vectors will have zeros at different indexes and hence, encourages the weight vectors to have small overlap in terms of indices of nonzero entries (called support). Small overlap in support of weight vectors, aids interpretability as each weight vector is associated with a unique subset of feature vectors. For example, in the topic model, small overlap encourages, each topic to have unique set of representation words.
The proposed approach uses an L1 regularizer for enforcing sparsity in W. For enforcing orthogonality between different weight vectors (w_i, w_j), the log-determinant divergence (LDD) regularization term encourages the Gram matrix G (G_ij = w_i^T w_j) to be close to the identity matrix I. The authors applied and tested the performance of the proposed approach on neural network (NN) and sparse coding (SC) models. The authors validated the need for their proposed regularizer through experiments on 4 datasets (3 text and 1 image).
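For concreteness, a minimal sketch of the combined regularizer as I understand it (lambda_1, lambda_2 and the use of tr(G) - logdet(G) as the divergence to the identity are my reading, not the authors' exact code):

    import torch

    def ldd_l1_penalty(W, lam1=1e-4, lam2=1e-4, eps=1e-6):
        # W: (k, d) matrix whose rows are the k weight vectors
        G = W @ W.t() + eps * torch.eye(W.shape[0])          # Gram matrix G_ij = w_i . w_j
        ldd = torch.trace(G) - torch.logdet(G) - W.shape[0]  # log-det divergence between G and I
        l1 = W.abs().sum()                                   # promotes sparsity of each vector
        return lam2 * ldd + lam1 * l1

which is added to the task loss; the L1 part zeroes out entries and the LDD part pushes the surviving supports of different rows apart.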
Major
* The novelty of the paper is not clear. Neither L1 nor logdet() is a novel regularizer (see the literature on Determinantal Point Processes). With the presence of automatic differentiation, one cannot claim that deriving the gradients is a novelty.
* L1 also encourages diversity, although not as explicitly as logdet. This is also obvious from Fig 2. Perhaps the advantage of diversity is in interpretability, but that is hard to quantify and the authors did not put enough effort into doing so; we only have small anecdotal results in section 4.3.
* The Table 1 is not convincing because one can argue, for example, gun (vec 1) and weapon (vec 4) are colinear.
* In section 4.2, the authors experimented with SC on text dataset. The overlap score decreases as the strength of regularization increases. The authors didn’t show the effect of increasing the regularization strength on the model accuracy and convergence time. This analysis is important to make sure, the decrease in overlap score is not coming at the expense of model accuracy and performance.
* In section 4.4, the increase in test set accuracy and the difference between test and train set accuracy are used to validate the claim that the proposed regularizer helps reduce overfitting. In Table-2, the test accuracy increases between SC and LDD-L1 SC while the train accuracy remains almost the same. Also, the authors didn’t do any cross validation to support their claim. The difference in numbers is too small to support the claim.
* In the section on LSTM for Language Modeling, adding LDD-L1 regularization to the PyTorch LM lowered the perplexity score by 1.2 relative to no regularization. Although the authors describe this as a significant reduction, the lowest perplexity score in Table 3 is considerably lower than this result. It’s not clear how a 1.2 reduction in perplexity is significant and why the method should be preferred while much better models already exist.
* Results of the best perplexity model, Neural Architecture Search + WT V2, with the proposed regularization would also help validate the generalizability claims of the new approach.
* In CNN for Image Classification section, details of increase interpretability of the model, in terms of classification decision, is missing.
* In Table-4, the proposed LDD-L1 WideResNet does not give the best results. Results of adding the proposed regularization to the best known method (Pyramid Sep Drop) would be interesting.
* The proposed regularization claims to provide more interpretable representation and less overfit model. The given experiments are inadequate to validate the claims.
* A more extensive experimentation is required to validate the applicability of the method.
* In SC, a_j is the coefficient vector (the linear coefficients) of the j-th sample. If A ∈ R^{m×n} then a_j ∈ R^{m×1} and j ranges over [1, n], as in equation 6. The notation in the Sparse Coding part of section 2.2 is misleading, as there j ranges over [1, m].
* In Related works, the authors mention previous work done on interpreting the results of the machine learning models. Related works on enhancing interpretability and reducing overfitting by using regularization is missing. |
iclr_2018_HkfXMz-Ab | Published as a conference paper at ICLR 2018 NEURAL SKETCH LEARNING FOR CONDITIONAL PROGRAM GENERATION
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a "realistic" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method. | The authors introduce an algorithm in the subfield of conditional program generation that is able to create programs in a rich java like programming language. In this setting, they propose an algorithm based on sketches- abstractions of programs that capture the structure but discard program specific information that is not generalizable such as variable names. Conditioned on information such as type specification or keywords of a method they generate the method's body from the trained sketches.
Positives:
• Novel algorithm and addition of rich java like language in subfield of 'conditional program generation' proposed
• Very good abstract: It explains high level overview of topic and sets it into context plus gives a sketch of the algorithm and presents the positive results.
• Excellently structured and presented paper
• Motivation given in form of relevant applications and mention that it is relatively unstudied
• The hypothesis / the paper's goal is clearly stated. It is introduced with 'We ask' followed by two well-formulated lines that make up the hypothesis. It is repeated multiple times throughout the paper. Every mention either introduces a new argument on why this is necessary or sets it in contrast to other learners, clearly stating discrepancies.
• Explanations are exceptionally well done: terms that might not be familiar to the reader are explained. This is true for mathematical aspects as well as program generating specific terms. Examples are given where appropriate in a clear and coherent manner
• Problem statement well defined mathematically and understandable for a broad audience
• Mentioning of failures and limitations demonstrates a realistic view on the project
• Complexity and time analysis provided
• Paper written so that it's easy for a reader to implement the methods
• Detailed descriptions of all instantiations even parameters and comparison methods
• System specified
• Validation method specified
• Data and repository, as well as cleaning process provided
• Every figure and plot is well explained and interpreted
• Large successful evaluation section provided
• Many different evaluation measures defined to measure different properties of the project
• Different observability modes
• Evaluation against most compatible methods from other sources
• Results are in line with hypothesis
• Thorough appendix clearing any open questions
It would have been good to have a summary/conclusion/future work section
SUMMARY: ACCEPT. The authors present a very intriguing novel approach that in a clear and coherent way. The approach is thoroughly explained for a large audience. The task itself is interesting and novel. The large evaluation section that discusses many different properties is a further indication that this approach is not only novel but also very promising. Even though no conclusive section is provided, the paper is not missing any information. |
iclr_2018_H1T2hmZAb | Published as a conference paper at ICLR 2018 DEEP COMPLEX NETWORKS
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their realvalued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks. | This paper defines building blocks for complex-valued convolutional neural networks: complex convolutions, complex batch normalisation, several variants of the ReLU nonlinearity for complex inputs, and an initialisation strategy. The writing is clear, concise and easy to follow.
An important argument in favour of using complex-valued networks is said to be the propagation of phase information. However, I feel that the observation that CReLU works best out of the 3 proposed alternatives contradicts this somewhat. CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise. It definitely does not preserve phase, like modReLU would.
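To make the contrast concrete, here is a minimal sketch of the two activations as I understand them (numpy, complex inputs z; the bias b in modReLU is a learned per-unit parameter):

    import numpy as np

    def crelu(z):
        # component-wise ReLU on real and imaginary parts: phase is not preserved
        return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

    def modrelu(z, b, eps=1e-8):
        # thresholds the magnitude but keeps the phase z/|z| intact
        m = np.abs(z)
        return np.where(m + b > 0, (m + b) * z / (m + eps), 0.0 + 0.0j)

So CReLU maps, e.g., an activation in the third quadrant (negative real and imaginary parts) to exactly zero and rotates other inputs toward the first quadrant, whereas modReLU leaves the phase of surviving activations untouched.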
This makes me wonder whether the "complex numbers" paradigm is applied meaningfully here, or whether this is just an arbitrary way of doing some parameter sharing in convnets that happens to work reasonably well (note that even completely random parameter tying can work well, as shown in "Compressing neural networks with the hashing trick" by Chen et al.). Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this.
The image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach. The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated. It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens). This connection is a bit vague. Once again, a more in-depth analysis of this phase behavior would be very welcome.
I'm on the fence about this work: I like the ideas and they are explained well, but I'm missing some insight into why and how all of this is actually helping to improve performance (especially w.r.t. how phase information is used).
Comments:
- The related work section is comprehensive but a bit unstructured, with each new paragraph seemingly describing a completely different type of work. Maybe some subsection titles would help make it feel a bit more cohesive.
- page 3: "(cite a couple of them)" should be replaced by some actual references :)
- Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn't the complex version always require more computation on account of there being more filters in each layer? It would be nice to discuss computational cost as well.
REVISION: I have decided to raise my rating from 5 to 7 as I feel that the authors have adequately addressed many of my comments. In particular, I really appreciated the additional appendix sections to clarify what actually happens as the phase information is propagated through the network.
Regarding the CIFAR results, I may have read over it, but I think it would be good to state even more clearly that these experiments constitute a sanity check, as both reviewer 1 and myself were seemingly unaware of this. With this in mind, it is of course completely fine that the results are not better than for real-valued networks. |
iclr_2018_rJ3fy0k0Z | The goal of imitation learning (IL) is to enable a learner to imitate an expert's behavior given the expert's demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. The keys of our algorithm are two folds: 1) adopting deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those appeared on the expert's demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times less interactions than GAIL on a variety of continuous control tasks. | This paper considers the problem of model-free imitation learning. The problem is formulated in the framework of generative adversarial imitation learning (GAIL), wherein we alternate between optimizing reward parameters and learner policy's parameters. The reward parameters are optimized so that the margin between the cost of the learner's policy and the expert's policy is maximized. The learner's policy is optimized (using any model-free RL method) so that the same cost margin is minimized. Previous formulation of GAIL uses a stochastic behavior policy and the RIENFORCE-like algorithms. The authors of this paper propose to use a deterministic policy instead, and apply the deterministic policy gradient DPG (Silver et al., 2014) for optimizing the behavior policy.
The authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher. However, a more detailed discussion and a clearer explanation are needed to clarify what SSF is actually doing, based on the provided formulation.
Except for a few typos here and there, the paper is overall well written. The proposed idea seems new. However, the reviewer finds the main contribution rather incremental in nature. Replacing a stochastic policy with a deterministic one does not change the original GAIL algorithm much, since stochastic policies are often adopted just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for this (except for exploration, which is done here through re-initializations anyway). My guess is that if someone were to use the GAIL algorithm for real problems (e.g., a robotic task), they would significantly reduce the stochasticity of the behavior policy, which would make it virtually similar in terms of data efficiency to the proposed method.
Pros:
- A new GAIL formulation for saving on interaction data.
Cons:
- Incremental improvement over GAIL
- Experiments only on simulated toy problems
- No theoretical guarantees for the state screening function (SSF) method |
iclr_2018_BkwHObbRZ | LEARNING ONE-HIDDEN-LAYER NEURAL NETWORKS WITH LANDSCAPE DESIGN
We consider the problem of learning a one-hidden-layer neural network: we assume the input x ∈ R^d is from a Gaussian distribution and the label y = a^⊤σ(Bx) + ξ, where a is a nonnegative vector in R^m with m ≤ d, B ∈ R^{m×d} is a full-rank weight matrix, and ξ is a noise vector. We first give an analytic formula for the population risk of the standard squared loss and demonstrate that it implicitly attempts to decompose a sequence of low-rank tensors simultaneously.
Inspired by the formula, we design a non-convex objective function G(·) whose landscape is guaranteed to have the following properties:
1. All local minima of G are also global minima.
2. All global minima of G correspond to the ground truth parameters.
3. The value and gradient of G can be estimated using samples.
With these properties, stochastic gradient descent on G provably converges to the global minimum and learns the ground-truth parameters. We also prove finite sample complexity results and validate the results by simulations. | [ =========================== REVISION ===============================================================]
I am satisfied with the answers to my questions. The paper still needs some work on clarity, and authors defer the changes to the next version (but as I understood, they did no changes for this paper as of now), which is a bit frustrating. However I am fine accepting it.
[ ============================== END OF REVISION =====================================================]
This paper is concerned with addressing the issue of SGD not converging to the optimal parameters of a one-hidden-layer network for a particular type of data and label (Gaussian features, labels generated using a particular function that should be learnable with a neural net). The authors demonstrate empirically that this particular learning problem is hard for SGD with the l2 loss (apparently due to bad local optima) and suggest two ways of addressing it, on top of the known way of dealing with this problem (which is overparameterization). The first is to use a new activation function; the second is to design a new objective function that has only global optima and which can be efficiently learnt with SGD.
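For reference, the learning problem being studied can be reproduced in a few lines (a sketch following the abstract's setup; the dimensions and noise scale are arbitrary choices of mine, and relu is used as in the empirical case discussed below):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, n = 50, 20, 10000
    B = rng.standard_normal((m, d))                 # ground-truth full-rank weights
    a = np.abs(rng.standard_normal(m))              # nonnegative output weights
    X = rng.standard_normal((n, d))                 # Gaussian inputs
    y = np.maximum(X @ B.T, 0.0) @ a + 0.01 * rng.standard_normal(n)   # y = a^T relu(Bx) + noise

The empirical claim is that SGD on the plain squared loss over a one-hidden-layer ReLU net of matching size often fails to recover (a, B) on data generated this way.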
Overall the paper is well written. The authors first introduce their suggested loss function and then go into detail about what inspired its creation. I do find the formulation of the population risk in terms of tensor decomposition interesting; this is insightful.
My issues with the paper are as follows:
- The loss function designed seems overly complicated. On top of that, the authors note that to learn with this loss efficiently, much larger batches had to be used. I wonder how applicable this is in practice - I frankly didn't see insights here that I can apply to other problems that don't fit into this particular narrowly defined framework.
- I do find it somewhat strange that no insight into the actual problem is provided (e.g. it is known empirically, but there is no explanation of what actually happens, and there is an idea that it is due to local optima), yet the authors are concerned with developing a new loss function that has provable properties about global optima. Since it is all empirical, the first fix (the activation function) seems sufficient to me, and the new loss is very far-fetched.
- It seems that changing the activation function from ReLU to their proposed one fixes the problem without their new loss, so I wonder whether it is a problem with ReLU itself and maybe other activation functions, like sigmoids, will not suffer from the same problem.
- No comparison with overparameterization is given in the experimental results, which makes me wonder why their method is better.
Minor: fix margins in formula 2.7. |
iclr_2018_Byht0GbRZ | Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence. When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors. We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures. Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective. We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset. We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures. | This paper describes the use of latent context-free derivations, using
a CRF-style neural model, as a latent level of representation in neural attention models that consider pairs of sentences. The model implicitly learns a distribution over derivations, and uses marginals under this distribution to bias attention distributions over spans in one sentence given a span in another sentence.
This is an intriguing idea. I had a couple of reservations however:
* The empirical improvements from the method seem pretty marginal, to the point that it's difficult to know what is really helping the model. I would have liked to see more explanation of what the model has learned, and more comparisons to other baselines that make use of attention over spans. For example, what happens if every span is considered as an independent random variable, with no use of a tree structure or the CKY chart?
* The use of the \alpha^0 vs. \alpha^1 variables is not entirely clear. Once they have been calculated in Algorithm 1, how are they used? Do the \rho values somewhere treat these two quantities differently?
* I'm skeptical of the type of qualitative analysis in section 4.3, unfortunately. I think something much more extensive would be interesting here. As one example, the PP attachment example with "at a large venue" is highly suspect; there's a 50/50 chance that any attachment like this will be correct, there's absolutely no way of knowing if the model is doing something interesting/correct or performing at a chance level, given a single example.
iclr_2018_By0ANxbRW | The growing interest to implement Deep Neural Networks (DNNs) on resourcebound hardware has motivated the innovation of compression algorithms. Using these algorithms, DNN model sizes can be substantially reduced, with little-to-no accuracy degradation. This is achieved by either eliminating components from the model, or penalizing complexity during training. While both approaches demonstrate considerable compressions, the former often ignores the loss function during compression while the latter produces unpredictable compressions. In this paper, we propose a technique that directly minimizes both the model complexity and the changes in the loss function. In this technique, we formulate compression as a constrained optimization problem, and then present a solution for it. We will show that using this technique, we can achieve competitive results. | 1. Summary
This paper introduced a method to learn a compressed version of a neural network such that the loss of the compressed network doesn't dramatically change.
2. High level paper
- I believe the writing is a bit sloppy. For instance equation 3 takes the minimum over all m in C but C is defined to be a set of c_1, ..., c_k, and other examples (see section 4 below). This is unfortunate because I believe this method, which takes as input a large complex network and compresses it so the loss in accuracy is small, would be really appealing to companies who are resource constrained but want to use neural network models.
3. High level technical
- I'm confused at the first and second lines of equation (19). In the first line, shouldn't the first term not contain \Delta W ? In the second line, shouldn't the first term be \tilde{\mathcal{L}}(W_0 + \Delta W) ?
- For CIFAR-10 and SVHN you're using Binarized Neural Networks and the two nice things about this method are (a) that the memory usage of the network is very small, and (b) network operations can be specialized to be fast on binary data. My worry is: if you're compressing these networks with your method, are the weights not treated as binary anymore? Now, I know that in Binarized Neural Networks they keep a copy of real-valued weights, so if you're just compressing these then maybe all is alright. But if you're compressing the weights _after_ binarization, then this would be very inefficient because the weights likely won't be binary anymore and (a) and (b) above no longer apply.
- Your compression ratio is much higher for MNIST, but the accuracy loss there is somewhat dramatic (an increase of 0.53 in error nearly doubles your error and makes the network worse than many other competing methods: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354). What is your compression ratio at zero accuracy loss? I think this is a key experiment that should be run, as this result would be much easier to compare with the other methods.
- Previous compression work uses a lot of tricks to compress convolutional weights. Does your method work for convolutional layers?
- The first paper to propose weight sharing was not Han et al., 2015, it was actually:
Chen W., Wilson, J. T., Tyree, S., Weinberger K. Q., Chen, Y. "Compressing Neural Networks with the Hashing Trick" ICML 2015
Although they did not learn the weight sharing function, but use random hash functions.
4. Low level technical
- The end of Section 2 has an extra 'p' character
- Section 3.1: "Here, X and y define a set of samples and ideal output distributions we use for training": this sentence is a bit confusing. Here y isn't a distribution, but rather samples drawn from some distribution. Actually I don't think it makes sense to talk about distributions at all in Section 3.
- Section 3.1: "W is the learnt model...\hat{W} is the final, trained model" This is unclear: W and \hat{W} seem to describe the same thing. I would just remove "is the learnt model and"
5. Review summary
While the trust-region-like optimization of the method is nice and I believe this method could be useful for practitioners, I found the paper somewhat confusing to read. This combined with some key experimental questions I have make me think this paper still needs work before being accepted to ICLR. |
iclr_2018_ByS1VpgRZ | Published as a conference paper at ICLR 2018 CGANS WITH PROJECTION DISCRIMINATOR
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
=================
The authors describe a new variant of a generative adversarial network (GAN) for generating images. This model employs a 'projection discriminator' in order to incorporate image labels and demonstrate that the resulting model outperforms state-of-the-art GAN models.
Major comments:
1) Spatial resolution. What spatial resolution is the model generating images at? The AC-GAN work performed an analysis to assess how information is being introduced at each spatial resolution by assessing the gains in the Inception score versus naively resizing the image. It is not clear how much of the gains of this model is due to generating better lower-resolution images and performing simple upscaling. It would be great to see the authors address this issue in a serious manner.
2) FID in real data. The numbers in Table 1 appear favorable to the projection model. Please add error bars (based on Figure 4, I would imagine they are quite large). Additionally, would it be possible to compute this statistic for *real* images? I would be curious to know what the FID looks like as a 'gold standard'.
3) Conditional batch normalization. I am not clear how much of the gains arose from employing conditional batch normalization versus the proposed method for incorporating the projection based discriminator. The former has been seen to be quite powerful in accommodating multi-modal tasks (e.g. https://arxiv.org/abs/1709.07871, https://arxiv.org/abs/1610.07629). If the authors could provide some evidence highlighting the marginal gains of one technique, that would be extremely helpful.
Minor comments:
- I believe you have the incorrect reference for conditional batch normalization on Page 5.
A Learned Representation For Artistic Style
Dumoulin, Shlens and Kudlur (2017)
https://arxiv.org/abs/1610.07629
- Please enlarge images in Figure 5-8. Hard to see the detail of 128x128 images.
- Please add citations for Figures 1a-1b. Do these correspond with some known models?
Depending on how the authors respond to the reviews, I would consider upgrading the score of my review. |
iclr_2018_SkZ-BnyCW | There have been numerous recent advancements on learning deep generative models with latent variables thanks to the reparameterization trick that allows to train deep directed models effectively. However, since reparameterization trick only works on continuous variables, deep generative models with discrete latent variables still remain hard to train and perform considerably worse than their continuous counterparts. In this paper, we attempt to shrink this gap by introducing a new architecture and its learning procedure. We develop a hybrid generative model with binary latent variables that consists of an undirected graphical model and a deep neural network. We propose an efficient two-stage pretraining and training procedure that is crucial for learning these models. Experiments on binarized digits and images of natural scenes demonstrate that our model achieves close to the state-of-the-art performance in terms of density estimation and is capable of generating coherent images of natural scenes. | Summary of the paper:
The paper proposes to augment a variational auto encoder (VAE) with a binary restricted Boltzmann machine (RBM) in the role of the prior of the generative model. To yield a good initialisation of the parameters of the RBM and the inference network, a special pretraining procedure is introduced. The model produces competitive likelihood results on MNIST and was further tested on CIFAR 10.
Clarity and quality:
1. From the description of the pretraining procedure and appendix B I got the impression that the inference network maps into [0,1] and not into {0,1}. Does this mean you are not really considering binary latent variables (making the RBM model the values in [0,1] via its probability p(z|h))?
2. on page 2:
RWS... "derive a tighter lower bound": what does "tighter" refer to, i.e. tighter than what?
3. "multivariate Bernoulli modeled by an RBM": Note, while in a multivariate Bernoulli the binary variables would be independent from each others, this is usually not the case for the visible variables of RBMs (only in the conditional distribution given the state of the hidden variables).
4. The notation could be improved, e.g.:
-x_data and x_sample are not explained
- M is not defined in equation 5.
5. "this training method has been previously used to produce the best results on MNIST" Note, that parallel tempering often leads to better results when training RBMs (see http://proceedings.mlr.press/v9/desjardins10a/desjardins10a.pdf) . Furthermore, centred RBMs are also get better results than vanilla RBMs (see: http://jmlr.org/papers/v17/14-237.html).
Originality and significance:
As already mentioned in a comment on OpenReview, the current version of the paper fails to mention one very closely related work: "discrete variational auto encoders". Also, "bidirectional Helmholtz machines" could be mentioned as a generative model with discrete latent variables. The results for both should also be reported in Table 1 (discrete VAEs: 81.01, BiHMs: 84.3).
From the motivation, the advantages of the model did not become very clear to me. The main advantage seems to be the good likelihood result on MNIST (but the likelihood does not improve compared to IWAE on CIFAR 10, for example). However, using an RBM as the prior has the disadvantage that sampling from the generative model now requires running a Markov chain, while a solely directed generative model allows for fast sampling.
Experiments show good likelihood results on MNIST. The best results are obtained when using a ResNet decoder. I wondered how much a standard VAE would be improved by using such a powerful decoder. Reporting this would make it possible to understand how much is gained from using an RBM to learn the prior.
Minor comments:
page 1:
"debut of variational auto encoder (VAE) and reparametrization trick" -> debut of variational auto encoders (VAE) and the reparametrization trick",
page 2:
"with respect to the parameter of p(x,z)" -> "with respect to the parameters of p(x,z)"
"parameters in p" -> "parameters of p"
"is multivariate Bernoulli" -> "is a multivariate Bernoulli"
"we compute them" -> "we compute it"
page 3:
"help find a good" -> "help to find a good"
page 7:
"possible apply" -> "possible to apply" |
iclr_2018_rJBiunlAW | Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the tradeoff it enables between speed and performance. We open source our implementation in PyTorch and CNTK. | The authors introduce SRU, the Simple Recurrent Unit that can be used as a substitute for LSTM or GRU cells in RNNs. SRU is much more parallel than the standard LSTM or GRU, so it trains much faster: almost as fast as a convolutional layer with properly optimized CUDA code. Authors perform experiments on numerous tasks showing that SRU performs on par with LSTMs, but the baselines for these tasks are a little problematic (see below).
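To make the parallelism claim in the summary above concrete, here is a rough, illustrative sketch of a light-recurrence cell in the spirit of what the abstract describes (the variable names, exact gating, and highway connection are my own assumptions and may differ from the paper's formulation in detail): all matrix multiplications depend only on the inputs, so they can be batched over every timestep at once, and the only sequential part is a cheap elementwise scan.

import numpy as np

def light_recurrence(X, W, W_f, b_f, W_r, b_r):
    # X: (T, d) input sequence; weight matrices are (d, d), biases (d,).
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    XW = X @ W                      # heavy matmuls, batched over all timesteps
    F = sigmoid(X @ W_f + b_f)      # forget gates depend on x_t only, not h_{t-1}
    R = sigmoid(X @ W_r + b_r)      # highway gates, also input-only
    c = np.zeros(X.shape[1])
    H = np.zeros_like(X)
    for t in range(X.shape[0]):     # the only sequential computation is elementwise
        c = F[t] * c + (1.0 - F[t]) * XW[t]
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]
    return H

In an LSTM or GRU, by contrast, the gate matmuls involve h_{t-1} and cannot be hoisted out of the time loop; that difference is what the speed claims rest on.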
On the positive side, the paper is very clear and well-written, the SRU is a superbly elegant architecture with a fair bit of originality in its structure, and the results show that it could be a significant contribution to the field, as it can probably replace LSTMs in most cases while yielding faster training. On the negative side, the authors present the results without fully referencing and acknowledging the state of the art. Some of this has been pointed out in the comments below already. As another example: Table 5, which presents results for English-German WMT translation, only compares to OpenNMT setups with maximum BLEU of about 21. But already a long time ago Wu et al. presented LSTMs reaching 25 BLEU, and the current SOTA is above 28 with training time much faster than those early models (https://arxiv.org/abs/1706.03762). While the latest are non-RNN architectures, a table like Table 5 should include them too, for a fair presentation. In conclusion: the authors seem to avoid discussing the problem that current non-RNN architectures could be both faster and yield better results on some of the studied problems. That's bad presentation of related work and should be improved in the next versions (at which point this reviewer is willing to revise the score). But in all cases, this is a significant contribution to deep learning and deserves acceptance.
Update: the revised version of the paper addresses all my concerns and the comments show new evidence of potential applications, so I'm increasing my score. |
iclr_2018_By-IifZRW | We propose a method to learn stochastic activation functions for use in probabilistic neural networks. First, we develop a framework to embed stochastic activation functions based on Gaussian processes in probabilistic neural networks. Second, we analytically derive expressions for the propagation of means and covariances in such a network, thus allowing for an efficient implementation and training without the need for sampling. Third, we show how to apply variational Bayesian inference to regularize and efficiently train this model. The resulting model can deal with uncertain inputs and implicitly provides an estimate of the confidence of its predictions. Like a conventional neural network it can scale to datasets of arbitrary size and be extended with convolutional and recurrent connections, if desired. | In Bayesian neural networks, a deterministic or parametric activation is typically used. In this work, activation functions are considered random functions with a GP prior and are inferred from data.
- Unnecessary complexity
The presentation of the paper is unnecessarily complex. It seems that the authors spend extra space creating problems and then solving them. Although some of the derivations in Section 3.2.2 are a bit involved, most of the derivations up to that point (which is already on page 6) follow preexisting literature.
For instance, eq. (3) proposes one model for p(F|X). Eq. (8) proposes a different model for p(F|X), which is an approximation to the previous one. Instead, the second model could have been proposed directly, with the appropriate citation from the literature, since it isn't new. Eq. (13) is introduced as a "solution" to a non-existent problem, because the virtual observations are drawn from the same prior as the real ones, so it is not that we are "coming up" with a convenient GP prior that turns out to produce a computationally tractable solution, we are just using the prior on the observations consistently.
In general, the authors seem to use "approximately equal" and "equal" interchangeably, which is incorrect. There should be a single definition for p(F|X). And there should be a single definition for L_pred. The expressions for L_pred given in eq. (20) (exact) and eq. (41) (approximate) do not match, and yet both are connected with an equality (or proportionality), which they shouldn't be.
Q(A) is sometimes taken to mean the true posterior (i.e., eq. (31)), sometimes a Gaussian approximation (i.e., eq (32) inside the integral), and both are used interchangeably.
- Incorrect references to the literature
Page 3: "using virtual observations (originally proposed by Quiñonero-Candela & Rasmussen (2005) for sparse approximations of GPs)"
The authors are citing as the origin of virtual observations a survey paper on the topic. Of course, that survey paper correctly attributes the origin to [1].
Page 4: "we apply the technique of variational inference Wainwright et al. (2008)".
How can variational inference be attributed to (again) a survey paper on the topic from 2008, when for instance [2] appeared in 2003?
- Correctness of the approach
Can the authors guarantee that the variational bound that they are introducing (as defined in eqs. (19) and (41)) is actually a variational bound? It seems to me that the approximations made to Q(A) to propagate the uncertainty are breaking the bounding guarantee. If it is no longer a lower bound, what is the rationale behind maximizing it?
The mathematical basis for this paper is actually introduced in [3] and a single-layer version of the current model is developed in [4]. However, in [4] the authors manage to avoid the additional Q(A) approximation that breaks the variational bound. The authors should contrast their approach with [4] and discuss if and why that additional central limit theorem application is necessary.
- No experiments
The use of a non-parametric definition for the activation function should be contrasted with the use of a parametric one. With enough data, both might produce similar results. And the parameter sharing in the parametric one might actually be beneficial. With no experiments at all showing the benefit of this proposal, this paper cannot be considered complete.
- Minor errors:
Eq. (4), for consistency, should use the identity matrix for the covariance matrix definition.
Eq. (10) uses subscript d where it should be using subscript n
Eq. (17) includes p(X^L|F^L) in the definition of Q(...), but it shouldn't. That was particularly misleading, since if we take eq. (17) to be correct (which I did at first), then p(X^L|F^L) cancels out and should not appear in eq. (20).
Eq. (23) uses Q(F|A) to mean the same as P(F|A) as far as I understand. Then why use Q?
- References
[1] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs.
[2] Beal, M.J. Variational Algorithms for Approximate Bayesian Inference.
[3] M.K. Titsias and N.D. Lawrence. Bayesian Gaussian process latent variable model.
[4] M. Lázaro-Gredilla. Bayesian warped Gaussian processes. |
iclr_2018_HydnA1WCb | We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then constructs a direction and class dependent distance metric on the embedding space, using uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report results consistent with state-of-the-art performance in 1-shot and 5-shot classification both in 5-way and 20-way regime on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance. Our experiments therefore lead us to hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications. | The paper extends the prototypical networks of Snell et al, NIPS 2017 for one shot learning. Snell et al use a soft kNN classification rule, typically used in standard metric learning work (e.g. NCA, MCML), over learned instance projections, i.e. distances are computed over the learned projections. Each class is represented by a class prototype which is given by the average of the projections of the class instances. Classification is done with soft k-NN on the class prototypes. The distance that is used is the Euclidean distance over the learned representations, i.e. (z-c)^T(z-c), where z is the projection of the x instance to be classified and c is a class prototype, computed as the average of the projections of the support instances of a given class.
The present paper extends the above work to include the learning of a Mahalanobis matrix, S, for each instance, in addition to learning its projection. Thus the classification is now based on the Mahalanobis distance: (z-c)^T S_c (z-c). On a conceptual level, since S_c should be a PSD matrix it can be written as the square of some matrix, i.e. S_c = A_c^T A_c, and then the Mahalanobis distance becomes (A_c z - A_c c)^T (A_c z - A_c c). In other words, in addition to learning a projection as is done in Snell et al., the authors now also learn a linear transformation matrix which is a function of the support points (i.e. the ones which give rise to the class prototypes). The interesting part here is that the linear projection is a function of the support points. I wonder though whether such a transformation could not be learned by the vanilla prototypical networks simply by learning a projection matrix A_z as a function of the query point z. I am not sure I see any reason why the vanilla prototypical networks cannot learn to project x directly to A_z z, and why one would need to do this indirectly through the use of the Mahalanobis distance as proposed in this paper.
On a more technical level, the properties of the learned Mahalanobis matrix, i.e. the fact that it should be PSD, are not really discussed, nor is it discussed how this can be enforced, especially in the case where S is a full matrix (even though the authors state that this method was not further explored). If S is diagonal, then the S generation methods a), b), c) at the end of section 3.1 will make sure that S is PSD; I do not think that this is the case with d), though.
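To make the PSD point concrete, here is a minimal illustrative sketch (my own code, not the authors') of vanilla prototypical classification and a diagonal-Mahalanobis variant; in the diagonal case, producing the entries of S through a positive transform such as softplus is what guarantees positive semi-definiteness, whereas a full matrix would need an explicit factorization such as S = A^T A.

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def classify(query_emb, support_embs, raw_s=None):
    # support_embs: list over classes, each (n_shot, d) support embeddings.
    # raw_s: optional list of (n_shot, d) raw outputs used to build a diagonal S.
    logits = []
    for k, emb in enumerate(support_embs):
        c = emb.mean(axis=0)                          # class prototype
        diff = query_emb - c
        if raw_s is None:
            d2 = np.sum(diff * diff)                  # vanilla: Euclidean distance
        else:
            s_diag = np.log1p(np.exp(raw_s[k])).mean(axis=0)  # softplus > 0, so diagonal S is PSD
            d2 = np.sum(s_diag * diff * diff)         # (z-c)^T S (z-c) with diagonal S
        logits.append(-d2)
    return softmax(np.array(logits))                  # soft k-NN over prototypes

(The paper presumably folds the per-instance S values into the prototype in a more careful, confidence-weighted way, as eq. 5 suggests; the sketch only illustrates why positivity of the diagonal suffices for PSD.)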
In the definition of the prototypes, the component-wise weighting (eq. 5) works when the Mahalanobis matrix is diagonal (even though the weighting should be done by the \sqrt of it); how it would work if it were a full matrix is not clear.
On the experiments side, the authors could also have experimented with miniImageNet and not only Omniglot, as is standard practice in one-shot learning papers.
I am not sure I understand figure 3 in which the authors try to see what happens if instead of learning the Mahalanobis matrix one would learn a projection that would have as many additional dimensions as free elements in the Mahalanobis matrix. I would expect to see a comparison of the vanilla prototypical nets against their method for each one of the different scenarios of the free parameters of the S matrix, something like a ratio of accuracies of the two methods in order to establish whether learning the Mahalanobis matrix brings an improvement over the prototypical nets with an equal number of output parameters. |
iclr_2018_BkM27IxR- | Learning to Optimize (Li & Malik, 2016) is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | Summary of the paper
---------------------------
The paper derives a scheme for learning an optimization algorithm for high-dimensional stochastic problems such as those involved in training shallow neural nets. The main motivation is to learn to optimize, with the goal of designing a meta-learner able to generalize across optimization problems (related to machine learning applications such as learning a neural network) that share the same properties. To this end, the paper casts the problem into the reinforcement learning framework and relies on guided policy search (GPS) to explore the space of states and actions. The states are represented by the iterates, the gradients, the objective function values, and derived statistics and features; the actions are the update directions of the parameters to be learned. To make the formulated problem tractable, some simplifications are introduced (the policies are restricted to the Gaussian distribution family, and a block-diagonal structure is imposed on the involved parameters). The mean of the stationary non-linear policy of GPS is modeled as a recurrent network with parameters to be learned. A sketch of how to learn the overall process is presented. Finally, experimental evaluations on synthetic and real datasets are conducted to show the effectiveness of the approach.
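For readers unfamiliar with the setup, the learned-optimizer inner loop being summarized has roughly the following shape (a schematic only; build-feature and policy names are placeholders of my own, and the actual state features, policy architecture, and GPS-based training are as described in the paper):

import numpy as np

def run_learned_optimizer(loss_and_grad, policy, theta0, num_steps, horizon=25):
    # The policy maps optimization-state features to an update direction;
    # its parameters are what meta-training (here, GPS) would learn.
    theta, history = theta0, []
    for _ in range(num_steps):
        loss, grad = loss_and_grad(theta)
        history.append((loss, grad))
        recent = history[-horizon:]
        features = (np.array([l for l, _ in recent]),   # recent objective values
                    np.stack([g for _, g in recent]))   # recent gradients
        theta = theta + policy(features)                # action = update direction
    return theta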
Comments
-------------
- The overall idea of the paper, learning how to optimize, is very appealing, and the experimental evaluations (comparison to standard optimizers and other meta-learners) tend to show that the proposed method is able to learn the behavior of an optimizer and to generalize to unseen problems.
- The material of the paper sometimes appears tedious to follow, mainly in sub-sections 3.4 and 3.5. It would be desirable to sum up the overall procedure in an algorithm. On page 5, the term $\omega$ appearing in the definition of the policy $\pi$ is not defined.
- The definitions of the statistics and features (state and observation features) look highly elaborate. Can the authors provide more intuition for these precise definitions? How does, for instance, changing the time range in the definition of $\Phi$ impact the performance of the meta-learner?
- Figures 3 and 4 illustrate some oscillations of the proposed approach. What guarantees do we have that the algorithm will not diverge as L2LBGDBGD does? How long should the training be to ensure good and stable convergence of the method?
- An interesting experiment to conduct and report would be to train the meta-learner on another dataset (CIFAR, for example) and to evaluate its generalization ability on the other sets, to emphasize the effectiveness of the method.
iclr_2018_SyfiiMZA- | Jointly Learning to Construct and Control Agents using Deep Reinforcement Learning
The physical design of a robot and the policy that controls its motion are inherently coupled. However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs. In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision. Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller. Throughout training, we refine the robot distribution to maximize the expected reward. This results in an assignment to the robot parameters and neural network policy that are jointly optimal. We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs. | This is a well written paper, very nice work.
It makes progress on the problem of co-optimization of the physical parameters of a design
and its control system. While it is not the first to explore this kind of direction,
the method is efficient for what it does; it shows that at least for some systems,
the physical parameters can be optimized without optimizing the controller for each
individual configuration. Instead, they require that the same controller works over an evolving
distribution of the agents. This is a simple-but-solid insight that makes it possible
to make real progress on a difficult problem.
Pros: simple idea with impact; the problem being tackled is a difficult one
Cons: not many; real systems have constraints between physical dimensions and the forces/torques they can exert
Some additional related work to consider citing. The resulting solutions are not necessarily natural configurations,
given the use of torques instead of musculotendon-modeling. But the current system is a great start.
The introduction could also note that over an evolutionary time-frame, the body and
control system (reflexes, muscle capabilities, etc.) presumably co-evolved.
The following papers all optimize over both the motion control and the physical configuration of the agents.
They all use derivative free optimization, and thus do not require detailed supervision or precise models
of the dynamics.
- Geijtenbeek, T., van de Panne, M., & van der Stappen, A. F. (2013). Flexible muscle-based locomotion
for bipedal creatures. ACM Transactions on Graphics (TOG), 32(6), 206.
(muscle routing parameters, including insertion and attachment points) are optimized along with the control).
- Sims, K. (1994, July). Evolving virtual creatures. In Proceedings of the 21st annual conference on
Computer graphics and interactive techniques (pp. 15-22). ACM.
(a combination of morphology, and control are co-optimized)
- Agrawal, S., Shen, S., & van de Panne, M. (2014). Diverse Motions and Character Shapes for Simulated
Skills. IEEE transactions on visualization and computer graphics, 20(10), 1345-1355.
(diversity in control and diversity in body morphology are explored for fixed tasks)
re: heavier feet requiring stronger ankles
This comment is worth revisiting. Stronger ankles are more generally correlated with
a heavier body rather than heavy feet, given that a key role of the ankle is to be able
to provide a "push" to the body at the end of a stride, and perhaps less for "lifting the foot".
I am surprised that the optimization does not converge to more degenerate solutions
given that the capability to generate forces and torques is independent of the actual
link masses, whereas in nature, larger muscles (and therefore larger masses) would correlate
with the ability to generate larger forces and torques. The work of Sims takes these kinds of
constraints loosely into account (see end of sec 3.3).
It would be interesting to compare to a baseline where the control systems are allowed to adapt to the individual design parameters.
I suspect that the reward function that penalizes torques in a uniform fashion across all joints would
favor body configurations that more evenly distribute the motion effort across all joints, in an effort
to avoid large torques.
Are the four mixture components over the robot parameters updated independently of each other
when the parameter-exploring policy gradients updates are applied? It would be interesting
to know a bit more about how the mean and variances of these modes behave over time during
the optimization, i.e., do multiple modes end up converging to the same mean? What does the
evolution of the variances look like for the various modes? |
iclr_2018_ByzvHagA- | Deep neural networks have been tremendously successful in a number of tasks. One of the main reasons for this is their capability to automatically learn representations of data in levels of abstraction, increasingly disentangling the data as the internal transformations are applied. In this paper we propose a novel regularization method that penalize covariance between dimensions of the hidden layers in a network, something that benefits the disentanglement. This makes the network learn nonlinear representations that are linearly uncorrelated, yet allows the model to obtain good results on a number of tasks, as demonstrated by our experimental evaluation. The proposed technique can be used to find the dimensionality of the underlying data, because it effectively disables dimensions that aren't needed. Our approach is simple and computationally cheap, as it can be applied as a regularizer to any gradient-based learning model. | This paper presents a regularization mechanism which penalizes covariance between all dimensions in the latent representation of a neural network. This penalty is meant to disentangle the latent representation by removing shared covariance between each dimension.
While the proposed penalty is described as a novel contribution, there are multiple instances of previous work which use the same type of penalty (Cheung et al. 2014, Cogswell et al. 2016). Like this work, Cheung et al. 2014 propose the XCov penalty, which penalizes cross-covariance to disentangle subsets of dimensions in the latent representation of autoencoder models. Cogswell et al. 2016 also propose a penalty (DeCov) similar to this work's for reducing overfitting in supervised learning.
The novel contribution of the regularizer proposed in this work is that it also penalizes the variance of individual dimensions along with the cross-covariance. Intuitively, this should lead to dimensionality reduction, as the model will discard variance in dimensions which are unnecessary for reconstruction. But given the similarity to previous work, the authors need to quantitatively evaluate the value of additionally penalizing the variance of each dimension as compared with earlier work. Cogswell et al. 2016 explicitly remove these terms from their regularizer to prevent the dynamic range of the activations from being unnecessarily rescaled. It would be helpful to understand how this approach avoids this issue - i.e., if you penalize all the variance terms then you could just be arbitrarily rescaling the activities, so what prevents this trivial solution?
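To make the comparison point concrete, here is a small illustrative sketch of the two flavours of penalty (my own code): a DeCov-style term that drops the diagonal versus a variant that also penalizes per-dimension variances. The question above is what stops the latter from being minimized simply by shrinking the overall scale of the activations.

import numpy as np

def covariance_penalty(H, penalize_diagonal):
    # H: (batch, hidden) activations of one layer.
    Hc = H - H.mean(axis=0, keepdims=True)
    C = Hc.T @ Hc / H.shape[0]                    # hidden x hidden covariance matrix
    if penalize_diagonal:
        return 0.5 * np.sum(C ** 2)               # covariances AND variances penalized
    return 0.5 * (np.sum(C ** 2) - np.sum(np.diag(C) ** 2))   # DeCov: off-diagonal only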
There doesn't appear to be a definition of the L1 penalty this paper compares against, and it's unclear why this is a reasonable baseline. The evaluation metrics this work uses (MAPC, CVR, TdV, UD) need to be justified more, given the absence of their use in previous work. While they evaluate their method on non-toy datasets such as CIFAR, they do not show what actual utility the proposed regularizer provides on such a dataset beyond having no regularization at all. Again, the utility of the evaluation metrics proposed in this work is unclear.
The toy examples are kind of interesting but it would be more compelling if the dimensionality reduction aspect extended to real datasets.
> Our method has no penalty on the performance on tasks evaluated in the experiments, while it does disentangle the data
This needs to be expanded in the results as all the results presented appear to show Mean Squared Error increasing when increasing the weight of the regularization penalty. |
iclr_2018_SkRsFSRpb | Workshop track -ICLR 2018 GEOSEQ2SEQ: INFORMATION GEOMETRIC SEQUENCE-TO-SEQUENCE NETWORKS
The Fisher information metric is an important foundation of information geometry, wherein it allows us to approximate the local geometry of a probability distribution. Recurrent neural networks such as the Sequence-to-Sequence (Seq2Seq) networks that have lately been used to yield state-of-the-art performance on speech translation or image captioning have so far ignored the geometry of the latent embedding, that they iteratively learn. We propose the information geometric Seq2Seq (GeoSeq2Seq) network which abridges the gap between deep recurrent neural networks and information geometry. Specifically, the latent embedding offered by a recurrent network is encoded as a Fisher kernel of a parametric Gaussian Mixture Model, a formalism common in computer vision. We utilise such a network to predict the shortest routes between two nodes of a graph by learning the adjacency matrix using the GeoSeq2Seq formalism; our results show that for such a problem the probabilistic representation of the latent embedding supersedes the non-probabilistic embedding by 10-15%. | ==== UPDATE AFTER REVIEWER RESPONSE
I apologize to the authors for my late response.
I appreciate the reviewer responses, and they are helpful on a number of
fronts. Still, there are several problematic points.
First, as the authors anticipated, I question whether the geometric encoding
operations can be included in an end-to-end learning setting. I can imagine
several arguments why an end-to-end algorithm may not be preferred, but the
authors do not offer any such arguments.
Second, I am still interested in more discussion of the empirical investigation
into the behavior of the algorithm. For example, "Shortest" and "Successful"
in Table 1 still do not really capture how close "successful but not shortest"
paths are to optimal.
The authors have addressed a number of my concerns, but a few
remain outstanding. Also, other reviewers are much more familiar
with the work than myself. I defer to their judgement after the updates.
==== Original review
In this work, the authors propose an approach to adapt latent representations to account for local geometry in the embedding space. They show modest improvement compared to reasonable baselines.
While I find the idea of incorporating information geometry into embeddings very promising, the current work omits a number of key details that would allow the reader to draw deeper connections between the two (specific comments below). Additionally, the experiments are not particularly insightful.
I believe a substantially revised version of the paper could address most of my concerns; still, I find the current version too preliminary for publication.
=== Major comments / questions
The transformation from context vectors into Fisher vectors is not clear. Presumably, shortest paths in the training data have different lengths, and thus produce different numbers of context vectors. Does the GMM treat all of these independently (regardless of sample)? or is a separate GMM somehow trained for each training sequence? The same question applies to the VLAD-based approach.
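For reference, in the standard image-retrieval use of Fisher vectors, a variable-size set of descriptors is reduced to a fixed-length vector by averaging per-descriptor gradients under a single GMM shared across all samples; whether the paper fits one GMM over all context vectors or somehow treats sequences separately is exactly what needs clarifying. A rough sketch of that standard construction (mean-gradient part only, diagonal covariances; this is my own illustration and not necessarily the paper's procedure):

import numpy as np

def fisher_mean_encoding(X, weights, means, sigmas):
    # X: (N, d) context vectors, N varies per sequence.
    # weights: (K,), means/sigmas: (K, d) parameters of a shared diagonal GMM.
    # Returns a vector of fixed length K*d regardless of N.
    K, _ = means.shape
    log_p = np.stack([
        np.log(weights[k])
        - 0.5 * np.sum(((X - means[k]) / sigmas[k]) ** 2
                       + np.log(2 * np.pi * sigmas[k] ** 2), axis=1)
        for k in range(K)], axis=1)
    gamma = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)          # responsibilities, (N, K)
    parts = [(gamma[:, k:k + 1] * (X - means[k]) / sigmas[k]).sum(axis=0)
             / (X.shape[0] * np.sqrt(weights[k])) for k in range(K)]
    return np.concatenate(parts)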
In a related vein, it is not clear to what extent this method depends on the sequential nature of the considered networks. In particular, could a similar approach be applied to latent space embeddings from non-sequential models?
It is not clear whether the geometric encoding operations are differentiable; more generally, the entire training algorithm is not clear.
The choice to limit the road network graph feels quite arbitrary. Why was this done?
Deep models are known to be sensitive to the choice of hyperparameters. How were these chosen? was a validation set used in addtion to the training and testing sets?
The target for training is very unclear. Throughout Sections 1 and 2, the aim of the paper appears to be to learn shortest paths; however, Section 3 states that the “network is capable of learning the adjacency matrix”, and the caption for Figure 2 suggests that “[t]he adjacency matrix is iteratively learnt (sic)....” However, calculating such training error for back-propagation/optimization would seem to rely on *already knowing* the adjacency matrix.
The performed experiments are minimal and offer very little insight into what is learned. For example, does the model predict “short” shortest paths better than longer ones? what do the “valid but not optimal” paths look like? are they close to optimal? what do the invalid paths look like? does it seem to learn parts of the road network better than others? sparse parts of the network? dense parts?
=== Minor comments / questions
The term “context vector” is not explicitly defined or described. Based on the second paragraph in the “Fisher encoding” section, I assume these are the latent states for each element in the shortest path sequences.
Is the graph directed? weighted? by Euclidean distance? (Roads are not necessarily straight, so the Euclidean distance from intersection to intersection may not accurately reflect the distance in some cases.)
Are the nodes sampled uniformly at random for creating the training data?
Is the choice to use a diagonal covariance matrix (as opposed to some more flexible one) a computational choice? or does the theory justify this choice?
Roughly, what are the computational resources required for training?
The discussion should explain “condition number” in more detail.
Do the “more precise” results for the Fisher encoding somehow rely on an infinite mixture? or, how much does using only a single component in the GMM affect the results?
It is not clear what “features” and “dictionary elements” are in the context of VLAD.
What value of k was used for K-means clustering for VLAD?
It is not possible to assess the statistical significance of the presented experimental results. More datasets (or different parts of the road network) or cross-validation should be used to provide an indication of the variance of each method.
=== Typos, etc.
The paper includes a number of run-on sentences and other small grammatical mistakes. I have included some below.
The first paragraph in Section 2.2 in particular needs to be edited.
The references are inconsistently and improperly (e.g., “Turing” should be capitalized) formatted.
It seems that $q_{ik} \in \{0,1\}$ for the hard assignments in clustering.
iclr_2018_S1ANxQW0b | Published as a conference paper at ICLR 2018 MAXIMUM A POSTERIORI POLICY OPTIMISATION
We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings. | The paper presents a new algorithm for inference-based reinforcement learning for deep RL. The algorithm decomposes the policy update in two steps, an E and an M-step. In the E-step, the algorithm estimates a variational distribution q which is subsequentially used for the M-step to obtain a new policy. Two versions of the algorithm are presented, using a parametric or a non-parametric (sample-based) distribution for q. The algorithm is used in combination with the retrace algorithm to estimate the q-function, which is also needed in the policy update.
This is a well written paper presenting an interesting algorithm. The algorithm is similar to other inference-based RL algorithms, but is the first application of inference-based RL to deep reinforcement learning. The results look very promising and define a new state of the art for deep reinforcement learning in continuous control, which is a very active topic right now. Hence, I think the paper should be accepted.
I do have a few comments / corrections / questions about the paper:
- There are several approaches that already use a combination of the KL constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution; see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust-region justification (see the sketch of the shared closed form after this list). Since in the non-parametric case the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region on the expected reward?
- It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. First, alpha changes every iteration of the algorithm, while the objective assumes that alpha is constant. This means that we change the objective all the time, which is theoretically a bit odd. Moreover, the presented algorithm also changes the prior all the time in the M-step (in order to introduce the 2nd trust region). Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions simply out of the motivation that we need trust regions in policy search? Then the objective would be clearly defined.
- I did not understand whether the additional "one-step KL regularisation" is obtained from the lower bound or simply added as extra regularisation. Could you explain?
- The algorithm has now 2 KL constraints, for E and M step. Is the epsilon for both the same or can we achieve better performance by using different epsilons?
- I think the following experiments would be very informative:
- MPO without trust region in M-step
- MPO without retrace algorithm for getting the Q-value
- test different epsilons for E and M step |
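For concreteness, the REPS-style connection raised in the first comment above is the closed-form solution of the non-parametric E-step (my notation, not necessarily the paper's): maximizing $\mathbb{E}_{q}[Q(s,a)]$ subject to $\mathrm{KL}(q \,\|\, \pi_{\text{old}}) \le \epsilon$ yields
$$ q(a|s) \propto \pi_{\text{old}}(a|s)\,\exp\!\left(Q(s,a)/\eta\right), $$
where $\eta$ is the Lagrange multiplier of the KL constraint; the M-step then fits the parametric policy by weighted maximum likelihood (an M-projection onto $q$), which is the same two-step structure used by HiREPS, non-parametric REPS and AC-REPS.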
iclr_2018_HkAClQgA- | A DEEP REINFORCED MODEL FOR ABSTRACTIVE SUMMARIZATION
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intraattention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" -they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries. | The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is imposed during decoding to avoid trigram repetition. Most of the modelling ideas already exists, but this paper show how they can be applied as a strong summarization model.
The approach obtains strong results on the CNN/Daily Mail and NYT datasets. Results show that intra-attention improves performance for only one of the datasets. RL results are reported with only the best-performing attention setup for each dataset. My concern with that is that the authors might be using the test set for model selection; it is not a priori clear that the setup that works better for ML should also be better for RL, especially as it is not the same across datasets. So I suggest that results for RL should be reported with and without intra-attention on both datasets, at least on the validation set.
It is shown that the intra-decoder attention improves performance on longer sentences. It would be interesting to see more analysis of this, especially analyzing what the mechanism is attending to, as its interpretation is less clear than for intra-temporal attention. Further ablations, such as the effect of the trigram repetition constraint, would also help to analyse the contribution of different modelling choices to the performance.
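As one concrete handle for such an ablation, the repetition constraint can be implemented as a hard mask during beam search, along these lines (an illustrative sketch, not the authors' implementation):

def blocked_next_tokens(prefix, vocab):
    # prefix: list of already-generated token ids for one hypothesis.
    # Returns the tokens whose addition would repeat an existing trigram.
    if len(prefix) < 2:
        return set()
    seen = {tuple(prefix[i:i + 3]) for i in range(len(prefix) - 2)}
    last_two = tuple(prefix[-2:])
    return {w for w in vocab if last_two + (w,) in seen}

# During beam search, the scores of blocked_next_tokens(hypothesis, vocab) are set
# to -inf before the next beam candidates are selected, so no trigram ever repeats.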
For the mixed decoding objective, how is the mixing weight chosen and what is its effect on performance? If it is purely a scaling factor, how is the scale quantified? It is claimed that readability correlates with perplexity, so it would be interesting to see perplexity results for the models. The lack of correlation between automatic and human evaluation raises interesting questions about the evaluation of abstractive summarization that should be investigated further in future work.
This is a strong paper that presents a significant improvement in document summarization. |
iclr_2018_B18WgG-CZ | Published as a conference paper at ICLR 2018 LEARNING GENERAL PURPOSE DISTRIBUTED SEN- TENCE REPRESENTATIONS VIA LARGE SCALE MULTI- TASK LEARNING
A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations. | ---- updates: ----
I had a ton of comments and concerns, and I think the authors did an admirable job in addressing them. I think the paper represents a solid empirical contribution to this area and is worth publishing in ICLR.
---- original review follows: ----
This paper is about learning sentence embeddings by combining a bunch of training signals: predicting the next & previous sentences (skip-thought), predicting the sentence's translation, classifying entailment relationships between two sentences, and predicting the constituent parse of a sentence. This is a simple idea that combines a bunch of things from prior work into one framework and yields strong results, outperforming most prior work on most tasks.
I think this paper is impressive in how it scales up training to use so many tasks and such large training sets for each task. That and its strong experimental results make it worthy of publication. It's not very surprising that adding more tasks and data improves performance on average across downstream tasks, but it is nice to see the experimental results in detail. While many people would think of this idea, few would have the resources and expertise necessary to do it justice. I also like how the authors move beyond the standard sentence tasks to evaluate also on the Quora question duplicate task with different amounts of training data and also consider the sentence characteristic / syntactic property tasks. It would be great if the authors could release their pretrained sentence representation model so that other researchers could use it.
I do have some nitpicks here and there with the presentation and exposition, and I am concerned that at times the paper appears to be minimizing its weaknesses, but I think these are things that can be addressed in the next revision. I understand that sometimes it's tempting to minimize one's weaknesses in order to get a paper accepted because the reviewers may not understand the area very well and may get hung up on the wrong things. I understand the area well and so all the feedback I offer below comes from a place of desiring this paper's publication while also desiring it to be as accurate and helpful for the community as possible.
Below I'll discuss my concerns with the experiments and description of the results.
Regarding the results in Table 2:
The results in Table 2 seem a little bit unstable, as it is unclear which setting to use for the classification tasks; maybe it depends on the kind of classification being performed. One model seems best for the sentiment tasks ("+2L +STP") while other models seem best for SUBJ and MPQA. Adding parsing as a training task hurts performance on the sentence classification tasks while helping performance on the semantic tasks, as the authors note. It is unclear which is the best general model. In particular, when others write papers comparing to the results in this paper, which setting should they compare to? It would be nice if the authors could discuss this.
The results reported for the CNN-LSTM of Gan et al. do not exactly match those of any single row from Gan et al, either v1 or v2 on arxiv or the published EMNLP version. How were those specific numbers selected?
The caption of Table 2 states "All results except ours are taken from Conneau et al. (2017)." However, Conneau et al (neither the latest arxiv version nor the published EMNLP version) does not include many of the results in the table, such as CNN-LSTM and DiscSent mentioned in the following sentence in the caption. Did the authors replicate the results of those methods themselves, or report them from other papers?
What does bold and underlining indicate in Table 2? I couldn't find this explained anywhere.
At the bottom of Table 2, in the section with approaches trained from scratch on these tasks, I'd suggest including the 89.7 SST result of Munkhdalai and Yu (2017) and the 96.1 TREC result of Zhou et al. (2016) (as well as potentially other results from Zhou et al, since they report results on others of these datasets). The reason this is important is because readers may observe that the paper's new method achieves higher accuracies on SST and TREC than all other reported results and mistakenly think that the new method is SOTA on those tasks. I'd also suggest adding the results from Radford et al. (2017) who report 86.9 on MR and 91.4 on CR. For other results on these datasets, including stronger results in non-fixed-dimensional-sentence-embedding transfer settings, see results and references in McCann et al. (2017). While the methods presented in this paper are better than prior work in learning general purpose, fixed-dimensional sentence embeddings, they still do not produce state-of-the-art results on that many of these tasks, if any. I think this is important to note.
For all tasks for which there is additional training, there's a confound due to the dimensionality of the sentence embeddings across papers. Using higher-dimensional sentence embeddings leads to more parameters in the linear model being trained on the task data. So it is unclear if the increase in hidden units in rows with "+L" is improving the results because of providing more weights for the linear model or whether it is learning a better sentence representation.
The main sentence embedding results are in Table 2, and use the SentEval framework. However, not all tasks are included. The STS Benchmark results are included, which use an additional layer trained on the STS Benchmark training data just like the SICK tasks. But the other STS results, which use cosine similarity on the embedding space directly without any retraining, are only included in the appendix (in Table 7). The new approach does not do very well on those unsupervised tasks. On two years of data it is better than InferSent and on two years it is worse. Both are always worse than the charagram-phrase results of Wieting et al (2016a), which has 66.1 on 2012, 57.2 on 2013, 74.7 on 2014, and 76.1 on 2015. Charagram-phrase trains on automatically-generated paraphrase phrase pairs, but these are generated automatically from parallel text, the same type of resource used in the "+Fr" and "+De" models proposed in this submission, so I think it should be considered as a comparable model.
The results in the bottom section of Table 7, reported from Arora et al (2016), were in turn copied from Wieting et al (2016b), so I think it would make sense to also cite Wieting et al (2016b) if those results are to be included. Also, it doesn't seem appropriate to designate those as "Supervised Approaches" as they only require parallel text, which is a subset of the resources required by the new model.
There are some other details in the appendix that I find concerning:
Section 8 describes how there is some task-specific tuning of which function to compute on the encoder to produce the sentence representation for the task. This means that part of the improvement over prior work (especially skip-thought and InferSent) is likely due to this additional tuning. So I suppose to use these sentence representations in other tasks, this same kind of tuning would have to be done on a validation set for each task? Doesn't that slightly weaken the point about having "general purpose" sentence representations?
Section 9 provides details about how the representations are created for different training settings. I am confused by the language here. For example, the first setting ("+STN +Fr +De") is described as "A concatenation of the representations trained on these tasks with a unidirectional and bidirectional GRU with 1500 hidden units each." I'm not able to parse this. I think the authors mean "The sentence representation h_x is the concatenation of the final hidden vectors from a forward GRU (with 1500-dimensional hidden vectors) and a bidirectional GRU (also with 1500-dimensional hidden vectors)". Is this correct?
Also in Sec 9: I found it surprising how each setting that adds a training task uses the concatenation of a representation with that task and one without that task. What is the motivation for doing this? This seems to me to be an important point that should be discussed in Section 3 or 4. And when doing this, are the concatenated representations always trained jointly from scratch with the special task only updating a subset of the parameters, or do you use the fixed pretrained sentence representation from the previous row and just concatenate it with the new one? To be more concrete, if I want to get the encoder for the second setting ("+STN +Fr +De +NLI"), do I have to train two times or can I just train once? That is, the train-once setting would correspond to only updating the NLI-specific representation parameters when training on NLI data; on other data, all parameters would be updated. The train-twice setting would first train a representation on "+STN +Fr +De", then set it aside, then train a separate representation on "+STN +Fr +De +NLI", then finally concatenate the two representations as my sentence representation. Do you use train-once or train-twice?
Regarding the results in Table 3:
What do bold and underline indicate?
What are the embeddings corresponding to the row labeled "Multilingual"?
In the caption, I can't find footnote 4.
The caption includes the sentence "our embeddings have 1040 pairs out of 2034 for which atleast one of the words is OOV, so a comparison with other embeddings isn't fair on RW." How were those pairs handled? If they were excluded, then I think the authors should not report results on RW. I suspect that most of the embeddings included in the table also have many OOVs in the RW dataset but still compute results on it using either an unknown word embedding or some baseline similarity of zero for pairs with an OOV. I think the authors should find some way (like one of those mentioned, or some other way) of computing similarity of those pairs with OOVs. It doesn't make much sense to me to omit pairs with OOVs.
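To make the suggested handling concrete, a minimal sketch (my own illustrative code, not from the paper; `emb` is a word-to-vector dict and `unk` an optional unknown-word vector) would score every pair with a back-off instead of dropping it:

import numpy as np

def pair_similarity(w1, w2, emb, unk=None):
    # Cosine similarity with a simple OOV back-off: use an UNK vector if one is
    # available, otherwise fall back to a similarity of 0.0 rather than dropping
    # the pair from the evaluation.
    v1, v2 = emb.get(w1, unk), emb.get(w2, unk)
    if v1 is None or v2 is None:
        return 0.0
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8))

The rank correlation could then be computed over all 2034 pairs, OOV pairs included, which makes the comparison to other embeddings fair.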
There are much better embeddings on SimLex than the embeddings whose results are reported in the table. Wieting et al. (2016a) report SimLex correlation of 0.706 and Mrkšić et al. (2017) report 0.751. I'd suggest adding the results of some stronger embeddings to better contextualize the embeddings obtained by the new method. Some readers may mistakenly think that the embeddings are SOTA on SimLex since no stronger results are provided in the table.
The points below are more minor/specific:
Sec. 2:
In Sec. 2, the paper discusses its focus on fixed-length sentence representations to distinguish itself from other work that produces sentence representations that are not fixed-length. I feel the motivation for this is lacking. Why should we prefer a fixed-length representation of a sentence? For certain downstream applications, it might actually be easier for practitioners to use a representation that provides a representation for each position in a sentence (Melamud et al., 2016; Peters et al., 2017; McCann et al., 2017) rather than an opaque sentence representation. Some might argue that since sentences have different lengths, it would be appropriate for a sentence representation to have a length proportional to the length of the sentence. I would suggest adding some motivation for the focus on fixed-length representations.
Sec. 4.1:
"We take a simpler approach and pick a new task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks"
These two sentences seem contradictory. Maybe in the first sentence "pick a new task" should be changed to "pick a new sequence-to-sequence task"?
Sec. 5.1:
typo: "updating the parameters our sentence" --> "updating the parameters of our sentence"
Sec. 5.2:
typo in Table 4 caption: "and The" --> ". The"
typo: "parsing improvements performance" --> "parsing improves performance"
In general, there are many missing citations for the tasks, datasets, and prior work on them. I understand that the authors are pasting in numbers from many places and just providing pointers to papers that provide more citation info, but I think this can lead to mis-attribution of methods. I would suggest including citations for all datasets/tasks and methods whose results are being reported.
References:
McCann, Bryan, James Bradbury, Caiming Xiong, and Richard Socher. "Learned in translation: Contextualized word vectors." CoRR 2017.
Melamud, Oren, Jacob Goldberger, and Ido Dagan. "context2vec: Learning Generic Context Embedding with Bidirectional LSTM." CoNLL 2016.
Mrkšić, Nikola, Ivan Vulić, Diarmuid Ó. Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. "Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints." TACL 2017.
Munkhdalai, Tsendsuren, and Hong Yu. "Neural semantic encoders." EACL 2017.
Pagliardini, Matteo, Prakhar Gupta, and Martin Jaggi. "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features." arXiv preprint arXiv:1703.02507 (2017).
Peters, Matthew E., Waleed Ammar, Chandra Bhagavatula, and Russell Power. "Semi-supervised sequence tagging with bidirectional language models." ACL 2017.
Radford, Alec, Rafal Jozefowicz, and Ilya Sutskever. "Learning to generate reviews and discovering sentiment." arXiv preprint arXiv:1704.01444 2017.
Wieting, John, Mohit Bansal, Kevin Gimpel, and Karen Livescu. "Charagram: Embedding words and sentences via character n-grams." EMNLP 2016a.
Wieting, John, Mohit Bansal, Kevin Gimpel, and Karen Livescu. "Towards universal paraphrastic sentence embeddings." ICLR 2016b.
Zhou, Peng, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. "Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling." COLING 2016. |
iclr_2018_HJ4IhxZAb | Active learning (AL) aims to enable training high performance classifiers with low annotation cost by predicting which subset of unlabelled instances would be most beneficial to label. The importance of AL has motivated extensive research, proposing a wide variety of manually designed AL algorithms with diverse theoretical and intuitive motivations. In contrast to this body of research, we propose to treat active learning algorithm design as a meta-learning problem and learn the best criterion from data. We model an active learning algorithm as a deep neural network that inputs the base learner state and the unlabelled point set and predicts the best point to annotate next. Training this active query policy network with reinforcement learning, produces the best non-myopic policy for a given dataset. The key challenge in achieving a general solution to AL then becomes that of learner generalisation, particularly across heterogeneous datasets. We propose a multi-task dataset-embedding approach that allows dataset-agnostic active learners to be trained. Our evaluation shows that AL algorithms trained in this way can directly generalise across diverse problems. | The approach solves an important problem as getting labelled data is hard. The focus is on the key aspect, which is generalisation across heteregeneous data. The novel idea is the dataset embedding so that their RL policy can be trained to work across diverse datasets.
Pros:
1. The approach performs well against all the baselines, and also achieves good cross-task generalisation in the tasks they evaluated on.
2. In particular, they also evaluated on test datasets with fairly different statistics from the training datasets, which isn't very common in most meta-learning papers today, so it's encouraging that the method works in that regime.
Cons:
1. The embedding strategy, especially the representative and discriminative histograms, is complicated. It is unclear if the strategy is general enough to work on harder problems / larger datasets, or with higher dimensional data like images. More evidence in the paper for why it would work on harder problems would be great.
2. The policy network has to output a probability for each datapoint in the dataset U, which could be fairly large, so the method is computationally much more expensive than random sampling. A section devoted to showing what practical problems could potentially be solved by this method would be useful.
3. It is unclear to me whether the results in Tables 3 and 4 are achieved by retraining from scratch with an RBF SVM, or by freezing the policy network trained on a linear SVM and directly evaluating it with an RBF SVM base learner.
Significance/Conclusion: The idea of meta-learning or learning to learn is fairly common now. While they do show good performance, it’s unclear if the specific embedding strategy suggested in this paper will generalise to harder tasks.
Comments: There are lots of typos; please proofread to improve the paper.
Revision: I thank the authors for the updates and addressing some of my concerns. I agree the computational budget makes sense for cross data transfer, however the embedding strategy and lack of larger experiments makes it unclear if it'll generalise to harder tasks. I update my review to 6. |
iclr_2018_S1HlA-ZAZ | THE KANERVA MACHINE: A GENERATIVE DISTRIBUTED MEMORY
We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train. | The generative model comprises a real-valued matrix M (with a multivariate normal prior) that serves
as the memory for an episode (an unordered set of datapoints). For each datapoint a marginally independent
latent variable y_t is used to index into M and realize a conditional density
of another latent variable z. z_t is used to generate the data.
The proposal of learning with a probabilistic memory is interesting and the framework proposed is elegant and cleanly explained. The model is evaluated on the following tasks:
* Qualitative results on denoising and one-shot generation using the Omniglot dataset.
* Qualitative results on sampling from the model using the CIFAR dataset.
* Likelihood estimation on the Omniglot dataset
Questions and concerns:
The model appears novel and interesting; the experiments, however, are lacking in that they do not compare against any other recently proposed memory-augmented deep generative models [Bornschein et al.] and [Li et al.] (https://arxiv.org/pdf/1602.07416.pdf). At the very minimum, the paper should include a discussion and a comparison with the latter. Doing so will help better understand what is gained from retaining a probabilistic form of memory versus a deterministic memory indexed with attention as in [Li et al.].
How does the model perform as a function of varying T (size of episodes) during training? It would be interesting to see how well the model performs in the limiting case of T=1.
What is the task being solved in Section 4.4 by the DNC and the Kanerva machine? Please state this in the main paper.
Training and Evaluation: There is a mismatch between the training and evaluation procedures, the implications of which I don't fully understand yet. The text states that the model was trained with each observation in an episode being a randomly sampled datapoint. This corresponds to a generative process where (1) a memory is randomly drawn, and (2) each observation in the episode is an independent draw from the memory-conditioned decoder. During training, points in an episode are randomly selected. At test time (if I understand correctly; please correct me if not), the model is evaluated by having multiple copies of the same test point within an episode. Is that correct? If so, doesn't that correspond to evaluating the model under a different generative assumption? Why is this OK?
Likelihood evaluation: Could you expand on how the ELBO of 68.3 is computed under the model for a single test image in the Omniglot dataset? The text says that the likelihood of each data-point was divided by T (the length of the episode considered). This seems at odds with how models such as DRAW evaluate the likelihood -- once at the end of the generative drawing process. What is the per-pixel likelihood obtained on the CIFAR dataset, and what is the likelihood for a model with T=1 (for Omniglot/CIFAR)?
Using Labels: Following up on the previous point, what happens if labelled information from Omniglot or CIFAR is used to define points within an episode during the training procedure? Does this help or hurt performance?
For the denoising comparison, how do the results compare to those obtained if you simulate a Markov Chain (sample latent state conditioned on noisy image, sample latent state, sample denoised observation, repeat using denoised observation) using a VAE? |
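To spell out the baseline I have in mind, here is a minimal sketch (my own code, not from the paper; `encoder` and `decoder` are assumed to be the recognition and generative networks of a separately trained VAE, with `encoder` returning the mean and log-variance of q(z|x) and `decoder` returning the mean of p(x|z)):

import torch

def vae_denoise(x_noisy, encoder, decoder, steps=5):
    # Iterative denoising with a plain VAE: encode the current image, sample a
    # latent code from q(z|x), decode, and feed the reconstruction back in.
    x = x_noisy
    for _ in range(steps):
        mu, logvar = encoder(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x = decoder(z)   # mean of p(x|z) used as the next "observation"
    return x

Comparing the memory model against a few steps of such a chain would isolate how much of the denoising is really due to the adaptive memory.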
iclr_2018_SJJQVZW0b | HIERARCHICAL AND INTERPRETABLE SKILL ACQUISITION IN MULTI-TASK REINFORCEMENT LEARNING
Learning policies for complex tasks that require multiple different skills is a major challenge in reinforcement learning (RL). It is also a requirement for its deployment in real-world scenarios. This paper proposes a novel framework for efficient multi-task reinforcement learning. Our framework trains agents to employ hierarchical policies that decide when to use a previously learned policy and when to learn a new skill. This enables agents to continually acquire new skills during different stages of training. Each learned task corresponds to a human language description. Because agents can only access previously learned skills through these descriptions, the agent can always provide a human-interpretable description of its choices. In order to help the agent learn the complex temporal dependencies necessary for the hierarchical policy, we provide it with a stochastic temporal grammar that modulates when to rely on previously learned skills and when to execute new skills. We validate our approach on Minecraft games designed to explicitly test the ability to reuse previously learned skills while simultaneously learning new skills. | Summary:
This paper proposes an approach to learning hierarchical policies in a lifelong learning context. This is achieved by stacking policies - an explicit "switch" policy is then used to decide whether to execute a primitive action or call the policy of the layer below it. Additionally, each task is encoded in a human-readable template, which provides interpretability.
Review:
Overall, I found the paper to be generally well-written and the core idea to be interesting. My main concern is the lack of empirical comparison against existing methods (no such results are provided), and while the approach does provide interpretability, I am not sure that other approaches (e.g. Tessler et al. 2017) could not be slightly modified to do the same. I think the paper could also benefit from at least one more experiment in a different, harder domain.
I have a few questions and comments about the paper:
The first paragraph claims "This precludes transfer of previously learned simple skills to a new policy defined over a space with differing states or actions". I do not see how this approach avoids suffering from the same problem? Additionally, approaches such as agent-space options [Konidaris and Barto. Building Portable Options: Skill Transfer in Reinforcement Learning, IJCAI 2007] get around at least the state part.
I do not quite follow what is meant by "a global policy is assumed to be executable by only using local policies over specific options". It sounds like this is saying that the inter-option policy can pick only options, and not primitive actions, which is obviously untrue. Can you clarify this sentence?
In section 3.1, it may be best to mention that the policy accepts both a state and task and outputs an action. This is stated shortly afterwards, but it was confusing because section 3.1 says that there is a single policy for a set of tasks, and so obviously a normal state-action policy would not work here.
At the bottom of page 6, are there any drawbacks to the instruction policy being defined as two independent distributions? What if not all skills are applicable to all items?
In section 5, what does the "without grammar" agent entail? How is the sampling from the switch and instruction policies done in this case?
While the results in Figures 4 and 5 show improvement over a flat policy, as well as the value of using the grammar, I am *very* surprised there is no comparison to existing methods. For example, Tessler's H-DRLN seems like one obvious comparison here, since it learns when to execute a primitive action and when to reuse a skill.
There were also some typos/small issues (I may have missed some):
pg 3: "In addition, previous work usually useS..."
pg 3. "we encode a human instruction to LEARN A..." (?)
pg 4. "...with A stochastic temporal grammar..."
pg 4. "... described above through A/THE modified..."
pg 6. "...TOTALLING six colors..."
There are some issues with the references (capital letters missing e.g. Minecraft)
It also would be preferable if the figures could appear after they are referenced in the text, since it is quite confusing otherwise. For example, Figure 2 contains V(s,g), but that is only defined much later on. Also, I struggled to make out the yellow box in Figure 2, and the positioning of Figure 3 on the side is not ideal either. |
iclr_2018_BJDEbngCZ | Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities. | The paper studies the global convergence for policy gradient methods for linear control problems.
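For readers outside control, the setting is the standard infinite-horizon LQR problem (standard notation, not copied from the paper):

J(K) = \mathbb{E}_{x_0}\Big[\sum_{t=0}^{\infty} \big(x_t^\top Q x_t + u_t^\top R u_t\big)\Big], \qquad x_{t+1} = A x_t + B u_t, \qquad u_t = -K x_t,

where J(K) is non-convex in the gain matrix K even though the optimal controller is available in closed form from the Riccati equation; the paper asks whether gradient methods on K nevertheless converge globally.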
(1) The topic of this paper seems to have minimal connection with ICLR. It might be more appropriate for this paper to be reviewed at a control/optimization conference, so that all the technical analysis can be evaluated carefully.
(2) I am not convinced that the main results are novel. The convergence of policy gradient does not rely on the convexity of the loss function, which is known in the control and dynamic programming communities. The convergence of policy gradient is related to the convergence of actor-critic, which is essentially a form of policy iteration. I am not sure it is a good idea to examine the convergence purely from an optimization perspective.
(3) The main results of this paper seem technically sound. However, the results seem a bit limited because they do not apply to neural-network function approximators, nor to control problems more general than the quadratic cost function, which is quite restrictive. I might have missed something here. I strongly suggest that these results be submitted to a more suitable venue. |
iclr_2018_BydjJte0- | TOWARDS REVERSE-ENGINEERING BLACK-BOX NEURAL NETWORKS
Many deployed learned models are black boxes: given input, returns output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -we show that the revealed internal information helps generate more effective adversarial examples against the black box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white box and black box models. The code is available at goo.gl/MbYfsv. | -----UPDATE------
Having read the responses from the authors, and the other reviews, I am happy with my rating and maintain that this paper should be accepted.
----------------------
In this paper, the authors train a large number of MNIST classifier networks with differing attributes (batch-size, activation function, no. of layers etc.) and then utilise the inputs and outputs of these networks to successfully predict said attributes. They then show that they are able to use the methods developed to predict the family of ImageNet-trained networks and use this information to improve adversarial attacks.
I enjoyed reading this paper. It is a very interesting set up, and a novel idea.
A few comments:
The paper is easy to read, and largely written well. The article is missing from the nouns quite often though so this is something that should be amended. There are a few spelling slip ups ("to a certain extend" --> "to a certain extent", "as will see" --> "as we will see")
It appears that the output for kennen-o is a discrete probability vector for each attribute, where each entry corresponds to a possibility (for example, for "batch-size" it is a length-3 vector where the first entry corresponds to 64, the second 128, and the third 256). What happens if you instead treat it as a regression task? Would it then be able to hint at intermediate values (a batch size of 96) or extremes (say, 512)?
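To illustrate the alternative, the metamodel could keep its shared trunk and simply swap the per-attribute softmax head for a scalar regression head. A hypothetical sketch (the layer sizes and the choice of log2(batch size) as the target are mine, not the paper's):

import torch.nn as nn

trunk = nn.Sequential(nn.Linear(1000, 512), nn.ReLU())  # input size is arbitrary: whatever the concatenated query outputs are

clf_head = nn.Linear(512, 3)   # current setup: one logit per allowed value {64, 128, 256}, trained with cross-entropy
reg_head = nn.Linear(512, 1)   # suggested variant: predict log2(batch size) directly with an MSE loss,
                               # so values such as 96 or 512 become expressible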
A flaw of this paper is that kennen-i and io appear to require gradients from the network being probed (you do mention this in passing), which realistically you would never have access to. (Please do correct me if I have misunderstood this)
It would be helpful if Section 4 had a paragraph as to your thoughts regarding why certain attributes are easier/harder to predict. Also, the caption for Table 2 could contain more information regarding the network outputs.
You have jumped from predicting 12 attributes on MNIST to 1 attribute on Imagenet. It could be beneficial to do an intermediate experiment (a handful of attributes on a middling task).
I think this paper should be accepted as it is interesting and novel.
Pros
------
- Interesting idea
- Reads well
- Fairly good experimental results
Cons
------
- kennen-i seems like it couldn't be realistically deployed
- lack of an intermediate difficulty task |
iclr_2018_Sk4w0A0Tb | The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. However, RNN still have a limited capacity to manipulate long-term memory. To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms. In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory. We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. RUM's performance in the bAbI Question Answering task is comparable to that of models with attention mechanism. We also improve the state-of-the-art result by 0.001 to 1.189 bits-percharacter (BPC) test loss in the Character Level Penn Treebank (PTB) task. Moreover, our models achieve 0.002 BPC improvement to the validation too, which is to signify the applications of RUM to real-world sequential data. The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation. | Summary:
This paper proposes a way to incorporate rotation memories into gated RNNs. They use a specific parametrization of the rotation matrices. They run experiments on several toy tasks and on character-level language modelling with PTB (which I would still consider to be toyish).
Question:
Can the rotation proposed here cause unintentional forgetting by interleaving the memories? Because rotations are, in some sense, glorified summation in high dimensions, a full rotation of a vector (360 degrees) brings you back to the same location, so the model might overwrite its past memories.
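A toy illustration of the worry (mine, not the paper's model): in 2D, composing rotations simply adds their angles modulo 360 degrees, so a sequence of rotational "writes" can land exactly on top of an earlier state.

import numpy as np

def rot(theta):
    # 2D rotation matrix by angle theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

v = np.array([1.0, 0.0])
state = v
for _ in range(4):                 # four successive "writes", each a 90-degree rotation
    state = rot(np.pi / 2) @ state

print(np.allclose(state, v))       # True: the angles summed to 360 degrees, so the latest write
                                   # lands exactly on the original memory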
Pros:
Proposes an interesting way to incorporate the rotation operations into the gated architectures.
Cons:
The specific choice of rotation operation is not very well justified.
This paper more or less uses the same architecture as the EU-RNNs of Jing et al. (2017), with a different parametrization for the rotation matrices.
The experiments are still limited to simple small-scale tasks.
General Comments:
The idea and the premise of this paper are interesting. In general the paper seems to be well-written. However, the most important part of the paper, Section 3.1, is not very well justified. Why is this particular parameterization of the rotation matrices used, and where does it actually come from? Can you point to a citation? I think the RUM architecture section also requires a better explanation of, for instance, why R_t is parameterized that way (as a multiplicative function of R_{t-1}). A detailed ablation study would help too.
The model seems to perform very close to GORU on the Copying task. I would be interested in seeing comparisons to GORU on "Associative Recall" as well. On the QA task, which subset of the bAbI dataset did you use? The 1k or 10k training sets?
On language modelling there is only an insignificant difference between the FS-LSTM-2 and the FS-RUM model. This does not tell us much. |
iclr_2018_Hy6GHpkCW | A NEURAL REPRESENTATION OF SKETCH DRAWINGS
We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on a dataset of human-drawn images representing many different classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format. | This paper introduces a neural network architecture for generating sketch drawings. The authors propose that this is particularly interesting compared to generating pixel data, as it emphasises more human-like concepts. I agree. The contribution of this paper is two-fold. Firstly, the paper introduces a large sketch dataset that future papers can rely on. Secondly, the paper introduces the model for generating sketch drawings.
The model is inspired by the variational autoencoder. However, the proposed method departs from the theory that justifies the variational autoencoder. I believe the following things would be interesting points to discuss / follow up:
- The paper preliminarily investigates the influence of the KL regularisation term on a validation data likelihood. It seems to have a negative impact for the range of values that are discussed. However, I would expect there to be an optimum. Does the KL term help prevent overfitting at some stage? Answering this question may help understand what influence variational inference has on this model.
- The decoder model has randomness injected in it at every stage of the RNN. Because of this, the latent state actually encodes a distribution over drawings, rather than a single drawing. It seems plausible that this is one of the reasons that the model cannot obtain a high likelihood with a high KL regularisation term. Would it help to rephrase the model to make the mapping from latent representation to drawing more deterministic? This definitely would bring it closer to the way the VAE was originally introduced.
- The unconditional generative model *only* relies on the "injected randomness" for generating drawings, as the initial state is initialised to 0. This also is not in the spirit of the original VAE, where unconditional generation involves sampling from the prior over the latent space.
I believe the design choices made by the authors are valid in order to get things to work. But it would be interesting to see why a more straightforward application of the theory perhaps *doesn't* work as well (or whether it works better). This would let interesting applications inform us about what is wrong with current theoretical views.
Overall, I would argue that this paper is a clear accept. |
iclr_2018_ryvxcPeAb | Deep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks. | This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.
This paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For example:
"training model paramater" --> "training model parameters" (assuming the neural networks have more than 1 parameter)
"same or similar dataset with" --> "same or a similar dataset to"
"human eyes" --> "the human eye"!
"in analogous to" --> "analogous to"
"start-of-the-art" --> "state-of-the-art"
Some roughly chronological comments follow:
In equation (1) although it is obvious that y is the output of f, you should define it. As you are considering the single highest-scoring class, there should probably be an argmax somewhere.
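Something along the lines of the following would suffice (my notation, assuming f_k(x) denotes the score the model assigns to class k):

y(x) = \arg\max_k f_k(x).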
"The best metric should be human eyes, which is unfortunately difficult to quantify". I don't recommend that you quantify things in terms of eyes.
In Section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation. It would be helpful to put equation (8) here. The "human" line on figure 1a doesn't make much sense. By u & v in the figure 1 caption you presumably mean the x and y axes on the plot. These should be labelled.
In section 4 you write "it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly". I'm not sure this is true. Imagenet has a *lot* of dog breeds. For an adversarial attack, it may be advantageous to change the classification from "wrong breed of dog" to "not a dog at all".
Something that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case). A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second. You also "try lots of \alphas", which again seems like overfitting.
Target attack experiments are missing from 5.1; in 5.2 you write that it is a harder problem, so it is omitted. I would argue it is still worth presenting these results even if they are less flattering.
Section 6.2 feels out of place and disjointed from the narrative of the paper.
A lot of choices in Section 6 feel arbitrary. In 6.3, why is resnet34 the chosen source model? In 6.4 why do you select those two target models?
I think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation. I therefore recommend it be rejected.
Pros:
- Promising results
- Good summary of adversarial methods
Cons:
- Poorly written
- Appears to overfit to the test data |
iclr_2018_ryepFJbA- | We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions. | This paper contains a collection of ideas about Generative Adversarial Networks (GAN) but it is very hard for me to get the main point of this paper. I am not saying ideas are not interesting, but I think the author needs to choose the main point of the paper, and should focus on delivering in-depth studies on the main point.
1. On the game theoretic interpretations
The paper Generative Adversarial Nets (NIPS 2014) already presented the game-theoretic interpretation of GANs, so it's hard for me to see what is new in this section. Best-response dynamics is not used in conventional GAN training, because it is very hard to find the global optimum of the inner minimization and outer maximization.
The convergence of the online primal-dual gradient descent method in a minimax game is already well known, but this analysis cannot be applied to the usual GAN setting because the objective is not convex-concave. I would find this analysis very interesting if the authors could construct a toy example in which the GAN objective becomes convex-concave by using different model parameterizations and/or a different f-divergence, and conduct various studies on the convergence and stability of this problem.
I also found that the hypothesis on mode collapse has a very limited connection to the convex-concave case. It is OK to form the hypothesis and present an interesting research direction, but in order to make this a main point of the paper, the authors should provide more rigorous arguments or experimental studies instead of jumping to the hypothesis in two sentences. For example, if the authors can provide a toy example contrasting the case where the GAN objective is convex-concave with the non-convex-concave case, and show how the loss-function shape or gradient dynamics change, that would provide very valuable insight into the problem.
2. DRAGAN
As the open commenters pointed out, I found it difficult to see why we want to make the norm of the gradient equal to 1.
Why not 2? Why not 1/2? Why is 1 so special?
In the WGAN paper, the gradient is clipped to a number less than 1 because it is a sufficient condition for being 1-Lipschitz, but this paper provides no justification for this particular number.
It's OK not to have theoretical answers to these questions, but in that case the authors should provide ablation experiments: for example, sweeping the gradient-norm target over 10^-3, 10^-2, 10^-1, 1.0, 10.0, etc. and reporting its impact on performance.
Also, scheduling the regularization parameter, e.g. reducing lambda exponentially, would be interesting as well.
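Concretely, the ablation only requires exposing the target norm and the penalty weight as knobs. A schematic PyTorch-style sketch (my own variable names, not the authors' code):

import torch

def gradient_penalty(discriminator, x_perturbed, target_norm=1.0, lam=10.0):
    # DRAGAN-style penalty with the target gradient norm exposed as a knob, so that
    # target_norm can be swept (e.g. over {1e-3, 1e-2, 1e-1, 1, 10}) and lam annealed.
    x = x_perturbed.clone().requires_grad_(True)
    d_out = discriminator(x)
    grads, = torch.autograd.grad(d_out.sum(), x, create_graph=True)
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - target_norm) ** 2).mean()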
Most of those studies won't be necessary if the theory is sound. However, since this paper does not provide a justification on the magic number "1", I think it's better to include some form of ablation studies.
Note that items 1 and 2 are not strongly related to each other and could be two separate papers. I recommend choosing one direction and providing an in-depth study of that topic. Currently, this paper tries to present interesting ideas without very deep investigation, and I cannot recommend it for publication. |
iclr_2018_B1mSWUxR- | Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes. RAML incorporates task-specific reward by performing maximum-likelihood updates on candidate outputs sampled according to an exponentiated payoff distribution, which gives higher probabilities to candidates that are close to the reference output. While RAML is notable for its simplicity, efficiency, and its impressive empirical successes, the theoretical properties of RAML, especially the behavior of the exponentiated payoff distribution, has not been examined thoroughly. In this work, we introduce softmax Q-distribution estimation, a novel theoretical interpretation of RAML, which reveals the relation between RAML and Bayesian decision theory. The softmax Q-distribution can be regarded as a smooth approximation of the Bayes decision boundary, and the Bayes decision rule is achieved by decoding with this Qdistribution. We further show that RAML is equivalent to approximately estimating the softmax Q-distribution, with the temperature τ controlling approximation error. We perform two experiments, one on synthetic data of multi-class classification and one on real data of image captioning, to demonstrate the relationship between RAML and the proposed softmax Q-distribution estimation method, verifying our theoretical analysis. Additional experiments on three structured prediction tasks with rewards defined on sequential (named entity recognition), tree-based (dependency parsing) and irregular (machine translation) structures show notable improvements over maximum likelihood baselines. | The authors claim three contributions in this paper. (1) They introduce the framework of softmax Q-distribution estimation, through which they are able to interpret the role the payoff distribution plays in RAML. Specifically, the softmax Q-distribution serves as a smooth approximation to the Bayes decision boundary. The RAML approximately estimates the softmax Q-distribution, and thus approximates the Bayes decision rule. (2) Algorithmically, they further propose softmax Q-distribution maximum likelihood (SQDML) which improves RAML by achieving the exact Bayes decision boundary asymptotically. (3) Through one experiment using synthetic data on multi-class classification and one using real data on image captioning, they show that SQDML is consistently as good or better than RAML on the task-specific metrics that is desired to optimize.
I found the first contribution to be sound, and it reasonably explains why RAML achieves better performance when measured by a specific metric. Given a reward function, one can define the Bayes decision rule. The softmax Q-distribution (Eqn. 12) is defined to be the softmax approximation of the deterministic Bayes rule. The authors show that RAML can be explained by moving the expectation out of the nonlinear function and replacing it with the empirical expectation (Eqn. 17). Of course, the moving-out is biased, but the replacement is unbiased.
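For readers less familiar with the construction, the two generic facts at play (my paraphrase, not the paper's exact equations) are:

q_\tau(y) = \frac{\exp(r(y)/\tau)}{\sum_{y'} \exp(r(y')/\tau)} \;\to\; \mathbf{1}\big[\, y = \arg\max_{y'} r(y') \,\big] \quad \text{as } \tau \to 0 \text{ (assuming a unique maximiser)},

and E[g(X)] \neq g(E[X]) for a nonlinear g in general, which is why swapping the expectation and the nonlinearity, as in the RAML approximation described above, introduces the bias.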
The second contribution is partially valid, although I doubt how much improvement one can get from SQDML. The authors define the empirical Q-distribution by replacing the expectation in Eqn. 12 with the empirical expectation (Eqn. 15). In fact, this step can result in a biased estimate because the replacement is inside the nonlinear function. When x is repeated sufficiently often in the data, this bias is small and an improvement can be observed, as in the synthetic-data example. However, when x is not repeated frequently, both RAML and SQDML are biased. The experiment in Section 4.1.2 does not demonstrate a significant improvement, either.
The numerical results are relatively weak. The synthetic experiment verifies the reward-maximizing property of RAML and SQDML. However, from Figure 2, we can see that the result is quite sensitive to the temperature \tau. Are there any guidelines for choosing \tau? The experiments in Section 4.2 are all meant to show the effectiveness of RAML, which is not very relevant to this paper. These experimental results show very small improvements over the ML baselines (see Tables 2, 3 and 5) and are also below state-of-the-art performance.
A few questions:
(1). The authors may want to check whether (8) can be called a Bayes decision rule. It follows directly from the definition of conditional probability; no Bayesian elements, such as a prior or a likelihood, appear here.
(2). In the implementation of SQDML, one can sample from (15) without exactly computing the summation in the denominator. Compared with the n-gram replacement used in the paper, which one is better?
(3). The authors may want to write Eqn. 17 in the same conditional form as Eqn. 12 and Eqn. 14. This would make the comparison much clearer.
(4). What is Theorem 2 trying to convey? Although \tau goes to 0, there is still a gap between Q and Q'. This seems to suggest that for small \tau, Q' is not a good approximation of Q. Are the assumptions in Theorem 2 reasonable? There are several typos in the proof of Theorem 2.
(5). In section 4.2.2, the authors write "the rewards we directly optimized in training (token-level accuracy for NER and UAS for dependency parsing) are more stable w.r.t. τ than the evaluation metrics (F1 in NER), illustrating that in practice, choosing a training reward that correlates well with the evaluation metric is important". Could you explain it in more details? |
iclr_2018_SywXXwJAb | DEEP LEARNING AND QUANTUM ENTANGLEMENT: FUNDAMENTAL CONNECTIONS WITH IMPLICATIONS TO NETWORK DESIGN
Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obtaining novel theoretical observations regarding the inductive bias of convolutional networks. Specifically, we show a structural equivalence between the function realized by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network's expressive ability to model correlations. Furthermore, the construction of a deep ConvAC in terms of a quantum Tensor Network is enabled. This allows us to perform a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in its underlying graph. We demonstrate a practical outcome in the form of a direct control over the inductive bias via the number of channels (width) of each layer. We empirically validate our findings on standard convolutional networks which involve ReLU activations and max pooling. The description of a deep convolutional network in well-defined graph-theoretic tools and the structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work. | The paper proposes a structural equivalence between the function realised by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network’s expressive ability to model correlations.
The work is definitely worth digging into more deeply, as it bridges gaps and opens discussion between physics and deep learning. The ultimate goal of this work, if I understand correctly, is to provide a theoretical explanation for the design of deep neural architectures. The paper is well-written (above most submissions, top 10%) and clear. However, removing all the fancy framing and looking into the picture further, I have several major concerns.
+ Potential good research direction to connect physical sciences (via TN) to deep learning theories (via ConvAC).
- [Novelty is limited and the proof is vague] The paper uses physical concepts to establish a "LAYER WIDTHS EFFECT ON THE EXPRESSIVENESS OF A DEEP NETWORK"; the core theory (proposed method) is Section 5 alone, and the rest (Sections 2, 3 and 4) is introductory. Putting Theorem 1 in simple, deep-learning terms, it says that for a dataset with features of size D, there exists a partition of length scale \epsilon < D which is guaranteed to separate different parts of a feature. Based on this, they give a rule of thumb for designing the width (i.e., channel numbers) of layers in a deep neural network: (a) layer l = logD is more important than the deeper layers; (b) among these deeper layers, deeper ones need to be wider, which is derived from the min-cut in the ConvAC TN case. How is (a) derived or implied from Theorem 1?
It seems to me that the paper proceeds in a rigorous manner up to the proof of Theorem 1, with all the concepts and notation well laid out. Suddenly, when it comes to connecting to the practical design of deep networks, the conclusion becomes qualitative, without much explanation via figures or visualisation of the learned features to demonstrate the effectiveness of the proposed scheme.
- [Experiments are super weak] The paper has a good motivation and a beautiful story, and yet the experiments are too weak to verify them. The reason the authors use ConvACs is that they more closely resemble the tensor operations introduced in the paper. There is a sentence, "Importantly, through the concept of generalized tensor decompositions, a ConvAC can be transformed to a standard convolutional network with ReLU activation and average/max pooling", that is meant to convey the relation between ConvACs and traditional convolutions. The theory is based on the analysis of ConvACs, and yet all of a sudden the experiments are conducted on traditional convolutions. This is not rigorous, and not professional for a "technically-sound" paper. How can the generalized concept of tensor decompositions be carried over from ConvACs to vanilla convolutions?
The experiments seem to extend the channel width of *all* layers in a hand-crafted manner (10, 4r, 4r, xxx). Based on the derived rule of thumb, the most important layer in MNIST should be layer 3 or 4 (log 10). Some simple ablative analysis should be:
(i) baseline: fix layer 3, add more layers thereafter in the network;
(ii) fix layer 3, reduce the channel numbers after layer 3.
The (ii) case should be at least comparable to (i) if Theorem 1 is correct.
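In code, the two settings only need to differ in the channel schedule after layer 3; something like the following would do (ordinary convolutions rather than a ConvAC, and the channel numbers are mine, purely illustrative):

import torch.nn as nn

def convnet(channels):
    # Plain 3x3 conv + ReLU stack; `channels` fixes the width of every layer.
    layers, c_in = [], 1
    for c_out in channels:
        layers += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU()]
        c_in = c_out
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_in, 10)]
    return nn.Sequential(*layers)

deeper   = convnet([10, 64, 64, 64, 64])  # (i)  same widths up to layer 3, extra layers after it
narrower = convnet([10, 64, 64, 16, 16])  # (ii) same widths up to layer 3, thinner layers after it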
Moreover, to verify conclusion (b) which I mentioned earlier, does the current setting (10, 4r, 4r, xx) respect "deeper ones need to be wider"? What is the value of r? MNIST is an over-used dataset and quite small. I see that the performance in Figure 4 (the only experimental result in the paper) just exceeds 90%. A simple trained NN (not a CNN) could easily reach 96% or so.
More ablative studies (ConvAC vs. vanilla convolution, other datasets, comparison of the components in the width design, etc.) are seriously needed. Otherwise, it is just not convincing to me.
If the authors target network design in a more general manner (not just layer width, but also the number of filters, layers, etc.), there is already some neat work in this community that they should definitely compare against, e.g., Neural Architecture Search with Reinforcement Learning, ICLR 2017. I understand that the paper starts by building the connection from physics to deep learning, so it is natural to address the width-design issue alone. This is not a major concern.
-------------
Over the last few years, especially at ICLR 2017, we have seen lots of fancy concepts trying to tie interdisciplinary subjects to the analysis of deep learning theory. This year the same thing happens. I am not disparaging the motivation/intuition of the work; on the contrary, I think explaining the design of neural nets by way of quantum physics is quite novel. But the experiments, and the conclusions derived from the analysis, do not make the paper solid to me, and I am quite skeptical about its actual effectiveness. |
iclr_2018_ry018WZAZ | DEEP ACTIVE LEARNING FOR NAMED ENTITY RECOGNITION
Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than best performing models. We carry out incremental active learning, during the training process, and are able to nearly match state-of-the-art performance with just 25% of the original training data. | This paper studies the application of different existing active learning strategies for the deep models for NER.
Pros:
* Active learning may be used for improving the performance of deep models for NER in practice
* All the proposed approaches are sound and the experimental results showed that active learning is beneficial for the deep models for NER
Cons:
* The novelty of this paper is marginal. The proposed approaches turn out to be a combination of existing active learning strategies for selecting data to query with the existing deep model for NER.
* No conclusion can be drawn by comparing with the 4 different strategies.
======= After rebuttal ================
Thank you for the clarification and revision on this paper. It looks better now.
I understand that the purpose of this paper is to give actionable insights for the practice of deep learning. However, since AL itself is a meta-learning framework, and neural nets as base learners have already been shown to be effective for AL, the novelty and contribution of a general discussion of applying AL to deep neural nets are marginal. What I really expected was a tightly-coupled active learning strategy that is specially designed for the particular deep neural network structure used for NER. Apparently, however, none of the strategies used in this work is designed for this purpose (e.g., the query strategy or the model-update strategy should at least reflect some properties of deep learning or NER). Thus, it is still below my expectations.
Anyway, since the authors have attempted to improve this paper, and the results may provide some information for practitioners, I would like to raise my rating slightly to give this attempt a chance. |
iclr_2018_SyX0IeWAW | META LEARNING SHARED HIERARCHIES
We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives-policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover 1 meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy. | This paper proposes a novel hierarchical reinforcement learning method for a fairly particular setting. The setting is one where the agent must solve some task for many episodes in a sequence, after which the task will change and the process repeats. The proposed solution method splits the agent into two components, a master policy which is reset to random initial weights for each new task, and several sub-policies (motor primitives) that are selected between by the master policy every N steps and whose weights are not reset on task switches. The core idea is that the master policy is given a relatively easy learning task of selecting between useful motor primitives and this can be efficiently learned from scratch on each new task, whereas learning the motor primitives occurs slowly over many different tasks. To push this motivation into the learning process, the master policy is updated always but the sub-policies are only updated after an extended warmup period (called the joint-update or training period). This experiments include both small domains (moving to 2D goals and four-rooms) and more complex physics simulations (4-legged ants and humanoids). In both the simple and complex domains, the proposed method (MLSH) is able to robustly achieve good performance.
This approach to obtaining complex structured behavior appears impressive despite the amount of temporal structure that must be provided to the method (the choice of N, the warmup period, and the joint-update period). Relying on the temporal structure for the hierarchy, and forcing the master policy to be relearned from scratch for each new task may be problematic in general, but this work shows that in some complex settings, a simple temporal decomposition may be sufficient to encourage the development of reusable motor primitives and to also enable quick learning of meta-policies over these motor-primitives. Moreover, the results show that these temporal hierarchies are helpful in these domains, as the corresponding non-hierarchical methods failed on the more challenging tasks.
The paper could be improved in some places (e.g. unclear aliases of joint-update or training periods, describing how the parameters were chosen, and describing what kinds of sub-policies are learned in these domains when different parameter choices are made). |