paper_id
stringlengths 10
19
| venue
stringclasses 14
values | focused_review
stringlengths 7
8.45k
| point
stringlengths 60
643
|
---|---|---|---|
NIPS_2022_1637 | NIPS_2022 | 1. The examples of scoring systems in the Introduction seem out of date, there are many newer and recognized clinical scoring systems. It also should briefly introduce the traditional framework of the scoring system and its difference in methodology and performance with the proposed method. 2. As shown in figure 3, the performance improvement of proposed methods seems not so significant, the biggest improvement in the bank dataset was ~0.02. Additionally, using some tables to directly show the key improvements may be more intuitive and detailed. 3. Although extensive experiments and discussion on performance, in my opinion, its most significant improvement would be efficiency, and there are few discussions or ablation experiments on efficiency. 4. The model AUC can assess the model discriminant ability, i.e., the probability of a positive case is bigger than that of a negative case, but may be hard to show its consistency between predicted score and actual risk. However, this consistency may be more crucial to the clinical scoring system (differentiated with classification task). Therefore, the related studies are encouraged to conduct calibration curves to show the agreement. It would be better to prove the feasibility of the generated scoring system? The difference between the traditional method and our method can also be discussed in this paper. | 4. The model AUC can assess the model discriminant ability, i.e., the probability of a positive case is bigger than that of a negative case, but may be hard to show its consistency between predicted score and actual risk. However, this consistency may be more crucial to the clinical scoring system (differentiated with classification task). Therefore, the related studies are encouraged to conduct calibration curves to show the agreement. It would be better to prove the feasibility of the generated scoring system? The difference between the traditional method and our method can also be discussed in this paper. |
ICLR_2022_3058 | ICLR_2022 | . At the end of section 2, the authors tried to explain noisy signals are harmful for the OOD detection. It's obvious that with more independent units the variance of the output is higher. But this affects both ID and OOD data. The explanation is not clear.
. The analysis in section 6 is kind of superficial. 1) Lemma 2: the conclusion is under the assumption that the mean is approximately the same. However, as DICE is not designed to guarantee this assumption, the conclusion in Lemma 2 may not apply to DICE. 2) mean of output: the scoring function used for OOD detection is max_cf_c(x). The difference of mean is not directly related to the detection scoring, so the associated observation may not be used to explain why the algorithm works.
. Overall, it is not well explained why the proposed algorithm would work for some OOD detection. 1) From the observation, although DICE can reduce the variance of both ID and OOD data, the effect on OOD seems more significant. This may due to the large difference between ID and OOD. Therefore, it would be interesting to exam the performance of DICE by varying the likeness between OOD and ID. 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical mean as the assumption. These conditions are crucial for DICE, but is not well discussed, eg., how to ensure DICE meet these conditions.
. In the experiment, the OOD samples generally are significantly different from ID samples (thus less challenging). As pointed out in the above comment, it would be interesting to compare the performance of DICE by varying the OODness of test samples. For example, the ID data is 8 from MNIST, OOD datasets can be 1) 3 from MNIST; 2) 1 from MNIST; 3) FMNIST; and 4) CIFAR-10.
. The comparison between DICE and generative-based model (Table 3) is unfair as DICE is supervised while the benchmarks are unsupervised. It's not surprising that DICE is better. The authors should add comments on that.
. It is claimed in the experimental part that the in-distribution classification accuracy can be maintained under DICE. Only the result on CIFAR-10 is shown. Please provide more results to support the conclusion if possible.
. Instead of using directed sparsification, one possible solution may be just using a simpler network. Of course this would change the original network architecture. But as one part of the ablation study, it would be interesting to know whether a simpler network would be more beneficial for the OOD detection. | 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical mean as the assumption. These conditions are crucial for DICE, but is not well discussed, eg., how to ensure DICE meet these conditions. |
ICLR_2022_3205 | ICLR_2022 | This method trades one intractible problem for another: it requires the learning of cross-values v e ′ ( x t ; e )
for all pairs of possible environments e , e ′
. It is not clear that this will be an improvement when scaling up.
At a few points the paper introduces approximations, but the gap to the true value and the implications of these approximations are not made completely clear to me. The authors should be more precise about the tradeoffs and costs of the methods they propose, both in terms of accuracy and computational cost.
On page 6, it claims that estimating v c
according to samples will lead to Thompson sampling-like behavior, which might lead to better exploration. This seems a bit facetious given that this paper attempts to find a Bayes-optimal policy and explicitly points out the weaknesses of Thompson sampling in an earlier section.
Not scaled to larger domains, but this is understandable.
Questions and minor comments
Is the belief state conditioning the policy also supposed to change with time τ
? As written it looks like the optimal Bayes-adaptive policy conditions on one sampled belief about the environment and then plays without updating that belief.
It is not intuitive to me how it is possible to estimate v f
, despite the Bellman equation written in Eq. 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct?
I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text. | 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text. |
NIPS_2022_601 | NIPS_2022 | Although I like the general idea of using DPP, I found there are various issues with the current version of the paper. Please see my detailed comments as follows.
• The paper specifically targets the permutation problems, but I don't see how this permutation property is incorporated into the design of the proposed acquisition function (except the fact that batch BO is used so that we can evaluate multiple data points parallel and thus can avoid the issue of large search space for permutation problems).
• Even though the paper provides various theoretical analysis, but these analyses seem not to be rigorous, and might not really help to answer the analytical properties of the proposed approach.
o The Acquisition Weighted Kernel L^{AW} defined in Line 122 seems to not be a real (valid) kernel? As it depends on any acquisition function a(x), so it seems impossible to me that it is a valid kernel for all cases. Besides, this seems to be like a component of the proposed acquisition function, rather than to be called a "kernel".
o The regret analysis in Theorem 3.6 depends on the maximum information gain \gamma_T, but this \gamma_T value is not properly bounded in Theorem 3.9. Theorem 3.9 only shows that \gamma_T is smaller than a function of \lambda_{max} but there is no guarantee that \lambda_{max} is bounded when T goes to infinity. This is a key analysis in any BO analysis. In the literature, only several kernels have been shown that their \lambda_{max} is bounded when T goes to infinity.
o Again, same problem with Theorem 3.12, is there any guarantee that the \lambda_{max} of the position kernel is upper bounded?
• The performance of the proposed approach on the permutation optimization problems are not that good though (Section 5.1). LAW-EI performs pretty bad, much worse than other baselines in various cases, while LAW-EST is on par with other baselines and only performs well in one problem. Besides, I don't understand why LAW-UCB is not added as one of the baselines. The justification regarding the size of search space does not seem reasonable to me.
• Besides, the experiments seem not too strong and fair to me. I don't understand why all the baselines use the position kernels, why don't we use the default settings of these baselines in the literature? Besides, it seems like some baselines related to BO with discrete & categorial variables are missing. The paper also needs to compare its proposed approach with these baselines.
I think the paper does not mention much about the limitations or the societal impacts of their proposed approach. | • Besides, the experiments seem not too strong and fair to me. I don't understand why all the baselines use the position kernels, why don't we use the default settings of these baselines in the literature? Besides, it seems like some baselines related to BO with discrete & categorial variables are missing. The paper also needs to compare its proposed approach with these baselines. I think the paper does not mention much about the limitations or the societal impacts of their proposed approach. |
NIPS_2022_1770 | NIPS_2022 | Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters.
According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for parameter-heavy decoders.
The range of sampled architectures should also affect the correlation. For instance, once the sampled architectures are of similar sizes, it could be more challenging to differentiate their perplexity and thus the correlation can be lower.
Detailed Comments:
Some questions regarding Figure 4: 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; 2) the title "Top-x%" should be further explained;
Though the proposed approach yields the Pareto frontier of perplexity, latency and memory, is there any systematic way to choose a single architecture given the target perplexity? | 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; |
ICLR_2023_4878 | ICLR_2023 | 1. The proposed sparsity technique seems to be limited in scope. While it has been shown to work well for a particular misinformation detector, there is no guarantee that it will work well for other networks. 2. Though the experimental results are encouraging, it is not clear why the simple pre-fixed should work well. No explanation is provided. 3. The dataset used for evaluation is not a widely used dataset. Basically only previous work has used it and this work is an extension of the previous work. 4. There is no novelty in the methodological aspects of the work.
Questions for the authors: 1. Why do misinformation detection models need to be deployed on smartphones? Can you give a real-world use case? 2. How does the proposed sparsity pattern compare with masks that are inferred from a pretrained model or learned during training? 3. Wu et al use event level detection results for document: "these two tasks are not mutually independent. Intuitively, document-level detection can benefit from the results of event-level detection, because the presence of a large number of false events indicates that the document is more likely to be fake. Therefore, we feed the results produced by a well-trained event-level detector into each layer of the document-level detector." Their ablation study (Table 4) shows that event level detection is crucial for getting best document level results (86.76 vs 84.57 F1). This seems to go against your claim that document level detection model needs to be separated from event level detection model. 4. Results in Table 1 (doc classifier exit #4) doesn't match with those of Wu et al. Why is sparse model giving better results than unpruned? 5. HSF, GROVER and CDMD are fake news detection algorithms whereas MP and LTH are sparse n/w methods. Which fake news detection algorithm is used in conjunction with MP and LTH? (Table 4) 6. Doc classifier exit #1 is nearly as good as exit # 2. And there's not a whole lot of difference b/w exit #1 and exit #4. Is this because document level event detection is a easy problem or something to do with the dataset? 7. Why no event level results for 90% sparsity in Table 4? Do the event level results degrade more drastically than even level as the sparsity is increased? 8. Why no results for a random sparsity pattern? That would be a good baseline for relative assessment of sparsity patterns. 9. In Tables 4,5 and 6, why is SMD 90% sparsity better than SMD 50%? 10. Looks like all sparsity patterns do almost equally well. No insight provided as to what is happening here. Is this something unique to the sparsity detection problem or is this true for GNN in general?
Section 4.3: presentation bits --> representation bits | 10. Looks like all sparsity patterns do almost equally well. No insight provided as to what is happening here. Is this something unique to the sparsity detection problem or is this true for GNN in general? Section 4.3: presentation bits --> representation bits |
ICLR_2023_650 | ICLR_2023 | 1.One severe problem of this paper is that it misses several important related work/baselines to compare[1,2,3,4], either in discussion [1,2,3,4]or experiments[1,2]. This paper addresses to design a normalization layer that can be plugged in the network for avoiding the dimensional collapse of representation (in intermediate layer). This idea has been done by the batch whitening methods [1,2,3] (e.g, Decorrelated Batch Normalization (DBN), IterNorm, etal.). Batch whitening, which is a general extent of BN that further decorrelating the axes, can ensure the covariance matrix of the normalized output as Identity (IterNorm can obtain an approximate one). These normalization modules can surely satisfy the requirements these paper aims to do. I noted that this paper cites the work of Hua et al, 2021, which uses Decorrelated Batch Normalization for Self-supervised learning (with further revision using shuffling). This paper should note the exist of Decorrelated Batch Normalization. Indeed, the first work to using whitening for self-supervised learning is [4], where it shows how the main motivations of whitening benefits self-supervised learning.
2.I have concerns on the connections and analyses, which is not rigorous for me. Firstly, this paper removes the A D − 1
in Eqn.6, and claims that “In fact, the operation corresponds to the stop-gradient technique, which is widely used in contrastive learning methods (He et al., 2020; Grill et al., 2020). By throwing away some terms in the gradient, stop-gradient makes the training process asymmetric and thus avoids representation collapse with less computational overhead. It verifies the feasibility of our discarding operation”. I do not understand how to stop gradients used in SSL can be connected to the removement of A D − 1
, I expect this paper can provide the demonstration or further clarification.
Secondly, It is not clear why layerNorm is necessary. Besides, how the layer normalization can be replace with an additional factor (1+s) to rescale H shown in claims “For the convenience of analysis, we replace the layer normalization with an additional factor 1 + s to rescale H”. I think the assumption is too strong.
In summary, the connections between the proposed contraNorm and uniformity loss requires: 1) removing A D − 1
and 2) add layer normalization, furthermore the propositions for support the connection require the assumption “layer normalization can be replace with an additional factor (1+s) to rescale H”. I personally feel that the connection and analysis are somewhat farfetched.
Other minors:
1)Figure 1 is too similar to the Figure 1 of Hua et al, 2021, I feel it is like a copy at my first glance, even though I noted some slightly differences when I carefully compare Figure 1 of this paper to Figure 1 of Hua et al, 2021.
2)The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ , τ
should be shown in a rigorous way or this paper mention it.
3)In page 6. the reference of Eq.(24)? References:
[1] Decorrelated Batch Normalization, CVPR 2018
[2] Iterative Normalization: Beyond Standardization towards Efficient Whitening, CVPR 2019
[3] Whitening and Coloring transform for GANs. ICLR, 2019
[4]Whitening for Self-Supervised Representation Learning, ICML 2021 | 2)The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ , τ should be shown in a rigorous way or this paper mention it. |
3vXpZpOn29 | ICLR_2025 | It is unclear that linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; Section 7.2]---in fact Figure 6 of [1] display imperfect matching using linear datamodels. It'd be useful to mention this limitation in this manuscript as well, and discuss the limitation's impact to machine learning.
# Suggestions:
1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].
2. Line 176. $\hat{f}$ should have output range in $\mathbb{R}^k$ since the range of $f_x$ is in $\mathbb{R}^k$.
3. Line 182. "show" -> "empirically show".
4. Definition 3. Write safe, $S_F$, and input $x$ explicitly in KLoM, otherwise KLoM$(\mathcal{U})$ looks like KLoM of the unlearning function across _all_ safe functions and inputs. I'm curious why the authors wrote KLoM$(\mathcal{U})$.
5. Add a Limitations section.
[1] Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Madry, A. (2022). Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622.
[2] Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211-407. | 1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2]. |
NIPS_2018_43 | NIPS_2018 | - Theoretical analyses are not particularly difficult, even if they do provide some insights. That is, the analyses are what I would expect any competent grad student to be able to come up with within the context of a homework assignment. I would consider the contributions there to be worthy of a posted note / arXiv article. - Section 4 is interesting, but does not provide any actionable advice to the practitioner, unlike Theorem 4. The conclusion I took was that the learned function f needs to achieve a compression rate of \zeta / m with a false positive rate F_p and false negative rate F_n. To know if my deep neural network (for example) can do that, I would have to actually train a fixed size network and then empirically measure its errors. But if I have to do that, the current theory on standard Bloom filters would provide me with an estimate of the equivalent Bloom filter that achieves the same error false positive as the learned Bloom filter. - To reiterate the above point, the analysis of Section 4 doesn't change how I would build, evaluate, and decide on whether to use learned Bloom filters. - The analytical approach of Section 4 gets confusing by starting with a fixed f with known \zeta, F_p, F_n, and then drawing the conclusion for an a priori fixed F_p, F_n (lines 231-233) before fixing the learned function f (lines 235-237). In practice, one typically fixes the function class (e.g. parameterized neural networks with the same architecture) *first* and measures F_p, F_n after. For such settings where \zeta and b are fixed a priori, one would be advised to minimize the learned Bloom filter's overall false positive (F_p + (1-F_p)\alpha^{b/F_n}) in the function class. An interesting analysis would then be to say whether this is feasible, and how it compares to the log loss function. Experiments can then conducted to back this up. This could constitute actionable advice to practitioners. Similarly for the sandwiched learned Bloom filter. - Claim (first para of Section 3.2) that "this methodology requires significant additional assumptions" seems too extreme to me. The only additional assumption is that the test set be drawn from the same distribution as the query set, which is natural for many machine learning settings where the train, validation, test sets are typically assumed to be from the same iid distribution. (If this assumption is in fact too hard to satisfy, then Theorem 4 isn't very useful too.) - Inequality on line 310 has wrong sign; compare inequality line 227 --- base \alpha < 1. - No empirical validation. I would have like to see some experiments where the bounds are validated. | - Claim (first para of Section 3.2) that "this methodology requires significant additional assumptions" seems too extreme to me. The only additional assumption is that the test set be drawn from the same distribution as the query set, which is natural for many machine learning settings where the train, validation, test sets are typically assumed to be from the same iid distribution. (If this assumption is in fact too hard to satisfy, then Theorem 4 isn't very useful too.) - Inequality on line 310 has wrong sign; compare inequality line 227 --- base \alpha < 1. |
ICLR_2021_1504 | ICLR_2021 | W1) The authors should compare their approach (methodologically as well as experimentally) to other concept-based explanations for high-dimensional data such as (Kim et al., 2018), (Ghorbani et al., 2019) and (Goyal et al., 2019). The related work claims that (Kim et al., 2018) requires large sets of annotated data. I disagree. (Kim et al., 2018) only requires a few images describing the concept you want to measure the importance of. This is significantly less than the number of annotations required in the image-to-image translation experiment in the paper where the complete dataset needs to be annotated. In addition, (Kim et al., 2018) allows the flexibility to consider any given semantic concept for explanation while the proposed approach is limited either to semantic concepts captured by frequency information, or to semantic concepts automatically discovered by representation learning, or to concepts annotated in the complete dataset. (Ghorbani et al., 2019) also overcomes the issue of needing annotations by discovering useful concepts from the data itself. What advantages does the proposed approach offer over these existing methods?
W2) Faithfulness of the explanations with the pretrained classifier. The methods of disentangled representation and image-to-image translation require training another network to learn a lower-dimensional representation. This runs the risk of encoding some biases of its own. If we find some concerns with the explanations, we cannot infer if the concerns are with the trained classifier or the newly trained network, potentially making the explanations useless.
W3) In the 2-module approach proposed in the paper, the second module can theoretically be any explainability approach for low-dimensional data. What is the reason that the authors decide to use Shapely instead of other works such as (Breiman, 2001) or (Ribeiro et al., 2016)?
W4) Among the three ways of transforming the high-dimensional data to low-dimensional latent space, what criteria should be used by a user to decide which method to use? Or, in other words, what are the advantages and disadvantages of each of these methods which might make them more or less suitable for certain tasks/datasets/applications?
W5) The paper uses the phrase “human-interpretable explainability”. What other type of explainability could be possible if it’s not human-interpretable? I think the paper might benefit with more precise definitions of these terms in the paper.
References mentioned above which are not present in the main paper:
(Ghorbani et al., 2019) Amirata Ghorbani, James Wexler, James Zou, Been Kim. Towards Automatic Concept-based Explanations. NeurIPS 2019.
(Goyal et al., 2019) Yash Goyal, Amir Feder, Uri Shalit, Been Kim. Explaining Classifiers with Causal Concept Effect (CaCE). ArXiv 2019.
—————————————————————————————————————————————————————————————— ——————————————————————————————————————————————————————————————
Update after rebuttal: I thank the authors for their responses to all my questions. However, I believe that these answers need to be justified experimentally in order for the paper’s contributions to be significant for acceptance. In particular, I still have two major concerns. 1) the faithfulness of the proposed approach. I think that the authors’ answer that their method is less at risk to biases than other methods needs to be demonstrated with at least a simple experiment. 2) Shapely values over other methods. I think the authors need to back up their argument for using Shapely value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to low-dimensional latent space which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating. | 2) Shapely values over other methods. I think the authors need to back up their argument for using Shapely value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to low-dimensional latent space which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating. |
NIPS_2017_337 | NIPS_2017 | of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writing at line 102 and defended in Appendix B. Another related assumption is made at line 121: the parameter space is assumed to be an l2-ball of radius rho.
The paper is well written. Here are some minor comments:
- The appendices are well connected to the main body, this is very much appreciated.
- Figure 2 and 3 are hard to read on paper when printed in black-and-white.
- There is a typo on line 237.
- Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts.
- The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory. | - Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts. |
ICLR_2022_497 | ICLR_2022 | I have the following questions to which I wish the author could respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point them out.
Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and therefore, the CVAE essentially attempts to approximate the true distribution P. In such a sense, if the true distribution P is independent of the context (which is the case in the experiments in this paper), I do not see the rationale for having the scenarios conditioned on the context, which in theory does not provide any statistical evidence. Therefore, the rationale behind CVAE-SIP is not clear to me. If the goal is not to approximate P but to solve the optimization problem, then having the objective values involved as a predicting goal is reasonable; in this case, having the context involved is justified because they can have an impact on the optimization results. Thus, CVAE-SIPA to me is a valid method. - While reducing the scenarios from 200 to 10 is promising, the quality of optimization has decreased a little bit. On the other hand, in Figure 2, using K-medoids with K=20 can perfectly recover the original value, which suggests that K-medoids is a decent solution and complex learning methods are not necessary for the considered settings. In addition, I am also wondering the performance under the setting that the 200 scenarios (or random scenarios of a certain number from the true distributions) are directly used as the input of CPLEX. In addition, to justify the performance, it is necessary to provide information about robustness as well as to identify the case where simple methods are not satisfactory (such as larger graphs).
Minor concerns: - Given the structure of the proposed CVAE, the generation process takes the input of z and c where z
is derived from w
. This suggests that the proposed method requires us to know a collection of scenarios from the true distribution. If this is the case, it would be better to have a clear problem statement in Sec 3. Based on such understanding, I am wondering about the process of generating scenarios used for getting K representatives - it would be great if codes like Alg 1 was provided. - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). - The structure of the encoder is not clear to me. The notation q_{\phi} is used to denote two different functions q(z w,D) and q ( c , D )
. Does that mean they are the same network? - It would be better to experimentally justify the choice of the dimension of c and z. - It looks to me that the proposed methods are designed for graph-based problems, while two-stage integer programming does not have to be graph problems in general. If this is the case, it would be better to clearly indicate the scope of the considered problem. Before reaching Sec 4.2, I was thinking that the paper could address general settings. - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me by saying “append objective values to the representations” at the beginning of Sec 5. - The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization. | - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). |
tsbdcgaCtk | ICLR_2024 | 1. generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering if there is some disturbances are made to the sentence in the training data, will the proposed model generate the correct quality label (showing the quality goes down)?
2. according to fig.1 , the prediction of quality labels is not good at all. The model seems not to be able to discriminate candidates with different qualities.
3. using QE label as the generation labels seems to be an interesting idea. Will you please give some examples of the same source sentence translated with different QE labels? It would be nice to see the effect demonstrated.
4. I am not quite sure how is the quality difference between two translations with 1 point difference in MetricX or Comet score. It will be better to give some examples to show how the translation quality is improved indeed. | 1. generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering if there is some disturbances are made to the sentence in the training data, will the proposed model generate the correct quality label (showing the quality goes down)? |
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. 4. This work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.
Weakness: 1. The authors assume that all training data are from the API response, but what if the adversary only uses the part of the API response? 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5.
The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. | 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. |
NIPS_2017_143 | NIPS_2017 | For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adverserial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on majority) kind of makes sense. But I imagine that such losses already have been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "pershaps even in most [...] practical scenarios" predicting accurate on the majority is most important. I contradict: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car to significantly overestimate the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS:
What's "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar as having your discrapency p alongside x,y. | - Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player). |
ICLR_2021_863 | ICLR_2021 | Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques includes Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the illustration of Distributional RL lacks clarity. 3. The details of state representation are not explained clear. For an end-to-end method like DRL, it is crucial for state representation for training a good agent, as for network architecture. 4. The experiments are not comprehensive for validating that this algorithm works well in a wide range of scenarios. The efficiency, especially the time efficiency of the proposed algorithm, is not shown. Moreover, other DRL benchmarks, e.g., TD3 and DQN, should also be compared with. 5. There are typos and grammar errors.
Detailed Comments 1. Section 3.1, first paragraph, quotation mark error for "importance". 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. 3. The authors should state clearly as to why the complete state history is enough to reduce POMDP for the no-CSI case. 4. Section 3.2.1: The first expression for J ( θ )
is incorrect, which should be Q ( s t 0 , π θ ( s t 0 ) )
. 5. The paper did not explain Figure 2 clearly. In particular, what does the curve with the label "Expected" in Fig. 2(a) stand for? Not to mention there are multiple misleading curves in Fig. 2(b)&(c). The benefit of introducing distributional RL is not clearly explained. 6. In Table 1, only 4 classes of users are considered in the experiment sections, which might not be in accordance with practical situations, where there can be more classes of users in the real system and more user numbers. 7. In the experiment sections, the paper only showed the Satisfaction Probability of the proposed method is larger than conventional methods. The algorithm complexity, especially the time complexity of the proposed method in an ultra multi-user scenario, is not shown. 8. There is a large literature on wireless scheduling with latency guarantees from the networking community, e.g., Sigcomm, INFOCOM, Sigmetrics. Representative results there should also be discussed and compared with.
====== post rebuttal: My concern regarding the experiments remains. I will keep my score unchanged. | 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. |
NIPS_2018_430 | NIPS_2018 | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whether the method could lead to progress in the direction of cleaner machine learning methods for standard machine learning tasks (e.g. MNIST). Questions: - How where the time limits in the inner and outer problem chosen? Did larger timeouts lead to better solutions? - It would be helpful to have an algorithmic writeup of the solution of the pricing problem. - SVM gave often good results on the datasets. Did you use a standard SVM that produced a linear classifier or a Kernel method? If the former is true, this would mean that the machine learning tasks where rather easy and it would be necessary to see results on more complicated problems where no good linear separator exists. Conclusion: I very much like the paper and strongly recommend its publication. The authors propose a theoretically well grounded approach to supervised classifier learning. While the number of problems that one can attack with the method is not so large, the theoretical (problem formulation) and practical (Dantzig-Wolfe solver) contribution can possibly serve as a starting point for further progress in this area of machine learning. | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. |
nE1l0vpQDP | ICLR_2025 | - Given the existing literature on the implicit bias of optimization methods, the primary concern is the significance of the results presented. For instance, the classic result by [Z. Ji and M. Telgarsky] demonstrates a convergence rate $\log\log n/\log n$ of GD to the L2-margin solution, which is faster than the rate shown in this submission. Moreover, [C. Zhang, D. Zou, and Y. Cao] have shown much faster rates for Adam converging to the L-infinity margin solution. This submission also lacks citations to these papers and other relevant works:
[Z. Ji and M. Telgarsky] The implicit bias of gradient descent on nonseparable data, COLT 2019.
[C. Zhang, D. Zou, and Y. Cao] The Implicit Bias of Adam on Separable Data. 2024.
[S. Xie and Z. Li] Implicit Bias of AdamW: l_\infty-Norm Constrained Optimization. ICML 2024
[M. Nacson, N. Srebro, and D. Soudry] Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. AISTATS 2019.
- Since AdaGrad-Norm has the same implicit bias as GD, the advantages of using AdaGrad-Norm over GD are unclear.
- The bounded noise assumption, while common, is somewhat restrictive in stochastic optimization literature. There have been several efforts to extend these noise conditions:
[A. Khaled and P. Richt´arik]. Better theory for sgd in the nonconvex world. TMLR 2023.
[R. Gower, O. Sebbouh, and N. Loizou] Sgd for structured nonconvex functions: Learning rates, minibatching and interpolation. AISTATS 2021. | - The bounded noise assumption, while common, is somewhat restrictive in stochastic optimization literature. There have been several efforts to extend these noise conditions: [A. Khaled and P. Richt´arik]. Better theory for sgd in the nonconvex world. TMLR 2023. [R. Gower, O. Sebbouh, and N. Loizou] Sgd for structured nonconvex functions: Learning rates, minibatching and interpolation. AISTATS 2021. |
BSGQHpGI1Q | ICLR_2025 | - The overall motivation of using characteristic function regularization is not clear.
- The abstract states “improves performance … by preserving essential distributional properties…” -> How does the preservation of such properties aid in generalization?
- The abstract states that the method is meant to be used in conjunction with existing regularization methods. Were the results presented results utilizing multiple forms of regularization (such as $L_2 + \psi_2$) or were the only singular forms of regularization?
- In the conclusion, the author state the follwoing: “integrating these techniques can offer a probability theory based perspective on model architecture construction which allows assembling relevant regularization mechanisms.” —> I do not see how this can be done after reading the work. can you give a concrete example of how the results presented in this work may give any insight into model architecture construction?
## Overall
While I found the work interesting and captivating to read, after finishing the manuscript I am left wondering what possible benefit the regularization provides over existing methods. The results are somewhat ambiguous and I find they do not demonstrate why or when a clear benefit can be achieved by applying the given regularization method. If the authors could provide some insight as to when and why the method would be successful, I think it would go a long way in demonstrating the real-world usefulness of characteristic function regularization. Even if this could be demonstrated in a synthetic toy setting, it could provide interesting insights. | - The overall motivation of using characteristic function regularization is not clear. |
NIPS_2021_2367 | NIPS_2021 | 1. The paper appears to be limited to a combination of existing techniques: adaptation to an unknown level of corruption (Lykouris et al., 2018); varying variances treated with a weighted version of OFUL (Zhou et al., 2021); variable decision sets (standard in contextual linear bandits). The fact that these results can be combined together is not surprising, and thus the contribution could be considered incremental. 2. The regret bounds seem sub-optimal in the level of corruption (they are of order C^2 when existing bounds seem to be of order C). The authors should discuss more this sub-optimality, is it due to the unknown corruption level? Why should we incur it?
Other comments:
The authors state that if C is known and the variances are fixed then one could directly apply OFUL with a modified variance (to add the corruption). This yields several questions: a. The regret bound would then be of order O((R+C)d\sqrt{T}), right? It would be informative to write it, so that we can compare it with the bound of Thm. 5.1. b. It is then claimed that one of the problems is the varying variances. But this problem was already solved by Weighed OFUL (Thm. 4.2 of Zhou et al 2021) for OFUL without corruption. Isn't it possible to apply the same reasoning with this algorithm? c. For the adaptation to the unknown value of C, I am wondering whether it is not possible to just apply the Corral algorithm (see Cor. 6 of [1]) to (Weighed)-OFUL with an exponential grid of possible values for C (from 1 to T). Wouldn't this imply a bound of order C \sqrt{T} log T?
The assumption that the variance is revealed by the adversary (l. 135) is not clear and should be better motivated. We understand that it was already done by Kirschner and Krause (2018) and Zhou et al (2021) but examples of practical applications would be enjoyable to make the paper more self-contained. Similarly, the assumption of varying decision sets is standard in linear bandits but a few lines to recall why this allows dealing with contexts could be helpful for a reader new to the area.
About the experiments: a.The results seem significantly different when C = 0 and 300. What is the intermediate regime? For what level of corruption does Multi-level OFUL outperform algorithms that do not consider corruption? b. I regret that the algorithm is only compared to baselines that are not designed to deal with corruption and suffer linear regrets. It would be interesting to compare Multi-level OFUL with algorithms for linear bandits with corruptions. This could be done by considering fixed variance and fixed decision sets for instance to apply existing algorithms, so that we can see the actual cost of having a more general algorithm. The algorithm could also be compared with the version of OFUL which knows C and takes into account the corruption.
Minor remarks:
How a \min in (6.3) is obtained using lemma 6.5 is not clear and should be clarified. Same for the \min in (6.8) using lemma 6.6?
How substituting (6.5) and (6.6) into (6.3) gives (6.7) should also be more detailed.
[1] Agarwal et al. Corralling a Band of Bandit Algorithms, 2017.
The authors did not mention the limitations and potential negative societal impact of their work. | 1. The paper appears to be limited to a combination of existing techniques: adaptation to an unknown level of corruption (Lykouris et al., 2018); varying variances treated with a weighted version of OFUL (Zhou et al., 2021); variable decision sets (standard in contextual linear bandits). The fact that these results can be combined together is not surprising, and thus the contribution could be considered incremental. |
8HG2QrtXXB | ICLR_2024 | - Source of Improvement and Ablation Study:
- Given the presence of various complex architectural choices, it's difficult to determine whether the Helmholtz decomposition is the primary source of the observed performance improvement. Notably, the absence of the multi-head mechanism leads to a performance drop (0.1261 -> 0.1344) for the 64x64 Navier-Stokes, which is somewhat comparable to the performance decrease resulting from the ablation of the Helmholtz decomposition (0.1261 -> 0.1412). These results raise questions about the model's overall performance gain compared to the baseline models when the multi-head trick is absent. Additionally, the ablation studies need to be explained more comprehensively with sufficient details, as the current presentation makes it difficult to understand the methodology and outcomes.
- The paper claims that Vortex (Deng et al., 2023) cannot be tested on other datasets, which seems unusual, as they are the same type of task and data that are disconnected from the choice of dynamics modeling itself. It should be further clarified why Vortex cannot be applied to other datasets.
- Interpretability Claim:
- The paper's claim about interpretability is not well-explained. If the interpretability claim is based on the model's prediction of an explicit term of velocity, it needs further comparison and a more comprehensive explanation. Does the Helmholtz decomposition significantly improve interpretability compared to baseline models, such as Vortex (Deng et al., 2023)?
- In Figure 4, it appears that the model predicts incoherent velocity fields around the circle boundary, even with non-zero velocity outside the boundary, while baseline models do not exhibit such artifacts. This weakens the interpretability claim.
- Multiscale modeling:
- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly.
- Regarding some missing experimental results with cited baselines, it's crucial to include and report all baseline results to ensure transparency, even if the outcomes are considered inferior.
- Minor issues:
- Ensure proper citation format for baseline models (Authors, Year).
- Make sure that symbols are well-defined with clear reference to their definitions. For example, in Equation (4), the undefined operator $\mathbb{I}_{\vec r\in\mathbb{S}}$ needs clarification. If it's an indicator function, use standard notation with a proper explanation. "Embed(•)" should be indicated more explicitly. | - Multiscale modeling:- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly. |
NIPS_2018_768 | NIPS_2018 | Weakness] 1: I like the paper's idea and result. However, this paper really REQUIRE the ablation study to justify the effectiveness of different compositions. For example: - In eq2, what is the number of m, and how m affect the results? - In eq3, what is the dimension of w_n? what if the use the Euclidean coordinate instead of Polar coordinate? - In eq3, how the number of Gaussian kernels changes the experiment results. 2: Based on the paper's description, I think it will be hard to replicate the result. It would be great if the authors can release the code after the acceptance of the paper. 3: There are several typos in the paper, need better proof reading. | 2: Based on the paper's description, I think it will be hard to replicate the result. It would be great if the authors can release the code after the acceptance of the paper. |
ICLR_2022_1675 | ICLR_2022 | Weakness] 1. The paper contains severe writing issues such as grammatical errors, abuses of mathematical symbols, unclear sentences, etc. 2. The paper needs more literature survey, especially about the existing defense methods using the manifold assumption. 3. The paper does not have enough (either theoretical or experimental) progress to get accepted, compared to previous methods using the manifold assumption.
[Comments] 1. First of all, use some spell/grammar checker (or ask someone else to proofread) to fix basic grammatical errors. 2. Section 3 is very unclear in general. First, I cannot understand the reason why the Section is needed at all. Manifold-based defense against adversarial example is not a new approach and reviewers know well about the manifold assumption in the adversarial machine learning setting. Section 3 does not introduce anything new more than those reviewers’ understanding, and the reasonings are too crude to be called an “analysis”. Second, the writing is not cohesive enough. Each paragraph is saying some topic, however, the connections between the paragraphs are not very clear, making Section 3 more confusing. Even in a single paragraph, the logical reasonings between sentences are sometimes not provided at all. Third, some of those contents are added for no reason. For example, Figure 1 exists for no reason whereas the figure is not referred at all in the paper. The propositions mentioned in Section 3.3 are vaguely written and not used at all. By having these unnecessary parts, the writing looks to be verbose and overstating. 3. In Section 5, the defense method should be written with more formality. Based on the description given in the paper “dx = p(max)-p(secondmax)”, it is very unclear what each term means. Each probability (the authors did not even say that they are probabilities) must correspond to the output from a softmax layer, but which model provides such a softmax layer output, the target classifier, or is there another classifier prepared for it? How are the described transformations used to get the divergence value? What does the detector do with the divergence? (All of these details should be described in Section 5.) Section 6.2 mentions some thresholding strategies, how did the detector work in Section 6.1, though? When thresholding is used, what is the threshold value used and what is the rationale of the choice of the threshold value? There are so many missing details to understand the method. 4. Section 7 looks to be a conclusion for experiments. This should be moved to Section 6 and Section 7 should be an overall conclusion of the paper. 5. The suggested method is neither creative nor novel, compared to the existing methods utilizing the distance from manifolds. As pointed out, the defense based on the manifold assumption is not a new approach. [1][3][4][6](These papers are only a few representative examples. There are many other papers on this type of defense.) Moreover, the idea of using probability divergence is already proposed by previous work [1] and an effective attack for such detection already exists. [2] (Of course, this paper proposes another probability divergence, but there is no support that this method could be significantly better than the previous work.) 6. The experiment should be done more extensively. It looks like that some transformations were brought from the Raff et al. paper [5] which tested the defense against the adversary as strong as possible. Specifically, Raff et al. considered potential improvements of existing attacks to attack their work then tested the defense performance against the improved attack. However, the paper only uses vanilla implementation in the Cleverhans library (or by the original authors). The authors should have shown that the proposed method is robust against a stronger adversary because adversaries who are aware of the method will not use a simple version of the attack. (At least, those adversaries will try using the attack suggested by Raff et al.) [References]
[1] (Meng & Chen) MagNet: a Two-Pronged Defense Against Adversarial Examples
[2] (Carlini & Wagner) MagNet and “Efficient Defenses Against Adversarial Attacks” are Not Robust to Adversarial Examples
[3] (Samangouei et al.) Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
[4] (Jiang et al.) To Trust or Not to Trust a Classifier
[5] (Raff et al.) Barrage of Random Transforms for Adversarially Robust Defense
[6] (Dubey et al.) Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search | 1. The paper contains severe writing issues such as grammatical errors, abuses of mathematical symbols, unclear sentences, etc. |
ICLR_2021_1716 | ICLR_2021 | Results are on MNIST only. Historically it’s often been the case that strong results on MNIST would not carry over to more complex data. Additionally, at least some core parts of the analysis does not require training networks (but could even be performed e.g. with pre-trained classifiers on ImageNet) - there is thus no severe computational bottleneck, which is often the case when going beyond MNIST.
The “Average stochastic activation diameter” is a quite crude measure and results must thus be taken with a (large) grain of salt. It would be good to perform some control experiments and sanity checks to make sure that the measure behaves as expected, particularly in high-dimensional spaces.
The current paper reports the hashing effect and starts relating it to what’s known in the literature, and has some experiments that try to understand the underlying causes for the hashing effect. However, while some factors are found to have an influence on the strength of the effect, some control experiments are still missing (training on random labels, results on untrained networks, and an analysis of how the results change when starting to leave out more and more of the early layers).
Correctness Overall the methodology, results, and conclusions seem mostly fine (I’m currently not very convinced by the “stochastic activation diameter” and would not read too much into the corresponding results). Additionally some claims are not entirely supported (in fullest generality), based on the results shown, see comments for more on this.
Clarity The main idea is well presented and related literature is nicely cited. However, some of the writing is quite redundant (some parts of the intro appear as literal copies later in the text). Most importantly the writing in some parts of the manuscript seems quite rushed with quite a few typos and some sentences/passages that could be rephrased for more fluent reading.
Improvements (that would make me raise my score) / major issues (that need to be addressed)
Experiments on more complex datasets.
One question that is currently unresolved is: is the hashing effect mostly attributed to early layer activations? Ultimately, a high-accuracy classifier will “lump together” all datapoints of a certain class when looking at the network output only. The question is whether this really happens at the very last layer or already earlier in the network. Similarly, when considering the input to the network (the raw data) the hashing effect holds since each data-point is unique. It is conceivable that the first layer activations only marginally transform the data in which case it would be somewhat trivially expected to see the hashing effect (when considering all activations simultaneously). However that might not explain e.g. the K-NN results. I think it would be very insightful to compute the redundancy ratio layer-wise and/or when leaving out more and more of the early layer activations (i.e. more and more rows of the activation pattern matrix). Additionally it would be great to see how this evolves over time, i.e. is the hashing effect initially mostly localized in early layers and does it gradually shape deeper activations over training? This would also shed some light on the very important issue of how a network that maps each (test-) data-point to a unique pattern generalize well?
Another unresolved question is whether it’s mostly the structure of the input-data or the labels driving the organization of the hashed space? The random data experiments answers this partially. Additionally it would be interesting to see what happens when (i) training with random data, (ii) training with random labels - is the hashing effect still there, does the K-NN classification still work?
Clarify: Does Fig 3c and 4a show results for untrained networks? I.e. is the redundancy ratio near 0 for training, test and random data in an untrained network? I would not be entirely surprised by that (a “reservoir effect”) but if that’s the case that should be commented/discussed in the paper, and improvement 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1.
Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data?
Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix.
P3: “It is worth noting that the volume of boundaries between linear regions is zero” - is this still true for non-ReLU nonlinearities (e.g. sigmoids)? If not what are the consequences (can you still easily make the claims on P1: “This linear region partition can be extended to the neural networks containing smooth activations”)? Otherwise please rephrase the claims to refer to ReLU networks only.
I disagree that model capacity is well measured by layer width. Please use the term ‘model-size’ instead of ‘model-capacity’ throughout the text. Model capacity is a more complex concept that is influenced by regularizers and other architectural properties (also note that the term capacity has e.g. a well-defined meaning in information theory, and when applied to neural networks it does not simply correspond to layer-width).
Sec 5.4: I disagree that regularization “has very little impact” (as mentioned in the abstract and intro). Looking at the redundancy ratio for weight decay (unfortunately only shown in the appendix) one can clearly see a significant and systematic impact of the regularizer towards higher redundancy ratios (as theoretically expected) for some networks (I guess the impact is stronger for larger networks, unfortunately Fig 8 in the appendix does not allow to precisely answer which networks are which).
Minor comments A) Formally define what “well-trained” means. The term is used quite often and it is unclear whether it simply means converged, or whether it refers to the trained classifier having to have a certain performance.
B) There is quite an extensive body of literature (mainly 90s and early 2000s) on “reservoir effects” in randomly initialized, untrained networks (e.g. echo state networks and liquid state machines, however the latter use recurrent random nets). Perhaps it’s worth checking that literature for similar results.
C) Remark 1: is really only the training distribution meant, i.e. without the test data, or is it the unaltered data generating distribution (i.e. without unit-ball noise)?
D) Is the red histogram in Fig 3a and 3b the same (i.e. does Fig 3b use the network trained with 500 epochs)?
E) P2 - Sufficiently-expressive regime: “This regime involves almost all common scenarios in the current practice of deep learning”. This is a bit of a strong claim which is not fully supported by the experiments - please tone it down a bit. It is for instance unclear whether the effect holds for non-classification tasks, and variational methods with strong entropy-based regularizers, or Dropout, ...
F) P2- The Rosenblatt 1961 citation is not entirely accurate, MLP today typically only loosely refers to the original Perceptron (stacked into multiple-layers), most notably the latter is not trained via gradient backpropagation. I think it’s fine to use the term MLP without citation, or point out that MLP refers to a multi-layer feedforward network (trained via backprop).
G) First paragraph in Sec. 4 is very redundant with the first two bullet points on P2 (parts of the text are literally copied). This is not a good writing style.
H) P4 - first bullet point: “Generally, a larger redundancy ratio corresponds a worse encoding property.”. This is a quite hand-wavy statement - “worse” with respect to what? One could argue that for instance for good generalization high redundancy could be good.
I) Fig 3: “10 epochs (red) and 500 epochs (blue),” does not match the figure legend where red and blue are swapped.
J) Fig 3: Panel b says “Rondom” data.
K) Should the x-axis in Fig 3c be 10^x where x is what’s currently shown on the axis? (Similar to how 4a is labelled?)
L) Some typos P2: It is worths noting P2: By contrast, our the partition in activation hash phase chart characerizes goodnessof-hash. P3: For the brevity P3: activation statue | 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1. Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data? Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix. |
NIPS_2018_947 | NIPS_2018 | weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, which seems really small. In comparison, when I look at the SMT Competition of 2017, specifically the QF_NIA division (http://smtcomp.sourceforge.net/2017/results-QF_NIA.shtml?v=1500632282), I find that all 5 solvers listed require 300-700 seconds. The same can be said about QF_BF and QF_NRA (links to results here http://smtcomp.sourceforge.net/2017/results-toc.shtml). While the learned model definitely improves over Z3 under the time limit of 10 seconds, the discrepancy with the competition results on similar formula types is intriguing. Can you please clarify? I should note that while researching this point, I found that the SMT Competition of 2018 will have a "10 Second wonder" category (http://smtcomp.sourceforge.net/2018/rules18.pdf). - Pruning via equivalence classes: I could not understand what is the partial "current cost" you mention here. Thanks for clarifying. - Figure 3: please annotate the axes!! - Bilinear model: is the label y_i in {-1,+1}? - Dataset statistics: please provide statistics for each of the datasets: number of formulas, sizes of the formulas, etc. - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? - Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol. 7. 2007. [b] Khalil, Elias Boutros, et al. "Learning to Branch in Mixed Integer Programming." AAAI. 2016. Minor typos: - Line 283: looses -> loses | - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? |
NIPS_2020_556 | NIPS_2020 | * The visual quality/fidelity of the generated images is quite low. Making sure that the visual fidelity on common metrics such as FID matches or is at least close enough to GAN models will be useful to validate that the approach supports high fidelity (as otherwise it may be the case that it achieves compositionality at the expense of lower potential for fine details or high fidelity, as is the case in e.g. VAEs). Given that there have been many works that explore combinations of properties for CelebA images with GANs, showing that the proposed approach can compete with them is especially important. * It is unclear to me if MCMC is efficient in terms of training and convergence. Showing learning plots as well compared to other types of generative models will be useful. * The use of energy models for image generation is much more unexplored compared to GANs and VAEs and so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned through data subsets, is similar to a prior VAE paper. See further details in the related work review part. * Given the visual samples in the paper, it looks as if it might be the case that the model has limited variability in generated images: the face images in figure 3 show that both in the second and 4th rows the model tends to generate images that feature unspecified but correlated properties, such as the blonde hair or the very similar bottom three faces. That’s also the case in figure 5 rows 2-4. Consequently, it gives the sense that the model or sampling may not allow for large variation in the generated images, but rather tend to take typical likely examples, as happened in the earlier GAN models. A quantitative comparison of the variance in the images compared to other types of generative models will be useful to either refute or validate this. | * The use of energy models for image generation is much more unexplored compared to GANs and VAEs and so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned through data subsets, is similar to a prior VAE paper. See further details in the related work review part. |
NIPS_2020_1335 | NIPS_2020 | Given how strong the first four sections (five pages) of the paper were, I was relatively disappointed in the experiments, which were somewhat light. Specifically: 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the experiments presented, learning a *uniform* state-action-independent weighting would have sufficed. Moreover, since learning a state-action-independent weighting is much simpler (i.e. it is a single scalar), it may even outperform the authors' methods for the current experiments. Based on this, I would like to suggest the following: 1a) Could the authors provide visualizations of the state-action variation of their learnt weightings? They plot the average weight in some cases (Fig 1 and 3), but given Cartpole has such a small state-action space, it should be possible to visualize the variation. The specific question here is: do the weights vary much at all in these cases? 1b) Could the authors include a baseline of learnt state-action-*independent* weights? In other words, this model has a single parameter, replacing z_phi(s,a) with a single scalar z. This should be pretty easy to implement. The authors could take any (or all) of their existing gradient approximators and simply average them across all (s,a) in a batch to get the gradient w.r.t. z. 1c) Could the authors include an additional experiment that specifically benefits from learning state-action-*dependent* (so non-uniform) weights? Here is a simple example for Cartpole: the shaping reward f(s,a) is helpful for half the state space and unhelpful for the other half. The "halves" could be whether the pole orientation is in the left or right half. The helpful reward could be that from Section 5.1 while the unhelpful reward could be that from the first adaptability test in Section 5.3. 2) To me, the true power of the author's approach is not in learning to ignore bad rewards (just turn them off!) but to intelligently incorporate sort-of-useful-but-not-perfect rewards. This way a researcher can quickly hand design an ok shaping reward but then let the authors' method transform it into a good one. Thus, I was surprised the experiments focussed primarily on ignoring obviously bad rewards and upweighting obviously good rewards. In particular, the MuJoCo experiments would be more compelling if they included more than just a single unhelpful shaping reward. I think the authors could really demonstrate the usefulness of their method there by doing the following: hand design a roughly ok shaping reward for each task. For example, the torso velocity or head height off the ground for Humanoid-v2. Then apply the authors' method and show that it outperforms naive use of this shaping reward. 3) Although the authors discussed learning a shaping reward *from scratch* in the related work section, I was surprised that they did not included this as a baseline. One would like to see that their method, when provided with a decent shaping reward to start, can learn faster by leveraging this hand-crafted knowledge. Fortunately, it seems to me again very easy to implement a baseline like this within the author's framework: simply set f(s,a)=1 and use the authors' methods (perhaps also initializing z_phi(s,a)=0). | 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the experiments presented, learning a *uniform* state-action-independent weighting would have sufficed. Moreover, since learning a state-action-independent weighting is much simpler (i.e. it is a single scalar), it may even outperform the authors' methods for the current experiments. Based on this, I would like to suggest the following: 1a) Could the authors provide visualizations of the state-action variation of their learnt weightings? They plot the average weight in some cases (Fig 1 and |
NIPS_2018_985 | NIPS_2018 | Weakness: - One drawback is that the idea of dropping a spatial region in training is not new. Cutout [22] and [a] have been explored this direction. The difference towards previous dropout variants is marginal. [a] CVPR'17. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. - The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig.5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I will suggest to repeat the experiments and conduct statistical significance analysis on the numbers. Thus, due to the limited novelty and marginal improvement, I suggest to reject the paper. | - The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig.5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I will suggest to repeat the experiments and conduct statistical significance analysis on the numbers. Thus, due to the limited novelty and marginal improvement, I suggest to reject the paper. |
ICLR_2023_624 | ICLR_2023 | 1. evaluation on a single domain
The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the literature. This would also verify whether the method works with discrete action spaces and high-dimensional observations.
2. evaluation on a setting created by the authors, no well-established external benchmark
The authors seem to create their own train and test splits in Meta World. This seems strange since Meta World recommends a particular train and test split (e.g. MT10 or MT50) in order to ensure fair comparison across different papers. I strongly suggest running experiments on a pre-established setting so that your results can easily be compared with prior work (without having to re-implement or re-run them). You don't need to get SOTA results, just show how it compares with reasonable baselines like the ones you already include. Otherwise, there is a big question mark around why you created your own "benchmark" when a very similar one exists already and whether this was somehow carefully designed to make your approach look better.
3. limited number of baselines
While you do have some transformer-based baselines I believe the method could greatly benefit from additional ones like BC, transformer-BC, and other offline RL methods like CQL or IQL. Such comparisons could help shed more light into whether the transformer architecture is crucial, the hypernetwork initialization, the adaptation layers, or the training objective.
4. more analysis is needed
It isn't clear how the methods compare with the given expert demonstrations on the new tasks. Do they learn to imitate the policy or do they learn a better policy than the given demonstration? I suggest comparing with the performance of the demonstration or policy from which the demonstration was collected.
If the environment is deterministic and the agent gets to see expert demonstrations, isn't the problem of learning to imitate it quite easy? What happens if there is more stochasticity in the environments or the given demonstration isn't optimal?
When finetuning transformers, it is often the case that they forget the tasks they were trained on. It would be valuable to show the performance of your different methods on the tasks they were trained on after being finetuned on the downstream tasks. Are some of them better than the others at preserving previously learned skills?
5. missing some important details
The paper seems to be missing some important details regarding the experimental setup. For example, it wasn't clear to me how the learning from observations setting works. At some point you mention that you condition on the expert observations while collecting online data. Does this assume the ability to reset the environment in any state / observation? If so, this is a big assumption that should be more clearly emphasized and discussed. how exactly are you using the expert observations in combination with online learning?
There are also some missing details regarding the expertise of the demonstrations at test time. Are these demonstrations coming from an an expert or how good are they? Minor
sometimes you refer to generalization to new tasks. however, you finetune your models, so i believe a better term would be transfer or adaptation to new tasks. | 1. evaluation on a single domain The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the literature. This would also verify whether the method works with discrete action spaces and high-dimensional observations. |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. |
NIPS_2020_1817 | NIPS_2020 | There are a few points that are not clear from the paper, which I list below: - As far as I understood in the clustered attention (not the improved one), the value of the i-th query becomes the value of the centroid of the cluster that the query belongs to. So after one round of applying the clustered attention, we have a set C distinct values in N nodes. I wonder what is the implication of this for the next round of the clustered attention, because there is no way to have two nodes that were in the same cluster in the previous round to be in different clusters in the next round (as their values will be the same after round 1) and the only change in the clustering that makes sense is merging clusters (which is not the case as apparently the number of clusters stays the same). Isn’t this too restrictive? What if the initial clustering is not good, then the model has no chance to recover? If the number of clusters stays the same, does the clustering in the layer after layer 1 does anything different than the clustering in the layer 1 (if not they're removable)? - It’s a bit unclear if LSH-X is the Reformer, or a simpler version of the reformer (LSH Transformer). The authors mentioned that the Reformer can’t be used in a setup with heterogeneous queries and keys. First of all, I think it shouldn't be that hard to modify Reformer to support this case. Besides, authors don’t have any task in that setup to see how well the clustered attention does when the clustered queries are not the projections of the inputs that the keys are projected from. - The experiments that are done in the setup that the model has to deal with long sequences is limited to a single modality. Would be nice to have the model evaluated on large inputs in vision/text/algorithmic tasks as well. - Although the method is presented nicely and the experiments are rather good and complete, a bit of analysis on what the model does, which can be extremely interesting, is missing (check the feedback/suggestions). - The authors only consider vanilla transformer and (I think an incomplete version of) Reformer, while there are obvious baselines, e.g. Longformer, sparse transformer, or even Local attention (check the feedback/suggestions). | - Although the method is presented nicely and the experiments are rather good and complete, a bit of analysis on what the model does, which can be extremely interesting, is missing (check the feedback/suggestions). |
ARR_2022_209_review | ARR_2022 | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date?
2. There isn't one clear aggregation strategy that gives consistent performance gains across all tasks. So it is hard for someone to implement this approach in practice.
1. Experimental setup details: Can you explain how you pick which notes from the patient's EHR do you use an input and how far away are the outcomes from the last note date? Also, how do you select the patient population for the experiments? Do you use all patients and their admissions for prediction? Is the test set temporally split or split according to different patients?
2. Is precision more important or recall? You seem to consider precision more important in order to not raise false alarms. But isn't recall also important since you would otherwise miss out on reporting at-risk patients?
3. You cannot refer to appendix figures in the main paper (line 497). You should either move the whole analysis to appendix or move up the figures.
4. How do you think your approach would compare to/work in line with other inputs such as structured information? AUCROC seems pretty high in other models in literature.
5. Consider explaining the tasks and performance metrics when you call them out in the abstract in a little more detail. It's a little confusing now since you mention mortality prediction and say precision@topK, which isn't a regular binary classification metric. | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date? |
K98byXpOpU | ICLR_2024 | 1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique difference between DMLCBO and above mentioned works.
2. In the experimental part, the author only shows the results of DMLCBO in early time, it will be more informative to provide results in the later steps.
3. In Table 3, DMLCBO exhibits higher variance compared with other baselines in MNIST datasets, the authors are encouraged to discuss more experimental details about it and explain the behind reason.
[1] A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum
[2] Provably Faster Algorithms for Bilevel Optimization | 1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique difference between DMLCBO and above mentioned works. |
NIPS_2021_386 | NIPS_2021 | 1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS with ImageNet dataset. 2. There is no actual new algorithmic or research contribution in this paper. The paper uses the methods of [Nguyen et al., 2021] directly. The only contribution seems to be running large-scale experiments of the same methods. However, compared to [Nguyen et al., 2021], it seems that there are some qualitative differences in the obtained images as well (lines 173-175). The authors do not clearly explain what these differences are, or why there are any differences at all (since the approach is identical). The only thing reviewer could understand is that this is due to ZCA preprocessing which does not sound like a major contribution. 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is.
Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021.
Update: Please see my comment below. I have increased the score from 3 to 5. | 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021. Update: Please see my comment below. I have increased the score from 3 to 5. |
oEuTWBfVoe | ICLR_2024 | I think the paper has several weaknesses. Please see the following list and the questions sections.
* The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accepted that backpropagation is biologically implausible.
* Regarding the following sentence in Section 3 "We further define a readout ... ,i.e., $\mathbf{m}^t = f(y^t).$", did you mean to write $\mathbf{m}^t = f(\mathbf{y}^t)$ ($\mathbf{y}^t$ with boldsymbol)?
* On page 4, "setting $\theta_{110} = 1$ and $\theta_{012} = -1$," the second term should be $\theta_{021}$ rather than $\theta_{012}$.
* The initialization of polynomial coefficient parameters is not clear, and it seems they are initialized close to zero according to Figure 2. It would be valuable to explain how they were initialized.
* The paper models synaptic plasticity rules only for feedforward connections. It would be interesting to explore the impact of lateral connections (by adding additional terms in Equation 6). Have you experimented with such a setup?
* In page 7, the authors state that "In the case of the MLP, we tested various architectures and highlight results for a 3-10-1 neuron topology." What are the results for other various architectures? Putting them into the paper would also be valuable (as ablation studies).
* The hyperparameters for the experiments are missing? What is the learning rate, what is the optimizer, etc.?
* I do not see that much difference between the experiment presented in Section 4 and the experiment in (Confavreux et al., 2020) (Section 3.1) except the choice of optimization method. In your experimental setup, you also do not model the global reward. Therefore, I think it makes it more similar to the experiment in (Confavreux et al., 2020).
* Comparison to previous work is missing. | * The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accepted that backpropagation is biologically implausible. |
ztT70ubhsc | ICLR_2025 | - The professional sketches (Multi-Gen-20M) considered in this work are in binarised versions of HED edges, which is very different from what a real artist would draw (no artist or professional sketcher would produce lines like those in Figure 1). This makes the basic assumptions/conditions of the paper not very rigorous, somewhat deviating from the ambitious objectives, i.e., dealing with pro-sketch and any other complexity levels with a unified model.
- The modulator is heuristically designed. It is hard to justify if there is a scalability issue that might need tedious hyperparameter tuning for diverse training data.
- The effectiveness and applicability of the knob mechanism is questionable.
- From Figure 6, the effect does not seem very pronounced: in the volcano example, the volcano corresponding to the intermediate gamma value appears to match the details of the input sketch better; in Keith's example (the second row from the bottom), the changes in facial details are also not noticeable.
- Besides, the user has to try different knob values until satisfaction (and this may be pretty different for diverse input sketches) since it has no apparent relation to the user's need for the complexity level from the input sketches.
- The impact of fine-grained cues is hard to manage precisely, as they have been injected into the model at early denoising steps, and the effect will last in the following denoising steps.
- The current competitors in experiments are not designed for sketches. It would be great if some sketch-guided image generation works, e.g., [a], could be compared and discussed.
- There is a “second evaluation set” with 100 hand-drawn images created by novice users used for experiments. It would be great to show these sketch images for completion.
[a] Sketch-Guided Text-to-Image Diffusion Models, SIGGRAPH 2023 | - The modulator is heuristically designed. It is hard to justify if there is a scalability issue that might need tedious hyperparameter tuning for diverse training data. |
NIPS_2016_241 | NIPS_2016 | /challenges of this approach. For instance... - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. - The results are only reported after a bunch of training has occurred, but in RL we are often also interested in how the agent behaves *while* learning. I presume that early in training the model parameters are essentially garbage and the planning component of the network might actually *hurt* more than it helps. This is pure speculation, but I wonder if the CNN is able to perform reasonably well with less data. - I wonder whether more could be said about when this approach is likely to be most effective. The navigation domains all have a similar property where the *dynamics* follow relatively simple, locally comprehensible rules, and the state is only complicated due to the combinatorial number of arrangements of those local dynamics. WebNav is less clear, but then the benefit of this approach is also more modest. In what kinds of problems would this approach be inappropriate to apply? ---Clarity--- I found the paper to be clear and highly readable. I thought it did a good job of motivating the approach and also clearly explaining the work at both a high level and a technical level. I thought the results presented in the main text were sufficient to make the paper's case, and the additional details and results presented in the supplementary materials were a good compliment. This is a small point, but as a reader I personally don't like the supplementary appendix to be an entire long version of the paper; it makes it harder to simply flip to the information I want to look up. I would suggest simply taking the appendices from that document and putting them up on their own. ---Summary of Review--- I think this paper presents a clever, thought-provoking idea that has the potential for practical impact. I think it would be of significant interest to a substantial portion of the NIPS audience and I recommend that it be accepted. | - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. |
NIPS_2022_532 | NIPS_2022 | • It seems that ODA, one of the methods of solving the MOIP problem, has learned the policy to imitate the problem-solving method, but it did not clearly suggest how the presented method improved the performance and computation speed of the solution rather than just using ODA.
• In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, and how the performance changes depending on the size of the labeled data. | • In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, and how the performance changes depending on the size of the labeled data. |
47hDbAMLbc | ICLR_2024 | - The paper is mainly dedicated to the existence of robust training. No results on optimization or robust generalization are derived. Given that, the scope seems to be quite limited.
- Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may have stronger implications if they are connected to generalization bounds. It is not clear in the paper that the constructions of ReLU networks for robust memorization would lead to robust generalization. I know the authors acknowledge this in the conclusion, but I think this is a very serious question.
- The main theorems 4.8 and 5.2 only guarantee the existence of optimal robust memorization. These results would be more useful if an optimization or constructive algorithm is given to find the optimal memorization. | - Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may have stronger implications if they are connected to generalization bounds. It is not clear in the paper that the constructions of ReLU networks for robust memorization would lead to robust generalization. I know the authors acknowledge this in the conclusion, but I think this is a very serious question. |
D0gAwtclWk | EMNLP_2023 | 1 While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results.
2 The paper does not discuss the computational efficiency of the proposed method. As the Soft-InfoNCE method involves additional computations for weight assignment, it would be important to understand the trade-off between improved performance and increased computational cost.
3 While the authors present the results of their experiments, they do not provide an in-depth analysis of these results. More detailed analysis, including a discussion of cases where the proposed method performs exceptionally well or poorly, could have added depth to the paper. | 1 While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results. |
NIPS_2017_337 | NIPS_2017 | of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writing at line 102 and defended in Appendix B. Another related assumption is made at line 121: the parameter space is assumed to be an l2-ball of radius rho.
The paper is well written. Here are some minor comments:
- The appendices are well connected to the main body, this is very much appreciated.
- Figure 2 and 3 are hard to read on paper when printed in black-and-white.
- There is a typo on line 237.
- Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts.
- The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory. | - The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory. |
NIPS_2020_696 | NIPS_2020 | * A big concern for me is that this paper was hard to read. Since it is very applications specific, I am not familiar with a lot of the theory or the inverse problem(s) considered here. As a result, I am unable to appreciate the key aspects of the paper. For example, the introduction directly gets into the details of wave based imaging without sufficient detail or context with more commonly considered inverse problems. This makes it unapproachable for someone not familiar with this exact application. There is quite a bit of detail left in the supplement, but I believe this should be in the main paper for the contribution to be fully appreciated. * A second point about it being so applications specific is that the paper lacks context to existing methods, for e.g. how can FIONets be useful for someone outside of wave-based imaging? * Another issue is that there are no quantitative comparisons in the main paper (but in the supplement), leaving only qualitative comparisons. * There are no comparisons to any other method other than a U-Net (which essentially serves as an ablation of whether or not including the physics based network helps). Considering this is a linear inverse problem, what are other existing solutions to this problem? It is imperative to compare the proposed FIONet to iterative or classical solutions to the problem to place them in context. * Regarding the OOD experiments, this is indeed interesting because the trained network is able to give strong OOD generalization. However, particularly in imaging in the recent few years several papers have shown that untrained NNs (like deep image prior Ulyanov et al., CVPR 2018) can be used to solve inverse problems across a very wide class of images. It maybe good to mention this in the paper and place the current method in context and Ideally, also compare with those class of methods. * I am not very sure how to read or interpret figure 7 describing the diffeomorphisms. * A minor comment, there is already a model called “routing networks” (Rosenbaum et al, ICLR 2018) which are different from those described in the paper. In the interest of mitigating confusion for the reader it maybe better to clarify or re-name the model. | * Regarding the OOD experiments, this is indeed interesting because the trained network is able to give strong OOD generalization. However, particularly in imaging in the recent few years several papers have shown that untrained NNs (like deep image prior Ulyanov et al., CVPR 2018) can be used to solve inverse problems across a very wide class of images. It maybe good to mention this in the paper and place the current method in context and Ideally, also compare with those class of methods. |
NIPS_2020_83 | NIPS_2020 | While I think the paper makes a good contribution, there are some limitation at the present stage: - [Remark 3.1] While it has been done in previous works, I think that a deeper understanding of those cases where modelling the pushforward P in (8) as a composition of perturbation in an RKHS does not introduce an error, would increase the quality of the work. Alternatively, trying to undestand the kind of error that this parametrization introduces would be valuable too. - The analysis does not cover explicitly what happens when the input measures \beta_i are absolutely continuous and one has to rely on samples. How does the sampling part impact the bound? - The experiments are limited to toy data. There is a range of problems with real data where barycenters can be used and it would be interesting to show performance of the method in those settings too. | - The experiments are limited to toy data. There is a range of problems with real data where barycenters can be used and it would be interesting to show performance of the method in those settings too. |
ICLR_2021_738 | ICLR_2021 | ---:
1: This paper ensembles some existing compression/NAS approaches to improve the performance of BNNs, which is not significant enough.
The dynamic routing strategy (conditional on input) has been widely explored. For example, the proposed dynamic formulation in this paper has been used in several studies [2, 3].
Varying width and depth has been extensively explored in the quantization literature, especially in AutoML based approaches [Shen et al. 2019, Bulat et al. 2020], to design high capacity quantized networks.
The effectiveness of the group convolution in BNNs was initially studied in [1]. Later works also incorporate the group convolution into the search space in NAS+BNNs methods [e.g., Bulat et al. 2020a] to reduce the complexity.
2: In each layer, the paper introduces a full-precision fully-connected layer to decide which expert to use. However, for deeper networks, such as ResNet-101, it will include ~100 full-precision layers, which can be very expensive especially in BNNs. As a result, it deteriorates the benefits and practicability of the dynamic routing mechanism.
3: The actual speedup, memory usage and energy consumption on edge devices (e.g., CPU/GPU/FPGA) or IoT devices must be reported. Even though the full-precision operations only account for a small amount of computations in statistics, it can have a big influence on the efficiency on platforms like FPGA.
4: This paper proposes to learn the binary gates via gradient-based optimization while exploring the network structure via EfficientNet manner. Then the problem comes. This paper can formulate the <width, depth, groups and layer arrangement> as configuration vectors and optimize them using policy gradients and so on, with the binary gates learning unified in a gradient-based framework. So what is the advantage of the "semi-automated" method of EfficientNet over the gradient-based optimization? In addition, how about learning a policy agent via RL to predict the gates? I encourage the authors can add comparsions and discussions with these alternatives.
5: More experiments on deeper networks (e.g., ResNet-50) and other network structures (e.g., MobileNet) are needed to further strengthen the paper. References:
[1] MoBiNet: A Mobile Binary Network for Image Classification, in WACV 2020.
[2] Dynamic Channel Pruning: Feature Boosting and Suppression, in ICLR2019.
[3] Learning Dynamic Routing for Semantic Segmentation, in CVPR2020. | 5: More experiments on deeper networks (e.g., ResNet-50) and other network structures (e.g., MobileNet) are needed to further strengthen the paper. References: [1] MoBiNet: A Mobile Binary Network for Image Classification, in WACV 2020. [2] Dynamic Channel Pruning: Feature Boosting and Suppression, in ICLR2019. [3] Learning Dynamic Routing for Semantic Segmentation, in CVPR2020. |
NIPS_2018_87 | NIPS_2018 | for a wide range of supervisory signals such as, video level action labels, single temporal point, one GT bounding box, temporal bounds etc. The method is experimentally evaluated on the UCF-101-24 and DALY action detection datasets. Paper Strengths: - The paper is clear and easy to understand. - The problem formulation is interesting and described with enough details. - The experimental results are interesting and promising, clearly demonstrate the significance of varying level of supervision on the detection performance - Table 1. - On DALY dataset, as expected, the detection performance increases with access to more supervision. - The proposed approach outperforms the SOA [46] by a large margin of 18% (video mAP) on DALY dataset at all levels of supervision. - On UCF-101-24, the proposed approach outperforms the SOA [46] when bounding box annotations are available at any level, i.e., Temp.+1 BB, Temp. + 3 BBs, Fully Spervised (cf. Table 1). - The visuals are helpful, support well the paper, and the qualitative experiments (in supplementary material) are interesting and convincing. Paper Weaknesses: I haven't noticed any major weakness in this paper, however would like to mention that - on UCF-101-24, the proposed method has drop in performance as compared to the SOA [46] when supervision level is "Temporal + spatial points". This work addresses one of the major problems associated with action detection approaches based on fully supervised learning, i.e., these methods require dense frame level GT bounding box annotations and their labels, which is impractical for large scale video datasets and also highly expensive. The proposed unified action detection framework provides a way to train a ML model with weak supervision at various levels, contributing significantly to address the aforementioned problem. Thus, I vote for a clear accept. | - The experimental results are interesting and promising, clearly demonstrate the significance of varying level of supervision on the detection performance - Table 1. |
NIPS_2022_477 | NIPS_2022 | 1.In experiments, the PRODEN method also uses mixup and consistency training techniques for fair comparisons. What about other competitive baselines? I'd like to see how much the strong CC method could benefit from the representation training technique.
2.It is not clear why the proposed sample selection mechanism helps preserve the label distribution.
3.In App. B.2, a relaxed solution of Sinkhorn-Knopp algorithm is proposed. Why the relaxed problem guarantees to converge?Does Solar always run this relaxed version of Sinkhorn-Knopp?
4.How is gamma in the Sinknhorn-Knopp affect the performance?
5.How does the class distribution estimate for PRODEN in Figure 1?
Societal Impacts: The main negative impact is lower annotation costs may decrease the requirement for annotator employment.
Limitations: The experiments need to be further improved. | 2.It is not clear why the proposed sample selection mechanism helps preserve the label distribution. |
NIPS_2020_232 | NIPS_2020 | - The results/analysis albeit being detailed and comprehensive, only two relatively old and small models are evaluated. - Some of the comparison with other related works, is not completely apple-to-apples, for instance comparing fixed point representation for training, while comparing against AdderNet and DeepShift which use at least half floating points for training. Its understandable the fixed point representation has benefits, but it would probably have been more relevant to compare against similar such fixed point training, for instance other works such (not limited to) - https://arxiv.org/abs/1802.00930, https://arxiv.org/abs/1802.04680, https://arxiv.org/abs/1909.02384 - While the FPGA implementation and results with that are quite impressive and is very valuable as a prototype for the proposed method. However, for such HW solution it would help if the authors extend to do a wider comparison. For instance, with a dedicated ASIC-based implementation the quantum of benefits (table.2) would be considerably reduced. Since fp multiplication could still be cheaper since most optimizations would require non-trivial changes to the datapath, which would take away for the benefits of the faster computations | - The results/analysis albeit being detailed and comprehensive, only two relatively old and small models are evaluated. |
NIPS_2020_930 | NIPS_2020 | 1. The title is misleading and the authors might overclaim their contribution. Indeed, the stochastic problem in Eq.(1) is a special instance of nonconvex-concave minimax problems and equivalent to nonconvex compositional optimization problem in Eq.(2). Solving such problem is easier than the general case consider in [23, 34]; see also (Rafique, Arxiv 1810.02060) and (Thekumparampil, NeurIPS'19). In addition, the KKT points and approximate KKT points are also defined based on such special structure. 2. The literature review is not complete. The authors mainly focus on the algorithms for stochastic compositional optimization instead of stochastic nonconvex-concave minimax optimization. 3. The algorithm is not single-loop in general. To be more specific, Algorithm 1 needs to solve Eq.(9) at each loop. This is also a nonsmooth strongly convex problem in general and the solution does not have the closed form. To this end, what is the advantage of Algorithm 1 over prox-linear algorithms in nonsmooth case? 4. Given the current stochastic problem in Eq.(1), I believe that the prox-linear subproblem can be reformulated using the conjugate function and becomes the same as the subproblem in Algorithm 1. That is to say, we can simply improve prox-linear algorithms for solving stochastic problem in Eq.(1). This makes the motivation of Algorithm 1 unclear. 5. The proof techniques heavily depend on the biased hybrid estimators introduced in [29]. The current paper does not convince me that such extension is nontrivial and has sufficient technical novelty. | 4. Given the current stochastic problem in Eq.(1), I believe that the prox-linear subproblem can be reformulated using the conjugate function and becomes the same as the subproblem in Algorithm 1. That is to say, we can simply improve prox-linear algorithms for solving stochastic problem in Eq.(1). This makes the motivation of Algorithm 1 unclear. |
j9e3WVc49w | EMNLP_2023 | - The claim is grounded in empirical findings and does not provide a solid mathematical foundation.
- Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1, then LS and KD are equivalent.
- The authors only compared one of the existing works in this area and did not sufficiently address related works.
Here are some related works for LS and KD:
Lee, Dongkyu, Ka Chun Cheung, and Nevin Zhang. "Adaptive Label Smoothing with Self-Knowledge in Natural Language Generation." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.
Zhang, Zhilu, and Mert Sabuncu. "Self-distillation as instance-specific label smoothing." Advances in Neural Information Processing Systems 33 (2020): 2184-2195.
Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. "Revisit knowledge distillation: a teacher-free framework." arXiv preprint arXiv:1909.11723, 2019.
Yun, Sukmin, et al. "Regularizing class-wise predictions via self-knowledge distillation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. | - Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1, then LS and KD are equivalent. |
ICLR_2023_1599 | ICLR_2023 | of the proposed method are listed as below:
There are two key components of the method, namely, the attention computation and learn-to-rank module. For the first component, it is a common practice to compute importance using SE blocks. Therefore, the novelty of this component is limited.
Some important SOTAs are missing and some of them as below outperform the proposed method: (1) Ding, Xiaohan, et al. "Resrep: Lossless cnn pruning via decoupling remembering and forgetting." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. (2) Li, Bailin, et al. "Eagleeye: Fast sub-net evaluation for efficient neural network pruning." European conference on computer vision. Springer, Cham, 2020. (3) Ruan, Xiaofeng, et al. "DPFPS: dynamic and progressive filter pruning for compressing convolutional neural networks from scratch." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 3. 2021.
Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included.
Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method. | 35. No.3. 2021. Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included. Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method. |
NIPS_2017_486 | NIPS_2017 | 1. The paper is motivated with using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information â which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in introduction.
2. The improvements of the proposed model over the RL without feedback model is not so high (row3 vs. row4 in table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant.
3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback?
4. In figure 1 caption, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase is also used. However, from equations 1-4, it is not clear where the information about incorrect phrase and corrected phrase is used. Also L175 and L176 are not clear. What do the authors mean by âas an exampleâ?
5. L216-217: What is the rationale behind using cross entropy for first (P â floor(t/m)) phrases? How is the performance when using reinforcement algorithm for all phrases?
6. L222: Why is the official test set of MSCOCO not used for reporting results?
7. FBN results (table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant?
8. Table 6: can authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo?
9. Can authors discuss the failure cases of the proposed (RLF) network in order to guide future research?
10. Other errors/typos:
a. L190: complete -> completed
b. L201, âWe use either ⦠feedback collectionâ: incorrect phrasing
c. L218: multiply -> multiple
d. L235: drop âbyâ
Post-rebuttal comments:
I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that.
So, I would like to change my rating to marginally below acceptance threshold. | 7. FBN results (table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant? |
NIPS_2017_567 | NIPS_2017 | Weakness:
1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly.
Here are some examples:
(1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)?
(2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that.
2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNN do not operate on different physical time scale, but rather on the logical time scale when the stacks are sequentialized in the graph. Therefore, the only benefit here seems to be the reduce of gradient path by the slow RNN.
3. To reduce the gradient path on stacked RNN, a simpler approach is to use the Residual Units or simply fully connect the stacked cells. However, there is no comparison or mention in the paper.
4. The experimental results do not contain standard deviations and therefore it is hard to judge the significance of the results. | 1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)? (2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that. |
NIPS_2019_165 | NIPS_2019 | of the approach and experiments or list future direction for readers. The writeup is exceptionally clear and well organized-- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want to want me to look at the learning curves? Does worse performing model always result in structural collapse? What is the accuracy number? For the last task? or average? 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page 4. Explain the scramble network better... 5. Fig 1, Are these the same plots, just colored differently? It would be nice to keep all three on the same scale (the left one seems condensed) M-PHATE results in significantly more interpretable visualization of evolution than previous work. It also preserves neighbors better (Question: why do you think t-SNE works better in two conditions? The difference is very small tho). On continual learning tasks, M-PHATE clearly distinguishes poor performing learning algorithms via a collapse. (See the question about this in 5. Improvement). The generalization vignette shows that the heterogeneity in M-PHATE output correlates with performance. I would really like to recommend a strong accept for this paper, but my major concern is that the vignettes focus on one dataset MNIST and one NN architecture MLP, which makes the experiments feel incomplete. The results and observations made by authors would be much more convincing if they could repeat these experiments for more datasets and NN architectures. | 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page 4. Explain the scramble network better... |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | 3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated. Why is this particular dimension of difficulty interesting? |
NIPS_2021_121 | NIPS_2021 | Weakness] 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq (3), the analysis on the losses of the noise-injected models after training is required. 2. In Eq (4), the class prototypes before and after injecting noise are utilized for prototype fixing regularization. However, this means that F2M have to compute the prototypes of the base class every time the noise is injected: M+1 times for each update. Considering the fact that there are many classes and many samples for the base classes, this prototype fixing is computationally inefficient. If I miss some details about the prototype fixing, please fix my misunderstanding in rebuttal. 3. Analysis on the sampling times M
and noise bound value b
is missing. These values decide the flat area around the flat minima, and the performance would be affected by theses value. However, there is no analysis on M and b
in the main paper nor the appendix. Moreover, the exact value M
used for the experiments is not reported. 4. Comparison with single session incremental few-shot learning is missing. Like [42] in the main paper, there are some meta-learning based single session incremental FSL methods are being studied. Although this paper targets on multi-session incremental FSL with different setting and different dataset split, it would be more informative to compare the proposed F2M with that kind of methods, considering that the idea of finding flat minima seems valuable for the single session incremental few-shot learning task too.
There is a typo in Table 2 – the miniImageNet task is 5-way, but it is written as 10-way.
Post Rebuttal
Reviewer clarified the confusing parts of the paper, and added useful analysis during rebuttal. Therefore, I raise my score to 6. | 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq (3), the analysis on the losses of the noise-injected models after training is required. |
ICLR_2023_1765 | ICLR_2023 | weakness, which are summarized in the following points:
Important limitations of the quasi-convex architecture are not addressed in the main text. The proposed architecture can only represent non-negative functions, which is a significant weakness for regression problems. However, this is completed elided and could be missed by the casual reader.
The submission is not always rigorous and some of the mathematical developments are unclear. For example, see the development of the feasibility algorithm in Eq. 4 and Eq. 5. Firstly, t ∈ R while y , f ( θ ) ∈ R n
, where n
is the size of the training set, so that the operation y − t − f ( θ )
is not well-defined. Moreover, even if y , f ( θ ) ∈ R
, the inequality ψ t ( θ ) ≤ 0 implies l ( θ ) ≤ t 2 / 2
, rather than ( θ ) ≤ t
. Since, in general, the training problem will be defined for y ∈ R n
, the derivations in the text should handle this general case.
The experiments are fairly weak and do not convince me that the proposed models have sufficient representation power to merit use over kernel methods and other easy-to-train models. The main issue here is the experimental evaluation does not contain a single standard benchmark problem nor does it compare against standard baseline methods. For example, I would really have liked to see regression experiments on several UCI datsets with comparisons against kernel regression, two-layer ReLU networks, etc. Although boring, such experiments establish a baseline capacity for the quasi-concave networks; this is necessary to show they are "reasonable". The experiments as given have several notable flaws:
Synthetic dataset: This is a cute synthetic problem, but obviously plays to the strength of the quasi-concave models. I would have preferred to see a synthetic problem for which was noisy with non piece-wise linear relationship.
Contour Detection Dataset: It is standard to report the overall test ODS, instead of reporting it on different subgroups. This allows the reader to make a fair overall comparison between the two methods.
Mass-Damper System Datasets: This is a noiseless linear regression problem in disguise, so it's not surprising that quasi-concave networks perform well.
Change-point Detection: Again, I would really have rather seen some basic benchmarks like MNIST before moving on to novel applications like detecting changes in data distribution.
Minor Comments
Introduction: - The correct reference for SGD is the seminal paper by Robbins and Monro [1]. - The correct reference for backpropagation is Rumelhart et al. [2]
- "Issue 1: Is non-convex deep neural networks always better?": "is" should be "are". - "While some experiments show that certain local optima are equivalent and yield similar learning performance" -- this should be supported by a reference. - "However, the derivation of strong duality in the literature requires the planted model assumption" --- what do you mean by "planted model assumption"? The only necessary assumption for these works is that the shallow network is sufficiently wide.
Section 4: - "In fact, suppose there are m weights, constraining all the weights to be non-negative will result in only 1 / 2 m
representation power." -- A statement like this only makes sense under some definition of "representation power". For example, it is not obvious how non-negativity constraints affect the underlying hypothesis class (aside from forcing it to contain only non-negative functions), which is the natural notion of representation power. - Equation 3: There are several important aspects of this model which should be mentioned explicitly in the text. Firstly, it consists of only one neuron; this is obvious from the notation, but should be stated as well. Secondly, it can only model non-negative functions. This is a strong restriction and should be discussed somewhere. - "Among these operations, we choose the minimization procedure because it is easy to apply and has a simple gradient." --- the minimization operator may produce a non-smooth function, which does not admit a gradient everywhere. Nor is it guaranteed to have a subgradient since the negative function only quasi-convex, rather than convex. - "... too many minimization pooling layers will damage the representation power of the neural network" --- why? Can the authors expand on this observation?
Section 5: - "... if we restrict the network output to be smaller than the network labels, i.e., f ( θ ) ≤ y
" --- note that this observation requires y ≥ 0
, which does not appear to be explicitly mentioned. - What method is being used to solve the convex feasibility problem in Eq. (5)? I cannot find this stated anywhere.
Figure 6: - Panel (b): "conveyers" -> "converges".
Figure 7: - The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text. - "It could explain that the classification accuracy of QCNN (94.2%) outperforms that of deep networks (92.7%)" --- Is this test accuracy, or training accuracy? I assume this is the test metric on the hold-out set, but the text should state this clearly. References
[1] Robbins, Herbert, and Sutton Monro. "A stochastic approximation method." The annals of mathematical statistics (1951): 400-407.
[2] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." nature 323.6088 (1986): 533-536. | - The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text. |
ICLR_2023_3948 | ICLR_2023 | 1.This paper lacks novelty and is only a combination of some existing approaches, such as Qu et al. (2020). Moreover, I find that the equations are similar.
2.The motivation is not clear at all. The introduction should be carefully revised to make this paper easy to follow.
3.I find the experimental analysis is vague, and why the model works better is not clear. No case studies and no detailed ablation analysis. | 2.The motivation is not clear at all. The introduction should be carefully revised to make this paper easy to follow. |
NIPS_2022_2813 | NIPS_2022 | 1. The proposed method is a two-stage optimization strategy, which is a bit difficult to balance the two steps optimization. Could it be end-to-end training? 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. | 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. |
iamWnRpMuQ | ICLR_2025 | The results have a few issues which make evaluating the contribution difficult:
1. The paper lacks a comparison with some existing works, particularly methods involve iterative PPO/DPO method that train a reward model simultaneously and reward ensembles [1].
[1] Coste T, Anwar U, Kirk R, et al. Reward model ensembles help mitigate overoptimization.
2. The alignment of relabeled reward data with human annotator judgments remains insufficiently validated. | 2. The alignment of relabeled reward data with human annotator judgments remains insufficiently validated. |
NIPS_2017_104 | NIPS_2017 | ---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)?
* How does this setting relate to question answering or visual question answering?
* How does the model perform on the same train data it's seen already? How much does it overfit?
* How hard is it to find intuitive attention examples as in figure 4?
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
* The related works section would be better understood knowing how the model works, so it should be presented later. | * The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help. |
Wo66GEFnXd | ICLR_2025 | 1. This paper just simply combines neural networks into the physical sciences problems for predicting TDDFT for molecules. Due to the lack of comparison with other learning based methods and insufficient experiment results, I don’t see the novelty and effectiveness of this method from the learning perspective. Maybe this work is more appropriate for some physical science journals.
2. This paper only does experiments on a very limited number of molecules and only provides in-distribution testing for these samples. I think the value of this method would be limited if it needs to train for each molecule individually.
3. There is no comparison for this method with other state-of-art work but I think using neural networks to predict for molecules is a very popular topic. | 2. This paper only does experiments on a very limited number of molecules and only provides in-distribution testing for these samples. I think the value of this method would be limited if it needs to train for each molecule individually. |
NIPS_2018_461 | NIPS_2018 | 1. Symbols are a little bit complicated and takes a lot of time to understand. 2. The author should probably focus more on the proposed problem and framework, instead of spending much space on the applications. 3. No conclusion section Generally I think this paper is good, but my main concern is the originality. If this paper appears a couple years ago, I would think that using meta-learning to solve problems is a creative idea. However, for now, there are many works using meta-learning to solve a variety of tasks, such as in active learning and reinforcement learning. Hence, this paper seems not very exciting. Nevertheless, deciding the number of clusters and selecting good clustering algorithms are still useful. Quality: 4 of 5 Clarity: 3 of 5 Originality: 2 of 5 Significance: 4 of 5 Typo: Line 240 & 257: Figure 5 should be Figure 3. | 1. Symbols are a little bit complicated and takes a lot of time to understand. |
NIPS_2019_1350 | NIPS_2019 | of the method. CLARITY: The paper is well organized, partially well written and easy to follow, in other parts with quite some potential for improvement, specifically in the experiments section. Suggestions for more clarity below. SIGNIFICANCE: I consider the work significant, because there might be many settings in which integrated data about the same quantity (or related quantities) may come at different cost. There is no earlier method that allows to take several sources of data into account, and even though it is a fairly straightforward extension of multi-task models and inference on aggregated data, it is relevant. MORE DETAILED COMMENTS: --INTRO & RELATED WORK: * Could you state somewhere early in the introduction that by "task" you mean "output"? * Regarding the 3rd paragraph of the introduction and the related work section: They read unnaturally separated. The paragraph in the introduction reads very technical and it would be great if the authors could put more emphasis there in how their work differs from previous work and introduce just the main concepts (e.g. in what way multi-task learning differs from multiple instance learning). Much of the more technical assessment could go into the related work section (or partially be condensed). --SECTION 2.3: Section 2 was straightforward to follow up to 2.3 (SVI). From there on, it would be helpful if a bit more explanation was available (at the expense of parts of the related work section, for example). More concretely: * l.145ff: $N_d$ is not defined. It would be good to state explicitely that there could be a different number of observations per task. * l.145ff: The notation has confused me when first reading, e.g. $\mathbb{y}$ has been used in l.132 for a data vector with one observation per task, and in l.145 for the collection of all observations. I am aware that the setting (multi-task, multiple supports, different number of observations per task) is inherently complex, but it would help to better guide the reader through this by adding some more explanation and changing notation. Also l.155: do you mean the process f as in l.126 or do you refer to the object introduced in l.147? * l.150ff: How are the inducing inputs Z chosen? Is there any effect of the integration on the choice of inducing inputs? l.170: What is z' here? Is that where the inducing inputs go? * l.166ff: It would be very helpful for the reader to be reminded of the dimensions of the matrices involved. * l.174 Could you explicitly state the computational complexity? * Could you comment on the performance of this approximate inference scheme based on inducing inputs and SVI? --EXPERIMENTS: * synthetic data: Could you give an example what kind of data could look like this? In Figure 1, what is meant by "support data" and what by "predicted training count data"? Could you write down the model used here explicitly, e.g. add it to the appendix? * Fertility rates: - It is unclear to me how the training data is aggregated and over which inputs, i.e. what you mean by 5x5. - Now that the likelihood is Gaussian, why not go for exact inference? * Sensor network: - l.283/4 You might want to emphasize here that CI give high accuracy but low time resolution results, e.g. "...a cheaper method for __accurately__ assessing the mass..." - Again, given a Gaussian likelihood, why do you use inducing inputs? What is the trade-off (computational and quality) between using the full model and SVI? - l.304ff: What do you mean by "additional training data"? - Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth? - Now the sensors are co-located. Ideally, you would want to have more low-cost sensors that high-cost (high accuracy) sensors in different locations. Do you have a thought on how you would account for spatial distribution of sensors? --REFERENCES: * please make the style of your references consistent, and start with the last name. Typos etc: ------------- * l.25 types of datasets * l.113 should be $f_{d'}(v')$, i.e. $d'$ instead of $d$ * l.282 "... but are badly bias" should be "is(?) badly biased" (does the verb refer to measurement or the sensor? Maybe rephrase.) * l.292 biased * Figure 3: biased, higher peaks, 500 with unit. * l.285 consisting of? Or just "...as observations of integrals" * l.293 these variables | - Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth? |
NIPS_2016_417 | NIPS_2016 | 1. Most of the human function learning literature has used tasks in which people never visualize data or functions. This is also the case in naturalistic settings where function learning takes place, where we have to form a continuous mapping between variables from experience. All of the tasks that were used in this paper involved presenting people with data in the form of a scatterplot or functional relationship, and asking them to evaluate lines applied to those axes. This task is more akin to data analysis than the traditional function learning task, and much less naturalistic. This distinction matters because performance in the two tasks is likely to be quite different. In the standard function learning task, it is quite hard to get people to learn periodic functions without other cues to periodicity. Many of the effects in this paper seem to be driven by periodic functions, suggesting that they may not hold if traditional tasks were used. I don't think this is a major problem if it is clearly acknowledged and it is made clear that the goal is to evaluate whether data-analysis systems using compositional functions match human intuitions about data analysis. But it is important if the paper is intended to be primarily about function learning in relation to the psychological literature, which has focused on a very different task. 2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a similar level to the explicitly compositional model? 3. Some of the details of the models are missing. In particular the grammar over kernels is not explained in any detail, making it hard to understand how this approach is applied in practice. Presumably there are also probabilities associated with the grammar that define a hypothesis space of kernels? How is inference performed? | 2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a similar level to the explicitly compositional model? |
NIPS_2018_537 | NIPS_2018 | 1. The motivation or the need for this technique is unclear. It would have been great to have some intuition why replacing last layer of ResNets by capsule projection layer is necessary and why should it work. 2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables. 3. Even though the technique is novel, the contributions of this paper is not very significant. Also, there is not much attempt in contrasting this technique with traditional classification or manifold learning literature. 4. There are a lot of missing entries in the experimental results table and it is not clear why. Questions for authors: Why is the input feature vector from backbone network needed to be decomposed into the capsule subspace component and also its component perpendicular to the subspace? What shortcomings in the current techniques lead to such a design? What purpose is the component perpendicular to the subspace serving? The authors state that this component appears in the gradient and helps in detecting novel characteristics. However, the gradient (Eq 3) does not only contain the perpendicular component but also another term x^T W_l^{+T} - is not this transformation similar to P_l x (the projection to the subspace). How to interpret this term in the gradient? Moreover, should we interpret the projection onto subspace as a dimensionality reduction technique? If so, how does it compare with standard dimensionality reduction techniques or a simple dimension-reducing matrix transformation? What does "grouping neurons to form capsules" mean - any reference or explanation would be useful? Any insights into why orthogonal projection is needed will be helpful. Are there any reason why subspace dimension c was chosen to be in smaller ranges apart from computational aspect/independence assumption? Is it possible that a larger c can lead to better separability? Regarding experiments, it will be good to have baselines like densenet, capsule networks (Dynamic routing between capsules, Sabour et al NIPS 2017 - they have also tried out on CIFAR10). Moreover it will be interesting to see if the capsule projection layer is working well only if the backbone network is a ResNet type network or does it help even when backbone is InceptionNet or VGGNet/AlexNet. | 2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables. |
oKn2eMAdfc | ICLR_2024 | 1. The introduction to orthogonality in Part 2 could be more detailed.
2. No details on how the capsule blocks are connected to each other.
3. The fourth line of Algorithm 1 does not state why the flatten operation is performed.
4.The presentation of the α-enmax function is not clear.
5. Eq. (4) does not specify why BatchNorm is used for scalars (L2-norm of sj).
6. The proposed method was tested on relatively small datasets, so that the effectiveness of the method was not well evaluated. | 1. The introduction to orthogonality in Part 2 could be more detailed. |
ICLR_2022_1216 | ICLR_2022 | of the paper: Overall the paper is reasonably well-written but the writing can improve in certain aspects. Some comments and questions below. 1. It is not apparent to the reader why the authors choose an asymptotic regime to focus on. My understanding is that the primary reason is easier theoretical tractability. It would help the reader to know why the paper focuses on the asymptotic setting. 2. It is unclear in the write-up if sample-wise descent occurs only in the over-parameterized regime or not. Pointing this explicitly in the place where you list your contributions would help. More broadly, it is important to have a discussion around these regimes in the main body and also a discussion around how they are defined in the asymptotic regime would help. 3. The paper is written in a very technical manner with very little proof intuition provided in the main body. It would benefit from having more intuition on the tools used and the reasons the main theorems hold. 4. Given that prior work already theoretically shows that sample-wise multiple descent can occur in linear regression, the main contribution of the paper appears to be the result that optimal regularization can remove double descent even in certain anisotropic settings. If this is not the case, the paper should do a better job of highlighting the novelty of their result in relation to prior results.
I am not too familiar with the particular techniques and tools used in the paper and could not verify the claims but they seem correct. | 4. Given that prior work already theoretically shows that sample-wise multiple descent can occur in linear regression, the main contribution of the paper appears to be the result that optimal regularization can remove double descent even in certain anisotropic settings. If this is not the case, the paper should do a better job of highlighting the novelty of their result in relation to prior results. I am not too familiar with the particular techniques and tools used in the paper and could not verify the claims but they seem correct. |
NIPS_2022_2315 | NIPS_2022 | Weakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. 2) The justification for isotropic representation and contractive search could be more solid. | 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. |
7ipjMIHVJt | ICLR_2024 | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exists (one of them was cited by the autors, PhaseNet-Das, Zhu et al. 2023, there might be others), and no comparison was made, nor a justification on the benefit of your method against theirs. If the claim is to say that this is a foundation model, and the test on this task is only as a proof of concept, it should be clearer, and then show or justify a future useful application.
2) I think the purpose of a foundation model would be its applicability at a larger scale. Yet, is your method generalizable to other DAS sensors? It is not clear whether it is site and sensor-specific or not; if so it means a new self-training needs to be performed again for any new DAS.
3) The whole idea of this method is that earthquakes are unpredictible. It is clever indeed, but I see 2 major limitations: 1) this foundation model is thus harder to use for other tasks (which could be predictable) 2) in a series of aftershocks (which could maybe be seen as more predictable), how does your measure performs?
4) The comparison with other multi-variate time series are somehow misleading. Indeed, in multi-variate time-series, we suppose that the different time series (or sensors) are not ordered and not equally-spaced: DAS is a very particular type of 'multi-variate time-series'. I don't think it is worth presenting all of these methods (maybe only one), and it should be clearly stated in the paper. Yet, a comparison with image 2D foundation models, or by modifying a video framework from a 2D+t to a 1D+t, would be more relevant. | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exists (one of them was cited by the autors, PhaseNet-Das, Zhu et al. 2023, there might be others), and no comparison was made, nor a justification on the benefit of your method against theirs. If the claim is to say that this is a foundation model, and the test on this task is only as a proof of concept, it should be clearer, and then show or justify a future useful application. |
ICLR_2021_2846 | ICLR_2021 | Weakness: There are some concerns authors should further address: 1)The transductive inference stage is essentially an ensemble of a serial of models. Especially, the proposed data perturbation can be considered as a common data augmentation. What if such an ensemble is applied to the existing transductive methods? And whether the flipping already is adopted in the data augmentation before the inputs fed to the network? 2)During meta-training, only the selected single path is used in one transductive step, what about the performance of optimizing all paths simultaneously? Given during inference all paths are utilized. 3)What about the performance of MCT (pair + instance)? 4)Why the results of Table 6 is not aligned with Table 1 (MCT-pair)? Also what about the ablation studies of MCT without the adaptive metrics. 5)Though this is not necessary, I'm curious about the performance of the SOTA method (e.g. LST) combined with the adaptive metric. | 4)Why the results of Table 6 is not aligned with Table 1 (MCT-pair)? Also what about the ablation studies of MCT without the adaptive metrics. |
RnYd44LR2v | ICLR_2024 | - Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks.
- A central aspect of evaluating adversarial robustness is the attacks used to measure it. In the paper, this is described with sufficient details only in the appendix. In particular for the non $\ell_p$-threat models I think it would be important to discuss the strength (e.g. number of iterations) of the attacks used, since these are not widely explored in prior works.
[A] https://arxiv.org/abs/1908.08016
[B] https://arxiv.org/abs/2105.12508 | - Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks. |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. | - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - The restriction to triplets (or a sliding window of length 3) is quite limiting. Is this a fundamental limitation of the approach or is an extension to longer subsequences (without a sliding window) straightforward? |
of2rhALq8l | ICLR_2024 | 1. A significant weakness of this paper is the lack of clarity in explaining the implementation of the core concept, which involves the use of strictly diagonal matrices and the proposed Gradual Mask (GM). Figure 2 suggests that the GM matrix is element-wise multiplied by the matrix A, but the description implies a different interpretation, where it functions as a learning rate for each element in A. This discrepancy needs further clarification to provide a complete understanding of the method.
2. The hyper-parameters $b$ (bit-width) and $\alpha$ (stability factor) may introduce significant computational overhead in the pursuit of determining the optimal trade-off between model size and accuracy. | 2. The hyper-parameters $b$ (bit-width) and $\alpha$ (stability factor) may introduce significant computational overhead in the pursuit of determining the optimal trade-off between model size and accuracy. |
NIPS_2019_1180 | NIPS_2019 | --- There are two somewhat minor weakness: presentation and some missing related work. The main points in this paper can be understood with a bit of work, but there are lots of minor missing details and points of confusion. I've listed them roughly in order, with the most important first: * What factors varied in order to compute the error bars in figure 2? Were different random initializations used? Were different splits of the dataset used? How many samples do the error bars include? Do they indicate standard deviation or standard error? * L174: How exactly does the reactive baseline work? * L185: What does "without agent embeddings" mean precisely? * L201: More details about this metric are needed. I don't know exactly what is plotted on the y axis without reading the paper. Before looking into the details I'm not even sure whether higher or lower is good without looking into the details. (Does higher mean more information or does lower mean more information?) * Section 3: This would be much clearer if an example were used to illustrate the problem from the beginning of the section. * Will code be released? * L162: Since most experiments share perception between speaker and listener it would be much clearer to introduce this as a shared module and then present section 4.3 as a change to that norm. * L118: To what degree is this actually realized? * L84: It's not information content itself that will suffer, right? * L77: This is unnecessary and a bit distracting. * L144: Define M and N here. * L167: What is a "sqeuence of episodes" here? Are practice and evaluation the two types of this kind of sequence? Missing related work (seems very related, but does not negate this work's novelty): * Existing work has tried to model human minds, especially in robotics. It looks like [2] and [3] are good examples. The beginning of the related work in [1] has more references along these lines. This literature seems significantly different from what is proposed in this paper because the goals and settings are different. Only the high level motivation appears to be similar. Still, the literature seems significant enough (on brief inspection) to warrent a section in the related work. I'm not very familiar with this literature, so I'm not confident about how it relates to the current paper. [1]: Chandrasekaran, Arjun et al. âIt Takes Two to Tango: Towards Theory of AI's Mind.â CVPR 2017 [2]: Butterfield, Jesse et al. âModeling Aspects of Theory of Mind with Markov Random Fields.â International Journal of Social Robotics 1 (2009): 41-51. [3]: Warnier, Matthieu et al. âWhen the robot puts itself in your shoes. Managing and exploiting human and robot beliefs.â 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (2012): 948-954. Suggestions --- * L216: It would be interesting to realize this by having the speaker interact with humans since the listeners are analogous to role humans take in the high level motivation. That would be a great addition to this or future work. Final Justification --- Clarity - This work could significantly improve its presentation and add more detail, but it currently is clear enough to get the main idea. Quality - Despite the missing details, the experiments seem to be measuring the right things and support very clear conclusions. Novelty - Lots of work uses reference games with multiple agents, but I'm not aware of attempts to specifically measure and model other agents' minds. Significance - The work is a useful step toward agents with a theory of mind because it presents interesting research directions that didn't exist before. Overall, this is a pretty good paper and should be accepted. Post-rebuttal Updates --- After reading the reviews and the rebuttal this paper seems like a clear accept. After discussion with R3 I think we all roughly agree. The rebuttal addressed all my concerns except the minor one listed below satisfactorily. There is one piece R3 and I touched on which is still missing. I asked about the relation to meta-learning and there was no response. More importantly, R3 asked about a comparison to a practice-stage only reward, which would show the importance of the meta-learning aspect of the reward. This was also not addressed satisfactorily, so it's still hard to understand the role of practice/evaluation stages in this work. This would be nice to have, but rest of the paper provides a valuable contribution without it. Though it's hard to tell how presentation and related work will ultimately be addressed in the final version, the rebuttal goes in the right direction so I'll increase my score as indicated in the Improvements section of my initial review. | * L167: What is a "sqeuence of episodes" here? Are practice and evaluation the two types of this kind of sequence? Missing related work (seems very related, but does not negate this work's novelty): |
ICLR_2022_1393 | ICLR_2022 | I think that:
The comparison to baselines could be improved.
Some of the claims are not carefully backed up.
The explanation of the relationship to the existing literature could be improved.
More details on the above weaknesses:
Comparison to baselines:
"We did not find good benchmarks to compare our unsupervised, iterative inferencing algorithm against" I think this is a slightly unfair comment. The unsupervised and iterative inferencing aspects are only positives if they have the claimed benefits, as compared to other ML methods (more accurate and better generalization). There is a lot of recent work addressing the same ML task (as mentioned in the related work section.) This paper contains some comparisons to previous work, but as I detail below, there seem to be some holes.
FCNN is by far the strongest competitor for the Laplace example in the appendix. Why is this left off of the baseline comparison table in the main paper? Further, is there any reason that FCNN couldn't have been used for the other examples?
Why is FNO not applied to the Chip cooling (Temperature) example?
A major point in this paper is improved generalization across PDE conditions. However, I think that's hard to check when only looking at the test errors for each method. In other words, is CoAE-MLSim's error lower than UNet's error because the approach fit the training data better, or is it because it generalized better? Further, in some cases, it's not obvious to me if the test errors are impressive, so maybe it is having a hard time generalizing. It would be helpful to see train vs. test errors, and ideally I like to see train vs. val. vs. test.
For the second main example (vortex decay over time), looking at Figures 8 and 33 (four of the fifty test conditions), CoAE-MLSim has much lower error than the baselines in the extrapolation phase but noticeably higher in the interpolation phase. In some cases, it's hard to tell how close the FNO line is to zero - it could be that CoAE-MLSim even has orders of magnitude more error. Since we can see that there's a big difference between interpolation and extrapolation, it would be helpful to see the test error averaged over the 50 test cases but not averaged over the 50 time steps. When averaged over all 50 time steps for the table on page 9, it could be that CoAE-MLSim looks better than FNO just because of the extrapolation regime. In practice, someone might pick FNO over CoAE-MLSim if they aren't interested in extrapolating in time. Do the results in the table for vortex decay back up the claim that CoAE-MLSim is generalizing over initial conditions better than FNO, or is it just better at extrapolation in time?
Backing up claims:
The abstract says that the method is tested for a variety of cases to demonstrate a list of things, including "scalability." The list of "significant contributions" also includes "This enables scaling to arbitrary PDE conditions..." I might have missed/forgotten something, but I think this wasn't tested?
"Hence, the choice of subdomain size depends on the trade-off between speed and accuracy." This isn't clear to me from the results. It seems like 32^3 is the fastest and most accurate?
I noticed some other claims that I think are speculations, not backed up with reported experiments. If I didn't miss something, this could be fixed by adding words like "might."
"Physics constrained optimization at inference time can be used to improve convergence robustness and fidelity with physics."
"The decoupling allows for better modeling of long range time dynamics and results in improved stability and generalizability."
"Each solution variable can be trained using a different autoencoder to improve accuracy."
"Since, the PDE solutions are dependent and unique to PDE conditions, establishing this explicit dependency in the autoencoder improves robustness."
"Additionally, the CoAE-MLSim apprach solves the PDE solution in the latent space, and hence, the idea of conditioning at the bottleneck layer improves solution predictions near geometry and boundaries, especially when the solution latent vector prediction has minor deviations."
"It may be observed that the FCNN performs better than both UNet and FNO and this points to an important aspect about representation of PDE conditions and its impact on accuracy." The representation of the PDE conditions could be why, but it's hard to say without careful ablation studies. There's a lot different about the networks.
Similarly: "Furthermore, compressed representations of sparse, high-dimensional PDE conditions improves generalizability."
Relationship to literature:
The citation in this sentence is abrupt and confusing because it sounds like CoAE-MLSim is a method from that paper instead of the new method: "Figure 4 shows a schematic of the autoencoder setup used in the CoAE-MLSim (Ranade et al., 2021a)." More broadly, Ranade et al., 2021a, Ranade et al., 2021b, and Maleki, et al., 2021 are all cited and all quite related to this paper. It should be more clear how the authors are building on those papers (what exactly they are citing them for), and which parts of CoAE-MLSim are new. (The Maleki part is clearer in the appendix, but the reader shouldn't have to check the appendix to know what is new in a paper.)
I thought that otherwise the related work section was okay but was largely just summarizing some papers without giving context for how they relate to this paper.
Additional feedback (minor details, could fix in a later version, but no need to discuss in the discussion phase):
- The abstract could be clearer about what the machine learning task is that CoAE-MLSim addresses.
- The text in the figures is often too small.
- "using pre-trained decoders (g)" - probably meant g_u?
- Many of the figures would be more clear if they said pre-trained solution encoders & solution decoders, since there are multiple types of autoencoders.
- The notation is inconsistent, especially with nu. For example, the notation in Figures 2 & 3 doesn't seem to match the notation in Alg 1. Then on Page 4 & Figure 4, the notation changes again.
- Why is the error table not ordered 8^3, 16^3, 32^3 like Figure 9? The order makes it harder for the reader to reason about the tradeoff.
- Why is Err(T_max) negative sometimes? Maybe I don't understand the definition, but I would expect to use absolute value?
- I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method.
- Figure 11: I'm guessing that the y-axis is log error, but this isn't labeled as such. I didn't really understand the the legend or the figure in general until I got to the appendix, since there's little discussion of it in the main paper.
- "Figure 30 shows comparisons of CoAE-MLSim with Ansys Fluent for 4 unseen objects in addition to the example shown in the main paper." - probably from previous draft. Now this whole example is in the appendix, unless I missed something.
- My understanding is that each type of autoencoder is trained separately and that there's an ordering that makes sense to do this in, so you can use one trained autoencoder for the next one (i.e. train the PDE condition AEs, then the PDE solution AE, then the flux conservation AE, then the time integration AE). This took me a while to understand though, so maybe this could be mentioned in the body of the paper. (Or perhaps I missed that!)
- It seems that the time integration autoencoder isn't actually an autoencoder if it's outputting the solution at the next time step, not reconstructing the input.
- Either I don't understand Figure 5 or the labels are wrong.
- It's implied in the paper (like in Algorithm 1) that the boundary conditions are encoded like the other PDE conditions. In the Appendix (A.1), it's stated that "The training portion of the CoAE-MLSim approach proposed in this work corresponds to training of several autoencoders to learn the representations of PDE solutions, conditions, such as geometry, boundary conditions and PDE source terms as well as flux conservation and time integration." But then later in the appendix (A.1.3), it's stated that boundary conditions could be learned with autoencoders but are actually manually encoded for this paper. That seems misleading. | - I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method. |
NIPS_2018_134 | NIPS_2018 | - Some parts of the work are harder to follow and it helps to have checked [Cohen and Shashua, 2016] for background information. # Typos and Presentation - The citation of Kraehenbuehl and Koltun: it seems that the first and last name of the first author, i.e. Philipp, are swapped. - The paper seems to be using a different citation style than the rest of the NIPS submission. Is this intended? - line 111: it might make sense to not call g activation function, but rather a binary operator; similar to Cohen and Shashua, 2016. They do introduce the activation-pooling operator though that fulfils the required conditions. - line 111: I believe that the weight w_i is missing in the sum. - line 114: Why not mention that the operator has to be associative and commutative? - eq 6 and related equations: I believe that the operator after w_i should be the multiplication of the underlying vector space and not \cross_g: It is an operator between a scalar and a tensor, and not just between two scalars. - line 126: by the black *line* in the input # Further Questions - Would it make sense to include and learn AccNet as part of a larger predictor, e.g., for semantic segmentation, that make use of similar operators? - Do you plan to publish their implementation of the proposed AccNet? # Conclusion The work shows that the proposed method is expressive enough to approximate high-dimensional filtering operations while being fast. I think the paper makes an interesting contribution and I would like to see this work being published. | - line 126: by the black *line* in the input # Further Questions - Would it make sense to include and learn AccNet as part of a larger predictor, e.g., for semantic segmentation, that make use of similar operators? |
rwpv2kCt4X | EMNLP_2023 | The primary concerns include,
* The necessity of evaluating the degree of personalization is not clear to me.
- According to this paper, I only found three previous research that did personalized summarizers. And all of them utilize the current common metrics to measure performance. It seems these metrics are enough for this task.
- Let's assume the new evaluation metric is necessary. When we have the pairs of user profiles (such as user-expected summaries) and generated summaries for each user, why can we not use the average and variance of current metrics (such as Rouge) to show the degree of personalization? The average presents the performance of the summarizer to generate high-quality summaries, and variance can represent the performance of generating summaries close to each user. It is much easier to evaluate based on current metrics than new ones.
* The new proposed metric is only tested on a single dataset.
* There is no human judgment for this new metric. I notice the authors said, in Limitations, they are trying for the human evaluation. I think it is better to accept the next version with human judgment results.
* The metric is high time-cost due to the eight Jenson-Shannon Divergence calculations.
Besides, the details of $rot()$ were missed in Line 184. | * The new proposed metric is only tested on a single dataset. |
NIPS_2019_175 | NIPS_2019 | 1. Weak novelty. Addressing domain-shift via domain specific moments is not new. It was done among others by Bilen & Vedaldi, 2017,âUniversal representations: The missing link between faces, text, planktons, and cat breedsâ. Although this paper may have made some better design decisions about exactly how to do it. 2. Justification & analysis: A normalisation-layer based algorithm is proposed, but without much theoretical analysis to justify the specific choices. EG: Why is is exactly: that gamma and beta should be domain-agnostic, but alpha should be domain specific. 3. Positioning wrt AutoDial, etc: The paper claims âparameter-freeâ as a strength compared to AutoDIAL, which has a domain-mixing parameter. However, this spin is a bit misleading. It removes one learnable parameter, but instead includes a somewhat complicated heuristic Eq 5-7 governing transferability. Itâs not clear that removing a single parameters (which is learned in AutoDIAL) with a complicated heuristic function (which is hand-crafted here) is a clear win. 4. The evaluation is a good start with comparing several base DA methods with and without the proposed TransferNorm architecture. It would be stronger if the base DA methods were similarly evaluated with/without the architectural competitors such as AutoDial and AdaBN that are direct competitors to TN. 5. English is full of errors throughout. "Seldom previous works", etc. ------ Update ----- The authors response did a decent job of responding to the concerns. The paper could be reasonable to accept. I hope the authors can update the paper with the additional information from the response. | 4. The evaluation is a good start with comparing several base DA methods with and without the proposed TransferNorm architecture. It would be stronger if the base DA methods were similarly evaluated with/without the architectural competitors such as AutoDial and AdaBN that are direct competitors to TN. |
NIPS_2022_489 | NIPS_2022 | Concern regarding representativeness of baselines used for evaluation
Practical benefits in terms of communication overhead & training time could be more strongly motivated
Detailed Comments:
Overall, the paper was interesting to read and the problem itself is well motivated. Formulation of the problem as an MPG appears sound and offers a variety of important insights with promising applications. There are, however, some concerns regarding evaluation fairness and practical benefits.
The baselines used for evaluation do not seem to accurately represent the state-of-the-art in CTDE. In particular, there have been a variety of recent works that explore more efficient strategies (e.g., [1-3]) and consistently outperform QMix with relatively low inter-agent communication. Although the proposed work appears effective as a fully-decentralized approach, it is unclear how well it would perform against more competitive CTDE baselines. Comparison against these more recent works would greatly improve the strength of evaluation.
Benefits in terms of reduced communication overhead could also be more strongly motivated. Presumably, communication between agents could be done over purpose-built inter-LB links, thus avoiding QoS degradation due to contention on links between LBs and servers. Even without inter-LB links, the increase in latency demonstrated in Appendix E.2.2 appears relatively low.
Robustness against dynamic changes in network setup are discussed to some degree, but it’s unclear how significant this issue is in a real-world environment. Even in a large-scale setup, the number of LBs/servers is likely to remain fairly constant at the timescales considered in this work (i.e., minutes). Given this, it seems that the paper should at least discuss trade-offs with a longer training time, which could impact the relative benefits of various approaches.
Some confusion in notation: - Algorithm 2, L8 should be t = 1,…,H (for horizon)? - L100, [M] denotes the set of LBs?
Minor notes: - Some abbreviations are not defined, e.g., “NE” on L73 - Superscript notation in Eq 6 is not defined until much later (L166), which hindered understanding in an initial read.
[1] S. Zhang et al, “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019. [2] Z. Ding et al, “Learning Individually Inferred Communication for Multi-Agent Cooperation”, NeurIPS 2020. [3] T. Wang et al, “Learning Nearly Decomposable Value Functions Via Communication Minimization”, ICLR 2020. | - Some abbreviations are not defined, e.g., “NE” on L73 - Superscript notation in Eq 6 is not defined until much later (L166), which hindered understanding in an initial read. [1] S. Zhang et al, “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019. [2] Z. Ding et al, “Learning Individually Inferred Communication for Multi-Agent Cooperation”, NeurIPS 2020. [3] T. Wang et al, “Learning Nearly Decomposable Value Functions Via Communication Minimization”, ICLR 2020. |
NIPS_2021_442 | NIPS_2021 | of the paper:
Strengths: 1) To the best of my knowledge, the problem investigated in the paper is original in the sense that top-m identification has not been studied in the misspecified setting. 2) The paper provides some interesting results:
i) (Section 3.1) Knowing the level of misspecification ε
is a key ingredient, as not knowing the same would yield sample complexity bounds which are no better than the bound obtainable from unstructured ( ε = ∞
) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically. 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules.
Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: 1) IMO, a better explanation of investigating top-m identification in this setting is required. Specifically, in this setting, we could readily convert the problem to the general top-m identification by appending the constant 1 to the features (converting them into d + 1
dimensional features) and trying to estimate the misspecifications η
in the higher dimensional space. Why is that disadvantageous?
Can the authors explain how the lower bound in Theorem 1 explicitly captures the effect of the upper bound on misspecification ε
? The relationship could be shown, for instance, by providing an example of a particular bandit environment (say, Gaussian bandits) ala [Kaufmann2016].
Sample complexity: Theorem 2 states the sample complexity in a very abstract way; it provides an equation which needs to be solved in order to get an explicit expression of the sample complexity. In order to make a comparison, the authors then mention that the unstructured confidence interval β t , δ u n s
is approximately log ( 1 δ )
in the limit of δ → 0
, which is then used to argue that the sample complexity of MISLID is asymptotically optimal. However, β t , δ u n s
also depends on t
. In fact, my understanding is that as δ
goes to 0
, the stopping time t
goes to infinity, where it is not clear as to what value the overall expression β t , δ u n s
converges. Overall, I feel that the authors need to explicate the sample complexity a bit more. My suggestions are: can the authors find a solution to equation (5) (or at least an upper bound on the solution for different regimes of ε
)? Using such an upper bound, even if the authors could give an explicit expression of the (asymptotic) sample complexity and show how it compares to the lower bound, it would be a great contribution.
Looking at Figure 1A (the second figure from the left, for the case of ε = 2
), it looks like LinGapE outperforms MISLID in terms of average sample complexity. Please correct me if I’m missing something, but if what I understand is correct, then why use MISLID and not LinGapE?
Probable typo: Line 211: Should it be θ
instead of θ t
for the self-normalized concentration?
The authors have explained the limitations of the investigation in Section 6. | 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules. Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: |
NIPS_2017_65 | NIPS_2017 | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification
2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail
Detailed comments below:
Methods and Evaluation: The proposed objective is interesting and utilizes ideas from two well studied lines of research, namely, privileged learning and distribution matching to build classifiers that can incorporate multiple notions of fairness. The authors also demonstrate how some of the existing methods for learning fair classifiers are special cases of their framework. It would have been good to discuss the goal of each of the terms in the objective in more detail in Section 3.3. The part that is probably the most weakest in the entire discussion of the approach is the discussion of the optimization procedure. The authors state that there are different ways to optimize the multi-objective optimization problem they formulate without mentioning clearly which is the procedure they employ and why (in Section 3). There seems to be some discussion about the same in experiments section (first paragraph) and I think what was done is that the objective was first converted into unconstrained optimization problem and then an optimal solution from the pareto set was found using BFGS. This discussion is still quite rudimentary and it would be good to explain the pros and cons of this procedure w.r.t. other possible optimization procedures that could have been employed to optimize the objective.
The baselines used to compare the proposed approach and the evaluation in general seems a bit weak to me. Ideally, it would be good to employ baselines that learn fair classifiers based on different notions (E.g., Hardt et. al. and Zafar et. al.) and compare how well the proposed approach performs on each notion of fairness in comparison with the corresponding baseline that is designed to optimize for that notion. Furthermore, I am curious as to why k-fold cross validation was not used in generating the results. Also, was the split between train and test set done randomly? And, why are the proportions of train and test different for different datasets?
Clarity of Presentation:
The presentation is clear in general and the paper is readable. However, there are certain cases where the writing gets a bit choppy. Comments:
1. Lines 145-147 provide the reason behind x*_n being the concatenation of x_n and z_n. This is not very clear.
2. In Section 3.3, it would be good to discuss the goal of including each of the terms in the objective in the text clearly.
3. In Section 4, more details about the choice of train/test splits need to be provided (see above).
While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical details. | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification |
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated" 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels two separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. |
ICLR_2023_4741 | ICLR_2023 | Weakness
1 The novelty is limited. The low-rank design is relevant to (Hu et al., 2021). The sparse design is similar to Taylor pruning (Molchanov et al., 2019).
2 The experiments are not quite convincing. The authors choose the old baseline like R3D and C3D. To reduce computation complexity, many papers have been proposed in 3D CNN (X3D, SlowFast, etc). Does the proposed method also works on these 3D CNNs? Or compared to these approaches, what is the advantage of the proposed method?
3 The paper is hard to follow. In fact, I have to read many times to get what it is. I understand it is a theoretical-kind paper. But please further explain the mathmetical formulation clearly to show why and how it works. | 2 The experiments are not quite convincing. The authors choose the old baseline like R3D and C3D. To reduce computation complexity, many papers have been proposed in 3D CNN (X3D, SlowFast, etc). Does the proposed method also works on these 3D CNNs? Or compared to these approaches, what is the advantage of the proposed method? |
NIPS_2020_125 | NIPS_2020 | 1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this. 2. Similar to above, it would be good to provide more details of how the attention modules are added to tested architectures. I assume they are added following the SE paper but would be good to clarify. 3. Related to above, how is the complexity of the added module controlled? Is there a tunable channel weight similar to SE? It would be good to clarify this. 4. In Table 3, the additional complexity of the found module is ~5-15% in terms of parameters and flops. It is not clear if this is actually negligible. Would be good to perform comparisons where the complexity matches more closely. 5. In Table 3, it seems that the gains are decreasing for larger models. It would be good to show results with larger and deeper models (ResNet-101 and ResNet-152) to see if the gains transfer. 6. Similar to above, it would be good to show results for different model types (e.g. ResNeXt or MobileNet) to see if the module transfer to different model types. All current experiments use ResNet models. 7. It would be good to discuss and report how the searched module affect the training time, inference time, and memory usage (compared to vanilla baselines and other attention modules). 8. It would be interesting to see the results of searching for the module using a different backbone (e.g. ResNet-56) or a different dataset (e.g. CIFAR-100) and compare both the performance and the resulting module. 9. The current search space for the attention module consists largely of existing attention operations as basic ops. It would be interesting to consider a richer / less specific set of operators. | 1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this. |
ICLR_2022_1522 | ICLR_2022 | Weakness:
The overall novelty seems limited since the instance-adaptive method is from existing work with no primary changes. Here are some main questions and concerns:
1). How many optimization steps are used to produce the final reported performance in Figure.1 as well as in some other figs and tables?
2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison?
Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison:
Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020. | 2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison? Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison: Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020. |
NIPS_2020_833 | NIPS_2020 | I don't think the paper has signficant weaknesses. The model setting is admittedly quite limited, but given the relatively sparse literature on the cutoff phenomenon in learning, I do not think this is a strong complaint. Some suggestions to improve: - I would recommend the authors to distinguish the all-or-nothing or cutoff phenomenon from usual statistical bounds that the machine learning and NeurIPS community is familiar with. - Mention that 'teacher student' setting is another phrasing of 'well-specified' models in statistics, 'realizable setting' in learning theory, and 'proper learning' in computer science. - Regularizing noise: if this is needed for the proof, preferably write it as such. Presumably a limiting argument works for Delta vanishing, e.g. one can always add small noise to the observvations. - It would also be nice to have a rigorous proof for the MMSE portion. Since this is not done in the current paper, but potentially accessible to present techniques, the authors should identify Eq (10) as a 'Claim'. | - I would recommend the authors to distinguish the all-or-nothing or cutoff phenomenon from usual statistical bounds that the machine learning and NeurIPS community is familiar with. |
NIPS_2022_1516 | NIPS_2022 | 1). Technically speaking, the contribution of this work is incremental, and its technique deep is shallow. The proposed probabilistic word dropout is not that impressive or novel. To me, it sounds like a probabilistic teacher forcing. The massive notations and formulas appear to be not that necessary for me to understand the idea. 2). The improvements on three tasks over the previous works and self-implemented baselines are marginal. Further analysis beyond the main experiments is not sufficient. 3). The backbone is constrained to be the double LSTM, while popular transformers is not involved. Although this seems to work for different encoders, pre-trained language models are not covered as well. The application of this method to more extensive model structures remains a potential concern.
Limitations are mentioned in Section 6, while it seems to be not sufficient. | 2). The improvements on three tasks over the previous works and self-implemented baselines are marginal. Further analysis beyond the main experiments is not sufficient. |
NIPS_2017_74 | NIPS_2017 | - Theorem 2 which presentation is problematic and does not really provide any convergence guaranty.
- All the linear convergence rates rely on Theorem 8 which is burried at the end of the appendix and which proof is not clear enough.
- Lower bounds on the number of good steps of each algorithm which are not really proved since they rely on an argument of the type "it works the same as in another close setting".
The numerical experiments are numerous and convincing, but I think that the authors should provide empirical evidences showing that the computational cost are of the same order of magnitude compared competing methods for the experiments they carried out.
%%%% Details on the main comments
%% Theorem 2
The presention and statement of Theorem 2 (and all the sublinear rates given in the paper) has the following form:
- Given a fixed horizon T
- Consider rho, a bound on the iterates x_0 ... x_T
- Then for all t > 0 the suboptimality is of the order of c / t where c depends on rho.
First, the proof cannot hold for all t > 0 but only for 0 < t <= T. Indeed, in the proof, equation (16) relies on the fact that the rho bound holds for x_t which is only ensured for t < = T.
Second the numerator actually contains rho^2. When T increases, rho could increase as well and the given bound does not even need to approach 0.
This presentation is problematic. One possible way to fix this would be to provide a priori conditions (such as coercivity) which ensure that the sequence of iterates remain in a compact set, allowing to define an upper bound independantly of the horizon T.
In the proof I did not understand the sentence "The reason being that f is convex, therefore, for t > 0 we have f (x t ) < = f (0)."
%% Lemma 7 and Theorem 8
I could not understand Lemma 7.
The equation is given without any comment and I cannot understand its meaning without further explaination. Is this equation defining K'? Or is it the case that K' can be chosen to satisfy this equation? Does it have any other meaning?
Lemma 7 deals only with g-faces which are polytopes. Is it always the case? What happens if K is not a polytope? Can this be done without loss of generality? Is it just a typo?
Theorem 8:
The presentation is problematic. In Lemma 7, r is not a feasible direction. In Theorem 8, it is the gradient of f at x_t. Theorem 8 says "using the notation from Lemma 7". The proof of Theorem 8 says "if r is a feasible direction". All this makes the work of the reader very hard.
Notations of Lemma 7 are not properly used:
- What is e? e is not fixed by Lemma 7, it is just a variable defining a maximum. This is a recurent mistake in the proofs.
- What is K? K is supposed to be given in Lemma 7 but not in Theorem 8.
- Polytope?
All this could be more explicit.
"As x is not optimal by convexity we have that < r , e > > 0". Where is it assumed that $x$ is not optimal? How does this translate in the proposed inequality?
What does the following mean?
"We then project r on the faces of cone(A) containing x until it is a feasible direction"
Do the author project on an intersection of faces or alternatively on each face or something else?
It would be more appropriate to say "the projection is a feasible direction" since r is fixed to be the gradient of f. It is very uncomfortable to have the value of r changing within the proof in an "algorithmic fashion" and makes it very hard to check accuracy of the arguments.
In any case, I suspect that the resulting r could be 0 in which case the next equation does not make sense. What prevents the resulting r from being null?
In the next sentences, the authors use Lemma 7 which assumes that r is not a feasible direction. This is contradictory with the preceeding paragraph. At this point I was completely confused and lost hope to understand the details of this proof.
What is r' on line 723 and in the preceeding equation?
I understand that there is a kind of recursive process in the proof. Why should the last sentence be true?
%% Further comments
Line 220, max should be argmax
I did not really understand the non-negative matrix facotrization experiment. Since the resulting approximation is of rank 10, does it mean that the authors ran their algorithm for 10 steps only? | - All the linear convergence rates rely on Theorem 8 which is burried at the end of the appendix and which proof is not clear enough. |
yuYMJQIhEU | ICLR_2024 | This paper mostly combines standard algorithms (random walk, Adam without momentum, SAM), although this is not a problem, the theoretically analysis needs to be improved. Meanwhile, the experimental part lacks new insights except for some expected results.
Major comments:
1. Theorem 3.7 looks like a direct combination of theoretical results obtained by Adam without momentum and SAM. Furthermore, the proof in Appendix does not consider the convergence guarantee that could be achieved by random walk method. That is, the Markov chain is not considered. Note that the last equation in Page 13 is almost the same as the convergence result (Theorem 4.3, Triastcyn et al., 2022)) except it does not have the compression part. The proof also follows exactly as Triastcyn et al. (2022). The perturbed model is not used, means that sharp awareness minimization is not analyzed, which makes me question the soundness of Theorem 3.7.
2. Since SAM is integrated to prevent potential overfitting, the experiment should present this effect compared with its counterpart that does not have the perturbed model. The lack of this experiment comparison would question the necessity of incorporating SAM in the proposed Localized framework.
3. The simulation only shows the loss performance of the proposed algorithms and the benchmarks, however, in practical, we would be more interested to see the classification accuracy.
4. The proposed algorithm is compared with FedAvg, however, for FedAvg, not all agents are communicating all the time, which does not make sense in the setting that FedAvg does not need to consider communication. That means, I suppose that if all agents in FedAvg communicate all the time, the performance of FedAvg might be much better than all the other methods, since there exists a coordinator, although the communication cost would be very high. The figures presented, however, show that Localized SAM is always better than FedAvg in the random sample setting in both performance and communication, which is not a fair comparison.
Minor comments:
1. In Page 2, first paragraph, Localized SAM is introduced first and then “sharpness-aware minimization (SAM (Foret et al., 2021))” is repeated again. It would be better to revise it.
2. Page 2, second paragraph in Related Work, the Walkman algorithm (Mao et al., 2020) is solved by ADMM, with two versions, one is to solve a local optimization problem, the other is to solve a gradient approximation. Therefore, it is not accurate to say that “However, these works are all based on the simple SGD for decentralized optimization.”
3. Section 3, first paragraph, in “It can often have faster convergence and better generalization than the SGD-based Algorithm 1, as will be demonstrated empirically in Section 4.1.” The “it” does not have a clear reference.
4. In Section 3.1, you introduced $\boldsymbol{u}_k$, which was not defined previously and did not show up after Algorithm 3.
5. Figure 6 seems to be reused from your previous LoL optimizer work. | 2. Page 2, second paragraph in Related Work, the Walkman algorithm (Mao et al., 2020) is solved by ADMM, with two versions, one is to solve a local optimization problem, the other is to solve a gradient approximation. Therefore, it is not accurate to say that “However, these works are all based on the simple SGD for decentralized optimization.” 3. Section 3, first paragraph, in “It can often have faster convergence and better generalization than the SGD-based Algorithm 1, as will be demonstrated empirically in Section 4.1.” The “it” does not have a clear reference. |
NIPS_2022_2786 | NIPS_2022 | 1. The main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well. Moreover, the authors should compare the rates achieved by their procedure to existing rates in the literature. 2. Experiments: the experimental results in the paper don’t provide a convincing argument for their algorithms. First, all of the experiments are done over synthetic data. Moreover, the authors only consider low-dimensional datasets where d<30 and therefore it is not clear if the same improvements hold for high-dimensional problems. Finally, it is not clear whether the authors used any hyper-parameter tuning for DP-GD (or DP-SGD); this could result in significantly better results for DP-GD. 3. Writing: I encourage the authors to improve the writing in this paper. For example, the introduction could use more work on setting up the problem, stating the main results and comparing to previous work, before moving on to present the algorithm (which is done too soon in the current version). More:
Typo (first sentence): “is a standard”
First paragraph in page 4 has m. What is m? Should that be n? | 1. The main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well. Moreover, the authors should compare the rates achieved by their procedure to existing rates in the literature. |
NIPS_2017_110 | NIPS_2017 | weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimensions, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos, here are some that caught my eyes: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- Section 2.2., it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | - It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. |
OBUQNASaWw | ICLR_2025 | 1.It’s suggested that authors give a comprehensive survey on adaptive sparse training method. Although authors claim “Previous works have only managed to solve one, or perhaps two of these challenges”, can authors give a comprehensive comparison of existing methods?
2.Considering different clients train different submodels, the server also maintains a full model. So can the sparsity of clients be different to apply for heterogeneous hardware?
3.Can authors further explain why clients should achieves consensus on the clients’ sparse model masks when server always maintain a full model.
4.What’s the definition of the model plasticity?
5.In experimental section, authors only compared with two baselines, there’re several works also focus on the same questions, for example [1,2,3], so it’s suggested to add more experimental to show the effectiveness of proposed method.
6.Considering the model architecture, authors only show the effectiveness on convolutional network, what’s the performance on other architecture, for example Transformer?
[1]Stripelis, Dimitris, et al. "Federated progressive sparsification (purge, merge, tune)+." arXiv preprint arXiv:2204.12430 (2022).
[2]Wang, Yangyang, et al. "Theoretical convergence guaranteed resource-adaptive federated learning with mixed heterogeneity." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
[3]Zhou, Hanhan, et al. "Every parameter matters: Ensuring the convergence of federated learning with dynamic heterogeneous models reduction." Advances in Neural Information Processing Systems 36 (2024). | 5.In experimental section, authors only compared with two baselines, there’re several works also focus on the same questions, for example [1,2,3], so it’s suggested to add more experimental to show the effectiveness of proposed method. |
ARR_2022_59_review | ARR_2022 | - If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds. - Another concern is that the method may not be practical. In fine-tuning, THE-X firstly drops the pooler of the pre-trained model and replaces softmax and GeLU, then conducts standard fine-tuning. For the fine-tuned model, they add LayerNorm approximation and d distill knowledge from original LN layers. Next, they drop the original LN and convert the model into fully HE-supported ops. The pipeline is too complicated and the knowledge distillation may not be easy to control. - Only evaluating the approach on BERTtiny is also not convincing although I understand that there are other existing papers that may do the same thing. For example, a BiLSTM-CRF could yield a 91.03 F1-score and a BERT-base could achieve 92.8. Although computation efficiency and energy-saving are important, it is necessary to comprehensively evaluate the proposed approach.
- The LayerNorm approximation seems to have a non-negligible impact on the performances for several tasks. I think it is an important issue that is worth exploring.
- I am willing to see other reviews of this paper and the response of the authors. - Line #069: it possible -> it is possible? | - If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds. |
ICLR_2023_2396 | ICLR_2023 | 1. Lack of the explanation about the importance and the necessity to design deep GNN models . In this paper, the author tries to address the issue of over-smoothing and build deeper GNN models. However, there is no explanation about why should we build a deep GNN model. For CNN, it could be built for thousands of layers with significant improvement of the performance. While for GNN, the performance decreases with the increase of the depth (shown in Figure 1). Since the deeper GNN model does not show the significant improvement and consumes more computational resource, the reviewer wonders the explanation of the importance and the necessity to design deep models. 2. The experimental results are not significantly improved compared with GRAND. For example, GRAND++-l on Cora with T=128 in Table 1, on Computers with T=16,32 in Table 2. Since the author claims that GRAND suffers from the over-smoothing issue while DeepGRAND significantly mitigates such issue, how to explain the differences between the theoretical and practical results, why GRAND performs better when T is larger? Besides, in Table 3, DeepGRAND could not achieve the best performance with 1/2 labeled on Citeseer, Pubmed, Computers and CoauthorCS dataset, which could not support the argument that DeepGRAND is more resilient under limited labeled training data. 3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3. 4. Minor issues. The x label of Figure 2, Depth (T) rather than Time (T). | 3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3. |
ICLR_2022_1872 | ICLR_2022 | I list 5 concerns here, with detailed discussion and questions for the authors below
W1: While theorems suggest "existence" of a linear transformation that will approximate the posterior, the actual construction procedure for the "recovered topic posterior" is unclear
W2: Many steps are difficult to understand / replicate from main paper
W3: Unclear what theorems can say about finite training sets
W4: Justification / intuition for Theorems is limited in the main paper
Responses to W1-W3 are most important for the rebuttal.
W1: Actual procedure for constructing the "recovered topic posterior" is unclear
In both synthetic and real experiments, the proposed self-supervised learning (SSL) method is used to produce a "recovered topic posterior" p( w x). However, the procedure used here is unclear... how do we estimate p( w x) using the learning function f(x)?
The theorems imply that a linear function exists with limited (or zero) approximation error for any chosen scalar summary of the doc-topic weights w. However, how such a linear function is constructed is unclear. The bottom of page four suggests that when t=1 and A is full rank, that "one can use the pseudoinverse of A to recover the posterior", however it seems (1) unclear what the procedure is in general and what its assumptions are, and (2) odd that the prior may not needed at all.
Can the authors clarify how to estimate the recovered topic posterior using the proposed SSL method?
W2: Many other steps are difficult to understand / replicate from main paper
Here's a quick list of questions on experimental steps I am confused about / would have trouble reproducing
For the toy experiments in Sec. 5:
Do you estimate the topic-word parameter A? Or assume the true value is given?
What is the format for document x provided as input to the neural networks that define f(x)? The top paragraph of page 7 makes it seem like you provide an ordered list of words. Wouldn't a bag-of-words count vector be a more robust choice?
How do you set t=1 (predict one word given others) but somehow also use "the last 6 words are chosen as the prediction target"?
How do you estimate the "recovered topic posterior" for each individual model (LDA, CTM, etc)? Is this also using HMC (which is used to infer the ground-truth posterior)?
Why use 2000 documents for the "pure" topic model but 500 in test set for other models? Wouldn't more complex models benefit from a larger test set?
For the real experiments in Sec. 6:
How many topics were used?
How did you get topic-word parameters for this "real" dataset?
How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words.
W3: Unclear what theorems / methods can say about finite training sets
All the theorems seem to hold when considering terms that are expectations over a known distribution over observed-data x and missing-data y. However, in practical data analysis we do not know the true data generating distribution, we only have a finite training set.
I am wondering about this method's potential in practice for modest-size datasets. For the synthetic dataset with V=5000 (a modest vocabulary size), the experiments considered 0.72 million to 6 million documents, which seems quite large.
What practically must be true of the observed dataset for the presented methods to work well?
W4: Justification / intuition for Theorems is limited in the main paper
All 3 theorems in the main paper are presented without much intuition or justification about why they should be true, which I think limits their impact on the reader. (I'll try to wade thru the supplement, but did not have time before the review deadline).
Theorem 3 tries to give intuition for the t=1 case, but I think could be stronger: why should f(x) have an optimal form p ( y = v 1 x )
? Why should "these probabilities" have the form A E [ w x ]
? I know space is limited, but helping your reader figure things out a bit more explicitly will increase the impact.
Furthermore, the reader would benefit from understanding how tight the bounds in Theorem 4 are. Can we compute the bound quality for toy data and understand it more practically?
Detailed Feedback on Presentation
No need to reply to these in rebuttal but please do address as you see fit in any revision
Page 3:
"many topic models can be viewed"... should probably say "the generative process of many topic models can be viewed..."
the definition of A_ij is not quite right. I would not say "word i \in topic j", I would say "word i topic j". A word is not contained in a topic, Each word has a chance of being generated.
I'd really avoid writing Δ ( K )
and would just use Δ
throughout .... unclear why this needs to be a function of K
but the topic-word parameters (whose size also depends on K
) does not
Should we call the reconstruction objective a "partial reconstruction" or "masked reconstruction"? I'm used to reconstruction in an auto-encoder context, where the usual "reconstruction" objective is literally to recover all observed data, not a piece of observed data that we are pretending not to see
In Eq. 1, are you assuming an ordered or unordered representation of the words in x and y?
Page 4:
I would not reuse the variable y in both reconstruction and contrastive contexts. Find another variable. Same with theta.
Page 5:
I would use f ∗
to denote the exact minimizer, not just f
Figure 2 caption should clarify:
what is the takeaway for this figure? Does reader want to see low values? Does this figure suggest the approach is working as expected?
what procedure is used for the "recovered" posterior? Your proposed SSL method?
why does Pure have a non-monotonic trend as alpha gets larger? | 6: How many topics were used? How did you get topic-word parameters for this "real" dataset? How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words. |
NIPS_2020_878 | NIPS_2020 | * The GCN-based predictor and experiments don't have open-sourced code (not mentioned in the main paper or supplement), however the authors do provide detailed descriptions. * Some correctness issues (see next section) * The paper presents 2 important NAS objectives: latency optimization and accuracy optimization. However, the BRP-NAS (section 4) seems out-of-place since the rest of the paper deals with latency prediction. It nearly feels like BRP-NAS could be a separate paper, or Section 3 was used only to suggest using GCN (in this case, why not directly start with accuracy prediction with GCN?). * The analysis on BRP-NAS is also somewhat barebones: it only compares against 3 basic alternatives and ignores some other NAS (e.g. super-net/one-shot approaches, etc...). * Unclear if code will be released, as the GCN implementation may be hard to reproduce without original code (though the author's descriptions are fairly detailed and there is more information in the supplement). | * The analysis on BRP-NAS is also somewhat barebones: it only compares against 3 basic alternatives and ignores some other NAS (e.g. super-net/one-shot approaches, etc...). |
NIPS_2017_390 | NIPS_2017 | - I am curious how the performance varies quantitatively if the training "shot" is not the same as "test" shot: In realistic applications, knowing the "shot" before-hand is a fairly strong and impractical assumption.
- I find the zero-shot version and the connection to density estimation a bit distracting to the main point of the paper, which is that one can learn to produce good prototypes that are effective for few-shot learning. However, this is more an aesthetic argument than a technical one. | - I find the zero-shot version and the connection to density estimation a bit distracting to the main point of the paper, which is that one can learn to produce good prototypes that are effective for few-shot learning. However, this is more an aesthetic argument than a technical one. |
7GxY4WVBzc | EMNLP_2023 | * The contribution of the vector database to improving QA performance is unclear. More analysis and ablation studies are needed to determine its impact and value for the climate change QA task.
* Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality.
* The work is focused on a narrow task (climate change QA) in a specific language (Arabic), so its broader impact may be limited.
* The limitations section lacks specific references to errors and issues found through error analysis of the current model. Performing an analysis of the model's errors and limitations would make this section more insightful. | * Details around the filtering process used to create the Arabic climate change QA dataset are lacking. More information on the translation and filtering methodology is needed to assess the dataset quality. |
kz78RIVL7G | ICLR_2025 | 1. Detecting adversarial examples by comparing the original example against its de-noised version is not a new idea. There exist many methods that either use the statistics of model input itself, or statistics of intermediate results when passing though a network. In order to justify that the value of the proposed method, it is critical to show that the proposed method is superior than previous ones either. As the novelty is relative low, the key to justify this work is to show that the proposed method is superior than others, either theoretically or empirically. However, in the paper it is lacking of detailed analysis of drawbacks of pervious works, motivations or intuition of what the additional value that the proposed method can provide, or direct comparison in experiment results. Authors should consider adding more previous methods that falls into the same kind, analyzing their similarity and differences, providing detailed comparison in experiment results, and trying to draw insights that what makes things better. An example of work in this kind is "Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction" but there are more.
2. The presentation is poor. There is lack of motivations and intuition. The whole paper sounds like "look this is what we did" but is lack of "why or what motivates us to do this". There are a lot of details and figures that can be moved to appendix, while on the other hand there is no diagram for the proposed method. The results are provides without drawing insights.
3. The experiment results can be enriched. it is lack of attacks with different strength. How different thresholds influence the detection performance is also lacking. | 3. The experiment results can be enriched. it is lack of attacks with different strength. How different thresholds influence the detection performance is also lacking. |