diff --git "a/data/ja/test.jsonl" "b/data/ja/test.jsonl" new file mode 100644--- /dev/null +++ "b/data/ja/test.jsonl" @@ -0,0 +1,200 @@ +{"source": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "target": ["画像に関する質問に答える研究。CNNで画像特徴、LSTMで質問クエリ、二つ合わせて回答するのが鉄板だが、画像内の着目点をよりはっきりさせるため、画像の各領域に対するクエリのAttentionを反復して計算する(Stacked Attention)手法を提案。既存精度を大きく更新。"]} +{"source": "In this work we implement a training of a Language Model (LM), using Recurrent Neural Network (RNN) and GloVe word embeddings, introduced by Pennigton et al. in [1]. The implementation is following the general idea of training RNNs for LM tasks presented in [2], but is rather using Gated Recurrent Unit (GRU) [3] for a memory cell, and not the more commonly used LSTM [4].", "target": ["RNNを使った言語モデルにword embeddingを組み込むことで性能向上をはかっている話。メモリセルにはGRU、embeddingにはGloVeを使用。n番目の単語ベクトルをn-1個の単語ベクトルから予測している。"]} +{"source": "We present a new approach to cross channel fraud detection: build graphs representing transactions from all channels and use analytics on features extracted from these graphs. Our underlying hypothesis is community based fraud detection: an account (holder) performs normal or trusted transactions within a community that is “local” to the account. We explore several notions of community based on graph properties. Our results show that properties such as shortest distance between transaction endpoints, whether they are in the same strongly connected component, whether the destination has high page rank, etc., provide excellent discriminators of fraudulent and normal transactions whereas traditional social network analysis yields poor results. Evaluation on a large dataset from a European bank shows that such methods can substantially reduce false positives in traditional fraud scoring. We show that classifiers built purely out of graph properties are very promising, with high AUC, and can complement existing fraud detection approaches.", "target": ["オンラインバンキングやP2P paymentなどの複数のチャネルを統合的に扱った不正送金検出手法の提案。全チャネルをまとめてグラフを生成。グラフから仮説ベースの特徴抽出を行い、検出精度に対する実験を行っている。また、事前に特徴量を計算しておくことで、リアルタイムなFraud検出が可能としている。既存手法と比較してfalse positiveを大幅改善。"]} +{"source": "We employ deep multi-agent reinforcement learning to model the emergence of cooperation. The new notion of sequential social dilemmas allows us to model how rational agents interact, and arrive at more or less cooperative behaviours depending on the nature of the environment and the agents’ cognitive capacity. 
+{"source": "Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a {\\em very simple}, and yet counter-intuitive, postprocessing technique -- eliminate the common mean vector and a few top dominating directions from the word vectors -- that renders off-the-shelf representations {\\em even stronger}. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textural similarity and { text classification}) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones.", "target": ["分散表現の精度を上げる後処理の話。分散表現の問題として、どの次元でも共通するなにがしかのベクトル量を持っていると指摘。そこでそれらを差っ引いてやることで各次元の特徴を際立たせてやろうという試み。具体的には平均と主成分をひいている。各分散表現で精度の向上を確認、特にGloveで顕著"]}
+{"source": "Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at this http URL.", "target": ["GANの話ではなくリアルにAdversarial(敵対的)な話。分類モデルで使える攻撃で、強化学習でも学習速度を遅くさせることが可能という話。目的関数の勾配にsign関数をかけたものを足すだけでOK(強化学習では選択した行動以外は勾配が入らないので学習時はsoftmaxを使用)"]}
+{"source": "We present a pixel recursive super resolution model that synthesizes realistic details into images while enhancing their resolution. A low resolution image may correspond to multiple plausible high resolution images, thus modeling the super resolution process with a pixel independent conditional model often results in averaging different details--hence blurry edges. By contrast, our model is able to represent a multimodal conditional distribution by properly modeling the statistical dependencies among the high resolution image pixels, conditioned on a low resolution input. We employ a PixelCNN architecture to define a strong prior over natural images and jointly optimize this prior with a deep conditioning convolutional network. Human evaluations indicate that samples from our proposed model look more photo realistic than a strong L2 regression baseline.", "target": ["ピクセル間の依存を考慮しない推定(conditioning network)に、PixelCNNを用いて計算した高解像度画像におけるピクセル間の依存(prior network)を足し合わせることで、高解像度ピクセル推定を行っている。"]}
+{"source": "In this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.", "target": ["音楽生成についての研究。長い時系列上での関係を捉えるために、RNNを階層状に積んで上の方ほど長い間隔の依存をとらえるのを担当するような構成を構築(最下層の出力は通常のNN)。音声合成・音楽のデータで検証しWaveNet(CNN)を上回った、という結果。"]}
+{"source": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.", "target": ["Facebookの文章読解タスクの研究。状況を記憶させるためのメモリユニットを組み合わせたネットワーク(Recurrent Entity Network)を提案。入力に対し更新を行う際key vectorを用いどのメモリに書き込むかを含め学習する。bAbIタスクで完全試合を達成。"]}
+{"source": "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "target": ["低解像度から高解像度に復元する研究。今までは本物の画像との距離(MSE)を使用していたが、これだと「感覚的」な近さと一致しないことが多かった。そこでPerceptual lossという特徴マップ間の誤差を導入(学習済みVGGを使用)。5段階の主観評価で既存モデルより1スコアUP"]}
+{"source": "Deep neural networks (DNN) have revolutionized the field of natural language processing (NLP). Convolutional neural network (CNN) and recurrent neural network (RNN), the two main types of DNN architectures, are widely explored to handle various NLP tasks. CNN is supposed to be good at extracting position-invariant features and RNN at modeling units in sequence. The state of the art on many NLP tasks often switches due to the battle between CNNs and RNNs. This work is the first systematic comparison of CNN and RNN on a wide range of representative NLP tasks, aiming to give basic guidance for DNN selection.", "target": ["最近のNLPではCNNかRNNがよく使われているがそのシステマティックな比較は行われてこなかった。そこでこの論文ではCNNとRNNの性能比較を行っている。具体的にはCNN、GRU、LSTMを7つのタスク(Sentiment Classification, Relation Classification, Textual Entailment, Answer Selection, Question Relation Match, Path Query Answering, POS)について評価を行っている。評価した結果、キーフレーズの認識が重要なタスク(Sentiment Detection, Question Answer Matching)以外のタスクについてはRNNの性能が上回った。"]}
+{"source": "Word representations have proven useful for many NLP tasks, e.g., Brown clusters as features in dependency parsing (Koo et al., 2008). In this paper, we investigate the use of continuous word representations as features for dependency parsing. We compare several popular embeddings to Brown clusters, via multiple types of features, in both news and web domains. We find that all embeddings yield significant parsing gains, including some recent ones that can be trained in a fraction of the time of others. Explicitly tailoring the representations for the task leads to further improvements. Moreover, an ensemble of all representations achieves the best results, suggesting their complementarity.", "target": ["係り受け解析の特徴としてword embeddingを使用した話。先行研究と比べてすごいのは、単語表現をタスクに合わせて調整したり、複数のアルゴリズムで得られた単語表現を組み合わせることで性能を向上させられた点。PTBとEnglish Web treebankをデータセットとして検証した結果、Ensembleした表現を使うと一番良い結果になった。"]}
+{"source": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "target": ["短いテキストは文脈情報の不足やデータのスパース性の問題がある。これら問題に対処するためにクラスタリングとCNNを用いてテキストをモデル化する話。具体的にはembedding空間をクラスタリングすることでsemanticクリークを作成し、そこから得られたsemantic unitをCNNに入力する。文書分類タスクでstate-of-the-artな手法と比較した結果、提案手法の有効性を示すことができた。"]}
+{"source": "Data sparsity is a large problem in natural language processing that refers to the fact that language is a system of rare events, so varied and complex, that even using an extremely large corpus, we can never accurately model all possible strings of words. This paper examines the use of skip-grams (a technique where by n-grams are still stored to model language, but they allow for tokens to be skipped) to overcome the data sparsity problem. We analyze this by computing all possible skip-grams in a training corpus and measure how many adjacent (standard) n-grams these cover in test documents. We examine skip-gram modelling using one to four skips with various amount of training data and test against similar documents as well as documents generated from a machine translation system. In this paper we also determine the amount of extra training data required to achieve skip-gram coverage using standard adjacent tri-grams.", "target": ["言語におけるデータのスパース性の問題に対処するためにSkip-gramを使ってみた話。一般的なbigramやtrigramに比べて、skip-bigramやskip-trigramを使うことでカバレッジを向上させることができる。実際にカバレッジを比較したところ、データ数を増やすよりskip-gramを使ったほうがカバレッジ向上の役に立っている。"]}
+{"source": "Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches. Distributional methods, whose supervised variants are the current best performers, and path-based methods, which received less research attention. We suggest an improved path-based algorithm, in which the dependency paths are encoded using a recurrent neural network, that achieves results comparable to distributional methods. We then extend the approach to integrate both path-based and distributional signals, significantly improving upon the state-of-the-art on this task.", "target": ["RNNを用いて上位語の検知を行う話。具体的には依存関係のパスをLSTMsを使ってエンコードしてそれを分類している。評価した結果、従来よく行われているDistributionalの方法に匹敵する性能を示した。また、Distributionalな手法と組み合わせることでF1で14ポイントの向上が見られた。"]}
+{"source": "We introduce an extension to the bag-of-words model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.", "target": ["「ある単語の予測は文脈内のある単語に大きく依存している」という仮説を考慮して単語埋め込み表現を獲得するために、CBOWにAttentionを導入した話。POS Induction(教師なしの品詞タグ付け)、品詞タグ付け、評判分析で評価したところ、POS Inductionに対しては既存の手法(CBOW, Skip-ngram, SSkip-ngram)と比較して良い結果であった。その他タスクでもそこそこの性能を示した。"]}
+{"source": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).", "target": ["同構造のネットワーク同士で、互いの学習結果を壊さず相手の学習結果を取り込むことを目指した研究。学習と同時にモジュール(畳込層など)を結ぶパスを遺伝的アルゴリズムで進化させていき、学習が完了したら重みを固定し次のタスクに入る。これで効果的な転移が可能なことを確認(画像&強化学習)。"]}
+{"source": "Deep artificial neural networks have made remarkable progress in different tasks in the field of computer vision. However, the empirical analysis of these models and investigation of their failure cases has received attention recently. In this work, we show that deep learning models cannot generalize to atypical images that are substantially different from training images. This is in contrast to the superior generalization ability of the visual system in the human brain. We focus on Convolutional Neural Networks (CNN) as the state-of-the-art models in object recognition and classification; investigate this problem in more detail, and hypothesize that training CNN models suffer from unstructured loss minimization. We propose computational models to improve the generalization capacity of CNNs by considering how typical a training image looks like. By conducting an extensive set of experiments we show that involving a typicality measure can improve the classification results on a new set of images by a large margin. More importantly, this significant improvement is achieved without fine-tuning the CNN model on the target image set.", "target": ["分類タスクの学習に対して全ての入力データでlossの重みが同一で良いかを検証し、重み付けの手法を提案。同一クラス内のデータに対してTypical, Atypicalなデータを区別するためにTypical Scoreを算出し、lossのweightとして利用。Typical Scoreはクラス内の重心(平均値など)っぽさ(近さ)。weightは非線形なものも試している。"]}
+{"source": "In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator---analagous to the human evaluator in the Turing test--- to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training we describe a model for adversarial {\\em evaluation} that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.", "target": ["対話モデルについて、チューリングテストのように「生成した発話が人間と区別できないか」を評価する、つまりGANと同じ仕組みの導入を提案。Seq2Seqを基本とし、騙せたかどうかで報酬を得る強化学習の仕組みを導入(生成単語ごとに判定を実施)。既存のSeq2Seqモデルより優秀な結果。"]}
+{"source": "Oriental ink painting, called Sumi-e, is one of the most appealing painting styles that has attracted artists around the world. Major challenges in computer-based Sumi-e simulation are to abstract complex scene information and draw smooth and natural brush strokes. To automatically find such strokes, we propose to model the brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework. We also provide elaborate design of actions, states, and rewards tailored for a Sumi-e agent. The effectiveness of our proposed approach is demonstrated through simulated Sumi-e experiments.", "target": ["墨絵における筆の運びを強化学習で学習させる話。筆の動き・止め・跳ね・回転をアクションとし、筆の動きの滑らかさを報酬として学習をしている。状態はグローバル上の位置と、それを基に計算するストロークの中における相対情報(Figure2参照)の双方を扱っている(計算に使うのは相対のみ)。"]}
+{"source": "Task-oriented dialogue focuses on conversational agents that participate in user-initiated dialogues on domain-specific topics. In contrast to chatbots, which simply seek to sustain open-ended meaningful discourse, existing task-oriented agents usually explicitly model user intent and belief states. This paper examines bypassing such an explicit representation by depending on a latent neural embedding of state and learning selective attention to dialogue history together with copying to incorporate relevant prior context. We complement recent work by showing the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism. Our model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state-of-the-art on DSTC2.", "target": ["Seq2Seqでタスク指向対話を行う話。単純に出力(システム発話)を予測させるだけでなく、Attentionが最も高い入力ベクトルを予測させる(入力からコピーして教師ベクトルにする)。さらに知識ベースに入っている語か否かの情報を加えてそこそこの精度なので、結果はちょっと微妙。"]}
+{"source": "We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.", "target": ["VAEやGANなどの生成系のタスクでは、「真の分布」との距離の最小化を目的にしている。つまり「距離」の定義はモデルの精度の大きな要素で、GANではこの自由度が高い反面学習が安定しない要因になっていた。そこでWasserstein距離を使うと勾配が消失せず学習が安定したという話。"]}
+{"source": "State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.", "target": ["言語固有の知識やリソースに依存しないような固有表現認識の手法を提案した話。具体的には2つのモデル(LSTM-CRFとS-LSTM)を提案している。ラベル付きコーパスから学習した文字ベースの単語表現とラベルなしコーパスから学習した単語表現を入力とすることで、4つの言語でSOTAとなった。"]}
+{"source": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.", "target": ["テキスト分類のためにCNNを使った半教師あり学習のフレームワークを提案した話。従来モデルでは事前学習済みのword embeddingを畳み込み層の入力に使っていたが、本研究では小さいテキストの領域から教師なしでembeddingを学習し、教師ありCNNにおける畳み込み層の入力の一部として使う。評価分析(IMDB, Elec)とトピック分類(RCV1)で実験したところ、先行研究より高い性能を示した。"]}
+{"source": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "target": ["ハッシュタグを教師にして短いテキストの表現を学習する話。具体的にはCNNを用いて、テキストとハッシュタグのペアに対してスコアを出力し、ハッシュタグのランク付けを行う過程でテキストの表現を学習する。ハッシュタグの予測と文書推薦タスクで評価を行った結果、ベースラインの手法よりも良い結果となった。"]}
+{"source": "Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (kim 2014, kalchbrenner 2014, johnson 2014). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.", "target": ["CNNのモデルは文分類でいい結果を残しているけど、熟練者がアーキテクチャ決めたりハイパーパラメータを設定する必要がある。これらの変更がどのような結果を及ぼすのかよくわからないので、一層のCNNを使って検証した話。最後に、CNNで文分類するときにモデルのアーキテクチャやハイパーパラメータをどう設定すべきか実践的なアドバイスをしている。"]}
+{"source": "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "target": ["品詞タグ付け、チャンキング、固有表現抽出、意味役割付与を学習できるニューラルネットワークを提案した話。単純に学習させるだけではベンチマークより性能が下回ったが、ラベルなしデータを用いて言語モデルの学習を事前に行うことで、質の良い単語ベクトルが性能向上に寄与することを示した。さらに各タスクを解くためのモデル間でパラメタを共有してマルチタスク学習を行うことで性能がより向上することも示した。"]}
+{"source": "Convolutional neural network (CNN) is a neural network that can make use of the internal structure of data such as the 2D structure of image data. This paper studies CNN on text categorization to exploit the 1D structure (namely, word order) of text data for accurate prediction. Instead of using low-dimensional word vectors as input as is often done, we directly apply CNN to high-dimensional text data, which leads to directly learning embedding of small text regions for use in classification. In addition to a straightforward adaptation of CNN from image to text, a simple but new variation which employs bag-of-word conversion in the convolution layer is proposed. An extension to combine multiple convolution layers is also explored for higher accuracy. The experiments demonstrate the effectiveness of our approach in comparison with state-of-the-art methods.", "target": ["CNNを使って語順を考慮したテキスト分類を行う話。たいていのCNNの手法では入力としてword embeddingを入力するが、この研究では高次元のone-hotベクトルをそのまま入力して、小さなテキスト領域のembeddingを学習する。評判分析(IMDB含む)とトピック分類に関する3つのデータセットでSOTAな手法と比較した結果、提案手法の有効性を示せた。"]}
+{"source": "This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.", "target": ["文字レベルの畳み込みニューラルネットワークをテキスト分類に使った話。シソーラスを使ってテキスト中の単語を同義語で置換することでデータを増やしている。比較は、伝統的な手法としてbow、bag-of-ngram、bag-of-means、Deep Learning手法として、単語ベースのCNN、LSTMを対象に行っている。8つのデータセットを作成してベースの手法と比較した結果、いくつかのデータセットでは有効性を示せた。"]}
+{"source": "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "target": ["並列に動作する複数エージェントのサンプルから学習する強化学習手法A3Cの提案。複数のエージェントごとに並列にサンプルを収集し勾配を評価しロスを足しあげ、その勾配を使い共有パラメータを非同期に更新する。また非同期に共有パラメータを各エージェントのパラメータとして取得する。サンプルは保持しないのでexperience replayは必要がない。DQN,DDQNなどと比較しCPUを使い短い学習時間であってもより高性能、この時点のSOTA."]}
+{"source": "Semantic Textual Similarity (STS) seeks to measure the degree of semantic equivalence between two snippets of text. Similarity is expressed on an ordinal scale that spans from semantic equivalence to complete unrelatedness. Intermediate values capture specifically defined levels of partial similarity. While prior evaluations constrained themselves to just monolingual snippets of text, the 2016 shared task includes a pilot subtask on computing semantic similarity on cross-lingual text snippets. This year’s traditional monolingual subtask involves the evaluation of English text snippets from the following four domains: Plagiarism Detection, Post-Edited Machine Translations, Question-Answering and News Article Headlines. From the question-answering domain, we include both question-question and answer-answer pairs. The cross-lingual subtask provides paired Spanish-English text snippets drawn from the same sources as the English data as well as independently sampled news data. The English subtask attracted 43 participating teams producing 119 system submissions, while the cross-lingual Spanish-English pilot subtask attracted 10 teams resulting in 26 systems.", "target": ["テキストの類似性を測るタスクであるSemEval-2016 Task 1の説明をしている論文。説明内容はタスクの説明、アノテーション方法の説明、参加者が提出した結果の提示、および総括からなる。2016年はcross-lingualなテキストの類似性を含んだのが特徴。"]}
+{"source": "Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted feature, we produce state-of-the-art POS taggers for two languages: English, with 97.32% accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47% accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2% on the best previous known result.", "target": ["品詞タグ付けをCNN(CharWNN)を使って行う話。具体的には、単語レベルと文字レベルのembeddingsを統合して単語のベクトル表現を構築し、構築したベクトルを入力することで品詞のスコアを出力するCNNを構築した。英語とポルトガル語に対するデータセット(WSJとMac-Morpho)を用いて実験した結果、SOTAな結果となった。"]}
+{"source": "Sentiment analysis of short texts such as single sentences and Twitter messages is challenging because of the limited contextual information that they normally contain. Effectively solving this task requires strategies that combine the small text content with prior knowledge and use more than just bag-of-words. In this work we propose a new deep convolutional neural network that exploits from character- to sentence-level information to perform sentiment analysis of short texts. We apply our approach for two corpora of two different domains: the Stanford Sentiment Treebank (SSTb), which contains sentences from movie reviews; and the Stanford Twitter Sentiment corpus (STS), which contains Twitter messages. For the SSTb corpus, our approach achieves state-of-the-art results for single sentence sentiment prediction in both binary positive/negative classification, with 85.7% accuracy, and fine-grained classification, with 48.3% accuracy. For the STS corpus, our approach achieves a sentiment prediction accuracy of 86.4%.", "target": ["映画レビューやTwitterに対する評判分析をCNN(CharSCNN)を使って行う話。具体的には、単語レベルと文字レベルのembeddingsから文のベクトル表現を構築し、構築したベクトルを入力することで評判のスコアを出力するCNNを構築した。映画レビュー(SSTb)とTwitter(STS)に対するデータセットを用いて実験した結果、SOTAな結果となった。"]}
+{"source": "This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.", "target": ["日本語フォントを作るにはたくさん文字を書かないといけないので、それを楽にしようという論文。基準となる文字セット(Skeleton dataset)を用意し、それと新しいフォントの数文字について比較を行う。そこから構造(傾きなど)とストロークを抽出し、他の文字にも適用するという手法"]}
+{"source": "Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at this https URL", "target": ["画像を認識するとき、この辺に注目するとええよ、というようにAttentionをTransferするという研究。Activationベース(lossを調整)と、Gradientベース(勾配を調整)の2種類を提案。効果は微少な感じだが、Activationベースの方が有効"]}
+{"source": "We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.", "target": ["ドメイン(スタイル)トランスファーの研究で、スタイルを変換する関数Gを画像の特徴抽出(f)とスタイル変換(g)に分割し、あたかもアナロジーを行うように(元画像+スタイル)転化を行っている。これにより汎用性が高くなっている。メインのタスクとして、顔画像をアイコン風に変換している。"]}
+{"source": "We propose an end-to-end learning framework for generating foreground object segmentations. Given a single novel image, our approach produces pixel-level masks for all \"object-like\" regions---even for object categories never seen during training. We formulate the task as a structured prediction problem of assigning foreground/background labels to all pixels, implemented using a deep fully convolutional network. Key to our idea is training with a mix of image-level object category examples together with relatively few images with boundary-level annotations. Our method substantially improves the state-of-the-art on foreground segmentation for ImageNet and MIT Object Discovery datasets. Furthermore, on over 1 million images, we show that it generalizes well to segment object categories unseen in the foreground maps used for training. Finally, we demonstrate how our approach benefits image retrieval and image retargeting, both of which flourish when given our high-quality foreground maps.", "target": ["オブジェクト検知をピクセルレベルで行った話。前景部分の抽出、領域提案といった既存の手法と異なり、ピクセル単位で前景/背景の確率をCNNで計算する。 クラス識別のモデルを、セグメンテーションのデータセットでファインチューニングするという面白い手法をとっている。"]}
+{"source": "Despite recent advances, memory-augmented deep neural networks are still limited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training. Our memory module can be easily added to any part of a supervised neural network. To show its versatility we add it to a number of networks, from simple convolutional ones tested on image classification to deep sequence-to-sequence and recurrent-convolutional models. In all cases, the enhanced network gains the ability to remember and do life-long one-shot learning. Our module remembers training examples shown many thousands of steps in the past and it can successfully generalize from them. We set new state-of-the-art for one-shot learning on the Omniglot dataset and demonstrate, for the first time, life-long one-shot learning in recurrent neural networks on a large-scale machine translation task.", "target": ["外部メモリを利用したone-shot learningの精度向上手法を提案。one-shot learningタスクではSoTA。hashing trickを使って最近傍法を効率的に実行。memory lossという独自のlossを導入している。"]}
+{"source": "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "target": ["CNNを用いて文分類を行う話。具体的には文を単語ベクトルの列として表し、それに対してCNNを用いて特徴抽出・分類を行っている。論文では事前学習済みの単語ベクトル(Google Newsをword2vecで学習したもの)を使っている。評価分析や質問タイプ分類を含む7つのタスクで評価したところ、7つ中4つでSOTAな結果になった。"]}
+{"source": "We present Tweet2Vec, a novel method for generating general-purpose vector representation of tweets. The model learns tweet embeddings using character-level CNN-LSTM encoder-decoder. We trained our model on 3 million, randomly selected English-language tweets. The model was evaluated using two methods: tweet semantic similarity and tweet sentiment categorization, outperforming the previous state-of-the-art in both tasks. The evaluations demonstrate the power of the tweet embeddings generated by our model for various tweet categorization tasks. The vector representations generated by our model are generic, and hence can be applied to a variety of tasks. Though the model presented in this paper is trained on English-language tweets, the method presented can be used to learn tweet embeddings for different languages.", "target": ["文字レベルのCNN-LSTM Encoder-Decoderモデルを構築して、Tweetのembeddingsを学習する話。文字レベルで処理を行うことで、Tweetに混じるノイズに頑健になり、他言語でも適用可能となった。Semantic Relatednessと評価分類で評価した結果これまでで最高の性能を達成した。"]}
+{"source": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "target": ["PixelCNNなどでうまくいっている自己回帰モデルによる生成モデルを音声の生成に適用してみた。dilationが幾何級数的に大きくなるconvolutionの列の導入により波形の直接生成に必要な大きな受容野を計算量を大きくせず実現できた。主観評価でこれまでで最高の品質を実現。音素認識にも使えて、有望な結果を出した。"]}
+{"source": "Text from social media provides a set of challenges that can cause traditional NLP approaches to fail. Informal language, spelling errors, abbreviations, and special characters are all commonplace in these posts, leading to a prohibitively large vocabulary size for word-level approaches. We propose a character composition model, tweet2vec, which finds vector-space representations of whole tweets by learning complex, non-local dependencies in character sequences. The proposed model outperforms a word-level baseline at predicting user-annotated hashtags associated with the posts, doing significantly better when the input contains many out-of-vocabulary words or unusual character sequences. Our tweet2vec encoder is publicly available.", "target": ["Twitterの投稿内容から投稿についているハッシュタグを予測する文字ベースのニューラルネットワーク(Bi-GRU)を構築する話。文字ベースで予測することで膨大な単語を扱う必要がない、未知語に強い、単語分割が必要ないといった利点がある。ハッシュタグの予測性能で評価した結果、単語レベルの方法に比べて良い性能を示した。"]}
+{"source": "We present a novel word level vector representation based on symmetric patterns (SPs). For this aim we automatically acquire SPs (e.g., “X and Y”) from a large corpus of plain text, and generate vectors where each coordinate represents the cooccurrence in SPs of the represented word with another word of the vocabulary. Our representation has three advantages over existing alternatives: First, being based on symmetric word relationships, it is highly suitable for word similarity prediction. Particularly, on the SimLex999 word similarity dataset, our model achieves a Spearman’s ρ score of 0.517, compared to 0.462 of the state-of-the-art word2vec model. Interestingly, our model performs exceptionally well on verbs, outperforming state-of-the-art baselines by 20.2–41.5%. Second, pattern features can be adapted to the needs of a target NLP application. For example, we show that we can easily control whether the embeddings derived from SPs deem antonym pairs (e.g. (big,small)) as similar or dissimilar, an important distinction for tasks such as word classification and sentiment analysis. Finally, we show that a simple combination of the word similarity scores generated by our method and by word2vec results in a superior predictive power over that of each individual model, scoring as high as 0.563 in Spearman’s ρ on SimLex999. This emphasizes the differences between the signals captured by each of the models.", "target": ["Symmetric Pattern(SP)(たとえばX and Y)に基づく単語ベクトル表現を提案した話。手法としてはSPをコーパスから獲得し、それに基づきPPMIを用いてベクトルを生成する。単語類似度タスクで評価した結果、SimLex999ではSOTAとなった。また、動詞に対して有効であることも分かった。"]}
+{"source": "Despite interest in using cross-lingual knowledge to learn word embeddings for various tasks, a systematic comparison of the possible approaches is lacking in the literature. We perform an extensive evaluation of four popular approaches of inducing cross-lingual embeddings, each requiring a different form of supervision, on four typographically different language pairs. Our evaluation setup spans four different tasks, including intrinsic evaluation on mono-lingual and cross-lingual similarity, and extrinsic evaluation on downstream semantic and syntactic applications. We show that models which require expensive cross-lingual knowledge almost always perform better, but cheaply supervised models often prove competitive on certain tasks.", "target": ["多言語情報を使ってword embeddingsを得ることで性能向上することは知られていたが、手法の比較は行われてこなかった。そのため、4つの手法を比較した話(BiSkip, BiCVM, BiVCD, BiCCA)。Intrinsicなタスク(monolingualとcross-lingualでの単語類似度タスク)とExtrinsicなタスク(cross-lingualでの文書分類と係り受け解析)で評価した結果、単言語の類似度タスクではBiSkipとBiVCDは同じくらいだが、cross-lingualなタスクではBiSkipがかなり良い結果となることを示し、対照的にSyntacticなタスクではBiCCAが最も良い結果となった。"]}
+{"source": "We propose Edward, a Turing-complete probabilistic programming language. Edward defines two compositional representations---random variables and inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow.", "target": ["確率モデルを記述するためのフレームワークEdwardの論文。確率変数をつないでグラフ構造を構築するように記載でき、TensorFlowのグラフと統合し爆速で動作可能。VAEやGANなど、確率モデルと統合されたようなモデルが書きやすくなる。"]}
+{"source": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "target": ["画像認識について、人間は少ないサンプルですぐに識別ができるが、機械学習モデルはそうはいかない。人間は認識結果をもとにその知識から推論を行っているからそれができる、と仮定し、この「知識」=認識画像間の関係グラフを組み込むことで1-shotの精度を上げようという話。 MSCOCOの事前学習モデルで識別を行い、Visual Genomeというよりスパースなデータセットで精度を検証。1-shot/5-shotでVGGの1.4~5倍の精度を達成。"]}
+{"source": "In this paper, we propose LexVec, a new method for generating distributed word representations that uses low-rank, weighted factorization of the Positive Point-wise Mutual Information matrix via stochastic gradient descent, employing a weighting scheme that assigns heavier penalties for errors on frequent cooccurrences while still accounting for negative co-occurrence. Evaluation on word similarity and analogy tasks shows that LexVec matches and often outperforms state-of-the-art methods on many of these tasks.", "target": ["LexVecという単語埋め込みベクトル獲得方法を提案した話。具体的にはPPMI行列を分解することによって得る。単語類似度タスクとアナロジータスクで評価した結果、単語類似度タスクではいくつかの評価セットにおいてSGNSを上回ったもののアナロジータスクではSGNSやGloVeの方が良い結果となった。"]}
+{"source": "Word embeddings – distributed representations of words – in deep learning are beneficial for many tasks in NLP. However, different embedding sets vary greatly in quality and characteristics of the captured information. Instead of relying on a more advanced algorithm for embedding learning, this paper proposes an ensemble approach of combining different public embedding sets with the aim of learning metaembeddings. Experiments on word similarity and analogy tasks and on part-of-speech tagging show better performance of metaembeddings compared to individual embedding sets. One advantage of metaembeddings is the increased vocabulary coverage. We release our metaembeddings publicly at http://cistern.cis.lmu.de/meta-emb.", "target": ["異なる性質を持つembedding集合を組み合わせてmeta embeddingを得る話。具体的には5つのembedding集合(HLBL, Huang, GloVe, CW, word2vec)を4つの手法(CONC, SVD, 1toN, 1toN+)で組み合わせて実験。これにより、単語類似度タスク、アナロジータスク、POSの性能が向上した。また組み合わせることでボキャブラリのカバレッジをあげられるのもメリット。"]}
+{"source": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.", "target": ["動詞は文の意味を決定するのに重要な役割を占めるのに、最近の単語の意味研究ではあんまり注目されていないよねということで3500の動詞ペアの類似度を人間が評価したデータセットを提案した話。既存の単語表現学習モデルで分析したところ、低頻度で多義の動詞については非常に低い性能となることがわかった。"]}
+{"source": "We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predicable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior.", "target": ["GRUよりさらに単純化したRNNゲートアーキテクチャの提案。hidden stateから次の時刻のhidden stateへの重み行列を単位行列で置き換えた。言語モデルではLSTMやGRUと同等のパフォーマンスを示した。"]}
+{"source": "Modern automatic speech recognition (ASR) systems need to be robust under acoustic variability arising from environmental, speaker, channel, and recording conditions. Ensuring such robustness to variability is a challenge in modern day neural network-based ASR systems, especially when all types of variability are not seen during training. We attempt to address this problem by encouraging the neural network acoustic model to learn invariant feature representations. We use ideas from recent research on image generation using Generative Adversarial Networks and domain adaptation ideas extending adversarial gradient-based training. A recent work from Ganin et al. proposes to use adversarial training for image domain adaptation by using an intermediate representation from the main target classification network to deteriorate the domain classifier performance through a separate neural network. Our work focuses on investigating neural architectures which produce representations invariant to noise conditions for ASR. We evaluate the proposed architecture on the Aurora-4 task, a popular benchmark for noise robust ASR. We show that our method generalizes better than the standard multi-condition training especially when only a few noise categories are seen during training.", "target": ["音声認識するニューラルネットの中間の特徴(h)をノイズに対して不変になるように訓練すると、ノイズデータセットが小さくとも精度が上がったという話。音の入力はエンコーダ(E)によりhに変換されるが、このhに音素の識別機Rとノイズの種類の識別機Dがついている。RとDの訓練と並行して、Dが弁別能力をなくするようにEを訓練するというアイデア。"]}
+{"source": "Convolutional Neural Networks (CNNs) are effective models for reducing spectral variations and modeling spectral correlations in acoustic features for automatic speech recognition (ASR). Hybrid speech recognition systems incorporating CNNs with Hidden Markov Models/Gaussian Mixture Models (HMMs/GMMs) have achieved the state-of-the-art in various benchmarks. Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural Networks (RNNs), which is proposed for labeling unsegmented sequences, makes it feasible to train an end-to-end speech recognition system instead of hybrid settings. However, RNNs are computationally expensive and sometimes difficult to train. In this paper, inspired by the advantages of both CNNs and the CTC approach, we propose an end-to-end speech framework for sequence labeling, by combining hierarchical CNNs with CTC directly without recurrent connections. By evaluating the approach on the TIMIT phoneme recognition task, we show that the proposed model is not only computationally efficient, but also competitive with the existing baseline systems. Moreover, we argue that CNNs have the capability to model temporal correlations with appropriate context information.", "target": ["CNNだけで音声認識を行う試み。LSTMを使ったときより計算的にefficient。TIMITデータセットではLSTMと同程度の精度を出すことができた。"]}
+{"source": "In this paper, we introduce a variation of the skip-gram model which jointly learns distributed word vector representations and their way of composing to form phrase embeddings. In particular, we propose a learning procedure that incorporates a phrase-compositionality function which can capture how we want to compose phrases vectors from their component word vectors. Our experiments show improvement in word and phrase similarity tasks as well as syntactic tasks like dependency parsing using the proposed joint models.", "target": ["skip-gramを使ってフレーズレベルのベクトルを獲得する話。単語ベクトルと単語ベクトルを組み合わせて作るフレーズベクトルの作り方を同時に学習している。単語類似度とアナロジーおよび係り受け解析で評価した結果、単なるskip-gramより性能が若干向上した。"]}
+{"source": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "target": ["人間が持っている構造化された事前知識をナレッジグラフの形でディープラーニングのclassificationに導入する手法の提案。Graph Search Neural Networkを使用しEnd-to-Endで学習が可能。"]}
This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "target": ["人間が持っている構造化された事前知識をナレッジグラフの形でディープラーニングのclassificationに導入する手法の提案。Graph Search Neural Networkを使用しEnd-to-Endで学習が可能。"]} +{"source": "Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker is the quintessential game of imperfect information, and a longstanding challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated with statistical significance professional poker players in heads-up no-limit Texas hold'em. The approach is theoretically sound and is shown to produce more difficult to exploit strategies than prior approaches.", "target": ["DNNでポーカーを行い、プロより強くなったという話。ポーカーが、相手の手札がわからない不完全情報ゲームという点でこの意義は大きい。判断の後悔を最小化するというCFRの考えがベースになっている。ただ、これは当然結末に至る手札がわからないと後悔の程度がわからない。そこで、その推計に相手のあり得る手札を入力としたDNNを使用している。"]} +{"source": "In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in early days, it is hard to train a large high-capacity convolutional neural network without overfitting. After the rapid growth in the amount of the annotated data and the recent improvements in the strengths of graphics processor units (GPUs), the research on convolutional neural networks has emerged swiftly and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.", "target": ["2017年年初に送る、これまでのCNNのまとめ。構成方法、最適化手法から適用先まで、幅広くまとめられている。この図だけでもかなりの価値がある。"]} +{"source": "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning.
We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "target": ["GANで生成を行う際に物体の構造が考慮されるべき、とし、表面のテクスチャ(surface normal map)を生成してからそれを入力に画像生成を行う、という二段構えのGANを考案。その名もS2-GAN。これで通常より高い識別性を持つ画像を生成できた。"]} +{"source": "Despite the loss of semantic information, bag-of-ngram based methods still achieve state-of-the-art results for tasks such as sentiment classification of long movie reviews. Many document embeddings methods have been proposed to capture semantics, but they still can't outperform bag-of-ngram based methods on this task. In this paper, we modify the architecture of the recently proposed Paragraph Vector, allowing it to learn document vectors by predicting not only words, but n-gram features as well. Our model is able to capture both semantics and word order in documents while keeping the expressive power of learned vectors. Experimental results on IMDB movie review dataset show that our model outperforms previous deep learning models and bag-of-ngram based models due to the above advantages. More robust results are also obtained when our model is combined with other models. The source code of our model will be also published together with this paper.", "target": ["評価分析を長い映画レビューに対して行うと、既存の文書embeddingではbag-of-ngramを下回るので、超えられる文書embedding手法を提案した話。具体的にはMikolovらが提案したParagraph Vectorの考え方をベースに、文書から単語とbag-of-ngramを予測することで文書ベクトルを獲得している。IMDBのデータセットで評価をしたところ、accuracyで既存の文書embedding手法とbag-of-ngramを上回った。"]} +{"source": "Recently, a new document metric called the word mover’s distance (WMD) has been proposed with unprecedented results on kNN-based document classification. The WMD elevates high-quality word embeddings to a document metric by formulating the distance between two documents as an optimal transport problem between the embedded words. However, the document distances are entirely unsupervised and lack a mechanism to incorporate supervision when available. In this paper we propose an efficient technique to learn a supervised metric, which we call the Supervised-WMD (S-WMD) metric. The supervised training minimizes the stochastic leave-one-out nearest neighbor classification error on a per-document level by updating an affine transformation of the underlying word embedding space and a word-importance weight vector. As the gradient of the original WMD distance would result in an inefficient nested optimization problem, we provide an arbitrarily close approximation that results in a practical and efficient update rule. We evaluate S-WMD on eight real-world text classification tasks on which it consistently outperforms almost all of our 26 competitive baselines.", "target": ["分類タスクを対象に教師なしの文書間距離指標WMD(#147)の教師あり拡張Supervised-WMD(SWMD)の提案。WMDとの違いは文書に含まれる単語ヒストグラムへの単語ごと重要度の重み付けと埋め込み表現の線形変換を行うことで、この重みと変換行列を訓練用データから学習する。8つのデータセットの文書分類タスクを教師なし、教師あり、WMDを含む26の手法で比較し最もよい性能を示した。"]} +{"source": "Neural word representations have proven useful in Natural Language Processing (NLP) tasks due to their ability to efficiently model complex semantic and syntactic word relationships. However, most techniques model only one representation per word, despite the fact that a single word can have multiple meanings or \"senses\". Some techniques model words by using multiple vectors that are clustered based on context. However, recent neural approaches rarely focus on the application to a consuming NLP algorithm. Furthermore, the training process of recent word-sense models is expensive relative to single-sense embedding processes.
This paper presents a novel approach which addresses these concerns by modeling multiple embeddings for each word based on supervised disambiguation, which provides a fast and accurate way for a consuming NLP model to select a sense-disambiguated embedding. We demonstrate that these embeddings can disambiguate both contrastive senses such as nominal and verbal senses as well as nuanced senses such as sarcasm. We further evaluate Part-of-Speech disambiguated embeddings on neural dependency parsing, yielding a greater than 8% average error reduction in unlabeled attachment scores across 6 languages.", "target": ["単語には複数の語義があるのに各単語につき一つの単語ベクトルしか学習していない問題に対して、語義ごとにベクトルを作ることを提案している。手法としてはPoSタガーでタグ付けしたコーパスを用いて、CBOWやSkip-gramで語義を予測させることで学習する。\"主観的\"な評価の結果、確かに語義ごとのベクトルを得られていた。"]} +{"source": "We present the Word Mover’s Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to “travel” to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover’s Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates.", "target": ["文書間の距離指標Word Mover's Distance(WMD)の提案。分布間の距離指標Earth Mover's Distance(EMD)をベースに文書に含まれる単語の埋め込みベクトルの集合を分布とみなして距離を算出。k-nnによる文書分類タスクで評価しSOTA手法より低エラーを実現。"]} +{"source": "Paragraph Vectors has been recently proposed as an unsupervised method for learning distributed representations for pieces of texts. In their work, the authors showed that the method can learn an embedding of movie review texts which can be leveraged for sentiment analysis. That proof of concept, while encouraging, was rather narrow. Here we consider tasks other than sentiment analysis, provide a more thorough comparison of Paragraph Vectors to other document modelling algorithms such as Latent Dirichlet Allocation, and evaluate performance of the method as we vary the dimensionality of the learned representation. We benchmarked the models on two document similarity data sets, one from Wikipedia, one from arXiv. We observe that the Paragraph Vector method performs significantly better than other methods, and propose a simple improvement to enhance embedding quality. Somewhat surprisingly, we also show that much like word embeddings, vector operations on Paragraph Vectors can perform useful semantic results.", "target": ["Paragraph Vectorの有効性を文書類似度タスクでLDAと比較した話。WikipediaとarXivを対象に比較した結果、LDAと同等かそれ以上の結果を示した。また、Paragraph Vectorでもベクトルの足し引きで意味のある結果(e.g. \"Lady Gaga\" - \"American\" + \"Japanese\" = \"Ayumi Hamasaki\")を得ることができた。"]} +{"source": "We introduce a new language learning setting relevant to building adaptive natural language interfaces. It is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, who performs the actual actions (e.g., removing all red blocks).
The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game called SHRDLURN in a blocks world and collected interactions from 100 people playing it. First, we analyze the humans’ strategies, showing that using compositionality and avoiding synonyms correlates positively with task performance. Second, we compare computer strategies, showing that modeling pragmatics on a semantic parsing model accelerates learning for more strategic players.", "target": ["言語がわからない状態で、人の指示を学習させる試み。ブロックを積み替えるゲームを通じて、言語による指示(赤を消す)->論理構造(remove(red))へのパースを学習させる。"]} +{"source": "We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).", "target": ["NVIDIAによる自動運転。この論文では画像入力からのend-to-endでの走路追従を対象とし、出力であるステアリング舵角の値の教師有り学習をCNNで行う。人間のドライバーの走行データから学習。汎化のため入力画像をaugmentationし、それに対応してステアリング舵角の値も補正し学習する。"]} +{"source": "Objects appear to scale differently in natural images. This fact requires methods dealing with object-centric tasks (e.g. object proposal) to have robust performance over variances in object scales. In the paper, we present a novel segment proposal framework, namely FastMask, which takes advantage of hierarchical features in deep convolutional neural networks to segment multi-scale objects in one shot. Innovatively, we adapt segment proposal network into three different functional components (body, neck and head). We further propose a weight-shared residual neck module as well as a scale-tolerant attentional head module for efficient one-shot inference. On MS COCO benchmark, the proposed FastMask outperforms all state-of-the-art segment proposal methods in average recall being 2~5 times faster. Moreover, with a slight trade-off in accuracy, FastMask can segment objects in near real time (~13 fps) with 800*600 resolution images, demonstrating its potential in practical applications.
Our implementation is available on this https URL.", "target": ["one-shotで複数のscaleの物体に対応出来るInstanceを考慮したSemantic Segmentation手法であるFastMaskの提案。役割の違うbody, neck, headという3つの構造を組み合わせてネットワークを構築(Fig.2参照)。ネットワークのコアは重み共有したResNeckモジュール。ResNeck構造でfeature mapをズームアウトして効率的に異なるスケールの物体に対応。既存手法と比較して処理スピードは2〜5倍。提案したhead構造(Attention Head)によって、受容野サイズが物体に合っていない事に起因した背景のノイズの影響を減少させることが出来る。SoTA。"]} +{"source": "Currently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iteration's output. We establish that a feedback based approach has several fundamental advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback networks develop a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We put forth a general feedback based learning architecture with the endpoint results on par or better than existing feedforward networks with the addition of the above advantages. We also investigate several mechanisms in feedback architectures (e.g. skip connections in time) and design choices (e.g. feedback length). We hope this study offers new perspectives in quest for more natural and practical learning models.", "target": ["feedforwardに取って代わる可能性のあるfeedbackネットワーク提案。同一入力に対して繰り返し予測を行い、前の処理の結果を次の処理に反映することでfeedbackネットワークを構築。フィードフォワードと比較した利点として段階的な予測(Early Predictions)が可能である。ラベル空間の階層構造(分類法など)に準拠(Taxonomy Compliance)しcurriculum learning(簡単なものから順次学んでいく学習)を行う。ネットワークのコアとなる構造はstacking ConvLSTM。lossは各Tでの出力におけるlossと最終出力のlossの加重平均。CIFAR100を使った実験結果でResNet24超え。"]} +{"source": "Vector space word representations are learned from distributional information of words in large corpora. Although such statistics are semantically informative, they disregard the valuable information that is contained in semantic lexicons such as WordNet, FrameNet, and the Paraphrase Database. This paper proposes a method for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations, and it makes no assumptions about how the input vectors were constructed. Evaluated on a battery of standard lexical semantic evaluation tasks in several languages, we obtain substantial improvements starting with a variety of word vector models. Our refinement method outperforms prior techniques for incorporating semantic lexicons into the word vector training algorithms.", "target": ["Skip-gramやGloVeで得た単語ベクトルに対し、WordNetなどの外部知識を用いることで単語ベクトルを洗練する手法を提案。外部知識上で関連する単語を似たベクトルにするために似せたいベクトル間のユークリッド距離を最小化する。意味評価をした結果、一部タスクを除いて性能が向上した。"]} +{"source": "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence.
We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.", "target": ["画像生成について、単に生成するだけでなく、クラスの識別をさせる補助的なタスクをさせることで、識別性能(=解像度)を上げることができたという話。研究内容はもちろんだが、画像生成について「識別性(=解像度)」と「多様性」という評価軸が設定されていて、この点は今後意識が必要と感じた"]} +{"source": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", "target": ["自然言語において、ニューラルネットが何をどう判断しているのか解釈する試み。基本はある次元を抜いたときどれだけ尤度に影響があるかを調べることで、隠れ層の重要度ヒートマップを作る。ただ、自然言語においては単語の複合も重要。そこで強化学習を用い、主要要素以外を抜く試みも行っている。"]} +{"source": "We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS’s solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS’s factorization.", "target": ["Skip-gram with negative samplingで学習したword embeddingが、ある仮定の下ではPMIの行列を分解しているのと等価なことを示した論文。SPPMIを用いて単語を表現したところ単語類似度タスクとアナロジータスクのうちの一つで性能が向上することを示した。"]} +{"source": "Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings.
The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.", "target": ["単語ベクトルを獲得するための2つの枠組みであるcount-baseとpredict-baseな手法を初めてシステマティックに比較した論文。ハイパーパラメータを変え、analogyやsemantic relatedness含む5つの意味タスクで比較した結果、predict-baseな手法の方が優れているという結果になった。Don't count, predict!"]} +{"source": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.", "target": ["DNNの重み更新時に重みベクトルをノルムと正規化したベクトルの向きにパラメトライズしそれぞれの勾配から学習するweight normalization(WN)を提案。画像識別、生成モデル、強化学習タスクでbatch normalization(BN)などと比較評価しタスクによらず学習が高速に収束することを示した。"]} +{"source": "Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.", "target": ["word embeddingの性能向上はアルゴリズムでなくハイパーパラメータのチューニングによるところが大きいのではないかということで検証した論文。様々なハイパーパラメータの組み合わせで検証した結果、word similarityタスクにおいて、従来のcount-baseの手法でもpredict-baseの手法と同等の性能を示した。ただし、analogy-taskにおいてはpredict-base手法の方が強かった。"]} +{"source": "This paper deals with price optimization, which is to find the best pricing strategy that maximizes revenue or profit, on the basis of demand forecasting models. Though recent advances in regression technologies have made it possible to reveal price-demand relationship of a number of multiple products, most existing price optimization methods, such as mixed integer programming formulation, cannot handle tens or hundreds of products because of their high computational costs. To cope with this problem, this paper proposes a novel approach based on network flow algorithms. We reveal a connection between supermodularity of the revenue and cross elasticity of demand. On the basis of this connection, we propose an efficient algorithm that employs network flow algorithms. The proposed algorithm can handle hundreds or thousands of products, and returns an exact optimal solution under an assumption regarding cross elasticity of demand.
Even in case in which the assumption does not hold, the proposed algorithm can efficiently find approximate solutions as good as can other state-of-the-art methods, as empirical results show.", "target": ["商品の価格最適化に関する論文。利益を最大化するには大量の商品に対して価格-需要の関係性を明らかにする必要があるが、混合整数計画法では計算量的に難しかった。当該論文ではnetwork flow algorithmsを使用することによって計算量の問題を解決。また、収益のsupermodularityと需要の交差弾力性との関係を明らかにし、その関係性による仮定を置くことで計算量を削減。仮定が成り立たない場合でも既存手法と同等の性能。最終的にはBQP(binary quadratic programming)に落とせるのでBQPを解く。"]} +{"source": "Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologically-aware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way.", "target": ["Recursive Neural Network(RNN)とニューラル言語モデルを組み合わせて形態素から単語ベクトル表現を構築することで、まれ語や複合語、未知語を上手く表現する手法を提案している。単語の類似度タスクで評価した結果、ほとんどすべてのデータセットで従来手法を超える性能を得られた。"]} +{"source": "We analyze three critical components of word embedding training: the model, the corpus, and the training parameters. We systematize existing neural-network-based word embedding algorithms and compare them using the same corpus. We evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks and using it to initialize neural networks. We also provide several simple guidelines for training word embeddings. First, we discover that corpus domain is more important than corpus size. We recommend choosing a corpus in a suitable domain for the desired task, after that, using a larger corpus yields better results. Second, we find that faster models provide sufficient performance in most cases, and more complex models can be used if the training corpus is sufficiently large. Third, the early stopping metric for iterating should rely on the development set of the desired task rather than the validation loss of training embedding.", "target": ["word embeddingの学習に重要なコンポーネントをモデル、コーパス、パラメータの3つとし、既存の手法を分類・比較した論文。比較評価を行い、良いword embeddingを学習するためのガイドラインを示した。"]} +{"source": "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms Oord et al (2016) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark.
Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "target": ["言語モデルのタスクで、CNNでLSTM同等以上の精度を出したという話。畳み込んだ結果をGRUに近い機構で処理し、過去の情報が消失しないようにしている。Google Billion Wordのデータセットでは、LSTMと同等の精度を出す一方計算効率が20倍程度改善された。"]} +{"source": "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "target": ["合成画像で学習できると良いけど独特のクセにより上手くいかない、という点を克服する試み。より「本物らしく」するNN対見破るNNで学習(GAN)。元からかけ離れた「本物化」を防ぐため元画像との差異を利用した正規化などの試みがとられている"]} +{"source": "Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains.", "target": ["多体問題などの相互作用のある複雑な物理法則を物体とその関係性に分解することで、推論可能とした手法の提案。ネットワークへの入力は、物体(ノード)とノード間の関係(エッジ)のグラフ。1つ目の近似関数(ネットワーク)で物体に対する効果量を計算し、外力とともに相互的な影響を加味した状態推定を行う関数(ネットワーク)に投入される。計算量的に大規模なものには対応出来ていないが、手法としては汎用性が高く一般的な多くの物理現象に適用可能。"]} +{"source": "Several machine learning tasks require to represent the data using only a sparse set of interest points.
An ideal detector is able to find the corresponding interest points even if the data undergo a transformation typical for a given domain. Since the task is of high practical interest in computer vision, many hand-crafted solutions were proposed. In this paper, we ask a fundamental question: can we learn such detectors from scratch? Since it is often unclear what points are \"interesting\", human labelling cannot be used to find a truly unbiased solution. Therefore, the task requires an unsupervised formulation. We are the first to propose such a formulation: training a neural network to rank points in a transformation-invariant manner. Interest points are then extracted from the top/bottom quantiles of this ranking. We validate our approach on two tasks: standard RGB image interest point detection and challenging cross-modal interest point detection between RGB and depth images. We quantitatively show that our unsupervised method performs better or on-par with baselines.", "target": ["位置, 回転, スケールについて不変量と共変量を考慮したDLによる特徴点抽出手法を提案。性能としては既存手法と同程度だが、クロスモーダルなデータ間で再現可能性があることが特徴。学習方法は、2つの異なるアングルの画像で同一位置のpatchを切り出して、forward処理。response functionを算出し、2つの値をhinge lossで評価。完全な教師なし学習では、同一画像で異なるaugmentationを掛けたものを入力として評価している。baselineはDoG。"]} +{"source": "The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.", "target": ["DNNのラベル無しデータをターゲットとしたドメインアダプテーションにおいてネットワークをドメイン依存部分とドメイン共通部分を分離するようにソース、ターゲット両データで学習し適用するネットワーク構造の提案。識別はドメイン共通素性ネットワークを使う。教師なしドメインアダプテーションのSoTA。"]} +{"source": "Recent work has shown that neural-embedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.’s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word that maximizes a linear combination of three pairwise word similarities.
Based on this observation, we suggest an improved method of recovering relational similarities, improving the state-of-the-art results on two recent word-analogy datasets. Moreover, we demonstrate that analogy recovery is not restricted to neural word embeddings, and that a similar amount of relational similarities can be recovered from traditional distributional word representations.", "target": ["word embeddingにおけるアナロジータスクを解くための類似度計算手法を検証した論文。検証の結果、Neural embeddingでなく従来の単語共起を用いた手法でも、類似度計算手法によってはstate-of-the-artな結果を出せることがわかった。"]} +{"source": "Efficient simulation of the Navier-Stokes equations for fluid flow is a long standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep-learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large sparse linear system with many free parameters must be solved. We use a Convolutional Network with a highly tailored architecture, trained using a novel unsupervised learning framework to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.", "target": ["オイラー的手法(差分法)とCNNを組み合わせて、非圧縮流体シミュレーションを行う事を提案。CNNを使っているのは通常線形凸最適化手法が適用される圧力算出部分。CNNへの入力は現在の格子点の速度, 位置, 圧力。NS方程式の粘性項を無視。半教師あり学習で使われている手法をloss(推定速度のダイバージェンス)に導入。これにより収束性が相当改善(入れないと収束しない)。計算速度的にはSoTA。(煙のような)スパースな流体で威力を発揮する。"]} +{"source": "Traditional fluid simulations require large computational resources even for an average sized scene with the main bottleneck being a very small time step size, required to guarantee the stability of the solution. Despite a large progress in parallel computing and efficient algorithms for pressure computation in the recent years, realtime fluid simulations have been possible only under very restricted conditions. In this paper we propose a novel machine learning based approach, that formulates physics-based fluid simulation as a regression problem, estimating the acceleration of every particle for each frame. We designed a feature vector, directly modelling individual forces and constraints from the Navier-Stokes equations, giving the method strong generalization properties to reliably predict positions and velocities of particles in a large time step setting on yet unseen test videos. We used a regression forest to approximate the behaviour of particles observed in the large training set of simulations obtained using a traditional solver. Our GPU implementation led to a speed-up of one to three orders of magnitude compared to the state-of-the-art position-based fluid solver and runs in real-time for systems with up to 2 million particles.", "target": ["粒子法による流体シミュレーションをRegression Forestを使った回帰モデルで近似的に行い高速化した研究。入力は各粒子の位置と速度。NS方程式を基に作られたモデルによって、各粒子の位置と速度から特徴量を生成。特徴量は圧力、表面張力、粘性、非圧縮性制約に関連したもの(圧力などそれ自体の値を求めてはいない)。これをRegression Forestに入力し各粒子の加速度(または速度)の推定を行い、時間ステップを使って数値積分をして、次ステップの粒子位置と速度を計算。速度的には従来手法(PBF)の200倍強高速。"]} +{"source": "Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. 
In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.", "target": ["Encoder-Decoderモデル(with Attention)を使って、文を論理式に変換するという研究(例: 面積が一番大きい県の人口は?⇒(人口: i (argmax _ (県:k ) (面積:i )))) など。Lisp的。"]} +{"source": "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "target": ["ラベル付けされたデータを作るのはとても大変だから、データのあるソースからデータのあまりないターゲットの画像が作れたらいいよね、という話。ソース⇒ターゲットの画像生成を行うGANを作成し、ドメイン適応を実現。生成画像だけを使った学習で、ほかの手法に対し最高精度を達成。"]} +{"source": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.", "target": ["何かのイベント(手を上げたらとか)をトリガに発生する時系列データは、挙動があるタイミングが疎のため既存のLSTMでは学習が難しい(混在するならなおさら)。そこで、時間情報から今ONなのかOFFなのかを制御するPhaseゲートを設けて学習する話。TensorFlow実装有"]} +{"source": "Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs.
We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used and a classifier for identifying tweets that express grieving and aggression.", "target": ["ギャングによる暴力を未然に検出、抑制する研究。ギャングにおける中心的な女性メンバと若手のメンバのTwitterのやり取りを分析。どのような対話がエスカレートしやすいのかについて、アノテーションを行いコーパスを作成した。タグは侵略(aggression=相手の攻撃)、喪失(loss=仲間の死など)といったものが付与されている。"]} +{"source": "While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by Mikolov et al. to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependency-based embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings.", "target": ["Skip-gramのコンテキストは周辺語しか考えていないけどそれを一般化する話。具体的には、コンテキストとして依存関係を用いて実験を行った。実験を行ったところ、依存関係ベースのembeddingsは周辺語ベースのものに比べて機能的類似性が高いことがわかった。"]} +{"source": "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.", "target": ["NLPの様々なタスク(NER, POS, チャンキング, 言語モデル, SRL)を同時に学習させることで汎化性能を向上させようという話。具体的には同時に学習するためのニューラルネットワークの枠組みを提案している。結果として、タスク単体で学習させるより複数のタスクを同時に学習させた方が良い結果となった。"]} +{"source": "This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality.
We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.", "target": ["Microsoftが公開した質問応答のデータセット(10万件)。質問/回答が、人間のものである点が特徴(Bing=検索エンジンへの入力なのでどこまで質問っぽいかは要確認)。回答はBingの検索結果から抜粋して作成。"]} +{"source": "Recurrent neural networks (RNNs) have achieved state-of-the-art performances in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model will become very big (e.g., possibly beyond the memory capacity of a GPU device) and its training will become very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need 2 \\sqrt{|V|} vectors to represent a vocabulary of |V| unique words, which are far less than the |V| vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrifice of accuracy (it achieves similar, if not better, perplexity as compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark Dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100, and speeding up the training process by a factor of 2. We name our proposed algorithm \\emph{LightRNN} to reflect its very small model size and very high training speed.", "target": ["RNNにおいてaccuracyを犠牲にせずに時間/空間計算量を削減可能なLightRNNの提案。embedding空間を1単語1vectorで表現するのではなく、tableに分解して表現(Figure1 参照)。これにより単語数がVの場合、2\\sqrt{V}個のembedding vectorで空間を表現出来る。したがって、このtableを構築する部分がLightRNNの勘所。構築したtableを使用し、1単語毎にrow, column別にhidden stateを計算する。"]} +{"source": "Real-world objects occur in specific contexts. Such context has been shown to facilitate detection by constraining the locations to search. But can context directly benefit object detection? To do so, context needs to be learned independently from target features. This is impossible in traditional object detection where classifiers are trained on images containing both target features and surrounding context. In contrast, humans can learn context and target features separately, such as when we see highways without cars. Here we show for the first time that human-derived scene expectations can be used to improve object detection performance in machines. To measure contextual expectations, we asked human subjects to indicate the scale, location and likelihood at which cars or people might occur in scenes without these objects. Humans showed highly systematic expectations that we could accurately predict using scene features. This allowed us to predict human expectations on novel scenes without requiring manual annotation.
On augmenting deep neural networks with predicted human expectations, we obtained substantial gains in accuracy for detecting cars and people (1-3%) as well as on detecting associated objects (3-20%). In contrast, augmenting deep networks with other conventional features yielded far smaller gains. This improvement was due to relatively poor matches at highly likely locations being correctly labelled as target and conversely strong matches at unlikely locations being correctly rejected as false alarms. Taken together, our results show that augmenting deep neural networks with human-derived context features improves their performance, suggesting that humans learn scene context separately unlike deep networks.", "target": ["人間のシーン情報などのコンテキスト情報の活用の仕方を取り入れて、物体認識の性能を向上させるための提案を行った。HOG, Gaussian blurなどの従来手法を積極的に取り入れ、人間(車)が写ってそうなシーンか、写っていそうな場所であるかのaverage likelihoodを算出。算出した数値とCNNの特徴量を合わせて使用。CNN単体では確信度が低い物体やシーンに対して効果的という結果が得られた。"]} +{"source": "Neural language models learn word representations that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models. We show that translation-based embeddings outperform those learned by cutting-edge monolingual models at single-language tasks requiring knowledge of conceptual similarity and/or syntactic role. The findings suggest that, while monolingual models learn information about how concepts are related, neural-translation models better capture their true ontological status.", "target": ["ニューラル機械翻訳モデルによってword embeddingsを獲得した論文。Skip-gramのような単一言語モデルより翻訳モデルの方が概念間の類似度を捉えられるのではないかということを検証している。単一言語モデルと比較した結果、ニューラル翻訳モデルは概念間の類似度をより良く捉えられ、単一言語モデルは関連性をより良く捉えられることが示唆された。"]} +{"source": "State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.", "target": ["強化学習において、価値観数をパラメトリックに推定するDQNなどの方法は学習の初期段階で非効率的であり、少ない事例に対する汎化能力は低い。これに対して、提案手法はある状態の価値関数を、K-nearest-neighborsのような事例ベースのノンパラメトリックな手法で与えている。このとき、neighborsを決める類似度は変分オートエンコーダのような教師なし学習で得た表現をもとに計算する。論文ではさらに、実際の脳では、ノンパラメトリックな学習を行う海馬がエピソード的で急速な学習を行い、パラメトリックな学習を行う大脳皮質が汎用的で長期的な学習を行うと主張している。"]} +{"source": "We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. 
We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.", "target": ["Neural Cache Modelという汎用的なcache modelの提案。既存のneural network language modelに適用可能。pre-trainingされたモデルにattachするだけで良く、fine tune不要。cache対象はhidden stateとwordの対。neural cache language modelでは、現在見ている単語から推定される次の単語の確率と現在までのcacheから推定される確率の加重平均により次の単語の推定が行われる。"]} +{"source": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).", "target": ["small objectをdetectionする際の難しさの解析と改善を行った論文。題材は顔検出。templateのscale、画像解像度、contextの必要性について解析を行っている。小さな画像を引き伸ばすと認識精度は下がり、人間も機械も小さな物体を探すにはcontextをかなり必要としている事を検証。また、one-size-fits-allのようなtemplateのサイズを1つに固定すると認識に限界があるため、入力画像を2倍、0.5倍し、originalとともに独立の3つのネットワークで特徴抽出しdetectionを行い、その後マージするという方法を取っている。SoTA。"]} +{"source": "Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence is thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On popular image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. 
A TensorFlow implementation is available.", "target": ["batch sizeを調整するCABSというルールを提案。mini batch sizeは確率的勾配のバリアンスに影響し、収束速度に大きく関係するため、重要なパラメタである。learning rateが大きい(小さい)場合はbatch sizeを大きく(小さく)する必要があるが((19)式の関係)、本手法ではその関係性に基いて自動でbatch sizeの調整を行う。従来手法と比較してSGDでの収束速度が向上。"]} +{"source": "We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large scale photorealistic rendering of indoor scene trajectories. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Random sampling permits virtually unlimited scene configurations, and here we provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited for pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which previously has been limited by relatively small labelled datasets in NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks by providing perfect camera poses and depth data as proxy for a SLAM system. We host the dataset at this http URL", "target": ["シーン認識のための学習データセットSceneNet RGB-Dの公開。物理シミュレーターでシーン(部屋の中にものが散らばった環境)を作り、そこでカメラの軌跡を設定し映像を作製、その映像のRGB+Depthをデータ化、という感じで生成"]} +{"source": "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "target": ["テキストから画像を生成する話。単純なEncode情報+GANに対し、さらに解像度を上げるためのGANを重ねて(Stackして)画像の鮮明度を上げる。"]} +{"source": "Current vector-space models of lexical semantics create a single “prototype” vector to represent the meaning of a word. However, due to lexical ambiguity, encoding word meaning with a single vector is problematic. This paper presents a method that uses clustering to produce multiple “sense-specific” vectors for each word. This approach provides a context-dependent vector representation of word meaning that naturally accommodates homonymy and polysemy.
Experimental comparisons to human judgements of semantic similarity for both isolated words as well as words in sentential contexts demonstrate the superiority of this approach over both prototype and exemplar based vector-space models.", "target": ["単語のあいまい性を考慮して、語義ごとのベクトル表現を生成することを提案している。具体的には、語義ごとのベクトルを生成するために、クラスタリング(movMF)を使う手法を提案している。評価については人間が判断した意味類似度と比較している。"]} +{"source": "We present two simple modifications to the models in the popular Word2Vec tool, in order to generate embeddings more suited to tasks involving syntax. The main issue with the original models is the fact that they are insensitive to word order. While order independence is useful for inducing semantic representations, this leads to suboptimal results when they are used to solve syntax-based problems. We show improvements in part-of-speech tagging and dependency parsing using our proposed models.", "target": ["品詞タグ付や係り受け解析のようなsyntaxが重要なタスクに適したword embeddingを生成する手法を提案。Mikolovらのモデルでは語順を無視していたのに対して、提案モデルではそのあたりを考慮。結果として品詞タグ付と係り受け解析の性能が向上した。比較相手はSENNA。"]} +{"source": "Predictive models deployed in the world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases seen in the open world. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of optimizing the discovery of unknown unknowns of any predictive model under a fixed budget, which limits the number of times an oracle can be queried for true labels. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on instance similarity, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.", "target": ["Active Learningのようにラベル無しデータ(known unknowns)を選択するだけでなく、そもそもラベル自体を知らないデータ(unknown unknowns)についても、効率よくオラクルに問い合わせるための手法を提案している。例えば、白い猫画像と黒い犬画像ばかりを学習しているとき、テスト対象の白い犬画像を、白の特徴量に基いて\"猫\"として高い信頼度で誤判別してしまうところ、本手法によりそれを判別するためのデータの不完全性を検知することが可能となる。"]} +{"source": "Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities. Recently, several works in the field of Natural Language Processing (NLP) suggested to learn a latent representation of words using neural embedding algorithms. Among them, the Skip-gram with Negative Sampling (SGNS), also known as word2vec, was shown to provide state-of-the-art results on various linguistics tasks. In this paper, we show that item-based CF can be cast in the same framework of neural word embedding. Inspired by SGNS, we describe a method we name item2vec for item-based CF that produces embedding for items in a latent space. The method is capable of inferring item-item relations even when user information is not available.
We present experimental results that demonstrate the effectiveness of the item2vec method and show it is competitive with SVD.", "target": ["Skip-gramをちょっと変えたものを協調フィルタリングに使った話。Skip-gramのようなモデルを使ってItemに対するベクトルを生成し、Item間の類似度から推薦する。SVDを用いた手法に匹敵する結果を得られた。特にunpopularなitemに関してはSVDより良い結果となった。"]} +{"source": "We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice. We will make our encoder publicly available.", "target": ["encoder-decoderモデルを使って文の分散表現を学習しようという話。学習方法はSkip-gramに似ていて、ある文から周りの文を予測することを通じて文の分散表現を学習する。得られたベクトルについては8つのタスクで評価しており、まずまずの結果となっていた。"]} +{"source": "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "target": ["いわゆるDoc2vecの基となった論文。bag-of-wordsでは語順や単語の意味を無視するという弱点があったのでそれを克服するためにParagraphベクトルを提案。評価分析と情報検索で評価したところ、state-of-the-artより良い結果となった。"]} +{"source": "This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore~CPU, and classify half a million sentences among~312K classes in less than a minute.", "target": ["fastTextを使ってテキスト分類をしてみた論文。具体的には、タグ予測と評価分析を行っている。accuracyでは深層学習モデルの分類器と同等で学習は数十倍速いという結果になった。"]} +{"source": "Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Many popular models to learn such representations ignore the morphology of words, by assigning a distinct vector to each word.
This is a limitation, especially for morphologically rich languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skip-gram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly. We evaluate the obtained word representations on five different languages, on word similarity and analogy tasks.", "target": ["FastTextの論文。現在の単語表現を獲得するモデルの多くは単語の形態素を無視している。この論文ではこれら形態素を考慮するために各単語を文字ngramで表現し、それらのベクトル表現を学習している。その評価は単語類似度とアナロジータスクで行った。"]} +{"source": "Most contemporary approaches to instance segmentation use complex pipelines involving conditional random fields, recurrent neural networks, object proposals, or template matching schemes. In our paper, we present a simple yet powerful end-to-end convolutional neural network to tackle this task. Our approach combines intuitions from the classical watershed transform and modern deep learning to produce an energy map of the image where object instances are unambiguously represented as basins in the energy map. We then perform a cut at a single energy level to directly yield connected components corresponding to object instances. Our model more than doubles the performance of the state-of-the-art on the challenging Cityscapes Instance Level Segmentation task.", "target": ["instance segmentationは複雑なpipelineで構成されたものが多かったが、本稿ではシンプルなネットワーク構成で当該タスクの実行を可能とした。watershedアルゴリズム(領域分割アルゴリズム)とDeep Learningモデルのあわせ技で、watershed transformも含めてend-to-endで学習を行う。RNNなどのiterationが必要なアルゴリズムを使用していないため、1画像中のobject数が多くても高速に推定が可能。従来のwatershed transformアルゴリズムとの違いとしては、各instanceのエネルギー分布の高さが大体同じになるように学習を行っている。入力はRGB。SoTA。"]} +{"source": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as \"the\" and \"of\". Other words that may seem visual can often be predicted reliably just from the language model e.g., \"sign\" after \"behind a red stop\" or \"phone\" following \"talking on a cell\". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.", "target": ["image caption generateタスクにおいて、attentionの際にwhereだけでなくwhenにも着目した論文。“the”や“of”はvisualizeさせる意味はないため、次の単語を予測する際にattentionを行う必要があるかどうかを判断するvisual sentinel(sentinel gate)をネットワークに追加。whenはsentinel gateが、whereはspatial attentionが担う。Spatial Attention Modelの提案とvisual sentinelを構成要素として追加したAdaptive Attention Modelの提案を行っている。SoTA。"]} +{"source": "The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way.
Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.", "target": ["ニューラルネットの学習ってなんだかんだ手作業orルールベースだよね、ということで、「学習のさせ方」のノウハウを学習するという研究。ターゲットのネットワークの重みを、RNN(2層LSTM)を使って推定させる。AdamやRMSpropより優秀な結果。TensorFlow実装有り。"]} +{"source": "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the \"LINE,\" which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online.", "target": ["大規模グラフを想定したノードの埋め込み表現を得る手法LINEの提案。ノードi,jの接続確率p(i,j)が埋め込みベクトルを用いてp(i,j)~exp((v_i,v_j))なるモデルを考える。最近接、第2最近接までの接続確率分布を学習し埋め込みベクトルを得る。negative samplingと非同期SGDアップデートで学習の高速化。共起語ネットワーク、flickr,youtube社会ネットワーク, citationネットワークで評価し既存手法(skip-gram,deepwalk含む)より高性能、高速。スケーラビリティーもよいので大規模グラフに適用可能。 WWW '15"]} +{"source": "Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.", "target": ["RNNでは1wordごとに処理するので並列処理できないし、前の隠れ層からの入力を受け続けることで隠れ層はいろんな単語の情報がミックスされた謎の何かになる。そこでCNNにより並列処理+隠れ層を前回独立にキープして出力を計算するブロックを発明。その名はQRNN。Chainer実装有。"]} +{"source": "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs.
DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F_1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "target": ["与えられたグラフ上のノードの埋め込み表現を得る方法を提案。着目するノードを始点とするランダムウォークをコンテキストとして始点ノードを予測するskip-gramによって埋め込み表現を得る。大規模グラフをターゲットとしデータセットとしてブロガー、flickr、youtubeのソーシャルネットワークを使って評価。グラフ中のいくつかのノードを訓練データとして学習し、訓練データ以外のノードラベルを埋め込みベクトルを入力とするlogistic regressionで推定した結果を評価。(当時のまだDNN的手法が一般的でない)既存手法をうわまわる精度。 KDD 2014"]} +{"source": "Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.", "target": ["ドメインが同じ転移学習を凸結合によって行う手法の提案。凸結合の係数を、どのタスクにどのくらい注目すべきかというスコアを出力するattention networkを使って求める。強化学習の方策や価値関数を転移学習させて実験した。転移させない学習と比べて高速に学習できたが、転移学習における他の結合方法との比較はないように見える。"]} +{"source": "A number of recent approaches to policy learning in 2D game domains have been successful going directly from raw input images to actions. However when employed in complex 3D environments, they typically suffer from challenges related to partial observability, combinatorial exploration spaces, path planning, and a scarcity of rewarding scenarios. Inspired from prior work in human cognition that indicates how humans employ a variety of semantic concepts and abstractions (object categories, localisation, etc.) to reason about the world, we build an agent-model that incorporates such abstractions into its policy-learning framework. We augment the raw image input to a Deep Q-Learning Network (DQN), by adding details of objects and structural elements encountered, along with the agent's localisation. 
The different components are automatically extracted and composed into a topological representation using on-the-fly object detection and 3D-scene reconstruction. We evaluate the efficacy of our approach in Doom, a 3D first-person combat game that exhibits a number of challenges discussed, and show that our augmented framework consistently learns better, more effective policies.", "target": ["DQNを3次元に適用すると、次元が増えた分探索が疎になり学習が非常に難しい。そこで、SLAMによる自己位置推定と、Faster-RCNNによる物体検出で俯瞰図(=神の視点)を作って補助情報として入れてやる手法の提案。単純なDQNより学習の初速が速く、総報酬も2倍超となった。"]} +{"source": "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.", "target": ["Novel Object Captioningタスクで、(1) 画像とキャプションが対となっていないデータの活用/学習、(2) 複数ソースからのデータの活用、(3) pre-training済みのembedding空間の活用を行えるようにした。これにより、zero-shotなデータに対してもcaption生成が可能。手法としては、visual recognition network(画像のみのencode)、caption model(caption generation)、language model(sentence generation)を別々のデータソースで学習。そのパラメタを共有するというもの。従来手法と比較してzero-shotデータに対してMS COCOで10%、ImageNetで20%のF1を改善。"]} +{"source": "In this paper we advance the state-of-the-art for crowd counting in high density scenes by further exploring the idea of a fully convolutional crowd counting model introduced by (Zhang et al., 2016). Producing an accurate and robust crowd count estimator using computer vision techniques has attracted significant research interest in recent years. Applications for crowd counting systems exist in many diverse areas including city planning, retail, and of course general public safety. Developing a highly generalised counting model that can be deployed in any surveillance scenario with any camera perspective is the key objective for research in this area. Techniques developed in the past have generally performed poorly in highly congested scenes with several thousands of people in frame (Rodriguez et al., 2011). Our approach, influenced by the work of (Zhang et al., 2016), consists of the following contributions: (1) A training set augmentation scheme that minimises redundancy among training samples to improve model generalisation and overall counting performance; (2) a deep, single column, fully convolutional network (FCN) architecture; (3) a multi-scale averaging step during inference.
The developed technique can analyse images of any resolution or aspect ratio and achieves state-of-the-art counting performance on the Shanghaitech Part B and UCF CC 50 datasets as well as competitive performance on Shanghaitech Part A.", "target": ["FCNベースの人混みの中の人物数カウントモデルの提案。実際に提案しているのはaugmentation手法。入力画像のcropを行う際に4象限っぽく、4つにcropした画像を入力とし、入力データとして重なりをなくした。これによりoverfitのリスクを減らしている。また、test時の入力画像の大きさを50%downsamplingしても精度が大きく変わらず、計算速度は4倍早くなる事を指摘。SoTA。"]} +{"source": "Community based question answering services have arisen as a popular knowledge sharing pattern for netizens. With abundant interactions among users, individuals are capable of obtaining satisfactory information. However, it is not effective for users to attain answers within minutes. Users have to check the progress over time until the satisfying answers are submitted. We address this problem as a user personalized satisfaction prediction task. Existing methods usually exploit manual feature selection. It is not desirable as it requires careful design and is labor intensive. In this paper, we settle this issue by developing a new multiple instance deep learning framework. Specifically, in our settings, each question follows a weakly supervised learning multiple instance learning assumption, where its obtained answers can be regarded as instance sets and we define the question resolved with at least one satisfactory answer. We thus design an efficient framework exploiting multiple instance learning property with deep learning to model the question answer pairs. Extensive experiments on large scale datasets from Stack Exchange demonstrate the feasibility of our proposed framework in predicting askers' personalized satisfaction. Our framework can be extended to numerous applications such as UI satisfaction prediction, multi armed bandit problem, expert finding and so on.", "target": ["コミュニティ内での質問に対して、質問者が満足する適切な回答を素早く返すモデルを作れないか、というのを研究した論文。深層学習とmultiple instance learning(弱教師あり学習)の枠組を手法として用いている。従来手法では特徴量をハンドメイキングしていたが、本手法では特徴量抽出は自動で行われるため、従来手法に比べ、性能だけでなく説明力も向上。"]} +{"source": "Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.", "target": ["画像の補完を、高解像度で行う手法の提案。単に画像全体だけでなく、周辺のテクスチャのパターンとの差異も見ることで従来手法より良い(loss的には微減という感じだが)補完を実現。"]} +{"source": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes.
Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.", "target": ["対話中の学習を可能にするため、Memory Networkと強化学習を組み合わせる手法の提案。正しい回答「だけ」を模倣するよう学習するモデル(RBI)と、返答から報酬を推定するモデル(FP)を検証。双方有効なことを確認。"]} +{"source": "We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log-likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.", "target": ["RNNはLSTMとかの構造をいじるのが主流になっているけれど、入力を工夫した方がいいんじゃない?ということで、入力列を通常の文字列とそこから「粗い(Coarse)」情報を抽出したものとを並列で入力するモデルを提案している(=multi resolution)。"]} +{"source": "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning.
As expected, we also find that the lack of consistency is a common failure mode of our model.", "target": ["sequence2sequenceモデルを対話に使ったシンプルなモデルで対話を実現した。評価についてはある質問に対する答え方を既存の対話システム(CleverBot)と比較することで行った。提案モデルでは200の質問の内97が好ましく、CleverBotは200の内60が好ましいという結果となった。"]} +{"source": "In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation.", "target": ["StyleTransferをテキストに応用し、かっこいいフォントを作るというもの。フォントにかかるエフェクトは細かく、単純にStyleTransferをかけても上手くいかない。そこで、既存のフォントの特性を数学的に定義していって、それを適用していくという手法を取っている。"]} +{"source": "Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.", "target": ["SQuADで最高精度を更新した論文。文字・単語・フレーズ、その上に文書/クエリの関連(Attention)、さらにそれらの関連、出力、という階層型のモデル。Attentionとencodeの役割分担が肝とのこと。"]} +{"source": "To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. 
On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.", "target": ["Pythonのコード補完をRNN言語モデルで行う話。Attentionをclassやfunctionの宣言にはることで、予測精度を上げている。なるべく品質の高いPythonコードを使用するため、GitHubでStar100以上のリポジトリ、かつforkが多いものを学習データとして使用。"]} +{"source": "The use of Convolutional Neural Networks (CNN) in natural image classification systems has produced very impressive results. Combined with the inherent nature of medical images that make them ideal for deep-learning, further application of such systems to medical image classification holds much promise. However, the usefulness and potential impact of such a system can be completely negated if it does not reach a target accuracy. In this paper, we present a study on determining the optimum size of the training data set necessary to achieve high classification accuracy with low variance in medical image classification systems. The CNN was applied to classify axial Computed Tomography (CT) images into six anatomical classes. We trained the CNN using six different sizes of training data set (5, 10, 20, 50, 100, and 200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts General Hospital (MGH) Picture Archiving and Communication System (PACS). Using this data, we employ the learning curve approach to predict classification accuracy at a given training sample size. Our research will present a general methodology for determining the training data set size necessary to achieve a certain target classification accuracy that can be easily applied to other problems within such systems.", "target": ["CT画像の異常状態の分類器において何枚の画像が必要か、を実験した論文。結果としては100枚以上からは精度の向上はあまり見られなかった。使用したNetworkはGoogLeNet。"]} +{"source": "Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word.
We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models.", "target": ["GloVeの基となった論文(多分)。従来の単語表現にはlocal contextしか使っていない、一つの単語につき一つの表現という問題があった。提案モデルでは、local contextとglobal contextを組み合わせて使用し、また一単語につき複数のembeddingsを学習することで性能向上を図った。"]} +{"source": "We present results that show it is possible to build a competitive, greatly simplified, large vocabulary continuous speech recognition system with whole words as acoustic units. We model the output vocabulary of about 100,000 words directly using deep bi-directional LSTM RNNs with CTC loss. The model is trained on 125,000 hours of semi-supervised acoustic training data, which enables us to alleviate the data sparsity problem for word models. We show that the CTC word models work very well as an end-to-end all-neural speech recognition model without the use of traditional context-dependent sub-word phone units that require a pronunciation lexicon, and without any language model removing the need to decode. We demonstrate that the CTC word models perform better than a strong, more complex, state-of-the-art baseline with sub-word units.", "target": ["音声認識システムにおいて語彙数が多いときは単語より小さい音声単位を出力せざるを得ないと考えられていた。しかしこの論文では10万語彙で単語を直接出力するシステムが可能であることを示した。そのためモデルは従来よりも単純化できて、双方向LSTMとCTC lossを使いend-to-end trainingされている。12万5千時間のsemi-supervisedなデータセットが単語単位モデルにおけるデータ不足の問題を解消した。"]} +{"source": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.", "target": ["word embeddingの評価手法に関するサーベイ論文。既存の評価手法を解説しつつ、クラウドソーシングを用いた新しい評価手法を提案し、比較を行っている。結果として、既存の評価手法と新しい評価手法の結果が類似していることがわかった。"]} +{"source": "While recent deep neural networks have achieved a promising performance on object recognition, they rely implicitly on the visual contents of the whole image. In this paper, we train deep neural networks on the foreground (object) and background (context) regions of images respectively. Considering human recognition in the same situations, networks trained on the pure background without objects achieve highly reasonable recognition performance that beats humans by a large margin if only given context. However, humans still outperform networks with pure object available, which indicates networks and human beings have different mechanisms in understanding an image. Furthermore, we straightforwardly combine multiple trained networks to explore different visual cues learned by different networks. Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework.", "target": ["object recognitionでは人間がAlexnetに勝つ。この論文は、objectをマスクして背景だけにしてobjectを推測させると、人間よりAlexnetの方が上ということを確かめた。またobjectだけにすると人間が勝つことも。deep CNNは認識において背景情報をやはりかなり利用していることが分かる。"]} +{"source": "We propose a new self-supervised CNN pre-training technique based on a novel auxiliary task called \"odd-one-out learning\".
In this task, the machine is asked to identify the unrelated or odd element from a set of otherwise related elements. We apply this technique to self-supervised video representation learning where we sample subsequences from videos and ask the network to learn to predict the odd video subsequence. The odd video subsequence is sampled such that it has wrong temporal order of frames while the even ones have the correct temporal order. Therefore, to generate an odd-one-out question no manual annotation is required. Our learning machine is implemented as a multi-stream convolutional neural network, which is learned end-to-end. Using odd-one-out networks, we learn temporal representations for videos that generalize to other related tasks such as action recognition. On action classification, our method obtains 60.3\\% on the UCF101 dataset using only UCF101 data for training, which is approximately 10% better than current state-of-the-art self-supervised learning methods. Similarly, on HMDB51 dataset we outperform self-supervised state-of-the-art methods by 12.7% on action classification task.", "target": ["時間順序をひっくり返したビデオクリップをひっくり返していないものと区別させるタスクで学習させるのでアノーテーション不要。UCF101データセットでの(アノーテーションありの)行動分類タスクにこのself-supervisedなtrainingを併用した場合、およそ10%のperformance向上をするらしい。"]} +{"source": "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "target": ["単語ベクトルを学習するためのモデルであるGloVeの論文。グローバルな行列分解のモデルとlocal context windowのモデルを組み合わせてよい単語ベクトルを学習した。評価はアナロジータスクとNERで行っている。"]} +{"source": "Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male/female relationship is automatically learned, and with the induced vector representations, “King - Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions.
Remarkably, this method outperforms the best previous systems.", "target": ["word2vecの基となった論文の一つ。#59が基になっている。おそらくベクトルの足し引きで意味のある結果を得られた初めての論文。RNNLMで言語モデルを学習させた時の入力層の重みが良い単語表現だったという話。syntacticとsemanticな単語の関係について評価を行っており、従来手法(LSAと2つの先行論文の手法)より良い結果だった。"]} +{"source": "Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.", "target": ["報酬ありMDPと報酬なしMDP、両方使い強化学習するsemi-supervised reinforcement learning(半教師あり強化学習)を考え、その学習アルゴリズムsemi-supervised skill generalization(S3G)を提案。報酬ありなしの情報を両方使い汎化できていることが確かめられた。"]} +{"source": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes is abundant making them an over-represented majority, and data of other classes is scarce, making them an under-represented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this work, we propose a cost sensitive deep neural network which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multi-class problems without any modification. 
Moreover, as opposed to data level approaches, we do not alter the original data distribution which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification datasets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and cost sensitive classifiers demonstrate the superior performance of our proposed method.", "target": ["不均衡データに対して頑健に学習(特徴量抽出)を行うために、Cost-Sensitive Learning手法をnetworkの中に組み込み、自動でclass依存性のcost matrixを学習しようという提案。cost matrix用のlossは混合分布をベースとして算出、epoch毎に1度更新。"]} +{"source": "In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.", "target": ["ResNetの学習の振る舞いを調査した研究。勾配消失が何故防げているのかを解き明かしている。 ultra-deepなシングルネットワークだが、実は複数のネットワークが並行し、上層で結合しているように(アンサンブルのように)振る舞っている。また、gradientとして貢献しているのはshort path(勾配の伝搬を上手く行っているのがshort path)。deep pathはtraining時は重要ではない。という解釈がされている。"]} +{"source": "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "target": ["ResNetの拡張版のResNeXtを提案。block内にinceptionモデルのような構造を持たせた、“cardinality”という構造に変更。単純に層を深く、広くするよりも効果的にaccuracyの向上が行える。パラメタ数や時間計算量も通常のResNetと同程度。"]} +{"source": "We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are 'important' for predictions -- or visual explanations.
Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV, and a full version of this extended abstract.", "target": ["CNNモデルの出力結果に対する可視化手法であるGrad-CAMの提案。入力画像に対する注目領域の可視化を行う。image captioningやvisual question answering(VQA)モデルにも適用可能。 行われている操作は以下のとおり。 (1) guided backpropagationで得られたサリエンシーマップを作成 (2) target categoryを1、それ以外を0としてBPしたgradとfeature mapの値の積からblue heatmapを作成 (3) (1)と(2)の積をして、最終的なサリエンシーマップを作成"]} +{"source": "Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this notion and introduce a rich family of regularizers which we call Generalized Dropout. One set of methods in this family, called Dropout++, is a version of Dropout with trainable parameters. Classical Dropout emerges as a special case of this method. Another member of this family selects the width of neural network layers. Experiments show that these methods help in improving generalization performance over Dropout.", "target": ["ニューラルネットの重みをベイズ推定するBNNは、すべての重みが互いに独立なのは仮定として無理がある。そこで、重みの相関を表現するゲートを設けるという話。ノードの接続を調整するという面でDropOutの拡張、Dropout++と命名。"]} +{"source": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge.
We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "target": ["Word2vecの基となった論文の基となった論文。ニューラルネットワークを使った言語モデル(NNLM)を作る話。NNLMでは各単語の分散表現と単語列の確率を同時に学習することができる。単語の分散表現を使うことで、以前に現れたことのない単語列でもそれまでに現れた単語列と似た単語列で構成されていれば高い確率を付与することができるようになった。その結果として当時(2003年)のstate-of-the-artよりパープレキシティで10%〜20%の改善を行えた。ただ課題として計算速度の向上が挙げられてる。"]} +{"source": "We propose a method for finding alternate features missing in the Lasso optimal solution. In ordinary Lasso problem, one global optimum is obtained and the resulting features are interpreted as task-relevant features. However, this can overlook possibly relevant features not selected by the Lasso. With the proposed method, we can provide not only the Lasso optimal solution but also possible alternate features to the Lasso solution. We show that such alternate features can be computed efficiently by avoiding redundant computations. We also demonstrate how the proposed method works in the 20 newsgroup data, which shows that reasonable features are found as alternate features.", "target": ["Lasso最適解では無視されてしまう、モデルの結果の性能に影響を与える重要な変数を抽出する手法の提案。"]} +{"source": "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "target": ["最近流行のconditional GAN(条件指定の画像生成)。ある画像(イラスト)から画像(写真)を作り出す、画像翻訳ともいえるもの。特徴量比較でなく、GANの仕組みで「ペアかどうか」を真偽判定させる形で学習。少ないデータ(数百件・数時間 on GPU)でも学習可能なことを確認"]} +{"source": "Speech is one of the most effective ways of communication among humans. Even though audio is the most common way of transmitting speech, very important information can be found in other modalities, such as vision. Vision is particularly useful when the acoustic signal is corrupted. Multi-modal speech recognition however has not yet found wide-spread use, mostly because the temporal alignment and fusion of the different information sources is challenging. This paper presents an end-to-end audiovisual speech recognizer (AVSR), based on recurrent neural networks (RNN) with a connectionist temporal classification (CTC) loss function. CTC creates sparse \"peaky\" output activations, and we analyze the differences in the alignments of output targets (phonemes or visemes) between audio-only, video-only, and audio-visual feature representations. 
We present the first such experiments on the large vocabulary IBM ViaVoice database, which outperform previously published approaches on phone accuracy in clean and noisy conditions.", "target": ["LipNetに先を越された感はあるが、音声認識に画像特徴量を組み合わせる試み。ノイズありの環境では、あらかじめノイズありの音声で学習+口のあたりの画像特徴量を併用するのが良い結果になるとの結果。ノイズなしで学習させた場合、画像を組み合わせても精度が出なくなるのは重要な示唆。"]} +{"source": "We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST-digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is to make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information.", "target": ["データセットの中の各データを「点」と考えると、データセットは各点を関連付ける「構造」(点をつなぐ構造=グラフ構造)を持っていると考えることができる。この構造として何パターンかのシンプルな定義を行い、データセットへ適用してみることで「構造からの外れ値」の検出などを行っている。"]} +{"source": "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "target": ["目的はskip-gramの高速化とより良いword vectorを得ること。またフレーズに対する学習を行うことも目的としている。高速化のために階層的ソフトマックス、ネガティブサンプリング、サブサンプリングを用いている。結果として、高速化しつつ良いword vectorを得られている。skip-gramに初めてネガティブサンプリングを使った論文のようだが説明が簡素なので、ネガティブサンプリングについて理解するには#41のほうが良い。"]} +{"source": "We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.", "target": ["Word2vecの本家の論文。モデルの詳細についてはほとんど記述されていないし目的ともしていない。それより、既存モデル(NNLM、RNNLM)と提案モデル(Skip-Gram、CBOW)について計算量や性能の比較を行っている。結果として、既存手法より性能が高く計算量は小さかった。また、ベクトルの足し引きで興味深い結果(ex: vector(\"biggest\")-vector(\"big\")+vector(\"small\")=vector(\"smallest\"))を得られることがわかった。"]} +{"source": "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals.
In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\\% expert human performance, and a challenging suite of first-person, three-dimensional \\emph{Labyrinth} tasks leading to a mean speedup in learning of 10× and averaging 87\\% expert human performance on Labyrinth.", "target": ["教師なしの補助タスクを同時に行う強化学習の手法UNsupervised REinforcement and Auxiliary Learning (UNREAL)を提案。画像入力3D迷路で従来手法に対し10倍の学習速度、人間の87%のスコア、Atariで人間の9倍のスコア。"]} +{"source": "The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available.", "target": ["DNNによる読唇術の続編。画像に加え、音声を用い(MFCCをLSTMでencode)、これらにAttentionを張った文字ベースのLSTM(Attend and Spell)で挑戦。※音声は使わない版も検証。BBCニュースの映像で検証したところ、プロを上回ることに成功。"]} +{"source": "Part of the appeal of Visual Question Answering (VQA) is its promise to answer new questions about previously unseen images. Most current methods demand training questions that illustrate every possible concept, and will therefore never achieve this capability, since the volume of required training data would be prohibitive. Answering general questions about images requires methods capable of Zero-Shot VQA, that is, methods able to answer questions beyond the scope of the training questions. We propose a new evaluation protocol for VQA methods which measures their ability to perform Zero-Shot VQA, and in doing so highlights significant practical deficiencies of current approaches, some of which are masked by the biases in current datasets. We propose and evaluate several strategies for achieving Zero-Shot VQA, including methods based on pretrained word embeddings, object classifiers with semantic embeddings, and test-time retrieval of example images. 
Our extensive experiments are intended to serve as baselines for Zero-Shot VQA, and they also achieve state-of-the-art performance in the standard VQA evaluation setting.", "target": ["画像を見て質問に答えるタスクでは、学習した画像についてだけ答えられる、良くある答え(「2つ」とか)を多めに繰り出して精度が上がっているなど明らかな過適合が見られた。そこで真実見たことない画像(Zero-Shot)に回答可能かをテストするためのデータとベースラインモデルの提案"]} +{"source": "We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.", "target": ["RNNを発展させたRecurrent Neural Network Grammars(RNNG)を用いて構文解析、文生成を行った。"]} +{"source": "The ability to track a moving vehicle is of crucial importance in numerous applications. The task has often been approached by the importance sampling technique of particle filters due to its ability to model non-linear and non-Gaussian dynamics, of which a vehicle travelling on a road network is a good example. Particle filters perform poorly when observations are highly informative. In this paper, we address this problem by proposing particle filters that sample around the most recent observation. The proposal leads to an order of magnitude improvement in accuracy and efficiency over conventional particle filters, especially when observations are infrequent but low-noise.", "target": ["粒子フィルタは移動物体の位置推定問題で多用されている。 本手法は従来の粒子フィルタの改良版。位置推定プロセスにおける直近の観測を元に提案分布を生成するのが従来手法との大きな違い。これにより、粒子の使用がより効率的になる。特に、観測ノイズに関連した遷移ノイズが高い状況で効果的。従来手法の性能を大きく改善。"]} +{"source": "Previous research has shown that computation of convolution in the frequency domain provides a significant speedup versus traditional convolution network implementations. However, this performance increase comes at the expense of repeatedly computing the transform and its inverse in order to apply other network operations such as activation, pooling, and dropout. We show, mathematically, how convolution and activation can both be implemented in the frequency domain using either the Fourier or Laplace transformation. The main contributions are a description of spectral activation under the Fourier transform and a further description of an efficient algorithm for computing both convolution and activation under the Laplace transform. By computing both the convolution and activation functions in the frequency domain, we can reduce the number of transforms required, as well as reducing overall complexity. Our description of a spectral activation function, together with existing spectral analogs of other network functions may then be used to compose a fully spectral implementation of a convolution network.", "target": ["ラプラス変換やフーリエ変換を応用した計算量削減手法を、さらに効率的に改良した手法を提案。 従来はF->C->F^{-1}->A->F->C->F^{-1}->A、というように活性化関数の前に逆変換が必要であったが、提案手法ではF->C->A->F->C->A->F^{-1}という操作が可能となり、空間/時間計算量ともに削減。 従来はスペクトル表現では有限空間がサポートされていなかったために起きていたが、spectral activation functionによって有限空間でのactivationが可能となった。"]} +{"source": "In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks.
In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.", "target": ["強化学習は、精度は高いが大量のデータを必要とする。人間ならもうちょい効率よくやるのに・・・ということで、様々なタスクのモデル(RNN)の学習を強化学習でやることで、各タスク間の共通構造などを習得させられないか(習得できれば、他の新しいタスクの時に上手くやれる)という試み。"]} +{"source": "Training time on large datasets for deep neural networks is the principal workflow bottleneck in a number of important applications of deep learning, such as object classification and detection in automatic driver assistance systems (ADAS). To minimize training time, the training of a deep neural network must be scaled beyond a single machine to as many machines as possible by distributing the optimization method used for training. While a number of approaches have been proposed for distributed stochastic gradient descent (SGD), at the current time synchronous approaches to distributed SGD appear to be showing the greatest performance at large scale. Synchronous scaling of SGD suffers from the need to synchronize all processors on each gradient step and is not resilient in the face of failing or lagging processors. In asynchronous approaches using parameter servers, training is slowed by contention to the parameter server. In this paper we compare the convergence of synchronous and asynchronous SGD for training a modern ResNet network architecture on the ImageNet classification problem. We also propose an asynchronous method, gossiping SGD, that aims to retain the positive features of both systems by replacing the all-reduce collective operation of synchronous training with a gossip aggregation algorithm. We find, perhaps counterintuitively, that asynchronous SGD, including both elastic averaging and gossiping, converges faster at fewer nodes (up to about 32 nodes), whereas synchronous SGD scales better to more nodes (up to about 100 nodes).", "target": ["DeepLearningを行う上でボトルネックとなる学習時間を短縮するために、分散処理アルゴリズムとしてgossiping SGDという手法を提案。従来の同期アプローチでは障害(1ノードの計算失敗など)に弱く、非同期アプローチではparameterサーバを使用しているためパラメタが競合状態となり、学習が遅くなる。 提案手法では、同期アプローチのall-reduce collective operationをgossip aggregationアルゴリズムに置き換える事により、従来の同期/非同期アプローチの良いところを継承した非同期アプローチを構築。 直感に反するが、非同期アプローチでは32ノード, 同期アプローチでは100ノードまでスケール可能。"]} +{"source": "Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. 
We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks from tagging, parsing, relatedness, and entailment tasks.", "target": ["言語を扱う複数のタスクは、相互に有用な知識を持つはずだから、組み合わせたほうがいい精度が出るのでは、という話。品詞づけ・文節判定・係り受け・文意関係(補強・反対・普通)・文関係の度合い、といった複数のタスクをこなす一つのネットワークを構築し、最高精度を達成。"]} +{"source": "Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.", "target": ["一度だけ文書を読んで質問に答えるより、質問がわかってから見返せたほうがいいよね?ということで文書のencodeだけでなく、質問も掛け合わせたものを利用する(Coattention)、また一度で回答するのでなく何回か見直すことで局所最適を避けるという手法の提案。これでSOTAを更新"]} +{"source": "We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated providing a hierarchical image analysis.We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced amount of region proposals generated by our reinforcement learning agent allows considering to extract features for each location without sharing convolutional computation among regions.", "target": ["物体認識を行う際、写真のどこに注目するかを決めて(右上・左上、など)切り取り、拡大する。そこからさらにどこに注目するか決め・・・と再帰的に繰り返すことで認識精度を上げるという研究。この挙動を強化学習で学習させる。重複には強くなったが、切り取りによる領域縮小で精度が下がったとのこと"]} +{"source": "Sequence models can be trained using supervised learning and a next-step prediction objective. This approach, however, suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. Motivated by the fact that reinforcement learning (RL) can be used to impose arbitrary properties on generated data by choosing appropriate reward functions, in this paper we propose a novel approach for sequence training which combines Maximum Likelihood (ML) and RL training. 
We refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from stochastic optimal control (SOC). We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using RL, where the reward function is a combination of rewards based on rules of music theory, as well as the output of another trained Note-RNN. We show that by combining ML and RL, this RL Tuner method can not only produce more pleasing melodies, but that it can significantly reduce unwanted behaviors and failure modes of the RNN.", "target": ["音楽を生成するRNNを、強化学習で学習させるという方法。actionは音符を選ぶことで、Rewardは実際の曲的に出現しうるか+音楽理論に沿っているか(いくつかの特徴量で設定)で与える。これにより、これまでより格段に音楽的に好ましくない性質は低減し、好ましい性質は高くすることができた。"]} +{"source": "We build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%.", "target": ["\"椅子が51mphで28度の角度で打ち上げられた。惑星ワトソンでは重力加速度を98m/s^2とせよ。最大の高さに達するのにかかる時間を求めよ。\"のような力学の問題が与えられたとき回答を返すシステムを構築。具体的には2次元空間中で重力しか力のかからない自由粒子に関する自然言語で与えられる問題を想定。入力文にラベル付与するLSTMとラベル付与済の文からタスク抽出するLSTMとODEソルバーからなるシステム。テストケースで99.8%の正解率。"]} +{"source": "The word2vec software of Tomas Mikolov and colleagues (this https URL ) has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations. This note is an attempt to explain equation (4) (negative sampling) in \"Distributed Representations of Words and Phrases and their Compositionality\" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.", "target": ["ネガティブサンプリングの式の導出を詳しく解説している論文"]} +{"source": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al.
(2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 73.6% and 76.6% on these two datasets, exceeding current state-of-the-art results by 7-10% and approaching what we believe is the ceiling for performance on this task.", "target": ["DeepMindの教師なしによるQA回答(下)の追試。これは、文章中の固有表現をentityタグに置換して学習し、QAをentityの穴埋め問題として解くという手法。"]} +{"source": "The word2vec model and application by Mikolov et al. have attracted a great amount of attention in recent two years. The vector representations of words learned by word2vec models have been shown to carry semantic meanings and are useful in various NLP tasks. As an increasing number of researchers would like to experiment with word2vec or similar techniques, I notice that there lacks a material that comprehensively explains the parameter learning process of word embedding models in details, thus preventing researchers that are non-experts in neural networks from understanding the working mechanism of such models. This note provides detailed derivations and explanations of the parameter update equations of the word2vec models, including the original continuous bag-of-word (CBOW) and skip-gram (SG) models, as well as advanced optimization techniques, including hierarchical softmax and negative sampling. Intuitive interpretations of the gradient equations are also provided alongside mathematical derivations. In the appendix, a review on the basics of neuron networks and backpropagation is provided. I also created an interactive demo, wevi, to facilitate the intuitive understanding of the model.", "target": ["word2vecのパラメータ更新式の詳細な導出と説明を行っている論文。対象のモデルはCBOWとSkip-Gram。パラメータ学習の過程を詳細に解説した資料がなかったから書いたそうだ。"]} +{"source": "Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain: - Data insufficiency: Often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results. - Interpretation: The representations learned by deep learning models should align with medical knowledge. To address these challenges, we propose a GRaph-based Attention Model, GRAM that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies. Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism. We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task. Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. 
Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts.", "target": ["診断記録のsequenceから病気をあてるタスク。各診断記録にのっている専門用語の集合を1つの入力とするRNNを構築する。各専門用語のベクタをそのまま入力に使わず、オントロジーを使い上位概念のベクタも考慮しattentionで上位概念を考慮した専門用語のベクタ作り入力とする手法GRAMを提案。単純にRNNを使った場合と比較しaccuracyで10%,AUCで3%の性能向上。"]} +{"source": "We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.", "target": ["NNモデル(の重み)の不確実性を、重み上の確率分布を用いて扱う。確率分布の学習方法として、誤差逆伝搬法を模したBayes by backpropを提案。分類や回帰、バンディットなど色々なタスクで成功。特に、Thompson Samplingなどと組み合わせて探索ができるところが良さそう。"]} +{"source": "In this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of- the-art techniques.", "target": ["与えられたグラフの埋め込み表現を抽出する手法を提案。生成したグラフ上のランダムウォークのシーケンスに対しseq2vecと同様にベクタ表現を得る。分子構造データベースを使い識別タスクで他のグラフ埋め込み手法と比較。SoTa"]} +{"source": "Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. 
DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.", "target": ["Experience replayにおいて効率的にサンプルを学習するため優先度によってサンプリングすることを提案。優先度としてTD-errorを使用、importance samplingになるので重みを調節し、学習終了付近でのバイアスの影響をなくすため一様サンプリングに近づくようなannealingをしている。DQN,Double-DQNで評価しともにbaselineより良好。人工的なimbalanced dataを使い通常の識別学習への適用も評価している。"]} +{"source": "Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.", "target": ["Progressive netというfine tuningのためのネットワークの提案,pre trained networkの各層の出力をfine tuneしたいnetworkの各層に重みづけして足し合わせる.fine tuneの際pre trained networkの重みについては変更せずfine tuneに入力するときの重みを学習することで,pre trainした結果を忘れることもなく、また複数のpre trained networkを重ねあわせることもできる。この手法自体の適用先は強化学習に限らないが強化学習タスクで評価。Fisher情報行列を評価してpre trained networkのどの素性が寄与しているかを解析もしている。#1 はこの結果のロボティクスへの応用."]} +{"source": "When encountering novel object, humans are able to infer a wide range of physical properties such as mass, friction and deformability by interacting with them in a goal driven way. This process of active interaction is in the same spirit of a scientist performing an experiment to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems, but it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate hidden properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that state of art deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and the cost incurred by the agent for performing experiments, we found that agents learn different strategies that balance the cost of gathering information against the cost of making mistakes in different situations.", "target": ["人が物体を目の前にしたときに、触ったり持ち上げたりすることで物体の特性を知るように、強化学習でその過程を再現しようという試み。具体的には、ものの重さを当てるタスク、ブロックが積まれている中で何個あるか当てるというタスクにチャレンジしている。"]} +{"source": "We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. Recently, some studies handle multiple modalities on deep generative models, such as variational autoencoders (VAEs). However, these models typically assume that modalities are forced to have a conditioned relation, i.e., we can only generate modalities in one direction. To achieve our objective, we should extract a joint representation that captures high-level concepts among all modalities and through which we can exchange them bi-directionally.
As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on joint representation. In other words, it models a joint distribution of modalities. Furthermore, to be able to generate missing modalities from the remaining modalities properly, we develop an additional method, JMVAE-kl, that is trained by reducing the divergence between JMVAE's encoder and prepared networks of respective modalities. Our experiments show that our proposed method can obtain appropriate joint representation from multiple modalities and that it can generate and reconstruct them more properly than conventional VAEs. We further demonstrate that JMVAE can generate multiple modalities bi-directionally.", "target": ["画像の特徴と、モダリティ(男性・女性など)の画像的特徴をそれぞれ学習させ、その結果を結合させる=元の画像に近く、かつ指定したモダリティの画像的特徴にも近い画像を生成する、という研究。画像のVAEにモダリティのVAEをJointさせると言うことで、JMVAEと命名。"]} +{"source": "Adaptive gradient methods for stochastic optimization adjust the learning rate for each parameter locally. However, there is also a global learning rate which must be tuned in order to get the best performance. In this paper, we present a new algorithm that adapts the learning rate locally for each parameter separately, and also globally for all parameters together. Specifically, we modify Adam, a popular method for training deep learning models, with a coefficient that captures properties of the objective function. Empirically, we show that our method, which we call Eve, outperforms Adam and other popular methods in training deep neural networks, like convolutional neural networks for image classification, and recurrent neural networks for language tasks.", "target": ["Adamの変形で、目的関数からのフィードバック、具体的には目的関数の出力増減率の移動平均を加味してやることで性能が向上したというもの。論理コードがあるため、簡単に実装できそう。"]} +{"source": "We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.", "target": ["ポップ・ミュージックを生成する研究。音楽理論に基づいた設計を行っていて、音符(キー)、その長さ(キー+長さでメロディー)、さらに和音、ドラムを重ねている。これで、GoogleのMagentaより圧倒的に評価の良い音楽を生成。また、曲に合わせたダンスと歌詞の生成も行っている。"]} +{"source": "We present the Neural Physics Engine (NPE), a framework for learning simulators of intuitive physics that naturally generalize across variable object count and different scene configurations. We propose a factorization of a physical scene into composable object-based representations and a neural network architecture whose compositional structure factorizes object dynamics into pairwise interactions. Like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions; realized as a neural network, it can be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. 
By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.", "target": ["NNで物理法則を表現しようという話。物理法則による変化を状態遷移モデルとして捉えて、状態を「着目物体」と「その周辺の物体(=コンテキスト)」で表現し、それらの状態を入力にしたLSTMで遷移を表現している。Matter.jsを使用した実験結果があり予測結果をgifで確認できる。"]} +{"source": "We demonstrate improved text-to-image synthesis with controllable object locations using an extension of Pixel Convolutional Neural Networks (PixelCNN). In addition to conditioning on text, we show how the model can generate images conditioned on part keypoints and segmentation masks. The character-level text encoder and image generation network are jointly trained end-to-end via maximum likelihood. We establish quantitative baselines in terms of text and structure-conditional pixel log-likelihood for three data sets: Caltech-UCSD Birds (CUB), MPII Human Pose (MHP), and Common Objects in Context (MS-COCO).", "target": ["テキスト以外にオブジェクトの範囲やキーポイントなどを渡して画像を生成する研究。よくあるGANではなくPixelCNNを拡張した自己回帰モデルを利用しており(周辺ピクセルから妥当なピクセルを予測する)、より高速に学習できる利点がある。"]} +{"source": "Sequence models can be trained using supervised learning and a next-step prediction objective. This approach, however, suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. Motivated by the fact that reinforcement learning (RL) can be used to impose arbitrary properties on generated data by choosing appropriate reward functions, in this paper we propose a novel approach for sequence training which combines Maximum Likelihood (ML) and RL training. We refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from stochastic optimal control (SOC). We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using RL, where the reward function is a combination of rewards based on rules of music theory, as well as the output of another trained Note-RNN. We show that by combining ML and RL, this RL Tuner method can not only produce more pleasing melodies, but that it can significantly reduce unwanted behaviors and failure modes of the RNN.", "target": ["RNNによる楽譜生成をよりよくするため、学習済のRNNから与えられる音符の生成確率と音楽理論により要請されるよい音符の条件を組合せたものを報酬とし、音符の生成をactionとする強化学習の問題を設定した。単にRNNより与えられる音符の連鎖を保持しつつ音楽理論によって与えられる条件をみたすよりよい楽譜を生成することができた"]} +{"source": "Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel.
Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).", "target": ["唇の動きを読んでテキストにするという、DNNによる読唇術ともいえるもの。既存の語ベースではなく、文単位での認識を可能にした。時空間の畳み込み(STCNN)から、最終的には特徴ベクトルから直接テキストを生成するCTCを用いて文を生成。既存79.6%の精度を93.4と圧倒。"]} +{"source": "In this work, we take a fresh look at some old and new algorithms for off-policy, return-based reinforcement learning. Expressing these in a common form, we derive a novel algorithm, Retrace(\\lambda), with three desired properties: (1) it has low variance; (2) it safely uses samples collected from any behaviour policy, whatever its degree of \"off-policyness\"; and (3) it is efficient as it makes the best use of samples collected from near on-policy behaviour policies. We analyze the contractive nature of the related operator under both off-policy policy evaluation and control settings and derive online sample-based algorithms. We believe this is the first return-based off-policy control algorithm converging a.s. to Q^* without the GLIE assumption (Greedy in the Limit with Infinite Exploration). As a corollary, we prove the convergence of Watkins' Q(\\lambda), which was an open problem since 1989. We illustrate the benefits of Retrace(\\lambda) on a standard suite of Atari 2600 games.", "target": ["returnベースの方策オフ強化学習における安全で効率的なアルゴリズムの提案。安全とは、方策の\"オフ具合\"に対して性能がロバストであること。効率的とは、学習効率が良いこと。収束性の保証と実験を与えた。NIPS 2016に通っていて、真面目に解析を読むのはつらそう。"]} +{"source": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "target": ["CNNの各レイヤのパラメーター(幅や高さ)をRNNで決め、そのRNNがCNNの精度が最大になるパラメーターを出力するように、強化学習するという手法。なお普通にやっていると日が暮れるので、並列で実行する工夫も行われている。結果として、最高精度同等、またRNNではより良い結果が出た"]} +{"source": "Coreference resolution systems are typically trained with heuristic loss functions that require careful tuning. 
In this paper we instead apply reinforcement learning to directly optimize a neural mention-ranking model for coreference evaluation metrics. We experiment with two approaches: the REINFORCE policy gradient algorithm and a reward-rescaled max-margin objective. We find the latter to be more effective, resulting in significant improvements over the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task.", "target": ["文中で同一のエンティティを言及している言葉(鈴木さん=彼など)を探索するタスク(Coreference Resolution)についての論文。手法としてAはBのことを指している(mention)、とするのを「行動」とみなし、強化学習の手法を用いて最適化している。GitHub実装有"]} +{"source": "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "target": ["与えられたグラフのノードを半教師ありクラスタリングする。既存の研究では隣接ノードは同じクラスであるという前提を正則化項として加えていたためモデルのキャパシティが制限されていたが(エッジのもつ情報はsimilarityとは限らない)、グラフ構造をニューラルネットとして表現することでその制約を取り払った。各Layerにおいて隣接行列が登場するので例えば3層ネットワークの場合は3段飛びの関係が考慮されることになり、それが正則化項の代わりをする(と思われる)。"]} +{"source": "We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.", "target": ["イメージ的には、Encoder/DecoderをCNNで行うといった形。RNNはシーケンスを順に入れていかないので並列計算が難しく、前後の単語の関係を記憶するための機構を入れるとさらに計算が重たくなる。そこで、計算を並列でできるようにしつつ、単語間の関係も考慮できるようにということで考案。これをBytenetと名付けている。 Encoder側は常に固定長を送るためdilation、Decoder側はEOSが前後に出たら停止するdynamic unfoldという処理を行っている。これでstate-of-the-artの性能を出せたほか、文字ベースのモデルでは他を圧倒した。"]} +{"source": "We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. 
Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.", "target": ["自然音のデータを利用して特徴認識(クラス分類)を行おうという話。学習に際しては、ラベルを学習済みの物体/シーン認識のモデルから取得し、それを音声の教師ラベルにするという手法。これで教師有りより高い性能を出せた。"]} +{"source": "Compressed Learning (CL) is a joint signal processing and machine learning framework for inference from a signal, using a small number of measurements obtained by linear projections of the signal. In this paper we present an end-to-end deep learning approach for CL, in which a network composed of fully-connected layers followed by convolutional layers perform the linear sensing and non-linear inference stages. During the training phase, the sensing matrix and the non-linear inference operator are jointly optimized, and the proposed approach outperforms state-of-the-art for the task of image classification. For example, at a sensing rate of 1% (only 8 measurements of 28 X 28 pixels images), the classification error for the MNIST handwritten digits dataset is 6.46% compared to 41.06% with state-of-the-art.", "target": ["信号処理で使われているcompressed sensing手法を機械学習に転用したCompressed LearningをDLに適用。1層目でsensing matrixを学習(入力と同じunit数)。2層目のFC層で入力の1%までunit数を落とした場合(sensing rate=1%)でもerror rateは6.46%。実験データはMNIST。SoTA。"]} +{"source": "We study active learning where the labeler can not only return incorrect labels but also abstain from labeling. We consider different noise and abstention conditions of the labeler. We propose an algorithm which utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler. This algorithm is adaptive in a sense that it can automatically request less queries with a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity.", "target": ["ラベル付が間違っているもの、ラベルが付けられていないもの含むデータに対しての学習を頑健にするための能動学習アルゴリズムを提案。(下限)性能保証付き。"]} +{"source": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (OPVI), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. 
We can characterize different properties of variational objectives, such as objectives that admit data subsampling---allowing inference to scale to massive data---as well as objectives that admit variational programs---a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of OPVI on a mixture model and a generative model of images.", "target": ["ベイズ最適化などでは事前分布として正規分布が仮定されており、KLダイバージェンスが収束判定に使用されているが、実データが仮定と異なる場合は推定に悪影響を及ぼす。 それを解決するためにoperator(メタ関数っぽいもの?)を導入したOPVIという手法を提案。"]} +{"source": "Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as \"teachers\" for a \"student\" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.", "target": ["医療情報などの(個人情報が含まれるなどの理由で)扱いに制約があるデータで、他のデータと結合することが出来ない時に効果的にモデルの精度を上げる手法の研究。半教師あり学習を活用。 それぞれのデータの塊毎にモデル(teacher)を作成しアンサンブル。studentへの教師として使用。queryにstudentが回答するモデル。teacherやstudentに使用するモデル、アンサンブルの方法などに制限はない。"]} +{"source": "Deep convolutional neural networks (CNN) have achieved great success. On the other hand, modeling structural information has been proved critical in many vision problems. It is of great interest to integrate them effectively. In a classical neural network, there is no message passing between neurons in the same layer. In this paper, we propose a CRF-CNN framework which can simultaneously model structural information in both output and hidden feature layers in a probabilistic way, and it is applied to human pose estimation. A message passing scheme is proposed, so that in various layers each body joint receives messages from all the others in an efficient way. Such message passing can be implemented with convolution between features maps in the same layer, and it is also integrated with feedforward propagation in neural networks. Finally, a neural network implementation of end-to-end learning CRF-CNN is provided.
Its effectiveness is demonstrated through experiments on two benchmark datasets.", "target": ["ポーズ推定(関節のポジション推定)にCNNで構築したCRFアルゴリズムを付加し、関節間の関係性を考慮した推定を行えるようにした。近似にはmessage passingを使用。それぞれの関節を独立に扱う事によって効率的に近似が行える。"]} +{"source": "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.", "target": ["画像の分類機を「だます」ことについての研究で、異なる種類の分類機でも同様の手法で「間違わせる」ことが可能という結果。だます側には正常な分類結果から「押し出す」最小の変更量を学習させる。しかも変更を加えた画像は元のものとほとんど見分けがつかない(Figure3と11参照)。"]} +{"source": "A common problem in knowledge representation and related fields is reasoning over a large joint knowledge graph, represented as triples of a relation between two entities. The goal of this paper is to develop a more powerful neural network model suitable for inference over these relationships. Previous models suffer from weak interaction between entities or simple linear projection of the vector space. We address these problems by introducing a neural tensor network (NTN) model which allow the entities and relations to interact multiplicatively. Additionally, we observe that such knowledge base models can be further improved by representing each entity as the average of vectors for the words in the entity name, giving an additional dimension of similarity by which entities can share statistical strength. We assess the model by considering the problem of predicting additional true relations between entities given a partial knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively.", "target": ["エンティティ間の関係性(猫/持つ/しっぽ、などといったトリプル)を表現するモデル(Neural Tensor Network)の提案と、エンティティの表現に単一の分散表現でなく所属する分散表現の平均を用いると良かったという話(学習済みならなお良し)。かなりの高精度。"]} +{"source": "Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. 
Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.", "target": ["DLを使って医療データの匿名化すべき箇所を特定するDernoncourt et al., 2016の手法の特徴量作成部分を改良。SoTA。 BoW->DNN, token embedding, bidirectional-LSTM(character embedding)の3つのネットワークにtext(BoW, token, char)を食わせ、出力を結合し特徴量とする。"]} +{"source": "While deep learning has become a key ingredient in the top performing methods for many computer vision tasks, it has failed so far to bring similar improvements to instance-level image retrieval. In this article, we argue that reasons for the underwhelming results of deep methods on image retrieval are threefold: i) noisy training data, ii) inappropriate deep architecture, and iii) suboptimal training procedure. We address all three issues. First, we leverage a large-scale but noisy landmark dataset and develop an automatic cleaning method that produces a suitable training set for deep retrieval. Second, we build on the recent R-MAC descriptor, show that it can be interpreted as a deep and differentiable architecture, and present improvements to enhance it. Last, we train this network with a siamese architecture that combines three streams with a triplet loss. At the end of the training process, the proposed architecture produces a global image representation in a single forward pass that is well suited for image retrieval. Extensive experiments show that our approach significantly outperforms previous retrieval approaches, including state-of-the-art methods based on costly local descriptor indexing and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively report 94.7, 96.6, and 94.8 mean average precision. Our representations can also be heavily compressed using product quantization with little loss in accuracy. For additional material, please see this http URL.", "target": ["i) training dataを自動的にクリーニング、ii)ネットワークを最適化、iii)訓練を最適化して画像検索精度をSoTaまで上げたという話。"]} +{"source": "We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoder-decoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.", "target": ["イメージキャプション生成におけるEncoder-Decoder+attentionモデルにreview steps(thought vectors)を加えて既存アルゴリズムの欠点を緩和した。"]} +{"source": "We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob.
We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals.", "target": ["情報を守るための鍵の使い方を3つのDNN(3つのエージェント)を使って学習させた研究。"]} +{"source": "Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable.", "target": ["LSTMの出側にGaussian processを加え、データにおいて量的にも質的にもrobustにした研究。カーネル自体を学習する。推定値の信頼性も出力されるのもありがたみの一つ。計算量削減の工夫もされている。"]} +{"source": "Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings.", "target": ["CNNで、人の顔から第一印象(賢そうだなとか、年いってるなとか)を予測した研究。データセットはクラウドソーシングで作成。TestMyBrain.orgという心理テスト?のサイトを使って作成したらしい。"]} +{"source": "Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. 
+{"source": "The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.", "target": ["A paper on so-called style transfer. It resolves the trade-off between models that can produce many styles but are slow and models that are fast but must be trained per style. On the assumption that different styles should still share common structure, the authors find that multiple styles can be represented simply by adjusting normalization parameters per style."]}
+{"source": "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "target": ["Not just a GAN for 3D: feeding the hidden layers of the GAN's discriminator (the network that judges real vs. fake) into an SVM yields remarkable classification performance, 3D models are reconstructed from photos, and 3D models are added and subtracted; the authors do pretty much everything."]}
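The most reusable trick in the 3D-GAN entry above is treating the discriminator's hidden activations as an unsupervised shape descriptor. A toy sketch of that pipeline follows; the untrained stand-in discriminator, random voxels and labels, and layer sizes are all illustrative assumptions, and the paper itself pools features from several layers of a trained discriminator:

```python
# Discriminator hidden features -> linear SVM, on toy data.
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

disc_trunk = nn.Sequential(  # stand-in for a trained 3D-GAN discriminator
    nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(8, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveMaxPool3d(1), nn.Flatten())

voxels = torch.rand(100, 1, 32, 32, 32)       # toy 32^3 voxel grids
labels = torch.randint(0, 2, (100,)).numpy()  # toy class labels

with torch.no_grad():
    feats = disc_trunk(voxels).numpy()  # hidden-layer features as descriptors

clf = LinearSVC().fit(feats[:80], labels[:80])
print("toy accuracy:", clf.score(feats[80:], labels[80:]))
```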
+{"source": "Until recently, research on artificial neural networks was largely restricted to systems with only two types of variable: Neural activities that represent the current or recent input and weights that learn to capture regularities among inputs, outputs and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales and this suggests that artificial neural networks might benefit from variables that change slower than activities but much faster than the standard weights. These \"fast weights\" can be used to store temporary memories of the recent past and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in sequence-to-sequence models. By using fast weights we can avoid the need to store copies of neural activity patterns.", "target": ["An RNN normally considers only the previous hidden state, but there is no particular reason to look back only one step, so this work recursively applies the hidden states from one, two, ... (S) steps back. Naturally, more recent hidden states are weighted more heavily (controlled by a decay rate). This reportedly outperforms existing LSTMs."]}
+{"source": "Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.", "target": ["A paper on transfer learning for reinforcement learning (DQN). As far as I can tell, models trained beforehand (e.g., in a simulator) become columns that are integrated in parallel, somewhat like ensemble learning (this is what they call Progressive Networks)."]}
\ No newline at end of file
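The fast-weights entry above boils down to a two-line update: a fast matrix A(t) = lambda * A(t-1) + eta * h(t) h(t)^T accumulates a decayed memory of recent hidden states, and each new hidden state is settled over S inner steps of h_{s+1} = f(W h + C x + A h_s). A rough numpy sketch under those assumptions (dimensions, rates, and the use of tanh are illustrative; the paper also applies layer normalization inside the inner loop):

```python
# Fast-weights inner loop, illustrative dimensions and rates.
import numpy as np

D, S = 20, 3          # hidden size, inner-loop steps
lam, eta = 0.95, 0.5  # decay rate and fast learning rate
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (D, D))  # slow recurrent weights
C = rng.normal(0, 0.1, (D, D))  # slow input weights

A = np.zeros((D, D))  # fast weight matrix (memory of the recent past)
h = np.zeros(D)
for x in rng.normal(size=(10, D)):      # a toy input sequence
    A = lam * A + eta * np.outer(h, h)  # decayed outer-product memory
    pre = W @ h + C @ x                 # slow preliminary input
    hs = np.tanh(pre)
    for _ in range(S):                  # attend to the recent past S times
        hs = np.tanh(pre + A @ hs)
    h = hs
print(h[:5])
```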