Dataset schema (21 fields per record):

  field                dtype         values
  table_id_paper       string        length 15
  caption              string        length 14 to 1.88k
  row_header_level     int32         1 to 9
  row_headers          large_string  length 15 to 1.75k
  column_header_level  int32         1 to 6
  column_headers       large_string  length 7 to 1.01k
  contents             large_string  length 18 to 2.36k
  metrics_loc          string        2 classes
  metrics_type         large_string  length 5 to 532
  target_entity        large_string  length 2 to 330
  table_html_clean     large_string  length 274 to 7.88k
  table_name           string        9 classes
  table_id             string        9 classes
  paper_id             string        length 8
  page_no              int32         1 to 13
  dir                  string        8 classes
  description          large_string  length 103 to 3.8k
  class_sentence       string        length 3 to 120
  sentences            large_string  length 110 to 3.92k
  header_mention       string        length 12 to 1.8k
  valid                int32         0 or 1
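Each record below lists these 21 fields in this order, one value per line. The following is a minimal sketch of how such records could be loaded and inspected, assuming the dump is available as a JSON-lines file; the path tables.jsonl and the chosen fields are illustrative, not the dataset's official loading script.

import json

# Placeholder path: assumes one JSON object per line, with the fields listed above.
with open("tables.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

first = records[0]
for field in ("table_id_paper", "caption", "metrics_loc", "metrics_type",
              "target_entity", "paper_id", "page_no", "dir", "valid"):
    print(f"{field}: {first[field]}")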
D16-1007table_2
Comparison of different position features.
2
[['Position Feature', 'plain text PF'], ['Position Feature', 'TPF1'], ['Position Feature', 'TPF2']]
1
[['F1']]
[['83.21'], ['83.99'], ['83.90']]
column
['F1']
['Position Feature']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Position Feature || plain text PF</td> <td>83.21</td> </tr> <tr> <td>Position Feature || TPF1</td> <td>83.99</td> </tr> <tr> <td>Position Feature || TPF2</td> <td>83.90</td> </tr> </tbody></table>
Table 2
table_2
D16-1007
8
emnlp2016
Table 2 summarizes the performances of proposed model when different position features are exploited. To concentrate on studying the effect of position features, we do not involve lexical features in this section. As the table shows, the position feature on plain text is still effective in our model and we accredit its satisfactory result to the dependency information and tree-based kernels. The F1 scores of tree-based position features are higher since they are “specially designed” for our model. Contrary to our expectation, the more fine-grained TPF2 does not yield a better performance than TPF1, and two kinds of TPF give fairly close results. One possible reason is that the influence of a more elaborated definition of relative position is minimal. As most sentences in this dataset are of short length and their dependency trees are not so complicated, replacing TPF1 with TPF2 usually brings little new structural information and thus results in a similar F1 score. However, though the performances of different position features are close, tree-based position feature is an essential part of our model. The F1 score is severely reduced to 75.22 when we remove the tree-based position feature in PECNN.
[1, 2, 1, 1, 1, 2, 2, 0, 0]
['Table 2 summarizes the performances of proposed model when different position features are exploited.', 'To concentrate on studying the effect of position features, we do not involve lexical features in this section.', 'As the table shows, the position feature on plain text is still effective in our model and we accredit its satisfactory result to the dependency information and tree-based kernels.', 'The F1 scores of tree-based position features are higher since they are “specially designed” for our model.', 'Contrary to our expectation, the more fine-grained TPF2 does not yield a better performance than TPF1, and two kinds of TPF give fairly close results.', 'One possible reason is that the influence of a more elaborated definition of relative position is minimal.', 'As most sentences in this dataset are of short length and their dependency trees are not so complicated, replacing TPF1 with TPF2 usually brings little new structural information and thus results in a similar F1 score.', 'However, though the performances of different position features are close, tree-based position feature is an essential part of our model.', 'The F1 score is severely reduced to 75.22 when we remove the tree-based position feature in PECNN.']
[None, None, ['plain text PF', 'TPF1', 'TPF2'], ['TPF1', 'TPF2'], ['TPF1', 'TPF2'], None, ['TPF1', 'TPF2'], None, None]
1
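To illustrate how the flattened fields fit together, here is a hedged sketch that rebuilds the D16-1007 Table 2 record above from its row_headers, column_headers, and contents. Since those fields are typed as strings in the schema, the sketch assumes they hold Python-literal lists and parses them with ast.literal_eval; the ' || ' separator mirrors the table_html_clean field.

import ast

import pandas as pd

# Values copied verbatim from the record above; in practice they would come
# from a loaded record dict, e.g. record["row_headers"].
row_headers = ast.literal_eval(
    "[['Position Feature', 'plain text PF'], "
    "['Position Feature', 'TPF1'], ['Position Feature', 'TPF2']]")
column_headers = ast.literal_eval("[['F1']]")
contents = ast.literal_eval("[['83.21'], ['83.99'], ['83.90']]")

# Join multi-level headers with ' || ', as in the table_html_clean field.
index = [" || ".join(levels) for levels in row_headers]
columns = [" || ".join(levels) for levels in column_headers]

df = pd.DataFrame(contents, index=index, columns=columns)
print(df)  # a 3x1 table matching the <table> markup in the record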
D16-1010table_3
Pearson correlation values between human and model preferences for each construction and the verb-bias score; training on raw frequencies and 2 constructions. All correlations significant with p-value < 0.001, except the one value with *. Best result for each row is marked in boldface.
1
[['DO'], ['PD'], ['DO-PD']]
2
[['AB (Connectionist)', '-'], ['BFS (Bayesian)', 'Level 1'], ['BFS (Bayesian)', 'Level 2']]
[['0.06*', '0.23', '0.25'], ['0.33', '0.38', '0.32'], ['0.39', '0.53', '0.59']]
column
['correlation', 'correlation', 'correlation']
['AB (Connectionist)', 'BFS (Bayesian)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AB (Connectionist) || -</th> <th>BFS (Bayesian) || Level 1</th> <th>BFS (Bayesian) || Level 2</th> </tr> </thead> <tbody> <tr> <td>DO</td> <td>[0.06]</td> <td>0.23</td> <td>0.25</td> </tr> <tr> <td>PD</td> <td>0.33</td> <td>0.38</td> <td>0.32</td> </tr> <tr> <td>DO-PD</td> <td>0.39</td> <td>0.53</td> <td>0.59</td> </tr> </tbody></table>
Table 3
table_3
D16-1010
8
emnlp2016
Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score. The AB model does not correlate with the judgments for the DO. However, the model produces significant positive correlations with the PD judgments and with the verb bias score. The BFS model, on the other hand, achieves significant positive correlations on all measures, by both levels. As in the earlier experiments, the best correlation with the verb bias score is produced by the second level of the BFS model, as Figure 3 demonstrates.
[1, 1, 1, 1, 1]
['Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score.', 'The AB model does not correlate with the judgments for the DO.', 'However, the model produces significant positive correlations with the PD judgments and with the verb bias score.', 'The BFS model, on the other hand, achieves significant positive correlations on all measures, by both levels.', 'As in the earlier experiments, the best correlation with the verb bias score is produced by the second level of the BFS model.']
[['AB (Connectionist)', 'BFS (Bayesian)'], ['AB (Connectionist)', 'DO'], ['AB (Connectionist)', 'PD'], ['DO', 'PD', 'DO-PD', 'AB (Connectionist)', 'BFS (Bayesian)'], ['Level 2', 'BFS (Bayesian)']]
1
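The per-sentence annotation fields line up by position: sentences, class_sentence, and header_mention each have one entry per sentence of the description. A small sketch of that alignment, again assuming the string-typed fields parse as Python literals; the helper name sentence_view is illustrative.

import ast

def sentence_view(record):
    """Pair each description sentence with its class label and mentioned headers."""
    sentences = ast.literal_eval(record["sentences"])
    labels = ast.literal_eval(record["class_sentence"])
    mentions = ast.literal_eval(record["header_mention"])
    assert len(sentences) == len(labels) == len(mentions)
    return list(zip(sentences, labels, mentions))

# For the D16-1010 record above, the second triple would be:
# ('The AB model does not correlate with the judgments for the DO.',
#  1,
#  ['AB (Connectionist)', 'DO'])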
D16-1011table_4
Comparison between rationale models (middle and bottom rows) and the baselines using full title or body (top row).
1
[['Full title'], ['Full body'], ['Independent'], ['Independent'], ['Dependent'], ['Dependent']]
1
[['MAP (dev)'], ['MAP (test)'], ['% words']]
[['56.5', '60.0', '10.1'], ['54.2', '53.0', '89.9'], ['55.7', '53.6', '9.7'], ['56.3', '52.6', '19.7'], ['56.1', '54.6', '11.6'], ['56.5', '55.6', '32.8']]
column
['MAP (dev)', 'MAP (test)', '% words']
['Independent', 'Dependent']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP (dev)</th> <th>MAP (test)</th> <th>% words</th> </tr> </thead> <tbody> <tr> <td>Full title</td> <td>56.5</td> <td>60.0</td> <td>10.1</td> </tr> <tr> <td>Full body</td> <td>54.2</td> <td>53.0</td> <td>89.9</td> </tr> <tr> <td>Independent</td> <td>55.7</td> <td>53.6</td> <td>9.7</td> </tr> <tr> <td>Independent</td> <td>56.3</td> <td>52.6</td> <td>19.7</td> </tr> <tr> <td>Dependent</td> <td>56.1</td> <td>54.6</td> <td>11.6</td> </tr> <tr> <td>Dependent</td> <td>56.5</td> <td>55.6</td> <td>32.8</td> </tr> </tbody></table>
Table 4
table_4
D16-1011
8
emnlp2016
Results. Table 4 presents the results of our rationale model. We explore a range of hyper-parameter values. We include two runs for each version. The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average). We also show the results of different runs in Figure 6. The rationales achieve the MAP up to 56.5%, getting close to using the titles. The models also outperform the baseline of using the noisy question bodies, indicating the the models’ capacity of extracting short but important fragments.
[2, 1, 2, 2, 1, 2, 1, 1]
['Results.', 'Table 4 presents the results of our rationale model.', 'We explore a range of hyper-parameter values.', 'We include two runs for each version.', 'The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average).', 'We also show the results of different runs in Figure 6.', 'The rationales achieve the MAP up to 56.5%, getting close to using the titles.', 'The models also outperform the baseline of using the noisy question bodies, indicating the the models’ capacity of extracting short but important fragments.']
[None, None, None, None, ['Independent', 'Dependent', 'MAP (dev)'], None, ['Dependent', 'MAP (dev)', 'Full title'], ['Independent', 'Dependent', 'Full title', 'Full body']]
1
D16-1018table_2
Spearman’s rank correlation results on the SCWS dataset
4
[['Model', 'Huang', 'Similarity Metrics', 'AvgSim'], ['Model', 'Huang', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSim'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSim'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Li', 'Similarity Metrics', '-'], ['Model', 'Tian', 'Similarity Metrics', 'Model_M'], ['Model', 'Tian', 'Similarity Metrics', 'Model_W'], ['Model', 'Bartunov', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Ours + CBOW', 'Similarity Metrics', 'HardSim'], ['Model', 'Ours + CBOW', 'Similarity Metrics', 'SoftSim'], ['Model', 'Ours + Skip-gram', 'Similarity Metrics', 'HardSim'], ['Model', 'Ours + Skip-gram', 'Similarity Metrics', 'SoftSim']]
1
[['ρ × 100']]
[['62.8'], ['65.7'], ['66.2'], ['68.9'], ['67.2'], ['69.2'], ['69.7'], ['63.6'], ['65.4'], ['61.2'], ['64.3'], ['65.6'], ['64.9'], ['66.1']]
column
['correlation']
['Ours + CBOW', 'Ours + Skip-gram']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ρ × 100</th> </tr> </thead> <tbody> <tr> <td>Model || Huang || Similarity Metrics || AvgSim</td> <td>62.8</td> </tr> <tr> <td>Model || Huang || Similarity Metrics || AvgSimC</td> <td>65.7</td> </tr> <tr> <td>Model || Chen || Similarity Metrics || AvgSim</td> <td>66.2</td> </tr> <tr> <td>Model || Chen || Similarity Metrics || AvgSimC</td> <td>68.9</td> </tr> <tr> <td>Model || Neelakantan || Similarity Metrics || AvgSim</td> <td>67.2</td> </tr> <tr> <td>Model || Neelakantan || Similarity Metrics || AvgSimC</td> <td>69.2</td> </tr> <tr> <td>Model || Li || Similarity Metrics || -</td> <td>69.7</td> </tr> <tr> <td>Model || Tian || Similarity Metrics || Model_M</td> <td>63.6</td> </tr> <tr> <td>Model || Tian || Similarity Metrics || Model_W</td> <td>65.4</td> </tr> <tr> <td>Model || Bartunov || Similarity Metrics || AvgSimC</td> <td>61.2</td> </tr> <tr> <td>Model || Ours + CBOW || Similarity Metrics || HardSim</td> <td>64.3</td> </tr> <tr> <td>Model || Ours + CBOW || Similarity Metrics || SoftSim</td> <td>65.6</td> </tr> <tr> <td>Model || Ours + Skip-gram || Similarity Metrics || HardSim</td> <td>64.9</td> </tr> <tr> <td>Model || Ours + Skip-gram || Similarity Metrics || SoftSim</td> <td>66.1</td> </tr> </tbody></table>
Table 2
table_2
D16-1018
7
emnlp2016
Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset. In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance. The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li and Jurafsky (2015), Tian et al. (2014) and Bartunov et al. (2016). Here Ours + CBOW denotes our model with a CBOW based energy function and Ours + Skip-gram denotes our model with a Skip-gram based energy function. The results above the thick line are the models based on context clustering methods and the results below the thick line are the probabilistic models including ours. The similarity metrics of context clustering based models are AvgSim and AvgSimC proposed by Reisinger and Mooney (2010). Tian et al. (2014) propose two metrics Model_M and Model_W which are similar to our HardSim and SoftSim metrics. From Table 2, we can observe that our model outperforms the other probabilistic models and is not as good as the best context clustering based model. The context clustering based models are overall better than the probabilistic models on this task. A possible reason is that most context clustering based methods make use of more external knowledge than probabilistic models. However, note that Faruqui et al. (2016) presented several problems associated with the evaluation of word vectors on word similarity datasets and pointed out that the use of word similarity tasks for evaluation of word vectors is not sustainable. Bartunov et al. (2016) also suggest that SCWS should be of limited use for evaluating word representation models. Therefore, the results on this task shall be taken with caution. We consider that more realistic natural language processing tasks like word sense induction are better for evaluating sense embedding models.
[1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2]
['Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset.', 'In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance.', 'The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li and Jurafsky (2015), Tian et al. (2014) and Bartunov et al. (2016).', 'Here Ours + CBOW denotes our model with a CBOW based energy function and Ours + Skip-gram denotes our model with a Skip-gram based energy function.', 'The results above the thick line are the models based on context clustering methods and the results below the thick line are the probabilistic models including ours.', 'The similarity metrics of context clustering based models are AvgSim and AvgSimC proposed by Reisinger and Mooney (2010).', 'Tian et al. (2014) propose two metrics Model_M and Model_W which are similar to our HardSim and SoftSim metrics.', 'From Table 2, we can observe that our model outperforms the other probabilistic models and is not as good as the best context clustering based model.', 'The context clustering based models are overall better than the probabilistic models on this task.', 'A possible reason is that most context clustering based methods make use of more external knowledge than probabilistic models.', 'However, note that Faruqui et al. (2016) presented several problems associated with the evaluation of word vectors on word similarity datasets and pointed out that the use of word similarity tasks for evaluation of word vectors is not sustainable.', 'Bartunov et al. (2016) also suggest that SCWS should be of limited use for evaluating word representation models.', 'Therefore, the results on this task shall be taken with caution.', 'We consider that more realistic natural language processing tasks like word sense induction are better for evaluating sense embedding models.']
[None, None, ['Huang', 'Chen', 'Neelakantan', 'Li', 'Tian', 'Bartunov'], ['Ours + CBOW', 'Ours + Skip-gram'], None, ['AvgSim', 'AvgSimC'], ['Model_M', 'Model_W', 'HardSim', 'SoftSim'], ['Model'], ['Model'], ['Model'], None, None, None, None]
1
D16-1021table_4
Examples of attention weights in different hops for aspect level sentiment classification. The model only uses content attention. The hop columns show the weights of context words in each hop, indicated by values and gray color. This example shows the results of sentence “great food but the service was dreadful!” with “food” and “service” as the aspects.
1
[['great'], ['food'], ['but'], ['the'], ['was'], ['dreadful'], ['!']]
2
[['hop 1', 'service'], ['hop 1', 'food'], ['hop 2', 'service'], ['hop 2', 'food'], ['hop 3', 'service'], ['hop 3', 'food'], ['hop 4', 'service'], ['hop 4', 'food'], ['hop 5', 'service'], ['hop 5', 'food']]
[['0.20', '0.22', '0.15', '0.12', '0.14', '0.14', '0.13', '0.12', '0.23', '0.20'], ['0.11', '0.21', '0.07', '0.11', '0.08', '0.10', '0.12', '0.11', '0.06', '0.12'], ['0.20', '0.03', '0.10', '0.11', '0.10', '0.08', '0.12', '0.11', '0.13', '0.06'], ['0.03', '0.11', '0.07', '0.11', '0.08', '0.08', '0.12', '0.11', '0.06', '0.06'], ['0.08', '0.04', '0.07', '0.11', '0.08', '0.08', '0.12', '0.11', '0.06', '0.06'], ['0.20', '0.22', '0.45', '0.32', '0.45', '0.45', '0.28', '0.32', '0.40', '0.43'], ['0.19', '0.16', '0.08', '0.11', '0.08', '0.08', '0.12', '0.11', '0.07', '0.07']]
column
['weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights']
['service', 'food']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hop 1 || service</th> <th>hop 1 || food</th> <th>hop 2 || service</th> <th>hop 2 || food</th> <th>hop 3 || service</th> <th>hop 3 || food</th> <th>hop 4 || service</th> <th>hop 4 || food</th> <th>hop 5 || service</th> <th>hop 5 || food</th> </tr> </thead> <tbody> <tr> <td>great</td> <td>0.20</td> <td>0.22</td> <td>0.15</td> <td>0.12</td> <td>0.14</td> <td>0.14</td> <td>0.13</td> <td>0.12</td> <td>0.23</td> <td>0.20</td> </tr> <tr> <td>food</td> <td>0.11</td> <td>0.21</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.10</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.12</td> </tr> <tr> <td>but</td> <td>0.20</td> <td>0.03</td> <td>0.10</td> <td>0.11</td> <td>0.10</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.13</td> <td>0.06</td> </tr> <tr> <td>the</td> <td>0.03</td> <td>0.11</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.06</td> </tr> <tr> <td>was</td> <td>0.08</td> <td>0.04</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.06</td> </tr> <tr> <td>dreadful</td> <td>0.20</td> <td>0.22</td> <td>0.45</td> <td>0.32</td> <td>0.45</td> <td>0.45</td> <td>0.28</td> <td>0.32</td> <td>0.40</td> <td>0.43</td> </tr> <tr> <td>!</td> <td>0.19</td> <td>0.16</td> <td>0.08</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.07</td> <td>0.07</td> </tr> </tbody></table>
Table 4
table_4
D16-1021
7
emnlp2016
From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”. While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative. This case shows the effects of multiple hops. However, for food aspect, the content-based model also gives a larger weight to “dreadful” when the target we focus on is “food”. As a result, the model incorrectly predicts the polarity towards “food” as negative. This phenomenon might be caused by the neglect of location information.
[1, 1, 1, 1, 1, 2]
['From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”.', 'While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative.', 'This case shows the effects of multiple hops.', 'However, for food aspect, the content-based model also gives a larger weight to “dreadful” when the target we focus on is “food”.', 'As a result, the model incorrectly predicts the polarity towards “food” as negative.', 'This phenomenon might be caused by the neglect of location information.']
[['great', 'but', 'dreadful', 'service'], None, ['dreadful', 'service'], ['dreadful', 'food'], ['food'], None]
1
D16-1025table_2
Overall results on the HE Set: BLEU, computed against the original reference translation, and TER, computed with respect to the targeted post-edit (HTER) and multiple postedits (mTER).
2
[['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
1
[['BLEU'], ['HTER'], ['mTER']]
[['25.3', '28.0', '21.8'], ['24.6', '29.9', '23.4'], ['25.8', '29.0', '22.7'], ['31.1*', '21.1*', '16.2*']]
column
['BLEU', 'HTER', 'mTER']
['NMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>HTER</th> <th>mTER</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>25.3</td> <td>28.0</td> <td>21.8</td> </tr> <tr> <td>system || HPB</td> <td>24.6</td> <td>29.9</td> <td>23.4</td> </tr> <tr> <td>system || SPB</td> <td>25.8</td> <td>29.0</td> <td>22.7</td> </tr> <tr> <td>system || NMT</td> <td>31.1*</td> <td>21.1*</td> <td>16.2*</td> </tr> </tbody></table>
Table 2
table_2
D16-1025
4
emnlp2016
4 Overall Translation Quality. Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation. We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores. Focusing on mTER results, the gain obtained by NMT over the second best system (PBSY) amounts to 26%. It is also worth noticing that mTER is considerably lower than HTER for each system. This reduction shows that exploiting all the available postedits as references for TER is a viable way to control and overcome post-editors variability, thus ensuring a more reliable and informative evaluation about the real overall performance of MT systems. For this reason, the two following analyses rely on mTER. In particular, we investigate how specific characteristics of input documents affect the system’s overall translation quality, focusing on (i) sentence length and (ii) the different talks composing the dataset.
[2, 1, 1, 1, 1, 2, 0, 0]
['4 Overall Translation Quality.', 'Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation.', 'We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores.', 'Focusing on mTER results, the gain obtained by NMT over the second best system (PBSY) amounts to 26%.', 'It is also worth noticing that mTER is considerably lower than HTER for each system.', 'This reduction shows that exploiting all the available postedits as references for TER is a viable way to control and overcome post-editors variability, thus ensuring a more reliable and informative evaluation about the real overall performance of MT systems.', 'For this reason, the two following analyses rely on mTER.', 'In particular, we investigate how specific characteristics of input documents affect the system’s overall translation quality, focusing on (i) sentence length and (ii) the different talks composing the dataset.']
[None, ['system', 'HTER', 'mTER', 'BLEU'], ['NMT', 'BLEU', 'HTER', 'mTER'], ['mTER', 'NMT', 'PBSY'], ['mTER', 'HTER'], ['HTER', 'mTER'], None, None]
1
D16-1025table_4
Word reordering evaluation in terms of shift operations in HTER calculation and of KRS. For each system, the number of generated words, the number of shift errors and their corresponding percentages are reported.
2
[['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
1
[['#words'], ['#shifts'], ['%shifts'], ['KRS']]
[['11517', '354', '3.1', '84.6'], ['11417', '415', '3.6', '84.3'], ['11420', '398', '3.5', '84.5'], ['11284', '173', '1.5*', '88.3*']]
column
['#words', '#shifts', '%shifts', 'KRS']
['NMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#words</th> <th>#shifts</th> <th>%shifts</th> <th>KRS</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>11517</td> <td>354</td> <td>3.1</td> <td>84.6</td> </tr> <tr> <td>system || HPB</td> <td>11417</td> <td>415</td> <td>3.6</td> <td>84.3</td> </tr> <tr> <td>system || SPB</td> <td>11420</td> <td>398</td> <td>3.5</td> <td>84.5</td> </tr> <tr> <td>system || NMT</td> <td>11284</td> <td>173</td> <td>1.5*</td> <td>88.3*</td> </tr> </tbody></table>
Table 4
table_4
D16-1025
7
emnlp2016
5.3 Word order errors. To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics. The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the corresponding post-edit; and (iii) the corresponding percentage of shift errors. Notice that the shift error percentages are incorporated in the HTER scores reported in Table 2. We can see in Table 4 that shift errors in NMT translations are definitely less than in the other systems. The error reduction of NMT with respect to the second best system (PBSY) is about 50% (173 vs. 354). It should be recalled that these numbers only refer to shifts detected by HTER, that is (groups of) words of the MT output and corresponding post-edit that are identical but occurring in different positions. Words that had to be moved and modified at the same time (for instance replaced by a synonym or a morphological variant) are not counted in HTER shift figures, but are detected as substitution, insertion or deletion operations. To ensure that our reordering evaluation is not biased towards the alignment between the MT output and the post-edit performed by HTER, we run an additional assessment using KRS – Kendall Reordering Score (Birch et al., 2010) – which measures the similarity between the source-reference reorderings and the source-MT output reorderings. Being based on bilingual word alignment via the source sentence, KRS detects reordering errors also when post-edit and MT words are not identical. Also unlike TER, KRS is sensitive to the distance between the position of a word in the MT output and that in the reference. Looking at the last column of Table 4, we can say that our observations on HTER are confirmed by the KRS results: the reorderings performed by NMT are much more accurate than those performed by any PBMT system. Moreover, according to the approximate randomization test, KRS differences are statistically significant between NMT and all other systems, but not among the three PBMT systems.
[2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 1]
['5.3 Word order errors.', 'To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics.', 'The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the corresponding post-edit; and (iii) the corresponding percentage of shift errors. Notice that the shift error percentages are incorporated in the HTER scores reported in Table 2.', 'We can see in Table 4 that shift errors in NMT translations are definitely less than in the other systems.', 'The error reduction of NMT with respect to the second best system (PBSY) is about 50% (173 vs. 354).', 'It should be recalled that these numbers only refer to shifts detected by HTER, that is (groups of) words of the MT output and corresponding post-edit that are identical but occurring in different positions.', 'Words that had to be moved and modified at the same time (for instance replaced by a synonym or a morphological variant) are not counted in HTER shift figures, but are detected as substitution, insertion or deletion operations.', 'To ensure that our reordering evaluation is not biased towards the alignment between the MT output and the post-edit performed by HTER, we run an additional assessment using KRS – Kendall Reordering Score (Birch et al., 2010) – which measures the similarity between the source-reference reorderings and the source-MT output reorderings.', 'Being based on bilingual word alignment via the source sentence, KRS detects reordering errors also when post-edit and MT words are not identical.', 'Also unlike TER, KRS is sensitive to the distance between the position of a word in the MT output and that in the reference.', 'Looking at the last column of Table 4, we can say that our observations on HTER are confirmed by the KRS results: the reorderings performed by NMT are much more accurate than those performed by any PBMT system.', 'Moreover, according to the approximate randomization test, KRS differences are statistically significant between NMT and all other systems, but not among the three PBMT systems.']
[None, None, ['#words', '#shifts', '%shifts'], ['NMT', 'system', '#shifts', '%shifts'], ['NMT', 'PBSY'], None, None, ['KRS'], ['KRS'], ['KRS'], ['KRS', 'NMT'], ['KRS', 'NMT', 'system']]
1
D16-1032table_2
Human evaluation results on the generated and true recipes. Scores range in [1, 5].
2
[['Model', 'Attention'], ['Model', 'EncDec'], ['Model', 'NN'], ['Model', 'NN-Swap'], ['Model', 'Checklist'], ['Model', 'Checklist+'], ['Model', 'Truth']]
1
[['Syntax'], ['Ingredient use'], ['Follows goal']]
[['4.47', '3.02', '3.47'], ['4.58', '3.29', '3.61'], ['4.22', '3.02', '3.36'], ['4.11', '3.51', '3.78'], ['4.58', '3.80', '3.94'], ['4.39', '3.95', '4.10'], ['4.39', '4.03', '4.34']]
column
['Syntax', 'Ingridient use', 'Follows goal']
['Checklist', 'Checklist+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntax</th> <th>Ingredient use</th> <th>Follows goal</th> </tr> </thead> <tbody> <tr> <td>Model || Attention</td> <td>4.47</td> <td>3.02</td> <td>3.47</td> </tr> <tr> <td>Model || EncDec</td> <td>4.58</td> <td>3.29</td> <td>3.61</td> </tr> <tr> <td>Model || NN</td> <td>4.22</td> <td>3.02</td> <td>3.36</td> </tr> <tr> <td>Model || NN-Swap</td> <td>4.11</td> <td>3.51</td> <td>3.78</td> </tr> <tr> <td>Model || Checklist</td> <td>4.58</td> <td>3.80</td> <td>3.94</td> </tr> <tr> <td>Model || Checklist+</td> <td>4.39</td> <td>3.95</td> <td>4.10</td> </tr> <tr> <td>Model || Truth</td> <td>4.39</td> <td>4.03</td> <td>4.34</td> </tr> </tbody></table>
Table 2
table_2
D16-1032
8
emnlp2016
Table 2 shows the averaged scores over the responses. The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish. Perhaps surprisingly, both the Attention and EncDec baselines and the Checklist model beat the true recipes in terms of having better grammar. This can partly be attributed to noise in the parsing of the true recipes, and partly because the neural models tend to generate shorter, simpler texts.
[1, 1, 1, 2]
['Table 2 shows the averaged scores over the responses.', 'The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish.', 'Perhaps surprisingly, both the Attention and EncDec baselines and the Checklist model beat the true recipes in terms of having better grammar.', 'This can partly be attributed to noise in the parsing of the true recipes, and partly because the neural models tend to generate shorter, simpler texts.']
[None, ['Checklist', 'Checklist+', 'NN', 'Model'], ['Attention', 'EncDec', 'Checklist'], None]
1
D16-1035table_4
Performance comparison with other state-of-the-art systems on RST-DT.
2
[['System', 'Joty et al. (2013)'], ['System', 'Ji and Eisenstein. (2014)'], ['System', 'Feng and Hirst. (2014)'], ['System', 'Li et al. (2014a)'], ['System', 'Li et al. (2014b)'], ['System', 'Heilman and Sagae. (2015)'], ['System', 'Ours'], ['System', 'Human']]
1
[['S'], ['N'], ['R']]
[['82.7', '68.4', '55.7'], ['82.1', '71.1', '61.6'], ['85.7', '71.0', '58.2'], ['84.0', '70.8', '58.6'], ['83.4', '73.8', '57.8'], ['83.5', '68.1', '55.1'], ['85.8', '71.1', '58.9'], ['88.7', '77.7', '65.8']]
column
['S', 'N', 'R']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>N</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>System || Joty et al. (2013)</td> <td>82.7</td> <td>68.4</td> <td>55.7</td> </tr> <tr> <td>System || Ji and Eisenstein. (2014)</td> <td>82.1</td> <td>71.1</td> <td>61.6</td> </tr> <tr> <td>System || Feng and Hirst. (2014)</td> <td>85.7</td> <td>71.0</td> <td>58.2</td> </tr> <tr> <td>System || Li et al. (2014a)</td> <td>84.0</td> <td>70.8</td> <td>58.6</td> </tr> <tr> <td>System || Li et al. (2014b)</td> <td>83.4</td> <td>73.8</td> <td>57.8</td> </tr> <tr> <td>System || Heilman and Sagae. (2015)</td> <td>83.5</td> <td>68.1</td> <td>55.1</td> </tr> <tr> <td>System || Ours</td> <td>85.8</td> <td>71.1</td> <td>58.9</td> </tr> <tr> <td>System || Human</td> <td>88.7</td> <td>77.7</td> <td>65.8</td> </tr> </tbody></table>
Table 4
table_4
D16-1035
8
emnlp2016
Table 4 shows the performance for our system and those systems. Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems. No system achieves the best result on all three metrics. To further show the effectiveness of the deep learning model itself without handcrafted features, we compare the performance between our model and the model proposed by Li et al. (2014a) without handcrafted features and the results are shown in Table 5. It shows our overall performance outperforms the model proposed by Li et al. (2014a) which illustrates our model is effective.
[1, 1, 1, 0, 0]
['Table 4 shows the performance for our system and those systems.', 'Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems.', 'No system achieves the best result on all three metrics.', 'To further show the effectiveness of the deep learning model itself without handcrafted features, we compare the performance between our model and the model proposed by Li et al. (2014a) without handcrafted features and the results are shown in Table 5.', 'It shows our overall performance outperforms the model proposed by Li et al. (2014a) which illustrates our model is effective.']
[None, ['Ours', 'System'], ['System', 'S', 'N', 'R'], None, None]
1
D16-1038table_7
Domain Transfer Results. We conduct the evaluation on TAC-KBP corpus with the split of newswire (NW) and discussion form (DF) documents. Here, we choose MSEP-EMD and MSEP-CorefESA+AUG+KNOW as the MSEP approach for event detection and co-reference respectively. We use SSED and SupervisedBase as the supervised modules for comparison. For event detection, we compare F1 scores of span plus type match while we report the average F1 scores for event co-reference.
3
[['Event Detection', 'In Domain', 'Train NW Test NW'], ['Event Detection', 'Out of Domain', 'Train DF Test NW'], ['Event Detection', 'In Domain', 'Train DF Test DF'], ['Event Detection', 'Out of Domain', 'Train NW Test DF'], ['Event Co-reference', 'In Domain', 'Train NW Test NW'], ['Event Co-reference', 'Out of Domain', 'Train DF Test NW'], ['Event Co-reference', 'In Domain', 'Train DF Test DF'], ['Event Co-reference', 'Out of Domain', 'Train NW Test DF']]
1
[['MSEP'], ['Supervised']]
[['58.5', '63.7'], ['55.1', '54.8'], ['57.9', '62.6'], ['52.8', '52.3'], ['73.2', '73.6'], ['71', '70.1'], ['68.6', '68.9'], ['67.9', '67']]
column
['F1', 'F1']
['MSEP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSEP</th> <th>Supervised</th> </tr> </thead> <tbody> <tr> <td>Event Detection || In Domain || Train NW Test NW</td> <td>58.5</td> <td>63.7</td> </tr> <tr> <td>Event Detection || Out of Domain || Train DF Test NW</td> <td>55.1</td> <td>54.8</td> </tr> <tr> <td>Event Detection || In Domain || Train DF Test DF</td> <td>57.9</td> <td>62.6</td> </tr> <tr> <td>Event Detection || Out of Domain || Train NW Test DF</td> <td>52.8</td> <td>52.3</td> </tr> <tr> <td>Event Co-reference || In Domain || Train NW Test NW</td> <td>73.2</td> <td>73.6</td> </tr> <tr> <td>Event Co-reference || Out of Domain || Train DF Test NW</td> <td>71</td> <td>70.1</td> </tr> <tr> <td>Event Co-reference || In Domain || Train DF Test DF</td> <td>68.6</td> <td>68.9</td> </tr> <tr> <td>Event Co-reference || Out of Domain || Train NW Test DF</td> <td>67.9</td> <td>67</td> </tr> </tbody></table>
Table 7
table_7
D16-1038
9
emnlp2016
4.7 Domain Transfer Evaluation. To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system. TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal number of documents. When trained on NW and tested on DF, supervised methods encounter out-of-domain situations. However, the MSEP system can adapt well. Table 7 shows that MSEP outperforms supervised methods in out-of-domain situations for both tasks. The differences are statistically significant with p < 0.05.
[2, 2, 2, 1, 1, 1, 2]
['4.7 Domain Transfer Evaluation.', 'To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system.', 'TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal number of documents.', 'When trained on NW and tested on DF, supervised methods encounter out-of-domain situations.', 'However, the MSEP system can adapt well.', 'Table 7 shows that MSEP outperforms supervised methods in out-of-domain situations for both tasks.', 'The differences are statistically significant with p < 0.05.']
[None, ['MSEP', 'Supervised'], None, ['Train NW Test DF'], ['MSEP'], ['MSEP', 'Supervised', 'Out of Domain', 'Event Detection', 'Event Co-reference'], None]
1
D16-1039table_2
Performance results for the BLESS and ENTAILMENT datasets.
4
[['Model', 'SVM+Yu', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vec', 'Dataset', 'BLESS'], ['Model', 'SVM+Ourshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Our', 'Dataset', 'BLESS'], ['Model', 'SVM+Yu', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Word2Vec', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Ourshort', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Our', 'Dataset', 'ENTAIL']]
1
[['Accuracy']]
[['90.4%'], ['83.8%'], ['84.0%'], ['91.1%'], ['93.6%'], ['87.5%'], ['82.8%'], ['83.3%'], ['88.2%'], ['91.7%']]
column
['accuracy']
['SVM+Our']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Dataset || BLESS</td> <td>90.4%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Dataset || BLESS</td> <td>83.8%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Dataset || BLESS</td> <td>84.0%</td> </tr> <tr> <td>Model || SVM+Ourshort || Dataset || BLESS</td> <td>91.1%</td> </tr> <tr> <td>Model || SVM+Our || Dataset || BLESS</td> <td>93.6%</td> </tr> <tr> <td>Model || SVM+Yu || Dataset || ENTAIL</td> <td>87.5%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Dataset || ENTAIL</td> <td>82.8%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Dataset || ENTAIL</td> <td>83.3%</td> </tr> <tr> <td>Model || SVM+Ourshort || Dataset || ENTAIL</td> <td>88.2%</td> </tr> <tr> <td>Model || SVM+Our || Dataset || ENTAIL</td> <td>91.7%</td> </tr> </tbody></table>
Table 2
table_2
D16-1039
7
emnlp2016
Table 2 shows the performance of the three supervised models in Experiment 1. Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets. Specifically, our approach improves the average accuracy by 4% compared to Yu’s method, and by 9% compared to the Word2Vec method. The Word2Vec embeddings have the worst result because it is based only on co-occurrence based similarity, which is not effective for the classifier to accurately recognize all the taxonomic relations. Our approach performs better than Yu’s method and it shows that our approach can learn embeddings more effectively. Our approach encodes not only hypernym and hyponym terms but also the contextual information between them, while Yu’s method ignores the contextual information for taxonomic relation identification. Moreover, from the experimental results of SVM+Our and SVM+Ourshort, we can observe that the offset vector between hypernym and hyponym, which captures the contextual information, plays an important role in our approach as it helps to improve the performance in both datasets. However, the offset feature is not so important for the Word2Vec model. The reason is that the Word2Vec model is targeted for the analogy task rather than taxonomic relation identification.
[1, 1, 1, 1, 1, 2, 1, 2, 2]
['Table 2 shows the performance of the three supervised models in Experiment 1.', 'Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets.', 'Specifically, our approach improves the average accuracy by 4% compared to Yu’s method, and by 9% compared to the Word2Vec method.', 'The Word2Vec embeddings have the worst result because it is based only on co-occurrence based similarity, which is not effective for the classifier to accurately recognize all the taxonomic relations.', 'Our approach performs better than Yu’s method and it shows that our approach can learn embeddings more effectively.', 'Our approach encodes not only hypernym and hyponym terms but also the contextual information between them, while Yu’s method ignores the contextual information for taxonomic relation identification.', 'Moreover, from the experimental results of SVM+Our and SVM+Ourshort, we can observe that the offset vector between hypernym and hyponym, which captures the contextual information, plays an important role in our approach as it helps to improve the performance in both datasets.', 'However, the offset feature is not so important for the Word2Vec model.', 'The reason is that the Word2Vec model is targeted for the analogy task rather than taxonomic relation identification.']
[None, ['SVM+Ourshort', 'SVM+Our', 'BLESS', 'ENTAIL', 'Accuracy'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu', 'SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu'], ['SVM+Yu'], ['SVM+Ourshort', 'SVM+Our'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort', 'SVM+Word2Vec']]
1
D16-1039table_3
Performance results for the general domain datasets when using one domain for training and another domain for testing.
6
[['Model', 'SVM+Yu', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vec', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Ourshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Our', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Yu', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Word2Vecshort', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Word2Vec', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Ourshort', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Our', 'Training', 'ENTAIL', 'Testing', 'BLESS']]
1
[['Accuracy']]
[['83.7%'], ['76.5%'], ['77.1%'], ['85.8%'], ['89.4%'], ['87.1%'], ['78.0%'], ['78.9%'], ['87.1%'], ['90.6%']]
column
['accuracy']
['SVM+Our']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Training || BLESS || Testing || ENTAIL</td> <td>83.7%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Training || BLESS || Testing || ENTAIL</td> <td>76.5%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Training || BLESS || Testing || ENTAIL</td> <td>77.1%</td> </tr> <tr> <td>Model || SVM+Ourshort || Training || BLESS || Testing || ENTAIL</td> <td>85.8%</td> </tr> <tr> <td>Model || SVM+Our || Training || BLESS || Testing || ENTAIL</td> <td>89.4%</td> </tr> <tr> <td>Model || SVM+Yu || Training || ENTAIL || Testing || BLESS</td> <td>87.1%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Training || ENTAIL || Testing || BLESS</td> <td>78.0%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Training || ENTAIL || Testing || BLESS</td> <td>78.9%</td> </tr> <tr> <td>Model || SVM+Ourshort || Training || ENTAIL || Testing || BLESS</td> <td>87.1%</td> </tr> <tr> <td>Model || SVM+Our || Training || ENTAIL || Testing || BLESS</td> <td>90.6%</td> </tr> </tbody></table>
Table 3
table_3
D16-1039
7
emnlp2016
Experiment 2. This experiment aims to evaluate the generalization capability of our extracted term embeddings. In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa. Similarly, we exclude from the training set any pair of terms that has one term appearing in the testing set. The experimental results in Table 3 show that our term embedding learning approach performs better than other methods in accuracy. It also shows that the taxonomic properties identified by our term embedding learning approach have great generalization capability (i.e. less dependent on the training set), and can be used generically for representing taxonomic relations.
[2, 2, 2, 2, 1, 1]
['Experiment 2.', 'This experiment aims to evaluate the generalization capability of our extracted term embeddings.', 'In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa.', 'Similarly, we exclude from the training set any pair of terms that has one term appearing in the testing set.', 'The experimental results in Table 3 show that our term embedding learning approach performs better than other methods in accuracy.', 'It also shows that the taxonomic properties identified by our term embedding learning approach have great generalization capability (i.e. less dependent on the training set), and can be used generically for representing taxonomic relations.']
[None, None, ['BLESS', 'ENTAIL'], None, ['SVM+Our', 'Model'], ['SVM+Our']]
1
D16-1043table_5
Performance on common coverage subsets of the datasets (MEN* and SimLex*).
3
[['Source', 'Wikipedia', 'Text'], ['Source', 'Google', 'Visual'], ['Source', 'Google', 'MM'], ['Source', 'Bing', 'Visual'], ['Source', 'Bing', 'MM'], ['Source', 'Flickr', 'Visual'], ['Source', 'Flickr', 'MM'], ['Source', 'ImageNet', 'Visual'], ['Source', 'ImageNet', 'MM'], ['Source', 'ESPGame', 'Visual'], ['Source', 'ESPGame', 'MM']]
6
[['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'MEN'], ['Arch.', 'GoogLeNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'GoogLeNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'GoogLeNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'GoogLeNet', 'Agg.', 'Max', 'Type/Eval', 'MEN'], ['Arch.', 'VGGNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'VGGNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'VGGNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'VGGNet', 'Agg.', 'Max', 'Type/Eval', 'MEN']]
[['0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654'], ['0.406', '0.549', '0.402', '0.552', '0.420', '0.570', '0.434', '0.579', '0.430', '0.576', '0.406', '0.560'], ['0.366', '0.691', '0.344', '0.693', '0.366', '0.701', '0.342', '0.699', '0.378', '0.701', '0.341', '0.693'], ['0.431', '0.613', '0.425', '0.601', '0.410', '0.612', '0.414', '0.603', '0.400', '0.611', '0.398', '0.569'], ['0.384', '0.715', '0.355', '0.708', '0.374', '0.725', '0.343', '0.712', '0.363', '0.720', '0.340', '0.705'], ['0.382', '0.577', '0.371', '0.544', '0.378', '0.547', '0.354', '0.518', '0.378', '0.567', '0.340', '0.511'], ['0.372', '0.725', '0.344', '0.712', '0.367', '0.728', '0.336', '0.716', '0.370', '0.726', '0.330', '0.711'], ['0.316', '0.560', '0.316', '0.560', '0.347', '0.538', '0.423', '0.600', '0.412', '0.581', '0.413', '0.574'], ['0.348', '0.711', '0.348', '0.711', '0.364', '0.717', '0.394', '0.729', '0.418', '0.724', '0.405', '0.721'], ['0.037', '0.431', '0.039', '0.347', '0.104', '0.501', '0.125', '0.438', '0.188', '0.514', '0.125', '0.460'], ['0.179', '0.666', '0.147', '0.651', '0.224', '0.692', '0.226', '0.683', '0.268', '0.697', '0.222', '0.688']]
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['VGGNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || AlexNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || AlexNet || Agg. || Max || Type/Eval || MEN</th> <th>Arch. || GoogLeNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || GoogLeNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || GoogLeNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || GoogLeNet || Agg. || Max || Type/Eval || MEN</th> <th>Arch. || VGGNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || VGGNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || VGGNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || VGGNet || Agg. || Max || Type/Eval || MEN</th> </tr> </thead> <tbody> <tr> <td>Source || Wikipedia || Text</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> </tr> <tr> <td>Source || Google || Visual</td> <td>0.406</td> <td>0.549</td> <td>0.402</td> <td>0.552</td> <td>0.420</td> <td>0.570</td> <td>0.434</td> <td>0.579</td> <td>0.430</td> <td>0.576</td> <td>0.406</td> <td>0.560</td> </tr> <tr> <td>Source || Google || MM</td> <td>0.366</td> <td>0.691</td> <td>0.344</td> <td>0.693</td> <td>0.366</td> <td>0.701</td> <td>0.342</td> <td>0.699</td> <td>0.378</td> <td>0.701</td> <td>0.341</td> <td>0.693</td> </tr> <tr> <td>Source || Bing || Visual</td> <td>0.431</td> <td>0.613</td> <td>0.425</td> <td>0.601</td> <td>0.410</td> <td>0.612</td> <td>0.414</td> <td>0.603</td> <td>0.400</td> <td>0.611</td> <td>0.398</td> <td>0.569</td> </tr> <tr> <td>Source || Bing || MM</td> <td>0.384</td> <td>0.715</td> <td>0.355</td> <td>0.708</td> <td>0.374</td> <td>0.725</td> <td>0.343</td> <td>0.712</td> <td>0.363</td> <td>0.720</td> <td>0.340</td> <td>0.705</td> </tr> <tr> <td>Source || Flickr || Visual</td> <td>0.382</td> <td>0.577</td> <td>0.371</td> <td>0.544</td> <td>0.378</td> <td>0.547</td> <td>0.354</td> <td>0.518</td> <td>0.378</td> <td>0.567</td> <td>0.340</td> <td>0.511</td> </tr> <tr> <td>Source || Flickr || MM</td> <td>0.372</td> <td>0.725</td> <td>0.344</td> <td>0.712</td> <td>0.367</td> <td>0.728</td> <td>0.336</td> <td>0.716</td> <td>0.370</td> <td>0.726</td> <td>0.330</td> <td>0.711</td> </tr> <tr> <td>Source || ImageNet || Visual</td> <td>0.316</td> <td>0.560</td> <td>0.316</td> <td>0.560</td> <td>0.347</td> <td>0.538</td> <td>0.423</td> <td>0.600</td> <td>0.412</td> <td>0.581</td> <td>0.413</td> <td>0.574</td> </tr> <tr> <td>Source || ImageNet || MM</td> <td>0.348</td> <td>0.711</td> <td>0.348</td> <td>0.711</td> <td>0.364</td> <td>0.717</td> <td>0.394</td> <td>0.729</td> <td>0.418</td> <td>0.724</td> <td>0.405</td> <td>0.721</td> </tr> <tr> <td>Source || ESPGame || Visual</td> <td>0.037</td> <td>0.431</td> <td>0.039</td> <td>0.347</td> <td>0.104</td> <td>0.501</td> <td>0.125</td> <td>0.438</td> <td>0.188</td> <td>0.514</td> <td>0.125</td> <td>0.460</td> </tr> <tr> <td>Source || ESPGame || MM</td> <td>0.179</td> <td>0.666</td> <td>0.147</td> <td>0.651</td> <td>0.224</td> <td>0.692</td> <td>0.226</td> <td>0.683</td> <td>0.268</td> <td>0.697</td> <td>0.222</td> <td>0.688</td> </tr> </tbody></table>
Table 5
table_5
D16-1043
6
emnlp2016
5.2 Common subset comparison. Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources. First, note the same patterns as before: multi-modal representations perform better than linguistic ones. Even for the poorly performing ESP Game dataset, the VGGNet representations perform better on both SimLex and MEN (bottom right of the table). Visual representations from Google, Bing, Flickr and ImageNet all perform much better than ESP Game on this common covered subset. In a sense, the fullcoverage datasets were “punished” for their ability to return images for abstract words in the previous experiment: on this subset, which is more concrete, the search engines do much better. To a certain extent, including linguistic information is actually detrimental to performance, with multi-modal performing worse than purely visual. Again, we see the marked improvement with VGGNet for ImageNet, while Google, Bing and Flickr all do very well, regardless of the architecture.
[2, 1, 1, 1, 1, 1, 1, 1]
['5.2 Common subset comparison.', 'Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources.', 'First, note the same patterns as before: multi-modal representations perform better than linguistic ones.', 'Even for the poorly performing ESP Game dataset, the VGGNet representations perform better on both SimLex and MEN (bottom right of the table).', 'Visual representations from Google, Bing, Flickr and ImageNet all perform much better than ESP Game on this common covered subset.', 'In a sense, the fullcoverage datasets were “punished” for their ability to return images for abstract words in the previous experiment: on this subset, which is more concrete, the search engines do much better.', 'To a certain extent, including linguistic information is actually detrimental to performance, with multi-modal performing worse than purely visual.', 'Again, we see the marked improvement with VGGNet for ImageNet, while Google, Bing and Flickr all do very well, regardless of the architecture.']
[None, None, None, ['ESPGame', 'VGGNet', 'SL', 'MEN'], ['Google', 'Bing', 'Flickr', 'ImageNet', 'ESPGame'], None, None, ['VGGNet', 'ImageNet', 'Google', 'Bing', 'Flickr']]
1
D16-1044table_1
Comparison of multimodal pooling methods. Models are trained on the VQA train split and tested on test-dev.
2
[['Method', 'Element-wise Sum'], ['Method', 'Concatenation'], ['Method', 'Concatenation + FC'], ['Method', 'Concatenation + FC + FC'], ['Method', 'Element-wise Product'], ['Method', 'Element-wise Product + FC'], ['Method', 'Element-wise Product + FC + FC'], ['Method', 'MCB (2048 × 2048 → 16K)'], ['Method', 'Full Bilinear (128 × 128 → 16K)'], ['Method', 'MCB (128 × 128 → 4K)'], ['Method', 'Element-wise Product with VGG-19'], ['Method', 'MCB (d = 16K) with VGG-19'], ['Method', 'Concatenation + FC with Attention'], ['Method', 'MCB (d = 16K) with Attention']]
1
[['Accuracy']]
[['56.50'], ['57.49'], ['58.40'], ['57.10'], ['58.57'], ['56.44'], ['57.88'], ['59.83'], ['58.46'], ['58.69'], ['55.97'], ['57.05'], ['58.36'], ['62.50']]
column
['accuracy']
['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Method || Element-wise Sum</td> <td>56.50</td> </tr> <tr> <td>Method || Concatenation</td> <td>57.49</td> </tr> <tr> <td>Method || Concatenation + FC</td> <td>58.40</td> </tr> <tr> <td>Method || Concatenation + FC + FC</td> <td>57.10</td> </tr> <tr> <td>Method || Element-wise Product</td> <td>58.57</td> </tr> <tr> <td>Method || Element-wise Product + FC</td> <td>56.44</td> </tr> <tr> <td>Method || Element-wise Product + FC + FC</td> <td>57.88</td> </tr> <tr> <td>Method || MCB (2048 × 2048 → 16K)</td> <td>59.83</td> </tr> <tr> <td>Method || Full Bilinear (128 × 128 → 16K)</td> <td>58.46</td> </tr> <tr> <td>Method || MCB (128 × 128 → 4K)</td> <td>58.69</td> </tr> <tr> <td>Method || Element-wise Product with VGG-19</td> <td>55.97</td> </tr> <tr> <td>Method || MCB (d = 16K) with VGG-19</td> <td>57.05</td> </tr> <tr> <td>Method || Concatenation + FC with Attention</td> <td>58.36</td> </tr> <tr> <td>Method || MCB (d = 16K) with Attention</td> <td>62.50</td> </tr> </tbody></table>
Table 1
table_1
D16-1044
6
emnlp2016
4.3 Ablation Results. We compare the performance of non-bilinear and bilinear pooling methods in Table 1. We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product. One could argue that the compact bilinear method simply has more parameters than the non-bilinear pooling methods, which contributes to its performance. We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters. However, even with similar parameter budgets, nonbilinear methods could not achieve the same accuracy as the MCB method. For example, the “Concatenation + FC + FC” pooling method has approximately 40962 + 40962 + 4096 × 3000 ≈ 46 million parameters, which matches the 48 million parameters available in MCB with d = 16000. However, the performance of the “Concatenation + FC + FC” method is only 57.10% compared to MCB’s 59.83%. Section 2 in Table 1 also shows that compact bilinear pooling has no impact on accuracy compared to full bilinear pooling. Section 3 in Table 1 demonstrates that the MCB brings improvements regardless of the image CNN used. We primarily use ResNet152 in this paper, but MCB also improves performance if VGG-19 is used. Section 4 in Table 1 shows that our soft attention model works best with MCB pooling. In fact, attending to the Concatenation + FC layer has the same performance as not using attention at all, while attending to the MCB layer improves performance by 2.67 points.
[2, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1]
['4.3 Ablation Results.', 'We compare the performance of non-bilinear and bilinear pooling methods in Table 1.', 'We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product.', 'One could argue that the compact bilinear method simply has more parameters than the non-bilinear pooling methods, which contributes to its performance.', 'We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters.', 'However, even with similar parameter budgets, non-bilinear methods could not achieve the same accuracy as the MCB method.', 'For example, the “Concatenation + FC + FC” pooling method has approximately 4096² + 4096² + 4096 × 3000 ≈ 46 million parameters, which matches the 48 million parameters available in MCB with d = 16000.', 'However, the performance of the “Concatenation + FC + FC” method is only 57.10% compared to MCB’s 59.83%.', 'Section 2 in Table 1 also shows that compact bilinear pooling has no impact on accuracy compared to full bilinear pooling.', 'Section 3 in Table 1 demonstrates that the MCB brings improvements regardless of the image CNN used.', 'We primarily use ResNet152 in this paper, but MCB also improves performance if VGG-19 is used. Section 4 in Table 1 shows that our soft attention model works best with MCB pooling.', 'In fact, attending to the Concatenation + FC layer has the same performance as not using attention at all, while attending to the MCB layer improves performance by 2.67 points.']
[None, None, ['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention', 'Method'], None, None, ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], None, ['MCB (d = 16K) with VGG-19'], ['MCB (d = 16K) with Attention'], ['MCB (d = 16K) with Attention']]
1
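The parameter-count arithmetic in the description above is easy to verify; below is a back-of-the-envelope Python sketch, assuming a 3000-way answer classifier (suggested by the 4096 × 3000 term) and that MCB's 48 million parameters come from a 16000-dimensional feature feeding the same classifier:

d_in = 4096          # width of each stacked FC layer (assumption: matches the 4096 units in the description)
n_answers = 3000     # answer vocabulary size (assumption, inferred from the 4096 x 3000 term)
d_mcb = 16000        # MCB output dimension (d = 16K)

concat_fc_fc = d_in * d_in + d_in * d_in + d_in * n_answers  # two FC layers plus the classifier
mcb_classifier = d_mcb * n_answers                           # classifier on top of the 16K MCB feature

print(f"Concatenation + FC + FC: {concat_fc_fc / 1e6:.1f}M parameters")    # ~45.8M, i.e. the quoted ~46M
print(f"MCB (d = 16K) classifier: {mcb_classifier / 1e6:.1f}M parameters")  # 48.0M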
D16-1045table_1
Overall Synthetic Data Results. A and B denote an aggressive and a balanced approach, respectively. Acc. (std) is the average and the standard deviation of the accuracy across 10 test sets. # Wins is the number of test sets on which the SWVP algorithm outperforms CSP. Gener. is the number of times the best β hyper-parameter value on the development set is also the best value on the test set, or the test set accuracy with the best development set β is at most 0.5% lower than that with the best test set β.
2
[['Model', 'B-WM'], ['Model', 'B-WMR'], ['Model', 'A-WM'], ['Model', 'A-WMR'], ['Model', 'CSP']]
2
[['simple(++), learnable(+++)', 'Acc. (std)'], ['simple(++), learnable(+++)', '# Wins'], ['simple(++), learnable(+++)', 'Gener.'], ['simple(++), learnable(++)', 'Acc. (std)'], ['simple(++), learnable(++)', '# Wins'], ['simple(++), learnable(++)', 'Gener.'], ['simple(+), learnable(+)', 'Acc. (std)'], ['simple(+), learnable(+)', '# Wins'], ['simple(+), learnable(+)', 'Gener.']]
[['75.47(3.05)', '9/10', '10/10', '63.18 (1.32)', '9/10', '10/10', '28.48 (1.9)', '5/10', '10/10'], ['75.96 (2.42)', '8/10', '10/10', '63.02 (2.49)', '9/10', '10/10', '24.31 (5.2)', '4/10', '10/10'], ['74.18 (2.16)', '7/10', '10/10', '61.65 (2.30)', '9/10', '10/10', '30.45 (1.0)', '6/10', '10/10'], ['75.17 (3.07)', '7/10', '10/10', '61.02 (1.93)', '8/10', '10/10', '25.8 (3.18)', '2/10', '10/10'], ['72.24 (3.45)', 'NA', 'NA', '57.89 (2.85)', 'NA', 'NA', '25.27(8.55)', 'NA', 'NA']]
column
['Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.']
['B-WM', 'B-WMR', 'A-WM', 'A-WMR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>simple(++), learnable(+++) || Acc. (std)</th> <th>simple(++), learnable(+++) || # Wins</th> <th>simple(++), learnable(+++) || Gener.</th> <th>simple(++), learnable(++) || Acc. (std)</th> <th>simple(++), learnable(++) || # Wins</th> <th>simple(++), learnable(++) || Gener.</th> <th>simple(+), learnable(+) || Acc. (std)</th> <th>simple(+), learnable(+) || # Wins</th> <th>simple(+), learnable(+) || Gener.</th> </tr> </thead> <tbody> <tr> <td>Model || B-WM</td> <td>75.47(3.05)</td> <td>9/10</td> <td>10/10</td> <td>63.18 (1.32)</td> <td>9/10</td> <td>10/10</td> <td>28.48 (1.9)</td> <td>5/10</td> <td>10/10</td> </tr> <tr> <td>Model || B-WMR</td> <td>75.96 (2.42)</td> <td>8/10</td> <td>10/10</td> <td>63.02 (2.49)</td> <td>9/10</td> <td>10/10</td> <td>24.31 (5.2)</td> <td>4/10</td> <td>10/10</td> </tr> <tr> <td>Model || A-WM</td> <td>74.18 (2.16)</td> <td>7/10</td> <td>10/10</td> <td>61.65 (2.30)</td> <td>9/10</td> <td>10/10</td> <td>30.45 (1.0)</td> <td>6/10</td> <td>10/10</td> </tr> <tr> <td>Model || A-WMR</td> <td>75.17 (3.07)</td> <td>7/10</td> <td>10/10</td> <td>61.02 (1.93)</td> <td>8/10</td> <td>10/10</td> <td>25.8 (3.18)</td> <td>2/10</td> <td>10/10</td> </tr> <tr> <td>Model || CSP</td> <td>72.24 (3.45)</td> <td>NA</td> <td>NA</td> <td>57.89 (2.85)</td> <td>NA</td> <td>NA</td> <td>25.27(8.55)</td> <td>NA</td> <td>NA</td> </tr> </tbody></table>
Table 1
table_1
D16-1045
8
emnlp2016
Synthetic Data. Table 1 presents our results. In all three setups an SWVP algorithm is superior. Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))). In all setups SWVP outperforms CSP in terms of averaged performance (except for B-WMR for (simple(+), learnable(+))). Moreover, the weighted models are more stable than CSP, as indicated by the lower standard deviation of their accuracy scores. Finally, for the more simple and learnable datasets the SWVP models outperform CSP in the majority of cases (7-10/10).
[2, 1, 1, 1, 1, 1, 1]
['Synthetic Data.', 'Table 1 presents our results.', 'In all three setups an SWVP algorithm is superior.', 'Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))).', 'In all setups SWVP outperforms CSP in terms of averaged performance (except for B-WMR for (simple(+), learnable(+))).', 'Moreover, the weighted models are more stable than CSP, as indicated by the lower standard deviation of their accuracy scores.', 'Finally, for the more simple and learnable datasets the SWVP models outperform CSP in the majority of cases (7-10/10).']
[None, None, ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP'], ['CSP'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP']]
1
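Acc. (std) and # Wins in the record above summarize accuracies over 10 test sets; the per-set numbers are not part of the record, so the sketch below uses hypothetical values purely to illustrate how such a summary could be computed:

import statistics

# Hypothetical per-test-set accuracies (10 synthetic test sets) for one SWVP variant and for CSP.
swvp_acc = [76.1, 74.9, 73.2, 77.8, 75.0, 78.3, 71.9, 76.5, 74.4, 76.6]
csp_acc  = [73.0, 71.5, 70.2, 74.9, 72.1, 75.8, 68.3, 72.0, 70.0, 74.6]

mean_acc = statistics.mean(swvp_acc)
std_acc = statistics.stdev(swvp_acc)   # whether the paper uses sample or population std is an assumption
wins = sum(s > c for s, c in zip(swvp_acc, csp_acc))

print(f"Acc. (std): {mean_acc:.2f} ({std_acc:.2f}), # Wins: {wins}/10")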
D16-1048table_2
The performance of cross-lingually similarized Chinese dependency grammars with different configurations.
2
[['Grammar', 'baseline'], ['Grammar', 'proj : fixed'], ['Grammar', 'proj : proj'], ['Grammar', 'proj : nonproj'], ['Grammar', 'nonproj : fixed'], ['Grammar', 'nonproj : proj'], ['Grammar', 'nonproj : nonproj']]
1
[['Similarity (%)'], ['Dep. P (%)'], ['Ada. P (%)'], ['BLEU-4 (%)']]
[['34.2', '84.5', '84.5', '24.6'], ['46.3', '54.1', '82.3', '25.8 (+1.2)'], ['63.2', '72.2', '84.6', '26.1 (+1.5)'], ['64.3', '74.6', '84.7', '26.2 (+1.6)'], ['48.4', '56.1', '82.6', '20.1 (−4.5)'], ['63.6', '71.4', '84.4', '22.9 (−1.7)'], ['64.1', '73.9', '84.9', '20.7 (−3.9)']]
column
['Similarity (%)', 'Dep. P (%)', 'Ada. P (%)', 'BLEU-4 (%)']
['Grammar']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Similarity (%)</th> <th>Dep. P (%)</th> <th>Ada. P (%)</th> <th>BLEU-4 (%)</th> </tr> </thead> <tbody> <tr> <td>Grammar || baseline</td> <td>34.2</td> <td>84.5</td> <td>84.5</td> <td>24.6</td> </tr> <tr> <td>Grammar || proj : fixed</td> <td>46.3</td> <td>54.1</td> <td>82.3</td> <td>25.8 (+1.2)</td> </tr> <tr> <td>Grammar || proj : proj</td> <td>63.2</td> <td>72.2</td> <td>84.6</td> <td>26.1 (+1.5)</td> </tr> <tr> <td>Grammar || proj : nonproj</td> <td>64.3</td> <td>74.6</td> <td>84.7</td> <td>26.2 (+1.6)</td> </tr> <tr> <td>Grammar || nonproj : fixed</td> <td>48.4</td> <td>56.1</td> <td>82.6</td> <td>20.1 (−4.5)</td> </tr> <tr> <td>Grammar || nonproj : proj</td> <td>63.6</td> <td>71.4</td> <td>84.4</td> <td>22.9 (−1.7)</td> </tr> <tr> <td>Grammar || nonproj : nonproj</td> <td>64.1</td> <td>73.9</td> <td>84.9</td> <td>20.7 (−3.9)</td> </tr> </tbody></table>
Table 2
table_2
D16-1048
8
emnlp2016
5.2.2 Selection of Searching Modes. With the hyper-parameters given by the development procedures, cross-lingual similarization is conducted on the whole FBIS dataset. All the searching mode configurations are tried and 6 pairs of grammars are generated. For each of the 6 Chinese dependency grammars, we also give the three indicators as described before. Table 2 shows that cross-lingual similarization results in grammars with much higher cross-lingual similarity, and the adaptive accuracies given by the adapted grammars approach those of the original grammars. It indicates that the proposed algorithm improves the cross-lingual similarity without losing syntactic knowledge. To determine the best searching mode for tree-based machine translation, we use the Chinese-English FBIS dataset as the small-scale bilingual corpus. A 4-gram language model is trained on the Xinhua portion of the Gigaword corpus with the SRILM toolkit (Stolcke and Andreas, 2002). For the analysis given by non-projective similarized grammars, the projective transformation should be conducted in order to produce projective dependency structures for rule extraction and translation decoding. In detail, the projective transformation first traverses the non-projective dependency structures just as they are projective, then adjusts the order of the nodes according to the traversed word sequences. We take NIST MT Evaluation testing set 2002 (NIST 02) as the development set, and use the case-sensitive BLEU (Papineni et al., 2002) to measure the translation accuracy. The last column of Table 2 shows the performance of the grammars on machine translation. The cross-lingually similarized grammars corresponding to the configurations with projective searching for Chinese always improve the translation performance, while non-projective grammars always hurt the performance. This can probably be attributed to the low performance of non-projective parsing as well as the inappropriateness of the simple projective transformation method. In the final application in machine translation, we adopted the similarized grammar corresponding to the configuration with projective searching on the source side and non-projective searching on the target side.
[0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 1, 1, 2, 0]
['5.2.2 Selection of Searching Modes.', 'With the hyper-parameters given by the development procedures, cross-lingual similarization is conducted on the whole FBIS dataset.', 'All the searching mode configurations are tried and 6 pairs of grammars are generated.', 'For each of the 6 Chinese dependency grammars, we also give the three indicators as described before.', 'Table 2 shows that cross-lingual similarization results in grammars with much higher cross-lingual similarity, and the adaptive accuracies given by the adapted grammars approach those of the original grammars.', 'It indicates that the proposed algorithm improves the cross-lingual similarity without losing syntactic knowledge.', 'To determine the best searching mode for tree-based machine translation, we use the Chinese-English FBIS dataset as the small-scale bilingual corpus.', 'A 4-gram language model is trained on the Xinhua portion of the Gigaword corpus with the SRILM toolkit (Stolcke and Andreas, 2002).', 'For the analysis given by non-projective similarized grammars, the projective transformation should be conducted in order to produce projective dependency structures for rule extraction and translation decoding.', 'In detail, the projective transformation first traverses the non-projective dependency structures just as they are projective, then adjusts the order of the nodes according to the traversed word sequences.', 'We take NIST MT Evaluation testing set 2002 (NIST 02) as the development set, and use the case-sensitive BLEU (Papineni et al., 2002) to measure the translation accuracy.', 'The last column of Table 2 shows the performance of the grammars on machine translation.', 'The cross-lingually similarized grammars corresponding to the configurations with projective searching for Chinese always improve the translation performance, while non-projective grammars always hurt the performance.', 'This can probably be attributed to the low performance of non-projective parsing as well as the inappropriateness of the simple projective transformation method.', 'In the final application in machine translation, we adopted the similarized grammar corresponding to the configuration with projective searching on the source side and non-projective searching on the target side.']
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
1
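The parenthesized values in the BLEU-4 column of the record above are differences from the baseline grammar; a small Python sketch recomputing them from the contents field:

# BLEU-4 scores copied from the record; deltas are recomputed relative to the baseline grammar.
bleu = {"baseline": 24.6, "proj : fixed": 25.8, "proj : proj": 26.1, "proj : nonproj": 26.2,
        "nonproj : fixed": 20.1, "nonproj : proj": 22.9, "nonproj : nonproj": 20.7}
base = bleu["baseline"]
for grammar, score in bleu.items():
    if grammar != "baseline":
        print(f"{grammar}: {score} ({score - base:+.1f})")   # e.g. "proj : fixed: 25.8 (+1.2)"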
D16-1048table_3
The performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with related work.
2
[['System', '(Liu et al. 2006)'], ['System', '(Chiang 2007)'], ['System', '(Xie et al. 2011)'], ['System', 'Original Grammar'], ['System', 'Similarized Grammar']]
1
[['NIST 04'], ['NIST 05']]
[['34.55', '31.94'], ['35.29', '33.22'], ['35.82', '33.62'], ['35.44', '33.08'], ['36.78', '35.12']]
column
['BLEU', 'BLEU']
['Original Grammar', 'Similarized Grammar']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NIST 04</th> <th>NIST 05</th> </tr> </thead> <tbody> <tr> <td>System || (Liu et al. 2006)</td> <td>34.55</td> <td>31.94</td> </tr> <tr> <td>System || (Chiang 2007)</td> <td>35.29</td> <td>33.22</td> </tr> <tr> <td>System || (Xie et al. 2011)</td> <td>35.82</td> <td>33.62</td> </tr> <tr> <td>System || Original Grammar</td> <td>35.44</td> <td>33.08</td> </tr> <tr> <td>System || Similarized Grammar</td> <td>36.78</td> <td>35.12</td> </tr> </tbody></table>
Table 3
table_3
D16-1048
8
emnlp2016
Table 3 shows the performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with previous work (Xie et al., 2011). We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007). The original grammar performs slightly worse than the previous work in dependency tree-based translation; this can be ascribed to the difference between the implementation of the original grammar and the dependency parser used in the previous work. However, the similarized grammar achieves a very significant improvement over the original grammar, and also significantly surpasses the previous work. Note that there is no other modification on the translation model besides the replacement of the source parser.
[1, 2, 1, 1, 2]
['Table 3 shows the performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with previous work (Xie et al., 2011).', 'We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007).', 'The original grammar performs slightly worse than the previous work in dependency tree-based translation; this can be ascribed to the difference between the implementation of the original grammar and the dependency parser used in the previous work.', 'However, the similarized grammar achieves a very significant improvement over the original grammar, and also significantly surpasses the previous work.', 'Note that there is no other modification on the translation model besides the replacement of the source parser.']
[['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], ['(Liu et al. 2006)', '(Chiang 2007)'], ['Original Grammar', '(Xie et al. 2011)'], ['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], None]
1
D16-1050table_1
BLEU scores on the NIST Chinese-English translation task. AVG = average BLEU scores on test sets. We highlight the best results in bold for each test set. “↑/⇑”: significantly better than Moses (p < 0.05/p < 0.01); “+/++”: significantly better than GroundHog (p < 0.05/p < 0.01);
2
[['System', 'Moses'], ['System', 'GroundHog'], ['System', 'VNMT w/o KL'], ['System', 'VNMT']]
1
[['MT05'], ['MT02'], ['MT03'], ['MT04'], ['MT06'], ['MT08'], ['AVG']]
[['33.68', '34.19', '34.39', '35.34', '29.20', '22.94', '31.21'], ['31.38', '33.32', '32.59', '35.05', '29.80', '22.82', '30.72'], ['31.40', '33.50', '32.92', '34.95', '28.74', '22.07', '30.44'], ['32.25', '34.50++', '33.78++', '36.72⇑++', '30.92⇑++', '24.41↑++', '32.07']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['VNMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT05</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT06</th> <th>MT08</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>System || Moses</td> <td>33.68</td> <td>34.19</td> <td>34.39</td> <td>35.34</td> <td>29.20</td> <td>22.94</td> <td>31.21</td> </tr> <tr> <td>System || GroundHog</td> <td>31.38</td> <td>33.32</td> <td>32.59</td> <td>35.05</td> <td>29.80</td> <td>22.82</td> <td>30.72</td> </tr> <tr> <td>System || VNMT w/o KL</td> <td>31.40</td> <td>33.50</td> <td>32.92</td> <td>34.95</td> <td>28.74</td> <td>22.07</td> <td>30.44</td> </tr> <tr> <td>System || VNMT</td> <td>32.25</td> <td>34.50++</td> <td>33.78++</td> <td>36.72⇑++</td> <td>30.92⇑++</td> <td>24.41↑++</td> <td>32.07</td> </tr> </tbody></table>
Table 1
table_1
D16-1050
6
emnlp2016
Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks. Clearly, VNMT significantly improves translation quality in terms of BLEU in most cases, and obtains the best average results, gaining 0.86 and 1.35 BLEU points over Moses and GroundHog, respectively. Besides, without the KL objective, VNMT w/o KL obtains even worse results than GroundHog. These results indicate the following two points: 1) explicitly modeling underlying semantics by a latent variable indeed benefits neural machine translation, and 2) the improvements of our model are not from enlarging the network.
[1, 1, 1, 2]
['Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks.', 'Clearly, VNMT significantly improves translation quality in terms of BLEU in most cases, and obtains the best average results, gaining 0.86 and 1.35 BLEU points over Moses and GroundHog, respectively.', 'Besides, without the KL objective, VNMT w/o KL obtains even worse results than GroundHog.', 'These results indicate the following two points: 1) explicitly modeling underlying semantics by a latent variable indeed benefits neural machine translation, and 2) the improvements of our model are not from enlarging the network.']
[None, ['VNMT', 'Moses', 'GroundHog'], ['VNMT w/o KL', 'GroundHog'], None]
1
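The AVG column in the record above is described as the average BLEU over the test sets; it is reproduced exactly by averaging the five sets other than MT05, which suggests (an assumption here) that MT05 serves as the development set:

# Average BLEU over MT02, MT03, MT04, MT06 and MT08, as copied from the record.
scores = {"Moses":       [34.19, 34.39, 35.34, 29.20, 22.94],
          "GroundHog":   [33.32, 32.59, 35.05, 29.80, 22.82],
          "VNMT w/o KL": [33.50, 32.92, 34.95, 28.74, 22.07],
          "VNMT":        [34.50, 33.78, 36.72, 30.92, 24.41]}
for system, bleu in scores.items():
    print(f"{system}: AVG = {sum(bleu) / len(bleu):.2f}")   # 31.21, 30.72, 30.44, 32.07 as in the table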
D16-1051table_1
Alignment quality results for IBM2-HMM (2H) and its convex relaxation (2HC) using either HMM-style dynamic programming or “Joint” decoding. The first and last columns above are for the GIZA++ HMM initialized either with IBM Model 1 or Model 1 followed by Model 2. FA above refers to the improved IBM Model 2 (FastAlign) of (Dyer et al., 2013).
2
[['Iteration', '1'], ['Iteration', '2'], ['Iteration', '3'], ['Iteration', '4'], ['Iteration', '5'], ['Iteration', '6'], ['Iteration', '7'], ['Iteration', '8'], ['Iteration', '9'], ['Iteration', '10']]
5
[['Training', '2H', 'Decoding', 'HMM', 'AER'], ['Training', '2H', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2H', 'Decoding', 'Joint', 'AER'], ['Training', '2H', 'Decoding', 'Joint', 'F-Measure'], ['Training', '2HC', 'Decoding', 'HMM', 'AER'], ['Training', '2HC', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2HC', 'Decoding', 'Joint', 'AER'], ['Training', '2HC', 'Decoding', 'Joint', 'F-Measure'], ['Training', 'FA', 'Decoding', 'IBM2', 'AER'], ['Training', 'FA', 'Decoding', 'IBM2', 'F-Measure'], ['Training', '1-2H', 'Decoding', 'HMM', 'AER'], ['Training', '1-2H', 'Decoding', 'HMM', 'F-Measure']]
[['0.0956', '0.7829', '0.1076', '0.7797', '0.1538', '0.7199', '0.1814', '0.6914', '0.5406', '0.2951', '0.1761', '0.7219'], ['0.0884', '0.7854', '0.0943', '0.7805', '0.1093', '0.7594', '0.1343', '0.733', '0.1625', '0.7111', '0.0873', '0.8039'], ['0.0844', '0.7899', '0.0916', '0.7806', '0.1023', '0.7651', '0.1234', '0.7427', '0.1254', '0.7484', '0.0786', '0.8112'], ['0.0828', '0.7908', '0.0904', '0.7813', '0.0996', '0.7668', '0.1204', '0.7457', '0.1169', '0.7589', '0.0753', '0.8094'], ['0.0808', '0.7928', '0.0907', '0.7806', '0.0992', '0.7673', '0.1197', '0.7461', '0.1131', '0.7624', '0.0737', '0.8058'], ['0.0804', '0.7928', '0.0906', '0.7807', '0.0989', '0.7678', '0.1199', '0.7457', '0.1128', '0.763', '0.0719', '0.8056'], ['0.0795', '0.7939', '0.091', '0.7817', '0.0986', '0.7679', '0.1197', '0.7457', '0.1116', '0.7633', '0.0717', '0.8046'], ['0.0789', '0.7942', '0.09', '0.7814', '0.0988', '0.7679', '0.1195', '0.7458', '0.1086', '0.7658', '0.0725', '0.8024'], ['0.0793', '0.7937', '0.0904', '0.7813', '0.0986', '0.768', '0.1195', '0.7457', '0.1076', '0.7672', '0.0738', '0.8007'], ['0.0793', '0.7927', '0.0902', '0.7816', '0.0986', '0.768', '0.1195', '0.7457', '0.1072', '0.7679', '0.0734', '0.801']]
column
['AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure']
['HMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training || 15210H || Decoding || HMM || AER</th> <th>Training || 15210H || Decoding || HMM || F-Measure</th> <th>Training || 15210H || Decoding || Joint || AER</th> <th>Training || 15210H || Decoding || Joint || F-Measure</th> <th>Training || 210HC || Decoding || HMM || AER</th> <th>Training || 210HC || Decoding || HMM || F-Measure</th> <th>Training || 210HC || Decoding || Joint || AER</th> <th>Training || 210HC || Decoding || Joint || F-Measure</th> <th>Training || FA10 || Decoding || IBM2 || AER</th> <th>Training || FA10 || Decoding || IBM2 || F-Measure</th> <th>Training || 1525H10 || Decoding || HMM || AER</th> <th>Training || 1525H10 || Decoding || HMM || F-Measure</th> </tr> </thead> <tbody> <tr> <td>Iteration || 1</td> <td>0.0956</td> <td>0.7829</td> <td>0.1076</td> <td>0.7797</td> <td>0.1538</td> <td>0.7199</td> <td>0.1814</td> <td>0.6914</td> <td>0.5406</td> <td>0.2951</td> <td>0.1761</td> <td>0.7219</td> </tr> <tr> <td>Iteration || 2</td> <td>0.0884</td> <td>0.7854</td> <td>0.0943</td> <td>0.7805</td> <td>0.1093</td> <td>0.7594</td> <td>0.1343</td> <td>0.733</td> <td>0.1625</td> <td>0.7111</td> <td>0.0873</td> <td>0.8039</td> </tr> <tr> <td>Iteration || 3</td> <td>0.0844</td> <td>0.7899</td> <td>0.0916</td> <td>0.7806</td> <td>0.1023</td> <td>0.7651</td> <td>0.1234</td> <td>0.7427</td> <td>0.1254</td> <td>0.7484</td> <td>0.0786</td> <td>0.8112</td> </tr> <tr> <td>Iteration || 4</td> <td>0.0828</td> <td>0.7908</td> <td>0.0904</td> <td>0.7813</td> <td>0.0996</td> <td>0.7668</td> <td>0.1204</td> <td>0.7457</td> <td>0.1169</td> <td>0.7589</td> <td>0.0753</td> <td>0.8094</td> </tr> <tr> <td>Iteration || 5</td> <td>0.0808</td> <td>0.7928</td> <td>0.0907</td> <td>0.7806</td> <td>0.0992</td> <td>0.7673</td> <td>0.1197</td> <td>0.7461</td> <td>0.1131</td> <td>0.7624</td> <td>0.0737</td> <td>0.8058</td> </tr> <tr> <td>Iteration || 6</td> <td>0.0804</td> <td>0.7928</td> <td>0.0906</td> <td>0.7807</td> <td>0.0989</td> <td>0.7678</td> <td>0.1199</td> <td>0.7457</td> <td>0.1128</td> <td>0.763</td> <td>0.0719</td> <td>0.8056</td> </tr> <tr> <td>Iteration || 7</td> <td>0.0795</td> <td>0.7939</td> <td>0.091</td> <td>0.7817</td> <td>0.0986</td> <td>0.7679</td> <td>0.1197</td> <td>0.7457</td> <td>0.1116</td> <td>0.7633</td> <td>0.0717</td> <td>0.8046</td> </tr> <tr> <td>Iteration || 8</td> <td>0.0789</td> <td>0.7942</td> <td>0.09</td> <td>0.7814</td> <td>0.0988</td> <td>0.7679</td> <td>0.1195</td> <td>0.7458</td> <td>0.1086</td> <td>0.7658</td> <td>0.0725</td> <td>0.8024</td> </tr> <tr> <td>Iteration || 9</td> <td>0.0793</td> <td>0.7937</td> <td>0.0904</td> <td>0.7813</td> <td>0.0986</td> <td>0.768</td> <td>0.1195</td> <td>0.7457</td> <td>0.1076</td> <td>0.7672</td> <td>0.0738</td> <td>0.8007</td> </tr> <tr> <td>Iteration || 10</td> <td>0.0793</td> <td>0.7927</td> <td>0.0902</td> <td>0.7816</td> <td>0.0986</td> <td>0.768</td> <td>0.1195</td> <td>0.7457</td> <td>0.1072</td> <td>0.7679</td> <td>0.0734</td> <td>0.801</td> </tr> </tbody></table>
Table 1
table_1
D16-1051
9
emnlp2016
Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data. We present alignment quality scores for the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding. First, we note that in deciding the decoding style for IBM2-HMM, the HMM method is better than the Joint method. We expected this type of performance since HMM decoding introduces positional dependence among the entire set of words in the sentence, which is shown to be a good modeling assumption (Vogel et al., 1996). From the results in Table 1 we see that the HMM outperforms all other models, including IBM2-HMM and its convex relaxation. However, IBM2-HMM is not far in AER performance from the HMM and both it and its relaxation do better than FastAlign or IBM Model 3 (the results for IBM Model 3 are not presented; a one-directional English-French run of 1^5 2^5 3^15 gave AER and F-Measure numbers of 0.1768 and 0.6588, respectively, and this was behind both the IBM Model 2 FastAlign and our models).
[1, 2, 1, 2, 1, 1]
['Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data.', 'We present alignment quality scores for the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding.', 'First, we note that in deciding the decoding style for IBM2-HMM, the HMM method is better than the Joint method.', 'We expected this type of performance since HMM decoding introduces positional dependence among the entire set of words in the sentence, which is shown to be a good modeling assumption (Vogel et al., 1996).', 'From the results in Table 1 we see that the HMM outperforms all other models, including IBM2-HMM and its convex relaxation.', 'However, IBM2-HMM is not far in AER performance from the HMM and both it and its relaxation do better than FastAlign.']
[None, ['Training', 'HMM', 'Joint', 'Decoding'], ['2H', 'HMM', 'Joint'], ['HMM'], ['HMM', '2H', '2HC'], ['2H', 'IBM2', 'AER', 'HMM', 'F-Measure', 'FA']]
1
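AER and F-Measure in the record above are the usual word-alignment metrics over predicted links A, sure gold links S and possible gold links P (with S a subset of P); a minimal sketch using the standard definitions, which may differ in detail (e.g., F-Measure weighting) from the paper's exact setup:

def alignment_scores(A, S, P):
    # A, S, P are sets of (source_position, target_position) links.
    precision = len(A & P) / len(A)
    recall = len(A & S) / len(S)
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    f_measure = 2 * precision * recall / (precision + recall)
    return aer, f_measure

# Toy example: three predicted links, two sure links, one extra possible link.
S = {(1, 1), (2, 3)}
P = S | {(3, 4)}
A = {(1, 1), (2, 3), (3, 5)}
print(tuple(round(x, 3) for x in alignment_scores(A, S, P)))   # (0.2, 0.8)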
D16-1062table_3
Comparison of Fleiss’ κ scores with scores from SNLI quality control sentence pairs.
2
[['Fleiss’κ', 'Contradiction'], ['Fleiss’κ', 'Entailment'], ['Fleiss’κ', 'Neutral'], ['Fleiss’κ', 'Overall']]
1
[['4GS'], ['5GS'], ['Bowman et al. 2015']]
[['0.37', '0.59', '0.77'], ['0.48', '0.63', '0.72'], ['0.41', '0.54', '0.6'], ['0.43', '0.6', '0.7']]
column
['Fleiss’κ', 'Fleiss’κ', 'Fleiss’κ']
['4GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4GS</th> <th>5GS</th> <th>Bowman et al. 2015</th> </tr> </thead> <tbody> <tr> <td>Fleiss’κ || Contradiction</td> <td>0.37</td> <td>0.59</td> <td>0.77</td> </tr> <tr> <td>Fleiss’κ || Entailment</td> <td>0.48</td> <td>0.63</td> <td>0.72</td> </tr> <tr> <td>Fleiss’κ || Neutral</td> <td>0.41</td> <td>0.54</td> <td>0.6</td> </tr> <tr> <td>Fleiss’κ || Overall</td> <td>0.43</td> <td>0.6</td> <td>0.7</td> </tr> </tbody></table>
Table 3
table_3
D16-1062
6
emnlp2016
Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015). The decrease in agreement is particularly large with regard to contradiction. This could occur for a number of reasons. Recognizing entailment is an inherently difficult task, and classifying a correct label, particularly for contradiction and neutral, can be difficult due to an individual’s interpretation of the sentences and assumptions that an individual makes about the key facts of each sentence (e.g. coreference). It may also be the case that the individuals tasked with creating the sentence pairs on AMT created sentences that appeared to contradict a premise text, but can be interpreted differently given a different context.
[1, 1, 2, 2, 2]
['Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015).', 'The decrease in agreement is particularly large with regard to contradiction.', 'This could occur for a number of reasons.', 'Recognizing entailment is an inherently difficult task, and classifying a correct label, particularly for contradiction and neutral, can be difficult due to an individual’s interpretation of the sentences and assumptions that an individual makes about the key facts of each sentence (e.g. coreference).', 'It may also be the case that the individuals tasked with creating the sentence pairs on AMT created sentences that appeared to contradict a premise text, but can be interpreted differently given a different context.']
[['Fleiss’κ', '4GS', 'Bowman et al. 2015'], ['Contradiction'], None, None, None]
1
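Fleiss' κ, the agreement statistic reported in the record above, is computed from an items-by-categories count matrix; a minimal sketch with toy counts, not the SNLI annotations:

def fleiss_kappa(counts):
    # counts[i][j] = number of annotators assigning item i to category j; every row sums to n raters.
    n_items, n_cats = len(counts), len(counts[0])
    n_raters = sum(counts[0])
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters) for j in range(n_cats)]
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(p_i) / n_items
    p_exp = sum(p * p for p in p_j)
    return (p_bar - p_exp) / (1 - p_exp)

# Toy example: 4 sentence pairs, 5 annotators, categories (entailment, neutral, contradiction).
print(round(fleiss_kappa([[5, 0, 0], [3, 2, 0], [0, 4, 1], [1, 1, 3]]), 2))   # 0.33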
D16-1062table_5
Theta scores and area under curve percentiles for LSTM trained on SNLI and tested on GSIRT. We also report the accuracy for the same LSTM tested on all SNLI quality control items (see Section 3.1). All performance is based on binary classification for each label.
4
[['Item', 'Set', '5GS', 'Entailment'], ['Item', 'Set', '5GS', 'Contradiction'], ['Item', 'Set', '5GS', 'Neutral'], ['Item', 'Set', '4GS', 'Contradiction'], ['Item', 'Set', '4GS', 'Neutral']]
1
[['Theta Score'], ['Percentile'], ['Test Acc.']]
[['-0.133', '44.83%', '96.5%'], ['1.539', '93.82%', '87.9%'], ['0.423', '66.28%', '88%'], ['1.777', '96.25%', '78.9%'], ['0.441', '67%', '83%']]
column
['Theta Score', 'Percentile', 'Test Acc.']
['4GS', '5GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Theta Score</th> <th>Percentile</th> <th>Test Acc.</th> </tr> </thead> <tbody> <tr> <td>Item || Set || 5GS || Entailment</td> <td>-0.133</td> <td>44.83%</td> <td>96.5%</td> </tr> <tr> <td>Item || Set || 5GS || Contradiction</td> <td>1.539</td> <td>93.82%</td> <td>87.9%</td> </tr> <tr> <td>Item || Set || 5GS || Neutral</td> <td>0.423</td> <td>66.28%</td> <td>88%</td> </tr> <tr> <td>Item || Set || 4GS || Contradiction</td> <td>1.777</td> <td>96.25%</td> <td>78.9%</td> </tr> <tr> <td>Item || Set || 4GS || Neutral</td> <td>0.441</td> <td>67%</td> <td>83%</td> </tr> </tbody></table>
Table 5
table_5
D16-1062
8
emnlp2016
The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items, and around the average for entailment and neutral items. For both the neutral and contradiction items, the theta scores are similar across the 4GS and 5GS sets, whereas the accuracy of the more difficult 4GS items is consistently lower. This shows the advantage of IRT in accounting for item characteristics in its ability estimates. A similar theta score across sets indicates that we can measure the “ability level” regardless of whether the test set is easy or hard. Theta score is a consistent measurement, compared to accuracy, which varies with the difficulty of the dataset.
[1, 1, 2, 2, 2]
['The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items, and around the average for entailment and neutral items.', 'For both the neutral and contradiction items, the theta scores are similar across the 4GS and 5GS sets, whereas the accuracy of the more difficult 4GS items is consistently lower.', 'This shows the advantage of IRT in accounting for item characteristics in its ability estimates.', 'A similar theta score across sets indicates that we can measure the “ability level” regardless of whether the test set is easy or hard.', 'Theta score is a consistent measurement, compared to accuracy, which varies with the difficulty of the dataset.']
[['Theta Score', 'Contradiction', 'Entailment', 'Neutral'], ['Neutral', 'Contradiction', 'Theta Score', '4GS', '5GS', 'Test Acc.'], None, ['Theta Score'], ['Theta Score', 'Test Acc.']]
1
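The Percentile column in the record above is consistent, up to rounding of the reported theta scores, with mapping theta through the standard normal CDF, a common reading of IRT ability estimates; a quick check:

from statistics import NormalDist   # Python 3.8+

thetas = {"5GS entailment": -0.133, "5GS contradiction": 1.539, "5GS neutral": 0.423,
          "4GS contradiction": 1.777, "4GS neutral": 0.441}
for item_set, theta in thetas.items():
    # Approximately reproduces 44.83%, 93.82%, 66.28%, 96.25% and 67% from the table.
    print(f"{item_set}: {100 * NormalDist().cdf(theta):.2f}%")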
D16-1063table_2
Performance of different rho functions on Text8 dataset with 17M tokens.
2
[['Task', 'Similarity'], ['Task', 'Analogy']]
2
[['Robi', '-'], ['ρ0', 'off'], ['ρ0', 'on'], ['ρ1', 'off'], ['ρ1', 'on'], ['ρ2', 'off'], ['ρ2', 'on'], ['ρ3', 'off'], ['ρ3', 'on']]
[['41.2', '69.0', '71.0', '66.7', '70.4', '66.8', '70.8', '68.1', '68.0'], ['22.7', '24.9', '31.9', '34.3', '44.5', '32.3', '40.4', '33.6', '42.9']]
column
['Robi', 'ρ0', 'ρ0', 'ρ1', 'ρ1', 'ρ2', 'ρ2', 'ρ3', 'ρ3']
['Similarity', 'Analogy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Robi || -</th> <th>ρ0 || off</th> <th>ρ0 || on</th> <th>ρ1 || off</th> <th>ρ1 || on</th> <th>ρ2 || off</th> <th>ρ2 || on</th> <th>ρ3 || off</th> <th>ρ3 || on</th> </tr> </thead> <tbody> <tr> <td>Task || Similarity</td> <td>41.2</td> <td>69.0</td> <td>71.0</td> <td>66.7</td> <td>70.4</td> <td>66.8</td> <td>70.8</td> <td>68.1</td> <td>68.0</td> </tr> <tr> <td>Task || Analogy</td> <td>22.7</td> <td>24.9</td> <td>31.9</td> <td>34.3</td> <td>44.5</td> <td>32.3</td> <td>40.4</td> <td>33.6</td> <td>42.9</td> </tr> </tbody></table>
Table 2
table_2
D16-1063
7
emnlp2016
It can be seen from Table 2 that adding the weight r_{w,c} improves performance in all the cases, especially on the word analogy task. Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall. Given these observations, which are consistent with the results on large-scale datasets, in the experiments that follow we only report WordRank with the best configuration, i.e., using ρ1 with the weight r_{w,c} as defined in (4).
[1, 1, 2]
['It can be seen from Table 2 that adding the weight r_{w,c} improves performance in all the cases, especially on the word analogy task.', 'Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall.', 'Given these observations, which are consistent with the results on large-scale datasets, in the experiments that follow we only report WordRank with the best configuration, i.e., using ρ1 with the weight r_{w,c} as defined in (4).']
[['Analogy', 'Similarity'], ['ρ0', 'ρ1', 'Similarity', 'Analogy'], ['ρ1']]
1
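As background for the analogy numbers in the record above: word-analogy benchmarks are typically scored with the 3CosAdd rule over normalized embeddings (a standard protocol; whether this paper deviates from it is not stated in the record). A toy sketch:

import numpy as np

def analogy(emb, a, a_star, b):
    # Answer "a : a_star :: b : ?" with the nearest neighbour of (a_star - a + b), excluding the query words.
    query = emb[a_star] - emb[a] + emb[b]
    query = query / np.linalg.norm(query)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in {a, a_star, b}:
            continue
        sim = float(vec @ query) / np.linalg.norm(vec)   # cosine similarity with the query direction
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Toy 2-d embeddings purely for illustration.
emb = {"king": np.array([1.0, 1.0]), "man": np.array([1.0, 0.0]),
       "woman": np.array([0.9, 0.2]), "queen": np.array([0.9, 1.2]), "apple": np.array([0.1, -1.0])}
print(analogy(emb, "man", "king", "woman"))   # "queen"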
D16-1065table_3
Comparison between our joint approaches and the pipelined counterparts.
4
[['Dataset', 'LDC2013E117', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2013E117', 'System', 'System 1'], ['Dataset', 'LDC2013E117', 'System', 'System 2'], ['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'System 1'], ['Dataset', 'LDC2014T12', 'System', 'System 2']]
1
[['P'], ['R'], ['F1']]
[['0.67', '0.58', '0.62'], ['0.72', '0.65', '0.68'], ['0.73', '0.69', '0.71'], ['0.68', '0.59', '0.63'], ['0.74', '0.63', '0.68'], ['0.73', '0.68', '0.71']]
column
['P', 'R', 'F1']
['System 1', 'System 2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || JAMR(fixed)</td> <td>0.67</td> <td>0.58</td> <td>0.62</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || System 1</td> <td>0.72</td> <td>0.65</td> <td>0.68</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || System 2</td> <td>0.73</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || JAMR(fixed)</td> <td>0.68</td> <td>0.59</td> <td>0.63</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || System 1</td> <td>0.74</td> <td>0.63</td> <td>0.68</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || System 2</td> <td>0.73</td> <td>0.68</td> <td>0.71</td> </tr> </tbody></table>
Table 3
table_3
D16-1065
8
emnlp2016
4.4 Joint Model vs. Pipelined Model. In this section, we compare the overall performance of our joint model to the pipelined model, JAMR. To give a fair comparison, we first implemented system 1 using only the same features (i.e., features 1-4 in Table 1) as JAMR for concept fragments. Table 3 gives the results on the two datasets. In terms of F-measure, we gain a 6% absolute improvement, and a 5% absolute improvement over the results of JAMR on the two different experimental setups, respectively. Next, we implemented system 2 by using more lexical features to capture the association between the concept and the context (i.e., features 5-16 in Table 1). Intuitively, these lexical contextual features should be helpful in identifying concepts in the parsing process. As expected, the results in Table 3 show that we gain a 3% improvement on the two different datasets, respectively, by adding only some additional lexical features.
[2, 2, 2, 1, 1, 2, 2, 1]
['4.4 Joint Model vs. Pipelined Model.', 'In this section, we compare the overall performance of our joint model to the pipelined model, JAMR.', 'To give a fair comparison, we first implemented system 1 using only the same features (i.e., features 1-4 in Table 1) as JAMR for concept fragments.', 'Table 3 gives the results on the two datasets.', 'In terms of F-measure, we gain a 6% absolute improvement, and a 5% absolute improvement over the results of JAMR on the two different experimental setups, respectively.', 'Next, we implemented system 2 by using more lexical features to capture the association between the concept and the context (i.e., features 5-16 in Table 1).', 'Intuitively, these lexical contextual features should be helpful in identifying concepts in the parsing process.', 'As expected, the results in Table 3 show that we gain a 3% improvement on the two different datasets, respectively, by adding only some additional lexical features.']
[None, None, None, None, ['F1', 'System 1', 'JAMR (fixed)'], ['System 2'], None, ['System 2']]
1
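The P/R/F1 columns in this and the following two D16-1065 records follow the usual harmonic-mean relation; a one-line consistency check against two rows of the table above:

def f1(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

print(round(f1(0.73, 0.69), 2))   # 0.71, the System 2 row on LDC2013E117
print(round(f1(0.72, 0.65), 2))   # 0.68, the System 1 row on LDC2013E117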
D16-1065table_4
Final results of various methods.
4
[['Dataset', 'LDC2013E117', 'System', 'CAMR*'], ['Dataset', 'LDC2013E117', 'System', 'CAMR'], ['Dataset', 'LDC2013E117', 'System', 'Our approach'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'CCG-based'], ['Dataset', 'LDC2014T12', 'System', 'Our approach']]
1
[['P'], ['R'], ['F1']]
[['.69', '.67', '.68'], ['.71', '.69', '.70'], ['.73', '.69', '.71'], ['.70', '.66', '.68'], ['.72', '.67', '.70'], ['.67', '.66', '.66'], ['.73', '.68', '.71']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || CAMR*</td> <td>.69</td> <td>.67</td> <td>.68</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || CAMR</td> <td>.71</td> <td>.69</td> <td>.70</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || Our approach</td> <td>.73</td> <td>.69</td> <td>.71</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR*</td> <td>.70</td> <td>.66</td> <td>.68</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR</td> <td>.72</td> <td>.67</td> <td>.70</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CCG-based</td> <td>.67</td> <td>.66</td> <td>.66</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || Our approach</td> <td>.73</td> <td>.68</td> <td>.71</td> </tr> </tbody></table>
Table 4
table_4
D16-1065
8
emnlp2016
We give a comparison between our approach and other state-of-the-art AMR parsers, including CCGbased parser (Artzi et al., 2015) and dependencybased parser (Wang et al., 2015b). For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR. Compared to the latter, the former denotes the system that does not use the extended features generated from the semantic role labeling system, word sense disambiguation system and so on, which is directly comparable to our system. From Table 4 we can see that our parser achieves better performance than other approaches, even without utilizing any external semantic resources.
[2, 2, 2, 1]
['We give a comparison between our approach and other state-of-the-art AMR parsers, including CCG-based parser (Artzi et al., 2015) and dependency-based parser (Wang et al., 2015b).', 'For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR.', 'Compared to the latter, the former denotes the system that does not use the extended features generated from the semantic role labeling system, word sense disambiguation system and so on, which is directly comparable to our system.', 'From Table 4 we can see that our parser achieves better performance than other approaches, even without utilizing any external semantic resources.']
[None, ['CAMR*', 'CAMR'], None, ['Our approach', 'System']]
1
D16-1065table_5
Final results on the full LDC2014T12 dataset.
4
[['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'SMBT-based'], ['Dataset', 'LDC2014T12', 'System', 'Our approach']]
1
[['P'], ['R'], ['F1']]
[['.64', '.53', '.58'], ['.68', '.60', '.64'], ['.70', '.62', '.66'], ['-', '-', '.67'], ['.70', '.62', '.66']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2014T12 || System || JAMR (fixed)</td> <td>.64</td> <td>.53</td> <td>.58</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR*</td> <td>.68</td> <td>.60</td> <td>.64</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR</td> <td>.70</td> <td>.62</td> <td>.66</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || SMBT-based</td> <td>-</td> <td>-</td> <td>.67</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || Our approach</td> <td>.70</td> <td>.62</td> <td>.66</td> </tr> </tbody></table>
Table 5
table_5
D16-1065
8
emnlp2016
We also evaluate our parser on the full LDC2014T12 dataset. We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing. For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust et al., 2015), which are also trained on the same dataset. The results in Table 5 show that our approach outperforms CAMR*, and obtains comparable performance with CAMR. However, our approach achieves slightly lower performance, compared to the SMBT-based parser, which adds data and features drawn from various external semantic resources.
[2, 2, 2, 1, 1]
['We also evaluate our parser on the full LDC2014T12 dataset.', 'We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing.', 'For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust et al., 2015), which are also trained on the same dataset.', 'The results in Table 5 show that our approach outperforms CAMR*, and obtains comparable performance with CAMR.', 'However, our approach achieves slightly lower performance, compared to the SMBT-based parser, which adds data and features drawn from various external semantic resources.']
[['LDC2014T12'], None, ['JAMR (fixed)', 'CAMR*', 'CAMR', 'SMBT-based', 'Our approach'], ['Our approach', 'CAMR*', 'CAMR'], ['Our approach', 'SMBT-based']]
1
D16-1068table_2
Per language UAS for the fully supervised setup. Model names are as in Table 1, ‘e’ stands for ensemble. Best results for each language and parsing model order are highlighted in bold.
2
[['language', 'swedish'], ['language', 'bulgarian'], ['language', 'chinese'], ['language', 'czech'], ['language', 'dutch'], ['language', 'japanese'], ['language', 'catalan'], ['language', 'english']]
2
[['First Order', 'TurboParser'], ['First Order', 'BGI-PP'], ['First Order', 'BGI-PP+i+b'], ['First Order', 'BGI-PP+i+b+e'], ['Second Order', 'TurboParser'], ['Second Order', 'BGI-PP'], ['Second Order', 'BGI-PP+i+b'], ['Second Order', 'BGI-PP+i+b+e']]
[['87.12', '86.35', '86.93', '87.12', '88.65', '86.14', '87.85', '89.29'], ['90.66', '90.22', '90.42', '90.66', '92.43', '89.73', '91.50', '92.58'], ['84.88', '83.89', '84.17', '84.17', '86.53', '81.33', '85.18', '86.59'], ['83.53', '83.46', '83.44', '83.44', '86.35', '84.91', '86.26', '87.50'], ['88.48', '88.56', '88.43', '88.43', '91.30', '89.64', '90.49', '91.34'], ['93.03', '93.18', '93.27', '93.27', '93.83', '93.78', '94.01', '94.01'], ['88.94', '88.50', '88.67', '88.93', '92.25', '89.3', '90.46', '92.24'], ['87.18', '86.94', '86.84', '87.18', '90.70', '86.52', '88.24', '90.66']]
column
['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS']
['BGI-PP+i+b', 'BGI-PP+i+b+e']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First Order || TurboParser</th> <th>First Order || BGI-PP</th> <th>First Order || BGI-PP+i+b</th> <th>First Order || BGI-PP+i+b+e</th> <th>Second Order || TurboParser</th> <th>Second Order || BGI-PP</th> <th>Second Order || BGI-PP+i+b</th> <th>Second Order || BGI-PP+i+b+e</th> </tr> </thead> <tbody> <tr> <td>language || swedish</td> <td>87.12</td> <td>86.35</td> <td>86.93</td> <td>87.12</td> <td>88.65</td> <td>86.14</td> <td>87.85</td> <td>89.29</td> </tr> <tr> <td>language || bulgarian</td> <td>90.66</td> <td>90.22</td> <td>90.42</td> <td>90.66</td> <td>92.43</td> <td>89.73</td> <td>91.50</td> <td>92.58</td> </tr> <tr> <td>language || chinese</td> <td>84.88</td> <td>83.89</td> <td>84.17</td> <td>84.17</td> <td>86.53</td> <td>81.33</td> <td>85.18</td> <td>86.59</td> </tr> <tr> <td>language || czech</td> <td>83.53</td> <td>83.46</td> <td>83.44</td> <td>83.44</td> <td>86.35</td> <td>84.91</td> <td>86.26</td> <td>87.50</td> </tr> <tr> <td>language || dutch</td> <td>88.48</td> <td>88.56</td> <td>88.43</td> <td>88.43</td> <td>91.30</td> <td>89.64</td> <td>90.49</td> <td>91.34</td> </tr> <tr> <td>language || japanese</td> <td>93.03</td> <td>93.18</td> <td>93.27</td> <td>93.27</td> <td>93.83</td> <td>93.78</td> <td>94.01</td> <td>94.01</td> </tr> <tr> <td>language || catalan</td> <td>88.94</td> <td>88.50</td> <td>88.67</td> <td>88.93</td> <td>92.25</td> <td>89.3</td> <td>90.46</td> <td>92.24</td> </tr> <tr> <td>language || english</td> <td>87.18</td> <td>86.94</td> <td>86.84</td> <td>87.18</td> <td>90.70</td> <td>86.52</td> <td>88.24</td> <td>90.66</td> </tr> </tbody></table>
Table 2
table_2
D16-1068
8
emnlp2016
Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup. The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing. In the latter case, combining these two models (BGI-PP+i+b+e) yields improvements over the TurboParser in 6 out of 8 languages.
[1, 1, 1]
['Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup.', 'The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing.', 'In the latter case, combining these two models (BGI-PP+i+b+e) yields improvements over the TurboParser in 6 out of 8 languages.']
[['language', 'First Order', 'Second Order'], ['BGI-PP+i+b', 'TurboParser', 'First Order', 'Second Order'], ['BGI-PP+i+b+e', 'TurboParser', 'language']]
1
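UAS in the record above is the unlabeled attachment score, i.e., the percentage of tokens whose predicted head matches the gold head; a minimal sketch with toy head indices, not the shared-task data:

def uas(gold_heads, pred_heads):
    # Percentage of tokens whose predicted head index equals the gold head index.
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return 100.0 * correct / len(gold_heads)

gold = [2, 0, 2, 5, 3]   # head of each token; 0 marks the artificial root
pred = [2, 0, 2, 3, 3]
print(f"UAS = {uas(gold, pred):.2f}")   # 80.00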
D16-1071table_3
Word relation results. MRR per language and POS type for all models. unfiltered is the unfiltered nearest neighbor search space; filtered is the nearest neighbor search space that contains only one POS. ‡ (resp. †): significantly worse than LAMB (sign test, p < .01, resp. p < .05). Best unfiltered/filtered result per row is in bold.
4
[['lang', 'cz', 'POS', 'a'], ['lang', 'cz', 'POS', 'n'], ['lang', 'cz', 'POS', 'v'], ['lang', 'cz', 'POS', 'all'], ['lang', 'de', 'POS', 'a'], ['lang', 'de', 'POS', 'n'], ['lang', 'de', 'POS', 'v'], ['lang', 'de', 'POS', 'all'], ['lang', 'en', 'POS', 'a'], ['lang', 'en', 'POS', 'n'], ['lang', 'en', 'POS', 'v'], ['lang', 'en', 'POS', 'all'], ['lang', 'es', 'POS', 'a'], ['lang', 'es', 'POS', 'n'], ['lang', 'es', 'POS', 'v'], ['lang', 'es', 'POS', 'all'], ['lang', 'hu', 'POS', 'a'], ['lang', 'hu', 'POS', 'n'], ['lang', 'hu', 'POS', 'v'], ['lang', 'hu', 'POS', 'all']]
3
[['unfiltered', 'form', 'real'], ['unfiltered', 'form', 'opt'], ['unfiltered', 'form', 'sum'], ['unfiltered', 'STEM', 'real'], ['unfiltered', 'STEM', 'opt'], ['unfiltered', 'STEM', 'sum'], ['unfiltered', '-', 'LAMB'], ['filtered', 'form', 'real'], ['filtered', 'form', 'opt'], ['filtered', 'form', 'sum'], ['filtered', 'STEM', 'real'], ['filtered', 'STEM', 'opt'], ['filtered', 'STEM', 'sum'], ['filtered', 'LAMB', '-']]
[['0.03', '0.04', '0.05', '0.02', '0.05', '0.05', '0.06', '0.03‡', '0.05†', '0.07', '0.04†', '0.08', '0.08', '0.09'], ['0.15‡', '0.21‡', '0.24‡', '0.18‡', '0.27‡', '0.26‡', '0.30', '0.17‡', '0.23‡', '0.26‡', '0.20‡', '0.29‡', '0.28‡', '0.32'], ['0.07‡', '0.13‡', '0.16†', '0.08‡', '0.14‡', '0.16‡', '0.18', '0.09‡', '0.15‡', '0.17‡', '0.09‡', '0.17†', '0.18', '0.20'], ['0.12‡', '0.18‡', '0.20‡', '0.14‡', '0.22‡', '0.21‡', '0.25', '-', '-', '-', '-', '-', '-', '-'], ['0.14‡', '0.22‡', '0.25†', '0.17‡', '0.26', '0.21‡', '0.27', '0.17‡', '0.25‡', '0.27‡', '0.23‡', '0.33', '0.33', '0.33'], ['0.23‡', '0.35‡', '0.30‡', '0.28‡', '0.35†', '0.33‡', '0.36', '0.24‡', '0.36‡', '0.31‡', '0.28‡', '0.36', '0.35‡', '0.37'], ['0.11‡', '0.19‡', '0.18‡', '0.11‡', '0.22', '0.18‡', '0.23', '0.13‡', '0.20‡', '0.21‡', '0.13‡', '0.24‡', '0.23‡', '0.26'], ['0.21‡', '0.32‡', '0.28‡', '0.24‡', '0.33†', '0.30‡', '0.34', '-', '-', '-', '-', '-', '-', '-'], ['0.22‡', '0.25‡', '0.24‡', '0.16‡', '0.26‡', '0.25‡', '0.28', '0.25‡', '0.28‡', '0.28‡', '0.18‡', '0.29‡', '0.32', '0.31'], ['0.24‡', '0.27‡', '0.28‡', '0.22‡', '0.30', '0.28‡', '0.30', '0.25‡', '0.28‡', '0.29‡', '0.23‡', '0.31†', '0.31‡', '0.32'], ['0.29‡', '0.35‡', '0.37', '0.17‡', '0.35', '0.24‡', '0.37', '0.33‡', '0.39‡', '0.42‡', '0.21‡', '0.42†', '0.39‡', '0.44'], ['0.23‡', '0.26‡', '0.27‡', '0.20‡', '0.28‡', '0.25‡', '0.29', '-', '-', '-', '-', '-', '-', '-'], ['0.20‡', '0.23‡', '0.23‡', '0.08‡', '0.21‡', '0.18‡', '0.27', '0.21‡', '0.25‡', '0.26‡', '0.10‡', '0.26‡', '0.26‡', '0.30'], ['0.21‡', '0.25‡', '0.25‡', '0.16‡', '0.25‡', '0.23‡', '0.29', '0.22‡', '0.26‡', '0.27‡', '0.17‡', '0.27‡', '0.26‡', '0.30'], ['0.19‡', '0.35†', '0.36', '0.11‡', '0.29‡', '0.19‡', '0.38', '0.22‡', '0.36‡', '0.36‡', '0.16‡', '0.36‡', '0.33‡', '0.42'], ['0.20‡', '0.26‡', '0.26‡', '0.14‡', '0.24‡', '0.21‡', '0.30', '-', '-', '-', '-', '-', '-', '-'], ['0.02‡', '0.06‡', '0.06‡', '0.05‡', '0.08', '0.08', '0.09', '0.04‡', '0.08‡', '0.08‡', '0.06‡', '0.12', '0.11', '0.12'], ['0.01‡', '0.04‡', '0.05‡', '0.03‡', '0.07', '0.06‡', '0.07', '0.01‡', '0.04‡', '0.05‡', '0.04‡', '0.07†', '0.06‡', '0.07'], ['0.04‡', '0.11‡', '0.13‡', '0.07‡', '0.14‡', '0.15', '0.17', '0.05‡', '0.13‡', '0.14‡', '0.07‡', '0.15‡', '0.16†', '0.19'], ['0.02‡', '0.05‡', '0.06‡', '0.04‡', '0.08‡', '0.07‡', '0.09', '-', '-', '-', '-', '-', '-', '-']]
column
['MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>unfiltered || form || real</th> <th>unfiltered || form || opt</th> <th>unfiltered || form || sum</th> <th>unfiltered || STEM || real</th> <th>unfiltered || STEM || opt</th> <th>unfiltered || STEM || sum</th> <th>unfiltered || - || LAMB</th> <th>filtered || form || real</th> <th>filtered || form || opt</th> <th>filtered || form || sum</th> <th>filtered || STEM || real</th> <th>filtered || STEM || opt</th> <th>filtered || STEM || sum</th> <th>filtered || - || LAMB</th> </tr> </thead> <tbody> <tr> <td>lang || cz || POS || a</td> <td>0.03</td> <td>0.04</td> <td>0.05</td> <td>0.02</td> <td>0.05</td> <td>0.05</td> <td>0.06</td> <td>0.03‡</td> <td>0.05†</td> <td>0.07</td> <td>0.04†</td> <td>0.08</td> <td>0.08</td> <td>0.09</td> </tr> <tr> <td>lang || cz || POS || n</td> <td>0.15‡</td> <td>0.21‡</td> <td>0.24‡</td> <td>0.18‡</td> <td>0.27‡</td> <td>0.26‡</td> <td>0.30</td> <td>0.17‡</td> <td>0.23‡</td> <td>0.26‡</td> <td>0.20‡</td> <td>0.29‡</td> <td>0.28‡</td> <td>0.32</td> </tr> <tr> <td>lang || cz || POS || v</td> <td>0.07‡</td> <td>0.13‡</td> <td>0.16†</td> <td>0.08‡</td> <td>0.14‡</td> <td>0.16‡</td> <td>0.18</td> <td>0.09‡</td> <td>0.15‡</td> <td>0.17‡</td> <td>0.09‡</td> <td>0.17†</td> <td>0.18</td> <td>0.20</td> </tr> <tr> <td>lang || cz || POS || all</td> <td>0.12‡</td> <td>0.18‡</td> <td>0.20‡</td> <td>0.14‡</td> <td>0.22‡</td> <td>0.21‡</td> <td>0.25</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || de || POS || a</td> <td>0.14‡</td> <td>0.22‡</td> <td>0.25†</td> <td>0.17‡</td> <td>0.26</td> <td>0.21‡</td> <td>0.27</td> <td>0.17‡</td> <td>0.25‡</td> <td>0.27‡</td> <td>0.23‡</td> <td>0.33</td> <td>0.33</td> <td>0.33</td> </tr> <tr> <td>lang || de || POS || n</td> <td>0.23‡</td> <td>0.35‡</td> <td>0.30‡</td> <td>0.28‡</td> <td>0.35†</td> <td>0.33‡</td> <td>0.36</td> <td>0.24‡</td> <td>0.36‡</td> <td>0.31‡</td> <td>0.28‡</td> <td>0.36</td> <td>0.35‡</td> <td>0.37</td> </tr> <tr> <td>lang || de || POS || v</td> <td>0.11‡</td> <td>0.19‡</td> <td>0.18‡</td> <td>0.11‡</td> <td>0.22</td> <td>0.18‡</td> <td>0.23</td> <td>0.13‡</td> <td>0.20‡</td> <td>0.21‡</td> <td>0.13‡</td> <td>0.24‡</td> <td>0.23‡</td> <td>0.26</td> </tr> <tr> <td>lang || de || POS || all</td> <td>0.21‡</td> <td>0.32‡</td> <td>0.28‡</td> <td>0.24‡</td> <td>0.33†</td> <td>0.30‡</td> <td>0.34</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || en || POS || a</td> <td>0.22‡</td> <td>0.25‡</td> <td>0.24‡</td> <td>0.16‡</td> <td>0.26‡</td> <td>0.25‡</td> <td>0.28</td> <td>0.25‡</td> <td>0.28‡</td> <td>0.28‡</td> <td>0.18‡</td> <td>0.29‡</td> <td>0.32</td> <td>0.31</td> </tr> <tr> <td>lang || en || POS || n</td> <td>0.24‡</td> <td>0.27‡</td> <td>0.28‡</td> <td>0.22‡</td> <td>0.30</td> <td>0.28‡</td> <td>0.30</td> <td>0.25‡</td> <td>0.28‡</td> <td>0.29‡</td> <td>0.23‡</td> <td>0.31†</td> <td>0.31‡</td> <td>0.32</td> </tr> <tr> <td>lang || en || POS || v</td> <td>0.29‡</td> <td>0.35‡</td> <td>0.37</td> <td>0.17‡</td> <td>0.35</td> <td>0.24‡</td> <td>0.37</td> <td>0.33‡</td> <td>0.39‡</td> <td>0.42‡</td> <td>0.21‡</td> <td>0.42†</td> <td>0.39‡</td> <td>0.44</td> </tr> <tr> <td>lang || en || POS || all</td> <td>0.23‡</td> <td>0.26‡</td> <td>0.27‡</td> <td>0.20‡</td> <td>0.28‡</td> <td>0.25‡</td> <td>0.29</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || es || POS || a</td> 
<td>0.20‡</td> <td>0.23‡</td> <td>0.23‡</td> <td>0.08‡</td> <td>0.21‡</td> <td>0.18‡</td> <td>0.27</td> <td>0.21‡</td> <td>0.25‡</td> <td>0.26‡</td> <td>0.10‡</td> <td>0.26‡</td> <td>0.26‡</td> <td>0.30</td> </tr> <tr> <td>lang || es || POS || n</td> <td>0.21‡</td> <td>0.25‡</td> <td>0.25‡</td> <td>0.16‡</td> <td>0.25‡</td> <td>0.23‡</td> <td>0.29</td> <td>0.22‡</td> <td>0.26‡</td> <td>0.27‡</td> <td>0.17‡</td> <td>0.27‡</td> <td>0.26‡</td> <td>0.30</td> </tr> <tr> <td>lang || es || POS || v</td> <td>0.19‡</td> <td>0.35†</td> <td>0.36</td> <td>0.11‡</td> <td>0.29‡</td> <td>0.19‡</td> <td>0.38</td> <td>0.22‡</td> <td>0.36‡</td> <td>0.36‡</td> <td>0.16‡</td> <td>0.36‡</td> <td>0.33‡</td> <td>0.42</td> </tr> <tr> <td>lang || es || POS || all</td> <td>0.20‡</td> <td>0.26‡</td> <td>0.26‡</td> <td>0.14‡</td> <td>0.24‡</td> <td>0.21‡</td> <td>0.30</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || hu || POS || a</td> <td>0.02‡</td> <td>0.06‡</td> <td>0.06‡</td> <td>0.05‡</td> <td>0.08</td> <td>0.08</td> <td>0.09</td> <td>0.04‡</td> <td>0.08‡</td> <td>0.08‡</td> <td>0.06‡</td> <td>0.12</td> <td>0.11</td> <td>0.12</td> </tr> <tr> <td>lang || hu || POS || n</td> <td>0.01‡</td> <td>0.04‡</td> <td>0.05‡</td> <td>0.03‡</td> <td>0.07</td> <td>0.06‡</td> <td>0.07</td> <td>0.01‡</td> <td>0.04‡</td> <td>0.05‡</td> <td>0.04‡</td> <td>0.07†</td> <td>0.06‡</td> <td>0.07</td> </tr> <tr> <td>lang || hu || POS || v</td> <td>0.04‡</td> <td>0.11‡</td> <td>0.13‡</td> <td>0.07‡</td> <td>0.14‡</td> <td>0.15</td> <td>0.17</td> <td>0.05‡</td> <td>0.13‡</td> <td>0.14‡</td> <td>0.07‡</td> <td>0.15‡</td> <td>0.16†</td> <td>0.19</td> </tr> <tr> <td>lang || hu || POS || all</td> <td>0.02‡</td> <td>0.05‡</td> <td>0.06‡</td> <td>0.04‡</td> <td>0.08‡</td> <td>0.07‡</td> <td>0.09</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 3
table_3
D16-1071
7
emnlp2016
Results. The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models. This comes at no surprise since this model does barely know anything about word forms and lemmata. The form opt model improves these results based on the additional information it has access to (the mapping from lemma to its most frequent form). form sum performs similar to form opt. For Czech, Hungarian and Spanish it is slightly better (or equally good), whereas for English and German there is no clear trend. There is a large difference between these two models on German nouns, with form sum performing considerably worse. We attribute this to the fact that many German noun forms are rare compounds and therefore lead to badly trained form embeddings, which summed up do not lead to high quality embeddings either. Among the stemming models, stem real also is the worst performing model. We can further see that for all languages and almost all POS, stem sum performs worse than stem opt. That indicates that stemming leads to many low-frequency stems or many words sharing the same stem. This is especially apparent in Spanish verbs. There, the stemming models are clearly inferior to form models. Overall, LAMB performs best for all languages and POS types. Most improvements of LAMB are significant. The improvement over the best form model reaches up to 6 points (e.g., Czech nouns). In contrast to form sum, LAMB improves over form opt on German nouns. This indicates that the sparsity issue is successfully addressed by LAMB.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Results.', 'The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models.', 'This comes at no surprise since this model does barely know anything about word forms and lemmata.', 'The form opt model improves these results based on the additional information it has access to (the mapping from lemma to its most frequent form).', 'form sum performs similar to form opt.', 'For Czech, Hungarian and Spanish it is slightly better (or equally good), whereas for English and German there is no clear trend.', 'There is a large difference between these two models on German nouns, with form sum performing considerably worse.', 'We attribute this to the fact that many German noun forms are rare compounds and therefore lead to badly trained form embeddings, which summed up do not lead to high quality embeddings either.', 'Among the stemming models, stem real also is the worst performing model.', 'We can further see that for all languages and almost all POS, stem sum performs worse than stem opt.', 'That indicates that stemming leads to many low-frequency stems or many words sharing the same stem.', 'This is especially apparent in Spanish verbs.', 'There, the stemming models are clearly inferior to form models.', 'Overall, LAMB performs best for all languages and POS types.', 'Most improvements of LAMB are significant.', 'The improvement over the best form model reaches up to 6 points (e.g., Czech nouns).', 'In contrast to form sum, LAMB improves over form opt on German nouns.', 'This indicates that the sparsity issue is successfully addressed by LAMB.']
[None, ['lang', 'POS', 'unfiltered'], None, ['form'], ['form', 'sum', 'opt'], ['lang'], ['form', 'sum', 'de'], ['de'], ['STEM'], ['lang', 'POS', 'STEM', 'sum', 'opt'], ['STEM'], ['es'], ['STEM'], ['LAMB', 'lang', 'POS'], ['LAMB'], ['cz'], ['LAMB', 'form', 'sum', 'opt', 'de'], ['LAMB']]
1
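The description in the record above reports MRR (mean reciprocal rank) per language and POS. As a quick illustration only — the record does not spell out how the ranked candidate lists are built in D16-1071, so the ranks below are hypothetical — a generic MRR computation looks like this:

def mean_reciprocal_rank(ranks):
    # ranks: 1-based rank of the first correct answer per query, 0 if it is absent
    return sum(1.0 / r for r in ranks if r) / len(ranks)

print(round(mean_reciprocal_rank([1, 4, 2, 0]), 2))   # hypothetical ranks -> 0.44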
D16-1071table_5
Polarity classification results. Bold is best per language and column.
4
[['lang', 'cz', 'features', 'Brychcin et al. (2013)'], ['lang', 'cz', 'features', 'form'], ['lang', 'cz', 'features', 'STEM'], ['lang', 'cz', 'features', 'LAMB'], ['lang', 'en', 'features', 'Hagen et al. (2015)'], ['lang', 'en', 'features', 'form'], ['lang', 'en', 'features', 'STEM'], ['lang', 'en', 'features', 'LAMB']]
1
[['acc'], ['F1']]
[['-', '81.53'], ['80.86', '80.75'], ['81.51', '81.39'], ['81.21', '81.09'], ['-', '64.84'], ['66.78', '62.21'], ['66.95', '62.06'], ['67.49', '63.01']]
column
['acc', 'F1']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>lang || cz || features || Brychcin et al. (2013)</td> <td>-</td> <td>81.53</td> </tr> <tr> <td>lang || cz || features || form</td> <td>80.86</td> <td>80.75</td> </tr> <tr> <td>lang || cz || features || STEM</td> <td>81.51</td> <td>81.39</td> </tr> <tr> <td>lang || cz || features || LAMB</td> <td>81.21</td> <td>81.09</td> </tr> <tr> <td>lang || en || features || Hagen et al. (2015)</td> <td>-</td> <td>64.84</td> </tr> <tr> <td>lang || en || features || form</td> <td>66.78</td> <td>62.21</td> </tr> <tr> <td>lang || en || features || STEM</td> <td>66.95</td> <td>62.06</td> </tr> <tr> <td>lang || en || features || LAMB</td> <td>67.49</td> <td>63.01</td> </tr> </tbody></table>
Table 5
table_5
D16-1071
8
emnlp2016
Results. Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset. LAMB/STEM results are consistently better than form results. In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna” (engl. “description and title are interesting, but it is bad film-making”). This example is correctly classified as negative by the LAMB model because it has an embedding for “prasárna” (bad, smut) whereas the form model does not. The out-of-vocabulary counts for form and LAMB on the first fold of the CSFD experiment are 26.3k and 25.5k, respectively. The similarity of these two numbers suggests that the quality of word embeddings (form vs. LAMB) are responsible for the performance gain. On the SemEval data, LAMB improves the results over form and stem (cf. Table 5). Hence, LAMB can still pick up additional information despite the simple morphology of English. This is probably due to better embeddings for rare words. The SemEval 2015 winner (Hagen et al., 2015) is a highly domain-dependent and specialized system that we do not outperform.
[2, 1, 1, 2, 2, 2, 2, 1, 1, 2, 1]
['Results.', 'Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset.', 'LAMB/STEM results are consistently better than form results.', 'In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna” (engl. “description and title are interesting, but it is bad film-making”).', 'This example is correctly classified as negative by the LAMB model because it has an embedding for “prasárna” (bad, smut) whereas the form model does not.', 'The out-of-vocabulary counts for form and LAMB on the first fold of the CSFD experiment are 26.3k and 25.5k, respectively.', 'The similarity of these two numbers suggests that the quality of word embeddings (form vs. LAMB) are responsible for the performance gain.', 'On the SemEval data, LAMB improves the results over form and stem (cf. Table 5).', 'Hence, LAMB can still pick up additional information despite the simple morphology of English.', 'This is probably due to better embeddings for rare words.', 'The SemEval 2015 winner (Hagen et al., 2015) is a highly domain-dependent and specialized system that we do not outperform.']
[None, ['acc', 'F1'], ['LAMB', 'STEM'], None, ['LAMB'], ['form', 'LAMB'], ['form', 'LAMB'], ['LAMB', 'form', 'STEM'], ['LAMB', 'en'], None, ['Hagen et al. (2015)']]
1
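The list-valued fields of a record (row_headers, column_headers, contents) are rendered here as Python-literal strings, so a table can be rebuilt without touching table_html_clean. A minimal sketch using only the standard library; the values are an excerpt of the D16-1071 table_5 record above, truncated to the cz 'form' and 'LAMB' rows:

import ast

row_headers = ast.literal_eval("[['lang', 'cz', 'features', 'form'], ['lang', 'cz', 'features', 'LAMB']]")
column_headers = ast.literal_eval("[['acc'], ['F1']]")
contents = ast.literal_eval("[['80.86', '80.75'], ['81.21', '81.09']]")

# contents holds one row per row header and one cell per column header,
# mirroring the cleaned HTML table.
for header, row in zip(row_headers, contents):
    for column, cell in zip(column_headers, row):
        print(' || '.join(header), '|', column[0], '=', cell)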
D16-1072table_2
POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '2', 'λ', '0.98'], ['Online Pruning', 'r', '4', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.98'], ['Online Pruning', 'r', '16', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.90'], ['Online Pruning', 'r', '8', 'λ', '0.95'], ['Online Pruning', 'r', '8', 'λ', '0.99'], ['Online Pruning', 'r', '8', 'λ', '1.00'], ['Offline Pruning', 'r', '8', 'λ', '0.9999'], ['Offline Pruning', 'r', '16', 'λ', '0.9999'], ['Offline Pruning', 'r', '32', 'λ', '0.9999'], ['Offline Pruning', 'r', '16', 'λ', '0.99'], ['Offline Pruning', 'r', '16', 'λ', '0.999'], ['Offline Pruning', 'r', '16', 'λ', '0.99999']]
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['94.25', '95.03', '2.0', '2.0'], ['95.06', '95.66', '3.9', '4.0'], ['95.14', '95.83', '6.3', '7.4'], ['95.12', '95.81', '7.8', '14.1'], ['95.15', '95.79', '3.7', '6.3'], ['95.13', '95.82', '5.1', '7.1'], ['95.15', '95.74', '7.4', '7.9'], ['95.15', '95.76', '8.0', '8.0'], ['94.95', '96.05', '4.1', '5.1'], ['95.15', '96.09', '5.2', '7.6'], ['95.13', '96.09', '5.5', '9.3'], ['94.42', '95.77', '1.6', '2.2'], ['95.02', '96.10', '2.6', '4.0'], ['95.10', '96.09', '6.8', '8.9']]
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning', 'Offline Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r || 2 || λ || 0.98</td> <td>94.25</td> <td>95.03</td> <td>2.0</td> <td>2.0</td> </tr> <tr> <td>Online Pruning || r || 4 || λ || 0.98</td> <td>95.06</td> <td>95.66</td> <td>3.9</td> <td>4.0</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.98</td> <td>95.14</td> <td>95.83</td> <td>6.3</td> <td>7.4</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.98</td> <td>95.12</td> <td>95.81</td> <td>7.8</td> <td>14.1</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.90</td> <td>95.15</td> <td>95.79</td> <td>3.7</td> <td>6.3</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.95</td> <td>95.13</td> <td>95.82</td> <td>5.1</td> <td>7.1</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.99</td> <td>95.15</td> <td>95.74</td> <td>7.4</td> <td>7.9</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 1.00</td> <td>95.15</td> <td>95.76</td> <td>8.0</td> <td>8.0</td> </tr> <tr> <td>Offline Pruning || r || 8 || λ || 0.9999</td> <td>94.95</td> <td>96.05</td> <td>4.1</td> <td>5.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.9999</td> <td>95.15</td> <td>96.09</td> <td>5.2</td> <td>7.6</td> </tr> <tr> <td>Offline Pruning || r || 32 || λ || 0.9999</td> <td>95.13</td> <td>96.09</td> <td>5.5</td> <td>9.3</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99</td> <td>94.42</td> <td>95.77</td> <td>1.6</td> <td>2.2</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.999</td> <td>95.02</td> <td>96.10</td> <td>2.6</td> <td>4.0</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99999</td> <td>95.10</td> <td>96.09</td> <td>6.8</td> <td>8.9</td> </tr> </tbody></table>
Table 2
table_2
D16-1072
5
emnlp2016
5 Experiments on POS Tagging. 5.1 Parameter Tuning. For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates. Table 2 shows the tagging accuracies and the averaged numbers of single-side tags for each token after pruning. The first major row tunes the two hyperparameters for online pruning. We first fix λ = 0.98 and increase r from 2 to 8, leading to consistently improved accuracies on both CTB5-dev and PD-dev. No further improvement is gained with r = 16, indicating that tags below the top-8 are mostly very unlikely ones and thus insignificant for computing feature expectations. Then we fix r = 8 and try different λ. We find that λ has little effect on tagging accuracies but influences the numbers of remaining single-side tags. We choose r = 8 and λ = 0.98 for final evaluation. The second major row tunes r and λ for offline pruning. Different from online pruning, λ has much greater effect on the number of remaining single-side tags. Under λ = 0.9999, increasing r from 8 to 16 leads to 0.20% accuracy improvement on CTB5-dev, but using r = 32 has no further gain. Then we fix r = 16 and vary λ from 0.99 to 0.99999. We choose r = 16 and λ = 0.9999 for offline pruning for final evaluation, which leaves each word with about 5.2 CTB-tags and 7.6 PD-tags on average.
[2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['5 Experiments on POS Tagging.', '5.1 Parameter Tuning.', 'For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates.', 'Table 2 shows the tagging accuracies and the averaged numbers of single-side tags for each token after pruning.', 'The first major row tunes the two hyperparameters for online pruning.', 'We first fix λ = 0.98 and increase r from 2 to 8, leading to consistently improved accuracies on both CTB5-dev and PD-dev.', 'No further improvement is gained with r = 16, indicating that tags below the top-8 are mostly very unlikely ones and thus insignificant for computing feature expectations.', 'Then we fix r = 8 and try different λ.', 'We find that λ has little effect on tagging accuracies but influences the numbers of remaining single-side tags.', 'We choose r = 8 and λ = 0.98 for final evaluation.', 'The second major row tunes r and λ for offline pruning.', 'Different from online pruning, λ has much greater effect on the number of remaining single-side tags.', 'Under λ = 0.9999, increasing r from 8 to 16 leads to 0.20% accuracy improvement on CTB5-dev, but using r = 32 has no further gain.', 'Then we fix r = 16 and vary λ from 0.99 to 0.99999.', 'We choose r = 16 and λ = 0.9999 for offline pruning for final evaluation, which leaves each word with about 5.2 CTB-tags and 7.6 PD-tags on average.']
[None, None, ['Online Pruning', 'Offline Pruning', 'λ', 'r'], None, ['Online Pruning'], ['λ', 'r', 'CTB5-dev', 'PD-dev'], ['r'], ['λ', 'r'], ['λ'], ['r', 'λ'], ['r', 'λ'], ['λ'], ['r', 'λ', 'CTB5-dev'], ['r', 'λ'], ['r', 'λ', 'CTB-side', 'PD-side']]
1
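The record above tunes a maximum candidate count r and an accumulative probability threshold λ for pruning single-side tags. A minimal sketch of that rule as the caption and description state it — an assumption about the mechanism, not the paper's code:

def prune_tags(tag_probs, r=8, lam=0.98):
    # tag_probs: dict mapping a single-side tag to its marginal probability for one token
    ranked = sorted(tag_probs.items(), key=lambda kv: kv[1], reverse=True)[:r]
    kept, cumulative = [], 0.0
    for tag, prob in ranked:
        kept.append(tag)
        cumulative += prob
        if cumulative >= lam:   # stop once the accumulative probability reaches λ
            break
    return kept

# Toy example: with λ = 0.98 the low-probability tail is dropped.
print(prune_tags({'NN': 0.85, 'VV': 0.10, 'JJ': 0.04, 'AD': 0.01}))   # -> ['NN', 'VV', 'JJ']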
D16-1072table_3
POS tagging performance of difference approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Guide-feature'], ['Baseline'], ['Li et al. (2012b)']]
2
[['Accuracy (%)', 'CTB5-test'], ['Accuracy (%)', 'PD-test'], ['Speed', 'Toks/Sec']]
[['94.83', '95.90', '246'], ['94.74', '95.95', '365'], ['94.58', '95.79', '3'], ['94.63', '95.87', '127'], ['94.35', '95.63', '584'], ['94.07', '95.82', '1573'], ['94.60', '—', '—']]
column
['Accuracy (%)', 'Accuracy (%)', 'Speed']
['Coupled (Offline)', 'Coupled (Online)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-test</th> <th>Accuracy (%) || PD-test</th> <th>Speed || Toks/Sec</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>94.83</td> <td>95.90</td> <td>246</td> </tr> <tr> <td>Coupled (Online)</td> <td>94.74</td> <td>95.95</td> <td>365</td> </tr> <tr> <td>Coupled (No Prune)</td> <td>94.58</td> <td>95.79</td> <td>3</td> </tr> <tr> <td>Coupled (Relaxed)</td> <td>94.63</td> <td>95.87</td> <td>127</td> </tr> <tr> <td>Guide-feature</td> <td>94.35</td> <td>95.63</td> <td>584</td> </tr> <tr> <td>Baseline</td> <td>94.07</td> <td>95.82</td> <td>1573</td> </tr> <tr> <td>Li et al. (2012b)</td> <td>94.60</td> <td>—</td> <td>—</td> </tr> </tbody></table>
Table 3
table_3
D16-1072
6
emnlp2016
5.2 Main Results. Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase. “Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the-other-side tags. “Coupled (Relaxed)” refers the coupled model with relaxed mapping in Li et al. (2015), which maps a one-side tag to a manually-designed small set of the-other-side tags. Li et al. (2012b) report the state-of-the-art accuracy on this CTB data, with a joint model of Chinese POS tagging and dependency parsing. It is clear that both online and offline pruning greatly improve the efficiency of the coupled model by about two magnitudes, without the need of a carefully predefined set of tag-to-tag mapping rules. Moreover, the coupled model with offline pruning achieves 0.76% accuracy improvement on CTB5-test over the baseline model, and 0.48% over our reimplemented guide-feature approach of Jiang et al. (2009). The gains on PD-test are marginal, possibly due to the large size of PD-train, similar to the results in Li et al. (2015).
[0, 1, 2, 2, 2, 1, 1, 1]
['5.2 Main Results.', 'Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase.', '“Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the-other-side tags.', '“Coupled (Relaxed)” refers the coupled model with relaxed mapping in Li et al. (2015), which maps a one-side tag to a manually-designed small set of the-other-side tags.', 'Li et al. (2012b) report the state-of-the-art accuracy on this CTB data, with a joint model of Chinese POS tagging and dependency parsing.', 'It is clear that both online and offline pruning greatly improve the efficiency of the coupled model by about two magnitudes, without the need of a carefully predefined set of tag-to-tag mapping rules.', 'Moreover, the coupled model with offline pruning achieves 0.76% accuracy improvement on CTB5-test over the baseline model, and 0.48% over our reimplemented guide-feature approach of Jiang et al. (2009).', 'The gains on PD-test are marginal, possibly due to the large size of PD-train, similar to the results in Li et al. (2015).']
[None, None, ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Li et al. (2012b)'], ['Coupled (Offline)', 'Coupled (Online)'], ['Coupled (Offline)', 'CTB5-test'], ['PD-test']]
1
D16-1072table_4
WS&POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '8', 'λ', '1.00'], ['Online Pruning', 'r', '16', 'λ', '0.95'], ['Online Pruning', 'r', '16', 'λ', '0.99'], ['Online Pruning', 'r', '16', 'λ', '1.00'], ['Offline Pruning', 'r', '16', 'λ', '0.99']]
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['90.41', '89.91', '8.0', '8.0'], ['90.65', '90.22', '15.9', '16.0'], ['90.77', '90.49', '16.0', '16.0'], ['90.79', '90.49', '16.0', '16.0'], ['91.64', '91.92', '2.5', '3.5']]
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r || 8 || λ || 1.00</td> <td>90.41</td> <td>89.91</td> <td>8.0</td> <td>8.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.95</td> <td>90.65</td> <td>90.22</td> <td>15.9</td> <td>16.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.99</td> <td>90.77</td> <td>90.49</td> <td>16.0</td> <td>16.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 1.00</td> <td>90.79</td> <td>90.49</td> <td>16.0</td> <td>16.0</td> </tr> <tr> <td>Offline Pruning || r || 8 || λ || 0.995</td> <td>91.22</td> <td>91.62</td> <td>2.6</td> <td>3.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.995</td> <td>91.66</td> <td>91.85</td> <td>3.2</td> <td>4.3</td> </tr> <tr> <td>Offline Pruning || r || 32 || λ || 0.995</td> <td>91.67</td> <td>91.87</td> <td>3.5</td> <td>5.6</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.95</td> <td>90.69</td> <td>91.30</td> <td>1.6</td> <td>2.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99</td> <td>91.64</td> <td>91.92</td> <td>2.5</td> <td>3.5</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.999</td> <td>91.62</td> <td>91.75</td> <td>5.1</td> <td>6.4</td> </tr> </tbody></table>
Table 4
table_4
D16-1072
6
emnlp2016
Table 4 shows results for tuning r and λ. From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags. The accuracies are much inferior to those from the offline pruning approach. We believe that the accuracies can be further improved with larger r, which would nevertheless lead to severe inefficiency issue. Based on the results, we choose r = 16 and λ = 1.00 for final evaluation.
[1, 1, 1, 2, 1]
['Table 4 shows results for tuning r and λ.', 'From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags.', 'The accuracies are much inferior to those from the offline pruning approach.', 'We believe that the accuracies can be further improved with larger r, which would nevertheless lead to severe inefficiency issue.', 'Based on the results, we choose r = 16 and λ = 1.00 for final evaluation.']
[None, ['Online Pruning', 'λ'], ['Online Pruning', 'Offline Pruning'], ['r'], ['r', 'λ']]
1
D16-1072table_5
WS&POS tagging performance of difference approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Guide-feature'], ['Baseline']]
2
[['F (%) on CTB5-test', 'Only WS'], ['F (%) on CTB5-test', 'Joint WS&POS'], ['F (%) on PD-test', 'Only WS'], ['F (%) on PD-test', 'Joint WS&POS'], ['Speed (Char/Sec)', '-']]
[['95.55', '90.58', '96.12', '92.44', '115'], ['94.94', '89.58', '95.60', '91.56', '26'], ['95.07', '89.79', '95.66', '91.61', '27'], ['94.88', '89.49', '96.28', '92.47', '119']]
column
['F (%) on CTB5-test', 'F (%) on CTB5-test', 'F (%) on PD-test', 'F (%) on PD-test', 'Speed (Char/Sec)']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P/R/F (%) on CTB5-test || Only WS</th> <th>P/R/F (%) on CTB5-test || Joint WS&amp;POS</th> <th>P/R/F (%) on PD-test || Only WS</th> <th>P/R/F (%) on PD-test || Joint WS&amp;POS</th> <th>Speed || Char/Sec</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>95.65/95.46/95.55</td> <td>90.68/90.49/90.58</td> <td>96.39/95.86/96.12</td> <td>92.70/92.19/92.44</td> <td>115</td> </tr> <tr> <td>Coupled (Online)</td> <td>95.17/94.71/94.94</td> <td>89.80/89.37/89.58</td> <td>95.76/95.45/95.60</td> <td>91.71/91.41/91.56</td> <td>26</td> </tr> <tr> <td>Guide-feature</td> <td>95.26/94.89/95.07</td> <td>89.96/89.61/89.79</td> <td>95.99/95.33/95.66</td> <td>91.92/91.30/91.61</td> <td>27</td> </tr> <tr> <td>Baseline</td> <td>95.00/94.77/94.88</td> <td>89.60/89.38/89.49</td> <td>96.56/96.00/96.28</td> <td>92.74/92.20/92.47</td> <td>119</td> </tr> </tbody></table>
Table 5
table_5
D16-1072
7
emnlp2016
6.2 Main Results. Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase. “Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually design reasonable tag-to-tag mapping rules in this case. In terms of efficiency, the coupled model with offline pruning is on par with the baseline single-side tagging model. In terms of F-score, the coupled model with offline pruning achieves 0.67% (WS) and 1.09% (WS&POS) gains on CTB5-test over the baseline model, and 0.48% (WS) and 0.79% (WS&POS) over our reimplemented guide-feature approach of Jiang et al. (2009). Similar to the case of POS tagging, the baseline model is very competitive on PD-test due to the large scale of PD-train.
[2, 1, 2, 1, 1, 2]
['6.2 Main Results.', 'Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase.', '“Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually design reasonable tag-to-tag mapping rules in this case.', 'In terms of efficiency, the coupled model with offline pruning is on par with the baseline single-side tagging model.', 'In terms of F-score, the coupled model with offline pruning achieves 0.67% (WS) and 1.09% (WS&POS) gains on CTB5-test over the baseline model, and 0.48% (WS) and 0.79% (WS&POS) over our reimplemented guide-feature approach of Jiang et al. (2009).', 'Similar to the case of POS tagging, the baseline model is very competitive on PD-test due to the large scale of PD-train.']
[None, None, None, ['Speed (Char/Sec)'], ['F (%) on CTB5-test', 'Coupled (Offline)'], None]
1
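The P/R/F triplets in the cleaned HTML above are related by the usual harmonic mean; a quick check against the Coupled (Offline) CTB5-test WS cell (95.65/95.46/95.55):

def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(95.65, 95.46), 2))   # -> 95.55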
D16-1072table_6
WS&POS tagging performance of difference approaches on CTB5X and PD.
1
[['Coupled (Offline)'], ['Guide-feature'], ['Baseline'], ['Sun and Wan (2012)'], ['Jiang et al. (2009)']]
2
[['F (%) on CTB5X-test', 'Only WS'], ['F (%) on CTB5X-test', 'Joint WS&POS']]
[['98.01', '94.39'], ['97.96', '94.06'], ['97.37', '93.23'], ['—', '94.36'], ['98.23', '94.03']]
column
['F (%) on CTB5X-test', 'F (%) on CTB5X-test']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F (%) on CTB5X-test || Only WS</th> <th>F (%) on CTB5X-test || Joint WS&amp;POS</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>98.01</td> <td>94.39</td> </tr> <tr> <td>Guide-feature</td> <td>97.96</td> <td>94.06</td> </tr> <tr> <td>Baseline</td> <td>97.37</td> <td>93.23</td> </tr> <tr> <td>Sun and Wan (2012)</td> <td>—</td> <td>94.36</td> </tr> <tr> <td>Jiang et al. (2009)</td> <td>98.23</td> <td>94.03</td> </tr> </tbody></table>
Table 6
table_6
D16-1072
8
emnlp2016
6.4 Comparison with Previous Work. In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012). CTB5X-dev/test only contain 352/348 sentences respectively. Table 6 presents the F scores on CTB5X-test. We can see that the coupled model with offline pruning achieves 0.64% (WS) and 1.16% (WS&POS) F-score improvements over the baseline model, and 0.05% (WS) and 0.33% (WS&POS) over the guide-feature approach. The original guide-feature method in Jiang et al. (2009) achieves 98.23% and 94.03% F-score, which is very close to the results of our reimplemented model. The sub-word stacking approach of Sun and Wan (2012) can be understood as a more complex variant of the basic guide-feature method.
[2, 2, 2, 1, 1, 1, 2]
['6.4 Comparison with Previous Work.', 'In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012).', 'CTB5X-dev/test only contain 352/348 sentences respectively.', 'Table 6 presents the F scores on CTB5X-test.', 'We can see that the coupled model with offline pruning achieves 0.64% (WS) and 1.16% (WS&POS) F-score improvements over the baseline model, and 0.05% (WS) and 0.33% (WS&POS) over the guide-feature approach.', 'The original guide-feature method in Jiang et al. (2009) achieves 98.23% and 94.03% F-score, which is very close to the results of our reimplemented model.', 'The sub-word stacking approach of Sun and Wan (2012) can be understood as a more complex variant of the basic guide-feature method.']
[None, None, None, ['F (%) on CTB5X-test'], ['Coupled (Offline)', 'Guide-feature', 'Baseline', 'F (%) on CTB5X-test'], ['Jiang et al. (2009)', 'F (%) on CTB5X-test', 'Coupled (Offline)'], ['Sun and Wan (2012)']]
1
D16-1075table_3
Performance of various approaches on stream summarization on five topics.
1
[['Random'], ['NB'], ['B-HAC'], ['TaHBM'], ['Ge et al. (2015b)'], ['BINet-NodeRank'], ['BINet-AreaRank']]
2
[['sports', 'P@50'], ['sports', 'P@100'], ['politics', 'P@50'], ['politics', 'P@100'], ['disaster', 'P@50'], ['disaster', 'P@100'], ['military', 'P@50'], ['military', 'P@100'], ['comprehensive', 'P@50'], ['comprehensive', 'P@100']]
[['0.02', '0.08', '0', '0', '0.02', '0.04', '0', '0', '0.02', '0.03'], ['0.08', '0.12', '0.18', '0.19', '0.42', '0.36', '0.18', '0.17', '0.38', '0.31'], ['0.10', '0.13', '0.30', '0.26', '0.50', '0.47', '0.30', '0.22', '0.36', '0.32'], ['0.18', '0.15', '0.30', '0.29', '0.50', '0.43', '0.46', '0.36', '0.38', '0.33'], ['0.20', '0.15', '0.38', '0.36', '0.64', '0.53', '0.54', '0.41', '0.40', '0.33'], ['0.24', '0.20', '0.38', '0.30', '0.54', '0.51', '0.48', '0.43', '0.36', '0.33'], ['0.40', '0.33', '0.40', '0.34', '0.80', '0.62', '0.50', '0.49', '0.32', '0.30']]
column
['P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100']
['BINet-NodeRank', 'BINet-AreaRank']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>sports || P@50</th> <th>sports || P@100</th> <th>politics || P@50</th> <th>politics || P@100</th> <th>disaster || P@50</th> <th>disaster || P@100</th> <th>military || P@50</th> <th>military || P@100</th> <th>comprehensive || P@50</th> <th>comprehensive || P@100</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>0.02</td> <td>0.08</td> <td>0</td> <td>0</td> <td>0.02</td> <td>0.04</td> <td>0</td> <td>0</td> <td>0.02</td> <td>0.03</td> </tr> <tr> <td>NB</td> <td>0.08</td> <td>0.12</td> <td>0.18</td> <td>0.19</td> <td>0.42</td> <td>0.36</td> <td>0.18</td> <td>0.17</td> <td>0.38</td> <td>0.31</td> </tr> <tr> <td>B-HAC</td> <td>0.10</td> <td>0.13</td> <td>0.30</td> <td>0.26</td> <td>0.50</td> <td>0.47</td> <td>0.30</td> <td>0.22</td> <td>0.36</td> <td>0.32</td> </tr> <tr> <td>TaHBM</td> <td>0.18</td> <td>0.15</td> <td>0.30</td> <td>0.29</td> <td>0.50</td> <td>0.43</td> <td>0.46</td> <td>0.36</td> <td>0.38</td> <td>0.33</td> </tr> <tr> <td>Ge et al. (2015b)</td> <td>0.20</td> <td>0.15</td> <td>0.38</td> <td>0.36</td> <td>0.64</td> <td>0.53</td> <td>0.54</td> <td>0.41</td> <td>0.40</td> <td>0.33</td> </tr> <tr> <td>BINet-NodeRank</td> <td>0.24</td> <td>0.20</td> <td>0.38</td> <td>0.30</td> <td>0.54</td> <td>0.51</td> <td>0.48</td> <td>0.43</td> <td>0.36</td> <td>0.33</td> </tr> <tr> <td>BINet-AreaRank</td> <td>0.40</td> <td>0.33</td> <td>0.40</td> <td>0.34</td> <td>0.80</td> <td>0.62</td> <td>0.50</td> <td>0.49</td> <td>0.32</td> <td>0.30</td> </tr> </tbody></table>
Table 3
table_3
D16-1075
7
emnlp2016
The results are shown in Table 3. It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves the significant improvement over the state-of-the-art model on sports and disasters, and performs comparably on politics and military and NodeRank’s performance achieves the comparable performance to previous state-of-the-art model though it is inferior to AreaRank on most topics. Among these five topics, almost all models perform well on disaster and military topics because disaster and military reference summaries have more entries than the topics such as politics and sports and topics of event entries in the summaries are focused. The high-quality training data benefits models’ performance especially for AreaRank which is purely data-driven. In contrast, on sports and politics, the number of entries in the reference summaries is small, which results in weaker supervision and affect the performance of models. It is notable that AreaRank does not perform well on generating the comprehensive summary in which topics of event entries are miscellaneous. The reason for the undesirable performance is that the topics of event entries in the comprehensive reference summary are not focused, which results in very few reference (positive) examples for each topic. As a result, the miscellaneousness of topics of positive examples makes them tend to be overwhelmed by large numbers of negative examples during training the model, leading to very weak supervision and making it difficult for AreaRank to learn the patterns of positive examples. Compared to AreaRank, the strategy of selecting documents for generating event entries in other baselines and NodeRank use more or less heuristic knowledge, which makes these models perform stably even if the training examples are not sufficient.
[1, 1, 1, 1, 1, 1, 2, 2, 2]
['The results are shown in Table 3.', 'It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves the significant improvement over the state-of-the-art model on sports and disasters, and performs comparably on politics and military and NodeRank’s performance achieves the comparable performance to previous state-of-the-art model though it is inferior to AreaRank on most topics.', 'Among these five topics, almost all models perform well on disaster and military topics because disaster and military reference summaries have more entries than the topics such as politics and sports and topics of event entries in the summaries are focused.', 'The high-quality training data benefits models’ performance especially for AreaRank which is purely data-driven.', 'In contrast, on sports and politics, the number of entries in the reference summaries is small, which results in weaker supervision and affect the performance of models.', 'It is notable that AreaRank does not perform well on generating the comprehensive summary in which topics of event entries are miscellaneous.', 'The reason for the undesirable performance is that the topics of event entries in the comprehensive reference summary are not focused, which results in very few reference (positive) examples for each topic.', 'As a result, the miscellaneousness of topics of positive examples makes them tend to be overwhelmed by large numbers of negative examples during training the model, leading to very weak supervision and making it difficult for AreaRank to learn the patterns of positive examples.', 'Compared to AreaRank, the strategy of selecting documents for generating event entries in other baselines and NodeRank use more or less heuristic knowledge, which makes these models perform stably even if the training examples are not sufficient.']
[None, ['BINet-NodeRank', 'BINet-AreaRank'], ['sports', 'politics', 'disaster', 'military', 'comprehensive'], ['BINet-AreaRank'], ['sports', 'politics'], ['BINet-AreaRank'], None, ['BINet-AreaRank'], ['BINet-NodeRank', 'BINet-AreaRank']]
1
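The record above evaluates ranked event entries with P@50 and P@100, i.e. the fraction of the top-k entries that are correct. A minimal sketch with hypothetical entries and gold labels:

def precision_at_k(ranked_entries, relevant, k):
    return sum(1 for entry in ranked_entries[:k] if entry in relevant) / k

ranked = ['e1', 'e2', 'e3', 'e4', 'e5']
gold = {'e1', 'e3', 'e5'}
print(precision_at_k(ranked, gold, k=5))   # -> 0.6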
D16-1078table_2
The performances on the Abstracts sub-corpus.
3
[['Speculation', 'Systems', 'Baseline'], ['Speculation', 'Systems', 'CNN_C'], ['Speculation', 'Systems', 'CNN_D'], ['Negation', 'Systems', 'Baseline'], ['Negation', 'Systems', 'CNN_C'], ['Negation', 'Systems', 'CNN_D']]
1
[['P (%)'], ['R (%)'], ['F1'], ['PCLB (%)'], ['PCRB (%)'], ['PCS (%)']]
[['94.71', '90.54', '92.56', '84.81', '85.11', '72.47'], ['95.95', '95.19', '95.56', '93.16', '91.50', '85.75'], ['92.25', '94.98', '93.55', '86.39', '84.50', '74.43'], ['85.46', '72.95', '78.63', '84.00', '58.29', '46.42'], ['85.10', '92.74', '89.64', '81.04', '87.73', '70.86'], ['89.49', '90.54', '89.91', '91.91', '83.54', '77.14']]
column
['P (%)', 'R (%)', 'F1', 'PCLB (%)', 'PCRB (%)', 'PCS (%)']
['CNN_C', 'CNN_D']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F1</th> <th>PCLB (%)</th> <th>PCRB (%)</th> <th>PCS (%)</th> </tr> </thead> <tbody> <tr> <td>Speculation || Systems || Baseline</td> <td>94.71</td> <td>90.54</td> <td>92.56</td> <td>84.81</td> <td>85.11</td> <td>72.47</td> </tr> <tr> <td>Speculation || Systems || CNN_C</td> <td>95.95</td> <td>95.19</td> <td>95.56</td> <td>93.16</td> <td>91.50</td> <td>85.75</td> </tr> <tr> <td>Speculation || Systems || CNN_D</td> <td>92.25</td> <td>94.98</td> <td>93.55</td> <td>86.39</td> <td>84.50</td> <td>74.43</td> </tr> <tr> <td>Negation || Systems || Baseline</td> <td>85.46</td> <td>72.95</td> <td>78.63</td> <td>84.00</td> <td>58.29</td> <td>46.42</td> </tr> <tr> <td>Negation || Systems || CNN_C</td> <td>85.10</td> <td>92.74</td> <td>89.64</td> <td>81.04</td> <td>87.73</td> <td>70.86</td> </tr> <tr> <td>Negation || Systems || CNN_D</td> <td>89.49</td> <td>90.54</td> <td>89.91</td> <td>91.91</td> <td>83.54</td> <td>77.14</td> </tr> </tbody></table>
Table 2
table_2
D16-1078
7
emnlp2016
4.3 Experimental Results on Abstracts. Table 2 summarizes the performances of scope detection on Abstracts. In Table 2, CNN_C and CNN_D refer the CNN-based model with constituency paths and dependency paths, respectively (the same below). It shows that our CNN-based models (both CNN_C and CNN_D) can achieve better performances than the baseline in most measurements. This indicates that our CNN-based models can better extract and model effective features. Besides, compared to the baseline, our CNN-based models consider fewer features and need less human intervention. It also manifests that our CNN-based models improve significantly more on negation scope detection than on speculation scope detection. Much of this is due to the better ability of our CNN-based models in identifying the right boundaries of scopes than the left ones on negation scope detection, with the huge gains of 29.44% and 25.25% on PCRB using CNN_C and CNN_D, respectively. Table 2 illustrates that the performance of speculation scope detection is higher than that of negation (Best PCS: 85.75% vs 77.14%). It is mainly attributed to the shorter scopes of negation cues. Under the circumstances that the average length of negation sentences is almost as long as that of speculation ones (29.28 vs 29.77), shorter negation scopes mean that more tokens do not belong to the scopes, indicating more negative instances. The imbalance between positive and negative instances has negative effects on both the baseline and the CNN-based models for negation scope detection. Table 2 also shows that our CNN_D outperforms CNN_C in negation scope detection (PCS: 77.14% vs 70.86%), while our CNN_C performs better than CNN_D in speculation scope detection (PCS: 85.75% vs 74.43%). To explore the results of our CNN-based models in details, we present the analysis of top 10 speculative and negative cues below on CNN_C and CNN_D, respectively.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 0]
['4.3 Experimental Results on Abstracts.', 'Table 2 summarizes the performances of scope detection on Abstracts.', 'In Table 2, CNN_C and CNN_D refer the CNN-based model with constituency paths and dependency paths, respectively (the same below).', 'It shows that our CNN-based models (both CNN_C and CNN_D) can achieve better performances than the baseline in most measurements.', 'This indicates that our CNN-based models can better extract and model effective features.', 'Besides, compared to the baseline, our CNN-based models consider fewer features and need less human intervention.', 'It also manifests that our CNN-based models improve significantly more on negation scope detection than on speculation scope detection.', 'Much of this is due to the better ability of our CNN-based models in identifying the right boundaries of scopes than the left ones on negation scope detection, with the huge gains of 29.44% and 25.25% on PCRB using CNN_C and CNN_D, respectively.', 'Table 2 illustrates that the performance of speculation scope detection is higher than that of negation (Best PCS: 85.75% vs 77.14%).', 'It is mainly attributed to the shorter scopes of negation cues.', 'Under the circumstances that the average length of negation sentences is almost as long as that of speculation ones (29.28 vs 29.77), shorter negation scopes mean that more tokens do not belong to the scopes, indicating more negative instances.', 'The imbalance between positive and negative instances has negative effects on both the baseline and the CNN-based models for negation scope detection.', 'Table 2 also shows that our CNN_D outperforms CNN_C in negation scope detection (PCS: 77.14% vs 70.86%), while our CNN_C performs better than CNN_D in speculation scope detection (PCS: 85.75% vs 74.43%).', 'To explore the results of our CNN-based models in details, we present the analysis of top 10 speculative and negative cues below on CNN_C and CNN_D, respectively.']
[None, None, ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'PCRB (%)'], ['PCS (%)'], None, None, ['Negation', 'CNN_C', 'CNN_D'], ['Negation', 'CNN_C', 'CNN_D'], None]
1
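The columns PCLB, PCRB and PCS in the record above are boundary-level measures for scope detection. Assuming the usual definitions from that literature (correct left boundary, correct right boundary, and fully correct scope, respectively), a sketch over hypothetical (left, right) token spans:

def boundary_scores(gold_spans, pred_spans):
    n = len(gold_spans)
    pclb = sum(g[0] == p[0] for g, p in zip(gold_spans, pred_spans)) / n
    pcrb = sum(g[1] == p[1] for g, p in zip(gold_spans, pred_spans)) / n
    pcs = sum(g == p for g, p in zip(gold_spans, pred_spans)) / n
    return pclb, pcrb, pcs

print(boundary_scores([(2, 9), (4, 7)], [(2, 9), (4, 8)]))   # -> (1.0, 0.5, 0.5)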
D16-1078table_4
Comparison of our CNN-based model with the state-of-the-art systems.
3
[['Spe', 'System', 'Morante (2009a)'], ['Spe', 'System', 'Özgür (2009)'], ['Spe', 'System', 'Velldal (2012)'], ['Spe', 'System', 'Zou (2013)'], ['Spe', 'System', 'Ours'], ['Neg', 'System', 'Morante (2008)'], ['Neg', 'System', 'Morante (2009b)'], ['Neg', 'System', 'Li (2010)'], ['Neg', 'System', 'Velldal (2012)'], ['Neg', 'System', 'Zou (2013)'], ['Neg', 'System', 'Ours']]
1
[['Abstracts'], ['Cli'], ['Papers']]
[['77.13', '60.59', '47.94'], ['79.89', 'N/A', '61.13'], ['79.56', '78.69', '75.15'], ['84.21', '72.92', '67.24'], ['85.75', '73.92', '59.82'], ['57.33', 'N/A', 'N/A'], ['73.36', '87.27', '50.26'], ['81.84', '89.79', '64.02'], ['74.35', '90.74', '70.21'], ['76.90', '85.31', '61.19'], ['77.14', '89.66', '55.32']]
column
['PCS', 'PCS', 'PCS']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Abstracts</th> <th>Cli</th> <th>Papers</th> </tr> </thead> <tbody> <tr> <td>Spe || System || Morante (2009a)</td> <td>77.13</td> <td>60.59</td> <td>47.94</td> </tr> <tr> <td>Spe || System || Özgür (2009)</td> <td>79.89</td> <td>N/A</td> <td>61.13</td> </tr> <tr> <td>Spe || System || Velldal (2012)</td> <td>79.56</td> <td>78.69</td> <td>75.15</td> </tr> <tr> <td>Spe || System || Zou (2013)</td> <td>84.21</td> <td>72.92</td> <td>67.24</td> </tr> <tr> <td>Spe || System || Ours</td> <td>85.75</td> <td>73.92</td> <td>59.82</td> </tr> <tr> <td>Neg || System || Morante (2008)</td> <td>57.33</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Neg || System || Morante (2009b)</td> <td>73.36</td> <td>87.27</td> <td>50.26</td> </tr> <tr> <td>Neg || System || Li (2010)</td> <td>81.84</td> <td>89.79</td> <td>64.02</td> </tr> <tr> <td>Neg || System || Velldal (2012)</td> <td>74.35</td> <td>90.74</td> <td>70.21</td> </tr> <tr> <td>Neg || System || Zou (2013)</td> <td>76.90</td> <td>85.31</td> <td>61.19</td> </tr> <tr> <td>Neg || System || Ours</td> <td>77.14</td> <td>89.66</td> <td>55.32</td> </tr> </tbody></table>
Table 4
table_4
D16-1078
9
emnlp2016
Table 4 compares our CNN-based models with the state-of-the-art systems. It shows that our CNN-based models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and can get comparable PCSs on Clinical Records (73.92% vs 78.69% for speculation scopes, 89.66% vs 90.74% for negation scopes). It is worth noting that Abstracts and Clinical Records come from different genres. It also displays that our CNN-based models perform worse than the state-of-the-art on Full Papers due to the complex syntactic structures of the sentences and the cross-domain nature of our evaluation. Although our evaluation on Clinical Records is cross-domain, the sentences in Clinical Records are much simpler and the results on Clinical Records are satisfactory. Remind that our CNN-based models are all trained on Abstracts. Another reason is that those state-of-the-art systems on Full Papers (e.g., Li et al., 2010; Velldal et al., 2012) are tree-based, instead of token-based. Li et al. (2010) proposed a semantic parsing framework and focused on determining whether a constituent, rather than a word, is in the scope of a negative cue. Velldal et al. (2012) presented a hybrid framework, combining a rule-based approach using dependency structures and a data-driven approach for selecting appropriate subtrees in constituent structures. Normally, tree-based models can better capture long-distance syntactic dependency than token-based ones. Compared to those tree-based models, however, our CNN-based model needs less manual intervention. To improve the performances of scope detection task, we will explore this alternative in our future work.
[1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 0]
['Table 4 compares our CNN-based models with the state-of-the-art systems.', 'It shows that our CNN-based models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and can get comparable PCSs on Clinical Records (73.92% vs 78.69% for speculation scopes, 89.66% vs 90.74% for negation scopes).', 'It is worth noting that Abstracts and Clinical Records come from different genres.', 'It also displays that our CNN-based models perform worse than the state-of-the-art on Full Papers due to the complex syntactic structures of the sentences and the cross-domain nature of our evaluation.', 'Although our evaluation on Clinical Records is cross-domain, the sentences in Clinical Records are much simpler and the results on Clinical Records are satisfactory.', 'Remind that our CNN-based models are all trained on Abstracts.', 'Another reason is that those state-of-the-art systems on Full Papers (e.g., Li et al., 2010; Velldal et al., 2012) are tree-based, instead of token-based.', 'Li et al. (2010) proposed a semantic parsing framework and focused on determining whether a constituent, rather than a word, is in the scope of a negative cue.', 'Velldal et al. (2012) presented a hybrid framework, combining a rule-based approach using dependency structures and a data-driven approach for selecting appropriate subtrees in constituent structures.', 'Normally, tree-based models can better capture long-distance syntactic dependency than token-based ones.', 'Compared to those tree-based models, however, our CNN-based model needs less manual intervention.', 'To improve the performances of scope detection task, we will explore this alternative in our future work.']
[['System'], ['Ours', 'Abstracts', 'Cli'], ['Abstracts', 'Cli'], ['Ours', 'System'], ['Cli'], ['Ours', 'Abstracts'], None, ['Li (2010)'], ['Velldal (2012)'], None, ['Ours'], None]
1
D16-1080table_4
Effects of embedding on performance. WEU, WENU, REU and RENU represent word embedding update, word embedding without update, random embedding update and random embedding without update respectively.
1
[['WEU'], ['WENU'], ['REU'], ['RENU']]
1
[['P'], ['R'], ['F1']]
[['80.74%', '81.19%', '80.97%'], ['74.10%', '69.30%', '71.62%'], ['79.01%', '79.75%', '79.38%'], ['78.16%', '64.55%', '70.70%']]
column
['P', 'R', 'F1']
['WEU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>WEU</td> <td>80.74%</td> <td>81.19%</td> <td>80.97%</td> </tr> <tr> <td>WENU</td> <td>74.10%</td> <td>69.30%</td> <td>71.62%</td> </tr> <tr> <td>REU</td> <td>79.01%</td> <td>79.75%</td> <td>79.38%</td> </tr> <tr> <td>RENU</td> <td>78.16%</td> <td>64.55%</td> <td>70.70%</td> </tr> </tbody></table>
Table 4
table_4
D16-1080
7
emnlp2016
Table 4 lists the effects of word embedding. We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding. The main reason is that the vocabulary size is 147,377, but the number of words from tweets that exist in the word embedding trained on the Google News dataset is just 35,133. This means that 76.2% of the words are missing. This also confirms that the proposed joint-layer RNN is more suitable for keyphrase extraction on Twitter.
[1, 1, 2, 2, 2]
['Table 4 lists the effects of word embedding.', 'We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding.', 'The main reason is that the vocabulary size is 147,377, but the number of words from tweets that exist in the word embedding trained on the Google News dataset is just 35,133.', 'This means that 76.2% of the words are missing.', 'This also confirms that the proposed joint-layer RNN is more suitable for keyphrase extraction on Twitter.']
[None, ['WEU', 'WENU', 'REU', 'RENU'], None, None, None]
1
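The four settings compared above differ in how the embedding matrix is seeded (pretrained where available vs. random) and whether it is updated during training. A minimal numpy sketch of the seeding step — an illustration, not the paper's code; `pretrained` is a hypothetical word-to-vector lookup such as the Google News word2vec vectors mentioned in the description:

import numpy as np

def build_embeddings(vocab, pretrained, dim=300, seed=0):
    rng = np.random.default_rng(seed)
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim))   # random fallback (RE*)
    hits = 0
    for i, word in enumerate(vocab):
        if word in pretrained:            # pretrained vector available (WE*)
            matrix[i] = pretrained[word]
            hits += 1
    print(f"coverage: {hits}/{len(vocab)} = {hits / len(vocab):.1%}")
    return matrix   # marked trainable (update) or frozen (no update) downstream

toy_vocab = ['good', 'film', 'rareword']
toy_vectors = {'good': np.ones(300), 'film': np.zeros(300)}
E = build_embeddings(toy_vocab, toy_vectors)   # coverage: 2/3 = 66.7%
# For the record above the coverage would be 35,133/147,377 ≈ 23.8%,
# i.e. about 76.2% of the vocabulary keeps a random vector.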
D16-1083table_3
Classification results across the behavioral features (BF), the reviewer embeddings (RE) , product embeddings (PE) and bigram of the review texts. Training uses balanced data (50:50). Testing uses two class distributions (C.D.): 50:50 (balanced) and Natural Distribution (N.D.). Improvements of our method are statistically significant with p<0.005 based on paired t-test.
3
[['Method', 'SPEAGLE+(80%)', '50.50.00'], ['Method', 'SPEAGLE+(80%)', 'N.D.'], ['Method', 'Mukherjee_BF', '50.50.00'], ['Method', 'Mukherjee_BF', 'N.D.'], ['Method', 'Mukherjee_BF+Bigram', '50.50.00'], ['Method', 'Mukherjee_BF+Bigram', 'N.D.'], ['Method', 'Ours_RE', '50.50.00'], ['Method', 'Ours_RE', 'N.D.'], ['Method', 'Ours_RE+PE', '50.50.00'], ['Method', 'Ours_RE+PE', 'N.D.'], ['Method', 'Ours_RE+PE+Bigram', '50.50.00'], ['Method', 'Ours_RE+PE+Bigram', 'N.D.']]
2
[['P', 'Hotel'], ['P', 'Restaurant'], ['R', 'Hotel'], ['R', 'Restaurant'], ['F1', 'Hotel'], ['F1', 'Restaurant'], ['A', 'Hotel'], ['A', 'Restaurant']]
[['75.7', '80.5', '83', '83.2', '79.1', '81.8', '81', '82.5'], ['26.5', '50.1', '56', '70.5', '36', '58.6', '80.4', '82'], ['82.4', '82.8', '85.2', '88.5', '83.7', '85.6', '83.8', '83.3'], ['41.4', '48.2', '84.6', '87.9', '55.6', '62.3', '82.4', '78.6'], ['82.8', '84.5', '86.9', '87.8', '84.8', '86.1', '85.1', '86.5'], ['46.5', '48.9', '82.5', '87.3', '59.4', '62.7', '84.9', '82.3'], ['83.3', '85.4', '88.1', '90.2', '85.6', '87.7', '85.5', '87.4'], ['47.1', '56.9', '83.5', '90.1', '60.2', '69.8', '85', '85.8'], ['83.6', '86', '89', '90.7', '86.2', '88.3', '85.7', '88'], ['47.5', '57.4', '84.1', '89.9', '60.7', '70.1', '85.3', '86.1'], ['84.2', '86.8', '89.9', '91.8', '87', '89.2', '86.5', '89.9'], ['48.2', '58.2', '85', '90.3', '61.5', '70.8', '85.9', '87.8']]
column
['P', 'P', 'R', 'R', 'F1', 'F1', 'A', 'A']
['Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P || Hotel</th> <th>P || Restaurant</th> <th>R || Hotel</th> <th>R || Restaurant</th> <th>F1 || Hotel</th> <th>F1 || Restaurant</th> <th>A || Hotel</th> <th>A || Restaurant</th> </tr> </thead> <tbody> <tr> <td>Method || SPEAGLE+(80%) || 50.50.00</td> <td>75.7</td> <td>80.5</td> <td>83</td> <td>83.2</td> <td>79.1</td> <td>81.8</td> <td>81</td> <td>82.5</td> </tr> <tr> <td>Method || SPEAGLE+(80%) || N.D.</td> <td>26.5</td> <td>50.1</td> <td>56</td> <td>70.5</td> <td>36</td> <td>58.6</td> <td>80.4</td> <td>82</td> </tr> <tr> <td>Method || Mukherjee_BF || 50.50.00</td> <td>82.4</td> <td>82.8</td> <td>85.2</td> <td>88.5</td> <td>83.7</td> <td>85.6</td> <td>83.8</td> <td>83.3</td> </tr> <tr> <td>Method || Mukherjee_BF || N.D.</td> <td>41.4</td> <td>48.2</td> <td>84.6</td> <td>87.9</td> <td>55.6</td> <td>62.3</td> <td>82.4</td> <td>78.6</td> </tr> <tr> <td>Method || Mukherjee_BF+Bigram || 50.50.00</td> <td>82.8</td> <td>84.5</td> <td>86.9</td> <td>87.8</td> <td>84.8</td> <td>86.1</td> <td>85.1</td> <td>86.5</td> </tr> <tr> <td>Method || Mukherjee_BF+Bigram || N.D.</td> <td>46.5</td> <td>48.9</td> <td>82.5</td> <td>87.3</td> <td>59.4</td> <td>62.7</td> <td>84.9</td> <td>82.3</td> </tr> <tr> <td>Method || Ours_RE || 50.50.00</td> <td>83.3</td> <td>85.4</td> <td>88.1</td> <td>90.2</td> <td>85.6</td> <td>87.7</td> <td>85.5</td> <td>87.4</td> </tr> <tr> <td>Method || Ours_RE || N.D.</td> <td>47.1</td> <td>56.9</td> <td>83.5</td> <td>90.1</td> <td>60.2</td> <td>69.8</td> <td>85</td> <td>85.8</td> </tr> <tr> <td>Method || Ours_RE+PE || 50.50.00</td> <td>83.6</td> <td>86</td> <td>89</td> <td>90.7</td> <td>86.2</td> <td>88.3</td> <td>85.7</td> <td>88</td> </tr> <tr> <td>Method || Ours_RE+PE || N.D.</td> <td>47.5</td> <td>57.4</td> <td>84.1</td> <td>89.9</td> <td>60.7</td> <td>70.1</td> <td>85.3</td> <td>86.1</td> </tr> <tr> <td>Method || Ours_RE+PE+Bigram || 50.50.00</td> <td>84.2</td> <td>86.8</td> <td>89.9</td> <td>91.8</td> <td>87</td> <td>89.2</td> <td>86.5</td> <td>89.9</td> </tr> <tr> <td>Method || Ours_RE+PE+Bigram || N.D.</td> <td>48.2</td> <td>58.2</td> <td>85</td> <td>90.3</td> <td>61.5</td> <td>70.8</td> <td>85.9</td> <td>87.8</td> </tr> </tbody></table>
Table 3
table_3
D16-1083
7
emnlp2016
The compared results are shown in Table 3. We utilize our learnt embeddings of reviewers (Ours RE), both of reviewers’ embeddings and products’ embeddings (Ours RE+PE), respectively. Moreover, to perform fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in classifier (Ours RE+PE+Bigram). From the results, we can observe that our method could outperform all state-of-the-arts in both the hotel and restaurant domains. It proves that our method is effective. Furthermore, the improvements in both the hotel and restaurant domains prove that our model possesses preferable domain-adaptability. It could represent the reviews more accurately and globally by learning from the original data, rather than the experts’ knowledge or assumption.
[1, 2, 2, 1, 1, 1, 2]
['The compared results are shown in Table 3.', 'We utilize our learnt embeddings of reviewers (Ours RE), both of reviewers’ embeddings and products’ embeddings (Ours RE+PE), respectively.', 'Moreover, to perform fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in classifier (Ours RE+PE+Bigram).', 'From the results, we can observe that our method could outperform all state-of-the-arts in both the hotel and restaurant domains.', 'It proves that our method is effective.', 'Furthermore, the improvements in both the hotel and restaurant domains prove that our model possesses preferable domain-adaptability.', 'It could represent the reviews more accurately and globally by learning from the original data, rather than the experts’ knowledge or assumption.']
[None, ['Ours_RE', 'Ours_RE+PE'], ['Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], None]
1
