Investigating Text Simplification Evaluation
Laura Vásquez-Rodríguez1, Matthew Shardlow2, Piotr Przybyła3, Sophia Ananiadou1
1National Centre for Text Mining,
The University of Manchester, Manchester, United Kingdom
2Department of Computing and Mathematics,
Manchester Metropolitan University, Manchester, United Kingdom
3Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
{laura.vasquezrodriguez, sophia.ananiadou} [email protected]
[email protected] [email protected]
Abstract
Modern text simplification (TS) heavily relies
on the availability of gold standard data to
build machine learning models. However, ex-
isting studies show that parallel TS corpora
contain inaccurate simplifications and incor-
rect alignments. Additionally, evaluation is
usually performed by using metrics such as
BLEU or SARI to compare system output to
the gold standard. A major limitation is that
these metrics do not match human judgements,
and their performance on different datasets and
linguistic phenomena varies greatly. Further-
more, our research shows that the test and
training subsets of parallel datasets differ sig-
nificantly. In this work, we investigate existing
TS corpora, providing new insights that will
motivate the improvement of existing state-of-
the-art TS evaluation methods. Our contribu-
tions include the analysis of TS corpora based
on existing modifications used for simplifica-
tion, and an empirical study of TS model per-
formance using better-distributed datasets.
We demonstrate that by improving the distribu-
tion of TS datasets, we can build more robust
TS models.
1 Introduction
Text Simplification transforms natural language
from a complex to a simple format, aiming not
only to reach wider audiences (Rello et al., 2013;
De Belder and Moens, 2010; Aluisio et al., 2010;
Inui et al., 2003) but also to serve as a preprocessing step in
related tasks (Shardlow, 2014; Silveira and Branco,
2012).
Simplifications are achieved by using parallel
datasets to train sequence-to-sequence text gen-
eration algorithms (Nisioi et al., 2017) to make
complex sentences easier to understand. These
datasets are typically produced by crowdsourcing (Xu et al.,
2016; Alva-Manchego et al., 2020a) or by align-
ment (Cao et al., 2020; Jiang et al., 2020). They are
infamously noisy, and models trained on them give
poor results when evaluated by humans (Cooper
and Shardlow, 2020). In this paper we add to the
growing narrative around the evaluation of natu-
ral language generation (van der Lee et al., 2019;
Caglayan et al., 2020; Pang, 2019), focusing on
parallel text simplification datasets and how they
can be improved.
Why do we need to re-evaluate TS resources?
In the last decade, TS research has relied on
Wikipedia-based datasets (Zhang and Lapata, 2017;
Xu et al., 2016; Jiang et al., 2020), despite their
known limitations (Xu et al., 2015; Alva-Manchego
et al., 2020a) such as questionable sentence pairs
alignments, inaccurate simplifications and a limited
variety of simplification modifications. Apart from
affecting the reliability of models trained on these
datasets, their low quality influences evaluation
relying on automatic metrics that require gold-
standard simplifications, such as SARI (Xu et al.,
2016) and BLEU (Papineni et al., 2001).
Hence, evaluation data resources must be further
explored and improved to achieve reliable evalu-
ation scenarios. There is a growing body of ev-
idence (Xu et al., 2015) (including this work) to
show that existing datasets do not contain accurate
and well-constructed simplifications, significantly
impeding the progress of the TS field.
Furthermore, well-known evaluation metrics
such as BLEU are not suitable for simplification
evaluation. According to previous research (Sulem
et al., 2018) BLEU does not significantly correlate
with simplicity (Xu et al., 2016), making it inap-
propriate for TS evaluation. Moreover, it does not
correlate (or the correlation is low) with grammati-
cality and meaning preservation when performing
syntactic simplification such as sentence splitting.
Therefore, in most recent TS research, BLEU has
not been considered a reliable evaluation metric.
We use SARI as the preferred method for TS evalu-
ation, which has also been used as the standard
evaluation metric in all the corpora analysed in this
research.
Our contributions include 1) the analysis of the
most common TS corpora based on quantifying
modifications used for simplification, evidencing
their limitations, and 2) an empirical study of TS
model performance using better-distributed
datasets. We demonstrate that by improving the
distribution of TS datasets, we can build TS mod-
els that gain a higher SARI score in our evaluation
setting.
2 Related Work
The exploration of neural networks in TS started
with the work of Nisioi et al. (2017), using
the largest parallel simplification resource avail-
able (Hwang et al., 2015). Neural-based work
focused on state-of-the-art deep learning and
MT-based methods, such as reinforcement learn-
ing (Zhang and Lapata, 2017), adversarial train-
ing (Surya et al., 2019), pointer-copy mecha-
nism (Guo et al., 2018), neural semantic en-
coders (Vu et al., 2018) and transformers supported
by paraphrasing rules (Zhao et al., 2018).
Other successful approaches include the usage
of control tokens to tune the level of simplification
expected (Alva-Manchego et al., 2020a; Scarton
and Specia, 2018) and the prediction of operations
using parallel corpora (Alva-Manchego et al., 2017;
Dong et al., 2020). These neural methods are trained
mostly on Wikipedia-based datasets, which vary in
size and in the quality of their alignments.
Xu et al. (2015) carried out a systematic study on
Wikipedia-based simplification resources, claim-
ing Wikipedia is not a quality resource, based on
the observed alignments and the type of simplifi-
cations. Alva-Manchego et al. (2020a) proposed
a new dataset, performing a detailed analysis in-
cluding edit distance and proportion of words that
are deleted, inserted and reordered, and evaluation
metrics performance for their proposed corpus.
Chasing the state-of-the-art is rife in NLP (Hou
et al., 2019), and no less so in TS, where a SARI
score is too often considered the main quality indi-
cator. However, recent work has shown that these
metrics are unreliable (Caglayan et al., 2020) and
gains in performance according to them may not de-
liver real improvements in simplification quality
when the text is presented to an end user.
3 Simplification Datasets: Exploration
3.1 Data and Methods
In the initial exploration of TS datasets, we investi-
gated the training, test and validation subsets (when
available) of the following: WikiSmall and Wiki-
Large (Zhang and Lapata, 2017), TurkCorpus (Xu
et al., 2015), MSD dataset (Cao et al., 2020), AS-
SET (Alva-Manchego et al., 2020a) and WikiMan-
ual (Jiang et al., 2020). For the WikiManual dataset,
we only considered sentences labelled as “aligned”.
We computed the number of changes between
the original and simplified sentences through the
token edit distance. Traditionally, edit distance
quantifies character-level changes from one char-
acter string to another (additions, deletions and re-
placements). In this work, we calculated the token-
based edit distance by adapting the Wagner–Fischer
algorithm (Wagner and Fischer, 1974) to determine
changes at a token level. We preprocessed the
sentences by lowercasing them prior
to this analysis. To make the results comparable
across sentences, we divide the number of changes
by the length of the original sentence, obtaining
values between 0% (no changes) and 100% (com-
pletely different sentence).
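To make the computation concrete, the following is a minimal sketch of the normalised token-level edit distance described above, using a Wagner–Fischer style dynamic programme over tokens rather than characters. The whitespace tokenisation and the helper names are illustrative assumptions, not the authors' exact implementation.

```python
def token_edit_distance(src_tokens, tgt_tokens):
    """Minimum number of token-level insertions, deletions and replacements
    (Wagner-Fischer dynamic programme applied to tokens instead of characters)."""
    n, m = len(src_tokens), len(tgt_tokens)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i          # delete all remaining source tokens
    for j in range(m + 1):
        dp[0][j] = j          # insert all remaining target tokens
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src_tokens[i - 1] == tgt_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # DELETE
                           dp[i][j - 1] + 1,         # INSERT
                           dp[i - 1][j - 1] + cost)  # REPLACE or match
    return dp[n][m]


def normalised_change(complex_sent, simple_sent):
    """Token edit distance divided by the length of the original (complex)
    sentence, after lowercasing, as described in Section 3.1."""
    src = complex_sent.lower().split()   # whitespace tokenisation is an assumption
    tgt = simple_sent.lower().split()
    if not src:
        return 0.0
    return token_edit_distance(src, tgt) / len(src)


if __name__ == "__main__":
    print(normalised_change("The feline was sitting upon the mat .",
                            "The cat sat on the mat ."))
```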
In addition to the token-based edit distance exper-
iments, we analysed the difference in sentence
length between complex and simple variants, the
number of each edit operation type (INSERT, DELETE
and REPLACE) and redundant oper-
ations, such as deletions and insertions in the same
sentence over the same text piece (we define this as
the MOVE operation). Given our objective of
showing how different split configurations affect TS
model performance, we present the percent-
age of edit operations as the most informative anal-
ysis performed on the most representative datasets.
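As an illustration of this operation-level analysis, the sketch below counts INSERT, DELETE and REPLACE operations per sentence pair and approximates the MOVE operation as tokens that are both deleted and inserted within the same pair. It relies on Python's difflib.SequenceMatcher as a stand-in for the adapted Wagner–Fischer backtrace, so the exact counts may differ from the authors' implementation.

```python
import difflib
from collections import Counter


def edit_operation_counts(complex_sent, simple_sent):
    """Count token-level INSERT, DELETE and REPLACE operations, plus a heuristic
    MOVE count for tokens that are both deleted and inserted in the same pair."""
    src = complex_sent.lower().split()
    tgt = simple_sent.lower().split()
    counts = Counter()
    deleted, inserted = [], []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=src, b=tgt).get_opcodes():
        if tag == "delete":
            counts["DELETE"] += i2 - i1
            deleted.extend(src[i1:i2])
        elif tag == "insert":
            counts["INSERT"] += j2 - j1
            inserted.extend(tgt[j1:j2])
        elif tag == "replace":
            counts["REPLACE"] += max(i2 - i1, j2 - j1)
            deleted.extend(src[i1:i2])
            inserted.extend(tgt[j1:j2])
    # A token removed in one place and re-inserted elsewhere is counted as a MOVE.
    counts["MOVE"] = sum((Counter(deleted) & Counter(inserted)).values())
    return counts


if __name__ == "__main__":
    print(edit_operation_counts("the committee did not approve the proposal",
                                "the proposal was not approved by the committee"))
```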
3.2 Edit Distance Distribution
Except for the recent work of Alva-Manchego et al.
(2020b), there has been little work on new TS
datasets. Most prior datasets are derived by align-
ing English and Simple English Wikipedia, for ex-
ample WikiSmall andWikiLarge (Zhang and La-
pata, 2017).
In Figure 1 we can see that the edit distance
distribution of the splits in the selected datasets is
not even. By comparing the test and development
subsets in WikiSmall (Figure 1a) we can see dif-
ferences in the number of modifications involved
in simplification.

Figure 1: Comparison of TS datasets with respect to the number of edit operations between the original and
simplified sentences. Panels: (a) WikiSmall Test/Dev/Train; (b) WikiLarge Test/Dev/Train; (c) TurkCorpus Test;
(d) MSD Test; (e) ASSET Test; (f) WikiManual Test/Dev/Train. X-axis: token edit distance normalised by sentence
length. Y-axis: probability density for the change percentage between complex and simple sentence pairs.

Moreover, the WikiLarge dataset
(Figure 1b) shows a complete divergence of the test
subset. Additionally, it is possible to notice a signif-
icant number of unaligned or noisy cases, with
between 80% and 100% change, in the WikiLarge
training and validation subsets (Figure 1b).
We manually checked a sample of these cases
and confirmed they were poor-quality simplifica-
tions, including incorrect alignments. The simplifi-
cation outputs (complex/simple pairs) were sorted
by their edit distances and then manually checked
to determine an approximate heuristic for noisy sen-
tence detection. Since many of these alignments
were of very poor quality, it was easy to choose a
threshold that removed a significant number of noisy
cases without dramatically reducing the size of
the dataset.
Datasets such as Turk Corpus (Xu et al., 2015)
are widely used for evaluation and their opera-
tions mostly consist of lexical simplification (Alva-
Manchego et al., 2020a). We can see this behaviour
in Figure 1c, where most edits involve a small per-
centage of the tokens. This can be noticed in the
large proportion of sample cases falling between
0% (no change) and 40%.
In the search for better evaluation resources, Turk-
Corpus was improved upon with the development of
ASSET (Alva-Manchego et al., 2020a), which includes
more heterogeneous modifications. As
we can see in Figure 1e, the data are more evenly
distributed than in Figure 1c.
Recently proposed datasets, such as WikiMan-
ual (Jiang et al., 2020), as shown in Figure 1f, have
an approximately consistent distribution, and their
simplifications are less conservative. Based on a
visual inspection of the uppermost values of the
distribution (≥80%), we can tell that often most
of the information in the original sentence is re-
moved or the target simplification does not accurately
express the original meaning.
The MSD dataset (Cao et al., 2020) is a domain-
specific dataset, developed for style transfer in the
health domain. In the style transfer setting, the
simplifications are aggressive (i.e., not limited to
individual words), to promote the detection of a
difference between one style (expert language) and
another (lay language). Figure 1d shows how their
change-percentage distribution differs dramatically
in comparison to the other datasets, placing most
of the results at the right side of the distribution.
Among TS datasets, it is important to mention
that the raw text of the Newsela (Xu et al., 2015)
dataset was produced by professional writers and is
likely of higher quality than other TS datasets. Un-
fortunately, it is not aligned at the sentence level by
default and its usage and distribution are limited by
a restrictive data agreement. We have not included
this dataset in our analysis due to the restrictive
licence under which it is distributed.

Dataset      Split     KL-div   p-value
WikiSmall    Test/Dev  0.0696   0.51292
WikiSmall    Test/Tr   0.0580   0.83186
WikiLarge    Test/Dev  0.4623   <0.00001
WikiLarge    Test/Tr   0.4639   <0.00001
WikiManual   Test/Dev  0.1020   0.00003
WikiManual   Test/Tr   0.0176   0.04184
TurkCorpus   Test/Dev  0.0071   0.00026
ASSET        Test/Dev  0.0491   <0.00001

Table 1: KL-divergence between testing (Test) and de-
velopment (Dev) or training (Tr) subsets.
3.3 KL Divergence
In addition to edit distance measurements presented
in Figure 1, we further analysed KL divergence
(Kullback and Leibler, 1951) of those distributions
to understand how much dataset subsets diverge.
Specifically, we compared the distribution of the
test set to the development and training sets for
WikiSmall, WikiLarge, WikiManual, TurkCorpus
and ASSET Corpus (when available). We did not
include the MSD dataset since it only has a test set.
We performed randomised permutation
tests (Morgan, 2006) to confirm the statistical
significance of our results. Each dataset was
joined together and split randomly for 100,000
iterations. We then computed the p-value as a
percentage of random splits that result in a KL
value equal to or higher than the one observed in
the data. Based on the p-value, we can decide
whether the null hypothesis (i.e. that the original
splits are truly random) should be rejected. We reject
the hypothesis for p-values lower than 0.05. In
Table 1 we show the computed KL-divergence and
p-values. The p-values below 0.05 for WikiManual
and WikiLarge confirm that these datasets do not
follow a truly random distribution.
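A minimal sketch of this analysis is shown below: it estimates the KL divergence between histogram approximations of two normalised edit-distance samples and computes the permutation-based p-value by repeatedly re-splitting the pooled data. The number of bins, the smoothing constant and the reduced default iteration count are assumptions; the paper uses 100,000 random splits.

```python
import numpy as np


def kl_divergence(p_samples, q_samples, bins=20, eps=1e-9):
    """KL(P || Q) between histogram estimates of two normalised edit-distance
    samples. Values are clipped to [0, 1]; a small epsilon avoids empty bins."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(np.clip(p_samples, 0.0, 1.0), bins=edges)
    q, _ = np.histogram(np.clip(q_samples, 0.0, 1.0), bins=edges)
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))


def permutation_test(test_vals, other_vals, n_iter=10_000, seed=0):
    """Randomised permutation test: the p-value is the fraction of random
    re-splits of the pooled data whose KL divergence is >= the observed one."""
    rng = np.random.default_rng(seed)
    observed = kl_divergence(test_vals, other_vals)
    pooled = np.concatenate([test_vals, other_vals])
    n_test = len(test_vals)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # shuffle in place, then re-split at the same sizes
        if kl_divergence(pooled[:n_test], pooled[n_test:]) >= observed:
            hits += 1
    return observed, hits / n_iter
```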
4 Simplification Datasets: Experiments
We carried out the following experiments to eval-
uate the variability in the performance of TS models
caused by the issues described above in Wikipedia-based data.
4.1 Data and Methods
For the proposed experiments, we used the
EditNTS model, a Programmer-Interpreter
Model (Dong et al., 2020). Although the original
code was published, its implementation required
minor modifications to run in our setting. The
modifications performed, the experimental subsets,
as well as the source code, are documented on
GitHub1. We selected the EditNTS model due to its
competitive performance on both the WikiSmall and
WikiLarge datasets2. Hence, we consider this
model a suitable candidate for evaluating the
different limitations of TS datasets. In future work,
we will consider testing our assumptions
with additional metrics and models.
In relation to TS datasets, we trained our mod-
els on the training and development subsets of
WikiLarge and WikiSmall, which are widely used
in TS research. In addition, these datasets have
a train, development and test set, which is essen-
tial for retraining and testing the model with new
split configurations. The model was first trained
with the original splits, and then with the following
variations:
Randomised split: as explained in Section 3.3,
the original WikiLarge split does not have an even
distribution of edit-distance pairs between subsets.
For this experiment, we resampled two of our
datasets (WikiSmall and WikiLarge). For each
dataset, we joined all subsets together and per-
formed a new random split.
Refined and randomised split: we created sub-
sets that minimise the impact of poor alignments.
These alignments were selected by edit distance
and then subsets were randomised as above. We
presume that the high-distance cases correspond
to noisy and misaligned sentences. For both Wik-
iSmall and WikiLarge, we reran our experiments
removing 5% and 2% of the worst alignments.
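The two split variations can be sketched as follows, reusing the normalised_change helper from the edit-distance sketch in Section 3.1. Representing a dataset as a list of (complex, simple) pairs and the 80/10/10 proportions are illustrative assumptions; only the keep fractions (95% and 98%) come from the text.

```python
import random


def randomised_split(pairs, dev_frac=0.1, test_frac=0.1, seed=42):
    """Pool all complex/simple pairs and draw a fresh random train/dev/test split.
    The 80/10/10 proportions are an illustrative assumption."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_dev = int(len(pairs) * dev_frac)
    n_test = int(len(pairs) * test_frac)
    dev = pairs[:n_dev]
    test = pairs[n_dev:n_dev + n_test]
    train = pairs[n_dev + n_test:]
    return train, dev, test


def refined_randomised_split(pairs, keep_fraction=0.95, **split_kwargs):
    """Drop the highest edit-distance pairs (presumed misalignments), then re-split.
    keep_fraction=0.95 removes the worst 5% of alignments; 0.98 removes the worst 2%."""
    ranked = sorted(pairs, key=lambda p: normalised_change(p[0], p[1]))
    kept = ranked[:int(len(ranked) * keep_fraction)]
    return randomised_split(kept, **split_kwargs)
```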
Finally, we evaluated the models by using the
test subsets of external datasets, including: Turk-
Corpus, ASSET and WikiManual.
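The paper does not state which SARI implementation was used for this evaluation step; one common option is the EASSE toolkit, and the sketch below assumes its corpus_sari function (the import path and signature are assumptions and should be checked against the installed version). Scores are computed against each reference set separately and then averaged, mirroring the per-reference averaging mentioned in footnote 3 for TurkCorpus and ASSET.

```python
from statistics import mean

from easse.sari import corpus_sari  # assumed API; verify against your EASSE version


def evaluate_sari(orig_sents, sys_sents, references):
    """Average corpus-level SARI over each reference set separately.
    `references` is a list of reference lists, each aligned with orig_sents."""
    scores = [corpus_sari(orig_sents, sys_sents, [ref])  # one reference set at a time
              for ref in references]
    return mean(scores)
```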
5 Discussion
Figure 2 shows the results for WikiSmall. We can
see a minor decrease in SARI score with the ran-
dom splits, which suggests that the noisy alignments
were spread evenly across all the subsets rather than
the best cases being concentrated in training. On the other hand,
when the noisy cases are removed from the datasets,
the increase in model performance is clear.
Likewise, we show WikiLarge results in Figure
3. When the data is randomly distributed, we obtain
better performance than the original splits. This
is consistent with WikiLarge having the largest
discrepancy according to our KL-divergence mea-
surements, as shown in Section 3.3.

Figure 2: SARI scores for evaluating WikiSmall-based
models on external test sets.

1 https://github.com/lmvasque/ts-explore
2 https://github.com/sebastianruder/NLP-progress/blob/master/english/simplification.md

We also found
that the 95% split gave a similar behaviour to Wiki-
Large Random. Meanwhile, the 98% dataset gave
a similar performance to the original splits for AS-
SET and TurkCorpus3.
We can also note that although there is a per-
formance difference between WikiSmall Random
and WikiSmall 95%, in WikiLarge the same splits
have quite similar results. We believe these dis-
crepancies are related to the size and distribution
of the training sets. The WikiLarge subset is three
times bigger than WikiSmall in the number of sim-
ple/complex pairs. Also, WikiLarge has a higher
KL-divergence (0.46) than WikiSmall (∼0.06),
which means that WikiLarge could benefit more
from a random distribution experiment than Wik-
iSmall, resulting in higher performance on Wiki-
Large. Further differences may be caused by the
procedures used to make the training/test splits in
the original research, which were not described in
the accompanying publications.
Using randomised permutation testing, we have
confirmed that the SARI differences between the
models based on the original split and our best
alternative (95% refined) are statistically significant
(p < 0.05) for each configuration discussed above.
In this study, we have shown the limitations of
TS datasets and the variations in performance across
different split configurations. However, exist-
ing evidence cannot determine which is the most
suitable split, especially since this could depend
on the specific scenario or target audience (e.g.,
modelling data similar to “real world” applications).
3 ASSET and TurkCorpus results are an average over their
multiple reference scores.
Figure 3: SARI scores for evaluating WikiLarge-based
models on external test sets.
Also, we have measured our results using SARI,
not only because it is the standard evaluation metric
in TS but also because there are no better automatic
alternatives for measuring simplicity. We use SARI
as a way to expose and quantify the limitations of
state-of-the-art TS datasets. The increase in SARI scores should be
interpreted as variability in the relative quality
of the output simplifications. By relative we mean
that there is a change in simplicity gain, but we
cannot state that the simplification is at its best quality,
since the metric itself has its own weaknesses.
6 Conclusions
In this paper, we have shown 1) the statistical limita-
tions of TS datasets, and 2) the relevance of subset
distribution for building more robust models. To
our knowledge, distribution-based analysis of TS
datasets has not been considered before. We hope that
the exposure of these limitations kicks off a discus-
sion in the TS community on whether we are moving
in the right direction regarding evaluation resources in
TS and more widely in NLG. The creation of new
resources is expensive and complex; however, we
have shown that current resources can be refined,
motivating future studies in the field of TS.
Acknowledgments
We would like to thank Nhung T.H. Nguyen and
Jake Vasilakes for their valuable discussions and
comments. Laura Vásquez-Rodríguez’s work was
funded by the Kilburn Scholarship from the Uni-
versity of Manchester. Piotr Przybyła’s work was
supported by the Polish National Agency for Aca-
demic Exchange through a Polish Returns grant
number PPN/PPO/2018/1/00006.
References
Sandra Aluisio, Lucia Specia, Caroline Gasperin, and
Carolina Scarton. 2010. Readability assessment for
text simplification. Proceedings of the NAACL HLT
2010 Fifth Workshop on Innovative Use of NLP for
Building Educational Applications , pages 1–9.
Fernando Alva-Manchego, Joachim Bingel, Gustavo H
Paetzold, Carolina Scarton, and Lucia Specia. 2017.
Learning How to Simplify From Explicit Labeling
of Complex-Simplified Text Pairs. Proceedings of
the Eighth International Joint Conference on Natu-
ral Language Processing (Volume 1: Long Papers) ,
pages 295–305.
Fernando Alva-Manchego, Louis Martin, Antoine Bor-
des, Carolina Scarton, Benoît Sagot, and Lucia Spe-
cia. 2020a. ASSET: A Dataset for Tuning and Eval-
uation of Sentence Simplification Models with Mul-
tiple Rewriting Transformations. arXiv .
Fernando Alva-Manchego, Louis Martin, Antoine Bor-
des, Carolina Scarton, Benoît Sagot, and Lucia Spe-
cia. 2020b. ASSET: A dataset for tuning and eval-
uation of sentence simplification models with multi-
ple rewriting transformations. In Proceedings of the
58th Annual Meeting of the Association for Compu-
tational Linguistics , pages 4668–4679, Online. As-
sociation for Computational Linguistics.
Ozan Caglayan, Pranava Madhyastha, and Lucia Spe-
cia. 2020. Curious case of language generation
evaluation metrics: A cautionary tale. In Proceed-
ings of the 28th International Conference on Com-
putational Linguistics , pages 2322–2328, Barcelona,
Spain (Online). International Committee on Compu-
tational Linguistics.
Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan,
Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise
Style Transfer: A New Task Towards Better Com-
munication between Experts and Laymen. In arXiv ,
pages 1061–1071. Association for Computational
Linguistics (ACL).
Michael Cooper and Matthew Shardlow. 2020. Com-
biNMT: An exploration into neural text simplifica-
tion models. In Proceedings of the 12th Language
Resources and Evaluation Conference , pages 5588–
5594, Marseille, France. European Language Re-
sources Association.
Jan De Belder and Marie-Francine Moens. 2010. Text
Simplification for Children. Proceedings of the SI-
GIR Workshop on Accessible Search Systems , pages
19–26.
Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and
Jackie Chi Kit Cheung. 2020. EditNTS: A neu-
ral programmer-interpreter model for sentence sim-
plification through explicit editing. In ACL 2019 -
57th Annual Meeting of the Association for Com-
putational Linguistics, Proceedings of the Confer-
ence, pages 3393–3402. Association for Computa-
tional Linguistics (ACL).
Han Guo, Ramakanth Pasunuru, and Mohit Bansal.
2018. Dynamic Multi-Level Multi-Task Learning
for Sentence Simplification. In Proceedings of the
27th International Conference on Computational
Linguistics (COLING 2018) , pages 462–476.
Yufang Hou, Charles Jochim, Martin Gleize, Francesca
Bonin, and Debasis Ganguly. 2019. Identifica-
tion of tasks, datasets, evaluation metrics, and nu-
meric scores for scientific leaderboards construction.
In Proceedings of the 57th Annual Meeting of the
Association for Computational Linguistics , pages
5203–5213, Florence, Italy. Association for Compu-
tational Linguistics.
William Hwang, Hannaneh Hajishirzi, Mari Ostendorf,
and Wei Wu. 2015. Aligning sentences from stan-
dard Wikipedia to simple Wikipedia. In NAACL
HLT 2015 - 2015 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Proceed-
ings of the Conference , pages 211–217. Association
for Computational Linguistics (ACL).
Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu
Iida, and Tomoya Iwakura. 2003. Text Simplifica-
tion for Reading Assistance: A Project Note. In
Proceedings of the Second International Workshop
on Paraphrasing - Volume 16 , PARAPHRASE ’03,
pages 9–16, USA. Association for Computational
Linguistics (ACL).
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang
Zhong, and Wei Xu. 2020. Neural CRF Model
for Sentence Alignment in Text Simplification. In
arXiv, pages 7943–7960. arXiv.
S. Kullback and R. A. Leibler. 1951. On Information
and Sufficiency. The Annals of Mathematical Statis-
tics, 22(1):79–86.
Chris van der Lee, Albert Gatt, Emiel van Miltenburg,
Sander Wubben, and Emiel Krahmer. 2019. Best
practices for the human evaluation of automatically
generated text. In Proceedings of the 12th Interna-
tional Conference on Natural Language Generation ,
pages 355–368, Tokyo, Japan. Association for Com-
putational Linguistics.
William Morgan. 2006. Statistical Hypothesis Tests for
NLP or: Approximate Randomization for Fun and
Profit.
Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto,
and Liviu P. Dinu. 2017. Exploring neural text sim-
plification models. In ACL 2017 - 55th Annual Meet-
ing of the Association for Computational Linguistics,
Proceedings of the Conference (Long Papers) , vol-
ume 2, pages 85–91. Association for Computational
Linguistics (ACL).
Richard Yuanzhe Pang. 2019. The Daunting Task of
Real-World Textual Style Transfer Auto-Evaluation.
arXiv.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2001. BLEU: a method for automatic eval-
uation of machine translation. ACL, Proceedings
of the 40th Annual Meeting of the Association for
Computational Linguistics (July):311–318.
Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Ho-
racio Saggion. 2013. Simplify or help? Text simpli-
fication strategies for people with dyslexia. In W4A
2013 - International Cross-Disciplinary Conference
on Web Accessibility .
Carolina Scarton and Lucia Specia. 2018. Learning
simplifications for specific target audiences. In ACL
2018 - 56th Annual Meeting of the Association for
Computational Linguistics, Proceedings of the Con-
ference (Long Papers) , volume 2, pages 712–718,
Stroudsburg, PA, USA. Association for Computa-
tional Linguistics.
Matthew Shardlow. 2014. A Survey of Automated Text
Simplification. International Journal of Advanced
Computer Science and Applications , 4(1).
Sara Botelho Silveira and António Branco. 2012. En-
hancing multi-document summaries with sentence
simplification. In Proceedings of the 2012 Inter-
national Conference on Artificial Intelligence, ICAI
2012 , volume 2, pages 742–748.
Elior Sulem, Omri Abend, and Ari Rappoport. 2018.
BLEU is Not Suitable for the Evaluation of Text
Simplification. In Proceedings of the 2018 Con-
ference on Empirical Methods in Natural Language
Processing , pages 738–744, Stroudsburg, PA, USA.
Association for Computational Linguistics.
Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain,
and Karthik Sankaranarayanan. 2019. Unsupervised
Neural Text Simplification. ACL 2019 - 57th An-
nual Meeting of the Association for Computational
Linguistics, Proceedings of the Conference , pages
2058–2068.
Tu Vu, Baotian Hu, Tsendsuren Munkhdalai, and Hong
Yu. 2018. Sentence simplification with memory-
augmented neural networks. In NAACL HLT 2018
- 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu-
man Language Technologies - Proceedings of the
Conference , volume 2, pages 79–85. Association for
Computational Linguistics (ACL).
Robert A. Wagner and Michael J. Fischer. 1974. The
String-to-String Correction Problem. Journal of the
ACM (JACM) , 21(1):168–173.
Wei Xu, Chris Callison-Burch, and Courtney Napoles.
2015. Problems in Current Text Simplification Re-
search: New Data Can Help. Transactions of the As-
sociation for Computational Linguistics , 3:283–297.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze
Chen, and Chris Callison-Burch. 2016. Optimizing
Statistical Machine Translation for Text Simplifica-
tion. Transactions of the Association for Computa-
tional Linguistics, 4:401–415.
Xingxing Zhang and Mirella Lapata. 2017. Sentence
Simplification with Deep Reinforcement Learning.
In EMNLP 2017 - Conference on Empirical Meth-
ods in Natural Language Processing, Proceedings ,
pages 584–594. Association for Computational Lin-
guistics (ACL).
Sanqiang Zhao, Rui Meng, Daqing He, Saptono Andi,
and Parmanto Bambang. 2018. Integrating trans-
former and paraphrase rules for sentence simplifi-
cation. In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Process-
ing, EMNLP 2018 , pages 3164–3173. Association
for Computational Linguistics.