Root cause prediction based on bug reports
Thomas Hirsch
Institute of Software Technology
Graz University of Technology
Graz, Austria
[email protected]
Birgit Hofer
Institute of Software Technology
Graz University of Technology
Graz, Austria
[email protected]
Abstract—This paper proposes a supervised machine learning
approach for predicting the root cause of a given bug report.
Knowing the root cause of a bug can help developers in the de-
bugging process—either directly or indirectly by choosing proper
tool support for the debugging task. We mined 54 755 closed
bug reports from the issue trackers of 103 GitHub projects and
applied a set of heuristics to create a benchmark consisting of
10 459 reports. A subset was manually classified into three groups
(semantic, memory, and concurrency) based on the bugs’ root
causes. Since the types of root cause are not equally distributed, a
combination of keyword search and random selection was applied.
Our data set for the machine learning approach consists of 369 bug
reports (122 concurrency, 121 memory, and 126 semantic bugs).
The bug reports are used as input to a natural language processing
algorithm. We evaluated the performance of several classifiers
for predicting the root causes for the given bug reports. Linear
Support Vector machines achieved the highest mean precision
(0.74) and recall (0.72) scores. The created bug data set and
classification are publicly available.
Index Terms—bug report, bug benchmark, root cause prediction
I. INTRODUCTION
Debugging is one of the most time-consuming parts in the
software development process. While there exist numerous
fault localization [1] and repair [2] techniques to support
programmers in the debugging process, it is often unclear which
techniques work best for a given bug. For this reason, Sobreira et
al.[3] investigated the structure of Defects4J [4] bugs. For each
bug, they determined the size of the patch, the repair action,
and the change pattern. They have invited other researchers to
investigate which types of bugs1 can be handled by their repair
tools.
In this paper, we change the perspective of this research topic:
instead of only providing root cause information for a bench-
mark to help researchers in evaluating their tools, we predict
the root cause for a given bug description so that programmers
can choose a proper tool for their debugging problem. There
are tools that focus on concurrency (e.g. ConcBugAssist [5])
or memory (e.g. Valgrind) bugs, while others are better suited
for semantic bugs (e.g. Jaguar [6]). While some root causes
can easily be determined when reading a bug report, other
0©2020 IEEE. Personal use of this material is permitted. Permission from
IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to servers
or lists, or reuse of any copyrighted component of this work in other works.
1 http://program-repair.org/defects4j-dissection/
root causes are not that obvious. Consider for example issue ticket #514 from the TwelveMonkeys project2:
TIFF: Invalid StripByteCounts when writing large resolution
files (9800x8000)
Hello, when writing a high resolution tiff file the stripByteCounts
appears to be corrupt. An approx 300 mb output file has a single
image strip with the byte count of: 4071696385 which is larger than
the file itself. However when working with lower (more common)
resolutions the meta for the image strips is created properly. [. . . ]
This code creates the file with the incorrect meta data:
// Input high resolution 48 bit depth
final InputStream inStream = [. . . ]
Attaching zipped image: 9800x8000 resolution 48bit depth.zip
I’ve tested and reproduced the issue with the following versions:
3.4.1, 3.4.2, 3.4.3
Thanks in advance,
-Jesse
Our goal is to provide information to the programmer about
the root cause of this bug. For instance, the incorrect byte
count mentioned in this bug report together with the information
about high resolution can raise suspicion of an integer overflow
occurring.
We propose a supervised machine learning (ML) approach
that uses the bug description from issue tickets to predict the
root cause of the bug. For processing the text from the issue
tickets, we make use of natural language processing (NLP). For
creating the training set, we have mined bug reports from 103
GitHub projects and manually examined a subset, classifying
them as memory, concurrency or semantic bugs based on the
actual fix. Since the number of concurrency and memory bugs
is usually very low [7], we have performed a keyword search in
the commit messages of fixes to find more instances with these
root causes.
While the primary goal of this paper is the root cause
prediction approach, the generated training data can be used
as a benchmark for specific types of faults. Often, researchers
focus on certain bug types when developing a fault localization
or repair method. While these approaches have a high potential,
their evaluation is often limited to a few real-world bugs or
artificially seeded bugs, as mentioned in [8]. The training data
set created in this paper can be used as a bug benchmark by
researchers who are interested in certain types of bugs. It can
be seen as a Java counterpart to the C/C++ benchmark BugBench
that also distinguishes memory, concurrency, and semantic bugs.
Furthermore, it can be used to evaluate information retrieval
based bug localization approaches [9].
2 https://github.com/haraldk/TwelveMonkeys/issues/514
The contributions of this work can be summarized as:
• a machine learning approach for predicting the root cause for a given bug report with a mean precision of 0.74 and a mean recall of 0.72,
• a data set consisting of 10 459 bug reports and fixes from 103 GitHub repositories,
• a data set of 122 concurrency, 121 memory, and 269 semantic bugs with detailed sub-categories, and
• a framework for building such data sets.
The created data sets, all scripts, and the categorization are
publicly available.3 The structure of this paper is as follows:
Section II introduces the main root cause categories and their
sub-categories. Section III explains how we have collected
closed bug reports and their corresponding fixes. Section IV
presents the machine learning approach. We discuss the results
and threats to validity in Section V. Section VI discusses the
related work and Section VII concludes the paper.
II. CLASSIFICATION SCHEMA
We use three main categories and 18 detailed root causes as
described in Table I. The semantic and memory sub-categories
are based on Tan et al. [7]; the concurrency sub-categories are
based on Zhou et al. [10].
A problem with post mortem bug classification arises from the
often unclear separation of the actual fix from other code
changes, e.g., commits that include more than one bug fix,
commits that include the bug fix alongside some refactoring or a new extension, or bug fixes that are scattered over multiple
commits. Additionally, it is difficult to distinguish a workaround
from a fix [11]. All of the above make it hard to correctly
identify the fix and to properly categorize the root cause. To deal
with these issues, we have added a confidence value ranging from 1 to 10 that reflects our confidence in the correctness of our classification: a confidence level of 10 indicates showcase quality; 9 indicates that we are very confident about the main category and the subcategory; 8 indicates that we are very confident about the assigned main category and subcategory, but a different subcategory cannot be ruled out with 100 % certainty.
For example, differentiating “processing” and “missing case”
is often not possible without having the knowledge of the
programmer who wrote the code. A confidence level of 7 or be-
low indicates doubts about the chosen subcategory. Confidence levels between 3 and 5 indicate strong confidence in the main category, but the subcategory could not be identified. A
confidence level of 2 indicates doubts about the main category
while a level of 1 indicates that it was not possible to determine
the main root cause category for the bug.
III. DATA ACQUISITION
In this section, we provide details on the collection of the
bug data set that builds the basis for creating the training set
for the machine learning approach.
Purpose of the data set. The data set should provide a realistic
distribution of different bug types, and should serve as a basis for
3 https://doi.org/10.5281/zenodo.3973048
TABLE I
ROOT CAUSE CATEGORIES AND SUB-CATEGORIES
Semantic Description
Exception handl. Missing or improper exception handling.
Missing case Faults due to unawareness of a certain case or simply
a forgotten implementation.
Processing Incorrect implementation (e.g. miscalculations, in-
correct method output, wrong method/library usage).
Typo Ambiguous naming, typos in SQL calls/URLs/paths.
Dependency The code can be built but behaves unexpectedly because of changes in a foreign system (e.g. update of a utilized library or the underlying OS).
Other All other semantic faults.
Memory
Buffer overflow Buffer overflows, not overflowing numeric types.
Null pointer deref. All null pointer dereferences.
Uninit. mem. read All uninitialized memory reads except null pointer
dereference.
Memory leak Memory leak.
Dangling pointer Dangling pointer.
Double free Double free.
Other All other memory bugs.
Concurrency
Order violation Missing or incorrect synchronization, e.g. object is
dereferenced by thread B before it is initialized by
thread A.
Race condition Two or more threads access the same resource with
at least one being a write access and the access is
not ordered properly.
Atomic violation Constraints on the interleaving of operations are missing. This happens when atomicity of a certain code region was assumed but not guaranteed by the implementation.
Deadlock Two or more threads each wait for another thread to release a resource.
Other All other concurrency bugs.
various fault localization and ML experiments. The bugs should be real-world Java bugs.
Project selection. We chose 103 GitHub Java projects to
source our data set. Primary selection criteria were a well-known organization driving the project or a high star rating on GitHub. However, the list also contains lesser-known projects that were already used in other research [4],
[12], [13]. The selection process was performed manually. All of
the projects utilize GitHub’s built-in issue tracker together with
its labeling system, and have at least 100 closed issues identified
as bugs. The project sizes range from 13k Java LOC (Lines Of
Code) for HikariCP4 to 1.7M Java LOC for Elasticsearch5. The
full list of mined projects can be found in the online appendix3.
Bug ticket identification. We identified bugs via the labels
used in the issue tickets and we only considered closed issue
tickets. In order to omit feature requests, maintenance tickets
and other non-bug issues, we only considered issues whose
labels contain “bug”, “defect”, or “regression”.
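For illustration, this label-based selection can be sketched as follows, assuming the GitHub REST API and the Python requests library; the repository name is just one example from the data set, and the client-side substring matching mirrors the label criterion described above. This is a sketch under these assumptions, not our actual mining scripts.

import requests

BUG_LABELS = ("bug", "defect", "regression")

def closed_bug_issues(repo, token=None):
    """Yield closed issues whose label names contain 'bug', 'defect', or 'regression'."""
    headers = {"Authorization": f"token {token}"} if token else {}
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": "closed", "per_page": 100, "page": page},
            headers=headers,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for issue in batch:
            if "pull_request" in issue:  # the issues endpoint also returns pull requests
                continue
            names = [label["name"].lower() for label in issue["labels"]]
            if any(key in name for name in names for key in BUG_LABELS):
                yield issue
        page += 1

# Example usage (subject to GitHub API rate limits without a token):
# n = sum(1 for _ in closed_bug_issues("haraldk/TwelveMonkeys"))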
Filtering criteria. GitHub automatically links commits to
issue tickets based on issue ids and provides this data together
with issue tickets. We only consider issues for which at least
4 https://github.com/brettwooldridge/HikariCP
5 https://github.com/elastic/elasticsearch
one commit is linked to the issue, and all linked commits are
still available in the Git repository. If any of the commits are
linked to multiple issues, or the commit message suggests that
the fix was done alongside other changes, the issue is discarded. As
of writing this, we omit issues whose commits do not contain
changes in production Java code. We plan to lift this limitation in the future to incorporate other root causes, e.g. in the documentation or the build system.
We use Gumtree Spoon AST Diff6[14] to create Java
aware diffs. To manage overwhelming runtime and memory
requirements arising from the size of the data set, we limit the
size and number of the commits per issue. We only consider
issues where the number of commits linked to the issue is
smaller than 10, the number of files changed per commit is
smaller than 20, and the number of lines changed per commit
is smaller than 250. Our analysis shows that these limitations remove less than 3 % of the issues.
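As an illustration of these thresholds, a simple predicate could look as follows; the record layout (an issue represented as a list of per-commit change statistics with hypothetical keys) only serves to encode the limits named above.

# Size limits from the text: fewer than 10 commits per issue, fewer than
# 20 changed files and fewer than 250 changed lines per commit.
MAX_COMMITS_PER_ISSUE = 10
MAX_FILES_PER_COMMIT = 20
MAX_LINES_PER_COMMIT = 250

def within_size_limits(commit_stats):
    """commit_stats: list of dicts with hypothetical keys 'files_changed' and 'lines_changed'."""
    if len(commit_stats) >= MAX_COMMITS_PER_ISSUE:
        return False
    return all(
        c["files_changed"] < MAX_FILES_PER_COMMIT
        and c["lines_changed"] < MAX_LINES_PER_COMMIT
        for c in commit_stats
    )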
The data set. In total, 54 755 issues have been mined from
GitHub. Following the filtering criteria described above leaves
us with 10 459 issues that form the basis for our further
investigations. This bug data set consists of:
• the textual bug report including metadata in the form of timestamps and user names,
• all commits associated with each bug report including metadata such as the commit message and git stats, and
• Java-aware diff statistics and the location of the changes in terms of file, class, and method.
IV. MACHINE LEARNING APPROACH
We employ an NLP approach, vectorizing the textual bug
reports into unigrams and bigrams, to train a model for auto-
mated classification according to our fault classification schema. This
approach calculates a frequency vector from words and word
pairs occurring in the input text that is used as feature vector
for the classifier.
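A minimal sketch of this vectorization step with scikit-learn is shown below; since the text describes a frequency vector over words and word pairs, plain n-gram counts are used here, although a TF-IDF weighting would be a natural variant. The example reports are made up.

from sklearn.feature_extraction.text import CountVectorizer

# Unigrams and bigrams; token counts serve as the feature vector.
vectorizer = CountVectorizer(ngram_range=(1, 2))

reports = [
    "invalid StripByteCounts when writing large resolution files",
    "WebSocket08FrameDecoder leaks ByteBuf when payload is masked",
]
X = vectorizer.fit_transform(reports)  # sparse matrix: reports x n-gram counts
print(X.shape)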
Input preprocessing. To increase the performance of the classifier, we applied the following preprocessing steps:
• Stop word removal (i.e. removing common words that do not add any value)
• Case folding (i.e. converting all characters to lower case)
• Stemming (i.e. reducing each word to its word stem)
The bug reports often include stack traces, exceptions, and log
outputs. Currently, we process them in the same way as the
rest of the input text. In future work, we will investigate the
usefulness of domain specific preprocessing of these artifacts.
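The three preprocessing steps listed above could, for instance, be implemented with NLTK as sketched below; the concrete stop-word list and stemmer are not fixed by this description, so the standard English stop words and the Porter stemmer merely stand in here.

import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)  # one-time resource download

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def preprocess(text):
    """Case folding, stop word removal, and stemming of a bug report."""
    tokens = re.findall(r"[a-z]+", text.lower())         # case folding + simple tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop word removal
    return " ".join(STEMMER.stem(t) for t in tokens)     # stemming

print(preprocess("The StripByteCounts appear to be corrupted when writing large files"))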
Training set creation. Figure 1 provides an overview of the
training set creation. We manually classified 160 randomly
selected issues and identified 119 semantic, 2 memory, and 4
concurrency bugs. 35 issues were not classified because the bug
reports were non-English, feature requests, deemed not a bug, or
issues for which we were not confident about the sub-category
(confidence level <8).
Concurrency and memory bugs are usually rare, accounting for 2 % and 6 % of all bugs, respectively [7], [15], [16], which poses a challenge for the creation of reasonably sized training sets.
6 https://github.com/SpoonLabs/gumtree-spoon-ast-diff
Fig. 1. Training set creation (from 54 755 issues labeled as bug in 103 GitHub projects, via filtering, random sampling, and keyword search, to the 369 issues used for the machine learning approach)
For this reason, we have performed a keyword search on the
commit messages linked to the issues to identify candidates of
memory and concurrency bugs analog to Ray et al. ’s approach
[15], resulting in a total of 756 issues. As of writing this, 471
randomly selected issues from this set have been examined and
classified. 150 semantic, 119 memory, and 118 concurrency
bugs have been identified in this sample. 84 issues could not be
classified due to the reasons mentioned above.
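The keyword filter itself reduces to simple substring matching over the fix commit messages, as sketched below; the keywords shown are illustrative examples only, while the actual lists follow Ray et al. [15] and are part of the online appendix.

# Illustrative keywords only; the actual lists are in the online appendix.
MEMORY_KEYWORDS = ("memory leak", "null pointer", "npe", "out of memory", "buffer overflow")
CONCURRENCY_KEYWORDS = ("deadlock", "race condition", "synchroniz", "thread safe", "atomic")

def candidate_category(commit_messages):
    """Return a coarse candidate label for an issue based on its fix commit messages."""
    text = " ".join(commit_messages).lower()
    if any(k in text for k in CONCURRENCY_KEYWORDS):
        return "concurrency candidate"
    if any(k in text for k in MEMORY_KEYWORDS):
        return "memory candidate"
    return None

print(candidate_category(["Fix deadlock in connection pool shutdown"]))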
Training set composition. To avoid a bias towards the se-
mantic bugs that were “accidentally found” during the manual
classification of the keyword search results and to have approx-
imately equally large training sets, we reduced their volume
to 5 % of all semantic bugs. This is a rather high estimate
given that only 7.2 % of all bugs were returned by the keyword search and only one third of these bugs are actually semantic bugs. Further, using separate data sources for
the keyword search (commit messages) and training set for our
ML classifier (bug reports) makes us confident that the bias
introduced by the keywords is limited. As of writing this, our
training set consists of 122 concurrency bugs, 121 memory
bugs, and 126 semantic bugs. The complete training set consists
of 369 textual bug reports.
Classifiers. We applied various supervised ML classifier
algorithms on our data set, namely Multinomial Naive Bayes
(MNB), Linear Support Vector (LSVC), Linear Support Vector
with Stochastic Gradient Descent learning (SGDC), Random
Forest (RFC), and Logistic Regression (LRC). The selection
of classifiers is based on their suitability for multi-class clas-
sification problems with textual inputs, and their application
in similar research. Support vector machines have been used in
comparable endeavors [7], [15]–[18]; the same applies to naive
Bayes [7], [16], [17], [19]–[21], logistic regression [20], [21],
and decision tree based algorithms [7], [20], [21].
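These five classifiers correspond directly to scikit-learn estimators; the instantiation below is only a sketch with (mostly) default hyperparameters, whereas the tuned values resulting from the grid search are listed in the online appendix.

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Default settings shown here; the tuned grids are in the online appendix.
classifiers = {
    "MNB": MultinomialNB(),
    "LSVC": LinearSVC(),
    "SGDC": SGDClassifier(),  # linear SVM trained with SGD (hinge loss by default)
    "RFC": RandomForestClassifier(),
    "LRC": LogisticRegression(max_iter=1000),
}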
Experiment. The 369 bug reports were split into a training
set (80 %) and a test set (20 %). We performed 5-fold cross
validation on the training set for each classifier, using grid
search for hyperparameter tuning. Mean accuracy was used as
scoring metric in the grid search. The highest scoring model for
each classifier was then evaluated on the test set yielding the
final score for this classifier. This experiment was performed
10 times, followed by manual examination of its results and
corresponding hyperparameters to reduce the grid search space.
Finally, to enable comparison of the classifiers given the small
input data set, the above described experiment with the reduced
hyperparameter set was performed 100 times with randomized
test and training splits. The employed set of hyperparameters,
and grid search results for each experiment, can be found in the
online appendix.
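A condensed sketch of a single experiment run (80/20 split, 5-fold grid search scored by mean accuracy, evaluation of the best model on the held-out test set) is given below for the LSVC case; the parameter grid is heavily reduced and the surrounding repetition loop is only indicated.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def run_once(reports, labels, seed):
    X_train, X_test, y_train, y_test = train_test_split(
        reports, labels, test_size=0.2, random_state=seed, stratify=labels)

    pipeline = Pipeline([
        ("vec", CountVectorizer(ngram_range=(1, 2))),
        ("clf", LinearSVC()),
    ])
    # Reduced, illustrative grid; the full grids are in the online appendix.
    grid = GridSearchCV(pipeline, {"clf__C": [0.1, 1, 10]}, cv=5, scoring="accuracy")
    grid.fit(X_train, y_train)

    y_pred = grid.best_estimator_.predict(X_test)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="weighted", zero_division=0)
    return precision, recall, f1

# scores = [run_once(reports, labels, seed) for seed in range(100)]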
V. RESULTS
A. Classification results
We graphically compare the classifiers’ performance by
means of the weighted averages of F1, precision, and recall
in Figure 2 and we report mean, median, standard deviation,
min, and max of each classifier in Table II based on the scores
of 100 runs. Please note that the F1 scores are computed for
the individual test runs and then the mean, median, standard
deviation, min, and max values of these F1 scores are computed.
Thus, they cannot be computed from the precision and recall
given in the table.
We observed a tight clustering of classifiers, which is also
evident in individual runs, although individual runs exhibit
varying performances. We attribute this behavior to the small
data set size and high variance in data quality. The best overall
performance was achieved with LSVC, with mean F1 (0.72),
precision (0.74), and recall (0.72). LSVC also produced the
highest observed scores in an individual run, yielding F1 (0.85),
precision (0.88), and recall (0.85).
B. Discussion
The biggest challenge lies in the creation of a reasonably
sized data set. Further, varying data quality constitutes a sig-
nificant problem. The textual bug reports in our data set range
from only 5 words to 60kB of text per report. However, our
examination of those bug tickets shows that the length does not necessarily correlate with the quality in terms of usefulness
for the developer. Issue #4960 from Elasticsearch7 is an example
of a bug report that requires context and knowledge about the
project for understanding:
Filtered query parses name incorrectly
There are bug reports that merely describe the impact, e.g.
issue #338 from the Redisson project8:
7 https://github.com/elastic/elasticsearch/issues/4960
8 https://github.com/redisson/redisson/issues/338
Fig. 2. Mean weighted average precision, recall and F1 score
TABLE II
WEIGHTED AVERAGE PRECISION, RECALL, AND F1 SCORES (100 RUNS)

Classifier  Precision  Recall  F1

Mean
  LRC    0.73   0.71   0.71
  RFC    0.72   0.62   0.62
  SGDC   0.74   0.71   0.71
  LSVC   0.74   0.72   0.72
  MNB    0.72   0.70   0.70

Median
  LRC    0.73   0.70   0.70
  RFC    0.72   0.62   0.62
  SGDC   0.74   0.70   0.71
  LSVC   0.74   0.72   0.72
  MNB    0.73   0.70   0.70

Std. dev.
  LRC    0.046  0.048  0.048
  RFC    0.053  0.076  0.082
  SGDC   0.045  0.049  0.049
  LSVC   0.046  0.046  0.046
  MNB    0.045  0.049  0.049

Min
  LRC    0.63   0.62   0.62
  RFC    0.53   0.39   0.38
  SGDC   0.64   0.62   0.63
  LSVC   0.65   0.64   0.63
  MNB    0.60   0.57   0.57

Max
  LRC    0.84   0.82   0.82
  RFC    0.85   0.77   0.77
  SGDC   0.86   0.84   0.84
  LSVC   0.88   0.85   0.85
  MNB    0.83   0.82   0.82
New version 2.1.4 or greater performance is low.
When I use redisson 2.1.3 the ubuntu’s load average is 1.8 2.3; but
I use 2.1.4 or greater, the load average is often greater than 3.00,
my java application often overload.
In some cases, the bug reports point right away at the fault, e.g. issue #1878 from the Netty project9:
WebSocket08FrameDecoder leaks ByteBuf when payload is
masked
Further research is required to determine metrics to mea-
sure bug report quality for our purpose. On the other end of
the spectrum, for very long bug reports, additional text pre-
processing is required. Heuristics for reduction or removal of
artifacts have to be implemented. Such artifacts are stack traces,
code snippets, log outputs, or similar text portions, whose size
is disproportionate to the added information.
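One possible heuristic of this kind is sketched below: lines that look like Java stack frames or exception headers, and overly long pasted lines, are dropped before vectorization. The regular expressions and the length threshold are illustrative assumptions, not the preprocessing used for the reported results.

import re

STACK_FRAME = re.compile(r"^\s*at [\w.$]+\([^)]*\)\s*$")  # e.g. "at com.example.Foo.bar(Foo.java:42)"
EXCEPTION_HEADER = re.compile(r"^\s*(Caused by: )?[\w.]+(Exception|Error)(:.*)?$")

def strip_artifacts(report, max_line_len=200):
    """Remove stack frames, exception headers, and overly long lines from a bug report."""
    kept = []
    for line in report.splitlines():
        if STACK_FRAME.match(line) or EXCEPTION_HEADER.match(line):
            continue
        if len(line) > max_line_len:  # crude proxy for pasted logs or data dumps
            continue
        kept.append(line)
    return "\n".join(kept)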
C. Threats to validity
The selection of bugs from the issue tickets by searching for
certain labels is a threat to the internal validity. While we have
considered a wide range of bug labels, we cannot rule out missing bugs with special labels or wrongly labeled bugs. A study
on 7000 issue reports from five open-source projects showed
that up to 40 % of the issues were wrongly labeled [22].
Manually categorizing the root cause might be error-prone
and the true root cause of the bug can only be determined
by the original programmer. For this reason, we indicated the
confidence level for each bug we categorized and excluded bugs
with a low confidence level. Furthermore, the fix might be a
workaround instead of a fix of the true fault.
The keyword search might only reveal certain types of
memory and concurrency bugs. We have tried to avoid a bias in
the classification towards the words used in the keyword search
by performing the keyword search on the commit messages and
NLP for classification on the bug description.
The small sample size is the biggest threat to external validity.
In future work, we will therefore enlarge the training set. The
performance of this approach may vary based on the software
domain of the examined project. We tried to counteract this
by including a variety of software projects to source our data set. However,
data mining was exclusively performed on open source projects.
Further, most of the examined projects are libraries. In contrast
to end-user software, bug reports for libraries are almost ex-
clusively written by other developers. Such bug reports often
already contain insights into the underlying problem. Further,
our approach may not work as well for bug descriptions of
software written in a different programming language.
VI. RELATED WORK
Ray et al. [15] analyzed more than 560 000 bug fixes from
729 GitHub projects written in 17 languages. They classified
the root causes and impacts for 10 % of the bugs by searching
for keywords in the commit messages and trained a supervised
ML approach to classify the remaining 90 % of the bugs. They
9 https://github.com/netty/netty/issues/1878
validated their approach by manually classifying 180 bug fixes
(83.7 % precision, 84.3 % recall). While we also rely on a
keyword search, we did not perform the keyword search on
the same text that was used in NLP to avoid biasing.
Li and colleagues [16] classified the root cause, impact,
and software component of nearly 30 000 Bugzilla entries
using NLP with SVM, Winnow, Perceptron and Naive Bayes
as classifiers. Their training set consists of 709 bugs (51 %
randomly sampled, 36 % security-related, and 13 % concurrency
bugs).
Tan et al. [7] manually classified 339 bugs from randomly
selected fixed issues of three open-source projects into the
dimensions root cause, impact, and component. Because of
the low number of concurrency bugs in the sample, they
performed a keyword search to identify additional concurrency
bugs. Semantic bugs are the dominant root cause with 70-
87 %. The Linux kernel has nearly 13.6 % concurrency bugs;
the other projects (Mozilla and Apache) have a lower number
of concurrency bugs with 1.2 % and 5.2 %. Furthermore, the
authors automatically classified more than 100 000 bugs using
a supervised ML (precision: 67 % for memory and 93 % for
semantic bugs, recall: 57 % resp. 95 %).
Ortu et al. [17] investigated whether there are differences
in the characteristics of high and low priority defects in more
than 1200 open-source software projects. Therefore, they trained
different supervised machine learning classifiers to predict the
root cause, impact, and software component.
Thung et al. [18] used machine learning to classify bugs ac-
cording to the Orthogonal Defect Classification (ODC) scheme.
They distinguished three defect groups: data and control flow,
structural, and non-functional. They manually classified 500
bugs that serve as training set. They use the description of the
bug as well as the fixes to train a model. The SVM multi-
class classification algorithm performed best (69 % precision,
70 % recall). Lopes and colleagues [23] applied different ML
algorithms on bug descriptions to classify bugs according to
different ODC dimensions. They manually categorized more
than 4000 fixed bugs from three NoSQL databases. Recurrent
Neural Networks have the highest accuracy when predicting
the activity (47.6 %) and impact (33.3 %). Linear support vector
machines are suited best to predict the target (accuracy 85.5 %)
and the defect type (34.7 %).
Hernández-González and colleagues [19] proposed a learning
from crowds ML approach. The training data consists of bug
reports and labels for ODC’s impact dimension. Each bug report
was labeled by five annotators. In the majority of the cases, the
annotators disagree on the labels. In the learning from crowds
paradigm, the individual labels are used in the machine learning training instead of the label that was assigned by the majority of the annotators.
Antoniol et al. [20] use decision trees, naive Bayes and
logistic regression to classify issue reports as bug or feature
request. Their approach was able to correctly classify 77-82 %
of the issues. Chawla and Singh [21] also classify issue reports
as bug or other request. They achieve an accuracy of 84-91 %.
VII. CONCLUSION AND FUTURE WORK
The presented approach automatically predicts the root cause
for a given bug report. This information can be used by the
developer to choose a proper debugging tool. It can also be
used by a meta-debugging approach to recommend a debugging
tool. The data set created in this work can be used to evaluate
which debugging tools are particularly well-suited to support
programmers in the debugging process of a particular bug
instance. In addition, the proposed approach can be utilized
for building benchmarks of specific bug types. This benchmark
is especially suitable for evaluating IR-based fault localization
techniques since it includes textual data in the form of bug reports
and commit messages, as well as detailed information on the
fix location.
During the manual classification, we have noticed recurring
fault patterns. We will investigate if we can establish links be-
tween these fault patterns and code-smells detected by existing
code analysis tools such as SonarQube10. If so, knowledge about
the bug type combined with reports from code analysis tools can
be utilized to aid fault localization.
Besides that, we will improve the approach by pre-processing
stack trace information and other artifacts (if available in the bug
report). Currently, stack trace information is treated the same
way as human written text.
Since a detailed predicted root cause is even more helpful,
we will refine the prediction to the sub-categories. To do so, we
have to enlarge the training set. Since certain subcategories will
be underrepresented in the training set, we will up-sample those
categories by means of the Synthetic Minority Over-sampling
Technique (SMOTE) [24].
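A sketch of how such up-sampling could be wired in with imbalanced-learn's SMOTE on the vectorized reports is shown below; the toy reports and labels are made up, and in the planned refinement the labels would be the sub-categories of Table I rather than the main categories.

from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import CountVectorizer

# Toy data standing in for vectorized bug reports with an imbalanced label distribution.
reports = [
    "deadlock between writer threads", "race condition when flushing the cache",
    "null pointer dereference in the parser", "memory leak when reloading the configuration",
    "wrong result for empty input", "missing case for negative values",
    "incorrect rounding in report totals", "typo in SQL column name",
]
labels = ["concurrency", "concurrency", "memory", "memory",
          "semantic", "semantic", "semantic", "semantic"]

X = CountVectorizer(ngram_range=(1, 2)).fit_transform(reports)
# k_neighbors must be smaller than the size of the smallest class.
smote = SMOTE(k_neighbors=1, random_state=0)
X_resampled, y_resampled = smote.fit_resample(X, labels)
print(Counter(labels), Counter(y_resampled))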
ACKNOWLEDGMENT
The work described in this paper has been funded by the
Austrian Science Fund (FWF): P 32653 (Automated Debugging
in Use).
REFERENCES
[1] W. E. Wong, R. Gao, Y. Li, R. Abreu, and F. Wotawa, “A Survey on
Software Fault Localization,” IEEE Transactions on Software Engineering ,
vol. 42, no. 8, pp. 707–740, aug 2016.
[2] L. Gazzola, D. Micucci, and L. Mariani, “Automatic Software Repair: A
Survey,” IEEE Transactions on Software Engineering , vol. 45, no. 1, pp.
34–67, jan 2019.
[3] V. Sobreira, T. Durieux, F. Madeiral, M. Monperrus, and M. A. Maia,
“Dissection of a Bug Dataset: Anatomy of 395 Patches from Defects4J,”
25th IEEE International Conference on Software Analysis, Evolution and
Reengineering (SANER 2018) , vol. 2018-March, pp. 130–140, jan 2018.
[Online]. Available: http://dx.doi.org/10.1109/SANER.2018.8330203
[4] R. Just, D. Jalali, and M. D. Ernst, “Defects4J: a database of
existing faults to enable controlled testing studies for Java programs,”
in International Symposium on Software Testing and Analysis (ISSTA
2014) . ACM Press, jul 2014, pp. 437–440. [Online]. Available:
http://dl.acm.org/citation.cfm?doid=2610384.2628055
[5] S. Khoshnood, M. Kusano, and C. Wang, “ConcBugAssist: Constraint
solving for diagnosis and repair of concurrency bugs,” in Int. Symp. on
Software Testing and Analysis (ISSTA 2015) . ACM, 2015, pp. 165–176.
[6] H. L. Ribeiro, H. A. De Souza, R. P. A. De Araujo, M. L. Chaim, and
F. Kon, “Jaguar: A Spectrum-Based Fault Localization Tool for Real-
World Software,” in 11th International Conference on Software Testing,
Verification and Validation (ICST 2018) . IEEE, may 2018, pp. 404–409.
10 https://www.sonarqube.org/
[7] L. Tan, C. Liu, Z. Li, X. Wang, Y. Zhou, and C. Zhai, “Bug characteristics
in open source software,” Empirical Software Engineering , vol. 19, no. 6,
pp. 1665–1705, oct 2014.
[8] Y. Tang, Q. Gao, and F. Qin, “LeakSurvivor: Towards Safely Tolerating
Memory Leaks for Garbage-Collected Languages,” in USENIX Annual
Technical conference , 2008, pp. 307–320.
[9] T. D. B. Le, F. Thung, and D. Lo, “Will this localization tool be effective
for this bug? Mitigating the impact of unreliability of information retrieval
based bug localization tools,” Empirical Software Engineering , vol. 22,
no. 4, pp. 2237–2279, aug 2017.
[10] B. Zhou, I. Neamtiu, and R. Gupta, “Predicting concurrency bugs:
How many, what kind and where are they?” in 9th International
Conference on Evaluation and Assessment in Software Engineering
(EASE’15) . ACM, apr 2015, pp. 1–10. [Online]. Available: https:
//doi.org/10.1145/2745802.2745807
[11] M. Böhme, E. O. Soremekun, S. Chattopadhyay, E. Ugherughe, and
A. Zeller, “Where is the bug and how is it fixed? An experiment
with practitioners,” in 11th Joint Meeting on Foundations of Software
Engineering (ESEC/FSE 2017) , vol. Part F1301. Association for
Computing Machinery, aug 2017, pp. 117–128. [Online]. Available:
http://dl.acm.org/citation.cfm?doid=3106237.3106255
[12] P. Gyimesi, G. Gyimesi, Z. Tóth, and R. Ferenc, “Characterization of
source code defects by data mining conducted on GitHub,” in Lecture
Notes in Computer Science , vol. 9159. Springer, 2015, pp. 47–62.
[13] Z. Tóth, P. Gyimesi, and R. Ferenc, “A public bug database of GitHub
projects and its application in bug prediction,” in 16th Int. Conference
on Computational Science and Its Applications (ICCSA’16) , vol. 9789.
Lecture Notes in Computer Science, Springer, 2016, pp. 625–638.
[14] J. R. Falleri, F. Morandat, X. Blanc, M. Martinez, and M. Monperrus,
“Fine-grained and accurate source code differencing,” in 29th ACM/IEEE
International Conference on Automated Software Engineering (ASE
2014) . ACM, 2014, pp. 313–323. [Online]. Available: http://dl.acm.org/
citation.cfm?doid=2642937.2642982
[15] B. Ray, D. Posnett, V. Filkov, and P. Devanbu, “A large scale study
of programming languages and code quality in GitHub,” in ACM
SIGSOFT Symposium on the Foundations of Software Engineering
(FSE’14) . ACM, nov 2014, pp. 155–165. [Online]. Available:
http://dl.acm.org/citation.cfm?doid=2635868.2635922
[16] Z. Li, L. Tan, X. Wang, S. Lu, Y. Zhou, and C. Zhai, “Have things
changed now?: An empirical study of bug characteristics in modern open
source software,” in 1st Workshop on Architectural and System Support
for Improving Software Dependability (ASID’06) , 2006, pp. 25–33.
[17] M. Ortu, G. Destefanis, S. Swift, and M. Marchesi, “Measuring high and
low priority defects on traditional and mobile open source software,”
in 7th International Workshop on Emerging Trends in Software Metrics
(WETSoM 2016) . ACM, may 2016, pp. 1–7. [Online]. Available:
http://dl.acm.org/citation.cfm?doid=2897695.2897696
[18] F. Thung, D. Lo, and L. Jiang, “Automatic defect categorization,” in
Working Conf. on Reverse Engineering (WCRE) , 2012, pp. 205–214.
[19] J. Hernández-González, D. Rodriguez, I. Inza, R. Harrison, and J. A.
Lozano, “Learning to classify software defects from crowds: A novel
approach,” Applied Soft Computing Journal , vol. 62, pp. 579–591, 2018.
[20] G. Antoniol, K. Ayari, M. Di Penta, F. Khomh, and Y.-G. Guéhéneuc,
“Is it a bug or an enhancement?” in Conference of the center
for advanced studies on collaborative research meeting of minds
(CASCON ’08). ACM Press, 2008, pp. 304–318. [Online]. Available:
http://portal.acm.org/citation.cfm?doid=1463788.1463819
[21] I. Chawla and S. K. Singh, “An automated approach for bug
categorization using fuzzy logic,” in 8th India Software Engineering
Conference (ISEC) . ACM, feb 2015, pp. 90–99. [Online]. Available:
http://dl.acm.org/citation.cfm?doid=2723742.2723751
[22] K. Herzig, S. Just, and A. Zeller, “It’s not a bug, it’s a feature: How
misclassification impacts bug prediction,” in International Conference on
Software Engineering (ICSE 2013) , 2013, pp. 392–401.
[23] F. Lopes, J. Agnelo, C. A. Teixeira, N. Laranjeiro, and J. Bernardino,
“Automating orthogonal defect classification using machine learning al-
gorithms,” Future Generation Computer Systems , vol. 102, pp. 932–947,
jan 2020.
[24] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer,
“SMOTE: Synthetic minority over-sampling technique,” Journal of
Artificial Intelligence Research , vol. 16, pp. 321–357, jan 2002. [Online].
Available: https://www.jair.org/index.php/jair/article/view/10302