Research Note
On the evaluation of (meta-)solver approaches
Roberto Amadini [email protected]
Maurizio Gabbrielli [email protected]
Department of Computer Science and Engineering,
University of Bologna, Italy
Tong Liu [email protected]
Meituan,
Beijing, China
Jacopo Mauro [email protected]
Department of Mathematics and Computer Science,
University of Southern Denmark, Denmark
Abstract
Meta-solver approaches exploit a number of individual solvers to potentially build a
better solver. To assess the performance of meta-solvers, one can simply adopt the metrics
typically used for individual solvers (e.g., runtime or solution quality), or employ more
specific evaluation metrics (e.g., by measuring how close the meta-solver gets to its virtual
best performance). In this paper, based on some recently published works, we provide an
overview of different performance metrics for evaluating (meta-)solvers, underlining their
strengths and weaknesses.
1. Introduction
A famous quote attributed to Aristotle says that “the whole is greater than the sum of its
parts”. This principle has been applied in several contexts, including the field of constraint
solving and optimization. Combinatorial problems arising from application domains such as
scheduling, manufacturing, routing or logistics can be tackled by combining and leveraging
the complementary strengths of different solvers to create a better global meta-solver.1
Several approaches for combining solvers and hence creating effective meta-solvers have
been developed. Over the last decades we witnessed the creation of new Algorithm Selec-
tion (Kotthoff, 2016) and Configuration (Hoos, 2012) approaches2 that reached peak results
in various solving competitions (SAT competition, 2021; Stuckey, Becket, & Fischer, 2010;
ICAPS, 2021). To compare different meta-solvers, new competitions were created, e.g., the
2015 ICON challenge (Kotthoff, Hurley, & O’Sullivan, 2017) and 2017 OASC competition
on algorithm selection (Lindauer, van Rijn, & Kotthoff, 2019). However, the discussion of
why a particular evaluation metric has been chosen to rank the solvers is lacking.
1. Meta-solvers are sometimes referred to in the literature as portfolio solvers, because they take advantage
of a “portfolio” of different solvers.
2. A fine-tuned solver can be seen as a meta-solver where we consider different configurations of the same
solver as different solvers.
We believe that further study on this issue is necessary because meta-solvers are often
evaluated on heterogeneous scenarios, characterized by a different number of problems,
different timeouts, and different individual solvers from which the meta-solvers are built.
In this paper, starting from some surprising results presented by Liu, Amadini, Mauro,
and Gabbrielli (2021), showing dramatic ranking changes with different but reasonable
metrics, we would like to draw more attention to the evaluation of meta-solver approaches
by shedding some light on the strengths and weaknesses of different metrics.
2. Evaluation metrics
Before talking about the evaluation metrics, we should spend some words on what we need
to evaluate: the solvers. In our context, a solver is a program that takes as input the
description of a computational problem in a given language, and returns an observable
outcome providing zero or more solutions for the given problem. For example, for decision
problems the outcome may be simply “yes” or “no” while for optimization problems we might
be interested in the sub-optimal solutions found along the search. An evaluation metric, or
performance metric, is a function mapping the outcome of a solver on a given instance to a
number representing “how good” the solver is on that instance.
An evaluation metric is often not just defined by the output of the (meta-)solver, but
can also be influenced by other factors such as the computational resources available, the
problems on which we evaluate the solver, and the other solvers involved in the evaluation.
For example, it is often unavoidable to set a timeout on the solver's execution when there
is no guarantee of termination in a reasonable amount of time (e.g., for NP-hard problems).
Timeouts make the evaluation feasible, but inevitably couple the evaluation metric to the
execution context. For this reason, the evaluation of a meta-solver should also take into
account the scenario that encompasses the solvers to evaluate, the instances used for the
validation, and the timeout. Formally, at least for the purposes of this paper, we can define
a scenario as a triple (I, S, τ) where: I is a set of problem instances, S is a set of individual
solvers, and τ ∈ (0, +∞) is a timeout such that the outcome of solver s ∈ S over instance i ∈ I
is always measured in the time interval [0, τ).
Evaluating meta-solvers over heterogeneous scenarios is complicated by the fact that
the set of instances, solvers and the timeout can have high variability. As we shall see in
Sect. 2.3, things are even trickier in scenarios including optimization problems.
2.1 Absolute vs relative metrics
A sharp distinction between evaluation metrics can be drawn depending on whether their
value depends on the outcome of other solvers or not. We say that an evaluation metric is
relative in the former case, and absolute otherwise. For example, a well-known absolute metric
is the penalized average runtime with penalty λ ≥ 1 (PAR_λ), which compares the solvers by
using the average solving runtime and penalizes the timeouts with λ times the timeout.
Formally, let time(i, s, τ) be the function returning the runtime of solver s on instance
i with timeout τ, assuming time(i, s, τ) = τ if s cannot solve i before the timeout τ. For
optimization problems, we consider the runtime as the time taken by s to solve i to
optimality3 assuming w.l.o.g. that an optimization problem is always a minimization problem.
We can define PAR_λ as follows.
Definition 1 (Penalized Average Runtime). Let (I, S, τ) be a scenario; the PAR_λ score of
solver s ∈ S over I is given by 1/|I| · Σ_{i∈I} par_λ(i, s, τ), where:

$$par_\lambda(i, s, \tau) = \begin{cases} time(i, s, \tau) & \text{if } time(i, s, \tau) < \tau \\ \lambda\tau & \text{otherwise.} \end{cases}$$
Well-known PAR measures are, e.g., the PAR_2 adopted in the SAT competitions (SAT
competition, 2021) or the PAR_10 used by Lindauer et al. (2019). Evaluating (meta-)solvers
on scenarios having different timeouts should imply a normalization of PAR_λ in a fixed
range to avoid misleading comparisons.
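To make Definition 1 concrete, here is a minimal Python sketch of the PAR_λ score; the timeout, solver names and runtimes are made up for illustration, and a timed-out run is simply recorded with the timeout value.

```python
# Minimal sketch of the PAR_lambda score (Definition 1). The timeout and the
# runtimes below are made-up illustration data; a timed-out run is recorded
# with the timeout value itself.

TIMEOUT = 1200.0  # the timeout tau, in seconds

# times[(instance, solver)] = runtime in seconds (TIMEOUT if unsolved)
times = {
    ("i1", "s1"): 10.0,     ("i1", "s2"): 300.0,
    ("i2", "s1"): TIMEOUT,  ("i2", "s2"): 45.0,
    ("i3", "s1"): 700.0,    ("i3", "s2"): TIMEOUT,
}

def par(solver, instances, lam=10):
    """Penalized Average Runtime: an unsolved instance counts lam * TIMEOUT."""
    total = 0.0
    for i in instances:
        t = times[(i, solver)]
        total += t if t < TIMEOUT else lam * TIMEOUT
    return total / len(instances)

instances = ["i1", "i2", "i3"]
for s in ("s1", "s2"):
    print(s, "PAR2 =", par(s, instances, lam=2), "PAR10 =", par(s, instances))
```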
Another absolute metric for decision problems is the number (or percentage) of instances
solved, where ties are broken by favoring the solver minimizing the average running time,
i.e., minimizing the PAR_1 score. This metric has been used in various tracks of the planning
competition (ICAPS, 2021), the XCSP competition (XCSP Competition, 2019), and the
QBF evaluations (QBFEVAL, 2021).
A well-established relative metric is instead the Borda count , adopted for example by the
MiniZinc Challenge (Stuckey, Feydy, Schutt, Tack, & Fischer, 2014) for both single solvers
and meta-solvers. The Borda count is a family of voting rules that can be applied to the
evaluation of a solver by considering the comparison as an election where the solvers are
the candidates and the problem instances are the voters. The MiniZinc challenge uses a
variant of Borda4 where each solver scores points proportionally to the number of solvers it
beats. Assuming that obj(i, s, t) is the best objective value found by solver s on optimization
problem i at time t, with obj(i, s, t) = ∞ when no solution is found at time t, the MiniZinc
challenge score is defined as follows.
Definition 2 (MiniZinc challenge score). Let (I, S, τ) be a scenario where I = I_dec ∪ I_opt,
with I_dec decision problems and I_opt optimization problems. The MiniZinc challenge
(MZNC) score of s ∈ S over I is Σ_{i∈I, s'∈S\{s}} ms(i, s', τ), where:

$$ms(i, s', \tau) = \begin{cases} 0 & \text{if } unknown(i, s) \lor better(i, s', s) \\ 1 & \text{if } better(i, s, s') \\ 0.5 & \text{if } time(i, s, \tau) = time(i, s', \tau) \text{ and } obj(i, s, \tau) = obj(i, s', \tau) \\ \frac{time(i, s', \tau)}{time(i, s, \tau) + time(i, s', \tau)} & \text{otherwise} \end{cases}$$

where the predicate unknown(i, s) holds if s does not produce a solution within the timeout:

unknown(i, s) = (i ∈ I_dec ∧ time(i, s, τ) = τ) ∨ (i ∈ I_opt ∧ obj(i, s, τ) = ∞)
3. If s cannot solve i to optimality before τ, then time(i, s, τ) = τ even if sub-optimal solutions are found.
4. In the original definition, the lowest-ranked candidate gets 0 points, the next-lowest 1 point, and so on.
and better(i, s, s') holds if s finishes within the timeout while s' does not, or if it produces a better solution:

better(i, s, s') = (time(i, s, τ) < time(i, s', τ) = τ) ∨ (obj(i, s, τ) < obj(i, s', τ))
This is clearly a relative metric because changing the set of available solvers can affect
the MiniZinc scores.
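As an illustration of Definition 2, the following sketch computes the MZNC score on a toy scenario with one decision and one optimization instance; the data, solver names, and instance names are invented, and infinity encodes the case where no solution is found.

```python
# Toy sketch of the MiniZinc challenge score (Definition 2). The scenario is
# invented: one decision instance and one (minimization) optimization instance;
# math.inf encodes "no solution found" and TIMEOUT encodes a timed-out run.
import math

TIMEOUT = 1200.0
DEC = {"d1"}   # decision instances
OPT = {"o1"}   # optimization instances

# runs[(instance, solver)] = (runtime, best objective); the objective of a
# decision instance is irrelevant and set to 0.
runs = {
    ("d1", "s1"): (100.0, 0.0),    ("d1", "s2"): (TIMEOUT, 0.0),
    ("o1", "s1"): (TIMEOUT, 50.0), ("o1", "s2"): (TIMEOUT, math.inf),
}

def unknown(i, s):
    t, obj = runs[(i, s)]
    return (i in DEC and t >= TIMEOUT) or (i in OPT and math.isinf(obj))

def better(i, s, s2):
    # s finished while s2 timed out, or s found a strictly better objective
    t, obj = runs[(i, s)]
    t2, obj2 = runs[(i, s2)]
    return (t < t2 and t2 >= TIMEOUT) or obj < obj2

def ms(i, s, s2):
    """Points scored by s against s2 on instance i."""
    t, obj = runs[(i, s)]
    t2, obj2 = runs[(i, s2)]
    if unknown(i, s) or better(i, s2, s):
        return 0.0
    if better(i, s, s2):
        return 1.0
    if t == t2 and obj == obj2:
        return 0.5
    return t2 / (t + t2)

def mznc(s, solvers, instances):
    return sum(ms(i, s, s2) for i in instances for s2 in solvers if s2 != s)

solvers, instances = ["s1", "s2"], ["d1", "o1"]
for s in solvers:
    print(s, "MZNC score =", mznc(s, solvers, instances))
```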
To handle the disparate nature of the scenarios when comparing meta-solver approaches,
the evaluation function adopted in the ICON and OASC challenges was relative: the closed
gap score. This metric assigns to a meta-solver a value, with upper bound 1, proportional
to how much it closes the gap between the best individual solver available, or single best
solver (SBS), and the virtual best solver (VBS), i.e., an oracle-like meta-solver always
selecting the best individual solver. The closed gap is actually a “meta-metric”, defined in
terms of another evaluation metric. Formally, if (I, S, τ) is a scenario and m an evaluation
metric to minimize, we have m(i, VBS, τ) = min{m(i, s, τ) | s ∈ S} for each i ∈ I, and
SBS = argmin_{s∈S} Σ_{i∈I} m(i, s, τ).
We can define the closed gap as follows.
Definition 3 (Closed gap). Let m be an evaluation metric to minimize for a scenario
(I, S, τ), and let m_VBS = Σ_{i∈I} m(i, VBS, τ) and m_SBS = Σ_{i∈I} m(i, SBS, τ). The closed
gap of a (meta-)solver s w.r.t. m on that scenario is:

$$\frac{m_{SBS} - \sum_{i \in I} m(i, s, \tau)}{m_{SBS} - m_{VBS}}$$
If not specified, we will assume the closed gap computed w.r.t. the PAR_10 score, as done
in the AS challenges 2015 and 2017.5
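A minimal sketch of the closed gap computation follows; the per-instance PAR_10 values and solver names are fabricated, and the meta-solver is scored against the SBS and VBS derived from the individual solvers only.

```python
# Sketch of the closed gap score (Definition 3), here w.r.t. per-instance PAR10
# values that are fabricated for illustration. "meta" is the meta-solver to be
# scored; the SBS and the VBS are derived from the individual solvers only.

m = {  # m[solver][instance] = PAR10 value
    "s1":   {"i1": 10.0,  "i2": 12000.0, "i3": 700.0},
    "s2":   {"i1": 300.0, "i2": 45.0,    "i3": 12000.0},
    "meta": {"i1": 15.0,  "i2": 60.0,    "i3": 900.0},
}
individual = ["s1", "s2"]
instances = ["i1", "i2", "i3"]

# Virtual best solver: per-instance minimum over the individual solvers.
m_vbs = sum(min(m[s][i] for s in individual) for i in instances)
# Single best solver: the individual solver with the best overall score.
sbs = min(individual, key=lambda s: sum(m[s][i] for i in instances))
m_sbs = sum(m[sbs][i] for i in instances)

def closed_gap(solver):
    m_solver = sum(m[solver][i] for i in instances)
    return (m_sbs - m_solver) / (m_sbs - m_vbs)

print("SBS =", sbs, "| closed gap of meta:", round(closed_gap("meta"), 4))
```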
2.2 A surprising outcome
An interesting outcome reported in Liu et al. (2021) was the profound difference between
the closed gap and the MiniZinc challenge scores. Liu et al. compared the performance of
six meta-solver approaches across 15 decision-problem scenarios taken from ASlib (Bischl
et al., 2016) and coming from heterogeneous domains such as Answer-Set Programming,
Constraint Programming, Quantified Boolean Formulas, and Boolean Satisfiability.
Tab. 1 reports the performance of ASAP and RF, respectively the best approach according
to the closed gap score and the best according to the MZNC score. The scores in the four
leftmost columns clearly show a remarkable rank difference if we swap the evaluation metric.
With the closed gap, ASAP is the best approach and RF the worst among all the meta-solvers
considered, while with the MZNC score RF climbs to the first position and ASAP drops to
the last one.
Another thing that catches the eye in Tab. 1 is the presence of negative scores. This
happens because, by definition, the closed gap has upper bound 1 (no meta-solver can
improve on the VBS) but no fixed lower bound. Hence, when the performance of the meta-
solver is worse than the performance of the single best solver, the closed gap drops below
zero. While at first glance this seems reasonable—meta-solvers should perform no worse
than the individual solvers—it is worth noting that the penalty for performing worse than
5. In the 2015 edition, the closed gap was computed as $1 - \frac{m_{SBS} - \sum_{i \in I} m(i, s, \tau)}{m_{SBS} - m_{VBS}} = \frac{m_{VBS} - \sum_{i \in I} m(i, s, \tau)}{m_{VBS} - m_{SBS}}$.
Table 1: Comparison of ASAP vs RF. The MZNC columns report the average MZNC score
per scenario. Negative scores are in bold font.

Scenario                 | Closed gap ASAP | Closed gap RF | MZNC ASAP | MZNC RF | Better ASAP | Better RF
-------------------------|-----------------|---------------|-----------|---------|-------------|----------
ASP-POTASSCO             | 0.7444          | 0.5314        | 2.2235    | 2.6163  | 275         | 671
BNSL-2016                | 0.8463          | 0.7451        | 1.2830    | 3.0250  | 98          | 993
CPMP-2015                | 0.6323          | 0.1732        | 2.0501    | 2.3660  | 137         | 334
CSP-MiniZinc-Time-2016   | 0.6251          | 0.2723        | 2.1552    | 2.7214  | 17          | 53
GLUHACK-2018             | 0.4663          | 0.4057        | 1.9040    | 2.4528  | 62          | 147
GRAPHS-2015              | 0.758           | -0.6412       | 2.3045    | 3.3731  | 489         | 3663
MAXSAT-PMS-2016          | 0.5734          | 0.3263        | 1.4747    | 2.8616  | 66          | 439
MAXSAT-WPMS-2016         | 0.7736          | -1.1826       | 1.5168    | 2.4043  | 126         | 386
MAXSAT19-UCMS            | 0.6583          | -0.2413       | 2.0893    | 2.5189  | 145         | 269
MIP-2016                 | 0.35            | -0.3626       | 2.4035    | 2.4239  | 81          | 105
QBF-2016                 | 0.7568          | -0.1366       | 1.8642    | 2.7154  | 193         | 467
SAT03-16_INDU            | 0.3997          | 0.1503        | 2.1508    | 2.5812  | 491         | 1116
SAT12-ALL                | 0.7617          | 0.6528        | 1.6785    | 2.8250  | 262         | 1227
SAT18-EXP                | 0.5576          | 0.3202        | 1.9239    | 2.4998  | 61          | 164
TSP-LION2015             | 0.4042          | -19.1569      | 2.4352    | 2.6979  | 1115        | 1949
Tot.                     | 9.3077          | -18.1439      | 29.4573   | 40.0826 | 3618        | 11983
Tot. w/o TSP-LION2015    | 8.9035          | 1.013         | 27.0221   | 37.3846 | 2503        | 10034
the SBS also depends on the denominator m_SBS − m_VBS. This means that in scenarios
where the performance of the SBS is close to the perfect performance of the VBS, this
penalty can be significantly magnified. The TSP-LION2015 scenario is a clear example: the
RF approach gets a penalization of more than 19 points, meaning that RF would have to
perform flawlessly in about 20 other scenarios to offset this penalty. In fact, in TSP-LION2015
the PAR_10 distributions of SBS and VBS are very close: the SBS is able to solve 99.65% of
the instances solved by the VBS, leaving little room for improvement. RF scores -19.1569
while still solving more than 90% of the instances of the scenario and solving only slightly
more than 5% fewer instances than ASAP.
Why are the closed gap and the MZNC rankings so different? Looking at the rightmost
two columns in Tab. 1, showing for each scenario the number of instances where one approach
is faster than the other, one may conclude that RF is far better than ASAP. In all the
scenarios, the number of instances where its runtime is lower than ASAP's runtime is greater
than the number of instances where ASAP is faster. Overall, it is quite impressive to see
that RF beats ASAP on 11983 instances while ASAP beats RF on only 3618.
An initial clue of why this happens is revealed in Liu et al. (2021), where a parametric
version of the MZNC score is used. In practice, Def. 2 is generalized by assuming the performance
of two solvers to be equivalent if their runtime difference is below a given time threshold ε.6 This
variant was considered because a time difference of a few seconds could be considered irrelevant
6. Formally, if |time(i, s, τ) − time(i, s', τ)| ≤ ε then both s and s' score 0.5 points; note that if ε = 0 we
get the original MZNC score as in Def. 2.
Figure 1: Cumulative Borda count by varying the threshold ε.
Figure 2: Solved instances difference between ASAP and RF.
if solving a problem can take minutes or hours. The parametric MZNC score is depicted in
Fig. 1, where different thresholds are considered on the x-axis. It is easy to see how the
performance of ASAP and RF reverses when ε increases: ASAP bubbles up from the bottom
to the top, while RF gradually sinks to the bottom.
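To illustrate the parametric variant, the sketch below isolates the only part of Definition 2 that changes: the tie rule between two solvers that both solved the instance with the same objective value. The runtimes and the threshold values are made up.

```python
# Sketch of the epsilon-threshold generalization of the MZNC tie rule: if the
# runtime difference of two solvers that both solved the instance (with equal
# objective) is within eps, they are treated as equivalent and score 0.5 each.
# With eps = 0 the original fractional rule of Def. 2 is recovered. Data made up.

def pairwise_score(t, t_other, eps):
    """Points of the first solver against the second one."""
    if abs(t - t_other) <= eps:
        return 0.5                       # equivalent performance
    return t_other / (t + t_other)       # original MZNC fractional score

for eps in (0, 1, 10, 100):
    print("eps =", eps, "-> score:", round(pairwise_score(2.0, 5.0, eps), 3))
```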
Let us further investigate this anomaly. Fig. 2 shows the runtime distributions of the
instances solved by ASAP and RF, sorted by ascending runtime. We can see that ASAP
solves more instances, but for around 15k instances RF is never slower than ASAP.
Table 2: Average closed gap, speedup, and normalized runtime. Peak performance in bold
font.

Meta-solver     | Closed gap | Speedup | Norm. runtime
----------------|------------|---------|--------------
ASAP            | 0.4866     | 0.4026  | 0.8829
sunny-as2       | 0.4717     | 0.4122  | 0.8879
autofolio       | 0.4713     | 0.4110  | 0.8855
SUNNY-original  | 0.4412     | 0.3905  | 0.8790
*Zilla          | 0.3416     | 0.3742  | 0.8753
Random Forest   | -0.1921    | 0.3038  | 0.8507
Summarizing, ASAP solves more instances, but RF is in general quicker when it solves an
(often easy) instance. This explains the significant difference between the closed gap and
Borda metrics.
In our opinion, on the one hand, it is fair to think that ASAP performs better than RF on
these scenarios. The MZNC score seems to over-penalize ASAP w.r.t. RF. Moreover, from
Fig. 1 we can also note that for ε ≤ 10^3 the parametric MZNC score of RF is still better,
but 10^3 seconds looks like quite a high threshold for considering two performances as equivalent.
On the other hand, the closed gap score can also be over-penalizing, due to negative outliers.
We would also like to point out that the definitions of SBS found in the literature do
not clarify how it is computed in scenarios where the set of instances I is split into test and
training sets. Should the SBS be computed on the instances of the training set, the test set,
or the whole dataset I? One would be inclined to use the test set to select the SBS, but this
choice might be problematic because the test set is usually quite small w.r.t. the training set
when using, e.g., cross-validation methods. In this case, issues with negative outliers might
be amplified. If not clarified, this could lead to confusion. For example, in the 2015 ICON
challenge the SBS was computed by considering the entire dataset (training and testing
instances together). In the 2017 OASC, instead, the SBS was originally computed on the
test set of the scenarios, but then the results were amended by computing the SBS on the
training set.
An alternative to the above metrics is the speedup of a single solver w.r.t. the SBS or the
VBS, i.e., how much a meta-solver can improve on a baseline solver. Tab. 2 reports, using the
data of Liu et al. (2021), for each meta-solver s in a scenario (I, S, τ) the average speedup
computed as 1/|I| · Σ_{i∈I} time(i, VBS, τ) / time(i, s, τ). Unlike the closed gap, which has
no lower bound, the speedup always falls in [0, 1], with bigger values meaning better
performance. We compared this with the average normalized runtime score, computed as
1 − 1/|I| · Σ_{i∈I} time(i, s, τ) / τ, and the average closed gap score w.r.t. PAR_1. We use
PAR_1 instead of PAR_10 to be consistent with the speedup and the normalized runtime,
which do not apply any penalization.
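The following sketch shows, on invented data, how the average speedup w.r.t. the VBS and the average normalized runtime of Tab. 2 can be computed; unsolved instances are recorded with the timeout as their runtime, and the solver and instance names are placeholders.

```python
# Sketch of the average speedup w.r.t. the VBS and of the average normalized
# runtime (both in [0, 1], higher is better). Solver names and runtimes are
# invented; a timed-out run is recorded with the timeout value.

TIMEOUT = 1200.0
times = {  # times[solver][instance]
    "s1":   {"i1": 10.0,  "i2": TIMEOUT, "i3": 700.0},
    "s2":   {"i1": 300.0, "i2": 45.0,    "i3": TIMEOUT},
    "meta": {"i1": 15.0,  "i2": 60.0,    "i3": 900.0},
}
individual = ["s1", "s2"]          # the portfolio the VBS is built from
instances = ["i1", "i2", "i3"]

def vbs_time(i):
    return min(times[s][i] for s in individual)

def avg_speedup(solver):
    return sum(vbs_time(i) / times[solver][i] for i in instances) / len(instances)

def avg_norm_runtime(solver):
    return 1 - sum(times[solver][i] / TIMEOUT for i in instances) / len(instances)

for s in ["meta"] + individual:
    print(s, "speedup:", round(avg_speedup(s), 4),
          "norm. runtime:", round(avg_norm_runtime(s), 4))
```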
The ranking with speedup and normalized runtime is the same. The podium changes if we
use the closed gap: in this case sunny-as2 and autofolio lose one position, while ASAP rises
from third to first position. However, as we shall see in the next section, the generalization
of these metrics to optimization problems is not trivial.
2.3 Optimization problems
So far we have mainly talked about evaluating meta-solvers on decision problems. While the
MZNC score also takes optimization problems into account, for the closed gap, (normalized)
runtime, and speedup the generalization is not as obvious as it might seem. Here using
the runtime might not be the right choice: often a solver cannot prove the optimality of a
solution, even when it actually finds it. Hence, the obvious alternative is to consider just
the objective value of a solution. But this value needs to be normalized, and to do so, what
bounds should we choose? Furthermore, how do we reward a solver that actually proves the
optimality of a solution? And how do we penalize solvers that cannot find any solution?
If solving to optimality is not rewarded, metrics such as the ratio score of the satisficing
track of the planning competition can be used.7 This score is computed as the ratio between
the best known solution and the best objective value found by the solver, giving 0 points in
case no solution is found.
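A sketch of such a ratio score on made-up values, assuming a minimization problem so that the solver's objective is never smaller than the best known one:

```python
# Sketch of a planning-competition-style ratio score for a minimization problem:
# best known objective divided by the solver's best objective, 0 if no solution.
# The values are invented.

def ratio_score(best_known, found):
    """found is None when the solver produced no solution."""
    return 0.0 if found is None else best_known / found

print(ratio_score(80.0, 100.0))  # 0.8: the solver's plan is 25% longer than the best known
print(ratio_score(80.0, None))   # 0.0: no solution found
```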
A different metric, which focuses on quickly reaching good solutions, is the area score,
introduced in the MZNC starting from 2017. This metric computes the integral of a step
function of the solution value over the runtime horizon. Intuitively, a solver that finds good
solutions earlier can outperform a solver that finds better solutions much later in the solving
stage.
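The sketch below gives one simplistic reading of an area-style score, assuming the incumbent objective is already normalized to [0, 1] (lower is better) and that each solver reports its incumbents as (time, value) pairs; this is not the exact MZNC 2017 formula, just its intuition.

```python
# A simplistic sketch of an area-style score: integrate the (normalized)
# incumbent objective value over the [0, TIMEOUT] horizon with a step function,
# so a solver reaching good solutions earlier accumulates a smaller area
# (lower is better). Traces and normalization are made up, not the MZNC formula.

TIMEOUT = 100.0

def area_score(trace):
    """trace: list of (time, value) incumbents; value is 1.0 before any solution."""
    total, prev_t, prev_v = 0.0, 0.0, 1.0
    for t, v in trace:
        total += (t - prev_t) * prev_v   # previous incumbent holds until time t
        prev_t, prev_v = t, v
    return total + (TIMEOUT - prev_t) * prev_v

early = [(5.0, 0.4), (60.0, 0.3)]   # decent solution early, small improvement later
late  = [(80.0, 0.1)]               # better solution, but found very late
print("early solver area:", area_score(early))  # smaller area, i.e. better
print("late solver area:", area_score(late))
```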
Other attempts have been proposed to take into account both the objective value and the
running time. For example, the ASP competition (Calimeri, Gebser, Maratea, & Ricca, 2016)
adopted an elaborate scoring system that combines the percentage of instances solved
within the timeout, the evaluation time, and the quality of a solution. Similarly,
Amadini, Gabbrielli, and Mauro (2016) proposed a relative metric where each solver s gets
a reward in {0} ∪ [α, β] ∪ {1} according to the objective value obj(i, s, τ) of the best solution
it finds, with 0 ≤ α ≤ β ≤ 1. If no solution is found then s scores 0, if it solves i to optimality
it scores 1, otherwise the score is computed by linearly scaling obj(i, s, τ) into [α, β] according
to the best and worst objective values found by any other available solver on problem i.
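A sketch of this kind of reward, with arbitrary values α = 0.25 and β = 0.75 and invented objective values; the exact scaling used by Amadini et al. (2016) may differ in the details.

```python
# Sketch of a relative reward in {0} U [alpha, beta] U {1} in the spirit of
# Amadini et al. (2016). The parameters alpha, beta and the objective values
# below are arbitrary; the exact scaling of the original paper may differ.

ALPHA, BETA = 0.25, 0.75

def reward(obj, proved_optimal, all_objs):
    """obj: solver's best objective (None if no solution found);
    all_objs: best objective values found by the available solvers on the instance."""
    if obj is None:
        return 0.0
    if proved_optimal:
        return 1.0
    best, worst = min(all_objs), max(all_objs)
    if best == worst:
        return BETA                  # every solver found the same value
    # linear scaling (minimization): best value -> BETA, worst value -> ALPHA
    return ALPHA + (BETA - ALPHA) * (worst - obj) / (worst - best)

objs = [10.0, 14.0, 20.0]            # objective values found on instance i
print(reward(10.0, False, objs))     # 0.75: best value, optimality not proved
print(reward(20.0, False, objs))     # 0.25: worst value
print(reward(None, False, objs))     # 0.0: no solution found
```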
2.4 Randomness and aggregation
We conclude the section with some remarks about randomness and data aggregation.
When evaluating a meta-solver s on a scenario (I, S, τ), it is common practice to partition
I into a training set I_tr, on which s “learns” how to leverage its individual solvers, and a
test set I_ts, on which the performance of s on unforeseen problems is measured. In particular,
to prevent overfitting, it is possible to use a k-fold cross validation by first splitting I into
k disjoint folds, and then using in turn one fold as the test set and the union of the other
folds as the training set. In the AS challenge 2015 (Lindauer et al., 2019) the submissions were
indeed evaluated with a 10-fold cross validation, while in the OASC in 2017 the dataset of
each scenario was divided into only one training set and one test set. As also underlined by
the organizers of the competition, this is risky because it may reward a lucky meta-solver
performing well on that split but poorly on other splits.
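A minimal sketch of the k-fold splitting of the instance set I (instance names are placeholders):

```python
# A minimal sketch of k-fold cross validation splits over the instance set I:
# each fold is used once as the test set and the remaining folds form the
# training set. Instance names are placeholders.

def k_fold_splits(instances, k):
    folds = [instances[f::k] for f in range(k)]   # k disjoint folds covering I
    for f in range(k):
        test = folds[f]
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        yield train, test

I = [f"i{n}" for n in range(10)]
for train, test in k_fold_splits(I, k=5):
    print("train:", train, "| test:", test)
```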
Note that so far we have always assumed deterministic solvers, i.e., solvers always provid-
ing the same outcome if executed on the same instance in the same execution environment.
Unfortunately, the scenario may contain randomized solvers potentially producing different
7. This track includes optimization problems where the goal is to minimize the length of a plan.
results with a high variability. In this case, solvers should be evaluated over a number of
runs, and particular care must be taken because the assumption that a solver can never
outperform the VBS would no longer hold.
A cautious choice to decrease the variance of model predictions would be to repeat the
k-fold cross validation n > 1 times with different random splits. However, this might imply
a tremendous computational effort—the training phase of a meta-solver might take hours or
days—and therefore a significant energy consumption. This issue is becoming an increasing
concern. For example, in their recent work Matricon, Anastacio, Fijalkow, Simon, and Hoos
(2021) propose an approach to stop early the run of an individual solver that is likely to
perform worse than another solver on a subset of the instances of the scenario. In this way,
fewer resources are wasted on solvers that most likely will not bring any improvement.
Finally, we spend a few words on the aggregation of the results. It is quite common
to use the arithmetic mean, or just the sum, when it comes to aggregating the outcomes
of a meta-solver over different problems of the same scenario (e.g., when evaluating the
results on the n·k test sets of a k-fold cross validation repeated n times). The same
applies when evaluating different scenarios. The choice of how to aggregate the metric
values into a unique value should however be motivated, since the arithmetic mean can lead
to misleading conclusions when summarizing normalized benchmark results (Fleming & Wallace,
1986). For example, to dampen the effect of outliers, one may use the median, or use the
geometric mean to average over normalized numbers.
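The following sketch contrasts the arithmetic mean, the geometric mean, and the median on invented normalized scores, showing how a single outlier can dominate the arithmetic mean:

```python
# Sketch contrasting aggregation rules on invented normalized scores: a single
# outlier dominates the arithmetic mean, while the geometric mean and the
# median are less sensitive to it (cf. Fleming & Wallace, 1986).
from statistics import geometric_mean, mean, median

# e.g. runtime of a meta-solver normalized against a baseline, one per scenario
ratios = [0.5, 0.8, 1.2, 20.0]   # the last scenario is an outlier

print("arithmetic mean:", round(mean(ratios), 3))    # pulled up by the outlier
print("geometric mean :", round(geometric_mean(ratios), 3))
print("median         :", round(median(ratios), 3))
```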
3. Conclusions
As it happens in many other fields, the choice of reasonable metrics can have divergent
effects on the assessment of (meta-)solvers. While these issues are mitigated when comparing
individual solvers in competitions having uniform scenarios in terms of size, difficulty, and
nature, the comparison of meta-solver approaches poses new challenges due to the diversity of
the scenarios on which they are evaluated. Although it is impossible to define a one-size-fits-all
metric, we believe that we should aim at more robust metrics avoiding as much as possible
the under- and over-penalization of meta-solvers.
Particular care should be taken when using relative measurements, because the risk is
to amplify small performance variations into large differences in the metric's value. Presenting
the results in terms of orthogonal evaluation metrics allows a better understanding of
the (meta-)solvers' performance, and these insights may help researchers to build meta-
solvers that better fit their needs, as well as to prefer one evaluation metric over another.
Moreover, well-established metrics may be combined into hybrid “meta-metrics” folding
together different performance aspects and handling the possible presence of randomness.
References
Amadini, R., Gabbrielli, M., & Mauro, J. (2016). Portfolio approaches for constraint
optimization problems. Annals of Mathematics and Artificial Intelligence, 76(1-2),
229–246.
Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., ... Van-
schoren, J. (2016). ASlib: A benchmark library for algorithm selection. Artificial
Intelligence, 237, 41–58.
Calimeri, F., Gebser, M., Maratea, M., & Ricca, F. (2016). Design and results of the
fifth answer set programming competition. Artif. Intell., 231, 151–181. Retrieved
from https://doi.org/10.1016/j.artint.2015.09.008 doi: 10.1016/
j.artint.2015.09.008
Fleming, P. J., & Wallace, J. J. (1986). How not to lie with statistics: The correct way
to summarize benchmark results. Commun. ACM ,29(3), 218–221. Retrieved from
https://doi.org/10.1145/5666.5673 doi: 10.1145/5666.5673
Hoos, H. H. (2012). Automated algorithm configuration and parameter tuning. In
Y. Hamadi, É. Monfroy, & F. Saubion (Eds.), Autonomous search (pp. 37–71).
Springer.
ICAPS. (2021). The international planning competition web page. https://www.icaps-conference.org/competitions/. (Accessed: 2021-12-10)
Kotthoff, L. (2016). Algorithm selection for combinatorial search problems: A survey. In
Data mining and constraint programming (pp. 149–190). Springer.
Kotthoff, L., Hurley, B., & O'Sullivan, B. (2017). The ICON challenge on algorithm selection.
AI Magazine, 38(2), 91–93.
Lindauer, M., van Rijn, J. N., & Kotthoff, L. (2019). The algorithm selection competitions
2015 and 2017. Artificial Intelligence, 272, 86–100.
Liu, T., Amadini, R., Mauro, J., & Gabbrielli, M. (2021). sunny-as2: Enhancing SUNNY
for algorithm selection. J. Artif. Intell. Res., 72, 329–376.
Matricon, T., Anastacio, M., Fijalkow, N., Simon, L., & Hoos, H. H. (2021). Statistical
comparison of algorithm performance through instance selection. In L. D. Michel (Ed.),
27th International Conference on Principles and Practice of Constraint Programming,
CP 2021, Montpellier, France (virtual conference), October 25-29, 2021 (Vol. 210, pp.
43:1–43:21). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
QBFEVAL. (2021). QBF evaluations web page. http://www.qbflib.org/index_eval.php. (Accessed: 2021-12-10)
SAT competition. (2021). The international SAT competition web page. http://www.satcompetition.org/. (Accessed: 2021-12-10)
Stuckey, P. J., Becket, R., & Fischer, J. (2010). Philosophy of the MiniZinc challenge.
Constraints An Int. J., 15(3), 307–316. Retrieved from https://doi.org/10.1007/
s10601-010-9093-0 doi: 10.1007/s10601-010-9093-0
Stuckey, P. J., Feydy, T., Schutt, A., Tack, G., & Fischer, J. (2014). The MiniZinc Challenge
2008-2013. AI Magazine, 35(2), 55–60. Retrieved from http://www.aaai.org/
ojs/index.php/aimagazine/article/view/2539
XCSP Competition. (2019). XCSP competition web page. http://www.cril.univ-artois.fr/XCSP19/. (Accessed: 2021-12-10)