Research Note

On the evaluation of (meta-)solver approaches

Roberto Amadini (roberto.amadini@unibo.it)
Maurizio Gabbrielli (maurizio.gabbrielli@unibo.it)
Department of Computer Science and Engineering, University of Bologna, Italy

Tong Liu (lteu@icloud.com)
Meituan, Beijing, China

Jacopo Mauro (mauro@imada.sdu.dk)
Department of Mathematics and Computer Science, University of Southern Denmark, Denmark

Abstract

Meta-solver approaches exploit a number of individual solvers to potentially build a better solver. To assess the performance of meta-solvers, one can simply adopt the metrics typically used for individual solvers (e.g., runtime or solution quality), or employ more specific evaluation metrics (e.g., by measuring how close the meta-solver gets to its virtual best performance). In this paper, based on some recently published works, we provide an overview of different performance metrics for evaluating (meta-)solvers, underlining their strengths and weaknesses.

1. Introduction

A famous quote attributed to Aristotle says that “the whole is greater than the sum of its parts”. This principle has been applied in several contexts, including the field of constraint solving and optimization. Combinatorial problems arising from application domains such as scheduling, manufacturing, routing, or logistics can be tackled by combining and leveraging the complementary strengths of different solvers to create a better global meta-solver.¹

Several approaches for combining solvers, and hence creating effective meta-solvers, have been developed. Over the last decades we witnessed the creation of new Algorithm Selection (Kotthoff, 2016) and Configuration (Hoos, 2012) approaches² that reached peak results in various solving competitions (SAT competition, 2021; Stuckey, Becket, & Fischer, 2010; ICAPS, 2021). To compare different meta-solvers, new competitions were created, e.g., the 2015 ICON challenge (Kotthoff, Hurley, & O’Sullivan, 2017) and the 2017 OASC competition on algorithm selection (Lindauer, van Rijn, & Kotthoff, 2019). However, a discussion of why a particular evaluation metric has been chosen to rank the solvers is lacking.

1. Meta-solvers are sometimes referred to in the literature as portfolio solvers, because they take advantage of a “portfolio” of different solvers.
2. A fine-tuned solver can be seen as a meta-solver where we consider different configurations of the same solver as different solvers.

We believe that further study on this issue is necessary, because meta-solvers are often evaluated on heterogeneous scenarios, characterized by a different number of problems, different timeouts, and different individual solvers from which the meta-solver approaches are built. In this paper, starting from some surprising results presented by Liu, Amadini, Mauro, and Gabbrielli (2021) showing dramatic ranking changes with different, but reasonable, metrics, we would like to draw more attention to the evaluation of meta-solver approaches by shedding some light on the strengths and weaknesses of different metrics.

2. Evaluation metrics

Before discussing evaluation metrics, we should spend a few words on what we need to evaluate: the solvers. In our context, a solver is a program that takes as input the description of a computational problem in a given language, and returns an observable outcome providing zero or more solutions for the given problem.
For example, for decision problems the outcome may simply be “yes” or “no”, while for optimization problems we might be interested in the sub-optimal solutions found along the search.

An evaluation metric, or performance metric, is a function mapping the outcome of a solver on a given instance to a number representing “how good” the solver is on that instance. An evaluation metric is often not defined just by the output of the (meta-)solver, but can also be influenced by other factors such as the computational resources available, the problems on which we evaluate the solver, and the other solvers involved in the evaluation. For example, it is often unavoidable to set a timeout on the solver’s execution when there is no guarantee of termination in a reasonable amount of time (e.g., for NP-hard problems). Timeouts make the evaluation feasible, but inevitably couple the evaluation metric to the execution context.

For this reason, the evaluation of a meta-solver should also take into account the scenario that encompasses the solvers to evaluate, the instances used for the validation, and the timeout. Formally, at least for the purposes of this paper, we can define a scenario as a triple $(I, S, \tau)$ where: $I$ is a set of problem instances, $S$ is a set of individual solvers, and $\tau \in (0, +\infty)$ is a timeout such that the outcome of a solver $s \in S$ over an instance $i \in I$ is always measured in the time interval $[0, \tau)$. Evaluating meta-solvers over heterogeneous scenarios is complicated by the fact that the set of instances, the set of solvers, and the timeout can have high variability. As we shall see in Sect. 2.3, things are even trickier in scenarios including optimization problems.

2.1 Absolute vs relative metrics

A sharp distinction between evaluation metrics can be drawn depending on whether or not their value depends on the outcome of other solvers. We say that an evaluation metric is relative in the former case, and absolute otherwise. For example, a well-known absolute metric is the penalized average runtime with penalty $\lambda$ (PAR$_\lambda$), which compares the solvers by their average runtime and penalizes timeouts with $\lambda$ times the timeout. Formally, let $\mathit{time}(i, s, \tau)$ be the function returning the runtime of solver $s$ on instance $i$ with timeout $\tau$, assuming $\mathit{time}(i, s, \tau) = \tau$ if $s$ cannot solve $i$ before the timeout $\tau$. For optimization problems, we consider the runtime as the time taken by $s$ to solve $i$ to optimality,³ assuming w.l.o.g. that an optimization problem is always a minimization problem. We can define PAR$_\lambda$ as follows.

Definition 1 (Penalized Average Runtime). Let $(I, S, \tau)$ be a scenario. The PAR$_\lambda$ score of a solver $s \in S$ over $I$ is given by $\frac{1}{|I|}\sum_{i \in I} \mathit{par}_\lambda(i, s, \tau)$ where:
$$\mathit{par}_\lambda(i, s, \tau) = \begin{cases} \mathit{time}(i, s, \tau) & \text{if } \mathit{time}(i, s, \tau) < \tau \\ \lambda\tau & \text{otherwise.} \end{cases}$$

3. If $s$ cannot solve $i$ to optimality before $\tau$, then $\mathit{time}(i, s, \tau) = \tau$ even if sub-optimal solutions are found.

Well-known PAR measures are, e.g., the PAR$_2$ adopted in the SAT competitions (SAT competition, 2021) or the PAR$_{10}$ used by Lindauer et al. (2019). Evaluating (meta-)solvers on scenarios having different timeouts requires normalizing PAR$_\lambda$ to a fixed range to avoid misleading comparisons.

Another absolute metric for decision problems is the number (or percentage) of instances solved, where ties are broken by favoring the solver minimizing the average running time, i.e., minimizing the PAR$_1$ score. This metric has been used in various tracks of the planning competition (ICAPS, 2021), the XCSP competition (XCSP Competition, 2019), and the QBF evaluations (QBFEVAL, 2021).
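To make these two absolute metrics concrete, the following minimal sketch computes the PAR$_\lambda$ score of Definition 1 and the solved-instances count. It assumes an illustrative encoding of a scenario $(I, S, \tau)$ as a runtime table plus a timeout; the names `runtimes`, `par_score`, and `solved` are ours and not taken from any existing benchmark library.

```python
from math import inf

# Hypothetical encoding of a scenario (I, S, tau): a table of measured runtimes
# per solver and instance, where any value >= timeout (e.g., inf) means "timed out".

def par_score(runtimes, solver, instances, timeout, penalty):
    """PAR_lambda of `solver` over `instances` (Definition 1): solved instances
    contribute their runtime, timeouts contribute penalty * timeout."""
    total = 0.0
    for i in instances:
        t = runtimes[solver][i]
        total += t if t < timeout else penalty * timeout
    return total / len(instances)

def solved(runtimes, solver, instances, timeout):
    """Number of instances solved before the timeout; in several competitions
    ties on this metric are broken by the lower PAR_1 (i.e., average runtime)."""
    return sum(1 for i in instances if runtimes[solver][i] < timeout)

# Toy example with a 100-second timeout: under PAR_1 the fast-but-fragile s1
# looks better, while s2 solves more instances and wins under PAR_10.
runtimes = {"s1": {"i1": 5.0, "i2": inf}, "s2": {"i1": 95.0, "i2": 99.0}}
for s in ("s1", "s2"):
    print(s,
          solved(runtimes, s, ["i1", "i2"], 100),
          par_score(runtimes, s, ["i1", "i2"], 100, penalty=1),
          par_score(runtimes, s, ["i1", "i2"], 100, penalty=10))
```

The toy data already hints at the paper's point: two perfectly reasonable metrics (PAR$_1$ and solved instances, or PAR$_{10}$) can rank the same two solvers in opposite ways.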
A well-established relative metric is instead the Borda count, adopted for example by the MiniZinc Challenge (Stuckey, Feydy, Schutt, Tack, & Fischer, 2014) for both single solvers and meta-solvers. The Borda count is a family of voting rules that can be applied to the evaluation of a solver by considering the comparison as an election where the solvers are the candidates and the problem instances are the voters. The MiniZinc challenge uses a variant of Borda⁴ where each solver scores points proportionally to the number of solvers it beats.

4. In the original definition, the lowest-ranked candidate gets 0 points, the next-lowest 1 point, and so on.

Assuming that $\mathit{obj}(i, s, t)$ is the best objective value found by solver $s$ on optimization problem $i$ at time $t$, with $\mathit{obj}(i, s, t) = \infty$ when no solution is found at time $t$, the MiniZinc challenge score is defined as follows.

Definition 2 (MiniZinc challenge score). Let $(I, S, \tau)$ be a scenario where $I = I_{dec} \cup I_{opt}$, with $I_{dec}$ decision problems and $I_{opt}$ optimization problems. The MiniZinc challenge (MZNC) score of $s \in S$ over $I$ is $\sum_{i \in I,\, s' \in S \setminus \{s\}} \mathit{ms}(i, s, s')$ where:
$$\mathit{ms}(i, s, s') = \begin{cases}
0 & \text{if } \mathit{unknown}(i, s) \lor \mathit{better}(i, s', s) \\
1 & \text{if } \mathit{better}(i, s, s') \\
0.5 & \text{if } \mathit{time}(i, s, \tau) = \mathit{time}(i, s', \tau) \text{ and } \mathit{obj}(i, s, \tau) = \mathit{obj}(i, s', \tau) \\
\dfrac{\mathit{time}(i, s', \tau)}{\mathit{time}(i, s, \tau) + \mathit{time}(i, s', \tau)} & \text{otherwise}
\end{cases}$$
where the predicate $\mathit{unknown}(i, s)$ holds if $s$ does not produce a solution within the timeout:
$$\mathit{unknown}(i, s) = (i \in I_{dec} \land \mathit{time}(i, s, \tau) = \tau) \lor (i \in I_{opt} \land \mathit{obj}(i, s, \tau) = \infty)$$
and $\mathit{better}(i, s, s')$ holds if $s$ finishes earlier than $s'$ or it produces a better solution:
$$\mathit{better}(i, s, s') = \mathit{time}(i, s, \tau) \dots$$
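For illustration only, here is a minimal sketch of an MZNC-style scoring function. It assumes each outcome is stored as a (runtime, objective) pair, with objective $\infty$ when no solution is found and 0 for decision instances; since the formula for $\mathit{better}$ above is incomplete, the predicate below encodes one plausible reading of the accompanying prose (a strictly better objective, or finishing within the timeout while the rival does not). All names (`outcomes`, `mznc_score`, ...) are ours and not taken from the actual MiniZinc challenge scripts.

```python
from math import inf

def unknown(out, is_opt, timeout):
    # No solution within the timeout: timed out on a decision instance,
    # or no objective value found on an optimization instance.
    runtime, obj = out
    return obj == inf if is_opt else runtime >= timeout

def better(out_s, out_r, timeout):
    # ASSUMED reading of the (truncated) `better` predicate: s beats r if it
    # finds a strictly better (lower) objective, or it finishes within the
    # timeout while r does not.
    (t_s, o_s), (t_r, o_r) = out_s, out_r
    return o_s < o_r or (t_s < timeout <= t_r)

def ms(out_s, out_r, is_opt, timeout):
    # Pairwise score of Definition 2, with the cases checked in order.
    if unknown(out_s, is_opt, timeout) or better(out_r, out_s, timeout):
        return 0.0
    if better(out_s, out_r, timeout):
        return 1.0
    if out_s == out_r:                        # same runtime and same objective
        return 0.5
    return out_r[0] / (out_s[0] + out_r[0])   # runtime-proportional split

def mznc_score(outcomes, solver, solvers, instances, is_opt, timeout):
    """Sum of ms(i, s, s') over all instances i and rival solvers s'."""
    return sum(ms(outcomes[solver][i], outcomes[r][i], is_opt[i], timeout)
               for i in instances for r in solvers if r != solver)

# Toy scenario: one optimization instance, 300-second timeout.
outcomes = {"s1": {"i1": (300.0, 10.0)},   # timed out with objective 10
            "s2": {"i1": (300.0, 12.0)},   # timed out with objective 12
            "s3": {"i1": (300.0, inf)}}    # no solution found
print(mznc_score(outcomes, "s1", ["s1", "s2", "s3"], ["i1"], {"i1": True}, 300.0))  # 2.0
```

Note that this is a relative metric in the sense defined above: changing the rival solvers in the scenario changes the score of `s1` even if its own outcomes stay the same.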