Local Explanations via Necessity and Sufficiency:
Unifying Theory and Practice
David S. Watson*1, Limor Gultchin*2,3, Ankur Taly4, Luciano Floridi5,3
*Equal contribution
1Department of Statistical Science, University College London, London, UK
2Department of Computer Science, University of Oxford, Oxford, UK
3The Alan Turing Institute, London, UK
4Google Inc., Mountain View, USA
5Oxford Internet Institute, University of Oxford, Oxford, UK
Accepted for the 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021). arXiv:2103.14651v2 [cs.LG] 10 Jun 2021
Abstract
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors with respect to a given context, and demonstrate its flexibility and competitive performance against state of the art alternatives on various tasks.
1 INTRODUCTION
Machine learning algorithms are increasingly used in a va-
riety of high-stakes domains, from credit scoring to medi-
cal diagnosis. However, many such methods are opaque , in
that humans cannot understand the reasoning behind partic-
ular predictions. Post-hoc, model-agnostic local explanation
tools (e.g., feature attributions, rule lists, and counterfactu-
als) are at the forefront of a fast-growing area of research
variously referred to as interpretable machine learning or
explainable artificial intelligence (XAI).
Many authors have pointed out the inconsistencies between
popular XAI tools, raising questions as to which method
is more reliable in particular cases [Mothilal et al., 2020a;
Ramon et al., 2020; Fernández-Loría et al., 2020]. Theoret-
ical foundations have proven elusive in this area, perhaps
due to the perceived subjectivity inherent to notions such
as “intelligible” and “relevant” [Watson and Floridi, 2020].
Figure 1: We describe minimal sufficient factors (here, sets of features) for a given input (top row), with the aim of preserving or flipping the original prediction. We report a sufficiency score for each set and a cumulative necessity score for all sets, indicating the proportion of paths towards the outcome that are covered by the explanation. Feature colors indicate source of feature values (input or reference).

Practitioners often seek refuge in the axiomatic guarantees of Shapley values, which have become the de facto standard in many XAI applications, due in no small part to their attractive theoretical properties [Bhatt et al., 2020]. How-
ever, ambiguities regarding the underlying assumptions of
the method [Kumar et al., 2020] and the recent prolifera-
tion of mutually incompatible implementations [Sundarara-
jan and Najmi, 2019; Merrick and Taly, 2020] have com-
plicated this picture. Despite the abundance of alternative
XAI tools [Molnar, 2021], a dearth of theory persists. This
has led some to conclude that the goals of XAI are under-
specified [Lipton, 2018], and even that post-hoc methods do
more harm than good [Rudin, 2019].
We argue that this lacuna at the heart of XAI should be filled
by a return to fundamentals – specifically, to necessity and
sufficiency . As the building blocks of all successful expla-
nations, these dual concepts deserve a privileged position
in the theory and practice of XAI. Following a review of re-
lated work (Sect. 2), we operationalize this insight with a
unified framework (Sect. 3) that reveals unexpected affinities
between various XAI tools and probabilities of causation
(Sect. 4). We proceed to implement a novel procedure for
computing model explanations that improves upon the state
of the art in various quantitative and qualitative comparisons
(Sect. 5). Following a brief discussion (Sect. 6), we conclude
with a summary and directions for future work (Sect. 7).
We make three main contributions. (1) We present a formal
framework for XAI that unifies several popular approaches,
including feature attributions, rule lists, and counterfactu-
als. (2) We introduce novel measures of necessity and suf-
ficiency that can be computed for any feature subset. The
method enables users to incorporate domain knowledge,
search various subspaces, and select a utility-maximizing
explanation. (3) We present a sound and complete algorithm
for identifying explanatory factors, and illustrate its perfor-
mance on a range of tasks.
2 NECESSITY AND SUFFICIENCY
Necessity and sufficiency have a long philosophical tradi-
tion [Mackie, 1965; Lewis, 1973; Halpern and Pearl, 2005b],
spanning logical, probabilistic, and causal variants. In propo-
sitional logic, we say that $x$ is a sufficient condition for $y$ iff $x \rightarrow y$, and $x$ is a necessary condition for $y$ iff $y \rightarrow x$. So stated, necessity and sufficiency are logically converse. However, by the law of contraposition, both definitions admit alternative formulations, whereby sufficiency may be rewritten as $\neg y \rightarrow \neg x$ and necessity as $\neg x \rightarrow \neg y$. By pairing the original definition of sufficiency with the latter definition of necessity (and vice versa), we find that the two concepts are also logically inverse.

These formulae suggest probabilistic relaxations, measuring $x$'s sufficiency for $y$ by $P(y \mid x)$ and $x$'s necessity for $y$ by $P(x \mid y)$. Because there is no probabilistic law of contraposition, these quantities are generally uninformative w.r.t. $P(\neg x \mid \neg y)$ and $P(\neg y \mid \neg x)$, which may be of independent interest. Thus, while necessity is both the converse and inverse of sufficiency in propositional logic, the two formulations come apart in probability calculus. We revisit the distinction between probabilistic conversion and inversion in Rmk. 1 and Sect. 4.
These definitions struggle to track our intuitions when we consider causal explanations [Pearl, 2000; Tian and Pearl, 2000]. It may make sense to say in logic that if $x$ is a necessary condition for $y$, then $y$ is a sufficient condition for $x$; it does not follow that if $x$ is a necessary cause of $y$, then $y$ is a sufficient cause of $x$. We may amend both concepts using counterfactual probabilities – e.g., the probability that Alice would still have a headache if she had not taken an aspirin, given that she does not have a headache and did take an aspirin. Let $P(y_x \mid x', y')$ denote such a quantity, to be read as "the probability that $Y$ would equal $y$ under an intervention that sets $X$ to $x$, given that we observe $X = x'$ and $Y = y'$." Then, according to Pearl [2000, Ch. 9], the probability that $x$ is a sufficient cause of $y$ is given by $\mathrm{suf}(x, y) := P(y_x \mid x', y')$, and the probability that $x$ is a necessary cause of $y$ is given by $\mathrm{nec}(x, y) := P(y'_{x'} \mid x, y)$.
Analysis becomes more difficult in higher dimensions,
where variables may interact to block or unblock causal path-
ways. VanderWeele and Robins [2008] analyze sufficient
causal interactions in the potential outcomes framework,
refining notions of synergism without monotonicity con-
straints. In a subsequent paper, VanderWeele and Richard-
son [2012] study the irreducibility and singularity of interac-
tions in sufficient-component cause models. Halpern [2016]
devotes an entire monograph to the subject, providing vari-
ous criteria to distinguish between subtly different notions
of “actual causality”, as well as “but-for” (similar to nec-
essary) and sufficient causes. These authors generally limit
their analyses to Boolean systems with convenient structural
properties, e.g. conditional ignorability and the stable unit
treatment value assumption [Imbens and Rubin, 2015]. Op-
erationalizing their theories in a practical method without
such restrictions is one of our primary contributions.
Necessity and sufficiency have begun to receive explicit at-
tention in the XAI literature. Ribeiro et al. [2018a] propose
a bandit procedure for identifying a minimal set of Boolean
conditions that entails a predictive outcome (more on this in
Sect. 4). Dhurandhar et al. [2018] propose an autoencoder
for learning pertinent negatives and positives, i.e. features
whose presence or absence is decisive for a given label,
while Zhang et al. [2018] develop a technique for generat-
ing symbolic corrections to alter model outputs. Both meth-
ods are optimized for neural networks, unlike the model-
agnostic approach we develop here.
Another strand of research in this area is rooted in logic pro-
gramming. Several authors have sought to reframe XAI as
either a SAT [Ignatiev et al., 2019; Narodytska et al., 2019]
or a set cover problem [Lakkaraju et al., 2019; Grover et al.,
2019], typically deriving approximate solutions on a pre-
specified subspace to ensure computability in polynomial
time. We adopt a different strategy that prioritizes complete-
ness over efficiency, an approach we show to be feasible in
moderate dimensions (see Sect. 6 for a discussion).
Mothilal et al. [2020a] build on Halpern [2016]’s definitions
of necessity and sufficiency to critique popular XAI tools,
proposing a new feature attribution measure with some pur-
ported advantages. Their method relies on the strong as-
sumption that predictors are mutually independent. Galho-
tra et al. [2021] adapt Pearl [2000]’s probabilities of cau-
sation for XAI under a more inclusive range of data gen-
erating processes. They derive analytic bounds on multidi-
mensional extensions of nec andsuf, as well as an algo-
rithm for point identification when graphical structure per-
mits. Oddly, they claim that non-causal applications of ne-
cessity and sufficiency are somehow “incorrect and mislead-ing” (p. 2), a normative judgment that is inconsistent with
many common uses of these concepts.
Rather than insisting on any particular interpretation of ne-
cessity and sufficiency, we propose a general framework that
admits logical, probabilistic, and causal interpretations as
special cases. Whereas previous works evaluate individual
predictors, we focus on feature subsets , allowing us to detect
and quantify interaction effects. Our formal results clarify
the relationship between existing XAI methods and proba-
bilities of causation, while our empirical results demonstrate
their applicability to a wide array of tasks and datasets.
3 A UNIFYING FRAMEWORK
We propose a unifying framework that highlights the role of
necessity and sufficiency in XAI. Its constituent elements
are described below.
Target function. Post-hoc explainability methods assume access to a target function $f: \mathcal{X} \mapsto \mathcal{Y}$, i.e. the model whose prediction(s) we seek to explain. For simplicity, we restrict attention to the binary setting, with $Y \in \{0, 1\}$. Multi-class
extensions are straightforward, while continuous outcomes
may be accommodated via discretization. Though this in-
evitably involves some information loss, we follow authors
in the contrastivist tradition in arguing that, even for con-
tinuous outcomes, explanations always involve a juxtapo-
sition (perhaps implicit) of “fact and foil” [Lipton, 1990].
For instance, a loan applicant is probably less interested in
knowing why her credit score is precisely ythan she is in
discovering why it is below some threshold (say, 700). Of
course, binary outcomes can approximate continuous values
with arbitrary precision over repeated trials.
Context. The context $\mathcal{D}$ is a probability distribution over which we quantify sufficiency and necessity. Contexts may be constructed in various ways but always consist of at least some input (point or space) and reference (point or space). For instance, we may want to compare $x_i$ with all other samples, or else just those perturbed along one or two axes, perhaps based on some conditioning event(s).
In addition to predictors and outcomes, we optionally in-
clude information exogenous to f. For instance, if any
events were conditioned upon to generate a given refer-
ence sample, this information may be recorded among a
set of auxiliary variables W. Other examples of potential
auxiliaries include metadata or engineered features such as
those learned via neural embeddings. This augmentation al-
lows us to evaluate the necessity and sufficiency of factors
beyond those found in $X$. Contextual data take the form $Z = (X, W) \sim \mathcal{D}$. The distribution may or may not encode dependencies between (elements of) $X$ and (elements of) $W$. We extend the target function to augmented inputs by defining $f(z) := f(x)$.

Factors. Factors pick out the properties whose necessity
and sufficiency we wish to quantify. Formally, a factor $c: \mathcal{Z} \mapsto \{0, 1\}$ indicates whether its argument satisfies some criteria with respect to predictors or auxiliaries. For instance, if $x$ is an input to a credit lending model, and $w$ contains information about the subspace from which data were sampled, then a factor could be $c(z) = \mathbb{1}[x[\text{gender} = \text{"female"}] \wedge w[do(\text{income} > \$50\text{k})]]$, i.e. checking if $z$ is female and drawn from a context in which an intervention fixes income at greater than \$50k. We use the term "factor" as opposed to "condition" or "cause" to suggest an inclusive set of criteria that may apply to predictors $x$ and/or auxiliaries $w$. Such criteria are always observational w.r.t. $z$ but may be interventional or counterfactual w.r.t. $x$. We assume a finite space of factors $\mathcal{C}$.
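To make the formalism concrete, the following is a minimal sketch of a factor as a Boolean predicate over an augmented sample; the dictionary representation of $z = (x, w)$ and all field names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (illustrative, not the authors' implementation): a factor
# is a Boolean predicate over an augmented sample z = (x, w).
# All field names below are hypothetical.
def make_factor(predicate):
    """Wrap a Boolean predicate so it returns 0/1, matching c: Z -> {0, 1}."""
    return lambda z: int(bool(predicate(z)))

# Factor from the text: z records a female applicant drawn from a context
# in which income was fixed above $50k (stored among the auxiliaries w).
c = make_factor(lambda z: z["x"]["gender"] == "female"
                and z["w"]["do_income_gt_50k"] == 1)

z = {"x": {"gender": "female", "age": 41}, "w": {"do_income_gt_50k": 1}}
print(c(z))  # -> 1
```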
Partial order. When multiple factors pass a given neces-
sity or sufficiency threshold, users will tend to prefer some
over others. For instance, factors with fewer conditions are
often preferable to those with more, all else being equal;
factors that change a variable by one unit as opposed to two
are preferable, and so on. Rather than formalize this pref-
erence in terms of a distance metric, which unnecessarily
constrains the solution space, we treat the partial ordering
as primitive and require only that it be complete and transi-
tive. This covers not just distance-based measures but also
more idiosyncratic orderings that are unique to individual
agents. Ordinal preferences may be represented by cardi-
nal utility functions under reasonable assumptions (see, e.g.,
[von Neumann and Morgenstern, 1944]).
We are now ready to formally specify our framework.
Definition 1 (Basis). A basis for computing necessary and sufficient factors for model predictions is a tuple $B = \langle f, \mathcal{D}, \mathcal{C}, \preceq \rangle$, where $f$ is a target function, $\mathcal{D}$ is a context, $\mathcal{C}$ is a set of factors, and $\preceq$ is a partial ordering on $\mathcal{C}$.
3.1 EXPLANATORY MEASURES
For some fixed basis $B = \langle f, \mathcal{D}, \mathcal{C}, \preceq \rangle$, we define the following measures of sufficiency and necessity, with probability taken over $\mathcal{D}$.
Definition 2 (Probability of Sufficiency). The probability that $c$ is a sufficient factor for outcome $y$ is given by:

$PS(c, y) := P(f(z) = y \mid c(z) = 1).$

The probability that factor set $C = \{c_1, \ldots, c_k\}$ is sufficient for $y$ is given by:

$PS(C, y) := P\left(f(z) = y \,\middle|\, \sum_{i=1}^{k} c_i(z) \geq 1\right).$

Definition 3 (Probability of Necessity). The probability that $c$ is a necessary factor for outcome $y$ is given by:

$PN(c, y) := P(c(z) = 1 \mid f(z) = y).$

The probability that factor set $C = \{c_1, \ldots, c_k\}$ is necessary for $y$ is given by:

$PN(C, y) := P\left(\sum_{i=1}^{k} c_i(z) \geq 1 \,\middle|\, f(z) = y\right).$
Remark 1. These probabilities can be likened to the "precision" (positive predictive value) and "recall" (true positive rate) of a (hypothetical) classifier that predicts whether $f(z) = y$ based on whether $c(z) = 1$. By examining the confusion matrix of this classifier, one can define other related quantities, e.g. the true negative rate $P(c(z) = 0 \mid f(z) \neq y)$ and the negative predictive value $P(f(z) \neq y \mid c(z) = 0)$, which are contrapositive transformations of our proposed measures. We can recover these values exactly via $PS(1 - c, 1 - y)$ and $PN(1 - c, 1 - y)$, respectively. When necessity and sufficiency are defined as probabilistic inversions (rather than conversions), such transformations are impossible.
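As a concrete illustration of Defs. 2 and 3 and Remark 1, the sketch below estimates PS and PN by empirical proportions over samples drawn from the context. The function and argument names are ours, not part of the released package.

```python
import numpy as np

def prob_sufficiency(samples, f, c, y):
    """Estimate PS(c, y) = P(f(z) = y | c(z) = 1) by empirical proportions."""
    hits = [z for z in samples if c(z) == 1]
    if not hits:
        return float("nan")
    return float(np.mean([f(z) == y for z in hits]))

def prob_necessity(samples, f, C, y):
    """Estimate PN(C, y) = P(some c in C fires | f(z) = y) for a factor set C."""
    on_y = [z for z in samples if f(z) == y]
    if not on_y:
        return float("nan")
    return float(np.mean([any(c(z) == 1 for c in C) for z in on_y]))

# The contrapositive quantities from Remark 1 follow by negation, e.g. PS(1 - c, 1 - y):
def prob_sufficiency_negated(samples, f, c, y):
    return prob_sufficiency(samples, f, lambda z: 1 - c(z), 1 - y)
```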
3.2 MINIMAL SUFFICIENT FACTORS
We introduce Local Explanations via Necessity and Sufficiency (LENS), a procedure for computing explanatory factors with respect to a given basis $B$ and threshold parameter $\tau$ (see Alg. 1). First, we calculate a factor's probability of sufficiency (see probSuff) by drawing $n$ samples from $\mathcal{D}$ and taking the maximum likelihood estimate $\widehat{PS}(c, y)$. Next, we sort the space of factors w.r.t. $\preceq$ in search of those that are $\tau$-minimal.

Definition 4 ($\tau$-minimality). We say that $c$ is $\tau$-minimal iff (i) $PS(c, y) \geq \tau$ and (ii) there exists no factor $c'$ such that $PS(c', y) \geq \tau$ and $c' \prec c$.

Since a factor is necessary to the extent that it covers all possible pathways towards a given outcome, our next step is to span the $\tau$-minimal factors and compute their cumulative PN (see probNec). As a minimal factor $c$ stands for all $c'$ such that $c \preceq c'$, in reporting probability of necessity, we expand $C$ to its upward closure.
Thms. 1 and 2 state that this procedure is optimal in a sense that depends on whether we assume access to oracle or sample estimates of PS (see Appendix A for all proofs).

Theorem 1. With oracle estimates $PS(c, y)$ for all $c \in \mathcal{C}$, Alg. 1 is sound and complete. That is, for any $C$ returned by Alg. 1 and all $c \in \mathcal{C}$, $c$ is $\tau$-minimal iff $c \in C$.

Population proportions may be obtained if data fully saturate the space $\mathcal{D}$, a plausible prospect for categorical variables of low to moderate dimensionality. Otherwise, proportions will need to be estimated.

Theorem 2. With sample estimates $\widehat{PS}(c, y)$ for all $c \in \mathcal{C}$, Alg. 1 is uniformly most powerful. That is, Alg. 1 identifies the most $\tau$-minimal factors of any method with fixed type I error $\alpha$.

Multiple testing adjustments can easily be accommodated, in which case modified optimality criteria apply [Storey, 2007].
Remark 2. We take it that the main quantity of interest in most applications is sufficiency, be it for the original or alternative outcome, and therefore define $\tau$-minimality w.r.t. sufficient (rather than necessary) factors. However, necessity serves an important role in tuning $\tau$, as there is an inherent trade-off between the parameters. More factors are excluded at higher values of $\tau$, thereby inducing lower cumulative PN; more factors are included at lower values of $\tau$, thereby inducing higher cumulative PN. See Appendix B.
Algorithm 1 LENS

Input: $B = \langle f, \mathcal{D}, \mathcal{C}, \preceq \rangle$, $\tau$
Output: Factor set $C$, $(\forall c \in C)\; PS(c, y)$, $PN(C, y)$
Sample $\hat{\mathcal{D}} = \{z_i\}_{i=1}^{n} \sim \mathcal{D}$

function probSuff(c, y):
    n(c & y) = $\sum_{i=1}^{n} \mathbb{1}[c(z_i) = 1 \wedge f(z_i) = y]$
    n(c) = $\sum_{i=1}^{n} c(z_i)$
    return n(c & y) / n(c)

function probNec(C, y, upward_closure_flag):
    if upward_closure_flag then
        $C = \{c \mid c \in \mathcal{C} \wedge \exists c' \in C : c' \preceq c\}$
    end if
    n(C & y) = $\sum_{i=1}^{n} \mathbb{1}[\sum_{j=1}^{k} c_j(z_i) \geq 1 \wedge f(z_i) = y]$
    n(y) = $\sum_{i=1}^{n} \mathbb{1}[f(z_i) = y]$
    return n(C & y) / n(y)

function minimalSuffFactors(y, $\tau$, sample_flag, $\alpha$):
    sorted_factors = topological_sort($\mathcal{C}$, $\preceq$)
    cands = []
    for c in sorted_factors do
        if $\exists (c', \_) \in$ cands such that $c' \preceq c$ then
            continue
        end if
        ps = probSuff(c, y)
        if sample_flag then
            p = binom.test(n(c & y), n(c), $\tau$, alternative = "greater")
            if p $\leq \alpha$ then
                cands.append(c, ps)
            end if
        else if ps $\geq \tau$ then
            cands.append(c, ps)
        end if
    end for
    cum_pn = probNec({c | (c, _) in cands}, y, TRUE)
    return cands, cum_pn
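To complement the pseudocode above, here is a minimal Python sketch of the same search logic, under two simplifying assumptions: factors are conjunctions of conditions ordered by subset inclusion (so the upward closure adds no coverage beyond the accepted minimal factors), and candidates arrive pre-sorted consistently with the partial order. It is an illustration only; the authors' released code lives at https://github.com/limorigu/LENS.

```python
from scipy.stats import binomtest

def lens(samples, f, factors, order_leq, y, tau, alpha=None):
    """factors: list of (name, c) pairs, pre-sorted so preferred factors come first.
    order_leq(a, b): True iff the factor named a precedes the factor named b."""
    cands = []
    for name, c in factors:
        # Skip any factor dominated by an already accepted candidate.
        if any(order_leq(prev, name) for prev, _, _ in cands):
            continue
        hits = [z for z in samples if c(z) == 1]
        n_c, n_cy = len(hits), sum(f(z) == y for z in hits)
        if n_c == 0:
            continue
        ps = n_cy / n_c
        if alpha is not None:
            # Sample-estimate mode: one-sided binomial test of PS >= tau (cf. Thm. 2).
            accept = binomtest(n_cy, n_c, tau, alternative="greater").pvalue <= alpha
        else:
            accept = ps >= tau  # oracle / saturated-data mode (cf. Thm. 1)
        if accept:
            cands.append((name, c, ps))
    # Cumulative PN of the accepted set; under subset ordering of conjunctive
    # factors this coincides with PN of the upward closure.
    on_y = [z for z in samples if f(z) == y]
    cum_pn = (sum(any(c(z) == 1 for _, c, _ in cands) for z in on_y) / len(on_y)
              if on_y else float("nan"))
    return cands, cum_pn
```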
4 ENCODING EXISTING MEASURES

Explanatory measures can be shown to play a central role in many seemingly unrelated XAI tools, albeit under different assumptions about the basis tuple $B$. In this section, we relate our framework to a number of existing methods.
Feature attributions. Several popular feature attribution
algorithms are based on Shapley values [Shapley, 1953],
which decompose the predictions of any target function as a
sum of weights over $d$ input features:

$f(x_i) = \phi_0 + \sum_{j=1}^{d} \phi_j, \quad (1)$

where $\phi_0$ represents a baseline expectation and $\phi_j$ the weight assigned to $X_j$ at point $x_i$. Let $v: 2^d \mapsto \mathbb{R}$ be a value function such that $v(S)$ is the payoff associated with feature subset $S \subseteq [d]$ and $v(\emptyset) = 0$. Define the complement $R = [d] \setminus S$ such that we may rewrite any $x_i$ as a pair of subvectors, $(x_i^S, x_i^R)$. Payoffs are given by:

$v(S) = \mathbb{E}[f(x_i^S, X^R)], \quad (2)$

although this introduces some ambiguity regarding the reference distribution for $X^R$ (more on this below). The Shapley value $\phi_j$ is then $j$'s average marginal contribution to all subsets that exclude it:

$\phi_j = \sum_{S \subseteq [d] \setminus \{j\}} \frac{|S|!\,(d - |S| - 1)!}{d!} \left[ v(S \cup \{j\}) - v(S) \right]. \quad (3)$

It can be shown that this is the unique solution to the attribution problem that satisfies certain desirable properties, including efficiency, linearity, sensitivity, and symmetry.
Reformulating this in our framework, we find that the value function $v$ is a sufficiency measure. To see this, let each $z \sim \mathcal{D}$ be a sample in which a random subset of variables $S$ are held at their original values, while remaining features $R$ are drawn from a fixed distribution $\mathcal{D}(\cdot \mid S)$.¹

Proposition 1. Let $c_S(z) = 1$ iff $x$ was constructed by holding $x^S$ fixed and sampling $X^R$ according to $\mathcal{D}(\cdot \mid S)$. Then $v(S) = PS(c_S, y)$.
Thus, the Shapley value $\phi_j$ measures $X_j$'s average marginal increase to the sufficiency of a random feature subset. The advantage of our method is that, by focusing on particular subsets instead of weighting them all equally, we disregard irrelevant permutations and home in on just those that meet a $\tau$-minimality criterion. Kumar et al. [2020] observe that, "since there is no standard procedure for converting Shapley values into a statement about a model's behavior, developers rely on their own mental model of what the values represent" (p. 8). By contrast, necessary and sufficient factors are more transparent and informative, offering a direct path to what Shapley values indirectly summarize.

¹The diversity of Shapley value algorithms is largely due to variation in how this distribution is defined. Popular choices include the marginal $P(X^R)$ [Lundberg and Lee, 2017]; conditional $P(X^R \mid x^S)$ [Aas et al., 2019]; and interventional $P(X^R \mid do(x^S))$ [Heskes et al., 2020] distributions.
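To make Prop. 1 concrete, the sketch below computes the value function $v(S)$ as an empirical sufficiency score (holding the input fixed on $S$ and drawing the remaining features from a background sample, i.e. the marginal reference distribution) and assembles $\phi_j$ by the weighting in Eq. 3. The function names, the marginal choice of background, and the brute-force enumeration (exponential in $d$) are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial

def value(f, x_i, background, S, y):
    """v(S) as a sufficiency measure: P(f(x_i^S, X^R) = y) over the background.
    f maps an (n, d) array to n binary labels; x_i is a 1-D array; background is (n, d)."""
    hybrids = background.copy()
    cols = list(S)
    hybrids[:, cols] = x_i[cols]  # keep input values on S, resample the rest
    return float(np.mean(f(hybrids) == y))

def shapley(f, x_i, background, j, y):
    """phi_j via Eq. 3: the average marginal gain in sufficiency from adding feature j."""
    d = len(x_i)
    others = [k for k in range(d) if k != j]
    phi = 0.0
    for size in range(d):
        for S in combinations(others, size):
            w = factorial(size) * factorial(d - size - 1) / factorial(d)
            phi += w * (value(f, x_i, background, set(S) | {j}, y)
                        - value(f, x_i, background, S, y))
    return phi
```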
Rule lists. Rule lists are sequences of if-then statements
that describe a hyperrectangle in feature space, creating par-
titions that can be visualized as decision or regression trees.
Rule lists have long been popular in XAI. While early work
in this area tended to focus on global methods [Friedman
and Popescu, 2008; Letham et al., 2015], more recent efforts
have prioritized local explanation tasks [Lakkaraju et al.,
2019; Sokol and Flach, 2020].
We focus in particular on the Anchors algorithm [Ribeiro et al., 2018a], which learns a set of Boolean conditions $A$ (the eponymous "anchors") such that $A(x_i) = 1$ and

$P_{\mathcal{D}(x \mid A)}(f(x_i) = f(x)) \geq \tau. \quad (4)$

The lhs of Eq. 4 is termed the precision, $\text{prec}(A)$, and probability is taken over a synthetic distribution in which the conditions in $A$ hold while other features are perturbed. Once $\tau$ is fixed, the goal is to maximize coverage, formally defined as $\mathbb{E}[A(x) = 1]$, i.e. the proportion of datapoints to which the anchor applies.
The formal similarities between Eq. 4 and Def. 2 are imme-
diately apparent, and the authors themselves acknowledge
that Anchors are intended to provide “sufficient conditions”
for model predictions.
Proposition 2. Let $c_A(z) = 1$ iff $A(x) = 1$. Then $\text{prec}(A) = PS(c_A, y)$.
While Anchors outputs just a single explanation, our method
generates a ranked list of candidates, thereby offering a
more comprehensive view of model behavior. Moreover, our
necessity measure adds a mode of explanatory information
entirely lacking in Anchors.
Counterfactuals. Counterfactual explanations identify one or several nearest neighbors with different outcomes, e.g. all datapoints $x$ within an $\epsilon$-ball of $x_i$ such that labels $f(x)$ and $f(x_i)$ differ (for classification) or $f(x) > f(x_i) + \delta$ (for regression).² The optimization problem is:

$x^* = \operatorname*{argmin}_{x \in CF(x_i)} \text{cost}(x_i, x), \quad (5)$

where $CF(x_i)$ denotes a counterfactual space such that $f(x_i) \neq f(x^*)$ and cost is a user-supplied cost function, typically equated with some distance measure. [Wachter et al., 2018] recommend using generative adversarial networks to solve Eq. 5, while others have proposed alternatives designed to ensure that counterfactuals are coherent and actionable [Ustun et al., 2019; Karimi et al., 2020a; Wexler et al., 2020]. As with Shapley values, the variation in these proposals is reducible to the choice of context $\mathcal{D}$.

²Confusingly, the term "counterfactual" in XAI refers to any point with an alternative outcome, which is distinct from the causal sense of the term (see Sect. 2). We use the word in both senses here, but strive to make our intended meaning explicit in each case.
For counterfactuals, we rewrite the objective as a search for
minimal perturbations sufficient to flip an outcome.
Proposition 3. Let cost be a function representing $\preceq$, and let $c$ be some factor spanning reference values. Then the counterfactual recourse objective is:

$c^* = \operatorname*{argmin}_{c \in \mathcal{C}} \text{cost}(c) \quad \text{s.t.} \quad PS(c, 1 - y) \geq \tau, \quad (6)$

where $\tau$ denotes a decision threshold. Counterfactual outputs will then be any $z \sim \mathcal{D}$ such that $c^*(z) = 1$.
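The constrained search in Eq. 6 can be carried out directly when the context is an empirical sample. The sketch below is an illustrative brute-force version, assuming a finite candidate set and a user-supplied cost function; all names are ours.

```python
import numpy as np

def recourse(samples, f, factors, cost, y, tau):
    """Return the lowest-cost factor c with estimated PS(c, 1 - y) >= tau, or None."""
    best = None
    for c in factors:
        hits = [z for z in samples if c(z) == 1]
        if not hits:
            continue
        ps = float(np.mean([f(z) == 1 - y for z in hits]))
        if ps >= tau and (best is None or cost(c) < cost(best)):
            best = c
    return best
```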
Probabilities of causation. Our framework can describe Pearl [2000]'s aforementioned probabilities of causation, though in this case $\mathcal{D}$ must be constructed with care.

Proposition 4. Consider the bivariate Boolean setting, as in Sect. 2. We have two counterfactual distributions: an input space $\mathcal{I}$, in which we observe $x, y$ but intervene to set $X = x'$; and a reference space $\mathcal{R}$, in which we observe $x', y'$ but intervene to set $X = x$. Let $\mathcal{D}$ denote a uniform mixture over both spaces, and let auxiliary variable $W$ tag each sample with a label indicating whether it comes from the original ($W = 1$) or contrastive ($W = 0$) counterfactual space. Define $c(z) = w$. Then we have $\mathrm{suf}(x, y) = PS(c, y)$ and $\mathrm{nec}(x, y) = PS(1 - c, y')$.
In other words, we regard Pearl’s notion of necessity as suf-
ficiency of the negated factor for the alternative outcome .
By contrast, Pearl [2000] has no analogue for our proba-
bility of necessity. This is true of any measure that defines
sufficiency and necessity via inverse, rather than converse
probabilities. While conditioning on the same variable(s)
for both measures may have some intuitive appeal, it comes
at a cost to expressive power. Whereas our framework can
recover all four explanatory measures, corresponding to the
classical definitions and their contrapositive forms, defini-
tions that merely negate instead of transpose the antecedent
and consequent are limited to just two.
Remark 3. We have assumed that factors and outcomes
are Boolean throughout. Our results can be extended to
continuous versions of either or both variables, so long as
$c(Z) \perp\!\!\!\perp Y \mid Z$. This conditional independence holds whenever $W \perp\!\!\!\perp Y \mid X$, which is true by construction since $f(z) := f(x)$. However, we defend the Boolean assumption on the grounds that it is well motivated by contrastivist epistemologies [Kahneman and Miller, 1986; Lipton, 1990; Blaauw, 2013] and not especially restrictive, given that partitions of arbitrary complexity may be defined over $\mathcal{Z}$ and $\mathcal{Y}$.
Figure 2: Comparison of top $k$ features ranked by SHAP against the best performing LENS subset of size $k$ in terms of $PS(c, y)$. German results are over 50 inputs; SpamAssassins results are over 25 inputs.
5 EXPERIMENTS
In this section, we demonstrate the use of LENS on a va-
riety of tasks and compare results with popular XAI tools,
using the basis configurations detailed in Table 1. A com-
prehensive discussion of experimental design, including
datasets and pre-processing pipelines, is left to Appendix
C. Code for reproducing all results is available at https://github.com/limorigu/LENS.
Contexts. We consider a range of contexts $\mathcal{D}$ in our exper-
iments. For the input-to-reference (I2R) setting, we replace
input values with reference values for feature subsets S; for
the reference-to-input (R2I) setting, we replace reference
values with input values. We use R2I for examining suffi-
ciency/necessity of the original model prediction, and I2R
for examining sufficiency/necessity of a contrastive model
prediction. We sample from the empirical data in all exper-
iments, except in Sect. 5.3, where we assume access to a
structural causal model (SCM).
Partial Orderings. We consider two types of partial orderings in our experiments. The first, $\preceq_{\text{subset}}$, evaluates subset relationships. For instance, if $c(z) = \mathbb{1}[x[\text{gender} = \text{"female"}]]$ and $c'(z) = \mathbb{1}[x[\text{gender} = \text{"female"} \wedge \text{age} \geq 40]]$, then we say that $c \preceq_{\text{subset}} c'$. The second, $c \preceq_{\text{cost}} c' := c \preceq_{\text{subset}} c' \wedge \text{cost}(c) \leq \text{cost}(c')$, adds the additional constraint that $c$ has cost no greater than $c'$. The cost function could be arbitrary. Here, we consider distance measures over either the entire state space or just the intervention targets corresponding to $c$.
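The two orderings can be sketched in a few lines; here factors are represented by their set of intervention targets plus a user-supplied cost, a representation we choose for illustration only.

```python
# A minimal sketch of the two partial orderings used in the experiments.
def subset_leq(targets_a, targets_b):
    """a <=_subset b iff a's conditions are a subset of b's."""
    return set(targets_a) <= set(targets_b)

def cost_leq(targets_a, cost_a, targets_b, cost_b):
    """a <=_cost b iff a <=_subset b and a costs no more than b."""
    return subset_leq(targets_a, targets_b) and cost_a <= cost_b

# Example from the text: {gender} precedes {gender, age} under the subset
# ordering (and under the cost ordering if the smaller intervention is no costlier).
print(subset_leq({"gender"}, {"gender", "age"}))  # -> True
```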
5.1 FEATURE ATTRIBUTIONS
Feature attributions are often used to identify the top-$k$ most important features for a given model outcome [Barocas et al., 2020]. However, we argue that these feature sets may not be explanatory with respect to a given prediction. To show this, we compute R2I and I2R sufficiency – i.e., $PS(c, y)$ and $PS(1 - c, 1 - y)$, respectively – for the top-$k$ most influential features ($k \in [1, 9]$) as identified by SHAP [Lundberg and Lee, 2017] and LENS. Fig. 2 shows results from the R2I setting for German credit [Dua and Graff, 2017] and SpamAssassin datasets [SpamAssassin, 2006]. Our method attains higher $PS$ for all cardinalities. We repeat the experiment over 50 inputs, plotting means and 95% confidence intervals for all $k$. Results indicate that our ranking procedure delivers more informative explanations than SHAP at any fixed degree of sparsity. Results from the I2R setting are in Appendix C.

Table 1: Overview of experimental settings by basis configuration.

Experiment | Datasets | f | D | C | ⪯
Attribution comparison | German, SpamAssassins | Extra-Trees | R2I, I2R | Intervention targets | -
Anchors comparison: Brittle predictions | IMDB | LSTM | R2I, I2R | Intervention targets | subset
Anchors comparison: PS and Prec | German | Extra-Trees | R2I | Intervention targets | subset
Counterfactuals: Adversarial | SpamAssassins | MLP | R2I | Intervention targets | subset
Counterfactuals: Recourse, DiCE comparison | Adult | MLP | I2R | Full interventions | cost
Counterfactuals: Recourse, causal vs. non-causal | German | Extra-Trees | I2R-causal | Full interventions | cost
5.2 RULE LISTS
Sentiment sensitivity analysis. Next, we use LENS to
study model weaknesses by considering minimal factors
with high R2I and I2R sufficiency in text models. Our
goal is to answer questions of the form, “What are words
with/without which our model would output the origi-
nal/opposite prediction for an input sentence?” For this ex-
periment, we train an LSTM network on the IMDB dataset
for sentiment analysis [Maas et al., 2011]. If the model mis-
labels a sample, we investigate further; if it does not, we
inspect the most explanatory factors to learn more about
model behavior. For the purpose of this example, we only
inspect sentences of length 10 or shorter. We provide two
examples below and compare with Anchors (see Table 2).
Consider our first example: READ BOOK FORGET MOVIE is
a sentence we would expect to receive a negative prediction,
but our model classifies it as positive. Since we are inves-
tigating a positive prediction, our reference space is condi-
tioned on a negative label. For this model, the classic UNK
token receives a positive prediction. Thus we opt for an al-
ternative, PLATE . Performing interventions on all possible
combinations of words with our token, we find the conjunc-
tion of READ ,FORGET , and MOVIE is a sufficient factor for
a positive prediction (R2I). We also find that changing any
ofREAD ,FORGET , or MOVIE to PLATE would result in a
negative prediction (I2R). Anchors, on the other hand, per-
turbs the data stochastically (see Appendix C), suggesting
the conjunction READ AND BOOK . Next, we investigate
the sentence: YOU BETTER CHOOSE PAUL VERHOEVEN
EVEN WATCHED . Since the label here is negative, we use
theUNK token. We find that this prediction is brittle – a
change of almost any word would be sufficient to flip the
outcome. Anchors, on the other hand, reports a conjunction
including most words in the sentence. Taking the R2I view,
we still find a more concise explanation: CHOOSE orEVEN
would be enough to attain a negative prediction. These brief
examples illustrate how LENS may be used to find brittle
predictions across samples, search for similarities between
errors, or test for model reliance on sensitive attributes (e.g., gender pronouns).

Figure 3: We compare $PS(c, y)$ against precision scores attained by the output of LENS and Anchors for examples from German. We repeat the experiment for 100 inputs, and each time consider the single example generated by Anchors against the mean $PS(c, y)$ among LENS's candidates. Dotted line indicates $\tau = 0.9$.
Anchors comparison. Anchors also includes a tabular
variant, against which we compare LENS’s performance
in terms of R2I sufficiency. We present the results of this
comparison in Fig. 3, and include additional comparisons
in Appendix C. We sample 100 inputs from the German
dataset, and query both methods with $\tau = 0.9$ using the classifier from Sect. 5.1. Anchors satisfies a PAC bound controlled by parameter $\delta$. At the default value $\delta = 0.1$, Anchors fails to meet the threshold on 14% of samples;
LENS meets it on 100% of samples. This result accords
with Thm. 1, and vividly demonstrates the benefits of our
optimality guarantee. Note that we also go beyond Anchors
in providing multiple explanations instead of just a single
output, as well as a cumulative probability measure with no
analogue in their algorithm.
5.3 COUNTERFACTUALS
Adversarial examples: spam emails. R2I sufficiency an-
swers questions of the form, “What would be sufficient
for the model to predict y?”. This is particularly valuable
in cases with unfavorable outcomes y0. Inspired by adver-
sarial interpretability approaches [Ribeiro et al., 2018b;
Lakkaraju and Bastani, 2020], we train an MLP classifier
on the SpamAssassins dataset and search for minimal
factors sufficient to relabel a sample of spam emails as non-
spam. Our examples follow some patterns common to spam
emails: received from unusual email addresses, includes suspicious keywords such as ENLARGEMENT or ADVERTISEMENT in the subject line, etc. We identify minimal changes that will flip labels to non-spam with high probability. Options include altering the incoming email address to more common domains, and changing the subject or first sentences (see Table 3). These results can improve understanding of both a model's behavior and a dataset's properties.

Table 2: Example prediction given by an LSTM model trained on the IMDB dataset. We compare $\tau$-minimal factors identified by LENS (as individual words), based on $PS(c, y)$ and $PS(1 - c, 1 - y)$, and compare to output by Anchors.

Text | Original model prediction | Suggested anchors (Anchors) | Precision (Anchors) | Sufficient R2I factors (LENS) | Sufficient I2R factors (LENS)
'read book forget movie' | wrongly predicted positive | [read, movie] | 0.94 | [read, forget, movie] | read, forget, movie
'you better choose paul verhoeven even watched' | correctly predicted negative | [choose, better, even, you, paul, verhoeven] | 0.95 | choose, even | better, choose, paul, even

Table 3: (Top) A selection of emails from SpamAssassins, correctly identified as spam by an MLP. The goal is to find minimal perturbations that result in non-spam predictions. (Bottom) Minimal subsets of feature-value assignments that achieve non-spam predictions with respect to the emails above.

From | To | Subject | First Sentence | Last Sentence
resumevalet info resumevalet com | yyyy cv spamassassin taint org | adv put resume back work | dear candidate | professionals online network inc
jacqui devito goodroughy ananzi co za | picone linux midrange com | enlargement breakthrough zibdrzpay | recent survey conducted increase size enter details | to come open
rose xu email com | yyyyac idt net | adv harvest lots target email address quickly | want advertisement | persons 18yrs old

Gaming options | Feature subsets for value changes
1 | From: crispin cown crispin wirex com; To: example com mailing... list secprog securityfocus... moderator
2 | From: crispin cowan crispin wirex com; First Sentence: scott mackenzie wrote
3 | From: tim one comcast net; First Sentence: tim peters tim
Diverse counterfactuals. Our explanatory measures can
also be used to secure algorithmic recourse. For this experi-
ment, we benchmark against DiCE [Mothilal et al., 2020b],
which aims to provide diverse recourse options for any
underlying prediction model. We illustrate the differences
between our respective approaches on the Adult dataset
[Kochavi and Becker, 1996], using an MLP and following
the procedure from the original DiCE paper.
According to DiCE, a diverse set of counterfactuals is
one that differs in values assigned to features, and can
thus produce a counterfactual set that includes different
interventions on the same variables (e.g., CF1: age = 91, occupation = "retired"; CF2: age = 44, occupation = "teacher"). Instead, we look at diversity of counterfactuals
in terms of intervention targets , i.e. features changed (in
this case, from input to reference values) and their effects.
We present minimal cost interventions that would lead to re-
course for each feature set but we summarize the set of paths
to recourse via subsets of features changed. Thus, DiCE pro-
vides answers of the form “Because you are not 91 and re-
tired” or “Because you are not 44 and a teacher”; we answer
“Because of your age and occupation”, and present the low-
est cost intervention on these features sufficient to flip the
prediction.
With this intuition in mind, we compare outputs given by
DiCE and LENS for various inputs. For simplicity, we let
all features vary independently. We consider two metrics for
comparison: (a) the mean cost of proposed factors, and (b)
the number of minimally valid candidates proposed, where a factor $c$ from a method $M$ is minimally valid iff for all $c'$ proposed by $M'$, $\neg(c' \preceq_{\text{cost}} c)$ (i.e., $M'$ does not report a factor preferable to $c$). We report results based on 50 randomly sampled inputs from the Adult dataset, where references are fixed by conditioning on the opposite prediction. The cost comparison results are shown in Fig. 4, where we find that LENS identifies lower cost factors for the vast majority of inputs. Furthermore, DiCE finds no minimally valid candidates that LENS did not already account for. Thus LENS emphasizes minimality and diversity of intervention targets, while still identifying low cost intervention values.

Figure 4: A comparison of mean cost of outputs by LENS and DiCE for 50 inputs sampled from the Adult dataset.
Causal vs. non-causal recourse. When a user relies on
XAI methods to plan interventions on real-world systems,
causal relationships between predictors cannot be ignored.
In the following example, we consider the DAG in Fig. 5,
intended to represent dependencies in the German credit
dataset. For illustrative purposes, we assume access to the
structural equations of this data generating process. (There
are various ways to extend our approach using only partial
causal knowledge as input [Karimi et al., 2020b; Heskes
et al., 2020].) We construct $\mathcal{D}$ by sampling from the SCM
under a series of different possible interventions. Table 4
describes an example of how using our framework with
augmented causal knowledge can lead to different recourse
options. Computing explanations under the assumption of
feature independence results in factors that span a large
part of the DAG depicted in Fig. 5. However, encoding
structural relationships in D, we find that LENS assigns
high explanatory value to nodes that appear early in the
topological ordering. This is because intervening on a single
root factor may result in various downstream changes once
effects are fully propagated.

Table 4: Recourse example comparing causal and non-causal (i.e., feature independent) $\mathcal{D}$. We sample a single input example with a negative prediction, and 100 references with the opposite outcome. For I2R-causal we propagate the effects of interventions through a user-provided SCM.

Input: Age 23, Sex Male, Job Skilled, Housing Free, Savings Little, Checking Little, Credit 1845, Duration 45, Purpose Radio/TV

I2R $\tau$-minimal factors ($\tau$ = 0) | Cost
Job: Highly skilled | 1
Checking: NA | 1
Duration: 30 | 1.25
Age: 65, Housing: Own | 4.23
Age: 34, Savings: N/A | 1.84

I2R-causal $\tau$-minimal factors ($\tau$ = 0) | Cost
Age: 24 | 0.07
Sex: Female | 1
Job: Highly skilled | 1
Housing: Rent | 1
Savings: N/A | 1
Figure 5: Example DAG for the German dataset; nodes are Age, Sex, Job, Savings, Housing, Checking, Credit, Duration, and Purpose.
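To make the contrast with the feature-independent setting concrete, the following toy sketch shows how a context can be sampled from structural equations so that an intervention on a root node propagates downstream. The two-variable SCM is entirely hypothetical and is not the German credit model used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do=None):
    """Sample n points from a toy SCM age -> savings, honoring do-interventions."""
    do = do or {}
    age = np.full(n, do["age"]) if "age" in do else rng.integers(20, 70, n)
    # Savings is a child of age unless intervened on directly.
    savings = (np.full(n, do["savings"]) if "savings" in do
               else 100 * age + rng.normal(0, 500, n))
    return {"age": age, "savings": savings}

# Intervening on the root 'age' also shifts 'savings' once effects propagate,
# which is why root factors score highly under the causal context.
observational = sample_scm(1000)
interventional = sample_scm(1000, do={"age": 65})
print(observational["savings"].mean(), interventional["savings"].mean())
```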
6 DISCUSSION
Our results, both theoretical and empirical, rely on access to
the relevant context Dand the complete enumeration of all
feature subsets. Neither may be feasible in practice. When
elements of Zare estimated, as is the case with the genera-
tive methods sometimes used in XAI, modeling errors could
lead to suboptimal explanations. For high-dimensional set-
tings such as image classification, LENS cannot be naïvely
applied without substantial data pre-processing. The first is-
sue is extremely general. No method is immune to model
misspecification, and attempts to recreate a data generat-
ing process must always be handled with care. Empirical
sampling, which we rely on above, is a reasonable choice
when data are fairly abundant and representative. However,
generative models may be necessary to correct for known
biases or sample from low-density regions of the feature
space. This comes with a host of challenges that no XAI al-
gorithm alone can easily resolve. The second issue – that
a complete enumeration of all variable subsets is often im-
practical – we consider to be a feature, not a bug. Complex
explanations that cite many contributing factors pose cog-
nitive as well as computational challenges. In an influen-
tial review of XAI, Miller [2019] finds near unanimous con-
sensus among philosophers and social scientists that, “all
things being equal, simpler explanations – those that cite
fewer causes... are better explanations” (p. 25). Even if we
could list all -minimal factors for some very large value of
d, it is not clear that such explanations would be helpful to
humans, who famously struggle to hold more than seven ob-
jects in short-term memory at any given time [Miller, 1955].
That is why many popular XAI tools include some sparsity
constraint to encourage simpler outputs.
Rather than throw out some or most of our low-level fea-
tures, we prefer to consider a higher level of abstraction, where explanations are more meaningful to end users. For
instance, in our SpamAssassins experiments, we started
with a pure text example, which can be represented via
high-dimensional vectors (e.g., word embeddings). How-
ever, we represent the data with just a few intelligible com-
ponents: From andToemail addresses, Subject , etc. In
other words, we create a more abstract object and consider
each segment as a potential intervention target, i.e. a candi-
date factor. This effectively compresses a high-dimensional
dataset into a 10-dimensional abstraction. Similar strategies
could be used in many cases, either through domain knowl-
edge or data-driven clustering and dimensionality reduction
techniques [Chalupka et al., 2017; Beckers et al., 2019; Lo-
catello et al., 2019]. In general, if data cannot be represented
by a reasonably low-dimensional, intelligible abstraction,
then post-hoc XAI methods are unlikely to be of much help.
7 CONCLUSION
We have presented a unified framework for XAI that fore-
grounds necessity and sufficiency, which we argue are the
fundamental building blocks of all successful explanations.
We defined simple measures of both, and showed how they
undergird various XAI methods. Our formulation, which re-
lies on converse rather than inverse probabilities, is uniquely
flexible and expressive. It covers all four basic explanatory
measures – i.e., the classical definitions and their contra-
positive transformations – and unambiguously accommo-
dates logical, probabilistic, and/or causal interpretations, de-
pending on how one constructs the basis tuple B. We illus-
trated illuminating connections between our measures and
existing proposals in XAI, as well as Pearl [2000]’s proba-
bilities of causation. We introduced a sound and complete
algorithm for identifying minimally sufficient factors, and
demonstrated our method on a range of tasks and datasets.
Our approach prioritizes completeness over efficiency, making it suitable for settings of moderate dimensionality. Future research will explore more scalable approximations, model-specific variants optimized for, e.g., convolutional neural networks, and the development of a graphical user interface.
Acknowledgements
DSW was supported by ONR grant N62909-19-1-2096.

References
Kjersti Aas, Martin Jullum, and Anders Løland. Explain-
ing individual predictions when features are dependent:
More accurate approximations to Shapley values. arXiv
preprint, 1903.10464v2, 2019.
Solon Barocas, Andrew D Selbst, and Manish Raghavan.
The Hidden Assumptions behind Counterfactual Explana-
tions and Principal Reasons. In FAT* , pages 80–89, 2020.
Sander Beckers, Frederick Eberhardt, and Joseph Y Halpern.
Approximate causal abstraction. In UAI, pages 210–219,
2019.
Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian
Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir
Puri, José M F Moura, and Peter Eckersley. Explainable
machine learning in deployment. In FAT* , pages 648–
657, 2020.
Steven Bird, Ewan Klein, and Edward Loper. Natural lan-
guage processing with Python: Analyzing text with the
natural language toolkit . O’Reilly, 2009.
Martijn Blaauw, editor. Contrastivism in Philosophy . Rout-
ledge, New York, 2013.
Krzysztof Chalupka, Frederick Eberhardt, and Pietro Perona.
Causal feature learning: an overview. Behaviormetrika ,
44(1):137–164, 2017.
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen
Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel
Das. Explanations based on the missing: Towards con-
trastive explanations with pertinent negatives. In NeurIPS ,
pages 592–603, 2018.
Dheeru Dua and Casey Graff. UCI machine learning
repository, 2017. URL http://archive.ics.uci.edu/ml.
C. Fernández-Loría, F. Provost, and X. Han. Explaining
data-driven decisions made by AI systems: The counter-
factual approach. arXiv preprint, 2001.07417, 2020.
Jerome H Friedman and Bogdan E Popescu. Predictive
learning via rule ensembles. Ann. Appl. Stat. , 2(3):916–
954, 2008.
Sainyam Galhotra, Romila Pradhan, and Babak Salimi. Ex-
plaining black-box algorithms using probabilistic con-
trastive counterfactuals. In SIGMOD , 2021.
Pierre Geurts, Damien Ernst, and Louis Wehenkel. Ex-
tremely randomized trees. Mach. Learn. , 63(1):3–42,
2006.

Sachin Grover, Chiara Pulice, Gerardo I. Simari, and V. S.
Subrahmanian. Beef: Balanced english explanations of
forecasts. IEEE Trans. Comput. Soc. Syst. , 6(2):350–364,
2019.
Joseph Y Halpern. Actual Causality . The MIT Press, Cam-
bridge, MA, 2016.
Joseph Y Halpern and Judea Pearl. Causes and explanations:
A structural-model approach. Part I: Causes. Br. J. Philos.
Sci., 56(4):843–887, 2005a.
Joseph Y Halpern and Judea Pearl. Causes and explanations:
A structural-model approach. Part II: Explanations. Br. J.
Philos. Sci. , 56(4):889–911, 2005b.
Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom
Claassen. Causal Shapley values: Exploiting causal
knowledge to explain individual predictions of complex
models. In NeurIPS , 2020.
Alexey Ignatiev, Nina Narodytska, and Joao Marques-Silva.
Abduction-based explanations for machine learning mod-
els. In AAAI , pages 1511–1519, 2019.
Guido W Imbens and Donald B Rubin. Causal Inference
for Statistics, Social, and Biomedical Sciences: An Intro-
duction . Cambridge University Press, Cambridge, 2015.
Daniel Kahneman and Dale T. Miller. Norm theory: Com-
paring reality to its alternatives. Psychol. Rev. , 93(2):136–
153, 1986.
Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf,
and Isabel Valera. A survey of algorithmic recourse:
Definitions, formulations, solutions, and prospects. arXiv
preprint, 2010.04050, 2020a.
Amir-Hossein Karimi, Julius von Kügelgen, Bernhard
Schölkopf, and Isabel Valera. Algorithmic recourse under
imperfect causal knowledge: A probabilistic approach. In
NeurIPS , 2020b.
Diederik P. Kingma and Jimmy Ba. Adam: A method for
stochastic optimization. In The 3rd International Confer-
ence for Learning Representations , 2015.
Ronny Kochavi and Barry Becker. Adult income dataset,
1996. URL https://archive.ics.uci.edu/ml/datasets/adult.
Indra Kumar, Suresh Venkatasubramanian, Carlos Scheideg-
ger, and Sorelle Friedler. Problems with Shapley-value-
based explanations as feature importance measures. In
ICML , pages 5491–5500, 2020.
Himabindu Lakkaraju and Osbert Bastani. “How do I fool
you?”: Manipulating user trust via misleading black box
explanations. In AIES, pages 79–85, 2020.

Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure
Leskovec. Faithful and customizable explanations of
black box models. In AIES , pages 131–138, 2019.
E.L. Lehmann and Joseph P. Romano. Testing Statistical
Hypotheses . Springer, New York, Third edition, 2005.
Benjamin Letham, Cynthia Rudin, Tyler H McCormick, and
David Madigan. Interpretable classifiers using rules and
Bayesian analysis: Building a better stroke prediction
model. Ann. Appl. Stat. , 9(3):1350–1371, 2015.
David Lewis. Causation. J. Philos. , 70:556–567, 1973.
Peter Lipton. Contrastive explanation. Royal Inst. Philos.
Suppl. , 27:247–266, 1990.
Zachary Lipton. The mythos of model interpretability. Com-
mun. ACM , 61(10):36–43, 2018.
Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar
Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier
Bachem. Challenging common assumptions in the un-
supervised learning of disentangled representations. In
ICML , pages 4114–4124, 2019.
Scott M Lundberg and Su-In Lee. A unified approach to
interpreting model predictions. In NeurIPS , pages 4765–
4774. 2017.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan
Huang, Andrew Y . Ng, and Christopher Potts. Learning
word vectors for sentiment analysis. In ACL, pages 142–
150, 2011.
J.L. Mackie. Causes and conditions. Am. Philos. Q. , 2(4):
245–264, 1965.
Luke Merrick and Ankur Taly. The explanation game: Ex-
plaining machine learning models using shapley values.
InCD-MAKE , pages 17–38. Springer, 2020.
George A. Miller. The magical number seven, plus or minus
two: Some limits on our capacity for processing informa-
tion. Psychol. Rev. , 101(2):343–352, 1955.
Tim Miller. Explanation in artificial intelligence: Insights
from the social sciences. Artif. Intell. , 267:1–38, 2019.
Christoph Molnar. Interpretable Machine Learning: A
Guide for Making Black Box Models Interpretable .
Münich, 2021. URL https://christophm.github.io/interpretable-ml-book/.
Ramaravind K. Mothilal, Divyat Mahajan, Chenhao Tan,
and Amit Sharma. Towards unifying feature attribution
and counterfactual explanations: Different means to the
same end. arXiv preprint, 2011.04917, 2020a.

Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan.
Explaining machine learning classifiers through diverse
counterfactual explanations. In FAT* , pages 607–617,
2020b.
Nina Narodytska, Aditya Shrotri, Kuldeep S Meel, Alexey
Ignatiev, and Joao Marques-Silva. Assessing heuristic
machine learning explanations with model counting. In
SAT, pages 267–278, 2019.
Judea Pearl. Causality: Models, Reasoning, and Inference .
Cambridge University Press, New York, 2000.
Jeffrey Pennington, Richard Socher, and Christopher D Man-
ning. GloVe: Global vectors for word representation. In
EMNLP , pages 1532–1543, 2014.
Yanou Ramon, David Martens, Foster Provost, and
Theodoros Evgeniou. A comparison of instance-level
counterfactual explanation algorithms for behavioral and
textual data: SEDC, LIME-C and SHAP-C. Adv. Data
Anal. Classif. , 2020.
Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin.
Anchors: High-precision model-agnostic explanations. In
AAAI , pages 1527–1535, 2018a.
Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin.
Semantically equivalent adversarial rules for debugging
NLP models. In ACL, pages 856–865, 2018b.
Cynthia Rudin. Stop explaining black box machine learning
models for high stakes decisions and use interpretable
models instead. Nat. Mach. Intell. , 1(5):206–215, 2019.
Lloyd Shapley. A value for n-person games. In Contribu-
tions to the Theory of Games , chapter 17, pages 307–317.
Princeton University Press, Princeton, 1953.
Kacper Sokol and Peter Flach. LIMEtree: Interactively
customisable explanations based on local surrogate multi-
output regression trees. arXiv preprint, 2005.01427, 2020.
Apache SpamAssassin, 2006. URL https://spamassassin.apache.org/old/publiccorpus/. Accessed 2021.
John D Storey. The optimal discovery procedure: A new
approach to simultaneous significance testing. J. Royal
Stat. Soc. Ser. B Methodol. , 69(3):347–368, 2007.
Mukund Sundararajan and Amir Najmi. The many Shapley
values for model explanation. In ACM , New York, 2019.
Jin Tian and Judea Pearl. Probabilities of causation: Bounds
and identification. Ann. Math. Artif. Intell. , 28(1-4):287–
313, 2000.
Berk Ustun, Alexander Spangher, and Yang Liu. Actionable
recourse in linear classification. In FAT* , pages 10–19,
2019.

Tyler J VanderWeele and Thomas S Richardson. General
theory for interactions in sufficient cause models with
dichotomous exposures. Ann. Stat. , 40(4):2128–2161,
2012.
Tyler J VanderWeele and James M Robins. Empirical and
counterfactual conditions for sufficient cause interactions.
Biometrika , 95(1):49–61, 2008.
John von Neumann and Oskar Morgenstern. Theory of
Games and Economic Behavior . Princeton University
Press, Princeton, NJ, 1944.
Sandra Wachter, Brent Mittelstadt, and Chris Russell. Coun-
terfactual explanations without opening the black box:
Automated decisions and the GDPR. Harvard J. Law
Technol. , 31(2):841–887, 2018.
David S Watson and Luciano Floridi. The explanation game:
a formal framework for interpretable machine learning.
Synthese , 2020.
J. Wexler, M. Pushkarna, T. Bolukbasi, M. Wattenberg,
F. Viégas, and J. Wilson. The what-if tool: Interactive
probing of machine learning models. IEEE Trans. Vis.
Comput. Graph. , 26(1):56–65, 2020.
Xin Zhang, Armando Solar-Lezama, and Rishabh Singh. In-
terpreting neural network judgments via minimal, stable,
and symbolic corrections. In NeurIPS , page 4879–4890,
2018.
A PROOFS
A.1 THEOREMS
A.1.1 Proof of Theorem 1
Theorem. With oracle estimates $PS(c, y)$ for all $c \in \mathcal{C}$, Alg. 1 is sound and complete.

Proof. Soundness and completeness follow directly from the specification of (P1) $\mathcal{C}$ and (P2) $\preceq$ in the algorithm's input $B$, along with (P3) access to oracle estimates $PS(c, y)$ for all $c \in \mathcal{C}$. Recall that the partial ordering must be complete and transitive, as noted in Sect. 3.

Assume that Alg. 1 generates a false positive, i.e. outputs some $c$ that is not $\tau$-minimal. Then by Def. 4, either the algorithm failed to properly evaluate $PS(c, y)$, thereby violating (P3); or failed to identify some $c'$ such that (i) $PS(c', y) \geq \tau$ and (ii) $c' \prec c$. (i) is impossible by (P3), and (ii) is impossible by (P2). Thus there can be no false positives.

Assume that Alg. 1 generates a false negative, i.e. fails to output some $c$ that is in fact $\tau$-minimal. By (P1), this $c$ cannot exist outside the finite set $\mathcal{C}$. Therefore there must be some $c \in \mathcal{C}$ for which either the algorithm failed to properly evaluate $PS(c, y)$, thereby violating (P3); or wrongly identified some $c'$ such that (i) $PS(c', y) \geq \tau$ and (ii) $c' \prec c$. Once again, (i) is impossible by (P3), and (ii) is impossible by (P2). Thus there can be no false negatives.
A.1.2 Proof of Theorem 2
Theorem. With sample estimates $\widehat{PS}(c, y)$ for all $c \in \mathcal{C}$, Alg. 1 is uniformly most powerful.

Proof. A testing procedure is uniformly most powerful (UMP) if it attains the lowest type II error of all tests with fixed type I error $\alpha$. Let $\Theta_0, \Theta_1$ denote a partition of the parameter space into null and alternative regions, respectively. The goal in frequentist inference is to test the null hypothesis $H_0: \theta \in \Theta_0$ against the alternative $H_1: \theta \in \Theta_1$ for some parameter $\theta$. Let $\psi(X)$ be a testing procedure of the form $\mathbb{1}[T(X) \geq c_\alpha]$, where $X$ is a finite sample, $T(X)$ is a test statistic, and $c_\alpha$ is the critical value. This latter parameter defines a rejection region such that test statistics integrate to $\alpha$ under $H_0$. We say that $\psi(X)$ is UMP iff, for any other test $\psi'(X)$ such that

$\sup_{\theta \in \Theta_0} \mathbb{E}_\theta[\psi'(X)] \leq \alpha,$

we have

$(\forall \theta \in \Theta_1)\; \mathbb{E}_\theta[\psi'(X)] \leq \mathbb{E}_\theta[\psi(X)],$

where $\mathbb{E}_{\theta \in \Theta_1}[\psi(X)]$ denotes the power of the test to detect the true $\theta$, $1 - \beta(\theta)$. The UMP-optimality of Alg. 1 follows from the UMP-optimality of the binomial test (see [Lehmann and Romano, 2005, Ch. 3]), which is used to decide between $H_0: PS(c, y) < \tau$ and $H_1: PS(c, y) \geq \tau$ on the basis of observed proportions $\widehat{PS}(c, y)$, estimated from $n$ samples for all $c \in \mathcal{C}$. The proof now takes the same structure as that of Thm. 1, with (P3) replaced by (P3'): access to UMP estimates of $PS(c, y)$. False positives are no longer impossible but bounded at level $\alpha$; false negatives are no longer impossible but occur with frequency $\beta$. Because no procedure can find more $\tau$-minimal factors for any fixed $\alpha$, Alg. 1 is UMP.
A.2 PROPOSITIONS
A.2.1 Proof of Proposition 1
Proposition. Let c_S(z) = 1 iff x_z was constructed by holding x_S fixed and sampling X_R according to D(x | S). Then v(S) = PS(c_S, y).

As noted in the text, D(x | S) may be defined in a variety of ways (e.g., via marginal, conditional, or interventional distributions). For any given choice, let c_S(z) = 1 iff x is constructed by holding x^i_S fixed and sampling X_R according to D(x | S). Since we assume binary Y (or binarized, as discussed in Sect. 3), we can rewrite Eq. 2 as a probability:

    v(S) = P_{D(x|S)}(f(x^i) = f(x)),

where x^i denotes the input point. Since conditional sampling is equivalent to conditioning after sampling, this value function is equivalent to PS(c_S, y) by Def. 2.
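As a concrete illustration, the value function can be estimated by Monte Carlo exactly as described, here with a marginal (background-data) choice of D(x | S); f is assumed to be a predict-style function returning class labels, and all names are ours.

import numpy as np

def value_of_coalition(f, x_input, S, X_background, y, n_samples=1000, seed=0):
    """Estimate v(S) = PS(c_S, y): hold the input's features x^i_S fixed,
    draw the remaining features X_R from a background sample (one choice of
    D(x | S)), and record how often the prediction still equals y = f(x^i)."""
    rng = np.random.default_rng(seed)
    R = [j for j in range(len(x_input)) if j not in S]
    hits = 0
    for _ in range(n_samples):
        z = x_input.copy()
        donor = X_background[rng.integers(len(X_background))]
        z[R] = donor[R]                      # resample X_R; x_S keeps its input values
        hits += int(f(z.reshape(1, -1))[0] == y)
    return hits / n_samples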
A.2.2 Proof of Proposition 2
Proposition. Let c_A(z) = 1 iff A(x) = 1. Then prec(A) = PS(c_A, y).

The proof for this proposition is essentially identical, except in this case our conditioning event is A(x) = 1. Let c_A = 1 iff A(x) = 1. Precision prec(A), given by the lhs of Eq. 3, is defined over a conditional distribution D(x | A). Since conditional sampling is equivalent to conditioning after sampling, this probability reduces to PS(c_A, y).
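The corresponding estimate for a rule A is simply a conditional frequency over samples from D, as in this sketch (conditioning on A(x) = 1 is approximated by filtering sampled points; the names are ours).

import numpy as np

def rule_precision(f, A, Z, y):
    """Estimate prec(A) = PS(c_A, y): among sampled points z with A(z) = 1,
    the fraction whose prediction f(z) equals the target class y."""
    covered = np.array([bool(A(z)) for z in Z])
    if not covered.any():
        return float("nan")          # rule covers no sampled points
    preds = np.array([f(z) for z in Z])
    return float(np.mean(preds[covered] == y))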
A.2.3 Proof of Proposition 3
Proposition. Let cost be a function representing ⪯, and let c be some factor spanning reference values. Then the counterfactual recourse objective is:

    c* = argmin_{c ∈ C} cost(c)   s.t.   PS(c, 1 − y) ≥ τ,     (7)

where τ denotes a decision threshold. Counterfactual outputs will then be any z ∼ D such that c*(z) = 1.

There are two closely related ways of expressing the counterfactual objective: as a search for optimal points, or for optimal actions. We start with the latter interpretation, reframing actions as factors. We are only interested in solutions that flip the original outcome, and so we constrain the search to factors that meet an I2R sufficiency threshold, PS(c, 1 − y) ≥ τ. Then the optimal action is attained by whatever factor (i) meets the sufficiency criterion and (ii) minimizes cost. Call this factor c*. The optimal point is then any z such that c*(z) = 1.
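Over a finite candidate set, Eq. 7 admits a direct brute-force reading, sketched below with the same hypothetical ps_oracle and a user-supplied cost function; this is not DiCE's optimizer or the paper's exact procedure.

def optimal_recourse(C, ps_oracle, cost, y, tau):
    """Solve c* = argmin_{c in C} cost(c) subject to PS(c, 1 - y) >= tau.

    Returns the cheapest factor whose I2R sufficiency for flipping the
    binary outcome reaches the threshold tau, or None if no candidate
    qualifies.
    """
    feasible = [c for c in C if ps_oracle(c, 1 - y) >= tau]
    return min(feasible, key=cost) if feasible else None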
A.2.4 Proof of Proposition 4
Proposition. Consider the bivariate Boolean setting, as in Sect. 2. We have two counterfactual distributions: an input space I, in which we observe x, y but intervene to set X = x′; and a reference space R, in which we observe x′, y′ but intervene to set X = x. Let D denote a uniform mixture over both spaces, and let auxiliary variable W tag each sample with a label indicating whether it comes from the original (W = 1) or contrastive (W = 0) counterfactual space. Define c(z) = w. Then we have suf(x, y) = PS(c, y) and nec(x, y) = PS(1 − c, y′).

Recall from Sect. 2 that Pearl [2000, Ch. 9] defines suf(x, y) := P(y_x | x′, y′) and nec(x, y) := P(y′_{x′} | x, y). We may rewrite the former as P_R(y), where the reference space R denotes a counterfactual distribution conditioned on x′, y′, do(x). Similarly, we may rewrite the latter as P_I(y′), where the input space I denotes a counterfactual distribution conditioned on x, y, do(x′). Our context D is a uniform mixture over both spaces.

The key point here is that the auxiliary variable W indicates whether samples are drawn from I or R. Thus conditioning on different values of W allows us to toggle between probabilities over the two spaces. Therefore, for c(z) = w, we have suf(x, y) = PS(c, y) and nec(x, y) = PS(1 − c, y′).
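Given draws from the mixture D tagged by W, both quantities reduce to conditional frequencies; the array names below are ours (y_samples holds the outcome realized in each counterfactual draw), so this is a sketch rather than the paper's code.

import numpy as np

def suf_nec_from_mixture(w, y_samples, y, y_prime):
    """With c(z) = w: suf(x, y) = PS(c, y) = P(Y = y | W = 1) and
    nec(x, y) = PS(1 - c, y') = P(Y = y' | W = 0), estimated from samples
    of the uniform mixture over the two counterfactual spaces."""
    w = np.asarray(w, dtype=bool)
    y_samples = np.asarray(y_samples)
    suf = float(np.mean(y_samples[w] == y))
    nec = float(np.mean(y_samples[~w] == y_prime))
    return suf, nec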
B ADDITIONAL DISCUSSIONS OF METHOD
B.1 τ-MINIMALITY AND NECESSITY
As a follow-up to Remark 2 in Sect. 3.2, we expand here upon the relationship between τ and cumulative probabilities of necessity, which is similar to a precision-recall curve quantifying and qualifying errors in classification tasks. In this case, as we lower τ, we allow more factors to be taken into account, thus covering more pathways towards a desired outcome in a cumulative sense. We provide an example of such a precision-recall curve in Fig. 6, using an R2I view of the German credit dataset. Different levels of cumulative necessity may be warranted for different tasks, depending on how important it is to survey multiple paths towards an outcome. Users can therefore adjust τ to accommodate desired levels of cumulative PN over successive calls to LENS.
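As a sketch of how such a curve can be traced, one can sweep τ, rerun the search at each level, and score the necessity of the union of the returned factors. Scoring that union via PS(1 − c, y′), in line with Proposition 4, is our assumed operationalization of cumulative PN rather than a definition taken from the text, and all names below are illustrative.

def cumulative_pn_curve(taus, run_search, ps_oracle, y_prime):
    """For each threshold tau, collect the tau-minimal factors returned by
    run_search(tau) and score the necessity of their disjunction as
    PS(1 - c_union, y')."""
    curve = []
    for tau in sorted(taus, reverse=True):
        factors = run_search(tau)
        def c_union(z, fs=tuple(factors)):
            return int(any(c(z) for c in fs))
        def not_union(z, cu=c_union):
            return 1 - cu(z)
        curve.append((tau, ps_oracle(not_union, y_prime)))
    return curve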
Figure 6: An example curve illustrating the relationship between τ and the cumulative probability of necessity attained by the selected τ-minimal factors.

C ADDITIONAL DISCUSSIONS OF EXPERIMENTAL RESULTS
C.1 DATA PRE-PROCESSING AND MODEL
TRAINING
German Credit Risk. We first download the dataset from Kaggle,3 which is a slight modification of the UCI version [Dua and Graff, 2017]. We follow the pre-processing steps from a Kaggle tutorial.4 In particular, we map the categorical string variables in the dataset (Savings, Checking, Sex, Housing, Purpose, and the outcome Risk) to numeric encodings, and mean-impute missing values for Savings and Checking. We then train an Extra-Tree classifier [Geurts et al., 2006] using scikit-learn, with random state 0 and max depth 15. All other hyperparameters are left to their default values. The model achieves 71% accuracy.
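For concreteness, the pre-processing and model fit described above might look as follows; the column names, the CSV filename, and the choice of scikit-learn's ExtraTreesClassifier are our assumptions (the text says only "Extra-Tree classifier"), so treat this as a sketch rather than the exact pipeline.

import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier  # assumed reading of "Extra-Tree classifier"

df = pd.read_csv("german_credit_data.csv")          # hypothetical filename

# Map categorical string columns to numeric codes (pandas encodes NaN as -1).
cat_cols = ["Savings", "Checking", "Sex", "Housing", "Purpose", "Risk"]   # assumed column names
for col in cat_cols:
    df[col] = df[col].astype("category").cat.codes.astype(float).replace(-1, np.nan)

# Mean-impute the missing values in Savings and Checking.
for col in ["Savings", "Checking"]:
    df[col] = df[col].fillna(df[col].mean())

X, y = df.drop(columns=["Risk"]), df["Risk"]
clf = ExtraTreesClassifier(random_state=0, max_depth=15).fit(X, y)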
German Credit Risk - Causal. We assume a partial order-
ing over the features in the dataset, as described in Fig. 5.
We use this DAG to fit a structural causal model (SCM)
based on the original data. In particular, we fit linear regres-
sions for every continuous variable and a random forest clas-
sifier for every categorical variable. When sampling from
D, we let variables remain at their original values unless ei-
ther (a) they are directly intervened on, or (b) one of their
ancestors was intervened on. In the latter case, changes are
propagated via the structural equations. We add stochastic-
ity via Gaussian noise for continuous outcomes, with vari-
ance given by each model’s residual mean squared error.
For categorical variables, we perform multinomial sampling
over predicted class probabilities. We use the same model f as in the non-causal German credit risk setup described above.
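A minimal sketch of the interventional sampling step described above, assuming a topologically ordered node list, a dict of fitted structural models (linear regressors for continuous nodes, classifiers exposing predict_proba for categorical ones), and a dict of directly intervened values; all names are illustrative and this is not the paper's code.

import numpy as np

def sample_from_scm(row, order, parents, models, interventions, noise_scale, rng):
    """Propagate interventions through a fitted SCM for one instance.

    row: dict of observed values; order: nodes in topological order;
    parents[v]: list of v's parents; models[v]: fitted regressor/classifier;
    interventions: dict of do()-values; noise_scale[v]: residual RMSE for
    continuous nodes, None for categorical nodes.
    Variables keep their original values unless directly intervened on or
    downstream of an intervention, in which case the structural equations
    (plus Gaussian noise or multinomial sampling) regenerate them.
    """
    out = dict(row)
    affected = set(interventions)
    for v in order:
        if v in interventions:
            out[v] = interventions[v]                 # (a) directly intervened on
        elif affected & set(parents[v]):              # (b) some ancestor was intervened on
            X = np.array([[out[p] for p in parents[v]]])
            if noise_scale[v] is not None:            # continuous: regression + Gaussian noise
                out[v] = models[v].predict(X)[0] + rng.normal(0.0, noise_scale[v])
            else:                                     # categorical: multinomial over class probs
                probs = models[v].predict_proba(X)[0]
                out[v] = rng.choice(models[v].classes_, p=probs)
            affected.add(v)
    return out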
SpamAssassins. The original spam assassins dataset comes in the form of raw, multi-sentence emails collected by the Apache SpamAssassins project, 2003-2015.5 We segmented the emails into the following “features”: From is the sender; To is the recipient; Subject is the email's subject line; Urls records any URLs found in the body; Emails denotes any email addresses found in the body; First Sentence, Second Sentence, Penult Sentence, and Last Sentence refer to the first, second, penultimate, and final sentences of the email, respectively. We use the original outcome label from the dataset (indicated by which folder the different emails were saved to).
3 See https://www.kaggle.com/kabure/german-credit-data-with-risk?select=german_credit_data.csv.
4 See https://www.kaggle.com/vigneshj6/german-credit-data-analysis-python.
5 See https://spamassassin.apache.org/old/credits.html.

Once we obtain a dataset in the form above, we continue to pre-process by lower-casing all characters, only keeping words or digits, clearing most punctuation (except for ‘-’ and ‘_’), and removing stopwords based on nltk's provided list [Bird et al., 2009]. Finally, we convert all clean strings to their mean 50-dim GloVe vector representation [Pennington et al., 2014]. We train a standard MLP classifier using scikit-learn, with random state 1, max iteration 300, and all other hyperparameters set to their default values.6 This model attains an accuracy of 98.3%.
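A sketch of the featurization and classifier described above, assuming GloVe vectors have already been loaded into a dict glove mapping tokens to 50-dimensional numpy arrays; the helper names are ours.

import numpy as np
from sklearn.neural_network import MLPClassifier

def mean_glove(text, glove, dim=50):
    """Average the GloVe vectors of the (already cleaned) tokens in a string."""
    vecs = [glove[tok] for tok in text.split() if tok in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def featurize_email(fields, glove):
    """Embed each segmented field (From, To, Subject, ...) separately and concatenate."""
    return np.concatenate([mean_glove(field, glove) for field in fields])

# X = np.stack([featurize_email(fields, glove) for fields in emails]); y = labels
# clf = MLPClassifier(random_state=1, max_iter=300).fit(X, y)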
IMDB. We follow the pre-processing and modeling steps taken in a standard tutorial on LSTM training for sentiment prediction with the IMDB dataset.7 The CSV is included in the repository named above, and can additionally be downloaded from Kaggle or ai.stanford.8 In particular, these steps include removal of HTML tags, non-alphabetical characters, and stopwords based on the list provided in the nltk package, as well as changing all alphabetical characters to lower-case. We then train a standard LSTM model, with 32 as the embedding dimension and 64 as the dimensionality of the output space of the LSTM layer, and an additional dense layer with output size 1. We use the sigmoid activation function, binary cross-entropy loss, and optimize with Adam [Kingma and Ba, 2015]. All other hyperparameters are set to their default values as specified by Keras.9 The model achieves an accuracy of 87.03%.
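A sketch of the architecture described above in Keras; the vocabulary size is hypothetical, as it is not reported in the text, and tokenization/padding are omitted.

from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000                               # hypothetical; depends on the tokenizer used

model = keras.Sequential([
    layers.Embedding(vocab_size, 32),            # embedding dimension 32
    layers.LSTM(64),                             # LSTM layer with output dimensionality 64
    layers.Dense(1, activation="sigmoid"),       # single sigmoid output
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val))  # data preparation omitted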
Adult Income. We obtain the adult income dataset via DiCE's implementation10 and follow Haojun Zhu's pre-processing steps.11 For our recourse comparison, we use a pretrained MLP model provided by the authors of DiCE, which is a single-layer, non-linear model trained with TensorFlow and stored in their repository as ‘adult.h5’.
C.2 TASKS
Comparison with attributions. For completeness, we also include here a comparison of cumulative attribution scores per cardinality with probabilities of sufficiency for the I2R view (see Fig. 7).
Sentiment sensitivity analysis. We identify sentences in
the original IMDB dataset that are up to 10 words long. Out
of those, for the first example we only look at wrongly pre-
dicted sentences to identify a suitable example.
6 See https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html.
7 See https://github.com/hansmichaels/sentiment-analysis-IMDB-Review-using-LSTM/blob/master/sentiment_analysis.py.ipynb.
8 See https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews or http://ai.stanford.edu/~amaas/data/sentiment/.
9 See https://keras.io.
10 See https://github.com/interpretml/DiCE.
11 See https://rpubs.com/H_Zhu/235617.

Table 5: Recourse options for a single input given by DiCE and our method. We report targets of interventions as suggested options, but these could correspond to different values of interventions. Our method tends to propose more minimal and diverse intervention targets. Note that every one of DiCE's outputs is a superset of at least one of LENS's two top suggestions, and due to τ-minimality LENS is forced to pick the next factors to be non-supersets of the two top rows. This explains the higher cost of LENS's bottom three rows.

Input: Age 42, Wrkcls Govt., Edu. HS-grad, Marital Single, Occp. Service, Race White, Sex Male, Hrs/week 40

DiCE output: targets of intervention (cost)          LENS output: targets of intervention (cost)
Age, Edu., Marital, Hrs/week (8.13)                  Edu. (1)
Age, Edu., Marital, Occp., Sex, Hrs/week (5.866)     Marital (1)
Age, Wrkcls, Edu., Marital, Hrs/week (5.36)          Occp., Hrs/week (19.3)
Age, Edu., Occp., Hrs/week (3.2)                     Wrkcls, Occp., Hrs/week (12.6)
Edu., Hrs/week (11.6)                                Age, Wrkcls, Occp., Hrs/week (12.2)
Figure 7: Comparison of degrees of sufficiency in the I2R setting, for the top k features based on SHAP scores, against the best performing subset of cardinality k identified by our method. Results for German are averaged over 50 inputs; results for SpamAssassins are averaged over 25 inputs.
For the other example, we simply consider a random sentence among those of maximum length (10 words). We noted that Anchors uses stochastic word-level perturbations in this setting. This leads it to identify explanations of higher cardinality for some sentences, which include elements that are not strictly necessary. In other words, its outputs are not minimal, as required for descriptions of “actual causes” [Halpern and Pearl, 2005a; Halpern, 2016].
Comparison with Anchors. To complete the picture of
our comparison with Anchors on the German Credit Risk
dataset, we provide here additional results. In the main text,
we included a comparison of Anchors’s single output preci-
sion against the mean degree of sufficiency attained by our
multiple suggestions per input. We sample 100 different in-
puts from the German Credit dataset and repeat this same
comparison. Here we additionally consider the minimum and maximum PS(c, y) attained by LENS against Anchors. Note that even when considering the minimum-PS suggestions by LENS, i.e. our worst output, the method shows more consistent performance. We qualify this discussion by noting that Anchors may generate results comparable to our own by setting its τ hyperparameter to a lower value. However, Ribeiro et al. [2018a] do not discuss this parameter in detail in either their original article or subsequent notebook guides. They use default settings in their own experiments, and we expect most practitioners will do the same.
Figure 8: We compare the degree of sufficiency against precision scores attained by the outputs of LENS and Anchors for examples from German. We repeat the experiment for 100 sampled inputs, and each time consider the single output by Anchors against the min (left) and max (right) PS(c, y) among LENS's multiple candidates. The dotted line indicates τ = 0.9, the threshold we chose for this experiment.
Recourse: DiCE comparison. First, we provide a single illustrative example of the lack of diversity in intervention
targets we identify in DiCE’s output. Let us consider one
example, shown in Table 5. While DiCE outputs are diverse
in terms of values and target combinations, they tend to
have great overlap in intervention targets. For instance, Age
and Education appear in almost all of them. Our method, by contrast, focuses on minimal paths to recourse that involve different combinations of features.
Figure 9: We show results over 50 input points sampled
from the original dataset, and all possible references of the
opposite class, across two metrics: the min cost (left) of
counterfactuals suggested by our method vs. DiCE, and the
max cost (right) of counterfactuals.
Next, we also provide additional results from our cost comparison with DiCE's output in Fig. 9. While in the main text we include a comparison of our mean cost output against DiCE's, here we additionally include a comparison of the min and max cost of the methods' respective outputs. We see that even when considering minimum and maximum cost, our method tends to suggest lower cost recourse options. In particular, note that all of DiCE's outputs are supersets of LENS's two top suggestions. The higher costs incurred by LENS for the next two lines are a reflection of this fact: due to τ-minimality, LENS is forced to find other interventions that are not supersets of options already listed above.