arXiv:2106.12269v1 [cs.AI] 23 Jun 2021

Improved Acyclicity Reasoning for Bayesian Network Structure Learning with Constraint Programming

Fulya Trösser¹*, Simon de Givry¹ and George Katsirelos²

¹Université de Toulouse, INRAE, UR MIAT, F-31320, Castanet-Tolosan, France
²UMR MIA-Paris, INRAE, AgroParisTech, Univ. Paris-Saclay, 75005 Paris, France
{fulya.ural, simon.de-givry}@inrae.fr, [email protected]
|
Abstract

Bayesian networks are probabilistic graphical models with a wide range of application areas, including gene regulatory network inference, risk analysis and image processing. Learning the structure of a Bayesian network (BNSL) from discrete data is known to be an NP-hard task with a superexponential search space of directed acyclic graphs. In this work, we propose a new polynomial-time algorithm for discovering a subset of all possible cluster cuts, a greedy algorithm for approximately solving the resulting linear program, and a generalised arc consistency algorithm for the acyclicity constraint. We embed these in the constraint programming-based branch-and-bound solver CPBayes and show that, despite being suboptimal, they improve performance by orders of magnitude. The resulting solver also compares favourably with GOBNILP, a state-of-the-art solver for the BNSL problem, which solves an NP-hard problem to discover each cut and solves the linear program exactly.
|
1 Introduction

Towards the goal of explainable AI, Bayesian networks offer a rich framework for probabilistic reasoning. Bayesian Network Structure Learning (BNSL) from discrete observations corresponds to finding a compact model which best explains the data. It defines an NP-hard problem with a superexponential search space of directed acyclic graphs (DAGs). Several constraint-based (exploiting local conditional independence tests) and score-based (exploiting a global objective formulation) BNSL methods have been developed in the past.

Complete methods for score-based BNSL include dynamic programming [Silander and Myllymäki, 2006], heuristic search [Yuan and Malone, 2013; Fan and Yuan, 2015], maximum satisfiability [Berg et al., 2014], branch-and-cut [Bartlett and Cussens, 2017] and constraint programming [van Beek and Hoffmann, 2015]. Here, we focus on the latter two.
|
GOBNILP [Bartlett and Cussens, 2017] is a state-of-the-art solver for BNSL. It implements branch-and-cut in an integer linear programming (ILP) solver. At each node of the branch-and-bound tree, it generates cuts that improve the linear relaxation. A major class of cuts generated by GOBNILP are cluster cuts, which identify sets of parent sets that cannot be used together in an acyclic graph. In order to find cluster cuts, GOBNILP solves an NP-hard subproblem created from the current optimal solution of the linear relaxation.

*Contact Author
|
CPBayes [van Beek and Hoffmann, 2015] is a constraint programming (CP) based method for BNSL. It uses a CP model that exploits symmetry and dominance relations present in the problem, subproblem caching, and a pattern database to compute lower bounds, adapted from heuristic search [Fan and Yuan, 2015]. van Beek and Hoffmann showed that CPBayes is competitive with GOBNILP on many instances. In contrast to GOBNILP, the inference mechanisms of CPBayes are very lightweight, which allows it to explore many orders of magnitude more nodes per time unit, even accounting for the fact that computing the pattern databases before search can sometimes consume considerable time. On the other hand, the lightweight pattern-based bounding mechanism can take into consideration only limited information about the current state of the search. Specifically, it can take into account the current total ordering implied by the DAG under construction, but no information that has been derived about the potential parent sets of each vertex, i.e., the current domains of the parent set variables.
|
In this work, we derive a lower bound that is computationally cheaper than that computed by GOBNILP. We give in Section 3 a polynomial-time algorithm that discovers a class of cluster cuts that provably improve the linear relaxation. In Section 4, we give a greedy algorithm for solving the linear relaxation, inspired by similar algorithms for MaxSAT and Weighted Constraint Satisfaction Problems (WCSPs). Finally, in Section 5 we give an algorithm that enforces generalised arc consistency on the acyclicity constraint, based on previous work by van Beek and Hoffmann, but with improved complexity and practical performance. In Section 6, we show that our implementation of these techniques in CPBayes leads to significantly improved performance, both in the size of the search tree explored and in runtime.
|
2 Preliminaries

We give here only minimal background on (integer) linear programming and constraint programming, and refer the reader to the existing literature [Papadimitriou and Steiglitz, 1998; Rossi et al., 2006] for more.
|
Constraint Programming

A constraint satisfaction problem (CSP) is a tuple ⟨V, D, C⟩, where V is a set of variables, D is a function mapping variables to domains and C is a set of constraints. An assignment A to V′ ⊆ V is a mapping from each v ∈ V′ to D(v). A complete assignment is an assignment to V. If an assignment maps v to a, we say it assigns v = a. A constraint is a pair ⟨S, P⟩, where S ⊆ V is the scope of the constraint and P is a predicate over ∏_{v∈S} D(v) which accepts assignments to S that satisfy the constraint. For an assignment A to S′ ⊇ S, let A|_S be the restriction of A to S. We say that A satisfies c = ⟨S, P⟩ if A|_S satisfies c. A problem is satisfied by A if A satisfies all constraints.

For a constraint c = ⟨S, P⟩ and for v ∈ S, a ∈ D(v), v = a is generalized arc consistent (GAC) for c if there exists an assignment A that assigns v = a and satisfies c. If for all v ∈ S, a ∈ D(v), v = a is GAC for c, then c is GAC. If all constraints are GAC, the problem is GAC. A constraint is associated with an algorithm f_c, called the propagator for c, which removes (or prunes) from the domains of variables in S those values that are not GAC.
|
CSPs are typically solved by backtracking search, using propagators to reduce domains at each node and avoid parts of the search tree that are proved to contain no solutions. Although CSPs are decision problems, the technology can be used to solve optimization problems like BNSL by, for example, using branch-and-bound and embedding the bounding part in a propagator. This is the approach used by CPBayes.
|
Integer Linear Programming

A linear program (LP) is the problem of finding

    min { cᵀx | x ∈ ℝⁿ ∧ Ax ≥ b ∧ x ≥ 0 }

where c and b are vectors, A is a matrix, and x is a vector of variables. A feasible solution of this problem is one that satisfies x ∈ ℝⁿ ∧ Ax ≥ b ∧ x ≥ 0, and an optimal solution is a feasible one that minimizes the objective function cᵀx. This can be found in polynomial time. A row A_i corresponds to an individual linear constraint and a column Aᵀ_j to a variable.

The dual of a linear program P in the above form is another linear program D:

    max { bᵀy | y ∈ ℝᵐ ∧ Aᵀy ≤ c ∧ y ≥ 0 }

where A, b, c are as before and y is the vector of dual variables. Rows of the dual correspond to variables of the primal and vice versa. The objective value of any dual feasible solution is a lower bound on the optimum of P. When P is satisfiable, its dual is also satisfiable and the values of their optima meet. For a given feasible solution x̂ of P, the slack of constraint i is slack_x̂(i) = A_i x̂ − b_i. Given a dual feasible solution ŷ, the slack of the i-th dual constraint, slack^D_ŷ(i), is the reduced cost of primal variable i, rc_ŷ(i). The reduced cost rc_ŷ(i) is interpreted as a lower bound on the amount by which the dual objective would increase over bᵀŷ if x_i were forced to be non-zero in the primal.

An integer linear program (ILP) is a linear program in which we replace the constraint x ∈ ℝⁿ by x ∈ ℤⁿ; it is an NP-hard optimization problem.
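The two facts used throughout the paper, weak duality and the reading of reduced costs as dual slacks, can be checked numerically. The sketch below is a made-up two-variable, two-constraint instance (not taken from the paper), in the min{cᵀx | Ax ≥ b, x ≥ 0} form defined above.

```python
# Hypothetical tiny LP illustrating weak duality and reduced costs.
# Primal: min c^T x  s.t.  A x >= b, x >= 0
# Dual:   max b^T y  s.t.  A^T y <= c, y >= 0
A = [[1, 2],
     [3, 1]]
b = [4, 6]
c = [5, 4]

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(
        sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))

def dual_feasible(y):
    return all(yi >= 0 for yi in y) and all(
        sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(2))

def reduced_cost(y, j):
    # rc_y(j) = c_j - sum_i A_ij y_i: the slack of the j-th dual constraint
    return c[j] - sum(A[i][j] * y[i] for i in range(2))

x_hat = [2.0, 1.0]   # a feasible primal point
y_hat = [1.0, 1.0]   # a feasible dual point
assert primal_feasible(x_hat) and dual_feasible(y_hat)

primal_obj = sum(c[j] * x_hat[j] for j in range(2))   # 14.0
dual_obj = sum(b[i] * y_hat[i] for i in range(2))     # 10.0
# Weak duality: any dual feasible objective bounds the primal from below.
assert dual_obj <= primal_obj
# Reduced costs of a dual feasible point are non-negative.
assert all(reduced_cost(y_hat, j) >= 0 for j in range(2))
```

This non-negativity of reduced costs under any dual feasible ŷ is exactly what the bound-improvement argument of Section 3 builds on.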
|
Bayesian Networks

A Bayesian network is a directed graphical model B = ⟨G, P⟩ where G = ⟨V, E⟩ is a directed acyclic graph (DAG) called the structure of B, and P are its parameters. A BN describes a normalised joint probability distribution. Each vertex of the graph corresponds to a random variable and the presence of an edge between two vertices denotes direct conditional dependence. Each vertex v_i is also associated with a conditional probability distribution P(v_i | parents(v_i)). The CPDs are the parameters of B.

The approach which we use here for learning a BN from data is the score-and-search method. Given a set of multivariate discrete data I = {I_1, ..., I_N}, a scoring function σ(G | I) measures the quality of the BN with underlying structure G. The BNSL problem asks to find a structure G that minimises σ(G | I) for some scoring function σ, and it is NP-hard [Chickering, 1995]. Several scoring functions have been proposed for this purpose, including BDeu [Buntine, 1991; Heckerman et al., 1995] and BIC [Schwarz, 1978; Lam and Bacchus, 1994]. These functions are decomposable and can be expressed as the sum of local scores which only depend on the set of parents (from now on, parent set) of each vertex: σ_F(G | I) = Σ_{v∈V} σ^v_F(parents(v) | I) for F ∈ {BDeu, BIC}. In this setting, we first compute local scores and then compute the structure of minimal score. Although there is potentially an exponential number of local scores that have to be computed, the number of parent sets actually considered is often much smaller, for example because we restrict the maximum cardinality of the parent sets considered or because we exploit dedicated pruning rules [de Campos and Ji, 2010; de Campos et al., 2018]. We denote by PS(v) the set of candidate parent sets of v and by PS₋C(v) those parent sets that do not intersect C. In the following, we assume that local scores are precomputed and given as input, as is common in similar works. We also omit explicitly mentioning I or F, as they are constant for solving any given instance.
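Decomposability is what makes the parent-set formulation workable: the global score of a candidate DAG is just a sum of per-vertex table lookups over precomputed local scores. A minimal sketch, with invented local-score values:

```python
# Score decomposition sketch: sigma(G) = sum_v sigma_v(parents(v)).
# The local-score table below is made up for illustration.
local_score = {
    'a': {frozenset(): 3.0, frozenset({'b'}): 1.0},
    'b': {frozenset(): 2.0, frozenset({'a'}): 0.5},
}

def sigma(parents):
    # parents: dict mapping each vertex to its chosen parent set
    return sum(local_score[v][frozenset(ps)] for v, ps in parents.items())

g1 = {'a': {'b'}, 'b': set()}   # structure b -> a
g2 = {'a': set(), 'b': set()}   # empty structure
assert sigma(g1) == 3.0         # 1.0 + 2.0
assert sigma(g2) == 5.0         # 3.0 + 2.0
```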
|
Let C be a set of vertices of a graph G. C is a violated cluster if the parent set of each vertex v ∈ C intersects C. Then, we can prove the following property:

Property 1. A directed graph G = ⟨V, E⟩ is acyclic if and only if it contains no violated clusters, i.e., for all non-empty C ⊆ V, there exists v ∈ C such that parents(v) ∩ C = ∅.
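On small graphs, Property 1 can be verified by brute force: a graph given by its parent sets is acyclic exactly when no non-empty vertex subset is a violated cluster. The sketch below checks both directions on two illustrative graphs (not from the paper), comparing against a standard peeling-based acyclicity test.

```python
from itertools import chain, combinations

def has_violated_cluster(parents):
    # C is violated if every v in C has parents(v) intersecting C
    V = list(parents)
    subsets = chain.from_iterable(
        combinations(V, k) for k in range(1, len(V) + 1))
    return any(all(parents[v] & set(C) for v in C) for C in subsets)

def is_acyclic(parents):
    # Kahn-style peeling: repeatedly remove vertices whose remaining
    # parents have all been removed already
    remaining = set(parents)
    while True:
        free = {v for v in remaining if not (parents[v] & remaining)}
        if not free:
            break
        remaining -= free
    return not remaining

cyclic = {0: {1}, 1: {0}, 2: set()}      # 0 <-> 1 is a 2-cycle
acyc = {0: set(), 1: {0}, 2: {0, 1}}     # a DAG
assert has_violated_cluster(cyclic) and not is_acyclic(cyclic)
assert not has_violated_cluster(acyc) and is_acyclic(acyc)
```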
|
The GOBNILP solver [Bartlett and Cussens, 2017] formulates the problem as the following 0/1 ILP:

    min  Σ_{v∈V, S⊆V\{v}} σ^v(S) x_{v,S}                        (1)
    s.t. Σ_{S∈PS(v)} x_{v,S} = 1            ∀v ∈ V              (2)
         Σ_{v∈C, S∈PS₋C(v)} x_{v,S} ≥ 1     ∀C ⊆ V              (3)
         x_{v,S} ∈ {0, 1}                   ∀v ∈ V, S ∈ PS(v)   (4)

Algorithm 1: Acyclicity Checker

    acycChecker(V, D)
        order ← {}
        changes ← true
        while changes do
            changes ← false
            foreach v ∈ V \ order do
                if ∃S ∈ D(v) s.t. (S ∩ V) ⊆ order then
    1               order ← order + v
                    changes ← true
        return order
|
This ILP has a 0/1 variable x_{v,S} for each candidate parent set S of each vertex v, where x_{v,S} = 1 means that S is the parent set of v. The objective (1) directly encodes the decomposition of the scoring function. The constraints (2) assert that exactly one parent set is selected for each random variable. Finally, the cluster inequalities (3) are violated when C is a violated cluster. We denote the cluster inequality for cluster C as cons(C) and the 0/1 variables involved as varsof(C). As there is an exponential number of these, GOBNILP generates only those that improve the current linear relaxation, and they are referred to as cluster cuts. This is itself an NP-hard problem [Cussens et al., 2017], which GOBNILP also encodes and solves as an ILP. Interestingly, these inequalities are facets of the BNSL polytope [Cussens et al., 2017], so they stand to improve the relaxation significantly.
|
The CPBayes solver [van Beek and Hoffmann, 2015] models BNSL as a constraint program. The CP model has a parent set variable for each random variable, whose domain is the set of possible parent sets, as well as order variables, which give a total order of the variables that agrees with the partial order implied by the DAG. The objective is the same as (1). It includes channelling constraints between the two sets of variables and various symmetry breaking and dominance constraints. It computes a lower bound using two separate mechanisms: a component caching scheme and a pattern database that is computed before search and holds the optimal graphs for all orderings of partitions of the variables. Acyclicity is enforced using a global constraint with a bespoke propagator. The main routine of the propagator is acycChecker (Algorithm 1), which returns an order of all variables if the current set of domains of the parent set variables may produce an acyclic graph, or a partially completed order if the constraint is unsatisfiable. This algorithm is based on Property 1.

Briefly, the algorithm takes the domains of the parent set variables as input and greedily constructs an ordering of the variables, such that if variable v is later in the order than v′, then v ∉ parents(v′)¹. It does so by trying to pick a parent set S for an as yet unordered vertex such that S is entirely contained in the set of previously ordered vertices². If all assignments yield cyclic graphs, it will reach a point where all remaining vertices are in a violated cluster in all possible graphs, and it will return a partially constructed order. If there exists an assignment that gives an acyclic graph, it will be possible by Property 1 to select from a variable in V \ order a parent set which does not intersect V \ order, hence is a subset of order. The value S chosen for each variable in line 1 also gives a witness of such an acyclic graph.

¹We treat order as both a sequence and a set, as appropriate.
²When propagating the acyclicity constraint it always holds that a ∩ V = a, so this statement is true. In Section 3.1, we use the algorithm in a setting where this is not always the case.
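The greedy construction can be sketched in a few lines. The function below is a hypothetical Python rendering of Algorithm 1, not the CPBayes implementation; the domains are invented. Note the `(S & V)` intersection, which implements the subtlety of footnote 2 when the routine is called on a subset of the variables.

```python
# Sketch of acycChecker (Algorithm 1): greedily extend a vertex order;
# a vertex can be appended once some parent set in its domain is
# contained in the already-ordered vertices.
def acyc_checker(V, D):
    """V: iterable of vertices; D: dict vertex -> list of candidate
    parent sets (frozensets). Returns the constructed order (a list)."""
    V = set(V)
    order, placed = [], set()
    changes = True
    while changes:
        changes = False
        for v in V - placed:
            if any((S & V) <= placed for S in D[v]):
                order.append(v)
                placed.add(v)
                changes = True
    return order

# Acyclic case: a full order is found (a witness of acyclicity).
D = {0: [frozenset()],
     1: [frozenset({0}), frozenset({2})],
     2: [frozenset({0, 1})]}
order = acyc_checker({0, 1, 2}, D)
assert set(order) == {0, 1, 2}

# Cyclic case: nothing can be placed, so a partial order is returned.
D_bad = {0: [frozenset({1})], 1: [frozenset({0})]}
assert acyc_checker({0, 1}, D_bad) == []
```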
|
An immediate connection between the GOBNILP and CPBayes models is that the ILP variables x_{v,S}, ∀S ∈ PS(v), are the direct encoding [Walsh, 2000] of the parent set variables of the CP model. Therefore, we use them interchangeably, i.e., we can refer to the value S in D(v) as x_{v,S}.
|
3 Restricted Cluster Detection

One of the issues hampering the performance of CPBayes is that it computes relatively poor lower bounds at deeper levels of the search tree. Intuitively, as the parent set variable domains get reduced by removing values that are inconsistent with the current ordering, the lower bound computation discards more information about the current state of the problem. We address this by adapting the branch-and-cut approach of GOBNILP. However, instead of finding all violated cluster inequalities that may improve the LP lower bound, we only identify a subset of them.
|
Consider the linear relaxation of the ILP (1)–(4), restricted to a subset 𝒞 of all valid cluster inequalities, i.e., with equation (4) replaced by 0 ≤ x_{v,S} ≤ 1, ∀v ∈ V, S ∈ PS(v), and with equation (3) restricted only to clusters in 𝒞. We denote this LP_𝒞. We exploit the following property of this LP.

Theorem 1. Let ŷ be a dual feasible solution of LP_𝒞 with dual objective o. Then, if C is a cluster such that C ∉ 𝒞 and the reduced cost of every variable in varsof(C) is greater than 0, there exists a dual feasible solution ŷ′ of LP_{𝒞∪{C}} with dual objective o′ ≥ o + minrc(C), where minrc(C) = min_{x∈varsof(C)} rc_ŷ(x).
|
Proof. The only difference from LP_𝒞 to LP_{𝒞∪{C}} is the extra constraint cons(C) in the primal and the corresponding dual variable y_C. In the dual, y_C only appears in the dual constraints of the variables varsof(C) and in the objective, always with coefficient 1. Under the feasible dual solution ŷ ∪ {y_C = 0}, these constraints have slack at least minrc(C), by the definition of reduced cost. Therefore, we can set ŷ′ = ŷ ∪ {y_C = minrc(C)}, which remains feasible and has objective o′ = o + minrc(C), as required.

Theorem 1 gives a class of cluster cuts, which we call RC-clusters, for reduced-cost clusters, guaranteed to improve the lower bound. Importantly, this requires only a feasible, perhaps sub-optimal, dual solution.
|
Example 1 (Running example). Consider a BNSL instance with domains as shown in Table 1 and let 𝒞 = ∅. Then ŷ = 0 leaves the reduced cost of every variable at exactly its primal objective coefficient. The corresponding x̂ assigns 1 to variables with reduced cost 0 and 0 to everything else. These are both optimal solutions, with cost 0, and x̂ is integral, so it is also a solution of the corresponding ILP. However, it is not a solution of the BNSL, as it contains several cycles, including C = {0, 2, 3}. The cluster inequality cons(C) is violated in the primal and allows the dual bound to be increased.

    Variable | Domain Value | Cost
    0        | {2}          | 0
    1        | {2,4}        | 0
             | {}           | 6
    2        | {1,3}        | 0
             | {}           | 10
    3        | {0}          | 0
             | {}           | 5
    4        | {2,3}        | 0
             | {3}          | 1
             | {2}          | 2
             | {}           | 3

Table 1: BNSL instance used as running example.

Algorithm 2: Lower bound computation with RC-clusters

    lowerBoundRC(V, D, 𝒞)
        ŷ ← DualSolve(LP_𝒞(D))
        while True do
    2       C ← V \ acycChecker(V, D^rc_{𝒞,ŷ})
    3       if C = ∅ then
                return ⟨cost(ŷ), 𝒞⟩
            C ← minimise(C)
            𝒞 ← 𝒞 ∪ {C}
            ŷ ← DualImprove(ŷ, LP_𝒞(D), C)
|
We consider the problem of discovering RC-clusters within the CP model of CPBayes. First, we introduce the notation LP_𝒞(D), which is LP_𝒞 with the additional constraints x_{v,S} = 0 for each S ∉ D(v). Conversely, D^rc_{𝒞,ŷ} is the set of domains minus the values whose corresponding variable in LP_𝒞(D) has non-zero reduced cost under ŷ, i.e., D^rc_{𝒞,ŷ} = D′, where D′(v) = {S | S ∈ D(v) ∧ rc_ŷ(x_{v,S}) = 0}. With this notation, for values S ∉ D(v), x_{v,S} = 1 is infeasible in LP_𝒞(D), hence effectively rc_ŷ(x_{v,S}) = ∞.

Theorem 2. Given a collection of clusters 𝒞, a set of domains D and ŷ, a feasible dual solution of LP_𝒞(D), there exists an RC-cluster C ∉ 𝒞 if and only if D^rc_{𝒞,ŷ} does not admit an acyclic assignment.
|
Proof. (⇒) Let C be such a cluster. All variables in varsof(C) have non-zero reduced cost, so none of the corresponding values are in D^rc_{𝒞,ŷ}; hence cons(C) is violated and there is no acyclic assignment.

(⇐) Consider once again acycChecker, in Algorithm 1. When it fails to find a witness of acyclicity, it has reached a point where order ⊊ V and, for the remaining variables C = V \ order, all allowed parent sets intersect C. So if acycChecker is called with D^rc_{𝒞,ŷ}, all values in varsof(C) have reduced cost greater than 0, so C is an RC-cluster.

Theorem 2 shows that detecting unsatisfiability of D^rc_{𝒞,ŷ} is enough to find an RC-cluster. Its proof also gives a way to extract such a cluster from acycChecker.
|
Algorithm 2 shows how Theorems 1 and 2 can be used to compute a lower bound. It is given the current set of domains and a set of clusters as input. It first solves the dual of LP_𝒞(D), potentially suboptimally. Then, it uses acycChecker iteratively to determine whether there exists an RC-cluster C under the current dual solution ŷ. If that cluster is empty, there are no more RC-clusters, and it terminates and returns a lower bound equal to the cost of ŷ under LP_𝒞(D) and an updated pool of clusters. Otherwise, it minimises C (see Section 3.1), adds it to the pool of clusters and solves the updated LP. It does this by calling DualImprove, which solves LP_𝒞(D) exploiting the fact that only the cluster inequality cons(C) has been added.
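The loop can be sketched directly on reduced costs. The code below is a simplified hypothetical rendering of Algorithm 2: it extracts stuck clusters greedily and deliberately skips cluster minimisation, so on the running example of Table 1 it stops at the weaker bound of 6 that Example 3 attributes to unminimised clusters (the minimised variant reaches 10).

```python
# Simplified lowerBoundRC sketch operating on reduced costs directly.
def find_cluster(V, rc):
    # Run the greedy order construction on the zero-reduced-cost domains;
    # the unplaced vertices form a (possibly empty) RC-cluster.
    placed, changed = set(), True
    while changed:
        changed = False
        for v in V - placed:
            if any(cost == 0 and S <= placed for S, cost in rc[v].items()):
                placed.add(v)
                changed = True
    return V - placed

def lower_bound_rc(V, rc):
    lb = 0
    while True:
        C = find_cluster(V, rc)
        if not C:
            return lb
        # minrc(C): cheapest value not intersecting C, over vertices of C
        minrc = min(cost for v in C
                    for S, cost in rc[v].items() if not (S & C))
        lb += minrc
        for v in C:                      # transfer minrc to the bound
            for S in rc[v]:
                if not (S & C):
                    rc[v][S] -= minrc

# Running example (Table 1): after the per-vertex minima are in the
# bound, reduced costs equal the table costs.
rc = {0: {frozenset({2}): 0},
      1: {frozenset({2, 4}): 0, frozenset(): 6},
      2: {frozenset({1, 3}): 0, frozenset(): 10},
      3: {frozenset({0}): 0, frozenset(): 5},
      4: {frozenset({2, 3}): 0, frozenset({3}): 1, frozenset({2}): 2,
          frozenset(): 3}}
lb = lower_bound_rc({0, 1, 2, 3, 4}, rc)
```

Without minimisation the clusters found are {0,1,2,3,4}, {0,1,2,3} and {0,1,2} with transfers 3, 2 and 1, giving lb = 6.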
|
Example 2. Continuing our example, consider the behavior of acycChecker with domains D^rc_{∅,ŷ} after the initial dual solution ŷ = 0. Since the empty set has non-zero reduced cost for all variables, acycChecker fails with order = {}, hence C = V. We postpone discussion of minimization for now, other than to observe that C can be minimized to C₁ = {1, 2}. We add cons(C₁) to the primal LP and set the dual variable of C₁ to 6 in the new dual solution ŷ₁. The reduced costs of x_{1,{}} and x_{2,{}} are decreased by 6 and, importantly, rc_ŷ₁(x_{1,{}}) = 0. In the next iteration of lowerBoundRC, acycChecker is invoked on D^rc_{{C₁},ŷ₁} and returns the cluster {0, 2, 3, 4}. This is minimized to C₂ = {0, 2, 3}. The parent sets in the domains of these variables that do not intersect C₂ are x_{2,{}} and x_{3,{}}, so minrc(C₂) = 4, and we add cons(C₂) to the primal and set the dual variable of C₂ to 4 in ŷ₂. This brings the dual objective to 10. The reduced cost of x_{2,{}} is now 0, so in the next iteration acycChecker runs on D^rc_{{C₁,C₂},ŷ₂} and succeeds with the order {2, 0, 3, 4, 1}, so the lower bound cannot be improved further. This also happens to be the cost of the optimal structure.
|
Theorem 3. Algorithm 2 terminates but is not confluent.

Proof. It terminates because there is a finite number of cluster inequalities and each iteration generates one. In the extreme, all cluster inequalities are in 𝒞 and the test at line 3 succeeds, terminating the algorithm.

To see that it is not confluent, consider an example with 3 clusters C₁ = {v₁, v₂}, C₂ = {v₂, v₃} and C₃ = {v₃, v₄}, and assume that the minimum reduced cost for each cluster is unit and comes from x_{2,{4}} and x_{3,{1}}, i.e., the former value has minimum reduced cost for C₁ and C₂ and the latter for C₂ and C₃. Then, if minimisation generates C₁ first, the reduced cost of x_{3,{1}} is unaffected by DualImprove, so it can then discover C₃, to get a lower bound of 2. On the other hand, if minimisation generates C₂ first, the reduced costs of both x_{2,{4}} and x_{3,{1}} are decreased to 0 by DualImprove, so neither C₁ nor C₃ is an RC-cluster under the new dual solution and the algorithm terminates with a lower bound of 1.
|
Related Work. The idea of performing propagation on the subset of domains that have reduced cost 0 has been used in the VAC algorithm for WCSPs [Cooper et al., 2010]. Our method is more lightweight, as it only performs propagation on the acyclicity constraint, but may give worse bounds. The bound update mechanism in the proof of Theorem 1 is also simpler than VAC and more akin to the "disjoint core phase" in core-guided MaxSAT solvers [Morgado et al., 2013].

3.1 Cluster Minimisation
|
It is crucial for the quality of the lower bound produced by Algorithm 2 that the RC-clusters discovered by acycChecker are minimised, as the following example shows. Empirically, omitting minimisation rendered the lower bound ineffective.

Example 3. Suppose that we attempt to use lowerBoundRC without cluster minimization. Then, we use the cluster given by acycChecker, C₁ = {0, 1, 2, 3, 4}. We have minrc(C₁) = 3, given by the empty parent set value of all variables. This brings the reduced cost of x_{4,{}} to 0. It then proceeds to find the cluster C₂ = {0, 1, 2, 3} with minrc(C₂) = 2 and decrease the reduced cost of x_{3,{}} to 0, then C₃ = {0, 1, 2} with minrc(C₃) = 1, which brings the reduced cost of x_{1,{}} to 0. At this point, acycChecker succeeds with the order {4, 3, 1, 2, 0} and lowerBoundRC returns a lower bound of 6, compared to 10 with minimization. The order produced by acycChecker also disagrees with the optimum structure.
|
Therefore, when we get an RC-cluster C at line 2 of Algorithm 2, we want to extract a minimal RC-cluster (with respect to set inclusion) from C, i.e., a cluster C′ ⊆ C such that for all ∅ ⊂ C″ ⊂ C′, C″ is not a cluster.
|
Minimisation problems like this are handled with an appropriate instantiation of QuickXPlain [Junker, 2004]. These algorithms find a minimal subset of constraints, not variables. We can pose this as a constraint set minimisation problem by implicitly treating a variable as the constraint "this variable is assigned a value" and treating acyclicity as a hard constraint.

However, the property of being an RC-cluster is not monotone. For example, consider the variables {v₁, v₂, v₃, v₄} and ŷ such that the domains restricted to values with 0 reduced cost are {{v₂}}, {{v₁}}, {{v₄}}, {{v₃}}, respectively. Then {v₁, v₂, v₃, v₄}, {v₁, v₂} and {v₃, v₄} are RC-clusters, but {v₁, v₂, v₃} is not, because the sole value in the domain of v₃ does not intersect {v₁, v₂, v₃}. We instead minimise the set of variables that does not admit an acyclic solution and hence contains an RC-cluster. A minimal unsatisfiable set that contains a cluster is an RC-cluster, so this allows us to use the variants of QuickXPlain. We focus on RobustXPlain, which is called the deletion-based algorithm in the SAT literature for minimising unsatisfiable subsets [Marques-Silva and Mencía, 2020]. The main idea of the algorithm is to iteratively pick a variable and categorise it as either appearing in all minimal subsets of C, in which case we mark it as necessary, or not, in which case we discard it. To detect if a variable appears in all minimal unsatisfiable subsets, we only have to test if omitting this variable yields a set with no unsatisfiable subsets, i.e., with no violated clusters. This is given in pseudocode in Algorithm 3. This exploits a subtle feature of acycChecker as described in Algorithm 1: if it is called with a subset of V, it does not try to place the missing variables in the order and allows parent sets to use these missing variables. Omitting variables from the set given to acycChecker acts as omitting the constraint that these variables be assigned a value. The complexity of MinimiseCluster is O(n³d), where n = |V| and d = max_{v∈V} |D(v)|, a convention we adopt throughout.

Algorithm 3: Find a minimal RC-cluster subset of C

    MinimiseCluster(V, D, C)
        N ← ∅
        while C ≠ ∅ do
            pick c ∈ C
            C ← C \ {c}
            C′ ← V \ acycChecker(N ∪ C, D)
            if C′ = ∅ then
                N ← N ∪ {c}
            else
                C ← C′ \ N
        return N
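A compact sketch of the deletion-based minimisation follows. It is a hypothetical rendering of Algorithm 3 with deterministic smallest-index tie-breaking; on the zero-reduced-cost domains of the running example it recovers the minimal cluster {1, 2} of Example 2.

```python
# Deletion-based cluster minimisation sketch (Algorithm 3): drop one
# vertex at a time; if a violated cluster remains, the vertex is
# unnecessary, otherwise it belongs to every minimal cluster.
def acyc_checker(V, D):
    # Footnote-2 semantics: parent sets may use vertices outside V.
    V = set(V)
    placed, changed = set(), True
    while changed:
        changed = False
        for v in V - placed:
            if any((S & V) <= placed for S in D[v]):
                placed.add(v)
                changed = True
    return placed

def minimise_cluster(V, D, C):
    N, C = set(), set(C)
    while C:
        c = min(C)                    # deterministic pick
        C.remove(c)
        rest = N | C
        stuck = rest - acyc_checker(rest, D)
        if not stuck:
            N.add(c)                  # c is necessary
        else:
            C = stuck - N             # shrink to the remaining stuck set
    return N

# Zero-reduced-cost domains of the running example before any cut:
D = {0: [frozenset({2})], 1: [frozenset({2, 4})], 2: [frozenset({1, 3})],
     3: [frozenset({0})], 4: [frozenset({2, 3})]}
C = minimise_cluster({0, 1, 2, 3, 4}, D, {0, 1, 2, 3, 4})
assert C == {1, 2}
```

With a different pick order the routine may instead return {0, 2, 3}, which is consistent with the non-confluence discussed in Theorem 3.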
|
4 Solving the Cluster LP

Solving a linear program can be done in polynomial time, so in principle DualSolve can be implemented using any of the commercial or free software libraries available for this. However, solving this LP using a general LP solver is too expensive in this setting. As a data point, solving the instance steelBIC with our modified solver took 25,016 search nodes and 45 seconds of search, and generated 5,869 RC-clusters. Approximately 20% of the search time was spent solving the LP using the greedy algorithm that we describe in this section. CPLEX took around 70 seconds to solve LP_𝒞 with these cluster inequalities just once. While this data point is not proof that solving the LP exactly is too expensive, it is a pretty strong indicator. We have also not explored nearly-linear-time algorithms for solving positive LPs [Allen-Zhu and Orecchia, 2015].
|
Our greedy algorithm is derived from Theorem 1. Observe first that LP_𝒞 with 𝒞 = ∅, i.e., only with constraints (2), has an optimal dual solution ŷ₀ that assigns the dual variable y_v of Σ_{S∈PS(v)} x_{v,S} = 1 to min_{S∈PS(v)} σ^v(S). That leaves at least one of x_{v,S}, S ∈ PS(v), with reduced cost 0 for each v ∈ V. DualSolve starts with ŷ₀ and then iterates over 𝒞. Given ŷ_{i−1} and a cluster C, it sets ŷ_i = ŷ_{i−1} if C is not an RC-cluster. Otherwise, it increases the lower bound by c = minrc(C) and sets ŷ_i = ŷ_{i−1} ∪ {y_C = c}. It remains to specify the order in which we traverse 𝒞.

We sort clusters by increasing size |C|, breaking ties by decreasing minimum cost of all original parent set values in varsof(C). This favours finding non-overlapping cluster cuts with high minimum cost. In Section 6, we give experimental evidence that this computes better lower bounds.

DualImprove could be implemented by discarding previous information and calling DualSolve(LP_𝒞(D)). Instead, it uses the RC-cluster C to update the solution without revisiting previous clusters.
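The greedy sweep can be sketched as follows. This is hypothetical illustration code, not the CPBayes implementation: it seeds the bound with the per-vertex minima of constraints (2), then traverses a given cluster pool in the sorted order described above, transferring minrc(C) whenever a cluster is still an RC-cluster. On the running example, with the two minimised clusters of Example 2 in the pool, it reproduces the bound of 10.

```python
# Greedy DualSolve sketch: per-vertex minima, then a sorted cluster sweep.
def dual_solve(scores, clusters):
    # scores: dict v -> dict parent_set -> original score
    rc = {v: dict(d) for v, d in scores.items()}   # reduced costs
    lb = 0
    for v in rc:                       # constraints (2): vertex minima
        m = min(rc[v].values())
        lb += m
        for S in rc[v]:
            rc[v][S] -= m

    def key(C):
        # increasing size, ties by decreasing min original outside cost
        return (len(C), -min(scores[v][S] for v in C for S in scores[v]
                             if not (S & C)))

    for C in sorted(clusters, key=key):
        outside = [(v, S) for v in C for S in rc[v] if not (S & C)]
        if not outside:
            continue
        minrc = min(rc[v][S] for v, S in outside)
        if minrc > 0:                  # C is an RC-cluster: transfer
            lb += minrc
            for v, S in outside:
                rc[v][S] -= minrc
    return lb

scores = {0: {frozenset({2}): 0},
          1: {frozenset({2, 4}): 0, frozenset(): 6},
          2: {frozenset({1, 3}): 0, frozenset(): 10},
          3: {frozenset({0}): 0, frozenset(): 5},
          4: {frozenset({2, 3}): 0, frozenset({3}): 1, frozenset({2}): 2,
              frozenset(): 3}}
lb = dual_solve(scores, [{1, 2}, {0, 2, 3}])
assert lb == 10
```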
|
In terms of implementation, we store varsof(C) for each cluster, not cons(C). During DualSolve, we maintain the reduced costs of variables rather than the dual solution; otherwise computing each reduced cost would require iterating over all cluster inequalities that contain a variable. Specifically, we maintain Δ_{v,S} = σ^v(S) − rc_ŷ(x_{v,S}). In order to test whether a cluster C is an RC-cluster, we need to compute minrc(C). To speed this up, we associate with each stored cluster a support pair (v, S) corresponding to the last minimum cost found. If rc_ŷ(x_{v,S}) = 0, the cluster is not an RC-cluster and is skipped. Moreover, parent set domains are sorted by increasing score σ^v(S), so S ≻ S′ ⟺ σ^v(S) > σ^v(S′). We also maintain the maximum amount of cost transferred to the lower bound, Δ^max_v = max_{S∈D(v)} Δ_{v,S}, for every v ∈ V. We stop iterating over D(v) as soon as σ^v(S) − Δ^max_v is greater than or equal to the current minimum, because ∀S′ ≻ S, σ^v(S′) − Δ_{v,S′} ≥ σ^v(S) − Δ^max_v. In practice, on very large instances 97.6% of unproductive clusters are detected by support pairs and only 8.6% of the current domains are visited for the rest³.

To keep a bounded-memory cluster pool, we discard frequently unproductive clusters. We throw away large clusters with a productive ratio #productive / (#productive + #unproductive) smaller than 1/1,000. Clusters of size 10 or less are always kept, because they are often more productive and their number is bounded.
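The pool-management policy amounts to a small predicate; the sketch below is illustrative only, with the size and ratio thresholds quoted above.

```python
# Bounded cluster-pool eviction policy sketch: evict big clusters that
# almost never improve the bound; always keep small ones.
def keep(cluster_size, productive, unproductive,
         max_small=10, min_ratio=1 / 1000):
    if cluster_size <= max_small:
        return True                   # small clusters are always kept
    total = productive + unproductive
    return total == 0 or productive / total >= min_ratio

assert keep(5, 0, 10**6)          # small: kept regardless of history
assert not keep(50, 1, 10000)     # ratio 1/10001 < 1/1000: evicted
assert keep(50, 2, 100)           # productive often enough: kept
```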
|
5 GAC for the Acyclicity Constraint

Previously, van Beek and Hoffmann [van Beek and Hoffmann, 2015] showed that, using acycChecker as a subroutine, one can construct a GAC propagator for the acyclicity constraint by probing, i.e., detecting unsatisfiability after assigning each individual value and pruning those values that lead to unsatisfiability. acycChecker is in O(n²d), so this gives a GAC propagator in O(n³d²). We show here that we can enforce GAC in time O(n³d), a significant improvement given that d is usually much larger than n.
|
Suppose acycChecker finds a witness of acyclicity and returns the order O = {v₁, ..., vₙ}. Every parent set S of a variable v that is a subset of {v′ | v′ ≺_O v} is supported by O. We call such values consistent with O. Consider now S ∈ D(v_i) which is inconsistent with O; we therefore have to probe to see if it is supported. We know that during the probe, nothing forces acycChecker to deviate from {v₁, ..., v_{i−1}}. So in a successful probe, acycChecker constructs a new order O′ which is identical to O in the first i−1 positions and in which it moves v_i further down. Then all values consistent with O′ are supported. This suggests that instead of probing each value, we can probe different orders.
|
Acyclicity-GAC, shown in Algorithm 4, exploits this insight. It first ensures that acycChecker can produce a valid order O. For each variable v, it constructs a new order O′ from O so that v is as late as possible. It then prunes all parent set values of v that are inconsistent with O′.

Theorem 4. Algorithm 4 enforces GAC on the Acyclicity constraint in O(n³d).
|
Proof. Let v ∈ V and S ∈ D(v). Let O = {O_1, ..., O_n} and Q = {Q_1, ..., Q_n} be two valid orders such that O does not support S whereas Q does. It is enough to show that we can compute from O a new order O′ that supports S by pushing v towards the end. Let O_i = Q_j = v, and let O_p = {O_1, ..., O_{i−1}}, Q_p = {Q_1, ..., Q_{j−1}} and O_s = {O_{i+1}, ..., O_n}.
|
³ See the supplementary material for more.

Algorithm 4: GAC propagator for acyclicity
|
Acyclicity-GAC(V, D)
    O ← acycChecker(V, D)
    if O ⊊ V then
        return Failure
    foreach v ∈ V do
        changes ← true
        i ← O⁻¹(v)
        prefix ← {O_1, ..., O_{i−1}}
4:      while changes do
            changes ← false
            foreach w ∈ O \ (prefix ∪ {v}) do
                if ∃S ∈ D(w) s.t. S ⊆ prefix then
                    prefix ← prefix ∪ {w}
                    changes ← true
        Prune {S | S ∈ D(v) ∧ S ⊈ prefix}
    return Success
|
Let O′ be the order O_p followed by Q_p, followed by v, followed by O_s, keeping only the first occurrence of each variable when there are duplicates. O′ is a valid order: O_p is witnessed by the assignment that witnesses O, Q_p by the assignment that witnesses Q, v by S (as in Q) and O_s by the assignment that witnesses O. It also supports S, as required.

The complexity is dominated by repeating O(n) times the loop at line 4, which is a version of acycChecker and so has complexity O(n²d), for a total of O(n³d).
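Algorithm 4 can be sketched compactly in Python. This is an illustrative reconstruction under assumptions (set-based domains, a greedy acycChecker); the actual propagator is implemented in C++ inside CPBayes.

```python
def acyc_checker(V, D):
    """Greedy acyclicity witness (sketch): repeatedly place any variable
    with a parent set contained in the variables placed so far."""
    order, placed = [], set()
    progress = True
    while progress:
        progress = False
        for v in V:
            if v not in placed and any(S <= placed for S in D[v]):
                order.append(v)
                placed.add(v)
                progress = True
    return order

def acyclicity_gac(V, D):
    """Sketch of Acyclicity-GAC: for each variable v, rebuild a valid
    order in which v comes as late as possible, then prune the parent
    sets of v not contained in the resulting prefix. Returns the pruned
    domains, or None when no acyclic support exists (Failure)."""
    order = acyc_checker(V, D)
    if len(order) < len(V):
        return None  # acycChecker could not order all of V
    pos = {v: i for i, v in enumerate(order)}
    pruned = {}
    for v in V:
        prefix = set(order[:pos[v]])  # {O_1, ..., O_{i-1}}
        changed = True
        while changed:  # line 4 of Algorithm 4: grow the prefix greedily
            changed = False
            for w in V:
                if w != v and w not in prefix and any(S <= prefix for S in D[w]):
                    prefix.add(w)
                    changed = True
        # Keep exactly the values of v consistent with the new order O'
        pruned[v] = {S for S in D[v] if S <= prefix}
    return pruned
```

Each variable triggers one run of the prefix-growing loop rather than one probe per value, which is where the factor d is saved compared to probing.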
|
6 Experimental Results |
|
6.1 Benchmark Description and Settings |
|
The datasets come from the UCI Machine Learning Repository⁴, the Bayesian Network Repository⁵, and the Bayesian Network Learning and Inference Package⁶. Local scores were computed from the datasets using B. Malone's code⁷. BDeu and BIC scores were used for medium size instances (less than 64 variables) and only the BIC score for large instances (above 64 variables). The maximum number of parents was limited to 5 for large instances (except for accidents.test, with a maximum of 8), a high value that allows learning even complex structures [Scanagatta et al., 2015]. For example, jester.test has 100 random variables, a sample size of 4,116 and 770,950 parent set values. For medium instances, no restriction was applied, except for some BDeu scores (limit set to 6 or 8 to complete the computation of the local scores within 24 hours of CPU time [Lee and van Beek, 2017]).
|
We have modified the C++ source of CPBayes v1.1 by adding our lower bound mechanism and GAC propagator. We call the resulting solver ELSA and have made it publicly available. For the evaluation, we compare with GOBNILP v1.6.3 using SCIP v3.2.1 with CPLEX v12.7.0. All computations were performed on a single core of an Intel Xeon E5-2680 v3 at 2.50 GHz with 256 GB of RAM and a 1-hour
|
⁴ http://archive.ics.uci.edu/ml
⁵ http://www.bnlearn.com/bnrepository
⁶ https://ipg.idsia.ch/software.php?id=132
|
⁷ http://urlearning.org

Instance        |V|  Σ|ps(v)|  GOBNILP  CPBayes          ELSA               ELSA\GAC           ELSA_chrono
carpo100 BIC     60       424      0.6  78.5 (29.7)      40.6 (0.0)         40.7 (0.0)         40.6 (0.0)
alarm1000 BIC    37      1003      1.2  204.2 (172.9)    27.8 (0.7)         28.8 (1.5)         29.9 (2.7)
flag BDe         29      1325      4.4  19.0 (18.1)      0.9 (0.1)          0.9 (0.1)          1.3 (0.5)
wdbc BIC         31     14614     99.8  629.8 (576.6)    48.9 (1.6)         49.1 (1.7)         50.3 (3.1)
kdd.ts           64     43584    327.6  †                1314.5 (158.2)     1405.4 (239.5)     1663.2 (512.4)
steel BIC        28     93027        †  1270.9 (1218.9)  98.0 (49.2)        99.2 (50.1)        130.0 (81.2)
kdd.test         64    152873   1521.7  †                1475.3 (120.6)     1515.9 (128.5)     1492.4 (109.5)
mushroom BDe     23    438186        †  176.4 (56.0)     135.4 (33.7)       137.0 (35.0)       133.7 (31.9)
------------------------------------------------------------------------------------------------------------
bnetflix.ts     100    446406        †  629.0 (431.4)    1065.1 (878.4)     1111.4 (931.0)     1132.4 (936.3)
plants.test      69    520148        †  †                18981.9 (17224.0)  30791.2 (29073.0)  †
jester.ts       100    531961        †  †                10166.0 (9697.9)   14915.9 (14470.1)  23877.6 (23325.7)
accidents.ts    111    568160   1274.0  †                2238.7 (904.5)     2260.3 (986.1)     2221.1 (904.8)
plants.valid     69    684141        †  †                12347.6 (8509.7)   19853.1 (15963.1)  †
jester.test     100    770950        †  †                17637.8 (16979.2)  21284.0 (20661.9)  †
bnetflix.test   100   1103968        †  3525.2 (3283.8)  8197.7 (7975.6)    8057.3 (7841.4)    7915.0 (7686.3)
bnetflix.valid  100   1325818        †  1456.6 (1097.0)  9282.0 (8950.3)    10220.5 (9898.4)   9619.7 (9257.4)
accidents.test  111   1425966   4975.6  †                3661.7 (641.5)     4170.1 (1213.6)    3805.2 (687.6)

Table 2: Comparison of ELSA against GOBNILP and CPBayes. The time limit for instances above the line is 1h, for the rest 10h. Instances are sorted by increasing total domain size. For variants of CPBayes we report in parentheses the time spent in search, after preprocessing finishes. † indicates a timeout.
|
(resp. 10-hour) CPU time limit for medium (resp. large) size instances. We used default settings for GOBNILP with no approximation in branch-and-cut (limits/gap = 0). We used the same settings in CPBayes and ELSA for their preprocessing phase (partition lower bound sizes l_min, l_max and local search number of restarts r_min, r_max). We used two different settings depending on problem size |V|: l_min = 20, l_max = 26, r_min = 50, r_max = 500 if |V| ≤ 64; else l_min = 20, l_max = 20, r_min = 15, r_max = 30.
|
6.2 Evaluation |
|
In Table 2 we present the runtime to solve each instance to optimality with GOBNILP, CPBayes, and ELSA with default settings, without the GAC algorithm, and without sorting the cluster pool (leaving clusters in chronological order, rather than the heuristic ordering presented in Section 4). For the instances with |V| ≤ 64 (resp. > 64), we had a time limit of 1 hour (resp. 10 hours). We exclude instances that were solved within the time limit by GOBNILP and have a search time of less than 10 seconds for CPBayes and all variants of ELSA. We also exclude 8 instances that were not solved to optimality by any method. This leaves us with 17 instances to analyse here, out of 69 total. More details are given in the supplementary material, available from the authors' web pages.
|
Comparison to GOBNILP. CPBayes was already shown to be competitive with GOBNILP [van Beek and Hoffmann, 2015]. Our results in Table 2 confirm this, while showing that neither is clearly better. As for our solver ELSA, all of its variants solve every instance that GOBNILP solves within the time limit, unlike CPBayes. On top of that, ELSA solves 9 more instances optimally.
|
Comparison to CPBayes. We have made some low-level performance improvements in the preprocessing of CPBayes, so for a fairer comparison, we should compare only the search time, shown in parentheses. ELSA takes several orders of magnitude less search time to optimally solve most instances, the only exception being the bnetflix instances. ELSA also proved optimality for 8 more instances within the time limit.
|
Gain from GAC. The overhead of GAC pays off as the instances get larger. While we see neither a clear improvement nor a downgrade for the smaller instances, search time for ELSA improves by up to 47% on larger instances compared to ELSA\GAC.
|
Gain from Cluster Ordering. The ordering heuristic significantly improves the bounds computed by our greedy dual LP algorithm. Compared to not ordering the clusters, we see improved runtime throughout and 3 more instances solved to optimality.
|
7 Conclusion

We have presented a new set of inference techniques for BNSL using constraint programming, centered around the expression of the acyclicity constraint. These new techniques exploit and improve on previous work on linear relaxations of the acyclicity constraint and the associated propagator. The resulting solver explores a different trade-off on the axis of strength of inference versus speed, with GOBNILP at one extreme and CPBayes at the other. We showed experimentally that the trade-off we achieve is a better fit than either extreme, as our solver ELSA outperforms both GOBNILP and CPBayes. The major obstacle towards better scalability to larger instances is the fact that domain sizes grow exponentially with the number of variables. This is to some degree unavoidable, so our future work will focus on exploiting the structure of these domains to improve performance.

Acknowledgements
|
We thank the GenoToul (Toulouse, France) Bioinformatics platform for its support. This work has been partly funded by the "Agence nationale de la Recherche" (ANR-16-CE40-0028 Demograph project and ANR-19-PIA3-0004 ANTIDIL chair of Thomas Schiex).
|
References |
|
[Allen-Zhu and Orecchia, 2015] Zeyuan Allen-Zhu and Lorenzo Orecchia. Nearly-linear time positive LP solver with faster convergence rate. In Proc. of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC'15, pages 229–236, New York, NY, USA, 2015.

[Bartlett and Cussens, 2017] Mark Bartlett and James Cussens. Integer linear programming for the Bayesian network structure learning problem. Artificial Intelligence, pages 258–271, 2017.

[Berg et al., 2014] Jeremias Berg, Matti Järvisalo, and Brandon Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In Artificial Intelligence and Statistics, pages 86–95. PMLR, 2014.

[Buntine, 1991] Wray Buntine. Theory refinement on Bayesian networks. In Proc. of UAI, pages 52–60. Elsevier, 1991.

[Chickering, 1995] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Proc. of the Fifth Int. Workshop on Artificial Intelligence and Statistics (AISTATS), pages 121–130, Key West, Florida, USA, 1995.

[Cooper et al., 2010] Martin C. Cooper, Simon de Givry, Martí Sánchez, Thomas Schiex, Matthias Zytnicki, and Tomas Werner. Soft arc consistency revisited. Artificial Intelligence, 174(7-8):449–478, 2010.

[Cussens et al., 2017] James Cussens, Matti Järvisalo, Janne H. Korhonen, and Mark Bartlett. Bayesian network structure learning with integer programming: Polytopes, facets and complexity. Journal of Artificial Intelligence Research, 58:185–229, 2017.

[de Campos and Ji, 2010] Cassio Polpo de Campos and Qiang Ji. Properties of Bayesian Dirichlet scores to learn Bayesian network structures. In Proc. of AAAI-10, Atlanta, Georgia, USA, 2010.

[de Campos et al., 2018] Cassio P. de Campos, Mauro Scanagatta, Giorgio Corani, and Marco Zaffalon. Entropy-based pruning for learning Bayesian networks using BIC. Artificial Intelligence, 260:42–50, 2018.

[Fan and Yuan, 2015] Xiannian Fan and Changhe Yuan. An improved lower bound for Bayesian network structure learning. In Proc. of AAAI-15, Austin, Texas, 2015.

[Heckerman et al., 1995] David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995.

[Junker, 2004] Ulrich Junker. Preferred explanations and relaxations for over-constrained problems. In Proc. of AAAI-04, pages 167–172, San Jose, California, USA, 2004.

[Lam and Bacchus, 1994] Wai Lam and Fahiem Bacchus. Using new data to refine a Bayesian network. In Proc. of UAI, pages 383–390, 1994.

[Lee and van Beek, 2017] Colin Lee and Peter van Beek. An experimental analysis of anytime algorithms for Bayesian network structure learning. In Advanced Methodologies for Bayesian Networks, pages 69–80, 2017.

[Marques-Silva and Mencía, 2020] João Marques-Silva and Carlos Mencía. Reasoning about inconsistent formulas. In Christian Bessiere, editor, Proc. of IJCAI-2020, pages 4899–4906, 2020.

[Morgado et al., 2013] António Morgado, Federico Heras, Mark H. Liffiton, Jordi Planes, and João Marques-Silva. Iterative and core-guided MaxSAT solving: A survey and assessment. Constraints: An Int. J., 18(4):478–534, 2013.

[Papadimitriou and Steiglitz, 1998] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Courier Corporation, 1998.

[Rossi et al., 2006] Francesca Rossi, Peter van Beek, and Toby Walsh. Handbook of Constraint Programming. Elsevier, 2006.

[Scanagatta et al., 2015] Mauro Scanagatta, Cassio P. de Campos, Giorgio Corani, and Marco Zaffalon. Learning Bayesian networks with thousands of variables. Proc. of NeurIPS, 28:1864–1872, 2015.

[Schwarz, 1978] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.

[Silander and Myllymäki, 2006] Tomi Silander and Petri Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In Proc. of UAI'06, Cambridge, MA, USA, 2006.

[van Beek and Hoffmann, 2015] Peter van Beek and Hella-Franziska Hoffmann. Machine learning of Bayesian networks using constraint programming. In Proc. of the International Conference on Principles and Practice of Constraint Programming, pages 429–445, Cork, Ireland, 2015.

[Walsh, 2000] Toby Walsh. SAT vs CSP. In Proc. of the Sixth International Conference on Principles and Practice of Constraint Programming, pages 441–456, 2000.

[Yuan and Malone, 2013] Changhe Yuan and Brandon Malone. Learning optimal Bayesian networks: A shortest path perspective. J. of Artificial Intelligence Research, 48:23–65, 2013.