arXiv:2105.12205v1 [cs.AI] 25 May 2021

A New Score for Adaptive Tests in Bayesian and Credal Networks

Alessandro Antonucci, Francesca Mangili, Claudio Bonesana, and Giorgia Adorni

Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Lugano, Switzerland
{alessandro,francesca,claudio.bonesana,giorgia.adorni}@idsia.ch
Abstract. A test is adaptive when its sequence and number of questions are dynamically tuned on the basis of the estimated skills of the taker. Graphical models, such as Bayesian networks, are used for adaptive tests as they allow the uncertainty about the questions and the skills to be modelled in an explainable fashion, especially when coping with multiple skills. A better elicitation of the uncertainty in the question/skills relations can be achieved by interval probabilities. This turns the model into a credal network, thus increasing the inferential complexity of the queries required to select questions. This is especially the case for the information-theoretic quantities used as scores to drive the adaptive mechanism. We present an alternative family of scores, based on the mode of the posterior probabilities, and hence easier to explain. This makes the evaluation considerably simpler in the credal case, without significantly affecting the quality of the adaptive process. Numerical tests on synthetic and real-world data are used to support this claim.
Keywords: computer adaptive tests · information theory · credal networks · Bayesian networks · index of qualitative variation
1 Introduction
A test or an exam can be naturally intended as a measurement process, with the questions acting as sensors measuring the skills of the test taker in a particular discipline. Such a measurement is typically imperfect, with the skills modelled as latent variables whose actual values cannot be revealed in a perfectly reliable way. The role of the questions, whose answers are regarded instead as manifest variables, is to reduce the uncertainty about the latent skills. Following this perspective, probabilistic models are an obvious framework to describe tests. Consider for instance the example in Figure 1, where a Bayesian network evaluates the probability that the test taker knows how to multiply integers. In such a framework, making the test adaptive, i.e., picking the next question on the basis of the current knowledge level of the test taker, is also very natural. The information gain for the available questions might be used to select the question leading to the more informative results (e.g., according to Table 1, Q1 is more informative than Q2 no matter what the answer is). This might also be done before the answer on the basis of expectations over the possible alternatives.
A critical point when coping with such approaches is to provide a realistic assessment of the probabilistic parameters associated with the modelling of the relations between the questions and the skills. Having to provide sharp numerical values for these probabilities might be difficult. As the skill is a latent quantity, complete data are not available for statistical learning, and a direct elicitation should typically be demanded of experts (e.g., a teacher). Yet, it might not be obvious to express such domain knowledge by single numbers, and a more robust elicitation, such as a probability interval (e.g., P(Q1 = 1|S = 1) ∈ [0.85, 0.95]), might add realism and robustness to the modelling process [13]. With such generalized assessments of the parameters, a Bayesian network simply becomes a credal network [20]. The counterpart of such increased realism is the higher computational complexity characterizing inference in credal networks [19]. This is an issue especially when coping with information-theoretic measures such as the information gain, whose computation in credal networks might lead to complex non-linear optimization tasks [17].
The goal of this paper is to investigate the potential of alternatives to the information-theoretic scores driving the question selection in adaptive tests based on directed graphical models, no matter whether these are Bayesian or credal networks. In particular, we consider a family of scores based on the (expected) mode of the posterior distributions over the skills. We show that, when coping with credal networks, the computation of these scores can be reduced to a simple sequence of linear programming tasks. Moreover, we show that these scores benefit from better explainability properties, thus allowing for a more transparent question selection process.
Fig. 1. A Bayesian network over Boolean variables modelling a simple test to evaluate integer multiplication skill with two questions: a skill node "Knows multiplication" (S) with children "10×5?" (Q1) and "13×14?" (Q2), quantified by P(Q1 = 1|S = 1) = 0.9, P(Q1 = 1|S = 0) = 0.3, P(Q2 = 1|S = 1) = 0.6, P(Q2 = 1|S = 0) = 0.4.
The paper is organized as follows. A critical discussion of the existing work in this area is in Section 2. The necessary background material is reviewed in Section 3. The adaptive testing concepts are introduced in Section 4 and specialized to graphical models in Section 5. The technical part of the paper is in Section 6, where the new scores are discussed and specialized to the credal case, while the experiments are in Section 7. Conclusions and outlooks are in Section 8.
Table 1. Posterior probabilities of the skill after one or two questions in the test based on the Bayesian network in Figure 1. A uniform prior over the skill is considered. Probabilities are regarded as grades and sorted from the lowest one. Bounds obtained with a perturbation ε = ±0.05 of all the input parameters are also reported.

Q1  Q2  P(S=1|q1,q2)  lower bound  upper bound
0   0   0.087         0.028        0.187
0   −   0.125         0.052        0.220
0   1   0.176         0.092        0.256
−   0   0.400         0.306        0.506
−   1   0.600         0.599        0.603
1   0   0.667         0.626        0.708
1   −   0.750         0.748        0.757
1   1   0.818         0.784        0.852
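To make the numbers in Table 1 easy to verify, here is a minimal sketch (plain Python, our own illustration, not the paper's code) reproducing the central column by Bayes' rule from the Figure 1 parameters. The two bound columns would follow by optimizing the same expression over the ±0.05 perturbation of the parameters.

# Posterior skill probability for the Figure 1 network (illustrative sketch).
PRIOR = {1: 0.5, 0: 0.5}                       # uniform prior P(S)
CPT = {                                        # P(Q = 1 | S) for both questions
    "Q1": {1: 0.9, 0: 0.3},
    "Q2": {1: 0.6, 0: 0.4},
}

def posterior_skill(answers):
    """P(S = 1 | answers), with answers like {"Q1": 1} or {"Q1": 0, "Q2": 1}."""
    joint = {}
    for s, p_s in PRIOR.items():
        p = p_s
        for q, a in answers.items():
            p_right = CPT[q][s]                # P(Q = 1 | S = s)
            p *= p_right if a == 1 else 1.0 - p_right
        joint[s] = p                           # P(S = s, answers)
    return joint[1] / (joint[0] + joint[1])    # normalize by P(answers)

print(posterior_skill({"Q1": 1}))              # 0.750, row (1, -) of Table 1
print(posterior_skill({"Q1": 1, "Q2": 1}))     # 0.818, row (1, 1)
print(posterior_skill({"Q1": 0, "Q2": 0}))     # 0.087, row (0, 0)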
2 Related Work
Modelling a test as a process relating latent and manifest variables dates back to classical item response theory (IRT), which has been widely used even to implement adaptive sequences [12]. Despite its success, related to its ease of implementation and inference, IRT might be inadequate when coping with multiple latent skills, especially when these are dependent. This moved researchers towards the area of probabilistic graphical models [15], as practical tools to implement IRT in more complex setups [2]. Bayesian networks have been eventually identified as a suitable formalism to model tests, even beyond the IRT framework [23], this being especially the case for adaptive models [24] and coached solving [10]. In order to cope with latent skills, some authors successfully adopted EM approaches to these models [21], this also involving the extreme situation of no ground-truth information about the answers [5]. As an alternative approach to the same issue, some authors considered relaxations of the Bayesian formalism, such as fuzzy models [6] and imprecise probabilities [17]. The latter is the direction we consider here, but trying to overcome the computational limitations of that approach when coping with information-theoretic scores. This has some analogy with the approach in [9], which is focused on the Bayesian case only, but whose score, based on the same-decision problem, appears hard to extend to the imprecise framework without affecting the computational complexity.
3 Background on Bayesian and Credal Networks
We denote variables by Latin uppercase letters, while using lowercase for their generic values, and calligraphic fonts for the sets of their possible values. Thus, v ∈ V is a possible value of V. Here we only consider discrete variables.1

1 IRT works instead with continuous skills. Yet, when coping with probabilistic models, having discrete skills does not prevent evaluations from ranging over a continuous domain. E.g., see Table 1, where the grade corresponds to a (continuous) probability.
3.1 Bayesian Networks
A probability mass function (PMF) over V is denoted as P(V), while P(v) is the probability assigned to state v. Given a function f of V, its expectation with respect to P(V) is E_P(f) := Σ_{v∈V} P(v) f(v). The expectation of −log_b[P(V)] is called entropy and denoted as H(V).2 We set b := |V| so that the maximum of the entropy, achieved for uniform PMFs, equals one.

Given a joint PMF P(U, V), the marginal PMF P(V) is obtained by summing out the other variable, i.e., P(v) = Σ_{u∈U} P(u, v). Conditional PMFs such as P(U|v) are similarly obtained by Bayes' rule, i.e., P(u|v) = P(u, v)/P(v), provided that P(v) > 0. Notation P(U|V) := {P(U|v)}_{v∈V} is used for such a conditional probability table (CPT). The entropy of a conditional PMF is defined as in the unconditional case and denoted as H(U|v). The conditional entropy is a weighted average of the entropies of the conditional PMFs, i.e., H(U|V) := Σ_{v∈V} H(U|v) P(v). If P(u, v) = P(u)P(v) for each u ∈ U and v ∈ V, variables U and V are independent. Conditional formulations are also considered.

We assume the set of variables V := (V_1, ..., V_r) to be in one-to-one correspondence with a directed acyclic graph G. For each V ∈ V, the parents of V, i.e., the predecessors of V in G, are denoted as Pa_V. Graph G together with the collection of CPTs {P(V|Pa_V)}_{V∈V} provides a Bayesian network (BN) specification [15]. Under the Markov condition, i.e., every variable is conditionally independent of its non-parent non-descendants given its parents, a BN compactly defines a joint PMF P(V) that factorizes as P(v) = Π_{V∈V} P(v|pa_V). Inference, intended as the computation of the posterior PMF of a single (queried) variable given some evidence about other variables, is in general NP-hard, but exact and approximate schemes are available (see [15] for details).
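As a concrete reading of these definitions, here is a small self-contained sketch (our own helper functions, not from the paper) of the entropy with base b = |V|, which scores one on uniform PMFs, and of the conditional entropy H(U|V) as the weighted average of the conditional entropies.

import math

def entropy(pmf):
    """Entropy of a PMF given as a list of probabilities, log base b = |V|."""
    b = len(pmf)
    return -sum(p * math.log(p, b) for p in pmf if p > 0)  # 0 log 0 := 0

def conditional_entropy(cpt, marginal):
    """H(U|V) = sum_v H(U|v) P(v), with cpt[v] the PMF P(U|v)."""
    return sum(entropy(cpt[v]) * marginal[v] for v in range(len(marginal)))

print(entropy([0.5, 0.5]))    # 1.0: uniform PMF
print(entropy([1.0, 0.0]))    # 0.0: degenerate PMF
print(conditional_entropy([[0.9, 0.1], [0.3, 0.7]], [0.5, 0.5]))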
3.2 Credal Sets and Credal Networks
A set of PMFs over V is denoted as K(V) and called a credal set (CS). Expectations based on CSs are the bounds of the PMF expectations with respect to the CS. Thus, the lower expectation is \underline{E}[f] := inf_{P(V)∈K(V)} E_P[f], and similarly for the supremum defining the upper expectation \overline{E}[f]. Expectations of events are in particular called lower and upper probabilities and denoted as \underline{P} and \overline{P}. Notation K(U|v) is used for a conditional CS, while K(U|V) := {K(U|v)}_{v∈V} is a credal CPT (CCPT).

Analogously to a BN, a credal network (CN) is specified by a graph G together with a family of CCPTs {K(V|Pa_V)}_{V∈V} [11]. A CN defines a joint CS K(V) corresponding to all the joint PMFs induced by BNs whose CPTs are consistent with the CN CCPTs. For CNs, we intend inference as the computation of the lower and upper posterior probabilities. The task generalizes BN inference, being therefore NP-hard; see [19] for a deeper characterization. Yet, exact and approximate schemes are also available to practically compute inferences [4].

2 We set 0 · log_b 0 = 0 to cope with zero probabilities.
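To illustrate what the bounds \underline{E}[f] and \overline{E}[f] amount to for the interval-probability CSs used later, here is a small sketch of ours (not the paper's code) using scipy: the two bounds are a pair of linear programs over the portion of the simplex cut by the intervals. All names are hypothetical.

import numpy as np
from scipy.optimize import linprog

def credal_expectations(lower, upper, f):
    """Return (lower, upper) expectation of f over an interval credal set."""
    n = len(f)
    a_eq, b_eq = np.ones((1, n)), np.array([1.0])   # normalization sum_v P(v) = 1
    bounds = list(zip(lower, upper))                # interval constraints on P(v)
    lo = linprog(c=np.array(f), A_eq=a_eq, b_eq=b_eq, bounds=bounds)
    up = linprog(c=-np.array(f), A_eq=a_eq, b_eq=b_eq, bounds=bounds)
    return lo.fun, -up.fun

# The indicator of an event gives its lower and upper probabilities.
print(credal_expectations([0.85, 0.05], [0.95, 0.15], [1.0, 0.0]))  # (0.85, 0.95)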
4 Testing Algorithms
A typical test aims at evaluating the knowledge level of a test taker σ on the basis of her answers to a number of questions. Let Q denote a repository of questions available to the instructor. The order and the number of questions picked from Q to be asked to σ might not be defined in advance. We call testing algorithm (TA) a procedure taking care of the selection of the sequence of questions asked to the test taker, and of deciding when the test stops. Algorithm 1 depicts a general TA scheme, with e denoting the array of the answers collected from test taker σ.
Algorithm 1 General TA: given the profile σ and repository Q, an evaluation based on answers e is returned.

1: e ← ∅
2: while not Stopping(e) do
3:   Q* ← Pick(Q, e)
4:   q* ← Answer(Q*, σ)
5:   e ← e ∪ {Q* = q*}
6:   Q ← Q \ {Q*}
7: end while
8: return Evaluate(e)
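In code, the scheme reads as the loop below, a minimal Python paraphrase of Algorithm 1 of ours with pluggable Stopping, Pick, Answer and Evaluate functions; all the concrete choices shown are placeholder examples, not the paper's.

import random

def run_test(repository, stopping, pick, answer, evaluate):
    evidence = {}                              # e <- empty set of answers
    pool = set(repository)                     # questions still available
    while pool and not stopping(evidence):
        q = pick(pool, evidence)               # Q* <- Pick(Q, e)
        evidence[q] = answer(q)                # e <- e U {Q* = q*}
        pool.remove(q)                         # Q <- Q \ {Q*}
    return evaluate(evidence)

# Trivial non-adaptive instance: random order, stop after three questions,
# grade as the fraction of correct answers.
grade = run_test(
    repository=["Q1", "Q2", "Q3", "Q4"],
    stopping=lambda e: len(e) >= 3,
    pick=lambda pool, e: random.choice(sorted(pool)),
    answer=lambda q: random.random() < 0.7,    # simulated test taker
    evaluate=lambda e: sum(e.values()) / len(e),
)
print(grade)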
Boolean function Stopping decides whether the test should end, this choice being possibly based on the previous answers in e. Trivial stopping rules might be based on the number of questions asked to the test taker (Stopping(e) = 1 if and only if |e| > n) or on the number of correct answers, provided that a maximum number of questions is not exceeded. Function Pick selects instead the question to be asked to the student from the repository Q. A TA is called adaptive when this function takes into account the previous answers e. Trivial non-adaptive strategies might consist in randomly picking an element of Q or following a fixed order. Function Answer simply collects (or simulates) the answer of test taker σ to a particular question Q. In our assumptions, this answer is not affected by the previous answers to other questions.3

Finally, Evaluate is a function returning the overall judgement of the test (e.g., a numerical grade or a pass/fail Boolean) on the basis of all the answers collected after the test termination. Trivial examples of such functions are the percentage of correct answers or a Boolean that is true when a sufficient number of correct answers has been provided. Note also that in our assumptions the TA is exchangeable, i.e., the stopping rule, the question finder and the evaluation function are invariant with respect to permutations in e [22]. In other words, the same next question, the same evaluation and the same stopping decision are produced for any two students who provided the same list of answers in two different orders.

3 Generalized setups where the quality of the student answer is affected by the previous answers will be discussed at the end of the paper. This might include a fatigue model negatively affecting the quality of the answers when many questions have already been answered, as well as the presence of revealing questions that might improve the quality of other answers [16].
A TA is supposed to achieve a reliable evaluation of taker σ from the answers e. As each answer is individually assumed to improve such quality, asking all the questions, no matter the order because of the exchangeability assumption, is an obvious choice. Yet, this might be impractical (e.g., because of time limitations) or just impose an unnecessary burden on the test taker. The goal of a good TA is therefore to trade off the evaluation accuracy and the number of questions.4
5 Adaptive Testing in Bayesian and Credal Networks
The general TA setup in Algorithm 1 can be easily specialized to BNs as follows. First, we identify the profile σ of the test taker with the actual states of a number of latent discrete variables, called skills. Let S = {S_i}_{i=1}^n denote these skill variables, and s_σ the actual values of the skills for the taker. Skills are typically ordinal variables, whose states correspond to increasing knowledge levels. Questions in Q are still described as manifest variables whose actual values are returned by the Answer function. This is achieved by a (possibly stochastic) function of the actual profile s_σ. This reflects the taker's perspective, while the teacher has clearly no access to s_σ. As a remark, note that we might often coarsen the set of possible values Q for each Q ∈ Q: for instance, a multiple-choice question with three options might have a single right answer, the two other answers being indistinguishable from the evaluation point of view.5

A joint PMF over the skills S and the questions Q is supposed to be available. In particular, we assume this to correspond to a BN whose graph has the questions as leaf nodes. Thus, for each Q ∈ Q, Pa_Q ⊆ S, and we call Pa_Q the scope of question Q. Note that this assumption about the graph simply reflects a statement of conditional independence between (the answer to) a question and all the other skills and questions given the scope of the question. This basically means that the answers to other questions do not directly affect the answer to a particular question, and this naturally follows from the exchangeability assumption.6
As the available data are typically incomplete because of the latent nature of the skills, dedicated learning strategies, such as various forms of constrained EM, should be considered to train a BN from data. We refer the reader to the various contributions of Plajner and Vomlel in this field (e.g., [21]) for a complete discussion of that approach. Here we assume the BN quantification available.

4 In some generalized setups, other elements, such as some serendipity in the choice in order to avoid tedious sequences of questions, might also be considered [7].
5 The case of abstention from an answer and the consequent problem of modelling the incompleteness is a topic we do not consider here for the sake of conciseness. Yet, general approaches based on the ideas in [18] could be easily adopted.
6 Moving to other setups would not be really critical because of the separation properties of observed nodes in Bayesian and credal networks, see for instance [3,8].
In such a BN framework, Stopping(e) might be naturally based on an evaluation of the posterior PMF P(S|e), this being also the case for Evaluate. Regarding the question selection, Pick might be similarly based on the (posterior) CPT P(S|Q, e), whose values for the different answers to Q might be weighted by the marginal P(Q|e). More specifically, entropies and conditional entropies are considered by Algorithm 2, while the evaluation is based on a conditional expectation for a given utility function.
Algorithm 2 Information-theoretic TA in a BN over the questions Q and the skills S: given the student profile s_σ, the algorithm returns an evaluation corresponding to the expectation of an evaluation function f with respect to the posterior for the skills given the answers e.

1: e ← ∅
2: while H(S|e) > H* do
3:   Q* ← argmax_{Q∈Q} [H(S|e) − H(S|Q, e)]
4:   q* ← Answer(Q*, s_σ)
5:   e ← e ∪ {Q* = q*}
6:   Q ← Q \ {Q*}
7: end while
8: return E_{P(S|e)}[f(S)]
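As an illustration, the following sketch of ours instantiates Algorithm 2 on the single-skill network of Figure 1; the threshold H* = 0.8 and the simulated (always correct) answers are arbitrary choices of ours, not from the paper.

import math

PRIOR = {1: 0.5, 0: 0.5}
CPT = {"Q1": {1: 0.9, 0: 0.3}, "Q2": {1: 0.6, 0: 0.4}}  # P(Q = 1 | S)

def posterior(evidence):
    """Return (P(S | evidence), P(evidence)) by Bayes' rule."""
    joint = {s: PRIOR[s] * math.prod(
        CPT[q][s] if a else 1 - CPT[q][s] for q, a in evidence.items())
        for s in (0, 1)}
    z = joint[0] + joint[1]
    return {s: joint[s] / z for s in (0, 1)}, z

def entropy2(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def expected_posterior_entropy(question, evidence):
    """H(S|Q,e) = sum_q P(q|e) H(S|q,e)."""
    h, (_, z_e) = 0.0, posterior(evidence)
    for a in (0, 1):
        post, z = posterior({**evidence, question: a})
        h += (z / z_e) * entropy2(post)        # z / z_e = P(Q = a | e)
    return h

evidence, pool = {}, {"Q1", "Q2"}
while pool and entropy2(posterior(evidence)[0]) > 0.8:   # H* = 0.8
    p_s, _ = posterior(evidence)
    best = max(pool, key=lambda q:
               entropy2(p_s) - expected_posterior_entropy(q, evidence))
    print("asking", best)                      # Q1 is asked first: larger gain
    evidence[best] = 1                         # simulate a correct answer
    pool.remove(best)
print(posterior(evidence)[0])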
When no data are available for the BN training, elicitation techniques should be considered instead. As already discussed, CNs might offer a better formalism to capture domain knowledge, especially by providing interval-valued probabilities instead of sharp values. If this is the case, a CN version of Algorithm 2 can be equivalently considered. Moving to CNs is almost the same, provided that bounds on the entropy are used instead for decisions. Yet, the price of such increased realism in the elicitation is the higher complexity characterizing inferences based on CNs. The work in [17] offers a critical discussion of those issues, which are only partially addressed by the heuristic techniques used there to approximate such bounds. In the next section we consider an alternative approach to cope with CNs and adaptive TAs, based on different scores used to select the questions.
6 Coping with the Mode
Following [25], we can regard the PMF entropy (and its conditional version) used by Algorithm 2 as an example of an index of qualitative variation (IQV). An IQV is just a normalized number that takes value zero on degenerate PMFs and one on uniform ones, being independent of the number of possible states (and samples, for empirical models). The closer to uniform the PMF, the higher the index, and vice versa.

In order to bypass the computational issues related to its application with CNs and the explainability limits with both BNs and CNs, we want to consider alternative IQVs to replace the entropy in Algorithm 2. Wilcox's deviation from the mode (DM) appears a sensible option. Given a PMF P(V), this corresponds to:

M(V) := 1 − (1/(|V| − 1)) Σ_{v∈V} [max_{v'∈V} P(v') − P(v)].   (1)
It is a trivial exercise to check that this is a proper IQV, with the same unimodal behaviour as the entropy. In terms of explainability, being a linear function of the modal probability, the numerical value of the DM offers a more transparent interpretation than the entropy. From a computational point of view, for both marginal and conditional PMFs, both the entropy and the DM can be directly obtained from the probabilities of the singletons.
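For concreteness, a minimal sketch (our helper functions, not the paper's code) of Equation (1) next to the normalized entropy; both indexes are 0 on degenerate and 1 on uniform PMFs, but they differ in between.

import math

def dm(pmf):
    """M(V) = 1 - sum_v (max_v' P(v') - P(v)) / (|V| - 1), Equation (1)."""
    mode = max(pmf)
    return 1.0 - sum(mode - p for p in pmf) / (len(pmf) - 1)

def norm_entropy(pmf):
    return -sum(p * math.log(p, len(pmf)) for p in pmf if p > 0)

for pmf in ([1.0, 0.0], [0.5, 0.5], [0.7, 0.2, 0.1]):
    print(pmf, round(dm(pmf), 3), round(norm_entropy(pmf), 3))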
The situation is different when computing the bounds of these quantities with respect to a CS. The bounds of M(V) are obtained from the upper and lower probabilities of the singletons by simple algebra, i.e.,

\overline{M}(V) := max_{P(V)∈K(V)} M(V) = [|V| − max_{v'∈V} \underline{P}(v')] / (|V| − 1),   (2)

and analogously with the upper probabilities for \underline{M}(V). Maximizing the entropy requires instead a non-trivial, but convex, optimization. See for instance [1] for an iterative procedure to find such a maximum when coping with CSs defined by probability intervals. The situation is even more critical for the minimization, which has been proved to be NP-hard in [26].
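These DM bounds can also be cross-checked numerically. The sketch below is our own construction, not the paper's derivation: since M is a decreasing function of the modal probability, its extremes over an interval CS follow from the extremes of max_v P(v), with the minimal modal probability obtained by a small linear program (minimize t subject to P(v) ≤ t and the interval constraints). It assumes the intervals are reachable (coherent).

import numpy as np
from scipy.optimize import linprog

def dm_from_mode(p_mode, k):
    """M as a function of the modal probability; algebraically Equation (1)."""
    return k * (1.0 - p_mode) / (k - 1)

def dm_bounds(lower, upper):
    k = len(lower)
    p_mode_up = max(upper)                      # largest reachable singleton mass
    # minimal modal probability: LP over (P(v_1),...,P(v_k), t), minimizing t
    c = np.zeros(k + 1)
    c[-1] = 1.0
    a_ub = np.hstack([np.eye(k), -np.ones((k, 1))])     # P(v) - t <= 0
    a_eq = np.hstack([np.ones((1, k)), np.zeros((1, 1))])
    bounds = list(zip(lower, upper)) + [(0.0, 1.0)]
    res = linprog(c, A_ub=a_ub, b_ub=np.zeros(k),
                  A_eq=a_eq, b_eq=[1.0], bounds=bounds)
    p_mode_low = res.x[-1]
    return dm_from_mode(p_mode_up, k), dm_from_mode(p_mode_low, k)

print(dm_bounds([0.85, 0.05], [0.95, 0.15]))    # (lower M, upper M) = (0.1, 0.3)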
The optimization becomes even more challenging for conditional entropies, these basically being mixtures of conditional entropies based on imprecise weights. Consequently, in [17], only inner approximations of the upper bound have been derived. The situation is different for conditional DMs. The following result offers a feasible approach in a simplified setup, to be later extended to the general case.
Theorem 1. Under the setup of Section 5, consider a CN with a single skill S and a single question Q, that is a child of S. Let K(S) and K(Q|S) be the CCPTs of such a CN. Let also Q = {q_1, ..., q_n} and S = {s_1, ..., s_m}. The upper conditional DM, i.e.,

\overline{M}(S|Q) := |S| − max_{P(S)∈K(S), P(Q|S)∈K(Q|S)} Σ_{i=1,...,n} [max_{j=1,...,m} P(s_j|q_i)] P(q_i),   (3)

whose normalizing denominator was omitted for the sake of brevity, is such that:

\overline{M}(S|Q) = m − max_{ĵ_i=1,...,m; i=1,...,n} Ω(ĵ_1, ..., ĵ_n),   (4)

where Ω(ĵ_1, ..., ĵ_n) is the solution of the following linear programming task:

max Σ_i x_{iĵ_i}
s.t. Σ_{ij} x_ij = 1   (5)
     x_ij ≥ 0   ∀i, j   (6)
     Σ_i x_ij ≥ \underline{P}(s_j)   ∀j   (7)
     Σ_i x_ij ≤ \overline{P}(s_j)   ∀j   (8)
     \underline{P}(q_i|s_j) Σ_k x_kj ≤ x_ij   ∀i, j   (9)
     \overline{P}(q_i|s_j) Σ_k x_kj ≥ x_ij   ∀i, j   (10)
     x_{iĵ_i} ≥ x_ij   ∀i, j   (11)

The ranges of the summation indexes and of the universal quantifiers (i, k = 1, ..., n and j = 1, ..., m) are omitted where clear from the context.
Proof. Equation (3) rewrites as:

\overline{M}(S|Q) = m − max_{P(S)∈K(S), P(Q|S)∈K(Q|S)} Σ_{i=1}^{n} [max_{j=1,...,m} P(s_j) P(q_i|s_j)].   (12)

Let us define the variables of such a constrained optimization task as:

x_ij := P(s_j) · P(q_i|s_j),   (13)

for each i = 1, ..., n and j = 1, ..., m. Let us show how the CCPT constraints can be easily reformulated with respect to such new variables by simply noticing that x_ij = P(s_j, q_i), and hence P(s_j) = Σ_i x_ij and P(q_i|s_j) = x_ij / (Σ_k x_kj). Consequently, the interval constraints on P(S) correspond to the linear constraints in Equations (7) and (8). Similarly, for P(Q|S), we obtain:

\underline{P}(q_i|s_j) ≤ x_ij / (Σ_k x_kj) ≤ \overline{P}(q_i|s_j),   (14)

which easily gives the linear constraints in Equations (9) and (10). The non-negativity of the probabilities corresponds to Equation (6), while Equation (5) gives the normalization of P(S), and the normalization of P(Q|S) holds by construction. Equation (12) rewrites therefore as:

\overline{M}(S|Q) = m − max_{{x_ij}∈Γ} Σ_i max_j x_ij,   (15)

where Γ denotes the linear constraints in Equations (5)-(10). If we set

ĵ_i := argmax_j x_ij,   (16)

Equation (15) rewrites as:

\overline{M}(S|Q) = m − max_{{x_ij}∈Γ'} Σ_i x_{iĵ_i},   (17)

where Γ' denotes the constraints in Γ with the additional (linear) constraints in Equation (11), which implement Equation (16).

The optimization on the right-hand side of Equation (17) is not a linear programming task, as the values of the indexes ĵ_i cannot be decided in advance, being potentially different for different assignments of the optimization variables consistent with the constraints in Γ. Yet, we might address such an optimization as a brute-force task with respect to all the possible assignments of the indexes ĵ_i. This is exactly what is done by Equation (4), where all the m^n possible assignments are considered. This proves the thesis. ⊓⊔
An analogous result, with linear programming tasks minimizing the same objective functions under exactly the same constraints, allows to compute \underline{M}(S|Q). The overall complexity is clearly O(m^n) with n := |Q|. This means quadratic complexity for any test where only the difference between a wrong and a right answer is considered from an elicitation perspective, and tractable computations provided that the number of possible answers to the same question we distinguish is bounded by a small constant. Coping with multiple answers becomes trivial by means of the results in [3], which allow merging multiple observed children into a single one. Finally, the case of multiple skills might be similarly considered by using the marginal bounds of the single skills in Equations (7) and (8).
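To make the reduction concrete, here is a sketch of Theorem 1 in Python (our own implementation on top of scipy, with all names hypothetical): one linear program per assignment (ĵ_1, ..., ĵ_n), as in Equation (4), with the constraints (5)-(11) written in the joint-mass variables x_ij = P(s_j, q_i). As in the theorem statement, the normalizing denominator is omitted.

from itertools import product
import numpy as np
from scipy.optimize import linprog

def upper_conditional_dm(s_low, s_up, q_low, q_up):
    """Equation (4); s_*[j] bound P(s_j), q_*[i][j] bound P(q_i | s_j)."""
    m, n = len(s_low), len(q_low)
    idx = lambda i, j: i * m + j                 # flatten x_ij into a vector
    a_eq = np.ones((1, n * m))                   # (5): sum_ij x_ij = 1
    best = -np.inf
    for jhat in product(range(m), repeat=n):     # the m**n index assignments
        a_ub, b_ub = [], []
        for j in range(m):                       # (7)-(8): bounds on P(s_j)
            row = np.zeros(n * m)
            row[[idx(i, j) for i in range(n)]] = 1.0
            a_ub += [-row, row]
            b_ub += [-s_low[j], s_up[j]]
        for i in range(n):
            for j in range(m):                   # (9)-(10): bounds on P(q_i|s_j)
                lo, up = np.zeros(n * m), np.zeros(n * m)
                for k in range(n):
                    lo[idx(k, j)] += q_low[i][j]     # q_low * sum_k x_kj ...
                    up[idx(k, j)] -= q_up[i][j]
                lo[idx(i, j)] -= 1.0                 # ... <= x_ij
                up[idx(i, j)] += 1.0                 # x_ij <= q_up * sum_k x_kj
                a_ub += [lo, up]
                b_ub += [0.0, 0.0]
                if j != jhat[i]:                 # (11): x_{i, jhat_i} >= x_ij
                    row = np.zeros(n * m)
                    row[idx(i, j)], row[idx(i, jhat[i])] = 1.0, -1.0
                    a_ub.append(row)
                    b_ub.append(0.0)
        c = np.zeros(n * m)                      # objective: sum_i x_{i, jhat_i}
        for i in range(n):
            c[idx(i, jhat[i])] = -1.0            # linprog minimizes, so negate
        res = linprog(c, A_ub=np.array(a_ub), b_ub=b_ub, A_eq=a_eq,
                      b_eq=[1.0], bounds=[(0.0, 1.0)] * (n * m))
        if res.success:
            best = max(best, -res.fun)           # Omega(jhat_1, ..., jhat_n)
    return m - best                              # Equation (4)

# Figure 1's Q1 with the +-0.05 perturbation: S binary (m = 2), Q binary (n = 2).
print(upper_conditional_dm(
    s_low=[0.45, 0.45], s_up=[0.55, 0.55],
    q_low=[[0.65, 0.05], [0.25, 0.85]],          # rows: Q=0, Q=1; cols: S=0, S=1
    q_up=[[0.75, 0.15], [0.35, 0.95]]))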
7 Experiments
In this section we validate the ideas outlined in the previous section, in order to check whether or not the DM can be used for TAs as a sensible alternative to information-theoretic scores such as the entropy. In the BN context, this is simply achieved by computing the necessary updated probabilities, while Theorem 1 is used instead for CNs.
7.1 Single-Skill Experiments on Synthetic Data
For a very first validation of our approach, we consider a simple setup made of a single Boolean skill S and a repository with 18 Boolean questions based on nine different parametrizations (two questions per parametrization). In such a BN, the CPT of a question can be parametrized by two numbers. E.g., in the example in Figure 1, we used the probabilities of correctly answering the question given that the skill is present or not, i.e., P(Q = 1|S = 1) and P(Q = 1|S = 0). A more interpretable parametrization can be obtained as follows:

δ := 1 − (1/2) [P(Q = 1|S = 1) + P(Q = 1|S = 0)],   (18)
κ := P(Q = 1|S = 1) − P(Q = 1|S = 0).   (19)

Note that P(Q = 1|S = 1) > P(Q = 1|S = 0) is an obvious rationality constraint for questions: otherwise, having the skill would make a proper answer to the question less likely. Both parameters are therefore non-negative. Parameter δ, corresponding to the (arithmetic) average of the probability of a wrong answer over the different skill values, can be regarded as a normalized index of the question difficulty. E.g., in Figure 1, Q1 (δ = 0.4) is less difficult than Q2 (δ = 0.5). Parameter κ can instead be regarded as a descriptor of the difference between the conditional PMFs associated with the different skill values. In the most extreme case κ = 1, the CPT P(Q|S) is diagonal, implementing an identity mapping between the skill and the question. We therefore regard κ as an indicator of the discriminative power of the question. In our tests, for the BN quantification, we consider the nine possible parametrizations corresponding to (δ, κ) ∈ {0.4, 0.5, 0.6}². For P(S) we use instead a uniform quantification. For the CN approach we perturb all the BN parameters with ε = ±0.05, thus obtaining a CN quantification. A group of 1024 simulated students, half of them having S = 0 and half S = 1, is used for simulations. The student answers are sampled from the CPT of the asked question on the basis of the student profile.
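A two-line helper (ours, purely illustrative) makes the mapping between the CPT entries and the (δ, κ) pair of Equations (18)-(19) explicit; inverting the two equations gives P(Q = 1|S = 1) = 1 − δ + κ/2 and P(Q = 1|S = 0) = 1 − δ − κ/2.

def to_delta_kappa(p_right_skilled, p_right_unskilled):
    delta = 1.0 - 0.5 * (p_right_skilled + p_right_unskilled)  # Equation (18)
    kappa = p_right_skilled - p_right_unskilled                # Equation (19)
    return delta, kappa

def from_delta_kappa(delta, kappa):
    # Inverse mapping back to the two CPT entries of a Boolean question.
    return 1.0 - delta + kappa / 2.0, 1.0 - delta - kappa / 2.0

print(to_delta_kappa(0.9, 0.3))    # Q1 of Figure 1: delta = 0.4, kappa = 0.6
print(from_delta_kappa(0.5, 0.2))  # Q2 of Figure 1: (0.6, 0.4)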
Figure 2 (left) depicts the accuracy of the BN and CN approaches based on both the entropy and the DM scores. For credal models, decisions are based on the mid-point between the lower and the upper probability, while lower entropies and conditional entropies are used. We notably see all the adaptive approaches outperforming a non-adaptive, random choice of the questions. To better investigate the strong overlap between these trajectories, in Figure 2 (right) we compute the Brier score, and we might observe the strong similarity between the DM and entropy approaches in both the Bayesian and the credal case, with the credal approaches slightly outperforming the Bayesian ones.
7.2 Multi-Skill Experiments on Real Data
For a validation on real data, we consider an online German language placement test (see also [17]). Four different Boolean skills associated with different abilities (vocabulary, communication, listening and reading) are considered and modelled by a chain-shaped graph, for which BN and CN quantifications are already available. A repository of 64 Boolean questions, 16 for each skill, with four different levels of difficulty and discriminative power, has been used.

Experiments have been achieved by means of the CREMA library for credal networks [14].7 The Java code used for the simulations is available together with the Python scripts used to analyze the results and the model specifications.8

7 github.com/IDSIA/crema
8 github.com/IDSIA/adaptive-tests
Fig. 2. Accuracy (left) and Brier distance (right) of TAs for a single-skill BN/CN, as functions of the number of questions. Curves: Credal Entropy, Credal Mode, Random, Bayesian Entropy, Bayesian Mode.
Performances are evaluated as for the previous model, the only difference being that here the accuracy is aggregated by averaging over the separate accuracies for the four skills. The observed behaviour, depicted in Figure 3, is analogous to that of the single-skill case: entropy-based and mode-based scores provide similar results, with the credal approach typically leading to more accurate evaluations (or evaluations of the same quality with fewer questions).
Fig. 3. Aggregated accuracy for a multi-skill TA, as a function of the number of questions. Curves: Credal Entropy, Credal Mode, Random, Bayesian Entropy, Bayesian Mode.
8 Outlooks and Conclusions
A new score for adaptive testing in Bayesian and credal networks has been proposed. Our proposal is based on indexes of qualitative variation, being in particular focused on the modal probability for its explainability features. An algorithm to evaluate this quantity in the credal case has been derived. Our experiments show that moving to these scores does not really affect the quality of the selection process. Besides a deeper experimental validation, a necessary future work consists in the derivation of simpler elicitation strategies for these models in order to promote their application to real-world testing environments.
References

1. Abellan, J., Moral, S.: Maximum of entropy for credal sets. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 11(05), 587–597 (2003)
2. Almond, R.G., Mislevy, R.J.: Graphical models and computerized adaptive testing. Applied Psychological Measurement 23(3), 223–237 (1999)
3. Antonucci, A., Piatti, A.: Modeling unreliable observations in Bayesian networks by credal networks. In: Godo, L., Pugliese, A. (eds.) Scalable Uncertainty Management, Third International Conference, SUM 2009. Proceedings. Lecture Notes in Computer Science, vol. 5785, pp. 28–39. Springer (2009)
4. Antonucci, A., de Campos, C.P., Huber, D., Zaffalon, M.: Approximating credal network inferences by linear programming. In: van der Gaag, L.C. (ed.) Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty. Lecture Notes in Artificial Intelligence, vol. 7958, pp. 13–25. Springer, Utrecht, The Netherlands (2013)
5. Bachrach, Y., Graepel, T., Minka, T., Guiver, J.: How to grade a test without knowing the answers—a Bayesian graphical model for adaptive crowdsourcing and aptitude testing. arXiv preprint arXiv:1206.6386 (2012)
6. Badaracco, M., Martínez, L.: A fuzzy linguistic algorithm for adaptive test in intelligent tutoring system based on competences. Expert Systems with Applications 40(8), 3073–3086 (2013)
7. Badran, M.E.K., Abdo, J.B., Al Jurdi, W., Demerjian, J.: Adaptive serendipity for recommender systems: Let it find you. In: ICAART (2). pp. 739–745 (2019)
8. Bolt, J.H., De Bock, J., Renooij, S.: Exploiting Bayesian network sensitivity functions for inference in credal networks. In: Proceedings of the Twenty-Second European Conference on Artificial Intelligence (ECAI). vol. 285, pp. 646–654. IOS Press (2016)
9. Chen, S.J., Choi, A., Darwiche, A.: Computer adaptive testing using the same-decision probability. In: BMA@UAI. pp. 34–43 (2015)
10. Conati, C., Gertner, A.S., VanLehn, K., Druzdzel, M.J.: On-line student modeling for coached problem solving using Bayesian networks. In: User Modeling. pp. 231–242. Springer (1997)
11. Cozman, F.G.: Credal networks. Artificial Intelligence 120(2), 199–233 (2000)
12. Embretson, S.E., Reise, S.P.: Item response theory. Psychology Press (2013)
13. Hájek, A., Smithson, M.: Rationality and indeterminate probabilities. Synthese 187(1), 33–48 (2012)
14. Huber, D., Cabañas, R., Antonucci, A., Zaffalon, M.: CREMA: a Java library for credal network inference. In: Jaeger, M., Nielsen, T. (eds.) Proceedings of the 10th International Conference on Probabilistic Graphical Models (PGM 2020). Proceedings of Machine Learning Research, PMLR, Aalborg, Denmark (2020)
15. Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press (2009)
16. Laitusis, C.C., Morgan, D.L., Bridgeman, B., Zanna, J., Stone, E.: Examination of fatigue effects from extended-time accommodations on the SAT Reasoning Test. ETS Research Report Series 2007(2), i–13 (2007)
17. Mangili, F., Bonesana, C., Antonucci, A.: Reliable knowledge-based adaptive tests by credal networks. In: Antonucci, A., Cholvy, L., Papini, O. (eds.) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2017. Lecture Notes in Computer Science, vol. 10369, pp. 282–291. Springer, Cham (2017)
18. Marchetti, S., Antonucci, A.: Reliable uncertain evidence modeling in Bayesian networks by credal networks. In: Brawner, K.W., Rus, V. (eds.) Proceedings of the Thirty-First International Florida Artificial Intelligence Research Society Conference (FLAIRS-31). pp. 513–518. AAAI Press, Melbourne, Florida, USA (2018)
19. Mauá, D.D., De Campos, C.P., Benavoli, A., Antonucci, A.: Probabilistic inference in credal networks: new complexity results. Journal of Artificial Intelligence Research 50, 603–637 (2014)
20. Piatti, A., Antonucci, A., Zaffalon, M.: Building knowledge-based expert systems by credal networks: a tutorial. In: Baswell, A. (ed.) Advances in Mathematics Research, vol. 11, chap. 2. Nova Science Publishers, New York (2010)
21. Plajner, M., Vomlel, J.: Monotonicity in practice of adaptive testing. arXiv preprint arXiv:2009.06981 (2020)
22. Sawatzky, R., Ratner, P.A., Kopec, J.A., Wu, A.D., Zumbo, B.D.: The accuracy of computerized adaptive testing in heterogeneous populations: A mixture item-response theory analysis. PLoS ONE 11(3), e0150563 (2016)
23. Vomlel, J.: Bayesian networks in educational testing. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12(supp01), 83–100 (2004)
24. Vomlel, J.: Building adaptive tests using Bayesian networks. Kybernetika 40(3), 333–348 (2004)
25. Wilcox, A.R.: Indices of qualitative variation and political measurement. Western Political Quarterly 26(2), 325–343 (1973)
26. Xiang, G., Kosheleva, O., Klir, G.J.: Estimating information amount under interval uncertainty: algorithmic solvability and computational complexity. Tech. Rep. 158, Departmental Technical Reports (CS) (2006)