People construct simplified mental representations to plan
Mark K. Ho1,2,*, David Abel3,+, Carlos G. Correa4, Michael L. Littman3, Jonathan D. Cohen1,4, and Thomas L. Griffiths1,2
1Princeton University, Department of Psychology, Princeton, NJ, USA; 2Princeton University, Department of Computer Science, Princeton, NJ, USA; 3Brown University, Department of Computer Science, Providence, RI, USA; +Now at DeepMind, London, United Kingdom; 4Princeton University, Princeton Neuroscience Institute, Princeton, NJ, USA; *Corresponding author: Mark K Ho, [email protected]
One of the most striking features of human cognition is the capacity to plan. Two aspects
of human planning stand out: its efficiency and flexibility. Efficiency is especially impres-
sive because plans must often be made in complex environments, and yet people successfully
plan solutions to myriad everyday problems despite having limited cognitive resources1–3.
Standard accounts in psychology, economics, and artificial intelligence have suggested hu-
man planning succeeds because people have a complete representation of a task and then use
heuristics to plan future actions in that representation4–11. However, this approach generally assumes that task representations are fixed. Here, we propose that task representations
can be controlled and that such control provides opportunities to quickly simplify problems
and more easily reason about them. We propose a computational account of this simplifica-
tion process and, in a series of pre-registered behavioral experiments, show that it is subject
to online cognitive control12–14 and that people optimally balance the complexity of a task
representation and its utility for planning and acting. These results demonstrate how strate-
gically perceiving and conceiving problems facilitates the effective use of limited cognitive
resources.
arXiv:2105.06948v2 [cs.AI] 26 Nov 2022
In the short story “On Exactitude in Science,” Jorge Luis Borges describes cartographers who
seek to create the perfect map, one that includes every possible detail of the country it represents.
However, this innocent premise leads to an absurd conclusion: The fully detailed map of the
country must be the size of the country itself, which makes it impractical for anyone to use. Borges’
allegory illustrates an important computational principle. Namely, useful representations do not
simply mirror every aspect of the world, but rather pick out a manageable subset of details that
are relevant to some purpose (Figure 1a). Here, we examine the consequences of this principle for
how humans flexibly construct simplified task representations to plan.
Classic theories of problem solving distinguish between representing a task and computing a plan4,15,16. For instance, Newell and Simon17 introduced heuristic search, in which a decision-
maker has a full representation of a task (e.g., a chess board, chess pieces, and the rules of chess),
and then computes a plan by simulating and evaluating possible action sequences (e.g., sequences
of chess moves) to find one that is likely to achieve a goal (e.g., checkmate the king). In artificial
intelligence, the main approach to making heuristic search tractable involves limiting the com-
putation of action sequences (e.g., only thinking a few moves into the future, or only examining
moves that seem promising)5. Similarly, psychological research on planning largely focuses on
how limiting, prioritizing, pruning, or chunking action sequences can reduce computation6–11,18–20.
However, people are not necessarily restricted to a single, full, or fixed representation for a
task. This matters since simpler representations can make better use of limited cognitive resources
when they are tailored to specific parts or versions of a task. For example, in chess, considering the
interaction of a few pieces, or focusing on part of the board, is easier than reasoning about every
piece and part of the board. Furthermore, it affords the opportunity to adapt the representation,
tailoring it to the specific needs of the circumstance—a process that we refer to as controlling a task construal. Although studies show that people can flexibly form representations to guide action
(e.g., forming the ad hoc category of “things to buy for a party” when organizing a social gath-
ering21), a long-standing challenge for cognitive science and artificial intelligence is explaining,
predicting, and deriving such representations from general computational principles22,23.
[Figure 1 schematic panels: b, Task → Decision-Maker (Plan) → Action; c, Task → Decision-Maker (Construal → Plan) → Action.]
Figure 1. Construal and planning. a, A satellite photo of Princeton, NJ (top) and maps of Princeton for bicycling versus automotive use cases (bottom). Like maps and unlike photographs, a decision-maker's construal picks out a manageable subset of details from the world relevant to their current goals. Imagery ©2022 Google, Map data ©2022. b, Standard models assume that a decision-maker computes a plan, π, with respect to a fixed task representation, T, and then uses it to guide their actions, a. c, According to our model of value-guided construal, the decision-maker forms a simplified task construal, T_c, that is used to compute a plan, π_c. This process can be understood as two nested optimizations: an “outer loop” of construal and an “inner loop” of planning.
Our approach to studying how people control task construals starts with the premise that ef-
fective decision-making depends on making rational use of limited cognitive resources1–3. Specif-
ically, we derive how an ideal, cognitively-limited decision-maker should form value-guided construals that balance the complexity of a representation and its utility for planning and acting. We
then show that pre-registered predictions of this account explain how people attend to task elements
in several planning experiments (see Data Availability Statement). Our analysis and findings sug-
gest that controlled, moment-to-moment task construals play a key role in efficient and flexible
planning.
Task construals from first principles
We build on models of sequential decision-making expressed as Markov Decision Processes24.
Formally, a task $T$ consists of a state space, $S$; an initial state, $s_0 \in S$; an action space, $A$; a transition function $P : S \times A \times S \to [0, 1]$; and a utility function $U : S \to \mathbb{R}$. In standard formulations of planning, the value of a plan $\pi : S \times A \to [0, 1]$ from a state $s$ is determined by the expected, cumulative utility of using that plan25: $V_\pi(s) = U(s) + \sum_a \pi(a \mid s) \sum_{s'} P(s' \mid s, a) V_\pi(s')$. Standard planning algorithms5 (e.g., heuristic search methods) attempt to efficiently
compute plans that optimize value by directly planning over a fixed task representation, T, that
is not subject to the decision-maker’s control (Figure 1b). Our aim is to relax this constraint and
consider the process of adaptively selecting simplified task representations for planning, which we
call the construal process (Figure 1c).
Intuitively, a construal “picks out” details in a task to consider. Here, we examine construals
that pick out cause-effect relationships in a task. This focus is motivated by the intuition that a key
source of task complexity is the interaction of different causes and their effects with one another.
For instance, consider interacting with various objects in someone’s living room. Walking towards
the couch and hitting it is a cause-effect relationship, while pulling on the coffee table and moving
it might be another such relationship. These individual effects can interact and may or may not be
integrated into a single representation of moving around the living room. For example, imagine
pulling on the coffee table and causing it to move, but in doing so, backing into the couch and
hitting it. Whether or not a decision-maker anticipates and represents the interaction of multiple
effects depends on what causes and effects are incorporated into their construal; this, in turn, can
impact the outcome of behavior.
Related work has studied how attention guides learning about how different state features pre-
dict rewards26. By contrast, to model construals, we require a way to express how attention flexibly
combines different causes and their effects into an integrated model to use for planning. For this,
we use a product of experts27, a technique from the machine learning literature for combining dis-
tributions that is similar to factored approximations used in models of perception28. Specifically,
we assume that the agent has $N$ primitive cause-effect relationships that each assign probabilities to state, action, and next-state transitions, $\phi_i : S \times A \times S \to [0, 1]$, $i = 1, \ldots, N$. Each $\phi_i(s' \mid s, a)$ is a potential function representing, say, the local effect of colliding with the couch or pulling on the coffee table. Then a construal is a subset of these primitive cause-effect relationships, $c \subseteq \{1, \ldots, N\}$, that produces a task construal, $T_c$, with the following construed transition function:
$$P_c(s' \mid s, a) \propto \prod_{i \in c} \phi_i(s' \mid s, a). \quad (1)$$
Here, we assume that task construals ($T_c$) and the original task ($T$) share the same state space, action space, and utility function. But, crucially, the construed transition function can be simpler than that of the actual task.
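As an illustrative sketch of Equation 1 (assuming each primitive effect is encoded as a dense potential array over state, action, and next-state triples — an encoding choice for illustration, not the authors' msdm implementation), a construed transition function can be computed by multiplying the selected potentials and renormalizing over next states:

    import numpy as np

    def construed_transition(phis, c):
        """Product-of-experts construal (Equation 1): P_c(s'|s,a) is proportional to
        the product of the selected potentials phi_i(s'|s,a), renormalized over s'.

        phis : list of arrays of shape (S, A, S), one potential per primitive effect
        c    : iterable of indices into phis (the construal)
        """
        prod = np.ones_like(phis[0])
        for i in c:
            prod = prod * phis[i]
        Z = prod.sum(axis=-1, keepdims=True)      # normalizer for each (s, a)
        return prod / np.where(Z == 0, 1.0, Z)    # guard against all-zero rows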
What task construal should a decision-maker select? Ideally, it would be one that only includes
those elements (cause-effect relationships) that lead to successful planning, excluding any others
so as to make the planning problem as simple as possible. To make this intuition precise, it is
essential to first distinguish between computing a plan with a construal and using the plan induced
by a construal. In our example, suppose the decision-maker forms a construal of their living
room that includes the effect of pulling on the coffee table but ignores the effect of colliding with
the couch. They might then compute a plan in which they pull on the coffee table without any
complications, but when they use that plan in the actual living room, they inadvertently stumble
over their couch. This particular construal is less than optimal.
Thus, we formalize the distinction between the computed plan associated with a construal and its resulting behavioral utility: If the decision-maker has a task construal $T_c$, denote the plan that optimizes it as $\pi_c$. Then, the utility of the computed plan when starting at state $s_0$ is given by its performance when interacting with the actual transition dynamics, $P$:
$$U(\pi_c) = U(s_0) + \sum_a \pi_c(a \mid s_0) \sum_{s'} P(s' \mid s_0, a) V_{\pi_c}(s'). \quad (2)$$
Put simply, the behavioral utility of a construal is determined by the consequences of using it to
plan and act in the actual task.
Having established the relationship between a construal and its utility, we can define the value
of representation (VOR) associated with a construal. Our formulation resembles previous models of resource-rationality2 and the expected value of control13 by discounting utilities with a cognitive cost, $C$. This cost could be further enriched by specifying algorithm-specific costs29 or hard constraints30. However, our aim is to understand value-guided construal with respect to the complexity of the construal itself and with minimal algorithmic assumptions. To this end, we use a cost that penalizes the number of effects considered: $C(c) = |c|$, where $|c|$ is the cardinality of $c$. Intuitively, this cost reflects the description length of a program that expresses the construed transition function in terms of primitive effects31. It also generalizes recent economic models of sparsity-based behavioral inattention32. The value of representation for construal $c$ is then its behavioral utility minus its cognitive cost:
$$\mathrm{VOR}(c) = U(\pi_c) - C(c). \quad (3)$$
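As a small worked example of Equations 2-3 (the behavioral utilities below are made-up numbers, not model outputs), the decision-maker should prefer the construal whose behavioral utility most exceeds its cardinality cost:

    # Hypothetical behavioral utilities U(pi_c) for candidate construals of a task
    # with three effects; the numbers are for illustration only.
    behavioral_utility = {
        frozenset():          -20.0,   # ignore everything
        frozenset({1}):       -12.0,
        frozenset({1, 2}):    -10.0,
        frozenset({1, 2, 3}):  -9.5,
    }

    def vor(c):
        return behavioral_utility[c] - len(c)   # VOR(c) = U(pi_c) - |c|  (Equation 3)

    best = max(behavioral_utility, key=vor)
    # Here the best construal is {1, 2}: adding effect 3 improves behavioral
    # utility by only 0.5 but incurs an extra representational cost of 1.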
In short, we introduce the notion of a task construal (Equation 1) that relaxes the assumption
of planning over a fixed task representation. We then define an optimality criterion for a construal
based on its complexity and its utility for planning and acting (Equations 2-3). This optimality
criterion provides a normative standard we can use to ask whether people form optimal value-
guided construals33,34. We note that the question of precisely how people identify or learn optimal
construals is beyond the scope of our current aims. Rather, here our goal is to simply determine
whether their planning is consistent with optimal construal. If so, then understanding how people
achieve (or approximate) this ability will be a key direction for future research (see Supplementary
Discussion of Construal Optimization Algorithms).
A paradigm for examining construals
Do people form construals that optimally balance complexity and utility? To answer this question,
we designed a paradigm analogous to the example in Figure 1a, in which participants were shown
a two-dimensional map of a maze and had to move a blue dot to reach a goal location. On each
trial, participants were shown a new maze composed of a starting location, a goal location, center
black walls in the shape of a +, and an arrangement of blue obstacles. The goal, starting state,
and the blue obstacles (but not the center black walls) changed on every trial, which required
participants to examine the layout of the maze and plan an efficient route to the goal (Figure 2a).
In our framework, each obstacle corresponds to a cause-effect relationship, $\phi_i$—i.e., attempting to
move into the space occupied by the obstacle and then being blocked. This is analogous to the
effect of being blocked by a piece of furniture in our earlier example.
Two key features make our maze-navigation paradigm useful for isolating and studying the
construal process. First, the mazes are fully observable : Complete information about the task
is immediately accessible from the visual stimulus. Second, each instance of a maze emerges
from a particular composition of individual elements (e.g., the obstacles). This means that while
all the components of a particular maze are immediately accessible, participants need to choose
which ones to integrate into an effective representation for planning (i.e., select a construal). Fully
observable but compositionally-structured problems occur routinely in everyday life—e.g., using
a map to navigate through exhibits in a museum—as well as in popular games—e.g., in chess,
figuring out how to move one’s knight across a board occupied by an opponent’s pieces. By
providing people with immediate access to all the components of a task while planning, we can
examine which ones they attend to versus ignore and whether these patterns of awareness reflect
a process of value-guided construal (Methods, Model Implementations, Value-guided Construal
Implementation; Code Availability Statement). Furthermore, this general paradigm can be used in
concert with several different experimental measures to assess attention (Extended Data Figures
1-3; Supplementary Experimental Materials; Data Availability Statement).
[Figure 2 panels: a, trial begins; goal, agent, and obstacles appear; participant navigates; awareness probe (“How aware of the highlighted obstacle were you at any point?”). b, goal, agent, and obstacles appear; obstacles are invisible during navigation; recall probe (“An obstacle was either in the yellow or green location (not both), which one was it?”); confidence probe.]
Figure 2. Maze-navigation paradigm and design of memory probes. Value-guided construal predicts how people will form representations that are simple but useful for planning and acting. These predictions were tested in a new paradigm in which participants controlled a blue circle and navigated mazes composed of center black walls in the shape of a cross, blue tetromino-shaped obstacles, and a yellow goal state with a shrinking green square. We assume
that attention to obstacles as a result of construal is reflected in memory of obstacles and used
two types of probes to assess memory. a,In our initial experiment, participants were shown the
maze and navigated to the goal (dashed line indicates an example path). After navigating, partic-
ipants were given awareness probes in which they were asked to report their awareness of each
obstacle on an 8-point scale (for analyses, responses were scaled to range from 0 to 1). b,In a
subsequent experiment, obstacles were only visible prior to moving in order to encourage plan-
ning up-front, and participants were given recall probes in which they were shown a pair of ob-
stacles in green and yellow, only one of which had been present in the maze they had just com-
pleted. They were then asked which one had been in the maze as well as their confidence.
Traces of construals in people’s memory
We assume that the obstacles included in a construal will be associated with greater awareness
and thereby memory; accordingly, we began by probing memory for obstacles after participants
completed each maze to test whether they formed value-guided construals of the mazes. In our ini-
tial experiment, participants received awareness probes in which, following navigation, they were
shown a picture of the maze they had just completed with one of the obstacles highlighted. Then,
they were asked, “How aware of the highlighted obstacle were you at any point?” and responded
on an 8-point scale that was later scaled to range from 0 to 1 for analyses (Figure 2a). If participants
formed representations of the mazes that balance utility and complexity, their responses should be
positively predicted by value-guided construal. This is precisely what we found: Value-guided con-
strual predicted awareness judgments (likelihood ratio test comparing hierarchical linear models
with and without z-score normalized value-guided construal probabilities: χ²(1) = 2297.21; p < 1.0 × 10⁻¹⁶; β = 0.133, S.E. = 0.003; Methods, Experiment Analyses; Figure 3). Furthermore, we also observed the same results when participants could not see the obstacles while moving and so needed to plan their route entirely up front (χ²(1) = 726.95; p < 1.0 × 10⁻¹⁶; β = 0.115, S.E. = 0.004). This was also the case when we probed awareness judgments immediately after planning but before execution (χ²(1) = 679.20; p < 1.0 × 10⁻¹⁶; β = 0.106, S.E. = 0.004; Methods, Experimental Design, Up-front Planning Experiment; Supplementary Memory Experiment Analyses).
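For readers who want the shape of this analysis, the following is a hedged sketch of a likelihood ratio test between hierarchical (mixed-effects) models with and without the construal predictor, using statsmodels; the column names, the simple random-intercept structure, and the data frame itself are assumptions and do not reproduce the pre-registered analysis described in the Methods.

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    def lrt_vgc(df: pd.DataFrame):
        # df: one row per participant x obstacle, with columns 'awareness' (0-1),
        # 'vgc_z' (z-scored value-guided construal probability), and 'participant'.
        null = smf.mixedlm("awareness ~ 1", df, groups=df["participant"]).fit(reml=False)
        full = smf.mixedlm("awareness ~ vgc_z", df, groups=df["participant"]).fit(reml=False)
        chi2 = 2 * (full.llf - null.llf)     # likelihood ratio statistic (1 df)
        p = stats.chi2.sf(chi2, df=1)
        beta = full.params["vgc_z"]          # coefficient on the construal predictor
        return chi2, p, beta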
[Figure 3 panels: histograms of by-obstacle counts of initial-experiment mean awareness, split by value-guided construal expected obstacle probability (≤ 0.5 vs. > 0.5), and a comparison of value-guided construal probability (predicted) with participant mean awareness response (experiment).]
Figure 3. Initial experiment results. In our initial planning experiment (out of four), each person (n = 161 independent participants) navigated twelve 2D mazes, each of which had seven blue tetromino-shaped obstacles. To assess whether attention to obstacles reflects a process of value-guided construal, participants were given an awareness probe (see Figure 2a) for each obstacle in each maze. a, For our first analysis, we split the set of 84 obstacles across mazes based on whether value-guided construal assigned a probability less than or equal to 0.5 or greater than 0.5. Here, we plot two histograms of participants’ mean awareness responses corresponding to the two sets of obstacles (≤ 0.5 in grey, > 0.5 in blue; individual by-obstacle mean awareness underlying the histograms are represented underneath). We then similarly split the obstacles based on whether mean awareness responses were less than or equal to 0.5 or greater than 0.5 and, using a chi-squared test for independence, found that this split was predicted by value-guided construal (χ²(1, N = 84) = 23.03, p = 1.6 × 10⁻⁶, effect size w = 0.52). b, Value-guided construal predictions for three of the twelve mazes used in the experiment (blue circle indicates the starting location, green and yellow square indicates the goal; obstacle colors represent model probabilities according to the colorbar). c, Participant mean awareness judgments for the same three mazes (obstacle colors represent mean judgments according to the colorbar). Responses in this initial experiment generally reflect value-guided construal of mazes. Participants were recruited through the Prolific online experiment platform.
While the awareness probes provide useful insight into people’s task construals, they are a step removed from participants’ memory (which is already a step removed from the construal process itself), since they require participants to reflect on their earlier awareness during planning. To address this
limitation, we developed a second set of critical mazes with two properties. First, the mazes were
designed to test the distinctive predictions of value-guided construal (e.g., Figure 4a). Second,
these new mazes allowed us to use a more stringent measure of memory for task elements. Specif-
ically, we used obstacle recall probes , in which, following navigation, participants were shown a
grid with the black center walls, a green obstacle, a yellow obstacle, and no other obstacles. Either
the green or yellow obstacle had actually been present in the maze, whereas the other obstacle did
not overlap with any of those that had been present. Participants were then asked, “An obstacle
was either in the yellow or green location (not both), which one was it?” and could select either op-
tion, followed by a confidence judgment on an 8-point scale (Figure 2b; Extended Data Figure 4a).
The recall probes thus provided two measures, accuracy and confidence, and using hierarchical
generalized linear models (HGLMs) we found that value-guided construal predicted both types of
responses (likelihood ratio tests comparing models on accuracy: χ²(1) = 249.34; p < 1.0 × 10⁻¹⁶; β = 0.648, S.E. = 0.042; and confidence: χ²(1) = 432.76; p < 1.0 × 10⁻¹⁶; β = 0.104, S.E. = 0.005. Methods, Experiment Analyses). Additionally, when we gave a separate group of participants the awareness probes on these mazes, value-guided construal was again predictive (Awareness: χ²(1) = 837.47; p < 1.0 × 10⁻¹⁶; β = 0.175, S.E. = 0.006). Thus, using three
different measures of memory (recall accuracy, recall confidence, and awareness judgments), we
found further evidence that when planning, people form task representations that optimally balance
complexity and utility.
[Figure 4 panels: a, example critical maze with the optimal path and relevant/near, relevant/far (critical), and irrelevant obstacles labeled; b, change in AIC when each lesioned predictor (VGC, Traj HS, Graph HS, Bottleneck, SR Overlap, Nav Dist, Nav Dist Step, Goal Dist, Start Dist, Wall Dist, Center Dist) is removed from the global model; c, recall confidence versus accuracy by obstacle type for the planning, perception control, and execution control experiments.]
Figure 4. Critical mazes recall experiment, model comparisons, and control studies. a,
The critical mazes recall experiment (n = 78 independent participants; one version of one of the four planning experiments) used critical mazes that included critical obstacles that were highly
relevant to planning but far from an optimal path (dashed line). Value-guided construal predicts
critical obstacles will be included in a construal while irrelevant obstacles will not, indepen-
dent of distance to the optimal path. b,We fit a global model to recall responses that included
the fixed parameter value-guided construal modification model (VGC) along with ten alternative
predictors based on heuristic search models, successor representation-based predictors, and low-
level perceptual cues (see Methods, Experiment Analyses). Then, each predictor was removed
from this global model, and we calculated the resulting change in fit (in AIC). Removing value-
guided construal led to the largest degradation of fit (greatest increase in AIC), underscoring its
unique explanatory value. c,In a pair of non-planning control experiments, new participants ei-
ther viewed patterns that looked exactly like the mazes (perceptual control; n= 88 independent
participants) or followed “breadcrumbs” through the maze along a path taken by a participant
from the original experiment (execution control; n= 80 independent participants). They then an-
swered the exact same recall questions. Value-guided construal remains a significant factor when
explaining recall in the original critical mazes experiment (planning) while including mean re-
call from the perceptual and execution controls as covariates (likelihood ratio test for accuracy: χ²(1) = 106.36; p = 6.2 × 10⁻²⁵; confidence: χ²(1) = 18.56; p = 1.6 × 10⁻⁵; p-values are unmodified). This confirms that responses consistent with value-guided construal are not a
simple function of perception and execution. Participants were recruited through the Prolific on-
line experiment platform. Plotted are the mean values for each obstacle, with relevant/near, rele-
vant/far (critical), and irrelevant obstacle types distinguished. Error bars are standard errors about
the mean.
Controlling for perception and execution
The memory studies provide preliminary confirmation of our hypothesis, but they have several
limitations. One is that, although participants were engaged in planning , they were also necessarily
engaged in other forms of cognitive processing, and these unrelated processes may have influenced
memory of the obstacles. In particular, participants’ perception of a maze or their execution of a
particular plan through a maze may have influenced their responses to the memory probes. This
potentially confounds the interpretation of our results, since a key part of our hypothesis is that
task construals arise from planning , rather than simply perceiving or executing.
Thus, to test that responses to the memory probes cannot be fully explained by perception
and/or execution, we administered two sets of yoked controls that did not require planning (Meth-
ods, Experimental Design, Control Experiments). In the perceptual controls , new participants were
shown patterns that looked exactly like the mazes, but they performed an unrelated, non-planning
task. Each pattern was presented to a new participant for the same amount of time that a partic-
ipant in the original experiments had examined the corresponding maze before moving—i.e., the
amount of time the original participant spent examining the maze to plan. The new participant then
responded to the same probes, in the same order, as the original participant. For the execution con-
trols, we recruited another group of participants and gave them instructions similar to those in the
planning experiments. However, unlike the original experiments, the task did not require planning.
Rather, these mazes included “breadcrumbs” that needed to be collected and that appeared every
two steps. Breadcrumbs appeared along the exact path taken by one of the original participants,
meaning that the new participant executed the same actions but without having planned . After
completing each maze, the participant then received the same probes, in the same order, as the
original participant.
We assessed whether responses in the planning experiments can be explained by a simple
combination of perception and/or execution by testing whether value-guided construal remained
a significant factor after accounting for control responses. Specifically, we used the mean by-
obstacle responses from the perceptual and execution controls as predictors in HGLMs fit to
the corresponding planning responses. We then tested whether adding value-guided construal
as a predictor improved fits. For the awareness, accuracy, and confidence responses in the re-
call experiment, we found that including value-guided construal significantly improved fits (like-
lihood ratio tests comparing models on accuracy: χ²(1) = 106.36; p = 6.2 × 10⁻²⁵; confidence: χ²(1) = 18.56; p = 1.6 × 10⁻⁵; and awareness: χ²(1) = 55.34; p = 1.0 × 10⁻¹³) and that value-guided construal predictions were positively associated with responses (coefficients for accuracy: β = 0.58, S.E. = 0.058; confidence: β = 0.039, S.E. = 0.009; and awareness: β = 0.054, S.E. = 0.007). Thus, responses following planning are not reducible to a simple
combination of perception and execution, and they can be further explained by the formation of
value-guided construals (Figure 4c; Supplementary Control Experiment Analyses).
Externalizing the planning process
Another limitation of the previous planning experiments is that they assess construal after planning
is complete (i.e., by probing memory). To obtain a measure of the planning process as it unfolds ,
we developed a novel process-tracing paradigm . In this version of the task, participants never
saw all of the obstacles at once. Instead, at the beginning of the trial, after being shown the start
and goal locations, they could use their mouse to reveal individual obstacles by hovering over them
(Methods, Experimental Design, Process-tracing Experiments; Extended Data Figure 4b). This led
participants to externalize the planning process, and so their behavior on this task provides insight
into how planning computations unfolded internally. We tested whether value-guided construal
accounted for behavior by analyzing two measures: whether an obstacle was hovered over and, if
it was hovered over, the duration of hovering. Value-guided construal was a significant predictor
for both these measures on both the initial mazes (likelihood ratio tests comparing HGLMs for hovering: χ²(1) = 1221.76; p < 1.0 × 10⁻¹⁶; β = 0.704, S.E. = 0.021; and hover duration [log milliseconds]: χ²(1) = 169.90; p < 1.0 × 10⁻¹⁶; β = 0.161, S.E. = 0.012) and on the critical mazes (hovering: χ²(1) = 1361.92; p < 1.0 × 10⁻¹⁶; β = 0.802, S.E. = 0.023; hover duration [log milliseconds]: χ²(1) = 540.63; p < 1.0 × 10⁻¹⁶; β = 0.369, S.E. = 0.016). These results
thus complement our original memory-based measurements of people’s task representations and
strengthen the interpretation of them in terms of value-guided construal during planning.
Value-guided construal modification
Thus far, our account of value-guided construal has assumed that an obstacle is either always
or never included in a construal. This simplification is useful since it enables us to derive clear
qualitative predictions based on whether a plan is influenced by an obstacle, but it overlooks graded
factors such as how much of a plan is influenced by an obstacle. For example, an obstacle may only
be relevant for planning a few movements around a participant’s initial location in a maze and, as
a result, could receive less total attention than one that is relevant for deciding how to act across
a larger area of the maze. To characterize these more fine-grained attentional processes, we first
generalized the original construal selection problem to one in which the decision-maker revisits
and potentially modifies their construal during planning. Then, we derived obstacle awareness
predictions based on a theoretically optimal construal modification policy that balances complexity
and utility (Methods, Model Implementation, Value-Guided Construal).
To assess value-guided construal modification, we re-analyzed our data using three versions of
the model with increasing ability to capture variability in responses. First, we used an idealized
fixed parameter model to derive a single set of obstacle attention predictions and confirmed that
they also predict participant responses on the planning tasks (Supplementary Construal Modifica-
tion Analyses). Second, for each planning measure and experiment, we calculated fitted parameter
models in which noise parameters for the computed plan and construal modification policy were
fit (Methods, Model Implementation, Value-Guided Construal). Scatter plots comparing mean by-
obstacle responses and model outputs for parameters with the highest R² are shown in Figure 5.
Finally, we fit a set of models that allowed for biases in computed plans (e.g., a bias to stay along
the edge of a maze or an explicit penalty for bumping into walls) and found that this additional ex-
pressiveness led to obstacle attention predictions with an improved correspondence to participant
responses (Supplementary Construal Modification Analyses). Together, these analyses provide
additional insight into the fine-grained dynamic structure of value-guided construal modification.
[Figure 5 panels: mean by-obstacle responses plotted against fitted value-guided construal modification probabilities for each measure — Initial Exp awareness judgment (R² = 0.53), Up-Front Planning Exp awareness judgment (R² = 0.44), Critical Maze Exp recall accuracy (R² = 0.87), recall confidence (R² = 0.81), and awareness judgment (R² = 0.74), Process-Tracing (Initial Mazes) hovering (R² = 0.42) and log-hover duration (R² = 0.30), Process-Tracing (Critical Mazes) hovering (R² = 0.61) and log-hover duration (R² = 0.48).]
Figure 5. Fitted value-guided construal modification. Our initial model of value-guided
construal focuses on whether an obstacle should or should not be included in a construal. We de-
veloped a generalization that additionally accounts for how much an obstacle influences a plan if
a decision-maker is optimally modifying their construal during planning (Methods, Model Im-
plementations, Value-Guided Construal). We used an ε-softmax noise model [35] for computed action plans and construal modification policies and, for each experiment and measure, searched for parameters that maximize the R² between model predictions and mean by-obstacle responses.
Shown here are plots comparing scores that the fitted construal modification model assigns to
each obstacle with participants’ mean by-obstacle responses for the nine measures.
Accounting for alternative mechanisms
While the analyses so far confirm the predictive power of value-guided construal, it is also im-
portant to consider alternative planning processes. For instance, differential awareness could have
been a passive side-effect of planning computations , rather than an active facilitator of planning
computations as posited by value-guided construal. In particular, participants could have been
planning by performing heuristic search over action sequences without actively construing the task,
which would have led to differential awareness of obstacles as a byproduct of planning. Differ-
ential awareness could also have arisen from alternative representational processes, such as those
based on the successor representation36 or related subgoaling mechanisms37. Similarly, perceptual
factors, such as the distance to the start, goal, walls, center, optimal path, or path taken, could have
influenced responses.
Based on these considerations, we identified ten alternative predictors (Methods, Model Imple-
mentations; Extended Data Figures 5, 6, and 7; Code Availability Statement). All ten predictors
plus the fixed value-guided construal modification predictions were included in global models that
were fit to each of the nine planning experiment measures, and, in all cases, value-guided construal
was a significant predictor (Extended Data Table 1; see Supplementary Alternative Mechanisms
Analyses for the same analyses with the single-construal model).
Furthermore, to assess the relative importance of each predictor, we calculated the change in
fit (in terms of AIC) that resulted from removing each predictor from a global model (Methods,
Experiment Analyses). Across all planning experiment measures, removing value-guided con-
strual led to the first or second largest reduction in fit (Figure 4b; Extended Data Table 1). These
“knock-out” analyses demonstrate the explanatory necessity of value-guided construal. To assess
explanatory sufficiency , we fit a new set of single-predictor and two-predictor models using all pre-
dictors and then calculated their AICs (Methods, Experiment Analyses; Extended Data Figure 8).
For all nine experimental measures, value-guided construal was one of the top two single-predictor
models and was one of the two factors included in the best two-predictor model. Together, these
analyses confirm the explanatory necessity and sufficiency of value-guided construal.
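Schematically, the knock-out analysis amounts to refitting the global model with each predictor removed and recording the resulting increase in AIC; the helper fit_global_model below is a hypothetical stand-in for the HGLM fitting routine described in the Methods, used only to show the shape of the computation.

    def knockout_deltas(data, predictors, fit_global_model):
        """For each predictor, refit the global model without it and return the
        change in AIC relative to the full model (a larger increase indicates the
        removed predictor was more necessary for explaining responses)."""
        full_aic = fit_global_model(data, predictors).aic
        return {
            p: fit_global_model(data, [q for q in predictors if q != p]).aic - full_aic
            for p in predictors
        }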
Discussion
We tested the idea that when people plan, they do so by constructing a simplified mental representa-
tion of a problem that is sufficient to solve it—a process that we refer to as value-guided construal.
We began by formally articulating how an ideal, cognitively-limited decision-maker should con-
strue a task so as to balance complexity and utility. Then, we showed that pre-registered predictions
of this model explain people’s awareness, ability to recall problem elements (obstacles in a maze),
confidence in recall ability, and behavior in a process-tracing paradigm, even after controlling for
the baseline influence of perception and execution as well as ten alternative mechanisms. These
findings support the hypothesis that people make use of a controlled process of value-guided con-
strual, and that it can help explain the efficiency of human planning. More generally, our account
provides a framework for further investigating the cognitive mechanisms involved in construal. For
instance, how are construal strategies acquired? How is construal selection shaped by computation
costs, time, or constraints? From a broader perspective, our analysis suggests a deep connection
between the control of construals and the acquisition of structured representations like objects and
their parts that can be cognitively manipulated38,39, which can inform the development of intelli-
gent machines. Future investigation into these and other mechanisms that interface with the control
of representations will be crucial for developing a comprehensive theory of flexible and efficient
intelligence.
Main References
1. Lewis, R. L., Howes, A. & Singh, S. Computational Rationality: Linking Mechanism and
Behavior Through Bounded Utility Maximization. Topics in Cognitive Science 6,279–311
(2014).
2. Griffiths, T. L., Lieder, F. & Goodman, N. D. Rational Use of Cognitive Resources: Levels
of Analysis Between the Computational and the Algorithmic. Topics in Cognitive Science 7,
217–229 (2015).
3. Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. Computational rationality: A converging
paradigm for intelligence in brains, minds, and machines. Science 349, 273–278. ISSN : 0036-
8075 (2015).
4. Newell, A. & Simon, H. A. Human problem solving (Prentice-Hall, Englewood Cliffs, NJ,
1972).
5. Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach 3rd. ISBN: 0136042597
(Prentice Hall Press, USA, 2009).
6. Keramati, M., Smittenaar, P., Dolan, R. J. & Dayan, P. Adaptive integration of habits into
depth-limited planning defines a habitual-goal–directed spectrum. Proceedings of the Na-
tional Academy of Sciences 113, 12868–12873 (2016).
7. Huys, Q. J. M. et al. Bonsai trees in your head: How the Pavlovian system sculpts goal-
directed choices by pruning decision trees. PLoS Computational Biology 8,e1002410 (2012).
8. Huys, Q. J. M. et al. Interplay of approximate planning strategies. Proceedings of the Na-
tional Academy of Sciences 112, 3098–103 (2015).
9. Callaway, F. et al. A resource-rational analysis of human planning inProceedings of the
Annual Conference of the Cognitive Science Society (2018).
10. Sezener, C. E., Dezfouli, A. & Keramati, M. Optimizing the depth and the direction of
prospective planning using information values. PLoS Computational Biology 15,e1006827
(2019).
11. Pezzulo, G., Donnarumma, F., Maisto, D. & Stoianov, I. Planning at decision time and in
the background during spatial navigation. Current Opinion in Behavioral Sciences 29,69–76
(2019).
12. Miller, E. K. & Cohen, J. D. An Integrative Theory of Prefrontal Cortex Function. Annual
Review of Neuroscience 24,167–202 (2001).
13. Shenhav, A., Botvinick, M. M. & Cohen, J. D. The Expected Value of Control: An Integrative
Theory of Anterior Cingulate Cortex Function. Neuron 79,217–240 (2013).
14. Shenhav, A. et al. Toward a Rational and Mechanistic Account of Mental Effort. Annual
Review of Neuroscience, 99–124 (2017).
15. Norman, D. A. & Shallice, T. in Consciousness and self-regulation 1–18 (Springer, 1986).
16. Holland, J. H., Holyoak, K. J., Nisbett, R. E. & Thagard, P. R. Induction: Processes of Infer-
ence, Learning, and Discovery (MIT Press, 1989).
17. Newell, A. & Simon, H. A. Computer science as empirical inquiry: Symbols and search.
Communications of the ACM 19,113–126 (1976).
18. Daw, N. D., Niv, Y. & Dayan, P. Uncertainty-based competition between prefrontal and dor-
solateral striatal systems for behavioral control. Nature neuroscience 8,1704–1711 (2005).
19. Gläscher, J., Daw, N., Dayan, P. & O’Doherty, J. P. States versus rewards: Dissociable neu-
ral prediction error signals underlying model-based and model-free reinforcement learning.
Neuron 66,585–595 (2010).
20. Ramkumar, P. et al. Chunking as the result of an efficiency computation trade-off. Nature
communications 7,1–11 (2016).
21. Barsalou, L. W. Ad hoc categories. Memory & cognition 11,211–227 (1983).
22. Simon, H. A. The functional equivalence of problem solving skills. Cognitive Psychology 7,
268–288. ISSN : 0010-0285 (1975).
23. Brooks, R. A. Intelligence without representation. Artificial intelligence 47,139–159 (1991).
24. Puterman, M. L. Markov Decision Processes: Discrete Stochastic Dynamic Programming
(John Wiley & Sons, Inc., 1994).
25. Bellman, R. Dynamic programming (Princeton University Press, 1957).
26. Leong, Y. C., Radulescu, A., Daniel, R., DeWoskin, V. & Niv, Y. Dynamic interaction be-
tween reinforcement learning and attention in multidimensional environments. Neuron 93,
451–463 (2017).
27. Hinton, G. Products of experts in1999 Ninth International Conference on Artificial Neural
Networks ICANN 99.(Conf. Publ. No. 470) 1(1999), 1–6.
28. Whiteley, L. & Sahani, M. Attention in a Bayesian framework. Frontiers in human neuro-
science 6,100 (2012).
29. Lieder, F. & Griffiths, T. L. Resource-rational analysis: understanding human cognition as the
optimal use of limited computational resources. Behavioral and Brain Sciences, 1–60 (2020).
30. Yoo, A. H., Klyszejko, Z., Curtis, C. E. & Ma, W. J. Strategic allocation of working memory
resource. Scientific reports 8,1–8 (2018).
31. Grünwald, P. Model selection based on minimum description length. Journal of Mathemati-
cal Psychology 44,133–152 (2000).
32. Gabaix, X. A sparsity-based model of bounded rationality. The Quarterly Journal of Eco-
nomics 129, 1661–1710 (2014).
33. Marr, D. Vision: A computational investigation into the human representation and processing
of visual information (San Francisco: W. H. Freeman and Company, 1982).
34. Anderson, J. R. The Adaptive Character of Thought (Lawrence Erlbaum Associates, Inc.,
Hillsdale, NJ, 1990).
35. Nassar, M. R. & Frank, M. J. Taming the beast: extracting generalizable knowledge from
computational models of cognition. Current opinion in behavioral sciences 11,49–54 (2016).
36. Gershman, S. J. The successor representation: its computational logic and neural substrates.
Journal of Neuroscience 38,7193–7200 (2018).
37. Stachenfeld, K. L., Botvinick, M. M. & Gershman, S. J. The hippocampus as a predictive
map. Nature neuroscience 20,1643–1653 (2017).
38. Tversky, B. & Hemenway, K. Objects, parts, and categories. Journal of Experimental Psy-
chology: General 113, 169 (1984).
39. Tenenbaum, J. B., Kemp, C., Griffiths, T. L. & Goodman, N. D. How to grow a mind: Statis-
tics, structure, and abstraction. Science 331, 1279–1285 (2011).
Methods
Model Implementations
Value-guided Construal
Our model assumes the decision-maker has a set of cause-effect relationships that can be combined
into a task construal that is then used for planning. To derive empirical predictions for the maze
tasks, we assume a set of primitive cause-effect relationships, each of which is analogous to the
example of interacting with furniture in a living room (see main text). For each maze, we modeled
the following: The default effect of movement (i.e., pressing an arrow key causes the circle to move in that direction with probability $1 - \varepsilon$ and stay in place with probability $\varepsilon$, $\varepsilon = 10^{-5}$), $\phi_{\mathrm{Move}}$; the effect of being blocked by the center, plus-shaped (+) walls (i.e., the wall causes the circle to not move when the arrow key is pressed), $\phi_{\mathrm{Walls}}$; and effects of being blocked by each of the $N$ obstacles, $\phi_{\mathrm{Obstacle}_i}$, $i = 1, \ldots, N$. Since every maze includes the same movements and walls, the model only selected which obstacle effects to include. The utility function for all mazes was given by a step cost of $-1$ until the goal state was reached.
Value-guided construal posits a bilevel optimization procedure involving an “outer loop” of
construal and an “inner loop” of planning. Here, we exhaustively calculate potential solutions to
this nested optimization problem by enumerating and planning with all possible construals (i.e.,
subsets of obstacle effects). We exactly solved the inner loop of planning for each construal us-
ing dynamic programming40 and then evaluated the optimal stochastic computed plan under the actual task dynamics (i.e., Equation 2). For planning and evaluation, transition probabilities were multiplied by a discount rate of 0.99 to ensure values were finite. The general procedure
for calculating the value of construals is outlined in the algorithm in Extended Data Table 2. To
be clear, our current research strategy is to derive theoretically optimal predictions for the inner
loop of planning and outer loop of construal in the spirit of resource-rational analysis2. Thus,
this specific procedure should not be interpreted as a process model of human construal. In the
Supplemental Discussion of Algorithms for Construal Optimization, we discuss the feasibility of
optimizing construals and how an important direction for future research will involve investigating
tractable algorithms for finding good construals.
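A compact sketch of this exhaustive procedure is given below. It is written with plain numpy rather than the authors' msdm code, assumes the task is encoded as dense transition/potential arrays with the start state at index 0, folds the always-included movement and wall effects into the potential list before enumeration, uses a softmax plan as a stand-in for the optimal stochastic computed plan, and reuses construed_transition from the sketch of Equation 1 above.

    import itertools
    import numpy as np

    def value_iteration(P, U, gamma=0.99, iters=1000):
        """Solve the construed planning problem (to numerical tolerance)."""
        S, A, _ = P.shape
        V = np.zeros(S)
        Q = np.zeros((S, A))
        for _ in range(iters):
            Q = U[:, None] + gamma * np.einsum('sat,t->sa', P, V)
            V = Q.max(axis=1)
        return Q

    def softmax_plan(Q, temp=1.0):
        """Stochastic computed plan pi_c(a|s) from construed action values."""
        z = np.exp((Q - Q.max(axis=1, keepdims=True)) / temp)
        return z / z.sum(axis=1, keepdims=True)

    def policy_evaluation(P_true, U, pi, gamma=0.99):
        """Value of acting with plan pi under the *actual* task dynamics."""
        P_pi = np.einsum('sa,sat->st', pi, P_true)
        return np.linalg.solve(np.eye(len(U)) - gamma * P_pi, U)

    def enumerate_vor(obstacle_phis, base_phis, P_true, U, start=0):
        """VOR(c) for every subset c of obstacle effects (Equations 2-3)."""
        vors = {}
        n = len(obstacle_phis)
        for k in range(n + 1):
            for c in itertools.combinations(range(n), k):
                phis = base_phis + [obstacle_phis[i] for i in c]
                P_c = construed_transition(phis, range(len(phis)))   # Equation 1
                pi_c = softmax_plan(value_iteration(P_c, U))         # inner loop: plan
                utility = policy_evaluation(P_true, U, pi_c)[start]  # Equation 2
                vors[frozenset(c)] = utility - len(c)                # Equation 3
        return vors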
Given a value of representation function, VOR, that assigns a value to each construal, we model
participants as selecting a construal according to a softmax decision-rule:
$$P(c) \propto \exp\{\tau^{-1}\, \mathrm{VOR}(c)\}, \quad (4)$$
where $\tau > 0$ is a temperature parameter (for our pre-registered predictions $\tau = 0.1$). We then calculated a marginalized probability for each obstacle being included in the construal, from the initial state, $s_0$, corresponding to the expected awareness of that obstacle:
$$P(\mathrm{Obstacle}_i) = \sum_c \mathbb{1}[\mathrm{Obstacle}_i \in c]\, P(c), \quad (5)$$
where, for a statement $X$, $\mathbb{1}[X]$ evaluates to 1 if $X$ is true and 0 if $X$ is false. We implemented this model in Python 3.7 using the msdm library (see Code Availability Statement).
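Given the VOR values, Equations 4-5 can be computed directly; the following hedged sketch assumes construals are represented as frozensets of obstacle indices keyed into a dictionary of VOR values (an encoding choice for illustration, not the msdm API).

    import numpy as np

    def construal_distribution(vors, temp=0.1):
        """Softmax over construals: P(c) proportional to exp(VOR(c)/temp) (Equation 4)."""
        cs = list(vors)
        v = np.array([vors[c] for c in cs])
        p = np.exp((v - v.max()) / temp)
        return dict(zip(cs, p / p.sum()))

    def obstacle_probabilities(vors, n_obstacles, temp=0.1):
        """Marginal probability that each obstacle appears in the construal (Equation 5)."""
        P_c = construal_distribution(vors, temp)
        return [sum(p for c, p in P_c.items() if i in c) for i in range(n_obstacles)]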
The basic value-guided construal model makes the simplifying assumption that the decision-
maker plans with a single static construal. We can extend this idea to consider a decision-maker
who revisits and potentially modifies their construal at each stage of planning. In particular, we
can conceptualize this process in terms of a sequential decision-making problem induced by the
interaction between task dynamics (e.g., a maze) and the internal state of an agent (e.g., a con-
strual) [41]. The solution to this problem is then a sequence of modified construals associated with
planning over different parts of the task (e.g., planning movements for different areas of the maze).
Formally, we denote the set of possible construals as $\mathcal{C} = \mathcal{P}(\{1, \ldots, N\})$, the powerset of cause-effect relationships, and define a construal modification Markov Decision Process, which has a state space corresponding to the Cartesian product of task states and construals, $(s, c) \in S \times \mathcal{C}$, and an action space corresponding to possible next construals, $c' \in \mathcal{C}$. Having chosen a new construal $c'$, the probability of transitioning from task state $s$ to $s'$ comes from first calculating a joint distribution using the actual transition function $P(s' \mid s, a)$ and plan $\pi_{c'}(a \mid s)$ and then marginalizing over task actions $a$:
$$P(s' \mid s, c') = \sum_a \pi_{c'}(a \mid s)\, P(s' \mid s, a). \quad (6)$$
In this construal modification setting, the analogue to the value of representation (VOR; Equation 3) is the optimal construal modification value function, defined over all $s, c$:
$$V(s, c) = U(s) + \max_{c'} \left[ \sum_{s'} P(s' \mid s, c')\, V(s', c') - C(c', c) \right], \quad (7)$$
where $C(c', c) = |c' - c|$ is the number of additional1 cause-effect relationships in the new construal $c'$ compared to $c$. Importantly, this cost on modifying the construal encourages consistency—i.e., without $C(c', c)$, a decision-maker would have no disincentive to completely change their construal for each state. Note that in the special case where $c = \varnothing$, we recover the original static construal cost for a single step. Finally, using the construal modification value function, we define a softmax policy over the task/construal state space, $\pi(c' \mid s, c) \propto \exp\{\tau_c^{-1}[\sum_{s'} P(s' \mid s, c')\, V(s', c') - C(c', c)]\}$. For the fixed parameter model we set $\tau_c = 0.1$ (as with the single-construal model).
The construal modification formulation allows us to consider not just whether an obstacle ap-
pears in a construal, but also how long it appears in a construal. In particular, we would like to
compute a quantity that is analogous to Equation 5 that assigns model values for each obstacle.
To do this, we use the normalized task/construal state occupancy induced by a construal policy $\pi$ from the initial task/construal state, $\rho(s, c \mid s_0, c_0) \propto M_\pi(s_0, c_0; s, c)$, where $c_0 = \varnothing$ and $M_\pi$ is the successor representation under $\pi$ (for a self-contained review of $M_\pi$, see the section on Successor Representation-based Predictors below). Given a policy $\pi$ and starting task state $s_0$, for each obstacle, we calculate the probability of having a construal that includes that obstacle:
$$P(\mathrm{Obstacle}_i) = \sum_{s, c} \mathbb{1}[\mathrm{Obstacle}_i \in c]\, \rho(s, c \mid s_0, c_0). \quad (8)$$
1 For sets $A$ and $B$, the set difference $A - B = \{a : a \in A \text{ and } a \notin B\}$.
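Equation 8 is a simple marginalization over the occupancy of task/construal states; here is a minimal sketch, assuming the normalized occupancy has been estimated (e.g., from rollouts of the construal modification policy) and stored as a dictionary keyed by (state, construal) pairs.

    def obstacle_probabilities_from_occupancy(occupancy, n_obstacles):
        """P(Obstacle_i) = sum over (s, c) of 1[Obstacle_i in c] * rho(s, c | s0, c0).

        occupancy : dict mapping (state, frozenset-of-obstacle-indices) to its
                    normalized occupancy under the construal modification policy
        """
        return [
            sum(p for (_, c), p in occupancy.items() if i in c)
            for i in range(n_obstacles)
        ]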
To calculate the optimal construal modification value function, $V(s, c)$, for each maze, we constructed construal modification Markov Decision Processes in Python (3.7) using scipy (1.5.2) sparse matrices [42]. We then exactly solved for $V(s, c)$ using a custom implementation of policy iteration [43] designed to take advantage of the sparse matrix data structure (see Code Availability Statement). For the fitted parameter models, we used separate ε-softmax noise models [35] for the computed plans, $\pi_c(a \mid s)$, and construal modification policy, $\pi(c' \mid s, c)$, and performed a grid search over the four parameters for each of the nine planning measures ($\tau_a^{-1} \in \{1, 3, 5, 7\}$; $\varepsilon_a \in \{0.0, 0.1, 0.2\}$; $\tau_c^{-1} \in \{1, 3, 5, 7, 9\}$; $\varepsilon_c \in \{0, 0.05, 0.1, 0.2, 0.3\}$). Additionally, for parameter fitting, we limited the construals $c' \in \mathcal{C}$ to be of size three. This improves the speed of parameter evaluation and yields results comparable to the fixed parameter model, which uses the full construal set. Finally, to obtain obstacle value-guided construal probabilities we simulate 1000 rollouts of the construal modification policy to estimate $\rho(\cdot \mid s_0, c_0)$. As with the initial model, we emphasize that these procedures are not intended as an algorithmic account of construal modification, but rather allow us to derive theoretically optimal predictions of the fine-grained dynamics of value-guided construals during planning.
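For reference, the ε-softmax noise model [35] used for the fitted models mixes a softmax choice rule with uniform random choice; a minimal sketch of the standard form is below (the exact parameterization used for fitting may differ).

    import numpy as np

    def epsilon_softmax(values, temp, eps):
        """With probability eps choose uniformly at random; otherwise choose
        according to a softmax with temperature temp over the given values."""
        values = np.asarray(values, dtype=float)
        soft = np.exp((values - values.max()) / temp)
        soft /= soft.sum()
        uniform = np.full_like(soft, 1.0 / soft.size)
        return (1.0 - eps) * soft + eps * uniform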
Heuristic Search Over Action Sequences
Value-guided construal posits that people control their task representations to actively facilitate
planning , which, in the maze navigation paradigm, leads to differential attention to obstacles. How-
ever, differential attention could also occur as a passive side-effect of planning , even in the absence
of active construal. In particular, heuristic search over action sequences is another mechanism for
reducing the cost of planning, but it accomplishes this in a different way: by examining possible
action sequences in order of how promising they seem, not by simplifying the task representation.
If people are simulating candidate action sequences via heuristic search (and not engaged in an ac-
tive construal process), differential attention to task elements could have simply been a side-effect
of how those simulations unfolded.
Thus, we wanted to derive predictions of differential awareness as a byproduct of search over
action sequences. To do so, we considered two general classes of heuristic search algorithms.
The first, a variant of Real-Time Dynamic Programming (RTDP)44,45, is a trajectory-based search
algorithm that simulates physically realizable trajectories (i.e., sequences of states and actions that
could be generated by repeatedly calling a fixed transition function). The algorithm works by
first initializing a heuristic value function (e.g., based on domain knowledge). Then, it simulates
trajectories that greedily maximize the heuristic value function while also performing Bellman
updates at simulated states44. This scheme then leads RTDP to simulate states in order of how
promising they are (according to the continuously updated heuristic value function) until the value
function converges. Importantly, RTDP can end up visiting a fraction of the total state space,
depending on the heuristic. Our implementation was based on the Labeled RTDP algorithm of
Bonet & Geffner45, which additionally includes a labeling scheme that marks states where the
estimate of the value function has converged, leading to faster overall convergence.
To derive obstacle awareness predictions, we ran RTDP (implemented in msdm ; see Code
Availability Statement) on each maze and initialized it with a heuristic corresponding to the optimal
value function assuming there are plus-shaped walls but no obstacles . This models the background
knowledge participants have about distances, while also providing a fair comparison to the initial
information provided to the value-guided construal implementation. Additionally, if at any point
the algorithm had to choose actions based on estimated value, ties were resolved randomly, making
the algorithm stochastic. For each maze, we ran 200 simulations of the algorithm to convergence
and examined which states were visited by the algorithm over all simulations. We calculated the
mean number of times each obstacle was hitby the algorithm, where a hit was defined as a visit
to a state adjacent to an obstacle such that the obstacle was in between the state and the goal.
Because the distribution of hit counts has a long tail, we used the natural log of hit counts +1as
the obstacle hit scores. The reason why the raw hit counts have a long tail is due to the particular
way in which RTDP calculates the value of regions where the heuristic value is much higher than
the actual value (e.g., dead ends in a maze). Specifically, RTDP explores such regions until it has confirmed that they are no better than an alternative path, which can take many steps. More generally,
trajectory-based algorithms are limited in that they can only update states by simulating physically
realizable trajectories starting from the initial state.
The limitations of trajectory-based planning algorithms motivated our use of a second class
of graph-based planning algorithms. We used LAO*46, a version of the classic A* algorithm47 generalized to be used on Markov Decision Processes (implemented in msdm; see Code Availability Statement). Unlike trajectory-based algorithms, graph-based algorithms like LAO* maintain a graph of previously simulated states. LAO* in particular builds a graph of the task rooted at the
initial state and then continuously plans over the graph. If it computes a plan that leads it to a state
at the edge of the graph, the graph is expanded according to the transition model to include that
state and then the planning cycle is restarted. Otherwise, if it computes an optimal plan that only
visits states in the simulated graph, the algorithm terminates. By continuously expanding the task
graph and performing planning updates, the algorithm can intelligently explore the most promising
(according to the heuristic) regions of the state space without being constrained to physically realizable sequences. In particular, graph-based algorithms can quickly “backtrack” when they encounter dead
ends.
Obstacle awareness predictions based on LAO* were derived by using the same initial heuristic
as was used for RTDP and a similar scheme for handling ties. We then calculated the total number
of times an obstacle was hit during graph expansion phases only, using the same definition of a hit
as above. For each maze, we generated 200 planning simulations and used the raw hit counts as
the hit score.
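Given hit counts recorded across the 200 simulations, the two predictors differ only in how counts are aggregated and transformed. A small sketch (hypothetical helper, not part of msdm):

```python
import numpy as np

def obstacle_hit_scores(per_run_hits, agg=np.mean, log_transform=True):
    """Aggregate per-run hit counts (a list of dicts: obstacle id -> count) into
    one score per obstacle. The trajectory-based (RTDP) predictor uses the mean
    hit count passed through log(1 + x); the graph-based (LAO*) predictor uses
    raw counts with no transform."""
    obstacle_ids = set().union(*per_run_hits)
    scores = {o: agg([run.get(o, 0) for run in per_run_hits]) for o in obstacle_ids}
    return {o: np.log1p(v) for o, v in scores.items()} if log_transform else scores
```

For the trajectory-based score one would call this with the defaults; for the graph-based score, with agg=np.sum and log_transform=False.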
Algorithms like RTDP and LAO* plan by simulating realizable action sequences that begin at
the start state. As a result, these models tend to predict greater awareness of obstacles that are near
the start state and are consistent with the initial heuristic, regardless of whether those obstacles
strongly affect or lie along the final optimal path. For instance, obstacles down initially promising
dead ends have a high hit score. This contrasts with value-guided construal, which predicts greater
attention to relevant obstacles, even if they are distant, and lower attention to irrelevant ones, even
if they are nearby. For an example of these distinct model predictions, see maze #14 in Extended
Data Figure 6.
To be clear, our goal was to obtain predictions for search over action sequences in the absence
of an active construal process for comparison with value-guided construal. However, in general,
heuristic search and value-guided construal are complementary mechanisms, since the former is a
way to plan given a representation and the latter is a way to choose a representation for planning.
For instance, one could perform heuristic search over a construed planning model, or a construal
could help with selecting a heuristic to guide search over actions. These kinds of interactions
between action-sequence search and construal are important directions for future research that can
be built on the ideas developed here.
Successor Representation-based Predictors
We also considered two measures based on the successor representation , which has been proposed
as a component in several computational theories of efficient sequential decision-making36,48. Im-
portantly, the successor representation is not a specific model; rather it is a predictive coding of a
task in which states are represented in terms of the future states likely to be visited from that state,
given the decision-maker follows a certain policy. Formally, the value function of a policy \pi(a|s)
can be expressed in the following two equivalent ways:

V^{\pi}(s) = U(s) + \sum_a \pi(a|s) \sum_{s'} P(s'|s,a)\, V^{\pi}(s')   (9)
           = \sum_{s^+} M^{\pi}(s, s^+)\, U(s^+),   (10)

where M^{\pi}(s, s^+) is the expected occupancy of s^+ starting from s, when acting according to \pi. The
successor representation of a state s under \pi is then the vector M^{\pi}(s, \cdot). Algorithmically, M^{\pi} can
be calculated by solving a set of recursive equations (implemented in Python with numpy49; see
Code Availability Statement):

M^{\pi}(s, s^+) = 1[s = s^+] + \sum_{a, s'} \pi(a|s)\, P(s'|s,a)\, M^{\pi}(s', s^+).   (11)
Again, the successor representation is not itself an algorithm, but rather a policy-conditioned re-
coding of states that can be a component of a larger computational process (e.g., different kinds
of learning or planning). Here, we focus on its use in the context of transfer learning48,50and
bottleneck states37,51.
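For a tabular task, equation (11) is a linear system and M can be computed in closed form. A minimal numpy sketch, assuming the transition model and policy are given as arrays (illustrative, not the exact implementation used here):

```python
import numpy as np

def successor_representation(P, pi, terminal):
    """Tabular successor representation M, where M[s, s'] is the expected number
    of visits to s' starting from s under policy pi (equation 11 solved as a
    linear system).

    P        : (A, S, S) array, P[a, s, s'] = transition probability
    pi       : (S, A) array, pi[s, a] = probability of action a in state s
    terminal : (S,) boolean array; rows for terminal states are zeroed so that
               (I - T) is invertible in an episodic task
    """
    A, S, _ = P.shape
    # Policy-marginalized transition matrix: T[s, s'] = sum_a pi(a|s) P(s'|s, a)
    T = np.einsum('sa,asx->sx', pi, P)
    T[terminal] = 0.0
    # M = I + T M  =>  M = (I - T)^{-1}
    return np.linalg.solve(np.eye(S) - T, np.eye(S))
```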
Research on transfer learning posits that the successor representation supports transfer that is
more flexible than pure model-free mechanisms but less flexible than model-based planning. For
example, Russek et al.50model agents that learned a successor representation for the optimal pol-
icy in an initial maze and then examined transfer when the maze was changed (e.g., adding in a
new barrier). While their work focuses on learning, rather than planning, we can borrow the ba-
sic insight that the successor representation induced by the optimal policy for a source task can
influence the encoding of a target task, which constitutes a form of construal. In our experiments,
participants were not trained on any particular source task, but we can use the maze with all obsta-
cles removed as a proxy (i.e., representing what all mazes had in common). Thus, we calculated
the optimal policy for the maze without any obstacles (but with the start and goal), computed the
successor representation M, and then calculated, for each obstacle i in the actual maze with the
obstacles, a successor representation overlap (SR-Overlap) score:

\text{SR-Overlap}(i) = \sum_{s \in \text{Obs}_i} M(s_0, s),   (12)

where s_0 is the starting state and \text{Obs}_i is the set of states occupied by the obstacle i. This quantity
can be interpreted as the amount of overlap between an obstacle and the successor representation of
the starting state. If the successor representation shapes how people represent tasks, this quantity
would be associated with greater awareness of certain obstacles.
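Given M computed as above (for the obstacle-free maze's optimal policy), the overlap score in equation (12) is a single row of M summed over an obstacle's cells. A sketch with hypothetical index arguments:

```python
def sr_overlap(M, start_index, obstacle_state_indices):
    """Equation (12): sum the start state's successor-representation row over
    the states occupied by one obstacle."""
    return sum(M[start_index, s] for s in obstacle_state_indices)
```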
The second predictor is related to the idea of bottleneck states . These emerge from how the
successor representation encodes multi-scale task structure37, and they have been proposed as a
basis for subgoal selection51. If bottlenecks guide subgoal selection, then distance to bottleneck
states could give rise to differential awareness of obstacles via subgoaling processes. Thus, we
wanted to test that responses consistent with value-guided construal were not entirely attributable to
the effect of bottleneck states calculated in the absence of an active construal process. Importantly,
we note that as with alternative planning mechanisms like heuristic search, the identification of
bottleneck states for subgoaling is compatible with value-guided construal (e.g., one could identify
subgoals for a construed version of a task).
When viewing the transition function of a task (e.g., a maze) as a graph over states, bottleneck
states lie on either side of a partitioning of the state space into two regions such that there is high
intra-region connectivity and low inter-region connectivity. This can be computed for any transition
function using the normalized min-cuts algorithm52or derived from the second eigenvector of the
successor representation under a random policy37. Here, we use a variant of the second approach
as described in the appendix of ref. 37. Formally, given a transition function P(s'|s,a), we define an
adjacency matrix A(s, s') = 1[\exists a \text{ s.t. } P(s'|s,a) > 0] and a diagonal degree matrix
D(s, s) = \sum_{s'} A(s, s'). Then, the graph Laplacian, a representation often used to derive low-dimensional
embeddings of graphs in spectral graph theory, is L = D - A. We take the eigenvector with
the second largest eigenvalue, which assigns a positive or negative value to each state in the task.
This vector can be interpreted as projecting the state space onto a single dimension in a way that
best preserves connectivity information, with a zero point that represents the mid-point of the
projected graph. Bottleneck states correspond to those states nearest to 0. For each maze, we
used this method to identify bottleneck states and further reduced these to the optimal bottleneck
states , defined as bottleneck states with a non-zero probability of being visited under the optimal
stochastic policy for the maze. Finally, for each obstacle, we calculated a bottleneck distance score,
the minimum Manhattan distance from an obstacle to any of these bottleneck states.
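A compact sketch of this bottleneck procedure, assuming a symmetric state adjacency matrix; the function name and the tolerance-based selection of near-zero states are illustrative simplifications:

```python
import numpy as np

def bottleneck_states(adjacency, optimal_visit_probs=None, tol=1e-8):
    """Identify bottleneck states from the graph Laplacian (illustrative sketch).

    adjacency           : (S, S) 0/1 array with A[s, s'] = 1 if some action moves s to s'
    optimal_visit_probs : optional (S,) array of visit probabilities under the optimal
                          stochastic policy, used to keep only optimal bottleneck states
    """
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))            # diagonal degree matrix
    L = D - A                             # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
    # The text selects the eigenvector with the second largest eigenvalue of L;
    # the classic spectral-partitioning alternative is the second smallest
    # (the Fiedler vector, eigvecs[:, 1]).
    v = eigvecs[:, -2]
    # States projected nearest to the zero point of this embedding are bottlenecks.
    closeness = np.abs(v)
    bottlenecks = np.flatnonzero(closeness <= closeness.min() + tol)
    if optimal_visit_probs is not None:
        bottlenecks = bottlenecks[np.asarray(optimal_visit_probs)[bottlenecks] > 0]
    return bottlenecks
```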
Notably, value-guided construal also predicts greater attention to obstacles that form bottle-
necks because one often needs to carefully navigate through them to reach the goal. However,
the predictions of our model differ for obstacles that are distant from the bottleneck. Specifically,
value-guided construal predicts greater attention to relevant obstacles that affect the optimal plan,
even if they are far from the bottleneck (e.g., see model predictions for maze #2 in Extended Data
Figure 5).
Perceptual Landmarks
Finally, we considered several predictors based on low-level perceptual landmarks and partici-
pants’ behavior. These included the minimum Manhattan distance from an obstacle to the start
location, the goal location, the center black walls, the center of the grid, and any of the locations
visited by the participant in a trial (navigation distance). We also considered the timestep at which
participants were closest to an object as a measure of how recently they were near an object. In
cases where navigation distance was not an appropriate measure (e.g., if participants never nav-
igated to the goal), we used the minimum Manhattan distance to trajectories sampled from the
optimal policy averaged over 100 samples.
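Each of these landmark predictors reduces to a minimum Manhattan distance between an obstacle's cells and a set of reference cells. A one-function sketch (hypothetical names):

```python
def min_manhattan_distance(obstacle_cells, reference_cells):
    """Minimum Manhattan distance between any obstacle cell and any reference cell
    (e.g., the start location, goal location, center walls, grid center, or the
    cells visited on a trial)."""
    return min(abs(ox - rx) + abs(oy - ry)
               for (ox, oy) in obstacle_cells
               for (rx, ry) in reference_cells)
```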
Experimental Design
All experiments were pre-registered (see Data Availability Statement) and approved by the Prince-
ton Institutional Review Board (IRB). All participants were recruited from the Prolific online plat-
form and provided informed consent. At the end of each experiment, participants provided free-
response demographic information (age and gender, coded as male/female/neither). Experiments
were implemented with psiTurk53and jsPsych54frameworks (see Code Availability Statement).
Instructions and example trials are shown in the Supplementary Experimental Materials.
Initial experiment
Our initial experiment used a maze-navigation task in which participants moved a circle from
a starting location on a grid to a goal location using the arrow keys. The set of initial mazes
consisted of twelve 11×11 mazes with seven blue tetromino-shaped obstacles and center walls
arranged in a cross that blocked movement. On each trial, participants were first shown a screen
displaying only the center walls. When they pressed the spacebar, the circle they controlled, the
goal, and the obstacles appeared, and they could begin moving immediately. In addition, to ensure
that participants remained focused on moving, we placed a green square on the goal that shrank
and would disappear after 1000ms but reset whenever an arrow key was pressed, except at the
beginning of the trial when the green square took longer to shrink (5000ms). Participants received
$0.10 for reaching the goal without the green square disappearing (in addition to the base pay
of $0.98). The mazes were pseudo-randomly rotated or flipped, so the start and end states were
constantly changing, and the order of mazes was pseudo-randomized. After completing each
trial, participants received awareness probes, which showed a static image of the maze they had
just navigated, with one of the obstacles shown in light blue. They were asked “How aware of the
highlighted obstacle were you at any point?” and could respond using an 8-point scale (rescaled
from 0 to 1 for analyses). Probes were presented for the seven obstacles in a maze. None of the
probes were associated with a bonus.
We requested 200 participants on Prolific and received 194 complete submissions. Following
pre-registered exclusion criteria, a trial was excluded if, during navigation, >5000ms was spent at
the initial state, >2000ms was spent at any non-initial state, >20000ms was spent on the entire
trial, or >1500ms was spent in the last three steps in total. Participants with <80% of trials after
exclusions or who failed 2 of 3 comprehension questions were excluded, which resulted in n = 161
participants’ data being analyzed (median age of 28; 81 male, 75 female, 5 neither).
Up-front planning experiment
The up-front planning version of the memory experiment was designed to dissociate planning
and execution. The main change was that after participants took their first step, all of the blue
obstacles (but not the walls or goal) were removed from the display (though they still blocked
movement). This strongly encouraged planning prior to execution. To provide sufficient time to
plan, the green square took 60000ms to shrink on the first step. Additionally, on a random half
of the trials, after taking two steps, participants were immediately presented with the awareness
probes (early termination trials). The other half were full trials. We reasoned that responses
following early termination trials would better reflect awareness after planning but before execution
(see Supplementary Memory Experiment Analyses for analyses comparing early versus full trials).
We requested 200 participants on Prolific and received 188 complete submissions. The exclu-
sion criteria were the same as in the initial experiment, except that the initial state and total trial
time criteria were raised to 30000ms and 60000ms, respectively. After exclusions, we analyzed
data from n = 162 participants (median age of 28; 85 male, 72 female, 5 neither).
Critical mazes experiment
In the critical mazes experiment , participants again could not see the obstacles while executing
and so needed to plan up front, but no trials ended early. There were two main differences from
the previous experiments. First, we used a set of four critical mazes that included critical obsta-
cles chosen to test predictions specific to value-guided construal. These were obstacles relevant
to decision-making, but distant from the optimal path (see Supplementary Memory Experiment
Analyses for analyses focusing on these critical obstacles). Second, half of the participants re-
ceived recall probes in which they were shown a static image of the grid with only the walls, a
green obstacle, and a yellow obstacle. They were then asked “An obstacle was either in the yellow
or green location (not both), which one was it?” and could select either option, followed by a
confidence judgment on an 8-point scale (rescaled from 0 to 1 for analyses). Pairs of obstacles
and their contrasts in the critical mazes are shown in Extended Data Figure 4a. Participants each
received two blocks of the four critical mazes, pseudo-randomly oriented and/or flipped.
We requested 200 participants on Prolific and received 199 complete submissions. The trial
and participant exclusion criteria were the same as in the up-front planning experiment. After
exclusions, we analyzed data from n = 156 participants (median age of 26; 78 male, 75 female, 3
neither).
Control Experiments
The aim of the control experiments was to obtain yoked baselines for perception and execution for
comparison with probe responses in the memory studies. The perceptual control used a variant of
the task in which participants were shown patterns that were perceptually identical to the mazes.
Instead of solving a maze, they were told to “catch the red dot”: On each trial, a small red dot could
appear anywhere on the grid, and participants were rewarded based on whether they pressed the
spacebar after it appeared. Each participant was yoked to the responses of a participant from either
the up-front planning or critical mazes experiments. On yoked trials, participants were shown
the exact same maze/pattern as their counterpart. Additionally, they were shown the pattern for
the amount of time that their counterpart took before making their first move—since the obstacles
were not visible during execution for the counterpart, this is roughly the time the counterpart spent
looking at the maze to plan. A red dot never appeared on these trials, and they were followed by
the exact same probes that the counterpart received. References to “obstacles” were changed to
“tiles” (e.g., “highlighted tiles” as opposed to “highlighted obstacle” for the awareness probes).
We also included dummy trials , which showed mazes in orientations not appearing in the yoked
trials, for durations sampled from the yoked durations. Half of the dummy trials had red dots. We
recruited enough participants such that at least one participant was matched to each participant
from the original experiments and excluded people who said that they had participated in a similar
experiment. This resulted in data from n = 164 participants being analyzed for the initial mazes
perceptual control (median age of 30.5; 84 male, 79 female, 1 neither) and n = 172 for the critical
mazes perceptual control (median age of 36.5; 86 male, 85 female, 1 neither).
The execution control used a variant of the task in which participants followed a series of
“breadcrumbs” through the maze to the goal and so did not need to plan a path to the goal. Each
participant was yoked to a counterpart in either the initial experiment or the critical mazes experi-
ment so that the breadcrumbs were generated based on the exact path taken by the counterpart. The
ordering of the mazes and obstacle probes (i.e., awareness or location recall) was also the same.
We recruited participants until at least one participant was matched to each participant from the
original experiments. Additionally, we used the same exclusion criteria as in the initial experiment
with the additional requirement that all black dots be collected on a trial. This resulted in data from
n = 163 participants being analyzed for the initial mazes execution control (median age of 29; 86
male, 77 female) and n = 161 for the critical mazes execution control (median age of 30; 94 male,
63 female, 4 neither).
Process-Tracing Experiments
We ran process-tracing experiments using the initial mazes and the critical mazes. These experi-
ments were similar to the memory experiments, except they used a novel process-tracing paradigm
designed to externalize the planning process. Specifically, participants never saw all the obstacles
in the maze at once. Rather, at the beginning of a trial, after clicking on a red X in the center
of the maze, the goal and agent appeared, and participants could use their mouse to hover over
the maze and reveal individual obstacles. An obstacle would become completely visible if the
mouse hovered over any tile that was part of it for at least 25ms, until the mouse was moved to a
tile that was not part of that obstacle. Once the participant started to move using the arrow keys,
the cursor became temporarily invisible (to prevent using the cursor as a cue to guide execution),
and the obstacles could no longer be revealed. We examined two dependent measures for each
obstacle: whether participants hovered over an obstacle, and if so, the duration of hovering in log
milliseconds.
For each experiment with each set of mazes, we requested 200 participants on Prolific. Partic-
ipants who completed the task had their data excluded if they did not hover over any obstacles on
more than half of the trials. For the experiment with the initial set, we received completed submis-
sions from 174 people and, after exclusions, analyzed data from n = 167 participants (median age
of 30; 82 male, 82 female, 3 neither). For the experiment with the critical set, we received com-
pleted submissions from 188 people and, after exclusions, analyzed data from n = 179 participants
(median age of 32; 89 male, 86 female, 4 neither).
Experiment Analyses
Hierarchical generalized linear models (HGLMs) were implemented in Python and R using the
lme455and rpy256packages (see Code Availability Statement). For all models, we included by-
participant and by-maze random intercepts, unless the resulting model was singular, in which case
we removed by-maze random intercepts. For the memory experiment analyses testing whether
value-guided construal predicted responses, we fit models with and without z-score normalized
value-guided construal probabilities as a fixed effect and performed likelihood ratio tests to assess
significance. For the control experiment analyses reported in the main text, we calculated mean
by-obstacle responses from the perceptual and execution controls, and then included these values
as fixed effects in models fit to the responses in the planning experiments. We then contrasted
models with and without value-guided construal and performed likelihood ratio tests (additional
analyses are reported in the Supplementary Memory Experiment Analyses and Supplementary
Control Experiment Analyses).
For our comparison with alternative models, we considered 11 different predictors that assign
scores to obstacles in each maze: fixed-parameter value-guided construal modification probabil-
ity (VGC), trajectory-based heuristic search score (Traj HS), graph-based heuristic search score
(Graph HS), bottleneck state distance (Bottleneck), successor representation overlap (SR Over-
lap), minimum navigation distance (Nav Dist), timestep of minimum navigation distance (Nav
Dist Step), minimum optimal policy distance (Opt Dist), distance to goal (Goal Dist), distance to
start (Start Dist), distance to center walls (Wall Dist), and distance to the center of the maze (Cen-
ter Dist). We included predictors in the analysis of each experiment’s data where appropriate. For
example, in the up-front planning experiment, participants did not navigate on early termination
trials, and so we used the optimal policy distance rather than navigation distance. All predictors
were z-score normalized before being included as fixed effects in HGLMs in order to facilitate
comparison of estimated coefficients.
We performed three types of analyses using the 11 predictors. First, we wanted to determine
whether value-guided construal captured variability in responses from the planning experiments
even when accounting for the other predictors. For these analyses, we compared HGLMs that
included all predictors to HGLMs with all predictors except value-guided construal and tested
whether there was a significant difference in fit using likelihood ratio tests (Extended Data Table 1).
Second, we wanted to evaluate the relative necessity of each mechanism for explaining attention to
obstacles when planning. For these analyses, we compared global HGLMs to HGLMs with each
of the predictors removed and calculated the resulting change in AIC (see Extended Data Table
1 for estimated coefficients and resulting AIC values). Finally, we wanted to assess the relative
sufficiency of predictors in accounting for responses on the planning tasks. For these analyses, we
fit HGLMs to each set of responses that included only individual predictors or pairs of predictors,
and for each model we calculated the AIC relative to the best-fitting model (Extended Data
Figure 8). Note that for all of these models, AIC values are summed over participants.
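As an illustration of the basic model-comparison step, the following sketch fits a pair of lme4 models through rpy2 and runs a likelihood ratio test. The column names and the use of lmer (rather than glmer, which would be appropriate for binary measures such as recall accuracy or hovering) are assumptions for the example, and conversion details vary across rpy2 versions.

```python
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.packages import importr

pandas2ri.activate()                    # enable pandas <-> R data.frame conversion
lme4 = importr('lme4')
stats = importr('stats')

def vgc_likelihood_ratio_test(df):
    """df: pandas DataFrame with (hypothetical) columns 'response', 'vgc'
    (z-scored construal probability), 'participant', and 'maze'."""
    rdf = pandas2ri.py2rpy(df)
    null = lme4.lmer(ro.Formula('response ~ (1 | participant) + (1 | maze)'),
                     data=rdf, REML=False)
    full = lme4.lmer(ro.Formula('response ~ vgc + (1 | participant) + (1 | maze)'),
                     data=rdf, REML=False)
    return stats.anova(null, full)      # chi-squared likelihood ratio test
```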
Methods References
35. Nassar, M. R. & Frank, M. J. Taming the beast: extracting generalizable knowledge from
computational models of cognition. Current opinion in behavioral sciences 11,49–54 (2016).
40. Sutton, R. S. & Barto, A. G. Reinforcement learning: An introduction (MIT Press, 2018).
41. Parr, R. & Russell, S. Reinforcement learning with hierarchies of machines. Advances in
neural information processing systems 10(1997).
42. Virtanen, P. et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python.
Nature Methods 17,261–272 (2020).
43. Howard, R. A. Dynamic Programming and Markov Processes (1960).
44. Barto, A. G., Bradtke, S. J. & Singh, S. P. Learning to act using real-time dynamic program-
ming. Artificial intelligence 72,81–138 (1995).
45. Bonet, B. & Geffner, H. Labeled RTDP: Improving the Convergence of Real-Time Dynamic
Programming. in Proceedings of the International Conference on Automated Planning and
Scheduling 3, 12–21 (2003).
46. Hansen, E. A. & Zilberstein, S. LAO*: A heuristic search algorithm that finds solutions with
loops. Artificial Intelligence 129, 35–62 (2001).
47. Hart, P. E., Nilsson, N. J. & Raphael, B. A formal basis for the heuristic determination of
minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4, 100–107
(1968).
48. Momennejad, I. et al. The successor representation in human reinforcement learning. Nature
Human Behaviour 1,680–692 (2017).
49. Harris, C. R. et al. Array programming with NumPy. Nature 585, 357–362 (2020).
50. Russek, E. M., Momennejad, I., Botvinick, M. M., Gershman, S. J. & Daw, N. D. Predic-
tive representations can link model-based reinforcement learning to model-free mechanisms.
PLoS computational biology 13,e1005768 (2017).
51. Solway, A. et al. Optimal behavioral hierarchy. PLoS Computational Biology 10,e1003779
(2014).
52. Shi, J. & Malik, J. Normalized cuts and image segmentation. IEEE Transactions on pattern
analysis and machine intelligence 22,888–905 (2000).
53. Gureckis, T. M. et al. psiTurk: An open-source framework for conducting replicable behav-
ioral experiments online. Behavior research methods 48,829–842 (2016).
54. De Leeuw, J. R. jsPsych: A JavaScript library for creating behavioral experiments in a Web
browser. Behavior research methods 47,1–12 (2015).
55. Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting Linear Mixed-Effects Models Using
lme4. Journal of Statistical Software 67,1–48 (2015).
56. The rpy2 contributors. rpy2 version 3.3.6. Sept. 26, 2020. https://rpy2.github.io/.
Acknowledgements: The authors would like to thank Jessica Hamrick, Louis Gularte, Ceyda
Sayalı, Qiong Zhang, Rachit Dubey, and William Thompson for valuable feedback on this work.
This work was funded by NSF grant #1545126, John Templeton Foundation grant #61454, and
AFOSR grant #FA9550-18-1-0077.
Author Contributions: All authors contributed to conceptualizing the project and editing the
manuscript. MKH, DA, MLL, and TLG developed the value-guided construal model. MKH im-
plemented it. MKH and CGC implemented the heuristic search models and msdm library. MKH,
JDC, and TLG designed the experiments. MKH implemented the experiments, analyzed the re-
sults, and drafted the manuscript.
Competing Interest Declaration: The authors declare no competing interests.
Supplementary Information is available for this paper.
Data Availability Statement: Data for the current study are available through the Open Science
Framework repository http://doi.org/10.17605/OSF.IO/ZPQ69.
Code Availability Statement: Code for the current study is available through the Open Science
Framework repository http://doi.org/10.17605/OSF.IO/ZPQ69, which links to a GitHub repository
and contains an archived version of the repository. The value-guided construal model and alterna-
tive models were implemented in Python (3.7) using the msdm (0.6) library, numpy (1.19.2), and
scipy (1.5.2). Experiments were implemented using psiTurk (3.2.0) and jsPsych (6.0.1).
Hierarchical generalized linear regressions were implemented using rpy2 (3.3.6), lme4 (1.1.21),
and R (3.6.1).
Extended Data Fig. 1 | Experimental measures on mazes 0 to 5. Average responses associated
with each obstacle in mazes 0 to 5 in the initial experiment (awareness judgment), the up-front
planning experiment (awareness judgment), and the process-tracing experiment (whether an
obstacle was hovered over and, if so, the duration of hovering in log milliseconds). Obstacle
colors are normalized by the minimum and maximum values for each measure/maze, except for
awareness judgments, which are scaled from 0 to 1.
Extended Data Fig. 2 | Experimental measures on mazes 6 to 11. Average responses associated
with each obstacle in mazes 6 to 11 in the initial experiment (awareness judgment), the up-front
planning experiment (awareness judgment), and the process-tracing experiment (whether an
obstacle was hovered over and, if so, the duration of hovering in log milliseconds). Obstacle
colors are normalized by the minimum and maximum values for each measure/maze, except for
awareness judgments, which are scaled from 0 to 1.
Extended Data Fig. 3 | Experimental measures on mazes 12 to 15. Average responses associated
with each obstacle in mazes 12 to 15 in the critical mazes experiment (recall accuracy, recall
confidence, and awareness judgment) and the process-tracing experiment (whether an obstacle
was hovered over and, if so, the duration of hovering in log milliseconds). Obstacle colors are
scaled to range from 0.5 to 1.0 for accuracy, 0 to 1 for hovering, confidence, and awareness
judgments, and the minimum to maximum values across obstacles in a maze for hovering duration
in log milliseconds.
Extended Data Fig. 4 | Additional Experimental Details. a, Items from the critical mazes experiment.
Blue obstacles are the location of obstacles during the navigation part of the trial. Orange
obstacles with corresponding numbers are copies that were shown during location recall probes.
During recall probes, participants only saw an obstacle paired with its copy. b, Example trial from
the process-tracing experiment. Participants could never see all the obstacles at once, but, before
navigating, could use their mouse to reveal obstacles. We analyzed whether value-guided construal
predicted which obstacles people tended to hover over and, if so, the duration of hovering.
Extended Data Fig. 5 | Model predictions on mazes 0 through 7. Shown are the predictions
for six of the eleven predictors we tested: fixed parameter value-guided construal modification
obstacle probability (VGC, our model); trajectory-based heuristic search obstacle hit score (Traj
HS); graph-based heuristic search obstacle hit score (Graph HS); distance to optimal bottleneck
(Bottleneck); successor representation overlap score (SR Overlap); and distance to optimal paths
(Opt Dist) (see Methods, Model Implementations). Mazes 0 to 7 were all in the initial set of mazes.
Darker obstacles correspond to greater predicted attention according to the model. Obstacle colors
are normalized by the minimum and maximum values for each model/maze.
Extended Data Fig. 6 | Model predictions on mazes 8 through 15. Shown are the predictions
for six of the eleven predictors we tested (see Methods, Model Implementations). Mazes 8 to 11
were part of the initial set of mazes, while mazes 12 to 15 constituted the set of critical mazes.
Darker obstacles correspond to greater predicted attention according to the model. Obstacle colors
are normalized by the minimum and maximum values for each model/maze.
[Figure panels: for each measure (rows) and predictor (columns), scatter plots of mean by-obstacle
responses against predictor values. Simple linear regression R² values from the panels:]

Measure                                                VGC  Traj HS  Graph HS  Bottleneck  SR Overlap  Opt Dist  Goal Dist  Start Dist  Wall Dist  Center Dist
Initial Exp., Awareness Judgment                       .50  .05      .28       .32         .00         .55       .00        .00         .06        .05
Up-front Planning Exp., Awareness Judgment             .40  .01      .18       .39         .01         .53       .00        .00         .01        .01
Critical Mazes Exp., Recall Accuracy                   .83  .25      .42       .06         .11         .44       .15        .12         .04        .01
Critical Mazes Exp., Recall Confidence                 .80  .05      .19       .05         .02         .42       .27        .21         .02        .07
Critical Mazes Exp., Awareness Judgment                .71  .15      .27       .20         .05         .69       .11        .07         .00        .01
Process-Tracing (Initial Mazes 0-11), Hovering         .38  .23      .33       .17         .02         .32       .01        .03         .07        .07
Process-Tracing (Initial Mazes 0-11), Log-Hover Dur.   .29  .09      .17       .17         .00         .16       .00        .01         .00        .00
Process-Tracing (Critical Mazes 12-15), Hovering       .52  .22      .30       .03         .00         .21       .18        .13         .07        .17
Process-Tracing (Critical Mazes 12-15), Log-Hover Dur. .42  .12      .11       .04         .00         .13       .11        .08         .12        .24

Extended Data Fig. 7 | Summaries of candidate models and data from planning experiments.
Each row corresponds to a measurement of attention to obstacles from a planning experiment:
awareness judgments from the initial memory experiment, the up-front planning experiment,
and the critical mazes experiment; recall accuracy and confidence from the critical mazes
experiment; and the binary hovering measure and hovering duration measure (in log milliseconds)
from the two process-tracing experiments. Each column corresponds to candidate processes that
could predict attention to obstacles: fixed parameter value-guided construal modification obstacle
probability (VGC, our model), trajectory-based heuristic search hit score (Traj HS), graph-based
heuristic search hit score (Graph HS), distance to bottleneck states (Bottleneck), successor-
representation overlap (SR Overlap), expected distance to optimal paths (Opt Dist), distance to the
goal location (Goal Dist), distance to the start location (Start Dist), distance to the invariant black
walls (Wall Dist), and distance to the center of the maze (Center Dist). Note that for distance-based
predictors, the x-axis is flipped. For each predictor, we quartile-binned the predictions across
obstacles, and for each bin we plot (bright red lines) the mean and standard deviation of the predictor
and mean by-obstacle response (overlapping bins were collapsed into a single bin). Black circles
correspond to the mean response and prediction for each obstacle in each maze. Dashed dark red
lines are simple linear regressions on the black circles, with R² values shown in the lower right
of each plot. Across the nine measures, value-guided construal tracks attention to obstacles, while
other candidate processes are less consistently associated with obstacle attention (data are based
on n = 84,215 observations taken from 825 independent participants).
Extended Data Table 1 | Necessity of different mechanisms for explaining attention to obstacles
when planning. For each measure in each planning experiment, we fit hierarchical generalized
linear models (HGLMs) that included the following predictors as fixed effects: fixed parameter
value-guided construal modification obstacle probability (VGC, our model); trajectory-based
heuristic search obstacle hit score (Traj HS); graph-based heuristic search obstacle hit score (Graph
HS); distance to optimal bottleneck (Bottleneck); successor representation overlap score (SR Over-
lap); distance to path taken (Nav Dist); timestep of point closest along path taken (Nav Dist Step);
distance to optimal paths (Opt Dist); distance to the goal state (Goal Dist); distance to the start
state (Start Dist); distance to any part of the center walls (Wall Dist); and distance to the center of
the maze (Center Dist) (Methods, Model Implementations). If the measure was taken before par-
ticipants navigated, distance to the optimal paths was used, otherwise, distance to the path taken
and its timestep were used. a, b, Estimated coefficients and standard errors for z-score normalized
predictors in HGLMs fit to responses from the initial experiment, up-front planning experiment (F
= full trials, E = early termination trials), the critical mazes experiment, and the process-tracing ex-
periments. We found that value-guided construal was a significant predictor even when accounting
for alternatives (likelihood ratio tests between full global models and models without value-guided
construal: Initial Exp, Awareness: χ²(1) = 501.11, p < 1.0×10⁻¹⁶; Up-front Exp, Awareness (F):
χ²(1) = 282.17, p < 1.0×10⁻¹⁶; Up-front Exp, Awareness (E): χ²(1) = 206.14, p < 1.0×10⁻¹⁶;
Critical Mazes Exp, Accuracy: χ²(1) = 114.87, p < 1.0×10⁻¹⁶; Critical Mazes Exp, Confi-
dence: χ²(1) = 181.28, p < 1.0×10⁻¹⁶; Critical Mazes Exp, Awareness: χ²(1) = 486.99, p <
1.0×10⁻¹⁶; Process-Tracing Exp (Initial Mazes), Hovering: χ²(1) = 294.40, p < 1.0×10⁻¹⁶;
Process-Tracing Exp (Initial Mazes), Duration: χ²(1) = 177.58, p < 1.0×10⁻¹⁶; Process-Tracing
Exp (Critical Mazes), Hovering: χ²(1) = 183.52, p < 1.0×10⁻¹⁶; Process-Tracing Exp (Critical
Mazes), Duration: χ²(1) = 251.16, p < 1.0×10⁻¹⁶). c, To assess the relative necessity of each
predictor for the fit of a HGLM, we conducted lesioning analyses in which, for each predictor in a
given global HGLM, we fit a new lesioned HGLM with only that predictor removed. Each entry of
the table shows the change in AIC when comparing global and lesioned HGLMs, where larger pos-
itive values indicate a greater reduction in fit as a result of removing a predictor. According to this
criterion, across all experiments and measures, value-guided construal is either the first or second
most important predictor. *Largest increase in AIC after lesioning; †Second-largest increase.
Extended Data Figure 8 | Sufficiency of individual and pairs of mechanisms for explaining
attention to obstacles when planning. To assess the individual and pairwise sufficiency of each
predictor for explaining responses in the planning experiments, we fit hierarchical generalized
linear models (HGLMs) that included pairs of predictors as fixed effects. Each lower-triangle plot
corresponds to one of the experimental measures, where pairs of predictors included in a HGLM
as fixed effects are indicated on the x- and y-axes. Values are the AIC for each model relative
to the best-fitting model associated with an experimental measure (lower values indicate better
fit). Values along the diagonals correspond to models fit with a single predictor. According to this
criterion, across all experimental measures, value-guided construal is the first, second, or third best
single-predictor HGLM, and is always in the best two-predictor HGLM.
Extended Data Table 2 | Algorithm for Computing the Value of Representation Function.
To obtain predictions for our ideal model of value-guided construal, we calculated the value of
representation of all construals in a maze. This was done by enumerating all construals (subsets of
obstacle effects) and then, for each construal, calculating its behavioral utility and cognitive cost.
This allows us to obtain theoretically optimal value-guided construals. For a discussion of alterna-
tive ways of calculating construals, see the Supplementary Discussion of Construal Optimization
Algorithms.
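For completeness, a minimal sketch of the enumeration this table describes: iterate over all subsets of obstacle effects and score each by its behavioral utility minus its cognitive cost. The utility and cost functions are placeholders for the model components defined in the Methods.

```python
from itertools import combinations

def best_construal(obstacles, construal_utility, construal_cost):
    """Enumerate all construals (subsets of obstacle effects) and return the one
    maximizing the value of representation (utility minus cost), along with the
    full table of scores."""
    construals = [frozenset(c)
                  for r in range(len(obstacles) + 1)
                  for c in combinations(obstacles, r)]
    vor = {c: construal_utility(c) - construal_cost(c) for c in construals}
    return max(vor, key=vor.get), vor
```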