ScanGAN360: A Generative Model of Realistic Scanpaths for 360° Images
Daniel Martin1  Ana Serrano2  Alexander W. Bergman3  Gordon Wetzstein3
Belen Masia1
1Universidad de Zaragoza, I3A   2Centro Universitario de la Defensa, Zaragoza   3Stanford University
Abstract
Understanding and modeling the dynamics of human
gaze behavior in 360° environments is a key challenge in
computer vision and virtual reality. Generative adversar-
ial approaches could alleviate this challenge by generat-
ing a large number of possible scanpaths for unseen im-
ages. Existing methods for scanpath generation, however,
do not adequately predict realistic scanpaths for 360° im-
ages. We present ScanGAN360, a new generative adver-
sarial approach to address this challenging problem. Our
network generator is tailored to the specifics of 360° im-
ages representing immersive environments. Specifically, we
accomplish this by leveraging the use of a spherical adapta-
tion of dynamic time warping as a loss function and propos-
ing a novel parameterization of 360° scanpaths. The quality
of our scanpaths outperforms competing approaches by a
large margin and is almost on par with the human baseline.
ScanGAN360 thus allows fast simulation of large numbers
of virtual observers, whose behavior mimics real users, en-
abling a better understanding of gaze behavior and novel
applications in virtual scene design.
1. Introduction
Virtual reality (VR) is an emerging medium that unlocks
unprecedented user experiences. To optimize these expe-
riences, however, it is crucial to develop computer vision
techniques that help us understand how people explore im-
mersive virtual environments. Models for time-dependent
visual exploration behavior are important for designing and
editing VR content [42], for generating realistic gaze trajec-
tories of digital avatars [18], for understanding dynamic vi-
sual attention and visual search behavior [60], and for devel-
oping new rendering, display, and compression algorithms,
among other applications.
Current approaches that model how people explore vir-
tual environments often leverage saliency prediction [43,
13, 31, 2]. While this is useful for some applications, the
fixation points predicted by these approaches do not account
Figure 1. We present ScanGAN360, a generative adversarial ap-
proach to scanpath generation for 360° images. ScanGAN360
generates realistic scanpaths (bottom rows), outperforming state-
of-the-art methods and mimicking the human baseline (top row).
for the time-dependent visual behavior of the user, making
it difficult to predict the order of fixations, or give insight
into how people explore an environment over time. For this
purpose, some recent work has explored scanpath predic-
tion [2, 3, 62, 4], but these algorithms do not adequately
model how people explore immersive virtual environments,
resulting in erratic or non-plausible scanpaths.
In this work, we present ScanGAN360, a novel frame-
work for scanpath generation for 360° images (Figure 1).
Our model builds on a conditional generative adversarial
network (cGAN) architecture, for which we discuss and val-
idate two important insights that we show are necessary for
realistic scanpath generation. First, we propose a loss func-
tion based on a spherical adaptation of dynamic time warp-
ing (DTW), which is a key aspect for training our GAN ro-
bustly. DTW is a metric for measuring similarity between
two time series, such as scanpaths, which to our knowledge
has not been used to train scanpath-generating GANs. Sec-
ond, to adequately tackle the problem of scanpath genera-
tion in 360° images, we present a novel parameterization of the scanpaths. These insights allow us to demonstrate state-
of-the-art results for scanpath generation in VR, close to the
human baseline and far surpassing the performance of ex-
isting methods. Our approach is the first to enable robust
scanpath prediction over long time periods up to 30 sec-
onds, and, unlike previous work, our model does not rely
on saliency, which is typically not available as ground truth.
Our model produces about 1,000 scanpaths per second,
which enables fast simulation of large numbers of virtual
observers , whose behavior mimics that of real users. Us-
ing ScanGAN360, we explore applications in virtual scene
design, which is useful in video games, interior design,
cinematography, and tourism, and scanpath-driven video
thumbnail generation of 360° images, which provides pre-
views of VR content for social media platforms. Beyond
these applications, we propose to use ScanGAN360 for
applications such as gaze behavior simulation for virtual
avatars or gaze-contingent rendering. Extended discussion
and results on applications are included in the supplemen-
tary material and video.
We will make our source code and pre-trained model
publicly available to promote future research.
2. Related work
Modeling and predicting attention The multimodal na-
ture of attention [30], together with the complexity of hu-
man gaze behavior, makes this a very challenging task. Many
works devoted to it have relied on representations such as
saliency, which is a convenient representation for indicat-
ing the regions of an image more likely to attract atten-
tion. Early strategies for saliency modeling have focused
on either creating hand-crafted features representative of
saliency [19, 52, 61, 29, 20, 7], or directly learning data-
driven features [49, 22]. With the proliferation of exten-
sive datasets of human attention [43, 39, 20, 8, 59], deep
learning–based methods for saliency prediction have been
successfully applied, yielding impressive results [37, 36, 14,
50, 54, 55, 58].
However, saliency models do not take into account the
dynamic nature of human gaze behavior, and therefore, they
are unable to model or predict time-varying aspects of at-
tention. Being able to model and predict dynamic explo-
ration patterns has been proven to be useful, for example,
for avatar gaze control [12, 41], video rendering in virtual
reality [26], or for directing users’ attention over time in
many contexts [9, 38]. Scanpath models aim to predict vi-
sual patterns of exploration that an observer would perform
when presented with an image. In contrast to saliency mod-
els, scanpath models typically focus on predicting plausi-
ble scanpaths, i.e., they do not predict a unique scanpath
and instead they try to mimic human behavior when ex-
ploring an image, taking into account the variability be-
tween different observers. Ellis and Smith [16] were pioneers in this field: they proposed a general framework for
generating scanpaths based on Markov stochastic processes.
Several approaches have followed this work, incorporating
behavioral biases in the process in order to produce more
plausible scanpaths [24, 47, 27, 48]. In recent years, deep
learning models have been used to predict human scanpaths
based on neural network features trained on object recogni-
tion [22, 53, 14, 5].
Attention in 360° images Predicting plausible scanpaths
in 360° imagery is a more complex task: Observers not
only scan a given image with their gaze, but can now
also turn their head or body, effectively changing their view-
port over time. Several works have been proposed for mod-
eling saliency in 360° images [33, 43, 31, 11, 44]. However,
scanpath prediction has received less attention. In their re-
cent work, Assens et al. [3] generalize their 2D model to
360° images, but their loss function is unable to reproduce
the behavior of ground truth scanpaths (see Figure 4, third
column). A few works have focused on predicting short-
term sequential gaze points based on users’ previous his-
tory for 360° videos, but they are limited to small temporal
windows (from one to ten seconds) [56, 25, 35]. For the
case of images, a number of recent methods focus on devel-
oping improved saliency models and principled methods to
sample from them [2, 4, 62].
Instead, we directly learn dynamic aspects of attention
from ground truth scanpaths by training a generative model
in an adversarial manner, with an architecture and loss
function specifically designed for scanpaths in 360° im-
ages. This allows us to (i) effectively mimic human be-
havior when exploring scenes, bypassing the saliency gen-
eration and sampling steps, and (ii) optimize our network to
stochastically generate 360° scanpaths, taking into account
observer variability.
3. Our Model
We adopt a generative adversarial approach, specifically
designed for 360° content, in which the model learns to gen-
erate a plausible scanpath given the 360° image as a con-
dition. In the following, we describe the parameterization
employed for the scanpaths, the design of our loss function
for the generator, and the particularities of our conditional
GAN architecture, ending with details about the training
process.
3.1. Scanpath Parameterization
Scanpaths are commonly provided as a sequence of two-
dimensional values corresponding to the coordinates (i, j)
of each gaze point in the image. When dealing with 360°
images in equirectangular projections, gaze points are also
often represented by their latitude and longitude (θ, φ),
with θ ∈ [−π/2, π/2] and φ ∈ [−π, π]. However, these parame-
terizations either suffer from discontinuities at the borders
of a 360° image, or result in periodic, ambiguous values.
The same point of the scene can have two different repre-
sentations in these parameterizations, hindering the learning
process.

Figure 2. Illustration of our generator and discriminator networks. Both networks have a two-branch structure: Features extracted from the 360° image with the aid of a CoordConv layer and an encoder-like network are concatenated with the input vector for further processing. The generator learns to transform this input vector, conditioned by the image, into a plausible scanpath. The discriminator takes as input vector a scanpath (either captured or synthesized by the generator), as well as the corresponding image, and determines the probability of this scanpath being real (or fake). We train them end-to-end in an adversarial manner, following a conditional GAN scheme. Please refer to the text for details on the loss functions and architecture.

We therefore resort to a three-dimensional parameteriza-
tion of our scanpaths, where each gaze point p = (θ, φ) is
transformed into its three-dimensional representation P =
(x, y, z) such that:

\[ x = \cos\theta \cos\phi, \quad y = \cos\theta \sin\phi, \quad z = \sin\theta. \]

This transformation assumes, without loss of generality,
that the panorama is projected over a unit sphere. We
use this parameterization for our model, which learns a
scanpath P as a set of three-dimensional points over time.
Specifically, given a number of samples T over time, P =
(P_1, ..., P_T) ∈ R^{3×T}. The results of the model are then
converted back to a two-dimensional parameterization in
terms of latitude (θ = atan2(z, √(x² + y²))) and longitude
(φ = atan2(y, x)) for display and evaluation purposes.
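As a minimal illustration of this parameterization (a sketch with hypothetical function names, not the implementation used in this work), the conversion between latitude/longitude and the 3D unit-sphere representation, and its inverse, can be written as:

```python
import numpy as np

def latlon_to_xyz(theta, phi):
    """Map latitude/longitude (radians) to points on the unit sphere,
    following the parameterization above."""
    x = np.cos(theta) * np.cos(phi)
    y = np.cos(theta) * np.sin(phi)
    z = np.sin(theta)
    return np.stack([x, y, z], axis=-1)

def xyz_to_latlon(p):
    """Inverse mapping, used for display and evaluation."""
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    theta = np.arctan2(z, np.sqrt(x**2 + y**2))  # latitude
    phi = np.arctan2(y, x)                       # longitude
    return theta, phi

# Example: a scanpath of T gaze points given as (latitude, longitude) pairs.
T = 30
theta = np.random.uniform(-np.pi / 2, np.pi / 2, T)
phi = np.random.uniform(-np.pi, np.pi, T)
P = latlon_to_xyz(theta, phi)          # shape (T, 3)
theta_rec, phi_rec = xyz_to_latlon(P)  # recovers the original angles
```

Because the 3D representation is continuous everywhere on the sphere, the border discontinuities and periodic ambiguities of the 2D parameterizations disappear.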
3.2. Overview of the Model
Our model is a conditional GAN, where the condition
is the RGB 360° image for which we wish to estimate a
scanpath. The generator G is trained to generate a scanpath
from a latent code z (drawn randomly from a uniform distri-
bution, U(−1, 1)), conditioned by the RGB 360° image y.
The discriminator D takes as input a potential scanpath (x
or G(z, y)), as well as the condition y (the RGB 360° im-
age), and outputs the probability of the scanpath being real
(or fake). The architecture of both networks, generator and
discriminator, can be seen in Figure 2, and further details
related to the architecture are described in Section 3.4.
3.3. Loss Function
The objective function of a conventional conditional
GAN is inspired by a minimax objective from game the-
ory [32]:

\[ \min_G \max_D V(D, G) = \mathbb{E}_x[\log D(x, y)] + \mathbb{E}_z[\log(1 - D(G(z, y), y))]. \tag{1} \]

We can separate this into two losses, one for the generator,
L_G, and one for the discriminator, L_D:

\[ \mathcal{L}_G = \mathbb{E}_z[\log(1 - D(G(z, y), y))], \tag{2} \]

\[ \mathcal{L}_D = \mathbb{E}_x[\log D(x, y)] + \mathbb{E}_z[\log(1 - D(G(z, y), y))]. \tag{3} \]
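For illustration only, a minimal PyTorch-style sketch of these two losses follows; `D`, `G`, and the latent dimension are placeholders and do not reflect the actual architecture described in Section 3.4:

```python
import torch

def gan_losses(D, G, real_scanpath, image, latent_dim=128):
    """Sketch of the adversarial losses in Equations 2 and 3.
    D and G are placeholder modules: D(scanpath, image) returns the
    probability of the scanpath being real; G(z, image) returns a scanpath."""
    batch = real_scanpath.shape[0]
    z = torch.rand(batch, latent_dim, device=real_scanpath.device) * 2 - 1  # z ~ U(-1, 1)
    fake_scanpath = G(z, image)

    eps = 1e-7
    d_real = D(real_scanpath, image).clamp(eps, 1 - eps)
    d_fake_for_D = D(fake_scanpath.detach(), image).clamp(eps, 1 - eps)
    d_fake_for_G = D(fake_scanpath, image).clamp(eps, 1 - eps)

    # L_D = E_x[log D(x,y)] + E_z[log(1 - D(G(z,y),y))]: D maximizes this,
    # so we return its negative to be minimized.
    loss_D = -(torch.log(d_real) + torch.log(1 - d_fake_for_D)).mean()
    # L_G = E_z[log(1 - D(G(z,y),y))]: G minimizes this directly.
    loss_G = torch.log(1 - d_fake_for_G).mean()
    return loss_G, loss_D
```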
While this objective function suffices in certain cases, as
the complexity of the problem increases, the generator may
not be able to learn the transformation from the input distri-
bution into the target one. One can resort to adding a loss
term to L_G, and in particular one that enforces similarity to
the ground truth scanpath data. However, using a conven-
tional data term, such as MSE, does not yield good results
(Section 4.4 includes an evaluation of this). To address this
issue, we introduce a novel term in L_G, specifically targeted
to our problem and based on dynamic time warping [34].

Dynamic time warping (DTW) measures the similarity
between two temporal sequences, considering both the
shape and the order of the elements of a sequence, with-
out forcing a one-to-one correspondence between elements
of the time series. For this purpose, it takes into account
all the possible alignments of two time series r and s, and
computes the one that yields the minimal distance between
them. Specifically, the DTW loss function between two
time series r ∈ R^{k×n} and s ∈ R^{k×m} can be expressed
as [15]:

\[ \mathrm{DTW}(r, s) = \min_{A} \langle A, \Delta(r, s) \rangle, \tag{4} \]

where Δ(r, s) = [δ(r_i, s_j)]_{ij} ∈ R^{n×m} is a matrix con-
taining the distances δ(·,·) between each pair of points in r
and s, A is a binary matrix that accounts for the alignment
(or correspondence) between r and s, and ⟨·,·⟩ is the inner
product between both matrices.
In our case, r = (r_1, ..., r_T) ∈ R^{3×T} and s =
(s_1, ..., s_T) ∈ R^{3×T} are two scanpaths that we wish to com-
pare. While the Euclidean distance between each pair of
points is usually employed when computing δ(r_i, s_j) for
Equation 4, in our scenario that would yield erroneous dis-
tances derived from the projection of the 360° image (both
if done in 2D over the image, or in 3D with the parameteri-
zation described in Section 3.1). We instead use the distance
over the surface of a sphere, or spherical distance, and de-
fine Δ_sph(r, s) = [δ_sph(r_i, s_j)]_{ij} ∈ R^{n×m} such that:

\[ \delta_{sph}(r_i, s_j) = 2 \arcsin\left( \tfrac{1}{2} \sqrt{ (r_i^x - s_j^x)^2 + (r_i^y - s_j^y)^2 + (r_i^z - s_j^z)^2 } \right), \tag{5} \]

leading to our spherical DTW:

\[ \mathrm{DTW}_{sph}(r, s) = \min_{A} \langle A, \Delta_{sph}(r, s) \rangle. \tag{6} \]
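For reference, a short numpy sketch of Δ_sph and of the classic (non-differentiable) DTW recursion over it is shown below; the differentiable soft-DTW actually used for training is described in Section S1 of the supplementary material, and the function names here are illustrative only:

```python
import numpy as np

def delta_sph(r, s):
    """Pairwise spherical (great-circle) distances between two scanpaths
    r (n x 3) and s (m x 3) given as unit-sphere points, as in Equation 5."""
    chord = np.linalg.norm(r[:, None, :] - s[None, :, :], axis=-1)  # (n, m)
    return 2.0 * np.arcsin(np.clip(0.5 * chord, -1.0, 1.0))

def dtw(dist):
    """Classic (non-differentiable) DTW over a precomputed cost matrix,
    i.e., the minimum over alignments in Equations 4 and 6."""
    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m]

# Example with two random 30-point scanpaths on the unit sphere.
rng = np.random.default_rng(0)
r = rng.normal(size=(30, 3)); r /= np.linalg.norm(r, axis=1, keepdims=True)
s = rng.normal(size=(30, 3)); s /= np.linalg.norm(s, axis=1, keepdims=True)
print(dtw(delta_sph(r, s)))
```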
We incorporate the spherical DTW into the loss function of
the generator (L_G, Equation 2), yielding our final generator
loss function L*_G:

\[ \mathcal{L}^{*}_{G} = \mathcal{L}_G + \lambda \, \mathbb{E}_z[\mathrm{DTW}_{sph}(G(z, y), \xi)], \tag{7} \]

where ξ is a ground truth scanpath for the conditioning im-
age y, and the weight λ is empirically set to 0.1.
While a loss function incorporating DTW (or spherical
DTW) is not differentiable, a differentiable version, soft-
DTW, has been proposed. We use this soft-DTW in our
model; details on it can be found in Section S1 in the sup-
plementary material or in the original publication [15].
3.4. Model Architecture
Both our generator and discriminator are based on a two-
branch structure (see Figure 2), with one branch for the con-
ditioning image y and the other for the input vector (z in the
generator, and x or G(z, y) in the discriminator). The im-
age branch extracts features from the 360° image, yielding
a set of latent features that will be concatenated with the
input vector for further processing. Due to the distortion
inherent to equirectangular projections, traditional convo-
lutional feature extraction strategies are not well suited for
360° images: They use a kernel window where neighboring
relations are established uniformly around a pixel. Instead,
we extract features using panoramic (or spherical) convolu-
tions [13]. Spherical convolutions are a type of dilated con-
volutions where the relations between elements in the im-
age are not established in image space, but in a gnomonic,
non-distorted space. These spherical convolutions can repre-
sent kernels as patches tangent to the sphere onto which the
360° image is reprojected.
In our problem of scanpath generation, the location of
the features in the image is of particular importance. There-
fore, to facilitate spatial learning of the network, we use the
recently presented CoordConv strategy [28], which gives
convolutions access to their own input coordinates by adding
extra coordinate channels. We do this by concatenating a
CoordConv layer to the input 360° image (see Figure 2).
This layer also helps stabilize the training process, as shown
in Section 4.4.
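A minimal sketch of a CoordConv-style concatenation of coordinate channels is shown below; the exact channels and their normalization are assumptions for illustration, not the layer used in this work:

```python
import torch

def add_coord_channels(image):
    """Append normalized (x, y) coordinate channels to a batch of
    equirectangular images, in the spirit of CoordConv [28].
    image: tensor of shape (B, C, H, W)."""
    b, _, h, w = image.shape
    ys = torch.linspace(-1.0, 1.0, h, device=image.device)
    xs = torch.linspace(-1.0, 1.0, w, device=image.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y]).expand(b, -1, -1, -1)  # (B, 2, H, W)
    return torch.cat([image, coords], dim=1)                      # (B, C+2, H, W)

# Example: a panorama batch of shape (1, 3, 128, 256) becomes (1, 5, 128, 256).
panorama = torch.rand(1, 3, 128, 256)
print(add_coord_channels(panorama).shape)
```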
3.5. Dataset and Training Details
We train our model using Sitzmann et al.'s [43] dataset,
composed of 22 different 360° images and a total of 1,980
scanpaths from 169 different users. Each scanpath contains
gaze information captured during 30 seconds with a binoc-
ular eye tracking recorder at 120 Hz. We sample these cap-
tured scanpaths at 1 Hz (i.e., T = 30), and reparameter-
ize them (Section 3.1), so that each scanpath is a sequence
P = (P_0, ..., P_29) ∈ R^{3×T}. Given the relatively small size
of the dataset, we perform data augmentation by longitu-
dinally shifting the 360° images (and adjusting their scan-
paths accordingly); specifically, for each image we generate
six different variations with random longitudinal shifting.
We use 19 of the 22 images in this dataset for training, and
reserve three to be part of our test set (more details on the
full test set are described in Section 4). With the data aug-
mentation process, this yields 114 images in the training set.
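A sketch of this longitudinal-shift augmentation follows, assuming scanpaths stored as (longitude, latitude) pairs in radians and a standard equirectangular layout where longitude grows with the image column; both are assumptions for illustration:

```python
import numpy as np

def longitudinal_shift(image, scanpath_lonlat, shift_frac):
    """Shift an equirectangular image horizontally by a fraction of its width
    and shift the scanpath longitudes accordingly.
    image: (H, W, 3); scanpath_lonlat: (T, 2) array of (longitude, latitude)
    in radians, longitude in [-pi, pi]."""
    h, w, _ = image.shape
    shift_px = int(round(shift_frac * w))
    shifted_image = np.roll(image, shift_px, axis=1)

    shifted = scanpath_lonlat.copy()
    shifted[:, 0] = shifted[:, 0] + 2 * np.pi * shift_frac
    # Wrap longitude back into [-pi, pi).
    shifted[:, 0] = (shifted[:, 0] + np.pi) % (2 * np.pi) - np.pi
    return shifted_image, shifted

# Example: six random longitudinal shifts of one image and its scanpath.
image = np.zeros((256, 512, 3), dtype=np.uint8)
scanpath = np.stack([np.random.uniform(-np.pi, np.pi, 30),
                     np.random.uniform(-np.pi / 2, np.pi / 2, 30)], axis=1)
augmented = [longitudinal_shift(image, scanpath, np.random.uniform())
             for _ in range(6)]
```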
During our training process we use the Adam opti-
mizer [21], with constant learning rates l_G = 10^{-4} for the
generator and l_D = 10^{-5} for the discriminator, both of
them with momentum β = (0.5, 0.99). Further training and
implementation details can be found in the supplementary
material.
4. Validation and Analysis
We evaluate the quality of the generated scanpaths with
respect to the measured, ground truth scanpaths, as well as
to other approaches. We also ablate our model to illustrate
the contribution of the different design choices.

Figure 3. Results of our model for two different scenes: market and mall from Rai et al.'s dataset [39]. From left to right: 360° image, ground truth sample scanpath, and three scanpaths generated by our model. The generated scanpaths are plausible and focus on relevant parts of the scene, yet they exhibit the diversity expected among different human observers. Please refer to the supplementary material for a larger set of results.
We evaluate our model on two different test sets. First,
we use the three images from Sitzmann et al.'s dataset [43]
left out of the training (Section 3.5): room, chess, and robots.
To assess our model's ability to extrapolate, we also
evaluate it on a different dataset from Rai et al. [39]. This
dataset consists of 60 scenes watched by 40 to 42 observers
for 25 seconds. Thus, when comparing to their ground truth,
we cut our 30-second scanpaths to the maximum length of
their data. Please also refer to the supplementary material
for more details on the test set, as well as further evaluation
and results.
4.1. Scanpath Similarity Metrics
Our evaluation is both quantitative and qualitative. Eval-
uating scanpath similarity is not a trivial task, and a num-
ber of metrics have been proposed in the literature, each fo-
cused on a different context or aspect of gaze behavior [17].
Proposed metrics can be roughly categorized into: (i) di-
rect measures based on Euclidean distance; (ii) string-based
measures based on string alignment techniques (such as the
Levenshtein distance, LEV); (iii) curve similarity methods;
(iv) metrics from time-series analysis (like DTW, on which
our loss function is based); and (v) metrics from recurrence
analysis ( e.g., recurrence measure REC and determinism
measure DET). We refer the reader to the supplementary ma-
terial and the review by Fahimi and Bruce [17] for an in-depth
explanation and comparison of existing metrics. Here, we
include a subset of metrics that take into account both the
position and the ordering of the points (namely LEV and
DTW), and two metrics from recurrence analysis (REC and
DET), which have been reported to be discriminative in
revealing viewing behaviors and patterns when comparing
scanpaths. We nevertheless compute our evaluation for the
full set of metrics reviewed by Fahimi and Bruce [17] in the
supplementary material.
Since for each image we have a number of ground truth scanpaths, and a set of generated scanpaths, we compute
each similarity metric for all possible pairwise comparisons
(each generated scanpath against each of the ground truth
scanpaths), and average the result. In order to provide an
upper baseline for each metric, we also compute the human
baseline ( Human BL ) [57], which is obtained by comparing
each ground truth scanpath against all the other ground truth
ones, and averaging the results. In a similar fashion, we
compute a lower baseline based on sampling gaze points
randomly over the image ( Random BL ).
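Schematically, and with a placeholder `metric` function (e.g., a DTW implementation), the pairwise evaluation and the human baseline can be computed as follows; this is an illustrative sketch rather than the evaluation code used in this work:

```python
import numpy as np

def average_pairwise(generated, ground_truth, metric):
    """Average a similarity metric over all (generated, ground truth) pairs."""
    scores = [metric(g, gt) for g in generated for gt in ground_truth]
    return float(np.mean(scores))

def human_baseline(ground_truth, metric):
    """Compare each ground truth scanpath against all the others and average."""
    scores = [metric(ground_truth[i], ground_truth[j])
              for i in range(len(ground_truth))
              for j in range(len(ground_truth)) if i != j]
    return float(np.mean(scores))
```

The random baseline is obtained analogously, replacing the generated scanpaths with gaze points sampled uniformly over the image.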
4.2. Results
Qualitative results of our model can be seen in Figures 3
and 1 for scenes with different layouts. Figure 3, from left
to right, shows: the scene, a sample ground truth (captured)
scanpath, and three of our generated scanpaths sampled
from the generator. Our model is able to produce plausible,
coherent scanpaths that focus on relevant parts of the scene.
In the generated scanpaths we observe regions where the
user focuses (points of a similar color clustered together), as
well as more exploratory behavior. The generated scanpaths
are diverse but plausible, as one would expect if different
users watched the scene (the supplementary material con-
tains more ground truth, measured scanpaths, showing this
diversity). Further, our model is not affected by the inherent
distortions of the 360° image. This is apparent, for exam-
ple, in the market scene: The central corridor, narrow and
seemingly featureless, is observed by generated virtual ob-
servers . Quantitative results in Table 1 further show that our
generated scanpaths are close to the human baseline ( Hu-
man BL ), both in the test set from Sitzmann et al.’s dataset,
and over Rai et al.’s dataset. A value close to Human BL in-
dicates that the generated scanpaths are as valid or as plau-
sible as the captured, ground truth ones. Note that obtaining
a value lower than Human BL is possible, if the generated
scanpaths are on average closer to the ground truth ones,
and exhibit less variance.
Figure 4. Qualitative comparison to previous methods for five different scenes from Rai et al.'s dataset. In each row, from left to right: 360° image, and a sample scanpath obtained with our method, PathGAN [3], SaltiNet [4], and Zhu et al.'s [62]. Note that, in the case of PathGAN, we are including the results directly taken from their paper, thus the different visualization. Our method produces plausible scanpaths focused on meaningful regions, in comparison with other techniques. Please see the text for details, and the supplementary material for a larger set of results, also including ground truth scanpaths.

Since our model is generative, it can generate as many
scanpaths as needed and model many different potential ob-
servers. We perform our evaluations on a random set of 100
scanpaths generated by our model. We choose this num-
ber to match the number of generated scanpaths available
for competing methods, to perform a fair comparison. Nev-
ertheless, we have analyzed the stability of our generative
model by computing our evaluation metrics for a variable
number of generated scanpaths: Our results are very sta-
ble with the number of scanpaths (please see Table 2 in the
supplementary material).
4.3. Comparison to Other Methods
We compare ScanGAN360 to three methods devoted to
scanpath prediction in 360° images: SaltiNet-based scan-
path prediction [2, 4] (we will refer to it as SaltiNet in the
following), PathGAN [3] and Zhu et al.’s method [62]. For
comparisons to SaltiNet we use the public implementation
of the authors, while the authors of Zhu et al. kindly pro-
vided us with the results of their method for the images from
Rai et al.’s dataset (but not for Sitzmann et al.’s); we there-
fore have both qualitative (Figure 4) and quantitative (Ta-
ble 1) comparisons to these two methods. In the case of
PathGAN, no model or implementation could be obtained,
so we compare qualitatively to the results extracted from
their paper (Figure 4, third column).
Table 1 shows that our model consistently provides results closer to the ground truth scanpaths than Zhu et al.'s
and SaltiNet. The latter is based on a saliency-sampling
strategy, and thus these results indicate that indeed the tem-
poral information learnt by our model is relevant for the fi-
nal result. Our model, as expected, also amply surpasses the
random baseline. In Figure 4 we see how PathGAN scan-
paths fail to focus on the relevant parts of the scene (see,
e.g., snow or square), while SaltiNet exhibits a somewhat
erratic behavior, with large displacements and scarce areas
of focus (train, snow, or square show this). Finally, Zhu
et al.'s approach tends to place gaze points at high-contrast
borders (see, e.g., square or resort).
4.4. Ablation Studies
We also evaluate the contribution of different elements of
our model to the final result. For this purpose, we analyze
a standard GAN strategy (i.e., using only the discriminative
loss), as the baseline. Figure 5 shows how the model is un-
able to learn both the temporal nature of the scanpaths, and
their relation to image features. We also analyze the results
yielded by adding a term based on the MSE between the
ground truth and the generated scanpath to the loss function,
instead of our DTW_sph term (the only previous GAN ap-
proach for scanpath generation [3] relied on MSE for their
loss term). The MSE only measures a one-to-one corre-
spondence between points, considering for each time instant
a single point, unrelated to the rest. This hinders the learn-
ing process, leading to non-plausible results (Figure 5, sec-
ond row). This behavior is corrected when our DTW_sph is
added instead, since it is specifically targeted for time series
data and takes into account the actual spatial structure of the
data (Figure 5, third row). The corresponding quantitative
measures over our test set from Sitzmann et al. can be found
in Table 2. We also analyze the effect of removing the Co-
ordConv layer from our model: Results in Table 2 indicate
that the use of CoordConv does have a positive effect on the
results, helping learn the transformation from the input to
the target domain.

Figure 5. Qualitative ablation results. From top to bottom: basic GAN strategy (baseline); adding MSE to the loss function of the former; our approach; and an example ground truth scanpath. These results illustrate the need for our DTW_sph loss term.

Table 1. Quantitative comparisons of our model against SaltiNet [4] and Zhu et al. [62]. We also include upper (human baseline, Human BL) and lower (randomly sampling over the image, Random BL) baselines. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). *SaltiNet is trained with Rai et al.'s dataset; we include it for completeness.

Dataset                        Method              LEV↓    DTW↓      REC↑   DET↑
Test set from Sitzmann et al.  Random BL           52.33   2370.56   0.47   0.93
                               SaltiNet            48.00   1928.85   1.45   1.78
                               ScanGAN360 (ours)   46.15   1921.95   4.82   2.32
                               Human BL            43.11   1843.72   7.81   4.07
Rai et al.'s dataset           Random BL           43.11   1659.75   0.21   0.94
                               SaltiNet*           48.07   1928.41   1.43   1.81
                               Zhu et al.          43.55   1744.20   1.64   1.50
                               ScanGAN360 (ours)   40.99   1549.59   1.72   1.87
                               Human BL            39.59   1495.55   2.33   2.31

Table 2. Quantitative results of our ablation study. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). Please refer to the text for details on the ablated models.

Metric                      LEV↓    DTW↓      REC↑   DET↑
Basic GAN                   49.42   2088.44   3.01   1.74
MSE                         48.90   1953.21   2.41   1.73
DTW_sph (no CoordConv)      47.82   1988.38   3.67   1.99
DTW_sph (ours)              46.19   1925.20   4.50   2.33
Human Baseline (Human BL)   43.11   1843.72   7.81   4.07
4.5. Behavioral Evaluation
While the previous subsections employ well-known met-
rics from the literature to analyze the performance of our
model, in this subsection we perform a higher-level analysis
of its results. We assess whether the behavioral characteris-
tics of our scanpaths match those which have been reported
from actual users watching 360° images.
Exploration time Sitzmann et al. [43] measure the explo-
ration time as the average time that users took to move their
eyes to a certain longitude relative to their starting point,
and measure how long it takes for users to fully explore the
scene. Figure 6 (left) shows this exploration time, measured
by Sitzmann et al. from captured data, for the three scenes
from their dataset included in our test set ( room ,chess , and
robots ). To analyze whether our generated scanpaths mimic
this behavior and exploration speed, we plot the exploration
time of our generated scanpaths (Figure 6, center left) for
the same scenes and number of scanpaths. We can see how
the speed and exploration time are very similar between
real and generated data. Individual results per scene can
be found in the supplementary material.
Fixation bias Similar to the center bias of human eye fix-
ations observed in regular images [20], the existence of a
Laplacian-like equator bias has been measured in 360° im-
ages [43]: The majority of fixations fall around the equa-
tor, to the detriment of the poles. We have evaluated whether
the distribution of scanpaths generated by our model also
presents this bias. This is to be expected, since the data our
model is trained with exhibits it, but is yet another indicator
that we have succeeded in learning the ground truth distri-
bution. We test this by generating, for each scene, 1,000
different scanpaths with our model, and aggregating them
over time to produce a pseudo-saliency map, which we term
aggregate map . Figure 6 (right) shows this for two scenes
in our test set: We can see how this equator bias is indeed
present in our generated scanpaths.
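Schematically, such an aggregate map can be obtained by binning gaze points into an equirectangular histogram and blurring it; the grid size and blur below are illustrative assumptions, and a planar Gaussian blur ignores the sphere's wrap-around and latitude distortion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def aggregate_map(scanpaths_lonlat, height=128, width=256, sigma=3.0):
    """Aggregate gaze points from many scanpaths into a pseudo-saliency map.
    scanpaths_lonlat: list of (T, 2) arrays of (longitude, latitude) in radians."""
    hist = np.zeros((height, width))
    for sp in scanpaths_lonlat:
        cols = ((sp[:, 0] + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
        rows = ((np.pi / 2 - sp[:, 1]) / np.pi * (height - 1)).astype(int)
        np.add.at(hist, (rows, cols), 1.0)
    hist = gaussian_filter(hist, sigma=sigma)
    return hist / hist.max() if hist.max() > 0 else hist
```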
Figure 6. Behavioral evaluation. Left: Exploration time for real captured data (left) and scanpaths generated by our model (center left). Speed and exploration time of our scanpaths are on par with those of real users. Center right: ROC curve of our generated scanpaths for each individual test scene (gray), and averaged across scenes (magenta). The faster it converges to the maximum rate, the higher the inter-observer congruency. Right: Aggregate maps for two different scenes, computed as heatmaps from 1,000 generated scanpaths. Our model is able to produce aggregate maps that focus on relevant areas of the scenes and exhibit the equator bias reported in the literature.

Inter-observer congruency It is common in the literature
analyzing users' gaze behavior to measure inter-observer
congruency, often by means of a receiver operating char-
acteristic (ROC) curve. We compute the congruency of our
“generated observers” through this ROC curve for the three
scenes in our test set from the Sitzmann et al. dataset (Fig-
ure 6, center right). The curve calculates the ability of the
i-th scanpath to predict the aggregate map of the correspond-
ing scene. Each point in the curve is computed by gener-
ating a map containing the top n% most salient regions of
the aggregate map (computed without the i-th scanpath), and
calculating the percentage of gaze points of the i-th scanpath
that fall into that map. Our ROC curve indicates a strong
agreement between our scanpaths, with around 75% of all
gaze points falling within 25% of the most salient regions.
These values are comparable to those measured in previous
studies with captured gaze data [43, 23].
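One point of this ROC curve can be sketched as follows, with the scanpath given as pixel indices and the aggregate map computed without it; names and conventions are illustrative only:

```python
import numpy as np

def roc_point(scanpath_rc, aggregate_wo_i, top_percent):
    """Fraction of gaze points of one scanpath (given as (row, col) indices)
    that fall inside the top `top_percent`% most salient pixels of the
    aggregate map computed without that scanpath."""
    k = max(1, int(aggregate_wo_i.size * top_percent / 100.0))
    threshold = np.sort(aggregate_wo_i.ravel())[-k]
    mask = aggregate_wo_i >= threshold
    rows, cols = scanpath_rc[:, 0], scanpath_rc[:, 1]
    return float(np.mean(mask[rows, cols]))

# Sweeping top_percent from 0 to 100 for every held-out scanpath and averaging
# yields an ROC curve like the one in Figure 6 (center right).
```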
Temporal and spatial coherence Our generated scan-
paths have a degree of stochasticity, to be able to model the
diversity of real human observers. However, human gaze
behavior follows specific patterns, and each gaze point is
conditioned not only by the features in the scene but also by
the previous history of gaze points of the user. If two users
start watching a scene in the same region, a certain degree
of coherence between their scanpaths is expected, which may
diverge further as more time passes. We analyze the temporal
coherence of generated scanpaths that start in the same re-
gion, and observe that indeed our generated scanpaths fol-
low a coherent pattern. Please refer to the supplementary
for more information on this part of the analysis.
5. Conclusion
In summary, we propose ScanGAN360, a conditional
GAN approach to generating gaze scanpaths for immersive
virtual environments. Our unique parameterization tailored
to panoramic content, coupled with our novel usage of a
DTW loss function, allows our model to generate scanpaths
of significantly higher quality and duration than previous
approaches. We further explore applications of our model:
Please refer to the supplementary material for a description
and examples of these.
Our GAN approach is well suited for the problem of
scanpath generation: A single ground truth scanpath does
not exist, yet real scanpaths follow certain patterns that
are difficult to model explicitly but that are automatically
learned by our approach. Note that our model is also very
fast and can produce about 1,000 scanpaths per second.
This may be a crucial capability for interactive applications:
our model can generate virtual observers in real time.
Limitations and future work Our model is trained with
30-second long scanpaths, sampled at 1 Hz. Although
this is significantly longer than most previous approaches
[16, 23, 27], exploring different or variable lengths or sam-
pling rates remains interesting for future work. When train-
ing our model, we focus on learning higher-level aspects of
visual behavior, and we do not explicitly enforce low-level
ocular movements ( e.g., fixations or saccades). Currently,
our relatively low sampling rate prevents us from model-
ing very fast dynamic phenomena, such as saccades. Yet,
fixation patterns naturally emerge in our results, and future
work could explicitly take low-level oculomotor aspects of
visual search into account.
The model, parameterization, and loss function are tai-
lored to 360° images. In a similar spirit, a DTW-based loss
function could also be applied to conventional 2D images
(using a Euclidean distance in 2D instead of our δ_sph), po-
tentially leading to better results than current 2D approaches
based on mean-squared error.
We believe that our work is a timely effort and a first step
towards understanding and modeling dynamic aspects of at-
tention in 360° images. We hope that our work will serve
as a basis to advance this research, both in virtual reality
and in conventional imagery, and extend it to other scenar-
ios, such as dynamic or interactive content, analyzing the
influence of the task, including the presence of motion parallax, or exploring multimodal experiences. We will make
our model and training code available in order to facilitate
the exploration of these and other possibilities.
References
[1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol
Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade
landing position prediction for gaze-contingent rendering.
ACM Transactions on Graphics (TOG) , 36(4):1–12, 2017. 5
[2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Saltinet: Scan-path prediction on 360
degree images using saliency volumes. In Proceedings of
the IEEE ICCV Workshops , pages 2331–2338, 2017. 1, 2, 6,
4
[3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Pathgan: visual scanpath prediction with
generative adversarial networks. In Proceedings of the Eu-
ropean Conference on Computer Vision (ECCV) , pages 0–0,
2018. 1, 2, 6, 4
[4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Scanpath and saliency prediction on 360
degree images. Signal Processing: Image Communication ,
69:8–14, 2018. 1, 2, 6, 7
[5] Wentao Bao and Zhenzhong Chen. Human scanpath predic-
tion based on deep convolutional saccadic model. Neuro-
computing , 404:154 – 164, 2020. 2
[6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert.
Differentiable divergences between time series. arXiv
preprint arXiv:2010.08354 , 2020. 1
[7] A. Borji. Boosting bottom-up and top-down visual features
for saliency estimation. In 2012 IEEE Conference on Com-
puter Vision and Pattern Recognition , 2012. 2
[8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Du-
rand, Aude Oliva, and Antonio Torralba. Mit saliency bench-
mark. http://saliency.mit.edu/, 2019. 2
[9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over
here: Attention-directing composition of manga elements.
ACM Trans. Graph. , 33(4):1–11, 2014. 2, 3
[10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and
Juan Carlos Niebles. D3tw: Discriminative differentiable dy-
namic time warping for weakly supervised action alignment
and segmentation. In Proceedings of the IEEE/CVF Confer-
ence on Computer Vision and Pattern Recognition (CVPR) ,
June 2019. 1
[11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier
Deforges. Salgan360: Visual saliency prediction on 360 de-
gree images with generative adversarial networks. In 2018
IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW) ,
pages 01–04. IEEE, 2018. 2
[12] Alex Colburn, Michael F Cohen, and Steven Drucker. The
role of eye gaze in avatar mediated conversational interfaces.
Technical report, Citeseer, 2000. 2
[13] Benjamin Coors, Alexandru Paul Condurache, and An-
dreas Geiger. Spherenet: Learning spherical representations
for detection and classification in omnidirectional images.
In Proc. of the European Conference on Computer Vision
(ECCV), pages 518–533, 2018. 1, 4
[14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita
Cucchiara. Predicting human eye fixations via an lstm-based
saliency attentive model. IEEE Transactions on Image Pro-
cessing , 27(10):5142–5154, 2018. 2
[15] Marco Cuturi and Mathieu Blondel. Soft-dtw: a dif-
ferentiable loss function for time-series. arXiv preprint
arXiv:1703.01541 , 2017. 4, 1
[16] Stephen R Ellis and James Darrell Smith. Patterns of sta-
tistical dependency in visual scanning. Eye movements and
human information processing , pages 221–238, 1985. 2, 8
[17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring
scanpath similarity. Behavior Research Methods , pages 1–
20, 2020. 5, 2
[18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and
Evian Gordon. Face to face: visual scanpath evidence for
abnormal processing of facial expressions in social phobia.
Psychiatry research , 127(1-2):43–53, 2004. 1
[19] Laurent Itti, Christof Koch, and Ernst Niebur. A model
of saliency-based visual attention for rapid scene analysis.
IEEE Transactions on pattern analysis and machine intelli-
gence , 20(11):1254–1259, 1998. 2
[20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Tor-
ralba. Learning to predict where humans look. In IEEE
ICCV , pages 2106–2113. IEEE, 2009. 2, 7
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for
stochastic optimization. In ICLR , 2014. Last updated in
arXiv in 2017. 4
[22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias
Bethge. Deepgaze ii: Reading fixations from deep
features trained on object recognition. arXiv preprint
arXiv:1610.01563 , 2016. 2
[23] O. Le Meur and T. Baccino. Methods for comparing scan-
paths and saliency maps: strengths and weaknesses. Behav-
ior Research Methods , pages 251–266, 2013. 8
[24] Olivier Le Meur and Zhi Liu. Saccadic model of eye move-
ments for free-viewing condition. Vision Research , 116:152
– 164, 2015. 2
[25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very
long term field of view prediction for 360-degree video
streaming. In 2019 IEEE Conference on Multimedia Infor-
mation Processing and Retrieval (MIPR) , pages 297–302.
IEEE, 2019. 2
[26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet.
Prediction of the influence of navigation scan-path on per-
ceived quality of free-viewpoint videos. IEEE Journal on
Emerging and Sel. Topics in Circ. and Sys. , 9(1):204–216,
2019. 2
[27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu,
and Stephen Lin. Semantically-based human scanpath esti-
mation with hmms. In Proceedings of the IEEE International
Conference on Computer Vision , pages 3232–3239, 2013. 2,
8
[28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski
Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An in-
triguing failing of convolutional neural networks and the co-
ordconv solution. In Neural information processing systems ,
pages 9605–9616, 2018. 4
[29] Y. Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map
from images. In 2012 IEEE Conference on Computer Vision
and Pattern Recognition , 2012. 2
[30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Ma-
sia, and Ana Serrano. Multimodality in VR: A survey. arXiv
preprint arXiv:2101.07906 , 2021. 2
[31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic
convolutions for 360° single-image saliency prediction. In
CVPR Workshop on CV for AR/VR , 2020. 1, 2
[32] Mehdi Mirza and Simon Osindero. Conditional generative
adversarial nets. arXiv preprint arXiv:1411.1784 , 2014. 3
[33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa
Smolic. Salnet360: Saliency maps for omni-directional im-
ages with cnn. Signal Processing: Image Communication ,
69:26 – 34, 2018. 2
[34] Meinard Müller. Dynamic time warping. Information re-
trieval for music and motion , pages 69–84, 2007. 3, 1
[35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your at-
tention is unique: Detecting 360-degree video saliency in
head-mounted display for head movement prediction. In
Proc. ACM Intern. Conf. on Multimedia , pages 1190–1198,
2018. 2
[36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E.
O’Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-
i Nieto. Salgan: Visual saliency prediction with generative
adversarial networks. 2018. 2
[37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin
McGuinness, and Noel E. O’Connor. Shallow and deep con-
volutional networks for saliency prediction. In The IEEE
Conference on Computer Vision and Pattern Recognition
(CVPR) , June 2016. 2
[38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B
Chan. Directing user attention via visual flow on web de-
signs. ACM Trans. on Graph. , 35(6):1–11, 2016. 2, 3
[39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset
of head and eye movements for 360 degree images. In Pro-
ceedings of the 8th ACM on Multimedia Systems Conference ,
pages 205–210, 2017. 2, 5, 1
[40] Kerstin Ruhland, Christopher E Peters, Sean Andrist,
Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge
Mutlu, and Rachel McDonnell. A review of eye gaze in
virtual agents, social robotics and hci: Behaviour genera-
tion, user interaction and perception. In Computer graph-
ics forum , volume 34, pages 299–326. Wiley Online Library,
2015. 4
[41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Naval-
pakkam, and Dmitry Lagun. Gazegan-unpaired adversar-
ial image generation for gaze estimation. arXiv preprint
arXiv:1711.09767 , 2017. 2
[42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon
Wetzstein, Diego Gutierrez, and Belen Masia. Movie edit-
ing and cognitive event segmentation in virtual reality video.
ACM Trans. Graph. (SIGGRAPH) , 36(4), 2017. 1
[43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh
Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wet-
zstein. Saliency in VR: How do people explore virtual
environments? IEEE Trans. on Vis. and Comp. Graph. ,
24(4):1633–1642, 2018. 1, 2, 4, 5, 7, 8, 3
[44] Mikhail Startsev and Michael Dorr. 360-aware saliency esti-
mation with conventional image saliency predictors. Signal
Proces.: Image Comm. , 69:43–52, 2018. 2
[45] Yu-Chuan Su and Kristen Grauman. Making 360 video
watchable in 2d: Learning videography for click free view-
ing. In 2017 IEEE Conference on Computer Vision and Pat-
tern Recognition (CVPR) , pages 1368–1376. IEEE, 2017. 3
[46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman.
Pano2vid: Automatic cinematography for watching 360
videos. In Asian Conf. on CV , pages 154–171. Springer,
2016. 3
[47] Benjamin W Tatler and Benjamin T Vincent. The promi-
nence of behavioural biases in eye guidance. Visual Cogni-
tion, 17(6-7):1029–1054, 2009. 2
[48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne
Heikkilä. Stochastic bottom–up fixation prediction and sac-
cade generation. Image and Vision Computing , 31(9):686–
693, 2013. 2
[49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and
John M Henderson. Contextual guidance of eye movements
and attention in real-world scenes: the role of global features
in object search. Psychological review , 113(4):766, 2006. 2
[50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale
optimization of hierarchical features for saliency prediction
in natural images. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR) , June
2014. 2
[51] LE Vincent and Nicolas Thome. Shape and time distortion
loss for training deep time series forecasting models. In
Advances in neural information processing systems , pages
4189–4201, 2019. 1
[52] Dirk Walther and Christof Koch. Modeling attention to
salient proto-objects. Neural Networks , 19:1395–1407,
2006. 2
[53] Wenguan Wang and Jianbing Shen. Deep visual atten-
tion prediction. IEEE Transactions on Image Processing ,
27(5):2368–2378, 2017. 2
[54] W. Wang and J. Shen. Deep visual attention prediction. IEEE
Transactions on Image Processing , 27(5):2368–2378, 2018.
2
[55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali
Borji. Salient object detection driven by fixation prediction.
InProceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR) , June 2018. 2
[56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A
spherical convolution approach for learning long term view-
port prediction in 360 immersive video. In Proceedings of
the AAAI Conference on Artificial Intelligence , volume 34,
pages 14003–14040, 2020. 2
[57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Pre-
dicting human saccadic scanpaths based on iterative repre-
sentation learning. IEEE Transactions on Image Processing ,
28(7):3502–3515, 2019. 5
[58] M. Xu, Y. Song, J. Wang, M. Qiao, L. Huo, and Z. Wang.
Predicting head movement in panoramic video: A deep re-
inforcement learning approach. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence , 41(11):2693–2708,
2019. 2
[59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and
Ming-Hsuan Yang. Saliency detection via graph-based man-
ifold ranking. In Computer Vision and Pattern Recogni-
tion (CVPR), 2013 IEEE Conference on , pages 3166–3173.
IEEE, 2013. 2
[60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelin-
sky, and Tamara L Berg. Exploring the role of gaze behavior
and object detection in scene understanding. Frontiers in
psychology , 4:917, 2013. 1
[61] Qi Zhao and Christof Koch. Learning a saliency map using
fixated locations in natural scenes. Journal of Vision , 11:9,
2011. 2
[62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The pre-
diction of head and eye movement for 360 degree images.
Signal Processing: Image Communication , 69:15–25, 2018.
1, 2, 6, 7, 4

Supplementary Material
This document offers additional information and details
on the following topics:
• (S1) Extended description of the soft-DTW (differen-
tiable version of DTW) distance metric used in our
model.
• (S2) Additional results (scanpaths generated with our
method) for different scenes used in our evaluation in
the main paper.
• (S3) Additional ground truth scanpaths for the scenes
used in our evaluation in the main paper.
• (S4) Further details on our training process.
• (S5) Further details on metrics and evaluation, includ-
ing a larger set of metrics (which we briefly introduce),
and extended analysis.
• (S6) Further details on the behavioral evaluation of our
scanpaths.
• (S7) Example applications of our method.
S1. Differentiable Dynamic Time Warping:
soft-DTW
One of the key aspects of our framework lies in the
addition of a second term to the generator's loss function,
based on dynamic time warping [34]. As pointed out in Sec-
tion 3.3 in the main paper, dynamic time warping (DTW)
measures the similarity between two temporal sequences
(see Figure 7¹; Equation 4 in the main paper for the original
DTW formulation, and Equations 5 and 6 in the main pa-
per for our spherical modification of DTW). However, the
original DTW function is not differentiable, and therefore it
is not suitable as a loss function. Instead, we use a differen-
tiable version of it, soft-DTW, which has been recently pro-
posed [15] and used as a loss function in different problems
dealing with time series [6, 10, 51].

Differently from the original DTW formulation (Equa-
tion 4 in the main paper), soft-DTW is defined as follows:

\[ \text{soft-DTW}(r, s) = {\min_{A}}^{\gamma} \, \langle A, \Delta(r, s) \rangle, \tag{8} \]

where, as with traditional DTW, Δ(r, s) = [δ(r_i, s_j)]_{ij} ∈
R^{n×m} is a matrix containing the distances δ(·,·) between
each pair of points in r and s, A is a binary matrix that
accounts for the alignment (or correspondence) between r
and s, and ⟨·,·⟩ is the inner product between both matrices.

¹https://databricks.com/blog/2019/04/30/understanding-dynamic-time-warping.html

Figure 7. Simple visualization of dynamic time warping (DTW) alignment. Instead of assuming a pair-wise strict correspondence, DTW optimizes the alignment between two sequences to minimize their distance.

In our case, r = (r_1, ..., r_T) ∈ R^{3×T} and s = (s_1, ..., s_T) ∈
R^{3×T} are two scanpaths that we wish to compare.
The main difference lies in the replacement of the min_A
operator with the min^γ_A function, which is defined as fol-
lows:

\[ {\min}^{\gamma}(a_1, \ldots, a_N) = \begin{cases} \min_{i} a_i, & \gamma = 0, \\ -\gamma \log \sum_{i=1}^{N} e^{-a_i/\gamma}, & \gamma > 0. \end{cases} \tag{9} \]

This soft-min function allows DTW to be differentiable,
with the parameter γ adjusting the similarity between the
soft implementation and the original DTW algorithm, both
being the same when γ = 0.
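For illustration, a direct (and unoptimized) numpy sketch of this recursion with the soft-min of Equation 9 is given below; in practice a dedicated differentiable implementation such as that of [15] is used inside the training loop:

```python
import numpy as np

def soft_min(values, gamma):
    """Soft minimum of Equation 9; equals the hard minimum when gamma == 0."""
    values = np.asarray(values, dtype=float)
    if gamma == 0:
        return values.min()
    return -gamma * np.log(np.sum(np.exp(-values / gamma)))

def soft_dtw(dist, gamma=0.1):
    """Soft-DTW over a precomputed cost matrix (e.g., Delta_sph from the
    main paper), using the standard dynamic-programming recursion."""
    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + soft_min(
                [acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]], gamma)
    return acc[n, m]
```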
S2. Additional Results
We include in this section a more extended set of results.
First, we include results for the scenes room (see Figures 17
to 20), chess (see Figures 21 to 24), and robots (see Fig-
ures 25 to 28) from the Sitzmann et al. dataset [43]. Then,
we include results for the five scenes from the Rai et al.
dataset [39] used in comparisons throughout the main pa-
per: train (see Figures 29 to 32), resort (see Figures 33
to 36), square (see Figures 37 to 40), snow (see Figures 41
to 44), and museum (see Figures 45 to 48).
S3. Ground Truth Scanpaths for Comparison
Scenes
We include in Figures 49 to 53 sets of ground truth scan-
paths for all the images shown in Figure 4 in the main pa-
per, which is devoted to comparisons of our method against
other models; and in Figures 54 to 56 sets of ground truth
scanpaths for the three images from our test set from Sitz-
mann et al.’s dataset.
S4. Additional Details on our Training Process
In addition to the details given in Section 3.5 in
the main paper, our generator trains two cycles per discrim-
inator cycle, to prevent the latter from surpassing the former.
To enhance the training process, we also resort to a mini-
batching strategy: Instead of inputting to our model a set
containing all available scanpaths for a given image, we
split our data into different mini-batches of eight scanpaths
each. This way, the same image is input to our network mul-
tiple times per epoch, also allowing more images to be in-
cluded in the same batch, and therefore enhancing the train-
ing process. We trained our model for 217 epochs, as we
found that epoch to yield the best evaluation results.
S5. Additional Details on Metrics and Evalua-
tion
Throughout this work, we evaluate our model and com-
pare to state-of-the-art works by means of several widely
used metrics, recently reviewed by Fahimi and Bruce [17].
Table 3 shows a list of these metrics, indicating which ones
take into account position and/or order of gaze points. In
the following, we briefly introduce these metrics (please re-
fer to Fahimi and Bruce [17] for a formal description):
• Levenshtein distance: Transforms scanpaths into
strings, and then calculates the minimum number of
single-character edits (insertions, deletions, or substi-
tutions) required to change one string (scanpath) into
the other. All edits costs are treated equally.
• ScanMatch: Improved version of Levenshtein dis-
tance. Different from Levenshtein distance, Scan-
Match takes into account semantic information (as a
score matrix), and can even take into account duration
of data points. This way, each of the edit operations
can be differently weighted.
• Hausdorff distance: Represents the degree of mis-
match between two sets by measuring the farthest spa-
tial distance from one set to the other, i.e., the distance
between two different curves.
• Frechet distance: Similar to Hausdorff distance, it
measures the similarity between curves. However,
Frechet distance takes into account both the position
and ordering of all the points in the curves.
• Dynamic time warping: Metric that compares two
time-series with varying (and differing) lengths to
find an optimal path to match both sequences while
preserving boundary, continuity, and monotonicity to
make sure that the path respects time.
• Time delay embedding: Splits a scanpath into sev-
eral sub-samples, i.e., small sub-scanpaths. This met-
rics calculates a similarity score by performing several
pair-wise Hausdorff comparisons over sub-samples
from both scanpaths to compare.
• Recurrence: Measures the percentage of gaze points
that match (are close) between the two scanpaths.
• Determinism: Percentage of cross-recurrent points that
form diagonal lines (i.e., percentage of gaze trajecto-
ries common to both scanpaths).
• Laminarity: Measures locations that were fixated in
detail in one of the scanpaths, but only fixated briefly
in the other scanpath. This way, it indicates whether
specific areas of a scene are repeatedly fixated.
• Center of recurrence mass: Defined as the distance of
the center of gravity from the main diagonal, indicates
the dominant lag of cross recurrences, i.e., whether the
same gaze point in both scanpaths tends to occur close
in time.
Table 3. Set of metrics to quantitatively evaluate scanpath similarity [17]. Each metric specializes in specific aspects of the scanpaths, and as a result using any of them in isolation may not be representative.

Metric                      Abrv   Position   Order
Levenshtein distance        LEV    ✓          ✓
ScanMatch                   SMT    ✓          ✓
Hausdorff distance          HAU    ✓          ✗
Frechet distance            FRE    ✓          ✓
Dynamic time warping        DTW    ✓          ✓
Time delay embedding        TDE    ✓          ✗
Recurrence                  REC    ✓          ✗
Determinism                 DET    ✗          ✓
Laminarity                  LAM    ✗          ✗
Center of recurrence mass   COR    ✗          ✗
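As an example of the string-based family, the Levenshtein distance (LEV) can be sketched as follows, assuming scanpaths are first quantized into a coarse grid of labeled regions; the grid size and quantization are illustrative assumptions and may differ from the setup in [17]:

```python
import numpy as np

def quantize(scanpath_rc, height, width, rows=6, cols=12):
    """Turn a scanpath given as (row, col) pixel indices into a sequence of
    grid-cell labels, one token per gaze point."""
    r = (scanpath_rc[:, 0] * rows // height).clip(0, rows - 1)
    c = (scanpath_rc[:, 1] * cols // width).clip(0, cols - 1)
    return list((r * cols + c).astype(int))

def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions (LEV)."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return int(d[len(a), len(b)])
```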
Our model is stochastic by nature. This means that the
scanpaths that it generates for a given scene are always dif-
ferent, simulating observer variability. We have analyzed
whether the reported metrics vary depending on the num-
ber of scanpaths generated, to assess the stability and overall
goodness of our model. Results can be seen in Table 4.
We include in Table 5 the evaluation results with the full
set of metrics shown in Table 3 (extension to Table 1 in the
main paper), and in Tables 6 and 7 the evaluation results of
our ablation studies over the full set of metrics (extension to
Table 2 in the main paper).
Images for one of our test sets belong to Rai et al.’s
dataset [39]. This dataset is larger than Sitzmann et al.’s
in size (number of images), but provides gaze data in the
form of fixations with associated timestamps, and not the
raw gaze points. Note that most of the metrics proposed in
the literature for scanpath similarity are designed to work
with time series of different length, and do not necessarily
assume a direct pairwise equivalence, making them valid to
compare our generated scanpaths to the ground truth ones
from Rai et al.'s dataset.

Table 4. Quantitative results of our model with sets of generated scanpaths with different numbers of samples. Our results are stable regardless of the number of generated samples.

Dataset                        # of samples   LEV↓    DTW↓      REC↑   DET↑
Test set from Sitzmann et al.  100            46.19   1925.20   4.50   2.33
                               800            46.10   1916.26   4.75   2.34
                               2500           46.15   1921.95   4.82   2.32
                               Human BL       43.11   1843.72   7.81   4.07
Rai et al.'s dataset           100            40.95   1548.86   1.91   1.85
                               800            40.94   1542.82   1.86   1.86
                               2500           40.99   1549.59   1.72   1.87
                               Human BL       39.59   1495.55   2.33   2.31
S6. Behavioral Evaluation
In this section, we include further analysis and additional
details on behavioral aspects of our scanpaths, extending
Section 4.5 in the main paper.
Temporal and spatial coherence As discussed in the
main paper, our generated scanpaths have a degree of
stochasticity, and different patterns arise depending on
users’ previous history. To assess whether our scanpaths
actually follow a coherent pattern, we generate a set of ran-
dom scanpaths for each of the scenes in our test dataset, and
separate them according to the longitudinal region where
the scanpath begins (e.g., [0°, 40°), [40°, 80°), etc.). Then,
we estimate the probability density of the generated scan-
paths from each starting region using kernel density esti-
mation (KDE) for each timestamp. We include the com-
plete KDE results for the three images from our test set in
Figures 11 to 16, for different starting regions, at different
timestamps, and computed over 1000 generated scanpaths.
During the first seconds (first column), gaze points tend to
stay in a smaller area, and closer to the starting region; as
time progresses, they exhibit a more exploratory behavior
with higher divergence, and eventually may reach a conver-
gence close to regions of interest. We can also see how the
behavior can differ depending on the starting region.
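The following is a minimal sketch of this density estimation (assumed workflow; function and variable names are our own). Scanpaths are given as an (S, T, 2) array of (longitude, latitude) in degrees; we group them by the longitude of their first gaze point and evaluate a 2D Gaussian KDE at a given timestamp. For simplicity, the KDE is computed in the equirectangular domain, ignoring longitude wrap-around.

import numpy as np
from scipy.stats import gaussian_kde

def density_at_timestamp(scanpaths, t, start_bin=(0.0, 40.0), grid_res=(360, 180)):
    """Density map of gaze points at timestamp t, for scanpaths starting in start_bin."""
    start_lon = scanpaths[:, 0, 0] % 360.0
    sel = (start_lon >= start_bin[0]) & (start_lon < start_bin[1])
    pts = scanpaths[sel, t, :]                          # gaze points at time t
    kde = gaussian_kde(pts.T)                           # 2D density over (lon, lat)
    lon = np.linspace(0.0, 360.0, grid_res[0])
    lat = np.linspace(-90.0, 90.0, grid_res[1])
    lon_g, lat_g = np.meshgrid(lon, lat)
    density = kde(np.vstack([lon_g.ravel(), lat_g.ravel()]))
    return density.reshape(lat_g.shape)                 # map for visualization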
Exploration time As introduced in the main paper, we
also explore the time that users took to move their eyes to a
certain longitude relative to their starting point, and measure
how long it takes for users to fully explore the scene. We in-
clude in Figure 8 the comparison between ground truth and
generated scanpaths in terms of time to explore the scene,
for all three scenes from our test set (room, chess, and
robots), both individually and aggregated. We can see that
the speed and exploration time are very similar between real
and generated data.
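A minimal sketch of this measurement follows (our own illustration): for each scanpath, we record the first time at which the gaze reaches a given longitudinal offset from its starting point. The sampling rate `fps` is an assumed parameter.

import numpy as np

def time_to_reach(scanpaths, offsets_deg, fps=1.0):
    """scanpaths: (S, T, 2) array of (lon, lat) in degrees.
    Returns an (S, len(offsets_deg)) array of times in seconds (NaN if never reached)."""
    lon = scanpaths[..., 0]
    # signed longitudinal offset from the starting point, wrapped to (-180, 180]
    rel = (lon - lon[:, :1] + 180.0) % 360.0 - 180.0
    times = np.full((scanpaths.shape[0], len(offsets_deg)), np.nan)
    for k, off in enumerate(offsets_deg):
        reached = np.abs(rel) >= off                    # offset reached at each timestamp?
        for s in range(scanpaths.shape[0]):
            idx = int(np.argmax(reached[s]))            # first True (0 if none)
            if reached[s, idx]:
                times[s, k] = idx / fps
    return times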
S7. Applications of the Model
Our model is able to generate plausible 30-second scan-
paths, drawn from a distribution that mimics the behavior of
human observers. As we briefly discuss throughout the paper,
this enables a number of applications, starting with avoiding
the need to recruit and measure gaze from large numbers of
observers in certain scenarios. We show here two applica-
tions of our model, virtual scene design and scanpath-driven
video thumbnail creation for static 360° images, and discuss
other potential application scenarios.
Virtual scene design In an immersive environment, the
user has control over the camera when exploring it. This
poses a challenge to content creators and designers, who
have to learn from experience how to lay out the scene to
elicit a specific viewing or exploration behavior. This is
not only a problem in VR, but has also received attention
in, e.g., manga composition [9] or web design [38]. How-
ever, actually measuring gaze from a high enough number
of users to determine optimal layouts can be challenging
and time-consuming. While certain goals may require real
users, others can make use of our model to generate plausi-
ble, realistic virtual observers.
As a proof of concept, we have analyzed our model’s
ability to adapt its behavior to different layouts of a scene
(Figure 9). Specifically, we have removed certain elements
from a scene, and run our model to analyze whether these
changes affect the behavior of our generated scanpaths. We
plot the resulting probability density (using KDE, see Sec-
tion S6) as a function of time. The presence of different ele-
ments in the scene affects the general viewing behavior, in-
cluding viewing direction, or time spent on a certain region.
These examples are particularly promising if we consider
that our model is trained with a relatively small number of
generic scenes.
Scanpath-driven video thumbnails of static 360images
360images capture the full sphere and are thus unintuitive
when projected into a conventional 2D image. To address
this problem, a number of approaches have proposed to re-
target 360images or videos to 2D [46, 43, 45]. In the case
of images, extracting a representative 2D visualization of
the 360image can be helpful to provide a thumbnail of it,
for example as a preview on a social media platform. How-
ever, these thumbnails are static. The Ken Burns effect can
be used to animate static images by panning and zooming
a cropping window over a static image. In the context of
360, however, it seems unclear what the trajectory of such
a moving window would be.
To address this question, we leverage our generated scan-
paths to drive a Ken Burns–like video thumbnail of a static
panorama. For this purpose, we use an average scanpath,
computed as the probability density of several generated
scanpaths using KDE (see Section S6), as the trajectory of
the virtual camera. Specifically, KDE allows us to find the
point of highest probability, along with its variance, of all
Table 5. Quantitative comparison of our model against different approaches, following the metrics introduced in Table 1. We evaluate our
model over the test set we separated from Sitzmann et al.’s dataset, and compare against SaltiNet [2]. We additionally validate our
model over Rai et al.’s dataset, and compare against Zhu et al. [62], whose results over this dataset were provided by the authors, and
against SaltiNet, which was trained over that specific dataset (*). Human BL denotes the human baseline, computed with the set of ground-
truth scanpaths. We also include a lower baseline (Random BL), computed by randomly sampling the image. The arrow next to each metric indicates
whether higher or lower is better. Best results are in bold.
Dataset                        Method             LEV↓   SMT↑  HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
Test set from Sitzmann et al.  Random BL          52.33  0.22  59.88  146.39  2370.56  27.93  0.47  0.93  9.19   33.19
                               SaltiNet           48.00  0.18  64.23  149.34  1928.85  28.19  1.45  1.78  10.45  29.23
                               ScanGAN360 (ours)  46.15  0.39  43.28  141.23  1921.95  18.62  4.82  2.32  24.51  35.78
                               Human BL           43.11  0.43  41.38  142.91  1843.72  16.05  7.81  4.07  24.69  35.32
Rai et al.’s dataset           Random BL          43.11  0.17  65.71  144.73  1659.75  35.41  0.21  0.94  4.30   19.08
                               SaltiNet (*)       48.07  0.18  63.86  148.76  1928.41  28.42  1.43  1.81  10.22  29.33
                               Zhu et al.         43.55  0.20  73.09  136.37  1744.20  30.62  1.64  1.50  9.18   26.05
                               ScanGAN360 (ours)  40.99  0.24  61.86  139.10  1549.59  28.14  1.72  1.87  12.23  26.15
                               Human BL           39.59  0.24  66.23  136.70  1495.55  27.24  2.33  2.31  14.36  23.14
Table 6. Results of our ablation study over Sitzmann et al.’s test set. We take a basic GAN strategy as baseline, and evaluate the effects
of adding a second term to our generator’s loss function. We ablate a model with an MSE term (as used in the only GAN approach for
scanpath generation so far [3]), and compare it against our spherical DTW approach. We also analyze the importance of the CoordConv
layer, whose absence slightly worsens the results. See Section 4 in the main paper for further discussion. Qualitative results of this ablation
study can be seen in Figure 5 in the main paper.
Metric                  LEV↓   SMT↑  HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
Basic GAN               49.42  0.36  43.69  145.95  2088.44  20.05  3.01  1.74  18.55  34.51
MSE                     48.90  0.37  42.27  133.24  1953.21  19.48  2.41  1.73  18.47  37.34
DTWsph (no CoordConv)   47.82  0.37  46.59  144.92  1988.38  20.13  3.67  1.99  18.09  35.66
DTWsph (ours)           46.15  0.39  43.28  141.23  1921.95  18.62  4.82  2.32  24.21  35.78
Human BL                43.11  0.43  41.38  142.91  1843.72  16.05  7.81  4.07  24.69  35.32
Table 7. Results of our ablation study over Rai et al.’s dataset. We take a basic GAN strategy as baseline, and evaluate the effects of adding
a second term to our generator’s loss function. We ablate a model with an MSE term (as used in the only GAN approach for scanpath
generation so far [3]), and compare it against our spherical DTW approach. We also analyze the importance of the CoordConv layer,
whose absence slightly worsens the results. See Section 4 in the main paper for further discussion. Qualitative results of this ablation study
can be seen in Figure 5 in the main paper.
Metric                  LEV↓   SMT↑  HAU↓   FRE↓    DTW↓     TDE↓   REC↑  DET↑  LAM↑   CORM↑
Basic GAN               41.73  0.23  59.11  142.42  1542.52  28.40  0.99  1.47  8.08   24.55
MSE                     41.81  0.23  61.30  139.59  1541.44  28.66  1.01  1.51  8.56   24.45
DTWsph (no CoordConv)   41.42  0.23  61.55  148.13  1610.10  28.78  1.61  1.65  10.25  24.68
DTWsph (ours)           40.99  0.24  61.86  139.10  1549.59  28.14  1.72  1.87  12.23  26.15
Human BL                39.59  0.24  66.23  136.70  1495.55  27.24  2.33  2.31  14.36  23.14
generated scanpaths at any point in time. Note that this
point is not necessarily the average of the scanpaths. We
use the time-varying center point as the center of our 2D
viewport, and its variance to drive the FOV or zoom of the
moving viewport.
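A minimal sketch of this thumbnail-trajectory computation is shown below (assumed pipeline; parameter names and the mapping from spread to FOV are our own choices). The viewport center at each frame follows the point of highest estimated density among the generated gaze points, and the spread of those points drives the field of view of the moving viewport.

import numpy as np
from scipy.stats import gaussian_kde

def camera_trajectory(scanpaths, fov_min=40.0, fov_max=90.0):
    """scanpaths: (S, T, 2) array of (lon, lat) in degrees.
    Returns per-frame viewport centers (T, 2) and fields of view (T,)."""
    S, T, _ = scanpaths.shape
    centers = np.zeros((T, 2))
    fovs = np.zeros(T)
    for t in range(T):
        pts = scanpaths[:, t, :].T                    # (2, S) gaze points at time t
        kde = gaussian_kde(pts)
        density = kde(pts)                            # density at each sample point
        centers[t] = pts[:, np.argmax(density)]       # approximate mode of the KDE
        spread = np.sqrt(np.trace(np.cov(pts)))       # overall spread (degrees)
        fovs[t] = np.clip(fov_min + spread, fov_min, fov_max)
    return centers, fovs

In practice, the resulting trajectory and FOV would likely be temporally smoothed before rendering the viewports.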
Figure 10 shows several representative steps of this pro-
cess for two different scenes (chess and street). Full videos
of several scenes are included in the supplementary video.
The generated Ken Burns–style panorama previews look
like a human observer exploring these panoramas and pro-
vide a very intuitive preview of the complex scenes they
depict.
Other applications Our model has the potential to en-
able other applications beyond what we have shown in this
section. One such example is gaze simulation for virtual
avatars . When displaying or interacting with virtual char-
acters, eye gaze is one of the most critical, yet most diffi-
cult, aspects to simulate [40]. Accurately simulating gaze
behavior not only aids in conveying realism, but can also
provide additional information such as signalling interest,
aiding the conversation through non-verbal cues, facilitating
turn-taking in multi-party conversations, or indicating atten-
tiveness, among others. Given an avatar immersed within
a virtual scene, generating plausible scanpaths conditioned
by a 360° image of their environment could be an efficient,
affordable way of driving the avatar’s gaze behavior in a
realistic manner.
Figure 8. Time to explore each of the scenes from the Sitzmann et al. test set, together with their ground-truth counterparts.
Figure 9. Our model can be used to aid the design of virtual scenes. We show two examples, each with two possible layouts (original,
and removing some significant elements). We generate a large number of scanpaths (virtual observers) starting from the same region, and
compute their corresponding probability density function as a function of time, using KDE (see Section S6). room scene: The presence of
the dining table and lamps (top) retains the viewers’ attention longer, while in their absence they move faster towards the living room area,
performing a more linear exploration. gallery scene: When the central picture is present (top), the viewers linger there before splitting to
both sides of the scene. In its absence, observers move towards the left, then explore the scene linearly in that direction.
Another potential application of our model is its use for
gaze-contingent rendering . These approaches have been
proposed to save rendering time and bandwidth in VR sys-
tems or drive the user’s accommodation. Eye trackers are
required for these applications, but they are often too slow,
making computationally efficient approaches for predicting
gaze trajectories or landing positions important [1]. Our
method for generating scanpaths could not only help proto-
type and evaluate such systems in simulation, without the
need for a physical eye tracker and actual users, but also help
optimize their latency and performance during runtime.
Figure 10. Scanpath-driven video thumbnails of 360° images. We
propose a technique to generate these videos that results in relevant
and intuitive explorations of the 360° scenes. Top row: Points of
highest probability at each time instant, displayed as scanpaths.
These are used as a guiding trajectory for the virtual camera. Mid-
dle rows: Two viewports from the guiding trajectory, correspond-
ing to the temporal window with lowest variance. Bottom row: 2D
images retargeted from those viewports. Please refer to the text for
details.
References
[1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol
Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade
landing position prediction for gaze-contingent rendering.
ACM Transactions on Graphics (TOG) , 36(4):1–12, 2017. 5
[2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Saltinet: Scan-path prediction on 360
degree images using saliency volumes. In Proceedings of
the IEEE ICCV Workshops , pages 2331–2338, 2017. 1, 2, 6,
4
[3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Pathgan: visual scanpath prediction with
generative adversarial networks. In Proceedings of the Eu-
ropean Conference on Computer Vision (ECCV) , pages 0–0,
2018. 1, 2, 6, 4
[4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and
Noel E O’Connor. Scanpath and saliency prediction on 360
degree images. Signal Processing: Image Communication ,
69:8–14, 2018. 1, 2, 6, 7
[5] Wentao Bao and Zhenzhong Chen. Human scanpath predic-tion based on deep convolutional saccadic model. Neuro-
computing , 404:154 – 164, 2020. 2
[6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert.
Differentiable divergences between time series. arXiv
preprint arXiv:2010.08354 , 2020. 1
[7] A. Borji. Boosting bottom-up and top-down visual features
for saliency estimation. In 2012 IEEE Conference on Com-
puter Vision and Pattern Recognition , 2012. 2
[8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Du-
rand, Aude Oliva, and Antonio Torralba. MIT saliency bench-
mark. http://saliency.mit.edu/, 2019. 2
[9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over
here: Attention-directing composition of manga elements.
ACM Trans. Graph. , 33(4):1–11, 2014. 2, 3
[10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and
Juan Carlos Niebles. D3tw: Discriminative differentiable dy-
namic time warping for weakly supervised action alignment
and segmentation. In Proceedings of the IEEE/CVF Confer-
ence on Computer Vision and Pattern Recognition (CVPR) ,
June 2019. 1
[11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier
Deforges. Salgan360: Visual saliency prediction on 360 de-
gree images with generative adversarial networks. In 2018
IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW) ,
pages 01–04. IEEE, 2018. 2
[12] Alex Colburn, Michael F Cohen, and Steven Drucker. The
role of eye gaze in avatar mediated conversational interfaces.
Technical report, Citeseer, 2000. 2
[13] Benjamin Coors, Alexandru Paul Condurache, and An-
dreas Geiger. Spherenet: Learning spherical representations
for detection and classification in omnidirectional images.
InProc. of the European Conference on Computer Vision
(ECCV) , pages 518–533, 2018. 1, 4
[14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita
Cucchiara. Predicting human eye fixations via an lstm-based
saliency attentive model. IEEE Transactions on Image Pro-
cessing , 27(10):5142–5154, 2018. 2
[15] Marco Cuturi and Mathieu Blondel. Soft-dtw: a dif-
ferentiable loss function for time-series. arXiv preprint
arXiv:1703.01541 , 2017. 4, 1
[16] Stephen R Ellis and James Darrell Smith. Patterns of sta-
tistical dependency in visual scanning. Eye movements and
human information processing , pages 221–238, 1985. 2, 8
[17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring
scanpath similarity. Behavior Research Methods , pages 1–
20, 2020. 5, 2
[18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and
Evian Gordon. Face to face: visual scanpath evidence for
abnormal processing of facial expressions in social phobia.
Psychiatry research , 127(1-2):43–53, 2004. 1
[19] Laurent Itti, Christof Koch, and Ernst Niebur. A model
of saliency-based visual attention for rapid scene analysis.
IEEE Transactions on pattern analysis and machine intelli-
gence , 20(11):1254–1259, 1998. 2
[20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Tor-
ralba. Learning to predict where humans look. In IEEE
ICCV , pages 2106–2113. IEEE, 2009. 2, 7
Figure 11. KDE for the room scene, including scanpaths starting from 0° up to 160°.
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for
stochastic optimization. In ICLR , 2014. Last updated in
arXiv in 2017. 4
[22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias Bethge. DeepGaze II: Reading fixations from deep
features trained on object recognition. arXiv preprint
arXiv:1610.01563 , 2016. 2
Figure 12. KDE for the room scene, including scanpaths starting from 180° up to 340°.
[23] O. Le Meur and T. Baccino. Methods for comparing scan-
paths and saliency maps: strengths and weaknesses. Behav-
ior Research Methods , pages 251–266, 2013. 8
[24] Olivier Le Meur and Zhi Liu. Saccadic model of eye movements for free-viewing condition. Vision Research , 116:152
– 164, 2015. 2
Figure 13. KDE for the chess scene, including scanpaths starting from 0° up to 160°.
[25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very
long term field of view prediction for 360-degree video
streaming. In 2019 IEEE Conference on Multimedia Infor-
mation Processing and Retrieval (MIPR) , pages 297–302.
IEEE, 2019. 2
[26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet.
Prediction of the influence of navigation scan-path on per-
ceived quality of free-viewpoint videos. IEEE Journal on
Emerging and Sel. Topics in Circ. and Sys. , 9(1):204–216,
2019. 2
Figure 14. KDE for the chess scene, including scanpaths starting from 180° up to 340°.
[27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu,
and Stephen Lin. Semantically-based human scanpath esti-
mation with HMMs. In Proceedings of the IEEE International Conference on Computer Vision , pages 3232–3239, 2013. 2,
8
[28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski
Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An in-
triguing failing of convolutional neural networks and the co-
ordconv solution. In Neural information processing systems ,
pages 9605–9616, 2018. 4
Figure 15. KDE for the robots scene, including scanpaths starting from 0° up to 160°.
[29] Y. Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map
from images. In 2012 IEEE Conference on Computer Vision
and Pattern Recognition , 2012. 2
Figure 16. KDE for the robots scene, including scanpaths starting from 180° up to 340°.
[30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Ma-
sia, and Ana Serrano. Multimodality in VR: A survey. arXiv
preprint arXiv:2101.07906 , 2021. 2
[31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic convolutions for 360° single-image saliency prediction. In
CVPR Workshop on CV for AR/VR , 2020. 1, 2
[32] Mehdi Mirza and Simon Osindero. Conditional generative
adversarial nets. arXiv preprint arXiv:1411.1784 , 2014. 3
Figure 17. Generated scanpaths for the room scene.
[33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa
Smolic. Salnet360: Saliency maps for omni-directional im-
ages with cnn. Signal Processing: Image Communication ,
69:26 – 34, 2018. 2
[34] Meinard Müller. Dynamic time warping. Information re-
trieval for music and motion , pages 69–84, 2007. 3, 1
[35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your at-
tention is unique: Detecting 360-degree video saliency in
head-mounted display for head movement prediction. In
Proc. ACM Intern. Conf. on Multimedia , pages 1190–1198,
2018. 2
[36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E.
O’Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-
i Nieto. Salgan: Visual saliency prediction with generative
adversarial networks. 2018. 2
[37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin
McGuinness, and Noel E. O’Connor. Shallow and deep con-
volutional networks for saliency prediction. In The IEEE
Conference on Computer Vision and Pattern Recognition
(CVPR) , June 2016. 2
[38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B
Chan. Directing user attention via visual flow on web de-
signs. ACM Trans. on Graph. , 35(6):1–11, 2016. 2, 3
[39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset
of head and eye movements for 360 degree images. In Pro-
ceedings of the 8th ACM on Multimedia Systems Conference ,
pages 205–210, 2017. 2, 5, 1
[40] Kerstin Ruhland, Christopher E Peters, Sean Andrist,
Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge
Mutlu, and Rachel McDonnell. A review of eye gaze in
virtual agents, social robotics and hci: Behaviour genera-
tion, user interaction and perception. In Computer graph-
ics forum , volume 34, pages 299–326. Wiley Online Library,
2015. 4
[41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Naval-
pakkam, and Dmitry Lagun. Gazegan-unpaired adversar-
ial image generation for gaze estimation. arXiv preprint
arXiv:1711.09767 , 2017. 2
[42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon
Wetzstein, Diego Gutierrez, and Belen Masia. Movie edit-
ing and cognitive event segmentation in virtual reality video.
ACM Trans. Graph. (SIGGRAPH) , 36(4), 2017. 1
[43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh
Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wet-
zstein. Saliency in VR: How do people explore virtual
environments? IEEE Trans. on Vis. and Comp. Graph. ,
24(4):1633–1642, 2018. 1, 2, 4, 5, 7, 8, 3
[44] Mikhail Startsev and Michael Dorr. 360-aware saliency esti-
mation with conventional image saliency predictors. Signal
Proces.: Image Comm. , 69:43–52, 2018. 2
[45] Yu-Chuan Su and Kristen Grauman. Making 360 video
watchable in 2d: Learning videography for click free view-
ing. In 2017 IEEE Conference on Computer Vision and Pat-
tern Recognition (CVPR) , pages 1368–1376. IEEE, 2017. 3
Figure 18. Generated scanpaths for the room scene.
[46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman.
Pano2vid: Automatic cinematography for watching 360
videos. In Asian Conf. on CV , pages 154–171. Springer,
2016. 3
[47] Benjamin W Tatler and Benjamin T Vincent. The promi-
nence of behavioural biases in eye guidance. Visual Cogni-
tion, 17(6-7):1029–1054, 2009. 2
[48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne
Heikkilä. Stochastic bottom–up fixation prediction and sac-
cade generation. Image and Vision Computing , 31(9):686–
693, 2013. 2
[49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and
John M Henderson. Contextual guidance of eye movements
and attention in real-world scenes: the role of global features
in object search. Psychological review , 113(4):766, 2006. 2
[50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale
optimization of hierarchical features for saliency prediction
in natural images. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR) , June
2014. 2
[51] LE Vincent and Nicolas Thome. Shape and time distortion
loss for training deep time series forecasting models. In
Advances in neural information processing systems , pages
4189–4201, 2019. 1
[52] Dirk Walther and Christof Koch. Modeling attention to
salient proto-objects. Neural Networks , 19:1395–1407,
2006. 2
[53] Wenguan Wang and Jianbing Shen. Deep visual atten-
tion prediction. IEEE Transactions on Image Processing ,
27(5):2368–2378, 2017. 2
[54] W. Wang and J. Shen. Deep visual attention prediction. IEEE
Transactions on Image Processing , 27(5):2368–2378, 2018.
2
[55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali
Borji. Salient object detection driven by fixation prediction.
InProceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR) , June 2018. 2
[56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A
spherical convolution approach for learning long term view-
port prediction in 360 immersive video. In Proceedings of
the AAAI Conference on Artificial Intelligence , volume 34,
pages 14003–14040, 2020. 2
[57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Pre-
dicting human saccadic scanpaths based on iterative repre-
sentation learning. IEEE Transactions on Image Processing ,
28(7):3502–3515, 2019. 5
[58] M. Xu, Y. Song, J. Wang, M. Qiao, L. Huo, and Z. Wang.
Predicting head movement in panoramic video: A deep re-
inforcement learning approach. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence , 41(11):2693–2708,
2019. 2
[59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and
Ming-Hsuan Yang. Saliency detection via graph-based man-
ifold ranking. In Computer Vision and Pattern Recogni-
tion (CVPR), 2013 IEEE Conference on , pages 3166–3173.
IEEE, 2013. 2
Figure 19. Generated scanpaths for the room scene.
[60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelin-
sky, and Tamara L Berg. Exploring the role of gaze behavior
and object detection in scene understanding. Frontiers in
psychology , 4:917, 2013. 1
[61] Qi Zhao and Christof Koch. Learning a saliency map using
fixated locations in natural scenes. Journal of Vision , 11:9,
2011. 2
[62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The pre-
diction of head and eye movement for 360 degree images.
Signal Processing: Image Communication , 69:15–25, 2018.
1, 2, 6, 7, 4
Figure 20. Generated scanpaths for the room scene.
Figure 21. Generated scanpaths for the chess scene.
Figure 22. Generated scanpaths for the chess scene.
Figure 23. Generated scanpaths for the chess scene.
Figure 24. Generated scanpaths for the chess scene.
Figure 25. Generated scanpaths for the robots scene.
Figure 26. Generated scanpaths for the robots scene.
Figure 27. Generated scanpaths for the robots scene.
Figure 28. Generated scanpaths for the robots scene.
Figure 29. Generated scanpaths for the train scene.
Figure 30. Generated scanpaths for the train scene.
Figure 31. Generated scanpaths for the train scene.
Figure 32. Generated scanpaths for the train scene.
Figure 33. Generated scanpaths for the resort scene.
Figure 34. Generated scanpaths for the resort scene.
Figure 35. Generated scanpaths for the resort scene.
Figure 36. Generated scanpaths for the resort scene.
Figure 37. Generated scanpaths for the square scene.
Figure 38. Generated scanpaths for the square scene.
Figure 39. Generated scanpaths for the square scene.
Figure 40. Generated scanpaths for the square scene.
Figure 41. Generated scanpaths for the snow scene.
Figure 42. Generated scanpaths for the snow scene.
Figure 43. Generated scanpaths for the snow scene.
Figure 44. Generated scanpaths for the snow scene.
Figure 45. Generated scanpaths for the museum scene.
Figure 46. Generated scanpaths for the museum scene.
Figure 47. Generated scanpaths for the museum scene.
Figure 48. Generated scanpaths for the museum scene.
Figure 49. Ground truth scanpaths for the train scene.
Figure 50. Ground truth scanpaths for the resort scene.
Figure 51. Ground truth scanpaths for the snow scene.
Figure 52. Ground truth scanpaths for the museum scene.
Figure 53. Ground truth scanpaths for the square scene.
Figure 54. Ground truth scanpaths for the room scene.
Figure 55. Ground truth scanpaths for the chess scene.
Figure 56. Ground truth scanpaths for the robots scene.