|
ScanGAN360: A Generative Model of Realistic Scanpaths for 360° Images
|
Daniel Martin¹   Ana Serrano²   Alexander W. Bergman³   Gordon Wetzstein³   Belen Masia¹

¹Universidad de Zaragoza, I3A   ²Centro Universitario de la Defensa, Zaragoza   ³Stanford University
|
Abstract |
|
Understanding and modeling the dynamics of human gaze behavior in 360° environments is a key challenge in computer vision and virtual reality. Generative adversarial approaches could alleviate this challenge by generating a large number of possible scanpaths for unseen images. Existing methods for scanpath generation, however, do not adequately predict realistic scanpaths for 360° images. We present ScanGAN360, a new generative adversarial approach to address this challenging problem. Our network generator is tailored to the specifics of 360° images representing immersive environments. Specifically, we accomplish this by leveraging the use of a spherical adaptation of dynamic time warping as a loss function and proposing a novel parameterization of 360° scanpaths. The quality of our scanpaths outperforms competing approaches by a large margin and is almost on par with the human baseline. ScanGAN360 thus allows fast simulation of large numbers of virtual observers, whose behavior mimics real users, enabling a better understanding of gaze behavior and novel applications in virtual scene design.
|
1. Introduction |
|
Virtual reality (VR) is an emerging medium that unlocks |
|
unprecedented user experiences. To optimize these expe- |
|
riences, however, it is crucial to develop computer vision |
|
techniques that help us understand how people explore im- |
|
mersive virtual environments. Models for time-dependent |
|
visual exploration behavior are important for designing and |
|
editing VR content [42], for generating realistic gaze trajec- |
|
tories of digital avatars [18], for understanding dynamic vi- |
|
sual attention and visual search behavior [60], and for devel- |
|
oping new rendering, display, and compression algorithms, |
|
among other applications. |
|
Current approaches that model how people explore vir- |
|
tual environments often leverage saliency prediction [43, |
|
13, 31, 2]. While this is useful for some applications, the |
|
fixation points predicted by these approaches do not account |
|
for the time-dependent visual behavior of the user, making it difficult to predict the order of fixations, or give insight into how people explore an environment over time. For this purpose, some recent work has explored scanpath prediction [2, 3, 62, 4], but these algorithms do not adequately model how people explore immersive virtual environments, resulting in erratic or non-plausible scanpaths.

Figure 1. We present ScanGAN360, a generative adversarial approach to scanpath generation for 360° images. ScanGAN360 generates realistic scanpaths (bottom rows), outperforming state-of-the-art methods and mimicking the human baseline (top row).
|
In this work, we present ScanGAN360, a novel frame- |
|
work for scanpath generation for 360° images (Figure 1).
|
Our model builds on a conditional generative adversarial |
|
network (cGAN) architecture, for which we discuss and val- |
|
idate two important insights that we show are necessary for |
|
realistic scanpath generation. First, we propose a loss func- |
|
tion based on a spherical adaptation of dynamic time warp- |
|
ing (DTW), which is a key aspect for training our GAN ro- |
|
bustly. DTW is a metric for measuring similarity between |
|
two time series, such as scanpaths, which to our knowledge |
|
has not been used to train scanpath-generating GANs. Sec- |
|
ond, to adequately tackle the problem of scanpath genera- |
|
tion in 360° images, we present a novel parameterization of the scanpaths. These insights allow us to demonstrate state-
|
of-the-art results for scanpath generation in VR, close to the |
|
human baseline and far surpassing the performance of ex- |
|
isting methods. Our approach is the first to enable robust |
|
scanpath prediction over long time periods up to 30 sec- |
|
onds, and, unlike previous work, our model does not rely |
|
on saliency, which is typically not available as ground truth. |
|
Our model produces about 1,000 scanpaths per second, |
|
which enables fast simulation of large numbers of virtual |
|
observers , whose behavior mimics that of real users. Us- |
|
ing ScanGAN360, we explore applications in virtual scene |
|
design, which is useful in video games, interior design, |
|
cinematography, and tourism, and scanpath-driven video |
|
thumbnail generation of 360° images, which provides pre-
|
views of VR content for social media platforms. Beyond |
|
these applications, we propose to use ScanGAN360 for |
|
applications such as gaze behavior simulation for virtual |
|
avatars or gaze-contingent rendering. Extended discussion |
|
and results on applications are included in the supplemen- |
|
tary material and video. |
|
We will make our source code and pre-trained model |
|
publicly available to promote future research. |
|
2. Related work |
|
Modeling and predicting attention The multimodal na- |
|
ture of attention [30], together with the complexity of hu- |
|
man gaze behavior, make this a very challenging task. Many |
|
works devoted to it have relied on representations such as |
|
saliency, which is a convenient representation for indicat- |
|
ing the regions of an image more likely to attract atten- |
|
tion. Early strategies for saliency modeling have focused |
|
on either creating hand-crafted features representative of |
|
saliency [19, 52, 61, 29, 20, 7], or directly learning data- |
|
driven features [49, 22]. With the proliferation of exten- |
|
sive datasets of human attention [43, 39, 20, 8, 59], deep |
|
learning–based methods for saliency prediction have been |
|
successfully applied, yielding impressive results [37, 36, 14, |
|
50, 54, 55, 58]. |
|
However, saliency models do not take into account the |
|
dynamic nature of human gaze behavior, and therefore, they |
|
are unable to model or predict time-varying aspects of at- |
|
tention. Being able to model and predict dynamic explo- |
|
ration patterns has been proven to be useful, for example, |
|
for avatar gaze control [12, 41], video rendering in virtual |
|
reality [26], or for directing users’ attention over time in |
|
many contexts [9, 38]. Scanpath models aim to predict vi- |
|
sual patterns of exploration that an observer would perform |
|
when presented with an image. In contrast to saliency mod- |
|
els, scanpath models typically focus on predicting plausi- |
|
ble scanpaths, i.e., they do not predict a unique scanpath |
|
and instead they try to mimic human behavior when ex- |
|
ploring an image, taking into account the variability be- |
|
tween different observers. Ellis and Smith [16] were pioneers in this field: they proposed a general framework for
|
generating scanpaths based on Markov stochastic processes. |
|
Several approaches have followed this work, incorporating |
|
behavioral biases in the process in order to produce more |
|
plausible scanpaths [24, 47, 27, 48]. In recent years, deep |
|
learning models have been used to predict human scanpaths |
|
based on neural network features trained on object recogni- |
|
tion [22, 53, 14, 5]. |
|
Attention in 360° images Predicting plausible scanpaths
|
in 360° imagery is a more complex task: Observers do not
|
only scan a given image with their gaze, but they can now |
|
also turn their head or body, effectively changing their view- |
|
port over time. Several works have been proposed for mod- |
|
eling saliency in 360° images [33, 43, 31, 11, 44]. However,
|
scanpath prediction has received less attention. In their re- |
|
cent work, Assens et al. [3] generalize their 2D model to |
|
360° images, but their loss function is unable to reproduce
|
the behavior of ground truth scanpaths (see Figure 4, third |
|
column). A few works have focused on predicting short- |
|
term sequential gaze points based on users’ previous his- |
|
tory for 360° videos, but they are limited to small temporal
|
windows (from one to ten seconds) [56, 25, 35]. For the |
|
case of images, a number of recent methods focus on devel- |
|
oping improved saliency models and principled methods to |
|
sample from them [2, 4, 62]. |
|
Instead, we directly learn dynamic aspects of attention |
|
from ground truth scanpaths by training a generative model |
|
in an adversarial manner, with an architecture and loss |
|
function specifically designed for scanpaths in 360° im-
|
ages. This allows us to (i) effectively mimic human be- |
|
havior when exploring scenes, bypassing the saliency gen- |
|
eration and sampling steps, and (ii) optimize our network to |
|
stochastically generate 360° scanpaths, taking into account
|
observer variability. |
|
3. Our Model |
|
We adopt a generative adversarial approach, specifically |
|
designed for 360° content, in which the model learns to generate a plausible scanpath, given the 360° image as a con-
|
dition. In the following, we describe the parameterization |
|
employed for the scanpaths, the design of our loss function |
|
for the generator, and the particularities of our conditional |
|
GAN architecture, ending with details about the training |
|
process. |
|
3.1. Scanpath Parameterization |
|
Scanpaths are commonly provided as a sequence of two- |
|
dimensional values corresponding to the coordinates (i;j) |
|
of each gaze point in the image. When dealing with 360°
|
images in equirectangular projections, gaze points are also |
|
often represented by their latitude and longitude (θ, φ), with θ ∈ [−π/2, π/2] and φ ∈ [−π, π]. However, these parameterizations either suffer from discontinuities at the borders of a 360° image, or result in periodic, ambiguous values. The same point of the scene can have two different representations in these parameterizations, hindering the learning process.

Figure 2. Illustration of our generator and discriminator networks. Both networks have a two-branch structure: features extracted from the 360° image with the aid of a CoordConv layer and an encoder-like network are concatenated with the input vector for further processing. The generator learns to transform this input vector, conditioned by the image, into a plausible scanpath. The discriminator takes as input vector a scanpath (either captured or synthesized by the generator), as well as the corresponding image, and determines the probability of this scanpath being real (or fake). We train them end-to-end in an adversarial manner, following a conditional GAN scheme. Please refer to the text for details on the loss functions and architecture.
|
We therefore resort to a three-dimensional parameterization of our scanpaths, where each gaze point p = (θ, φ) is transformed into its three-dimensional representation P = (x, y, z) such that:

$$x = \cos(\theta)\cos(\phi), \quad y = \cos(\theta)\sin(\phi), \quad z = \sin(\theta).$$

This transformation assumes, without loss of generality, that the panorama is projected over a unit sphere. We use this parameterization for our model, which learns a scanpath P as a set of three-dimensional points over time. Specifically, given a number of samples T over time, P = (P_1, …, P_T) ∈ R^{3×T}. The results of the model are then converted back to a two-dimensional parameterization in terms of latitude (θ = atan2(z, √(x² + y²))) and longitude (φ = atan2(y, x)) for display and evaluation purposes.
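As a concrete reference, this conversion between the (θ, φ) and unit-sphere parameterizations can be sketched as follows. This is a minimal NumPy sketch of Section 3.1; the function and variable names are ours and not taken from the authors' released code:

```python
import numpy as np

def latlong_to_unit_sphere(theta, phi):
    """Map latitude theta in [-pi/2, pi/2] and longitude phi in [-pi, pi]
    to a 3D point on the unit sphere."""
    x = np.cos(theta) * np.cos(phi)
    y = np.cos(theta) * np.sin(phi)
    z = np.sin(theta)
    return np.stack([x, y, z], axis=-1)

def unit_sphere_to_latlong(p):
    """Inverse mapping, used to bring generated gaze points back to
    latitude/longitude for display and evaluation."""
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    theta = np.arctan2(z, np.sqrt(x**2 + y**2))
    phi = np.arctan2(y, x)
    return theta, phi

# A scanpath with T samples is then simply a (T, 3) array of unit vectors.
theta = np.array([0.1, 0.2])
phi = np.array([-1.0, -0.9])
P = latlong_to_unit_sphere(theta, phi)   # shape (2, 3)
```

Working on the unit sphere avoids the border discontinuities and ambiguities of the (θ, φ) parameterization discussed above.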
|
3.2. Overview of the Model |
|
Our model is a conditional GAN, where the condition is the RGB 360° image for which we wish to estimate a scanpath. The generator G is trained to generate a scanpath from a latent code z (drawn randomly from a uniform distribution, U(−1, 1)), conditioned by the RGB 360° image y. The discriminator D takes as input a potential scanpath (x or G(z, y)), as well as the condition y (the RGB 360° image), and outputs the probability of the scanpath being real
|
(or fake). The architecture of both networks, generator and |
|
discriminator, can be seen in Figure 2, and further details |
|
related to the architecture are described in Section 3.4. |
|
3.3. Loss Function |
|
The objective function of a conventional conditional GAN is inspired by a minimax objective from game theory, with the objective [32]:

$$\min_G \max_D V(D, G) = \mathbb{E}_x[\log D(x, y)] + \mathbb{E}_z[\log(1 - D(G(z, y), y))]. \quad (1)$$

We can separate this into two losses, one for the generator, L_G, and one for the discriminator, L_D:

$$\mathcal{L}_G = \mathbb{E}_z[\log(1 - D(G(z, y), y))], \quad (2)$$

$$\mathcal{L}_D = \mathbb{E}_x[\log D(x, y)] + \mathbb{E}_z[\log(1 - D(G(z, y), y))]. \quad (3)$$
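For concreteness, Equations 2 and 3 are the standard conditional-GAN objectives. A minimal PyTorch-style sketch of how they could be implemented is shown below; this is our illustration only, G, D, and the tensor shapes are placeholders, and the paper does not prescribe this exact code:

```python
import torch

def discriminator_loss(D, G, x, y, z):
    # L_D: maximize log D(x, y) + log(1 - D(G(z, y), y)), implemented here
    # as minimizing the negated expression.
    real = D(x, y)                  # probability assigned to a real scanpath
    fake = D(G(z, y).detach(), y)   # probability assigned to a generated one
    return -(torch.log(real + 1e-8) + torch.log(1 - fake + 1e-8)).mean()

def generator_loss(D, G, y, z):
    # L_G: minimize log(1 - D(G(z, y), y)).
    fake = D(G(z, y), y)
    return torch.log(1 - fake + 1e-8).mean()
```

In practice, many GAN implementations minimize the non-saturating variant −log D(G(z, y), y) for the generator; the sketch above follows Equations 2 and 3 as written.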
|
While this objective function suffices in certain cases, as |
|
the complexity of the problem increases, the generator may |
|
not be able to learn the transformation from the input distri- |
|
bution into the target one. One can resort to adding a loss |
|
term to L_G, and in particular one that enforces similarity to
|
the scanpath ground truth data. However, using a conven- |
|
tional data term, such as MSE, does not yield good results |
|
(Section 4.4 includes an evaluation of this). To address this |
|
issue, we introduce a novel term in L_G specifically targeted to our problem, based on dynamic time warping [34].

Dynamic time warping (DTW) measures the similar-
|
ity between two temporal sequences, considering both the |
|
shape and the order of the elements of a sequence, with- |
|
out forcing a one-to-one correspondence between elements |
|
of the time series. For this purpose, it takes into account |
|
all the possible alignments of two time series rands, and |
|
computes the one that yields the minimal distance between |
|
them. Specifically, the DTW loss function between two |
|
time series r ∈ R^{k×n} and s ∈ R^{k×m} can be expressed as [15]:

$$\mathrm{DTW}(r, s) = \min_{A} \langle A, \Delta(r, s) \rangle, \quad (4)$$

where Δ(r, s) = [δ(r_i, s_j)]_{ij} ∈ R^{n×m} is a matrix containing the distances δ(·, ·) between each pair of points in r and s, A is a binary matrix that accounts for the alignment (or correspondence) between r and s, and ⟨·, ·⟩ is the inner product between both matrices.
|
In our case, r = (r_1, …, r_T) ∈ R^{3×T} and s = (s_1, …, s_T) ∈ R^{3×T} are two scanpaths that we wish to compare. While the Euclidean distance between each pair of points is usually employed when computing δ(r_i, s_j) for Equation 4, in our scenario that would yield erroneous distances derived from the projection of the 360° image (both if done in 2D over the image, or in 3D with the parameterization described in Section 3.1). We instead use the distance over the surface of a sphere, or spherical distance, and define Δ_sph(r, s) = [δ_sph(r_i, s_j)]_{ij} ∈ R^{n×m} such that:

$$\delta_{sph}(r_i, s_j) = 2 \arcsin\left(\tfrac{1}{2}\sqrt{(r_i^x - s_j^x)^2 + (r_i^y - s_j^y)^2 + (r_i^z - s_j^z)^2}\right), \quad (5)$$

leading to our spherical DTW:

$$\mathrm{DTW}_{sph}(r, s) = \min_{A} \langle A, \Delta_{sph}(r, s) \rangle. \quad (6)$$
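A direct (non-differentiable) reading of Equations 5 and 6 is the classical DTW dynamic program run over the spherical distance matrix. A minimal NumPy sketch, assuming both scanpaths are given as arrays of 3D unit vectors (names are ours):

```python
import numpy as np

def spherical_distance_matrix(r, s):
    """Delta_sph(r, s): great-circle distances between all pairs of gaze
    points, computed from the chord length between unit vectors (Eq. 5)."""
    chord = np.linalg.norm(r[:, None, :] - s[None, :, :], axis=-1)  # (n, m)
    return 2.0 * np.arcsin(np.clip(0.5 * chord, -1.0, 1.0))

def dtw_sph(r, s):
    """DTW_sph(r, s): minimum-cost alignment between two scanpaths (Eq. 6)."""
    delta = spherical_distance_matrix(r, s)
    n, m = delta.shape
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = delta[i - 1, j - 1] + min(
                cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```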
|
We incorporate the spherical DTW into the loss function of the generator (L_G, Equation 2), yielding our final generator loss function L*_G:

$$\mathcal{L}^{*}_{G} = \mathcal{L}_G + \lambda\, \mathbb{E}_z[\mathrm{DTW}_{sph}(G(z, y), \rho)], \quad (7)$$

where ρ denotes a ground truth scanpath for the conditioning image y, and the weight λ is empirically set to 0.1.
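Assuming a differentiable soft_dtw_sph implementation (the soft-DTW variant discussed next and in the supplementary material), the combined generator objective of Equation 7 could be sketched as follows; all names here are hypothetical:

```python
import torch

# Sketch of Eq. 7; soft_dtw_sph is an assumed, differentiable (soft-DTW)
# implementation of DTW_sph, and rho is a ground truth scanpath for image y.
def full_generator_loss(D, G, y, z, rho, lambda_dtw=0.1):
    fake = G(z, y)                                  # generated scanpath
    adv = torch.log(1 - D(fake, y) + 1e-8).mean()   # adversarial term, L_G
    data = soft_dtw_sph(fake, rho)                  # spherical soft-DTW term
    return adv + lambda_dtw * data
```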
|
While a loss function incorporating DTW (or spherical |
|
DTW) is not differentiable, a differentiable version, soft- |
|
DTW, has been proposed. We use this soft-DTW in our |
|
model; details on it can be found in Section S1 in the sup- |
|
plementary material or in the original publication [15]. |
|
3.4. Model Architecture |
|
Both our generator and discriminator are based on a two- |
|
branch structure (see Figure 2), with one branch for the con- |
|
ditioning image y and the other for the input vector (z in the generator, and x or G(z, y) in the discriminator). The image branch extracts features from the 360° image, yielding
|
a set of latent features that will be concatenated with the |
|
input vector for further processing. Due to the distortion |
|
inherent to equirectangular projections, traditional convo- |
|
lutional feature extraction strategies are not well suited for |
|
360° images: They use a kernel window where neighboring
|
relations are established uniformly around a pixel. Instead, |
|
we extract features using panoramic (or spherical) convolu- |
|
tions [13]. Spherical convolutions are a type of dilated con- |
|
volutions where the relations between elements in the im- |
|
age are not established in image space, but in a gnomonic, |
|
non-distorted space. These spherical convolutions can rep- |
|
resent kernels as patches tangent to a sphere where the 360° image is reprojected.
|
In our problem of scanpath generation, the location of |
|
the features in the image is of particular importance. There- |
|
fore, to facilitate spatial learning of the network, we use the |
|
recently presented CoordConv strategy [28], which gives |
|
convolutions access to its own input coordinates by adding |
|
extra coordinate channels. We do this by concatenating a |
|
CoordConv layer to the input 360° image (see Figure 2).
|
This layer also helps stabilize the training process, as shown |
|
in Section 4.4. |
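A minimal sketch of this CoordConv-style conditioning, i.e., concatenating normalized coordinate channels to the equirectangular input; the helper below is ours, not the authors' code:

```python
import torch

def add_coord_channels(image):
    """Concatenate normalized x/y coordinate channels to a batch of
    equirectangular images of shape (B, C, H, W), as in CoordConv [28]."""
    b, _, h, w = image.shape
    ys = torch.linspace(-1.0, 1.0, h, device=image.device)
    xs = torch.linspace(-1.0, 1.0, w, device=image.device)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")        # each (H, W)
    coords = torch.stack([xx, yy]).expand(b, -1, -1, -1)  # (B, 2, H, W)
    return torch.cat([image, coords], dim=1)              # (B, C+2, H, W)
```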
|
3.5. Dataset and Training Details |
|
We train our model using Sitzmann et al.'s [43] dataset, composed of 22 different 360° images and a total of 1,980 scanpaths from 169 different users. Each scanpath contains gaze information captured during 30 seconds with a binocular eye tracking recorder at 120 Hz. We sample these captured scanpaths at 1 Hz (i.e., T = 30), and reparameterize them (Section 3.1), so that each scanpath is a sequence P = (P_0, …, P_29) ∈ R^{3×T}. Given the relatively small size of the dataset, we perform data augmentation by longitudinally shifting the 360° images (and adjusting their scan-
|
paths accordingly); specifically, for each image we generate |
|
six different variations with random longitudinal shifting. |
|
We use 19 of the 22 images in this dataset for training, and |
|
reserve three to be part of our test set (more details on the |
|
full test set are described in Section 4). With the data aug- |
|
mentation process, this yields 114 images in the training set. |
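The longitudinal-shift augmentation amounts to a horizontal roll of the equirectangular image together with the matching shift of the gaze longitudes. A sketch under these assumptions (names are ours):

```python
import numpy as np

def longitudinal_shift(image, phis, shift_px):
    """Roll an equirectangular image (H, W, 3) horizontally by shift_px pixels
    and shift the scanpath longitudes phi (in [-pi, pi]) accordingly."""
    h, w, _ = image.shape
    shifted_img = np.roll(image, shift_px, axis=1)
    dphi = 2.0 * np.pi * shift_px / w
    shifted_phis = (phis + dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return shifted_img, shifted_phis
```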
|
During our training process we use the Adam opti- |
|
mizer [21], with constant learning rates l_G = 10^{-4} for the generator and l_D = 10^{-5} for the discriminator, both with momentum parameters (0.5, 0.99). Further training and
|
implementation details can be found in the supplementary |
|
material. |
|
4. Validation and Analysis |
|
Figure 3. Results of our model for two different scenes: market and mall from Rai et al.'s dataset [39]. From left to right: 360° image, ground truth sample scanpath, and three scanpaths generated by our model. The generated scanpaths are plausible and focus on relevant parts of the scene, yet they exhibit the diversity expected among different human observers. Please refer to the supplementary material for a larger set of results.

We evaluate the quality of the generated scanpaths with respect to the measured, ground truth scanpaths, as well as to other approaches. We also ablate our model to illustrate the contribution of the different design choices.

We evaluate our model on two different test sets. First,
|
using the three images from Sitzmann et al.’s dataset [43] |
|
left out of the training (Section 3.5): room, chess, and robots. To assess our model's ability to extrapolate, we also evaluate it on a different dataset, from Rai et al. [39]. This
|
dataset consists of 60 scenes watched by 40 to 42 observers |
|
for 25 seconds. Thus, when comparing to their ground truth, |
|
we cut our 30-second scanpaths to the maximum length of |
|
their data. Please also refer to the supplementary material |
|
for more details on the test set, as well as further evaluation |
|
and results. |
|
4.1. Scanpath Similarity Metrics |
|
Our evaluation is both quantitative and qualitative. Eval- |
|
uating scanpath similarity is not a trivial task, and a num- |
|
ber of metrics have been proposed in the literature, each fo- |
|
cused on a different context or aspect of gaze behavior [17]. |
|
Proposed metrics can be roughly categorized into: (i) di- |
|
rect measures based on Euclidean distance; (ii) string-based |
|
measures based on string alignment techniques (such as the |
|
Levenshtein distance, LEV); (iii) curve similarity methods; |
|
(iv) metrics from time-series analysis (like DTW, on which |
|
our loss function is based); and (v) metrics from recurrence |
|
analysis ( e.g., recurrence measure REC and determinism |
|
measure DET). We refer the reader to supplementary mate- |
|
rial and the review by Fahimi and Bruce [17] for an in-depth |
|
explanation and comparison of existing metrics. Here, we |
|
include a subset of metrics that take into account both the |
|
position and the ordering of the points (namely LEV and |
|
DTW), and two metrics from recurrence analysis (REC and |
|
DET), which have been reported to be discriminative in |
|
revealing viewing behaviors and patterns when comparing |
|
scanpaths. We nevertheless compute our evaluation for the |
|
full set of metrics reviewed by Fahimi and Bruce [17] in the |
|
supplementary material. |
|
Since for each image we have a number of ground truth scanpaths, and a set of generated scanpaths, we compute
|
each similarity metric for all possible pairwise comparisons |
|
(each generated scanpath against each of the ground truth |
|
scanpaths), and average the result. In order to provide an |
|
upper baseline for each metric, we also compute the human |
|
baseline ( Human BL ) [57], which is obtained by comparing |
|
each ground truth scanpath against all the other ground truth |
|
ones, and averaging the results. In a similar fashion, we |
|
compute a lower baseline based on sampling gaze points |
|
randomly over the image ( Random BL ). |
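This evaluation protocol (all pairwise comparisons, plus the leave-one-out human baseline) can be sketched for any similarity metric as follows; this is a hypothetical illustration, where `metric` stands for LEV, DTW, REC, DET, etc.:

```python
import numpy as np

def pairwise_score(generated, ground_truth, metric):
    """Average metric over all generated-vs-ground-truth pairs."""
    return np.mean([metric(g, gt) for g in generated for gt in ground_truth])

def human_baseline(ground_truth, metric):
    """Compare each ground truth scanpath against all the others (Human BL)."""
    scores = [metric(a, b)
              for i, a in enumerate(ground_truth)
              for j, b in enumerate(ground_truth) if i != j]
    return np.mean(scores)
```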
|
4.2. Results |
|
Qualitative results of our model can be seen in Figures 3 |
|
and 1 for scenes with different layouts. Figure 3, from left |
|
to right, shows: the scene, a sample ground truth (captured) |
|
scanpath, and three of our generated scanpaths sampled |
|
from the generator. Our model is able to produce plausible, |
|
coherent scanpaths that focus on relevant parts of the scene. |
|
In the generated scanpaths we observe regions where the |
|
user focuses (points of a similar color clustered together), as |
|
well as more exploratory behavior. The generated scanpaths |
|
are diverse but plausible, as one would expect if different |
|
users watched the scene (the supplementary material con- |
|
tains more ground truth, measured scanpaths, showing this |
|
diversity). Further, our model is not affected by the inherent |
|
distortions of the 360° image. This is apparent, for exam-
|
ple, in the market scene: The central corridor, narrow and |
|
seemingly featureless, is observed by generated virtual ob- |
|
servers . Quantitative results in Table 1 further show that our |
|
generated scanpaths are close to the human baseline ( Hu- |
|
man BL ), both in the test set from Sitzmann et al.’s dataset, |
|
and over Rai et al.’s dataset. A value close to Human BL in- |
|
dicates that the generated scanpaths are as valid or as plau- |
|
sible as the captured, ground truth ones. Note that obtaining |
|
a value lower than Human BL is possible, if the generated |
|
scanpaths are on average closer to the ground truth ones, |
|
and exhibit less variance. |
|
Figure 4. Qualitative comparison to previous methods for five different scenes from Rai et al.'s dataset. In each row, from left to right: 360° image, and a sample scanpath obtained with our method, PathGAN [3], SaltiNet [4], and Zhu et al.'s [62]. Note that, in the case of PathGAN, we are including the results directly taken from their paper, thus the different visualization. Our method produces plausible scanpaths focused on meaningful regions, in comparison with other techniques. Please see the text for details, and the supplementary material for a larger set of results, also including ground truth scanpaths.

Since our model is generative, it can generate as many scanpaths as needed and model many different potential observers. We perform our evaluations on a random set of 100
|
scanpaths generated by our model. We choose this num- |
|
ber to match the number of generated scanpaths available |
|
for competing methods, to perform a fair comparison. Nev- |
|
ertheless, we have analyzed the stability of our generative |
|
model by computing our evaluation metrics for a variable |
|
number of generated scanpaths: Our results are very sta- |
|
ble with the number of scanpaths (please see Table 2 in the |
|
supplementary material). |
|
4.3. Comparison to Other Methods |
|
We compare ScanGAN360 to three methods devoted to |
|
scanpath prediction in 360° images: SaltiNet-based scan-
|
path prediction [2, 4] (we will refer to it as SaltiNet in the |
|
following), PathGAN [3] and Zhu et al.’s method [62]. For |
|
comparisons to SaltiNet we use the public implementation |
|
of the authors, while the authors of Zhu et al. kindly pro- |
|
vided us with the results of their method for the images from |
|
Rai et al.’s dataset (but not for Sitzmann et al.’s); we there- |
|
fore have both qualitative (Figure 4) and quantitative (Ta- |
|
ble 1) comparisons to these two methods. In the case of |
|
PathGAN, no model or implementation could be obtained, |
|
so we compare qualitatively to the results extracted from |
|
their paper (Figure 4, third column). |
|
Table 1 shows that our model consistently provides results closer to the ground truth scanpaths than Zhu et al.'s
|
and SaltiNet. The latter is based on a saliency-sampling |
|
strategy, and thus these results indicate that indeed the tem- |
|
poral information learnt by our model is relevant for the fi- |
|
nal result. Our model, as expected, also amply surpasses the |
|
random baseline. In Figure 4 we see how PathGAN scan- |
|
paths fail to focus on the relevant parts of the scene (see, |
|
e.g.,snow orsquare ), while SaltiNet exhibits a somewhat |
|
erratic behavior, with large displacements and scarce areas |
|
of focus ( train ,snow orsquare show this). Finally, Zhu |
|
et al.’s approach tends to place gaze points at high contrast |
|
borders (see, e.g.,square orresort ). |
|
4.4. Ablation Studies |
|
We also evaluate the contribution of different elements of |
|
our model to the final result. For this purpose, we analyze |
|
a standard GAN strategy ( i.e., using only the discriminative |
|
loss), as the baseline. Figure 5 shows how the model is un- |
|
able to learn both the temporal nature of the scanpaths, and |
|
their relation to image features. We also analyze the results |
|
yielded by adding a term based on the MSE between the |
|
ground truth and the generated scanpath to the loss function, |
|
instead of our DTW_sph term (the only previous GAN approach for scanpath generation [3] relied on MSE for their loss term). The MSE only measures a one-to-one correspondence between points, considering for each time instant a single point, unrelated to the rest. This hinders the learning process, leading to non-plausible results (Figure 5, second row). This behavior is corrected when our DTW_sph is added instead, since it is specifically targeted for time series data and takes into account the actual spatial structure of the data (Figure 5, third row). The corresponding quantitative measures over our test set from Sitzmann et al. can be found in Table 2. We also analyze the effect of removing the CoordConv layer from our model: Results in Table 2 indicate that the use of CoordConv does have a positive effect on the results, helping learn the transformation from the input to the target domain.

Figure 5. Qualitative ablation results. From top to bottom: basic GAN strategy (baseline); adding MSE to the loss function of the former; our approach; and an example ground truth scanpath. These results illustrate the need for our DTW_sph loss term.

Table 1. Quantitative comparisons of our model against SaltiNet [4] and Zhu et al. [62]. We also include upper (human baseline, Human BL) and lower (randomly sampling over the image, Random BL) baselines. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). *SaltiNet is trained with Rai et al.'s dataset; we include it for completeness.

Dataset                        Method              LEV↓    DTW↓      REC↑   DET↑
Test set from Sitzmann et al.  Random BL           52.33   2370.56   0.47   0.93
                               SaltiNet            48.00   1928.85   1.45   1.78
                               ScanGAN360 (ours)   46.15   1921.95   4.82   2.32
                               Human BL            43.11   1843.72   7.81   4.07
Rai et al.'s dataset           Random BL           43.11   1659.75   0.21   0.94
                               SaltiNet*           48.07   1928.41   1.43   1.81
                               Zhu et al.          43.55   1744.20   1.64   1.50
                               ScanGAN360 (ours)   40.99   1549.59   1.72   1.87
                               Human BL            39.59   1495.55   2.33   2.31

Table 2. Quantitative results of our ablation study. Arrows indicate whether higher or lower is better, and boldface highlights the best result for each metric (excluding the ground truth Human BL). Please refer to the text for details on the ablated models.

Metric                        LEV↓    DTW↓      REC↑   DET↑
Basic GAN                     49.42   2088.44   3.01   1.74
MSE                           48.90   1953.21   2.41   1.73
DTW_sph (no CoordConv)        47.82   1988.38   3.67   1.99
DTW_sph (ours)                46.19   1925.20   4.50   2.33
Human Baseline (Human BL)     43.11   1843.72   7.81   4.07
|
4.5. Behavioral Evaluation |
|
While the previous subsections employ well-known met- |
|
rics from the literature to analyze the performance of our |
|
model, in this subsection we perform a higher-level analysis |
|
of its results. We assess whether the behavioral characteris- |
|
tics of our scanpaths match those which have been reported |
|
from actual users watching 360° images.
|
Exploration time Sitzmann et al. [43] measure the explo- |
|
ration time as the average time that users took to move their |
|
eyes to a certain longitude relative to their starting point, |
|
and measure how long it takes for users to fully explore the |
|
scene. Figure 6 (left) shows this exploration time, measured |
|
by Sitzmann et al. from captured data, for the three scenes |
|
from their dataset included in our test set ( room ,chess , and |
|
robots ). To analyze whether our generated scanpaths mimic |
|
this behavior and exploration speed, we plot the exploration |
|
time of our generated scanpaths (Figure 6, center left) for |
|
the same scenes and number of scanpaths. We can see how |
|
the speed and exploration time are very similar between |
|
real and generated data. Individual results per scene can |
|
be found in the supplementary material. |
|
Fixation bias Similar to the center bias of human eye fix- |
|
ations observed in regular images [20], the existence of a |
|
Laplacian-like equator bias has been measured in 360° im-
|
ages [43]: The majority of fixations fall around the equa- |
|
tor, in detriment of the poles. We have evaluated whether |
|
the distribution of scanpaths generated by our model also |
|
presents this bias. This is to be expected, since the data our |
|
model is trained with exhibits it, but is yet another indicator |
|
that we have succeeded in learning the ground truth distri- |
|
bution. We test this by generating, for each scene, 1,000 |
|
different scanpaths with our model, and aggregating them |
|
over time to produce a pseudo-saliency map, which we term
|
aggregate map . Figure 6 (right) shows this for two scenes |
|
in our test set: We can see how this equator bias is indeed |
|
present in our generated scanpaths. |
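The aggregate maps are obtained by accumulating the gaze points of many generated scanpaths into a 2D histogram over the equirectangular domain (optionally blurred into a heatmap). A minimal sketch, with names and resolution of our choosing:

```python
import numpy as np

def aggregate_map(scanpaths_latlong, height=256, width=512):
    """Accumulate gaze points of many scanpaths (lists of (theta, phi) pairs)
    into a pseudo-saliency histogram over the equirectangular image."""
    agg = np.zeros((height, width))
    for scanpath in scanpaths_latlong:
        for theta, phi in scanpath:
            i = int((0.5 - theta / np.pi) * (height - 1))     # latitude -> row
            j = int((phi / (2 * np.pi) + 0.5) * (width - 1))  # longitude -> col
            agg[i, j] += 1
    return agg / agg.sum()
```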
|
Figure 6. Behavioral evaluation. Left: Exploration time for real captured data (left) and scanpaths generated by our model (center left). Speed and exploration time of our scanpaths are on par with those of real users. Center right: ROC curve of our generated scanpaths for each individual test scene (gray), and averaged across scenes (magenta). The faster it converges to the maximum rate, the higher the inter-observer congruency. Right: Aggregate maps for two different scenes, computed as heatmaps from 1,000 generated scanpaths. Our model is able to produce aggregate maps that focus on relevant areas of the scenes and exhibit the equator bias reported in the literature.

Inter-observer congruency It is common in the literature analyzing users' gaze behavior to measure inter-observer
|
congruency, often by means of a receiver operating char- |
|
acteristic (ROC) curve. We compute the congruency of our |
|
“generated observers” through this ROC curve for the three |
|
scenes in our test set from the Sitzmann et al. dataset (Fig- |
|
ure 6, center right). The curve calculates the ability of the |
|
i-th scanpath to predict the aggregate map of the corresponding scene. Each point in the curve is computed by generating a map containing the top n% most salient regions of the aggregate map (computed without the i-th scanpath), and calculating the percentage of gaze points of the i-th scanpath
|
that fall into that map. Our ROC curve indicates a strong |
|
agreement between our scanpaths, with around 75% of all |
|
gaze points falling within 25% of the most salient regions. |
|
These values are comparable to those measured in previous |
|
studies with captured gaze data [43, 23]. |
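The leave-one-out ROC computation described above can be sketched as follows. It reuses the hypothetical aggregate_map helper sketched earlier in this section, and all names are ours:

```python
import numpy as np

def point_to_pixel(theta, phi, height, width):
    """Map a (latitude, longitude) gaze point to equirectangular pixel indices."""
    i = int((0.5 - theta / np.pi) * (height - 1))
    j = int((phi / (2 * np.pi) + 0.5) * (width - 1))
    return i, j

def congruency_roc(scanpaths, percentages=range(5, 105, 5)):
    """Leave-one-out ROC: fraction of the i-th scanpath's gaze points falling
    inside the top-n% most salient regions of the aggregate map built from all
    the other scanpaths."""
    curves = []
    for i, sp in enumerate(scanpaths):
        agg = aggregate_map(scanpaths[:i] + scanpaths[i + 1:])  # without scanpath i
        order = np.sort(agg.ravel())[::-1]
        rates = []
        for n in percentages:
            k = max(1, int(agg.size * n / 100.0))
            mask = agg >= order[k - 1]                          # top-n% salient regions
            hits = sum(mask[point_to_pixel(t, p, *agg.shape)] for t, p in sp)
            rates.append(hits / len(sp))
        curves.append(rates)
    return list(percentages), np.mean(curves, axis=0)
```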
|
Temporal and spatial coherence Our generated scan- |
|
paths have a degree of stochasticity, to be able to model the |
|
diversity of real human observers. However, human gaze |
|
behavior follows specific patterns, and each gaze point is |
|
conditioned not only by the features in the scene but also by |
|
the previous history of gaze points of the user. If two users |
|
start watching a scene in the same region, a certain degree |
|
of coherence between their scanpaths is expected, that may |
|
diverge more as more time passes. We analyze the temporal |
|
coherence of generated scanpaths that start in the same re- |
|
gion, and observe that indeed our generated scanpaths fol- |
|
low a coherent pattern. Please refer to the supplementary |
|
for more information on this part of the analysis. |
|
5. Conclusion |
|
In summary, we propose ScanGAN360, a conditional |
|
GAN approach to generating gaze scanpaths for immersive |
|
virtual environments. Our unique parameterization tailored |
|
to panoramic content, coupled with our novel usage of a |
|
DTW loss function, allow our model to generate scanpaths |
|
of significantly higher quality and duration than previous approaches. We further explore applications of our model:
|
Please refer to the supplementary material for a description |
|
and examples of these. |
|
Our GAN approach is well suited for the problem of |
|
scanpath generation: A single ground truth scanpath does |
|
not exist, yet real scanpaths follow certain patterns that |
|
are difficult to model explicitly but that are automatically |
|
learned by our approach. Note that our model is also very |
|
fast and can produce about 1,000 scanpaths per second. |
|
This may be a crucial capability for interactive applications: |
|
our model can generate virtual observers in real time. |
|
Limitations and future work Our model is trained with |
|
30-second long scanpaths, sampled at 1 Hz. Although |
|
this is significantly longer than most previous approaches |
|
[16, 23, 27], exploring different or variable lengths or sam- |
|
pling rates remains interesting for future work. When train- |
|
ing our model, we focus on learning higher-level aspects of |
|
visual behavior, and we do not explicitly enforce low-level |
|
ocular movements ( e.g., fixations or saccades). Currently, |
|
our relatively low sampling rate prevents us from model- |
|
ing very fast dynamic phenomena, such as saccades. Yet, |
|
fixation patterns naturally emerge in our results, and future |
|
work could explicitly take low-level oculomotor aspects of |
|
visual search into account. |
|
The model, parameterization, and loss function are tai- |
|
lored to 360° images. In a similar spirit, a DTW-based loss
|
function could also be applied to conventional 2D images |
|
(using a Euclidean distance in 2D instead of our δ_sph), po-
|
tentially leading to better results than current 2D approaches |
|
based on mean-squared error. |
|
We believe that our work is a timely effort and a first step |
|
towards understanding and modeling dynamic aspects of at- |
|
tention in 360images. We hope that our work will serve |
|
as a basis to advance this research, both in virtual reality |
|
and in conventional imagery, and extend it to other scenar- |
|
ios, such as dynamic or interactive content, analyzing the |
|
influence of the task, including the presence of motion par-allax, or exploring multimodal experiences. We will make |
|
our model and training code available in order to facilitate |
|
the exploration of these and other possibilities. |
|
References |
|
[1] Elena Arabadzhiyska, Okan Tarhan Tursun, Karol |
|
Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Saccade |
|
landing position prediction for gaze-contingent rendering. |
|
ACM Transactions on Graphics (TOG) , 36(4):1–12, 2017. 5 |
|
[2] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and |
|
Noel E O’Connor. Saltinet: Scan-path prediction on 360 |
|
degree images using saliency volumes. In Proceedings of |
|
the IEEE ICCV Workshops , pages 2331–2338, 2017. 1, 2, 6, |
|
4 |
|
[3] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and |
|
Noel E O’Connor. Pathgan: visual scanpath prediction with |
|
generative adversarial networks. In Proceedings of the Eu- |
|
ropean Conference on Computer Vision (ECCV) , pages 0–0, |
|
2018. 1, 2, 6, 4 |
|
[4] Marc Assens, Xavier Giro-i Nieto, Kevin McGuinness, and |
|
Noel E O’Connor. Scanpath and saliency prediction on 360 |
|
degree images. Signal Processing: Image Communication , |
|
69:8–14, 2018. 1, 2, 6, 7 |
|
[5] Wentao Bao and Zhenzhong Chen. Human scanpath predic- |
|
tion based on deep convolutional saccadic model. Neuro- |
|
computing , 404:154 – 164, 2020. 2 |
|
[6] Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert. |
|
Differentiable divergences between time series. arXiv |
|
preprint arXiv:2010.08354 , 2020. 1 |
|
[7] A. Borji. Boosting bottom-up and top-down visual features |
|
for saliency estimation. In 2012 IEEE Conference on Com- |
|
puter Vision and Pattern Recognition , 2012. 2 |
|
[8] Zoya Bylinskii, Tilke Judd, Ali Borji, Laurent Itti, Frédo Du-
rand, Aude Oliva, and Antonio Torralba. MIT saliency bench-
|
mark. http://saliency.mit.edu/, 2019. 2 |
|
[9] Ying Cao, Rynson WH Lau, and Antoni B Chan. Look over |
|
here: Attention-directing composition of manga elements. |
|
ACM Trans. Graph. , 33(4):1–11, 2014. 2, 3 |
|
[10] Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and |
|
Juan Carlos Niebles. D3tw: Discriminative differentiable dy- |
|
namic time warping for weakly supervised action alignment |
|
and segmentation. In Proceedings of the IEEE/CVF Confer- |
|
ence on Computer Vision and Pattern Recognition (CVPR) , |
|
June 2019. 1 |
|
[11] Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier |
|
Deforges. Salgan360: Visual saliency prediction on 360 de- |
|
gree images with generative adversarial networks. In 2018 |
|
IEEE Int. Conf. on Multim. & Expo Workshops (ICMEW) , |
|
pages 01–04. IEEE, 2018. 2 |
|
[12] Alex Colburn, Michael F Cohen, and Steven Drucker. The |
|
role of eye gaze in avatar mediated conversational interfaces. |
|
Technical report, Citeseer, 2000. 2 |
|
[13] Benjamin Coors, Alexandru Paul Condurache, and An- |
|
dreas Geiger. Spherenet: Learning spherical representations |
|
for detection and classification in omnidirectional images. |
|
InProc. of the European Conference on Computer Vision |
|
(ECCV), pages 518–533, 2018. 1, 4
[14] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita
|
Cucchiara. Predicting human eye fixations via an lstm-based |
|
saliency attentive model. IEEE Transactions on Image Pro- |
|
cessing , 27(10):5142–5154, 2018. 2 |
|
[15] Marco Cuturi and Mathieu Blondel. Soft-dtw: a dif- |
|
ferentiable loss function for time-series. arXiv preprint |
|
arXiv:1703.01541 , 2017. 4, 1 |
|
[16] Stephen R Ellis and James Darrell Smith. Patterns of sta- |
|
tistical dependency in visual scanning. Eye movements and |
|
human information processing , pages 221–238, 1985. 2, 8 |
|
[17] Ramin Fahimi and Neil DB Bruce. On metrics for measuring |
|
scanpath similarity. Behavior Research Methods , pages 1– |
|
20, 2020. 5, 2 |
|
[18] Kaye Horley, Leanne M Williams, Craig Gonsalvez, and |
|
Evian Gordon. Face to face: visual scanpath evidence for |
|
abnormal processing of facial expressions in social phobia. |
|
Psychiatry research , 127(1-2):43–53, 2004. 1 |
|
[19] Laurent Itti, Christof Koch, and Ernst Niebur. A model |
|
of saliency-based visual attention for rapid scene analysis. |
|
IEEE Transactions on pattern analysis and machine intelli- |
|
gence , 20(11):1254–1259, 1998. 2 |
|
[20] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Tor-
|
ralba. Learning to predict where humans look. In IEEE |
|
ICCV , pages 2106–2113. IEEE, 2009. 2, 7 |
|
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for |
|
stochastic optimization. In ICLR , 2014. Last updated in |
|
arXiv in 2017. 4 |
|
[22] Matthias Kümmerer, Thomas S. A. Wallis, and Matthias
|
Bethge. Deepgaze ii: Reading fixations from deep |
|
features trained on object recognition. arXiv preprint |
|
arXiv:1610.01563 , 2016. 2 |
|
[23] O. Le Meur and T. Baccino. Methods for comparing scan- |
|
paths and saliency maps: strengths and weaknesses. Behav- |
|
ior Research Methods , pages 251–266, 2013. 8 |
|
[24] Olivier Le Meur and Zhi Liu. Saccadic model of eye move- |
|
ments for free-viewing condition. Vision Research , 116:152 |
|
– 164, 2015. 2 |
|
[25] Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang. Very |
|
long term field of view prediction for 360-degree video |
|
streaming. In 2019 IEEE Conference on Multimedia Infor- |
|
mation Processing and Retrieval (MIPR) , pages 297–302. |
|
IEEE, 2019. 2 |
|
[26] Suiyi Ling, Jesús Gutiérrez, Ke Gu, and Patrick Le Callet.
|
Prediction of the influence of navigation scan-path on per- |
|
ceived quality of free-viewpoint videos. IEEE Journal on |
|
Emerging and Sel. Topics in Circ. and Sys. , 9(1):204–216, |
|
2019. 2 |
|
[27] Huiying Liu, Dong Xu, Qingming Huang, Wen Li, Min Xu, |
|
and Stephen Lin. Semantically-based human scanpath esti- |
|
mation with hmms. In Proceedings of the IEEE International |
|
Conference on Computer Vision , pages 3232–3239, 2013. 2, |
|
8 |
|
[28] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski |
|
Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An in- |
|
triguing failing of convolutional neural networks and the co- |
|
ordconv solution. In Neural information processing systems , |
|
pages 9605–9616, 2018. 4
[29] Y. Lu, W. Zhang, C. Jin, and X. Xue. Learning attention map
|
from images. In 2012 IEEE Conference on Computer Vision |
|
and Pattern Recognition , 2012. 2 |
|
[30] Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Ma- |
|
sia, and Ana Serrano. Multimodality in VR: A survey. arXiv |
|
preprint arXiv:2101.07906 , 2021. 2 |
|
[31] Daniel Martin, Ana Serrano, and Belen Masia. Panoramic |
|
convolutions for 360single-image saliency prediction. In |
|
CVPR Workshop on CV for AR/VR , 2020. 1, 2 |
|
[32] Mehdi Mirza and Simon Osindero. Conditional generative |
|
adversarial nets. arXiv preprint arXiv:1411.1784 , 2014. 3 |
|
[33] Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa |
|
Smolic. Salnet360: Saliency maps for omni-directional im- |
|
ages with cnn. Signal Processing: Image Communication , |
|
69:26 – 34, 2018. 2 |
|
[34] Meinard Müller. Dynamic time warping. Information re-
|
trieval for music and motion , pages 69–84, 2007. 3, 1 |
|
[35] Anh Nguyen, Zhisheng Yan, and Klara Nahrstedt. Your at- |
|
tention is unique: Detecting 360-degree video saliency in |
|
head-mounted display for head movement prediction. In |
|
Proc. ACM Intern. Conf. on Multimedia , pages 1190–1198, |
|
2018. 2 |
|
[36] Junting Pan, Cristian Canton, Kevin McGuinness, Noel E. |
|
O’Connor, Jordi Torres, Elisa Sayrol, and Xavier and Giro- |
|
i Nieto. Salgan: Visual saliency prediction with generative |
|
adversarial networks. 2018. 2 |
|
[37] Junting Pan, Elisa Sayrol, Xavier Giro-i Nieto, Kevin |
|
McGuinness, and Noel E. O’Connor. Shallow and deep con- |
|
volutional networks for saliency prediction. In The IEEE |
|
Conference on Computer Vision and Pattern Recognition |
|
(CVPR) , June 2016. 2 |
|
[38] Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B |
|
Chan. Directing user attention via visual flow on web de- |
|
signs. ACM Trans. on Graph. , 35(6):1–11, 2016. 2, 3 |
|
[39] Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. A dataset
|
of head and eye movements for 360 degree images. In Pro- |
|
ceedings of the 8th ACM on Multimedia Systems Conference , |
|
pages 205–210, 2017. 2, 5, 1 |
|
[40] Kerstin Ruhland, Christopher E Peters, Sean Andrist, |
|
Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge |
|
Mutlu, and Rachel McDonnell. A review of eye gaze in |
|
virtual agents, social robotics and hci: Behaviour genera- |
|
tion, user interaction and perception. In Computer graph- |
|
ics forum , volume 34, pages 299–326. Wiley Online Library, |
|
2015. 4 |
|
[41] Matan Sela, Pingmei Xu, Junfeng He, Vidhya Naval- |
|
pakkam, and Dmitry Lagun. Gazegan-unpaired adversar- |
|
ial image generation for gaze estimation. arXiv preprint |
|
arXiv:1711.09767 , 2017. 2 |
|
[42] Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon |
|
Wetzstein, Diego Gutierrez, and Belen Masia. Movie edit- |
|
ing and cognitive event segmentation in virtual reality video. |
|
ACM Trans. Graph. (SIGGRAPH) , 36(4), 2017. 1 |
|
[43] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh |
|
Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wet- |
|
zstein. Saliency in VR: How do people explore virtual |
|
environments? IEEE Trans. on Vis. and Comp. Graph. , |
|
24(4):1633–1642, 2018. 1, 2, 4, 5, 7, 8, 3
[44] Mikhail Startsev and Michael Dorr. 360-aware saliency esti-
|
mation with conventional image saliency predictors. Signal |
|
Proces.: Image Comm. , 69:43–52, 2018. 2 |
|
[45] Yu-Chuan Su and Kristen Grauman. Making 360 video |
|
watchable in 2d: Learning videography for click free view- |
|
ing. In 2017 IEEE Conference on Computer Vision and Pat- |
|
tern Recognition (CVPR) , pages 1368–1376. IEEE, 2017. 3 |
|
[46] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman. |
|
Pano2vid: Automatic cinematography for watching 360 |
|
videos. In Asian Conf. on CV , pages 154–171. Springer, |
|
2016. 3 |
|
[47] Benjamin W Tatler and Benjamin T Vincent. The promi- |
|
nence of behavioural biases in eye guidance. Visual Cogni- |
|
tion, 17(6-7):1029–1054, 2009. 2 |
|
[48] Hamed Rezazadegan Tavakoli, Esa Rahtu, and Janne |
|
Heikkilä. Stochastic bottom-up fixation prediction and sac-
|
cade generation. Image and Vision Computing , 31(9):686– |
|
693, 2013. 2 |
|
[49] Antonio Torralba, Aude Oliva, Monica S Castelhano, and |
|
John M Henderson. Contextual guidance of eye movements |
|
and attention in real-world scenes: the role of global features |
|
in object search. Psychological review , 113(4):766, 2006. 2 |
|
[50] Eleonora Vig, Michael Dorr, and David Cox. Large-scale |
|
optimization of hierarchical features for saliency prediction |
|
in natural images. In Proceedings of the IEEE Conference |
|
on Computer Vision and Pattern Recognition (CVPR) , June |
|
2014. 2 |
|
[51] LE Vincent and Nicolas Thome. Shape and time distortion |
|
loss for training deep time series forecasting models. In |
|
Advances in neural information processing systems , pages |
|
4189–4201, 2019. 1 |
|
[52] Dirk Walther and Christof Koch. Modeling attention to |
|
salient proto-objects. Neural Networks , 19:1395–1407, |
|
2006. 2 |
|
[53] Wenguan Wang and Jianbing Shen. Deep visual atten- |
|
tion prediction. IEEE Transactions on Image Processing , |
|
27(5):2368–2378, 2017. 2 |
|
[54] W. Wang and J. Shen. Deep visual attention prediction. IEEE |
|
Transactions on Image Processing , 27(5):2368–2378, 2018. |
|
2 |
|
[55] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali |
|
Borji. Salient object detection driven by fixation prediction. |
|
InProceedings of the IEEE Conference on Computer Vision |
|
and Pattern Recognition (CVPR) , June 2018. 2 |
|
[56] Chenglei Wu, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. A |
|
spherical convolution approach for learning long term view- |
|
port prediction in 360 immersive video. In Proceedings of |
|
the AAAI Conference on Artificial Intelligence , volume 34, |
|
pages 14003–14040, 2020. 2 |
|
[57] Chen Xia, Junwei Han, Fei Qi, and Guangming Shi. Pre- |
|
dicting human saccadic scanpaths based on iterative repre- |
|
sentation learning. IEEE Transactions on Image Processing , |
|
28(7):3502–3515, 2019. 5 |
|
[58] M. Xu, Y . Song, J. Wang, M. Qiao, L. Huo, and Z. Wang. |
|
Predicting head movement in panoramic video: A deep re- |
|
inforcement learning approach. IEEE Transactions on Pat- |
|
tern Analysis and Machine Intelligence , 41(11):2693–2708, |
|
2019. 2
[59] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and
|
Ming-Hsuan Yang. Saliency detection via graph-based man- |
|
ifold ranking. In Computer Vision and Pattern Recogni- |
|
tion (CVPR), 2013 IEEE Conference on , pages 3166–3173. |
|
IEEE, 2013. 2 |
|
[60] Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J Zelin- |
|
sky, and Tamara L Berg. Exploring the role of gaze behavior |
|
and object detection in scene understanding. Frontiers in |
|
psychology , 4:917, 2013. 1 |
|
[61] Qi Zhao and Christof Koch. Learning a saliency map using |
|
fixated locations in natural scenes. Journal of Vision , 11:9, |
|
2011. 2 |
|
[62] Yucheng Zhu, Guangtao Zhai, and Xiongkuo Min. The pre- |
|
diction of head and eye movement for 360 degree images. |
|
Signal Processing: Image Communication , 69:15–25, 2018. |
|
1, 2, 6, 7, 4

Supplementary Material
|
This document offers additional information and details |
|
on the following topics: |
|
• (S1) Extended description of the soft-DTW (differen- |
|
tiable version of DTW) distance metric used in our |
|
model. |
|
• (S2) Additional results (scanpaths generated with our |
|
method) for different scenes used in our evaluation in |
|
the main paper. |
|
• (S3) Additional ground truth scanpaths for the scenes |
|
used in our evaluation in the main paper. |
|
• (S4) Further details on our training process. |
|
• (S5) Further details on metrics and evaluation, includ- |
|
ing a larger set of metrics (which we briefly introduce), |
|
and extended analysis. |
|
• (S6) Further details on the behavioral evaluation of our |
|
scanpaths. |
|
• (S7) Example applications of our method. |
|
S1. Differentiable Dynamic Time Warping: |
|
soft-DTW |
|
One of the key aspects of our framework lies in the
|
addition of a second term to the generator’s loss function, |
|
based on dynamic time warping [34]. As pointed out in Section 3.3 in the main paper, dynamic time warping (DTW) measures the similarity between two temporal sequences (see Figure 7¹; see Equation 5 in the main paper for the original
|
DTW formulation, and Equations 6 and 7 in the main pa- |
|
per for our spherical modification on DTW). However, the |
|
original DTW function is not differentiable, therefore it is |
|
not suitable as a loss function. Instead, we use a differen- |
|
tiable version of it, soft-DTW, which has been recently pro- |
|
posed [15] and used as a loss function in different problems |
|
dealing with time series [6, 10, 51]. |
|
Differently from the original DTW formulation (Equa- |
|
tion 5 in the main paper), the soft-DTW is defined as fol- |
|
lows: |
|
$$\text{soft-DTW}^{\gamma}(r, s) = \min^{\gamma}_{A} \langle A, \Delta(r, s) \rangle, \quad (8)$$

where, as with traditional DTW, Δ(r, s) = [δ(r_i, s_j)]_{ij} ∈ R^{n×m} is a matrix containing the distances δ(·, ·) between each pair of points in r and s, A is a binary matrix that accounts for the alignment (or correspondence) between r and s, and ⟨·, ·⟩ is the inner product between both matrices.
|
¹ https://databricks.com/blog/2019/04/30/understanding-dynamic-time-warping.html
|
Figure 7. Simple visualization of dynamic time warping (DTW) |
|
alignment. Instead of assuming a pair-wise strict correspondence, |
|
DTW optimizes the alignment between two sequences to minimize |
|
their distance. |
|
In our case, r = (r_1, …, r_T) ∈ R^{3×T} and s = (s_1, …, s_T) ∈ R^{3×T} are two scanpaths that we wish to compare.
|
The main difference lies in the replacement of the min_A operator with the soft min^γ_A operator, which is defined as follows:

$$\min{}^{\gamma}(a_1, \ldots, a_N) = \begin{cases} \min_{i \le N} a_i, & \gamma = 0, \\ -\gamma \log \sum_{i=1}^{N} e^{-a_i/\gamma}, & \gamma > 0. \end{cases}$$
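For reference, the soft-min above and the resulting soft-DTW recursion of Cuturi and Blondel [15] can be sketched as follows. This is a NumPy sketch of the forward computation only, with names of our choosing; the gradient used for training comes from a differentiable implementation, see [15]:

```python
import numpy as np

def soft_min(values, gamma):
    """min^gamma: the ordinary minimum when gamma = 0, otherwise the smooth
    relaxation -gamma * log(sum_i exp(-a_i / gamma))."""
    values = np.asarray(values, dtype=float)
    if gamma == 0.0:
        return values.min()
    z = -values / gamma
    m = z.max()
    return -gamma * (m + np.log(np.exp(z - m).sum()))

def soft_dtw(delta, gamma=0.1):
    """Forward pass of soft-DTW over a precomputed distance matrix delta (n, m),
    using R[i, j] = delta[i, j] + min^gamma(R[i-1, j-1], R[i-1, j], R[i, j-1])."""
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = delta[i - 1, j - 1] + soft_min(
                [R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]], gamma)
    return R[n, m]
```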
|
|