|
This paper has been accepted for publication at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, 2021. ©IEEE
|
Time Lens: Event-based Video Frame Interpolation |
|
Stepan Tulyakov*,1   Daniel Gehrig*,2   Stamatios Georgoulis1   Julius Erbach1   Mathias Gehrig2   Yuanyou Li1   Davide Scaramuzza2

1Huawei Technologies, Zurich Research Center

2Dept. of Informatics, Univ. of Zurich and Dept. of Neuroinformatics, Univ. of Zurich and ETH Zurich

*indicates equal contribution
|
Figure 1: Qualitative results comparing our proposed method, Time Lens, with DAIN [3] and BMBC [29]. Our method can interpolate frames in highly dynamic scenes, such as a spinning umbrella (top row) and a bursting balloon (bottom row). It does this by combining events (b) and frames (a).
|
Abstract |
|
State-of-the-art frame interpolation methods generate |
|
intermediate frames by inferring object motions in the |
|
image from consecutive key-frames. In the absence of |
|
additional information, first-order approximations, i.e. |
|
optical flow, must be used, but this choice restricts the |
|
types of motions that can be modeled, leading to errors |
|
in highly dynamic scenarios. Event cameras are novel |
|
sensors that address this limitation by providing auxiliary |
|
visual information in the blind-time between frames. They |
|
asynchronously measure per-pixel brightness changes and |
|
do this with high temporal resolution and low latency. |
|
Event-based frame interpolation methods typically adopt a |
|
synthesis-based approach, where predicted frame residuals |
|
are directly applied to the key-frames. However, while these |
|
approaches can capture non-linear motions they suffer |
|
from ghosting and perform poorly in low-texture regions |
|
with few events. Thus, synthesis-based and flow-based |
|
approaches are complementary. In this work, we introduce |
|
Time Lens, a novel method that leverages the advantages of
|
both. We extensively evaluate our method on three synthetic |
|
and two real benchmarks where we show an up to 5.21 |
|
dB improvement in terms of PSNR over state-of-the-art |
|
frame-based and event-based methods. Finally, we release |
|
a new large-scale dataset in highly dynamic scenarios, |
|
aimed at pushing the limits of existing methods. |
|
Multimedia Material |
|
The High-Speed Event and RGB (HS-ERGB) dataset |
|
and evaluation code can be found at: http://rpg.ifi.uzh.ch/timelens
|
1. Introduction |
|
Many things in real life can happen in the blink of an |
|
eye. A hummingbird flapping its wings, a cheetah accel- |
|
erating towards its prey, a tricky stunt with the skateboard, |
|
or even a baby taking its first steps. Capturing these mo- |
|
ments as high-resolution videos with high frame rates typically requires professional high-speed cameras that are in-
|
accessible to casual users. Modern mobile device producers |
|
have tried to incorporate more affordable sensors with sim- |
|
ilar functionalities into their systems, but they still suffer |
|
from the large memory requirements and high power con- |
|
sumption associated with these sensors. |
|
Video Frame Interpolation (VFI) addresses this problem, |
|
by converting videos with moderate frame rates into high frame
|
rate videos in post-processing. In theory, any number of |
|
new frames can be generated between two keyframes of |
|
the input video. Therefore, VFI is an important problem |
|
in video processing with many applications, ranging from |
|
super slow motion [10] to video compression [42]. |
|
Frame-based interpolation approaches rely solely
|
on input from a conventional frame-based camera that |
|
records frames synchronously and at a fixed rate. There are |
|
several classes of such methods that we describe below. |
|
Warping-based approaches [20, 10, 44, 21, 29] combine |
|
optical flow estimation [8, 16, 36] with image warping [9], |
|
to generate intermediate frames in-between two consecutive |
|
key frames. More specifically, under the assumptions of lin- |
|
ear motion and brightness constancy between frames, these |
|
works compute optical flow and warp the input keyframe(s) |
|
to the target frame, while leveraging concepts, like contex- |
|
tual information [20], visibility maps [10], spatial trans- |
|
former networks [44], forward warping [21], or dynamic |
|
blending filters [29], to improve the results. While most of |
|
these approaches assume linear motion, some recent works |
|
assume quadratic [43] or cubic [5] motions. Although these |
|
methods can address non-linear motions, they are still lim- |
|
ited by their order, failing to capture arbitrary motion. |
|
Kernel-based approaches [22, 23] avoid the explicit mo- |
|
tion estimation and warping stages of warping-based ap- |
|
proaches. Instead, they model VFI as local convolution over |
|
the input keyframes, where the convolutional kernel is esti- |
|
mated from the keyframes. This approach is more robust to |
|
motion blur and light changes. Alternatively, phase-based |
|
approaches [18] pose VFI as a phase shift estimation prob- |
|
lem, where a neural network decoder directly estimates the |
|
phase decomposition of the intermediate frame. However, |
|
while these methods can in theory model arbitrary motion, |
|
in practice they do not scale to large motions due to the lo- |
|
cality of the convolution kernels. |
|
In general, all frame-based approaches assume simplis- |
|
tic motion models (e.g. linear) due to the absence of vi- |
|
sual information in the blind-time between frames, which |
|
poses a fundamental limitation of purely frame-based VFI |
|
approaches. In particular, the simplifying assumptions rely |
|
on brightness and appearance constancy between frames, |
|
which limits their applicability in highly dynamic scenar- |
|
ios, such as (i) non-linear motions between the input keyframes, (ii) changes in illumination or motion blur, and (iii) non-rigid motions and new objects appearing in the scene between keyframes.
|
Multi-camera approaches. To overcome this limita- |
|
tion, some works seek to combine inputs from several |
|
frame-based cameras with different spatio-temporal trade- |
|
offs. For example, [1] combined low-resolution video with |
|
high resolution still images, whereas [25] fused a low- |
|
resolution high frame rate video with a high resolution low |
|
frame rate video. Both approaches can recover the miss- |
|
ing visual information necessary to reconstruct true object |
|
motions, but this comes at the cost of a bulkier form factor, |
|
higher power consumption, and a larger memory footprint. |
|
Event-based approaches. Compared to standard frame- |
|
based cameras, event cameras [14, 4] do not incur the afore- |
|
mentioned costs. They are novel sensors that only report the |
|
per-pixel intensity changes, as opposed to the full intensity |
|
images, and do this with high temporal resolution and low
|
latency on the order of microseconds. The resulting output |
|
is an asynchronous stream of binary “events” which can be |
|
considered a compressed representation of the true visual |
|
signal. These properties render them useful for VFI under |
|
highly dynamic scenarios (e.g. high-speed non-linear mo- |
|
tion, or challenging illumination). |
|
Events-only approaches reconstruct high frame rate |
|
videos directly from the stream of incoming events using |
|
GANs [38], RNNs [32, 33, 34], or even self-supervised |
|
CNNs [28], and can be thought of as a proxy to the VFI |
|
task. However, since the integration of intensity gradients |
|
into an intensity frame is an ill-posed problem, the global |
|
contrast of the interpolated frames is usually miscalculated. |
|
Moreover, since event cameras only expose intensity edges when they move, the interpolation results also depend on the motion.
|
Events-plus-frames approaches. As certain event cam- |
|
eras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) [4] can simultaneously output the event stream and
|
intensity images – the latter at low frame rates and prone |
|
to the same issues as frame-based cameras (e.g. motion |
|
blur) – several works [26, 41, 11, 37] use both streams of |
|
information. Typically, these works tackle VFI in conjunc- |
|
tion with de-blurring, de-noising, super-resolution, or other |
|
relevant tasks. They synthesize intermediate frames by |
|
accumulating temporal brightness changes, represented by |
|
events, from the input keyframes and applying them to the |
|
key frames. While these methods can handle illumination |
|
changes and non-linear motion, they still perform poorly compared to frame-based methods (see § 3.2), since, due to the inherent instability of the contrast threshold and sensor noise, not all brightness changes are accurately registered as events.
|
Our contributions are as follows:
|
1. We address the limitations of all aforementioned |
|
methods by introducing a CNN framework, named |
|
Time Lens, that marries the advantages of warping- and synthesis-based interpolation approaches. In our framework, we use a synthesis-based approach to ground and refine the results of a high-quality warping-based approach and to provide the ability to handle illumination changes and new objects appearing between keyframes (see Fig. 7).

Figure 2: Proposed event-based VFI approach.
|
2. We introduce a new warping-based interpolation ap- |
|
proach that estimates motion from events, rather than |
|
frames and thus has several advantages: it is more ro- |
|
bust to motion blur and can estimate non-linear mo- |
|
tion between frames. Moreover, the proposed method |
|
provides a higher quality interpolation compared to |
|
synthesis-based methods that use events when event |
|
information is not sufficient or noisy. |
|
3. We empirically show that the proposed Time Lens |
|
greatly outperforms state-of-the-art frame-based and |
|
event-based methods, published over recent months, |
|
on three synthetic and two real benchmarks where we |
|
show an up to 5.21 dB improvement in terms of PSNR. |
|
2. Method |
|
Problem formulation. Let us assume an event-based VFI setting, where we are given as input the left I_0 and right I_1 RGB key frames, as well as the left E_{0→τ} and right E_{τ→1} event sequences, and we aim to interpolate (one or more) new frames Î_τ at random timesteps τ in-between the key frames. Note that the event sequences (E_{0→τ}, E_{τ→1}) contain all asynchronous events that are triggered from the moment the respective (left I_0 or right I_1) key RGB frame is synchronously sampled until the timestep τ at which we want to interpolate a new frame Î_τ. Fig. 2 illustrates the proposed event-based VFI setting.
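For concreteness, the sketch below illustrates the inputs of this setting with a hypothetical container (VFISample) that is not part of the released code; it assumes events are stored as (t, x, y, polarity) rows with timestamps normalized to the keyframe interval.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VFISample:
    """Inputs for interpolating a frame at a normalized time tau in (0, 1).

    Hypothetical container, not the authors' API: frames are HxWx3 arrays,
    events are Nx4 arrays of (t, x, y, polarity) rows with t in [0, 1].
    """
    frame_left: np.ndarray    # I_0, sampled at t = 0
    frame_right: np.ndarray   # I_1, sampled at t = 1
    events: np.ndarray        # all asynchronous events triggered in (0, 1)

    def split_at(self, tau: float):
        """Split the event stream into E_{0->tau} and E_{tau->1}."""
        before = self.events[self.events[:, 0] < tau]
        after = self.events[self.events[:, 0] >= tau]
        return before, after
```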
|
System overview. To tackle the problem under consid- |
|
eration we propose a learning-based framework, namely |
|
Time Lens , that consists of four dedicated modules that |
|
serve complementary interpolation schemes, i.e. warping- |
|
based and synthesis-based interpolation. In particular, (1) |
|
the warping-based interpolation module estimates a new
|
frame by warping the boundary RGB keyframes using op- |
|
tical flow estimated from the respective event sequence; (2) |
|
the warping refinement module aims to improve this esti-
|
mate by computing residual flow; (3) the interpolation by |
|
synthesis module estimates a new frame by directly fusing |
|
the input information from the boundary keyframes and the |
|
event sequences; finally (4) the attention-based averaging |
|
module aims to optimally combine the warping-based and synthesis-based results. In doing so, Time Lens marries
|
the advantages of warping- and synthesis-based interpola- |
|
tion techniques, allowing us to generate new frames with |
|
color and high textural details while handling non-linear |
|
motion, light changes, and motion blur. The workflow of |
|
our method is shown in Fig. 3a. |
|
All modules of the proposed method use the same back- |
|
bone architecture, which is an hourglass network with skip |
|
connections between the contracting and expanding parts, |
|
similar to [10]. The backbone architecture is described |
|
in more detail in the supplementary materials. Regarding |
|
the learning representation [7] used to encode the event |
|
sequences, all modules use the voxel grid representation. |
|
Specifically, for an event sequence E_{0→end} we compute a voxel grid V_{0→end} following the procedure described
|
in [46]. In the following paragraphs, we analyze each mod- |
|
ule and its scope within the overall framework. |
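As a reference point, here is a minimal sketch of a voxel grid computation in its commonly used form, where each event distributes its polarity bilinearly over the two nearest temporal bins; the exact variant of [46] used here may differ in details such as normalization.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Convert an event array of (t, x, y, p) rows, p in {-1, +1},
    into a (num_bins, height, width) voxel grid with bilinear
    accumulation along the temporal axis."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to the range [0, num_bins - 1].
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    t0 = np.floor(t).astype(int)
    t1 = np.clip(t0 + 1, 0, num_bins - 1)
    w1 = t - t0          # weight of the upper temporal bin
    w0 = 1.0 - w1        # weight of the lower temporal bin
    np.add.at(voxel, (t0, y, x), p * w0)
    np.add.at(voxel, (t1, y, x), p * w1)
    return voxel
```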
|
Interpolation by synthesis, as shown in Fig. 3b, directly regresses a new frame Î^syn given the left I_0 and right I_1 RGB keyframes and the event sequences E_{0→τ} and E_{τ→1}, re-
|
spectively. The merits of this interpolation scheme lie in |
|
its ability to handle changes in lighting, such as water re- |
|
flections in Fig. 6 and a sudden appearance of new objects |
|
in the scene because, unlike warping-based methods, it does
|
not rely on the brightness constancy assumption. Its main |
|
drawback is the distortion of image edges and textures when |
|
event information is noisy or insufficient because of high |
|
contrast thresholds, e.g. triggered by the book in Fig. 6. |
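Conceptually, the synthesis module maps the stacked keyframes and event voxel grids directly to an image. A minimal sketch of this interface is given below; SynthesisInterpolation and its backbone argument are illustrative placeholders, not the authors' released classes.

```python
import torch
import torch.nn as nn

class SynthesisInterpolation(nn.Module):
    """Directly regresses I_syn from the two keyframes and the two event
    voxel grids (a sketch; the real backbone is the hourglass network
    described in the supplementary material)."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # maps stacked channels -> 3-channel image

    def forward(self, frame0, frame1, voxel_0_to_tau, voxel_tau_to_1):
        # Concatenate along the channel dimension: 3 + 3 + B + B channels.
        x = torch.cat([frame0, frame1, voxel_0_to_tau, voxel_tau_to_1], dim=1)
        return self.backbone(x)
```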
|
Warping-based interpolation, shown in Fig. 3d, first estimates the optical flow F_{τ→0} and F_{τ→1} between a latent new frame Î_τ and the boundary keyframes I_0 and I_1 using events E_{τ→0} and E_{τ→1} respectively. We compute E_{τ→0} by reversing the event sequence E_{0→τ}, as shown in Fig. 4. Then our method uses the computed optical flow to warp the boundary keyframes to timestep τ using differentiable interpolation [9], which in turn produces two new frame estimates Î^warp_{0→τ} and Î^warp_{1→τ}.
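The warping step itself can be sketched as differentiable bilinear sampling of a keyframe at locations given by the estimated flow, in the spirit of [9]; the helper below is an illustrative stand-in, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` (B,3,H,W) with `flow` (B,2,H,W), where the flow gives
    the displacement from the target (latent) frame to the source keyframe,
    using differentiable bilinear sampling."""
    b, _, h, w = frame.shape
    # Pixel coordinate grid of the target frame.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow                              # sampling locations
    # Normalize to [-1, 1] as expected by grid_sample, channels ordered (x, y).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)        # (B,H,W,2)
    return F.grid_sample(frame, sample_grid, mode="bilinear", align_corners=True)
```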
|
The major difference of our approach from the tradi- |
|
tional warping-based interpolation methods [20, 10, 21, 43], |
|
is that the latter compute optical flow between keyframes |
|
using the frames themselves and then approximate opti- |
|
cal flow between the latent middle frame and boundary |
|
by using a linear motion assumption. This approach does |
|
not work when motion between frames is non-linear and |
|
keyframes suffer from motion blur. By contrast, our ap- |
|
proach computes the optical flow from the events, and thus |
|
can naturally handle blur and non-linear motion. Although |
|
events are sparse, the resulting flow is sufficiently dense as |
|
shown in Fig. 3d, especially in textured areas with dominant |
|
motion, which is most important for interpolation.
|
Moreover, the warping-based interpolation approach relying on events also works better than the synthesis-based method in scenarios where the event data is noisy or insufficient due to high contrast thresholds, e.g. the book in Fig. 6. On the downside, this method still relies on the brightness constancy assumption for optical flow estimation and thus cannot handle brightness changes and new objects appearing between keyframes, e.g. the water reflections in Fig. 6.

Figure 3: Structure of the proposed method. The overall workflow of the method is shown in Fig. 3a and the individual modules in Figs. 3b–3e: (a) overview of the proposed method, (b) interpolation by synthesis module, (c) attention-based averaging module, (d) warping-based interpolation module, (e) warping refinement module. In the figures we also show the loss function that we use to train each module. We show similar modules in the same color across the figures.

Figure 4: Example of an event sequence reversal.
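A common convention for the reversal illustrated in Fig. 4 is to mirror the event timestamps around the time window and to flip the polarities, since a brightness increase played backwards becomes a decrease; the sketch below assumes this convention and (t, x, y, p) event rows, and may differ from the authors' exact implementation.

```python
import numpy as np

def reverse_event_sequence(events, t_start, t_end):
    """Reverse an event sequence recorded on [t_start, t_end].

    Events are rows of (t, x, y, p) with p in {-1, +1}. Timestamps are
    mirrored around the window and polarities are flipped; the result is
    re-sorted so that it is again in increasing time order.
    """
    reversed_events = events.copy()
    reversed_events[:, 0] = t_end - (events[:, 0] - t_start)  # mirror time
    reversed_events[:, 3] = -events[:, 3]                     # flip polarity
    return reversed_events[np.argsort(reversed_events[:, 0])]
```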
|
Warping refinement module computes refined interpolated frames, Î^refine_{0→τ} and Î^refine_{1→τ}, by estimating residual optical flow, ΔF_{τ→0} and ΔF_{τ→1} respectively, between the warping-based interpolation results, Î^warp_{0→τ} and Î^warp_{1→τ}, and the synthesis result Î^syn. It then proceeds by warping Î^warp_{0→τ} and Î^warp_{1→τ} for a second time using the estimated residual optical flow, as shown in Fig. 3e. The refinement module draws inspiration from the success of optical flow and disparity refinement modules in [8, 27], and also from our observation that the synthesis interpolation results are usually perfectly aligned with the ground-truth new frame. Besides computing residual flow, the warping refinement module also performs inpainting of the occluded areas, by filling them with values from nearby regions.
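A minimal sketch of this refinement step is shown below; refine_net and warp_fn are placeholders (for example, the backward_warp sketch above could serve as warp_fn), and the real module may take additional inputs and handle inpainting explicitly.

```python
import torch
import torch.nn as nn

class WarpingRefinement(nn.Module):
    """Sketch of the refinement step: a network predicts residual optical
    flow between a warped estimate and the synthesis result, and the warped
    estimate is warped a second time with that residual flow."""

    def __init__(self, refine_net: nn.Module, warp_fn):
        super().__init__()
        self.refine_net = refine_net  # maps 6 input channels -> 2-channel residual flow
        self.warp_fn = warp_fn        # e.g. a differentiable backward warp

    def forward(self, i_warp, i_syn):
        residual_flow = self.refine_net(torch.cat([i_warp, i_syn], dim=1))
        return self.warp_fn(i_warp, residual_flow)
```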
|
Finally, the attention averaging module, shown in Fig. 3c, blends, in a pixel-wise manner, the results of synthesis Î^syn and warping-based interpolation Î^refine_{0→τ} and Î^refine_{1→τ} to achieve the final interpolation result Î_τ. This module leverages the complementarity of the warping- and synthesis-based interpolation methods and produces a final result which is better than the results of both methods by 1.73 dB in PSNR, as shown in Tab. 1 and illustrated in Fig. 6.

A similar strategy was used in [21, 10]; however, these works only blended the warping-based interpolation results to fill the occluded regions, while we blend both warping- and synthesis-based results, and thus can also handle light changes. We estimate the blending coefficients using an attention network that takes as input the interpolation results Î^refine_{0→τ}, Î^refine_{1→τ} and Î^syn, the optical flow results F_{τ→0} and F_{τ→1}, and the bi-linear coefficient τ, which depends on the position of the new frame, as a channel with constant value.
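A minimal sketch of such pixel-wise blending is given below; the softmax normalization of the attention output and the exact set of inputs are assumptions for illustration, and attention_net stands in for the hourglass backbone.

```python
import torch
import torch.nn as nn

class AttentionAveraging(nn.Module):
    """Pixel-wise blending of the three interpolation candidates
    (a sketch; `attention_net` is assumed to output 3 blending maps)."""

    def __init__(self, attention_net: nn.Module):
        super().__init__()
        self.attention_net = attention_net

    def forward(self, i_refine_0, i_refine_1, i_syn, flow_to_0, flow_to_1, tau):
        b, _, h, w = i_syn.shape
        # The interpolation position tau enters as a constant-valued channel.
        tau_channel = torch.full((b, 1, h, w), float(tau), device=i_syn.device)
        x = torch.cat([i_refine_0, i_refine_1, i_syn,
                       flow_to_0, flow_to_1, tau_channel], dim=1)
        # Per-pixel weights over the three candidates, summing to one.
        weights = torch.softmax(self.attention_net(x), dim=1)            # (B,3,H,W)
        candidates = torch.stack([i_refine_0, i_refine_1, i_syn], dim=1)  # (B,3,3,H,W)
        return (weights.unsqueeze(2) * candidates).sum(dim=1)
```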
|
2.1. High Speed Events-RGB (HS-ERGB) dataset |
|
Due to the lack of available datasets that combine |
|
synchronized, high-resolution event cameras and standard |
|
RGB cameras, we build a hardware synchronized hybrid |
|
sensor which combines a high-resolution event camera with
|
a high resolution and high-speed color camera. We use this |
|
hybrid sensor to record a new large-scale dataset which we |
|
term the High-Speed Events and RGB (HS-ERGB) dataset |
|
which we use to validate our video frame interpolation ap- |
|
proach. The hybrid camera setup is illustrated in Fig. 5.

Figure 5: Illustration of the dual camera setup. It comprises a Prophesee Gen4 720p monochrome event camera (top) and a FLIR BlackFly S RGB camera (bottom). Both cameras are hardware synchronized with a baseline of 2.5 cm.
|
It features a Prophesee Gen4 (1280×720) event camera (Fig. 5, top) and a FLIR BlackFly S global shutter RGB camera (1440×1080) (Fig. 5, bottom), separated by a baseline of 2.5 cm. Both cameras are hardware synchronized and
|
share a similar field of view (FoV). We provide a detailed |
|
comparison of our setup against the commercially available |
|
DAVIS 346 [4] and the recently introduced setup [40] in the appendix. Compared to both [4] and [40], our setup is able to record events at much higher resolution (1280×720 vs. 240×180 or 346×260) and standard frames at a much higher framerate (225 FPS vs. 40 FPS or 35 FPS) and with a higher dynamic range (71.45 dB vs. 55 dB or 60 dB). Moreover, the standard frames have a higher resolution compared to the DAVIS sensor (1440×1080 vs. 240×180) and
|
provide color. The higher dynamic range and frame rate
|
enable us to more accurately compare event cameras with |
|
standard cameras in highly dynamic scenarios and high dy- |
|
namic range. Both cameras are hardware synchronized and |
|
aligned via rectification and global alignment. For more |
|
synchronization and alignment details see the appendix. |
|
We record data in a variety of conditions, both indoors |
|
and outdoors. Sequences were recorded outdoors with exposure times as low as 100 µs or indoors with exposure
|
times up to 1000 µs. The dataset features frame rates of |
|
160 FPS, which is much higher than previous datasets, en- |
|
abling larger frame skips with ground truth color frames. |
|
The dataset includes highly dynamic close scenes with non- |
|
linear motions and far-away scenes featuring mainly cam- |
|
era ego-motion. For far-away scenes, stereo rectification is |
|
sufficient for good per-pixel alignment. For each sequence, |
|
alignment is performed depending on the depth either by |
|
stereo rectification or using feature-based homography esti- |
|
mation. To this end, we perform standard stereo calibration
|
between RGB images and E2VID [32] reconstructions and |
|
rectify the images and events accordingly. For the dynamic |
|
close scenes, we additionally estimate a global homogra- |
|
phy by matching SIFT features [17] between these two images. Note that for feature-based alignment to work well,
|
the camera must be static and objects of interest should only |
|
move in a fronto-parallel plane at a predetermined depth. |
|
While recording, we made sure to follow these constraints.
|
For a more detailed dataset overview we refer to the sup- |
|
plementary material. |
|
3. Experiments |
|
All experiments in this work are done using the Py- |
|
Torch framework [30]. For training, we use the Adam optimizer [12] with standard settings, batches of size 4, and a learning rate of 10^-4, which we decrease by a factor of 10 every 12 epochs. We train each module for 27 epochs. For training, we use a large dataset with synthetic events generated from the Vimeo90k septuplet dataset [44] using the video-to-events method [6], based on the event simulator from [31].
|
We train the network by adding and training modules |
|
one by one, while freezing the weights of all previously |
|
trained modules. We train modules in the following or- |
|
der: synthesis-based interpolation, warping-based interpo- |
|
lation, warping refinement, and attention averaging mod- |
|
ules. We adopted this training scheme because end-to-end training
|
from scratch does not converge, and fine-tuning of the en- |
|
tire network after pretraining only marginally improved the |
|
results. We supervise our network with perceptual [45] and |
|
L1 losses as shown in Fig. 3b, 3d, 3e and 3c. We fine-tune
|
our network on real data module-by-module in the order |
|
of training. To measure the quality of interpolated images |
|
we use the structural similarity (SSIM) [39] and peak signal-to-noise ratio (PSNR) metrics.
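The staged training described above can be sketched as follows; the wiring of frozen upstream modules into each per-module loss is abstracted into the loss callables, and all names are illustrative rather than taken from the released code.

```python
import torch

def train_module_by_module(modules, losses, dataloader, epochs_per_module=27):
    """Sketch of the staged training: modules are trained one at a time
    while all previously trained modules stay frozen. `modules` is an
    ordered dict of name -> nn.Module; `losses` maps the same names to
    per-module loss callables (perceptual + L1 in the paper)."""
    trained = []
    for name, module in modules.items():
        # Freeze everything trained so far.
        for prev in trained:
            for p in prev.parameters():
                p.requires_grad_(False)
        optimizer = torch.optim.Adam(module.parameters(), lr=1e-4)
        # Decrease the learning rate by a factor of 10 every 12 epochs.
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=12, gamma=0.1)
        for epoch in range(epochs_per_module):
            for batch in dataloader:
                optimizer.zero_grad()
                loss = losses[name](module, batch)
                loss.backward()
                optimizer.step()
            scheduler.step()
        trained.append(module)
```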
|
Note that the computational complexity of our interpolation method is competitive: on our machine, for an image resolution of 640×480, a single interpolation on the GPU
|
takes 878 ms for DAIN [3], 404 ms for BMBC [29], 138 ms |
|
for ours, 84 ms for RRIN [13], 73 ms for Super SloMo [10] |
|
and 33 ms for LEDVDI [15] methods. |
|
3.1. Ablation study |
|
To study the contribution of every module of the pro- |
|
posed method to the final interpolation, we investigate the |
|
interpolation quality after each module in Fig. 3a, and re- |
|
port their results in Tab. 1. The table shows two notable re- |
|
sults. First, it shows that adding a warping refinement block |
|
after the simple warping block significantly improves the |
|
interpolation result. Second, it shows that by attention aver- |
|
aging synthesis-based and warping-based results, the inter- |
|
polations are improved by 1.7 dB in terms of PSNR. This |
|
is because the attention averaging module combines the ad- |
|
vantages of both methods. To highlight this further, we il- |
|
lustrate example reconstructions from these two modules in |
|
Fig. 6. As can be seen, the warping-based module excels at |
|
reconstructing textures in non-occluded areas (fourth col- |
|
umn), while the synthesis module performs better in regions with difficult lighting conditions (fifth column). The atten-
|
tion module successfully combines the best parts of both |
|
modules (first column). |
|
Figure 6: Complementarity of warping- and synthesis- |
|
based interpolation. |
|
Table 1: Quality of interpolation after each module on the Vimeo90k (denoising) validation set. For SSIM and PSNR we show the mean and one standard deviation. The best result
|
is highlighted. |
|
Module PSNR SSIM |
|
Warping interpolation 26.68 ± 3.68 0.926 ± 0.041
Interpolation by synthesis 34.10 ± 3.98 0.964 ± 0.029
Warping refinement 33.02 ± 3.76 0.963 ± 0.026
Attention averaging (ours) 35.83 ± 3.70 0.976 ± 0.019
|
3.2. Benchmarking |
|
Synthetic datasets. We compare the proposed |
|
method, which we call Time Lens, to four state-of-the-art frame-based interpolation methods, DAIN [3], RRIN [13], BMBC [29] and SuperSloMo [10], the event-based video reconstruction method E2VID [33], and two event-plus-frame-based methods, EDI [26] and LEDVDI [15], on popular video interpolation benchmark datasets such as Vimeo90k (interpolation) [44] and Middlebury [2]. During the evaluation, we take the original video sequence, skip 1 or 3 frames respectively, reconstruct them using the interpolation method, and compare to the ground-truth skipped frames. For event-based methods, we simulate events using [6] from
|
the skipped frames. We do not fine-tune the methods for |
|
each dataset but simply use pre-trained models provided by |
|
the authors. We summarise the results in Tab. 2. |
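The frame-skip protocol can be sketched as below, assuming scikit-image metrics and an interpolate_fn callable that hides whether a method also consumes the (simulated) inter-frame events; this is an illustrative harness, not the exact evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame_skip(frames, interpolate_fn, num_skips=1):
    """Keep every (num_skips + 1)-th frame of the original video,
    reconstruct the skipped frames with interpolate_fn(left, right, tau),
    and compare against the ground-truth skipped frames (uint8 images)."""
    psnrs, ssims = [], []
    step = num_skips + 1
    for i in range(0, len(frames) - step, step):
        left, right = frames[i], frames[i + step]
        for k in range(1, step):
            tau = k / step
            prediction = interpolate_fn(left, right, tau)
            target = frames[i + k]
            psnrs.append(peak_signal_noise_ratio(target, prediction, data_range=255))
            ssims.append(structural_similarity(target, prediction,
                                               channel_axis=-1, data_range=255))
    return np.mean(psnrs), np.std(psnrs), np.mean(ssims), np.std(ssims)
```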
|
As we can see, the proposed method outperforms the other methods across datasets in terms of average PSNR (up to
|
8.82 dB improvement) and SSIM scores (up to 0.192 im- |
|
provement). As before these improvements stem from the |
|
use of auxiliary events during the prediction stage which |
|
allow our method to perform accurate frame interpolation, |
|
even for very large non-linear motions. Also, it has signif-
|
icantly lower standard deviation of the PSNR (2.53 dB vs. |
|
4.96 dB) and SSIM (0.025 vs. 0.112) scores, which sug- |
|
gests more consistent performance across examples. Also, we can see that the PSNR and SSIM scores of the proposed method degrade to a much lesser degree than the scores of the
|
frame-based methods (up to 1.6 dB vs. up to 5.4 dB), as we |
|
skip and attempt to reconstruct more frames. This suggests |
|
that our method is more robust to non-linear motion than |
|
frame-based methods. |
|
High Quality Frames (HQF) dataset. We also evalu- |
|
ate our method on the High Quality Frames (HQF) dataset [35], collected using a DAVIS240 event camera, which consists of video sequences without blur or saturation. During eval-
|
uation, we use the same methodology as for the synthetic |
|
datasets, with the only difference being that in this case we use
|
real events. In the evaluation, we consider two versions of |
|
our method: Time Lens-syn , which we trained only on syn- |
|
thetic data, and Time Lens-real , which we trained on syn- |
|
thetic data and fine-tuned on real event data from our own |
|
DAVIS346 camera. We summarise our results in Tab. 3.
|
The results on the dataset are consistent with the re- |
|
sults on the synthetic datasets: the proposed method outper- |
|
forms state-of-the-art frame-based methods and produces |
|
more consistent results over examples. As we increase the |
|
number of frames that we skip, the performance gap be- |
|
tween the proposed method and the other methods widens |
|
from 2.53 dB to 4.25 dB; also, the results of the other methods become less consistent, which is reflected in a higher deviation of the PSNR and SSIM scores. For a more detailed dis-
|
cussion about the impact of frame skip length and perfor- |
|
mance, see the appendix. Interestingly, fine-tuning of the |
|
proposed method on real event data, captured by another |
|
camera, greatly boosts the performance of our method by |
|
an average of 1.94 dB. This suggests the existence of a large domain gap between synthetic and real event data.
|
High Speed Event-RGB dataset. Finally, we evaluate |
|
our method on our dataset introduced in § 2.1. As is clear from
|
Tab. 4, our method, again significantly outperforms frame- |
|
based and frame-plus-event-based competitors. In Fig. 7 we |
|
show several examples from the HS-ERGB test set which |
|
show that, compared to competing frame-based methods,
|
our method can interpolate frames in the case of nonlin- |
|
ear (“Umbrella” sequence) and non-rigid motion (“Water |
|
Bomb”), and also handle illumination changes (“Fountain |
|
Schaffhauserplatz” and “Fountain Bellevue”). |
|
4. Conclusion |
|
In this work, we introduce Time Lens, a method that |
|
can show us what happens in the blind-time between |
|
two intensity frames using high temporal resolution in- |
|
formation from an event camera. It works by leveraging |
|
the advantages of synthesis-based approaches, which can |
|
handle changing illumination conditions and non-rigid |
|
motions, and of a flow-based approach relying on motion
|
estimation from events. It is therefore robust to motion blur |
|
and non-linear motions.

Table 2: Results on standard video interpolation benchmarks such as Middlebury [2], Vimeo90k (interpolation) [44] and GoPro [19]. In all cases, we use a test subset of the datasets. To compute SSIM and PSNR, we downsample the original video and reconstruct the skipped frames. For Middlebury and Vimeo90k (interpolation), we skip 1 and 3 frames, and for GoPro we skip 7 and 15 frames due to its high frame rate of 240 FPS. Uses frames and Uses events indicate if a method uses frames and events for interpolation. For event-based methods we generate events from the skipped frames using the event simulator [6]. Color indicates if a method works with color frames. For SSIM and PSNR we show the mean and one standard deviation. Note that we cannot produce results with 3 skips on the Vimeo90k dataset, since it consists of frame triplets. We show the best result in each column in bold and the second-best using underlined text.
|
Method Uses frames Uses events Color PSNR SSIM PSNR SSIM

Middlebury [2] 1 frame skip 3 frame skips
DAIN [3] ✓ ✗ ✓ 30.87±5.38 0.899±0.110 26.67±4.53 0.838±0.130
SuperSloMo [10] ✓ ✗ ✓ 29.75±5.35 0.880±0.112 26.43±5.30 0.823±0.141
RRIN [13] ✓ ✗ ✓ 31.08±5.55 0.896±0.112 27.18±5.57 0.837±0.142
BMBC [29] ✓ ✗ ✓ 30.83±6.01 0.897±0.111 26.86±5.82 0.834±0.144
E2VID [32] ✗ ✓ ✗ 11.26±2.82 0.427±0.184 26.86±5.82 0.834±0.144
EDI [26] ✓ ✓ ✗ 19.72±2.95 0.725±0.155 18.44±2.52 0.669±0.173
Time Lens (ours) ✓ ✓ ✓ 33.27±3.11 0.929±0.027 32.13±2.81 0.908±0.039

Vimeo90k (interpolation) [44] 1 frame skip 3 frame skips
DAIN [3] ✓ ✗ ✓ 34.20±4.43 0.962±0.023 - -
SuperSloMo [10] ✓ ✗ ✓ 32.93±4.23 0.948±0.035 - -
RRIN [13] ✓ ✗ ✓ 34.72±4.40 0.962±0.029 - -
BMBC [29] ✓ ✗ ✓ 34.56±4.40 0.962±0.024 - -
E2VID [32] ✗ ✓ ✗ 10.08±2.89 0.395±0.141 - -
EDI [26] ✓ ✓ ✗ 20.74±3.31 0.748±0.140 - -
Time Lens (ours) ✓ ✓ ✓ 36.31±3.11 0.962±0.024 - -

GoPro [19] 7 frame skips 15 frame skips
DAIN [3] ✓ ✗ ✓ 28.81±4.20 0.876±0.117 24.39±4.69 0.736±0.173
SuperSloMo [10] ✓ ✗ ✓ 28.98±4.30 0.875±0.118 24.38±4.78 0.747±0.177
RRIN [13] ✓ ✗ ✓ 28.96±4.38 0.876±0.119 24.32±4.80 0.749±0.175
BMBC [29] ✓ ✗ ✓ 29.08±4.58 0.875±0.120 23.68±4.69 0.736±0.174
E2VID [32] ✗ ✓ ✗ 9.74±2.11 0.549±0.094 9.75±2.11 0.549±0.094
EDI [26] ✓ ✓ ✗ 18.79±2.03 0.670±0.144 17.45±2.23 0.603±0.149
Time Lens (ours) ✓ ✓ ✓ 34.81±1.63 0.959±0.012 33.21±2.00 0.942±0.023
|
Table 3: Benchmarking on the High Quality Frames (HQF) DAVIS240 dataset. We do not fine-tune our method or the other methods and use the models provided by the authors. We evaluate the methods on all sequences of the dataset. To compute SSIM and PSNR, we downsample the original video by skipping 1 and 3 frames, reconstruct these frames and compare them to the skipped frames. In the Uses frames and Uses events columns we specify if a method uses frames and events for interpolation. In the Color column, we indicate if a method works with color frames. In the table, we present two versions of our method: Time Lens-syn, which we trained only on synthetic data, and Time Lens-real, which we trained on synthetic data and fine-tuned on real event data from our own DAVIS346 camera. For SSIM and PSNR, we show the mean and one standard deviation. We show the best result in each column in bold and the second-best using underlined text.
|
Method Uses frames Uses events Color PSNR SSIM PSNR SSIM

1 frame skip 3 frame skips
DAIN [3] ✓ ✗ ✓ 29.82±6.91 0.875±0.124 26.10±7.52 0.782±0.185
SuperSloMo [10] ✓ ✗ ✓ 28.76±6.13 0.861±0.132 25.54±7.13 0.761±0.204
RRIN [13] ✓ ✗ ✓ 29.76±7.15 0.874±0.132 26.11±7.84 0.778±0.200
BMBC [29] ✓ ✗ ✓ 29.96±7.00 0.875±0.126 26.32±7.78 0.781±0.193
E2VID [32] ✗ ✓ ✗ 6.70±2.19 0.315±0.124 6.70±2.20 0.315±0.124
EDI [26] ✓ ✓ ✗ 18.7±6.53 0.574±0.244 18.8±6.88 0.579±0.274
Time Lens-syn (ours) ✓ ✓ ✓ 30.57±5.01 0.903±0.067 28.98±5.09 0.873±0.086
Time Lens-real (ours) ✓ ✓ ✓ 32.49±4.60 0.927±0.048 30.57±5.08 0.900±0.069

Figure 7: Qualitative results for the proposed method and its closest competitor, DAIN [3], on our Dual Event and Color Camera
|
Dataset test sequences: “Fountain Schaffhauserplatz” (top-left), “Fountain Bellevue” (bottom-left), “Water bomb” (top-right)
|
and “Umbrella” (bottom-right). For each sequence, the figure shows interpolation results on the left (the animation can be |
|
viewed in Acrobat Reader) and close-up interpolation results on the right. The close-ups show the input left and right frames and the intermediate interpolated frames.
|
Table 4: Benchmarking on the test set of the High Speed Event and RGB camera (HS-ERGB) dataset. We report PSNR and SSIM for all sequences by skipping 5 and 7 frames respectively, and reconstructing the missing frames with each method. By design, LEDVDI [15] can interpolate only 5 frames. Uses frames and Uses events indicate if a method uses frames or events respectively. Color indicates whether a method works with color frames. For SSIM and PSNR the scores are averaged over the sequences. Best results are shown in bold and the second best are underlined.
|
Method Uses frames Uses events Color PSNR SSIM PSNR SSIM

Far-away sequences 5 frame skips 7 frame skips
DAIN [3] ✓ ✗ ✓ 27.92±1.55 0.780±0.141 27.13±1.75 0.748±0.151
SuperSloMo [10] ✓ ✗ ✓ 25.66±6.24 0.727±0.221 24.16±5.20 0.692±0.199
RRIN [13] ✓ ✗ ✓ 25.26±5.81 0.738±0.196 23.73±4.74 0.703±0.170
BMBC [29] ✓ ✗ ✓ 25.62±6.13 0.742±0.202 24.13±4.99 0.710±0.175
LEDVDI [15] ✓ ✓ ✗ 12.50±1.74 0.393±0.174 n/a n/a
Time Lens (ours) ✓ ✓ ✓ 33.13±2.10 0.877±0.092 32.31±2.27 0.869±0.110

Close planar sequences 5 frame skips 7 frame skips
DAIN [3] ✓ ✗ ✓ 29.03±4.47 0.807±0.093 28.50±4.54 0.801±0.096
SuperSloMo [10] ✓ ✗ ✓ 28.35±4.26 0.788±0.098 27.27±4.26 0.775±0.099
RRIN [13] ✓ ✗ ✓ 28.69±4.17 0.813±0.083 27.46±4.24 0.800±0.084
BMBC [29] ✓ ✗ ✓ 29.22±4.45 0.820±0.085 27.99±4.55 0.808±0.084
LEDVDI [15] ✓ ✓ ✗ 19.46±4.09 0.602±0.164 n/a n/a
Time Lens (ours) ✓ ✓ ✓ 32.19±4.19 0.839±0.090 31.68±4.18 0.835±0.091
|
The proposed method achieves an up to 5.21 dB improvement over state-of-the-art
|
frame-based and event-plus-frames-based methods on both |
|
synthetic and real datasets. In addition, we release the |
|
first High Speed Event and RGB (HS-ERGB) dataset, |
|
which aims at pushing the limits of existing interpola- |
|
tion approaches by establishing a new benchmark for both |
|
event- and frame-based video frame interpolation methods.

5. Acknowledgement
|
This work was supported by Huawei Zurich Research |
|
Center; by the National Centre of Competence in Re- |
|
search (NCCR) Robotics through the Swiss National Sci- |
|
ence Foundation (SNSF); the European Research Coun- |
|
cil (ERC) under the European Union’s Horizon 2020 re- |
|
search and innovation programme (Grant agreement No. |
|
864042).

6. Video Demonstration
|
This PDF is accompanied by a video showing the advan-
|
tages of the proposed method compared to state-of-the-art |
|
frame-based methods published over recent months, as well |
|
as potential practical applications of the method. |
|
7. Backbone network architecture |
|
Figure 8: Backbone hourglass network that we use in all |
|
modules of the proposed method. |
|
For all modules in the proposed method, we use the same |
|
backbone architecture which is an hourglass network with |
|
shortcut connections between the contracting and the ex- |
|
panding parts, similar to [10], which we show in Fig. 8.
|
8. Additional Ablation Experiments |
|
|
Figure 9: Percentage of pixels each interpolation method |
|
contributes on average to the final interpolation result for |
|
the Vimeo90k (denoising) validation set. Note that all meth-
|
ods contribute almost equally to the final result and thus are |
|
equally important. |
|
Table 5: Importance of inter-frame events on the Middlebury
|
test set. To compute SSIM and PSNR, we skip one frame |
|
of the original video, reconstruct it and compare to the |
|
skipped frame. One version of the proposed method has |
|
access to the events synthesized from the skipped frame |
|
and another version does not have inter-frame information. |
|
We also show the performance of the frame-based SuperSloMo method [10], which is used in the event simulator, for reference.
|
We highlight the best performing method. |
|
Method PSNR SSIM |
|
With inter-frame events (ours) 33.27 ± 3.11 0.929 ± 0.027
Without inter-frame events 29.03 ± 4.85 0.866 ± 0.111
SuperSloMo [10] 29.75 ± 5.35 0.880 ± 0.112
|
Importance of inter-frame events . To study the im- |
|
portance of additional information provided by events, we |
|
skip every second frame of the original video and attempt |
|
to reconstruct it using two versions of the proposed method. One version has access to the events synthesized from the
|
skipped frame and another version does not have inter- |
|
frame information. As we can see from Tab. 5, the former significantly outperforms the latter by a margin of 4.24 dB. Indeed, this large improvement can be explained
|
by the fact that the method with inter-frame events has im- |
|
plicit access to the ground truth image it tries to recon- |
|
struct, albeit in the form of asynchronous events. This high- |
|
lights that our network is able to efficiently decode the asyn- |
|
chronous intermediate events to recover the missing frame. |
|
Moreover, this shows that the addition of events has a sig- |
|
nificant impact on the final task performance, proving the |
|
usefulness of an event camera as an auxiliary sensor. |
|
Importance of each interpolation method. To study |
|
the relative importance of the synthesis-based and warping-based interpolation methods, we compute the percentage of pixels that each method contributes on average to the final interpo-
|
lation result for the Vimeo90k (denoising) validation dataset |
|
and show the result in Fig. 9. As it is clear from the figure, |
|
all the methods contribute almost equally to the final result |
|
and thus are all equally important. |
|
|
Figure 10: “Rope plot” showing interpolation quality as a |
|
function of the distance from the input boundary frames on the High Quality Frames dataset. We skip all but every 7th frame and restore the skipped frames using events and the remaining frames. For each skip position, we compute the average PSNR of the restored frame over the entire dataset. We do not fine-tune the proposed
|
and competing methods on the HQF dataset and simply use |
|
pre-trained models provided by the authors. Note that the proposed method has the highest PSNR. Also, its PSNR decreases much more slowly than the PSNR of the other methods as we move away from the input boundary frames.
|
“Rope” plot. To study how the interpolation quality de- |
|
creases with the distance to the input frames, we skip all but |
|
every 7th frame in the input videos from the High Quality |
|
Frames dataset, restore them using our method and compare |
|
to the original frames. For each skipped frame position, we |
|
compute the average PSNR of the restored frame over the entire dataset and show the results in Fig. 10. As is clear from the figure, the proposed method has the highest PSNR. Also, its PSNR decreases much more slowly than the PSNR of the competing methods as we move away from the boundary frames.
|
9. Additional Benchmarking Results |
|
To make sure that the fine-tuning does not affect our general conclusions, we fine-tuned the proposed method and the RRIN method [13] on a subset of the High Quality Frames dataset and tested them on the remaining part (“poster_pillar_1”, “slow_and_fast_desk”, “bike_bay_hdr” and “desk” sequences). We chose the RRIN method for this experiment because it showed good performance across synthetic and real datasets and it is fairly simple. As is clear from Tab. 6, after the fine-tuning, the performance of the proposed method remained very strong compared to the RRIN method.
|
10. High Speed Events and RGB Dataset |
|
In this section we describe the sequences in the High- |
|
Speed Event and RGB (HS-ERGB) dataset. The commer- |
|
cially available DAVIS 346 [4] already allows the simul-
|
taneous recording of events and grayscale frames, which |
|
are temporally and spatially synchronized. However, it has |
|
some shortcomings, such as the relatively low resolution of only 346×260 pixels for both frames and events. This is far
|
below the resolution of typical frame based consumer cam- |
|
eras. Additionally, the DAVIS 346 has a very limited dynamic range of 55 dB and a maximum frame rate of 40 FPS.
|
Those properties render it not ideal for many event-based methods, which aim to outperform traditional frame-based
|
cameras in certain applications. The setup described in [40] |
|
shows improvements in the resolution of frames and dy- |
|
namic range, but has a reduced event resolution instead. The |
|
lack of publicly available high-resolution event and color frame datasets and off-the-shelf hardware motivated the development of our dual camera setup. It features high reso-
|
lution, high frame rate, high dynamic range color frames |
|
combined with high resolution events. A comparison of |
|
our setup with the DAVIS 346 [4] and the setup with beam splitter in [40] is shown in Tab. 7. With this new setup we col-
|
lect new High Speed Events and RGB (HS-ERGB) Dataset |
|
that we summarize in Tab. 8. We show several fragments |
|
from the dataset in Fig. 12. In the following paragraphs we |
|
describe temporal synchronization and spatial alignment of |
|
frame and event data that we performed for our dataset. |
|
Synchronization In our setup, two cameras are hard- |
|
ware synchronized through the use of external triggers. |
|
Each time the standard camera starts and ends exposure, a |
|
trigger is sent to the event camera which records an exter- |
|
nal trigger event with precise timestamp information. This |
|
information allows us to assign accurate timestamps to the |
|
standard frames, as well as to group events during exposure or between consecutive frames.
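Assuming the triggers alternate between exposure start and end, the bookkeeping can be sketched as follows; this is illustrative, not the released tooling.

```python
import numpy as np

def assign_frame_timestamps_and_group_events(trigger_times, event_times):
    """Sketch of the hardware synchronization logic: exposure start/end
    triggers recorded by the event camera (assumed to alternate
    start, end, start, ...) give each standard frame its timestamps, and
    events are grouped into the blind time between consecutive exposures.
    `event_times` is a 1-D numpy array of event timestamps."""
    starts, ends = trigger_times[0::2], trigger_times[1::2]
    frames = [{"exposure_start": s, "exposure_end": e} for s, e in zip(starts, ends)]
    # Events between the end of one exposure and the start of the next.
    groups = []
    for prev_end, next_start in zip(ends[:-1], starts[1:]):
        mask = (event_times >= prev_end) & (event_times < next_start)
        groups.append(np.flatnonzero(mask))
    return frames, groups
```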
|
Alignment In our setup, the event and RGB cameras are arranged in a stereo configuration; therefore, in addition to temporal alignment, the event and frame data require spatial alignment. We perform the alignment in three steps: (i) stereo calibration, (ii) rectification, and (iii) feature-based global alignment.
|
We first calibrate the cameras using a standard checker- |
|
board pattern. The recorded asynchronous events are con- |
|
verted to temporally aligned video reconstructions using |
|
E2VID [32, 33]. Finally, we find the intrinsics and extrinsics by applying the stereo calibration tool Kalibr [24] to the
|
video reconstructions and the standard frames recorded by |
|
the color camera. We then use the found intrinsics and ex- |
|
trinsics to rectify the events and frames. |
|
Due to the small baseline and similar fields of view |
|
(FoV), stereo rectification is usually sufficient to align the |
|
output of both sensors for scenes with a large average depth |
|
(> 40 m). This is illustrated in Fig. 11 (a).
|
For close scenes, however, events and frames are mis- |
|
aligned (Fig. 11 (b)). For this reason, we perform the sec-
|
ond step of global alignment using a homography which |
|
we estimate by matching SIFT features [17] extracted on |
|
the standard frames and video reconstructions. The homog- |
|
raphy estimation also utilizes RANSAC to eliminate false |
|
matches. When the cameras are static, and the objects of |
|
interest move within a plane, this yields accurate alignment |
|
between the two sensors (Fig. 11 (c)).
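Assuming OpenCV, the feature-based global alignment step can be sketched as below; thresholds such as the ratio test and the RANSAC reprojection error are illustrative choices rather than the values used for the dataset.

```python
import cv2
import numpy as np

def estimate_global_alignment(rgb_gray, e2vid_reconstruction):
    """Sketch of the feature-based global alignment: match SIFT keypoints
    between the (rectified) standard frame and the E2VID reconstruction,
    then estimate a homography with RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(rgb_gray, None)
    kp2, des2 = sift.detectAndCompute(e2vid_reconstruction, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    homography, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return homography
```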
|
References |
|
[1] Enhancing and experiencing spacetime resolution with |
|
videos and stills. In ICCP , pages 1–9. IEEE, 2009. 2 |
|
[2] Simon Baker, Daniel Scharstein, JP Lewis, Stefan Roth, |
|
Michael J Black, and Richard Szeliski. A database and evalu- |
|
ation methodology for optical flow. IJCV , 92(1):1–31, 2011. |
|
6, 7 |
|
[3] Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, |
|
Zhiyong Gao, and Ming-Hsuan Yang. Depth-aware video |
|
frame interpolation. In CVPR , pages 3703–3712, 2019. 1, 5, |
|
6, 7, 8 |
|
[4] Christian Brandli, Raphael Berner, Minhao Yang, Shih-Chii |
|
Liu, and Tobi Delbruck. A 240×180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. JSSC,
|
49(10):2333–2341, 2014. 2, 5, 10, 11 |
|
[5] Zhixiang Chi, Rasoul Mohammadi Nasiri, Zheng Liu, Juwei |
|
Lu, Jin Tang, and Konstantinos N Plataniotis. All at once: |
|
Temporally adaptive multi-frame interpolation with ad- |
|
vanced motion modeling. arXiv preprint arXiv:2007.11762 , |
|
2020. 2 |
|
[6] Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carri ´o, and |
|
Davide Scaramuzza. Video to events: Recycling video |
|
datasets for event cameras. In CVPR , June 2020. 5, 6, 7 |
|
[7] Daniel Gehrig, Antonio Loquercio, Konstantinos G. Derpa- |
|
nis, and Davide Scaramuzza. End-to-end learning of repre- |
|
sentations for asynchronous event-based data. In Int. Conf. |
|
Comput. Vis. (ICCV), 2019. 3

Table 6: Results on High Quality Frames [35] with fine-tuning. Due to time limitations, we only fine-tuned the proposed method and the RRIN [13] method, which performed well across synthetic and real datasets. For evaluation, we used the “poster_pillar_1”, “slow_and_fast_desk”, “bike_bay_hdr” and “desk” sequences of the set; the other sequences we used for the fine-tuning. For SSIM and PSNR, we show the mean and one standard deviation across the frames of all sequences.
|
Method 1 skip 3 skips
PSNR SSIM PSNR SSIM
RRIN [13] 28.62 ± 5.51 0.839 ± 0.132 25.36 ± 5.70 0.750 ± 0.173
Time Lens (Ours) 33.42 ± 3.18 0.934 ± 0.041 32.27 ± 3.44 0.917 ± 0.054
|
Table 7: Comparison of our HS-ERGB dataset against the publicly available High Quality Frames (HQF) dataset, acquired by a DAVIS 346 [4], and the Guided Event Filtering (GEF) dataset, acquired by a setup with a DAVIS240 and an RGB camera mounted with a beam splitter [40]. Note that, in contrast to the previous datasets, the proposed dataset has a high resolution of event data and a high frame rate. Also, it is the first dataset acquired by a dual system with event and frame sensors arranged in a stereo configuration.
|
Frames Events
FPS Dynamic Range [dB] Resolution Color Dynamic Range [dB] Resolution Sync. Aligned
DAVIS 346 [4] 40 55 346×260 ✗ 120 346×260 ✓ ✓
GEF [40] 35 60 2480×2048 ✓ 120 240×180 ✓ ✓
HS-ERGB (Ours) 226 71.45 1440×1080 ✓ 120 720×1280 ✓ ✓
|
[8] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, |
|
Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolu- |
|
tion of optical flow estimation with deep networks. In CVPR , |
|
pages 2462–2470, 2017. 2, 4 |
|
[9] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. |
|
Spatial transformer networks. In NIPS , pages 2017–2025, |
|
2015. 2, 3 |
|
[10] Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan |
|
Yang, Erik Learned-Miller, and Jan Kautz. Super slomo: |
|
High quality estimation of multiple intermediate frames for |
|
video interpolation. In CVPR , pages 9000–9008, 2018. 2, 3, |
|
4, 5, 6, 7, 8, 9 |
|
[11] Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng |
|
Lv, and Yebin Liu. Learning event-based motion deblurring. |
|
InCVPR , pages 3320–3329, 2020. 2 |
|
[12] Diederik P. Kingma and Jimmy L. Ba. Adam: A method for |
|
stochastic optimization. Int. Conf. Learn. Representations |
|
(ICLR) , 2015. 5 |
|
[13] Haopeng Li, Yuan Yuan, and Qi Wang. Video frame interpo- |
|
lation via residue refinement. In ICASSP 2020 , pages 2613– |
|
2617. IEEE, 2020. 5, 6, 7, 8, 10, 11 |
|
[14] Patrick Lichtsteiner, Christoph Posch, and Tobi Delbruck. A |
|
128×128 120 dB 15 µs latency asynchronous temporal con-
|
trast vision sensor. IEEE J. Solid-State Circuits , 43(2):566– |
|
576, 2008. 2 |
|
[15] Songnan Lin, Jiawei Zhang, Jinshan Pan, Zhe Jiang, |
|
Dongqing Zou, Yongtian Wang, Jing Chen, and Jimmy Ren. |
|
Learning event-driven video deblurring and interpolation. |
|
ECCV , 2020. 5, 6, 8 |
|
[16] Ziwei Liu, Raymond A Yeh, Xiaoou Tang, Yiming Liu, and |
|
Aseem Agarwala. Video frame synthesis using deep voxel |
|
flow. In ICCV , pages 4463–4471, 2017. 2 |
|
[17] David G. Lowe. Distinctive image features from scale- |
|
invariant keypoints. Int. J. Comput. Vis. , 60(2):91–110, Nov. |
|
2004. 5, 10

[18] Simone Meyer, Abdelaziz Djelouah, Brian McWilliams,
|
Alexander Sorkine-Hornung, Markus Gross, and Christo- |
|
pher Schroers. Phasenet for video frame interpolation. In |
|
CVPR , 2018. 2 |
|
[19] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep |
|
multi-scale convolutional neural network for dynamic scene |
|
deblurring. In CVPR , July 2017. 7 |
|
[20] Simon Niklaus and Feng Liu. Context-aware synthesis for |
|
video frame interpolation. In CVPR , pages 1701–1710, |
|
2018. 2, 3 |
|
[21] Simon Niklaus and Feng Liu. Softmax splatting for video |
|
frame interpolation. In CVPR , pages 5437–5446, 2020. 2, 3, |
|
4 |
|
[22] Simon Niklaus, Long Mai, and Feng Liu. Video frame inter- |
|
polation via adaptive convolution. In CVPR , 2017. 2 |
|
[23] Simon Niklaus, Long Mai, and Feng Liu. Video frame inter- |
|
polation via adaptive separable convolution. In ICCV , 2017. |
|
2 |
|
[24] L. Oth, P. Furgale, L. Kneip, and R. Siegwart. Rolling shutter |
|
camera calibration. In CVPR , 2013. 10 |
|
[25] Avinash Paliwal and Nima Khademi Kalantari. Deep slow |
|
motion video reconstruction with hybrid imaging system. |
|
PAMI , 2020. 2 |
|
[26] Liyuan Pan, Cedric Scheerlinck, Xin Yu, Richard Hartley, |
|
Miaomiao Liu, and Yuchao Dai. Bringing a blurry frame |
|
alive at high frame-rate with an event camera. In CVPR , |
|
pages 6820–6829, 2019. 2, 6, 7 |
|
[27] Jiahao Pang, Wenxiu Sun, JS Ren, Chengxi Yang, and Qiong |
|
Yan. Cascade Residual Learning: A Two-stage Convolu- |
|
tional Neural Network for Stereo Matching. In ICCV , pages |
|
887–895, 2017. 4 |
|
[28] Federico Paredes-Vall ´es and Guido CHE de Croon. Back to |
|
event basics: Self-supervised learning of image reconstruc- |
|
tion for event cameras via photometric constancy. CoRR , |
|
2020. 2

(a) far away scenes (b) misaligned close scenes (c) after global alignment
|
Figure 11: Alignment of standard frames with events. Aggregated events (blue positive, red negative) are overlain with the |
|
standard frame. For scenes with sufficient depth (more than 40 m), stereo rectification of both outputs yields accurate per-pixel
|
alignment (a). However, for close scenes (b) events and frames are misaligned. In the absence of camera motion and motion |
|
in a plane, the views can be aligned with a global homography (c). |
|
[29] Junheum Park, Keunsoo Ko, Chul Lee, and Chang-Su Kim. |
|
Bmbc: Bilateral motion estimation with bilateral cost vol- |
|
ume for video interpolation. ECCV , 2020. 1, 2, 5, 6, 7, 8 |
|
[30] PyTorch web site. http://pytorch.org/.
|
Accessed: 08 March 2019. 5 |
|
[31] Henri Rebecq, Daniel Gehrig, and Davide Scaramuzza. |
|
ESIM: an open event camera simulator. In Conf. on Robotics |
|
Learning (CoRL) , 2018. 5 |
|
[32] Henri Rebecq, Ren ´e Ranftl, Vladlen Koltun, and Davide |
|
Scaramuzza. Events-to-video: Bringing modern computer |
|
vision to event cameras. In CVPR , pages 3857–3866, 2019. |
|
2, 5, 7, 10 |
|
[33] Henri Rebecq, Ren ´e Ranftl, Vladlen Koltun, and Davide |
|
Scaramuzza. High speed and high dynamic range video with |
|
an event camera. TPAMI , 2019. 2, 6, 10 |
|
[34] Cedric Scheerlinck, Henri Rebecq, Daniel Gehrig, Nick |
|
Barnes, Robert Mahony, and Davide Scaramuzza. Fast im- |
|
age reconstruction with an event camera. In WACV , pages |
|
156–163, 2020. 2 |
|
[35] Timo Stoffregen, Cedric Scheerlinck, Davide Scaramuzza, |
|
Tom Drummond, Nick Barnes, Lindsay Kleeman, and |
|
Robert Mahony. Reducing the sim-to-real gap for event cam- |
|
eras. In ECCV , 2020. 6, 11 |
|
[36] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. |
|
Pwc-net: Cnns for optical flow using pyramid, warping, and |
|
cost volume. In CVPR , pages 8934–8943, 2018. 2 |
|
[37] Bishan Wang, Jingwei He, Lei Yu, Gui-Song Xia, and Wen |
|
Yang. Event enhanced high-quality image recovery. ECCV , |
|
2020. 2 |
|
[38] Lin Wang, Yo-Sung Ho, Kuk-Jin Yoon, et al. Event- |
|
based high dynamic range image and very high frame rate |
|
video generation using conditional generative adversarial |
|
networks. In CVPR , pages 10081–10090, 2019. 2 |
|
[39] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Si- |
|
moncelli. Image quality assessment: from error visibility tostructural similarity. IEEE transactions on image processing , |
|
13(4):600–612, 2004. 5 |
|
[40] Zihao Wang, Peiqi Duan, Oliver Cossairt, Aggelos Kat- |
|
saggelos, Tiejun Huang, and Boxin Shi. Joint filtering of in- |
|
tensity images and neuromorphic events for high-resolution |
|
noise-robust imaging. In CVPR , 2020. 5, 10, 11 |
|
[41] Zihao W Wang, Weixin Jiang, Kuan He, Boxin Shi, Aggelos |
|
Katsaggelos, and Oliver Cossairt. Event-driven video frame |
|
synthesis. In ICCV Workshops , pages 0–0, 2019. 2 |
|
[42] Chao-Yuan Wu, Nayan Singhal, and Philipp Krahenbuhl. |
|
Video compression through image interpolation. In ECCV , |
|
pages 416–431, 2018. 2 |
|
[43] Xiangyu Xu, Li Siyao, Wenxiu Sun, Qian Yin, and Ming- |
|
Hsuan Yang. Quadratic video interpolation. In NeurIPS , |
|
pages 1647–1656, 2019. 2, 3 |
|
[44] Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and |
|
William T Freeman. Video enhancement with task-oriented |
|
flow. IJCV , 127(8):1106–1125, 2019. 2, 5, 6, 7 |
|
[45] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, |
|
and Oliver Wang. The unreasonable effectiveness of deep |
|
features as a perceptual metric. In CVPR , pages 586–595, |
|
2018. 5 |
|
[46] Alex Zihao Zhu, Liangzhe Yuan, Kenneth Chaney, and |
|
Kostas Daniilidis. Unsupervised event-based optical flow us- |
|
ing motion compensation. In ECCV, pages 0–0, 2018. 3

Table 8: Overview of all sequences of the High Speed Event-RGB (HS-ERGB) dataset.
|
Sequence Name Camera Settings Description

Close planar sequences, Train subset:
Water bomb air (Fig. 12a) 163 FPS, 1080 µs exposure, 1065 frames accelerating object, water splash
Lighting match 150 FPS, 2972 µs exposure, 666 frames illumination change, fire
Fountain Schaffhauserplatz 1 150 FPS, 977 µs exposure, 1038 frames illumination change, fire
Water bomb ETH 2 (Fig. 12c) 163 FPS, 323 µs exposure, 3494 frames accelerating object, water splash
Waving arms 163 FPS, 3476 µs exposure, 762 frames non-linear motion

Close planar sequences, Test subset:
Popping air balloon 150 FPS, 2972 µs exposure, 335 frames non-linear motion, object disappearance
Confetti (Fig. 12e) 150 FPS, 2972 µs exposure, 832 frames non-linear motion, periodic motion
Spinning plate 150 FPS, 2971 µs exposure, 1789 frames non-linear motion, periodic motion
Spinning umbrella 163 FPS, 3479 µs exposure, 763 frames non-linear motion
Water bomb floor 1 (Fig. 12d) 160 FPS, 628 µs exposure, 686 frames accelerating object, water splash
Fountain Schaffhauserplatz 2 150 FPS, 977 µs exposure, 1205 frames non-linear motion, water
Fountain Bellevue 2 (Fig. 12b) 160 FPS, 480 µs exposure, 1329 frames non-linear motion, water, periodic movement
Water bomb ETH 1 163 FPS, 323 µs exposure, 3700 frames accelerating object, water splash
Candle (Fig. 12f) 160 FPS, 478 µs exposure, 804 frames illumination change, non-linear motion

Far-away sequences, Train subset:
Kornhausbruecke letten x 1 163 FPS, 266 µs exposure, 831 frames fast camera rotation around z-axis
Kornhausbruecke rot x 5 163 FPS, 266 µs exposure, 834 frames fast camera rotation around x-axis
Kornhausbruecke rot x 6 163 FPS, 266 µs exposure, 834 frames fast camera rotation around x-axis
Kornhausbruecke rot y 3 163 FPS, 266 µs exposure, 833 frames fast camera rotation around y-axis
Kornhausbruecke rot y 4 163 FPS, 266 µs exposure, 833 frames fast camera rotation around y-axis
Kornhausbruecke rot z 1 163 FPS, 266 µs exposure, 857 frames fast camera rotation around z-axis
Kornhausbruecke rot z 2 163 FPS, 266 µs exposure, 833 frames fast camera rotation around z-axis
Sihl 4 163 FPS, 426 µs exposure, 833 frames fast camera rotation around z-axis
Tree 3 163 FPS, 978 µs exposure, 832 frames camera rotation around z-axis
Lake 4 163 FPS, 334 µs exposure, 833 frames camera rotation around z-axis
Lake 5 163 FPS, 275 µs exposure, 833 frames camera rotation around z-axis
Lake 7 163 FPS, 274 µs exposure, 833 frames camera rotation around z-axis
Lake 8 163 FPS, 274 µs exposure, 832 frames camera rotation around z-axis
Lake 9 163 FPS, 274 µs exposure, 832 frames camera rotation around z-axis
Bridge lake 4 163 FPS, 236 µs exposure, 836 frames camera rotation around z-axis
Bridge lake 5 163 FPS, 236 µs exposure, 834 frames camera rotation around z-axis
Bridge lake 6 163 FPS, 235 µs exposure, 832 frames camera rotation around z-axis
Bridge lake 7 163 FPS, 235 µs exposure, 832 frames camera rotation around z-axis
Bridge lake 8 163 FPS, 235 µs exposure, 834 frames camera rotation around z-axis

Far-away sequences, Test subset:
Kornhausbruecke letten random 4 163 FPS, 266 µs exposure, 834 frames random camera movement
Sihl 03 163 FPS, 426 µs exposure, 834 frames camera rotation around z-axis
Lake 01 163 FPS, 335 µs exposure, 784 frames camera rotation around z-axis
Lake 03 163 FPS, 334 µs exposure, 833 frames camera rotation around z-axis
Bridge lake 1 163 FPS, 237 µs exposure, 833 frames camera rotation around z-axis
Bridge lake 3 163 FPS, 236 µs exposure, 834 frames camera rotation around z-axis

(a) Water bomb air (b) Fountain Bellevue
|
(c) Water bomb ETH 2 (d) Water bomb floor 1 |
|
(e) Confetti (f) Candle |
|
Figure 12: Example sequences of the HS-ERGB dataset. This figure contains animation that can be viewed in Acrobat |
|
Reader. |