Attention-Based Neural Networks for Chroma Intra
Prediction in Video Coding
Marc Górriz Blanch, Student Member IEEE, Saverio Blasi, Alan F. Smeaton, Fellow IEEE,
Noel E. O'Connor, Member IEEE, and Marta Mrak, Senior Member IEEE
Abstract—Neural networks can be successfully used to improve several modules of advanced video coding schemes. In particular, compression of colour components was shown to greatly benefit from usage of machine learning models, thanks to the design of appropriate attention-based architectures that allow the prediction to exploit specific samples in the reference region. However, such architectures tend to be complex and computationally intense, and may be difficult to deploy in a practical video coding pipeline. This work focuses on reducing the complexity of such methodologies, to design a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. A novel size-agnostic multi-model approach is proposed to reduce the complexity of the inference process. The resulting simplified architecture is still capable of outperforming state-of-the-art methods. Moreover, a collection of simplifications is presented in this paper, to further reduce the complexity overhead of the proposed prediction architecture. Thanks to these simplifications, a reduction in the number of parameters of around 90% is achieved with respect to the original attention-based methodologies. Simplifications include a framework for reducing the overhead of the convolutional operations, a simplified cross-component processing model integrated into the original architecture, and a methodology to perform integer-precision approximations with the aim to obtain fast and hardware-aware implementations. The proposed schemes are integrated into the Versatile Video Coding (VVC) prediction pipeline, retaining compression efficiency of state-of-the-art chroma intra-prediction methods based on neural networks, while offering different directions for significantly reducing coding complexity.

Index Terms—Chroma intra prediction, convolutional neural networks, attention algorithms, multi-model architectures, complexity reduction, video coding standards.
I. INTRODUCTION
EFFICIENT video compression has become an essential
component of multimedia streaming. The convergence
of digital entertainment followed by the growth of web ser-
vices such as video conferencing, cloud gaming and real-time
high-quality video streaming, prompted the development of
advanced video coding technologies capable of tackling the
increasing demand for higher quality video content and its con-
sumption on multiple devices. New compression techniques
enable a compact representation of video data by identifying
Manuscript submitted July 1, 2020. The work described in this paper has
been conducted within the project JOLT funded by the European Union’s Hori-
zon 2020 research and innovation programme under the Marie Skłodowska
Curie grant agreement No 765140.
M. Górriz Blanch, S. Blasi and M. Mrak are with BBC Research &
Development, The Lighthouse, White City Place, 201 Wood Lane, Lon-
don, UK (e-mail: [email protected], [email protected],
[email protected]).
A. F. Smeaton and N. E. O’Connor are with Dublin City University, Glas-
nevin, Dublin 9, Ireland (e-mail: [email protected], [email protected]).
Fig. 1. Visualisation of the attentive prediction process. For each reference
sample 0-16, the attention module generates its contribution to the prediction
of individual pixels from a target 4×4 block.
and removing spatial-temporal and statistical redundancies
within the signal. This results in smaller bitstreams, enabling
more efficient storage and transmission as well as distribution
of content at higher quality, requiring reduced resources.
Advanced video compression algorithms are often complex
and computationally intense, significantly increasing the en-
coding and decoding time. Therefore, despite bringing high
coding gains, their potential for application in practice is
limited. Among the current state-of-the-art solutions, the next
generation Versatile Video Coding standard [1] (referred to as
VVC in the rest of this paper), targets between 30% and 50% better
compression rates for the same perceptual quality, supporting
resolutions from 4K to 16K as well as 360° videos. One
fundamental component of hybrid video coding schemes, intra
prediction, exploits spatial redundancies within a frame by
predicting samples of the current block from already recon-
structed samples in its close surroundings. VVC allows a
large number of possible intra prediction modes to be used
on the luma component at the cost of a considerable amount
of signalling data. Conversely, to limit the impact of mode
signalling, chroma components employ a reduced set of modes
[1].
In addition to traditional modes, more recent research intro-
duced schemes which further exploit cross-component correla-
tions between the luma and chroma components. Such corre-
lations motivated the development of the Cross-Component
Linear Model (CCLM, or simply LM in this paper) intra
modes. When using CCLM, the chroma components are
predicted from already reconstructed luma samples using a
linear model. Nonetheless, simple linear predictions are limited by their high dependency on the selection of predefined reference samples. Improved performance can
be achieved using more sophisticated Machine Learning (ML)
mechanisms [2], [3], which are able to derive more complex
representations of the reference data and hence boost the
prediction capabilities.
Methods based on Convolutional Neural Networks (CNNs)
[2], [4] provided significant improvements at the cost of two
main drawbacks: the associated increase in system complex-
ity and the tendency to disregard the location of individual
reference samples. Related works have derived simplified neural networks (NNs) by means of model-based interpretability [5]. For instance, VVC recently adopted simplified NN-based
methods such as Matrix Intra Prediction (MIP) modes [6]
and Low-Frequency Non Separable Transform (LFNST) [7].
For the particular task of block-based intra-prediction, the
usage of complex NN models can be counterproductive if
there is no control over the relative position of the reference
samples. When using fully-connected layers, all input samples
contribute to all output positions, and after the consecutive
application of several hidden layers, the location of each
input sample is lost. This behaviour clearly runs counter
to the design of traditional approaches, in which predefined
directional modes carefully specify which boundary locations
contribute to each prediction position. A novel ML-based
cross-component intra-prediction method is proposed in [4],
introducing a new attention module capable of tracking the
contribution of each neighbouring reference sample when
computing the prediction of each chroma pixel, as shown in
Figure 1. As a result, the proposed scheme better captures
the relationship between the luma and chroma components,
resulting in more accurate prediction samples. However, such
NN-based methods significantly increase the codec complex-
ity, increasing the encoder and decoder times by up to 120%
and 947%, respectively.
This paper focuses on complexity reduction in video coding
with the aim to derive a set of simplified and cost-effective
attention-based architectures for chroma intra-prediction. Un-
derstanding and distilling knowledge from the networks en-
ables the implementation of less complex algorithms which
achieve similar performance to the original models. Moreover,
a novel training methodology is proposed in order to design a
block-independent multi-model which outperforms the state-
of-the-art attention-based architectures and reduces inference
complexity. The use of variable block sizes during training
helps the model to better generalise on content variety while ensuring higher precision when predicting large chroma blocks.
The main contributions of this work are the following:
- A competitive block-independent attention-based multi-model and training methodology;
- A framework for complexity reduction of the convolutional operations;
- A simplified cross-component processing model using sparse auto-encoders;
- A fast and cost-effective attention-based multi-model with integer precision approximations.
This paper is organised as follows: Section II provides a brief overview of the related work, Section III introduces the attention-based methodology in detail and establishes the mathematical notation for the rest of the paper, Section IV presents the proposed simplifications and Section V shows experimental results, with conclusions drawn in Section VI.
II. BACKGROUND
Colour images are typically represented by three colour
components (e.g. RGB, YCbCr). The YCbCr colour scheme
is often adopted by digital image and video coding standards
(such as JPEG, MPEG-1/2/4 and H.261/3/4) due to its ability
to compact the signal energy and to reduce the total required
bandwidth. Moreover, chrominance components are often sub-
sampled by a factor of two to conform to the YCbCr 4:2:0
chroma format, in which the luminance signal contains most of
the spatial information. Nevertheless, cross-component redun-
dancies can be further exploited by reusing information from
already coded components to compress another component.
In the case of YCbCr, the Cross-Component Linear model
(CCLM) [8] uses a linear model to predict the chroma signal
from a subsampled version of the already reconstructed luma
block signal. The model parameters are derived at both the
encoder and decoder sides without needing explicit signalling
in the bitstream.
Another example is the Cross-Component Prediction (CCP)
[9] which resides at the transform unit (TU) level regardless
of the input colour space. In the case of YCbCr, a subsampled and
dequantised luma transform block (TB) is used to modify the
chroma TB at the same spatial location based on a context
parameter signalled in the bitstream. An extension of this
concept modifies one chroma component using the residual
signal of the other one [10]. Such methodologies significantly
improved the coding efficiency by further exploiting the cross-
component correlations within the chroma components.
In parallel, the recent success of deep learning applications in computer vision and image processing has influenced the design of novel video compression algorithms. In particular, in the
context of intra-prediction, a new algorithm [3] was introduced
based on fully-connected layers and CNNs to map the predic-
tion of block positions from the already reconstructed neigh-
bouring samples, achieving BD-rate (Bjontegaard Delta rate)
[11] savings of up to 3.0% on average over HEVC, for approx.
200% increase in decoding time. The successful integration
of CNN-based methods for luma intra-prediction into existing
codec architectures has motivated research into alternative
methods for chroma prediction, exploiting cross-component
Fig. 2. Baseline attention-based architecture for chroma intra prediction presented in [4] and described in Section III.
redundancies similar to the aforementioned LM methods. A
novel hybrid neural network for chroma intra prediction was
recently introduced in [2]. A first CNN was designed to
extract features from reconstructed luma samples. This was
combined with another fully-connected network used to extract
cross-component correlations between neighbouring luma and
chroma samples. The resulting architecture uses complex non-
linear mapping for end-to-end prediction of chroma channels.
However, this is achieved at the cost of disregarding the spatial
location of the boundary reference samples and significant
increase of the complexity of the prediction process. As shown
in [4], after a consecutive application of fully-connected layers
in [2], the location of each input boundary reference sample
is lost. Therefore, the fully-convolutional architecture in [4]
better matches the design of the directional VVC modes and
is able to provide significantly better performance.
The use of attention models enables effective utilisation
of the individual spatial location of the reference samples
[4]. The concept of “attention-based” learning is a well-
known idea used in deep learning frameworks, to improve the
performance of trained networks in complex prediction tasks
[12], [13], [14]. In particular, self-attention is used to assess the
impact of particular input variables on the outputs, whereby
the prediction is computed focusing on the most relevant
elements of the same sequence [15]. The novel attention-
based architecture introduced in [4] reports average BD-rate
reductions of -0.22%, -1.84% and -1.78% for the Y, Cb and
Cr components, respectively, although it significantly impacts
the encoder and decoder time.
One common aspect across all related work is that, whilst the result is an improvement in compression, this comes at the expense of increased complexity of the encoder and decoder.
In order to address the complexity challenge, this paper aims
to design a set of simplified attention-based architectures for
performing chroma intra-prediction faster and more efficiently.
Recent works addressed complexity reduction in neural net-
works using methods such as channel pruning [16], [17],
[18] and quantisation [19], [20], [21]. In particular for video compression, many works used integer arithmetic in order
to efficiently implement trained neural networks on different
hardware platforms. For example, the work in [22] proposes a
training methodology to handle low precision multiplications,
proving that very low precision is sufficient not just for
running trained networks but also for training them. Similarly,
the work in [23] considers the problem of using variational
latent-variable models for data compression and proposes
integer networks as a universal solution for range coding as an entropy coding technique. They demonstrate that such
models enable reliable cross-platform encoding and decoding
of images using variational models. Moreover, in order to
ensure deterministic implementations on hardware platforms,
they approximate non-linearities using lookup tables. Finally,
an efficient implementation of matrix-based intra prediction
is proposed in [24], where a performance analysis evaluates
the challenges of deploying models with integer arithmetic
in video coding standards. Inspired by this knowledge, this
paper develops a fast and cost-effective implementation of the
proposed attention-based architecture using integer precision
approximations. As shown Section V-D, while such approxi-
mations can significantly reduce the complexity, the associated
drop of performance is still not negligible.
III. ATTENTION-BASED ARCHITECTURES
This section describes in detail the attention-based approach
proposed in [4] (Figure 2), which will be the baseline for the
presented methodology in this paper. The section also provides
the mathematical notation used for the rest of this paper.
Without loss of generality, only square blocks of pixels
are considered in this work. After intra-prediction and recon-
struction of a luma block in the video compression chain,
luma samples can be used for prediction of co-located chroma
components. In this discussion, the size of a luma block is
assumed to be (downsampled to) N×N samples, which is
the size of the co-located chroma block. This may require the
usage of conventional downsampling operations, such as in
the case of using chroma sub-sampled picture formats such
Fig. 3. Proposed multi-model attention-based architectures with the integration of the simplifications introduced in this paper. More details about the model’s
hyperparameters and a description of the referred schemes can be found in Section V.
as 4:2:0. Note that a video coding standard treats all image
samples as unsigned integer values within a certain precision
range based on the internal bit depth. However, in order to
utilise common deep learning frameworks, all samples are
converted to floating point and normalised to values within the range $[0, 1]$. For the chroma prediction process, the reference samples used include the co-located luma block $X_0 \in \mathbb{R}^{N \times N}$, and the array of reference samples $B_c \in \mathbb{R}^{b}$, $b = 4N + 1$, from the left and from above the current block (Figure 1), where $c = Y, Cb$ or $Cr$ refers to the three colour components. $B_c$ is constructed from samples on the left boundary (starting from the bottom-most sample), then the corner is added, and finally the samples on top are added (starting from the left-most sample). In case some reference samples are not available, these are padded using a predefined value, following the standard approach defined in VVC. Finally, $S_0 \in \mathbb{R}^{3 \times b}$ is the cross-component volume obtained by concatenating the three reference arrays $B_Y$, $B_{Cb}$ and $B_{Cr}$. Similar to
the model in [2], the attention-based architecture adopts a
scheme based on three network branches that are combined to
produce prediction samples, illustrated in Figure 2. The first
two branches work concurrently to extract features from the
input reference samples.
The first branch (referred to as the cross-component boundary branch) extracts cross-component features from $S_0 \in \mathbb{R}^{3 \times b}$ by applying $I$ consecutive $D_i$-dimensional 1×1 convolutional layers to obtain the $S_i \in \mathbb{R}^{D_i \times b}$ output feature maps, where $i = 1, 2, \ldots, I$. By applying 1×1 convolutions, the boundary input dimensions are preserved, resulting in a $D_i$-dimensional vector of cross-component information for each boundary location. The resulting volumes are activated using a Rectified Linear Unit (ReLU) non-linear function.
In parallel, the second branch (referred to as the luma convolutional branch) extracts spatial patterns over the co-located reconstructed luma block $X_0$ by applying convolutional operations. The luma convolutional branch is defined by $J$ consecutive $C_j$-dimensional 3×3 convolutional layers with a stride of 1, to obtain the $X_j \in \mathbb{R}^{C_j \times N^2}$ feature maps from the $N^2$ input samples, where $j = 1, 2, \ldots, J$. Similar to the cross-component boundary branch, a bias and a ReLU activation are applied within each convolutional layer.
The feature maps ($S_I$ and $X_J$) from both branches are each convolved using a 1×1 kernel, to project them into two corresponding reduced feature spaces. Specifically, $S_I$ is convolved with a filter $W_F \in \mathbb{R}^{h \times D}$ to obtain the $h$-dimensional feature matrix $F$. Similarly, $X_J$ is convolved with a filter $W_G \in \mathbb{R}^{h \times C}$ to obtain the $h$-dimensional feature matrix $G$. The two matrices are multiplied together to obtain the pre-attention map $M = G^T F$. Finally, the attention matrix $A \in \mathbb{R}^{N^2 \times b}$ is obtained by applying a softmax operation to each element of $M$, to generate the probability of each boundary location being able to predict a sample location in the block. Each value $\alpha_{j,i}$ in $A$ is obtained as:

$$\alpha_{j,i} = \frac{\exp(m_{i,j}/T)}{\sum_{n=0}^{b-1} \exp(m_{n,j}/T)}, \qquad (1)$$

where $j = 0, \ldots, N^2 - 1$ represents the sample location in the predicted block, $i = 0, \ldots, b - 1$ represents a reference sample location, and $T$ is the softmax temperature parameter controlling the smoothness of the generated probabilities, with $0 < T \leq 1$. Notice that the smaller the value of $T$, the more localised are the obtained attention areas, resulting in correspondingly fewer boundary samples contributing to a given prediction location. The weighted sum of the contribution of each reference sample in predicting a given sample at a specific location is obtained by computing the matrix multiplication between the cross-component boundary features $S_I$ and the attention matrix $A$, or formally $S_I^T A$. In order to further refine $S_I^T A$, this weighted sum can be multiplied by the output of the luma branch. To do so, the output of the luma branch must be transformed to change its dimensions by means of a 1×1 convolution using a matrix $W_x \in \mathbb{R}^{D \times C}$ to obtain a transformed representation $\hat{X}$, then $O = \hat{X} \odot (S_I^T A)$, where $\odot$ is the element-wise product.
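As an illustration of the above, the following is a minimal NumPy sketch of the attention computation, assuming the branch feature maps have already been extracted; array shapes follow the notation in this section, and the 1×1 convolutions are applied as plain matrix products over the flattened spatial locations. The temperature default of 0.5 follows the value quoted in Section IV-D.

import numpy as np

def attention_module(S_I, X_J, W_F, W_G, W_x, T=0.5):
    # S_I: (D, b) cross-component boundary features.
    # X_J: (C, N*N) luma convolutional features.
    # W_F: (h, D), W_G: (h, C), W_x: (D, C) act as 1x1 convolutions.
    F = W_F @ S_I                        # (h, b) boundary projection
    G = W_G @ X_J                        # (h, N*N) luma projection
    M = (G.T @ F) / T                    # (N*N, b) pre-attention map
    M -= M.max(axis=1, keepdims=True)    # numerical stability; A is unchanged
    E = np.exp(M)
    A = E / E.sum(axis=1, keepdims=True) # Eq. (1): one distribution per output pixel
    weighted = S_I @ A.T                 # (D, N*N) weighted boundary contributions
    X_hat = W_x @ X_J                    # (D, N*N) transformed luma representation
    return X_hat * weighted              # element-wise product O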
Finally, the output of the attention model is fed into the third
network branch, to compute the predicted chroma samples. In
this branch, a final CNN is used to map the fused features from
the first two branches as combined by means of the attention
model into the final chroma prediction. The prediction head
branch is defined by two convolutional layers, applying $E$-dimensional 3×3 convolutional filters and then 2-dimensional 1×1 filters for deriving the two chroma components at once.
IV. MULTI-MODEL ARCHITECTURES
This section introduces a new multi-model architecture
which improves the baseline attention-based approach (Section
III, [4]). The main improvement comes from its block-size
agnostic property as the proposed approach only requires one
model for all block sizes. Furthermore, a range of simpli-
fications is proposed with the aim to reduce the complex-
ity of related attention-based architectures while preserving
prediction performance as much as possible. The proposed
simplifications include a framework for complexity reduction
of the convolutional operations, a simplified cross-component
boundary branch using sparse autoencoders and insights for
fast and cost-effective implementations with integer precision
approximations. Figure 3 illustrates the proposed multi-model
attention-based schemes with the integration of the simplifica-
tions described in this section.
A. Multi-model size-agnostic architecture
In order to handle variable block sizes, previous NN-based
chroma intra-prediction methods employ different architec-
tures for blocks of different sizes. These architectures differ
in the dimensionality of the networks, which depends on the given block size, as a trade-off between model complexity and
prediction performance [2]. Given a network structure, the
depth of the convolutional layers is the most predominant
factor when dealing with variable input sizes. This means that
increasingly complex architectures are needed for larger block
sizes, in order to ensure proper generalisation for these blocks
which have higher content variety. Such a factor significantly
increases the requirements for inference because of the need to deploy multiple architectures.
In order to streamline the inference process, this work
proposes a novel multi-model architecture that is independent
of the input block size. Theoretically, a convolutional filter
can be applied over any input space. Therefore, the fully-convolutional nature of the proposed architecture (1×1 kernels for the cross-component boundary branch and 3×3 kernels for the luma convolutional one) allows the design of a size-agnostic architecture. As shown in Figure 4, the same task can be achieved using multiple models with different input sizes sharing the weights, such that a unified set of filters can be used a posteriori, during inference. The given architecture must employ a number of parameters that is sufficiently large to ensure proper performance for larger blocks, but not too large so as to incur overfitting for smaller blocks.
Figure 5 describes the algorithmic methodology employed to train the multi-model approach. As defined in Section III, the co-located luma block $X_0 \in \mathbb{R}^{N \times N}$ and the cross-component volume $S_0 \in \mathbb{R}^{3 \times b}$ are considered as inputs to the chroma prediction network. Furthermore, for training of a
Fig. 4. Illustration of the proposed multi-model training and inference methodologies. Multiple block-dependent models $\Phi_N(W^{(t)})$ are used during training time. A size-agnostic model with a single set of trained weights $W$ is then used during inference.
Require: $\{X_m^{(N)}, S_m^{(N)}, Z_m^{(N)}\}$, $m \in [0, M)$, $N \in \{4, 8, 16\}$
Require: $\Phi_N(W^{(t)})$: $N$-model with shared weights $W^{(t)}$
Require: $L_{reg}^{(t)}$: objective function at training step $t$
 1: $t \leftarrow 0$ (initialise timestep)
 2: while $t$ not converged do
 3:   for $m \in [0, M)$ do
 4:     for $N \in \{4, 8, 16\}$ do
 5:       $t \leftarrow t + 1$
 6:       $L_{reg}^{(t)} \leftarrow \mathrm{MSE}(Z_m^{(N)}, \Phi_N(X_m^{(N)}, S_m^{(N)}, W^{(t-1)}))$
 7:       $g^{(t)} \leftarrow \nabla_W L_{reg}^{(t)}$ (get gradients at step $t$)
 8:       $W^{(t)} \leftarrow \mathrm{optimiser}(g^{(t)})$
 9:     end for
10:   end for
11: end while
Fig. 5. Training algorithm for the proposed multi-model architecture.
multi-model the ground-truth is defined as $Z_m^{(N)}$, for a given input $\{X_m^{(N)}, S_m^{(N)}\}$, and the set of instances from a database of $M$ samples or batches is defined as $\{X_m^{(N)}, S_m^{(N)}, Z_m^{(N)}\}$, where $m = 0, 1, \ldots, M-1$ and $N \in \{4, 8, 16\}$ is the set of supported square block sizes $N \times N$ (the method can be extended to a different set of sizes). As shown in Figure 4, multiple block-dependent models $\Phi_N(W)$ with shared weights $W$ are updated in a concurrent way following the order of supported block sizes. At training step $t$, the individual model $\Phi_N(W^{(t)})$ is updated, obtaining a new set of weights $W^{(t+1)}$. Finally, a single set of trained weights $W$ is used during inference, obtaining a size-agnostic model $\Phi(W)$. Model parameters are updated by minimising the Mean Square Error (MSE) regression loss $L_{reg}$, as in:

$$L_{reg}^{(t)} = \frac{1}{C N^2} \left\| Z_m^{(N)} - \Phi_N(X_m^{(N)}, S_m^{(N)}, W^{(t-1)}) \right\|_2^2, \qquad (2)$$

where $C = 2$ refers to the number of predicted chroma components, and $\Phi_N(W^{(t-1)})$ is the block-dependent model at training step $t-1$.
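For concreteness, a condensed PyTorch-style sketch of the loop in Figure 5 follows, assuming a fully-convolutional `model` whose single set of weights is shared across block sizes; the `dataset` indexing and the `model(X, S)` signature are illustrative assumptions.

import torch
import torch.nn.functional as F

def train_multimodel(model, dataset, num_epochs, lr=1e-4):
    # `dataset[N]` is assumed to yield (X, S, Z) batches for N in {4, 8, 16}.
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, as in Section V-A
    for epoch in range(num_epochs):
        for m in range(len(dataset[4])):      # iterate over batches m
            for N in (4, 8, 16):              # supported square block sizes
                X, S, Z = dataset[N][m]       # luma block, boundary volume, ground truth
                pred = model(X, S)            # same shared weights W for every N
                loss = F.mse_loss(pred, Z)    # Eq. (2), averaged over C*N^2 values
                optimiser.zero_grad()
                loss.backward()               # g = grad_W of L_reg
                optimiser.step()              # W update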
Fig. 6. Visualisation of the receptive field of a 2-layer convolutional branch with 3×3 kernels. Observe that an output pixel in layer 2 is computed by applying a 3×3 kernel over a field $F_1$ of 3×3 samples from the first layer's output space. Similarly, each of the $F_1$ values is computed by means of another 3×3 kernel looking at a field $F_0$ of 5×5 samples over the input.
B. Simplified convolutions
Convolutional layers are responsible for most of the network's complexity. For instance, based on the network hyperparameters from the experiments in Section V, the luma convolutional branch and the prediction head branch (with 3×3 convolutional kernels) alone contain 46,882 out of 51,714 parameters, which constitute more than 90% of the parameters in the entire model. Therefore, the model complexity can be significantly reduced if convolutional layers can be simplified. This subsection explains how a new simplified structure beneficial for practical implementation can be devised by removing activation functions, i.e. by removing non-linearities. It is important to stress that such a process is devised only for application on carefully selected layers, i.e. for branches where such simplification does not significantly reduce the expected performance.
Consider a specific two-layer convolutional branch (e.g. the luma convolutional branch from Figure 2) formulated as:

$$Y = R(W_2 * R(W_1 * X + b_1) + b_2) \qquad (3)$$

where $C_i$ is the number of features in layer $i$, $b_i \in \mathbb{R}^{C_i}$ are biases, $K_i \times K_i$ are square convolutional kernel sizes, $W_1 \in \mathbb{R}^{[K_1^2 C_0] \times C_1}$ and $W_2 \in \mathbb{R}^{[K_2^2 C_1] \times C_2}$ are the weights of the first ($i = 1$) and the second ($i = 2$) layers, respectively, $C_0$ is the dimension of the input feature map, $R$ is a Rectified Linear Unit (ReLU) non-linear activation function and $*$ denotes the convolution operation. The input to the branch is $X \in \mathbb{R}^{N^2 \times C_0}$ and the result is a volume of features $Y \in \mathbb{R}^{N^2 \times C_2}$, which correspond to $X_0$ and $X_2$ from Figure 2, respectively. Removing the non-linearities, the given branch can be simplified as:

$$\hat{Y} = W_2 * (W_1 * X + b_1) + b_2, \qquad (4)$$

where it can be observed that new convolution and bias terms can be defined using the trained parameters from the two initial layers, to form a new single layer:

$$\hat{Y} = W_c * X + b_c, \qquad (5)$$
Fig. 7. Visualisation of the learnt colour space resulting from encoding input YCbCr colours to the 3-dimensional hidden space of the autoencoder.
where $W_c \in \mathbb{R}^{[\hat{K}^2 C_0] \times C_2}$ is a function of $W_1$ and $W_2$ with $\hat{K} = K_1 + K_2 - 1$, and $b_c$ is a constant vector derived from $W_2$, $b_1$ and $b_2$. Figure 6 (a) illustrates the operations performed in Eq. 4 for $K_1 = K_2 = 3$ and $C = 1$. Analysing the receptive field of the whole branch, a pixel within the output volume $Y$ is computed by applying a $K_2 \times K_2$ kernel over a field $F_1$ from the first layer's output space. Similarly, each of the $F_1$ values is computed by means of another $K_1 \times K_1$ kernel looking at a field $F_0$. Without the non-linearities, an equivalent of this process can be simplified as in Figure 6 (b) and Eq. 5. Notice that $\hat{K} = K_1 + K_2 - 1$ equals 5 in the example in Figure 6. For a variety of parameters, including the values of $C_0$, $C_i$ and $K_i$ used in [4] and in this paper, this simplification of concatenated convolutional layers allows a reduction of the model's parameters at inference time, which will be shown in Section V-C.
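To make the merge in Eq. 5 concrete, the following is a minimal NumPy/SciPy sketch for the single-channel case ($C_0 = C_1 = C_2 = 1$, an assumption made for clarity; the branches in this paper are multi-channel). It verifies numerically that two stacked linear convolutions collapse into a single $\hat{K} \times \hat{K}$ convolution plus a constant bias.

import numpy as np
from scipy.signal import convolve2d

def merge_conv_layers(W1, b1, W2, b2):
    # Returns (Wc, bc) such that W2*(W1*X + b1) + b2 == Wc*X + bc.
    Wc = convolve2d(W1, W2, mode="full")  # K_hat = K1 + K2 - 1
    bc = b1 * W2.sum() + b2               # the constant b1 passes through W2
    return Wc, bc

# Quick numerical check on a random input; padding effects are avoided
# by comparing only the valid interior region.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 16))
W1, W2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
b1, b2 = 0.3, -0.1
two_step = convolve2d(convolve2d(X, W1, mode="valid") + b1, W2, mode="valid") + b2
Wc, bc = merge_conv_layers(W1, b1, W2, b2)
one_step = convolve2d(X, Wc, mode="valid") + bc   # single 5x5 convolution
assert np.allclose(two_step, one_step)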
Finally, it should be noted that we limit the removal of activation functions only to branches which include more than one layer, of which at least one layer has $K_i > 1$, and only the activation functions between layers in the same branch are removed (to be able to merge them as in Equation 5). In such branches with at least one $K_i > 1$ the number of parameters is typically very high, while the removal of non-linearities typically does not impact prediction performance. Activation functions are not removed from the remaining layers. It should be noted that in the attention module and at the intersections of various branches the activation functions are critical and are therefore left unchanged. Section V-C performs an ablation test to evaluate the effect of removing the non-linearities, and a test to evaluate how a convolutional branch directly trained with large-support kernels $\hat{K}$ would perform.
C. Simplified cross-component boundary branch
In the baseline model, the cross-component boundary branch transforms the boundary inputs $S \in \mathbb{R}^{3 \times b}$ into $D_J$-dimensional feature vectors. More specifically, after applying $J = 2$ consecutive 1×1 convolutional layers, the branch encodes each boundary colour into a high-dimensional feature space. It should be noted that a colour is typically represented by 3 components, indexed within a system of coordinates (referred to as the colour space). As such, a three-dimensional feature space can be considered as the space with minimum
dimensionality that is still capable of representing colour
information. Therefore, this work proposes the use of autoen-
coders (AE) to reduce the complexity of the cross-component
boundary branch, by compacting the $D$-dimensional feature space into a reduced, 3-dimensional space. An AE tries to learn an approximation to the identity function $h(x) \approx x$ such that the reconstructed output $\hat{x}$ is as close as possible to the input $x$. The hidden layer will have a reduced dimensionality with
respect to the input, which also means that the transformation
process may introduce some distortion, i.e. the reconstructed
output will not be identical to the input.
An AE consists of two networks, the encoder $f$ which maps the input to the hidden features, and the decoder $g$ which reconstructs the input from the hidden features. Applying this
concept, a compressed representation of the input can be
obtained by using the encoder part alone, with the goal of
reducing the dimensionality of the input vectors. The encoder
network automatically learns how to reduce the dimensions
of the input vectors, in a similar fashion to what could be
obtained applying a manual Principal Component Analysis
(PCA) transformation. The transformation learned by the AE
can be trained using the same loss function that is used in the
PCA process [25]. Figure 7 shows the mapping function of
the resulting colour space when applying the encoder network
over the YCbCr colour space.
Overall, the proposed simplified cross-component boundary branch consists of two 1×1 convolutional layers using Leaky ReLU activation functions with a slope $\alpha = 0.2$. First, a $D$-dimensional layer is applied over the boundary inputs $S$ to obtain the $S_1 \in \mathbb{R}^{D \times b}$ feature maps. Then, $S_1$ is fed to the AE's encoder layer $f$ with output dimension 3, to obtain the hidden feature maps $S_2 \in \mathbb{R}^{3 \times b}$. Finally, a third 1×1 convolutional layer (corresponding to the AE decoder layer $g$) is applied to generate the reconstructed maps $\tilde{S}_1$ with $D$ dimensions. Notice that the decoder layer is only necessary during the training stage, to obtain the reconstructed inputs necessary to derive the values of the loss function. Only the encoder layer is needed when using the network, in order to transform the input feature vectors into the 3-dimensional, reduced vectors. Figure 3 illustrates the branch architecture and its integration within the simplified multi-model.
Finally, in order to interpret the behaviour of the branch and to identify prediction patterns, a sparsity constraint can be imposed on the loss function. Formally, the following can be used:

$$L_{AE} = \frac{\lambda_r}{Db} \left\| S_1 - \tilde{S}_1 \right\|_2^2 + \frac{\lambda_s}{3b} \left\| S_2 \right\|_1, \qquad (6)$$

where the right-most term is used to keep the activation functions in the hidden space inactive most of the time, so that they only return non-zero values for the most descriptive samples. In order to evaluate the effect of the sparsity term, Section V-C performs an ablation test that shows its positive regularisation properties during training.

The objective function in Equation 2 can be updated such that the global multi-model loss $L$ considers both $L_{reg}$ and $L_{AE}$, as:

$$L = \lambda_{reg} L_{reg} + \lambda_{AE} L_{AE} \qquad (7)$$

where $\lambda_{reg}$ and $\lambda_{AE}$ control the contribution of both losses.
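A short PyTorch sketch of the simplified branch and the loss in Eq. 6 is given below; the hyperparameter values and the placement of the activations are assumptions for illustration, with $D = 32$ taken from Table I.

import torch
import torch.nn as nn

class SparseBoundaryAE(nn.Module):
    # Simplified cross-component boundary branch (Section IV-C).
    # S has shape (batch, 3, b): three colour components per boundary sample.
    def __init__(self, D=32):
        super().__init__()
        self.first = nn.Conv1d(3, D, kernel_size=1)  # D-dimensional 1x1 layer -> S1
        self.enc = nn.Conv1d(D, 3, kernel_size=1)    # encoder f -> hidden maps S2
        self.dec = nn.Conv1d(3, D, kernel_size=1)    # decoder g, training only
        self.act = nn.LeakyReLU(0.2)                 # slope alpha = 0.2

    def forward(self, S):
        S1 = self.act(self.first(S))
        S2 = self.act(self.enc(S1))
        S1_rec = self.act(self.dec(S2))
        return S1, S2, S1_rec

def ae_loss(S1, S2, S1_rec, lam_r=1.0, lam_s=0.1):
    # Eq. (6): MSE reconstruction term plus L1 sparsity on the hidden maps;
    # the lambda weights here are illustrative.
    return lam_r * (S1 - S1_rec).pow(2).mean() + lam_s * S2.abs().mean()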
D. Integer precision approximation
While the training algorithm results in IEEE-754 64-bit
floating point weights and prediction buffers, an additional
simplification is proposed in this paper whereby the network
weights and prediction buffers are represented using fixed-
point integer arithmetic. This is beneficial for the deployment of the resulting multi-models in efficient hardware implementations, in which complex operations such as Leaky ReLU and softmax activation functions can become serious bottlenecks. All the network weights obtained after the training stage are therefore appropriately quantised to fit 32-bit signed integer values. It should be noted that integer approximation introduces quantisation errors, which may have an impact on the performance of the overall predictions.
In order to prevent arithmetic overflows after performing multiplications or additions, appropriate scaling factors are defined for each layer during each of the network prediction steps. To further reduce the complexity of the integer approximation, the scaling factor $K_l$ for a given layer $l$ is obtained as a power of 2, namely $K_l = 2^{O_l}$, where $O_l$ is the respective precision offset. This ensures that multiplications can be performed by means of simple binary shifts. Formally, the integer weights $\tilde{W}_l$ and biases $\tilde{b}_l$ for each layer $l$ in the network with weights $W_l$ and bias $b_l$ can be obtained as:

$$\tilde{W}_l = \lfloor W_l 2^{O_l} \rfloor, \quad \tilde{b}_l = \lfloor b_l 2^{O_l} \rfloor. \qquad (8)$$

The offset $O_l$ depends on the offset used on the previous layer, $O_{l-1}$, as well as on an internal offset $O_x$ necessary to preserve as much decimal information as possible, compensating for the quantisation that occurred in the previous layer, namely $O_l = O_x - O_{l-1}$.

Furthermore, in this approach the values predicted by the network are also integers. In order to avoid defining large internal offsets at each layer, namely having large values of $O_x$, an additional stage of compensation is applied to the predicted values, to keep them in the range of 32-bit signed integers. For this purpose, another offset $O_y$ is defined, computed as $O_y = O_x - O_l$. The values generated by layer $l$ are then computed as:

$$Y_l = ((\tilde{W}_l^T X_l + \tilde{b}_l) + (1 \ll (O_y - 1))) \gg O_y, \qquad (9)$$

where $\ll$ and $\gg$ represent the left and right binary shifts, respectively, and the offset $(1 \ll (O_y - 1))$ is considered to reduce the rounding error.
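The following NumPy sketch illustrates Eqs. 8-9 for a single fully-connected layer; the offsets are illustrative values, and the bias is scaled to the product scale $2^{O_x}$ here (an assumption made so that the integer sum is consistent within the sketch).

import numpy as np

rng = np.random.default_rng(1)

# Float reference layer: y = W x + b.
W = rng.normal(size=(4, 8))
b = rng.normal(size=4)
x_float = rng.normal(size=8)

O_prev = 8                     # precision offset carried by the input, O_{l-1}
O_x = 14                       # internal offset (an assumed value)
O_l = O_x - O_prev             # this layer's offset, O_l = O_x - O_{l-1}
O_y = O_x - O_l                # output compensation offset, O_y = O_x - O_l

# Eq. (8): floor-quantise with power-of-two scales.
W_int = np.floor(W * 2**O_l).astype(np.int64)
b_int = np.floor(b * 2**O_x).astype(np.int64)   # bias at the product scale 2^{O_x}
x_int = np.floor(x_float * 2**O_prev).astype(np.int64)

# Eq. (9): integer product at scale 2^{O_x}, shifted down by O_y with a
# rounding offset; the output is left at scale 2^{O_l} for the next layer.
y_int = ((W_int @ x_int + b_int) + (1 << (O_y - 1))) >> O_y

print(y_int / 2**O_l)          # close to the float result W @ x_float + b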
Complex operations requiring floating point divisions need to be approximated to integer precision. The Leaky ReLU activation functions applied on the cross-component boundary branch use a slope $\alpha = 0.2$ which multiplies the negative values. Such an operation can be simply approximated by defining a new activation function $\tilde{A}(x)$ for any input $x$ as follows:

$$\tilde{A}(x) = \begin{cases} x & : x \geq 0 \\ (26x) \gg 7 & : x < 0 \end{cases} \qquad (10)$$
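Since $26/2^7 = 0.203125$, the shifted slope stays within about 2% of $\alpha = 0.2$; a two-line sketch:

def leaky_relu_int(x: int) -> int:
    # Eq. (10): 26/128 = 0.203125 approximates the slope 0.2; note that
    # Python's >> is an arithmetic shift (it floors toward minus infinity).
    return x if x >= 0 else (26 * x) >> 7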
Conversely, the softmax operations used in the attention module are approximated following a more complex methodology, similar to the one used in [26]. Consider the matrix $M$ as defined in Equation 1, a given row $j$ in $M$, and a vector $m_j$ as input to the softmax operation. First, all elements of $m_j$ in a row are subtracted by the maximum element in the row, namely:

$$\hat{m}_{i,j} = m_{i,j}/T - \max_i(m_{i,j}/T) \qquad (11)$$

where $T$ is the temperature of the softmax operation, set to 0.5 as previously mentioned. The transformed elements $\hat{m}_{i,j}$ range between the minimum signed integer value and zero, because the arguments $\hat{m}_{i,j}$ are obtained by subtracting the elements in $M$ by the maximum element in each row. To further reduce the possibility of overflows, this range is further clipped to a minimum negative value, set to a pre-determined number $V_e$, so that any $\hat{m}_{i,j} < V_e$ is set equal to $V_e$.

The elements $\hat{m}_{i,j}$ are negative integer numbers within the range $[V_e, 0]$, meaning there is a fixed number of $N_e = |V_e| + 1$ possible values they can assume. To further simplify the process, the exponential function is replaced by a pre-computed look-up table containing $N_e$ integer elements. To minimise the approximation error, the exponentials are scaled by a given scaling factor before being approximated to the nearest integer and stored in the corresponding look-up table LUT-EXP. Formally, for a given index $k$, where $0 \leq k \leq N_e - 1$, the $k$-th integer input is obtained as $s_k = V_e + k$. The $k$-th element in the look-up table can then be computed as the approximated, scaled exponential value for $s_k$, or:

$$\text{LUT-EXP}(k) = \lfloor K_e e^{s_k} \rfloor \qquad (12)$$

where $K_e = 2^{O_e}$ is the scaling factor, chosen in a way to maximise the preservation of the original decimal information. When using the look-up table during the prediction process, given an element $\hat{m}_{i,j}$ the corresponding index $k$ can be retrieved as $k = |V_e - \hat{m}_{i,j}|$, to produce the numerator in the softmax function.

The integer approximation of the softmax function can then be written as:

$$\hat{\alpha}_{j,i} = \frac{\text{LUT-EXP}(|V_e - \hat{m}_{i,j}|)}{D(j)}, \qquad (13)$$

where:

$$D(j) = \sum_{n=0}^{b-1} \text{LUT-EXP}(|V_e - \hat{m}_{n,j}|). \qquad (14)$$
Equation 13 implies performing an integer division between the numerator and denominator. This is not ideal, and integer divisions are typically avoided in low complexity encoder implementations. A simple solution to remove the integer division can be obtained by replacing it with a binary shift. However, a different approach is proposed in this paper to provide a more robust approximation that introduces smaller errors in the division. The denominator $D(j)$ as in Equation 14 is obtained as the sum of $b$ values extracted from LUT-EXP, where $b$ is the number of reference samples extracted from the boundary of the block. As such, the largest blocks under consideration (16×16) will result in the largest possible number of reference samples $b_{MAX}$. This means that the maximum value that this denominator can assume is obtained when $b = b_{MAX}$ and when all inputs $\hat{m}_{i,j} = 0$ (which correspond to $\text{LUT-EXP}(|V_e|) = K_e$), corresponding to $V_s = b_{MAX} K_e$. Similarly, the minimum value (obtained when $\hat{m}_{i,j} = V_e$) is 0. Correspondingly, $D(j)$ can assume any positive integer value in the range $[0, V_s]$.

Considering a given scaling factor $K_s = 2^{O_s}$, integer division by $D(j)$ can be approximated using a multiplication by the factor $M(j) = \lfloor K_s / D(j) \rfloor$. A given value of $M(j)$ could be computed for all $V_s + 1$ possible values of $D(j)$. Such values can then be stored in another look-up table LUT-SUM. Clearly though, $V_s$ is too large, which means LUT-SUM would be impractical to use due to storage and complexity constraints. For that reason, a smaller table is used, obtained by quantising the possible values of $D(j)$. A pre-defined step $Q$ is used, resulting in $N_s = (V_s + 1)/Q$ quantised values of $D(j)$. The table LUT-SUM of size $N_s$ is then filled accordingly, where each element in the table is obtained as:

$$\text{LUT-SUM}(l) = \lfloor K_s / (l \cdot Q) \rfloor \qquad (15)$$

Finally, when using the table during the prediction process, given an integer sum $D(j)$, the corresponding index $l$ can be retrieved as $l = \lfloor D(j)/Q \rfloor$. Following from these simplifications, given an input $\hat{m}_{i,j}$ obtained as in Equation 11, the integer sum $D(j)$ obtained from Equation 14, and a quantisation step $Q$, the simplified integer approximation of the softmax function can eventually be obtained as:

$$\tilde{\alpha}_{j,i} = \text{LUT-EXP}(|V_e - \hat{m}_{i,j}|) \cdot \text{LUT-SUM}(\lfloor D(j)/Q \rfloor), \qquad (16)$$

Notice that the $\tilde{\alpha}_{j,i}$ values are finally scaled by $K_o = K_e K_s$.
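A Python sketch of the LUT-based pipeline in Eqs. 11-16 follows; $V_e$, $O_e$, $O_s$ and $Q$ are illustrative choices rather than values prescribed by the paper, and in this simplified sketch the $K_e$ factors cancel between numerator and denominator, so the float weights are recovered by rescaling with $K_s$.

import math

# Illustrative constants (assumed values, not prescribed by the paper).
V_e = -16                    # clipping floor for the shifted logits
O_e, O_s = 10, 16            # scaling offsets: K_e = 2**O_e, K_s = 2**O_s
K_e, K_s = 1 << O_e, 1 << O_s
Q = 64                       # quantisation step for the denominator table
b_max = 4 * 16 + 1           # reference samples of the largest (16x16) block
V_s = b_max * K_e            # largest possible denominator

# Eq. (12): scaled exponentials for the inputs s_k = V_e + k.
LUT_EXP = [int(K_e * math.exp(V_e + k)) for k in range(abs(V_e) + 1)]
# Eq. (15): reciprocal table over quantised denominators; index 0 is unused
# here because one logit per row is always 0, so D >= K_e > Q.
LUT_SUM = [0] + [K_s // (l * Q) for l in range(1, (V_s + 1) // Q + 1)]

def int_softmax_row(m_hat):
    # m_hat: integer logits of one row after Eq. (11), clipped to [V_e, 0].
    num = [LUT_EXP[abs(V_e - m)] for m in m_hat]   # numerators, Eq. (13)
    D = sum(num)                                   # denominator, Eq. (14)
    inv = LUT_SUM[D // Q]                          # approximates K_s / D
    return [n * inv for n in num]                  # Eq. (16)

weights = int_softmax_row([0, -3, -8, V_e])
print([w / K_s for w in weights])  # roughly the float softmax of the row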
V. EXPERIMENTS
A. Training settings
Training examples were extracted from the DIV2K dataset [27], which contains high-definition high-resolution content of large diversity. This database contains 800 training samples and 100 samples for validation, providing six lower-resolution versions with downsampling by factors of 2, 3 and 4, obtained with bilinear and unknown filters. For each data instance, one resolution was randomly selected and then $M$ blocks of each $N \times N$ size ($N = 4, 8, 16$) were chosen, making balanced sets between block sizes and uniform spatial selections within each image. Moreover, 4:2:0 chroma sub-sampling is assumed, where the same downsampling filters implemented in VVC are used to downsample co-located luma blocks to the size of the corresponding chroma block. All the schemes were trained from scratch using the Adam optimiser [28] with a learning rate of $10^{-4}$.
B. Integration into VVC
The methods introduced in the paper were integrated within a VVC encoder, using the VVC Test Model (VTM)
7.0 [29]. The integration of the proposed NN-based cross-
component prediction into the VVC coding scheme requires
normative changes not only in the prediction process, but also
in the way the chroma intra-prediction mode is signalled in
the bitstream and parsed by the decoder.JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020 9
TABLE I
NETWORK HYPERPARAMETERS DURING TRAINING

Branch (C_in, K×K, C_out)   Scheme 1 & 3       Scheme 2
CC Boundary                 3, 1×1, 32         3, 1×1, 32
                            32, 1×1, 32        32, 1×1, 3
Luma Convolutional          1, 3×3, 64         1, 3×3, 64
                            64, 3×3, 64        64, 3×3, 64
Attention Module            32, 1×1, 16        32, 1×1, 16
                            64, 1×1, 16        64, 1×1, 16
                            64, 1×1, 32        64, 1×1, 3
Prediction Head             32, 3×3, 32        3, 3×3, 3
                            32, 1×1, 2         3, 1×1, 2

TABLE II
NETWORK HYPERPARAMETERS DURING INFERENCE

Branch (C_in, K×K, C_out)   Scheme 1 & 3       Scheme 2
CC Boundary                 3, 1×1, 32         3, 1×1, 32
                            32, 1×1, 32        32, 1×1, 3
Luma Convolutional          1, 5×5, 64         1, 5×5, 64
Attention Module            32, 1×1, 16        32, 1×1, 16
                            64, 1×1, 16        64, 1×1, 16
                            64, 1×1, 32        64, 1×1, 3
Prediction Head             32, 3×3, 2         3, 3×3, 2
A new block-level syntax flag is introduced to indicate
whether a given block makes use of one of the proposed
schemes. If the proposed NN-based method is used, a pre-
diction is computed for the two chroma components. No
additional information is signalled related to the chroma intra-
prediction mode for the block. Conversely, if the method
is not used, the encoder proceeds in signalling the chroma
intra-prediction mode as in conventional VVC specifications.
For instance, a subsequent flag is signalled to identify if
conventional LM modes are used in the current block or not.
The prediction path also needs to accommodate the new NN-
based predictions. This largely reuses prediction blocks that
are needed to perform conventional CCLM modes. In terms
of mode selection at the encoder side, the new NN-based mode
is added to the conventional list of modes to be tested in full
rate-distortion sense.
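To make the parsing flow concrete, the following is a hedged Python sketch of the decoder-side logic implied by this signalling scheme; `BitReader`, the flag order and the mode names are illustrative stand-ins, not actual VTM-7.0 syntax elements.

class BitReader:
    # Toy bit reader standing in for the entropy decoder (illustrative).
    def __init__(self, bits):
        self.bits = list(bits)
    def read_flag(self) -> bool:
        return bool(self.bits.pop(0))

def parse_chroma_intra_mode(reader: BitReader) -> str:
    if reader.read_flag():           # new block-level flag
        return "NN_CROSS_COMPONENT"  # proposed mode: predicts Cb and Cr jointly,
                                     # no further chroma mode signalling
    if reader.read_flag():           # subsequent flag for the LM family
        return "CCLM"                # one of the conventional LM modes
    return "CONVENTIONAL_ANGULAR"    # remaining conventional chroma modes

print(parse_chroma_intra_mode(BitReader([1])))     # NN-based mode selected
print(parse_chroma_intra_mode(BitReader([0, 1])))  # conventional LM mode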
C. Architecture configurations
The proposed multi-model architectures and simplifications (Section IV) are implemented in three different schemes:
- Scheme 1: the multi-model architecture (Section IV-A) applying the methodology in Section IV-B to simplify the convolutional layers within the luma convolutional branch and the prediction branch, as illustrated in Figure 3.
- Scheme 2: the multi-model architecture in Scheme 1 applying the methodology in Section IV-C to simplify the cross-component boundary branch. As shown in Figure 3, the integration of the simplified branch requires modification of the initial architecture, with changes in the attention module and the prediction branch.
- Scheme 3: the architecture in Scheme 1 with the integer precision approximations described in Section IV-D.
In contrast to previous state-of-the-art methods, the proposed multi-model does not need to adapt its architecture to the input block size. Notice that the fully-convolutional architecture introduced in [4] enables this design and is able to significantly reduce the complexity of the cross-component boundary branch in [2], which uses size-dependent fully-connected layers. Table I shows the network hyperparameters of the proposed schemes during training, whereas Table II shows the resulting hyperparameters for inference after applying the proposed simplifications. As shown in Tables III and IV, the number of parameters employed in the proposed schemes represents the trade-off between complexity and prediction performance, within the order of magnitude of the related attention-based CNNs in [4]. The proposed simplifications significantly reduce (by around 90%) the original training parameters, achieving lighter architectures at inference time. Table III shows that the inference version of Scheme 2 reduces the complexity of the hybrid CNN models in [2] by around 85%, 96% and 99%, and the complexity of the attention-based models in [4] by around 82%, 96% and 98%, for 4×4, 8×8 and 16×16 input block sizes, respectively. Finally, in order to provide more insights about the computational cost and to compare the proposed schemes with the state-of-the-art methods, Table V shows the number of floating point operations (FLOPs) for each architecture per block size. The reduction of operations (e.g. additions and matrix multiplications) needed to arrive at the predictions is one of the predominant factors towards the given speedups. Notice the significant reduction of FLOPs for the proposed inference models.
In order to obtain a preliminary evaluation of the proposed schemes and to compare their prediction performance with the state-of-the-art methods, the trained models were tested on the DIV2K validation set (with 100 multi-resolution images) by means of averaged PSNR. Test samples were obtained with the same methodology as used in Section V-A for generating the training dataset. Notice that this test uses the training version of the proposed schemes. As shown in Table IV, the multi-model approach introduced in Scheme 1 improves on the attention-based CNNs in [4] for 4×4 and 8×8 blocks, while only a small performance drop can be observed for 16×16 blocks. However, because it uses a fixed architecture for all block sizes, the proposed multi-model architecture averages the complexity of the individual models in [4] (Table III), slightly increasing the complexity of the 4×4 model and simplifying the 16×16 architecture. The complexity reduction in the 16×16 model leads to a small drop in performance. As can be observed from Table IV, the generalisation process induced by the multi-model methodology ([4] with multi-model, compared to [4]) can minimise such a drop by distilling knowledge from the rest of the block sizes, which is especially evident for 8×8 blocks, where a reduced architecture can improve on the state-of-the-art performance.

Finally, the simplifications introduced in Scheme 2 (e.g. the architecture changes required to integrate the modified cross-component boundary branch within the original model) lower the prediction performance of Scheme 1. However, the highly simplified architecture is capable of outperforming the hybrid CNN models in [2], with training PSNR improvements of an additional 1.30, 2.21 and 2.31 dB for 4×4, 8×8 and 16×16 input block sizes, respectively.
TABLE III
MODEL COMPLEXITY PER BLOCK SIZE

Model (parameters)              4×4       8×8       16×16
Hybrid CNN [2]                  24435     96116     369222
Attention-based CNN [4]         21602     83106     186146
Scheme 1 & 3 (train/inference)  51714 / 7074 (all block sizes)
Scheme 2 (train/inference)      39371 / 3710 (all block sizes)
TABLE IV
PREDICTION PERFORMANCE PER BLOCK SIZE

Model (PSNR)                    4×4     8×8     16×16
Hybrid CNN [2]                  28.61   31.47   33.36
Attention-based CNN [4]         30.23   33.13   36.13
[4] with multi-model            30.55   33.21   36.05
Scheme 1 single layer training  30.36   33.05   35.88
Scheme 2 without sparsity       29.89   32.66   35.64
(proposed) Scheme 1             30.54   33.20   35.99
(proposed) Scheme 2             29.91   32.68   35.67
The combination of attention-based architectures with the proposed multi-model methodology (Scheme 1) considerably improves on the NN-based chroma intra-prediction methods in [2], showing training PSNR improvements by an additional 1.93, 1.73 and 2.68 dB for the supported block sizes. In Section V-D it will be shown how these relatively small PSNR differences lead to significant differences in codec performance.

Several ablations were performed in order to evaluate the effects of the proposed simplifications. First, the effect of the multi-model methodology is evaluated by directly converting the models in [4] to the size-agnostic architecture in Scheme 1, but without the simplifications in Section IV-B ([4] with multi-model). As can be seen in Table IV, this methodology improves the 4×4 and 8×8 models, with special emphasis on the 8×8 case where the number of parameters is smaller than in [4]. Moreover, the removal of non-linearities towards Scheme 1 does not significantly affect the performance, with a negligible PSNR loss of around 0.3 dB ([4] with multi-model compared with Scheme 1). Secondly, in order to evaluate the simplified convolutions methodology in Section IV-B, a version of Scheme 1 was trained with single-layer convolutional branches with large support kernels (e.g. instead of training 2 linear layers with 3×3 kernels and then combining them into 5×5 kernels for inference, directly training a single-layer branch with 5×5 kernels). Experimental results show the positive effects of the proposed methodology, with a significant drop of performance observed when a single-layer trained branch is applied (Scheme 1 with single layer training compared with Scheme 1). Finally, the effect of the sparse autoencoder of Scheme 2 is evaluated by removing the sparsity term in Equation 7. As can be observed, the regularisation properties of the sparsity term, i.e. preventing large activations, boost the generalisation capabilities of the multi-model and slightly increase the prediction performance, by around 0.2 dB (Scheme 2 without sparsity compared with Scheme 2).
TABLE V
FLOPS PER BLOCK SIZE

Model (FLOPs)                   4×4       8×8       16×16
Hybrid CNN [2]                  51465     187273    711945
Attention-based CNN [4]         42795     165451    186146
Scheme 1 & 3 (train/inference)  102859 / 13770 (all block sizes)
Scheme 2 (train/inference)      79103 / 7225 (all block sizes)
D. Simulation Results
The VVC reference software VTM-7.0 is used as our benchmark and our proposed methodology is tested under the Common Test Conditions (CTC) [30], using the suggested all-intra configuration for VVC with QP values of 22, 27, 32 and 37. In order to fully evaluate the performance of the proposed multi-models, the encoder configuration is constrained to support only square blocks of 4×4, 8×8 and 16×16 pixels. A corresponding VVC anchor was generated under these conditions. BD-rate is adopted to evaluate the relative compression efficiency with respect to the latest VVC anchor. Test sequences include 26 video sequences of different resolutions: 3840×2160 (Class A1 and A2), 1920×1080 (Class B), 832×480 (Class C), 416×240 (Class D), 1280×720 (Class E) and screen content (Class F). "EncT" and "DecT" denote "Encoding Time" and "Decoding Time", respectively.
A colour analysis is performed in order to evaluate the
impact of the chroma channels on the final prediction per-
formance. As suggested in previous colour prediction works
[31], standard regression methods for chroma prediction may
not be effective for content with wide distributions of colours.
A parametric model which is trained to minimise the Euclidean
distance between the estimations and the ground truth com-
monly tends to average the colours of the training examples
and hence produce desaturated results. As shown in Figure 8,
several CTC sequences are analysed by computing the loga-
rithmic histogram of both chroma components. The width of
the logarithmic histograms is compared to the compression
performance in Table VI. The Gini index [32] is used to quantify the width of the histograms, obtained as

$$\text{Gini}(H) = 1 - \sum_{b=0}^{B-1} \left( \frac{H(b)}{\sum_{k=0}^{B-1} H(k)} \right)^2 \qquad (17)$$

where $H$ is a histogram of $B$ bins for a given chroma component. Notice that the average value between both chroma components is used in Table VI. A direct correlation between the Gini index and coding performance can be observed in Table VI, suggesting that Scheme 1 performs better for narrower colour distributions. For instance, the Tango2 sequence with a Gini index of 0.63 achieves average Y/Cb/Cr BD-rates of -0.46%/-8.13%/-3.13%, whereas Campfire, with wide colour histograms (Gini index of 0.98), obtains average Y/Cb/Cr BD-rates of -0.21%/0.14%/-0.88%. Although the distributions of chroma channels can be a reliable indicator of prediction performance, wide colour distributions may not be the only factor restricting the chroma prediction capabilities of the proposed methods, which can be investigated in future work.
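A small NumPy sketch of Eq. 17 follows, useful for reproducing the histogram-width analysis; the example histograms are illustrative.

import numpy as np

def gini_index(hist):
    # Eq. (17): Gini index of a chroma histogram (1 minus the sum of
    # squared bin shares). Wider, flatter histograms score closer to 1.
    shares = np.asarray(hist, dtype=float)
    shares /= shares.sum()
    return 1.0 - np.sum(shares ** 2)

# Example: a narrow two-bin histogram vs. a flat 256-bin one.
print(gini_index([90, 10] + [0] * 254))   # ~0.18, narrow colour distribution
print(gini_index([1] * 256))              # ~0.996, wide colour distribution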
A summary of the component-wise BD-rate results for
all the proposed schemes and the related attention-based
Fig. 8. Comparison of logarithmic colour histograms for different sequences.
TABLE VI
BD-RATES (%) SORTED BY GINI INDEX

                        Scheme 1
Sequence          Y       Cb      Cr      Gini
Tango2           -0.46   -8.13   -3.13    0.63
MarketPlace      -0.59   -2.46   -3.06    0.77
FoodMarket4      -0.16   -1.60   -1.55    0.85
DaylightRoad2    -0.09   -5.74   -1.85    0.89
Campfire         -0.21    0.14   -0.88    0.98
ParkRunning3     -0.31   -0.73   -0.77    0.99
approach in [4] is shown in Table VII for all-intra conditions. Scheme 1 achieves average Y/Cb/Cr BD-rates of -0.25%/-2.38%/-1.80% compared with the anchor, suggesting that the proposed multi-model size-agnostic methodology can improve the coding performance of the related attention-based block-dependent models. Besides improving the coding performance, Scheme 1 significantly reduces the encoding (from 212% to 164%) and decoding (from 2163% to 1302%) times, demonstrating the positive effect of the inference simplification.

Finally, the proposed simplifications introduced in Scheme 2 and Scheme 3 further reduce the encoding and decoding times at the cost of a drop in coding performance. In particular, the simplified cross-component boundary branch introduced in Scheme 2 achieves average Y/Cb/Cr BD-rates of -0.13%/-1.56%/-1.63% and, compared to Scheme 1, reduces the encoding (from 164% to 146%) and decoding (from 1302% to 665%) times. Scheme 3 offers a lower reduction of encoding time (154%) than Scheme 2, but it achieves a higher reduction in decoding time (512%, compared to 665% for Scheme 2), although the integer approximations lower the performance, achieving average Y/Cb/Cr BD-rates of -0.16%/-1.72%/-1.38%.
As described in Section IV, the simplified schemes introduced here tackle the complexity reduction of Scheme 1 with two different methodologies. Scheme 2 proposes direct modifications of the original architecture which need to be retrained before being integrated in the prediction pipeline. Conversely, Scheme 3 directly simplifies the final prediction process by approximating the already trained weights from Scheme 1 with integer-precision arithmetic. Therefore, the simulation results suggest that the methodology in Scheme 3 is better at retaining the original performance, since a retraining process is not required. However, the highly reduced architecture in Scheme 2 is capable of approximating the performance of Scheme 3 and further reducing the decoder time.
Overall, the comparison results in Table VII demonstrate
that the proposed models offer various trade-offs between compression performance and complexity. While it has been shown
that the complexity can be significantly reduced, it is still not
negligible. Challenges for future work include integerisation
of the simplified scheme (Scheme 2) while preventing the
compression drop observed for Scheme 3. Recent approaches,
including a published one which focuses on intra prediction
[24], demonstrate that sophisticated integerisation approaches
can help retain compression performance of originally trained
models while enabling them to become significantly less com-
plex and thus be integrated into future video coding standards.
VI. CONCLUSION
This paper showcased the effectiveness of attention-based
architectures in performing chroma intra-prediction for video
coding. A novel size-agnostic multi-model and its corre-
sponding training methodology were proposed to reduce the
inference complexity of previous attention-based approaches.
Moreover, the proposed multi-model was proven to better
generalise to variable input sizes, outperforming state-of-the-
art attention-based models with a fixed and much simpler
architecture. Several simplifications were proposed to further
reduce the complexity of the original multi-model. First,
a framework for reducing the complexity of convolutional
operations was introduced and was able to derive an infer-
ence model with around 90% fewer parameters than its rela-
tive training version. Furthermore, sparse autoencoders were
applied to design a simplified cross-component processing
model capable of further reducing the coding complexity
of its preceding schemes. Finally, algorithmic insights were
proposed to approximate the multi-model schemes in integer-
precision arithmetic, which could lead to fast and hardware-
aware implementations of complex operations such as softmax
and Leaky ReLU activations.
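As a flavour of such hardware-aware approximations, the sketch below
uses a power-of-two negative slope for Leaky ReLU (so the multiply
becomes an arithmetic shift) and a base-2, lookup-table softmax in the
spirit of [26]; the bit-widths and table size are illustrative
assumptions rather than the values used in the proposed schemes:

  import numpy as np

  def leaky_relu_int(x, shift=3):
      """Integer Leaky ReLU: a negative slope of 2**-3 = 0.125 reduces
      the multiplication to an arithmetic right-shift."""
      return np.where(x >= 0, x, x >> shift)

  def softmax_int(logits, frac_bits=8):
      """Base-2 softmax: subtract the max, replace exp() with a lookup
      table of rounded 2**(x / 2**frac_bits) values, and normalise with
      integer division; outputs sum to roughly 2**frac_bits."""
      x = logits - logits.max()                  # x <= 0 after shifting
      lut = np.round(2.0 ** (np.arange(-1024, 1) / (1 << frac_bits))
                     * (1 << frac_bits)).astype(np.int64)
      num = lut[np.clip(x, -1024, 0) + 1024]     # ~2**x in fixed point
      return (num * (1 << frac_bits)) // num.sum()

  q8 = np.array([320, 256, 64], dtype=np.int64)  # 1.25, 1.0, 0.25 in Q8
  print(leaky_relu_int(np.array([-16, 8])), softmax_int(q8))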
The proposed schemes were integrated into the VVC anchor VTM-7.0,
signalling the prediction methodology as a new chroma intra-prediction
mode working in parallel with the traditional modes for predicting the
chroma component samples. Experimental results show the effectiveness
of the proposed methods, retaining the compression efficiency of
previously introduced neural network models while offering two
different directions for significantly reducing coding complexity,
translating into reduced encoding and decoding times. As future work,
we aim to implement a complete multi-model for all VVC block sizes, in
order to ensure full usage of the proposed approach, building on the
promising results shown in the constrained test conditions. Finally, an
improved approach for integer approximations may enable the fusion of
all proposed simplifications, leading to a fast and powerful
multi-model.

TABLE VII
BD-RATE (%) OF Y, Cb AND Cr FOR ALL PROPOSED SCHEMES AND [4] UNDER ALL-INTRA COMMON TEST CONDITIONS

                Class A1               Class A2               Class B                Class C
              Y      Cb     Cr      Y      Cb     Cr      Y      Cb     Cr      Y      Cb     Cr
Scheme 1     -0.28  -3.20  -1.85   -0.25  -3.11  -1.54   -0.26  -2.28  -2.33   -0.30  -1.92  -1.57
Scheme 2     -0.08  -1.24  -1.26   -0.12  -1.59  -1.31   -0.15  -1.80  -2.21   -0.20  -1.41  -1.62
Scheme 3     -0.19  -2.25  -1.56   -0.13  -2.44  -1.12   -0.16  -1.78  -2.05   -0.20  -1.44  -1.29
Anchor + [4] -0.26  -2.17  -1.96   -0.22  -2.37  -1.64   -0.23  -2.00  -2.17   -0.26  -1.64  -1.41

                Class D                Class E                Class F                Overall           EncT[%]  DecT[%]
              Y      Cb     Cr      Y      Cb     Cr      Y      Cb     Cr      Y      Cb     Cr
Scheme 1     -0.29  -1.70  -1.77   -0.13  -1.59  -1.45   -0.50  -1.58  -1.99   -0.25  -2.38  -1.80    164     1302
Scheme 2     -0.18  -1.42  -1.73   -0.08  -1.67  -1.40   -0.34  -1.50  -1.90   -0.13  -1.56  -1.63    146      665
Scheme 3     -0.20  -1.64  -1.41   -0.07  -0.75  -0.46   -0.37  -1.24  -1.43   -0.16  -1.72  -1.38    154      512
Anchor + [4] -0.25  -1.55  -1.67   -0.03  -1.35  -1.77   -0.44  -1.30  -1.55   -0.21  -1.90  -1.81    212     2163
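To make the parallel mode signalling more concrete, the following is a
highly simplified, hypothetical sketch of an encoder-side chroma mode
decision that includes the neural network predictor; the names
(choose_chroma_mode, rd_cost, NN_CHROMA_MODE and the mode objects) are
illustrative and do not correspond to actual VTM-7.0 functions:

  def choose_chroma_mode(block, reco_luma, boundary_refs,
                         traditional_modes, nn_model, rd_cost):
      best_mode, best_cost = None, float("inf")
      # Traditional chroma intra modes (planar, DC, angular, CCLM, ...).
      for mode in traditional_modes:
          cost = rd_cost(block, mode.predict(boundary_refs))
          if cost < best_cost:
              best_mode, best_cost = mode, cost
      # The NN mode competes in parallel and, if selected, is signalled
      # to the decoder with its own mode index.
      nn_pred = nn_model.predict(reco_luma, boundary_refs)
      if rd_cost(block, nn_pred) < best_cost:
          best_mode = "NN_CHROMA_MODE"
      return best_mode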
REFERENCES
[1] B. Bross, J. Chen, and S. Liu, “Versatile Video Coding (VVC) draft 7,”
Document JVET-P2001, Geneva, Switzerland, October 2019.
[2] Y. Li, L. Li, Z. Li, J. Yang, N. Xu, D. Liu, and H. Li, “A hybrid neural
network for chroma intra prediction,” in 2018 25th IEEE International
Conference on Image Processing (ICIP) . IEEE, 2018, pp. 1797–1801.
[3] J. Pfaff, P. Helle, D. Maniry, S. Kaltenstadler, B. Stallenberger,
P. Merkle, M. Siekmann, H. Schwarz, D. Marpe, and T. Wiegand, “Intra
prediction modes based on neural networks,” Document JVET-J0037-
v2, Joint Video Exploration Team of ITU-T VCEG and ISO/IEC MPEG ,
2018.
[4] M. Górriz, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak,
“Chroma intra prediction with attention-based CNN architectures,” arXiv
preprint arXiv:2006.15349, accepted for publication at IEEE ICIP,
October 2020.
[5] L. Murn, S. Blasi, A. F. Smeaton, N. E. O’Connor, and M. Mrak, “Inter-
preting CNN for low complexity learned sub-pixel motion compensation
in video coding,” arXiv preprint arXiv:2006.06392 , 2020.
[6] P. Helle, J. Pfaff, M. Schäfer, R. Rischke, H. Schwarz, D. Marpe,
and T. Wiegand, “Intra picture prediction for video coding with neural
networks,” in 2019 Data Compression Conference (DCC) . IEEE, 2019,
pp. 448–457.
[7] X. Zhao, J. Chen, A. Said, V. Seregin, H. E. Egilmez, and M. Kar-
czewicz, “NSST: Non-separable secondary transforms for next generation
video coding,” in 2016 Picture Coding Symposium (PCS) . IEEE, 2016,
pp. 1–5.
[8] K. Zhang, J. Chen, L. Zhang, X. Li, and M. Karczewicz, “Enhanced
cross-component linear model for chroma intra-prediction in video
coding,” IEEE Transactions on Image Processing , vol. 27, no. 8, pp.
3983–3997, 2018.
[9] T. Nguyen, A. Khairat, and D. Marpe, “Adaptive inter-plane prediction
for RGB content,” Document JCTVC-M0230 , Incheon, April 2013.
[10] M. Siekmann, A. Khairat, T. Nguyen, D. Marpe, and T. Wiegand,
“Extended cross-component prediction in HEVC,” APSIPA Transactions
on Signal and Information Processing, vol. 6, 2017.
[11] G. Bjontegaard, “Calculation of average PSNR differences between RD-
curves,” Document VCEG-M33, 2001.
[12] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances
in neural information processing systems , 2017, pp. 5998–6008.
[13] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and
Y. Bengio, “A structured self-attentive sentence embedding,” arXiv
preprint arXiv:1703.03130 , 2017.
[14] A. P. Parikh, O. Täckström, D. Das, and J. Uszkoreit, “A decompos-
able attention model for natural language inference,” arXiv preprint
arXiv:1606.01933 , 2016.
[15] J. Cheng, L. Dong, and M. Lapata, “Long short-term memory-networks
for machine reading,” arXiv preprint arXiv:1601.06733, 2016.
[16] Y. He, X. Zhang, and J. Sun, “Channel pruning for accelerating
very deep neural networks,” in Proceedings of the IEEE International
Conference on Computer Vision , 2017, pp. 1389–1397.
[17] Z. Zhuang, M. Tan, B. Zhuang, J. Liu, Y. Guo, Q. Wu, J. Huang,
and J. Zhu, “Discrimination-aware channel pruning for deep neural
networks,” in Advances in Neural Information Processing Systems , 2018,
pp. 875–886.
[18] T.-W. Chin, R. Ding, C. Zhang, and D. Marculescu, “Towards efficient
model compression via learned global ranking,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition ,
2020, pp. 1518–1528.
[19] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam,
and D. Kalenichenko, “Quantization and training of neural networks
for efficient integer-arithmetic-only inference,” in Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition , 2018,
pp. 2704–2713.
[20] Y. Cai, Z. Yao, Z. Dong, A. Gholami, M. W. Mahoney, and K. Keutzer,
“ZeroQ: A novel zero shot quantization framework,” in Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
2020, pp. 13169–13178.
[21] S. Xu, H. Li, B. Zhuang, J. Liu, J. Cao, C. Liang, and M. Tan,
“Generative low-bitwidth data free quantization,” arXiv preprint
arXiv:2003.03603 , 2020.
[22] M. Courbariaux, Y. Bengio, and J.-P. David, “Training deep neu-
ral networks with low precision multiplications,” arXiv preprint
arXiv:1412.7024 , 2014.
[23] J. Ballé, N. Johnston, and D. Minnen, “Integer networks for data
compression with latent-variable models,” in International Conference
on Learning Representations , 2018.
[24] M. Schäfer, B. Stallenberger, J. Pfaff, P. Helle, H. Schwarz, D. Marpe,
and T. Wiegand, “Efficient fixed-point implementation of matrix-based
intra prediction,” in 2020 IEEE International Conference on Image
Processing (ICIP) . IEEE, 2020, pp. 3364–3368.
[25] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A
review and new perspectives,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[26] X. Geng, J. Lin, B. Zhao, A. Kong, M. M. S. Aly, and V. Chandrasekhar,
“Hardware-aware softmax approximation for deep neural networks,” in
Asian Conference on Computer Vision . Springer, 2018, pp. 107–122.
[27] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang,
“NTIRE 2017 challenge on single image super-resolution: Methods and
results,” in Proceedings of the IEEE conference on computer vision and
pattern recognition workshops , 2017, pp. 114–125.
[28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980 , 2014.
[29] J. Chen, Y. Ye, and S. Kim, “Algorithm description for Versatile Video
Coding and Test Model 7 (VTM 7),” Document JVET-P2002, Geneva,
October 2019.
[30] J. Boyce, K. Suehring, X. Li, and V. Seregin, “JVET common test
conditions and software reference configurations,” Document JVET-
J1010 , Ljubljana, Slovenia, July 2018.
[31] M. G. Blanch, M. Mrak, A. F. Smeaton, and N. E. O’Connor, “End-to-
end conditional GAN-based architectures for image colourisation,” in 2019
IEEE 21st International Workshop on Multimedia Signal Processing
(MMSP) . IEEE, 2019, pp. 1–6.
[32] R. Davidson, “Reliable inference for the Gini index,” Journal of Econo-
metrics, vol. 150, no. 1, pp. 30–40, 2009.