JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, OCTOBER 2020

Attention-Based Neural Networks for Chroma Intra Prediction in Video Coding

Marc Górriz Blanch, Student Member, IEEE, Saverio Blasi, Alan F. Smeaton, Fellow, IEEE, Noel E. O'Connor, Member, IEEE, and Marta Mrak, Senior Member, IEEE

Abstract—Neural networks can be successfully used to improve several modules of advanced video coding schemes. In particular, compression of colour components was shown to greatly benefit from the usage of machine learning models, thanks to the design of appropriate attention-based architectures that allow the prediction to exploit specific samples in the reference region. However, such architectures tend to be complex and computationally intense, and may be difficult to deploy in a practical video coding pipeline. This work focuses on reducing the complexity of such methodologies, to design a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. A novel size-agnostic multi-model approach is proposed to reduce the complexity of the inference process. The resulting simplified architecture is still capable of outperforming state-of-the-art methods. Moreover, a collection of simplifications is presented in this paper, to further reduce the complexity overhead of the proposed prediction architecture. Thanks to these simplifications, a reduction in the number of parameters of around 90% is achieved with respect to the original attention-based methodologies. Simplifications include a framework for reducing the overhead of the convolutional operations, a simplified cross-component processing model integrated into the original architecture, and a methodology to perform integer-precision approximations with the aim to obtain fast and hardware-aware implementations.
The proposed schemes are integrated into the Versatile Video Coding (VVC) prediction pipeline, retaining the compression efficiency of state-of-the-art chroma intra-prediction methods based on neural networks, while offering different directions for significantly reducing coding complexity.

Index Terms—Chroma intra prediction, convolutional neural networks, attention algorithms, multi-model architectures, complexity reduction, video coding standards.

Manuscript submitted July 1, 2020. The work described in this paper has been conducted within the project JOLT funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 765140. M. Górriz Blanch, S. Blasi and M. Mrak are with BBC Research & Development, The Lighthouse, White City Place, 201 Wood Lane, London, UK (e-mail: marc.gorrizblanch@bbc.co.uk, saverio.blasi@bbc.co.uk, marta.mrak@bbc.co.uk). A. F. Smeaton and N. E. O'Connor are with Dublin City University, Glasnevin, Dublin 9, Ireland (e-mail: alan.smeaton@dcu.ie, noel.oconnor@dcu.ie).

Fig. 1. Visualisation of the attentive prediction process. For each reference sample 0-16, the attention module generates its contribution to the prediction of individual pixels from a target 4×4 block.

I. INTRODUCTION

Efficient video compression has become an essential component of multimedia streaming. The convergence of digital entertainment, followed by the growth of web services such as video conferencing, cloud gaming and real-time high-quality video streaming, prompted the development of advanced video coding technologies capable of tackling the increasing demand for higher quality video content and its consumption on multiple devices. New compression techniques enable a compact representation of video data by identifying and removing spatial-temporal and statistical redundancies within the signal.
This results in smaller bitstreams, enabling more efficient storage and transmission as well as distribution of content at higher quality, requiring reduced resources.

arXiv:2102.04993v1 [eess.IV] 9 Feb 2021

Advanced video compression algorithms are often complex and computationally intense, significantly increasing the encoding and decoding time. Therefore, despite bringing high coding gains, their potential for application in practice is limited. Among the current state-of-the-art solutions, the next-generation Versatile Video Coding standard [1] (referred to as VVC in the rest of this paper) targets between 30-50% better compression rates for the same perceptual quality, supporting resolutions from 4K to 16K as well as 360° videos. One fundamental component of hybrid video coding schemes, intra prediction, exploits spatial redundancies within a frame by predicting samples of the current block from already reconstructed samples in its close surroundings. VVC allows a large number of possible intra prediction modes to be used on the luma component, at the cost of a considerable amount of signalling data. Conversely, to limit the impact of mode signalling, chroma components employ a reduced set of modes [1].

In addition to traditional modes, more recent research introduced schemes which further exploit cross-component correlations between the luma and chroma components. Such correlations motivated the development of the Cross-Component Linear Model (CCLM, or simply LM in this paper) intra modes. When using CCLM, the chroma components are predicted from already reconstructed luma samples using a linear model. Nonetheless, the limitation of such simple linear predictions comes from their high dependency on the selection of predefined reference samples.
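As a concrete illustration of the linear-model idea, the sketch below fits a model chroma ≈ α·luma + β on boundary reference pairs and applies it to a downsampled reconstructed luma block. This is a minimal sketch under simplified assumptions: the function and variable names are illustrative, and an ordinary least-squares fit is used here, whereas VVC derives the parameters from selected minimum/maximum reference pairs.

```python
import numpy as np

def cclm_predict(rec_luma_ds, luma_ref, chroma_ref):
    """Minimal CCLM-style sketch: fit chroma ~ alpha * luma + beta on
    boundary reference pairs (least squares here; VVC instead derives
    the parameters from selected min/max reference samples), then
    apply the model to the downsampled reconstructed luma block."""
    alpha, beta = np.polyfit(luma_ref, chroma_ref, 1)
    return alpha * rec_luma_ds + beta

# Boundary pairs following chroma = 2 * luma + 1 recover that exact model.
luma_ref = np.array([0.1, 0.2, 0.3, 0.4])
chroma_ref = 2 * luma_ref + 1
block = np.array([[0.15, 0.25], [0.35, 0.45]])
pred = cclm_predict(block, luma_ref, chroma_ref)
```

Because the parameters are derived from reference samples available at both encoder and decoder, no explicit signalling of α and β is needed, which is the appeal of this family of modes.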
Improved performance can be achieved using more sophisticated Machine Learning (ML) mechanisms [2], [3], which are able to derive more complex representations of the reference data and hence boost the prediction capabilities. Methods based on Convolutional Neural Networks (CNNs) [2], [4] provided significant improvements, at the cost of two main drawbacks: the associated increase in system complexity and the tendency to disregard the location of individual reference samples. Related works deployed complex neural networks (NNs) by means of model-based interpretability [5]. For instance, VVC recently adopted simplified NN-based methods such as Matrix Intra Prediction (MIP) modes [6] and the Low-Frequency Non-Separable Transform (LFNST) [7].

For the particular task of block-based intra-prediction, the usage of complex NN models can be counterproductive if there is no control over the relative position of the reference samples. When using fully-connected layers, all input samples contribute to all output positions, and after the consecutive application of several hidden layers, the location of each input sample is lost. This behaviour clearly runs counter to the design of traditional approaches, in which predefined directional modes carefully specify which boundary locations contribute to each prediction position. A novel ML-based cross-component intra-prediction method is proposed in [4], introducing a new attention module capable of tracking the contribution of each neighbouring reference sample when computing the prediction of each chroma pixel, as shown in Figure 1. As a result, the proposed scheme better captures the relationship between the luma and chroma components, resulting in more accurate prediction samples. However, such NN-based methods significantly increase the codec complexity, increasing the encoder and decoder times by up to 120% and 947%, respectively.
This paper focuses on complexity reduction in video coding, with the aim to derive a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. Understanding and distilling knowledge from the networks enables the implementation of less complex algorithms which achieve similar performance to the original models. Moreover, a novel training methodology is proposed in order to design a block-independent multi-model which outperforms the state-of-the-art attention-based architectures and reduces inference complexity. The use of variable block sizes during training helps the model to better generalise on content variety while ensuring higher precision on predicting large chroma blocks. The main contributions of this work are the following:

- A competitive block-independent attention-based multi-model and training methodology;
- A framework for complexity reduction of the convolutional operations;
- A simplified cross-component processing model using sparse auto-encoders;
- A fast and cost-effective attention-based multi-model with integer-precision approximations.

This paper is organised as follows: Section II provides a brief overview of the related work, Section III introduces the attention-based methodology in detail and establishes the mathematical notation for the rest of the paper, Section IV presents the proposed simplifications and Section V shows experimental results, with conclusions drawn in Section VI.

II. BACKGROUND

Colour images are typically represented by three colour components (e.g. RGB, YCbCr). The YCbCr colour scheme is often adopted by digital image and video coding standards (such as JPEG, MPEG-1/2/4 and H.261/3/4) due to its ability to compact the signal energy and to reduce the total required bandwidth. Moreover, chrominance components are often subsampled by a factor of two to conform to the YCbCr 4:2:0 chroma format, in which the luminance signal contains most of the spatial information.
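As a toy illustration of the factor-of-two chroma subsampling, the sketch below halves a plane in each dimension by simple 2×2 averaging. This is only an assumption-laden sketch: actual codecs specify particular downsampling filters for the 4:2:0 format rather than a plain average.

```python
import numpy as np

def downsample_420(plane: np.ndarray) -> np.ndarray:
    """Halve a plane in each dimension by 2x2 averaging.
    Illustrative only: codecs define specific filters for
    producing 4:2:0 chroma planes."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

plane = np.arange(16.0).reshape(4, 4)
ds = downsample_420(plane)   # a 2x2 plane of 2x2-block averages
```

Each output sample thus summarises a 2×2 neighbourhood of the full-resolution plane, which is why the luminance signal retains most of the spatial detail.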
Nevertheless, cross-component redundancies can be further exploited by reusing information from already coded components to compress another component. In the case of YCbCr, the Cross-Component Linear Model (CCLM) [8] uses a linear model to predict the chroma signal from a subsampled version of the already reconstructed luma block signal. The model parameters are derived at both the encoder and decoder sides without needing explicit signalling in the bitstream. Another example is Cross-Component Prediction (CCP) [9], which resides at the transform unit (TU) level regardless of the input colour space. In the case of YCbCr, a subsampled and dequantised luma transform block (TB) is used to modify the chroma TB at the same spatial location, based on a context parameter signalled in the bitstream. An extension of this concept modifies one chroma component using the residual signal of the other one [10]. Such methodologies significantly improved the coding efficiency by further exploiting the cross-component correlations within the chroma components.

In parallel, the recent success of deep learning applications in computer vision and image processing influenced the design of novel video compression algorithms. In particular, in the context of intra-prediction, a new algorithm [3] was introduced based on fully-connected layers and CNNs to map the prediction of block positions from the already reconstructed neighbouring samples, achieving BD-rate (Bjontegaard Delta rate) [11] savings of up to 3.0% on average over HEVC, for an approximately 200% increase in decoding time.

Fig. 2. Baseline attention-based architecture for chroma intra prediction presented in [4] and described in Section III.

The successful integration of CNN-based methods for luma intra-prediction into existing codec architectures has motivated research into alternative methods for chroma prediction, exploiting cross-component
redundancies similar to the aforementioned LM methods. A novel hybrid neural network for chroma intra prediction was recently introduced in [2]. A first CNN was designed to extract features from reconstructed luma samples. This was combined with another fully-connected network used to extract cross-component correlations between neighbouring luma and chroma samples. The resulting architecture uses complex non-linear mapping for end-to-end prediction of chroma channels. However, this is achieved at the cost of disregarding the spatial location of the boundary reference samples and a significant increase in the complexity of the prediction process. As shown in [4], after a consecutive application of the fully-connected layers in [2], the location of each input boundary reference sample is lost. Therefore, the fully-convolutional architecture in [4] better matches the design of the directional VVC modes and is able to provide significantly better performance. The use of attention models enables effective utilisation of the individual spatial locations of the reference samples [4].

The concept of "attention-based" learning is a well-known idea used in deep learning frameworks to improve the performance of trained networks in complex prediction tasks [12], [13], [14]. In particular, self-attention is used to assess the impact of particular input variables on the outputs, whereby the prediction is computed focusing on the most relevant elements of the same sequence [15]. The novel attention-based architecture introduced in [4] reports average BD-rate reductions of -0.22%, -1.84% and -1.78% for the Y, Cb and Cr components, respectively, although it significantly impacts the encoder and decoder time. One common aspect across all related work is that whilst the result is an improvement in compression, this comes at the expense of increased complexity of the encoder and decoder.
In order to address the complexity challenge, this paper aims to design a set of simplified attention-based architectures for performing chroma intra-prediction faster and more efficiently. Recent works addressed complexity reduction in neural networks using methods such as channel pruning [16], [17], [18] and quantisation [19], [20], [21]. In particular for video compression, many works used integer arithmetic in order to efficiently implement trained neural networks on different hardware platforms. For example, the work in [22] proposes a training methodology to handle low-precision multiplications, proving that very low precision is sufficient not just for running trained networks but also for training them. Similarly, the work in [23] considers the problem of using variational latent-variable models for data compression and proposes integer networks as a universal solution for range coding as an entropy coding technique, demonstrating that such models enable reliable cross-platform encoding and decoding of images with variational models. Moreover, in order to ensure deterministic implementations on hardware platforms, they approximate non-linearities using lookup tables. Finally, an efficient implementation of matrix-based intra prediction is proposed in [24], where a performance analysis evaluates the challenges of deploying models with integer arithmetic in video coding standards. Inspired by this knowledge, this paper develops a fast and cost-effective implementation of the proposed attention-based architecture using integer-precision approximations. As shown in Section V-D, while such approximations can significantly reduce the complexity, the associated drop in performance is still not negligible.

III. ATTENTION-BASED ARCHITECTURES

This section describes in detail the attention-based approach proposed in [4] (Figure 2), which will be the baseline for the methodology presented in this paper.
Fig. 3. Proposed multi-model attention-based architectures with the integration of the simplifications introduced in this paper. More details about the model's hyperparameters and a description of the referred schemes can be found in Section V.

The section also provides the mathematical notation used for the rest of this paper. Without loss of generality, only square blocks of pixels are considered in this work. After intra-prediction and reconstruction of a luma block in the video compression chain, luma samples can be used for the prediction of co-located chroma components. In this discussion, the size of a luma block is assumed to be (downsampled to) $N \times N$ samples, which is the size of the co-located chroma block. This may require the usage of conventional downsampling operations, such as in the case of using chroma sub-sampled picture formats such as 4:2:0. Note that a video coding standard treats all image samples as unsigned integer values within a certain precision range based on the internal bit depth. However, in order to utilise common deep learning frameworks, all samples are converted to floating point and normalised to values within the range $[0, 1]$.

For the chroma prediction process, the reference samples used include the co-located luma block $X_0 \in \mathbb{R}^{N \times N}$, and the array of reference samples $B_c \in \mathbb{R}^{b}$, $b = 4N + 1$, from the left and from above the current block (Figure 1), where $c = Y$, $Cb$ or $Cr$ refers to the three colour components. $B_c$ is constructed from samples on the left boundary (starting from the bottom-most sample), then the corner is added, and finally the samples on top are added (starting from the left-most sample). In case some reference samples are not available, these are padded using a predefined value, following the standard approach defined in VVC. Finally, $S_0 \in \mathbb{R}^{3 \times b}$ is the cross-component volume obtained by concatenating the three reference arrays $B_Y$, $B_{Cb}$ and $B_{Cr}$.
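Under the notation above ($2N$ reference samples on each side plus the corner, so $b = 4N + 1$), the construction of one reference array $B_c$ could be sketched as follows. This is a sketch under stated assumptions: the frame layout, availability checks and pad value are illustrative, and VVC's actual availability and padding rules are more involved.

```python
import numpy as np

def build_boundary(frame, y, x, N, pad_value=0.5):
    """Assemble the reference array B for an N x N block whose
    top-left sample is at (y, x): 2N left samples bottom-up, the
    corner, then 2N top samples left-to-right (b = 4N + 1).
    Samples outside the frame take a predefined pad value
    (illustrative; VVC's padding rules are more involved)."""
    b = 4 * N + 1
    B = np.full(b, pad_value, dtype=float)
    H, W = frame.shape
    # Left column, bottom-most sample first.
    for k in range(2 * N):
        r = y + 2 * N - 1 - k
        if 0 <= r < H and x - 1 >= 0:
            B[k] = frame[r, x - 1]
    # Corner sample at index 2N.
    if y - 1 >= 0 and x - 1 >= 0:
        B[2 * N] = frame[y - 1, x - 1]
    # Top row, left-most sample first.
    for k in range(2 * N):
        c = x + k
        if y - 1 >= 0 and c < W:
            B[2 * N + 1 + k] = frame[y - 1, c]
    return B
```

Stacking the three arrays $B_Y$, $B_{Cb}$ and $B_{Cr}$ row-wise (e.g. `np.stack`) then yields the $3 \times b$ cross-component volume $S_0$.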
Similar to the model in [2], the attention-based architecture adopts a scheme based on three network branches that are combined to produce prediction samples, as illustrated in Figure 2. The first two branches work concurrently to extract features from the input reference samples.

The first branch (referred to as the cross-component boundary branch) extracts cross-component features from $S_0 \in \mathbb{R}^{3 \times b}$ by applying $I$ consecutive $D_i$-dimensional $1 \times 1$ convolutional layers to obtain the $S_i \in \mathbb{R}^{D_i \times b}$ output feature maps, where $i = 1, 2, \dots, I$. By applying $1 \times 1$ convolutions, the boundary input dimensions are preserved, resulting in a $D_i$-dimensional vector of cross-component information for each boundary location. The resulting volumes are activated using a Rectified Linear Unit (ReLU) non-linear function.

In parallel, the second branch (referred to as the luma convolutional branch) extracts spatial patterns over the co-located reconstructed luma block $X_0$ by applying convolutional operations. The luma convolutional branch is defined by $J$ consecutive $C_j$-dimensional $3 \times 3$ convolutional layers with a stride of 1, to obtain the $X_j \in \mathbb{R}^{C_j \times N^2}$ feature maps from the $N^2$ input samples, where $j = 1, 2, \dots, J$. Similar to the cross-component boundary branch, in this branch a bias and a ReLU activation are applied within each convolutional layer.

The feature maps ($S_I$ and $X_J$) from both branches are each convolved using a $1 \times 1$ kernel, to project them into two corresponding reduced feature spaces. Specifically, $S_I$ is convolved with a filter $W_F \in \mathbb{R}^{h \times D}$ to obtain the $h$-dimensional feature matrix $F$. Similarly, $X_J$ is convolved with a filter $W_G \in \mathbb{R}^{h \times C}$ to obtain the $h$-dimensional feature matrix $G$. The two matrices are multiplied together to obtain the pre-attention map $M = G^T F$. Finally, the attention matrix $A \in \mathbb{R}^{N^2 \times b}$ is obtained by applying a softmax operation to each element of $M$, to generate the probability of each boundary location being able to predict a sample location in the block.
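A minimal numerical sketch of this attention computation follows, with shapes matching the notation above. The random weights stand in for the learned $1 \times 1$ convolutions, which reduce to matrix products here; this illustrates the data flow, not the trained model.

```python
import numpy as np

def attention_map(S_I, X_J, W_F, W_G, T=1.0):
    """Compute the attention matrix A from boundary features S_I
    (D x b) and luma features X_J (C x N^2): project both into an
    h-dimensional space (1x1 convolutions become matrix products),
    form the pre-attention map M = G^T F, and apply a softmax with
    temperature T over the boundary dimension."""
    F = W_F @ S_I                # h x b
    G = W_G @ X_J                # h x N^2
    M = G.T @ F                  # N^2 x b, pre-attention map
    E = np.exp((M - M.max(axis=1, keepdims=True)) / T)  # stable softmax
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
D, C, h, N = 32, 32, 16, 4
b = 4 * N + 1
A = attention_map(rng.normal(size=(D, b)), rng.normal(size=(C, N * N)),
                  rng.normal(size=(h, D)), rng.normal(size=(h, C)))
```

Each row of $A$ sums to one, so it can be read as a probability distribution over the $b$ boundary locations for one of the $N^2$ block positions.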
Each value $\alpha_{j,i}$ in $A$ is obtained as:

$$\alpha_{j,i} = \frac{\exp(m_{i,j}/T)}{\sum_{n=0}^{b-1} \exp(m_{n,j}/T)}, \qquad (1)$$

where $j = 0, \dots, N^2 - 1$ represents the sample location in the predicted block, $i = 0, \dots, b - 1$ represents a reference sample location, and $T$ is the softmax temperature parameter controlling the smoothness of the generated probabilities, with 0
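The effect of the temperature $T$ in Eq. (1) can be checked numerically: dividing the scores by a smaller $T$ sharpens the resulting distribution towards the largest score, while a larger $T$ flattens it. The snippet below is a generic sketch of that behaviour, not the codec implementation.

```python
import numpy as np

def softmax_T(scores, T):
    """Softmax with temperature T; the max-shift keeps exp() stable."""
    e = np.exp((scores - scores.max()) / T)
    return e / e.sum()

scores = np.array([1.0, 2.0, 3.0])
sharp = softmax_T(scores, 0.5)   # low T: close to a one-hot selection
flat = softmax_T(scores, 2.0)    # high T: closer to uniform weighting
```

In the prediction context, a low temperature therefore makes each chroma sample rely on a few dominant boundary locations, whereas a high temperature blends contributions from many of them.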