End-to-end optimized image compression with competition of prior distributions

Benoit Brummer, intoPIX, Mont-Saint-Guibert, Belgium, b.brummer@intopix.com
Christophe De Vleeschouwer, Université catholique de Louvain, Louvain-la-Neuve, Belgium, christophe.devleeschouwer@uclouvain.be

Abstract

Convolutional autoencoders are now at the forefront of image compression research. To improve their entropy coding, encoder output is typically analyzed with a second autoencoder to generate per-variable parametrized prior probability distributions. We instead propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions working as a competition of experts. Trained prior distributions are stored in a static table of cumulative distribution functions. During inference, this table is used by an entropy coder as a look-up table to determine the best prior for each spatial location. Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior, with only a fraction of its entropy coding and decoding complexity.

1. Introduction

Image compression typically consists of a transformation step (including quantization) and an entropy coding step that attempts to capture the probability distribution of a transformed context to generate a smaller compressed bitstream. Entropy coding ranges in complexity from simple non-adaptive encoders [26, 24] to complex arithmetic coders with adaptive context models [15, 23]. The entropy coding strategy has been revised to address the specificities of learned compression. More specifically, for recent works that use a convolutional autoencoder [12] (AE) as the all-inclusive transformation and quantization step, the entropy coder relies on a cumulative probability model (CPM) trained alongside the AE [5]. This model estimates the cumulative distribution function (CDF) of each channel coming out of the AE and passes these learned CDFs to an entropy coder such as range encoding [16].

Such a simple method outperforms traditional codecs like JPEG 2000, but work is still needed to surpass complex codecs like BPG. Johannes Ballé et al. (2018) [6] proposed analyzing the output of the convolutional encoder with another AE to generate a floating-point scale parameter that differs for every variable to be encoded by the entropy coder, thus for every location in every channel. This method has been widely used in subsequent works, but it introduces substantial complexity in the entropy coding step because a different CDF is needed to encode every variable in the latent representation of the image, whereas the single-AE method by Ballé et al. (2017) [5] reuses the same CDF table for every latent spatial location.

Our work uses the principle of competition of experts [22, 14] to get the best of both worlds. Multiple prior distributions compete for the lowest bit cost on every spatial location in the quantized latent representation. During training, only the best prior distribution is updated at each spatial location, further improving the prior distributions' specialization. CDF tables are fixed at the end of training. Hence, at test time, the CDF table resulting in the lowest bit cost is assigned to each spatial location of the latent representation.
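To make this per-location selection concrete, the following is a minimal NumPy sketch of the inference-time look-up, not the authors' code; the tensor shapes, variable names, and the clipping constant are assumptions made for illustration. Given a quantized latent tensor and a stack of static CDF tables (one table per candidate prior, one CDF per channel), it estimates the bit cost of each spatial location under every prior and keeps the cheapest one.

    import numpy as np

    def select_priors(y_hat, cdf, v_min):
        """Pick the cheapest CDF table per latent spatial location.

        y_hat: integer quantized latents of shape (C, H, W), values in [v_min, v_max].
        cdf:   static tables of shape (N_CDF, C, V + 1), where V is the number of
               allowable symbol values and cdf[p, c, j] = P(symbol - v_min < j).
        Returns the winning prior index and its estimated bit cost, both (H, W).
        """
        n_cdf, C, _ = cdf.shape
        H, W = y_hat.shape[1:]
        sym = (y_hat - v_min).reshape(C, -1)                    # bin index of each symbol
        costs = []
        for p in range(n_cdf):
            hi = np.take_along_axis(cdf[p], sym + 1, axis=1)    # CDF(v + 1)
            lo = np.take_along_axis(cdf[p], sym, axis=1)        # CDF(v)
            prob = np.clip(hi - lo, 1e-9, 1.0)                  # P(v), guarded against zero
            costs.append(-np.log2(prob).sum(axis=0))            # bits, summed over channels
        costs = np.stack(costs)                                 # (N_CDF, H*W)
        idx = costs.argmin(axis=0)                              # winner per spatial location
        return idx.reshape(H, W), costs.min(axis=0).reshape(H, W)

The per-location indices would then be written to the bitstream alongside the entropy-coded latents, so the decoder knows which table to use for each location.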
The resulting rate-distortion (RD) performance is comparable to that obtained with a parametrized distribution [6], yet the entropy coding process is greatly simplified since it does not require a per-variable CDF and can build on look-up tables (LUT) rather than the computation of analytical distributions.

2. Background

Entropy coders such as range encoding [16] require CDFs where, for each variable to be encoded, the probability that a smaller or equal value appears is defined for every allowable value in the latent representation space. Johannes Ballé et al.'s seminal work (2017) [5] consists of an AE, computing a latent image representation consisting of C_L channels of size H_L × W_L, and a CPM, consisting of one CDF per latent output channel, which are trained jointly. The latent representation coming out of the encoder is quantized, then passed through the CPM. The CPM defines, in a parametrized and differentiable manner, a CDF per channel. At the end of training, the CPM is evaluated at every possible value to generate the static CDF table. The CDF table is not differentiable, but going from a differentiable CPM to a static CDF table speeds up the encoding and decoding process. The CDF table is used to compress latent representations with an entropy coder; the approximate bit cost of a symbol is the negative binary logarithm of its probability.

Ballé et al. (2018) improved the RD efficiency by replacing the unique CDF table with a Gaussian distribution parametrized with a hyperprior (HP) sub-network [6]. The HP generates a scale parameter, and in turn a different CDF, for every variable to be encoded. Thus, complexity is added by exploiting the parametrized Gaussian prior during the entropy coding process, since a different CDF is required for each variable in the channel and spatial dimensions.

Minnen et al. proposed a scheme where one of multiple probability distributions is chosen to adapt the entropy model locally [21]. However, these distributions are defined a posteriori, given the encoder trained with a global entropy model. Thus, [21] does not perform as well as the HP scheme [6], per [19, Fig. 2a]. In contrast, the present method jointly optimizes the local entropy models and the AE in an end-to-end fashion, which results in greater performance. Minnen et al. [19] later proposed to improve RD with the use of an autoregressive sequential context model. However, as highlighted in [13], this is obtained at the cost of a runtime increase of several orders of magnitude. Subsequent works have attempted to reduce the complexity of the neural network architecture [10] and to bridge the RD gap with Minnen's work [13], but entropy coding complexity has remained largely unaddressed and has instead evolved towards increased complexity [19, 7, 20] compared to [6].

The present work builds on Ballé et al. (2017) [5] and achieves the performance of Ballé et al. (2018) [6] without the complexity introduced by a per-variable parametrized probability distribution. We chose Ballé et al. (2017) as a baseline because it corresponds to the basic unit adopted as a common reference and starting point for most models proposed in the recent literature to improve compression quality [6, 19, 13, 20]. Due to its generic nature, our contribution remains relevant for the newer, often computationally more complex, incremental improvements on Ballé et al. (2017).
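As an illustration of the freezing step mentioned above, here is a hedged sketch of how a per-channel parametric CDF could be evaluated at every possible value and turned into a fixed-point look-up table for a range coder. The logistic parametrization, the parameter names, and the 16-bit precision are assumptions made for the example; the actual CPM of [5] is a more flexible learned univariate model.

    import numpy as np

    def build_cdf_table(mu, scale, v_min, v_max, precision=16):
        """Freeze a per-channel CDF into a static integer table.

        mu, scale: per-channel parameters, each of shape (C,).
        Returns a table of shape (C, v_max - v_min + 2) whose entry j is the
        cumulative probability of all symbols below v_min + j, in fixed point.
        """
        edges = np.arange(v_min, v_max + 2) - 0.5               # bin edges between symbols
        # Logistic CDF evaluated at every bin edge, for every channel.
        cdf = 1.0 / (1.0 + np.exp(-(edges[None, :] - mu[:, None]) / scale[:, None]))
        cdf = (cdf - cdf[:, :1]) / (cdf[:, -1:] - cdf[:, :1])   # renormalize to [0, 1]
        return np.round(cdf * (1 << precision)).astype(np.uint32)

A production implementation would additionally force the integer table to be strictly increasing, so that no symbol ends up with zero probability mass.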
3. Competition of prior distributions

Our proposed method introduces a competition of expert [22, 14] prior distributions: a single AE transforms the image, and a set of prior distributions is trained to model the CDF of the latent representation at each spatial location. For each latent spatial location, the CDF table that minimizes the bit cost is selected; that prior is either further optimized on the features it won (in training mode), or its index is stored for decoding (in inference mode). This scheme is illustrated in Figure 1, a set of 16 optimized CDF tables is shown in Figure 2, and three sample images segmented by "winning" CDF table are shown in Figure 3.

All prior distributions are estimated in parallel by considering N_CDF CDF tables and selecting, as a function of the encoded latent spatial location, the one that minimizes the entropy coder bitcount. The CDF table index is determined for each spatial location by evaluating each CDF table in inference. This can be done in a vectorized operation given sufficient memory. During training, the CPM is evaluated instead of the CDF tables, such that the probabilities are up to date and the model is differentiable, and the bit cost is returned as it contributes to the loss function. The cost of signalling the CDF table indices has been shown to be negligible due to the reasonably small number of priors, which in turn results from the fact that little gain in latent code entropy has been obtained by increasing the number of priors.

Figure 1. AE compression scheme with competition of prior distributions: the input x is encoded into latents y, quantized (noise at train time, rounding at test time) into ŷ, and for each latent spatial location (k, l) the entropy coder uses the CDF table of index i_{k,l} = argmin_p [bitCost(ŷ_{k,l}, CDF_p)]; the distortion (e.g., MSE, MS-SSIM, discriminator) and the bit cost are backpropagated. The AE architecture is detailed in [6, Fig. 4]. The indices i denote the indices of the CDF tables that minimize the bitcount for each latent spatial location. Loss = Distortion + bitCost.

Figure 2. We observe some diversity among the 16 cumulative distribution functions learned by a network trained with MSE loss and λ = 4096. Each box presents a CDF table and each colored line corresponds to the CDF of one of the 256 latent channels. The best fitting CDF table is selected for each latent spatial location.

Figure 3. Segmentation of three test images [1]: each distinct color represents one of 64 CDF tables used to encode a latent spatial location (a 16×16-pixel patch).

In all our experiments, the AE architecture follows the one in Ballé et al. (2018) [6], without the HP, since we found that the AE from [6] offers better RD than the one described in Ballé et al. (2017) [5], even with a single CDF table. A functional training loop is described in Algorithm 1.

Algorithm 1 Training loop
  y ← model.Encoder(x)
  ŷ ← quantize(y)
  x̂ ← clip(model.Decoder(ŷ), 0, 1)
  distortion ← visualLossFunction(x̂, x)
  for 0 ≤ k < H_L and 0 ≤ l < W_L do
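The body of the loop above is truncated in this excerpt. As a hedged, PyTorch-style sketch of the per-location competition it iterates over (the function name and tensor shapes are assumptions, and the CPM is assumed to return differentiable per-symbol probabilities for every candidate prior, as described in the text), the selection and its contribution to the loss could look as follows.

    import torch

    def competition_bit_cost(cpm_probs):
        """Per-location competition between candidate priors.

        cpm_probs: probabilities of the quantized symbols under each candidate
                   prior, shape (N_CDF, B, C, H, W), produced by the CPM.
        Returns the total bit cost of the winning priors and their indices (B, H, W).
        """
        bits = -torch.log2(cpm_probs.clamp_min(1e-9))       # per-symbol bit cost
        cost_per_prior = bits.sum(dim=2)                     # sum over channels -> (N_CDF, B, H, W)
        best_cost, best_idx = cost_per_prior.min(dim=0)      # winner per spatial location
        # Gradients only flow through the winning prior of each location, which is
        # what specializes the priors as competing experts during training.
        return best_cost.sum(), best_idx

The returned bit cost is added to the distortion term (Loss = Distortion + bitCost, as in Figure 1); the winning indices are only needed at test time, when they are stored in the bitstream for the decoder.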