|
A Deep Learning based No-reference Quality Assessment Model |
|
for UGC Videos |
|
Wei Sun |
|
Shanghai Jiao Tong University |
|
Shanghai, China |
|
[email protected]

Xiongkuo Min
|
Shanghai Jiao Tong University |
|
Shanghai, China |
|
[email protected] |
|
Wei Lu |
|
Shanghai Jiao Tong University |
|
Shanghai, China |
|
[email protected]

Guangtao Zhai∗
|
Shanghai Jiao Tong University |
|
Shanghai, China |
|
[email protected] |
|
ABSTRACT |
|
Quality assessment for User Generated Content (UGC) videos plays |
|
an important role in ensuring the viewing experience of end-users. |
|
Previous UGC video quality assessment (VQA) studies either use image recognition models or image quality assessment (IQA) models to extract frame-level features of UGC videos for quality regression, which is regarded as a sub-optimal solution because of the domain shift between these tasks and the UGC VQA
|
task. In this paper, we propose a very simple but effective UGC |
|
VQA model, which tries to address this problem by training an |
|
end-to-end spatial feature extraction network to directly learn the |
|
quality-aware spatial feature representation from raw pixels of the |
|
video frames. We also extract the motion features to measure the |
|
temporal-related distortions that the spatial features cannot model. |
|
The proposed model utilizes very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and therefore has low computational complexity. With the better quality-aware features, we only use a simple multilayer perceptron (MLP)
|
network to regress them into the chunk-level quality scores, and |
|
then the temporal average pooling strategy is adopted to obtain |
|
the video-level quality score. We further introduce a multi-scale |
|
quality fusion strategy to solve the problem of VQA across differ- |
|
ent spatial resolutions, where the multi-scale weights are obtained |
|
from the contrast sensitivity function of the human visual system. |
|
The experimental results show that the proposed model achieves |
|
the best performance on five popular UGC VQA databases, which |
|
demonstrates the effectiveness of the proposed model. The code is |
|
available at https://github.com/sunwei925/SimpleVQA. |
|
CCS CONCEPTS |
|
• Computing methodologies → Modeling methodologies.
|
∗Corresponding author: Guangtao Zhai. |
|
Permission to make digital or hard copies of all or part of this work for personal or |
|
classroom use is granted without fee provided that copies are not made or distributed |
|
for profit or commercial advantage and that copies bear this notice and the full citation |
|
on the first page. Copyrights for components of this work owned by others than ACM |
|
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, |
|
to post on servers or to redistribute to lists, requires prior specific permission and/or a |
|
fee. Request permissions from [email protected]. |
|
MM ’22, October 10–14, 2022, Lisboa, Portugal |
|
©2022 Association for Computing Machinery. |
|
ACM ISBN 978-1-4503-9203-7/22/10. . . $15.00 |
|
https://doi.org/10.1145/3503161.3548329

KEYWORDS
|
video quality assessment, UGC videos, deep learning, feature fusion |
|
ACM Reference Format: |
|
Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai∗. 2022. A Deep Learn- |
|
ing based No-reference Quality Assessment Model for UGC Videos. In Pro- |
|
ceedings of the 30th ACM International Conference on Multimedia (MM ’22), |
|
October 10–14, 2022, Lisboa, Portugal. ACM, New York, NY, USA, 10 pages. |
|
https://doi.org/10.1145/3503161.3548329 |
|
1 INTRODUCTION |
|
With the proliferation of mobile devices and wireless networks in |
|
recent years, User Generated Content (UGC) videos have exploded |
|
over the Internet. It has become a popular daily activity for the gen- |
|
eral public to create, view, and share UGC videos through various |
|
social media applications such as YouTube, TikTok, etc. However, |
|
UGC videos are captured by a wide variety of consumers, ranging |
|
from professional photographers to amateur users, which makes |
|
the visual quality of UGC videos vary greatly. In order to ensure |
|
the Quality of Experience (QoE) of end-users, the service providers |
|
need to monitor the quality of UGC videos in the entire streaming |
|
media link, including but not limited to video uploading, compress- |
|
ing, post-processing, transmitting, etc. Therefore, with billions of |
|
video viewing and millions of newly uploaded UGC videos every |
|
day, an effective and efficient video quality assessment (VQA) model |
|
is needed to measure the perceptual quality of UGC videos. |
|
Objective VQA can be divided into full-reference (FR), reduced- |
|
reference (RR), and no-reference (NR) according to the amount of |
|
pristine video information needed. Since there is no reference video |
|
for in-the-wild UGC videos, only NR VQA models are qualified for |
|
evaluating their quality. Although NR VQA algorithms [ 21,23,26] |
|
have been studied for many years, most of them were developed |
|
for Professionally Generated Content (PGC) videos with synthetic |
|
distortions, where the pristine PGC videos are shot by photogra- |
|
phers using professional devices and are normally of high quality, |
|
and the distorted PGC videos are then degraded by specific video |
|
processing algorithms such as video compression, transmission, etc. |
|
So, previous VQA studies mainly focus on modeling several types |
|
of distortions caused by specific algorithms, which makes them less |
|
effective for UGC videos with in-the-wild distortions. To be more |
|
specific, the emerging UGC videos pose the following challenges |
|
to the existing VQA algorithms for PGC videos:
|
First, the distortion types of UGC videos are diverse. A large number of UGC videos are captured by amateur users and may suffer from various distortions such as under-/over-exposure, low visibility, jitter, noise, color shift, etc. These authentic distortions are introduced during the shooting process and cannot be modeled by a single distortion type, which requires the VQA models to have a stronger feature representation ability to quantify the
|
authentic distortions. Second, the content and forms of UGC videos |
|
are extremely rich. UGC videos can be natural scenes, animation |
|
[35], games [46, 47], screen content, etc. Note that the statistical
|
characteristics of different video content vary greatly. For example, |
|
the natural scene statistics (NSS) features [22–24, 26] are com-
|
monly used in the previous VQA studies to measure the distortions |
|
of natural scene content, but they may be ineffective for computer- |
|
generated content like animation or games. In addition, live videos, videoconferencing, etc. are also ubiquitous among UGC videos nowadays, and their quality is severely affected by the network bandwidth.
|
Third, due to the advancement of shooting devices, more high res- |
|
olution [ 18] and high frame rate [ 19,52,53] videos have emerged |
|
on the Internet. The various kinds of resolutions and frame rates |
|
are also important factors for video quality. What’s more, users |
|
can view the UGC videos through mobile devices anywhere and at |
|
any time, so the display [ 25] and the viewing environment such as |
|
ambient luminance [ 29], etc. also affect the perceptual quality of |
|
UGC videos to a certain extent. However, these factors are rarely |
|
considered by previous studies. |
|
The recently released large-scale UGC VQA databases such as |
|
KoNViD-1k [ 8], YouTube UGC [ 36], LSVQ [ 44], etc. have greatly |
|
promoted the development of UGC VQA. Several deep learning |
|
based NR VQA models [ 14,15,37,40,44] have been proposed to |
|
solve some challenges mentioned above and achieve pretty good |
|
performance. However, there are still some problems that need to |
|
be addressed. First, the previous studies either use the image recognition model [15, 44] or pretrained image quality assessment (IQA) models [37, 40, 14] to extract frame-level features, and thus lack an end-to-end learning method to learn the quality-aware
|
spatial feature representation from raw pixels of video frames. Sec- |
|
ond, previous studies usually extract the features from all video |
|
frames and have a very high computational complexity, making |
|
them difficult to apply to real-world scenarios. Since there is much redundant spatial information between adjacent frames, we argue that it is not necessary to extract the features from all
|
frames. Third, the spatial resolution and frame rate of UGC videos |
|
as well as other factors such as the display, viewing environment, |
|
etc. are still rarely considered by these studies. However, these |
|
factors are very important for the perceptual quality of UGC videos |
|
since the contrast sensitivity of the human visual system (HVS) is |
|
affected by them. |
|
In this paper, to address the challenges mentioned above, we |
|
propose a very simple but effective deep learning based VQA model |
|
for UGC videos. The proposed framework is illustrated in Figure 1, |
|
which consists of the feature extraction module, the quality regres- |
|
sion module, and the quality pooling module. For the feature ex- |
|
traction module, we extract quality-aware features from the spatial |
|
domain and the spatial-temporal domain to respectively measure |
|
the spatial distortions and motion distortions. Instead of using the pretrained model to extract the spatial features as in previous stud-
|
ies, we propose to train an end-to-end spatial feature extraction |
|
network to learn quality-aware feature representation in the spatial |
|
domain, which thereby makes full use of various video content |
|
and distortion types in current UGC VQA databases. We then uti- |
|
lize the action recognition network to extract the motion features, |
|
which can compensate for the temporal-related distortions that the spatial
|
features cannot model. Considering that the spatial features are |
|
sensitive to the resolution while the motion features are sensitive |
|
to the frame rate, we first split the video into continuous chunks |
|
and then extract the spatial features and motion features by using a |
|
key frame of each chunk and all frames of each chunk but at a low |
|
spatial resolution respectively. So, the computational complexity of |
|
the proposed model can be greatly reduced. |
|
For the quality regression module, we use the multilayer perceptron (MLP) network to map the quality-aware features into
|
the chunk-level quality scores, and the temporal average pooling |
|
strategy is adopted to obtain the final video quality. In order to |
|
solve the problem of quality assessment across different resolu- |
|
tions, we introduce a multi-scale quality fusion strategy to fuse |
|
the quality scores of the videos with different resolutions, where |
|
the multi-scale weights are obtained from the contrast sensitivity |
|
function (CSF) of HVS by considering the viewing environment |
|
information. The proposed models are validated on five popular |
|
UGC VQA databases and the experimental results show that the |
|
proposed model outperforms other state-of-the-art VQA models |
|
by a large margin. What’s more, the proposed model trained on a |
|
large-scale database such as LSVQ [ 44] achieves remarkable perfor- |
|
mance when tested on the other databases without any fine-tuning, |
|
which further demonstrates the effectiveness and generalizability |
|
of the proposed model. |
|
In summary, this paper makes the following contributions: |
|
(1) We propose an effective and efficient deep learning based
|
model for UGC VQA, which includes the feature extraction |
|
module, the quality regression module, and the quality pool- |
|
ing module. The proposed model not only achieves remark- |
|
able performance on the five popular UGC VQA databases |
|
but also has a low computational complexity, which makes |
|
it very suitable for practical applications. |
|
(2) The feature extraction module extracts two kinds of quality-
|
aware features, the spatial features for spatial distortions and |
|
the spatial-temporal features for motion distortions, where |
|
the spatial features are learned from raw pixels of video |
|
frames via an end-to-end manner and the spatial-temporal |
|
features are extracted by a pretrained action recognition |
|
network. |
|
(3) We introduce a multi-scale quality fusion strategy to solve
|
the problem of quality assessment across different resolu- |
|
tions, where the multi-scale weights are obtained from the |
|
contrast sensitivity function of the human visual system by |
|
considering the viewing environment information. |
|
2 RELATED WORK |
|
2.1 Handcrafted feature based NR VQA Models |
|
Figure 1: The network architecture of the proposed model. The proposed model contains the feature extraction module, the quality regression module, and the quality pooling module. The feature extraction module extracts two kinds of features, the spatial features and the motion features.

A naive NR VQA method is to compute the quality of each frame via popular NR IQA methods such as NIQE [24], BRISQUE [22],
|
CORNIA [ 42] etc., and then pool them into the video quality score. |
|
A comparative study of various temporal pooling strategies applied to popular NR IQA methods can be found in [32]. The temporal information
|
is very important for VQA. V-BLIINDS [ 26] is a spatio-temporal |
|
natural scene statistics (NSS) model for videos by quantifying the |
|
NSS features of frame differences and motion coherency character-
|
istics. Mittal et al. [23] propose a training-free blind VQA model |
|
named VIIDEO that exploits intrinsic statistics regularities of natu- |
|
ral videos to quantify disturbances introduced due to distortions. |
|
TLVQM [ 12] extracts abundant spatio-temporal features such as |
|
motion, jerkiness, blurriness, noise, blockiness, color, etc. at two |
|
levels of high and low complexity. VIDEVAL [ 33] further combines |
|
the selected features from typical NR I/VQA methods to train an SVR
|
model to regress them into the video quality. Since video content |
|
also affects its quality, especially for UGC videos, understanding the |
|
video content is beneficial to NR VQA. Previous handcrafted feature based methods have difficulty capturing semantic information.
|
Hence, some studies [ 13,34] try to combine the handcrafted features |
|
with the semantic-level features extracted by the pretrained CNN |
|
model to improve the performance of NR VQA models. For example, |
|
CNN-TLVQM [ 13] combines the handcrafted statistical temporal |
|
features from TLVQM and spatial features extracted by a 2D-CNN
|
model trained for IQA. RAPIQUE [ 34] utilizes the quality-aware |
|
scene statistics features and semantics-aware deep CNN features |
|
to achieve a rapid and accurate VQA model for UGC videos. |
|
2.2 Deep learning based NR VQA Models |
|
With the release of several large-scale VQA databases [ 8,36,44], |
|
deep learning based NR VQA models [ 2,11,14,15,31,37,40,43,44] |
|
have attracted much attention from researchers. Liu et al. [17] propose a multi-
|
task BVQA model V-MEON by jointly optimizing the 3D-CNN |
|
for quality assessment and compression distortion classification. |
|
VSFA [ 15] first extracts the semantic features from a pre-trained |
|
CNN model and then uses a gated recurrent unit (GRU) network |
|
to model the temporal relationship between the semantic features |
|
of video frames. The authors of VSFA further propose MDVSFA [16], which trains the VSFA model on multiple VQA databases
|
to improve its performance and generalization. RIRNet [ 4] exploits |
|
the effect of motion information extracted from the multi-scale |
|
temporal frequencies for video quality assessment. Ying et al. [44] |
|
propose a local-to-global region-based NR VQA model that com- |
|
bines the spatial features extracted from a 2D-CNN model and the |
|
spatial-temporal features from a 3D-CNN network. Wang et al. [37] |
|
propose a feature-rich VQA model for UGC videos, which measures |
|
the quality from three aspects, compression level, video content, |
|
and distortion type and each aspect is evaluated by an individual |
|
neural network. Xu et al. [40] first extract the spatial feature of |
|
the video frame from a pre-trained IQA model and use the graph |
|
convolution to extract and enhance these features, then extract |
|
motion information from the optical flow domain, and finally integrate the spatial features and motion information via a bidirectional
|
long short-term memory network. Li et al. [14] also utilize the IQA |
|
model pre-trained on multiple databases to extract quality-aware
|
spatial features and the action recognition model to extract tem- |
|
poral features, and then a GRU network is used to model spatial |
|
and temporal features and regress them into the quality score. Wen |
|
and Wang [ 39] propose a baseline I/VQA model for UGC videos, |
|
which calculates the video quality by averaging the scores of all frames, where the frame-level quality scores are obtained by a simple CNN network.
|
3 PROPOSED MODEL |
|
The framework of the proposed NR VQA model is shown in Fig- |
|
ure 1, which consists of the feature extraction module, the quality |
|
regression module, and the quality pooling module. First, we ex- |
|
tract the quality-aware features from the spatial domain and the |
|
spatial-temporal domain via the feature extraction module, which |
|
are utilized to evaluate the spatial distortions and motion distor- |
|
tions respectively. Then, the quality regression module is used to |
|
map the quality-aware features into chunk-level quality scores. Fi- |
|
nally, we perform the quality pooling module to obtain the video |
|
quality score.
|
3.1 Feature Extraction Module |
|
In this section, we aim to extract quality-aware features that
|
can represent the impact of various distortion types and content |
|
on visual quality. The types of video distortion can be roughly |
|
divided into two categories: the spatial distortions and the motion |
|
distortions. The spatial distortions refer to the artifacts introduced |
|
in the video frames, such as noise, blur, compression, low visibility, |
|
etc. The motion distortions refer to jitter, lagging, etc., which are mainly caused by unstable shooting equipment, fast-moving objects, low network bandwidth, etc. Therefore, we need to
|
extract the quality-aware features from these two aspects. |
|
Note that the characteristics of the spatial features and motion |
|
features are quite different. The spatial features are sensitive to |
|
the video resolution but insensitive to the video frame rate since |
|
the adjacent frames of the video contain lots of redundant spatial
|
information and higher resolution can represent more abundant |
|
high-frequency information, while motion features are the opposite |
|
because the motion distortions are reflected in the temporal dimen-
|
sion and these features are usually consistent for local regions of |
|
the frames. |
|
Therefore, considering these characteristics, given a video $V$ whose number of frames and frame rate are $l$ and $r$ respectively, we first split the video $V$ into $N_c$ continuous chunks $c=\{c_i\}_{i=1}^{N_c}$ at a time interval $\tau$, where $N_c = l/(r\cdot\tau)$, and there are $N_f = r\cdot\tau$ frames in each chunk $c_i$, which is denoted as $c_i=\{x_{i,j}\}_{j=1}^{N_f}$. Then we only choose a key frame $x_{i,key}$ in each chunk to extract the spatial features, and the motion features of each chunk are extracted using all frames in $c_i$ but at a very low spatial resolution. As a result, we can greatly reduce the computational complexity of the VQA model with little performance degradation.
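As a concrete illustration of this chunking scheme, the following is a minimal sketch assuming the video has already been decoded into a NumPy array; the helper name and the one-second chunk interval are illustrative choices, not taken from the released code.

```python
import numpy as np

def split_into_chunks(frames: np.ndarray, frame_rate: float, tau: float = 1.0):
    """Split a decoded video into continuous chunks of tau seconds.

    frames: array of shape (l, H, W, 3), where l is the number of frames.
    Returns a list of (key_frame, chunk_frames) pairs: the key frame is used
    for spatial features, the whole chunk (at low resolution) for motion features.
    """
    n_f = int(round(frame_rate * tau))          # N_f = r * tau frames per chunk
    n_c = len(frames) // n_f                    # N_c = l / (r * tau) chunks
    chunks = []
    for i in range(n_c):
        chunk = frames[i * n_f:(i + 1) * n_f]   # c_i = {x_{i,j}}
        key_frame = chunk[0]                    # first frame chosen as the key frame
        chunks.append((key_frame, chunk))
    return chunks

# Example: an 8-second, 30 fps video split into 8 chunks of 30 frames each.
video = np.zeros((240, 540, 960, 3), dtype=np.uint8)
chunks = split_into_chunks(video, frame_rate=30)
print(len(chunks), chunks[0][1].shape)          # 8 (30, 540, 960, 3)
```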
|
3.1.1 Spatial Feature Extraction Module. Given a frame $x$, we denote $f_w(x)$ as the output of the CNN model $f$ with trainable parameters $w=\{w_k\}$ applied to the frame $x$. Assume that there are $N_s$ stages in the CNN model, and $f_w^k(x)$ is the output feature map extracted from the $k$-th stage, where $f_w^k(x)\in\mathbb{R}^{H_k\times W_k\times C_k}$, and $H_k$, $W_k$, and $C_k$ are the height, width, and number of channels of the feature map $f_w^k(x)$ respectively. In the following, we write $f_w^k$ for $f_w^k(x)$ for simplicity.
|
It is well known that the features extracted by the deep lay- |
|
ers of the CNN model contain rich semantic information, and are |
|
suitable for representing content-aware features for UGC VQA. |
|
Moreover, previous studies indicate that the features extracted by |
|
the shallow layers of the CNN models contain low-level informa- |
|
tion [28, 48], which corresponds to low-level features such as edges,
|
corners, textures, etc. The low-level information is easily affected |
|
by the distortion and is therefore distortion-aware. Hence, we ex- |
|
tract the quality-aware features by calculating the global mean and standard deviation of the feature maps extracted from all stages of the CNN model. Specifically, we apply global average and standard deviation pooling operations on the feature maps $f_w^k$:
$$\mu_{f_w^k} = \mathrm{GP}_{avg}(f_w^k), \quad \sigma_{f_w^k} = \mathrm{GP}_{std}(f_w^k), \qquad (1)$$
where $\mu_{f_w^k}$ and $\sigma_{f_w^k}$ are the global mean and standard deviation of the feature maps $f_w^k$ respectively. Finally, we concatenate $\mu_{f_w^k}$ and $\sigma_{f_w^k}$ to derive the spatial feature representation of our NR VQA model:
$$F_s^k = \mathrm{cat}([\mu_{f_w^k}, \sigma_{f_w^k}]), \quad F_s = \mathrm{cat}(\{F_s^k\}_{k=1}^{N_s}). \qquad (2)$$
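A minimal PyTorch sketch of the spatial feature extractor of Eqs. (1) and (2) is given below, assuming a torchvision ResNet-50 whose four residual stages serve as the $N_s = 4$ stages (recent torchvision weights API); the exact stage split and feature dimensionality of the released code may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SpatialFeatureExtractor(nn.Module):
    """Global mean/std pooling over the four ResNet-50 stages (Eqs. (1)-(2))."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])

    def forward(self, x):                        # x: (B, 3, H, W) key frames
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)                         # f_w^k(x): (B, C_k, H_k, W_k)
            mu = x.mean(dim=[2, 3])              # global average pooling
            sigma = x.std(dim=[2, 3])            # global standard deviation pooling
            feats.append(torch.cat([mu, sigma], dim=1))     # F_s^k
        return torch.cat(feats, dim=1)           # F_s, dim = 2*(256+512+1024+2048) = 7680

key_frames = torch.randn(2, 3, 448, 448)
F_s = SpatialFeatureExtractor()(key_frames)
print(F_s.shape)                                 # torch.Size([2, 7680])
```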
|
3.1.2 Motion Feature Extraction Module. We extract the motion |
|
features as the complementary quality-aware features since UGC |
|
videos are commonly degraded by the motion distortions caused |
|
by unstable shooting equipment or low bit rates in live streaming or videoconferencing. The spatial features can hardly handle these distortions because they are extracted from intra-frames while motion distortions occur across frames. Therefore,
|
the motion features are also necessary for evaluating the quality |
|
of UGC videos. Here, we utilize the pretrained action recognition |
|
model as the motion feature extractor to obtain the motion features |
|
of each video chunk. The action recognition model is designed |
|
to detect different kinds of action classes, so the feature represen- |
|
tation of the action recognition network can reflect the motion |
|
information of the video to a certain extent. Therefore, given the |
|
video chunk $c$ and the action recognition network $\mathrm{MOTION}$, we can obtain the motion features:
$$F_m = \mathrm{MOTION}(c), \qquad (3)$$
where $F_m$ represents the motion features extracted by the action recognition network.
|
Therefore, given the video chunk $c$, we first select a key frame in the chunk to calculate the spatial features $F_s$. Then, we calculate the motion features $F_m$ using all frames of the chunk but at a low spatial resolution. Finally, we obtain the quality-aware features for the video chunk $c$ by concatenating the spatial features and motion features:
$$F = \mathrm{cat}([F_s, F_m]). \qquad (4)$$
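The chunk-level feature assembly of Eq. (4) could be sketched as follows; the motion extractor is treated here as a generic 3D CNN that maps a clip to a feature vector (the paper actually uses a SlowFast R50 pretrained on Kinetics-400, whose two-pathway input format is more involved), and the function name is illustrative.

```python
import torch
import torch.nn.functional as F_nn

def chunk_features(key_frame, chunk_frames, spatial_model, motion_model):
    """Assemble the quality-aware features of one chunk (Eq. (4)).

    key_frame:    (3, H, W) frame used for the spatial features F_s.
    chunk_frames: (T, 3, H, W) all frames of the chunk, used for motion features.
    motion_model: any 3D CNN mapping a (1, 3, T, h, w) clip to a feature vector;
                  the paper uses a SlowFast R50 pretrained on Kinetics-400.
    """
    F_s = spatial_model(key_frame.unsqueeze(0))              # (1, D_s)

    # Motion branch uses all frames but at a very low spatial resolution.
    clip = F_nn.interpolate(chunk_frames, size=(224, 224),
                            mode='bilinear', align_corners=False)
    clip = clip.permute(1, 0, 2, 3).unsqueeze(0)             # (1, 3, T, 224, 224)
    with torch.no_grad():                                     # motion weights stay frozen
        F_m = motion_model(clip)                              # (1, D_m)

    return torch.cat([F_s, F_m], dim=1)                       # F = cat([F_s, F_m])
```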
|
3.2 Quality Regression Module |
|
After extracting quality-aware feature representation by the feature |
|
extraction module, we need to map these features to the quality |
|
scores via a regression model. In this paper, we use the multi-layer perceptron (MLP) as the regression model to obtain the chunk-level quality due to its simplicity and effectiveness. The MLP consists of two fully connected layers with 128 and 1 neurons respectively. Therefore, we can obtain the chunk-level quality score via
$$q = f_{w_{FC}}(F), \qquad (5)$$
where $f_{w_{FC}}$ denotes the function of the two FC layers and $q$ is the quality of the video chunk.
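A minimal sketch of the regression head of Eq. (5) is shown below; the paper only specifies two fully connected layers with 128 and 1 neurons, so the ReLU activation between them is an assumption.

```python
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Two fully connected layers (128 and 1 neurons) mapping F to a chunk score q."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 128)
        self.act = nn.ReLU()             # the activation between the two FC layers is assumed
        self.fc2 = nn.Linear(128, 1)

    def forward(self, F):                # F: (N_c, in_dim) chunk-level features
        return self.fc2(self.act(self.fc1(F))).squeeze(-1)    # q: (N_c,) chunk scores
```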
|
3.3 Quality Pooling Module |
|
As stated in Section 3.1, we split the video $V$ into $N_c$ continuous chunks $\{c_i\}_{i=1}^{N_c}$. For the chunk $c_i$, we can obtain its chunk-level quality score $q_i$ via the feature extraction module and the quality regression module. Then, it is necessary to pool the chunk-level scores into the video level. Though many temporal pooling methods have been proposed in the literature [32, 15], we find that the temporal average pooling achieves the best performance, as shown in Section 4.3.2.

Table 1: Summary of the benchmark UGC VQA databases. Time duration: seconds.

Database | Videos | Scenes | Resolution | Time Duration | Format | Distortion Type | Data | Environment
KoNViD-1k [8] | 1,200 | 1,200 | 540p | 8 | MP4 | Authentic | MOS + σ | Crowd
YouTube-UGC [36] | 1,500 | 1,500 | 360p-4K | 20 | YUV, MP4 | Authentic | MOS + σ | Crowd
LSVQ [44] | 38,811 | 38,811 | 99p-4K | 5-12 | MP4 | Authentic | MOS + σ | Crowd
LBVD [3] | 1,013 | 1,013 | 240p-540p | 10 | MP4 | Authentic, Transmission | MOS + σ | In-lab
LIVE-YT-Gaming [45] | 600 | 600 | 360p-1080p | 8-9 | MP4 | Authentic | MOS | Crowd
|
Therefore, the video-level quality is calculated as: |
|
$$Q = \frac{1}{N_c}\sum_{i=1}^{N_c} q_i, \qquad (6)$$
where $q_i$ is the quality of the $i$-th chunk and $Q$ is the video quality evaluated by the proposed model.
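Putting Eqs. (5) and (6) together, the end of the pipeline can be illustrated with a toy example; the feature dimensionality here is a placeholder, not the actual size of the concatenated spatial and motion features.

```python
import torch
import torch.nn as nn

# Toy example: N_c = 8 chunks, each described by a quality-aware feature vector F.
chunk_feats = torch.randn(8, 4096)                           # placeholder feature dimension
mlp = nn.Sequential(nn.Linear(4096, 128), nn.ReLU(), nn.Linear(128, 1))
q = mlp(chunk_feats).squeeze(-1)                             # chunk-level scores q_i (Eq. (5))
Q = q.mean()                                                 # video-level score Q (Eq. (6))
print(q.shape, float(Q))
```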
|
3.4 Loss Function |
|
The loss function used to optimize the proposed models consists of |
|
two parts: the mean absolute error (MAE) loss and rank loss [ 39]. |
|
The MAE loss is used to make the evaluated quality scores close to |
|
the ground truth, which is defined as: |
|
$$L_{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|Q_i - \hat{Q}_i\right|, \qquad (7)$$
where $\hat{Q}_i$ is the ground-truth quality score of the $i$-th video in a mini-batch and $N$ is the number of videos in the mini-batch.
|
The rank loss is further introduced to make the model distinguish |
|
the relative quality of videos better, which is very useful for the |
|
model to evaluate videos with similar quality. Since the rank value between two video quality scores is non-differentiable, we use the following formula to approximate it:
|
$$L_{rank}^{ij} = \max\left(0, \left|\hat{Q}_i - \hat{Q}_j\right| - e(\hat{Q}_i, \hat{Q}_j)\cdot(Q_i - Q_j)\right), \qquad (8)$$
where $i$ and $j$ are two video indexes in a mini-batch, and $e(\hat{Q}_i, \hat{Q}_j)$ is formulated as:
$$e(\hat{Q}_i, \hat{Q}_j) = \begin{cases} 1, & \hat{Q}_i \ge \hat{Q}_j, \\ -1, & \hat{Q}_i < \hat{Q}_j. \end{cases} \qquad (9)$$
|
Then, $L_{rank}$ is calculated via:
$$L_{rank} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} L_{rank}^{ij}. \qquad (10)$$
|
Finally, the loss function can be obtained by: |
|
$$L = L_{MAE} + \lambda\cdot L_{rank}, \qquad (11)$$
where $\lambda$ is a hyper-parameter to balance the MAE loss and the rank loss.
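The overall objective of Eqs. (7)-(11) over a mini-batch can be sketched as below, using the hinge-style pairwise rank loss reconstructed above; the function and variable names are illustrative.

```python
import torch

def vqa_loss(pred, target, lam=1.0):
    """MAE loss plus pairwise rank loss (Eqs. (7)-(11)) over a mini-batch."""
    l_mae = (pred - target).abs().mean()                               # Eq. (7)

    d_pred = pred.unsqueeze(1) - pred.unsqueeze(0)                     # Q_i - Q_j at [i, j]
    d_gt = target.unsqueeze(1) - target.unsqueeze(0)                   # hat{Q}_i - hat{Q}_j
    e = torch.where(d_gt >= 0, torch.ones_like(d_gt), -torch.ones_like(d_gt))   # Eq. (9)
    l_rank = torch.clamp(d_gt.abs() - e * d_pred, min=0).mean()        # Eqs. (8) and (10)

    return l_mae + lam * l_rank                                        # Eq. (11); lambda = 1 in the paper

pred = torch.tensor([3.1, 4.2, 2.0], requires_grad=True)
target = torch.tensor([3.5, 4.0, 1.8])
loss = vqa_loss(pred, target)
loss.backward()
print(float(loss))
```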
|
3.5 Multi-scale Quality Fusion Strategy |
|
Previous studies evaluate the video quality either using the original |
|
spatial resolution or a fixed resized spatial resolution, which ignores the fact that videos are naturally multi-scale [53]. Some existing work [38, 25, 20] shows that considering the multi-scale characteristics can improve the performance of image quality assessment. So, we propose a multi-scale quality fusion strategy to further improve
|
the evaluation accuracy of the VQA model and this strategy is |
|
very useful to compare the quality of videos with different spatial |
|
resolutions. |
|
3.5.1 Multi-scale Video Quality Scores. We first resize the resolu- |
|
tion of the video into three fixed spatial scales, which are 540p, |
|
720p, and 1080p, respectively. We do not downscale the video from |
|
the original scale to several lower resolution scales, which is a |
|
more common practice in previous studies. That is because when |
|
users watch videos in an application, the resolution of videos is |
|
actually adapted to the resolution of the playback device, and the |
|
modern display resolution is normally larger than 1080p. So, the |
|
perceptual quality of the low-resolution videos is also affected by |
|
the up-sampling artifacts, which also need to be considered by VQA |
|
models. Therefore, given a VQA model, we can derive three quality scores of a video at the three scales, which are denoted as $Q_1$, $Q_2$, and $Q_3$ respectively.
|
3.5.2 Adaptive Multi-scale Weights. The weight of each scale is |
|
obtained by considering the human psychological behaviors and |
|
the visual sensitivity characteristics. It is noted that the contrast |
|
perception ability of the HVS depends on the spatial frequency |
|
of the visual signal, which is modeled by the contrast sensitivity |
|
function (CSF). Specifically, we first define a viewing resolution |
|
factor $\xi$ as:
$$\xi = \frac{\pi\cdot d\cdot n}{180\cdot h_s\cdot 2}, \qquad (12)$$
where the unit of $\xi$ is cycles per degree of visual angle (cpd), $d$ is the viewing distance (inch), $h_s$ is the height of the screen (inch), and $n$ denotes the number of pixels in the vertical direction of the screen. For the above three spatial scales of the video, we can obtain the corresponding $\xi$, which are denoted as $\xi_1$, $\xi_2$, and $\xi_3$ respectively.
|
We use $\xi$ to divide the spatial frequency range of the corresponding scale, which covers one section of the CSF formulated by:
$$S(u) = \frac{5200\, e^{-0.0016 u^2 (1+100/L)^{0.08}}}{\sqrt{\left(1+\frac{144}{X_0^2}+0.64 u^2\right)\left(\frac{63}{L^{0.83}}+\frac{1}{1-e^{-0.02 u^2}}\right)}}, \qquad (13)$$
where $u$, $L$, and $X_0^2$ indicate the spatial frequency (cpd), luminance (cd/m$^2$), and angular object area (squared degrees), respectively.
|
The weight of each scale is calculated as the area under the CSF |
|
within the corresponding frequency covering range: |
|
$$w_i = \frac{1}{Z}\int_{\xi_{i-1}}^{\xi_i} S(u)\,\mathrm{d}u, \quad i\in\{1,2,3\}, \qquad (14)$$
where $i$ from 1 to 3 corresponds to the finest to coarsest scales respectively, $\xi_0$ corresponds to a viewing resolution factor of 0, and $Z$ is a normalization factor such that $\sum_i w_i = 1$.
|
Table 2: Performance of the SOTA models and the proposed model on the KoNViD-1k, YouTube-UGC, LBVD, and LIVE-YT-Gaming databases. W.A. means the weighted average results. The best performing model is highlighted in each column.

Type | Method | KoNViD-1k SRCC/PLCC | YouTube-UGC SRCC/PLCC | LBVD SRCC/PLCC | LIVE-YT-Gaming SRCC/PLCC | W.A. SRCC/PLCC
IQA | NIQE | 0.542/0.553 | 0.238/0.278 | 0.327/0.387 | 0.280/0.304 | 0.359/0.393
IQA | BRISQUE | 0.657/0.658 | 0.382/0.395 | 0.435/0.446 | 0.604/0.638 | 0.513/0.525
IQA | GM-LOG | 0.658/0.664 | 0.368/0.392 | 0.314/0.304 | 0.312/0.317 | 0.433/0.440
IQA | VGG19 | 0.774/0.785 | 0.703/0.700 | 0.676/0.673 | 0.678/0.658 | 0.714/0.712
IQA | ResNet50 | 0.802/0.810 | 0.718/0.710 | 0.715/0.717 | 0.729/0.768 | 0.744/0.751
IQA | KonCept512 | 0.735/0.749 | 0.587/0.594 | 0.626/0.636 | 0.643/0.649 | 0.650/0.660
VQA | V-BLIINDS | 0.710/0.704 | 0.559/0.555 | 0.527/0.558 | 0.357/0.403 | 0.566/0.578
VQA | TLVQM | 0.773/0.769 | 0.669/0.659 | 0.614/0.590 | 0.748/0.756 | 0.699/0.689
VQA | VIDEVAL | 0.783/0.780 | 0.779/0.773 | 0.707/0.697 | 0.807/0.812 | 0.766/0.762
VQA | RAPIQUE | 0.803/0.818 | 0.759/0.768 | 0.712/0.725 | 0.803/0.825 | 0.767/0.781
VQA | VSFA | 0.773/0.775 | 0.724/0.743 | 0.622/0.642 | 0.776/0.801 | 0.721/0.736
VQA | Li et al. | 0.836/0.834 | 0.831/0.819 | - | - | -
VQA | Pro. | 0.856/0.860 | 0.847/0.856 | 0.844/0.846 | 0.861/0.866 | 0.851/0.856
|
Table 3: Performance of the SOTA models and the proposed models on the LSVQ database. Pro. M.S. refers to the proposed model implemented with the multi-scale quality fusion strategy. W.A. means the weighted average results. The best performing model is highlighted in each column.

Method | Test SRCC/PLCC | Test-1080p SRCC/PLCC | W.A. SRCC/PLCC
TLVQM | 0.772/0.774 | 0.589/0.616 | 0.712/0.722
VIDEVAL | 0.794/0.783 | 0.545/0.554 | 0.712/0.707
VSFA | 0.801/0.796 | 0.675/0.704 | 0.759/0.766
PVQ | 0.827/0.828 | 0.711/0.739 | 0.789/0.799
Li et al. | 0.852/0.854 | 0.772/0.788 | 0.825/0.832
Pro. | 0.864/0.861 | 0.756/0.801 | 0.829/0.841
Pro. M.S. | 0.867/0.861 | 0.764/0.803 | 0.833/0.842
|
Therefore, the multi-scale fusion quality score $Q_m$ is calculated as:
$$Q_m = \prod_{i=1}^{3} Q_i^{w_i}. \qquad (15)$$
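The multi-scale weighting of Eqs. (12)-(15) can be reproduced numerically as sketched below. How the per-scale cutoff frequencies $\xi_i$ are derived from the three scales (here, by scaling $n$ with the 540/720/1080 vertical resolutions while using the viewing parameters reported in Section 4.1.2) is an assumption, so the computed weights only approximate the published values.

```python
import numpy as np
from scipy.integrate import quad

def csf(u, L=200.0, X0_sq=606.0):
    """Contrast sensitivity function S(u) of Eq. (13); u in cycles per degree."""
    num = 5200.0 * np.exp(-0.0016 * u**2 * (1.0 + 100.0 / L) ** 0.08)
    den = np.sqrt((1.0 + 144.0 / X0_sq + 0.64 * u**2)
                  * (63.0 / L**0.83 + 1.0 / (1.0 - np.exp(-0.02 * u**2))))
    return num / den

def multiscale_weights(d=35.0, h_s=11.3, n_scales=(540, 720, 1080)):
    """Eqs. (12) and (14): per-scale cutoff frequencies and CSF-area weights."""
    xis = [np.pi * d * n / (180.0 * h_s * 2.0) for n in n_scales]   # xi_1, xi_2, xi_3
    bounds = [1e-6] + xis       # start slightly above 0 to avoid the 1/(1-e^0) singularity
    areas = [quad(csf, bounds[i], bounds[i + 1])[0] for i in range(3)]
    return np.array(areas) / np.sum(areas)                          # normalized so sum w_i = 1

def fuse(scores, weights):
    """Eq. (15): weighted geometric mean of the per-scale quality scores Q_1..Q_3."""
    return float(np.prod(np.asarray(scores, dtype=float) ** weights))

w = multiscale_weights()
print(w)                        # roughly comparable to the published (0.8317, 0.0939, 0.0745)
print(fuse([3.4, 3.6, 3.7], w))  # multi-scale fused quality Q_m
```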
|
4 EXPERIMENTAL VALIDATION |
|
4.1 Experimental Protocol |
|
4.1.1 Test Databases. We test the proposed model on five UGC VQA databases: KoNViD-1k [8], YouTube-UGC [36], LSVQ [44],
|
LBVD [ 3], and LIVE-YT-Gaming [ 45]. We summarize the main infor- |
|
mation of the databases in Table 1. The LSVQ database is the largest |
|
UGC VQA database so far, and there are 15 video categories such |
|
as animation, gaming, HDR, live music, sports, etc. in the YouTube- |
|
UGC database, which is more diverse than other databases. The |
|
LBVD database focuses on the live broadcasting videos, of which |
|
the videos are degraded by the authentic transmission distortions. |
|
The LIVE-YT-Gaming database consists of streamed gaming videos, |
|
where the video content is generated by computer graphics.

4.1.2 Implementation Details. We use the ResNet50 [7] as the back-
|
bone of the spatial feature extraction module and the SlowFast R50 |
|
[6] as the motion feature extraction model for all experiments. The weights of the ResNet50 are initialized by training on
|
the ImageNet dataset [ 5], the weights of the SlowFast R50 are fixed |
|
by training on the Kinetics 400 dataset [ 10], and other weights are |
|
randomly initialized. For the spatial feature extraction module, we |
|
resize the minimum dimension of the key frames to 520 while maintaining their aspect ratios. In the training stage, the input frames are randomly cropped to a resolution of 448×448. If we do not use the multi-scale quality fusion strategy, we crop the center patch with the same resolution of 448×448 in the testing
|
stage. Note that we only validate the multi-scale quality fusion |
|
strategy on the model trained on the LSVQ database since there
|
are enough videos with various spatial resolutions in it. For the |
|
motion feature extraction module, the resolution of the videos is |
|
resized to 224×224 for both the training and testing stages. We use |
|
PyTorch to implement the proposed models. The Adam optimizer with an initial learning rate of 0.00001 and a batch size of 8 is used for
|
training the proposed model on a server with NVIDIA V100. The |
|
hyper-parameter $\lambda$ is set to 1. For simplicity, we select the first frame in each chunk as the key frame. For the multi-scale quality fusion strategy, we set $d=35$, $n=1080$, $h_s=11.3$, $L=200$, and $X_0^2=606$, and the resulting multi-scale weights for UGC videos are $w_1=0.8317$, $w_2=0.0939$, and $w_3=0.0745$.
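The stated resolutions can be turned into a small preprocessing sketch as below; the use of torchvision transforms is an assumption, and the released code may implement the resizing and cropping differently.

```python
import torch
import torchvision.transforms as T

# Spatial branch: resize the shorter side of the key frame to 520 and crop 448x448
# (random crop for training, center crop for testing, as described above).
train_spatial = T.Compose([T.Resize(520), T.RandomCrop(448)])
test_spatial = T.Compose([T.Resize(520), T.CenterCrop(448)])

# Motion branch: all frames of the chunk are resized to 224x224.
motion_resize = T.Resize((224, 224))

key_frame = torch.rand(3, 1080, 1920)          # a toy decoded key frame (C, H, W)
chunk = torch.rand(30, 3, 1080, 1920)          # a toy one-second chunk at 30 fps

print(train_spatial(key_frame).shape)          # torch.Size([3, 448, 448])
print(motion_resize(chunk).shape)              # torch.Size([30, 3, 224, 224])
```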
|
4.1.3 Comparing Algorithms. We compare the proposed method |
|
with the following no-reference models: |
|
• IQA models: NIQE [24], BRISQUE [22], GM-LOG [41], VGG19 [27], ResNet50 [7], and KonCept512 [9].

• VQA models: V-BLIINDS [26], TLVQM [12], VIDEVAL [33], RAPIQUE [34], VSFA [15], PVQ [44], and Li et al. [14].
|
Since the number of videos in the LSVQ database is relatively |
|
large, we only compare some representative VQA models on the |
|
LSVQ database and omit the methods which perform poorly on the |
|
other four UGC databases.
|
4.1.4 Evaluation Criteria. We adopt two criteria to evaluate the |
|
performance of VQA models, which are Pearson linear correlation |
|
coefficient (PLCC) and Spearman rank-order correlation coefficient |
|
(SRCC). PLCC reflects the prediction linearity of the VQA algorithm |
|
and SRCC indicates the prediction monotonicity. An excellent VQA |
|
model should obtain SRCC and PLCC values close to 1. Before calculating the PLCC, we follow the same procedure as in [1] to map the objective scores to the subjective scores using a four-parameter logistic function.
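This evaluation protocol can be sketched as follows; the specific four-parameter logistic parameterization is the common VQEG-style form and is an assumption, since the paper only refers to the procedure in [1].

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, beta1, beta2, beta3, beta4):
    """Four-parameter logistic mapping from objective scores to subjective scores."""
    return (beta1 - beta2) / (1.0 + np.exp(-(x - beta3) / np.abs(beta4))) + beta2

def evaluate(objective, mos):
    """SRCC on raw scores; PLCC after fitting the logistic mapping."""
    srcc = spearmanr(objective, mos)[0]
    p0 = [np.max(mos), np.min(mos), np.mean(objective), np.std(objective) + 1e-6]
    params, _ = curve_fit(logistic4, objective, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic4(objective, *params), mos)[0]
    return srcc, plcc

# Toy check with noisy monotonic data; both criteria should be close to 1.
rng = np.random.default_rng(0)
obj = rng.uniform(0.0, 1.0, 200)
mos = 1.0 + 4.0 * obj + rng.normal(0.0, 0.2, 200)
print(evaluate(obj, mos))
```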
|
For KoNViD-1k, YouTube-UGC, LBVD, and LIVE-YT-Gaming |
|
databases, we randomly split these databases into a training set with 80% of the videos and a test set with 20% of the videos 10 times, and
|
report the median values of SRCC and PLCC. For the LSVQ database, |
|
we follow the same training and test split suggested by [ 44] and |
|
report the performance on the test and test-1080p subsets. |
|
4.2 Performance Comparison with the SOTA |
|
Models |
|
The performance results of the VQA models on the KoNViD-1k, |
|
YouTube-UGC, LBVD, and LIVE-YT-Gaming databases are listed in |
|
Table 2, and on the LSVQ database are listed in Table 3. From Table |
|
2 and Table 3, we observe that the proposed model achieves the best |
|
performance on all five UGC VQA databases and leads by a large |
|
margin, which demonstrates that the proposed model does have a |
|
strong ability to measure the perceptual quality of various kinds of |
|
UGC videos. For the test-1080p subset of the LSVQ database, the |
|
proposed model is inferior to Li et al., which may be because the spa- |
|
tial resolution of most videos in the test-1080p subset is larger than |
|
1080p while the proposed model resizes the spatial resolution of test |
|
videos into 448×448, so the proposed model has a relatively poor |
|
ability to represent the characteristics of high-resolution videos. |
|
Through the multi-scale quality weighting fusion strategy, the pro- |
|
posed model can significantly improve the performance on the |
|
test-1080p subset. |
|
Then, most handcrafted feature based IQA models perform poorly |
|
on these UGC VQA databases especially for the LBVD and LIVE- |
|
YT-Gaming databases since they are designed for natural scene |
|
images with synthetic distortions and can hardly handle the complex in-the-wild distortions and other video types such as gaming, live broadcasting, etc. It is worth noting that by fine-tuning the deep CNN baseline, i.e., ResNet50, on the VQA databases, it can achieve a
|
pretty good performance, which also indicates that spatial features |
|
are very important for VQA tasks. For the NR VQA methods, the |
|
hand-crafted feature based NR VQA methods such as TLVQM and |
|
VIDEVAL achieve fairly good performance by incorporating the
|
rich spatial and temporal quality features, such as NSS features, |
|
motion features, etc., but they are inferior to the deep learning |
|
based NR VQA methods due to the strong feature representation |
|
ability of CNN. VSFA extracts the spatial features from the pre- |
|
trained image recognition model, which are not quality-aware, and |
|
achieves relatively poor performance when compared with other |
|
deep learning based methods. PVQ and Li et al. methods both uti- |
|
lize the pretrained IQA model and the pretrained action recognition
|
model to extract spatial and motion features respectively, and they |
|
perform better than other compared NR I/VQA methods but are |
|
inferior to the proposed model. Through training an end-to-end spatial feature extractor, the proposed model can take advantage of various video content and distortion types in the UGC databases and learn a better quality-aware feature representation. As a result, the proposed model achieves the best performance on all five UGC VQA databases.

Table 4: The results of ablation studies on the LSVQ database. S and M mean the spatial features and motion features respectively, and S∗ means that the spatial features are extracted by the pretrained image classification network.

Module | Variant | Test SRCC/PLCC | Test-1080p SRCC/PLCC
Feature | S∗ + M | 0.847/0.841 | 0.732/0.774
Feature | S | 0.827/0.829 | 0.702/0.757
Feature | M | 0.660/0.669 | 0.569/0.621
Regression | GRU | 0.858/0.855 | 0.735/0.788
Regression | Transformer | 0.860/0.861 | 0.753/0.799
Pooling | Method in [15] | 0.860/0.858 | 0.733/0.786
Pooling | 1D CNN based | 0.864/0.862 | 0.739/0.790

Table 5: The SRCC results of cross-database evaluation. The model is trained on the LSVQ database.

Method | KoNViD-1k | YouTube-UGC | LBVD | LIVE-YT-Gaming
Pro. | 0.860 | 0.789 | 0.689 | 0.642
Pro. M.S. | 0.859 | 0.822 | 0.711 | 0.683
|
4.3 Ablation Studies |
|
In this section, we conduct several ablation studies to investigate |
|
the effectiveness of each module in the proposed model, including |
|
the feature extraction module, and the quality regression module. |
|
All the experiments are tested on the LSVQ database since it is the |
|
largest UGC VQA database and is more representative.
|
4.3.1 Feature Extraction Module. The proposed model consists |
|
of the spatial feature extractor that learns the end-to-end spatial |
|
quality-aware features and the motion feature extractor that utilizes |
|
a pretrained action recognition model to represent motion informa- |
|
tion. Therefore, we first do not train the spatial feature extractor |
|
and directly use the weights trained on the ImageNet database to |
|
study the effect of the end-to-end training strategy for the spatial |
|
feature extractor. Then, we only use the end-to-end trained spatial |
|
features or the pretrained motion features to evaluate the quality of |
|
UGC videos to investigate the effect of these two kinds of features. |
|
The results are listed in Table 4. First, it is observed that the model |
|
using the motion features is inferior to the model using the spatial |
|
features and both of them are inferior to the proposed model, which |
|
indicates that both spatial and motion features are beneficial to the |
|
UGC VQA task and the spatial features are more important. Then, |
|
we find that end-to-end training for the spatial feature extractor can |
|
significantly improve the evaluation performance, which demon- |
|
strates that the end-to-end trained spatial features have a better representation ability than those extracted by the pretrained image classification model.
|
Table 6: Comparison of computational complexity for the six VQA models and the two proposed models. Time: seconds.

Method | V-BLIINDS | TLVQM | VIDEVAL | VSFA | RAPIQUE | Li et al. | Pro. | Pro. M.S.
Time (s) | 61.982 | 219.992 | 561.408 | 56.424 | 38.126 | 61.971 | 6.929 | 8.448
|
4.3.2 Quality Regression Module. In this paper, we use the MLP |
|
as the regression model to derive the chunk-level quality scores. |
|
However, in previous studies, some sequential models such as GRU |
|
[15], Transformer [ 14], etc. are also adopted to further consider the |
|
influence of the features extracted from adjacent frames. Here, we |
|
also adopt these methods as a comparison to investigate whether |
|
sequential models can improve the performance of the proposed |
|
models. Specifically, we replace the MLP module with the GRU and |
|
Transformer and keep other experimental setups the same. The |
|
results are listed in Table 4. We observe that models using GRU |
|
and Transformer are both inferior to the proposed model, which |
|
means that the MLP module is enough to regress the quality-aware |
|
features to quality scores though it is very simple. This conclusion |
|
is also consistent with [ 37]. The reason is that the proposed model |
|
and the model in [37] calculate chunk-level quality scores, and the effect of adjacent frames is considered in the quality-aware
|
features (i.e. motion features), while other VQA models [ 15] [14] |
|
calculate the frame-level quality scores, which may need to consider |
|
the effect of adjacent frames in the quality regression module. |
|
4.3.3 Quality Pooling Module. The proposed model uses the tem- |
|
poral average pooling method to fuse the chunk-level quality scores |
|
into the video level. It is noted that previous studies also propose |
|
several temporal pooling methods for VQA. In this section, we test |
|
two temporal pooling methods, which are the subjectively-inspired |
|
method introduced in [ 15] and a learning based temporal pooling |
|
method using the 1D CNN. The results are listed in Table 4. From |
|
Table 4, we observe that the average pooling strategy achieves sim- |
|
ilar performance to the learning based pooling method, and both of |
|
them are superior to the subjectively-inspired methods. Since the |
|
average pooling strategy is simpler and does not increase the extra |
|
parameters, we use the temporal average pooling method in this |
|
paper. |
|
4.4 Cross-Database Evaluation |
|
UGC videos may contain various kinds of distortions and content, |
|
most of which may not exist in the training set. Hence, the gen- |
|
eralization ability of the UGC VQA model is very important. In |
|
this section, we use the cross-database evaluation to test the gener- |
|
alization ability of the proposed model. Specifically, we train the |
|
proposed model on the LSVQ database and test the trained model |
|
on the other four UGC VQA databases. We list the results in Table |
|
5. It is observed that the proposed model achieves excellent per- |
|
formance in cross-database evaluation. The SRCC results on the |
|
KoNViD-1k and YouTube-UGC databases both exceed 0.8, which |
|
have surpassed most VQA models trained on the corresponding |
|
database. We find that the multi-scale quality fusion strategy can |
|
significantly improve the performance on the databases containing |
|
videos with different spatial resolutions (YouTube-UGC, LBVD, and |
|
LIVE-YT-Gaming), which further demonstrates its effectiveness. It is also observed that the performance on the LBVD and LIVE-YT-Gaming databases is not as good as that on the other two databases. The
|
reason is that the LBVD and LIVE-YT-Gaming databases contain |
|
live broadcasting and gaming videos respectively, which may rarely |
|
exist in the LSVQ database. Since a single database cannot cover all kinds of video types and distortions, we may further improve the generalization ability of the proposed model via a multiple database training strategy [30, 51] or a continual learning manner [49, 50].
|
4.5 Computational Complexity |
|
The computational complexity is a very important factor that needs |
|
to be considered in practical applications. Hence, we test the com- |
|
putational complexity in this section. All models are tested on a |
|
computer with i7-6920HQ CPU, 16G RAM, and NVIDIA Quadro |
|
P400. The deep learning based models and the handcrafted based |
|
models are tested using the GPU and CPU respectively. We report |
|
the running time for a video with a resolution of 1920×1080 and a
|
time duration of eight seconds in Table 6. It is seen that the proposed |
|
model has a considerably low running time compared with other |
|
VQA models. The reason is that we use very sparse frames to calcu- |
|
late the spatial features while other deep learning based methods |
|
need dense frames. Moreover, we extract the motion features at a |
|
very low resolution, which only adds little computational complex- |
|
ity to the proposed model. The very low computational complexity |
|
makes the proposed model suitable for practical applications. |
|
5 CONCLUSION |
|
In this paper, we propose an effective and efficient NR VQA model |
|
for UGC videos. The proposed model extracts the quality-aware |
|
features from the spatial domain and the spatial-temporal domain to |
|
measure the spatial distortions and motion distortions respectively. |
|
We train the spatial feature extractor in an end-to-end training |
|
manner, so the proposed model can make full use of the various |
|
spatial distortions and content in the current VQA database. Then, |
|
the quality-aware features are regressed into the quality scores |
|
by the MLP network, and the temporal average pooling is used |
|
to obtain the video-level quality scores. We further introduce the |
|
multi-scale quality fusion strategy to address the problem of quality |
|
assessment across different spatial resolutions. The experimental |
|
results show that the proposed model can effectively measure the |
|
quality of UGC videos. |
|
ACKNOWLEDGMENTS |
|
This work was supported by the National Natural Science Foun- |
|
dation of China (61831015, 61901260) and the National Key R&D |
|
Program of China 2021YFE0206700.
|
REFERENCES |
|
[1]Jochen Antkowiak, T Jamal Baina, France Vittorio Baroncini, Noel Chateau, |
|
France FranceTelecom, Antonio Claudio França Pessoa, F Stephanie Colonnese, |
|
Italy Laura Contin, Jorge Caviedes, and France Philips. 2000. Final report from |
|
the video quality experts group on the validation of objective models of video |
|
quality assessment march 2000. (2000). |
|
[2]Yuqin Cao, Xiongkuo Min, Wei Sun, and Guangtao Zhai. 2021. Deep Neural Net- |
|
works For Full-Reference And No-Reference Audio-Visual Quality Assessment. |
|
In2021 IEEE International Conference on Image Processing (ICIP) . IEEE, 1429–1433. |
|
[3]Pengfei Chen, Leida Li, Yipo Huang, Fengfeng Tan, and Wenjun Chen. 2019. QoE |
|
evaluation for live broadcasting video. In 2019 IEEE International Conference on |
|
Image Processing (ICIP) . IEEE, 454–458. |
|
[4]Pengfei Chen, Leida Li, Lei Ma, Jinjian Wu, and Guangming Shi. 2020. RIRNet: |
|
Recurrent-in-recurrent network for video quality assessment. In Proceedings of |
|
the 28th ACM International Conference on Multimedia . 834–842. |
|
[5]Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: |
|
A large-scale hierarchical image database. In 2009 IEEE conference on computer |
|
vision and pattern recognition . Ieee, 248–255. |
|
[6]Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slow- |
|
fast networks for video recognition. In Proceedings of the IEEE/CVF international |
|
conference on computer vision . 6202–6211. |
|
[7]Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual |
|
learning for image recognition. In Proceedings of the IEEE conference on computer |
|
vision and pattern recognition . 770–778. |
|
[8]Vlad Hosu, Franz Hahn, Mohsen Jenadeleh, Hanhe Lin, Hui Men, Tamás Szirányi, |
|
Shujun Li, and Dietmar Saupe. 2017. The Konstanz natural video database |
|
(KoNViD-1k). In 2017 Ninth international conference on quality of multimedia |
|
experience (QoMEX) . IEEE, 1–6. |
|
[9]Vlad Hosu, Hanhe Lin, Tamas Sziranyi, and Dietmar Saupe. 2020. KonIQ-10k: An |
|
ecologically valid database for deep learning of blind image quality assessment. |
|
IEEE Transactions on Image Processing 29 (2020), 4041–4056. |
|
[10] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra |
|
Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al .2017. |
|
The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017). |
|
[11] Woojae Kim, Jongyoo Kim, Sewoong Ahn, Jinwoo Kim, and Sanghoon Lee. 2018. |
|
Deep video quality assessor: From spatio-temporal visual sensitivity to a convo- |
|
lutional neural aggregation network. In Proceedings of the European Conference |
|
on Computer Vision (ECCV) . 219–234. |
|
[12] Jari Korhonen. 2019. Two-level approach for no-reference consumer video quality |
|
assessment. IEEE Transactions on Image Processing 28, 12 (2019), 5923–5938. |
|
[13] Jari Korhonen, Yicheng Su, and Junyong You. 2020. Blind natural video quality |
|
prediction via statistical temporal features and deep spatial features. In Proceed- |
|
ings of the 28th ACM International Conference on Multimedia . 3311–3319. |
|
[14] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. 2021. |
|
Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and |
|
Motion Perception. arXiv preprint arXiv:2108.08505 (2021). |
|
[15] Dingquan Li, Tingting Jiang, and Ming Jiang. 2019. Quality assessment of in-the- |
|
wild videos. In Proceedings of the 27th ACM International Conference on Multimedia . |
|
2351–2359. |
|
[16] Dingquan Li, Tingting Jiang, and Ming Jiang. 2021. Unified quality assessment |
|
of in-the-wild videos with mixed datasets training. International Journal of |
|
Computer Vision 129, 4 (2021), 1238–1257. |
|
[17] Wentao Liu, Zhengfang Duanmu, and Zhou Wang. 2018. End-to-End Blind |
|
Quality Assessment of Compressed Videos Using Deep Neural Networks.. In |
|
ACM Multimedia . 546–554. |
|
[18] Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Jun He, Qiyuan Wang, |
|
Zicheng Zhang, Tao Wang, and Guangtao Zhai. 2022. Deep Neural Network for |
|
Blind Visual Quality Assessment of 4K Content. arXiv preprint arXiv:2206.04363 |
|
(2022). |
|
[19] Pavan C Madhusudana, Xiangxu Yu, Neil Birkbeck, Yilin Wang, Balu Adsumilli, |
|
and Alan C Bovik. 2021. Subjective and objective quality assessment of high |
|
frame rate videos. IEEE Access 9 (2021), 108069–108082. |
|
[20] Xiongkuo Min, Kede Ma, Ke Gu, Guangtao Zhai, Zhou Wang, and Weisi Lin. |
|
2017. Unified blind quality assessment of compressed natural, graphic, and screen |
|
content images. IEEE Transactions on Image Processing 26, 11 (2017), 5462–5474. |
|
[21] Xiongkuo Min, Guangtao Zhai, Jiantao Zhou, Mylene CQ Farias, and Alan Conrad |
|
Bovik. 2020. Study of subjective and objective quality assessment of audio-visual |
|
signals. IEEE Transactions on Image Processing 29 (2020), 6054–6068. |
|
[22] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. 2012. No- |
|
reference image quality assessment in the spatial domain. IEEE Transactions on |
|
image processing 21, 12 (2012), 4695–4708. |
|
[23] Anish Mittal, Michele A Saad, and Alan C Bovik. 2015. A completely blind video |
|
integrity oracle. IEEE Transactions on Image Processing 25, 1 (2015), 289–300. |
|
[24] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. 2012. Making a “completely |
|
blind” image quality analyzer. IEEE Signal processing letters 20, 3 (2012), 209–212. |
|
[25] Abdul Rehman, Kai Zeng, and Zhou Wang. 2015. Display device-adapted video |
|
quality-of-experience assessment. In Human Vision and Electronic Imaging XX ,Vol. 9394. International Society for Optics and Photonics, 939406. |
|
[26] Michele A Saad, Alan C Bovik, and Christophe Charrier. 2014. Blind prediction |
|
of natural video quality. IEEE Transactions on Image Processing 23, 3 (2014), |
|
1352–1365. |
|
[27] Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks |
|
for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). |
|
[28] Wei Sun, Xiongkuo Min, Guangtao Zhai, Ke Gu, Huiyu Duan, and Siwei Ma. 2019. |
|
MC360IQA: a multi-channel CNN for blind 360-degree image quality assessment. |
|
IEEE Journal of Selected Topics in Signal Processing 14, 1 (2019), 64–77. |
|
[29] Wei Sun, Xiongkuo Min, Guangtao Zhai, Ke Gu, Siwei Ma, and Xiaokang Yang. |
|
2020. Dynamic backlight scaling considering ambient luminance for mobile |
|
videos on lcd displays. IEEE Transactions on Mobile Computing (2020). |
|
[30] Wei Sun, Xiongkuo Min, Guangtao Zhai, and Siwei Ma. 2021. Blind quality |
|
assessment for in-the-wild images via hierarchical feature fusion and iterative |
|
mixed database training. arXiv preprint arXiv:2105.14550 (2021). |
|
[31] Wei Sun, Tao Wang, Xiongkuo Min, Fuwang Yi, and Guangtao Zhai. 2021. Deep |
|
learning based full-reference and no-reference quality assessment models for |
|
compressed ugc videos. In 2021 IEEE International Conference on Multimedia & |
|
Expo Workshops (ICMEW) . IEEE, 1–6. |
|
[32] Zhengzhong Tu, Chia-Ju Chen, Li-Heng Chen, Neil Birkbeck, Balu Adsumilli, |
|
and Alan C Bovik. 2020. A comparative evaluation of temporal pooling methods |
|
for blind video quality assessment. In 2020 IEEE International Conference on Image |
|
Processing (ICIP) . IEEE, 141–145. |
|
[33] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. |
|
2021. UGC-VQA: Benchmarking blind video quality assessment for user generated |
|
content. IEEE Transactions on Image Processing (2021). |
|
[34] Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and |
|
Alan C Bovik. 2021. Rapique: Rapid and accurate video quality prediction of user |
|
generated content. arXiv preprint arXiv:2101.10955 (2021). |
|
[35] Tao Wang, Zicheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao |
|
Zhai. 2022. Subjective Quality Assessment for Images Generated by Computer |
|
Graphics. arXiv preprint arXiv:2206.05008 (2022). |
|
[36] Yilin Wang, Sasi Inguva, and Balu Adsumilli. 2019. YouTube UGC dataset for video |
|
compression research. In 2019 IEEE 21st International Workshop on Multimedia |
|
Signal Processing (MMSP) . IEEE, 1–5. |
|
[37] Yilin Wang, Junjie Ke, Hossein Talebi, Joong Gon Yim, Neil Birkbeck, Balu |
|
Adsumilli, Peyman Milanfar, and Feng Yang. 2021. Rich features for percep- |
|
tual quality assessment of UGC videos. In Proceedings of the IEEE/CVF Conference |
|
on Computer Vision and Pattern Recognition . 13435–13444. |
|
[38] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. 2003. Multiscale structural sim- |
|
ilarity for image quality assessment. In The Thrity-Seventh Asilomar Conference |
|
on Signals, Systems & Computers, 2003 , Vol. 2. Ieee, 1398–1402. |
|
[39] Shaoguo Wen and Junle Wang. 2021. A strong baseline for image and video |
|
quality assessment. arXiv preprint arXiv:2111.07104 (2021). |
|
[40] Jiahua Xu, Jing Li, Xingguang Zhou, Wei Zhou, Baichao Wang, and Zhibo Chen. |
|
2021. Perceptual Quality Assessment of Internet Videos. In Proceedings of the |
|
29th ACM International Conference on Multimedia . 1248–1257. |
|
[41] Wufeng Xue, Xuanqin Mou, Lei Zhang, Alan C Bovik, and Xiangchu Feng. 2014. |
|
Blind image quality assessment using joint statistics of gradient magnitude and |
|
Laplacian features. IEEE Transactions on Image Processing 23, 11 (2014), 4850– |
|
4862. |
|
[42] Peng Ye, Jayant Kumar, Le Kang, and David Doermann. 2012. Unsupervised |
|
feature learning framework for no-reference image quality assessment. In 2012 |
|
IEEE conference on computer vision and pattern recognition . IEEE, 1098–1105. |
|
[43] Fuwang Yi, Mianyi Chen, Wei Sun, Xiongkuo Min, Yuan Tian, and Guangtao |
|
Zhai. 2021. Attention Based Network For No-Reference UGC Video Quality |
|
Assessment. In 2021 IEEE International Conference on Image Processing (ICIP) . |
|
IEEE, 1414–1418. |
|
[44] Zhenqiang Ying, Maniratnam Mandal, Deepti Ghadiyaram, and Alan Bovik. 2021. |
|
Patch-VQ:’Patching Up’the Video Quality Problem. In Proceedings of the IEEE/CVF |
|
Conference on Computer Vision and Pattern Recognition . 14019–14029. |
|
[45] Xiangxu Yu, Zhenqiang Ying, Neil Birkbeck, Yilin Wang, Balu Adsumilli, and |
|
Alan C Bovik. 2022. Subjective and Objective Analysis of Streamed Gaming |
|
Videos. arXiv preprint arXiv:2203.12824 (2022). |
|
[46] Saman Zadtootaghaj, Nabajeet Barman, Steven Schmidt, Maria G Martini, and |
|
Sebastian Möller. 2018. NR-GVQM: A no reference gaming video quality metric. |
|
In2018 IEEE International Symposium on Multimedia (ISM) . IEEE, 131–134. |
|
[47] Saman Zadtootaghaj, Steven Schmidt, Saeed Shafiee Sabet, Sebastian Möller, and |
|
Carsten Griwodz. 2020. Quality estimation models for gaming video streaming |
|
services using perceptual video quality dimensions. In Proceedings of the 11th |
|
ACM Multimedia Systems Conference . 213–224. |
|
[48] Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolu- |
|
tional networks. In European conference on computer vision . Springer, 818–833. |
|
[49] Weixia Zhang, Dingquan Li, Chao Ma, Guangtao Zhai, Xiaokang Yang, and Kede |
|
Ma. 2021. Continual learning for blind image quality assessment. arXiv preprint |
|
arXiv:2102.09717 (2021). |
|
[50] Weixia Zhang, Kede Ma, Guangtao Zhai, and Xiaokang Yang. 2021. Task-specific |
|
normalization for continual learning of blind image quality models. arXiv preprint
|
arXiv:2107.13429 (2021). |
|
[51] Weixia Zhang, Kede Ma, Guangtao Zhai, and Xiaokang Yang. 2021. Uncertainty- |
|
aware blind image quality assessment in the laboratory and wild. IEEE Transac- |
|
tions on Image Processing 30 (2021), 3474–3486. |
|
[52] Qi Zheng, Zhengzhong Tu, Yibo Fan, Xiaoyang Zeng, and Alan C Bovik. 2022. |
|
No-Reference Quality Assessment of Variable Frame-Rate Videos Using Tem- |
|
poral Bandpass Statistics. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1795–1799.
|
[53] Qi Zheng, Zhengzhong Tu, Pavan C Madhusudana, Xiaoyang Zeng, Alan C Bovik, |
|
and Yibo Fan. 2022. FAVER: Blind Quality Prediction of Variable Frame Rate |
|
Videos. arXiv preprint arXiv:2201.01492 (2022). |