Deep Blind Video Quality Assessment for User Generated Videos

Abstract

As the short-video industry grows, quality assessment of user-generated videos has become a pressing issue. Existing no-reference video quality assessment methods are ill-suited to this scenario because they target synthetically distorted videos. In this paper, we propose a novel deep blind quality assessment model for user-generated videos that accounts for content diversity and the temporal memory effect. Content-aware frame features are extracted with a deep neural network, and a patch-based method produces per-frame quality scores. We further propose a temporal memory-based pooling model that exploits the temporal memory effect to predict overall video quality. Experiments on the KoNViD-1k and LIVE-VQC databases show that our method outperforms state-of-the-art approaches, and a comparative analysis confirms the effectiveness of our temporal pooling model.
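To illustrate the temporal memory effect the abstract refers to, here is a minimal sketch of one common way to model it: each frame's perceived quality is pulled toward the worst quality seen in a short preceding window, since viewers remember recent quality drops. The window length `tau`, blend weight `gamma`, and the min-over-window memory term are illustrative assumptions, not the paper's actual pooling model.

```python
import numpy as np

def temporal_memory_pool(frame_scores, tau=12, gamma=0.5):
    """Pool per-frame quality scores into a single video score with a
    simple temporal-memory (hysteresis) effect.

    frame_scores : sequence of per-frame quality predictions
    tau          : memory window length in frames (assumed value)
    gamma        : blend between the memory term and the current score
    """
    q = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(q)
    for t in range(len(q)):
        # Memory element: worst quality in the preceding window,
        # reflecting that recent low-quality moments linger in memory.
        lo = max(0, t - tau)
        memory = q[lo:t].min() if t > 0 else q[0]
        # Blend the current frame quality with the memory element.
        pooled[t] = gamma * memory + (1.0 - gamma) * q[t]
    # Video-level score: temporal average of the pooled values.
    return float(pooled.mean())
```

With this scheme, a brief quality dip depresses the video score for several subsequent frames, so the pooled score is lower than a plain temporal average of the same per-frame scores.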

Publication
2020 IEEE International Conference on Visual Communications and Image Processing (VCIP)
Jiapeng Tang
Master Student
Yu Dong
PhD Student
Li Song
Professor, IEEE Senior Member