Deep Blind Video Quality Assessment for User Generated Videos

Abstract

As the short-video industry grows, quality assessment of user-generated videos has become a pressing problem. Existing no-reference video quality assessment methods are not suitable for this application scenario, since they target synthetically distorted videos. In this paper, we propose a novel deep blind quality assessment model for user-generated videos that accounts for content variety and the temporal memory effect. Content-aware features of frames are extracted through a deep neural network, and a patch-based method is adopted to obtain per-frame quality scores. Moreover, we propose a temporal memory-based pooling model that exploits the temporal memory effect to predict overall video quality. Experimental results on the KoNViD-1k and LIVE-VQC databases demonstrate that our proposed method outperforms other state-of-the-art approaches, and a comparative analysis confirms the effectiveness of our temporal pooling model.
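The temporal memory effect mentioned above refers to the observation that viewers remember recent quality drops, so a quality dip degrades the perceived quality of subsequent frames. As an illustration only (the paper's exact pooling model is not reproduced here; the window size `tau` and blending weight `gamma` are hypothetical parameters), a minimal memory-based pooling over per-frame scores might look like:

```python
def temporal_memory_pool(frame_scores, tau=12, gamma=0.5):
    """Pool per-frame quality scores into a single video score,
    sketching a temporal memory (hysteresis) effect: each frame's
    effective quality is pulled toward the worst score seen in a
    short memory window of preceding frames."""
    pooled = []
    for t, q in enumerate(frame_scores):
        # memory component: worst quality in the last `tau` frames
        memory = min(frame_scores[max(0, t - tau):t + 1])
        # blend the current frame's quality with the memory component
        pooled.append(gamma * memory + (1.0 - gamma) * q)
    # simple average of the memory-adjusted frame scores
    return sum(pooled) / len(pooled)
```

Under this sketch, a brief quality drop lowers the video-level score more than plain averaging would, which matches the hysteresis behaviour the abstract describes.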

Publication
2020 IEEE International Conference on Visual Communications and Image Processing (VCIP)
Jiapeng Tang
Master Student

I’m a master’s student at SJTU Media Lab, doing research on object detection and video quality assessment (VQA) under the direction of Prof. Li Song.

Yu Dong
PhD Student

I’m a PhD candidate at SJTU Media Lab, doing research on video processing and low-latency end-to-end video systems under the direction of Prof. Li Song.

Li Song
Professor, IEEE Senior Member

Professor and Doctoral Supervisor; Deputy Director of the Institute of Image Communication and Network Engineering at Shanghai Jiao Tong University; Double-Appointed Professor of the Institute of Artificial Intelligence and the Collaborative Innovation Center of Future Media Network; and Deputy Secretary-General of the China Video User Experience Alliance, where he heads the standards group.