CNN-Based Shot Boundary Detection and Video Annotation

Wenjing Tong, Li Song, Xiaokang Yang, Hui Qu, Rong Xie


Abstract

With the explosive growth of video data, content-based video analysis and management technologies such as indexing, browsing and retrieval have drawn much attention. Video shot boundary detection (SBD) is usually the first and an important step for these technologies. Great efforts have been made to improve the accuracy of SBD algorithms. However, most existing works rely on low-level signal features rather than interpretable features of frames. In this paper we propose a novel video shot boundary detection framework based on interpretable TAGs learned by Convolutional Neural Networks (CNNs). Firstly, we adopt a candidate segment selection step to predict the positions of shot boundaries and discard most non-boundary frames. This preprocessing step improves both the accuracy and the speed of the SBD algorithm. Then, cut transition and gradual transition detection, both based on the interpretable TAGs, are conducted to identify the shot boundaries within the candidate segments. Afterwards, we synthesize the features of the frames within a shot to obtain semantic labels for the shot. Experiments on TRECVID 2001 test data show that the proposed scheme achieves better performance than state-of-the-art schemes. In addition, the semantic labels obtained by the framework can be used to describe the content of a shot.
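
The sketch below illustrates the candidate segment selection idea in a minimal form, assuming frames are decoded as NumPy arrays. The colour-histogram difference measure, the segment length `seg_len`, and the threshold factor `alpha` are illustrative assumptions, not the exact criterion used in the paper.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """L1-normalised RGB colour histogram (frame: HxWx3 uint8 array)."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(np.float64)
    return hist / hist.sum()

def candidate_segments(frames, seg_len=20, alpha=3.0):
    """Keep only segments whose maximum adjacent-frame histogram
    difference clearly exceeds the video-wide average difference.
    seg_len and alpha are illustrative parameters, not the paper's."""
    hists = [frame_histogram(f) for f in frames]
    diffs = np.array([np.abs(hists[i + 1] - hists[i]).sum()
                      for i in range(len(hists) - 1)])
    threshold = alpha * diffs.mean()
    candidates = []
    for start in range(0, len(diffs), seg_len):
        seg = diffs[start:start + seg_len]
        if seg.size and seg.max() > threshold:
            candidates.append((start, min(start + seg_len, len(frames) - 1)))
    return candidates
```

Segments that never exceed the threshold are discarded, so the more expensive CNN-based detection only runs on a small fraction of the video.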

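Within each candidate segment, the boundary decision and the shot-level annotation are driven by frame-level CNN features. The following sketch is a hedged illustration of that idea using cosine similarity between per-frame feature vectors and simple mean pooling of TAG scores; the thresholds, the decision rule, and the pooling strategy are assumptions standing in for the paper's TAG-based detectors rather than its actual method.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_segment(features, cut_thresh=0.6, grad_thresh=0.8):
    """Label a candidate segment from per-frame CNN feature vectors.

    A sharp drop in similarity between consecutive frames suggests a cut;
    a lower similarity between the first and last frames without any sharp
    drop suggests a gradual transition. Thresholds are illustrative."""
    consecutive = [cosine_similarity(features[i], features[i + 1])
                   for i in range(len(features) - 1)]
    if min(consecutive) < cut_thresh:
        return "cut", int(np.argmin(consecutive))
    if cosine_similarity(features[0], features[-1]) < grad_thresh:
        return "gradual", None
    return "no-boundary", None

def shot_labels(tag_scores, tag_names, top_k=3):
    """Aggregate per-frame TAG probability vectors over a shot and return
    the top-k semantic labels (a simple mean-pooling assumption)."""
    mean_scores = np.mean(np.asarray(tag_scores), axis=0)
    top = np.argsort(mean_scores)[::-1][:top_k]
    return [tag_names[i] for i in top]
```

Once the boundaries are fixed, `shot_labels` shows how per-frame TAG scores can be synthesised into a few semantic labels that describe the content of the shot.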
Results

Test Video 1


Test Video 2


More results are available at http://www.youku.com/playlist_show/id_23719147.html

Citation

W. Tong, L. Song, X. Yang, H. Qu, and R. Xie, "CNN-Based Shot Boundary Detection and Video Annotation," 2015 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB 2015), Ghent, Belgium, Jun. 17-19, 2015.