Learning Transferable Self-attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision

AAAI 2019, 2019.07.17

Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Kai Zheng, Xiaobin Zhu, Lixin Duan.

Abstract

Action recognition in videos has attracted considerable attention over the past decade. To learn robust models, previous methods usually assume that videos are trimmed into short sequences and require ground-truth annotations for each video frame/sequence, which is costly and time-consuming. In this paper, given only video-level annotations, we propose a novel weakly supervised framework that simultaneously locates action frames and recognizes actions in untrimmed videos. Our proposed framework consists of two major components. First, for action frame localization, we take advantage of the self-attention mechanism to weight each frame, so that the influence of background frames can be effectively eliminated. Second, considering that publicly available trimmed videos contain useful information to leverage, we present an additional module that transfers knowledge from trimmed videos to improve classification performance on untrimmed ones. Extensive experiments are conducted on two benchmark datasets (i.e., THUMOS14 and ActivityNet 1.3), and the experimental results clearly corroborate the efficacy of our method.
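For intuition only, the sketch below illustrates the general idea of self-attentive frame weighting described above. It is not the authors' implementation: the PyTorch framing, the feature dimension, the attention layer sizes, and the class count are all illustrative assumptions. A small attention branch scores each frame, the softmax-normalized scores pool the per-frame features into a single video-level representation, and the classifier is trained with only video-level labels.

```python
import torch
import torch.nn as nn


class SelfAttentiveFrameClassifier(nn.Module):
    """Illustrative sketch (not the paper's code): weight per-frame features
    with a self-attention branch, pool them into a video-level feature, and
    classify using only video-level supervision."""

    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        # Attention branch: one scalar relevance score per frame (hypothetical sizes).
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.Tanh(),
            nn.Linear(256, 1),
        )
        # Video-level classifier applied to the attention-pooled feature.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), e.g. pre-extracted CNN features.
        scores = self.attention(frame_feats)             # (batch, num_frames, 1)
        weights = torch.softmax(scores, dim=1)           # normalize over the temporal axis
        video_feat = (weights * frame_feats).sum(dim=1)  # attention-weighted temporal pooling
        logits = self.classifier(video_feat)             # video-level class scores
        return logits, weights.squeeze(-1)               # weights serve as frame localization cues


if __name__ == "__main__":
    model = SelfAttentiveFrameClassifier(feat_dim=1024, num_classes=20)
    feats = torch.randn(2, 400, 1024)                    # 2 untrimmed videos, 400 frames each
    logits, frame_weights = model(feats)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([3, 7]))  # video-level labels only
    loss.backward()
```

In such a scheme, background frames receive small attention weights and contribute little to the pooled feature, while the learned weights themselves can be read off as per-frame action localization scores, which matches the role the abstract assigns to the self-attention component.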