Abstract
Background subtraction refers to extracting the foreground from an observed video and is a fundamental problem in various applications. Two kinds of methods are popular for background separation, namely, robust principal component analysis (RPCA) and low-rank matrix factorization (LRMF). Nevertheless, the drawback of RPCA is that it requires tuning a penalty parameter to attain an ideal result. Compared with RPCA, $\ell_1$-norm based LRMF does not involve extra parameter tuning, but the resulting minimization is challenging to optimize because the $\ell_1$-norm is nonsmooth. In addition, finding the optimal solution is time-consuming. In this work, we propose to employ the smooth $\ell_{1,\epsilon}$-norm, an approximation of the $\ell_1$-norm, to tackle background subtraction. The proposed model thus inherits the superiority of LRMF while remaining tractable. The resultant optimization problem is solved by alternating minimization and gradient descent, where the step size of the gradient descent is adaptively updated via a backtracking line search. The proposed method is proved to be locally convergent. Experimental results on synthetic and real-world data demonstrate that our method outperforms state-of-the-art algorithms in terms of reconstruction loss, computational speed, and hardware performance.
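To make the described pipeline concrete, the following is a minimal NumPy sketch of smooth $\ell_{1,\epsilon}$-based LRMF solved by alternating minimization with backtracked gradient steps, as outlined above. The specific surrogate $\sqrt{x^2 + \epsilon^2}$, the function names, and all hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_eps(X, eps):
    """Smooth l1,eps surrogate: sum of sqrt(x^2 + eps^2).
    (One common smoothing of the l1-norm; the paper's exact
    definition may differ.)"""
    return np.sum(np.sqrt(X**2 + eps**2))

def grad_l1_eps(X, eps):
    """Elementwise gradient of the smooth surrogate."""
    return X / np.sqrt(X**2 + eps**2)

def backtracking_step(f, grad, x, d, t0=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking line search along descent direction d."""
    t = t0
    fx = f(x)
    slope = c * np.sum(grad * d)  # negative for a descent direction
    while f(x + t * d) > fx + t * slope:
        t *= beta
    return t

def lrmf_l1_eps(M, rank, eps=1e-3, iters=100):
    """Alternating minimization for min_{U,V} l1_eps(M - U V^T):
    each iteration takes one backtracked gradient step in U with V
    fixed, then one in V with U fixed. A sketch, not the authors' code."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Gradient step in U with V fixed.
        R = M - U @ V.T
        G_U = -grad_l1_eps(R, eps) @ V
        f_U = lambda U_: l1_eps(M - U_ @ V.T, eps)
        t = backtracking_step(f_U, G_U, U, -G_U)
        U = U - t * G_U
        # Gradient step in V with U fixed.
        R = M - U @ V.T
        G_V = -grad_l1_eps(R, eps).T @ U
        f_V = lambda V_: l1_eps(M - U @ V_.T, eps)
        t = backtracking_step(f_V, G_V, V, -G_V)
        V = V - t * G_V
    return U, V  # background ~ U @ V.T, foreground ~ M - U @ V.T
```

For video data, M would hold one vectorized frame per column; the recovered low-rank product U V^T then models the static background, and the residual M - U V^T gives the moving foreground.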