SSSC-AM: A Unified Framework for Video Co-Segmentation by Structured Sparse Subspace Clustering with Appearance and Motion Features (arXiv 1603.04139)
by Junlin Yao and Frank Nielsen
Video co-segmentation refers to the task of jointly segmenting common objects appearing in a given group of videos. In practice, high-dimensional data such as videos can be conceptually thought of as being drawn from a union of subspaces corresponding to categories, rather than from a smooth manifold. Therefore, segmenting data into their respective subspaces (subspace clustering) finds widespread applications in computer vision, including co-segmentation. In this work, we present a novel unified video co-segmentation framework inspired by the recent Structured Sparse Subspace Clustering (S3C), which is based on the self-expressiveness model. Our method yields more consistent segmentation results. To improve the handling of motion features with missing trajectory entries, caused by occlusion or by tracked points moving out of frame, we append an extra-dimensional signature to the motion trajectories. Moreover, we reformulate the S3C algorithm by adding an affine subspace constraint, making it better suited to segmenting rigid motions, which lie in affine subspaces of dimension at most 3. Our experiments on the MOViCS dataset demonstrate the effectiveness of our framework.
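The extra-dimensional signature for incomplete trajectories can be illustrated with a small sketch. This is our own toy reading, not the paper's exact construction: missing entries are zero-filled and one extra coordinate, weighted by a hypothetical `scale` parameter, records how incomplete each trajectory is, so that incomplete trajectories are separated from complete ones in feature space.

```python
import numpy as np

def add_signature(trajs, scale=10.0):
    """Append an extra coordinate to each trajectory (one column per
    trajectory) that encodes how many of its entries were observed.

    trajs: (2F, N) array of x/y coordinates over F frames; NaN marks
    frames where the point was occluded or moved out of frame.
    Returns a (2F + 1, N) array with NaNs zero-filled and a signature
    row appended. `scale` is an illustrative weight, not a value from
    the paper.
    """
    observed = ~np.isnan(trajs)
    filled = np.where(observed, trajs, 0.0)
    # Fraction of missing entries per trajectory, scaled up so that
    # incomplete trajectories stand apart from complete ones.
    signature = scale * (1.0 - observed.mean(axis=0))
    return np.vstack([filled, signature])
```

A fully observed trajectory gets signature 0 and is unaffected, while a trajectory with missing frames is displaced along the extra axis in proportion to how much of it is missing.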
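The affine subspace constraint in the self-expressiveness model requires each data point to be written as an affine combination of the others, i.e. the coefficients sum to 1. The following minimal sketch shows only that constraint: it substitutes a ridge penalty for the l1 and structured terms of S3C so that each subproblem has a closed-form KKT solution (the real algorithm is an l1-regularized optimization, and `lam` is a hypothetical parameter).

```python
import numpy as np

def affine_self_expression(Y, lam=1e-3):
    """For each column y_i of Y, solve
        min_c ||y_i - Y_{-i} c||^2 + lam * ||c||^2   s.t.  1^T c = 1
    via the KKT linear system. Ridge stand-in for the l1 objective of
    S3C; the affine constraint 1^T c = 1 is the one discussed above.
    Returns the (N, N) coefficient matrix C with zero diagonal.
    """
    D, N = Y.shape
    C = np.zeros((N, N))
    ones = np.ones(N - 1)
    idx = np.arange(N)
    for i in range(N):
        others = idx != i
        Yi = Y[:, others]                        # all points except y_i
        A = Yi.T @ Yi + lam * np.eye(N - 1)      # regularized Gram matrix
        # KKT system for the equality constraint 1^T c = 1:
        # [2A  1] [c]   [2 Yi^T y_i]
        # [1^T 0] [nu] = [    1    ]
        K = np.block([[2.0 * A, ones[:, None]],
                      [ones[None, :], np.zeros((1, 1))]])
        rhs = np.concatenate([2.0 * Yi.T @ Y[:, i], [1.0]])
        sol = np.linalg.solve(K, rhs)
        C[others, i] = sol[:-1]                  # drop the multiplier nu
    return C
```

Each column of the resulting coefficient matrix sums to 1 by construction, which is exactly what restricts the representation to affine (rather than linear) subspaces; an affinity matrix such as |C| + |C|^T can then be fed to spectral clustering.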
Download the Python code for reproducible research: sssc_amf.zip
Download the MOViCS dataset from http://www.d2.mpi-inf.mpg.de/datasets. For preprocessing, run the Temporal Superpixel code. After obtaining the superpixels, save the preprocessing results (.mat files) in the same folder as the dataset. Then run cosegmentation.py with appropriate parameter settings.
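Since cosegmentation.py expects the Temporal Superpixel results next to the videos, a quick stdlib check can confirm the layout before running it. This helper is our own and not part of the released code; it assumes one subdirectory per video with a matching <name>.mat file beside it, which may differ from your local layout.

```python
import os

def missing_superpixel_results(dataset_dir):
    """Return names of video subdirectories under dataset_dir that
    lack a matching <name>.mat file with Temporal Superpixel results.
    Hypothetical layout check: one folder per video, with the .mat
    saved in the same folder as the dataset.
    """
    missing = []
    for name in sorted(os.listdir(dataset_dir)):
        path = os.path.join(dataset_dir, name)
        if os.path.isdir(path) and not os.path.isfile(path + ".mat"):
            missing.append(name)
    return missing
```

An empty return value means every video has its preprocessing result in place.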
Some experimental results are shown as follows: