Visual tracking study links (Tracking Link)
Tracking benchmark collection: http://cvlab.hanyang.ac.kr/tracker_benchmark/benchmark.html — this link is essential: it includes MATLAB source code for many tracking algorithms, datasets, and performance evaluations.
In this paper we propose a robust object tracking algorithm using a collaborative model.
As the main challenge for object tracking is to account for drastic appearance change, we propose a robust appearance model that exploits both holistic templates and local representations.
We develop a sparsity-based discriminative classifier (SDC) and a sparsity-based generative model (SGM). In the SDC module, we introduce an effective method to compute a confidence value that assigns more weight to the foreground than to the background. In the SGM module, we propose a novel histogram-based method that takes the spatial information of each patch into consideration, together with an occlusion handling scheme.
Furthermore, the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem.
Numerous experiments on various challenging videos demonstrate that the proposed tracker performs favorably against several state-of-the-art algorithms.
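The interplay of the two modules and the update scheme can be sketched as follows. This is only a minimal illustration under my own assumptions: the function names, the exponential form of the confidence, and the mixing weight `mu` are not given in the abstract.

```python
import math

def sdc_confidence(eps_fg, eps_bg, sigma=1.0):
    """Hypothetical SDC confidence: a small foreground reconstruction
    error and a large background error yield a high confidence, so the
    foreground is weighted more than the background."""
    return math.exp(-(eps_fg - eps_bg) / sigma)

def collaborative_score(confidence, similarity):
    """Collaborative model: multiply the discriminative confidence (SDC)
    with the generative similarity (SGM), so a candidate must score well
    under both modules to be selected."""
    return confidence * similarity

def update_template(original, latest, mu=0.95):
    """Blend the first-frame template with the latest observation
    (mu is a hypothetical mixing weight): the tracker adapts to
    appearance change while staying anchored to the original template,
    which alleviates drift."""
    return mu * original + (1 - mu) * latest
```

A candidate with a low foreground error and a high background error gets a large confidence, and only candidates that also resemble the generative model score highly overall.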
How does it work?
The main idea behind CMT is to break the object of interest down into tiny parts, known as keypoints. In each frame, we try to find again the keypoints that were present in the initial selection of the object of interest. We do this by employing two different kinds of methods. First, we track keypoints from the previous frame to the current frame by estimating their optical flow. Second, we match keypoints globally by comparing their descriptors. As both of these methods are error-prone, we employ a novel way of finding consensus among the detected keypoints by letting each keypoint vote for the object center, as shown in the following image.
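The voting step above can be sketched in NumPy. This is a simplified illustration of the consensus idea, not CMT's actual implementation: the function name, the median-based center estimate, and the inlier threshold are my own assumptions.

```python
import numpy as np

def vote_for_center(keypoints, offsets, inlier_threshold=10.0):
    """Each matched keypoint votes for the object center using the
    offset it had from the center in the initial frame.

    keypoints: (N, 2) current keypoint positions
    offsets:   (N, 2) stored vectors from each keypoint to the center
    Returns the estimated center and a boolean mask of consensus inliers.
    """
    votes = keypoints + offsets           # each keypoint's predicted center
    center = np.median(votes, axis=0)     # robust consensus estimate
    dist = np.linalg.norm(votes - center, axis=1)
    inliers = dist < inlier_threshold     # keypoints that agree on the center
    return center, inliers
```

Mismatched or badly tracked keypoints cast votes far from the cluster formed by the correct ones, so they end up outside the inlier set and do not corrupt the center estimate.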