[Paper Reading Notes] (2015 ICML) Unsupervised Learning of Video Representations using LSTMs

This work proposes an unsupervised learning model based on LSTMs for video representation learning. The model consists of an encoder LSTM and decoder LSTMs, and can reconstruct the input sequence as well as predict future frames. Experiments show that a conditional decoder improves prediction accuracy, and that the pretrained model improves classification accuracy on human action recognition, especially when training examples are limited. The paper also compares performance on different input types, such as image pixels and high-level features.

Unsupervised Learning of Video Representations using LSTMs

(2015 ICML)

Nitish Srivastava, Elman Mansimov, Ruslan Salakhutdinov

Notes

Contributions

  1. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence.
  2. We experiment with two kinds of input sequences – patches of image pixels and high-level representations (“percepts”) of video frames extracted using a pretrained convolutional net.
  3. We explore different design choices such as whether the decoder LSTMs should condition on the generated output.
  4. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem – human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.

Method

LSTM Autoencoder Model. This model consists of two Recurrent Neural Nets, the encoder LSTM and the decoder LSTM, as shown in Fig. 1. The input to the model is a sequence of vectors (image patches or features). The encoder LSTM reads in this sequence. After the last input has been read, the cell state and output state of the encoder are copied over to the decoder LSTM. The decoder outputs a prediction for the target sequence. The target sequence is the same as the input sequence, but in reverse order. The decoder can be conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted boxes in Fig. 1 are present. An unconditioned decoder does not receive that input.
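To make the encoder-decoder wiring concrete, here is a minimal PyTorch sketch of the autoencoder (my own illustration, not the authors' code; names such as `LSTMAutoencoder` and `hidden_size`, and the sigmoid readout, are assumptions):

```python
# Minimal sketch of the LSTM autoencoder (illustrative, not the authors' code).
# The encoder reads the input sequence; its final (h, c) state initializes the
# decoder, which emits the target sequence (the input in reverse order).
# conditional=True feeds the previously generated frame back in as the decoder
# input (the dotted boxes in Fig. 1); otherwise the decoder receives zeros.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTMCell(input_size, hidden_size)
        self.readout = nn.Linear(hidden_size, input_size)

    def forward(self, x, conditional=False):
        # x: (batch, T, input_size)
        _, (h, c) = self.encoder(x)          # fixed-length representation
        h, c = h[-1], c[-1]                  # drop the layer dimension
        inp = torch.zeros_like(x[:, 0])      # first decoder input
        outputs = []
        for _ in range(x.size(1)):
            h, c = self.decoder(inp, (h, c))
            frame = torch.sigmoid(self.readout(h))
            outputs.append(frame)
            inp = frame if conditional else torch.zeros_like(frame)
        recon = torch.stack(outputs, dim=1)
        target = torch.flip(x, dims=[1])     # input sequence in reverse order
        return recon, target
```

The Future Predictor below follows the same skeleton; only the target changes, from the reversed input to the frames that come after the input.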

 

LSTM Future Predictor Model. The design of the Future Predictor Model is the same as that of the Autoencoder Model, except that the decoder LSTM in this case predicts frames of the video that come just after the input sequence (Fig. 2). Rather than predicting only the next frame, this model predicts a long sequence into the future. Here again we consider two variants of the decoder – conditional and unconditioned.

A Composite Model. The two tasks – reconstructing the input and predicting the future – can be combined to create a composite model, as shown in Fig. 3. Here the encoder LSTM is asked to come up with a state from which we can both reconstruct the input and predict the next few frames.
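Under the same assumptions as the sketch above, the Composite Model can be written as one encoder whose state is copied into two unconditioned decoders (again illustrative only; `CompositeModel` and its method names are made up):

```python
# Sketch of the Composite Model: a single encoder state feeds two decoders, one
# reconstructing the reversed input and one predicting the next frames.
import torch
import torch.nn as nn

class CompositeModel(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.recon_cell = nn.LSTMCell(input_size, hidden_size)
        self.future_cell = nn.LSTMCell(input_size, hidden_size)
        self.recon_out = nn.Linear(hidden_size, input_size)
        self.future_out = nn.Linear(hidden_size, input_size)

    def _unroll(self, cell, readout, state, steps, ref):
        h, c = state
        inp = torch.zeros_like(ref[:, 0])    # unconditioned decoder: zero inputs
        frames = []
        for _ in range(steps):
            h, c = cell(inp, (h, c))
            frames.append(torch.sigmoid(readout(h)))
        return torch.stack(frames, dim=1)

    def forward(self, x, future_len):
        _, (h, c) = self.encoder(x)
        state = (h[-1], c[-1])               # the same state feeds both decoders
        recon = self._unroll(self.recon_cell, self.recon_out, state, x.size(1), x)
        future = self._unroll(self.future_cell, self.future_out, state, future_len, x)
        return recon, future  # compare recon to reversed x, future to the next frames
```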

 

Results

First, they construct a Moving MNIST dataset:

  1. Each video was 20 frames long.
  2. Each video consisted of two digits moving inside a 64 × 64 patch.
  3. The digits were chosen randomly from the MNIST training set and placed initially at random locations inside the patch.
  4. Each digit was assigned a velocity whose direction was chosen uniformly at random on a unit circle and whose magnitude was chosen uniformly at random over a fixed range.
  5. The digits bounced off the edges of the 64 × 64 frame and overlapped if they were at the same location.

The reason for working with this dataset is that it is infinite in size and can be generated quickly on the fly.
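The paper gives the recipe rather than code; a rough numpy sketch of such a generator is below (the function name and parameters are illustrative, and combining overlapping digits with a pixel-wise max is an assumption):

```python
# Rough sketch of a Moving MNIST generator following the recipe above.
import numpy as np

def make_moving_mnist(digits, num_frames=20, patch=64, speed_range=(2.0, 5.0), rng=None):
    """digits: list of two 28x28 arrays (values in [0, 1]) from the MNIST training set."""
    if rng is None:
        rng = np.random.default_rng()
    d = digits[0].shape[0]                               # digit size (28)
    video = np.zeros((num_frames, patch, patch), dtype=np.float32)
    pos = rng.uniform(0, patch - d, size=(len(digits), 2))      # random start positions
    angle = rng.uniform(0, 2 * np.pi, size=len(digits))         # direction on unit circle
    speed = rng.uniform(*speed_range, size=len(digits))         # magnitude from fixed range
    vel = np.stack([np.cos(angle), np.sin(angle)], axis=1) * speed[:, None]
    for t in range(num_frames):
        for i, digit in enumerate(digits):
            r, c = pos[i].astype(int)
            # overlapping digits are combined with a pixel-wise max (an assumption)
            video[t, r:r + d, c:c + d] = np.maximum(video[t, r:r + d, c:c + d], digit)
            pos[i] += vel[i]
            for ax in range(2):                          # bounce off the frame edges
                if pos[i, ax] < 0 or pos[i, ax] > patch - d:
                    vel[i, ax] *= -1
                    pos[i, ax] = np.clip(pos[i, ax], 0, patch - d)
    return video
```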

Visualization and Qualitative Analysis: We can see that

  1. adding depth helps the model make better predictions.
  2. Next, we changed the future predictor by making it conditional; this model makes even better predictions.

Visualization and Qualitative Analysis: reconstructions and future predictions obtained from a two-layer Composite Model with 2048 units. We found that

  1. the future predictions quickly blur out but the input reconstructions look better.
  2. We then trained a bigger model with 4096 units. Even in this case, the future predictions blurred out quickly; however, the reconstructions looked sharper.
  3. We believe that models that look at bigger contexts and use more powerful stochastic decoders are required to get better future predictions.

Action Recognition on UCF-101/HMDB-51. We used a two-layer Composite Model with 2048 hidden units with no conditioning on either decoder. We then initialize an LSTM classifier with the weights learned by the encoder LSTM from this model. The model is shown in Fig. 6. The output from each LSTM goes into a softmax classifier that makes a prediction about the action being performed at each time step. At test time, the predictions made at each time step are averaged. To get a prediction for the entire video, we average the predictions from all 16-frame blocks in the video, taken with a stride of 8 frames. The baseline for comparing these models is an identical LSTM classifier but with randomly initialized weights. All classifiers used dropout regularization.
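A rough sketch of this fine-tuning and test-time averaging setup (illustrative only; copying the pretrained encoder weights assumes the classifier LSTM has exactly the same shape, and the dropout rate is a guess):

```python
# LSTM action classifier initialized from the unsupervised encoder, with per-time-step
# softmax predictions averaged within each block and across blocks of the video.
import torch
import torch.nn as nn

class LSTMActionClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, pretrained_encoder=None):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2, batch_first=True)
        if pretrained_encoder is not None:
            # copy weights learned by the unsupervised encoder LSTM (assumed same shape)
            self.lstm.load_state_dict(pretrained_encoder.state_dict())
        self.dropout = nn.Dropout(0.5)        # dropout rate is a guess
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                     # x: (batch, T, input_size)
        out, _ = self.lstm(x)
        logits = self.fc(self.dropout(out))   # a prediction at every time step
        return logits.softmax(dim=-1).mean(dim=1)   # average over time steps

def predict_video(model, frames, block=16, stride=8):
    """frames: (T, input_size); average block-level predictions over the whole video."""
    preds = []
    for start in range(0, frames.size(0) - block + 1, stride):
        clip = frames[start:start + block].unsqueeze(0)
        preds.append(model(clip))
    return torch.stack(preds).mean(dim=0)
```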

Fig. 7 compares three models - single frame classifier, baseline LSTM classifier and the LSTM classifier initialized with weights from the Composite Model. We can see that

  1. for the case of very few training examples, unsupervised learning gives a substantial improvement.
  2. As the size of the labelled dataset grows, the improvement becomes smaller.

We further ran similar experiments on the optical flow percepts extracted from the UCF-101 dataset. A temporal stream convolutional net, similar to the one proposed by Simonyan & Zisserman (2014b), was trained on single frame optical flows as well as on stacks of 10 optical flows.

Comparison of Different Model Variants. Future prediction results are summarized in Table 2. For MNIST we compute the cross entropy of the predictions with respect to the ground truth. For natural image patches, we compute the squared loss (both losses are sketched after the list below). We see that

  1. the Composite Model always does a better job of predicting the future compared to the Future Predictor. This indicates that having the autoencoder along with the future predictor to force the model to remember more about the inputs actually helps predict the future better.
  2. Next, we compare each model with its conditional variant. Here, we find that the conditional models perform slightly better.
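For reference, the two evaluation losses used in Table 2 could be computed roughly as follows (a sketch; treating the MNIST predictions as per-pixel Bernoulli probabilities and averaging over frames and batch are assumptions):

```python
# The two Table 2 metrics as I understand them (a sketch, not the authors' code).
import torch.nn.functional as F

def mnist_prediction_loss(pred, target):
    # pred, target: (batch, T, 64*64), values in [0, 1]; per-pixel cross entropy
    return F.binary_cross_entropy(pred, target)

def patch_prediction_loss(pred, target):
    # pred, target: (batch, T, patch_dim), real-valued natural image patches
    return F.mse_loss(pred, target)
```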

The performance on action recognition achieved by finetuning different unsupervised learning models is summarized in Table 3. We find that

  1. all unsupervised models improve over the baseline LSTM which is itself well-regularized using dropout.
  2. The Autoencoder model seems to perform consistently better than the Future Predictor.
  3. The Composite model, which combines the two, does better than either one alone.
  4. Conditioning on the generated inputs does not seem to give a clear advantage over not doing so. The Composite Model with a conditional future predictor works the best, although its performance is almost the same as that of the Composite Model without conditioning.

Comparison with Action Recognition Benchmarks. The table is divided into three sets. The first set compares models that use only RGB data (single or multiple frames). The second set compares models that use explicitly computed flow features only. Models in the third set use both.

  1. On RGB data, our model performs on par with the best deep models. When the C3D features are concatenated with fc6 percepts, they do slightly better than our model.
  2. The improvement for flow features over using a randomly initialized LSTM network is quite small. We believe this is partly due to the fact that the flow percepts already capture a lot of the motion information that the LSTM would otherwise discover. Another contributing factor is that the temporal stream convolutional net that is used to extract flow percepts overfits very readily (in the sense that it gets almost zero training error but much higher test error) in spite of strong regularization. Therefore the statistics of the percepts might be different between the training and test sets.
  3. We believe further improvements can be made by running the model over different patch locations and mirroring the patches.
  4. Also, our model can be applied deeper inside the conv net instead of just at the top-level.
