Hinton's Dark Knowledge

On Thursday, October 2, 2014, Geoffrey Hinton gave a talk (slides, video) on what he calls “dark knowledge”, which he claims is most of what deep learning methods actually learn. The talk presented an idea that had been introduced in (Caruana, 2006), where a more complex model is used to train a simpler, compressed model. The main point of the talk is that classifiers built from a softmax function contain a great deal more information than just the predicted class; the correlations in the softmax outputs are very informative. For example, when building a computer vision system to detect cats, dogs, and boats, the output entries for cat and dog in a softmax classifier will always be more correlated than those for cat and boat, since cats look more similar to dogs than to boats.

Dark knowledge was used by Hinton in two different contexts:

  • Model compression: using a simpler model with fewer parameters to match the performance of a larger model.
  • Specialist networks: training models specialized to disambiguate between a small number of easily confusable classes.

Preliminaries

A deep neural network typically maps an input vector $x \in \mathbb{R}^{D_{\text{in}}}$ to a set of scores $f(x) \in \mathbb{R}^C$ for each of $C$ classes. These scores are then interpreted as a posterior distribution over the labels using a softmax function

$$\hat{P}(y \mid x; \Theta) = \mathrm{softmax}(f(x; \Theta)).$$

The parameters of the entire network are collected in $\Theta$. The goal of the learning algorithm is to estimate $\Theta$. Usually, the parameters are learned by minimizing the log loss for all training samples

$$L^{(\text{hard})} = \sum_{n=1}^{N} L(x_n, y_n; \Theta) = -\sum_{n=1}^{N} \sum_{c=1}^{C} \mathbb{1}\{y_n = c\}\, \log \hat{P}(y_c \mid x_n; \Theta),$$

which is the negative of the log-likelihood of the data under the logistic regression model. The parameters $\Theta$ are estimated with iterative algorithms since there is no closed-form solution.

This loss function may be viewed as a cross entropy between an empirical posterior distribution and a predicted posterior distribution given by the model. In the case above, the empirical posterior distribution is simply a 1-hot distribution that puts all its mass at the ground truth label. This cross-entropy view motivates the dark knowledge training paradigm, which can be used to do model compression.
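
To make this concrete, here is a minimal NumPy sketch of the pipeline so far: made-up scores standing in for $f(x; \Theta)$ are turned into posteriors by a softmax, and $L^{(\text{hard})}$ is the cross entropy of those posteriors against the 1-hot labels.

```python
import numpy as np

def softmax(scores):
    """Map a vector of class scores to a posterior distribution over labels."""
    shifted = scores - np.max(scores)      # subtract the max for numerical stability
    exp_scores = np.exp(shifted)
    return exp_scores / exp_scores.sum()

def hard_loss(predicted_posteriors, labels):
    """Negative log-likelihood, i.e. cross entropy against the 1-hot empirical posteriors."""
    n = np.arange(len(labels))
    # The 1-hot indicator picks out log P-hat(y_n | x_n) at the ground-truth label.
    return -np.sum(np.log(predicted_posteriors[n, labels]))

# Hypothetical scores f(x_n; Theta) for N = 2 samples and C = 3 classes (cat, dog, boat).
scores = np.array([[4.0, 3.0, -2.0],
                   [1.0, 2.5,  0.0]])
posteriors = np.vstack([softmax(s) for s in scores])
labels = np.array([0, 1])                  # ground-truth class indices
print(posteriors)                          # each row sums to 1
print(hard_loss(posteriors, labels))
```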

Model compression

Instead of training the cross entropy against the labeled data, one could train it against the posteriors of a previously trained model. In Hinton’s narrative, this previous model is an ensemble, which may contain many large deep networks of similar or varying architectures. Ensembles have been shown to consistently achieve strong performance on a variety of tasks for deep neural networks. However, they have a large number of parameters, which makes inference on new samples computationally demanding. To alleviate this, once the ensemble has been trained and its error rate is sufficiently low, we use its softmax outputs to construct training targets for the smaller, simpler model.

In particular, for each data point $x_n$, our bigger ensemble network makes the prediction

$$\hat{y}_n^{(\text{big})} = \hat{P}(y \mid x_n; \Theta^{(\text{big})}).$$

The idea is to train the smaller network using this output distribution rather than the true labels. However, since the posterior estimates are typically low entropy, the dark knowledge is largely indiscernible without a log transform. To get around this, Hinton increases the entropy of the posteriors by using a transform that “raises the temperature” as

$$[g(y; T)]_k = \frac{y_k^{1/T}}{\sum_{k'} y_{k'}^{1/T}},$$

where $T$ is a temperature parameter; raising it increases the entropy. We now set our target distributions as

$$y_n^{(\text{target})} = g\big(\hat{y}_n^{(\text{big})}; T\big).$$
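
Here is a minimal sketch of the temperature transform $g(y; T)$ applied to an ensemble posterior; the probabilities and $T = 3$ are illustrative. When $y$ is itself a softmax output, this transform is equivalent to dividing the underlying logits by $T$ before applying the softmax.

```python
import numpy as np

def raise_temperature(y, T):
    """g(y; T): raise each probability to the power 1/T and renormalize."""
    powered = y ** (1.0 / T)
    return powered / powered.sum()

# Hypothetical low-entropy posterior from the big ensemble (cat, dog, boat).
y_big = np.array([0.95, 0.049, 0.001])
y_target = raise_temperature(y_big, T=3.0)
print(y_target)    # roughly [0.68, 0.25, 0.07]; the dog/boat structure is now visible
```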

The loss function becomes

$$L^{(\text{soft})} = \sum_{n=1}^{N} L(x_n, y_n; \Theta^{(\text{small})}) = -\sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c}^{(\text{target})}\, \log \hat{P}(y_c \mid x_n; \Theta^{(\text{small})}).$$
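
The soft loss is the same cross entropy as before, only with the temperature-adjusted targets in place of the 1-hot labels; a small sketch with made-up values:

```python
import numpy as np

def soft_loss(predicted_posteriors, target_posteriors):
    """Cross entropy of the small model's posteriors against the ensemble's soft targets."""
    return -np.sum(target_posteriors * np.log(predicted_posteriors))

# Hypothetical soft targets g(y_big; T) and the small model's current predictions (N = 1).
targets = np.array([[0.68, 0.25, 0.07]])
predicted = np.array([[0.60, 0.30, 0.10]])
print(soft_loss(predicted, targets))
```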

Hinton mentioned that the best results are achieved by combining the two loss functions. At first, we thought he meant alternating between them, e.g., training one batch with $L^{(\text{hard})}$ and the next with $L^{(\text{soft})}$. However, after a discussion with a professor who also attended the talk, it seems that Hinton instead took a convex combination of the two loss functions

$$L = \alpha L^{(\text{soft})} + (1 - \alpha) L^{(\text{hard})},$$

where $\alpha$ is a parameter. After asking Hinton about it, this professor had the impression that an appropriate value is $\alpha = 0.9$.
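
A self-contained sketch of the convex combination for one batch, with $\alpha = 0.9$ (the posteriors, labels, and soft targets below are made up):

```python
import numpy as np

def combined_loss(predicted, labels, soft_targets, alpha=0.9):
    """L = alpha * L_soft + (1 - alpha) * L_hard for a single batch."""
    n = np.arange(len(labels))
    hard = -np.sum(np.log(predicted[n, labels]))       # cross entropy vs. 1-hot labels
    soft = -np.sum(soft_targets * np.log(predicted))   # cross entropy vs. soft targets
    return alpha * soft + (1.0 - alpha) * hard

# Hypothetical batch of N = 2 samples and C = 3 classes.
predicted = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.7, 0.1]])
labels = np.array([0, 1])
soft_targets = np.array([[0.68, 0.25, 0.07],
                         [0.20, 0.65, 0.15]])
print(combined_loss(predicted, labels, soft_targets, alpha=0.9))
```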

One of the main settings where this is useful is speech recognition. Here, an ensemble phone recognizer may achieve a low phone error rate but be too slow to process user input on the fly. A simpler model that replicates the ensemble, however, can bring some of the classification gains of large-scale ensemble deep network models to practical speech systems.

Specialist networks

Specialist networks are a way of using dark knowledge to improve the performance of deep network models regardless of their underlying complexity. They are used in settings where there are many different classes. As before, a deep network is trained over the data, and each data point is assigned a target that corresponds to the temperature-adjusted softmax output. These softmax outputs are then clustered multiple times using k-means, and the resulting clusters indicate easily confusable data points that come from a small subset of classes. Specialist networks are then trained only on the data in these clusters, using a restricted set of classes: they treat all classes not contained in the cluster as coming from a single “other” class. The specialist networks are trained using the same alternating one-hot / temperature-adjusted scheme as above. The ensemble constructed by combining the various specialist networks then improves the overall system.
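
Here is a rough sketch of how the clustering step might look, using k-means from scikit-learn on the (temperature-adjusted) softmax outputs of the full network; the data, cluster count, and the way a specialist’s label set is read off the centroids are all assumptions for illustration, not Hinton’s exact recipe.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the full network's temperature-adjusted softmax outputs:
# one row per training example, one column per class (N = 1000, C = 100).
rng = np.random.default_rng(0)
soft_outputs = rng.dirichlet(np.full(100, 0.1), size=1000)

# Cluster the soft predictions; examples landing in the same cluster tend to put
# their probability mass on the same small set of easily confused classes.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_id = kmeans.fit_predict(soft_outputs)

# The classes that dominate a cluster's centroid define one specialist's label set;
# every other class is lumped into a single "other" class.
for k in range(3):
    top_classes = np.argsort(kmeans.cluster_centers_[k])[::-1][:5]
    print(f"specialist {k}: classes {top_classes.tolist()} + 'other'")
```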

One technical hiccup is that the specialist networks are trained using different classes than the full network, so combining the softmax outputs from multiple networks requires a combination trick. Essentially, there is an optimization problem to solve: ensure that the probability assigned to a specialist’s catch-all “dustbin” class matches the sum of the combined model’s softmax outputs over the classes that were folded into it. For example, if cars and cheetahs are grouped together into one class for your dog detector, you combine that network with your cars-versus-cheetahs network by ensuring that the output probabilities for cars and cheetahs sum to a probability similar to the dog detector’s catch-all output.
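
One simple heuristic consistent with this description (a sketch only, not necessarily the optimization Hinton actually solves) is to let the full network decide how much probability mass the cluster gets as a whole and let the specialist decide how that mass is split within the cluster:

```python
import numpy as np

def combine(full_probs, specialist_probs, cluster_idx):
    """Heuristically merge a full network's posterior with one specialist's posterior.

    full_probs       : probabilities over all C classes from the full network.
    specialist_probs : probabilities over the cluster's classes plus a final
                       catch-all "dustbin" entry, from the specialist.
    cluster_idx      : indices (into the full class set) of the specialist's classes.
    """
    combined = full_probs.copy()
    cluster_mass = full_probs[cluster_idx].sum()                   # mass the full net gives the cluster
    within = specialist_probs[:-1] / specialist_probs[:-1].sum()   # specialist's split, dustbin dropped
    combined[cluster_idx] = cluster_mass * within                  # redistribute that mass
    return combined

# Hypothetical example: full net over [dog, car, cheetah]; specialist over [car, cheetah, other].
full_probs = np.array([0.70, 0.20, 0.10])
specialist_probs = np.array([0.15, 0.80, 0.05])
print(combine(full_probs, specialist_probs, cluster_idx=np.array([1, 2])))
```

In this heuristic the full network’s mass on the cluster plays the role of one minus the specialist’s dustbin probability; the actual procedure is framed as an explicit optimization, as noted above.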

Original post: http://deepdish.io/2014/10/28/hintons-dark-knowledge/
