A Statistical View of Deep Learning (III): Memory and Kernels

Memory, the ways in which we remember and recall past experiences and data to reason about future events, is a term used frequently in current literature. All models in machine learning consist of a memory that is central to their usage. We have two principal types of memory mechanisms, most often described through the types of models they stem from: parametric and non-parametric (and all the shades of grey in-between). Deep networks represent the archetypal parametric model, in which memory is implemented by distilling the statistical properties of observed data into a set of model parameters or weights. The poster-child for non-parametric models would be kernel machines (and nearest neighbours), which implement their memory mechanism by explicitly storing all the data. It is easy to think that these represent fundamentally different ways of reasoning about data, but the way we derive these methods points to far deeper connections and a more fundamental similarity.

Deep networks, kernel methods and Gaussian processes form a continuum of approaches for solving the same problem - in their final form, these approaches might seem very different, but they are fundamentally related, and keeping this in mind can only be useful for future research. This connection is what I explore in this post.

Basis Functions and Neural Networks

All the methods in this post look at regression: learning discriminative or input-output mappings. They all extend the humble linear model, in which we assume that linear combinations of the input data x, or of transformations of it φ(x), explain the target values y. The φ(x) are basis functions that transform the data into a set of more interesting features. Features such as SIFT for images or MFCCs for audio have been popular in the past; in these cases we still have a linear regression, since the basis functions are fixed. Neural networks give us the ability to use adaptive basis functions, allowing us to learn the best features from data instead of designing them by hand, and allowing for a non-linear regression.

A useful probabilistic formulation separates the regression into systematic and random components: the systematic component is a function f we wish to learn, and the targets are noisy realisations of this function. To connect neural networks to the linear model, I'll explicitly separate the last linear layer of the neural network from the layers that appear before it. Thus for an L-layer deep neural network, I'll denote the first L-1 layers by the mapping φ(x; θ) with parameters θ, and the final layer weights by w; the set of all model parameters is q = {θ, w}.

$$\text{Systematic:}\quad f = \mathbf{w}^\top \phi(\mathbf{x}; \boldsymbol{\theta}), \qquad q \sim \mathcal{N}(0, \sigma_q^2 \mathbf{I}),$$

$$\text{Random:}\quad y = f(\mathbf{x}) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma_y^2)$$

Once we have specified our probabilistic model, it implies an objective function for optimising the model parameters, given by the negative log joint-probability. We can now apply back-propagation and learn all the parameters, performing MAP estimation in the neural network model. Memory in this model is maintained in the parametric modelling framework: we do not save the data but compactly represent it by the parameters of our model. This formulation has many nice properties: we can encode properties of the data into the function f (e.g., using convolutions when x is a 2D image), and we can choose a stochastic approximation for scalability, performing gradient descent using mini-batches instead of the entire data set. The loss function for the output weights is of particular interest, since it offers us a way to move from neural networks to other types of regression.

$$J(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N} \left(y_n - \mathbf{w}^\top \phi(\mathbf{x}_n; \boldsymbol{\theta})\right)^2 + \frac{\lambda}{2}\mathbf{w}^\top \mathbf{w}.$$

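To make the link between the probabilistic model and this loss concrete, here is a minimal numpy sketch; the one-hidden-layer tanh architecture, the sizes and all variable names are my own illustrative assumptions, not taken from the post. It checks that the w-dependent part of the negative log joint-probability is, up to a constant scaling, exactly J(w), with λ = σ_y²/σ_q².

```python
import numpy as np

# A minimal sketch, assuming a one-hidden-layer tanh network as phi(x; theta);
# all sizes, names and values below are illustrative assumptions.
rng = np.random.default_rng(0)
N, D, H = 50, 3, 10               # data points, input dim, hidden units
sigma_q, sigma_y = 1.0, 0.1       # prior std on weights, observation noise std
lam = sigma_y**2 / sigma_q**2     # the lambda implied by the two variances

X = rng.normal(size=(N, D))
y = np.sin(X).sum(axis=1) + sigma_y * rng.normal(size=N)

theta = rng.normal(size=(D, H))   # parameters of the adaptive basis functions
w = rng.normal(size=H)            # final-layer weights

def phi(X, theta):
    """The network up to (but excluding) its last linear layer."""
    return np.tanh(X @ theta)

f = phi(X, theta) @ w             # systematic component f = w' phi(x; theta)

# The displayed loss for the output weights ...
J_w = 0.5 * np.sum((y - f) ** 2) + 0.5 * lam * (w @ w)

# ... and the w-dependent part of the negative log joint-probability,
# -log p(y | f) - log p(w): identical up to the constant factor sigma_y^2.
neg_log_joint_w = 0.5 * np.sum((y - f) ** 2) / sigma_y**2 + 0.5 * (w @ w) / sigma_q**2
assert np.isclose(sigma_y**2 * neg_log_joint_w, J_w)
```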
Kernel Methods

If you stare a bit longer at this last objective function, especially as formulated by explicitly representing the last linear layer, you'll very quickly be tempted to compute its dual function [1, pp. 293]. We'll do this by first setting the derivative with respect to w to zero and solving for it:

$$\nabla_{\mathbf{w}} J(\mathbf{w}) = 0 \implies \mathbf{w} = \frac{1}{\lambda}\sum_n \left(y_n - \mathbf{w}^\top \phi(\mathbf{x}_n)\right)\phi(\mathbf{x}_n)$$

$$\mathbf{w} = \sum_n \alpha_n \phi(\mathbf{x}_n) = \boldsymbol{\Phi}^\top \boldsymbol{\alpha}, \qquad \alpha_n = \frac{1}{\lambda}\left(y_n - \mathbf{w}^\top \phi(\mathbf{x}_n)\right)$$

We've combined all the basis functions/features for the observations into the matrix Φ. By taking this optimal solution for the last-layer weights and substituting it into the loss function, two things emerge: we obtain a dual loss function that is rewritten entirely in terms of a new parameter α, and the computation involves the Gram matrix K = ΦΦᵀ. We can repeat the process and solve the dual loss for the optimal parameter α, and obtain:

$$\nabla_{\boldsymbol{\alpha}} J(\boldsymbol{\alpha}) = 0 \implies \boldsymbol{\alpha} = (\mathbf{K} + \lambda \mathbf{I}_N)^{-1}\mathbf{y}$$

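As a quick numerical sanity check of this duality (a sketch under my own assumptions: a random tanh feature map and arbitrary sizes), the primal ridge solution for w and the weights recovered from the dual coefficients α coincide:

```python
import numpy as np

# A small check of the primal/dual equivalence; the feature map, sizes and
# lambda below are illustrative assumptions, not values from the post.
rng = np.random.default_rng(1)
N, D, H, lam = 40, 2, 8, 0.5

X = rng.normal(size=(N, D))
y = rng.normal(size=N)
theta = rng.normal(size=(D, H))
Phi = np.tanh(X @ theta)                         # N x H matrix of features phi(x_n)

# Primal: minimise J(w) over the last-layer weights directly.
w_primal = np.linalg.solve(Phi.T @ Phi + lam * np.eye(H), Phi.T @ y)

# Dual: solve for alpha using the Gram matrix K = Phi Phi', then map back to w.
K = Phi @ Phi.T
alpha = np.linalg.solve(K + lam * np.eye(N), y)
w_dual = Phi.T @ alpha

assert np.allclose(w_primal, w_dual)             # both routes give the same weights
```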
And this is where kernel machines deviate from neural networks. Since we only need to consider inner products of the features φ(x) (implied by maintaining K), instead of parameterising them using a non-linear mapping given by a deep network, we can use kernel substitution (a.k.a. the kernel trick) and get the same behaviour by choosing an appropriate and rich kernel function k(x, x'). This highlights the deep relationship between deep networks and kernel machines: they are more than simply related, they are duals of each other.

The memory mechanism has now been completely transformed into a non-parametric one: we explicitly represent all the data points (through the matrix K). The advantage of the kernel approach is that it is often easier to encode properties of the functions we wish to represent, e.g., functions that are differentiable up to p-th order, or periodic functions; but stochastic approximation is now not possible. Predictions for a test point x* can now be written in a few different ways:

$$f_* = \mathbf{w}_{\text{MAP}}^\top \phi(\mathbf{x}_*) = \boldsymbol{\alpha}^\top \boldsymbol{\Phi}\,\phi(\mathbf{x}_*) = \sum_n \alpha_n k(\mathbf{x}_*, \mathbf{x}_n) = k(\mathbf{X}, \mathbf{x}_*)^\top (\mathbf{K} + \lambda \mathbf{I})^{-1}\mathbf{y}$$

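Concretely, here is a minimal kernel-machine prediction using an RBF kernel; the kernel choice, lengthscale and toy data are my own assumptions. The prediction k(X, x*)ᵀ(K + λI)⁻¹y is computed without ever forming the features φ(x) explicitly.

```python
import numpy as np

# Kernel substitution in action; the RBF kernel, lengthscale and toy data
# are illustrative assumptions.
rng = np.random.default_rng(2)
N, lam, ell = 30, 0.1, 0.5

X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

def rbf(A, B, ell):
    """k(x, x') = exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

K = rbf(X, X, ell)                                # Gram matrix over the training set
alpha = np.linalg.solve(K + lam * np.eye(N), y)   # alpha = (K + lambda I)^{-1} y

x_star = np.array([[0.7]])                        # a test input
k_star = rbf(X, x_star, ell)[:, 0]                # the vector k(X, x*)
f_star = k_star @ alpha                           # sum_n alpha_n k(x*, x_n)
print(f_star)                                     # prediction; no phi(x) ever built
```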
The last equality in the prediction formula above is the form of solution implied by the Representer theorem, and shows that we can instead think of a different formulation of our problem: one that directly penalises the function we are trying to estimate, subject to the constraint that the function lies within a Hilbert space (providing a direct non-parametric view):

$$J(f) = \frac{1}{2}\sum_{n=1}^{N} \left(y_n - f(\mathbf{x}_n)\right)^2 + \frac{\lambda}{2}\|f\|_{\mathcal{H}}^2.$$

Gaussian Processes

We can go one step further and obtain not only a MAP estimate of the function f, but also its variance. To do so, we specify a probability model that yields the same loss function as this last objective. This is possible because we now know what a suitable prior over functions is, and the resulting probabilistic model corresponds to Gaussian process (GP) regression [2]:

$$p(f) = \mathcal{N}(\mathbf{0}, \mathbf{K}), \qquad p(\mathbf{y} \mid f) = \mathcal{N}(\mathbf{y} \mid f, \lambda\mathbf{I})$$

We can now apply the standard rules for Gaussian conditioning to obtain a mean and variance for the prediction at any test input x*. What we obtain is:

$$p(f_* \mid \mathbf{X}, \mathbf{y}, \mathbf{x}_*) = \mathcal{N}(\mathbb{E}[f_*], \mathbb{V}[f_*])$$

$$\mathbb{E}[f_*] = k(\mathbf{X}, \mathbf{x}_*)^\top (\mathbf{K} + \lambda \mathbf{I})^{-1}\mathbf{y}$$

$$\mathbb{V}[f_*] = k(\mathbf{x}_*, \mathbf{x}_*) - k(\mathbf{X}, \mathbf{x}_*)^\top (\mathbf{K} + \lambda \mathbf{I})^{-1} k(\mathbf{X}, \mathbf{x}_*)$$

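A minimal sketch of this conditioning step, reusing the assumed RBF kernel and toy data from the previous snippet, computes both quantities directly:

```python
import numpy as np

# GP regression by Gaussian conditioning; kernel, lengthscale and data are
# the same illustrative assumptions as in the kernel-machine sketch above.
rng = np.random.default_rng(2)
N, lam, ell = 30, 0.1, 0.5

X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

def rbf(A, B, ell):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

x_star = np.array([[0.7]])
K = rbf(X, X, ell)
k_star = rbf(X, x_star, ell)[:, 0]                # k(X, x*)
k_ss = rbf(x_star, x_star, ell)[0, 0]             # k(x*, x*)

Kinv_y = np.linalg.solve(K + lam * np.eye(N), y)
Kinv_k = np.linalg.solve(K + lam * np.eye(N), k_star)

mean = k_star @ Kinv_y       # E[f*]: identical to the kernel machine's prediction above
var = k_ss - k_star @ Kinv_k # V[f*]: the extra information the GP view provides
print(mean, var)
```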
Conveniently, we obtain the same solution for the mean whether we use the kernel approach or the Gaussian conditioning approach. We now also have a way to compute the variance of the functions of interest, which is useful for many problems (such as active learning and optimistic exploration). Memory in the GP is also of the non-parametric flavour, since our problem is formulated in the same way as the kernel machines. GPs form another nice bridge between kernel methods and neural networks: we can see GPs as derived by Bayesian reasoning in kernel machines (which are themselves dual functions of neural nets), or we can obtain a GP by taking the number of hidden units in a one layer neural network to infinity [3].
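As a rough illustration of that last point (a sketch only: the tanh non-linearity, the 1/√H output scaling and the unit-variance priors are my assumptions in the spirit of [3]), the prior distribution of f(x) at a fixed input, drawn from random one-hidden-layer networks, looks increasingly Gaussian as the number of hidden units H grows:

```python
import numpy as np

# Prior draws of f(x) from random one-hidden-layer tanh networks; the scalings
# and priors here are illustrative assumptions in the spirit of Neal's analysis.
rng = np.random.default_rng(4)
x, S = 0.7, 50_000                                # fixed input, Monte Carlo samples

def excess_kurtosis(samples):
    z = samples - samples.mean()
    return np.mean(z**4) / np.mean(z**2) ** 2 - 3.0   # 0 for a Gaussian

for H in (1, 3, 10, 100):
    w_in = rng.normal(size=(S, H))                # input-to-hidden weights, one network per row
    w_out = rng.normal(size=(S, H)) / np.sqrt(H)  # hidden-to-output weights, scaled by 1/sqrt(H)
    f = np.sum(w_out * np.tanh(w_in * x), axis=1) # one draw of f(x) per random network
    print(H, excess_kurtosis(f))                  # shrinks towards 0 (Gaussian) as H grows
```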

Summary

Deep neural networks, kernel methods and Gaussian processes are all different ways of solving the same problem - how to learn the best regression functions possible. They are deeply connected: starting from one we can derive any of the other methods, and they expose the many interesting ways in which we can address and combine approaches that are ostensibly in competition. I think such connections are very interesting, and should prove important as we continue to build more powerful and faithful models for regression and classification.

 


Some References

[1] Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[2] Carl Edward Rasmussen and Christopher K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.
[3] Radford M. Neal, Bayesian Learning for Neural Networks, University of Toronto, 1994.
