[ML of Andrew Ng] Week 4 Neural Networks: Representation



Non-linear hypotheses

Simple logistic regression, even with added quadratic or cubic features, is not a good way to learn complex non-linear hypotheses when n is large, because you end up with far too many features.

Neural networks turn out to be a much better way to learn complex non-linear hypotheses, even when the input feature space is large.
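To see why the feature count blows up, we can count the polynomial terms directly: the number of degree-d monomials in n variables is the number of combinations with repetition. A minimal sketch (the function name is my own, not from the lecture):

```python
from math import comb

def num_poly_features(n, degree):
    """Number of monomials of exactly `degree` built from n features
    (combinations with repetition: C(n + degree - 1, degree))."""
    return comb(n + degree - 1, degree)

n = 100  # e.g. a modest 100-feature input
print(num_poly_features(n, 2))  # quadratic terms: 5050
print(num_poly_features(n, 3))  # cubic terms: 171700
```

Even at n = 100, cubic features already number in the hundreds of thousands; for image inputs with thousands of pixels, the count is hopeless.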
Non-linear Classification

Neurons and the brain

Neural Networks

Origins: Algorithms that try to mimic the brain.
Was very widely used in the 80s and early 90s; popularity diminished in the late 90s.
Recent resurgence: State-of-the-art technique for many applications

The brain

This is the fascinating hypothesis that the way the brain does all of these different things is not with a thousand different programs, but instead with just a single learning algorithm.
The "one learning algorithm" hypothesis

Model representation

Neurons in the brain


Neural Network

Note: x0 = +1 is called the bias unit.
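The forward step for one layer can be sketched as follows: prepend the bias unit, multiply by the layer's weight matrix Theta, and apply the sigmoid. The weights here are random placeholders, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(a, Theta):
    """Compute the next layer's activations: prepend the bias unit
    a0 = +1, then apply the weights and the sigmoid."""
    a = np.concatenate(([1.0], a))  # x0 = +1, the bias unit
    return sigmoid(Theta @ a)

rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(3, 4))  # 3 inputs + bias -> 3 hidden units
Theta2 = rng.normal(size=(1, 4))  # 3 hidden units + bias -> 1 output

x = np.array([0.5, -1.2, 2.0])
a2 = layer_forward(x, Theta1)   # hidden-layer activations, shape (3,)
h = layer_forward(a2, Theta2)   # hypothesis h_Theta(x), shape (1,)
```

Note that each weight matrix has one extra column precisely because of the bias unit.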

Examples

Other logic functions

Putting these together gives a three-layer neural network that computes x1 XNOR x2.

Multi-class classification

We use the index of the maximum output as the prediction.
In MATLAB:

[~,p] = max(all_p,[],2);
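The NumPy equivalent of that MATLAB line is `argmax` along the rows; the `all_p` values below are made up for illustration (one row per example, one column per class):

```python
import numpy as np

# Hypothetical output-layer activations: 3 examples, 4 classes
all_p = np.array([[0.1, 0.9, 0.2, 0.1],
                  [0.8, 0.1, 0.1, 0.3],
                  [0.2, 0.1, 0.7, 0.6]])

# Like MATLAB's [~, p] = max(all_p, [], 2): the column index of the
# largest output in each row is the predicted class
# (0-based here, 1-based in MATLAB)
p = np.argmax(all_p, axis=1)
print(p)  # [1 0 2]
```

The only difference from the MATLAB version is the indexing convention: add 1 to `p` if you need MATLAB-style class labels.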