Machine Learning Algorithms, Lecture 5: Training versus Testing (Study Notes)

This lecture splits machine learning into two questions: 1. is Ein close to Eout? 2. can Ein be made small enough?

Recap and Preview

Recap: the 'Statistical' Learning Flow

if |H| = M is finite and N is large enough,

then for whatever g picked by A, Eout(g) ≈ Ein(g)

if A finds one g with Ein(g) ≈ 0,

then PAC guarantee for Eout(g) ≈ 0

test: Eout(g) ≈ Ein(g)

train: Ein(g) ≈ 0

Two Central Questions

for batch and supervised binary classification, g ≈ f <=> Eout(g) ≈ 0

achieved through Eout(g) ≈ Ein(g) and Ein(g) ≈ 0

Trade-off on M

1. can we make sure that Eout is close enough to Ein?

2. can we make Ein small enough?

small M:

1. Yes. P[BAD] <= 2 M exp(-2 ε^2 N) is small.

2. No. Too few choices for A to find g with small Ein.

large M:

1. No. P[BAD] <= 2 M exp(-2 ε^2 N) can be large.

2. Yes. Many choices, so Ein can be made small.

----------------------------------------------------

so using the right M or H is important
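To make the trade-off concrete, here is a minimal Python sketch of the union bound P[BAD] <= 2 M exp(-2 ε^2 N) from the previous lecture; the particular values of N, ε, and M below are illustrative assumptions of mine, not from the notes:

```python
import math

def bad_bound(M, N, eps):
    """Union bound: P[exists h in H with |Ein(h) - Eout(h)| > eps]
    <= 2 * M * exp(-2 * eps^2 * N)."""
    return 2 * M * math.exp(-2 * eps ** 2 * N)

N, eps = 1000, 0.1   # illustrative sample size and tolerance
for M in (1, 100, 10_000, 1_000_000):
    # small M: tiny bound (question 1 OK) but few choices (question 2 hurt);
    # large M: many choices (question 2 OK) but the bound blows up (question 1 hurt)
    print(f"M = {M:>9}: P[BAD] <= {bad_bound(M, N, eps):.3g}")
```

The bound grows linearly in M, so a hypothesis set with M = ∞ (like the perceptron's infinitely many lines) makes the guarantee vacuous; this is why a finite replacement for M is needed.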


Effective Number of Lines

Effective Number of Hypotheses

Dichotomies: Mini-hypotheses

H = { hypothesis h : X -> {x, o} }

call h(x1, x2, ..., xN) = ( h(x1), h(x2), ..., h(xN) ) in {x, o}^N

a dichotomy: a hypothesis 'limited' to the eyes of x1, x2, ..., xN
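As a small illustration (my own sketch, not from the lecture slides), take positive rays h(x) = sign(x - a): infinitely many thresholds a produce only N + 1 distinct dichotomies on any N distinct inputs, because only the region the threshold falls into matters.

```python
def ray_dichotomies(points):
    """Distinct (x, o)-patterns that positive rays h(x) = sign(x - a)
    produce on the given sample points."""
    pts = sorted(points)
    mids = [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    cuts = [pts[0] - 1] + mids + [pts[-1] + 1]   # one threshold per region
    return {tuple('o' if x > a else 'x' for x in pts) for a in cuts}

sample = [0.31, 1.2, 2.7, 4.0]                   # any N = 4 distinct inputs
for d in sorted(ray_dichotomies(sample)):
    print(d)
print(len(ray_dichotomies(sample)), "dichotomies = N + 1")
```

This is the point of mini-hypotheses: what should count for the bound is the number of dichotomies on the sample, not the size of H.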

Growth Function

remove the dependence on the sample by taking the max over all possible (x1, x2, ..., xN):

mH(N) = max over (x1, x2, ..., xN) in X^N of |H(x1, x2, ..., xN)|, which is finite and upper-bounded by 2^N

Growth Function for Positive Rays

mH(N) = N + 1 (the threshold can only fall into one of the N + 1 regions determined by the sorted points)

Growth Function for Positive Intervals

mH(N) = (1/2) N^2 + (1/2) N + 1
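The counting behind this formula, written out (my own filling-in of the intermediate step, consistent with the stated result):

```latex
% the interval's two ends fall into 2 of the N+1 regions around the
% sorted points; both ends in the same region gives one extra
% all-'x' dichotomy
m_H(N) = \binom{N+1}{2} + 1 = \frac{N^2 + N}{2} + 1
```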

Growth Function for Convex Sets

mH(N) = 2^N (place the N inputs on a circle: every dichotomy is then realizable, i.e., these N inputs are 'shattered' by H)
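A brute-force check of the shattering claim (my own sketch; the circle placement is the standard construction): put N inputs on a circle and realize each dichotomy by the convex hull of its 'o' points.

```python
import math
from itertools import product

def strictly_inside(q, poly):
    """Is point q strictly inside convex polygon poly (vertices in order)?"""
    if len(poly) < 3:
        return False                     # degenerate hull has no interior
    cross = [(bx - ax) * (q[1] - ay) - (by - ay) * (q[0] - ax)
             for (ax, ay), (bx, by) in zip(poly, poly[1:] + poly[:1])]
    return all(c > 0 for c in cross) or all(c < 0 for c in cross)

N = 4                                    # try any small N
pts = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
       for i in range(N)]                # inputs placed on a circle
realized = 0
for labels in product('xo', repeat=N):
    hull = [p for p, l in zip(pts, labels) if l == 'o']  # hull of 'o' points
    # on a circle no 'x' point ever lands inside that hull, so this
    # convex region realizes exactly this labeling
    if not any(strictly_inside(p, hull) for p, l in zip(pts, labels) if l == 'x'):
        realized += 1
print(realized, "of", 2 ** N, "dichotomies realized")
```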

Break Point

positive rays: break point at 2 (mH(2) = 3 < 2^2 = 4)

positive intervals: break point at 3 (mH(3) = 7 < 2^3 = 8)

convex sets: no break point (mH(N) = 2^N for every N)

2D perceptrons: break point at 4 (mH(4) = 14 < 2^4 = 16)

break point: the smallest k for which no k inputs can be shattered by H, i.e., mH(k) < 2^k; it marks where mH(N) stops growing like 2^N and becomes 'non-exponential' (a brute-force check is sketched below)
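Finally, a self-contained sketch that finds these break points by brute force (an assumption of mine: for these 1D hypothesis sets the dichotomy count is the same for any k distinct points, so one sample per k suffices):

```python
from itertools import combinations

def ray_dichotomies(pts):
    """Dichotomies of positive rays h(x) = sign(x - a)."""
    pts = sorted(pts)
    cuts = [pts[0] - 1] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1]
    return {tuple('o' if x > c else 'x' for x in pts) for c in cuts}

def interval_dichotomies(pts):
    """Dichotomies of positive intervals: h(x) = o iff l < x < r."""
    pts = sorted(pts)
    cuts = [pts[0] - 1] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1]
    dichos = {tuple('x' for _ in pts)}   # both ends in one region: all 'x'
    for l, r in combinations(cuts, 2):
        dichos.add(tuple('o' if l < x < r else 'x' for x in pts))
    return dichos

def break_point(dichotomy_fn, max_k=6):
    """Smallest k with mH(k) < 2^k, i.e. no k inputs can be shattered."""
    for k in range(1, max_k + 1):
        if len(dichotomy_fn(list(range(k)))) < 2 ** k:
            return k

print("positive rays: break point", break_point(ray_dichotomies))            # -> 2
print("positive intervals: break point", break_point(interval_dichotomies))  # -> 3
```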
