Stanford ML - Lecture 11 - Large scale machine learning

1. Learning with large datasets

  • "It's not who has the best algorithm that wins. It's who has the most data."
  • before scaling up, sanity-check with learning curves on a small subset: more data helps a high-variance model, but not a high-bias one

2. Stochastic gradient descent

  • batch gradient descent: minimize $J_{train}(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$

repeat {

  $\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$  (for every $j = 0, \dots, n$)

}
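A minimal NumPy sketch of the batch update above, assuming linear regression; `X` is an $m \times (n+1)$ design matrix with a bias column and `y` the target vector (the names are illustrative, not from the lecture):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """One theta update per full pass over all m examples."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        # Gradient of J_train(theta): a sum over all m examples,
        # so each single update costs O(m).
        grad = X.T @ (X @ theta - y) / m
        theta -= alpha * grad
    return theta
```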
  • stochastic gradient descent: define the per-example cost $cost\left(\theta, (x^{(i)}, y^{(i)})\right) = \frac{1}{2}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$, so that $J_{train}(\theta) = \frac{1}{m}\sum_{i=1}^{m} cost\left(\theta, (x^{(i)}, y^{(i)})\right)$

  1. randomly shuffle the dataset
  2. repeat {
       for $i := 1, \dots, m$ {
         $\theta_j := \theta_j - \alpha\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$  (for every $j = 0, \dots, n$)
       }
     }
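The same setup as a stochastic-descent sketch; the update inside the loop uses one example at a time, with no sum over $m$:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, num_epochs=10):
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_epochs):
        # 1. randomly shuffle the dataset
        for i in np.random.permutation(m):
            # 2. update theta using example i alone
            err = X[i] @ theta - y[i]
            theta -= alpha * err * X[i]
    return theta
```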
3. Mini-batch gradient descent

  • batch gradient descent: use all $m$ examples in each iteration
  • stochastic gradient descent: use 1 example in each iteration
  • mini-batch gradient descent: use $b$ examples in each iteration (typical $b = 2$ to $100$, e.g. $b = 10$); see the sketch after this list
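A sketch of the mini-batch variant under the same linear-regression assumptions; the vectorized gradient over $b$ examples is where it can beat pure stochastic descent:

```python
import numpy as np

def minibatch_gradient_descent(X, y, b=10, alpha=0.01, num_epochs=10):
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_epochs):
        idx = np.random.permutation(m)
        for start in range(0, m, b):
            batch = idx[start:start + b]
            # Average the gradient over the b examples in this mini-batch;
            # the vectorized matrix ops are the speed win over pure SGD.
            grad = X[batch].T @ (X[batch] @ theta - y[batch]) / len(batch)
            theta -= alpha * grad
    return theta
```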

4. Stochastic gradient descent convergence

  • to check convergence, compute $cost\left(\theta, (x^{(i)}, y^{(i)})\right)$ before updating $\theta$ on each example, and every 1000 iterations (say) plot that cost averaged over the last 1000 examples
  • learning rate $\alpha$ is typically held constant; it can be slowly decreased over time (e.g. $\alpha = \frac{const1}{iterationNumber + const2}$) if we want $\theta$ to converge rather than oscillate around the minimum
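A sketch of that monitoring idea, assuming the same linear-regression setup; `window` is a hypothetical name for the number of recent per-example costs averaged into each plotted point:

```python
import numpy as np

def sgd_with_convergence_check(X, y, alpha=0.01, num_epochs=5, window=1000):
    m, n = X.shape
    theta = np.zeros(n)
    recent, averages = [], []
    for _ in range(num_epochs):
        for i in np.random.permutation(m):
            err = X[i] @ theta - y[i]
            # Record cost(theta, (x_i, y_i)) BEFORE updating on example i.
            recent.append(0.5 * err ** 2)
            theta -= alpha * err * X[i]
            if len(recent) == window:
                # Plot these averages against iteration number;
                # a downward trend indicates SGD is converging.
                averages.append(np.mean(recent))
                recent = []
    return theta, averages
```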

5. Online learning

  • learn from a continuous stream of data: repeatedly get an example $(x, y)$ from a user, update $\theta$ using that single example, then discard it
  • since $\theta$ keeps tracking the latest data, the model can adapt to changing user preferences over time
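A sketch of an online-learning loop, assuming logistic regression (e.g. predicting whether a user clicks) and a hypothetical `get_example()` that yields one labeled example per user interaction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_learning(get_example, n, alpha=0.1, num_events=100_000):
    theta = np.zeros(n)
    for _ in range(num_events):
        x, y = get_example()           # one (features, label) pair from the stream
        err = sigmoid(x @ theta) - y   # logistic-regression gradient term
        theta -= alpha * err * x       # learn from the example, then discard it
    return theta
```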

6. Map-reduce and data parallelism

  • whenever a learning algorithm can be expressed as a sum of functions over the training set, the training set can be split across many machines: each machine ("map") computes the partial sum over its subset, and a master machine ("reduce") combines the partial sums to complete each gradient step
  • the same idea parallelizes across the cores of a single machine
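A single-machine sketch of the idea, using `multiprocessing` workers to stand in for the cluster; `partial_sum` plays the map role and the final combination the reduce role (all names illustrative):

```python
import numpy as np
from multiprocessing import Pool

def partial_sum(args):
    # "Map": one worker's partial gradient sum over its split of the data.
    X_part, y_part, theta = args
    return X_part.T @ (X_part @ theta - y_part)

def mapreduce_gradient_step(X, y, theta, alpha=0.01, workers=4):
    splits = np.array_split(np.arange(X.shape[0]), workers)
    with Pool(workers) as pool:
        parts = pool.map(partial_sum, [(X[s], y[s], theta) for s in splits])
    # "Reduce": the master combines partial sums to finish one batch update.
    return theta - alpha * sum(parts) / X.shape[0]
```

On platforms that spawn worker processes, the `Pool` call must run under an `if __name__ == "__main__":` guard.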
