1 Stochastic Gradient Descent
1.1 Alternative algorithm to classic (or batch) gradient descent
Stochastic Gradient Descent is more efficient and scales better to large data sets, say 100,000,000 examples or more. In batch gradient descent it is computationally expensive to sum over all training examples for every update, so Stochastic Gradient Descent instead updates the parameters using one example at a time.
It may never converge exactly to the global minimum and instead wander around it, but it usually yields a result that is close enough.
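As a minimal sketch of the update rule, assuming a linear-regression hypothesis and squared-error cost (the function and parameter names here are illustrative, not from the original notes):

```python
import numpy as np

def sgd(X, y, alpha=0.01, epochs=10):
    """Stochastic gradient descent for linear regression.

    X: (m, n) feature matrix, y: (m,) targets.
    Each parameter update uses a single training example.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        order = np.random.permutation(m)   # shuffle the examples each pass
        for i in order:
            error = X[i] @ theta - y[i]    # h(x_i) - y_i for one example
            theta -= alpha * error * X[i]  # update from that single example
    return theta
```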
1.2 Mini-batch Gradient Descent
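Mini-batch gradient descent sits between the two extremes: each update averages the gradient over a small batch of b examples (often between 2 and 100) rather than one example or the whole set. A minimal sketch, assuming the same linear-regression setup as above and an illustrative batch size of 10:

```python
import numpy as np

def minibatch_gd(X, y, alpha=0.01, b=10, epochs=10):
    """Mini-batch gradient descent: each update averages the gradient over b examples."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        order = np.random.permutation(m)
        for start in range(0, m, b):
            idx = order[start:start + b]
            errors = X[idx] @ theta - y[idx]              # h(x) - y for the batch
            theta -= alpha * (X[idx].T @ errors) / len(idx)
    return theta
```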
1.3 Stochastic Gradient Descent Convergence
To check for convergence, one strategy is to plot the cost of the hypothesis averaged over the last 1000 training examples as the algorithm runs. With a smaller learning rate (alpha), it is possible to get a slightly better solution, because the parameters oscillate more closely around the minimum.
Another strategy is to slowly decrease alpha over time, e.g. alpha = const1 / (iterationNumber + const2). However, this is not often done, since it means fiddling with two more parameters.
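A minimal sketch combining both ideas; the averaging window of 1000 and the decay constants are illustrative values, not prescribed by the notes:

```python
import numpy as np

def sgd_with_monitoring(X, y, const1=1.0, const2=100.0):
    """SGD that records the cost averaged over every 1000 examples
    and slowly decays the learning rate over iterations."""
    m, n = X.shape
    theta = np.zeros(n)
    recent_costs, avg_costs = [], []
    iteration = 0
    for i in np.random.permutation(m):
        iteration += 1
        alpha_t = const1 / (iteration + const2)   # decaying learning rate
        error = X[i] @ theta - y[i]
        recent_costs.append(0.5 * error ** 2)     # cost on this example, before the update
        theta -= alpha_t * error * X[i]
        if iteration % 1000 == 0:                 # plot these averages to check convergence
            avg_costs.append(np.mean(recent_costs))
            recent_costs = []
    return theta, avg_costs
```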
2 Online Learning
With a continuous stream of users to a website, it is possible to run an endless loop that gets one (x, y) pair at a time, learns from it, and then discards it (see the sketch after the examples below).
Examples:
- offering a shipping service: predict the right price to offer so as to improve the probability that the customer buys.
- offering a range of products: learn which product is most popular with users.
- customized selection of news articles
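A minimal sketch of such a loop for the shipping-price example, assuming a logistic-regression model where y = 1 means the user accepted the offered price; `stream` is a hypothetical iterator over (x, y) pairs from the live website, not an API from the notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_learning(stream, n_features, alpha=0.1):
    """Online learning: each (x, y) pair is used for one update, then discarded."""
    theta = np.zeros(n_features)
    for x, y in stream:                    # runs for as long as users keep arriving
        error = sigmoid(theta @ x) - y     # logistic-regression gradient term
        theta -= alpha * error * x         # single update; the example is not stored
    return theta
```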