【Machine Learning】【Andrew Ng】- Notes (Week 3: Solving the Problem of Overfitting)

The Problem of Overfitting

Consider the problem of predicting y from x ∈ R. The leftmost figure below shows the result of fitting y = θ0 + θ1x to a dataset. We see that the data doesn't really lie on a straight line, and so the fit is not very good.
[Figure: a linear fit (left), a quadratic fit (middle), and a 5th-order polynomial fit (right) to the same dataset]
Instead, if we had added an extra feature x^2, and fit y = θ0 + θ1x + θ2x^2, then we obtain a slightly better fit to the data (see the middle figure). Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: the rightmost figure is the result of fitting a 5th-order polynomial y = ∑_{j=0}^{5} θj x^j. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we'll say the figure on the left shows an instance of underfitting (in which the data clearly shows structure not captured by the model) and the figure on the right is an example of overfitting.
Underfitting, or high bias, is when the form of our hypothesis function h maps poorly to the trend of the data. It is usually caused by a function that is too simple or uses too few features. At the other extreme, overfitting, or high variance, is caused by a hypothesis function that fits the available data but does not generalize well to predict new data. It is usually caused by a complicated function that creates a lot of unnecessary curves and angles unrelated to the data.
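To make this concrete, here is a minimal sketch (NumPy, with a small made-up noisy dataset, not from the course) that fits polynomials of degree 1, 2, and 5 to the same points. Because the higher-degree model contains the lower-degree ones, its training error can only go down, even while its predictions between the training points get worse:

```python
import numpy as np

# Hypothetical noisy dataset: y is roughly quadratic in x (not from the course).
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 8)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(scale=0.3, size=x.size)

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, deg=degree)          # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: training MSE = {train_mse:.4f}")

# The degree-5 fit has the lowest training error of the three, yet it wiggles
# between the points, so its predictions at new x values are the least trustworthy.
```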
This terminology is applied to both linear and logistic regression. There are two main options to address the issue of overfitting:
1) Reduce the number of features:
- Manually select which features to keep.
- Use a model selection algorithm (studied later in the course).
2) Regularization
- Keep all the features, but reduce the magnitude of the parameters θj.
- Regularization works well when we have a lot of slightly useful features.

Cost Function

If we have overfitting from our hypothesis function, we can reduce the weight that some of the terms in our function carry by increasing their cost.
Say we wanted to make the following function more quadratic:
θ0 + θ1x + θ2x^2 + θ3x^3 + θ4x^4
We'll want to eliminate the influence of θ3x^3 and θ4x^4. Without actually getting rid of these features or changing the form of our hypothesis, we can instead modify our cost function:
min_θ (1/2m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i))^2 + 1000·θ3^2 + 1000·θ4^2
We've added two extra terms at the end to inflate the cost of θ3 and θ4. Now, in order for the cost function to get close to zero, we will have to reduce the values of θ3 and θ4 to near zero. This will in turn greatly reduce the values of θ3x^3 and θ4x^4 in our hypothesis function. As a result, we see that the new hypothesis (depicted by the pink curve) looks like a quadratic function but fits the data better due to the extra small terms θ3x^3 and θ4x^4.
[Figure: the new hypothesis (pink curve) looks approximately quadratic while still fitting the data]
We could also regularize all of our theta parameters in a single summation as:
min_θ (1/2m) [ ∑_{i=1}^{m} (hθ(x^(i)) − y^(i))^2 + λ ∑_{j=1}^{n} θj^2 ]
The λ, or lambda, is the regularization parameter. It determines how much the costs of our theta parameters are inflated.
Using the above cost function with the extra summation, we can smooth the output of our hypothesis function to reduce overfitting. If lambda is chosen to be too large, it may smooth out the function too much and cause underfitting. Conversely, what would happen if λ = 0 or λ is chosen to be too small?
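As a concrete illustration of this regularized cost, here is a minimal NumPy sketch; the function name and the tiny dataset are made up for the example, and θ0 (stored in theta[0]) is left out of the penalty:

```python
import numpy as np

def regularized_cost(theta, X, y, lam):
    """J(theta) = (1/2m) * [ sum((h(x_i) - y_i)^2) + lam * sum_{j>=1} theta_j^2 ]."""
    m = y.size
    errors = X @ theta - y                    # h_theta(x^(i)) - y^(i) for every example
    penalty = lam * np.sum(theta[1:] ** 2)    # skip theta[0], i.e. theta_0
    return (np.sum(errors ** 2) + penalty) / (2 * m)

# Tiny made-up example; the first column of X is x0 = 1.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.array([0.5, 0.5])
print(regularized_cost(theta, X, y, lam=1.0))
```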

Regularized Linear Regression

We can apply regularization to both linear regression and logistic regression. We will approach linear regression first.
Gradient Descent
We will modify our gradient descent function to separate out θ0 from the rest of the parameters because we do not want to penalize θ0.
Repeat {
  θ0 := θ0 − α (1/m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i)) x0^(i)
  θj := θj − α [ (1/m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i)) xj^(i) + (λ/m) θj ],   for j ∈ {1, 2, ..., n}
}
The term (λ/m)θj performs our regularization. With some manipulation, our update rule can also be represented as:
θj := θj (1 − αλ/m) − α (1/m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i)) xj^(i)
The first term in the above equation, 1 − αλ/m, will always be less than 1. Intuitively, you can see it as reducing the value of θj by some amount on every update. Notice that the second term is now exactly the same as it was before.
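A minimal sketch of one such update step in NumPy (variable and function names here are illustrative, not the course's Octave code):

```python
import numpy as np

def gradient_descent_step(theta, X, y, alpha, lam):
    """One regularized gradient descent update for linear regression.

    For j >= 1, theta_j is first shrunk by the factor (1 - alpha*lam/m),
    then the usual gradient term is subtracted; theta_0 is not penalized.
    """
    m = y.size
    errors = X @ theta - y                     # h_theta(x^(i)) - y^(i)
    grad = (X.T @ errors) / m                  # (1/m) * sum(errors * x_j^(i))
    new_theta = theta * (1 - alpha * lam / m) - alpha * grad
    new_theta[0] = theta[0] - alpha * grad[0]  # plain, unregularized update for theta_0
    return new_theta
```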
Normal Equation
Now let’s approach regularization using the alternate method of the non-iterative normal equation.
To add in regularization, the equation is the same as our original, except that we add another term inside the parentheses:
θ = (X^T X + λ·L)^(−1) X^T y
L is a matrix with 0 at the top left and 1's down the diagonal, with 0's everywhere else. It should have dimension (n+1)×(n+1). Intuitively, this is the identity matrix (though we are not including x0), multiplied by a single real number λ.
Recall that if m < n, then X^T X is non-invertible. However, when we add the term λ·L, then X^T X + λ·L becomes invertible.
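A sketch of this regularized normal equation in NumPy (the function name is hypothetical; a linear solve is used instead of forming the matrix inverse explicitly):

```python
import numpy as np

def normal_equation_regularized(X, y, lam):
    """theta = (X^T X + lam * L)^(-1) X^T y, with L = diag(0, 1, 1, ..., 1)."""
    L = np.eye(X.shape[1])   # (n+1) x (n+1) identity ...
    L[0, 0] = 0.0            # ... except the top-left entry, so theta_0 is not regularized
    # Solve the linear system rather than computing an explicit inverse.
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)
```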
Regularized Logistic Regression
We can regularize logistic regression in a similar way that we regularize linear regression. As a result, we can avoid overfitting. The following image shows how the regularized function, displayed by the pink line, is less likely to overfit than the non-regularized function represented by the blue line:
[Figure: regularized hypothesis (pink line) vs. non-regularized hypothesis (blue line)]
Cost Function
Recall that our cost function for logistic regression was:
J(θ) = −(1/m) ∑_{i=1}^{m} [ y^(i) log(hθ(x^(i))) + (1 − y^(i)) log(1 − hθ(x^(i))) ]
We can regularize this equation by adding a term to the end:
J(θ) = −(1/m) ∑_{i=1}^{m} [ y^(i) log(hθ(x^(i))) + (1 − y^(i)) log(1 − hθ(x^(i))) ] + (λ/2m) ∑_{j=1}^{n} θj^2
The second sum, ∑_{j=1}^{n} θj^2, explicitly excludes the bias term θ0. That is, the θ vector is indexed from 0 to n (holding n+1 values, θ0 through θn), and this sum skips θ0 by running from 1 to n. Thus, when running gradient descent, we should continuously update the two following equations:
θ0 := θ0 − α (1/m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i)) x0^(i)
θj := θj − α [ (1/m) ∑_{i=1}^{m} (hθ(x^(i)) − y^(i)) xj^(i) + (λ/m) θj ],   for j ∈ {1, 2, ..., n}
where hθ(x) = 1 / (1 + e^(−θ^T x)).
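Putting the regularized cost and its gradient together, here is a minimal NumPy sketch (names are illustrative, not the course's Octave code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_gradient(theta, X, y, lam):
    """Regularized logistic regression cost J(theta) and its gradient."""
    m = y.size
    h = sigmoid(X @ theta)                              # h_theta(x^(i)) for every example
    cost = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    cost += (lam / (2 * m)) * np.sum(theta[1:] ** 2)    # penalty skips theta_0
    grad = (X.T @ (h - y)) / m
    grad[1:] += (lam / m) * theta[1:]                   # regularize only j >= 1
    return cost, grad
```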
