Coursera ML: Logistic Regression (notebook)

Classification and Representation

To attempt classification, one method is to use linear regression and map all predictions greater than 0.5 as a 1 and all less than 0.5 as a 0. However, this method doesn’t work well because classification is not actually a linear function.

The classification problem is just like the regression problem, except that the values we now want to predict take on only a small number of discrete values. For now, we will focus on the binary classification problem in which y can take on only two values, 0 and 1. (Most of what we say here will also generalize to the multiple-class case.) For instance, if we are trying to build a spam classifier for email, then $x^{(i)}$ may be some features of a piece of email, and y may be 1 if it is a piece of spam mail, and 0 otherwise. Hence, $y \in \{0,1\}$. 0 is also called the negative class, and 1 the positive class, and they are sometimes also denoted by the symbols "$-$" and "$+$". Given $x^{(i)}$, the corresponding $y^{(i)}$ is also called the label for the training example.

Hypothesis Representation

We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn't make sense for $h_\theta(x)$ to take values larger than 1 or smaller than 0 when we know that $y \in \{0,1\}$. To fix this, let's change the form for our hypotheses $h_\theta(x)$ to satisfy $0 \leq h_\theta(x) \leq 1$. This is accomplished by plugging $\theta^T x$ into the Logistic Function.

Our new form uses the “Sigmoid Function,” also called the “Logistic Function”:
$$h_\theta(x) = g(\theta^T x), \qquad z = \theta^T x, \qquad g(z) = \frac{1}{1 + e^{-z}}$$
The following image shows us what the sigmoid function looks like:
(Figure: plot of the sigmoid function $g(z)$, an S-shaped curve that rises from 0 toward 1 and crosses 0.5 at $z = 0$.)
The function $g(z)$, shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.

$h_\theta(x)$ will give us the probability that our output is 1. For example, $h_\theta(x) = 0.7$ gives us a probability of 70% that our output is 1. Our probability that our prediction is 0 is just the complement of our probability that it is 1 (e.g. if the probability that it is 1 is 70%, then the probability that it is 0 is 30%).
$$h_\theta(x) = P(y = 1 \mid x; \theta) = 1 - P(y = 0 \mid x; \theta)$$

$$P(y = 0 \mid x; \theta) + P(y = 1 \mid x; \theta) = 1$$
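As a minimal sketch of these definitions (assuming NumPy; the helper names `sigmoid` and `hypothesis` and the example values for `theta` and `x` are illustrative, and the feature vector is assumed to carry an intercept entry $x_0 = 1$):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h_theta(x) = g(theta^T x): estimated probability that y = 1 given x."""
    return sigmoid(np.dot(theta, x))

# Hypothetical values: x_0 = 1 is the intercept term.
theta = np.array([0.1, 0.5])
x = np.array([1.0, 2.0])
p1 = hypothesis(theta, x)     # P(y = 1 | x; theta)
print(p1, 1.0 - p1)           # the two probabilities sum to 1
```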

Decision Boundary

In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:
$$h_\theta(x) \geq 0.5 \rightarrow y = 1$$

$$h_\theta(x) < 0.5 \rightarrow y = 0$$
The way our logistic function g behaves is that when its input is greater than or equal to zero, its output is greater than or equal to 0.5:
$$g(z) \geq 0.5 \quad \text{when} \quad z \geq 0$$
Remember:

$$z = 0,\ e^{0} = 1 \Rightarrow g(z) = \tfrac{1}{2}$$

$$z \to \infty,\ e^{-\infty} \to 0 \Rightarrow g(z) = 1$$

$$z \to -\infty,\ e^{-\infty} \to \infty \Rightarrow g(z) = 0$$
So if our input to $g$ is $\theta^T X$, then that means:
$$h_\theta(x) = g(\theta^T x) \geq 0.5 \quad \text{when} \quad \theta^T x \geq 0$$
From these statements we can now say:
$$\theta^T x \geq 0 \Rightarrow y = 1$$

$$\theta^T x < 0 \Rightarrow y = 0$$
The decision boundary is the line that separates the area where $y = 0$ and where $y = 1$. It is created by our hypothesis function.

Example:
$$\theta = \begin{bmatrix} 5 \\ -1 \\ 0 \end{bmatrix} \qquad y = 1 \ \text{ if } \ 5 + (-1)x_1 + 0 \cdot x_2 \geq 0 \ \Leftrightarrow \ 5 - x_1 \geq 0 \ \Leftrightarrow \ x_1 \leq 5$$
In this case, our decision boundary is a straight vertical line placed on the graph where $x_1 = 5$, and everything to the left of that denotes $y = 1$, while everything to the right denotes $y = 0$.

Again, the input to the sigmoid function $g(z)$ (e.g. $\theta^T X$) doesn't need to be linear, and could be a function that describes a circle (e.g. $z = \theta_0 + \theta_1 x_1^2 + \theta_2 x_2^2$) or any shape to fit our data.
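A small sketch of this thresholding rule with such a quadratic input (assuming NumPy; the function name `predict_circle` and the parameter values are hypothetical choices that make the unit circle $x_1^2 + x_2^2 = 1$ the decision boundary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_circle(theta, x1, x2):
    """Predict y in {0, 1} with z = theta_0 + theta_1*x1^2 + theta_2*x2^2;
    y = 1 exactly when z >= 0, i.e. when g(z) >= 0.5."""
    z = theta[0] + theta[1] * x1**2 + theta[2] * x2**2
    return int(sigmoid(z) >= 0.5)

# Hypothetical parameters: the boundary is the circle x1^2 + x2^2 = 1.
theta = np.array([-1.0, 1.0, 1.0])
print(predict_circle(theta, 0.5, 0.5))   # inside the circle  -> 0
print(predict_circle(theta, 2.0, 0.0))   # outside the circle -> 1
```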

Cost Function

We cannot use the same cost function that we use for linear regression because the Logistic Function will cause the output to be wavy, causing many local optima. In other words, it will not be a convex function.

Instead, our cost function for logistic regression looks like:
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}\big(h_\theta(x^{(i)}), y^{(i)}\big)$$

$$\mathrm{Cost}(h_\theta(x), y) = -\log(h_\theta(x)) \quad \text{if } y = 1$$

$$\mathrm{Cost}(h_\theta(x), y) = -\log(1 - h_\theta(x)) \quad \text{if } y = 0$$
When $y = 1$, we get the following plot for $J(\theta)$ vs $h_\theta(x)$:

(Figure: cost $-\log(h_\theta(x))$ for $y = 1$; the cost is 0 at $h_\theta(x) = 1$ and grows without bound as $h_\theta(x) \to 0$.)
Similarly, when $y = 0$, we get the following plot for $J(\theta)$ vs $h_\theta(x)$:
(Figure: cost $-\log(1 - h_\theta(x))$ for $y = 0$; the cost is 0 at $h_\theta(x) = 0$ and grows without bound as $h_\theta(x) \to 1$.)
If our correct answer ‘y’ is 0, then the cost function will be 0 if our hypothesis function also outputs 0. If our hypothesis approaches 1, then the cost function will approach infinity.

If our correct answer ‘y’ is 1, then the cost function will be 0 if our hypothesis function outputs 1. If our hypothesis approaches 0, then the cost function will approach infinity.

Note that writing the cost function in this way guarantees that $J(\theta)$ is convex for logistic regression.

Simplified Cost Function and Gradient Descent

We can compress our cost function’s two conditional cases into one case:
$$\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))$$
Notice that when y is equal to 1, then the second term $(1-y)\log(1-h_\theta(x))$ will be zero and will not affect the result. If y is equal to 0, then the first term $-y \log(h_\theta(x))$ will be zero and will not affect the result.

We can fully write out our entire cost function as follows:
$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\big(h_\theta(x^{(i)})\big) + (1 - y^{(i)}) \log\big(1 - h_\theta(x^{(i)})\big) \right]$$
A vectorized implementation is:
$$h = g(X\theta)$$

$$J(\theta) = \frac{1}{m} \left( -y^T \log(h) - (1 - y)^T \log(1 - h) \right)$$
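A minimal sketch of this vectorized cost (assuming NumPy, a design matrix `X` whose rows are training examples with a leading column of ones, and a label vector `y` of 0s and 1s; the dataset values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """J(theta) = (1/m) * (-y^T log(h) - (1-y)^T log(1-h)), with h = g(X theta)."""
    m = len(y)
    h = sigmoid(X @ theta)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m

# Tiny hypothetical dataset: the first column of X is the intercept term x_0 = 1.
X = np.array([[1.0, 0.5],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.0, 1.0, 1.0])
print(cost(np.zeros(2), X, y))   # theta = 0 gives h = 0.5 everywhere, so J = log(2) ≈ 0.693
```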

Gradient Descent

Remember that the general form of gradient descent is:
Repeat {

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$

}
We can work out the derivative part using calculus to get:
Repeat {

$$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

}
Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in theta.

A vectorized implementation is:
$$\theta := \theta - \frac{\alpha}{m} X^T \left( g(X\theta) - \vec{y} \right)$$
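Putting the vectorized update into a loop gives a sketch like the following (assuming NumPy; the learning rate `alpha`, the fixed iteration count, and the reuse of the tiny dataset from the cost sketch above are illustrative choices rather than the course's settings):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    """Repeat theta := theta - (alpha/m) * X^T (g(X theta) - y),
    updating every component of theta simultaneously."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        gradient = X.T @ (sigmoid(X @ theta) - y) / m
        theta = theta - alpha * gradient
    return theta

X = np.array([[1.0, 0.5],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.0, 1.0, 1.0])
theta = gradient_descent(X, y)
print(sigmoid(X @ theta))   # predicted probabilities, moving toward y = [0, 1, 1]
```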

Multiclass Classification: One-vs-all

Now we will approach the classification of data when we have more than two categories. Instead of y = {0,1} we will expand our definition so that y = {0,1…n}.

Since y = {0,1…n}, we divide our problem into n+1 (+1 because the index starts at 0) binary classification problems; in each one, we predict the probability that ‘y’ is a member of one of our classes.
$$y \in \{0, 1, \dots, n\}$$

$$h_\theta^{(i)}(x) = P(y = i \mid x; \theta) \qquad (i = 0, 1, \dots, n)$$

$$\text{prediction} = \max_i \left( h_\theta^{(i)}(x) \right)$$
We are basically choosing one class and then lumping all the others into a single second class. We do this repeatedly, applying binary logistic regression to each case, and then use the hypothesis that returned the highest value as our prediction.

The following image shows how one could classify 3 classes:
(Figure: three classes of training examples; for each class, a separate binary classifier draws its own decision boundary between that class and the other two.)
Train a logistic regression classifier $h_\theta(x)$ for each class to predict the probability that $y = i$.

To make a prediction on a new x, pick the class that maximizes $h_\theta(x)$.
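A compact sketch of one-vs-all (assuming NumPy; the helper names `train_one_vs_all` and `predict_one_vs_all`, the learning rate, the iteration count, and the toy dataset are all illustrative): train one binary classifier per class against the rest, then predict the class whose hypothesis returns the highest probability.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_vs_all(X, y, num_classes, alpha=0.1, num_iters=1000):
    """Fit one theta vector per class: class i vs. everything else."""
    m, n = X.shape
    all_theta = np.zeros((num_classes, n))
    for i in range(num_classes):
        y_binary = (y == i).astype(float)            # relabel: 1 for class i, 0 otherwise
        theta = np.zeros(n)
        for _ in range(num_iters):                   # plain gradient descent, as above
            theta -= alpha * X.T @ (sigmoid(X @ theta) - y_binary) / m
        all_theta[i] = theta
    return all_theta

def predict_one_vs_all(all_theta, x):
    """Pick the class i that maximizes h_theta^(i)(x) = g(theta_i^T x)."""
    return int(np.argmax(sigmoid(all_theta @ x)))

# Toy 3-class data: one feature plus an intercept column of ones.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0],
              [1.0, 3.0], [1.0, 4.0], [1.0, 5.0]])
y = np.array([0, 0, 1, 1, 2, 2])
all_theta = train_one_vs_all(X, y, num_classes=3)
print(predict_one_vs_all(all_theta, np.array([1.0, 4.5])))   # most probable class here: 2
```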
