Andrew Ng-----Logistic Regression

Given x, we want ŷ = P(y = 1 | x). Given an input feature vector x, perhaps corresponding to an image that you want to recognize as a cat picture or not a cat picture, you want ŷ to be the probability that y is equal to one given the input features x. In other words, if x is a picture, ŷ tells you the chance that it is a cat picture.

Output: ŷ = wᵀx + b   (a linear function)

But this is not a very good algorithm for binary classification, because you want a probability P with 0 ≤ P ≤ 1, i.e. between zero and one.

So in logistic regression, our output is instead going to be ŷ = σ(wᵀx + b), the sigmoid function applied to this quantity.
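As a minimal sketch of this forward computation (assuming NumPy; the values of w, b, and x below are made-up examples, not learned parameters):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example values for an input with 3 features.
w = np.array([0.5, -1.2, 0.3])  # weights
b = 0.1                          # bias (intercept)
x = np.array([1.0, 0.5, 2.0])    # one input feature vector

z = np.dot(w, x) + b   # linear part: w^T x + b
y_hat = sigmoid(z)     # prediction: P(y = 1 | x)
print(y_hat)           # a value strictly between 0 and 1
```

Whatever z the linear part produces, the sigmoid maps it into (0, 1), so ŷ can always be read as a probability.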

[Figure: the S-shaped sigmoid curve, rising smoothly from 0 to 1]

This is the shape of the sigmoid function. The full picture is:
σ(z) = 1 / (1 + e^(-z))

So, if z is very large, then e to the negative z will be close to zero, so σ(z) will be close to one; in that region the gradient is close to zero, so learning becomes slow.
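This saturation is easy to check numerically. The derivative of the sigmoid is σ'(z) = σ(z)(1 − σ(z)), which shrinks toward zero as |z| grows (a small sketch assuming NumPy):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # Derivative of the sigmoid: sigma(z) * (1 - sigma(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

for z in [0.0, 2.0, 10.0]:
    print(z, sigmoid(z), sigmoid_grad(z))
# At z = 0 the gradient is at its maximum (0.25);
# at z = 10, sigmoid(z) is nearly 1 and the gradient is nearly 0,
# so gradient-based updates barely move the parameters there.
```

This is why very large (or very negative) values of z slow learning down: the updates are proportional to a gradient that has almost vanished.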

So, when you implement logistic regression, your job is to learn the parameters w and b.

Before moving on, just another note on the notation:

When programming, we usually keep the parameter w and the parameter b separate; here b corresponds to an intercept (bias) term.
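To illustrate this notation point (a hypothetical sketch with made-up values): keeping w and b separate, versus the alternative convention of folding b into a single parameter vector by prepending a constant feature x₀ = 1, computes the same z either way.

```python
import numpy as np

x = np.array([1.0, 0.5, 2.0])   # example input features
w = np.array([0.5, -1.2, 0.3])  # example weights
b = 0.1                          # example bias

# Convention used in these notes: w and b kept separate.
z_separate = np.dot(w, x) + b

# Alternative convention: fold b into the parameter vector as its
# first entry, with a constant feature x_0 = 1 prepended to x.
theta = np.concatenate(([b], w))
x_aug = np.concatenate(([1.0], x))
z_merged = np.dot(theta, x_aug)

print(np.isclose(z_separate, z_merged))  # both give the same z
```

Keeping w and b separate makes the later gradient updates for the weights and the intercept easier to write down independently.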




  
