Logistic Regression with a Neural Network Mindset

Note From Author:

This tutorial covers the foundation of computer vision, delivered as “Lesson 8” of the series. More lessons are upcoming that extend to building your own deep-learning-based computer vision projects. You can find the complete syllabus and table of contents here

Target Audience: final-year college students, people new to a data science career, and IT employees who want to switch to a data science career.

Takeaway: the main takeaways from this article are:

  1. Logistic Regression
  2. Approaching Logistic Regression with a Neural Network mindset

Logistic Regression

Logistic Regression is an algorithm for binary classification. In a binary classification problem, the input (X) is a one-dimensional feature vector and the output label (Y) is either 1 or 0.

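As a minimal sketch (the feature values and the dimension here are made up for illustration), a single training example in a binary classification problem could look like:

```python
import numpy as np

# Hypothetical 4-dimensional feature vector x (e.g. a few pixel
# intensities) and its binary label y -- the values are made up.
x = np.array([0.2, 0.9, 0.4, 0.7])  # input feature vector, shape (4,)
y = 1                               # output label: either 1 or 0

print(x.shape)    # (4,)
print(y in (0, 1))  # True
```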

The logistic regression output label lies in the range 0 to 1.


0 ≤ Y ≤ 1, where Y is the probability of the output label being 1 given the input X


Y = P(y = 1 | x). For a learning algorithm to find Y, it takes two parameters, W and B, where W is the weight vector associated with the input feature vector X, and B is the bias.


How do we find Y? Well, one thing you could try that doesn’t work would be to have Y be W transpose X plus B, a kind of linear function of the input X. In fact, this is what you would use if you were doing linear regression.


But this isn’t a very good algorithm for binary classification.

Because you want Y to be the probability that y equals one, Y = P(y = 1 | x), Y should really be between zero and one. That is difficult to enforce, because W transpose X plus B can be much bigger than one, or it can even be negative, which doesn’t make sense for a probability that you want to lie between zero and one.

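To see the problem concretely, here is a small sketch (the weights, bias, and inputs are made-up numbers) showing that W transpose X plus B can fall outside [0, 1]:

```python
import numpy as np

w = np.array([2.0, -1.0])  # made-up weights
b = 1.0                    # made-up bias

z1 = w @ np.array([3.0, 0.5]) + b   # 2*3 - 0.5 + 1 = 6.5, bigger than 1
z2 = w @ np.array([-1.0, 2.0]) + b  # -2 - 2 + 1 = -3.0, negative

print(z1, z2)  # 6.5 -3.0 -- neither is a valid probability
```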

So in logistic regression, our output is instead going to be Y equals the sigmoid function applied to this quantity.


Y = σ(w^T X + B)

σ is the sigmoid function, to which we pass the quantity w^T X + B.


The sigmoid function is an S-shaped curve, σ(z) = 1 / (1 + e^-z): very negative inputs map to values near 0, z = 0 maps to 0.5, and very positive inputs map to values near 1.
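A minimal NumPy sketch of the sigmoid and of the logistic regression output it enables (the parameters and input are made-up numbers, not trained values):

```python
import numpy as np

def sigmoid(z):
    """Map any real z into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# The S-shaped squashing: large negative z -> near 0,
# z = 0 -> 0.5, large positive z -> near 1.
print(sigmoid(np.array([-10.0, 0.0, 10.0])))

# Logistic regression output for one example (made-up parameters):
w = np.array([2.0, -1.0])
b = 1.0
x = np.array([3.0, 0.5])
y_hat = sigmoid(w @ x + b)  # sigmoid(6.5): a valid probability in (0, 1)
print(y_hat)
```

Unlike the raw linear score, y_hat is always a valid probability, however extreme w, x, or b may be.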
