Machine Learning: A Comparison of Three Linear Algorithms (Linear Regression, Perceptron, Logistic Regression)

A comparison of least squares linear regression, the perceptron, and logistic regression:

In all four models the bias $b$ is absorbed into the weight vector, so the feature vector is $x=(x_1,x_2,\ldots,x_n,1)^{\mathsf T}$ and the weight vector is $w=(w_1,w_2,\ldots,w_n,b)^{\mathsf T}$.

| | Least Squares Linear Regression | Perceptron | Binary Logistic Regression | Multinomial Logistic Regression |
| --- | --- | --- | --- | --- |
| Target $y$ | a real number in $(-\infty,+\infty)$ | two classes: $1,\,-1$ | two classes: $0,\,1$ | $k$ classes: $c=0,1,\ldots,k-1$ |
| Objective function | $f(x)=w^{\mathsf T}x$ | $f(x)=\operatorname{sign}(w^{\mathsf T}x)$ | $P(y=1\mid x)=\dfrac{1}{1+e^{-w^{\mathsf T}x}}$ (probability of class 1) | $P(y=c\mid x)=\dfrac{e^{w_c^{\mathsf T}x}}{\sum_{j=0}^{k-1}e^{w_j^{\mathsf T}x}}$ for $c=0,1,\ldots,k-1$ (probability of every class) |
| Estimate of $y$ | $\hat y=w^{\mathsf T}x$ | $\hat y=\operatorname{sign}(w^{\mathsf T}x)$ | $\hat y=1$ if $P(y=1\mid x)\ge 0.5$, else $\hat y=0$ (from the probability of class 1) | $\hat y=\arg\max_c P(y=c\mid x)$ (from the probabilities of all classes) |
| Mapping function | / | sign function | sigmoid function | softmax function |
| Role of the algorithm | prediction | classification | classification | classification |
| Loss function | $L(w)=\sum_i\bigl(y_i-w^{\mathsf T}x_i\bigr)^2$ | $L(w)=-\sum_{x_i\in M} y_i\,w^{\mathsf T}x_i$, where $M$ is the set of misclassified points | $L(w)=-\sum_i\bigl[y_i\ln p_i+(1-y_i)\ln(1-p_i)\bigr]$, with $p_i=P(y_i=1\mid x_i)$ | $L(w)=-\sum_i \ln P(y_i=c_i\mid x_i)$, where $c_i$ is the true class of sample $i$ |
| Meaning of the loss | sum of squared Euclidean distances between the observed and estimated values | total distance of the misclassified points from the separating hyperplane | how closely the estimated probability distribution matches the true one: for a sample $(x_i,y_i)$ whose correct class is $c$, an estimated probability of 1 for class $c$ means the classification is completely correct and contributes nothing to the loss ($\ln 1=0$); if the sample is misclassified, the estimated probability for class $c$ is below 1 and does contribute | same as for binary logistic regression, extended to $k$ classes |
| Essence of the loss | maximum likelihood estimate of the conditional probability $P(y\mid x)$ under a Gaussian distribution (taking the logarithm) | / | maximum likelihood estimate of $P(y\mid x)$ under a Bernoulli distribution (taking the negative natural logarithm) | maximum likelihood estimate of $P(y\mid x)$ under a multinomial distribution (taking the negative natural logarithm) |
| Solution methods | closed-form solution, gradient descent, Newton's method, quasi-Newton methods | stochastic gradient descent, Newton's method, quasi-Newton methods | gradient descent, Newton's method, quasi-Newton methods | gradient descent, Newton's method, quasi-Newton methods |
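To make the table's mapping functions concrete, here is a minimal NumPy sketch of the four hypothesis functions. It is illustrative only: the function names and toy weights are assumptions, not from the original post. The bias $b$ is folded into $w$ by appending a constant 1 to $x$, exactly as in the table.

```python
import numpy as np

def linear_regression_predict(w, x):
    """Least squares linear regression: y_hat = w^T x, a real number."""
    return w @ x

def perceptron_predict(w, x):
    """Perceptron: y_hat = sign(w^T x), a label in {1, -1}."""
    return 1 if w @ x >= 0 else -1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_logistic_predict(w, x):
    """Binary logistic regression: P(y=1|x) = sigmoid(w^T x);
    predict class 1 when that probability is at least 0.5."""
    p = sigmoid(w @ x)
    return p, int(p >= 0.5)

def softmax(z):
    z = z - np.max(z)  # shift for numerical stability; softmax is unchanged
    e = np.exp(z)
    return e / e.sum()

def multinomial_logistic_predict(W, x):
    """Multinomial logistic regression: P(y=c|x) = softmax(Wx)_c for
    c = 0..k-1; predict the class with the highest probability."""
    p = softmax(W @ x)
    return p, int(np.argmax(p))

# Toy example: n = 2 features plus the appended constant 1.
x = np.array([0.5, -1.2, 1.0])             # x = (x1, x2, 1)^T
w = np.array([2.0, 0.7, -0.3])             # w = (w1, w2, b)^T
W = np.array([[2.0, 0.7, -0.3],            # one weight row per class (k = 3)
              [-1.0, 0.4, 0.1],
              [0.3, -0.8, 0.6]])

print(linear_regression_predict(w, x))     # a real number
print(perceptron_predict(w, x))            # 1 or -1
print(binary_logistic_predict(w, x))       # (P(y=1|x), predicted label)
print(multinomial_logistic_predict(W, x))  # (probabilities, predicted class)
```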

 
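Similarly, here is a hedged sketch of the four loss functions, written directly as the sums described in the "meaning" and "essence" rows (for least squares, minimizing the squared error is equivalent to the Gaussian maximum likelihood estimate). The data matrix `X`, the labels, and all function names are illustrative assumptions; `X` stacks samples row-wise with the constant-1 column appended.

```python
import numpy as np

def squared_error_loss(w, X, y):
    """Least squares: sum of squared Euclidean distances between the
    observations y_i and the estimates w^T x_i."""
    r = X @ w - y
    return np.sum(r ** 2)

def perceptron_loss(w, X, y):
    """Perceptron: only misclassified points (y_i * w^T x_i <= 0, with
    y_i in {1, -1}) contribute, by their unnormalized distance to the
    separating hyperplane."""
    margins = y * (X @ w)
    return -np.sum(margins[margins <= 0])

def binary_cross_entropy(w, X, y):
    """Binary logistic regression: negative Bernoulli log-likelihood.
    A sample whose true class gets probability 1 contributes nothing
    (ln 1 = 0); a misclassified sample contributes a positive amount."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # p_i = P(y_i = 1 | x_i)
    eps = 1e-12                           # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def multinomial_cross_entropy(W, X, y):
    """Multinomial logistic regression: negative multinomial
    log-likelihood; y holds integer labels 0..k-1, W one row per class."""
    Z = X @ W.T
    Z = Z - Z.max(axis=1, keepdims=True)  # numerical stability
    log_p = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    return -np.sum(log_p[np.arange(len(y)), y])

# The closed-form least squares solution from the last table row,
# w* = (X^T X)^{-1} X^T y, computed with a linear solve rather than an
# explicit matrix inverse:
X = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 1.0],
              [3.0, -1.0, 1.0]])
y = np.array([3.0, 2.0, 1.0])
w_star = np.linalg.solve(X.T @ X, X.T @ y)
print(w_star, squared_error_loss(w_star, X, y))
```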

Reposted from: https://www.cnblogs.com/HuZihu/p/10970243.html

