A Tour of Machine Learning Algorithms

In this post we take a tour of the most popular machine learning algorithms. It is useful to tour the main algorithms in the field to get a feeling of what methods are available.

There are so many algorithms available and it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit.

In this post I want to give you two ways to think about and categorize the algorithms you may come across in the field.

  • The first is a grouping of algorithms by the learning style.
  • The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together).

Both approaches are useful, but we will focus in on the grouping of algorithms by similarity and go on a tour of a variety of different algorithm types.

After reading this post, you will have a much better understanding of the most popular machine learning algorithms for supervised learning and how they are related.

Figure: an ensemble learning method, shown as an ensemble of lines of best fit. Weak members are grey; the combined prediction is red. Plot from Wikipedia, licensed under public domain.

Algorithms Grouped by Learning Style

There are different ways an algorithm can model a problem based on its interaction with the experience or environment or whatever we want to call the input data.

It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt.

There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.

This taxonomy or way of organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the approach that is most appropriate for your problem in order to get the best result.

Let’s take a look at three different learning styles in machine learning algorithms:

Supervised Learning

Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a point in time.

A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.

Example problems are classification and regression.

Example algorithms include Logistic Regression and the Back Propagation Neural Network.
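As a concrete illustration of the train-then-predict loop, here is a minimal sketch using scikit-learn's Logistic Regression; the word-count features and spam labels are made up for illustration.

```python
# A supervised-learning sketch: fit on labelled training data,
# then predict labels for unseen examples.
# Assumes scikit-learn; the features and labels are made up.
from sklearn.linear_model import LogisticRegression

# Each row is an example (say, two word counts); labels: spam (1) or not (0).
X_train = [[3, 0], [4, 1], [0, 2], [1, 3]]
y_train = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)      # training: corrected against the known labels

print(model.predict([[2, 0]]))   # predict the label for a new example
```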

Unsupervised Learning

Input data is not labelled and does not have a known result.

A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize the data by similarity.

Example problems are clustering, dimensionality reduction and association rule learning.

Example algorithms include the Apriori algorithm and k-Means.
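For a feel of how unlabelled data is organized by structure alone, here is a minimal k-Means sketch using scikit-learn; the points and cluster count are illustrative.

```python
# An unsupervised-learning sketch: k-Means groups unlabelled points
# by similarity; no labels are ever given to the algorithm.
# Assumes scikit-learn; the data points are illustrative.
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)   # cluster assignments deduced from structure
print(labels)                    # e.g. [0 0 1 1]: cluster ids, not class labels
```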

Semi-Supervised Learning

Input data is a mixture of labelled and unlabelled examples.

There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.

Example problems are classification and regression.

Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
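One simple such extension is self-training: fit a supervised model on the labelled examples, pseudo-label the unlabelled examples it is most confident about, and refit. A minimal sketch, assuming scikit-learn and an illustrative 0.9 confidence threshold:

```python
# A self-training sketch, one simple semi-supervised strategy:
# fit on the labelled data, pseudo-label the most confident
# unlabelled examples, and refit on the augmented set.
# Assumes scikit-learn; data and the 0.9 threshold are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_labelled = np.array([[0.0], [1.0], [9.0], [10.0]])
y_labelled = np.array([0, 0, 1, 1])
X_unlabelled = np.array([[0.5], [9.5], [5.0]])

model = LogisticRegression().fit(X_labelled, y_labelled)
proba = model.predict_proba(X_unlabelled)
confident = proba.max(axis=1) > 0.9              # keep confident pseudo-labels only

X_aug = np.vstack([X_labelled, X_unlabelled[confident]])
y_aug = np.concatenate([y_labelled, proba.argmax(axis=1)[confident]])
model = LogisticRegression().fit(X_aug, y_aug)   # refit on the augmented data
```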

Overview

When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods.

A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labelled examples.

Algorithms Grouped By Similarity

Algorithms are often grouped by similarity in terms of their function (how they work). For example, tree-based methods, and neural network inspired methods.

I think this is the most useful way to group algorithms and it is the approach we will use here.

This is a useful grouping method, but it is not perfect. There are still algorithms that could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural network inspired method and an instance-based method. There are also categories that have the same name describing both the problem and the class of algorithm, such as Regression and Clustering.

We could handle these cases by listing algorithms twice or by selecting the group that subjectively is the “best” fit. I like this latter approach of not duplicating algorithms to keep things simple.

In this section I list many of the popular machine learning algorithms grouped the way I think is the most intuitive. It is not exhaustive in either the groups or the algorithms, but I think it is representative and will be useful to you to get an idea of the lay of the land.

Please Note: There is a strong bias towards algorithms used for classification and regression, the two most prevalent supervised machine learning problems you will encounter.

If you know of an algorithm or a group of algorithms not listed, put it in the comments and share it with us. Let’s dive in.

Regression Algorithms

Regression is concerned with modelling the relationship between variables; the model is iteratively refined using a measure of error in the predictions it makes.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to both the class of problem and the class of algorithm. Really, regression is a process; a minimal sketch of that process follows the list below.

The most popular regression algorithms are:

  • Ordinary Least Squares Regression (OLSR)
  • Linear Regression
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
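Here is the iterative-refinement sketch promised above: a line of best fit adjusted step by step to reduce the mean squared error of its predictions. Plain NumPy; the data, learning rate and iteration count are illustrative.

```python
# Fit a line by iteratively reducing the squared error of its
# predictions (gradient descent on mean squared error).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])           # roughly y = 2x

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    pred = w * X + b
    error = pred - y                          # the measure of error
    w -= lr * (2 * error * X).mean()          # gradient of MSE w.r.t. slope
    b -= lr * (2 * error).mean()              # gradient w.r.t. intercept

print(w, b)                                   # close to slope 2, intercept 0
```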

Instance-based Algorithms

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on representation of the stored instances and similarity measures used between instances.

The most popular instance-based algorithms are:

  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
  • Locally Weighted Learning (LWL)
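To make the winner-take-all idea concrete, here is a from-scratch kNN sketch: store the training data, rank stored instances by a distance measure, and let the k best matches vote. The toy data and k are illustrative.

```python
# A minimal k-nearest-neighbour sketch: keep the training instances,
# compare a query to every stored instance with a distance measure,
# and let the closest k instances vote on the prediction.
import numpy as np
from collections import Counter

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [6.0, 6.0], [5.8, 6.2]])
y_train = np.array([0, 0, 1, 1])

def knn_predict(query, k=3):
    dists = np.linalg.norm(X_train - query, axis=1)   # similarity measure
    nearest = y_train[np.argsort(dists)[:k]]          # k best matches
    return Counter(nearest).most_common(1)[0][0]      # majority vote

print(knn_predict(np.array([1.1, 0.9])))              # -> 0
```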

Regularization Algorithms

Regularization methods are extensions made to other methods (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing.

I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods.

The most popular regularization algorithms are:

  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
  • Least-Angle Regression (LARS)
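As a minimal sketch of the penalty idea, here is ridge regression in closed form: ordinary least squares plus a complexity term alpha * ||w||^2 that shrinks the weights. The data and alpha value are illustrative.

```python
# Ridge regression: penalize weight size to favor a simpler,
# better-generalizing model. Closed-form solution in NumPy.
import numpy as np

X = np.array([[1.0, 1.0], [2.0, 1.9], [3.0, 3.1], [4.0, 4.0]])  # correlated columns
y = np.array([1.0, 2.0, 3.0, 4.0])
alpha = 1.0                                   # strength of the complexity penalty

# w = (X^T X + alpha * I)^-1 X^T y
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
print(w)   # smaller, more stable weights than the unregularized solution
```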

Decision Tree Algorithms

Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

The most popular decision tree algorithms are:

  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5 and C5.0 (different versions of a powerful approach)
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • M5
  • Conditional Decision Trees
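To show the forking idea at its simplest, here is a from-scratch decision stump (a one-level tree): scan candidate thresholds on a single attribute and keep the split with the fewest misclassifications. Toy data.

```python
# A decision stump: the simplest tree, a single fork on one attribute.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])
y = np.array([0, 0, 0, 1, 1, 1])

best = None
for t in (x[:-1] + x[1:]) / 2:              # candidate thresholds (midpoints)
    pred = (x > t).astype(int)              # fork: left vs right of t
    errors = (pred != y).sum()
    if best is None or errors < best[1]:
        best = (t, errors)

print(best)   # threshold near 5.0 with 0 errors
```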

Bayesian Algorithms

Bayesian methods are those that explicitly apply Bayes’ Theorem to problems such as classification and regression.

The most popular Bayesian algorithms are:

  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
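A minimal sketch of the Bayesian idea, using Gaussian Naive Bayes on a single toy feature: score each class by prior times likelihood, which is Bayes’ Theorem up to a normalizing constant. Assumes SciPy for the normal density.

```python
# Gaussian Naive Bayes: pick the class with the highest
# prior * likelihood (posterior up to a constant).
import numpy as np
from scipy.stats import norm

x = np.array([1.0, 1.2, 0.8, 5.0, 5.2, 4.8])
y = np.array([0, 0, 0, 1, 1, 1])

def predict(query):
    scores = []
    for c in (0, 1):
        prior = (y == c).mean()
        likelihood = norm.pdf(query, x[y == c].mean(), x[y == c].std())
        scores.append(prior * likelihood)   # Bayes' Theorem, unnormalized
    return int(np.argmax(scores))

print(predict(1.1))   # -> 0
```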

Clustering Algorithms

Clustering, like regression, describes both the class of problem and the class of methods.

Clustering methods are typically organized by their modelling approach, such as centroid-based or hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality.

The most popular clustering algorithms are:

  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
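As a sketch of the centroid-based approach, here is k-Means from scratch: alternate between assigning points to their nearest centroid and moving each centroid to the mean of its points. Plain NumPy; the data, k = 2 and the naive initialisation are illustrative.

```python
# Centroid-based clustering from scratch (k-Means, k = 2).
import numpy as np

X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [8.5, 9.0]])
centroids = X[:2].copy()                    # naive initialisation

for _ in range(10):
    # assign each point to its nearest centroid
    dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
    assign = dists.argmin(axis=1)
    # move each centroid to the mean of its assigned points
    for c in range(2):
        if (assign == c).any():
            centroids[c] = X[assign == c].mean(axis=0)

print(centroids)   # two centroids, one per natural group
```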

Association Rule Learning Algorithms

Association rule learning methods extract rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.

The most popular association rule learning algorithms are:

  • Apriori algorithm
  • Eclat algorithm
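The core of both algorithms is counting support, the fraction of transactions in which an itemset appears. A minimal sketch over toy transactions, checking all item pairs against an illustrative minimum support of 0.5:

```python
# Support counting, the building block of Apriori-style rule learning:
# keep the item pairs that co-occur in enough transactions.
from itertools import combinations

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread", "eggs"}, {"milk", "eggs"}]
min_support = 0.5
items = {i for t in transactions for i in t}

for pair in combinations(sorted(items), 2):
    support = sum(set(pair) <= t for t in transactions) / len(transactions)
    if support >= min_support:
        print(pair, support)    # e.g. ('bread', 'milk') 0.5
```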

Artificial Neural Network Algorithms

Artificial Neural Networks are models inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but they are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

Note that I have separated out Deep Learning from neural networks because of the massive growth and popularity in the field. Here we are concerned with the more classical methods.

The most popular artificial neural network algorithms are:

  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Radial Basis Function Network (RBFN)
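As a minimal sketch of the classic building block, here is the Perceptron learning rule from scratch: nudge the weights towards each misclassified example until the data is separated. The AND data and learning rate are illustrative.

```python
# The Perceptron learning rule: update weights only on mistakes.
import numpy as np

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])                    # logical AND
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                           # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)            # threshold activation
        w += lr * (yi - pred) * xi            # nudge towards the example if wrong
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])      # -> [0, 0, 0, 1]
```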

Deep Learning Algorithms

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data.

The most popular deep learning algorithms are:

  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders

Dimensionality Reduction Algorithms

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe the data using less information.

This can be useful for visualizing high-dimensional data or for simplifying data that can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

The most popular dimensionality reduction algorithms are:

  • Principal Component Analysis (PCA)
  • Principal Component Regression (PCR)
  • Partial Least Squares Regression (PLSR)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
  • Linear Discriminant Analysis (LDA)
  • Mixture Discriminant Analysis (MDA)
  • Quadratic Discriminant Analysis (QDA)
  • Flexible Discriminant Analysis (FDA)
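As a sketch of the most widely used method in this group, here is PCA from scratch: centre the data, take the eigenvectors of its covariance matrix, and project onto the leading component. Plain NumPy; the data is illustrative.

```python
# PCA: describe the data with fewer dimensions by projecting onto
# the directions of greatest variance.
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

X_centred = X - X.mean(axis=0)
cov = np.cov(X_centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

top = eigvecs[:, -1:]                         # direction of most variance
X_reduced = X_centred @ top                   # 2-D data summarised in 1-D
print(X_reduced.ravel())
```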

Ensemble Algorithms

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and, as such, is very popular.

The most popular ensemble algorithms are:

  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Gradient Boosted Regression Trees (GBRT)
  • Random Forest
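To make this concrete, here is a minimal sketch of bagging, echoing the ensemble-of-lines figure at the top of the post: train shallow trees on bootstrap samples and average their predictions. Assumes scikit-learn; the noisy sine data and member count are illustrative.

```python
# Bagging: independently train weak members on bootstrap samples,
# then combine (average) their predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = np.sin(X.ravel()) + rng.normal(0, 0.3, 50)

members = []
for _ in range(25):                           # independently trained members
    idx = rng.integers(0, len(X), len(X))     # bootstrap sample (with replacement)
    tree = DecisionTreeRegressor(max_depth=2).fit(X[idx], y[idx])
    members.append(tree)

# the overall prediction is the average of the weak members
ensemble_pred = np.mean([m.predict(X) for m in members], axis=0)
print(ensemble_pred[:5])
```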

Other Algorithms

Many algorithms were not covered.

For example, what group would Support Vector Machines go into? Its own?

I did not cover algorithms from speciality tasks in the process of machine learning, such as:

  • Feature selection algorithms
  • Algorithm accuracy evaluation
  • Performance measures

I also did not cover algorithms from speciality sub-fields of machine learning, such as:

  • Computational intelligence (evolutionary algorithms, etc.)
  • Computer Vision (CV)
  • Natural Language Processing (NLP)
  • Recommender Systems
  • Reinforcement Learning
  • Graphical Models
  • And more…

These may feature in future posts.

Get your FREE Algorithms Mind Map


Sample of the handy machine learning algorithms mind map.

I've created a handy mind map of 60+ algorithms organized by type.

Download it, print it and use it to jump-start your next machine learning project.


Further Reading

This tour of machine learning algorithms was intended to give you an overview of what is out there and some ideas on how to relate algorithms to each other.

I’ve collected together some resources for you to continue your reading on algorithms. If you have a specific question, please leave a comment.

Other Lists of Algorithms

There are other great lists of algorithms out there if you’re interested. Below are a few hand-selected examples.

How to Study Machine Learning Algorithms

Algorithms are a big part of machine learning. It’s a topic I am passionate about and write about a lot on this blog. Below are a few hand-selected posts that might interest you for further reading.

How to Run Machine Learning Algorithms

Sometimes you just want to dive into code. Below are some links you can use to run machine learning algorithms, code them up using standard libraries or implement them from scratch.

Final Word

I hope you have found this tour useful.

Please, leave a comment if you have any questions or ideas on how to improve the algorithm tour.

Update #1: Continue the discussion on HackerNews and reddit.

Update #2: I’ve added a bunch more resources and more algorithms. I’ve also added a handy mind map that you can download (see above).

 
About Jason Brownlee
The editor-in-chief at MachineLearningMastery.com. Jason is a husband, father, researcher, author, professional programmer and a machine learning practitioner. Learn more about him.
 
 
from: http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/

Reposted from: https://www.cnblogs.com/GarfieldEr007/p/5243092.html
