Machine Learning and Classifiers from Wiki

Wikipedia really is a great resource: every topic is laid out clearly; it is practically an all-purpose reference book...

Algorithm types

Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm.

  • Supervised learning generates a function that maps inputs to desired outputs (also called labels, because they are often provided by human experts labeling the training examples). For example, in a classification problem, the learner approximates a function mapping a vector into classes by looking at input-output examples of the function. (A short code sketch contrasting this with unsupervised learning follows this list.)
  • Unsupervised learning models a set of inputs, like clustering. See also data mining and knowledge discovery.
  • Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier.
  • Reinforcement learning learns how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback in the form of rewards that guides the learning algorithm.
  • Transduction tries to predict new outputs based on training inputs, training outputs, and test inputs.
  • Learning to learn learns its own inductive bias based on previous experience.
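
To make the first two categories concrete, here is a minimal sketch. It assumes scikit-learn and its bundled iris dataset (neither is part of the original text): the supervised learner is fit on input-output pairs, while the unsupervised learner only models the inputs.

```python
# Supervised vs. unsupervised learning on the same data (illustrative sketch).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
supervised = LogisticRegression(max_iter=1000).fit(X, y)   # learns the input -> label mapping
unsupervised = KMeans(n_clusters=3, n_init=10).fit(X)      # only models the inputs (clusters them)
print(supervised.predict(X[:3]), unsupervised.labels_[:3])
```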


Approaches

Main article: List of machine learning algorithms

Decision tree learning

Main article: Decision tree learning

Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value.
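
As a hedged illustration (not part of the original article), a decision tree can be fit and its learned rules inspected with scikit-learn; the dataset and depth are arbitrary choices.

```python
# Fit a shallow decision tree and print the if/then rules it learned.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))      # the learned rules mapping observations to a target value
print(tree.predict(X[:1]))    # predicted target value for one observation
```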

Association rule learning

Main article: Association rule learning

Association rule learning is a method for discovering interesting relations between variables in large databases.
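
As an illustrative sketch (the toy transactions and thresholds are made up, and real miners such as Apriori or FP-growth prune the itemset search far more efficiently), support and confidence for simple one-item rules can be computed directly:

```python
# Score candidate rules {a} -> {b} on toy market-basket data by support and confidence.
from itertools import permutations

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

def support(itemset):
    """Fraction of transactions that contain every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = set().union(*transactions)
for a, b in permutations(items, 2):
    sup = support({a, b})
    if sup >= 0.4:
        conf = sup / support({a})          # confidence = P(b | a)
        print(f"{a} -> {b}: support={sup:.2f}, confidence={conf:.2f}")
```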

Artificial neural networks

Main article: Artificial neural network

An artificial neural network (ANN) learning algorithm, usually called "neural network" (NN), is a learning algorithm that is inspired by the structure and/or functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.
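
A minimal sketch of such a network, assuming scikit-learn's multi-layer perceptron (the layer sizes, dataset, and iteration budget are illustrative, not prescribed by the text):

```python
# Fit a small feed-forward neural network on a non-linear toy dataset.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)                              # learn a non-linear input/output relationship
print("training accuracy:", net.score(X, y))
```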

Genetic programming

Genetic programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task.
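
A toy sketch of the idea, assuming a made-up symbolic-regression task (fit f(x) = x*x + x) and the simplest possible operators; real GP systems also use crossover, typed primitives, and bloat control.

```python
# Evolve small expression trees over {+, *, x, constants} to fit a target function.
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def target(x):
    return x * x + x                         # the task the programs must learn

def random_tree(depth=2):
    """Grow a random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(0, 2)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):                           # squared error over sample points (lower is better)
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree, p=0.3):
    """Replace a random subtree with a freshly grown one."""
    if random.random() < p or not isinstance(tree, tuple):
        return random_tree(2)
    op, left, right = tree
    return (op, mutate(left, p), right) if random.random() < 0.5 else (op, left, mutate(right, p))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)             # truncation selection on the fitness landscape
    population = population[:50] + [mutate(random.choice(population[:50])) for _ in range(150)]

best = min(population, key=fitness)
print("best fitness:", fitness(best), "program:", best)
```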

Inductive logic programming

Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program which entails all the positive and none of the negative examples.
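
A drastically simplified sketch of the covering test, assuming a made-up family-relations example (real ILP systems, typically Prolog-based, search a much richer clause space); it only considers hypotheses of the form target(X,Y) :- p(X,Z), q(Z,Y).

```python
# Brute-force a two-literal clause that entails all positive and no negative examples.
from itertools import product

background = {  # background knowledge: ("parent", a, b) means a is a parent of b
    ("parent", "ann", "bob"), ("parent", "bob", "carl"),
    ("parent", "ann", "beth"), ("parent", "beth", "dana"),
}
positives = {("ann", "carl"), ("ann", "dana")}   # grandparent examples
negatives = {("bob", "ann"), ("carl", "dana")}

def covers(clause, pair):
    """Does clause (p1, p2) entail target(x, y) via some intermediate z?"""
    p1, p2 = clause
    x, y = pair
    mids = {b for (p, a, b) in background if p == p1 and a == x}
    return any((p2, z, y) in background for z in mids)

predicates = {p for (p, _, _) in background}
for clause in product(predicates, repeat=2):
    if all(covers(clause, e) for e in positives) and not any(covers(clause, e) for e in negatives):
        print("hypothesis: target(X,Y) :- %s(X,Z), %s(Z,Y)" % clause)
```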

Support vector machines

Main article: Support vector machines

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
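
A minimal two-category sketch, assuming scikit-learn; the synthetic data, kernel, and regularization constant are illustrative.

```python
# Train an SVM on a two-class toy problem and classify a new example.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)   # training examples in two categories
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.predict([[0.0, 2.0]]))   # which category a new example falls into
```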

Clustering

Main article: Cluster analysis

Cluster analysis or clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis.
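
A minimal sketch, assuming scikit-learn; note that no labels are passed to the learner, only observations, and k = 3 is an illustrative choice.

```python
# Assign unlabeled observations to clusters with k-means.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)   # unlabeled observations
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])             # cluster assignment of the first observations
```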

Bayesian networks

Main article: Bayesian network

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
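
To illustrate the disease/symptom example with made-up probabilities (none of these numbers come from the article), the posterior for a two-node disease -> symptom network can be computed by enumeration, i.e. Bayes' rule:

```python
# Exact inference in a two-node Bayesian network: P(disease | symptom observed).
p_disease = 0.01                    # prior P(D = true)
p_symptom_given = {True: 0.9,       # P(S = true | D = true)
                   False: 0.1}      # P(S = true | D = false)

# joint P(D = d, S = true) for both values of d, then normalize
joint = {d: (p_disease if d else 1 - p_disease) * p_symptom_given[d] for d in (True, False)}
posterior = joint[True] / sum(joint.values())
print(f"P(disease | symptom) = {posterior:.3f}")   # about 0.083 with these numbers
```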

Reinforcement learning

Main article: Reinforcement learning

Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.
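
A tabular Q-learning sketch on a made-up five-state corridor (the environment, step cap, learning rate, and discount factor are all assumptions). Since Q-learning is off-policy, the agent here simply explores with random actions while the Q-table converges toward the values of the greedy policy.

```python
# Tabular Q-learning: the agent starts in state 0; stepping right out of state 4 earns reward 1.
import random

n_states, actions = 5, (-1, +1)                  # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9

for episode in range(500):
    s = 0
    for step in range(50):                       # cap episode length
        a = random.choice(actions)               # purely exploratory behaviour policy
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if (s == n_states - 1 and a == +1) else 0.0
        target = reward + (0.0 if reward else gamma * max(Q[(s_next, b)] for b in actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # temporal-difference update
        s = s_next
        if reward:
            break

greedy_policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states)}
print(greedy_policy)                             # should map every state to +1 (go right)
```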

Representation learning

Several learning algorithms, mostly unsupervised ones, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and clustering. Representation learning algorithms often attempt to preserve the information in their input while transforming it into a form that is useful, often as a pre-processing step before classification or prediction. The transformation should allow inputs drawn from the unknown data-generating distribution to be reconstructed, while not necessarily being faithful to configurations that are implausible under that distribution. Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[3]
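
As a minimal sketch of the pre-processing use mentioned above, assuming scikit-learn and its bundled digits dataset (the number of components is an arbitrary choice), PCA learns a low-dimensional code and reconstructs the inputs from it:

```python
# Learn a 10-dimensional representation of 64-dimensional digit images and reconstruct them.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)
pca = PCA(n_components=10).fit(X)          # learn a low-dimensional representation
codes = pca.transform(X)                   # new features, usable by a downstream classifier
X_rec = pca.inverse_transform(codes)       # approximate reconstruction of the inputs
print(codes.shape, ((X - X_rec) ** 2).mean())
```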

Sparse dictionary learning

Sparse dictionary learning is one of the most popular methods in this area and has been successful in many applications. In sparse dictionary learning, a data point is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. Let x be a d-dimensional data point, let D be a d-by-n matrix where each column of D represents a basis function, and let r be the coefficient vector that represents x in terms of D. Mathematically, sparse dictionary learning requires

x \approx D \times r

where r is sparse. Generally, n is assumed to be larger than d to allow the freedom of a sparse representation.

Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine which class a new data point belongs to. Suppose we have already built a dictionary for each class; a new data point is then assigned to the class whose dictionary gives the best sparse representation of it. Sparse dictionary learning has also been applied to image denoising: the key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot. See [4] for details.
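
A minimal sketch matching the x \approx D \times r formulation above, assuming scikit-learn's DictionaryLearning (the data are random, and the overcompleteness and sparsity level are illustrative; note that scikit-learn stores the atoms as the rows of components_ rather than the columns):

```python
# Learn an overcomplete dictionary and sparse codes for random toy signals.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(200, 20)                               # 200 signals, d = 20
learner = DictionaryLearning(n_components=30,        # n > d: overcomplete dictionary
                             transform_algorithm="omp",
                             transform_n_nonzero_coefs=3,
                             max_iter=200, random_state=0)
R = learner.fit(X).transform(X)                      # sparse coefficients r for each signal
D = learner.components_                              # learned basis functions (one per row here)
print(R.shape, (R != 0).sum(axis=1).mean())          # about 3 non-zero coefficients per signal
print(np.linalg.norm(X - R @ D) / np.linalg.norm(X)) # relative reconstruction error
```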



Adding a few statistical classifiers:

 

Algorithms

The most widely used classifiers are the neural network (multi-layer perceptron), support vector machines, k-nearest neighbours, Gaussian mixture model, Gaussian, naive Bayes, decision tree, and RBF classifiers.

Examples of classification algorithms include:
