inductive bias

Reposted from:

  1. https://www.jianshu.com/p/428668e19080

In machine learning, we assume the learner's predictions should approximate the correct results, including on samples that never appeared in training. Since those situations are unknown, the output could in principle be anything; without further assumptions, the task is unsolvable. These necessary assumptions about the target function are called the *inductive bias*.

Inductive bias is somewhat like what we call a prior, but with one difference: the inductive bias is not updated during learning, whereas a prior is continually updated as learning proceeds.
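
As a rough sketch of this contrast (a toy Beta-Bernoulli example, not from the original post): the prior over a coin's bias is updated by every observation, while the inductive bias, i.e. the choice to model the data as i.i.d. coin flips at all, is fixed before learning.

```python
def update_beta_prior(alpha, beta, flips):
    """Update a Beta(alpha, beta) prior with observed coin flips (1 = heads)."""
    for flip in flips:
        if flip == 1:
            alpha += 1   # prior belief shifts with each observation
        else:
            beta += 1
    return alpha, beta

alpha, beta = 1.0, 1.0   # uniform prior over the coin's bias
alpha, beta = update_beta_prior(alpha, beta, [1, 1, 0, 1])
print(alpha, beta)       # -> 4.0 2.0: the prior has been updated
# The inductive bias, by contrast, was the fixed choice to model the data
# as i.i.d. coin flips; no observation changes that choice.
```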

Algorithm | Inductive Bias
---|---
Linear Regression | The relationship between the attributes x and the output y is linear. The goal is to minimize the sum of squared errors.
Single-Unit Perceptron | Each input votes independently toward the final classification (interactions between inputs are not possible).
Neural Networks with Backpropagation | Smooth interpolation between data points.
K-Nearest Neighbors | The classification of an instance x will be most similar to the classification of other instances that are nearby in Euclidean distance.
Support Vector Machines | Distinct classes tend to be separated by wide margins.
Naive Bayes | Each input depends only on the output class or label; the inputs are independent from each other.
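
As a rough illustration of two rows in this table, here is a minimal sketch (data and numbers assumed, not from the original post) showing how linear regression and k-nearest neighbors give different predictions for the same unseen point, purely because of their different built-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.shape)

# Linear regression: least-squares fit of y = w*x + b (its bias: linearity).
w, b = np.polyfit(x, y, deg=1)
lin_pred = w * 0.25 + b

# 3-NN: average of the nearest training points (its bias: locality).
k = 3
dists = np.abs(x - 0.25)
knn_pred = y[np.argsort(dists)[:k]].mean()

print(f"linear regression at x=0.25: {lin_pred:.3f}")
print(f"3-NN at x=0.25:              {knn_pred:.3f}")
# The two models disagree on the unseen point because they encode different
# assumptions about it, not because one saw more data.
```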

I think it is a set of assumptions with which a model can predict more properly on inputs that are not in the data set. A model needs some inductive bias, because only with it can the model be useful on more data. The goal of a model is to fit most of the data, not only the sample data. So inductive bias is important. (From: https://stackoverflow.com/questions/35655267/what-is-inductive-bias-on-machine-learning)

Therefore, here is the definition in layman's terms: given a data set, which learning model (inductive bias) should be chosen? Inductive bias requires a set of prior assumptions about the tasks being considered. No single bias is best on all problems, and there have been many research efforts to automatically discover the inductive bias.

The following is a list of common inductive biases in machine learning algorithms (from Wikipedia).

Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier.
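
A minimal sketch of this bias (toy probabilities assumed for illustration): Naive Bayes scores a class by multiplying per-feature likelihoods, which is exactly the conditional-independence assumption at work.

```python
import numpy as np

# P(feature=1 | class) for two binary features and two classes (toy numbers).
likelihood = np.array([[0.8, 0.3],    # class 0: P(x1=1), P(x2=1)
                       [0.2, 0.9]])   # class 1: P(x1=1), P(x2=1)
prior = np.array([0.5, 0.5])

def naive_bayes_posterior(x):
    """Posterior over classes for a binary feature vector x."""
    # Product over features: the conditional-independence assumption.
    lik = np.prod(np.where(x == 1, likelihood, 1 - likelihood), axis=1)
    post = prior * lik
    return post / post.sum()

print(naive_bayes_posterior(np.array([1, 0])))  # favors class 0
```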

Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. Although cross-validation may seem to be free of bias, the “no free lunch” theorems show that cross-validation must be biased.
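
For example, here is a minimal k-fold cross-validation sketch (data and candidate degrees assumed for illustration) that selects a polynomial degree by held-out error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = 1.5 * x - 0.5 * x**2 + rng.normal(0, 0.1, size=x.shape)

def cv_error(degree, x, y, folds=5):
    """Mean squared error of a degree-`degree` polynomial across k folds."""
    idx = np.arange(len(x))
    errors = []
    for f in range(folds):
        test = idx % folds == f            # every folds-th point held out
        coeffs = np.polyfit(x[~test], y[~test], degree)
        pred = np.polyval(coeffs, x[test])
        errors.append(np.mean((pred - y[test]) ** 2))
    return np.mean(errors)

best = min(range(1, 8), key=lambda d: cv_error(d, x, y))
print("degree chosen by cross-validation:", best)
```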

Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary. This is the bias used in support vector machines. The assumption is that distinct classes tend to be separated by wide boundaries.
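
A minimal sketch using scikit-learn (an assumed dependency, not mentioned in the post): for a linear SVM the margin width is 2/‖w‖, so maximizing the margin is equivalent to minimizing ‖w‖.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable toy clusters.
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
w = clf.coef_[0]                              # weight vector of the separating hyperplane
print("margin width:", 2 / np.linalg.norm(w))
```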

Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. The assumption is that simpler hypotheses are more likely to be true. See Occam’s razor.
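
In its common two-part form (a standard statement of the principle, not spelled out in the post), this means choosing the hypothesis that minimizes the total code length of the hypothesis plus the data encoded with its help:

$$
h^{*} = \arg\min_{h \in \mathcal{H}} \left[ L(h) + L(D \mid h) \right]
$$

where $L(h)$ is the number of bits needed to describe the hypothesis and $L(D \mid h)$ the number of bits needed to describe the data given the hypothesis. A complex hypothesis may compress the data well (small $L(D \mid h)$) but cost many bits to state (large $L(h)$); Occam's razor falls out of trading these off.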

Minimum features: unless there is good evidence that a feature is useful, it should be deleted. This is the assumption behind feature selection algorithms.
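
A minimal sketch of this bias as a simple correlation filter (one of many possible feature-selection criteria, assumed here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
useful = rng.normal(size=n)
noise = rng.normal(size=(n, 3))                 # three irrelevant features
X = np.column_stack([useful, noise])
y = 2 * useful + rng.normal(0, 0.5, size=n)

# Keep a feature only if there is evidence it is useful: here, a
# non-trivial absolute correlation with the target.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
keep = corr > 0.1
print("correlations:", np.round(corr, 2))       # feature 0 clearly dominates
print("features kept:", np.flatnonzero(keep))
```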

Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Given a case for which the class is unknown, guess that it belongs to the same class as the majority in its immediate neighborhood. This is the bias used in the k-nearest neighbors algorithm. The assumption is that cases that are near each other tend to belong to the same class.
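
A minimal sketch of the resulting classifier (toy data assumed): majority vote among the k nearest points in Euclidean distance.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority class among the k nearest neighbors of x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()          # majority vote

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # -> 1
```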
