  4.2.1. Standardization, or mean removal and variance scaling

  Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn: they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.

  In practice we often ignore the shape of the distribution and just transform the data to center it by removing the mean value of each feature, then scale it by dividing non-constant features by their standard deviation.

  For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) assume that all features are centered around zero and have variance of the same order. If a feature has a variance that is orders of magnitude larger than the others, it might dominate the objective function and make the estimator unable to learn from the other features correctly.
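  To make this concrete, here is a minimal NumPy sketch (an illustration added here, not from the scikit-learn docs) of how a large-variance feature swamps the squared Euclidean distances that the RBF kernel exp(-gamma * ||x - y||^2) is built on:

  import numpy as np

  rng = np.random.RandomState(0)
  # Two features: one on a unit scale, one roughly 1000x larger.
  X = np.c_[rng.normal(0, 1, 5), rng.normal(0, 1000, 5)]

  # Per-feature contribution to the squared distance between the
  # first two samples: the large-variance column dominates, so the
  # RBF kernel effectively ignores feature 0.
  d = (X[0] - X[1]) ** 2
  print(d / d.sum())  # nearly all of the distance comes from feature 1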

  The function scale provides a quick and easy way to perform this operation on a single array-like dataset:

  >>> from sklearn import preprocessing
  >>> import numpy as np
  >>> X = np.array([[ 1., -1.,  2.],
  ...               [ 2.,  0.,  0.],
  ...               [ 0.,  1., -1.]])
  >>> X_scaled = preprocessing.scale(X)

  >>> X_scaled
  array([[ 0.  ..., -1.22...,  1.33...],
         [ 1.22...,  0.  ..., -0.26...],
         [-1.22...,  1.22..., -1.06...]])

  Scaled data has zero mean and unit variance:

  >>> X_scaled.mean(axis=0)
  array([ 0.,  0.,  0.])

  >>> X_scaled.std(axis=0)
  array([ 1.,  1.,  1.])
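  Equivalently, a quick sketch (added here for illustration) of the same computation in plain NumPy makes explicit what scale does per column:

  import numpy as np

  X = np.array([[ 1., -1.,  2.],
                [ 2.,  0.,  0.],
                [ 0.,  1., -1.]])

  # Center each column, then divide by its (population) standard
  # deviation; this matches preprocessing.scale(X) up to
  # floating-point error.
  X_manual = (X - X.mean(axis=0)) / X.std(axis=0)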

  The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set, so that the same transformation can later be reapplied on the test set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline:

  >>> scaler = preprocessing.StandardScaler().fit(X)
  >>> scaler
  StandardScaler()

  >>> scaler.mean_
  array([ 1.  ...,  0.  ...,  0.33...])

  >>> scaler.scale_
  array([ 0.81...,  0.81...,  1.24...])

  >>> scaler.transform(X)
  array([[ 0.  ..., -1.22...,  1.33...],
         [ 1.22...,  0.  ..., -0.26...],
         [-1.22...,  1.22..., -1.06...]])

  The scaler instance can then be used on new data to transform it the same way it did on the training set:

  >>> scaler.transform([[-1.,  1., 0.]])
  array([[-2.44...,  1.22..., -0.26...]])

  It is possible to disable either centering or scaling by passing with_mean=False or with_std=False, respectively, to the constructor of StandardScaler.
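  As a sketch of the Pipeline use mentioned above (the logistic-regression step is an arbitrary choice for illustration, and X_train/y_train are assumed to exist), the scaler can be fit and applied automatically as the first pipeline step:

  from sklearn.pipeline import Pipeline
  from sklearn.preprocessing import StandardScaler
  from sklearn.linear_model import LogisticRegression

  # fit() learns the per-feature mean and standard deviation on the
  # training data; predict() then applies the same transformation to
  # new data before passing it to the classifier.
  pipe = Pipeline([
      ('scale', StandardScaler()),
      ('clf', LogisticRegression()),
  ])
  # pipe.fit(X_train, y_train)
  # pipe.predict(X_test)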

  4.2.1.1. Scaling features to a range

  An alternative standardization is scaling features to lie between a given minimum and maximum value, often between zero and one. This can be achieved using MinMaxScaler.


  The motivation for using this scaling includes robustness to very small standard deviations of features and the preservation of zero entries in sparse data.

  Here is an example of scaling a toy data matrix to the [0, 1] range:
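  >>> X_train = np.array([[ 1., -1.,  2.],
  ...                     [ 2.,  0.,  0.],
  ...                     [ 0.,  1., -1.]])
  >>> min_max_scaler = preprocessing.MinMaxScaler()
  >>> X_train_minmax = min_max_scaler.fit_transform(X_train)
  >>> X_train_minmax
  array([[ 0.5       ,  0.        ,  1.        ],
         [ 1.        ,  0.5       ,  0.33333333],
         [ 0.        ,  1.        ,  0.        ]])

  For the default [0, 1] range, each feature is mapped to (x - min) / (max - min), with the minimum and maximum computed per column on the training data.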
