Feature scaling

From Wikipedia, the free encyclopedia

Feature scaling is a method used to standardize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

Motivation

Since the range of values of raw data varies widely, in some machine learning algorithms objective functions will not work properly without normalization[citation needed]. For example, many classifiers calculate the distance between two points using the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature[citation needed]. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance[citation needed].

Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it.

 

Methods

Rescaling

The simplest method is rescaling the range of features so that they lie in [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula is given as:

                                   x' = \frac{x - \min(x)}{\max(x) - \min(x)}

where x is an original value and x' is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights).
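A minimal sketch of this rescaling in Python with NumPy (the weights array below is a hypothetical example matching the text):

    import numpy as np

    def rescale(x):
        # Min-max rescaling of a 1-D array to the range [0, 1].
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    # Hypothetical student weights in pounds, spanning [160, 200].
    weights = np.array([160, 170, 185, 200])
    print(rescale(weights))  # [0.    0.25  0.625 1.   ]

The same formula generalizes to a target range [a, b] by multiplying the result by (b − a) and adding a.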

Standardization

In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (by subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and neural networks)[citation needed]. This is typically done by calculating standard scores.[1] The general method of calculation is to determine the distribution mean and standard deviation for each feature, subtract the mean from each feature, and then divide the values of each feature (with the mean already subtracted) by its standard deviation.

                                   x' = \frac{x - \bar{x}}{\sigma}

where x is the original feature vector, \bar{x} is the mean of that feature vector, and \sigma is its standard deviation.
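As an illustration, a short NumPy sketch of per-feature standardization (the feature matrix X is a hypothetical example; rows are samples, columns are features):

    import numpy as np

    X = np.array([[160.0, 5.5],
                  [170.0, 6.0],
                  [185.0, 5.8],
                  [200.0, 6.2]])

    mean = X.mean(axis=0)  # per-feature mean
    std = X.std(axis=0)    # per-feature standard deviation
    X_standardized = (X - mean) / std

    print(X_standardized.mean(axis=0))  # approximately 0 for each feature
    print(X_standardized.std(axis=0))   # 1 for each feature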

Scaling to unit length

Another option that is widely used in machine learning is to scale the components of a feature vector such that the complete vector has length one. This usually means dividing each component by the Euclidean length of the vector. In some applications (e.g. histogram features) it can be more practical to use the L1 norm (i.e. Manhattan distance, city-block length or taxicab geometry) of the feature vector:

                                   x' = \frac{x}{\|x\|}

This is especially important if in subsequent learning steps the scalar metric is used as a distance measure.
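A brief sketch of unit-length scaling under either norm (the vector v is a hypothetical example):

    import numpy as np

    def unit_scale(x, norm="l2"):
        # Divide a vector by its L2 (Euclidean) or L1 (Manhattan) length.
        x = np.asarray(x, dtype=float)
        length = np.abs(x).sum() if norm == "l1" else np.sqrt((x ** 2).sum())
        return x / length

    v = np.array([3.0, 4.0])
    print(unit_scale(v))             # [0.6 0.8] -> Euclidean length 1
    print(unit_scale(v, norm="l1"))  # [0.4286 0.5714] -> L1 length 1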

Application

In stochastic gradient descent, feature scaling can sometimes improve the convergence speed of the algorithm[citation needed]. In support vector machines,[2] it can reduce the time to find support vectors. Note that feature scaling changes the SVM result[citation needed].
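As one possible illustration (using scikit-learn, which is not part of the original article), standardization is commonly applied inside a pipeline so that the scaler is fit on the training split only; comparing the scaled and unscaled SVMs also shows that scaling changes the SVM result:

    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_wine(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Without scaling, broad-range features dominate the kernel distances.
    unscaled = SVC().fit(X_train, y_train)

    # The scaler is fit on the training split only, then the same
    # transform is applied to the test split.
    scaled = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)

    print("unscaled accuracy:", unscaled.score(X_test, y_test))
    print("scaled accuracy:  ", scaled.score(X_test, y_test))

The two accuracies will generally differ, reflecting that the fitted SVM depends on the scale of the features.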

References

  1. Mohamad, Ismail Bin; Usman, Dauda (2013). "Standardization and Its Effects on K-Means Clustering Algorithm" (PDF). Research Journal of Applied Sciences, Engineering and Technology.
  2. Juszczak, P.; Tax, D. M. J.; Duin, R. P. W. (2002). "Feature scaling in support vector data descriptions". Proc. 8th Annu. Conf. Adv. School Comput. Imaging: 95–101.