You have to divide by the scaling that was applied to the features to un-normalise, but you also have to multiply by the scaling that was applied to the target. So, assuming each feature variable x_i was scaled (divided) by scale_x_i, and the target variable was scaled (divided) by scale_y, then

orig_coef_i = coef_i_found_on_scaled_data / scale_x_i * scale_y

Below is an example using pandas and sklearn's LinearRegression.
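The code for this first fit did not survive, so here is a minimal sketch of it. Note that `load_boston` was removed from scikit-learn 1.2, so the sketch uses synthetic stand-in data with features on deliberately different scales; with the real dataset you would use `boston.data`, `boston.target`, and `boston.feature_names`, giving the coefficients printed below.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Stand-in for the Boston data (load_boston was removed in scikit-learn 1.2):
# three features on very different scales, plus a noisy linear target.
rng = np.random.RandomState(0)
feature_names = ['CRIM', 'ZN', 'INDUS']  # hypothetical subset of the real names
X = rng.uniform(size=(506, 3)) * [90.0, 100.0, 30.0]
y = X @ np.array([-0.1, 0.05, 0.02]) + rng.normal(scale=0.5, size=506)

# Fit on the raw (unscaled) data to get the "original" coefficients.
lr = LinearRegression()
lr.fit(X, y)
orig_coefs = lr.coef_

coefs1 = pd.DataFrame(data={'feature': feature_names,
                            'orig_coef': orig_coefs},
                      columns=['feature', 'orig_coef'])
print(coefs1)
```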
This shows us the coefficients of the linear regression with no scaling applied:
# | feature| orig_coef
# 0| CRIM | -0.107171
# 1| ZN | 0.046395
# 2| INDUS | 0.020860
# etc
We now normalise all of the variables:
# Now we normalise the data
scalerX = StandardScaler().fit(X)
scalery = StandardScaler().fit(y.reshape(-1,1)) # Have to reshape to avoid warnings
normed_X = scalerX.transform(X)
normed_y = scalery.transform(y.reshape(-1,1)) # Have to reshape to avoid warnings
normed_y = normed_y.ravel() # Turn y back into a vector again
# Check it's worked
# print(np.mean(normed_X, axis=0), np.mean(normed_y, axis=0)) # Should be ~0s
# print(np.std(normed_X, axis=0), np.std(normed_y, axis=0))   # Should be ~1s
We can now redo the regression on this standardised data:
# Now we redo our regression
lr = LinearRegression()
lr.fit(normed_X, normed_y)
coefs2 = pd.DataFrame(
    data={
        'feature': boston.feature_names,
        'orig_coef': orig_coefs,
        'norm_coef': lr.coef_,
        'scaleX': scalerX.scale_,
        'scaley': scalery.scale_[0],
    },
    columns=['feature', 'orig_coef', 'norm_coef', 'scaleX', 'scaley']
)
coefs2
…and then apply the scaling to recover the original coefficients:
# We can recreate our original coefficients by dividing by the
# scale of the feature (scaleX) and multiplying by the scale
# of the target (scaleY)
coefs2['rescaled_coef'] = coefs2.norm_coef / coefs2.scaleX * coefs2.scaley
coefs2
When we do this, we see that we have recreated the original coefficients:
# | feature| orig_coef| norm_coef| scaleX| scaley| rescaled_coef
# 0| CRIM | -0.107171| -0.100175| 8.588284| 9.188012| -0.107171
# 1| ZN | 0.046395| 0.117651| 23.299396| 9.188012| 0.046395
# 2| INDUS | 0.020860| 0.015560| 6.853571| 9.188012| 0.020860
# 3| CHAS | 2.688561| 0.074249| 0.253743| 9.188012| 2.688561
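The whole round trip can be condensed into a few lines as a check. This is a sketch with synthetic data standing in for the Boston set (`load_boston` is gone from recent scikit-learn), but the algebra is identical: `norm_coef / scaleX * scaley` reproduces the unscaled fit to machine precision.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the Boston data: features on different scales.
rng = np.random.RandomState(1)
X = rng.normal(loc=[3.6, 11.4, 11.1], scale=[8.6, 23.3, 6.9], size=(506, 3))
y = X @ np.array([-0.107, 0.046, 0.021]) + rng.normal(scale=1.0, size=506)

# Fit on raw data.
orig_coefs = LinearRegression().fit(X, y).coef_

# Fit on standardised data.
scalerX = StandardScaler().fit(X)
scalery = StandardScaler().fit(y.reshape(-1, 1))
normed_lr = LinearRegression().fit(scalerX.transform(X),
                                   scalery.transform(y.reshape(-1, 1)).ravel())

# Undo the scaling: divide by the feature scale, multiply by the target scale.
rescaled = normed_lr.coef_ / scalerX.scale_ * scalery.scale_[0]
print(np.allclose(rescaled, orig_coefs))  # True
```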
For some machine learning methods the target variable y must be normalised as well as the feature variables x_i. If you have done that, you need to include this "multiply by the scale of y" step as well as the "divide by the scale of x_i" step to recover the original regression coefficients.
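One related detail the discussion above skips (so treat this as my sketch, not part of the original method): the intercept also needs un-doing. With StandardScaler, y = scale_y * (norm_coef · (x − mean_x)/scale_x + norm_intercept) + mean_y, so the original intercept is mean_y + scale_y * norm_intercept − Σ rescaled_coef_i * mean_x_i.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Small synthetic example with a non-zero intercept.
rng = np.random.RandomState(2)
X = rng.normal(loc=5.0, scale=[2.0, 9.0], size=(200, 2))
y = X @ np.array([1.5, -0.4]) + 7.0 + rng.normal(scale=0.1, size=200)

plain = LinearRegression().fit(X, y)

scalerX = StandardScaler().fit(X)
scalery = StandardScaler().fit(y.reshape(-1, 1))
normed = LinearRegression().fit(scalerX.transform(X),
                                scalery.transform(y.reshape(-1, 1)).ravel())

rescaled_coef = normed.coef_ / scalerX.scale_ * scalery.scale_[0]
# Recover the intercept by un-shifting with the stored means.
rescaled_intercept = (scalery.mean_[0]
                      + scalery.scale_[0] * normed.intercept_
                      - np.sum(rescaled_coef * scalerX.mean_))

print(np.allclose(rescaled_coef, plain.coef_),
      np.isclose(rescaled_intercept, plain.intercept_))  # True True
```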
Hope that helps!