I am currently moving my data analysis from R to Python. When scaling a dataset in R I use R's scale(), which to my understanding does the following: (x - mean(x)) / sd(x)
To replace that function I tried sklearn.preprocessing.scale(). From my understanding of the description it does the same thing. Nevertheless, I ran a small test file and found out that both methods return different values. Obviously the standard deviations are not the same... Can somebody explain why the standard deviations "deviate" from each other?
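For reference, the two common standard-deviation conventions can be compared directly with NumPy's ddof parameter (a sketch on the same toy data; whether this accounts for the gap is exactly my question):

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 1.0]])

# Population standard deviation: divide by n (ddof=0)
pop_std = np.std(x, axis=0, ddof=0)

# Sample standard deviation: divide by n-1 (ddof=1), as R's sd() does
sample_std = np.std(x, axis=0, ddof=1)

print("ddof=0:", pop_std)     # [1.  0.5]
print("ddof=1:", sample_std)  # [1.41421356 0.70710678]
```

The two results differ by a factor of sqrt(n/(n-1)), which here (n=2) is sqrt(2).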
MWE:
# import packages
from sklearn import preprocessing
import numpy
import rpy2.robjects.numpy2ri
from rpy2.robjects.packages import importr
rpy2.robjects.numpy2ri.activate()
# Set up R namespaces
R = rpy2.robjects.r
np1 = numpy.array([[1.0,2.0],[3.0,1.0]])
print("Numpy-array:")
print(np1)
print("Scaled numpy array through R.scale()")
print(R.scale(np1))
print("-------")
print("Scaled numpy array through preprocessing.scale()")
print(preprocessing.scale(np1, axis=0, with_mean=True, with_std=True))
scaler = preprocessing.StandardScaler()
scaler.fit(np1)
print("Mean of preprocessing.scale():")
print(scaler.mean_)
print("Std of preprocessing.scale():")
print(scaler.scale_)  # named std_ in older sklearn versions
Output: