Let's practice some English along the way.
Handling missing values with sklearn's imputation tools
import numpy as np
import pandas as pd
df = pd.read_csv(r"F:\Python\pythonProject\jupyter notebook\self-studying\100Days\100-Days-Of-ML-Code-master\datasets\Data.csv",encoding="utf-8")
df
X = df.iloc[:, :-1].values
Y = df.iloc[:, 3].values
X
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])  # fit AND transform, not just fit
sklearn provides imputation methods for handling missing values; several interpolation strategies are available, see the documentation if needed.
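A minimal sketch of how mean imputation behaves on a toy matrix (the array here is made up for illustration): each `np.nan` is replaced by the mean of its column, computed from the non-missing entries.

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy matrix with one missing value per column
data = np.array([[1.0, 10.0],
                 [np.nan, 20.0],
                 [3.0, np.nan]])

imp = SimpleImputer(missing_values=np.nan, strategy="mean")
filled = imp.fit_transform(data)  # column means: 2.0 and 15.0
```

Other strategies such as `"median"`, `"most_frequent"`, and `"constant"` work the same way.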
Step 4: Encoding categorical data
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder_X = LabelEncoder()
X[:,0] = labelencoder_X.fit_transform(X[:,0])
Creating a dummy variable
Creating dummy variables: we usually convert a raw multi-class variable into dummy variables, where each dummy represents the difference between two (or several) levels. When a regression model is built, each dummy variable gets its own estimated coefficient, which makes the regression results easier to interpret and more practically meaningful.
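A small sketch of what one-hot encoding produces (the country values below are made up for illustration): each distinct category becomes its own 0/1 column, and exactly one column is 1 per row. `OneHotEncoder` sorts categories alphabetically by default.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

countries = np.array([["France"], ["Spain"], ["Germany"], ["Spain"]])
enc = OneHotEncoder()
dummies = enc.fit_transform(countries).toarray()  # 3 categories -> 3 columns
```

Here `dummies` has shape (4, 3), with columns ordered France, Germany, Spain.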
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("onehot", OneHotEncoder(), [0])], remainder="passthrough")  # one-hot encode only the categorical column
X = ct.fit_transform(X)
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)
Step 5: Splitting the dataset into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)  # split the dataset
Step 6: Feature Scaling
from sklearn.preprocessing import StandardScaler  # standardization
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)  # reuse the scaler fitted on the training set; do not re-fit on test data
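A minimal sketch of the fit-on-train, transform-on-test pattern (toy numbers, made up for illustration): the scaler learns the mean and standard deviation from the training data only, and those same statistics are reused on the test data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[1.0], [2.0], [3.0]])
X_te = np.array([[2.0]])

sc = StandardScaler()
X_tr_s = sc.fit_transform(X_tr)  # learns mean=2.0, std from training data
X_te_s = sc.transform(X_te)      # applies the training statistics, no re-fitting
```

Because the test value equals the training mean, it scales to 0 — the test set never influences the statistics.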
Day 1 is all about preprocessing the dataset; the main thing to note is that the API for imputing missing values has changed quite a bit.