Section I: Brief Introduction to Decision Tree Regression
An advantage of the decision tree algorithm is that it does not require any transformation of the features if we are dealing with nonlinear data. A decision tree is grown by iteratively splitting its nodes until the leaves are pure or a stopping criterion is satisfied.
From: Sebastian Raschka, Vahid Mirjalili. Python Machine Learning, 2nd Edition (Chinese edition). Nanjing: Southeast University Press, 2018.
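To make the node-splitting idea concrete, below is a minimal sketch of how a regression tree can score a candidate split: each threshold is rated by the weighted mean squared error (variance) of the two child nodes. The helper names are hypothetical and this is not scikit-learn's implementation.
import numpy as np

def mse_impurity(y):
    # Node impurity for regression: variance of the targets around the node mean
    return np.mean((y - y.mean()) ** 2) if len(y) else 0.0

def best_split(x, y):
    # Try the midpoint between each pair of adjacent unique feature values
    # and keep the threshold with the lowest weighted child impurity
    best_t, best_cost = None, np.inf
    values = np.unique(x)
    for t in (values[:-1] + values[1:]) / 2.0:
        left, right = y[x <= t], y[x > t]
        cost = (len(left) * mse_impurity(left)
                + len(right) * mse_impurity(right)) / len(y)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
Growing a tree then amounts to applying best_split recursively to each child until a node is pure or a stopping criterion such as a maximum depth is reached.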
Code
from sklearn import datasets
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings("ignore")
# Plotting configuration: high-resolution output and a light font weight
plt.rcParams['figure.dpi'] = 200
plt.rcParams['savefig.dpi'] = 200
font = {'weight': 'light'}
plt.rc("font", **font)
#Section 1: Load data
# Note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2,
# so this call requires an older version (see the alternative sketch below)
price = datasets.load_boston()
X = price.data[:, -1]  # LSTAT: % lower status of the population (last feature column)
y = price.target       # MEDV: median home value in $1000s
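# Alternative for scikit-learn >= 1.2, where load_boston no longer exists:
# a sketch using the OpenML copy of the dataset (assumes the OpenML column
# order also keeps LSTAT as the last feature):
#   from sklearn.datasets import fetch_openml
#   boston = fetch_openml(name="boston", version=1, as_frame=False)
#   X = boston.data[:, -1].astype(float)
#   y = boston.target.astype(float)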
#Section 2: Construct a DecisionTree model
from sklearn.tree import DecisionTreeRegressor
# max_depth=3 is the stopping criterion here: it limits tree growth to curb overfitting
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X.reshape(-1, 1), y)
# Sort the samples by LSTAT so the step-shaped predictions plot as one clean line
sort_idx = X.flatten().argsort()
def lin_regplot(X, y, model):
    # Scatter the raw samples and overlay the model's predictions
    plt.scatter(X, y, c='steelblue', edgecolor='white', s=70, label='Original')
    plt.plot(X, model.predict(X), color='black', lw=2, label='Decision Tree')
    return None

lin_regplot(X[sort_idx].reshape(-1, 1), y[sort_idx], tree)
plt.xlabel("Lower Status of the Population [LSTAT]")
plt.ylabel("Price in $1000s [MEDV]")
plt.legend(loc='upper right')  # without this call the 'Original'/'Decision Tree' labels never show
plt.savefig('./fig1.png')
plt.show()
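Since the metrics are already imported at the top of the script, a short evaluation step can quantify the fit. The block below is an illustrative addition rather than part of the book's example, and it reports training-set metrics only, because the script above makes no train/test split.
#Section 3 (illustrative): Evaluate the fit on the training data
y_pred = tree.predict(X.reshape(-1, 1))
print('MSE: %.3f' % mean_squared_error(y, y_pred))
print('R^2: %.3f' % r2_score(y, y_pred))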
Result
The script saves the fitted plot to ./fig1.png: a step-shaped decision-tree fit of MEDV (price in $1000s) against LSTAT.
References
Sebastian Raschka, Vahid Mirjalili. Python Machine Learning, 2nd Edition (Chinese edition). Nanjing: Southeast University Press, 2018.