Algorithm Evaluation Metrics

1. Classification Metrics

Metrics for evaluating classification problems:

  • Classification accuracy
  • Log loss (Logloss)
  • Area under the ROC curve (AUC)
  • Confusion matrix
  • Classification report (Classification Report)

1. Classification Accuracy

Accuracy is the number of samples the algorithm classifies correctly divided by the total number of samples.

from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataset = read_csv(filename, names=names)
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:8]
Y = array[:, 8]
n_splits = 10
seed = 7
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LogisticRegression(max_iter=1000)  # raise max_iter so the solver converges
result = cross_val_score(model, X, Y, cv=kfold)
print("Accuracy: %.3f (%.3f)" % (result.mean(), result.std()))
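
At its core, accuracy is just correct predictions divided by total predictions. A quick check on hypothetical labels (not the Pima data) against scikit-learn's `accuracy_score`:

```python
from sklearn.metrics import accuracy_score

# Hypothetical labels, for illustration only
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy = number of correct predictions / total predictions
correct = sum(t == p for t, p in zip(y_true, y_pred))
manual_acc = correct / len(y_true)

print(manual_acc)                      # 0.75
print(accuracy_score(y_true, y_pred))  # 0.75
```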

2. Log Loss

The smaller the log loss, the better the model. Ideally the loss function is also convex, which makes the optimization easier to solve.

from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataset = read_csv(filename, names=names)
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:8]
Y = array[:, 8]
n_splits = 10
seed = 7
scoring = 'neg_log_loss'
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LogisticRegression(max_iter=1000)
result = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Logloss: %.3f (%.3f)" % (result.mean(), result.std()))
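
The quantity being averaged is -(y·log p + (1-y)·log(1-p)). A hand computation on hypothetical probabilities matches scikit-learn's `log_loss`:

```python
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical labels and predicted probabilities, for illustration only
y_true = np.array([1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.4])

# Log loss = -mean( y*log(p) + (1-y)*log(1-p) )
manual = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(manual)
print(log_loss(y_true, p))  # same value
```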

3. AUC

AUC is the area under the ROC curve. A value of 1 indicates a perfect classifier, while 0.5 is no better than random guessing.

from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataset = read_csv(filename, names=names)
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:8]
Y = array[:, 8]
n_splits = 10
seed = 7
scoring = 'roc_auc'
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LogisticRegression(max_iter=1000)
result = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("AUC: %.3f (%.3f)" % (result.mean(), result.std()))
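
One way to read AUC: it is the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. A pair-counting sketch on hypothetical scores (not from the model above):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical labels and scores, for illustration only
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

# Count positive/negative pairs where the positive is ranked higher (ties count 0.5)
pos = [s for s, t in zip(scores, y_true) if t == 1]
neg = [s for s, t in zip(scores, y_true) if t == 0]
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
manual_auc = wins / (len(pos) * len(neg))

print(manual_auc)                     # 0.75
print(roc_auc_score(y_true, scores))  # 0.75
```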

4. Classification Report

The classification report is built from the confusion matrix. For a binary problem (rows: predicted class, columns: actual class):

Predicted \ Actual    1                       0
1                     True Positive (TP)      False Positive (FP)
0                     False Negative (FN)     True Negative (TN)

Precision: P = TP / (TP + FP)

Recall: R = TP / (TP + FN)

F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R), i.e. 2/F1 = 1/P + 1/R.
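
The four cells and the derived scores can be pulled out directly with scikit-learn's `confusion_matrix`; the labels below are hypothetical:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical labels, for illustration only
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

# sklearn orders rows/columns by sorted label [0, 1],
# so ravel() returns tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)  # 3 1 1 3

precision = tp / (tp + fp)                          # 0.75
recall = tp / (tp + fn)                             # 0.75
f1 = 2 * precision * recall / (precision + recall)  # 0.75
print(precision, recall, f1)
```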

from pandas import read_csv
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataset = read_csv(filename, names=names)
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:8]
Y = array[:, 8]
test_size = 0.33
seed = 4
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
report = classification_report(Y_test, predicted)
print(report)

2. Regression Metrics

Metrics for evaluating regression algorithms:

  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • Coefficient of determination (R²)

1. Mean Absolute Error

MAE is the average of the absolute differences between the predicted values and the actual values. (The scikit-learn scoring name is neg_mean_absolute_error.)

from pandas import read_csv
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

# Load the data
filename = 'housing.csv'
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataset = read_csv(filename, names=names, sep=r'\s+')
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:13]
Y = array[:, 13]
n_splits = 10
seed = 7
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LinearRegression()
scoring = 'neg_mean_absolute_error'
result = cross_val_score(model, X, Y, scoring=scoring, cv=kfold)
print('MAE: %.3f (%.3f)' % (result.mean(), result.std()))
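
Note that cross_val_score reports this metric as a negative number: scikit-learn's convention is that higher scores are always better, so error metrics are negated (hence the neg_ prefix). A quick check on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data, for illustration only
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.1, 100)

model = LinearRegression()
scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_absolute_error')
# The values are negative; the actual MAE is the negation
print(scores.mean())
print(-scores.mean())
```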

2. Mean Squared Error

The smaller the MSE, the better the model. (The scikit-learn scoring name is neg_mean_squared_error.)

from pandas import read_csv
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

# Load the data
filename = 'housing.csv'
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataset = read_csv(filename, names=names, sep=r'\s+')
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:13]
Y = array[:, 13]
n_splits = 10
seed = 7
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LinearRegression()
scoring = 'neg_mean_squared_error'
result = cross_val_score(model, X, Y, scoring=scoring, cv=kfold)
print('MSE: %.3f (%.3f)' % (result.mean(), result.std()))

3. Coefficient of Determination (R²)

R² measures the proportion of the variance in the dependent variable that is explained by the independent variables through the regression relationship.

from pandas import read_csv
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

# Load the data
filename = 'housing.csv'
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataset = read_csv(filename, names=names, sep=r'\s+')
# Split the data into input features and the target
array = dataset.values
X = array[:, 0:13]
Y = array[:, 13]
n_splits = 10
seed = 7
# shuffle=True is required for random_state to take effect in KFold
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
model = LinearRegression()
scoring = 'r2'
result = cross_val_score(model, X, Y, scoring=scoring, cv=kfold)
print('R2: %.3f (%.3f)' % (result.mean(), result.std()))
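
The definition R² = 1 − SS_res / SS_tot can be verified by hand on hypothetical values:

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical true and predicted values, for illustration only
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.3, 8.8])

ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
manual_r2 = 1 - ss_res / ss_tot

print(manual_r2)
print(r2_score(y_true, y_pred))  # same value
```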
