Machine Learning
Supervised learning:
Supervised learning is a class of machine-learning tasks. It infers a prediction function from labeled training data, where each training instance consists of an input and its desired output. In one sentence: given data, predict labels.
Unsupervised learning is a class of machine-learning tasks. It draws inferences from unlabeled training data. The most typical example is cluster analysis, which can be used during exploratory data analysis to discover hidden patterns or to group data. In one sentence: given data, find hidden structure.
Reinforcement learning is another area of machine learning. It studies how a software agent should take actions in an environment so as to maximize some cumulative reward. In one sentence: given data, learn to choose a sequence of actions that maximizes long-term return. Mathematically: how an autonomous agent that can sense its environment learns to choose the optimal actions for reaching a goal (learning an optimal mapping from environment states to actions).
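The first two paradigms can be contrasted in a few lines of scikit-learn (a minimal sketch; the toy data and model choices here are illustrative, not from the notes):

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 100 points in 2-D drawn from 3 Gaussian blobs
X, y = make_blobs(n_samples=100, centers=3, random_state=0)

# Supervised: the labels y are given; learn to predict them
clf = LogisticRegression(max_iter=1000).fit(X, y)
print('supervised accuracy:', clf.score(X, y))

# Unsupervised: only X is given; discover the grouping
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print('cluster assignments:', km.labels_[:10])
```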
Introduction to sklearn:
sklearn.datasets — dataset loading and generation
datasets.make_* generators:
make_blobs: multi-class, single-label data (commonly used for clustering)
make_classification: multi-class, single-label data (commonly used for classification)
make_gaussian_quantiles
make_circles
make_moons
make_multilabel_classification
# See the official sklearn user guide for details
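A quick sketch of how a few of these generators are called (parameter values are illustrative):

```python
from sklearn.datasets import make_blobs, make_moons, make_circles

# Three Gaussian blobs: X is (200, 2), y holds class labels 0..2
X, y = make_blobs(n_samples=200, centers=3, n_features=2, random_state=0)
print(X.shape, set(y))

# Two interleaving half-moons, with a little noise added to the points
Xm, ym = make_moons(n_samples=200, noise=0.05, random_state=0)

# Two concentric circles; `factor` sets the inner/outer radius ratio
Xc, yc = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)
```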
sklearn.model_selection — model selection (splitting and preprocessing)
Hyperparameter optimization for estimators
estimator.get_params() lists an estimator's tunable parameters
Two general approaches to parameter search: grid search (GridSearchCV) and randomized search (RandomizedSearchCV)
Model validation methods
Methods for evaluating predictive performance
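For instance, `get_params()` on an SVC returns a dict whose keys are exactly the parameter names that the search tools accept (a minimal sketch):

```python
from sklearn.svm import SVC

model = SVC(kernel='poly')
params = model.get_params()
# The dict keys are the names GridSearchCV / RandomizedSearchCV can tune
print(sorted(params)[:5])
print('kernel =', params['kernel'], '| C =', params['C'])

# set_params is the symmetric setter, used internally by the search tools
model.set_params(C=2.0, degree=3)
```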
#!usr/bin/env python
# -*- coding:utf-8 _*-
"""
@author: lyd
@file: test3.py
@time: 2020/04/01
@desc:
"""
from sklearn.datasets import load_wine
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import accuracy_score
import warnings
warnings.filterwarnings('ignore')
# 1. Load and preprocess the data
data, label = load_wine(return_X_y=True)
data = data[:, :2]  # keep only the first two features so the decision regions can be plotted in 2-D
# Hold out a test set
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.3)
# 2. Split the training samples via cross-validation
kf = KFold(n_splits=3)
# degree starts at 1: a degree-0 polynomial kernel is a constant and learns nothing
distributions = dict(C=list(np.arange(0.5, 2, 0.5)), degree=list(np.arange(1, 5)))
for train, test in kf.split(data_train):
    X_train, X_test, y_train, y_test = data_train[train], data_train[test], label_train[train], label_train[test]
    # 3. Use an SVM estimator; pick the best hyperparameters with randomized search
    model = SVC(kernel='poly')
    model = RandomizedSearchCV(model, distributions, random_state=0)
    model.fit(X_train, y_train)
    y_test_hat = model.predict(X_test)
    print('********')
    print(model.best_score_)
    print(model.best_params_)
# 4. Model validation
# print('********')
# print(cross_val_score(model, data_train, label_train))
# 5. Evaluate predictive performance (see sklearn.metrics)
label_test_hat = model.predict(data_test)
print('******')
print(accuracy_score(label_test, label_test_hat))
# Visualization
x1_min = data[:, 0].min()
x1_max = data[:, 0].max()
x2_min = data[:, 1].min()
x2_max = data[:, 1].max()
x1, x2 = np.mgrid[x1_min:x1_max:500j, x2_min:x2_max:500j]  # grid of sample points
grid_test = np.stack((x1.flat, x2.flat), axis=1)  # grid points as test inputs
grid_hat = model.predict(grid_test)
grid_hat = grid_hat.reshape(x1.shape)
mpl.rcParams['font.sans-serif'] = [u'SimHei']
mpl.rcParams['axes.unicode_minus'] = False
cm_light = mpl.colors.ListedColormap(['#A0FFA0', '#FFA0A0', '#A0A0FF'])
cm_dark = mpl.colors.ListedColormap(['g', 'r', 'b'])
plt.figure(facecolor='w')
plt.pcolormesh(x1, x2, grid_hat, cmap=cm_light)
plt.scatter(data_train[:, 0], data_train[:, 1], c=label_train, edgecolors='k', s=60, cmap=cm_dark, marker='^')
plt.scatter(data_test[:, 0], data_test[:, 1], c=label_test_hat, s=60, cmap=cm_dark, marker='o')
plt.xlim(x1_min, x1_max)
plt.ylim(x2_min, x2_max)
plt.grid(True, ls=':')
plt.tight_layout(pad=1.5)
plt.show()
print('Done...')
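The commented-out model-validation step above can be run on its own with `cross_val_score` (a sketch; the estimator settings are illustrative):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X = X[:, :2]  # same two features as in the script above

# 5-fold cross-validation: returns one accuracy score per fold
scores = cross_val_score(SVC(kernel='poly', degree=3, C=1.0), X, y, cv=5)
print(scores.round(3))
print('mean accuracy:', scores.mean().round(3))
```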
On sklearn dataset transformations:
Data preprocessing
Feature extraction
Feature transformation
Dimensionality reduction
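As a sketch of the last two items, a scaler followed by PCA (the component count is illustrative):

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_wine(return_X_y=True)

# Standardize to zero mean / unit variance, then project 13 features down to 2
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_std)
print(X.shape, '->', X_2d.shape)
print('explained variance ratio:', pca.explained_variance_ratio_.round(3))
```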
On the sklearn.pipeline tool (Pipeline)
On the sklearn.pipeline tool (FeatureUnion)
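Where Pipeline chains transformers sequentially (as in the script below), FeatureUnion concatenates their outputs side by side. A minimal sketch (the transformer choices are mine):

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import FeatureUnion

X, y = load_wine(return_X_y=True)

# FeatureUnion stacks transformer outputs column-wise:
# 2 PCA components + the 3 best original features = 5 columns
union = FeatureUnion([
    ('pca', PCA(n_components=2)),
    ('kbest', SelectKBest(f_classif, k=3)),
])
X_combined = union.fit_transform(X, y)
print(X.shape, '->', X_combined.shape)
```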
Feature engineering
Bag-of-words for text:
Data preprocessing
#!usr/bin/env python
# -*- coding:utf-8 _*-
"""
@author: lyd
@file: test4.py
@time: 2020/04/03
@desc:
"""
from sklearn.datasets import load_boston
import warnings
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import mean_squared_error
import matplotlib as mpl
import matplotlib.pyplot as plt
if __name__ == '__main__':
    warnings.filterwarnings(action='ignore')
    np.set_printoptions(suppress=True)
    mpl.rcParams['axes.unicode_minus'] = False
    # note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2
    data, target = load_boston(return_X_y=True)
    print('samples: %d, features: %d' % data.shape)
    x_train, x_test, y_train, y_test = train_test_split(data, target, test_size=0.33, random_state=42)
    for i in [1, 2, 3, 4]:
        model = Pipeline([
            ('ss', StandardScaler()),
            ('poly', PolynomialFeatures(degree=i)),
            ('linear', ElasticNetCV(l1_ratio=[0.1, 0.3, 0.5, 0.7, 0.9], alphas=np.logspace(-3, 2, 5)))
        ])
        print('=' * 20)
        print('degree-%d polynomial features:' % i)
        print('fitting...')
        model.fit(x_train, y_train)
        linear = model.get_params()['linear']
        print('best alpha:', linear.alpha_)
        print('best l1_ratio:', linear.l1_ratio_)
        y_test_hat = model.predict(x_test)
        r2 = model.score(x_test, y_test)
        mse = mean_squared_error(y_test, y_test_hat)
        print('R2:', r2)
        print('MSE:', mse)
        t = np.arange(len(y_test_hat))
        plt.figure(facecolor='w')
        plt.plot(t, y_test, 'r-', lw=2, label='ground truth')
        plt.plot(t, y_test_hat, 'g-', lw=2, label='prediction')
        plt.legend(loc='best')
        plt.title('Boston housing price prediction', fontsize=18)
        plt.xlabel('sample index', fontsize=15)
        plt.ylabel('price', fontsize=15)
        plt.grid()
        plt.show()