Machine Learning Fundamentals: Cross-Validation, Bias, Variance, Regularization, and Precision vs. Recall
1. Cross-Validation
Cross-validation is a widely used model-evaluation technique. The following example implements k-fold cross-validation:
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
# Load the data
data = load_iris()
X = data.data
y = data.target
# k-fold cross-validation (shuffle, because the iris samples are ordered by class)
kf = KFold(n_splits=5, shuffle=True, random_state=42)
model = LogisticRegression(max_iter=200)
accuracies = []
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
# Report the mean accuracy over the folds
print(f'Average Accuracy: {sum(accuracies) / len(accuracies):.3f}')
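The loop above can also be written in a single call with scikit-learn's `cross_val_score`, which handles splitting, fitting, and scoring, and defaults to stratified folds for classifiers:

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

data = load_iris()
# One call replaces the manual KFold loop; for classifiers it uses
# StratifiedKFold by default, so each fold preserves the class balance
scores = cross_val_score(LogisticRegression(max_iter=200),
                         data.data, data.target, cv=5)
print(f'Average Accuracy: {scores.mean():.3f}')
```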
2. Bias and Variance
The Bias-Variance Tradeoff
The relationship between bias and variance can be expressed precisely. Let $\hat{f}(x)$ denote the model's prediction and $f(x)$ the true value; then:
- Bias: $\text{Bias}^2 = (\mathbb{E}[\hat{f}(x)] - f(x))^2$
- Variance: $\text{Variance} = \mathbb{E}[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2]$
- Total error: $\text{Error} = \text{Bias}^2 + \text{Variance} + \text{Irreducible Error}$
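This decomposition can be checked numerically. The sketch below (with an assumed true function $\sin(x)$, an assumed noise level, and a deliberately simple linear model) refits the model on many fresh training sets, estimates bias and variance at a single point, and compares $\text{Bias}^2 + \text{Variance} + \sigma^2$ against the directly measured expected squared error:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(x)          # assumed true function

x0 = 1.0                      # point at which we measure the error
sigma = 0.3                   # noise std; sigma**2 is the irreducible error
n_repeats = 2000

preds = []
for _ in range(n_repeats):
    # Draw a fresh training set and refit a deliberately simple (high-bias) model
    x = rng.uniform(0.0, 2.0 * np.pi, 30)
    y = f(x) + rng.normal(0.0, sigma, 30)
    coef = np.polyfit(x, y, 1)            # straight-line fit
    preds.append(np.polyval(coef, x0))

preds = np.array(preds)
bias2 = (preds.mean() - f(x0)) ** 2       # squared gap of the average prediction
variance = preds.var()                    # spread of predictions across datasets
# Measure the expected squared error directly against fresh noisy targets
y0 = f(x0) + rng.normal(0.0, sigma, n_repeats)
mse = np.mean((y0 - preds) ** 2)
print(f'Bias^2 + Variance + sigma^2 = {bias2 + variance + sigma**2:.4f}')
print(f'Measured expected squared error = {mse:.4f}')
```

The two printed quantities should agree up to Monte Carlo noise.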
The following code generates an illustrative bias-variance plot:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Create the data
X, y = make_moons(n_samples=500, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train models of increasing size and track rough bias/variance proxies
n_estimators_range = range(1, 51)
bias2 = []
variance = []
for n_estimators in n_estimators_range:
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)
    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)
    # Note: these are crude single-split proxies for illustration only; the true
    # decomposition requires averaging over many resampled training sets
    bias2.append(np.mean((y_test - np.mean(y_train_pred))**2))
    variance.append(np.mean((y_test_pred - np.mean(y_test_pred))**2))
plt.plot(n_estimators_range, bias2, label='Bias^2')
plt.plot(n_estimators_range, variance, label='Variance')
plt.xlabel('Number of Estimators')
plt.ylabel('Error')
plt.legend()
plt.title('Bias-Variance Tradeoff')
plt.show()
The example above uses the make_moons dataset.
3. Regularization
L1 Regularization
L1 regularization penalizes the absolute values of the model parameters, which drives some coefficients to exactly zero and therefore performs implicit feature selection.
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression
# Create the data
X, y = make_regression(n_samples=100, n_features=20, noise=0.1, random_state=42)
# L1 regularization
model = Lasso(alpha=0.1)
model.fit(X, y)
# Print the model coefficients
print(f'L1 Regularization Coefficients: {model.coef_}')
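The feature-selection effect is easiest to see when some features are uninformative. In the sketch below, `n_informative=5` is an assumed setting chosen so that 15 of the 20 features carry no signal; the Lasso should zero out most of them:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

# 20 features, only 5 of which actually influence y (an illustrative choice)
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=0.1, random_state=42)
model = Lasso(alpha=1.0)
model.fit(X, y)
# Count coefficients the L1 penalty set to exactly zero
n_zero = int(np.sum(model.coef_ == 0))
print(f'{n_zero} of 20 coefficients were driven to exactly zero')
```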
L2 Regularization
L2 regularization penalizes the sum of the squared parameters, shrinking all coefficients toward zero (without making them exactly zero) and thereby reducing model complexity.
from sklearn.linear_model import Ridge
# L2 regularization (reusing X and y from the example above)
model = Ridge(alpha=1.0)
model.fit(X, y)
# Print the model coefficients
print(f'L2 Regularization Coefficients: {model.coef_}')
Elastic Net Regularization
Elastic net regularization combines the strengths of the L1 and L2 penalties, mixing them via the l1_ratio parameter.
from sklearn.linear_model import ElasticNet
# Elastic net regularization (reusing X and y from the example above)
model = ElasticNet(alpha=1.0, l1_ratio=0.5)
model.fit(X, y)
# Print the model coefficients
print(f'Elastic Net Coefficients: {model.coef_}')
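In scikit-learn the elastic net penalty is `alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||_2^2`, so setting `l1_ratio=1.0` reduces it to the Lasso. A quick sketch confirming the equivalence on the same kind of data:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=20, noise=0.1, random_state=42)
# With l1_ratio=1.0 the L2 term vanishes and ElasticNet solves the Lasso objective
lasso_coef = Lasso(alpha=0.5).fit(X, y).coef_
enet_coef = ElasticNet(alpha=0.5, l1_ratio=1.0).fit(X, y).coef_
print(np.allclose(lasso_coef, enet_coef, atol=1e-6))
```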
4. Precision and Recall
Computing precision and recall:
from sklearn.metrics import precision_score, recall_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
# Create the data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train the model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
# Predict
y_pred = model.predict(X_test)
# Compute precision and recall
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
print(f'Precision: {precision:.3f}')
print(f'Recall: {recall:.3f}')
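Both metrics come straight from the confusion matrix: precision = TP / (TP + FP) and recall = TP / (TP + FN). The sketch below recomputes them by hand on the same kind of data and checks that they match scikit-learn's values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
y_pred = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict(X_test)

# Unpack the 2x2 confusion matrix: rows are true labels, columns are predictions
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
precision = tp / (tp + fp)  # of everything predicted positive, how much was right
recall = tp / (tp + fn)     # of all actual positives, how many were found
print(f'TP={tp} FP={fp} FN={fn} TN={tn}')
print(f'Precision: {precision:.3f}  Recall: {recall:.3f}')
```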
F1 Score
The F1 score is the harmonic mean of precision and recall, F1 = 2 · precision · recall / (precision + recall), and is useful when the two need to be balanced.
from sklearn.metrics import f1_score
# Compute the F1 score (reusing y_test and y_pred from above)
f1 = f1_score(y_test, y_pred)
print(f'F1 Score: {f1:.3f}')
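A tiny hand-checkable example (with made-up labels) shows the harmonic-mean formula in action: here TP = 4, FP = 1, FN = 1, so precision = recall = 0.8 and F1 = 2 · 0.8 · 0.8 / 1.6 = 0.8:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Made-up labels: 5 true positives total, 4 found, 1 missed, 1 false alarm
y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
p = precision_score(y_true, y_pred)   # 4 / (4 + 1) = 0.8
r = recall_score(y_true, y_pred)      # 4 / (4 + 1) = 0.8
print(f1_score(y_true, y_pred), 2 * p * r / (p + r))  # both 0.8
```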
Conclusion
Understanding cross-validation, bias, variance, regularization, and precision and recall is essential for building and tuning machine learning models. These concepts let us assess a model's generalization ability, control its complexity, and trade off different performance metrics to achieve the best predictive results.