In education, understanding the factors that influence student performance and being able to predict scores are valuable for improving teaching quality and designing personalized learning plans. This article walks through a hands-on machine learning project: we analyze a student performance dataset, visualize the key relationships, and train several models to predict scores, covering the full workflow of data processing, visual analysis, and model training and evaluation.
1. Project Background
This project explores the relationship between students' scores and factors such as gender, race/ethnicity, parental level of education, lunch type, and test preparation course, and builds models to predict students' math scores. The dataset contains these features together with each student's math, reading, and writing scores, providing a rich basis for the analysis and prediction that follow.
2. Data Loading and Initial Processing
The project is developed in Python and uses the pandas library to read the data stored in the exams.csv file.
import pandas as pd
df_pre = pd.read_csv(r'machine_learning\机器学习实战04-教育领域:学生成 绩的可视化分析与成绩预测-详细分析\exams.csv')
After loading the data, compute the variance and standard deviation of the math, reading, and writing scores to get a first sense of how spread out they are:
df_pre[['math score','reading score', 'writing score']].agg(['var','std'])
These statistics describe how dispersed the scores are and give an initial picture of the data distribution.
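Beyond variance and standard deviation, it can help to glance at the overall shape of the data before plotting. The following minimal sketch is an addition to the original workflow and uses only standard pandas calls on the df_pre DataFrame loaded above:
# Quick first look at the dataset: size, a few rows, and summary statistics
print(df_pre.shape)
print(df_pre.head())
print(df_pre[['math score', 'reading score', 'writing score']].describe())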
3. Visualizing the Data
(1) Correlation Matrix Heatmap
Use the seaborn and matplotlib libraries to draw a heatmap of the correlation matrix, showing how the numeric variables relate to one another:
import seaborn as sns
import matplotlib.pyplot as plt
correlation_matrix = df_pre.corr(numeric_only=True)  # restrict the correlation to the numeric score columns
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm')
plt.show()
In the heatmap, warmer cells mark stronger positive correlations and cooler cells stronger negative ones, and the annotated numbers give the exact coefficients. Because only the numeric score columns enter the correlation matrix, the heatmap mainly shows how the subject scores relate to each other. For example, a high positive correlation between the math and reading scores suggests that performance in the two subjects tends to move together.
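If you want the exact coefficients rather than reading them off the colors, a small sketch like the following (an addition, restricted to the three numeric score columns) prints how the other subjects correlate with the math score:
# Numeric correlations with the math score, sorted from strongest to weakest
score_corr = df_pre[['math score', 'reading score', 'writing score']].corr()
print(score_corr['math score'].sort_values(ascending=False))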
(2) Parental Level of Education and Scores
Group the data by parental level of education, compute each group's mean scores, and plot the relationship as line charts:
education_score = df_pre.groupby('parental level of education')[['math score','reading score', 'writing score']].mean().reset_index()
education_score['average score'] = (education_score['math score']+education_score['reading score']+education_score['writing score'])/3
education_score = education_score.sort_values('average score', ascending=False)
plt.figure(figsize=(13,4))
plt.plot(education_score['parental level of education'], education_score['math score'], marker='o', label='Math Score')
plt.plot(education_score['parental level of education'], education_score['reading score'], marker='o', label='Reading Score')
plt.plot(education_score['parental level of education'], education_score['writing score'], marker='o', label='Writing Score')
plt.plot(education_score['parental level of education'], education_score['average score'], marker='s', label='Average Score')
plt.title('学生父母的教育水平和成绩之间的关系')
plt.xlabel('教育水平')
plt.ylabel('成绩')
plt.legend()
plt.show()
The chart shows that students whose parents have higher levels of education tend to have higher average scores, which suggests that family educational background has some influence on performance.
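Group averages are easier to trust when each group is reasonably large, so it may be worth checking the group sizes; this small check is an addition to the original analysis:
# Number of students in each parental-education group
print(df_pre['parental level of education'].value_counts())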
(3) Race/Ethnicity and Scores
In the same way, analyze the relationship between race/ethnicity and scores:
race_score = df_pre.groupby('race/ethnicity')[['math score','reading score', 'writing score']].mean().reset_index()
race_score['average score'] = (race_score['math score']+race_score['reading score']+race_score['writing score'])/3
race_score = race_score.sort_values('average score', ascending=False)
plt.figure(figsize=(13,4))
plt.plot(race_score['race/ethnicity'], race_score['math score'], marker='o', label='Math Score')
plt.plot(race_score['race/ethnicity'], race_score['reading score'], marker='o', label='Reading Score')
plt.plot(race_score['race/ethnicity'], race_score['writing score'], marker='o', label='Writing Score')
plt.plot(race_score['race/ethnicity'], race_score['average score'], marker='s', label='Average Score')
plt.title('种族和成绩之间的关系')
plt.xlabel('种族')
plt.ylabel('成绩')
plt.legend()
plt.show()
Average scores differ across the race/ethnicity groups, which may be related to differences in cultural background, access to educational resources, and other factors.
(4) Test Preparation Course and Scores
Next, analyze how the test preparation course relates to scores:
prep_score = df_pre.groupby('test preparation course')[['math score','reading score', 'writing score']].mean().reset_index()
prep_score['average score'] = (prep_score['math score']+prep_score['reading score']+prep_score['writing score'])/3
prep_score = prep_score.sort_values('average score', ascending=False)
plt.figure(figsize=(13,4))
plt.plot(prep_score['test preparation course'], prep_score['math score'], marker='o', label='Math Score')
plt.plot(prep_score['test preparation course'], prep_score['reading score'], marker='o', label='Reading Score')
plt.plot(prep_score['test preparation course'], prep_score['writing score'], marker='o', label='Writing Score')
plt.plot(prep_score['test preparation course'], prep_score['average score'], marker='s', label='Average Score')
plt.title('测试准备课程和成绩之间的关系')
plt.xlabel('完成与否')
plt.ylabel('成绩')
plt.legend()
plt.show()
The results show that students who completed the test preparation course have higher average scores than those who did not, across all three subjects, suggesting the course has a positive effect on performance.
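To back up this visual impression with a simple statistical check, one could compare the two groups' math scores with a two-sample test. This is only a sketch and assumes scipy is available; the original project does not use it:
from scipy import stats  # assumed extra dependency, not part of the original project

# Compare math scores of students who completed the course vs. those who did not
completed = df_pre.loc[df_pre['test preparation course'] == 'completed', 'math score']
not_completed = df_pre.loc[df_pre['test preparation course'] == 'none', 'math score']
t_stat, p_value = stats.ttest_ind(completed, not_completed, equal_var=False)  # Welch's t-test
print(f'Welch t-test on math scores: t = {t_stat:.3f}, p = {p_value:.4f}')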
(5) Other Analyses
1. Parental education level and completion of the test preparation course: pie charts show how the students who completed the course, and those who did not, are distributed across parental education levels.
par_test_count = df_pre[['parental level of education', 'test preparation course']].value_counts().reset_index(name='Count').sort_values('Count', ascending=False)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,4))
ax1.pie(par_test_count[par_test_count['test preparation course']=='completed']['Count'],
labels=par_test_count[par_test_count['test preparation course']=='completed']['parental level of education'],
autopct='%1.2f%%')
ax1.set_title('父母的教育水平的饼图 学生完成测试准备课程')
ax2.pie(par_test_count[par_test_count['test preparation course']=='none']['Count'],
labels=par_test_count[par_test_count['test preparation course']=='none']['parental level of education'],
autopct='%1.2f%%')
ax2.set_title('父母的教育水平的饼图 学生没有完成测试准备课程')
plt.show()
The pie charts suggest that students whose parents have higher education levels account for a relatively larger share of those who completed the test preparation course.
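Because the pies show raw counts, they mix how common each education level is with how likely those students are to complete the course. An alternative sketch (an addition, not part of the original code) looks at completion rates directly with a normalized cross-tabulation:
# Share of students completing the prep course within each parental-education level
completion_rate = pd.crosstab(df_pre['parental level of education'],
                              df_pre['test preparation course'],
                              normalize='index')
print(completion_rate)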
2. Gender and math scores: use a violin plot and a scatter plot to compare the distribution of math scores between male and female students:
df_pre.groupby('gender').mean(numeric_only=True)  # mean of the numeric score columns per gender
sns.violinplot(x='gender', y='math score', data=df_pre)
plt.xlabel('Gender')
plt.ylabel('Math Score')
plt.title('比较男性和女性之间的数学分数')
plt.show()
plt.figure(figsize=(10,5))
sns.scatterplot(x=range(0, len(df_pre)), y="math score", hue="gender", data=df_pre)
plt.title('基于性别数学分数的散点图')
plt.xlabel('学生数')
plt.ylabel('成绩')
plt.show()
The violin plot and scatter plot show how math scores are distributed for each gender, and the differences between the two distributions suggest that gender may have some effect on math performance.
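To attach numbers to these plots, a short sketch (again an addition) summarizes the math scores by gender:
# Mean, median, and standard deviation of math scores per gender
print(df_pre.groupby('gender')['math score'].agg(['mean', 'median', 'std']))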
3. Score distributions: plot histograms of the math, reading, and writing scores to see how each is distributed:
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20, 4))
ax1.set_title('数学成绩的分布')
ax1.hist(df_pre['math score'], edgecolor='black')
ax2.set_title('阅读成绩的分布')
ax2.hist(df_pre['reading score'], edgecolor='black')
ax3.set_title('写作成绩的分布')
ax3.hist(df_pre['writing score'], edgecolor='black')
plt.show()
The histograms show how frequently scores fall into each bin, giving a sense of the central tendency and spread of each subject's scores.
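If you want a numeric complement to the histograms, pandas can report skewness and kurtosis for each score column; this is an optional sketch, not part of the original notebook:
# Skewness and kurtosis of the three score distributions
print(df_pre[['math score', 'reading score', 'writing score']].agg(['skew', 'kurt']))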
4. Building and Evaluating Machine Learning Models
(1) Data Preprocessing
Because the dataset contains categorical variables, we one-hot encode them into numeric columns so the models can work with them:
df = pd.get_dummies(df_pre)
X = df.drop('math score', axis=1)
y = df['math score']
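Note that pd.get_dummies expands every categorical column into several indicator columns. A quick sketch (an addition) to confirm what the model will actually see:
# Inspect the encoded feature matrix: number of rows/columns and a few column names
print(X.shape)
print(list(X.columns)[:10])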
Then split the data into a training set (60%) and a test set (40%):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
(2) Model Training and Evaluation
We train and evaluate several models: linear regression, decision tree regression, random forest regression, and support vector regression with linear, polynomial, and RBF kernels:
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
models = [LinearRegression(), DecisionTreeRegressor(), RandomForestRegressor(), SVR(kernel='linear'), SVR(kernel='poly'), SVR(kernel='rbf')]
cv_scores = []
for model in models:
scores = cross_val_score(model, X_train, y_train, cv=5, scoring='r2', n_jobs=-1)
cv_scores.append(scores.mean())
Five-fold cross-validation yields an R² score for each model as a measure of goodness of fit; the closer R² is to 1, the better the model fits the data.
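The mean alone can hide how much the folds disagree, so a variant of the loop (a sketch under the same setup as above) also records the spread of each model's cross-validation scores:
# Record both the mean and the standard deviation of the 5-fold R² scores
cv_results = []
for model in models:
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring='r2', n_jobs=-1)
    cv_results.append((type(model).__name__, scores.mean(), scores.std()))
for name, mean_r2, std_r2 in cv_results:  # the three SVR entries share the class name 'SVR'
    print(f'{name}: R2 = {mean_r2:.3f} +/- {std_r2:.3f}')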
Plot a bar chart to compare the models' R² scores:
fig, ax = plt.subplots(figsize=(15, 6))
rects = ax.bar(['Linear', 'Decision Tree', 'Random Forest', 'SVR - Linear', 'SVR - Poly', 'SVR - Rbf'], cv_scores, color='blue')
ax.set_ylim(0, 1)
ax.set_title('回归模型的比较')
ax.set_xlabel('Model')
ax.set_ylabel('R-squared')
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., height, f'{height:.5f}', ha='center', va='bottom')
plt.show()
The bar chart makes the performance differences between the models easy to see, and we pick a well-performing model for the final prediction.
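Rather than reading the winner off the chart, the choice can also be made programmatically; this is a minimal sketch using the cv_scores list built above:
# Pick the model with the highest mean cross-validated R²
best_index = max(range(len(cv_scores)), key=lambda i: cv_scores[i])
best_model = models[best_index]
print('Best model by CV R2:', type(best_model).__name__, round(cv_scores[best_index], 4))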
Taking linear regression as an example, fit the model, predict on the test set, and compute the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and R² score to evaluate its performance:
import numpy as np
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print('Evaluation Metrics')
print("Mean squared error:", mse)
print("Root Mean squared error:", rmse)
print("Mean absolute error:", mae)
print("R-squared score:", r2)
Together these metrics quantify, from different angles, how far the predictions deviate from the true values, giving a well-rounded view of the model's performance.
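As a final visual sanity check (an addition to the original evaluation), plotting predictions against the true test-set values shows where the model over- or under-estimates:
# Predicted vs. actual math scores on the test set
plt.figure(figsize=(6, 6))
plt.scatter(y_test, y_pred, alpha=0.5)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], color='red')  # perfect-prediction line
plt.xlabel('Actual math score')
plt.ylabel('Predicted math score')
plt.title('Predicted vs. actual math scores')
plt.show()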
5. Summary
By analyzing and visualizing the student performance dataset, this project finds that factors such as parental education level, race/ethnicity, and the test preparation course are associated with students' scores. On the modeling side, several regression models were trained, evaluated, and compared. Finally, a suitable model was used to predict students' math scores and was assessed quantitatively with the evaluation metrics.
The project demonstrates a complete machine learning workflow, from data processing and visual analysis to model building and evaluation, and offers a practical reference for data analysis and prediction in education. In real applications, the model could be tuned further and richer feature engineering explored to improve prediction accuracy and better support educational decision-making.
6. Complete Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
plt.rcParams['font.sans-serif'] = ['SimHei']
df_pre = pd.read_csv(r'machine_learning\机器学习实战04-教育领域:学生成 绩的可视化分析与成绩预测-详细分析\exams.csv')
df_pre[['math score', 'reading score', 'writing score']].agg(['var', 'std'])
correlation_matrix = df_pre.corr(numeric_only=True)
# Draw a heatmap of the correlation matrix
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm')
plt.show()
education_score = df_pre.groupby('parental level of education')[['math score', 'reading score', 'writing score']].mean().reset_index()
education_score['average score'] = (education_score['math score']+education_score['reading score']+education_score['writing score'])/3
education_score = education_score.sort_values('average score', ascending=False)
# Create plot to find the relationship between parental level of education and scores
plt.figure(figsize=(13,4))
plt.plot(education_score['parental level of education'], education_score['math score'], marker='o', label='Math Score')
plt.plot(education_score['parental level of education'], education_score['reading score'], marker='o', label='Reading Score')
plt.plot(education_score['parental level of education'], education_score['writing score'], marker='o', label='Writing Score')
plt.plot(education_score['parental level of education'], education_score['average score'], marker='s', label='Average Score')
# Add labels and title
plt.title('学生父母的教育水平和成绩之间的关系')
plt.xlabel('教育水平')
plt.ylabel('成绩')
# Show the plot
plt.legend()
plt.show()
# Create a dataframe that calculates the mean of each score in the group of race/ethnicity
race_score = df_pre.groupby('race/ethnicity')[['math score', 'reading score', 'writing score']].mean().reset_index()
race_score['average score'] = (race_score['math score']+race_score['reading score']+race_score['writing score'])/3
race_score = race_score.sort_values('average score', ascending=False)
# Create plot to find the relationship between race/ethnicity and scores
plt.figure(figsize=(13,4))
plt.plot(race_score['race/ethnicity'], race_score['math score'], marker='o', label='Math Score')
plt.plot(race_score['race/ethnicity'], race_score['reading score'], marker='o', label='Reading Score')
plt.plot(race_score['race/ethnicity'], race_score['writing score'], marker='o', label='Writing Score')
plt.plot(race_score['race/ethnicity'], race_score['average score'], marker='s', label='Average Score')
# Add labels and title
plt.title('种族和成绩之间的关系')
plt.xlabel('种族')
plt.ylabel('成绩')
# Show the plot
plt.legend()
plt.show()
prep_score = df_pre.groupby('test preparation course')[['math score', 'reading score', 'writing score']].mean().reset_index()
prep_score['average score'] = (prep_score['math score']+prep_score['reading score']+prep_score['writing score'])/3
prep_score = prep_score.sort_values('average score', ascending=False)
# Create plot to find the relationship between test preparation course and scores
plt.figure(figsize=(13,4))
plt.plot(prep_score['test preparation course'], prep_score['math score'], marker='o', label='Math Score')
plt.plot(prep_score['test preparation course'], prep_score['reading score'], marker='o', label='Reading Score')
plt.plot(prep_score['test preparation course'], prep_score['writing score'], marker='o', label='Writing Score')
plt.plot(prep_score['test preparation course'], prep_score['average score'], marker='s', label='Average Score')
# Add labels and title
plt.title('测试准备课程和成绩之间的关系')
plt.xlabel('完成与否')
plt.ylabel('成绩')
# Show the plot
plt.legend()
plt.show()
df_pre.groupby('test preparation course')[['math score', 'reading score', 'writing score']].agg(['var', 'std'])
par_test_count = df_pre[['parental level of education', 'test preparation course']].value_counts().reset_index(name='Count').sort_values('Count', ascending=False)
# Create a figure with two subplots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,4))
# Create the first pie chart for the count of students who completed the test preparation course
ax1.pie(par_test_count[par_test_count['test preparation course']=='completed']['Count'],
labels=par_test_count[par_test_count['test preparation course']=='completed']['parental level of education'],
autopct='%1.2f%%')
ax1.set_title('父母的教育水平的饼图 学生完成测试准备课程')
# Create the second pie chart for the count of students who did not complete the test preparation course
ax2.pie(par_test_count[par_test_count['test preparation course']=='none']['Count'],
labels=par_test_count[par_test_count['test preparation course']=='none']['parental level of education'],
autopct='%1.2f%%')
ax2.set_title('父母的教育水平的饼图 学生没有完成测试准备课程')
# Show the plot
plt.show()
df_pre.groupby('gender').mean(numeric_only=True)
sns.violinplot(x='gender', y='math score', data=df_pre)
# Add labels and title
plt.xlabel('Gender')
plt.ylabel('Math Score')
plt.title('比较男性和女性之间的数学分数')
# Show the plot
plt.show()
plt.figure(figsize=(10,5))
sns.scatterplot(x=range(0, len(df_pre)), y="math score", hue="gender", data=df_pre)
# Add labels and title
plt.title('基于性别数学分数的散点图')
plt.xlabel('学生数')
plt.ylabel('成绩')
# Show the plot
plt.show()
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20, 4))
# Plot for math
ax1.set_title('数学成绩的分布')
ax1.hist(df_pre['math score'], edgecolor='black')
# Plot for reading
ax2.set_title('阅读成绩的分布')
ax2.hist(df_pre['reading score'], edgecolor='black')
# Plot for writing
ax3.hist(df_pre['writing score'], edgecolor='black')
ax3.set_title('写作成绩的分布')
# Show plots
plt.show()
# One-hot encoding the categorical variables
df = pd.get_dummies(df_pre)
# Assign variables
X = df.drop('math score', axis=1)
y = df['math score']
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
models = [LinearRegression(), DecisionTreeRegressor(), RandomForestRegressor(), SVR(kernel='linear'), SVR(kernel='poly'), SVR(kernel='rbf')]
# Use cross-validation to compute the R-squared score for each model
cv_scores = []
for model in models:
scores = cross_val_score(model, X_train, y_train, cv=5, scoring='r2', n_jobs=-1)
cv_scores.append(scores.mean())
# Plot the results
fig, ax = plt.subplots(figsize=(15, 6))
rects = ax.bar(['Linear', 'Decision Tree', 'Random Forest', 'SVR - Linear', 'SVR - Poly', 'SVR - Rbf'], cv_scores, color='blue')
ax.set_ylim(0, 1)
ax.set_title('回归模型的比较')
ax.set_xlabel('Model')
ax.set_ylabel('R-squared')
# Add labels above each bar
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., height, f'{height:.5f}', ha='center', va='bottom')
# Show the plot
plt.show()
model = LinearRegression()
# Fit the model using the training data
model.fit(X_train, y_train)
# Make predictions on the test data
y_pred = model.predict(X_test)
# Evaluate the performance of the model
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print('Evaluation Metrics')
print("Mean squared error:", mse)
print("Root Mean squared error:", rmse)
print("Mean absolute error:", mae)
print("R-squared score:", r2)