A Brief Look at a Python Score-Analysis System for Predicting Further Study

This article analyses a dataset of student scores with 9 features, including GRE score, TOEFL score and university rating. Correlation analysis shows that GRE, CGPA and TOEFL scores have the greatest influence on whether a student goes on to further study. Linear regression, random forest regression and decision tree regression are used for prediction; comparing the three, linear regression performs best. Logistic regression, random forest and decision tree classifiers (among others) are then applied to the classification task, where random forest and naive Bayes predict best. Finally, K-means and hierarchical clustering are run on the data and their groupings compared.

Case study: this dataset records each student's scores. We analyse it to judge whether a student is likely to go on to graduate study.

Dataset features

1 GRE score (290 to 340)

2 TOEFL score (92 to 120)

3 University rating (1 to 5)

4 SOP, strength of the statement of purpose (1 to 5)

5 LOR, strength of the letters of recommendation (1 to 5)

6 CGPA (6.8 to 9.92)

7 Research experience (0 or 1)

8 Chance of admit (0.34 to 0.97)

1. Import packages

import numpy as np

import pandas as pd

import matplotlib.pyplot as plt

import seaborn as sns

import os,sys

2. Load and inspect the dataset

df = pd.read_csv("d:\\machine-learning\\score\\admission_predict.csv",sep = ",")

print('there are ',len(df.columns),'columns')

for c in df.columns:
    sys.stdout.write(str(c) + ', ')

there are 9 columns

serial no., gre score, toefl score, university rating, sop, lor , cgpa, research, chance of admit ,

There are 9 feature columns in total.

df.info()

RangeIndex: 400 entries, 0 to 399
Data columns (total 9 columns):
serial no.           400 non-null int64
gre score            400 non-null int64
toefl score          400 non-null int64
university rating    400 non-null int64
sop                  400 non-null float64
lor                  400 non-null float64
cgpa                 400 non-null float64
research             400 non-null int64
chance of admit      400 non-null float64
dtypes: float64(4), int64(5)
memory usage: 28.2 KB

Dataset summary:

1. The data has 9 features: serial number, GRE score, TOEFL score, university rating, SOP, LOR, CGPA, research experience, and chance of admit.

2. The dataset contains no null values.

3. There are 400 rows in total.

# tidy the column name (the original header carries a trailing space)

df = df.rename(columns={'chance of admit ':'chance of admit'})

# show the first 5 rows

df.head()
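The rename above fixes one column at a time. As a sketch (on a toy frame, since the real CSV is not reproduced here), every header can be stripped of stray whitespace in one go:

```python
import pandas as pd

# Toy frame reproducing the quirk of the real CSV: the last header
# carries a trailing space ("chance of admit ").
toy = pd.DataFrame({"serial no.": [1], "chance of admit ": [0.92]})

# Strip leading/trailing whitespace from every column name at once,
# so no per-column rename is needed.
toy.columns = toy.columns.str.strip()
```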

3. Feature correlations

fig,ax = plt.subplots(figsize=(10,10))

sns.heatmap(df.corr(),ax=ax,annot=True,linewidths=0.05,fmt='.2f',cmap='magma')

plt.show()

Conclusions:

1. The features most strongly related to going on to a master's are the GRE, CGPA and TOEFL scores.

2. LOR, SOP and research have comparatively weak influence.
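Instead of reading the ranking off the heatmap by eye, the correlation of each feature with the target can be sorted directly. A sketch on synthetic data (the real CSV is assumed absent here; the column names merely echo the dataset's):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the admissions data: gre drives the target,
# cgpa tracks gre loosely, research is pure noise.
rng = np.random.default_rng(0)
gre = rng.normal(315.0, 10.0, 200)
toy = pd.DataFrame({
    "gre score": gre,
    "cgpa": gre / 40.0 + rng.normal(0.0, 0.2, 200),
    "research": rng.integers(0, 2, 200).astype(float),
})
toy["chance of admit"] = (gre - 290.0) / 60.0 + rng.normal(0.0, 0.05, 200)

# Correlation of every feature with the target, strongest first.
ranking = (toy.corr()["chance of admit"]
              .drop("chance of admit")
              .sort_values(ascending=False))
```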

4. Data visualisation and bivariate analysis

4.1 Number of students with research experience

print("not having research:",len(df[df.research == 0]))

print("having research:",len(df[df.research == 1]))

y = np.array([len(df[df.research == 0]),len(df[df.research == 1])])

x = np.arange(2)

plt.bar(x,y)

plt.title("research experience")

plt.xlabel("candidates")

plt.ylabel("frequency")

plt.xticks(x,('not having research','having research'))

plt.show()

Conclusion: 219 students have research experience and 181 do not.

4.2 TOEFL scores

y = np.array([df['toefl score'].min(),df['toefl score'].mean(),df['toefl score'].max()])

x = np.arange(3)

plt.bar(x,y)

plt.title('toefl score')

plt.xlabel('level')

plt.ylabel('toefl score')

plt.xticks(x,('worst','average','best'))

plt.show()

Conclusion: the lowest TOEFL score is 92 and the highest is a full 120; these students' English is quite strong.

4.3 GRE scores

df['gre score'].plot(kind='hist',bins=200,figsize=(6,6))

plt.title('gre score')

plt.xlabel('gre score')

plt.ylabel('frequency')

plt.show()

Conclusion: scores around 310 and 330 are the most common.

4.4 CGPA vs. university rating

plt.scatter(df['university rating'],df['cgpa'])

plt.title('cgpa scores for university ratings')

plt.xlabel('university rating')

plt.ylabel('cgpa')

plt.show()

Conclusion: the higher the university rating, the higher a student's CGPA tends to be.

4.5 GRE score vs. CGPA

plt.scatter(df['gre score'],df['cgpa'])

plt.title('cgpa for gre scores')

plt.xlabel('gre score')

plt.ylabel('cgpa')

plt.show()

Conclusion: higher CGPA tends to go with a higher GRE score; the two are strongly correlated.

4.6 TOEFL score vs. GRE score

df[df['cgpa']>=8.5].plot(kind='scatter',x='gre score',y='toefl score',color='red')

plt.xlabel('gre score')

plt.ylabel('toefl score')

plt.title('cgpa >= 8.5')

plt.grid(True)

plt.show()

Conclusion: GRE and TOEFL scores are positively correlated in most cases, but a high GRE score does not guarantee a high TOEFL score.

4.7 University rating vs. chance of admit

s = df[df['chance of admit'] >= 0.75]['university rating'].value_counts().head(5)

plt.title('university ratings of candidates with at least a 75% acceptance chance')

s.plot(kind='bar',figsize=(20,10),colormap='Pastel1')

plt.xlabel('university rating')

plt.ylabel('candidates')

plt.show()

Conclusion: students from higher-rated universities are more likely to go on to further study.

4.8 SOP vs. CGPA

plt.scatter(df['cgpa'],df['sop'])

plt.xlabel('cgpa')

plt.ylabel('sop')

plt.title('sop for cgpa')

plt.show()

Conclusion: students with a high CGPA also show a stronger stated intent (SOP) to pursue a master's.

4.9 SOP vs. GRE score

plt.scatter(df['gre score'],df['sop'])

plt.xlabel('gre score')

plt.ylabel('sop')

plt.title('sop for gre score')

plt.show()

Conclusion: students with a stronger SOP tend to have higher GRE scores.

5. Models

5.1 Prepare the dataset

# read the dataset

df = pd.read_csv('d:\\machine-learning\\score\\admission_predict.csv',sep=',')

serialno = df['serial no.'].values

df.drop(['serial no.'],axis=1,inplace=True)

df = df.rename(columns={'chance of admit ':'chance of admit'})

# split the dataset

y = df['chance of admit'].values

x = df.drop(['chance of admit'],axis=1)

from sklearn.model_selection import train_test_split

x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)

# scale features to [0, 1]

from sklearn.preprocessing import MinMaxScaler

scalex = MinMaxScaler(feature_range=(0, 1))

x_train[x_train.columns] = scalex.fit_transform(x_train[x_train.columns])

# use transform, not fit_transform: the test set must be scaled with the
# parameters learned from the training set, or the two sets end up on
# different scales
x_test[x_test.columns] = scalex.transform(x_test[x_test.columns])
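A Pipeline makes the fit-on-train, transform-on-test discipline automatic, because the scaler is refitted only when the whole pipeline is fitted. A sketch on synthetic data (not the admissions CSV):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for the admissions features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(0.0, 0.05, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# The scaler is fitted on X_tr inside pipe.fit and only applied
# (never refitted) to X_te inside pipe.score.
pipe = Pipeline([("scale", MinMaxScaler()), ("reg", LinearRegression())])
pipe.fit(X_tr, y_tr)
r2 = pipe.score(X_te, y_te)
```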

5.2 Regression

5.2.1 Linear regression

from sklearn.linear_model import LinearRegression

lr = LinearRegression()

lr.fit(x_train,y_train)

y_head_lr = lr.predict(x_test)

print('real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(lr.predict(x_test.iloc[[1],:])))

print('real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(lr.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score

print('r_square score: ',r2_score(y_test,y_head_lr))

y_head_lr_train = lr.predict(x_train)

print('r_square score(train data):',r2_score(y_train,y_head_lr_train))
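R² alone hides the scale of the errors; MAE and RMSE report them in the target's own units (here, admission probability). A sketch with small hand-made vectors standing in for y_test and y_head_lr:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hand-made stand-ins for y_test and the linear model's predictions.
y_true = np.array([0.65, 0.72, 0.80, 0.90])
y_pred = np.array([0.60, 0.75, 0.78, 0.92])

mae = mean_absolute_error(y_true, y_pred)           # average |error|
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # penalises big misses more
r2 = r2_score(y_true, y_pred)
```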

5.2.2 Random forest regression

from sklearn.ensemble import RandomForestRegressor

rfr = RandomForestRegressor(n_estimators=100,random_state=42)

rfr.fit(x_train,y_train)

y_head_rfr = rfr.predict(x_test)

print('real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(rfr.predict(x_test.iloc[[1],:])))

print('real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(rfr.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score

print('r_square score: ',r2_score(y_test,y_head_rfr))

y_head_rfr_train = rfr.predict(x_train)

print('r_square score(train data):',r2_score(y_train,y_head_rfr_train))

5.2.3 Decision tree regression

from sklearn.tree import DecisionTreeRegressor

dt = DecisionTreeRegressor(random_state=42)

dt.fit(x_train,y_train)

y_head_dt = dt.predict(x_test)

print('real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(dt.predict(x_test.iloc[[1],:])))

print('real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(dt.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score

print('r_square score: ',r2_score(y_test,y_head_dt))

y_head_dt_train = dt.predict(x_train)

print('r_square score(train data):',r2_score(y_train,y_head_dt_train))

5.2.4 Comparing the three regression methods

y = np.array([r2_score(y_test,y_head_lr),r2_score(y_test,y_head_rfr),r2_score(y_test,y_head_dt)])

x = np.arange(3)

plt.bar(x,y)

plt.title('comparison of regression algorithms')

plt.xlabel('regression')

plt.ylabel('r2_score')

plt.xticks(x,("LinearRegression","RandomForestReg.","DecisionTreeReg."))

plt.show()

Conclusion: among the regression algorithms, linear regression performs best.
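A single 80/20 split can flatter one model by chance; k-fold cross-validation averages the comparison over several splits. A sketch on synthetic near-linear data (on such data linear regression should beat an unpruned tree, mirroring the conclusion above):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic near-linear data standing in for the admissions features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X @ np.array([0.4, 0.3, -0.2, 0.1]) + rng.normal(0.0, 0.1, 200)

# Mean R^2 across 5 folds for each model.
lr_cv = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
dt_cv = cross_val_score(DecisionTreeRegressor(random_state=42), X, y,
                        cv=5, scoring="r2").mean()
```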

5.2.5 Comparing the three regression methods with the actual values

red = plt.scatter(np.arange(0,80,5),y_head_lr[0:80:5],color='red')

blue = plt.scatter(np.arange(0,80,5),y_head_rfr[0:80:5],color='blue')

green = plt.scatter(np.arange(0,80,5),y_head_dt[0:80:5],color='green')

black = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color='black')

plt.title('comparison of regression algorithms')

plt.xlabel('index of candidate')

plt.ylabel('chance of admit')

plt.legend([red,blue,green,black],['lr','rfr','dt','real'])

plt.show()

Conclusion: about 70% of the candidates in the dataset are likely to pursue a master's; the plot shows some points are still poorly predicted.

5.3 Classification

5.3.1 Prepare the data

df = pd.read_csv('d:\\machine-learning\\score\\admission_predict.csv',sep=',')

serialno = df['serial no.'].values

df.drop(['serial no.'],axis=1,inplace=True)

df = df.rename(columns={'chance of admit ':'chance of admit'})

y = df['chance of admit'].values

x = df.drop(['chance of admit'],axis=1)

from sklearn.model_selection import train_test_split

x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)

from sklearn.preprocessing import MinMaxScaler

scalex = MinMaxScaler(feature_range=(0, 1))

x_train[x_train.columns] = scalex.fit_transform(x_train[x_train.columns])

# scale the test set with the parameters fitted on the training set
x_test[x_test.columns] = scalex.transform(x_test[x_test.columns])

# if chance of admit > 0.8, the label is 1, otherwise 0

y_train_01 = [1 if each > 0.8 else 0 for each in y_train]

y_test_01 = [1 if each > 0.8 else 0 for each in y_test]

y_train_01 = np.array(y_train_01)

y_test_01 = np.array(y_test_01)
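The two list comprehensions above can be written as a single vectorised np.where over the continuous chances, with the same 0.8 cut-off:

```python
import numpy as np

# Hand-made chances standing in for y_train / y_test; note the strict
# "> 0.8" comparison, so exactly 0.8 maps to 0.
chances = np.array([0.92, 0.75, 0.81, 0.80, 0.34])
labels = np.where(chances > 0.8, 1, 0)
```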

5.3.2 Logistic regression

from sklearn.linear_model import LogisticRegression

lrc = LogisticRegression()

lrc.fit(x_train,y_train_01)

print('score: ',lrc.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(lrc.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(lrc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_lrc = confusion_matrix(y_test_01,lrc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_lrc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,lrc.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,lrc.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,lrc.predict(x_test)))

# test for train dataset:

cm_lrc_train = confusion_matrix(y_train_01,lrc.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_lrc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, logistic regression misclassifies 23 samples on the training set; 72 students are predicted to go on to a master's.

2. On the test set there are 7 misclassified samples.
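precision, recall and f1 are printed one call at a time above; classification_report collects them per class in a single call. A sketch with hand-made labels standing in for y_test_01 and the model's predictions:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Hand-made truth/prediction pair: one false negative (index 2) and
# one false positive (index 5).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

cm = confusion_matrix(y_true, y_pred)
report = classification_report(y_true, y_pred, output_dict=True)
```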

5.3.3 Support vector machine (SVM)

from sklearn.svm import SVC

svm = SVC(random_state=1,kernel='rbf')

svm.fit(x_train,y_train_01)

print('score: ',svm.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(svm.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(svm.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_svm = confusion_matrix(y_test_01,svm.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_svm,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,svm.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,svm.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,svm.predict(x_test)))

# test for train dataset:

cm_svm_train = confusion_matrix(y_train_01,svm.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_svm_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, SVM misclassifies 22 samples on the training set; 70 students are predicted to go on to a master's.

2. On the test set there are 8 misclassified samples.

5.3.4 Naive Bayes

from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()

nb.fit(x_train,y_train_01)

print('score: ',nb.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(nb.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(nb.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_nb = confusion_matrix(y_test_01,nb.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_nb,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,nb.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,nb.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,nb.predict(x_test)))

# test for train dataset:

cm_nb_train = confusion_matrix(y_train_01,nb.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_nb_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, naive Bayes misclassifies 20 samples on the training set; 78 students are predicted to go on to a master's.

2. On the test set there are 7 misclassified samples.

5.3.5 Random forest classifier

from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier(n_estimators=100,random_state=1)

rfc.fit(x_train,y_train_01)

print('score: ',rfc.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(rfc.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(rfc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_rfc = confusion_matrix(y_test_01,rfc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_rfc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,rfc.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,rfc.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,rfc.predict(x_test)))

# test for train dataset:

cm_rfc_train = confusion_matrix(y_train_01,rfc.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_rfc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, the random forest misclassifies 0 samples on the training set (it fits the training data perfectly, a hint of overfitting); 88 students are predicted to go on to a master's.

2. On the test set there are 5 misclassified samples.

5.3.6 Decision tree classifier

from sklearn.tree import DecisionTreeClassifier

dtc = DecisionTreeClassifier(criterion='entropy',max_depth=3)

dtc.fit(x_train,y_train_01)

print('score: ',dtc.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(dtc.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(dtc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_dtc = confusion_matrix(y_test_01,dtc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_dtc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,dtc.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,dtc.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,dtc.predict(x_test)))

# test for train dataset:

cm_dtc_train = confusion_matrix(y_train_01,dtc.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_dtc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, the decision tree misclassifies 20 samples on the training set; 78 students are predicted to go on to a master's.

2. On the test set there are 7 misclassified samples.

5.3.7 K-nearest neighbours classifier

from sklearn.neighbors import KNeighborsClassifier

scores = []
for each in range(1,50):
    knn_n = KNeighborsClassifier(n_neighbors=each)
    knn_n.fit(x_train,y_train_01)
    scores.append(knn_n.score(x_test,y_test_01))

plt.plot(range(1,50),scores)

plt.xlabel('k')

plt.ylabel('accuracy')

plt.show()
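Rather than reading the best k off the accuracy curve, np.argmax picks it directly. A sketch with a hand-made score list standing in for the real scan (whose k values start at 1):

```python
import numpy as np

# Hypothetical accuracies for k = 1..5 standing in for `scores` above.
knn_scores = [0.85, 0.90, 0.94, 0.92, 0.91]

best_k = int(np.argmax(knn_scores)) + 1  # +1 because k started at 1
best_score = knn_scores[best_k - 1]
```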

knn = KNeighborsClassifier(n_neighbors=7)

knn.fit(x_train,y_train_01)

print('score 7 : ',knn.score(x_test,y_test_01))

print('real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(knn.predict(x_test.iloc[[1],:])))

print('real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(knn.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix

cm_knn = confusion_matrix(y_test_01,knn.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_knn,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)

plt.title('test for test dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score

print('precision_score is : ',precision_score(y_test_01,knn.predict(x_test)))

print('recall_score is : ',recall_score(y_test_01,knn.predict(x_test)))

print('f1_score is : ',f1_score(y_test_01,knn.predict(x_test)))

# test for train dataset:

cm_knn_train = confusion_matrix(y_train_01,knn.predict(x_train))

f,ax = plt.subplots(figsize=(5,5))

sns.heatmap(cm_knn_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)

plt.title('test for train dataset')

plt.xlabel('predicted y values')

plt.ylabel('real y value')

plt.show()

Conclusions:

1. From the confusion matrices, KNN misclassifies 22 samples on the training set; 71 students are predicted to go on to a master's.

2. On the test set there are 7 misclassified samples.

5.3.8 Comparing the classifiers

y = np.array([lrc.score(x_test,y_test_01),svm.score(x_test,y_test_01),nb.score(x_test,y_test_01),
              dtc.score(x_test,y_test_01),rfc.score(x_test,y_test_01),knn.score(x_test,y_test_01)])

x = np.arange(6)

plt.bar(x,y)

plt.title('comparison of classification algorithms')

plt.xlabel('classification')

plt.ylabel('score')

plt.xticks(x,("LogisticReg.","SVM","GNB","Dec.Tree","Ran.Forest","KNN"))

plt.show()

Conclusion: random forest and naive Bayes both achieve relatively high scores.
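As with the regressors, the bar chart compares the classifiers on a single split; cross-validation gives a steadier ranking. A sketch using make_classification as a stand-in for the thresholded admissions labels:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data standing in for (x, y_01).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Mean accuracy over 5 folds for two of the classifiers above.
lrc_cv = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
nb_cv = cross_val_score(GaussianNB(), X, y, cv=5).mean()
```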

5.4 Clustering

5.4.1 Prepare the data

df = pd.read_csv('d:\\machine-learning\\score\\admission_predict.csv',sep=',')

df = df.rename(columns={'chance of admit ':'chance of admit'})

serialno = df['serial no.']

df.drop(['serial no.'],axis=1,inplace=True)

df = (df - np.min(df)) / (np.max(df)-np.min(df))

y = df['chance of admit']

x = df.drop(['chance of admit'],axis=1)

5.4.2 Dimensionality reduction

from sklearn.decomposition import PCA

pca = PCA(n_components=1,whiten=True)

pca.fit(x)

x_pca = pca.transform(x)

x_pca = x_pca.reshape(400)

dictionary = {'x':x_pca,'y':y}

data = pd.DataFrame(dictionary)

print('pca data:',data.head())

print()

print('orin data:',df.head())

5.4.3 K-means clustering

from sklearn.cluster import KMeans

wcss = []
for k in range(1,15):
    kmeans = KMeans(n_clusters=k)
    kmeans.fit(x)
    wcss.append(kmeans.inertia_)

plt.plot(range(1,15),wcss)

plt.xlabel('kmeans')

plt.ylabel('wcss')

plt.show()
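To see what the "elbow" in the WCSS curve means, here is a sketch on synthetic, well-separated blobs (not the admissions data): the inertia drop from k = 2 to k = 3 dwarfs the drop from k = 3 to k = 4, so k = 3 is the elbow.

```python
import numpy as np
from sklearn.cluster import KMeans

# Three tight synthetic blobs centred at (0,0), (5,5) and (10,10).
rng = np.random.default_rng(0)
blobs = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in (0.0, 5.0, 10.0)])

# Inertia (WCSS) for k = 2, 3, 4.
inertia = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(blobs).inertia_
           for k in (2, 3, 4)}
```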

df["serial no."] = serialno

kmeans = KMeans(n_clusters=3)

clusters_knn = kmeans.fit_predict(x)

df['label_kmeans'] = clusters_knn

plt.scatter(df[df.label_kmeans == 0 ]["serial no."],df[df.label_kmeans == 0]['chance of admit'],color = "red")

plt.scatter(df[df.label_kmeans == 1 ]["serial no."],df[df.label_kmeans == 1]['chance of admit'],color = "blue")

plt.scatter(df[df.label_kmeans == 2 ]["serial no."],df[df.label_kmeans == 2]['chance of admit'],color = "green")

plt.title("k-means clustering")

plt.xlabel("candidates")

plt.ylabel("chance of admit")

plt.show()

plt.scatter(data.x[df.label_kmeans == 0 ],data[df.label_kmeans == 0].y,color = "red")

plt.scatter(data.x[df.label_kmeans == 1 ],data[df.label_kmeans == 1].y,color = "blue")

plt.scatter(data.x[df.label_kmeans == 2 ],data[df.label_kmeans == 2].y,color = "green")

plt.title("k-means clustering")

plt.xlabel("x")

plt.ylabel("chance of admit")

plt.show()

Conclusion: the data splits into three groups: students who will go on to a master's, students who will not, and an undecided group that still leans towards further study.

5.4.4 Hierarchical clustering

from scipy.cluster.hierarchy import linkage,dendrogram

merg = linkage(x,method='ward')

dendrogram(merg,leaf_rotation=90)

plt.xlabel('data points')

plt.ylabel('euclidean distance')

plt.show()

from sklearn.cluster import AgglomerativeClustering

hiyerartical_cluster = AgglomerativeClustering(n_clusters=3,affinity='euclidean',linkage='ward')

clusters_hiyerartical = hiyerartical_cluster.fit_predict(x)

df['label_hiyerartical'] = clusters_hiyerartical

plt.scatter(df[df.label_hiyerartical == 0 ]["serial no."],df[df.label_hiyerartical == 0]['chance of admit'],color = "red")

plt.scatter(df[df.label_hiyerartical == 1 ]["serial no."],df[df.label_hiyerartical == 1]['chance of admit'],color = "blue")

plt.scatter(df[df.label_hiyerartical == 2 ]["serial no."],df[df.label_hiyerartical == 2]['chance of admit'],color = "green")

plt.title('hierarchical clustering')

plt.xlabel('candidates')

plt.ylabel('chance of admit')

plt.show()

plt.scatter(data[df.label_hiyerartical == 0].x,data.y[df.label_hiyerartical==0],color='red')

plt.scatter(data[df.label_hiyerartical == 1].x,data.y[df.label_hiyerartical==1],color='blue')

plt.scatter(data[df.label_hiyerartical == 2].x,data.y[df.label_hiyerartical==2],color='green')

plt.title('hierarchical clustering')

plt.xlabel('x')

plt.ylabel('chance of admit')

plt.show()

Conclusion: the hierarchical clustering result is consistent with K-means; here, though, the number of clusters k = 3 was read off the dendrogram.

Final conclusion: by working through this introductory dataset, you can learn:

1. ways of visualising and presenting features;

2. how to call the sklearn API;

3. how to compare the quality of different models.

