An earlier article, 数据分析之探索性数据分析, explained in detail what EDA is and introduced some common analysis methods; interested readers can refer to it.
This article analyzes the "HR Analytics: Job Change of Data Scientists" dataset, applying common EDA methods to examine each feature and its relationship with the target variable, and using seaborn
visualizations to explore which characteristics are associated with data scientists changing jobs.
Because of its length, this EDA case study is split into two parts.
Import packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
Load and inspect the data
path_train = '../input/hr-analytics-job-change-of-data-scientists/aug_train.csv'
path_test = '../input/hr-analytics-job-change-of-data-scientists/aug_test.csv'
path_submission = '../input/hr-analytics-job-change-of-data-scientists/sample_submission.csv'
train = pd.read_csv(path_train)
test = pd.read_csv(path_test)
train.head()
test.sample(5)
# sample(5) returns 5 randomly selected rows.
Exploratory data analysis
General overview
train.shape
(19158, 14)
test.shape
(2129, 13)
train.dtypes
enrollee_id int64
city object
city_development_index float64
gender object
relevent_experience object
enrolled_university object
education_level object
major_discipline object
experience object
company_size object
company_type object
last_new_job object
training_hours int64
target float64
dtype: object
test.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2129 entries, 0 to 2128
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 enrollee_id 2129 non-null int64
1 city 2129 non-null object
2 city_development_index 2129 non-null float64
3 gender 1621 non-null object
4 relevent_experience 2129 non-null object
5 enrolled_university 2098 non-null object
6 education_level 2077 non-null object
7 major_discipline 1817 non-null object
8 experience 2124 non-null object
9 company_size 1507 non-null object
10 company_type 1495 non-null object
11 last_new_job 2089 non-null object
12 training_hours 2129 non-null int64
dtypes: float64(1), int64(2), object(10)
memory usage: 216.4+ KB
list(train.columns)
['enrollee_id',
'city',
'city_development_index',
'gender',
'relevent_experience',
'enrolled_university',
'education_level',
'major_discipline',
'experience',
'company_size',
'company_type',
'last_new_job',
'training_hours',
'target']
The describe method generates descriptive statistics for a DataFrame and gives a convenient overview of how each column is distributed. Note that NaN values are excluded when these statistics are computed.

- count: number of non-null observations
- mean: mean
- std: standard deviation
- min: minimum
- 25%: first quartile
- 50%: median
- 75%: third quartile
- max: maximum
train.describe(include='all').T
Missing value analysis
def percentage_nulls(df):
    # Count and percentage of missing values per column.
    number_nulls = pd.DataFrame(df.isnull().sum(), columns=['Total'])
    number_nulls['% nulls'] = round((number_nulls['Total'] / df.shape[0]) * 100, 1)
    return number_nulls
Missing values in the training set:
nulls_train = percentage_nulls(train)
nulls_train
| Feature | Total | % nulls |
|---|---|---|
| enrollee_id | 0 | 0.0 |
| city | 0 | 0.0 |
| city_development_index | 0 | 0.0 |
| gender | 4508 | 23.5 |
| relevent_experience | 0 | 0.0 |
| enrolled_university | 386 | 2.0 |
| education_level | 460 | 2.4 |
| major_discipline | 2813 | 14.7 |
| experience | 65 | 0.3 |
| company_size | 5938 | 31.0 |
| company_type | 6140 | 32.0 |
| last_new_job | 423 | 2.2 |
| training_hours | 0 | 0.0 |
| target | 0 | 0.0 |
# The five features with the largest number of missing values.
nulls_train.nlargest(5, 'Total')
| Feature | Total | % nulls |
|---|---|---|
| company_type | 6140 | 32.0 |
| company_size | 5938 | 31.0 |
| gender | 4508 | 23.5 |
| major_discipline | 2813 | 14.7 |
| education_level | 460 | 2.4 |
percentage_nulls(test)
| Feature | Total | % nulls |
|---|---|---|
| enrollee_id | 0 | 0.0 |
| city | 0 | 0.0 |
| city_development_index | 0 | 0.0 |
| gender | 508 | 23.9 |
| relevent_experience | 0 | 0.0 |
| enrolled_university | 31 | 1.5 |
| education_level | 52 | 2.4 |
| major_discipline | 312 | 14.7 |
| experience | 5 | 0.2 |
| company_size | 622 | 29.2 |
| company_type | 634 | 29.8 |
| last_new_job | 40 | 1.9 |
| training_hours | 0 | 0.0 |
Visualizing the missing values
import missingno as msno
msno.matrix(train)
Missing values in the test set:
msno.matrix(test)
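missingno also offers a bar view that shows the non-null count of each column directly; a minimal extra example, not part of the original notebook:
# Bar chart of non-null counts per column.
msno.bar(train)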
To make the analysis that follows easier, fill the missing values with the string 'Unknown'.
df_train = train.fillna("Unknown")
df_test = test.fillna("Unknown")   # df_test is used later, so fill the test set the same way
# After filling, check that no missing values remain.
df_train.isnull().sum()
enrollee_id 0
city 0
city_development_index 0
gender 0
relevent_experience 0
enrolled_university 0
education_level 0
major_discipline 0
experience 0
company_size 0
company_type 0
last_new_job 0
training_hours 0
target 0
dtype: int64
For a deeper look at handling missing values, see the earlier article 缺失值处理,你真的会了吗?
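For comparison, here is a minimal sketch of an alternative strategy that is not used in this article: filling each categorical column with its most frequent value (mode imputation) rather than an 'Unknown' placeholder.
# Sketch only: mode imputation for the categorical columns (not used in this article).
cat_cols = train.select_dtypes(include='object').columns
train_mode_filled = train.copy()
for col in cat_cols:
    train_mode_filled[col] = train_mode_filled[col].fillna(train_mode_filled[col].mode()[0])
train_mode_filled.isnull().sum()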
Analyzing each feature
Feature 'city' -- city
# Select the 'city' column
city_train = df_train['city']
city_train.value_counts()
city_103 4355
city_21 2702
city_16 1533
city_114 1336
city_160 845
...
city_129 3
city_111 3
city_121 3
city_140 1
city_171 1
Name: city, Length: 123, dtype: int64
city_train.value_counts().plot()
city_test = df_test['city']
city_test.value_counts()
city_103 473
city_21 318
city_16 168
city_114 155
city_160 113
...
city_84 1
city_171 1
city_25 1
city_93 1
city_141 1
Name: city, Length: 108, dtype: int64
Note that cities are encoded as numeric IDs: the training set contains 123 distinct cities and the test set 108.
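A quick follow-up check, added here and not in the original notebook, is whether every city code in the test set also appears in the training set:
# City codes seen in each set; any test-only codes would be unseen categories at prediction time.
train_cities = set(city_train.unique())
test_cities = set(city_test.unique())
print(f"Distinct cities in train: {len(train_cities)}")   # 123
print(f"Distinct cities in test:  {len(test_cities)}")    # 108
print(f"Test cities not present in train: {sorted(test_cities - train_cities)}")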
'city_development_index' -- city development index
See the Wikipedia entry for a definition: https://en.wikipedia.org/wiki/City_development_index
# Look at the distribution of this feature
sns.displot(data=df_train,
x='city_development_index',
height=6,
aspect = 2,
color = 'lightblue')
sns.set_context(font_scale=1)
with sns.axes_style():
g = sns.displot(data=df_test,
x='city_development_index',
height=5,
aspect = 2,
color = 'coral')
g.set_ylabels(fontsize=15)
g.set_xlabels(fontsize=15)
g.set_xticklabels(fontsize=15)
g.set_yticklabels(fontsize=15)
Note this pattern for setting the font sizes of the axis labels and tick labels.
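The two displots are drawn separately; for a direct numerical comparison (an extra step, not in the original notebook), the summary statistics of city_development_index in both sets can be placed side by side:
# Side-by-side summary statistics of city_development_index for train vs. test.
cdi_compare = pd.concat(
    [df_train['city_development_index'].describe().rename('train'),
     df_test['city_development_index'].describe().rename('test')],
    axis=1)
print(cdi_compare)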
'gender' -- gender
Gender and the remaining features are categorical variables, so for each one we count the total and the percentage share of every category.
gender_train = df_train['gender']
gender_test = df_test['gender']
These category counts and percentages are needed repeatedly below, so define them as a function.
def percent_nan(df):
    # Count and percentage share of each category in a Series (including 'Unknown').
    number = pd.DataFrame(df.value_counts())
    number.columns = ['Total']
    number['%'] = round((number['Total'] / df.notnull().sum()) * 100, 1)
    return number
percent_nan(gender_train)
| gender | Total | % |
|---|---|---|
| Male | 13221 | 69.0 |
| Unknown | 4508 | 23.5 |
| Female | 1238 | 6.5 |
| Other | 191 | 1.0 |
percent_nan(gender_test)
| gender | Total | % |
|---|---|---|
| Male | 1460 | 68.6 |
| Unknown | 508 | 23.9 |
| Female | 137 | 6.4 |
| Other | 24 | 1.1 |
The proportion of missing values is similar in the two datasets, so plot them side by side to make any differences easier to see.
Again, define the plotting code as a function for reuse in the later analysis.
def draw_countplot(feature, palette, order=None):
    # Side-by-side count plots of a feature in the training and test sets.
    fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
    sns.countplot(x=feature, data=df_train, palette=palette, ax=ax[0], order=order).set_title('Train')
    sns.countplot(x=feature, data=df_test, palette=palette, ax=ax[1], order=order).set_title('Test')
    fig.tight_layout()
draw_countplot("gender","Set1")
'relevent_experience' -- relevant experience
relevent_experience_train = df_train['relevent_experience']
relevent_experience_test = df_test['relevent_experience']
percent_nan(relevent_experience_train)
| relevent_experience | Total | % |
|---|---|---|
| Has relevent experience | 13792 | 72.0 |
| No relevent experience | 5366 | 28.0 |
percent_nan(relevent_experience_test)
| relevent_experience | Total | % |
|---|---|---|
| Has relevent experience | 1524 | 71.6 |
| No relevent experience | 605 | 28.4 |
draw_countplot("relevent_experience","Set2")
'enrolled_university' -- enrolled university course
The type of university course the candidate is enrolled in, if any.
enrolled_university_train = df_train['enrolled_university']
enrolled_university_test = df_test['enrolled_university']
percent_nan(enrolled_university_train)
| enrolled_university | Total | % |
|---|---|---|
| no_enrollment | 13817 | 72.1 |
| Full time course | 3757 | 19.6 |
| Part time course | 1198 | 6.3 |
| Unknown | 386 | 2.0 |
percent_nan(enrolled_university_test)
| enrolled_university | Total | % |
|---|---|---|
| no_enrollment | 1519 | 71.3 |
| Full time course | 435 | 20.4 |
| Part time course | 144 | 6.8 |
| Unknown | 31 | 1.5 |
order_enrolled_university = percent_nan(enrolled_university_train).index
order_enrolled_university
Index(['no_enrollment', 'Full time course',
'Part time course', 'Unknown'],
dtype='object')
draw_countplot('enrolled_university',"Set3",order_enrolled_university)
'education_level' -- education level
education_level_train = df_train['education_level']
education_level_test = df_test['education_level']
percent_nan(education_level_train)
| education_level | Total | % |
|---|---|---|
| Graduate | 11598 | 60.5 |
| Masters | 4361 | 22.8 |
| High School | 2017 | 10.5 |
| Unknown | 460 | 2.4 |
| Phd | 414 | 2.2 |
| Primary School | 308 | 1.6 |
percent_nan(education_level_test)
| education_level | Total | % |
|---|---|---|
| Graduate | 1269 | 59.6 |
| Masters | 496 | 23.3 |
| High School | 222 | 10.4 |
| Phd | 54 | 2.5 |
| Unknown | 52 | 2.4 |
| Primary School | 36 | 1.7 |
order_education_level = percent_nan(education_level_train).index
draw_countplot('education_level',"Set1",order_education_level)
'major_discipline' -- major discipline
major_discipline_train = df_train['major_discipline']
major_discipline_test = df_test['major_discipline']
percent_nan(major_discipline_train)
| major_discipline | Total | % |
|---|---|---|
| STEM | 14492 | 75.6 |
| Unknown | 2813 | 14.7 |
| Humanities | 669 | 3.5 |
| Other | 381 | 2.0 |
| Business Degree | 327 | 1.7 |
| Arts | 253 | 1.3 |
| No Major | 223 | 1.2 |
percent_nan(major_discipline_test)
| major_discipline | Total | % |
|---|---|---|
| STEM | 1621 | 76.1 |
| Unknown | 312 | 14.7 |
| Humanities | 80 | 3.8 |
| Other | 40 | 1.9 |
| Business Degree | 37 | 1.7 |
| No Major | 22 | 1.0 |
| Arts | 17 | 0.8 |
order_major_discipline = percent_nan(major_discipline_train).index
draw_countplot('major_discipline',"Set2",order_major_discipline)
'experience' -- years of work experience
experience_train = df_train['experience']
experience_test = df_test['experience']
percent_nan(experience_train)
| experience | Total | % |
|---|---|---|
| >20 | 3286 | 17.2 |
| 5 | 1430 | 7.5 |
| 4 | 1403 | 7.3 |
| 3 | 1354 | 7.1 |
| 6 | 1216 | 6.3 |
| 2 | 1127 | 5.9 |
| 7 | 1028 | 5.4 |
| 10 | 985 | 5.1 |
| 9 | 980 | 5.1 |
| 8 | 802 | 4.2 |
| 15 | 686 | 3.6 |
| 11 | 664 | 3.5 |
| 14 | 586 | 3.1 |
| 1 | 549 | 2.9 |
| <1 | 522 | 2.7 |
| 16 | 508 | 2.7 |
| 12 | 494 | 2.6 |
| 13 | 399 | 2.1 |
| 17 | 342 | 1.8 |
| 19 | 304 | 1.6 |
| 18 | 280 | 1.5 |
| 20 | 148 | 0.8 |
| Unknown | 65 | 0.3 |
percent_nan(experience_test)
| experience | Total | % |
|---|---|---|
| >20 | 383 | 18.0 |
| 5 | 163 | 7.7 |
| 3 | 154 | 7.2 |
| 4 | 145 | 6.8 |
| 6 | 130 | 6.1 |
| 2 | 128 | 6.0 |
| 7 | 116 | 5.4 |
| 9 | 113 | 5.3 |
| 10 | 96 | 4.5 |
| 11 | 86 | 4.0 |
| 8 | 82 | 3.9 |
| <1 | 74 | 3.5 |
| 16 | 68 | 3.2 |
| 15 | 59 | 2.8 |
| 1 | 56 | 2.6 |
| 14 | 55 | 2.6 |
| 13 | 54 | 2.5 |
| 12 | 52 | 2.4 |
| 17 | 36 | 1.7 |
| 19 | 29 | 1.4 |
| 18 | 26 | 1.2 |
| 20 | 19 | 0.9 |
| Unknown | 5 | 0.2 |
order_experience = percent_nan(experience_train).index
draw_countplot('experience',"Set3",order_experience)
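Since experience is ordinal, an alternative worth trying (not in the original notebook) is to pass a natural ordering to draw_countplot instead of the frequency-based order:
# Plot 'experience' in its natural order: <1, 1..20, >20, then Unknown.
natural_order = ['<1'] + [str(i) for i in range(1, 21)] + ['>20', 'Unknown']
draw_countplot('experience', "Set3", natural_order)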
'company_size' -- company size
company_size_train = df_train['company_size']
company_size_test = df_test['company_size']
percent_nan(company_size_train)
| company_size | Total | % |
|---|---|---|
| Unknown | 5938 | 31.0 |
| 50-99 | 3083 | 16.1 |
| 100-500 | 2571 | 13.4 |
| 10000+ | 2019 | 10.5 |
| 10/49 | 1471 | 7.7 |
| 1000-4999 | 1328 | 6.9 |
| <10 | 1308 | 6.8 |
| 500-999 | 877 | 4.6 |
| 5000-9999 | 563 | 2.9 |
percent_nan(company_size_test)
| company_size | Total | % |
|---|---|---|
| Unknown | 622 | 29.2 |
| 50-99 | 338 | 15.9 |
| 100-500 | 318 | 14.9 |
| 10000+ | 217 | 10.2 |
| 10/49 | 172 | 8.1 |
| <10 | 163 | 7.7 |
| 1000-4999 | 143 | 6.7 |
| 500-999 | 88 | 4.1 |
| 5000-9999 | 68 | 3.2 |
order_company_size = percent_nan(company_size_train).index
draw_countplot('company_size',"Set1",order_company_size)
'company_type' -- company type
company_type_train = df_train['company_type']
company_type_test = df_test['company_type']
percent_nan(company_type_train)
| company_type | Total | % |
|---|---|---|
| Pvt Ltd | 9817 | 51.2 |
| Unknown | 6140 | 32.0 |
| Funded Startup | 1001 | 5.2 |
| Public Sector | 955 | 5.0 |
| Early Stage Startup | 603 | 3.1 |
| NGO | 521 | 2.7 |
| Other | 121 | 0.6 |
percent_nan(company_type_test)
| company_type | Total | % |
|---|---|---|
| Pvt Ltd | 1141 | 53.6 |
| Unknown | 634 | 29.8 |
| Public Sector | 127 | 6.0 |
| Funded Startup | 97 | 4.6 |
| Early Stage Startup | 65 | 3.1 |
| NGO | 53 | 2.5 |
| Other | 12 | 0.6 |
order_company_type = percent_nan(company_type_train).index
draw_countplot('company_type',"Set2",order_company_type)
'last_new_job' -- years between the previous job and the current one
last_new_job_train = df_train['last_new_job']
last_new_job_test = df_test['last_new_job']
percent_nan(last_new_job_train)
| last_new_job | Total | % |
|---|---|---|
| 1 | 8040 | 42.0 |
| >4 | 3290 | 17.2 |
| 2 | 2900 | 15.1 |
| never | 2452 | 12.8 |
| 4 | 1029 | 5.4 |
| 3 | 1024 | 5.3 |
| Unknown | 423 | 2.2 |
percent_nan(last_new_job_test)
| last_new_job | Total | % |
|---|---|---|
| 1 | 884 | 41.5 |
| >4 | 353 | 16.6 |
| 2 | 342 | 16.1 |
| never | 258 | 12.1 |
| 3 | 133 | 6.2 |
| 4 | 119 | 5.6 |
| Unknown | 40 | 1.9 |
order_last_new_job = percent_nan(last_new_job_train).index
draw_countplot('last_new_job',"Set3",order_last_new_job)
'training_hours' -- completed training hours
sns.displot(data=df_train,
x='training_hours',
height=5,
aspect=1.5,
color = 'lightblue')
sns.displot(data=df_test,
x='training_hours',
height=5,
aspect=1.5,
color = 'coral')
'target'
0 - Not looking for a job change
1 - Looking for a job change
target = df_train['target']
percent_nan(target)
| target | Total | % |
|---|---|---|
| 0.0 | 14381 | 75.1 |
| 1.0 | 4777 | 24.9 |
sns.countplot(x='target',
data=df_train,
palette="Set1").set_title('Train')
This article uses a Kaggle dataset; follow the official account and reply "hr" to get the data.