[Machine Learning Project in Practice] Analyzing the Factors Behind Salary Levels for Data-Related Jobs


Note: this is an end-to-end machine learning project (with a worked example). The tutorial originates from the web; 胖哥 has reorganized it completely and provides all of the data and code used. See the end of the article for how to get the data and the complete code.

Machine learning project workflow:

I. Requirements Analysis
1. Source of the requirement
Personal learning.
2. Requirement analysis
We want to study the salary levels of data-science-related positions.
Which positions count as data-science-related? We can list several keywords: data analyst, data mining engineer, big data engineer, machine learning engineer, deep learning engineer, and so on.

II. Data Collection
The data was scraped from Lagou (lagou.com). The roughly 500 records needed for this tutorial have already been collected. (If you want fresher data, write your own scraper; Lagou's anti-scraping measures change quickly, so no scraper script is provided here.)

III. Data Cleaning (Dirty Data)
Dirty data is raw data with some degree of untidiness; how tidy the raw data is depends on the quality of the data collection. Dirty data comes in many forms, and if collection quality is poor, the raw data can be arbitrarily bad.

Typical forms of dirty data include:

Rows running together, especially with long text fields
Text mixed into numeric columns / inconsistent formats
Stray symbols of all kinds
Incorrectly recorded values
Large blocks of missing values (arguably not "dirty" in the strict sense)

Getting from freshly collected raw data to modeling-ready data is very time-consuming: in practice, the preprocessing and cleaning of dirty data takes roughly 60-70% of the total time of a machine learning project.

1. General directions for data cleaning and preprocessing

a. There is no single fixed recipe for data preprocessing
b. Its difficulty depends on how dirty the raw data is
c. The dirtier the raw data, the harder the preprocessing
d. There are no grand recipes, but plenty of small tricks
e. For machine learning, preprocessing is done with pandas
f. Missing-value handling
g. Short-text and string handling
h. No fixed method; adapt and combine as needed

Missing-value handling:

a. Drop: when more than 70% of the values are missing
b. Fill/impute otherwise
Short-text and string handling:

a. Python string functions
b. Regular expressions
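As a small illustration of the regex approach, the numeric bounds can be pulled out of a Lagou-style salary string (the variable names here are just for the example):

```python
import re

# extract the two numeric bounds from a string such as '10k-20k'
s = '10k-20k'
low, high = (int(n) for n in re.findall(r'\d+', s))
```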

2. Cleaning the job-listings data

import pandas as pd

data = pd.read_csv('machine_learning_z.csv', encoding='gbk')

print(data.head())

print(data.info())

Displaying the data info shows which features have missing values:

Fill missing addresses with ['未知'] ("unknown"):

data['address'] = data['address'].fillna("['未知']")
print(data['address'][:5])

 

Strip the square brackets:

for i, j in enumerate(data['address']):
    data.loc[i, 'address'] = j.replace('[', '').replace(']', '')

print(data['address'][:5])
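The same bracket-stripping can also be done in one vectorized step with pandas string methods, which avoids chained-assignment warnings; a sketch on toy data shaped like the address column:

```python
import pandas as pd

# toy data shaped like the 'address' column after fillna
data = pd.DataFrame({'address': ["['北京']", "['上海']", "['未知']"]})
data['address'] = (data['address']
                   .str.replace('[', '', regex=False)
                   .str.replace(']', '', regex=False))
```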

 

for i, j in enumerate(data['industryLables']):
    data.loc[i, 'industryLables'] = j.replace('[', '').replace(']', '')

print(data['industryLables'][:10])

 

for i, j in enumerate(data['label']):
    data.loc[i, 'label'] = j.replace('[', '').replace(']', '')

print(data['label'][:10])

Replace carriage returns (\r) with nothing:

data['position_detail'] = data['position_detail'].fillna('未知')
for i, j in enumerate(data['position_detail']):
    data.loc[i, 'position_detail'] = j.replace('\r', '')

print(data['position_detail'][:3])

Salary feature processing:

# trial run on the first 10 rows: take the midpoint of the range
for i in data['salary'][:10]:
    i = i.replace('k', '')
    i1 = int(i.split('-')[0])
    i2 = int(i.split('-')[1])
    print((i1 + i2) / 2)

# note: mapping '以上' ("and above") to '-0' makes such entries end up
# as half of their lower bound; a crude choice, kept from the original
for i, j in enumerate(data['salary']):
    j = j.replace('k', '').replace('K', '').replace('以上', '-0')
    j1 = int(j.split('-')[0])
    j2 = int(j.split('-')[1])
    data.loc[i, 'salary'] = (j1 + j2) / 2 * 1000

print(data['salary'].head(10))
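The parsing above could also be wrapped in a reusable function; a sketch (parse_salary is a hypothetical helper, and unlike the '-0' trick in the loop above it keeps the lower bound for '以上' entries):

```python
import re

def parse_salary(s):
    """Midpoint of a salary-range string, in yuan."""
    nums = [int(n) for n in re.findall(r'\d+', s)]
    if len(nums) == 2:            # e.g. '10k-20k' -> midpoint of the range
        mid = (nums[0] + nums[1]) / 2
    else:                         # e.g. '15k以上' -> keep the lower bound
        mid = nums[0]
    return mid * 1000
```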

print(data['size'].value_counts())

print(data['stage'].value_counts())

Position-name feature processing:

for i, j in enumerate(data['position_name']):
    if '数据分析' in j:
        j = '数据分析师'
    if '数据挖掘' in j:
        j = '数据挖掘工程师'
    if '机器学习' in j:
        j = '机器学习工程师'
    if '深度学习' in j:
        j = '深度学习工程师'
    data.loc[i, 'position_name'] = j

print(data['position_name'][:5])

print(data.head())

IV. Data Analysis and Visualization
1. Exploratory data analysis (EDA)
EDA overlaps with descriptive statistics to some extent, but its scope is broader. You don't yet know what the data looks like or what it contains: the big goal is clear, but the path to it is not, so you look for leads in the data itself.

2. Example: EDA on the job-listings data

Data overview:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('lagou_preprocessed.csv', encoding='gbk')
print('First 5 rows of the preprocessed Lagou data')
print(data.head())

# #### 1. Data overview

# basic info
print('Basic data info')
print(data.info())

# summary statistics for numeric variables
print('Summary statistics for numeric variables')
print(data.describe())

 

Target variable analysis:

Plot a histogram of the target variable to inspect its distribution:

plt.hist(data['salary'])
plt.show()

The same histogram can also be drawn with seaborn.

Compute the skewness and kurtosis of the target variable:

print('Skewness and kurtosis of the target variable')
print("Skewness: %f" % data['salary'].skew())
print("Kurtosis: %f" % data['salary'].kurt())

 

Counts for the categorical variables:

cols = ['city', 'education', 'position_name', 'size', 'stage', 'work_year']
for col in cols:
    print(data[col].value_counts())

# process the city variable:
# group cities with fewer than 30 listings into '其他' ("other")
city_counts_df = pd.DataFrame()
city_counts_df['city'] = data['city'].value_counts().index
city_counts_df['counts'] = data['city'].value_counts().values

cities = ['北京', '上海', '广州', '深圳', '杭州', '成都', '武汉', '南京']
for i, j in enumerate(data['city']):
    if j not in cities:
        data.loc[i, 'city'] = '其他'

print(data['city'].value_counts())
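The same rare-city bucketing can be written in one vectorized step with Series.where and isin; a sketch on toy data:

```python
import pandas as pd

city = pd.Series(['北京', '珠海', '上海', '长沙'])
cities = ['北京', '上海', '广州', '深圳', '杭州', '成都', '武汉', '南京']
# keep values that are in the whitelist, replace the rest with '其他'
city = city.where(city.isin(cities), '其他')
```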

 

City vs. salary:

sns.boxplot(x=data['city'], y=data['salary'])
plt.show()

Salaries in Beijing are clearly a notch higher.

Education vs. salary:

sns.boxplot(x=data['education'], y=data['salary'])
plt.show()

PhD salaries are clearly higher.

Experience vs. salary:

sns.boxplot(x=data['work_year'], y=data['salary'])
plt.show()

The more years of experience, the higher the salary.

Company funding stage vs. salary:

sns.boxplot(x=data['stage'], y=data['salary'])
plt.show()

Listed companies and those at series D or later pay slightly more, and series-B companies are not far behind. It seems that growing companies, when funding allows, are willing to spend more to attract talent.

Company size vs. salary:

sns.boxplot(x=data['size'], y=data['salary'])
plt.show()

Companies with over 2,000 employees pay relatively well, and so do those with fewer than 15; perhaps at tiny companies one person does the work of several, so the pay is higher.

Position vs. salary:

sns.boxplot(x=data['position_name'], y=data['salary'])
plt.show()

Machine learning and data mining pay about the same. Data analysts also look high here, but that is an artifact of the raw data: after cleaning, only 2 records remain for the data analyst position, so the figure is not representative. Interested readers can scrape fresh data themselves.

Processing the industry variable:

# keep only the first industry tag when several are listed
for i, j in enumerate(data['industry']):
    if ',' in j:
        data.loc[i, 'industry'] = j.split(',')[0]

print(data['industry'].value_counts())

industries = ['移动互联网', '金融', '数据服务', '电子商务', '企业服务', '医疗健康', 'O2O', '硬件', '信息安全', '教育']
for i, j in enumerate(data['industry']):
    if j not in industries:
        data.loc[i, 'industry'] = '其他'

print(data['industry'].value_counts())

# industry vs. salary
sns.boxplot(x=data['industry'], y=data['salary'])
plt.show()

O2O and healthcare pay relatively well.

Processing the job-perks feature:

Since this is long free text, we segment it into words, extract keywords, and draw a word cloud.

ADV = []
for i in data['advantage']:
    ADV.append(i)

ADV_text = ''.join(ADV)
print(ADV_text)

import jieba

result = jieba.cut(ADV_text)
print("Segmentation result: " + ",".join(result))

# teach jieba some multi-word phrases so they are kept whole
jieba.suggest_freq('五险一金', True)
jieba.suggest_freq('六险一金', True)
jieba.suggest_freq('带薪年假', True)
jieba.suggest_freq('年度旅游', True)
jieba.suggest_freq('氛围好', True)
jieba.suggest_freq('技术大牛', True)
jieba.suggest_freq('免费三餐', True)
jieba.suggest_freq('租房补贴', True)
jieba.suggest_freq('大数据', True)
jieba.suggest_freq('精英团队', True)
jieba.suggest_freq('晋升空间大', True)

result = jieba.cut(ADV_text)
print("Segmentation result: " + ",".join(result))

# load the stopword list
with open("stopwords.txt", "r", encoding='utf-8') as f:
    stopwords = {}.fromkeys(f.read().split("\n"))
# load a user dictionary if available
# jieba.load_userdict("./utils/jieba_user_dict.txt")
segs = jieba.cut(ADV_text)
mytext_list = []
# filter out stopwords, spaces, and single characters
for seg in segs:
    if seg not in stopwords and seg != " " and len(seg) != 1:
        mytext_list.append(seg.replace(" ", ""))

ADV_cloud_text = ",".join(mytext_list)
print(ADV_cloud_text)

from wordcloud import WordCloud

wc = WordCloud(
    background_color="white",  # background color
    max_words=800,             # maximum number of words displayed
    font_path=r'C:\Windows\Fonts\STFANGSO.ttf',  # a font with CJK glyphs
    min_font_size=15,
    max_font_size=80
)
wc.generate(ADV_cloud_text)
wc.to_file("ADV_cloud.png")
plt.imshow(wc)
plt.show()

Drop a few unneeded variables:

data2 = data.drop(['address', 'industryLables', 'company_name'], axis=1)
data2.to_csv('lagou_data5.csv')
print(data2.shape)

V. Feature Engineering
What is feature engineering? It is the process of extracting as much useful information from the raw data as possible, so that models and algorithms perform as well as they can.

Feature engineering includes:

Data preprocessing
Feature selection
Feature transformation and extraction
Feature combination
Dimensionality reduction

Feature engineering on the job-listings data:

import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd

lagou_df = pd.read_csv('lagou_data5.csv')
print(lagou_df.head())

# the advantage and label features are of little use and will be dropped at the end
# one-hot encode the categorical variables
# pandas one-hot method
print(pd.get_dummies(lagou_df['city']).head())

# sklearn one-hot method
# first label-encode to integers with LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder

lbl = LabelEncoder()
lbl.fit(list(lagou_df['city'].values))
lagou_df['city'] = lbl.transform(list(lagou_df['city'].values))
# inspect the label-encoded result
print(lagou_df['city'].head())

# then turn the integer codes into one-hot vectors
df_city = OneHotEncoder().fit_transform(lagou_df['city'].values.reshape((-1, 1))).toarray()
print(df_city[:5])

# one-hot encode all categorical features at once
cat_features = ['city', 'industry', 'education', 'position_name', 'size', 'stage', 'work_year']
for col in cat_features:
    temp = pd.get_dummies(lagou_df[col])
    lagou_df = pd.concat([lagou_df, temp], axis=1)
    lagou_df = lagou_df.drop([col], axis=1)

print(lagou_df.shape)

pd.options.display.max_columns = 999
lagou_df = lagou_df.drop(['advantage', 'label'], axis=1)
print(lagou_df.head())

Extracting information from the job-description feature:

lagou_df2 = pd.read_csv('lagou_data5.csv')
lagou_df2 = lagou_df2[['position_detail', 'salary']]

# extract Python mentions (normalize the casing first)
for i, j in enumerate(lagou_df2['position_detail']):
    if 'python' in j:
        lagou_df2['position_detail'][i] = j.replace('python', 'Python')

lagou_df2['Python'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Python' in j:
        lagou_df2['Python'][i] = 1
    else:
        lagou_df2['Python'][i] = 0
		
print('\n')
print(lagou_df2['Python'][:20])
print('\n')

lagou_df2['R'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'R' in j:
        lagou_df2['R'][i] = 1
    else:
        lagou_df2['R'][i] = 0
print('\n')
print(lagou_df2['R'].value_counts())
print('\n')

for i, j in enumerate(lagou_df2['position_detail']):
    if 'sql' in j:
        lagou_df2['position_detail'][i] = j.replace('sql', 'SQL')

lagou_df2['SQL'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'SQL' in j:
        lagou_df2['SQL'][i] = 1
    else:
        lagou_df2['SQL'][i] = 0

print('\n')
print(lagou_df2['SQL'].value_counts())
print('\n')

lagou_df2['Excel'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Excel' in j:
        lagou_df2['Excel'][i] = 1
    else:
        lagou_df2['Excel'][i] = 0
		
print('\n')
print(lagou_df2['Excel'].value_counts())
print('\n')

lagou_df2['Java'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Java' in j:
        lagou_df2['Java'][i] = 1
    else:
        lagou_df2['Java'][i] = 0

lagou_df2['Java'].value_counts()

for i, j in enumerate(lagou_df2['position_detail']):
    if 'linux' in j:
        lagou_df2['position_detail'][i] = j.replace('linux', 'Linux')

lagou_df2['Linux'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Linux' in j:
        lagou_df2['Linux'][i] = 1
    else:
        lagou_df2['Linux'][i] = 0

print('\n')
print(lagou_df2['Linux'].value_counts())
print('\n')


lagou_df2['C++'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'C++' in j:
        lagou_df2['C++'][i] = 1
    else:
        lagou_df2['C++'][i] = 0
		
print('\n')
print(lagou_df2['C++'].value_counts())
print('\n')


for i, j in enumerate(lagou_df2['position_detail']):
    if 'spark' in j:
        lagou_df2['position_detail'][i] = j.replace('spark', 'Spark')

lagou_df2['Spark'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Spark' in j:
        lagou_df2['Spark'][i] = 1
    else:
        lagou_df2['Spark'][i] = 0

print('\n')
print(lagou_df2['Spark'].value_counts())
print('\n')


for i, j in enumerate(lagou_df2['position_detail']):
    if 'tensorflow' in j:
        lagou_df2['position_detail'][i] = j.replace('tensorflow', 'Tensorflow')

    if 'TensorFlow' in j:
        lagou_df2['position_detail'][i] = j.replace('TensorFlow', 'Tensorflow')

lagou_df2['Tensorflow'] = pd.Series()
for i, j in enumerate(lagou_df2['position_detail']):
    if 'Tensorflow' in j:
        lagou_df2['Tensorflow'][i] = 1
    else:
        lagou_df2['Tensorflow'][i] = 0

print('\n')
print(lagou_df2['Tensorflow'].value_counts())
print('\n')
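The per-keyword loops in this section all repeat one pattern; a sketch that generalizes them with str.contains (case=False folds 'python'/'Python' and so on; the toy strings below are purely illustrative):

```python
import pandas as pd

# toy stand-ins for the position_detail column
detail = pd.Series(['熟悉Python和sql', '精通Java', '会用tensorflow'])
keywords = ['Python', 'SQL', 'Java', 'Linux', 'Spark', 'Tensorflow']

# one 0/1 flag column per keyword, case-insensitive match
flags = pd.DataFrame({
    k: detail.str.contains(k, case=False).astype(int) for k in keywords
})
```

(Note that a literal keyword like 'C++' would need regex=False, since '+' is a regex metacharacter.)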


lagou_df2 = lagou_df2.drop(['position_detail'], axis=1)
print('\n')
print(lagou_df2.head())
print('\n')


lagou_df = lagou_df.drop(['position_detail', 'salary'], axis=1)
print('\n')
print(lagou_df.head())
print('\n')


lagou = pd.concat((lagou_df2, lagou_df), axis=1).reset_index(drop=True)
print('\n')
print(lagou.head())
print('\n')


lagou.to_csv('lagou_featured.csv', encoding='gbk')

One-hot encoding is the feature-transformation / dummy-variable step.

VI. Modeling and Tuning

Modeling the job-listings data with GBDT:

import numpy as np
import pandas as pd

df = pd.read_csv('lagou_featured.csv', encoding='gbk')
print(df.shape)

pd.options.display.max_columns = 999
print(df.head())

import matplotlib.pyplot as plt

plt.hist(df['salary'])
plt.show()

X = df.drop(['salary'], axis=1).values
y = df['salary'].values.reshape((-1, 1))
print('\n')
print(X.shape, y.shape)
print('\n')


from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
print('\n')
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
print('\n')

from sklearn.ensemble import GradientBoostingRegressor

model = GradientBoostingRegressor(n_estimators=100, max_depth=5)
model.fit(X_train, y_train)

# plot training and test deviance

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

params = {'n_estimators': 500, 'max_depth': 6, 'min_samples_split': 2, 'loss': 'ls'}
clf = GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)

# test deviance at each boosting stage (for 'ls' loss this is the MSE)
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_predict(X_test)):
    test_score[i] = mean_squared_error(y_test, y_pred)

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, clf.train_score_, 'b-',
         label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
         label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
plt.show()

# plot feature importances

feature_importance = model.feature_importances_
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
col_tmp = df.columns.values
col_tmp = col_tmp[1:]  # drop the first (index) column so the name count matches X
plt.subplot(1, 2, 2)
plt.rcParams['font.sans-serif'] = ['SimHei']  # a font that can render Chinese labels
plt.rcParams['axes.unicode_minus'] = False    # render the minus sign correctly
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, col_tmp[sorted_idx])  # names ordered by importance, not alphabetically
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()

from sklearn.metrics import mean_squared_error

y_pred = model.predict(X_test)

# note: score() returns R-squared, not classification accuracy
print("Training-set R²: {:.2f}%".format(model.score(X_train, y_train) * 100))

# RMSE on the test set
print(np.sqrt(mean_squared_error(y_test, y_pred)))

print(y_pred[:10])
print(y_test[:10].flatten())

plt.plot(y_pred)
plt.plot(y_test)
plt.legend(['y_pred', 'y_test'])
plt.show()
plt.show()

Log-transforming the target variable:

# log-transform the target
X_train, X_test, y_train, y_test = train_test_split(X, np.log(y), test_size=0.3, random_state=42)
model = GradientBoostingRegressor(n_estimators=100, max_depth=5)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Training-set R²: {:.2f}%".format(model.score(X_train, y_train) * 100))

# RMSE (in log units)
print(np.sqrt(mean_squared_error(y_test, y_pred)))

plt.plot(np.exp(y_pred))
plt.plot(np.exp(y_test))
plt.legend(['y_pred', 'y_test'])
plt.show()
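One caveat: with a log-transformed target, the RMSE printed above is in log units. To report an error in yuan, map predictions back with np.exp first; a sketch with dummy numbers (not the real model output):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# dummy log-scale values standing in for y_test / y_pred above
y_test_log = np.log(np.array([15000.0, 20000.0, 25000.0]))
y_pred_log = np.log(np.array([16000.0, 19000.0, 24000.0]))

# back-transform to yuan before computing the RMSE
rmse_yuan = np.sqrt(mean_squared_error(np.exp(y_test_log),
                                       np.exp(y_pred_log)))
```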

 

Modeling the job-listings data with XGBoost:

from sklearn.model_selection import KFold
import xgboost as xgb
from sklearn.metrics import mean_squared_error
import time

kf = KFold(n_splits=5, random_state=123, shuffle=True)


def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    return 'mse', mean_squared_error(np.exp(preds), np.exp(labels))


y = np.log(y)
# 5 folds; 330 is hard-coded to this dataset's validation-fold size
valid_preds = np.zeros((330, 5))

time_start = time.time()

for i, (train_ind, valid_ind) in enumerate(kf.split(X)):
    print('Fold', i + 1, 'out of', 5)
    X_train, y_train = X[train_ind], y[train_ind]
    X_valid, y_valid = X[valid_ind], y[valid_ind]
    xgb_params = {
        'eta': 0.01,
        'max_depth': 6,
        'subsample': 0.9,
        'colsample_bytree': 0.9,
        'objective': 'reg:linear',
        'eval_metric': 'rmse',
        'seed': 99,
        'silent': True
    }

    d_train = xgb.DMatrix(X_train, y_train)
    d_valid = xgb.DMatrix(X_valid, y_valid)

    watchlist = [(d_train, 'train'), (d_valid, 'valid')]
    model = xgb.train(
        xgb_params,
        d_train,
        2000,
        watchlist,
        verbose_eval=100,
        #         feval=evalerror,
        early_stopping_rounds=1000
    )
#     valid_preds[:, i] = np.exp(model.predict(d_valid))

# valid_pred = valid_preds.mean(axis=1)
# print('outline score:{}'.format(np.sqrt(mean_squared_error(y_pred, valid_pred)*0.5)))
print('cv training time {} seconds'.format(time.time() - time_start))
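The commented-out valid_preds lines above are collecting out-of-fold predictions; a minimal sketch of that pattern on toy data, with sklearn's GradientBoostingRegressor standing in for xgb.train:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor

# toy data standing in for the real X / y
rng = np.random.RandomState(0)
X_toy = rng.rand(100, 5)
y_toy = rng.rand(100)

kf = KFold(n_splits=5, shuffle=True, random_state=123)
oof_preds = np.zeros(len(y_toy))            # one prediction per sample
for train_ind, valid_ind in kf.split(X_toy):
    model = GradientBoostingRegressor(n_estimators=50)
    model.fit(X_toy[train_ind], y_toy[train_ind])
    # each sample is predicted exactly once, by the fold that held it out
    oof_preds[valid_ind] = model.predict(X_toy[valid_ind])
```

Because every row is predicted by a model that never saw it, the error computed on oof_preds is an honest cross-validated estimate.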

import xgboost as xgb

xg_train = xgb.DMatrix(X, y)

params = {
    'eta': 0.01,
    'max_depth': 6,
    'subsample': 0.9,
    'colsample_bytree': 0.9,
    'objective': 'reg:linear',
    'eval_metric': 'rmse',
    'seed': 99,
    'silent': True
}

cv = xgb.cv(params, xg_train, 1000, nfold=5, early_stopping_rounds=800, verbose_eval=100)


import xgboost as xgb
X_train1, X_test1, y_train1, y_test1 = train_test_split(X, np.log(y), test_size=0.2, random_state=42)

xlf = xgb.XGBRegressor(max_depth=6,
                       silent=True,
                       objective='reg:linear',
                       subsample=0.9,
                       colsample_bytree=0.9,
                       seed=90)

xlf.fit(X_train1, y_train1, eval_metric='rmse', verbose=True, eval_set=[(X_test1, y_test1)], early_stopping_rounds=100)
preds = xlf.predict(X_test1)
print("Training-set R²: {:.2f}%".format(xlf.score(X_train1, y_train1) * 100))

XGBoost training-set R²: 81.35%

For the complete Python code and dataset, follow the WeChat official account below and reply "拉勾招聘项目实战" to receive everything.

Add 胖哥 on WeChat (zy10178083) to be invited into the Python study group, where he shares useful material from time to time.

WeChat official account: 胖哥真不错.

 
