"前程无忧" (51job) Recruitment Data Scraper — (1)

First installment of my graduation project: a scraper for 51job recruitment data.

Operating system: Windows 10
Scraping tool: Jupyter Notebook (Anaconda)
Storage path: D: drive on the local machine, CSV format
File name: 招聘.csv
Language: Python 3.8
Goal: analyze hiring for data-analyst positions, including regional distribution, salary levels, and job requirements, to understand the current state of the data-analyst job market.

1、Import the requests, time, and csv modules the scraper needs

# 1. Send a request to the URL address identified during page analysis
import requests
# time module, used to add a delay between requests
import time
# csv module, used to save the data as dictionary rows
import csv

# open the output file in append mode; newline='' avoids blank lines on Windows
f = open('招聘.csv', mode='a', encoding='utf-8', newline='')
csv_writer = csv.DictWriter(f, fieldnames=[
        '岗位名称',
        '公司名称',
        '薪资',
        '城市',
        '福利',
        '公司规模',
        '所处行业',
        '工作经验要求',
        '学历要求',
        '招聘人数',
        '发布时间',
        '详情页'
])
csv_writer.writeheader()  # write the header row
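
One quirk of this setup: because the file is opened in append mode, every rerun of the notebook writes the header row again, leaving duplicate header lines inside 招聘.csv. A minimal sketch of a guard, assuming only the standard-library os module on top of what is already imported, that writes the header only when the file is still empty:

import csv
import os

f = open('招聘.csv', mode='a', encoding='utf-8', newline='')
csv_writer = csv.DictWriter(f, fieldnames=[
    '岗位名称', '公司名称', '薪资', '城市', '福利', '公司规模',
    '所处行业', '工作经验要求', '学历要求', '招聘人数', '发布时间', '详情页'
])
# the open() call above has already created the file if it was missing,
# so getsize() is safe here; a size of 0 means no header has been written yet
if os.path.getsize('招聘.csv') == 0:
    csv_writer.writeheader()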

2、Build the request URL

# build the request URL for each results page (pages 1 through 10)
for page in range(1,11):
    print(f'=============== Scraping page {page} ===============')
    time.sleep(2)  # throttle: pause 2 seconds between pages
    # the long %25... segment is the doubly URL-encoded search keyword 数据分析师
    url = f'https://search.51job.com/list/000000,000000,0000,00,9,99,%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%25E5%25B8%2588,2,{page}.html'
    # headers: request-header parameters, passed as a dict. The Cookie below
    # was copied from the author's own logged-in browser session; replace it
    # with your own, or the site may reject the request.
    headers = {
        'Cookie':'_uab_collina=164543084055016766965023; guid=58f3d04867134340a5248e202f80554d; nsearch=jobarea%3D%26%7C%26ord_field%3D%26%7C%26recentSearch0%3D%26%7C%26recentSearch1%3D%26%7C%26recentSearch2%3D%26%7C%26recentSearch3%3D%26%7C%26recentSearch4%3D%26%7C%26collapse_expansion%3D; adv=ad_logid_url%3Dhttps%253A%252F%252Ftrace.51job.com%252Ftrace.php%253Fpartner%253Dsem_pc360s_280%2526ajp%253DaHR0cHM6Ly9ta3QuNTFqb2IuY29tL3RnL3NlbS9qaWFubGlfdjIuaHRtbD9mcm9tPTM2MGFk%2526k%253D1d79cf0d80712e6d8ae85b43444bf67e%2526qhclickid%253Dea89fa5c76488a90%26%7C%26; partner=sem_pc360pz_1; _ujz=MjAzNzIxODA3MA%3D%3D; ps=needv%3D0; 51job=cuid%3D203721807%26%7C%26cusername%3D1Y4kidICmNCR%252FoRCGMLI4bPmECqNa3BhBoq%252FvItOJNY%253D%26%7C%26cpassword%3D%26%7C%26cname%3DqvVOHfs58%252FkvdU2mnjE87A%253D%253D%26%7C%26cemail%3D9Es%252FkspmYK0HViDatG5KId7NuctS4l68StYeDNLeywE%253D%26%7C%26cemailstatus%3D3%26%7C%26cnickname%3D%26%7C%26ccry%3D.0FbpJ4jK2AIQ%26%7C%26cconfirmkey%3D26xU8ts4erdoQ%26%7C%26cautologin%3D1%26%7C%26cenglish%3D0%26%7C%26sex%3D1%26%7C%26cnamekey%3D26Di4Oh.X7CEg%26%7C%26to%3De431c43d220233089a9b2e4035bdefe9621347b7%26%7C%26; slife=lastvisit%3D030200%26%7C%26lowbrowser%3Dnot%26%7C%26lastlogindate%3D20220221%26%7C%26securetime%3DBDhRZFIwAmNWMAc6XGcOYgEzBDA%253D; search=jobarea%7E%60000000%7C%21ord_field%7E%600%7C%21recentSearch0%7E%60000000%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA99%A1%FB%A1%FA%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA%A1%FB%A1%FA0%A1%FB%A1%FA%CA%FD%BE%DD%B7%D6%CE%F6%CA%A6%A1%FB%A1%FA2%A1%FB%A1%FA1%7C%21recentSearch1%7E%60030200%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA02%A1%FB%A1%FA%A1%FB%A1%FA07%A1%FB%A1%FA99%A1%FB%A1%FA08%A1%FB%A1%FA99%A1%FB%A1%FA9%A1%FB%A1%FA04%A1%FB%A1%FA%A1%FB%A1%FA0%A1%FB%A1%FA%CA%FD%BE%DD%B7%D6%CE%F6%CA%A6%A1%FB%A1%FA2%A1%FB%A1%FA1%7C%21recentSearch2%7E%60030200%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA01%A1%FB%A1%FA%A1%FB%A1%FA07%A1%FB%A1%FA99%A1%FB%A1%FA08%A1%FB%A1%FA99%A1%FB%A1%FA9%A1%FB%A1%FA04%A1%FB%A1%FA%A1%FB%A1%FA0%A1%FB%A1%FA%CA%FD%BE%DD%B7%D6%CE%F6%CA%A6%A1%FB%A1%FA2%A1%FB%A1%FA1%7C%21recentSearch3%7E%60030200%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA99%A1%FB%A1%FA%A1%FB%A1%FA07%A1%FB%A1%FA99%A1%FB%A1%FA08%A1%FB%A1%FA99%A1%FB%A1%FA9%A1%FB%A1%FA04%A1%FB%A1%FA%A1%FB%A1%FA0%A1%FB%A1%FA%CA%FD%BE%DD%B7%D6%CE%F6%CA%A6%A1%FB%A1%FA2%A1%FB%A1%FA1%7C%21recentSearch4%7E%60030200%2C040000%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA99%A1%FB%A1%FA%A1%FB%A1%FA07%A1%FB%A1%FA99%A1%FB%A1%FA08%A1%FB%A1%FA99%A1%FB%A1%FA9%A1%FB%A1%FA04%A1%FB%A1%FA%A1%FB%A1%FA0%A1%FB%A1%FA%CA%FD%BE%DD%B7%D6%CE%F6%CA%A6%A1%FB%A1%FA2%A1%FB%A1%FA1%7C%21collapse_expansion%7E%601%7C%21; privacy=1645437834; acw_tc=781bad2116454504283554093e49d9d548ff0da51db1e780158d031276ae12; acw_sc__v2=621394f4a793eb482bebb8c454803f0463f482bd; ssxmod_itna=Wq+xyWeGqiqmq0dKxbD90KODQNIhY47I3D/YKGDnqD=GFDK40oYHKp=YDO9n59W7AnNbenhLmxzvnoby05p9YPbeDHxY=DUge+4YD4bKGwD0eG+DD4DWDmmFDnxAQDj6KGWDbo=GfDGeDep97DY5DhxDCjGPDwx0CEOxNExYFe35w0o4Geii8D7vwDlcD+Ur8Yt8EkmHowx0kX40OnoH8X2YDUjqqpAqqimurMiGX3BDxQY+qbbhbQ72NAmwYQD6tK6i3WmqyDDatWdD; ssxmod_itna2=Wq+xyWeGqiqmq0dKxbD90KODQNIhY47KG9YyDBwe7jWGcDed1uR+7Dpi5iKx',
    #     'Host':'search.51job.com',
    #     'Referer':url,
        'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36'
    }
    # send a GET request to the URL via requests.get, carrying the headers,
    # and store the result in the response variable
    response = requests.get(url=url, headers=headers)
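
Two defensive lines are worth adding right after the request, inside the page loop. raise_for_status() and apparent_encoding are standard parts of the requests API; whether 51job actually needs the encoding fallback depends on the page it serves you:

    # fail fast if the server answered with an HTTP error status
    response.raise_for_status()
    # if Chinese text in response.text comes out garbled, let requests
    # re-detect the charset from the body instead of trusting the headers
    response.encoding = response.apparent_encoding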

Run output: (screenshot)
3、Get the text data returned by the server

# get the data: the server's response payload as text
print(response.text)   # string content; re regular expressions can extract directly from strings
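
The listings themselves are embedded in this HTML as a JavaScript object assigned to window.__SEARCH_RESULT__, which is exactly what step 4 extracts. A small hedged guard, based on that marker string, gives a clearer signal than the IndexError you would otherwise hit when the site serves an anti-crawler page:

# sanity check: the results payload should be embedded in the HTML;
# if the marker is missing, we probably received an anti-crawler page
if '__SEARCH_RESULT__' not in response.text:
    print('Warning: no search-result payload found; check your Cookie/User-Agent')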

Run output: (screenshot)
4、Import the json module for value extraction and the re module for text processing, then start scraping

# 3. Parse the data and extract what we want.  .*? matches any run of
# characters (except the newline \n), as short as possible.
# re.findall returns a list, so take the payload by its index.
# import the regular-expression module
import re
html_data = re.findall('window.__SEARCH_RESULT__ = (.*?)</script>', response.text)[0]
# turn the string into a dict, since dicts are easier to extract values from
# import the json module
import json
# import pprint so the json output prints more readably
import pprint
json_data = json.loads(html_data)
# to read a json value, look up the key on the left of the colon to get the value on its right
# pprint.pprint(json_data)
engine_jds = json_data['engine_jds']
# engine_jds is a list of postings; walk it one element at a time with a for loop
for job in engine_jds:
    # collect the extracted fields in a dict so they are easy to save later
    # NOTE: the original hard-coded a guangzhou-pyq detail URL, which only
    # resolves for Guangzhou postings; the payload appears to carry the real
    # detail-page link under "job_href", so prefer that when it is present
    href = job.get('job_href') or f'https://jobs.51job.com/guangzhou-pyq/{job["jobid"]}.html'
    # attribute_text is a list like [城市, 经验, 学历, 招X人]; some postings
    # omit a field, so pad it to four entries to avoid an IndexError
    # (positions can still shift, so treat these columns as best-effort)
    attrs = job['attribute_text'] + [''] * 4
    dit = {
        '岗位名称': job['job_name'],
        '公司名称': job['company_name'],
        '薪资': job['providesalary_text'],
        '城市': job['workarea_text'],
        '福利': job['jobwelf'],
        '公司规模': job['companysize_text'],
        # companytype_text is the company type (e.g. 民营公司); the industry
        # seems to live under companyind_text, so fall back if that key is absent
        '所处行业': job.get('companyind_text', job['companytype_text']),
        '工作经验要求': attrs[1],
        '学历要求': attrs[2],
        '招聘人数': attrs[3],
        '发布时间': job['issuedate'],
        '详情页': href
    }
    csv_writer.writerow(dit)
    pprint.pprint(dit)
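
One piece of housekeeping the walkthrough leaves out: the file handle opened in step 1 is never closed, so buffered rows may not reach 招聘.csv until the notebook kernel exits. Closing the handle once all pages are scraped guarantees the CSV on disk is complete:

# flush buffered rows and release the file handle once scraping is done
f.close()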

Run output: (screenshot)
5、Scraped-data preview

(screenshot of the resulting 招聘.csv)
6、Coming up next: data preprocessing…
