Preface
First, a quick look at the scraped results:
![](https://i-blog.csdnimg.cn/blog_migrate/3c930bd80918de859738ebe7b186c89e.png)
![Partial screenshot of the database](https://i-blog.csdnimg.cn/blog_migrate/8356ea522d2c08b152ef7576a57a69eb.png)
Hands-On
Importing Libraries
import requests
from requests.exceptions import RequestException
import json
from urllib.parse import urlencode
import pymongo
import numpy as np
import time
Analyzing the Page Requests
![Typing python into the search box](https://i-blog.csdnimg.cn/blog_migrate/195995f1323e52c3b6a58ebe8fdee8a5.png)
![](https://i-blog.csdnimg.cn/blog_migrate/3158327dbe71d3a70da9ea2654801f80.png)
Looking at the response above, the information we saw on the page is nowhere to be found, so the data is probably loaded asynchronously via Ajax. Refresh the page and search again: switch to the XHR tab and you can see a request that returned a pile of JSON, which contains exactly the information we need.
Switch to the XHR tab, inspect the returned data, and build the headers.
Looking at the page request, the request headers carry a lot of parameters, but not all of them are actually needed. We can use Postman to test a trimmed-down request and find out which ones matter. (Postman has many other uses worth exploring on your own.)
Based on the Postman results, we can build the following code:
# Build the headers
headers = {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
}
Lagou's anti-scraping measures are fairly nasty, so a single header is no longer enough. I went to GitHub and pasted in a few more, ending up with this:
# Pool of headers; a different entry is used for each page
hds = [{
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}]
# Request each listing page; the header is picked by page number modulo the pool size
def get_index_page(url, page):
    try:
        response = requests.get(url, headers=hds[page % len(hds)])
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print('Error requesting the job listing page')
        return None
The code is a bit ugly, but it works well enough.
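Note that the selection above is deterministic rather than random: page N always gets header N mod len(hds). Stripped to its essentials, the rotation logic looks like this (the header dicts here are placeholders, not the real ones from the pool above):

```python
# Deterministic header rotation: page N always maps to pool entry N % len(pool).
# The User-Agent values are placeholders for illustration only.
pool = [
    {'User-Agent': 'UA-0'},
    {'User-Agent': 'UA-1'},
    {'User-Agent': 'UA-2'},
]

def pick_header(page):
    """Select a header dict for the given page index."""
    return pool[page % len(pool)]
```

Because the choice depends only on the page number, retrying the same page reuses the same header; a `random.choice()` call would decouple the two if that ever matters.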
Parsing the Data
From Postman we can see that the returned data is structured like this:
content -> positionResult -> result
The result list contains the company information, position information, and so on that we want.
Given this structure, we can conveniently pull out the fields we want by indexing into the dict by key.
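As a quick sanity check of that key path, here is a minimal mock of the response body (the field names are the ones used in this article; the values are invented) and the content -> positionResult -> result lookup:

```python
import json

# Minimal mock of the JSON returned by the listing endpoint; values invented.
sample = json.dumps({
    "content": {
        "positionResult": {
            "result": [
                {"companyFullName": "Example Co.", "city": "Beijing",
                 "positionName": "Python Engineer", "salary": "15k-25k"}
            ]
        }
    }
})

# Drill down through the nested dicts to reach the list of jobs.
jobs = json.loads(sample)["content"]["positionResult"]["result"]
```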
# Parse the listing page
def parse_job_page(response):
    result = json.loads(response)
    if result:
        jobs = result['content']['positionResult']['result']
        for job in jobs:
            job_data = {
                'company_name': job['companyFullName'],
                'city': job['city'],
                'financ': job['financeStage'],
                'job_name': job['positionName'],
                'job_year': job['workYear'],
                'job_createtime': job['createTime'],
                'job_salary': job['salary']
            }
            save_to_mongo(job_data)
The remaining steps are just structuring the data and writing it to the database. One thing worth noting: this article stores the results in MongoDB (I admit I was lazy and never learned MySQL properly).
Here is the full code.
import requests
from requests.exceptions import RequestException
import json
from urllib.parse import urlencode
import pymongo
import numpy as np
import time
from config import *
client = pymongo.MongoClient(MONGO_URL, connect=False)
db = client[MONGO_DB]
# Pool of headers; a different entry is used for each page
hds = [{
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}, {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)',
    'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=false&suginput=',
}]
# Build the request URL
def make_url(page):
    formdata = {
        'needAddtionalResult': 'false',
        'first': 'true',
        'pn': page,
        'kd': 'python'
    }
    return 'https://www.lagou.com/jobs/positionAjax.json?' + urlencode(formdata)
# Request each listing page; the header is picked by page number modulo the pool size
def get_index_page(url, page):
    try:
        response = requests.get(url, headers=hds[page % len(hds)])
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print('Error requesting the job listing page')
        return None
# Parse the listing page
def parse_job_page(response):
    result = json.loads(response)
    if result:
        jobs = result['content']['positionResult']['result']
        for job in jobs:
            job_data = {
                'company_name': job['companyFullName'],
                'city': job['city'],
                'financ': job['financeStage'],
                'job_name': job['positionName'],
                'job_year': job['workYear'],
                'job_createtime': job['createTime'],
                'job_salary': job['salary']
            }
            save_to_mongo(job_data)
# Save to MongoDB
def save_to_mongo(result):
    if db[MONGO_TABLE].insert_one(result):
        print('Saved to MongoDB:', result)
        return True
    return False
# Sleep for a random interval generated with np.random.rand() to mimic a human visitor
def main():
    for page in range(GROUP_START, GROUP_END + 1):
        time.sleep(np.random.rand() * 20)
        url = make_url(page)
        response = get_index_page(url, page)
        if response:
            parse_job_page(response)

if __name__ == '__main__':
    main()
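The np.random.rand() * 20 expression in main() sleeps for a uniform random interval in [0, 20) seconds between pages. The standard-library random module produces the same distribution without the numpy dependency; a minimal equivalent:

```python
import random

def human_delay(max_seconds=20):
    """Uniform random delay in [0, max_seconds) seconds, mimicking a human
    pause between page loads (stdlib stand-in for np.random.rand() * max_seconds)."""
    return random.random() * max_seconds
```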
Caveats
These are mainly notes for myself.
In the parsing step, the code throws errors when the anti-scraping kicks in; the exception handling there isn't done well. Remember to build it in properly when writing future modules.
The code structure could be optimized further; there is quite a bit of redundant code.
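One concrete cleanup: the six header dicts in hds differ only in User-Agent and Referer, so a pool of User-Agent strings plus a small builder would remove most of the repetition. A sketch, with placeholder User-Agent strings:

```python
import random

# Only the parts that vary need a pool; the Content-Type is shared boilerplate.
USER_AGENTS = [
    'UA-chrome-placeholder',
    'UA-firefox-placeholder',
    'UA-msie-placeholder',
]

def build_headers(referer='https://www.lagou.com/jobs/list_python'):
    """Assemble request headers with a randomly chosen User-Agent."""
    return {
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'User-Agent': random.choice(USER_AGENTS),
        'Referer': referer,
    }
```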
Closing Words
The code still has plenty of room for improvement. This salted fish keeps learning and hopes to bring you better content; if I don't post for a few days, I'm definitely off studying.
咸鱼普拉思
A salted fish fumbling its way along the programming road, recording the bits and pieces of the exploration.