Environment setup
- Python 3.7
- MySQL
- requests
- PyMySQL
- BeautifulSoup (with the lxml parser)
Fetching the data
Analyzing the request flow
- First, open the Boss直聘 site (zhipin.com) and watch what a normal request looks like.
- Open the trusty F12 developer tools, refresh the page, and inspect the network traffic.
From the screenshot above we can see that the request parameters include the search keyword, the page number, and so on.
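Before hard-coding the URL, it can help to see how those parameters fit together. A minimal sketch, assuming a hypothetical keyword and page number (the city code `c101120100` is taken from the URL observed above):

```python
from urllib.parse import urlencode

# Hypothetical values standing in for the parameters seen in the network panel.
kw = "PHP"
page = 1

# urlencode handles escaping for us if the keyword contains spaces or CJK text.
params = urlencode({"query": kw, "page": page, "ka": "page-%d" % page})
url = "https://www.zhipin.com/c101120100/?" + params
print(url)  # https://www.zhipin.com/c101120100/?query=PHP&page=1&ka=page-1
```

Using `urlencode` rather than string concatenation avoids broken URLs when the search keyword needs escaping.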
Constructing the request
import requests

# search keyword and page number used to build the listing URL
kw = "PHP"
page = 1
url = ("https://www.zhipin.com/c101120100/?query=" + kw
       + "&page=" + str(page) + "&ka=page-" + str(page))
headers = {
    'Host': 'www.zhipin.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
    'Accept-Encoding': 'gzip, deflate, br',
    'Referer': 'https://www.zhipin.com/job_detail/?city=101120100&source=10&query=PHP',
    'DNT': '1',
    'Connection': 'keep-alive',
    'Cookie': '__c=1579400100; __g=-; __l=l=https%3A%2F%2Fwww.zhipin.com%2Fweb%2Fcommon%2Fsecurity-check.html%3Fseed%3DEWjlLZs%252FPr8cqa5Hs%252FGzdK13lMxlscxjvlJZWtdzaQs%253D%26name%3D986ad753%26ts%3D1579400102260%26callbackUrl%3D%252Fjob_detail%252F%253Fcity%253D101120100%2526source%253D10%2526query%253DPHP%26srcReferer%3D&r=&friend_source=0&friend_source=0; __a=83048337.1579400100..1579400100.11.1.11.11; __zp_stoken__=f0d1JSxtXmdA15ixnd1Lh9vbs1Yr2dghco%2FMt7MWfOXsroaplWan5qqBsdTxTRJMadp2RpuuULVCxSdPrFHXeLlCNNw5OdJC3nz6lIaV0p2mXbKx6jgrj3tQ4%2B4zcEDE2SBk',
    'Upgrade-Insecure-Requests': '1',
    'Cache-Control': 'max-age=0',
    'TE': 'Trailers'
}
r = requests.get(url, headers=headers)
Copy all of the request headers verbatim into headers. The Cookie must be copied from your own browser; the one shown above will have expired.
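If you prefer to pass the cookie to requests separately instead of as a raw header, a small helper can split the copied Cookie header into the dict that `requests.get(..., cookies=...)` expects. This is a sketch; the cookie string below is just a stand-in for the one from your own browser:

```python
def parse_cookie_header(raw):
    """Split a raw 'Cookie' header copied from the browser into a dict."""
    cookies = {}
    for part in raw.split("; "):
        if "=" in part:
            name, _, value = part.partition("=")
            cookies[name] = value
    return cookies

# Stand-in value; copy the real string from your own browser's F12 panel.
cookies = parse_cookie_header("__c=1579400100; __g=-")
print(cookies)  # {'__c': '1579400100', '__g': '-'}
```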
Parsing the page structure
Next we parse out the information we need; for this step we use BeautifulSoup.
Inspecting the page shows that every job listing carries the class job-primary:
from bs4 import BeautifulSoup

soup = BeautifulSoup(r.text, "lxml")
all_jobs = soup.select("div.job-primary")  # every listing is a div with this class
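As a self-contained check of the selector, here is the same call against a tiny stand-in snippet (not the real page markup, just the structure described above):

```python
from bs4 import BeautifulSoup

# Minimal stand-in HTML mimicking the job-primary structure on the listing page.
html = """
<div class="job-list">
  <div class="job-primary"><div class="job-title">PHP dev</div></div>
  <div class="job-primary"><div class="job-title">Python dev</div></div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")  # html.parser avoids the lxml dependency here
jobs = soup.select("div.job-primary")
print(len(jobs))  # 2
print(jobs[0].find("div", attrs={"class": "job-title"}).text)  # PHP dev
```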
Further analysis
At this point the basic information has been located; the next step is to assemble it.
Assembling the information
for job in all_jobs:
    jname = job.find("div", attrs={"class": "job-title"}).text
    jurl = ("https://www.zhipin.com"
            + job.find("div", attrs={"class": "info-primary"}).h3.a.attrs['href'])
    jid = job.find("div", attrs={"class": "info-primary"}).h3.a.attrs['data-jid']
sal = job.find("div", attrs
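The environment list includes MySQL and PyMySQL, so once the fields are assembled they can be persisted. A minimal sketch, assuming a hypothetical jobs table with jid/jname/jurl/sal columns; the table name, column names, and connection credentials are all placeholders to adapt to your own setup:

```python
def job_to_row(job):
    """Flatten one assembled job dict into the tuple order used by the INSERT."""
    return (job["jid"], job["jname"], job["jurl"], job["sal"])

# Hypothetical table and column names; adapt them to your own schema.
INSERT_SQL = ("INSERT INTO jobs (jid, jname, jurl, sal) "
              "VALUES (%s, %s, %s, %s)")

def save_jobs(rows, host="localhost", user="root", password="", db="boss"):
    """Write the assembled rows with PyMySQL (connection details are placeholders)."""
    import pymysql  # imported here so the pure helpers above work without it
    conn = pymysql.connect(host=host, user=user, password=password,
                           database=db, charset="utf8mb4")
    try:
        with conn.cursor() as cur:
            cur.executemany(INSERT_SQL, rows)
        conn.commit()
    finally:
        conn.close()

row = job_to_row({"jid": "j1", "jname": "PHP", "jurl": "https://www.zhipin.com/x", "sal": "10-15K"})
print(row)  # ('j1', 'PHP', 'https://www.zhipin.com/x', '10-15K')
```

Parameterized `%s` placeholders let PyMySQL escape the scraped values, which is safer than formatting them into the SQL string yourself.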