Target site: 51job (前程无忧) recruitment site
Target URL: https://search.51job.com/list/120000,000000,0000,00,9,99,Python,2,1.html
Target data: (1) job title (2) company name (3) work location (4) salary (5) posting date
Task requirements
(1) Use urllib or requests to fetch the site's page source and save it;
(2) Pick one of re, bs4, or lxml, read the saved source back, and parse the page structure to locate the tags that hold the target data;
(3) Define functions that save the extracted target data to both a txt file and a csv file;
(4) Use a framework-style structure, driving the whole crawl through parameter passing.
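Requirement (1) asks that the fetched source be saved to disk before parsing, which the script below never actually does. A minimal pair of helpers could cover that step; the names save_html/load_html and the file path are assumptions, not part of the original script:

```python
def save_html(html, path):
    # Write the fetched page source to a local file (requirement 1).
    with open(path, 'w', encoding='utf-8') as f:
        f.write(html)

def load_html(path):
    # Read the saved source back for offline parsing (requirement 2).
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()
```

With these, the crawl can fetch once, save, and re-parse the local copy while tuning the regex, instead of hitting the site on every run.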
import requests
from requests.exceptions import RequestException
import re
import csv

def getHTMLText(url):
    # Fetch the page source; return an empty string on failure
    # so the caller never receives None.
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36'
    }
    try:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except RequestException as e:
        print('error', e)
        return ''

def fillHtml(html):
    # Extract (job title, company, location, salary, date) tuples with a regex.
    pattern = re.compile(
        r'class="t1 ">.*?<a target="_blank" title="(.*?)".*?'
        r'<span class="t2"><a target="_blank" title="(.*?)".*?'
        r'<span class="t3">(.*?)</span>.*?'
        r'<span class="t4">(.*?)</span>.*?'
        r'<span class="t5">(.*?)</span>', re.S)
    return re.findall(pattern, html)

def printHtml_text(data):
    # Append the records to a tab-separated txt file.
    with open(r'D:\qq.txt', 'a', encoding='utf-8') as f:
        for i in data:
            f.write('\t'.join(i) + '\n')

def printHtml_csv(data, write_header=False):
    # Append the records to a csv file; since the file is opened in
    # append mode, the header must be written only on the first page.
    fieldnames = ['职位名', '公司名', '工作地点', '薪资', '发布时间']
    with open(r'D:\qq.csv', 'a', encoding='utf-8-sig', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        for i in data:
            writer.writerow(dict(zip(fieldnames, i)))

def main():
    for i in range(1, 12):  # pages 1 through 11
        url = ('http://search.51job.com/list/000000,000000,0000,00,9,99,'
               'python,2,' + str(i) + '.html')
        html = getHTMLText(url)
        data = fillHtml(html)
        printHtml_text(data)
        printHtml_csv(data, write_header=(i == 1))

if __name__ == '__main__':
    main()
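The regex in fillHtml can be sanity-checked offline against a snippet of the saved source before crawling all eleven pages. A minimal sketch, using a made-up sample record written in the page's markup style (the real 51job layout may differ or have changed):

```python
import re

# Hypothetical sample mimicking one listing's markup; not real page source.
SAMPLE = ('<p class="t1 "> <a target="_blank" title="Python工程师" href="#">'
          '</a></p> <span class="t2"><a target="_blank" title="某公司">'
          '</a></span><span class="t3">上海</span>'
          '<span class="t4">1-1.5万/月</span> <span class="t5">06-01</span>')

# Same pattern as fillHtml: five non-greedy capture groups, re.S so
# '.' also matches newlines in the real multi-line source.
pattern = re.compile(
    r'class="t1 ">.*?<a target="_blank" title="(.*?)".*?'
    r'<span class="t2"><a target="_blank" title="(.*?)".*?'
    r'<span class="t3">(.*?)</span>.*?'
    r'<span class="t4">(.*?)</span>.*?'
    r'<span class="t5">(.*?)</span>', re.S)

print(re.findall(pattern, SAMPLE))
# → [('Python工程师', '某公司', '上海', '1-1.5万/月', '06-01')]
```

If the live site returns no matches, compare the saved source against the pattern one tag at a time; the class names (t1 through t5) are the usual failure point when the page template changes.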
Run results