Scraping Job-Site Data with Python: How Much Do Python Jobs Actually Pay?

Preface

The text and images in this article come from the internet and are for learning and exchange purposes only, not for any commercial use. If there is a problem, please contact us promptly so we can handle it.


Development environment

  • Python 3.6.5
  • PyCharm

Modules used: requests, re, json (plus pprint for inspecting results). re, json, and pprint ship with the standard library; requests is third-party and can be installed with the pip command.
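For example, assuming pip is on your PATH (use pip3 if your system separates Python 2 and 3):

```shell
# requests is the only third-party dependency of this tutorial
pip install requests
```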

Page analysis

https://search.51job.com/list/010000%252c020000%252c030200%252c040000,000000,0000,00,9,99,python,2,1.html

This is the search-results URL for the keyword python; the number just before ".html" is the page index, which the complete code later increments to walk through multiple result pages.
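Since only that trailing page index changes between pages, the URLs for a crawl can be generated with a simple format template, as the full script does below:

```python
# the trailing number before '.html' is the page index
base = 'https://search.51job.com/list/010000%252c020000%252c030200%252c040000,000000,0000,00,9,99,python,2,{}.html'

# URLs for the first 10 result pages
urls = [base.format(page) for page in range(1, 11)]
print(urls[0])
```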

Requesting the page

```python
import requests

url = 'https://search.51job.com/list/010000%252c020000%252c030200%252c040000,000000,0000,00,9,99,python,2,1.html'
params = {
    'lang': 'c',
    'postchannel': '0000',
    'workyear': '99',
    'cotype': '99',
    'degreefrom': '99',
    'jobterm': '99',
    'companysize': '99',
    'ord_field': '0',
    'dibiaoid': '0',
    'line': '',
    'welfare': '',
}
cookies = {
    # paste your own cookie key-value pairs here
}
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Host': 'search.51job.com',
    'Referer': 'https://search.51job.com/list/190200,000000,0000,00,9,99,python,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36',
}

response = requests.get(url=url, params=params, headers=headers, cookies=cookies)
response.encoding = response.apparent_encoding
print(response.text)
```

The data we need sits inside a script tag on the page:

window.__SEARCH_RESULT__ = { ...the content you want... }

Match it out with a regular expression, parse the matched string as JSON, and then read off the fields you want with ordinary dictionary indexing.

```python
import re
import json
import pprint

# the lazy group needs the '</script>' terminator;
# without it, (.*?) would match the empty string
r = re.findall(r'window\.__SEARCH_RESULT__ = (.*?)</script>', response.text, re.S)
string = ''.join(r)
info_dict = json.loads(string)
pprint.pprint(info_dict)
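The extract-then-parse step can be exercised offline on a hand-made snippet (the HTML below is a made-up miniature of the real page source, not actual 51job output):

```python
import json
import re

# hypothetical stand-in for response.text
html = '''
<script type="text/javascript">
window.__SEARCH_RESULT__ = {"total_page": "88", "engine_search_result": [{"job_name": "Python开发", "providesalary_text": "1-1.5万/月"}]}
</script>
'''

# capture everything between the assignment and the closing script tag
r = re.findall(r'window\.__SEARCH_RESULT__ = (.*?)</script>', html, re.S)
info_dict = json.loads(''.join(r))

print(info_dict['engine_search_result'][0]['job_name'])
```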

Complete code

```python
import requests
import re
import json

for page in range(1, 11):
    url = 'https://search.51job.com/list/010000%252c020000%252c030200%252c040000,000000,0000,00,9,99,python,2,{}.html'.format(page)
    params = {
        'lang': 'c',
        'postchannel': '0000',
        'workyear': '99',
        'cotype': '99',
        'degreefrom': '99',
        'jobterm': '99',
        'companysize': '99',
        'ord_field': '0',
        'dibiaoid': '0',
        'line': '',
        'welfare': '',
    }
    cookies = {
        # paste your own cookie key-value pairs here
    }
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Host': 'search.51job.com',
        'Referer': 'https://search.51job.com/list/190200,000000,0000,00,9,99,python,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36',
    }

    response = requests.get(url=url, params=params, headers=headers, cookies=cookies)
    response.encoding = response.apparent_encoding
    # the lazy group needs the '</script>' terminator, otherwise it matches nothing useful
    r = re.findall(r'window\.__SEARCH_RESULT__ = (.*?)</script>', response.text, re.S)
    string = ''.join(r)
    info_dict = json.loads(string)
    dit_py = info_dict['engine_search_result']
    for i in dit_py:
        # the first attribute entry repeats the work area, so skip it
        attribute_text = ' '.join(i['attribute_text'][1:])
        dit = {}
        # dit['job_href'] = i['job_href']
        dit['job_name'] = i['job_name']
        dit['company_name'] = i['company_name']
        dit['money'] = i['providesalary_text']
        dit['workarea'] = i['workarea_text']
        dit['updatedate'] = i['updatedate']
        dit['companytype'] = i['companytype_text']
        dit['jobwelf'] = i['jobwelf']
        dit['attribute'] = attribute_text
        dit['companysize'] = i['companysize_text']
        print(dit)
        with open('python招聘信息.csv', mode='a', encoding='utf-8') as f:
            f.write('{},{},{},{},{},{},{},{}\n'.format(
                dit['job_name'], dit['company_name'], dit['money'],
                dit['workarea'], dit['companytype'], dit['jobwelf'],
                dit['attribute'], dit['companysize']))
```
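One caveat with hand-joining fields by commas: a field such as the welfare text often contains commas itself, which shifts columns when the CSV is opened later. The standard-library csv module quotes such fields automatically. A minimal sketch, using made-up rows in the same shape as dit above and an in-memory buffer instead of a file:

```python
import csv
import io

# hypothetical rows shaped like the dit dictionaries built above
rows = [
    {'job_name': 'Python开发', 'money': '1-1.5万/月', 'jobwelf': '五险一金, 年终奖'},
]

buf = io.StringIO()  # swap in open('python招聘信息.csv', 'w', newline='', encoding='utf-8') for a real file
writer = csv.DictWriter(buf, fieldnames=['job_name', 'money', 'jobwelf'])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())  # the jobwelf field comes out quoted, so its comma survives
```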

Result

(The original post showed a screenshot of the generated CSV here.)

