Job Listing Query

A little job-listing query tool for Lagou (lagou.com) that I wrote with a crawler in my spare time.

There isn't much code.

import requests
import pymysql  # not used in the script itself; see the MySQL sketch after it
import time
url = 'https://www.lagou.com/jobs/positionAjax.json?px=default&city=%E6%88%90%E9%83%BD&needAddtionalResult=false'  # Ajax API; the city parameter is URL-encoded "成都" (Chengdu)
urls = 'https://www.lagou.com/jobs/list_python/p-city_252?px=default#filterBox'  # list page, visited only to obtain cookies
headers = {'Host': 'www.lagou.com',
           'Origin': 'https://www.lagou.com',
           'Referer': 'https://www.lagou.com/jobs/list_python/p-city_252?px=default',
           'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'}
user = input('请输入你要查询的岗位:')
pages = int(input('请输入页数:'))
print('客官请稍等,正在查询中......')
for page in range(pages):
    time.sleep(0.5)
    data = {'first': 'true',
            'pn': page + 1,  # Lagou's page numbers start at 1
            'kd': user}      # keyword to search for
    # Visit the list page first so the session picks up Lagou's anti-crawler
    # cookies, then POST to the Ajax endpoint with those same cookies.
    session = requests.Session()
    session.get(url=urls, headers=headers)
    cookie = session.cookies
    response = session.post(url=url, headers=headers, data=data, cookies=cookie).json()
    result = response['content']['positionResult']['result']
    for i in result:
        job_title = i['positionName']               # position name
        workYear = i['workYear']                    # experience required
        salary = i['salary']                        # salary
        education = i['education']                  # education
        positionAdvantage = i['positionAdvantage']  # job perks
        companyFullName = i['companyFullName']      # company name
        industryField = i['industryField']          # company's industry
        data_job = {}
        data_job['岗位'] = job_title
        data_job['经验'] = workYear
        data_job['薪资'] = salary
        data_job['学历'] = education
        data_job['职位福利'] = positionAdvantage
        data_job['公司'] = companyFullName
        data_job['公司所属领域'] = industryField
        print(data_job)
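
The pymysql import is never actually used above. If you want to store the results in MySQL instead of just printing them, a minimal sketch along the following lines should work. Everything specific here is my own assumption: the connection credentials, the lagou database, and the jobs table with its column names are placeholders you would create and adjust yourself.

import pymysql

def save_job(data_job):
    # Insert one scraped record into MySQL.
    # Assumptions: a local MySQL server, a database named `lagou`, and a table
    # jobs(position, experience, salary, education, benefits, company, industry)
    # with VARCHAR columns; create these yourself before running.
    conn = pymysql.connect(host='localhost', user='root', password='your_password',
                           database='lagou', charset='utf8mb4')
    try:
        with conn.cursor() as cursor:
            sql = ('INSERT INTO jobs (position, experience, salary, education, '
                   'benefits, company, industry) VALUES (%s, %s, %s, %s, %s, %s, %s)')
            cursor.execute(sql, (data_job['岗位'], data_job['经验'], data_job['薪资'],
                                 data_job['学历'], data_job['职位福利'],
                                 data_job['公司'], data_job['公司所属领域']))
        conn.commit()
    finally:
        conn.close()

To use it, call save_job(data_job) right after print(data_job) in the inner loop. For more than a handful of pages you would open the connection once outside the loop instead of once per record.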