2021-05-26

Scraping job postings from Tencent Careers with a Python crawler (personal learning notes)

A fairly routine scrape-then-analyze exercise; the important part is working out the URL, which is easier to show than to tell.

The original post has a screenshot breaking that URL down into its parameters; since the image is not reproduced here, the sketch below prints the same breakdown programmatically.
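A minimal standard-library sketch (it assumes nothing beyond the Query URL itself, which is the same one used by the script further down):

from urllib.parse import urlparse, parse_qsl

url = ("https://careers.tencent.com/tencentcareer/api/post/Query"
       "?timestamp=1621962191550&countryId=&cityId=&bgIds=&productId="
       "&categoryId=&parentCategoryId=&attrId=&keyword=&pageIndex=1"
       "&pageSize=200&language=zh-cn&area=cn")

# Print every query parameter of the Query endpoint; pageIndex and pageSize
# are the two that matter for paging.
for key, value in parse_qsl(urlparse(url).query, keep_blank_values=True):
    print(f'{key} = {value!r}')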

If you later need to scrape more data faster, you can add multithreading or raise the pageSize parameter; a multithreaded sketch follows the main script below.

import requests
import re
import json

# The URL below takes tunable query parameters: pageIndex is the page number and
# pageSize is how many postings come back per page (200 here).
url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1621962191550&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=&attrId=&keyword=&pageIndex=1&pageSize=200&language=zh-cn&area=cn"

headers = {
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:65.0) Gecko/20100101 Firefox/65.0",
}
# Write one JSON object per line (JSON Lines), so the file can be read back record by record
fp1 = open('data_json.json', 'w', encoding='utf-8')

def parse_json(url):
    """Fetch the URL and return the response body as text."""
    response = requests.get(url=url, headers=headers)
    text = response.text  # read the body before releasing the connection
    response.close()
    return text



def data_analy(r):
    """Pull the fields of each posting out of the listing text and write them to JSON."""
    pat = re.compile(r'"PostId":"(?P<postid>.*?)","RecruitPostId":.*?,"RecruitPostName":"(?P<position>.*?)","CountryName":"(?P<country>.*?)","LocationName":"(?P<city>.*?)","BGName":".*?","ProductName":".*?","CategoryName":".*?","Responsibility":"(?P<introduction>.*?)","LastUpdateTime":"(?P<time>.*?)","PostURL":"(?P<posturl>.*?)","',re.S)
    data_tuple_value = re.findall(pattern=pat,string=r)
    data_list_keys = ['职业名称','国家','城市','工作介绍','工作要求','发布日期','提交申请url']
    for i in data_tuple_value:
        data_list_value = list(i)
        data_postid = data_list_value[0]   # PostId is needed to query the detail API below
        del data_list_value[0]
        postid_url = f'https://careers.tencent.com/tencentcareer/api/post/ByPostId?timestamp=1622010434223&postId={data_postid}&language=zh-cn'
        detail = requests.get(url=postid_url, headers=headers)
        detail.encoding = detail.apparent_encoding
        r1 = detail.text.replace('},', '\n')
        pat1 = re.compile(r',"Requirement":"(?P<gzyq>.*?)",')
        data_require = re.findall(pat1, r1)
        # Guard against postings whose detail page lacks a Requirement field
        requirement = data_require[0].replace(r'\n', '') if data_require else ''
        data_list_value.insert(4, requirement)
        # Drop the department prefix before the first '-' (keep the whole name if there is none)
        data_list_value[0] = data_list_value[0].split('-', 1)[-1]
        data_list_value[3] = data_list_value[3].replace(r'\n', '')
        data_dict = dict(zip(data_list_keys,data_list_value))
        print(data_dict)
        detail.close()
        # One record per line keeps the file parseable with json.loads line by line
        fp1.write(json.dumps(data_dict, ensure_ascii=False) + '\n')
    fp1.close()


if __name__ == '__main__':
    # Break the listing JSON at '},' so each posting sits in its own chunk for the regex
    r = parse_json(url).replace('},', '\n')
    data_analy(r)
    print('Scraping finished')
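
As mentioned above, the quickest way to scale this up is to fetch several pages in parallel by varying pageIndex. A minimal sketch, assuming the Query endpoint pages purely via pageIndex and does not validate the stale timestamp parameter (both are assumptions, not confirmed here):

from concurrent.futures import ThreadPoolExecutor

import requests

# Assumed paging template: only pageIndex changes between requests.
BASE = ("https://careers.tencent.com/tencentcareer/api/post/Query"
        "?timestamp=1621962191550&keyword=&pageIndex={page}&pageSize=200"
        "&language=zh-cn&area=cn")
headers = {
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:65.0) Gecko/20100101 Firefox/65.0",
}

def fetch_page(page):
    """Download one page of postings and return its raw JSON text."""
    resp = requests.get(BASE.format(page=page), headers=headers)
    text = resp.text
    resp.close()
    return text

if __name__ == '__main__':
    # Fetch pages 1-5 concurrently; each page's text can then be fed to data_analy()
    with ThreadPoolExecutor(max_workers=5) as pool:
        pages = list(pool.map(fetch_page, range(1, 6)))
    print(f'fetched {len(pages)} pages')

Since the endpoint returns JSON, parsing each page with json.loads (the postings appear to sit under Data → Posts, judging by the fields the regex matches) would be more robust than regular expressions; the regex route is kept above because it is what the original script uses.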

 

The scraped data has been saved to a JSON file (one record per line), so it can be visualized later; one possible first pass is sketched below.
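
For example, a quick first look at the data is a count of postings per city. A minimal sketch, assuming data_json.json holds one JSON object per line, as written by the scraper above:

import json
from collections import Counter

# Read the JSON Lines output back in, one record per non-empty line
with open('data_json.json', encoding='utf-8') as fp:
    records = [json.loads(line) for line in fp if line.strip()]

# Count how many postings each city has and show the top ten
city_counts = Counter(rec['城市'] for rec in records)
for city, n in city_counts.most_common(10):
    print(f'{city}: {n}')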
