I have already written a blog post on scraping Tencent's recruitment site using multithreading combined with the producer-consumer pattern; if you're interested, feel free to take a look.
Here is the link: https://blog.csdn.net/g_optimistic/article/details/90048696
Below, I'll scrape Tencent recruitment postings with the Scrapy framework instead.
1. Create the spider file
scrapy genspider s_tencent careers.tencent.com
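This assumes you're already inside a Scrapy project (created beforehand with scrapy startproject; the project name doesn't matter here). The generated s_tencent.py starts out as roughly this skeleton:

# -*- coding: utf-8 -*-
import scrapy


class STencentSpider(scrapy.Spider):
    name = 's_tencent'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['http://careers.tencent.com/']

    def parse(self, response):
        pass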
2. Find the API URL
The detailed process was covered in my earlier blog post, so here I'll just give the result directly.
The pageIndex parameter carries the page number:
https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex={}&pageSize=10
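Before wiring this into Scrapy, it's worth a quick check that the endpoint really returns JSON. A minimal sketch using the requests library (my addition, not part of the original workflow):

import requests

# Fetch page 1 of python-related postings to inspect the JSON shape
url = ('https://careers.tencent.com/tencentcareer/api/post/Query'
       '?keyword=python&pageIndex=1&pageSize=10')
data = requests.get(url).json()
# Postings live under Data -> Posts
for post in data['Data']['Posts'][:3]:
    print(post['RecruitPostName'])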
3. Request the URLs
start_urls = []
for page in range(1, 62):
    url = 'https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex=%s&pageSize=10' % page
    start_urls.append(url)
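Note that range(1, 62) covers pages 1 through 61. Building start_urls at class level works fine; an equivalent and arguably more idiomatic sketch overrides start_requests so the requests are generated lazily (my variant, not what the spider below uses):

def start_requests(self):
    base = ('https://careers.tencent.com/tencentcareer/api/post/Query'
            '?keyword=python&pageIndex=%s&pageSize=10')
    for page in range(1, 62):
        # Yield one request per result page
        yield scrapy.Request(base % page, callback=self.parse)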
4. Parse the data and save it
content = response.body.decode('utf-8')
data = json.loads(content)
job_list = data['Data']['Posts']
for job in job_list:
    name = job['RecruitPostName']
    country = job['CountryName']
    duty = job['Responsibility']
    # info = name + country + duty + '\n'
    info = {
        "name": name,
        "country": country,
        "duty": duty,
    }
    with open('job.txt', 'a', encoding='utf-8') as fp:
        fp.write(str(info) + '\n')
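One caveat: str(info) writes Python dict reprs, not valid JSON. If you want each line to be machine-parseable later, a small tweak using json.dumps (my suggestion) would be:

with open('job.txt', 'a', encoding='utf-8') as fp:
    # ensure_ascii=False keeps Chinese text readable in the file
    fp.write(json.dumps(info, ensure_ascii=False) + '\n')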
5. Run the project
scrapy crawl s_tencent
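If the crawl exits without fetching anything due to robots.txt filtering, you may need to relax Scrapy's default in settings.py (a standard Scrapy setting; whether this endpoint is actually blocked is something to verify yourself):

# settings.py
ROBOTSTXT_OBEY = False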
Result: once the run finishes, job.txt appears in the working directory.
6. Complete code of s_tencent.py
# -*- coding: utf-8 -*-
import scrapy
import json


class STencentSpider(scrapy.Spider):
    name = 's_tencent'
    allowed_domains = ['careers.tencent.com']

    # Build one start URL per result page (pages 1 through 61)
    start_urls = []
    for page in range(1, 62):
        url = 'https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex=%s&pageSize=10' % page
        start_urls.append(url)

    def parse(self, response):
        # The endpoint returns JSON, so decode the body and parse it
        content = response.body.decode('utf-8')
        data = json.loads(content)
        job_list = data['Data']['Posts']
        for job in job_list:
            name = job['RecruitPostName']
            country = job['CountryName']
            duty = job['Responsibility']
            # info = name + country + duty + '\n'
            info = {
                "name": name,
                "country": country,
                "duty": duty,
            }
            # Append each posting as one line to job.txt
            with open('job.txt', 'a', encoding='utf-8') as fp:
                fp.write(str(info) + '\n')
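Opening job.txt by hand inside parse works, but the more idiomatic Scrapy route is to yield each record as an item and let a feed export write the output file. A minimal variant of parse under that approach (my sketch, not the original code):

def parse(self, response):
    data = json.loads(response.text)
    for job in data['Data']['Posts']:
        # Yielded dicts are collected by Scrapy's feed exporter
        yield {
            'name': job['RecruitPostName'],
            'country': job['CountryName'],
            'duty': job['Responsibility'],
        }

You would then run it with scrapy crawl s_tencent -o job.json to get a JSON file instead of job.txt.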