Scraping Tencent Job Postings with the Scrapy Framework

I have previously written a post about scraping Tencent job postings using multithreading combined with the producer-consumer pattern; feel free to take a look if you are interested.

Here is the link to that post: https://blog.csdn.net/g_optimistic/article/details/90048696

This post walks through scraping Tencent job postings with the Scrapy framework instead.

Contents

1. Create the spider file
2. Find the API URL
3. Request the URLs
4. Parse and save the data
5. Run the project
6. Full code of s_tencent.py


1. Create the spider file

scrapy genspider s_tencent careers.tencent.com
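The genspider command has to be run inside a project created beforehand with scrapy startproject; it generates the spider module under the project's spiders package. The generated file looks roughly like the sketch below (the exact template can differ slightly between Scrapy versions):

# Rough shape of the file that scrapy genspider creates (template details vary by version)
import scrapy


class STencentSpider(scrapy.Spider):
    name = 's_tencent'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['http://careers.tencent.com/']

    def parse(self, response):
        pass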

2. Find the API URL

The detailed process of finding this endpoint was covered in the earlier post, so here I give the result directly. The pageIndex query parameter carries the page number:

https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex={}&pageSize=10
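Before wiring this into Scrapy, it can be worth confirming that the endpoint returns JSON with the expected fields. A minimal sanity check with the requests library (not part of the Scrapy project itself, just a quick manual test) might look like this; the Data/Posts keys and field names match the ones the spider parses below:

# Quick manual check of the recruitment API outside Scrapy (assumes `pip install requests`)
import requests

url = ('https://careers.tencent.com/tencentcareer/api/post/Query'
       '?keyword=python&pageIndex=1&pageSize=10')
data = requests.get(url).json()
# Each post carries the fields used by the spider's parse() method
for job in data['Data']['Posts']:
    print(job['RecruitPostName'], job['CountryName'])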


3. Request the URLs

# Build one API URL per results page (pages 1 through 61)
start_urls = []
for page in range(1, 62):
    url = 'https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex=%s&pageSize=10' % page
    start_urls.append(url)
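Building start_urls up front like this works fine. An equivalent, slightly more idiomatic alternative (not what the original spider does) is to override start_requests inside the spider class and yield the requests lazily, as in this sketch:

    def start_requests(self):
        # Yield one request per page instead of pre-building start_urls
        base = ('https://careers.tencent.com/tencentcareer/api/post/Query'
                '?keyword=python&pageIndex=%s&pageSize=10')
        for page in range(1, 62):
            yield scrapy.Request(base % page, callback=self.parse)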

4. Parse and save the data

    def parse(self, response):
        # The endpoint returns JSON, so decode and parse the response body directly
        content = response.body.decode('utf-8')
        data = json.loads(content)
        job_list = data['Data']['Posts']
        for job in job_list:
            name = job['RecruitPostName']
            country = job['CountryName']
            duty = job['Responsibility']
            info = {
                "name": name,
                "country": country,
                "duty": duty,
            }
            # Append one record per job posting to job.txt
            with open('job.txt', 'a', encoding='utf-8') as fp:
                fp.write(str(info) + '\n')
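Writing job.txt by hand inside parse works, but Scrapy can also handle the output itself if the method yields each record. As an alternative sketch to the file-writing code above, the loop body could simply yield the dict:

            # Alternative: yield the record and let Scrapy's feed export serialize it
            yield {
                "name": name,
                "country": country,
                "duty": duty,
            }

The spider is then run with an output file, for example scrapy crawl s_tencent -o job.json, and Scrapy writes the yielded items to job.json.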

5. Run the project

scrapy crawl s_tencent

Result: when the spider finishes, job.txt appears in the project directory.
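If starting the spider from a Python script is more convenient than the command line, a small runner can be placed next to scrapy.cfg; a sketch (assuming the file is named run.py and lives in the project root):

# run.py -- start the spider programmatically (assumes it sits next to scrapy.cfg)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('s_tencent')
process.start()  # blocks until the crawl finishes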


6. Full code of s_tencent.py

# -*- coding: utf-8 -*-
import scrapy
import json


class STencentSpider(scrapy.Spider):
    name = 's_tencent'
    allowed_domains = ['careers.tencent.com']

    # Build one API URL per results page (pages 1 through 61)
    start_urls = []
    for page in range(1, 62):
        url = 'https://careers.tencent.com/tencentcareer/api/post/Query?keyword=python&pageIndex=%s&pageSize=10' % page
        start_urls.append(url)

    def parse(self, response):
        # The endpoint returns JSON, so decode and parse the response body directly
        content = response.body.decode('utf-8')
        data = json.loads(content)
        job_list = data['Data']['Posts']
        for job in job_list:
            name = job['RecruitPostName']
            country = job['CountryName']
            duty = job['Responsibility']
            info = {
                "name": name,
                "country": country,
                "duty": duty,
            }
            # Append one record per job posting to job.txt
            with open('job.txt', 'a', encoding='utf-8') as fp:
                fp.write(str(info) + '\n')
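One small caveat about the output: str(info) writes Python-style dict literals, which are awkward to parse back later. If valid JSON lines are preferred, the write can be swapped for json.dumps with ensure_ascii=False so that Chinese text stays readable in job.txt; a one-line sketch of the replacement:

                # Alternative to str(info): one JSON object per line, Chinese text kept readable
                fp.write(json.dumps(info, ensure_ascii=False) + '\n')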

 

 
