Scraping Tencent Careers with Scrapy + Selenium + PhantomJS

Project entry point, main.py:

from scrapy import cmdline
cmdline.execute('scrapy crawl tenxun --nolog'.split())
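cmdline.execute expects an argv-style list rather than a single string, which is why the command is split. What actually gets passed can be checked with plain Python, no Scrapy needed:

```python
# The command string is split on whitespace into the argv list
# that scrapy's cmdline.execute receives.
argv = 'scrapy crawl tenxun --nolog'.split()
print(argv)  # ['scrapy', 'crawl', 'tenxun', '--nolog']
```

The `--nolog` flag suppresses Scrapy's log output so only the spider's own prints reach the console.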

Spider file, tenxun.py:

# -*- coding: utf-8 -*-
import scrapy
from tencent.items import TencentItem

class TenxunSpider(scrapy.Spider):
    name = 'tenxun'
    allowed_domains = ['careers.tencent.com']
    start_urls = []
    #https://careers.tencent.com/search.html?index=2
    for i in range(1,430):
        base_url = 'https://careers.tencent.com/search.html?index={}'.format(i)
        start_urls.append(base_url)

    def parse(self, response):
        div_list = response.xpath('//div[@class="recruit-wrap recruit-margin"]/div')
        for site in div_list:
            # Create a fresh item per job posting; reusing a single item
            # across iterations would let later yields overwrite earlier ones.
            item = TencentItem()
            item['title'] = site.xpath('.//h4/text()').extract_first()
            content = site.xpath('.//p[@class="recruit-tips"]/span/text()').extract()
            item['content'] = '|'.join(content)
            item['detail'] = site.xpath('.//p[@class="recruit-text"]/text()').extract_first()
            yield item
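The start_urls construction in the class body simply formats one search URL per results page. The same loop can be checked in isolation with plain Python, no Scrapy required:

```python
# Build the paged search URLs exactly as the spider's class body does.
start_urls = [
    'https://careers.tencent.com/search.html?index={}'.format(i)
    for i in range(1, 430)
]
print(len(start_urls))   # 429 pages
print(start_urls[0])     # https://careers.tencent.com/search.html?index=1
print(start_urls[-1])    # https://careers.tencent.com/search.html?index=429
```

With roughly ten postings per page, 429 pages accounts for the 4,290 positions reported at the end.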

items.py:

import scrapy


class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    content = scrapy.Field()
    detail = scrapy.Field()

pipelines.py:

import pymongo


class Pipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read 'MONGO_URI' to match the key defined in settings.py
        # (the original read 'MONGO_URL', which is never set there).
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Use the item's class name as the collection name.
        collection_name = item.__class__.__name__
        # insert_one replaces the deprecated Collection.insert.
        self.db[collection_name].insert_one(dict(item))
        return item
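The collection name is derived from the item's class name, so each item type lands in its own MongoDB collection. The lookup itself is plain Python; the `TencentItem` stand-in below is just an illustration, not the real Scrapy item:

```python
# Stand-in for a Scrapy Item; only the class name and dict conversion
# matter for the pipeline's collection-routing logic.
class TencentItem(dict):
    pass

item = TencentItem(title='Engineer', content='Shenzhen|Technology', detail='...')
collection_name = item.__class__.__name__
print(collection_name)   # name of the MongoDB collection the item goes to
print(dict(item))        # the document that would be passed to insert_one
```

Because the name comes from the class rather than a hard-coded string, adding a second item type automatically writes to a second collection with no pipeline changes.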

settings.py:

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

ROBOTSTXT_OBEY = False

DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
}

DOWNLOADER_MIDDLEWARES = {
   'tencent.TencentMiddlewares.TencentMiddle': 543,
}

ITEM_PIPELINES = {
   'tencent.pipelines.Pipeline': 300,
}

MONGO_URI = 'localhost'
MONGO_DATABASE = 'tencent2'

Downloader middleware (registered under DOWNLOADER_MIDDLEWARES above), TencentMiddlewares.py:

from selenium import webdriver
import time
from scrapy.http import HtmlResponse

class TencentMiddle(object):

    def process_request(self, request, spider):
        # PhantomJS renders the JavaScript-driven page before Scrapy parses it.
        # (Note: PhantomJS is no longer maintained and recent Selenium releases
        # have dropped support for it; a headless Chrome or Firefox driver is
        # the usual substitute today.)
        driver = webdriver.PhantomJS()
        try:
            driver.get(request.url)
            time.sleep(0.1)
            html = driver.page_source
        finally:
            # Always quit the driver, or every request leaks a browser process.
            driver.quit()
        # Returning an HtmlResponse from process_request short-circuits
        # Scrapy's own downloader for this request.
        return HtmlResponse(url=request.url, body=html, encoding='utf-8', request=request)
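The quit-on-every-path pattern matters because each request spins up a separate browser process. A minimal sketch of the try/finally flow, using a hypothetical `FakeDriver` stand-in instead of a real Selenium WebDriver:

```python
# FakeDriver is a hypothetical stand-in for a Selenium WebDriver,
# used only to show that try/finally guarantees cleanup.
class FakeDriver:
    def __init__(self):
        self.quit_called = False
        self.page_source = '<html>rendered</html>'

    def get(self, url):
        pass  # a real driver would navigate and render the page here

    def quit(self):
        self.quit_called = True

def fetch(url, driver):
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()  # runs even if get() or page_source raises

driver = FakeDriver()
html = fetch('https://careers.tencent.com/search.html?index=1', driver)
print(driver.quit_called)  # True: the browser process was shut down
```

Without the finally block, an exception during page load would leave the process running; over 429 requests that adds up quickly.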

A total of 4,290 job positions were scraped:

