Big Data Practical Training: Scraping 51job with Python + Hive Data Analysis + Visualization

Project Requirements

Write a crawler in Python that scrapes job postings from a recruitment website and stores them in a MongoDB database. After some data cleaning, analyze the stored data, and finally visualize the analysis results.

The Crawler

The crawler is built on the Scrapy framework.
job.py (the spider)

# -*- coding: utf-8 -*-
import scrapy
from job51.items import Job51Item


class QcwySpider(scrapy.Spider):
    name = 'job'
    # allowed_domains expects bare domain names, not full URLs
    allowed_domains = ['51job.com']
    start_urls = ['https://search.51job.com/list/000000,000000,0130%252C7501%252C7506%252C7502,01%252C32%252C38,9,99,%2520,2,1.html?lang=c&stype=&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&providesalary=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare=']

    def parse(self, response):
        # Collect every detail-page link from the result list
        all_urls = response.xpath("//*[@id='resultList']/div[@class='el']/p/span/a/@href").getall()
        for url in all_urls:
            yield scrapy.Request(url, callback=self.parse_html, dont_filter=True)
        # The last <li> in the pager is the "next page" link
        next_page = response.xpath("//div[@class='p_in']//li[last()]/a/@href").get()
        if next_page:
            yield scrapy.Request(next_page, callback=self.parse, dont_filter=True)

    def parse_html(self, response):
        item = Job51Item()
        try:
            jobname = response.xpath("//div[@class='cn']/h1/text()").getall()[0]
            salary = response.xpath("//div[@class='cn']//strong/text()").get()
            company = response.xpath("//div[@class='cn']//p[@class='cname']/a[1]/@title").get()
            # The "msg ltype" paragraph holds city / experience / education as separate text nodes
            city = response.xpath("//div[@class='cn']//p[@class='msg ltype']/text()").getall()[0]
            workyear = response.xpath("//div[@class='cn']//p[@class='msg ltype']/text()").getall()[1]
            record = response.xpath("//div[@class='cn']//p[@class='msg ltype']/text()").getall()[2]
            # Flatten the whole job-description block into one string
            requirements = response.xpath("//div[@class='bmsg job_msg inbox']//text()").getall()
            requirement_str = ""
            for requirement in requirements:
                requirement_str += requirement.strip()
            # Keyword tags, joined into a space-separated skill string
            skill = ""
            keyword = response.xpath("//p[@class='fp'][2]/a/text()").getall()
            for i in keyword:
                skill += i + " "
        except Exception:
            # Some postings lack one of the fields above; fall back to empty strings
            jobname = ""
            salary = ""
            company = ""
            city = ""
            workyear = ""
            record = ""
            requirement_str = ""
            skill = ""
        finally:
            item["jobname"] = jobname
            item["salary"] = salary
            item["company"] = company
            item["city"] = city
            item["workyear"] = workyear
            item["record"] = record
            item["requirement"] = requirement_str
            item["skill"] = skill
        yield item
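
The spider imports Job51Item from job51/items.py, which the post does not show. A minimal sketch of what that file would need to contain, with one scrapy.Field per key assigned in parse_html above:

import scrapy


class Job51Item(scrapy.Item):
    # One field per key the spider fills in
    jobname = scrapy.Field()
    salary = scrapy.Field()
    company = scrapy.Field()
    city = scrapy.Field()
    workyear = scrapy.Field()
    record = scrapy.Field()
    requirement = scrapy.Field()
    skill = scrapy.Field()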


settings.py

BOT_NAME = 'job51'

SPIDER_MODULES = ['job51.spiders']
NEWSPIDER_MODULE = 'job51.spiders'

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = '51job_hive'
MONGODB_DOCNAME = 'job51hive'

DOWNLOAD_DELAY = 1
ROBOTSTXT_OBEY = False

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    # Header name fixed from 'User_Agent'; the UA value was cut off in the
    # original post, so a generic recent desktop Chrome UA is assumed here.
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36',
}
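
The MONGODB_* settings above imply an item pipeline that writes each scraped item to MongoDB, but the post cuts off before showing it. A minimal sketch using pymongo, assuming a job51/pipelines.py roughly like the following (MongoPipeline is a hypothetical name, not from the original):

import pymongo


class MongoPipeline:
    def open_spider(self, spider):
        # Read the connection details from the MONGODB_* settings above
        self.client = pymongo.MongoClient(
            host=spider.settings['MONGODB_HOST'],
            port=spider.settings['MONGODB_PORT'],
        )
        db = self.client[spider.settings['MONGODB_DBNAME']]
        self.collection = db[spider.settings['MONGODB_DOCNAME']]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # One MongoDB document per job posting
        self.collection.insert_one(dict(item))
        return item

It would be enabled with an entry like ITEM_PIPELINES = {'job51.pipelines.MongoPipeline': 300} in settings.py; running scrapy crawl job then fills the job51hive collection in the 51job_hive database.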