Scraping Data from the 51job (前程无忧) Website

This article walks through scraping job postings from the 51job (前程无忧) website, storing them in MongoDB, exporting them to HDFS, and collecting the files with Flume. The data is then analyzed in Hive to compute average, maximum, and minimum salaries for big-data-related positions such as '数据分析' (data analysis), '大数据开发工程师' (big data development engineer), and '数据采集' (data collection), with the results presented in charts. Finally, Sqoop is used to load the data into MySQL to further analyze the number of big-data positions per region.

1. Scrape data from the ChinaHR (中华英才网) and 51job (前程无忧) websites.
Under spiders:

# -*- coding: utf-8 -*-
import copy

import scrapy

from ..items import QcwyItem


class Qcwy2Spider(scrapy.Spider):
    name = 'qcwy2'
    allowed_domains = ['51job.com']
    # one search-results page per value of i (pages 1 through 1999)
    start_urls = ['https://search.51job.com/list/000000,000000,0000,00,9,99,%2B,2,{0}.html?lang=c&stype=&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&providesalary=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare='
                  .format(i) for i in range(1, 2000)]

    def parse(self, response):
        # each job posting in the listing is a div.el row inside div.dw_table
        rows = response.xpath('//div[@class="dw_table"]//div[@class="el"]')
        for row in rows:
            item = QcwyItem()
            item['name'] = row.xpath('./p/span/a/text()').extract_first()
            item['salary'] = row.xpath('./span[3]/text()').extract_first()
            item['company'] = row.xpath('./span[1]/a/text()').extract_first()
            item['work'] = row.xpath('./span[2]/text()').extract_first()
            detail_url = response.urljoin(row.xpath('./p/span/a/@href').extract_first())
            # deep-copy the item so each detail request carries its own copy
            yield scrapy.Request(detail_url,
                                 meta={'item': copy.deepcopy(item)},
                                 callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        item['experience'] = response.xpath('//div[@class="cn"]/p[@class="msg ltype"]/text()[2]').extract_first()
        paragraphs = response.xpath('//div[@class="tBorderTop_box"]/div/p/text()').extract()
        # keep description paragraphs longer than 10 characters, strip newlines, join into one string
        item['content'] = ''.join(p.replace('\n', '') for p in paragraphs if len(p) > 10)
        yield item
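
The spider imports QcwyItem from items.py, which is not shown in this excerpt. A minimal sketch of that class, declaring exactly the fields the spider fills in:

# -*- coding: utf-8 -*-
import scrapy

class QcwyItem(scrapy.Item):
    name = scrapy.Field()        # job title
    salary = scrapy.Field()      # salary range text
    company = scrapy.Field()     # company name
    work = scrapy.Field()        # work location
    experience = scrapy.Field()  # experience/requirements line from the detail page
    content = scrapy.Field()     # job description text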

In settings.py:

BOT_NAME = 'qcwy'

SPIDER_MODULES = ['qcwy.spiders']
NEWSPIDER_MODULE = 'qcwy.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'qcwy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# DOWNLOAD_DELAY = 3
# disable the cookie middleware so the Cookie header below is sent unchanged
COOKIES_ENABLED = False

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    # the session cookie is account-specific and expires quickly; log in to
    # 51job in a browser and paste your own cookie string here
    'Cookie': '<your 51job cookie string>',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
}

# Configure maximum concurrent requests performed by Scrapy (default: 16)
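
The abstract says the scraped items are stored in MongoDB, but the pipeline code is not included in this excerpt. A minimal sketch using pymongo, assuming a local MongoDB instance (the database name 'qcwy' and collection name 'jobs' are assumptions), would go in pipelines.py:

import pymongo

class MongoPipeline:
    def open_spider(self, spider):
        # database and collection names are assumptions
        self.client = pymongo.MongoClient('mongodb://localhost:27017')
        self.collection = self.client['qcwy']['jobs']

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # one MongoDB document per scraped job posting
        self.collection.insert_one(dict(item))
        return item

To activate it, register the class in settings.py, e.g. ITEM_PIPELINES = {'qcwy.pipelines.MongoPipeline': 300}.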
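
The abstract also mentions exporting the data to HDFS and collecting it with Flume, a step not shown in this excerpt. One common setup is to dump the collection to CSV with mongoexport and let a Flume spooldir source push the files into HDFS; the sketch below assumes that layout, and the agent name, directories, and NameNode address are all assumptions:

# dump the MongoDB collection to CSV first, e.g.:
#   mongoexport --db qcwy --collection jobs --type=csv \
#     --fields name,salary,company,work,experience --out /data/qcwy_export/jobs.csv

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# watch a spool directory for newly exported files
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /data/qcwy_export
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# write the files into HDFS as plain text
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://localhost:9000/flume/qcwy
a1.sinks.k1.hdfs.fileType = DataStream

The agent is then started with, e.g., flume-ng agent --name a1 --conf-file qcwy-hdfs.conf.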
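
For the Hive analysis described in the abstract (average, maximum, and minimum salary per position), a sketch is below. It assumes the raw salary strings have already been cleaned into numeric low/high columns; the table name, column names, and HDFS location are assumptions:

-- external table over the files Flume landed in HDFS
CREATE EXTERNAL TABLE IF NOT EXISTS qcwy_jobs (
    name        STRING,
    salary_low  DOUBLE,
    salary_high DOUBLE,
    company     STRING,
    work        STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/flume/qcwy';

-- average, maximum, and minimum salary for big-data-related positions
SELECT name,
       AVG((salary_low + salary_high) / 2) AS avg_salary,
       MAX(salary_high)                    AS max_salary,
       MIN(salary_low)                     AS min_salary
FROM qcwy_jobs
WHERE name LIKE '%数据分析%'
   OR name LIKE '%大数据开发%'
   OR name LIKE '%数据采集%'
GROUP BY name;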

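Finally, the abstract says the results are moved into MySQL with Sqoop for the per-region analysis. A sketch of such an export, assuming the per-region counts were first materialized into a Hive table; the JDBC URL, credentials, and table/directory names are assumptions:

# export a Hive result directory into a pre-created MySQL table
sqoop export \
  --connect jdbc:mysql://localhost:3306/qcwy \
  --username root \
  --password '******' \
  --table area_job_count \
  --export-dir /user/hive/warehouse/qcwy.db/area_job_count \
  --input-fields-terminated-by '\001'

The regional distribution of big-data positions can then be inspected in MySQL with, e.g., SELECT area, job_count FROM area_job_count ORDER BY job_count DESC;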