Scraping detailed job listings from Lagou with Selenium (working around its anti-scraping mechanism)

When scraping Lagou, you quickly run into its anti-scraping measures; the block page below is typical. If the data cannot be fetched even with a rotating User-Agent and complete request headers, we can fall back on Selenium to retrieve the details.

[Screenshot: the block page Lagou returns when it detects scraping]
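For context, this is roughly the kind of plain-requests attempt that Lagou rejects; a minimal sketch, where the search URL and the User-Agent strings are only illustrative:

import random
import requests

# Rotating the User-Agent alone is usually not enough for Lagou: the site
# also checks cookies and session state, so a bare request like this tends
# to be redirected or blocked instead of returning real listings.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 '
    '(KHTML, like Gecko) Version/14.0 Safari/605.1.15',
]
headers = {
    'User-Agent': random.choice(USER_AGENTS),
    'Referer': 'https://www.lagou.com/',
}
resp = requests.get('https://www.lagou.com/jobs/list_python', headers=headers)
print(resp.status_code, len(resp.text))  # typically not the real result page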
(1) First, log in to Lagou. While scraping, a login dialog pops up and interrupts the crawl, so we log in before doing anything else.

from selenium import webdriver
import time

driver_path = "F:\\Download\\chromedriver.exe"
driver = webdriver.Chrome(executable_path=driver_path)
url = 'http://www.lagou.com/'
driver.get(url)
time.sleep(5)                  # wait for the home page to finish loading
# Open the login panel from the top bar, then click through the login dialog
driver.find_element_by_xpath('//*[@id="lg_tbar"]/div/div[2]/ul/li[3]/a').click()
time.sleep(2)
driver.find_element_by_xpath('/html/body/div[2]/div[1]/div/div/div[2]/div[3]/div[4]/div/a[3]').click()
time.sleep(10)                 # leave time to complete the login
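As a side note, the fixed time.sleep calls work but are fragile; Selenium's explicit waits are more robust. A minimal sketch that waits for the login link (same XPath as above) to become clickable:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Wait up to 10 seconds for the login link to become clickable,
# instead of sleeping for a fixed amount of time.
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//*[@id="lg_tbar"]/div/div[2]/ul/li[3]/a'))
).click()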

(2) After logging in, we are returned to the Lagou home page. Since the goal is Python job listings, search for "python".

# Type "python" into the search box and submit the search
driver.find_element_by_xpath('//*[@id="search_input"]').send_keys('python')
driver.find_element_by_xpath('//*[@id="search_button"]').click()
time.sleep(5)

(3) Scrape the details of the first page of Python listings. Continuing with the driver from step (1), the code is as follows (the imports it needs are included at the top):

from lxml import etree
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

class LagouSpider(object):
    def run(self):
        source = driver.page_source
        self.parse(source)

    def parse(self, source):
        # Collect the detail-page links from the result list
        html = etree.HTML(source)
        links = html.xpath('//a[@class="position_link"]/@href')
        for link in links:
            self.detail_page(link)
            time.sleep(1)

    def detail_page(self, link):
        # Open the detail page in a new tab and switch to it
        driver.execute_script("window.open('%s')" % link)
        driver.switch_to.window(driver.window_handles[1])
        WebDriverWait(driver, 10).until(
            EC.presence_of_all_elements_located((By.XPATH, '//div[@class="job-name"]'))
        )
        source = driver.page_source
        self.parse_detail_page(source)
        # Close the tab and return to the result list
        driver.close()
        driver.switch_to.window(driver.window_handles[0])

    def parse_detail_page(self, source):
        html = etree.HTML(source)
        position = ''.join(html.xpath('//div[@class="job-name"]/@title'))
        company = ''.join(html.xpath('//h3[@class="fl"]/em[@class="fl-cn"]/text()')).strip()
        request = ''.join(html.xpath('//dd[@class="job_request"]//span/text()'))
        advantage = ''.join(html.xpath('//dd[@class="job-advantage"]//text()')).strip().replace('        ', '')
        demand = ''.join(html.xpath('//div[@class="job-detail"]//text()')).replace('        ', '')
        address1 = ''.join(html.xpath('//h3[@class="address"]/text()'))
        address2 = ''.join(html.xpath('//dd[@class="job-address clearfix"]//a[2][@rel="nofollow"]//text()'))
        address3 = ''.join(html.xpath('//input[@name="positionAddress"]/@value'))
        address = address1 + '\n' + address2 + address3
        print(position + '     ' + company + '\n' + request + '\n' + advantage + demand + '\n' + address)
        print('\n*********************************************************************\n')

if __name__ == '__main__':
    spider = LagouSpider()
    spider.run()
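Printing is fine for a demo, but if you want to keep the results, writing them to a CSV file is a small step. A minimal sketch; the save_to_csv helper and its field names are hypothetical additions, not part of the original spider:

import csv

# Hypothetical helper: collect each job as a dict in parse_detail_page
# (instead of printing) and dump them all at the end of run().
def save_to_csv(rows, path='lagou_python.csv'):
    with open(path, 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.DictWriter(
            f, fieldnames=['position', 'company', 'request',
                           'advantage', 'demand', 'address'])
        writer.writeheader()
        writer.writerows(rows)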

(4) Scrape the details across all 15 pages of Python listings (this uses explicit waits, so import WebDriverWait from selenium.webdriver.support.ui, expected_conditions as EC from selenium.webdriver.support, and By from selenium.webdriver.common.by):

class LagouSpider(object):
    def run(self):
        for i in range(15):
            source = driver.page_source
            self.parse(source)
            # Wait for the pager to render, then click its last button ("next page")
            WebDriverWait(driver, 10).until(
                EC.presence_of_all_elements_located((By.XPATH, '//div[@class="pager_container"]/span[last()]'))
            )
            next_btn = driver.find_element_by_xpath('//div[@class="pager_container"]/span[last()]')
            next_btn.click()
            time.sleep(5)
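One detail worth guarding against when paging: on the last page the "next" button no longer advances. A sketch that checks the button's class before clicking; the exact disabled class name is an assumption, so inspect the live pager to confirm it:

next_btn = driver.find_element_by_xpath('//div[@class="pager_container"]/span[last()]')
# Assumption: Lagou marks the "next" button with a disabled-style class
# on the last page; check it before clicking to avoid a dead click.
if 'disable' in (next_btn.get_attribute('class') or ''):
    print('Reached the last page, stopping.')
else:
    next_btn.click()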

(5) The complete code for scraping all 15 pages of Python listings on Lagou (once the first page has been scraped, the spider automatically clicks through to pages 2, 3, 4, and so on until all 15 pages are done):

from selenium import webdriver
from lxml import etree
import time
from selenium.webdriver.support.ui import WebDriverWait            # explicit waits
from selenium.webdriver.support import expected_conditions as EC   # wait conditions
from selenium.webdriver.common.by import By

driver_path = "F:\\Download\\chromedriver.exe"
driver = webdriver.Chrome(executable_path=driver_path)
url = 'http://www.lagou.com/'
driver.get(url)
time.sleep(5)
# Log in first: open the login panel from the top bar, then click through the dialog
driver.find_element_by_xpath('//*[@id="lg_tbar"]/div/div[2]/ul/li[3]/a').click()
time.sleep(2)
driver.find_element_by_xpath('/html/body/div[2]/div[1]/div/div/div[2]/div[3]/div[4]/div/a[3]').click()
time.sleep(10)                 # leave time to complete the login
# Search for python positions
driver.find_element_by_xpath('//*[@id="search_input"]').send_keys('python')
driver.find_element_by_xpath('//*[@id="search_button"]').click()
time.sleep(5)

class LagouSpider(object):
    def run(self):
        for i in range(15):
            source = driver.page_source
            self.parse(source)
            # Wait for the pager to render, then click its last button ("next page")
            WebDriverWait(driver, 10).until(
                EC.presence_of_all_elements_located((By.XPATH, '//div[@class="pager_container"]/span[last()]'))
            )
            next_btn = driver.find_element_by_xpath('//div[@class="pager_container"]/span[last()]')
            next_btn.click()
            time.sleep(5)

    def parse(self, source):
        # Collect the detail-page links from the current result page
        html = etree.HTML(source)
        links = html.xpath('//a[@class="position_link"]/@href')
        for link in links:
            self.detail_page(link)
            time.sleep(1)

    def detail_page(self, link):
        # Open the detail page in a new tab and switch to it
        driver.execute_script("window.open('%s')" % link)
        driver.switch_to.window(driver.window_handles[1])
        WebDriverWait(driver, 10).until(
            EC.presence_of_all_elements_located((By.XPATH, '//div[@class="job-name"]'))
        )
        source = driver.page_source
        self.parse_detail_page(source)
        # Close the tab and return to the result list
        driver.close()
        driver.switch_to.window(driver.window_handles[0])

    def parse_detail_page(self, source):
        html = etree.HTML(source)
        position = ''.join(html.xpath('//div[@class="job-name"]/@title'))
        company = ''.join(html.xpath('//h3[@class="fl"]/em[@class="fl-cn"]/text()')).strip()
        request = ''.join(html.xpath('//dd[@class="job_request"]//span/text()'))
        advantage = ''.join(html.xpath('//dd[@class="job-advantage"]//text()')).strip().replace('        ', '')
        demand = ''.join(html.xpath('//div[@class="job-detail"]//text()')).replace('        ', '')
        address1 = ''.join(html.xpath('//h3[@class="address"]/text()'))
        address2 = ''.join(html.xpath('//dd[@class="job-address clearfix"]//a[2][@rel="nofollow"]//text()'))
        address3 = ''.join(html.xpath('//input[@name="positionAddress"]/@value'))
        address = address1 + '\n' + address2 + address3
        print(position + '     ' + company + '\n' + request + '\n' + advantage + demand + '\n' + address)
        print('\n*********************************************************************\n')

if __name__ == '__main__':
    spider = LagouSpider()
    spider.run()
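One small robustness improvement to the entry point: quit the driver even if the run fails partway through, so no stray browser or ChromeDriver process is left behind:

if __name__ == '__main__':
    spider = LagouSpider()
    try:
        spider.run()
    finally:
        driver.quit()   # shut down the browser and the ChromeDriver process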

(6) The scraped results:
[Screenshots: sample output of the scraped Python job listings]
I tried a rotating User-Agent, complete request headers, and proxy IPs, and none of them got past Lagou's anti-scraping measures, so Selenium was the only approach that worked for me. If the code has any shortcomings, feedback and suggestions are very welcome!
