Using Selenium in a Scrapy crawler for user login and cookie hand-off

3.1. Cookies in Scrapy

  • After the user has logged in successfully, send a request to the Baiduyun Club (51baiduyun.com) home page, then in the parse() method extract the cookie information carried by the request
# check the login result
    def parseLoginResPage(self, response):
        # print the login result
        print(f"parseLoginResPage: statusCode = {response.status}, url = {response.url}")
        print(f"text = {response.text}")
        # after a successful login, visit the 51baiduyun.com home page
        yield scrapy.Request(
            url="http://www.51baiduyun.com/",
            headers=self.headerData,
            callback=self.parse,
            dont_filter=True,  # prevent the page from being filtered out as a duplicate
        )

    # regular page-parsing callback
    def parse(self, response):
        print(f"parse: url = {response.url}, meta = {response.meta}")
        # get the Cookie from the request headers, i.e. the cookies carried to the site
        Cookie = response.request.headers.getlist('Cookie')
        print(f'parse: After login CookieReq = {Cookie}')
  • Scrapy's cookie format looks like this:
# E:\Miniconda\Lib\site-packages\scrapy\downloadermiddlewares\cookies.py
# you can see it uses: from scrapy.http.cookies import CookieJar
parse:  After login CookieReq = [b'L3em_2132_saltkey=gSZXPVeG; L3em_2132_lastvisit=1523251829; L3em_2132_sid=Ir7Mht; L3em_2132_lastact=1523255437%09member.php%09logging; L3em_2132_seccode=85931.874f29c987d4fb59f3; L3em_2132_ulastactivity=d666YmbdpNj9Iz%2FNUEi%2BjDvc4WOWgYaPjSlfz9WctVMX7egl2vDA; L3em_2132_auth=8d0cjiUMrZ3s55Jt%2B4ypshxBHoUuNN4Z5e3ExPbzViN5lFcOjNWxZ8sz8vaBOTzYEIK7AHENUH%2F%2Fcw6VVnzLC%2BFvOa12; L3em_2132_lastcheckfeed=1315026%7C1523255437; L3em_2132_checkfollow=1; L3em_2132_lip=183.12.51.62%2C1523255149; L3em_2132_security_cookiereport=c346vrKUIfmjcBwk7YP92rl%2FHJROM1lF0Y2knuvE1PvPfZOvnxad']
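For reference, the raw Cookie request header above is just a "; "-separated list of name=value pairs wrapped in a bytes object. The helper below is not part of the original spider; it is a minimal sketch that decodes such a header back into a dict, assuming the single-header layout shown in the dump:

# Hypothetical helper (not in the original code): decode raw Cookie request
# headers, as returned by headers.getlist('Cookie'), into a {name: value} dict.
def cookie_header_to_dict(cookie_headers):
    result = {}
    for raw in cookie_headers:                      # each entry is one bytes header value
        for pair in raw.decode('utf-8').split('; '):
            name, _, value = pair.partition('=')    # partition keeps any '=' inside the value intact
            result[name] = value
    return result

# Usage inside parse(), reusing the getlist('Cookie') call shown above:
# cookieDict = cookie_header_to_dict(response.request.headers.getlist('Cookie'))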

 

3.2. Cookies in Selenium

  • After logging in successfully with Selenium, retrieve its cookies as follows:
seleniumCookies = spider.browser.get_cookies()
print(f"seleniumCookies = {seleniumCookies}")
  • Selenium's cookie format looks like this:
seleniumCookies = [{'domain': 'www.51baiduyun.com', 'expiry': 1538989361, 'httpOnly': False, 'name': 'CNZZDATA1253365484', 'path': '/', 'secure': False, 'value': '964419069-1523259525-%7C1523259525'}, {'domain': 'www.51baiduyun.com', 'expiry': 1525856539.733429, 'httpOnly': True, 'name': 'L3em_2132_saltkey', 'path': '/', 'secure': False, 'value': 'uL0UL77j'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523307758.631004, 'httpOnly': False, 'name': 'L3em_2132_security_cookiereport', 'path': '/', 'secure': False, 'value': '6bd1%2FSD%2F0OzhXwpZ5fhpBFDHH1WGRAslxA8eGAjOvYKJjvJkwLkc'}, {'domain': 'www.51baiduyun.com', 'expiry': 1525856539.733484, 'httpOnly': False, 'name': 'L3em_2132_lastvisit', 'path': '/', 'secure': False, 'value': '1523261207'}, {'domain': 'www.51baiduyun.com', 'httpOnly': False, 'name': 'L3em_2132_seccode', 'path': '/', 'secure': False, 'value': '120125.68ba4641e97556392b'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523350961.711943, 'httpOnly': False, 'name': 'L3em_2132_sid', 'path': '/', 'secure': False, 'value': 'mBP4sb'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523264840.028978, 'httpOnly': False, 'name': 'L3em_2132_sendmail', 'path': '/', 'secure': False, 'value': '1'}, {'domain': '.51baiduyun.com', 'expiry': 1538989340, 'httpOnly': False, 'name': 'UM_distinctid', 'path': '/', 'secure': False, 'value': '162a9a44e0823b-098677e48fe2be-454c092b-1fa400-162a9a44e094bf'}, {'domain': '.www.51baiduyun.com', 'expiry': 1554800561, 'httpOnly': False, 'name': 'Hm_lvt_79316e5471828e6e10f2df47721ce150', 'path': '/', 'secure': False, 'value': '1523264541'}, {'domain': 'www.51baiduyun.com', 'expiry': 1538989361, 'httpOnly': False, 'name': 'CNZZDATA1253863031', 'path': '/', 'secure': False, 'value': '1393313043-1523261609-%7C1523261609'}, {'domain': '.www.51baiduyun.com', 'expiry': 1554800561, 'httpOnly': False, 'name': 'Hm_lvt_eaefab1768d285abfc718a706c1164f3', 'path': '/', 'secure': False, 'value': '1523264541'}, {'domain': 'www.51baiduyun.com', 'expiry': 1554800558.630797, 'httpOnly': False, 'name': 'L3em_2132_ulastactivity', 'path': '/', 'secure': False, 'value': 'e52eGQjsi80DLGLXvdzm1z0xQ7lmIKuBlBUK8mQlJmAMXr7Ep8D8'}, {'domain': 'www.51baiduyun.com', 'httpOnly': True, 'name': 'L3em_2132_auth', 'path': '/', 'secure': False, 'value': 'be395ZoslCjexHStJKSaOCgvl9krhLvGLWmNm4hRKMH1qZ65gGUlWA5q9KV7veHBRF6hrQxqUiINkF844oiL5hukCNMg'}, {'domain': 'www.51baiduyun.com', 'expiry': 1554800558.630948, 'httpOnly': False, 'name': 'L3em_2132_lastcheckfeed', 'path': '/', 'secure': False, 'value': '2533730%7C1523264825'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523264588.630963, 'httpOnly': False, 'name': 'L3em_2132_checkfollow', 'path': '/', 'secure': False, 'value': '1'}, {'domain': 'www.51baiduyun.com', 'httpOnly': False, 'name': 'L3em_2132_lip', 'path': '/', 'secure': False, 'value': '183.12.51.62%2C1523264610'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523264591.846338, 'httpOnly': False, 'name': 'L3em_2132_checkpm', 'path': '/', 'secure': False, 'value': '1'}, {'domain': '.www.51baiduyun.com', 'httpOnly': False, 'name': 'Hm_lpvt_79316e5471828e6e10f2df47721ce150', 'path': '/', 'secure': False, 'value': '1523264562'}, {'domain': '.www.51baiduyun.com', 'httpOnly': False, 'name': 'Hm_lpvt_eaefab1768d285abfc718a706c1164f3', 'path': '/', 'secure': False, 'value': '1523264562'}, {'domain': 'www.51baiduyun.com', 'expiry': 1523350961.982766, 'httpOnly': False, 'name': 'L3em_2132_lastact', 'path': '/', 'secure': False, 'value': '1523264829%09misc.php%09patch'}]

Conclusion: comparing the two shows that the cookie formats are different; the Selenium cookies must be converted into Scrapy's format before they can be used in Scrapy.
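A minimal standalone sketch of that conversion, assuming the list-of-dicts structure returned by get_cookies() above (the same idea is used later inside the downloader middleware):

# Convert Selenium's list-of-dicts cookies into the flat {name: value}
# mapping that scrapy.Request(cookies=...) expects.
def selenium_cookies_to_scrapy(selenium_cookies):
    return {item['name']: item['value'] for item in selenium_cookies}

# Usage sketch:
# seleniumCookies = spider.browser.get_cookies()
# yield scrapy.Request(
#     url="http://www.51baiduyun.com/",
#     cookies=selenium_cookies_to_scrapy(seleniumCookies),
#     callback=self.parse,
#     dont_filter=True,
# )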

 

Detailed code walkthrough

  • In settings.py, configure the Selenium parameters:
# In settings.py

# ----------- Selenium configuration -------------
SELENIUM_TIMEOUT = 25           # Selenium browser timeout, in seconds
LOAD_IMAGE = True               # whether to load images
WINDOW_HEIGHT = 900             # browser window size
WINDOW_WIDTH = 900
  • In the spider, when generating requests, mark which ones should be downloaded through Selenium:
# In mySpider.py

import scrapy
from scrapy.spiders import CrawlSpider  # base class used by the spider below
import datetime
import re
import random
from PIL import Image

# selenium-related imports
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

# scrapy signal-related imports
from scrapy.utils.project import get_project_settings
from scrapy import signals

# the import below is about to be deprecated, so it is not used
# from scrapy.xlib.pydispatch import dispatcher
# the approach newer Scrapy versions use
from pydispatch import dispatcher

class mySpider(CrawlSpider):
    name = 'baiduyun'
    allowed_domains = ['51baiduyun.com']
    host = "http://www.51baiduyun.com/"

    custom_settings = {
        'LOG_LEVEL':'INFO',
        'DOWNLOAD_DELAY': 1,
        'COOKIES_ENABLED': False,  # enabled by default
        'DOWNLOADER_MIDDLEWARES': {
            # proxy middleware
            'mySpider.middlewares.ProxiesMiddleware': 400,
            # the SeleniumMiddleware defined in middlewares.py
            'mySpider.middlewares.SeleniumMiddleware': 543,
            # disable Scrapy's default user-agent middleware
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        },
    }

    # Chrome is initialized inside the spider, so the browser becomes a spider attribute
    def __init__(self, timeout=30, isLoadImage=True, windowHeight=None, windowWidth=None):
        # read the configuration from settings.py
        self.mySetting = get_project_settings()
        self.timeout = self.mySetting['SELENIUM_TIMEOUT']
        self.isLoadImage = self.mySetting['LOAD_IMAGE']
        self.windowHeight = self.mySetting['WINDOW_HEIGHT']
        self.windowWidth = self.mySetting['WINDOW_WIDTH']
        # create the Chrome instance
        self.browser = webdriver.Chrome()
        if self.windowHeight and self.windowWidth:
            self.browser.set_window_size(self.windowWidth, self.windowHeight)
        self.browser.set_page_load_timeout(self.timeout)  # page-load timeout
        self.wait = WebDriverWait(self.browser, self.timeout)  # explicit wait timeout for elements
        super(mySpider, self).__init__()
        # connect a signal handler: when the spider_closed signal arrives,
        # mySpiderCloseHandle is called to shut down Chrome
        dispatcher.connect(receiver=self.mySpiderCloseHandle,
                           signal=signals.spider_closed
                           )

    # signal handler: close the Chrome browser
    def mySpiderCloseHandle(self, spider):
        print(f"mySpiderCloseHandle: enter ")
        self.browser.quit()

    # entry point of the crawl
    def start_requests(self):
        print("start baiduyun crawler")
        # when generating the request, put the flag that marks whether to download it with Selenium into meta
        yield scrapy.Request(
            # notifications (interaction) page
            url="http://www.51baiduyun.com/home.php?mod=space&do=notice&view=interactive",
            meta={'usedSelenium': True, 'pageType': 'login'},
            callback=self.parseLoginRes,
            errback=self.errorHandle
        )


    # receives the login result
    def parseLoginRes(self, response):
        print(f"parseLoginRes: statusCode = {response.status}, url = {response.url}")
        print(f"parseLoginRes: cookies1 = {response.request.cookies}")
        print(f"parseLoginRes: cookies2 = {response.request.headers.getlist('Cookie')}")
        # after logging in, request the profile page below to test whether the login succeeded
        yield scrapy.Request(
            # profile page
            url="http://www.51baiduyun.com/home.php?mod=spacecp&ac=profile",
            # disallow redirects so the test is unambiguous
            meta={'usedSelenium': False, 'dont_redirect': True},
            callback=self.parseLoginStatusRes,
            errback=self.errorHandle,
            dont_filter=True,
        )


    # analyzes the login-test result
    def parseLoginStatusRes(self, response):
        print(f"parseLoginStatusRes: statusCode = {response.status}, url = {response.url}")
        print(f"parseLoginStatusRes: cookies1 = {response.request.cookies}")
        print(f"parseLoginStatusRes: cookies2 = {response.request.headers.getlist('Cookie')}")
        # get the cookies returned by the server, i.e. the Set-Cookie headers the site sends to the user
        responseCookie = response.headers.getlist('Set-Cookie')
        print(f"parseLoginStatusRes: response.cookie = {responseCookie}")
        print(f"############################################")
        print(f"text = {response.text}")


    # request error handling: print it, write it to a file, or store it in a database
    def errorHandle(self, failure):
        print(f"request error: {failure.value.response}")
  • In the downloader middleware middlewares.py, use Selenium to log the user in, grab the cookies, and hand them over to Scrapy:
# In middlewares.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
import random

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from scrapy.http import HtmlResponse
import time

class SeleniumMiddleware():
    # the middleware is handed the spider object, so it can access the Chrome
    # browser and the other attributes created in the spider's __init__
    def process_request(self, request, spider):
        '''
        Fetch the page with Chrome.
        :param request: the Request object
        :param spider: the Spider object
        :return: an HtmlResponse
        '''
        print(f"now is using chrome to get page ...")
        # the flag in meta decides whether this request is fetched with Selenium
        usedSelenium = request.meta.get('usedSelenium', False)
        if usedSelenium:
            if request.meta.get('pageType', '') == 'login':
                # store the original url first
                originalUrl = request.url
                try:
                    # this will automatically redirect to the login page
                    spider.browser.get(originalUrl)
                    # wait for the username input box to appear
                    usernameInput = spider.wait.until(
                        EC.presence_of_element_located((By.XPATH, "//div[@id='messagelogin']//input[@name='username']"))
                    )
                    time.sleep(2)
                    usernameInput.clear()
                    usernameInput.send_keys("ancoxxxxxxx")   # enter the username

                    passWordElem = spider.browser.find_element_by_xpath("//div[@id='messagelogin']//input[@name='password']")
                    time.sleep(2)
                    passWordElem.clear()
                    passWordElem.send_keys("anco00000000")        # enter the password

                    captchaElem = spider.browser.find_element_by_xpath("//div[@id='messagelogin']//input[@name='seccodeverify']")
                    time.sleep(2)
                    captchaElem.clear()
                    # the captcha is entered manually here
                    # for automatic captcha solving, see this earlier article:
                    # https://blog.csdn.net/zwq912318834/article/details/78616462
                    captcha = input("Enter the captcha\n>").strip()
                    captchaElem.send_keys(captcha)          # enter the captcha

                    # click the login button
                    loginButtonElem = spider.browser.find_element_by_xpath("//div[@id='messagelogin']//button[@name='loginsubmit']")
                    time.sleep(2)
                    loginButtonElem.click()
                    time.sleep(1)
                    seleniumCookies = spider.browser.get_cookies()
                    print(f"seleniumCookies = {seleniumCookies}")
                    # # check whether the expected result page appeared
                    # searchRes = spider.wait.until(
                    #     EC.presence_of_element_located((By.XPATH, "//div[@id='resultsCol']"))
                    # )
                except Exception as e:
                    print(f"chrome user login handle error, Exception = {e}")
                    return HtmlResponse(url=request.url, status=500, request=request)
                else:
                    time.sleep(3)
                    # login succeeded: convert the Selenium cookies into a plain {name: value} dict
                    cookMap = {item['name']: item['value'] for item in seleniumCookies}
                    print(f"cookMap = {cookMap}")
                    # the middleware now rewrites the Request with the converted cookies;
                    # for the Request constructor, see the source:
                    # E:\Miniconda\Lib\site-packages\scrapy\http\request\__init__.py
                    request.cookies = cookMap  # let this Request, now carrying the post-login cookies, continue
                    request.meta['usedSelenium'] = False  # otherwise a 302 redirect of this url would send it back through this Selenium branch
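The snippet above ends inside the login branch; because process_request returns nothing there, Scrapy simply keeps downloading the request, now carrying the post-login cookies, through the normal chain. The remaining branches are not shown in the original post; the sketch below is one possible continuation, under the assumption that other Selenium-flagged pages should be rendered by Chrome and handed back as an HtmlResponse:

            # (sketch only, continuing process_request; not part of the original post)
            else:
                # other pages flagged with usedSelenium: render them with Chrome
                # and return the page source to Scrapy as a finished response
                try:
                    spider.browser.get(request.url)
                    time.sleep(2)
                    return HtmlResponse(url=spider.browser.current_url,
                                        body=spider.browser.page_source,
                                        encoding='utf-8',
                                        request=request,
                                        status=200)
                except TimeoutException:
                    return HtmlResponse(url=request.url, status=500, request=request)
        # requests without the usedSelenium flag fall through to here;
        # returning None lets Scrapy's default downloader handle them
        return None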
  • Selenium's get_cookies() method simply reads the cookie values out of the browser, as the log below shows (this particular log comes from a run against 12306.cn, but the mechanism is the same):
2019-11-26 15:51:36 [selenium.webdriver.remote.remote_connection] DEBUG: GET http://127.0.0.1:53810/session/7d7d525dd0085150cde998e3b3db8e43/cookie {}
2019-11-26 15:51:36 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
seleniumCookies = [{'domain': '12306.cn', 'expiry': 1661154695, 'httpOnly': False, 'name': 'RAIL_EXPIRATION', 'path': '/', 'secure': False, 'value': '1575072866069'}, {'domain': 'www.12306.cn', 'httpOnly': False, 'name': 'BIGipServerotn', 'path': '/', 'secure': False, 'value': '1072693770.38945.0000'}, {'domain': '12306.cn', 'expiry': 1924905600, 'httpOnly': False, 'name': 'RAIL_DEVICEID', 'path': '/', 'secure': False, 'value': 'SmwCzDUBV-4tY28qPyNv2K-xr6xcMtC7yCPt_bOaFfVTU8ZvUbi7gUegKztl6eRf6I2DFIvCPAtqWL-M8kEArWh67UPm4mNACwwEFaq05YZgjPY9S8T5Ea4kzO5168MbL3ePE-YNSCWkGWXpAfTgV83WW1WxNdVU'}, {'domain': 'www.12306.cn', 'httpOnly': False, 'name': 'route', 'path': '/', 'secure': False, 'value': '6f50b51faa11b987e576cdb301e545c4'}, {'domain': 'www.12306.cn', 'httpOnly': False, 'name': 'BIGipServerpool_index', 'path': '/', 'secure': False, 'value': '770703882.43286.0000'}]
cookMap = {'RAIL_EXPIRATION': '1575072866069', 'BIGipServerotn': '1072693770.38945.0000', 'RAIL_DEVICEID': 'SmwCzDUBV-4tY28qPyNv2K-xr6xcMtC7yCPt_bOaFfVTU8ZvUbi7gUegKztl6eRf6I2DFIvCPAtqWL-M8kEArWh67UPm4mNACwwEFaq05YZgjPY9S8T5Ea4kzO5168MbL3ePE-YNSCWkGWXpAfTgV83WW1WxNdVU', 'route': '6f50b51faa11b987e576cdb301e545c4', 'BIGipServerpool_index': '770703882.43286.0000'}
2019-11-26 15:51:41 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.12306.cn/index/index.html> (referer: None)

 
