Scrapy: simulating login to scrape a Douban doulist (collection)

We take Douban as our example: to scrape data from a doulist (collection), we must log in first.
Open DevTools (F12) before logging in and tick Preserve log. After you log in, find the entry named basic under the XHR tab; this is the login request.
General contains the request URL and the method.
Request Headers contains the request headers and the cookie: login_start_time=;bid=
Form Data contains the submitted form fields.
From this we can see that when logging in to Douban, the browser sends a POST request to https://accounts.douban.com/j/mobile/login/basic. The form data sent to the server contains ck, name, password, remember and ticket, where ck and ticket are fixed (here empty) values.
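Before writing the spider, we can sketch what the browser actually transmits using only the standard library. The credentials below are placeholders, and the printed body is the URL-encoded form exactly as it travels in the POST request:

```python
from urllib.parse import urlencode

# The same form fields the browser sends to the login endpoint.
# "your_phone" / "your_password" are placeholders for real credentials;
# ck and ticket stay empty, matching what the browser submits.
login_form = {
    "ck": "",
    "name": "your_phone",
    "password": "your_password",
    "remember": "true",
    "ticket": "",
}

# POST bodies are sent URL-encoded, e.g. "ck=&name=your_phone&..."
body = urlencode(login_form)
print(body)
```

This is the same dict we will later hand to Scrapy's FormRequest, which performs the encoding for us.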

Now let's implement the login in our crawler. For convenience, Scrapy provides the FormRequest class, dedicated to submitting forms; it extends the base Request class with a formdata argument that accepts the form data as a dict.

1 Create the project

scrapy startproject douban

2 Define the item

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    author = scrapy.Field()

3 Configure settings.py and add a User-Agent

BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douban (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
FEED_EXPORT_ENCODING = 'utf-8'

4 The spider

from scrapy import Request, FormRequest
from scrapy.spiders import Spider
from douban.items import DoubanItem
import json


class DoubanSpider(Spider):
    name = 'douban'

    def start_requests(self):
        return [Request(url='https://movie.douban.com', meta={'cookiejar': 1}, callback=self.post_login)]

    def post_login(self, response):
        # Submit the login form; ck and ticket are left empty,
        # matching what the browser sends.
        return FormRequest(
            url='https://accounts.douban.com/j/mobile/login/basic',
            method='POST',
            formdata={
                "ck": "",
                "name": "185255****",
                "password": "****",
                "remember": "true",
                "ticket": ""
            },
            meta={'cookiejar': response.meta['cookiejar']},
            dont_filter=True,
            callback=self.parse
        )



    def parse(self, response):
        res = json.loads(response.text)
        if res["status"] == "success":
            url = "https://www.douban.com/doulist/130497364/"
            yield Request(url=url, meta={'cookiejar': 1}, callback=self.call)

    def call(self, response):
        # Dump the raw page to a file for debugging
        with open("1.txt", 'a+', encoding='utf-8') as f:
            f.write(response.text)
        doulist = response.xpath("//div[@class='doulist-item']")

        for entry in doulist:
            item = DoubanItem()
            item['title'] = entry.xpath(".//div[@class='title']/a/text()").extract_first(default='').strip()
            item['author'] = entry.xpath(".//div[@class='abstract']/text()").extract_first(default='').strip()
            yield item
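The extraction logic in call() can be tried offline. Here is a sketch using the standard library's ElementTree on a hand-written fragment that mirrors the doulist markup assumed by the spider's XPath expressions (real Douban pages are more complex; Scrapy's Selector accepts these same path expressions plus full XPath):

```python
import xml.etree.ElementTree as ET

# Invented minimal fragment mirroring the structure the spider expects.
html = """
<body>
  <div class="doulist-item">
    <div class="title"><a>Some Book</a></div>
    <div class="abstract">Some Author</div>
  </div>
</body>
"""

root = ET.fromstring(html)
items = []
for entry in root.findall(".//div[@class='doulist-item']"):
    title = entry.find(".//div[@class='title']/a").text.strip()
    author = entry.find(".//div[@class='abstract']").text.strip()
    items.append({"title": title, "author": author})

print(items)  # [{'title': 'Some Book', 'author': 'Some Author'}]
```

The .strip() calls matter because Douban's title text is padded with whitespace and newlines.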

5 Run the spider

scrapy crawl douban -o douban.json
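The -o flag makes Scrapy's feed exporter write the yielded items as a JSON array, one object per item (FEED_EXPORT_ENCODING = 'utf-8' in settings.py keeps Chinese text readable instead of \u escapes). A sketch of loading the export back, using an invented sample record in place of the real file:

```python
import json

# Invented sample of what douban.json looks like after a crawl:
# a JSON array with one object per yielded DoubanItem.
sample = '[{"title": "Some Book", "author": "Some Author"}]'

records = json.loads(sample)
for rec in records:
    print(rec["title"], "-", rec["author"])
```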

https://blog.csdn.net/qq_42293758/article/details/87925623
