Using Scrapy with an authenticated (logged-in) user session

In the Scrapy docs, there is the following example to illustrate how to use an authenticated session in Scrapy:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return

        # continue scraping with authenticated session...

I've got that working, and it's fine. But my question is: what do you have to do to continue scraping with an authenticated session, as the comment on the last line suggests?


1 Answer


In the code above, the FormRequest that is being used to authenticate has the after_login function set as its callback. This means that the after_login function will be called and passed the page that the login attempt received as a response.

It then checks that you are successfully logged in by searching the page for a specific string, in this case "authentication failed". If it finds that string, the spider ends.
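A substring check like this can be factored into a small helper so the marker string lives in one place. A minimal sketch in plain Python (the helper name and the case-insensitive matching are illustrative additions, not part of the original example):

```python
def login_failed(body, failure_marker="authentication failed"):
    """Return True if the login response body contains the failure marker.

    Matching is case-insensitive so minor markup changes
    ('Authentication Failed') don't slip past the check.
    """
    return failure_marker.lower() in body.lower()

# In after_login you would then write something like:
#     if login_failed(response.body):
#         self.log("Login failed", level=log.ERROR)
#         return
```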

Now, once the spider has got this far, it knows that it has successfully authenticated, and you can start spawning new requests and/or scrape data. So, in this case:

from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

...

def after_login(self, response):
    # check that the login succeeded before going on
    if "authentication failed" in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    # We've successfully authenticated, let's have some fun!
    else:
        return Request(url="http://www.example.com/tastypage/",
                       callback=self.parse_tastypage)

def parse_tastypage(self, response):
    hxs = HtmlXPathSelector(response)
    yum = hxs.select('//img')

    # etc.

If you look here, there's an example of a spider that authenticates before scraping.

In this case, it handles things in the parse function (the default callback of any request).

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    if hxs.select("//form[@id='UsernameLoginForm_LoginForm']"):
        return self.login(response)
    else:
        return self.get_section_links(response)

So, whenever a request is made, the response is checked for the presence of the login form. If the form is there, we know that we need to log in, so we call the relevant function; if it's not present, we call the function that is responsible for scraping the data from the response.

I hope this is clear; feel free to ask if you have any other questions!


Edit:

Okay, so you want to do more than just spawn a single request and scrape it. You want to follow links.

To do that, all you need to do is scrape the relevant links from the page, and spawn requests using those urls. For example:

def parse_page(self, response):
    """ Scrape useful stuff from page, and spawn new requests

    """
    hxs = HtmlXPathSelector(response)
    images = hxs.select('//img')
    # .. do something with them
    links = hxs.select('//a/@href').extract()  # extract() gives strings, not selectors

    # Yield a new request for each link we found
    for link in links:
        yield Request(url=link, callback=self.parse_page)

As you can see, it spawns a new request for every url on the page, and each one of those requests will call this same function with their response, so we have some recursive scraping going on.
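One wrinkle the example glosses over: extracted hrefs are often relative, and the same URL tends to appear on many pages, so a naive recursive spider re-requests pages endlessly. A plain-Python (Python 3) sketch of normalizing and de-duplicating links before spawning requests; the helper name and the seen-set approach are illustrative additions, not part of the answer's code:

```python
from urllib.parse import urljoin

def links_to_follow(base_url, hrefs, seen):
    """Resolve relative hrefs against base_url and skip already-seen URLs.

    `seen` is a set that persists across calls (e.g. an attribute on the
    spider) so each URL is requested at most once.
    """
    for href in hrefs:
        url = urljoin(base_url, href)  # make '/about' absolute
        if url not in seen:
            seen.add(url)
            yield url

seen = set()
out = list(links_to_follow("http://www.example.com/a/",
                           ["/about", "b.html", "/about"], seen))
# out == ["http://www.example.com/about", "http://www.example.com/a/b.html"]
```

In the spider, `base_url` would be `response.url` and each yielded URL would become a new `Request`.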

What I've written above is just an example. If you want to "crawl" pages, you should look into the CrawlSpider class rather than doing things manually.
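What CrawlSpider automates is essentially a frontier-plus-seen-set loop over pages. A framework-free sketch of that loop, with a stubbed fetch function standing in for the real HTTP layer (the function names and toy site are illustrative, not Scrapy API):

```python
from collections import deque

def crawl(start_url, fetch, extract_links, max_pages=100):
    """Breadth-first crawl: fetch pages, queue newly discovered links.

    `fetch(url)` returns a page body; `extract_links(body)` returns the
    absolute URLs found on that page. CrawlSpider does this bookkeeping
    for you via its Rule/LinkExtractor machinery.
    """
    seen = {start_url}
    frontier = deque([start_url])
    pages = {}
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        body = fetch(url)
        pages[url] = body
        for link in extract_links(body):
            if link not in seen:  # each URL fetched at most once
                seen.add(link)
                frontier.append(link)
    return pages

# Toy site: each page body is simply its list of outgoing links.
site = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/"],
    "http://example.com/b": [],
}
pages = crawl("http://example.com/", site.__getitem__, lambda body: body)
# pages covers all three URLs, each fetched exactly once
```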

Ok, so I'm using the scrapy crawl command to run this (Don't know if that matters). After login success, if I call parse_tastypage, it only parses that one page, and then exits. How do I tell it to follow all links and crawl that as well? –   Herman Schaaf  May 1 '11 at 20:04
 
Updated my answer to show an example of spawning multiple requests. –   Acorn  May 1 '11 at 20:16
 
Yes, I am actually using a CrawlSpider in my own code - how would I then do it differently? (without having to explicitly parse the links myself) –   Herman Schaaf  May 1 '11 at 20:17
 
Is there anything in particular that you don't understand about the well commented example that you'd like me to explain? –   Acorn  May 1 '11 at 20:25
 
I posted a new question, that's a bit more specific than my first one - Crawling with an authenticated session in Scrapy –   Herman Schaaf  May 1 '11 at 20:35
 
Link does not work anymore :( –   wrongusername  Oct 3 '12 at 17:49
@wrongusername dead link fixed. –   Acorn  Oct 3 '12 at 18:21
 
Thanks! Any chance you can find the link to the demo spider with login handling page? –   wrongusername Oct 3 '12 at 22:08
 
@wrongusername It was just a link to the example in the crawlspider documentation section: doc.scrapy.org/en/latest/topics/… –   Acorn  Oct 4 '12 at 13:22
 
ahhh I see. Thanks a lot Acorn! :) –   wrongusername  Oct 4 '12 at 14:58