Scrapy Crawler Basics

1. Getting Started

Create a project:

scrapy startproject my_one_project    # command to create the project
cd my_one_project                     # enter the project directory first; later commands run inside it
Command to run the spider: scrapy crawl tk

Create test.py under spiders/

        The name attribute is what the `scrapy crawl tk` command uses at runtime


# spiders script
import scrapy

class TkSpider(scrapy.Spider):
    name = 'tk'                    # run the spider with: scrapy crawl tk
    start_urls = ['https://www.baidu.com/']

    def parse(self, response, **kwargs):
        print(1111)
        print(response.text)

When you run it:

[scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://www.baidu.com/>

So:

In settings.py:

To be able to request the Baidu URL, set:
ROBOTSTXT_OBEY = False

To cut down the log output, set:
LOG_LEVEL = 'ERROR'   # severity: CRITICAL > ERROR > WARNING > INFO > DEBUG (at ERROR, INFO and DEBUG messages are hidden)

Run it again.


2. Making requests in start_requests

The explicit start_requests method in the code below is optional; with or without it the effect is the same, but writing it lets you prepare headers or cookies before the request is sent.

(Before parse executes, Scrapy actually calls start_requests, which is where the requests are issued.)

Writing start_requests yourself lets you:

  1. Perform extra work before the request goes out, e.g. attach a cookie value or headers.
     Once the cookie is passed along, the request gets a usable response.
  2. Inspect things after the request object is created, e.g. the headers this request will use
     (the actual send happens deeper inside Scrapy; here you can only add header entries).

This enables login-on-demand: subsequent requests carry the latest cookie, or a token from the login response.

Create a new method, e.g. do_login(), and move the following into it:

   for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

Then, in the original start_requests, use scrapy.FormRequest to perform the login request and set do_login() as its callback.

After that the new headers or cookies are in effect.

Example:

A. This version cannot fetch the data:


# spiders script
import scrapy

class TkSpider(scrapy.Spider):
    name = 'tk'                         # run the spider with: scrapy crawl tk
    # allowed_domains = ['17k.com']       # domains the spider is allowed to crawl
    start_urls = ['http://124.223.33.41:7081/api/mgr/sq_mgr/?action=list_course&pagenum=1&pagesize=20']

    def start_requests(self):
        login_url = 'http://124.223.33.41:7081/api/mgr/loginReq'
        # yield scrapy.FormRequest(
        #     url=login_url,
        #     formdata={'username': 'auto', 'password': 'sdfsdfsdf'},
        #     callback=self.do_login
        # )

        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def do_login(self, response):
        print("-----------", response.text)
        # print("-----------", response)
        '''
        After a successful login, hand off to parse;
        the cookie middleware automatically carries the cookie for us.
        '''
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response, **kwargs):
        print(response.text[:400])
        print(response.request.cookies)     # note: does not show the cookies actually sent

B. The version below performs the login first and carries the cookie along, so it can fetch the data:


# spiders script
import scrapy

class TkSpider(scrapy.Spider):
    name = 'tk'                         # run the spider with: scrapy crawl tk
    # allowed_domains = ['17k.com']       # domains the spider is allowed to crawl
    start_urls = ['http://124.223.33.41:7081/api/mgr/sq_mgr/?action=list_course&pagenum=1&pagesize=20']

    def start_requests(self):
        login_url = 'http://124.223.33.41:7081/api/mgr/loginReq'
        yield scrapy.FormRequest(
            url=login_url,
            formdata={'username': 'auto', 'password': 'sdfsdfsdf'},
            callback=self.do_login
        )

    def do_login(self, response):
        print("-----------", response.text)
        # print("-----------", response)
        '''
        After a successful login, hand off to parse;
        the cookie middleware automatically carries the cookie for us.
        '''
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response, **kwargs):
        print(response.text[:400])
        print(response.request.cookies)     # this does not show the cookies actually sent with the request

3. Downloader middleware configuration

settings.py contains this setting:

DOWNLOADER_MIDDLEWARES = {
    # "my_one_project.middlewares.MyOneProjectDownloaderMiddleware": 544,
    "project_name.module_name.CustomClassName": 543,    # the number controls the middleware's execution order
}

With this configured, the custom class is executed before each request. It is usually written in middlewares.py.
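As a sketch (FirstMiddleware and SecondMiddleware are hypothetical names): with two middlewares registered, the one with the lower number sits closer to the engine, so its process_request runs earlier and its process_response runs later:

```python
# settings.py fragment — hypothetical class names for illustration
DOWNLOADER_MIDDLEWARES = {
    "my_one_project.middlewares.FirstMiddleware": 100,   # lower number: process_request runs first
    "my_one_project.middlewares.SecondMiddleware": 200,  # higher number: process_request runs second
}
```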

3.1. A first look at downloader middleware

Deliberately use a broken URL and you can observe the order in which the downloader middleware methods execute.

About `return request`: if a method returns a request, that request gets scheduled again. So if you add `return request` and the URL is broken, it keeps looping and re-requesting forever.
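A minimal sketch of such a class (OrderDemoMiddleware is a hypothetical name): each hook just prints its own name, so registering it in settings.py and crawling a bad URL shows the call order. The hook methods themselves need no scrapy import:

```python
# middlewares.py — hypothetical demo class for watching the call order
class OrderDemoMiddleware:
    def process_request(self, request, spider):
        print("process_request")    # runs before the request is sent
        return None                 # None: continue through the middleware chain

    def process_response(self, request, response, spider):
        print("process_response")   # runs when a response comes back
        return response

    def process_exception(self, request, exception, spider):
        print("process_exception")  # runs when the download fails (e.g. a broken URL)
        # returning request here would re-schedule it — with a broken URL that loops forever
        return None
```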

3.2. Example 1: setting request headers and cookies in a downloader middleware class

Uses process_request, which is called before each request is sent.

1. In middlewares.py, add a custom class:

class CustomHeadersMiddleware:
    def process_request(self, request, spider):
        # modify the request headers
        request.headers['User-Agent'] = 'My Custom User Agent'
        request.headers['Referer'] = 'https://example.com'  # add a Referer

        # set cookies (copied from the browser after logging in to 17k.com)
        cookies = 'GUID=6dbc18ae-3e1a-47cc-8fdd-cfbbb8d29fae; sajssdk_2015_cross_new_user=1; Hm_lvt_9793f42b498361373512340937deb2a0=1724903619,1727255561; HMACCOUNT=395943C30BE9D378; c_channel=0; c_csc=web; acw_sc__v2=66f536769a4e77584871dbebc1a65686278ddfec; ssxmod_itna=muDtGKiK0IeIxWq0LqYKOPCT3Mtd5D74HtRmRtLDBL84iNDnD8x7YDvmIN34pb4pDbA4OF=nxIxDkrQn2rL/lmSNoDU4i8DCur43bDen=D5xGoDPxDeDACqGaDb4Dr2qqGPc0EkH=ODpxGrDlKDRx07Vg5DWEFKaebkhDDzwQbPoWDD4xi3f=HFP/yhDivquarPfxG1DQ5Dsx2Lr4DC03kYg=yEERg3m=BGDCKDjE7Ck1YDUn9zcPKvVC03oCGe/mgNLBGqkD2x5eRD/nuxtWDyoC2DrmrYtnix7QVPhDDWdjGeSqhDD; ssxmod_itna2=muDtGKiK0IeIxWq0LqYKOPCT3Mtd5D74HtRmRRD8T687uEDGNLPGaf30ssIx8OrPk5t7drYO/GmO7YC5b42h=rBRmQhU3ijInPthaFB6B=2z1taqFCBKMTAueiZpAZOBK1MZzpmPd86SA8NMCf+qi8fqZwdMtLA5X+dcKkfDXRtcIrA3poUZCRE4UbWfiRiw4x+N6hrOUCvF=wactl3s33qUGYxVKl0ebSI5sT5cK7742urC7Pth0QoIm9vwyqAp2poc013oVMUyba0P2g5hzlOR2PWxxc4/tl=y5LACaPoIKj4m9RKs0muYKD8ILWhEe0UjNN1qsn6FpvCe2003h0Xq40n23vm5tgfUn20jDaDG2Cr3/0vteE9hOzrKWOP4Td1hORR32rYt9qDqeGRQcoNvRDD3dT54tPiEobdbx3DDFqD+hmiB+4Dh1/oV7ca3D===; accessToken=avatarUrl%3Dhttps%253A%252F%252Fcdn.static.17k.com%252Fuser%252Favatar%252F18%252F98%252F90%252F96139098.jpg-88x88%253Fv%253D1650527904000%26id%3D96139098%26nickname%3D%25E4%25B9%25A6%25E5%258F%258BqYx51ZhI1%26e%3D1742898500%26s%3D93ee3faf573907f7; tfstk=g-IKgtfny5VnWqHZ4baMrG3ZupegsgBFt6WjqQAnP1COU_nnx3xHPLO9eXthFW5R2C16t6bhE1HFXsnoxWzFF0KkVSV0moXeL3-7iH1JdhleEp9QqUiWC9r0VxMLmoXFdvvQnOZ0U2G10IOWVeTQfCOwd39WO_O65K9mO3t5NRh6LKlIRLTB5lOwF3O5N3GqT7dsdQmRz_GxjKhTHci1XpLpw0AZ2uykKedfdCFbGGnWJI6B60N0qD3ewCSbsPBNaw1HQ_EY635NP_9R1fNk8T_O6pC3MbRlY9jJQ9HqmOLJ9NsBWYi5CHRRiaCL92RC-1L2CFM80dfD19SCW8l1dsApAdTgc-B6PNjeuMVsvn_VKHb5ar0wvT_BXgJlmiU38jx6T2eTB4uyRd-naHIvCZiVzdd0pu3rzFywBI2TB4uyRdJ9iJFKz48aQ; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2296139098%22%2C%22%24device_id%22%3A%221922d7043b49b1-0aa4e0296c703d-26001151-2073600-1922d7043b5f1a%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%7D%2C%22first_id%22%3A%226dbc18ae-3e1a-47cc-8fdd-cfbbb8d29fae%22%7D; Hm_lpvt_9793f42b498361373512340937deb2a0=1727346501'
        # note: cookies must be passed as a dict; split only on the first '=' because values may contain '='
        cookie_dict = {i.split('=', 1)[0].strip(): i.split('=', 1)[1].strip() for i in cookies.split(';')}
        # print(cookie_dict) # {'GUID': '6db....d29fae', 'sajssd....._user': '1', 'Hm_lvt_9...b2a0': '172...561', ...}
        request.cookies = cookie_dict

        return None  # returning None lets the request continue through the chain
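The dict conversion can be checked on its own; this sketch uses a short made-up cookie string (real values come from the browser) and splits each pair only on the first '=' so that values containing '=' survive intact:

```python
# made-up cookie header for illustration
cookies = 'GUID=6dbc18ae; c_channel=0; token=a=b'   # the value 'a=b' itself contains '='

# split each pair only on the first '=' so values containing '=' stay whole
cookie_dict = {i.split('=', 1)[0].strip(): i.split('=', 1)[1].strip()
               for i in cookies.split(';')}
print(cookie_dict)  # → {'GUID': '6dbc18ae', 'c_channel': '0', 'token': 'a=b'}
```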

2. Configure it in settings.py:

DOWNLOADER_MIDDLEWARES = {
   # "my_one_project.middlewares.MyOneProjectDownloaderMiddleware": 544,
    'my_one_project.middlewares.CustomHeadersMiddleware': 543,  # the number controls execution order
}

3. test.py under spiders:


# spiders script
import scrapy

class TkSpider(scrapy.Spider):
    name = 'tk'                         # run the spider with: scrapy crawl tk
    allowed_domains = ['17k.com']       # domains the spider is allowed to crawl
    start_urls = ['https://user.17k.com/ck/user/myInfo/96139098?bindInfo=1&appKey=2406394919']

    def start_requests(self):
        for url in self.start_urls:
            request = scrapy.Request(url=url, callback=self.parse, headers={"name": "taoke"})
            print("------", request.headers)
            yield request

    def parse(self, response, **kwargs):
        print("response; msg: success means it worked", response.text[:120])
        print(response.request.headers)     # type: <class 'scrapy.http.headers.Headers'>

Run from the command line: scrapy crawl tk

3.3. Example 2: downloader middleware that sets a random User-Agent header

a. Add a list of user agents in settings.py:
USER_AGENTS_LIST = [ 
"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)", 
"Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
"Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
"Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
"Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5" ]
b. Add a custom class in middlewares.py

In process_request, pick a random user agent from the settings list before each request:

import random
from your_project_name.settings import USER_AGENTS_LIST  # mind the import path; ignore PyCharm's warning (to avoid it, open the project directory itself in PyCharm)

class UserAgentMiddleware(object):
    def process_request(self, request, spider):
        user_agent = random.choice(USER_AGENTS_LIST)
        request.headers['User-Agent'] = user_agent

Define process_response to see the User-Agent each request actually used:

class UserAgentMiddleware(object):
    # ....
        
    def process_response(self, request, response, spider):
        print("process_response-----------")
        print(request.headers['User-Agent'])
        return response
c. Configure it in settings.py
DOWNLOADER_MIDDLEWARES = {
   'your_project_name.middlewares.UserAgentMiddleware': 543,
}

Each time you run it, you can see a different User-Agent was used.
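Since random.choice picks uniformly, over many requests each entry comes up about equally often; a standalone check with stand-in strings:

```python
import random
from collections import Counter

UA_LIST = ["UA-1", "UA-2", "UA-3"]   # stand-ins for the real USER_AGENTS_LIST entries
counts = Counter(random.choice(UA_LIST) for _ in range(3000))
print(counts)   # each entry is chosen roughly 1000 times
assert sum(counts.values()) == 3000
```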

3.4. Example 3: using a proxy for requests

a. In middlewares.py, add a separate class like the one below; it must also be registered in settings.py (not repeated here).

class ProxyMiddleware:
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http://127.0.0.1:8888'  # note: the value must start with http:// or https://
        return None  # the return can be omitted

Here I'm pointing at a local Fiddler instance, so requests pass through Fiddler.
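A variation, sketched with a hypothetical PROXY_POOL: pick a proxy per request. Scrapy's built-in HttpProxyMiddleware understands credentials embedded in the proxy URL, and the class itself needs no scrapy import:

```python
import random

# hypothetical pool — replace with real proxies
PROXY_POOL = [
    'http://127.0.0.1:8888',
    'http://user:pass@10.0.0.2:3128',   # credentials embedded in the URL
]

class RandomProxyMiddleware:
    def process_request(self, request, spider):
        # attach a randomly chosen proxy to this request's meta
        request.meta['proxy'] = random.choice(PROXY_POOL)
        return None
```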
