(Notes) Data Collection Basics 05

20240418

Usage of Downloader Middleware

Downloader Middleware, the download middleware, is the processing module that sits between Scrapy's Requests and Responses.

When the Scheduler takes a Request out of the queue and sends it to the Downloader to be downloaded, that Request passes through the Downloader Middleware. Likewise, when the Downloader finishes downloading the Request and the resulting Response is returned to the Spider, the Response passes through the Downloader Middleware again.

Core methods

Each Downloader Middleware is a class that defines one or more methods; the three core methods are:
process_request(request, spider)
process_response(request, response, spider)
process_exception(request, exception, spider)
Implementing at least one of these methods is enough to define a Downloader Middleware.

process_request(request, spider)

Before the Scrapy engine schedules a Request to the Downloader, the process_request() method is called. In other words, at any point after a Request is scheduled out of the queue and before the Downloader executes the download, we can use process_request() to process the Request. The return value must be one of None, a Response object, or a Request object, or the method may raise an IgnoreRequest exception.
process_request() takes two arguments:
request: the Request object, i.e. the Request being processed;
spider: the Spider object, i.e. the Spider this Request belongs to.
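A minimal sketch of a middleware that only implements process_request() (the class name and the extra header are made up for illustration, not part of the project below):

class DemoRequestMiddleware:
    # A hypothetical middleware used only to illustrate process_request().

    def process_request(self, request, spider):
        # Attach an extra header to every outgoing Request.
        request.headers.setdefault('X-Demo', 'downloader-middleware')
        # Returning None lets the Request continue through the remaining
        # middlewares and on to the Downloader.
        return None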

process_response(request, response, spider) 

After the Downloader executes the Request and downloads it, the corresponding Response is produced, and the Scrapy engine sends this Response to the Spider for parsing. Before it is sent, we can use the process_response() method to process the Response. The return value must be either a Request object or a Response object, or the method may raise an IgnoreRequest exception.
process_response() takes three arguments:
request: the Request object, i.e. the Request this Response corresponds to;
response: the Response object, i.e. the Response being processed;
spider: the Spider object, i.e. the Spider this Response belongs to.
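A hedged sketch of process_response(): returning a Request reschedules it instead of handing the Response to the Spider, while returning the Response lets processing continue (the class name and the 403 check are illustrative only):

class DemoResponseMiddleware:
    # A hypothetical middleware used only to illustrate process_response().

    def process_response(self, request, response, spider):
        if response.status == 403:
            # Returning a Request puts it back into the scheduling queue
            # instead of passing the Response on to the Spider.
            return request.replace(dont_filter=True)
        # Returning the Response lets it continue towards the Spider.
        return response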

process_exception(request, exception, spider)

When the Downloader or a process_request() method raises an exception, for example an IgnoreRequest exception, the process_exception() method is called. The return value must be one of None, a Response object, or a Request object.
process_exception() takes three arguments:
request: the Request object, i.e. the Request that caused the exception;
exception: the Exception object, i.e. the exception that was raised;
spider: the Spider object, i.e. the Spider this Request belongs to.
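A hedged sketch of process_exception() that retries a Request after a download timeout (the class name is illustrative; TimeoutError comes from Twisted, which Scrapy is built on):

from twisted.internet.error import TimeoutError

class DemoExceptionMiddleware:
    # A hypothetical middleware used only to illustrate process_exception().

    def process_exception(self, request, exception, spider):
        if isinstance(exception, TimeoutError):
            spider.logger.debug('Timeout on %s, rescheduling', request.url)
            # Returning a Request stops the exception chain and re-enqueues it.
            return request.replace(dont_filter=True)
        # Returning None lets other middlewares handle the exception.
        return None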

Hands-on project

Create a new project with the following command:
scrapy startproject scrapydownloadertest

This creates a Scrapy project named scrapydownloadertest. Enter the project directory and create a Spider with the following command:

scrapy genspider httpbin httpbin.org
This creates a Spider named httpbin, whose source code is as follows:
import scrapy

class HttpbinSpider(scrapy.Spider):
    name = 'httpbin'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/']

    def parse(self, response):
        pass

Next, change start_urls to ['http://httpbin.org/get']. Then add a line of log output to the parse() method that prints the text attribute of the response variable, so we can see the Request information Scrapy sends. The modified Spider looks like this:

import scrapy

class HttpbinSpider(scrapy.Spider):
    name = 'httpbin'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/get']

    def parse(self, response):
        self.logger.debug(response.text)

Next, run this Spider with the following command:

scrapy crawl httpbin

Scrapy's output includes the Request information that Scrapy sent, as shown below:

{"args": {}, 
  "headers": {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", 
    "Accept-Encoding": "gzip,deflate,br", 
    "Accept-Language": "en", 
    "Connection": "close", 
    "Host": "httpbin.org", 
    "User-Agent": "Scrapy/1.4.0 (+http://scrapy.org)"
  }, 
  "origin": "60.207.237.85", 
  "url": "http://httpbin.org/get"
}

Looking at the Headers, the Request sent by Scrapy uses the User-Agent Scrapy/1.4.0 (+http://scrapy.org), which is set by Scrapy's built-in UserAgentMiddleware. Its source code is as follows:

from scrapy import signals

class UserAgentMiddleware(object):
    def __init__(self, user_agent='Scrapy'):
        self.user_agent = user_agent

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.settings['USER_AGENT'])
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        return o

    def spider_opened(self, spider):
        self.user_agent = getattr(spider, 'user_agent', self.user_agent)

    def process_request(self, request, spider):
        if self.user_agent:
            request.headers.setdefault(b'User-Agent', self.user_agent)

In from_crawler(), the middleware first tries to read USER_AGENT from the settings and passes it to __init__() as the user_agent argument. If USER_AGENT is not set, user_agent defaults to the string 'Scrapy'. Our new project does not set USER_AGENT, so user_agent here is 'Scrapy'. Then, in process_request(), the user_agent value is set as the User-Agent entry of the request's headers, which is how the User-Agent gets set. In other words, the User-Agent is set by this Downloader Middleware's process_request() method.

There are two ways to change the User-Agent of a request: modify the USER_AGENT setting, or change it in a Downloader Middleware's process_request() method.

The first method is very simple: just add a USER_AGENT definition to settings.py:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'

This is generally the recommended way to set it. But if you want more flexibility, such as a random User-Agent, you need a Downloader Middleware. So next we use a Downloader Middleware to implement a random User-Agent.

Add a RandomUserAgentMiddleware class to middlewares.py, as shown below:

import random

class RandomUserAgentMiddleware():
    def __init__(self):
        self.user_agents = ['Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
            'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2',
            'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:15.0) Gecko/20100101 Firefox/15.0.1'
        ]

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)

In the class's __init__() method we define three different User-Agents in a list. Then we implement process_request(); it has a request parameter, and we can modify the request's attributes directly. Here we set the User-Agent entry of the request's headers attribute to a randomly chosen User-Agent, and with that the Downloader Middleware is written.

However, to make it take effect we still need to enable this Downloader Middleware. In settings.py, uncomment DOWNLOADER_MIDDLEWARES and set it as follows:

DOWNLOADER_MIDDLEWARES = {'scrapydownloadertest.middlewares.RandomUserAgentMiddleware': 543,}
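The number 543 is the middleware's priority. The built-in UserAgentMiddleware uses priority 500 by default, so its process_request() runs first and our middleware then overwrites the header it set. If you prefer to be explicit, you can also disable the built-in middleware; a hedged sketch of settings.py:

DOWNLOADER_MIDDLEWARES = {
    # Disable Scrapy's built-in User-Agent middleware.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # Enable our random User-Agent middleware.
    'scrapydownloadertest.middlewares.RandomUserAgentMiddleware': 543,
}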

Now re-run the Spider, and you can see that the User-Agent has been successfully changed to one randomly chosen from the list we defined:

{"args": {}, 
  "headers": {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", 
    "Accept-Encoding": "gzip,deflate,br", 
    "Accept-Language": "en", 
    "Connection": "close", 
    "Host": "httpbin.org", 
    "User-Agent": "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)"
  }, 
  "origin": "60.207.237.85", 
  "url": "http://httpbin.org/get"
}

So by implementing a Downloader Middleware and using its process_request() method, we have successfully set a random User-Agent.

Downloader Middleware also has the process_response() method. After the Downloader downloads a Request, it produces a Response, and the Scrapy engine then sends that Response back to the Spider for processing. Before the Response is sent to the Spider, we can likewise use process_response() to process it. For example, let's modify the Response's status code by adding the following code to RandomUserAgentMiddleware:

def process_response(self, request, response, spider):
    response.status = 201
    return response

We change the response object's status attribute to 201 and then return the response; this modified Response is what gets sent to the Spider.

To print the modified status code in the Spider, add the following output statement to the parse() method:

self.logger.debug('Status Code: ' + str(response.status))

After re-running, the console prints the following:

[httpbin] DEBUG: Status Code: 201

As we can see, the Response's status code was successfully modified. So to process a Response, use the process_response() method.

There is also the process_exception() method, which is used to handle exceptions. If exception handling is needed, this is the method to implement. It is used less often, so it is not demonstrated with an example here.

Three ways to define request headers

Method 1:

Just add a USER_AGENT definition to settings.py:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'

Method 2:

To set a random User-Agent, you need a Downloader Middleware. Add a RandomUserAgentMiddleware class to middlewares.py:

import random

class RandomUserAgentMiddleware():
    def __init__(self):
        self.user_agents = ['Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
            'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2',
            'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:15.0) Gecko/20100101 Firefox/15.0.1'
        ]

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)

To make it take effect, enable this Downloader Middleware: in settings.py, uncomment DOWNLOADER_MIDDLEWARES and set it as follows:

 DOWNLOADER_MIDDLEWARES = {'scrapydownloadertest.middlewares.RandomUserAgentMiddleware': 543,}

Method 3:

Headers can be changed globally by modifying them directly in settings, as in the sketch below.
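A hedged sketch of a global header change in settings.py, using Scrapy's DEFAULT_REQUEST_HEADERS setting (the header values here are illustrative):

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en,zh-CN;q=0.9,zh;q=0.8',
}
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36'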

For particular requests that need special treatment, pass the headers when constructing the Request:

​
import scrapy

class HttpbinSpider(scrapy.Spider):
    name = "httpbin"
    allowed_domains = ["httpbin.org"]

    # start_urls = ["http://httpbin.org/get"]
    #
    # def parse(self, response):
    #     print(response.text)

    def start_requests(self):
        headers = {
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
            "Accept-Language": "en,zh-CN;q=0.9,zh;q=0.8",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "Pragma": "no-cache",
            "Sec-Fetch-Dest": "document",
            "Sec-Fetch-Mode": "navigate",
            "Sec-Fetch-Site": "same-origin",
            "Sec-Fetch-User": "?1",
            "Upgrade-Insecure-Requests": "1",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
            "sec-ch-ua": "\"Google Chrome\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\"",
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": "\"Windows\""
        }
        # Pass the headers only for this particular Request.
        yield scrapy.Request('http://httpbin.org/get', callback=self.demo, headers=headers)

    def demo(self, response):
        # httpbin echoes the request, so the custom headers are visible here.
        print(response.text)

Adding a tunnel proxy

小象代理 (Xiaoxiang proxy)

Adding proxies from an IP pool

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy import signals
import requests
# useful for handling different item types with a single interface
# from itemadapter import is_item, ItemAdapter
import logging
import aiohttp

class FangdichanSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
# import base64
# proxyUser = "963053782840004608"
# proxyPass = "sdwjPycR"
# proxyHost = "http-short.xiaoxiangdaili.com"
# proxyPort = 10010
#
# proxyServer = "http://%(host)s:%(port)s" % {
#         "host": proxyHost,
#         "port": proxyPort
#     }
# proxyAuth = "Basic " + base64.urlsafe_b64encode(bytes((proxyUser + ":" + proxyPass), "ascii")).decode("utf8")
proxypool_url = 'http://127.0.0.1:5555/random'
logger = logging.getLogger('middlewares.proxy')
class ProxyMiddleware(object):
    async def process_request(self, request, spider):
        async with aiohttp.ClientSession() as client:
            response = await client.get(proxypool_url)
            if not response.status == 200:
                return
            proxy = await response.text()
            logger.debug(f'set proxy {proxy}')
            request.meta['proxy'] = f'https://{proxy}'
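# To enable the ProxyMiddleware above, it still has to be registered in
# settings.py. A hedged sketch (the module path depends on the project name;
# the asyncio reactor is used here because the coroutine-style
# process_request() relies on aiohttp, which needs Scrapy 2.x):
#
#     TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'
#     DOWNLOADER_MIDDLEWARES = {
#         'fangdichan.middlewares.ProxyMiddleware': 543,
#     }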
# class ProxyMiddleware(object):
#
#     # # for Python2
#     # proxyAuth = "Basic " + base64.b64encode(proxyUser + ":" + proxyPass)
#
#     # def process_request(self, request, spider):
#     #         request.meta["proxy"] = proxyServer
#     #         request.headers["Proxy-Authorization"] = proxyAuth
#     #         request.headers["Proxy-Switch-Ip"] = True
#
#     proxypool_url = 'http://127.0.0.1:5555/random'
#     logger = logging.getLogger('middlewares.proxy')
#
#     async def process_request(self, request, spider):
#         async with aiohttp.ClientSession() as client:
#             response = await client.get(self.proxypool_url)
#             if not response.status == 200:
#                 return
#             proxy = await response.text()
#             self.logger.debug(f'set proxy {proxy}')
#             request.meta['proxy'] = f'http://{proxy}'

class FangdichanDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
    proxypool_url = 'http://127.0.0.1:5555/random'
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    # async def process_request(self, request, spider):
    #     async with aiohttp.ClientSession() as client:
    #         response = await client.get(self.proxypool_url)
    #         if not response.status == 200:
    #             return
    #         proxy = await response.text()
    #         self.logger.debug(f'set proxy {proxy}')
    #         request.meta['proxy'] = f'http://{proxy}'
    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        proxypool_url = 'http://127.0.0.1:5555/random'
        proxy = requests.get(proxypool_url).text
        print('http://' + proxy)
        request.meta['proxy'] = 'http://' + proxy
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
# import redis
# from scrapy.downloadermiddlewares.retry import RetryMiddleware
#
# r = redis.Redis(host='127.0.0.1', port=6379,db=1,decode_responses=True)
# from scrapy.downloadermiddlewares.retry import RetryMiddleware, response_status_message
# import logging
# from twisted.internet import defer
# from twisted.internet.error import TimeoutError, DNSLookupError, \
#     ConnectionRefusedError, ConnectionDone, ConnectError, \
#     ConnectionLost, TCPTimedOutError
# from urllib3.exceptions import ProtocolError, ProxyError, ProxySchemeUnknown
# from twisted.web.client import ResponseFailed
# from scrapy.core.downloader.handlers.http11 import TunnelError
# # from versace import settings
# import requests



# class MyRetryMiddleware(RetryMiddleware):
#     logger = logging.getLogger(__name__)
#
#     EXCEPTIONS_TO_RETRY = (defer.TimeoutError, TimeoutError, DNSLookupError,
#                            ConnectionRefusedError, ConnectionDone, ConnectError,
#                            ConnectionLost, TCPTimedOutError, ResponseFailed,
#                            IOError, TunnelError, ProtocolError, ProxyError, ProxySchemeUnknown)
#     proxy_list = []
#     lock = Lock()
#
#     def get_list_name(self, num=10, time=1):
#         self.lock.acquire()
#         if not self.proxy_list:
#             print('IP pool is empty, fetching again...')
#             res_json = requests.get(url=settings.list_name_API.format(num, time)).json()
#             data_list = res_json.get('data')
#             self.proxy_list = ['https://' + i.get('ip') + ':' + str(i.get('port')) for i in data_list]
#             print(self.proxy_list)
#         list_name = self.proxy_list.pop(0) if self.proxy_list else ''
#         self.lock.release()
#         if list_name:
#             self.proxy_list.append(list_name)
#         # print(list_name)
#         return list_name
#
#     def delete_proxy(self, proxy):
#         if proxy in self.proxy_list:
#             self.proxy_list.remove(proxy)
#
#     def process_request(self, request, spider):
#         if 'jd' in spider.name:
#             list_name = request.meta.get('proxy')
#             if not list_name:
#                 list_name = self.get_list_name(num=settings.CONCURRENT_REQUESTS * 2, time=2)
#                 request.meta['proxy'] = list_name
#                 request.headers['Referer'] = 'https://www.jd.com'
#                 if list_name not in self.proxy_list:
#                     self.proxy_list.append(list_name)
#                 # return None
#
#     def process_response(self, request, response, spider):
#         if 'jd' in spider.name:
#             list_name = request.meta.get('proxy')
#             if not response.body:
#                 print(response.body)
#                 self.logger.info('IP blocked, switching proxy IP...')
#                 print(list_name)
#                 self.delete_proxy(list_name)
#                 list_name = self.get_list_name(num=settings.CONCURRENT_REQUESTS * 2, time=2)
#                 request.meta['proxy'] = list_name
#                 request.headers['Referer'] = 'https://www.jd.com'
#                 if list_name not in self.proxy_list:
#                     self.proxy_list.append(list_name)
#                 return request
#
#             if request.meta.get('dont_retry', False):
#                 return response
#             if response.status in self.retry_http_codes:
#                 reason = response_status_message(response.status)
#                 # Remove this proxy
#                 self.delete_proxy(request.meta.get('proxy', False))
#                 self.logger.info('Bad response, retrying with a new proxy IP...')
#                 return self._retry(request, reason, spider) or response
#         else:
#             if response.status in self.retry_http_codes:
#                 reason = response_status_message(response.status)
#                 return self._retry(request, reason, spider) or response
#         return response
#
#     def process_exception(self, request, exception, spider):
#         if isinstance(exception, self.EXCEPTIONS_TO_RETRY) \
#                 and not request.meta.get('dont_retry', False):
#             # Remove this proxy
#             if 'jd' in spider.name:
#                 self.delete_proxy(request.meta.get('proxy', False))
#                 list_name = self.get_list_name(num=settings.CONCURRENT_REQUESTS * 2, time=2)
#                 request.meta['proxy'] = list_name
#                 self.logger.info('Connection error, retrying with a new proxy IP...')


# class ProxyMiddleware(object):
#     # proxypool_url = 'http://127.0.0.1:5555/random'
#     logger = logging.getLogger('middlewares.proxy')
#     def process_request(self, request, spider):
#         if 'proxy' not in request.meta:
#             request.meta['url_num'] = 0
#
#         data = r.rpop("list_name").split('|')
#         # 127.0.0.1:7777|0
#         ip = data[0]
#         num = data[1]
#         print('Adding proxy IP: ' + ip)
#         request.meta['proxy'] = 'http://'+ip.strip('/')
#         request.meta['proxy_num'] = num
#         request.meta['download_timeout'] = 8
#         # request.meta.get('dont_retry', False)
#
#     def process_response(self, request, response, spider):
#         ip = request.meta['proxy'].strip('/')
#         # n = int(request.meta['proxy_num'])
#         print(response.status, 'returning proxy IP to pool', "list_name", ip + '|0')
#         r.lpush("list_name", ip + '|0')
#         return response
#         # if len(requests.text)
#         # if response.status < 400:
#         #     print(1111111, ip +'|'+'0')
#         #     r.lpush("list_name", ip +'|'+'0')
#         #     return response
#         # else:
#         #     n = n + 1
#         #     if n < 3:
#         #         print(3333, ip + '|' + str(n))
#         #         r.lpush("list_name", ip + '|' + str(n))
#         #         print('Request failed,', ip + '|' + str(n))
#         #     else:
#         #         print('Discarding ' + ip)
#         #
#         #     if request.meta['url_num'] > 3:
#         #         r.lpush("error_url", response.url)
#         #         return response
#         #     else:
#         #         request.meta['url_num'] += 1
#         #         return request
#
#     def process_exception(self, request, exception, spider):
#
#         # if isinstance(exception, self.EXCEPTIONS_TO_RETRY) and not request.meta.get('dont_retry', False):
#
#             ip = request.meta['proxy'].strip('/')
#             print(2222222222222222,ip)
#             n = int(request.meta['proxy_num'])
#             n = n + 1
#             if n < 10:
#                 r.lpush("list_name", ip + '|' + str(n))
#                 print('Request failed, returning to pool', ip + '|' + str(n))
#             else:
#                 print(request.meta)
#                 print('Discarding ' + ip)
#             if request.meta['url_num'] > 3:
#                 r.lpush("error_url", request.url)
#                 return 'error'
#             else:
#                 request.meta['url_num'] += 1
#
#             return request

    # async def process_request(self, request, spider):
    #     async with aiohttp.ClientSession() as client:
    #         response = await client.get(self.proxypool_url)
    #         if not response.status == 200:
    #             return
    #         proxy = await response.text()
    #         self.logger.debug(f'set proxy {proxy}')
    #         request.meta['proxy'] = f'http://{proxy}'


Adding cookies

Add the cookie to the request headers.

The spider tool site spidertools.cn provides cookie formatting (turning a raw cookie string into a dict).

Scraping pages whose content is only visible after logging in requires cookies; a minimal sketch follows.
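A minimal sketch of sending cookies with a Request, assuming the raw cookie string has already been split into key/value pairs (for example with the formatter mentioned above); the spider name, URL, and cookie values are placeholders:

import scrapy

class LoginDemoSpider(scrapy.Spider):
    # Hypothetical spider used only to illustrate sending cookies.
    name = 'login_demo'

    def start_requests(self):
        # Cookies copied from a logged-in browser session (placeholder values).
        cookies = {'sessionid': 'xxxx', 'token': 'yyyy'}
        # Passing them via the cookies argument lets Scrapy's CookiesMiddleware
        # manage them; alternatively the raw string can be put into the
        # 'Cookie' entry of a headers dict.
        yield scrapy.Request('http://httpbin.org/cookies',
                             cookies=cookies,
                             callback=self.parse)

    def parse(self, response):
        # httpbin echoes the cookies it received.
        print(response.text)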
