A first experience with the Scrapy crawler framework: Douban Top 250

Environment

Installing Scrapy

Scrapy crawling steps

Step 1: create the project

Create the Scrapy project

Configure settings.py

Create the spider file (douban_spider.py)

Step 2: define the targets

Open the site

Analyse what to crawl

Define the data structure (items.py)

Step 3: write the spider

Test

Write the parsing logic (the parse() method of douban_spider.py)

Step 4: save the data

Save to a file

Save to a database

Extras: disguising the crawler

IP proxy middleware (middlewares.py)

User-agent middleware (middlewares.py)

Notes

References


Environment

Windows 10 + PyCharm + Python 3.6 + Scrapy 3.2.3

Installing Scrapy

pip install scrapy

Scrapy crawling steps

Step 1: create the project

Step 2: define the targets

Step 3: write the spider

Step 4: save the data

Step 1: create the project

Create the Scrapy project

scrapy startproject douban
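This creates the standard Scrapy project layout (roughly as below; the exact files depend on the Scrapy version):

douban/
    scrapy.cfg            # deployment configuration
    douban/
        __init__.py
        items.py          # data structures
        middlewares.py    # downloader / spider middlewares
        pipelines.py      # item pipelines
        settings.py       # project-wide settings
        spiders/
            __init__.py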

Configure settings.py

settings.py defines the project-wide settings.

robots.txt setting:

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

ROBOTSTXT_OBEY defaults to True, meaning the spider obeys the site's robots.txt. If the content you want is disallowed by robots.txt and you still want to crawl it, set ROBOTSTXT_OBEY = False.
(source: https://www.jianshu.com/p/19c1ea0d59c2)

Crawl speed setting:

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
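The comment above also points to AutoThrottle; as an optional alternative to a fixed DOWNLOAD_DELAY (not covered in the original tutorial), Scrapy can adapt the delay to the server's response times:

# settings.py (optional): let Scrapy throttle requests automatically
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10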

Create the spider file (douban_spider.py)

This file is where the spider's XPath expressions and regular expressions go.

In the console, change into the project's douban/spiders directory and run:

scrapy genspider douban_spider movie.douban.com

Spider name: douban_spider (must not be the same as the project name)

Domain to crawl: movie.douban.com
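genspider creates a skeleton in douban/spiders/douban_spider.py, roughly like the following (generated code; the exact template depends on the Scrapy version). We fill it in during the test step below:

# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'
    allowed_domains = ['movie.douban.com']
    start_urls = ['http://movie.douban.com/']

    def parse(self, response):
        pass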

Step 2: define the targets

This step analyses the target site and the data to crawl, and defines the data structure.

Open the site

E.g. https://movie.douban.com/top250

Analyse what to crawl

The content I need consists of six fields: serial number, movie title, introduction, rating, number of reviews, and the short quote/description.

Use an XPath tool to work out the node paths:

    # serial number
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='pic']//em/text()
    # movie title
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='info']//div[@class='hd']//a//span[@class='title'][1]/text()
    # introduction
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='info']//div[@class='bd']//p[1]/text()
    # rating
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='info']//div[@class='bd']//div[@class='star']//span[@class='rating_num']/text()
    # number of reviews
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='info']//div[@class='bd']//div[@class='star']//span[4]/text()
    # quote/description
//div[@class='article']//ol[@class='grid_view']//li//div[@class='item']//div[@class='info']//div[@class='bd']//p[@class='quote']//span[@class='inq']/text()
    # next page (the paginator sits outside the <li> elements, so match the span directly)
//span[@class='next']/link/@href
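Before writing the spider you can verify these expressions interactively with scrapy shell (a quick sanity check, not part of the original write-up; Douban may refuse the request unless a browser-like USER_AGENT is configured first):

scrapy shell "https://movie.douban.com/top250"
>>> response.xpath("//div[@class='article']//ol[@class='grid_view']//li")       # one selector per movie entry
>>> response.xpath("//div[@class='pic']//em/text()").extract_first()            # serial number of the first entry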

Define the data structure (items.py)

import scrapy

class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # serial number
    serial_number = scrapy.Field()
    # movie title
    movie_name = scrapy.Field()
    # introduction
    introduce = scrapy.Field()
    # rating
    star = scrapy.Field()
    # number of reviews
    evaluate = scrapy.Field()
    # quote/description
    describe = scrapy.Field()

Step 3: write the spider

Test

First check that a request actually returns data (douban_spider.py):

# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    # spider name (must not be the same as the project name)
    name = 'douban_spider'
    # allowed domains
    allowed_domains = ['movie.douban.com']
    # entry URL
    start_urls = ['https://movie.douban.com/top250']
    
    def parse(self, response):
        print(response.text)

Look up your browser's user-agent and set it in settings.py (the default needs to be changed):

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'

Run from the console:

scrapy crawl douban_spider

Set up a launcher file main.py so the spider can be run and its output viewed directly in PyCharm. Create a new .py file named main.py in the douban project directory:

from scrapy import cmdline

# run the spider
cmdline.execute('scrapy crawl douban_spider'.split())

Write the parsing logic (the parse() method of douban_spider.py)

Steps:

  1. Use XPath to locate the list of target nodes;
  2. Extract the needed fields with relative XPath expressions (do not forget the leading dot, .//);
  3. Fill a DoubanItem and yield it to the pipeline;
  4. Parse the "next page" link to paginate automatically.
    # the default parse method (requires "from douban.items import DoubanItem" at the top of douban_spider.py)
    def parse(self, response):
        # check whether the request returned data
        # print(response.text)
        # loop over the movie entries
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']//li")
        for i_item in movie_list:
            # create an item for this entry
            douban_item = DoubanItem()
            # detailed XPath expressions for each field
            douban_item['serial_number'] = i_item.xpath(".//div[@class='item']//div[@class='pic']//em/text()").extract_first()
            douban_item['movie_name'] = i_item.xpath(".//div[@class='item']//div[@class='info']//div[@class='hd']//a//span[@class='title'][1]/text()").extract_first()
            content = i_item.xpath(".//div[@class='item']//div[@class='info']//div[@class='bd']//p[1]/text()").extract()
            # clean up whitespace (note: each pass overwrites 'introduce', so only the last line is kept)
            for i_content in content:
                content_s = "".join(i_content.split())
                douban_item['introduce'] = content_s
            # print(douban_item)
            douban_item['star'] = i_item.xpath(".//div[@class='item']//div[@class='info']//div[@class='bd']//div[@class='star']//span[@class='rating_num']/text()").extract_first()
            douban_item['evaluate'] = i_item.xpath(".//div[@class='item']//div[@class='info']//div[@class='bd']//div[@class='star']//span[4]/text()").extract_first()
            douban_item['describe'] = i_item.xpath(".//div[@class='item']//div[@class='info']//div[@class='bd']//p[@class='quote']//span[@class='inq']/text()").extract_first()
            # yield the item so it reaches the pipelines
            yield douban_item
        # parse the "next page" link and request the following page
        next_link = response.xpath(".//span[@class='next']/link/@href").extract()
        if next_link:
            next_link = next_link[0]
            yield scrapy.Request("https://movie.douban.com/top250" + next_link, callback=self.parse)

Step 4: save the data

Save to a file

Export as JSON:

scrapy crawl douban_spider -o test.json

Export as CSV:

scrapy crawl douban_spider -o test.csv

If the CSV shows garbled characters:

Open the CSV file in Notepad++, change the encoding to UTF-8 BOM, and save.
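Alternatively (an optional tweak not mentioned in the original), Scrapy can write the BOM itself through the FEED_EXPORT_ENCODING setting, so the CSV opens correctly in Excel without a manual re-save:

# settings.py
FEED_EXPORT_ENCODING = 'utf-8-sig'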

Doing the same from main.py:

from scrapy import cmdline

# run the spider
# cmdline.execute('scrapy crawl douban_spider'.split())

# export the data (cmdline.execute does not return, so run only one command at a time)
cmdline.execute('scrapy crawl douban_spider -o test.json'.split())
# cmdline.execute('scrapy crawl douban_spider -o test.csv'.split())

Save to a database

For example MongoDB, MySQL, etc.; a minimal pipeline sketch follows the settings snippet below.

The item pipelines need to be enabled in settings.py:

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'douban.pipelines.DoubanPipeline': 300,
}
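A minimal sketch of a MongoDB pipeline, assuming pymongo is installed and a local MongoDB instance is running; the database/collection names and connection details below are placeholders, not from the original tutorial:

# pipelines.py (sketch)
import pymongo

class DoubanPipeline(object):
    def open_spider(self, spider):
        # connect once when the spider starts (host, port and names are assumptions)
        self.client = pymongo.MongoClient('localhost', 27017)
        self.collection = self.client['douban']['douban_movie']

    def process_item(self, item, spider):
        # convert the Item to a plain dict and insert it
        self.collection.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()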

Extras: disguising the crawler

IP proxy middleware (middlewares.py)

(source: https://www.imooc.com/learn/1017)

Using the Abuyun proxy service as an example:

Set the proxy address on the request (placeholders: your proxy IP and port):

request.meta['proxy'] = 'proxy-ip:port'

The username and password are separated by a colon and base64-encoded:

proxy_name_pass = b'username:password'
encode_pass_name = base64.b64encode(proxy_name_pass)

Set the proxy authorization header (note the space after "Basic"):

request.headers['Proxy-Authorization'] = 'Basic ' + encode_pass_name.decode()

Register the middleware in settings.py (the default priority is fine; the smaller the number, the higher the priority):

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    # 'douban.middlewares.DoubanDownloaderMiddleware': 543,
    'douban.middlewares.my_proxy': 543,
}

Run the spider; if my_proxy appears among the enabled downloader middlewares in the log, the proxy is active and your real IP is hidden.

Full middleware code:

import base64

class my_proxy(object):
    def process_request(self, request, spider):
        # proxy server address (placeholder: your proxy IP and port)
        request.meta['proxy'] = 'proxy-ip:port'
        # proxy credentials as bytes, username and password separated by a colon
        proxy_name_pass = b'username:password'
        encode_pass_name = base64.b64encode(proxy_name_pass)
        # note the space after "Basic"
        request.headers['Proxy-Authorization'] = 'Basic ' + encode_pass_name.decode()

User-agent middleware (middlewares.py)

Common user-agent strings (source: https://blog.csdn.net/weixin_42144379/article/details/85639397):

# various desktop browsers
        user_agent_list_2 = [
            # Opera
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60",
            "Opera/8.0 (Windows NT 5.1; U; en)",
            "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50",
            # Firefox
            "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0",
            "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
            # Safari
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2",
            # chrome
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
            "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16",
            # 360
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36",
            "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko",
            # Taobao Browser
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
            # Liebao (Cheetah) Browser
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
            # QQ Browser
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
            # Sogou Browser
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)",
            # Maxthon Browser
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.4.3.4000 Chrome/30.0.1599.101 Safari/537.36",
            # UC Browser
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 UBrowser/4.0.3214.0 Safari/537.36",
        ]
        # various mobile browsers
        user_agent_list_3 = [
            # IPhone
            "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
            # IPod
            "Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
            # IPAD
            "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
            "Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
            # Android
            "Mozilla/5.0 (Linux; U; Android 2.2.1; zh-cn; HTC_Wildfire_A3333 Build/FRG83D) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
            "Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
            # QQ Browser for Android
            "MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
            # Android Opera Mobile
            "Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10",
            # Android Pad Moto Xoom
            "Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
            # BlackBerry
            "Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+",
            # WebOS HP Touchpad
            "Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0",
            # Nokia N97
            "Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124",
            # Windows Phone Mango
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)",
            # UC Browser
            "UCWEB7.0.2.37/28/999",
            "NOKIA5700/ UCWEB7.0.2.37/28/999",
            # UCOpenwave
            "Openwave/ UCWEB7.0.2.37/28/999",
            # UC Opera
            "Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999"
        ]
        # some additional desktop browsers
        user_agent_list_1 = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]

The middleware keeps its own USER_AGENT_LIST:

import random

class my_useragent(object):
    def process_request(self, request, spider):
        USER_AGENT_LIST = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
        # pick one user-agent at random and set it on the request header
        agent = random.choice(USER_AGENT_LIST)
        request.headers['User-Agent'] = agent

Pick a random agent from the list:

agent = random.choice(USER_AGENT_LIST)

Set the HTTP header:

request.headers['User-Agent'] = agent

Register it in settings.py (its priority value must not be the same as any other middleware's):

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.my_proxy': 543,
    'douban.middlewares.my_useragent': 544,
}

Notes

  1. After defining a middleware, remember to enable it in settings.py;

  2. The spider name must not be the same as the project name, and no two spiders in the spiders directory may share the same name;

  3. Crawl responsibly: respect privacy and comply with the applicable laws and regulations.

References

Tutorial and main content: https://www.imooc.com/learn/1017

settings section: https://www.jianshu.com/p/19c1ea0d59c2

user-agent section: https://blog.csdn.net/weixin_42144379/article/details/85639397
