Hadoop Cluster Setup (Part 1)

Table of Contents

I. Cluster Deployment Plan

II. Cluster Setup

1. Install the JDK

(1) Remove the Linux built-in JDK

(2) Upload your own JDK

(3) Extract the JDK to the custom directory /opt/module

(4) Configure the JDK environment variables

(5) Source the environment file to make it take effect

(6) Verify the JDK

(7) Sync the JDK and environment variables to the other servers

2. Go to the directory containing the Hadoop tarball

3. Extract the Hadoop tarball

4. Add the Hadoop environment variables

5. Verify the installation

6. Hadoop configuration files

(1) core-site.xml

(2) hdfs-site.xml

(3) mapred-site.xml

(4) yarn-site.xml

III. Start the Hadoop Cluster

1. Format the NameNode

2. Start HDFS

3. Start YARN

4. Start the JobHistory server

IV. View the Hadoop Cluster in a Web Browser

1. NameNode web UI

2. YARN ResourceManager web UI

3. JobHistory web UI

V. Cluster Start/Stop Script

VI. Script to View Java Processes on All Three Servers

I. Cluster Deployment Plan

Note 1: Do not install the NameNode and the SecondaryNameNode on the same server.

Note 2: The ResourceManager is also memory-hungry; do not put it on the same machine as the NameNode or the SecondaryNameNode.

       | master                         | slave1                       | slave2
HDFS   | NameNode, DataNode, JobHistory | DataNode                     | SecondaryNameNode, DataNode
YARN   | NodeManager                    | ResourceManager, NodeManager | NodeManager

II. Cluster Setup

1. Install the JDK

(1) Remove the Linux built-in JDK

rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps

(2) Upload your own JDK tarball (jdk-8u212-linux-x64.tar.gz, to /opt/software)

(3) Extract the JDK to the custom directory /opt/module

[root@master software]# tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module

(4) Configure the JDK environment variables

[root@master module]# vim /etc/profile.d/my_env.sh 

#JAVA_HOME 
export JAVA_HOME=/opt/module/jdk1.8.0_212 
export PATH=$PATH:$JAVA_HOME/bin

(5) Source the environment file to make it take effect

[root@master module]# source /etc/profile.d/my_env.sh 

(6) Verify the JDK

[root@master module]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

(7) Sync the JDK and environment variables to the other servers

[root@master module]# xsync /opt/module/jdk1.8.0_212/

[root@master module]# xsync /etc/profile.d/my_env.sh
[root@slave1 ~]# source /etc/profile.d/my_env.sh 
[root@slave2 ~]# source /etc/profile.d/my_env.sh
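
>> About xsync
xsync is not a standard Linux or Hadoop command; it is a custom distribution script (an rsync wrapper) that this guide assumes is already on the PATH, together with passwordless SSH between the nodes. A minimal sketch of such a script, with the host list slave1/slave2 taken as an assumption:

#!/bin/bash
# Hypothetical xsync sketch: copy the given files/directories to the other nodes with rsync.
if [ $# -lt 1 ]; then
    echo "Usage: xsync <file_or_dir> ..."
    exit 1
fi
for host in slave1 slave2
do
    echo "==================== $host ===================="
    for file in "$@"
    do
        if [ -e "$file" ]; then
            pdir=$(cd -P "$(dirname "$file")" && pwd)   # absolute parent directory
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"                # make sure the target directory exists
            rsync -av "$pdir/$fname" "$host:$pdir"      # copy the file or directory recursively
        else
            echo "$file does not exist"
        fi
    done
done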

2. Go to the directory containing the Hadoop tarball

[root@master ~]# cd /opt/software/ 

3. Extract the Hadoop tarball

[root@master software]# tar -zxvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/

4. Add the Hadoop environment variables

>> Add the environment variables

[root@master module]# vim /etc/profile.d/my_env.sh 

#HADOOP_HOME 
export HADOOP_HOME=/opt/module/hadoop-3.1.3 
export PATH=$PATH:$HADOOP_HOME/bin 
export PATH=$PATH:$HADOOP_HOME/sbin 

>> Source the environment file to make it take effect

[root@master ~]# source /etc/profile.d/my_env.sh 

5. Verify the installation

[root@master module]# hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar

6. Hadoop configuration files (all four files live in $HADOOP_HOME/etc/hadoop)

(1) core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>

<!-- Hadoop data directory (left commented out here; explicit data/metadata
     directories are configured in hdfs-site.xml instead) -->
    <!-- <property> -->
        <!-- <name>hadoop.tmp.dir</name> -->
        <!-- <value>/opt/module/hadoop-3.1.3/data</value> -->
    <!-- </property> -->

<!-- Static user for the HDFS web UI: root -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>

<!-- Hosts from which root (superuser) may act as a proxy -->
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
<!-- Groups whose users root (superuser) may proxy -->
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
<!-- Users that root (superuser) may proxy -->
    <property>
        <name>hadoop.proxyuser.root.users</name>
        <value>*</value>
    </property>

<!-- Compression codecs available to Hadoop (LZO entries left commented out) -->
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec
            <!-- com.hadoop.compression.lzo.LzoCodec, -->
            <!-- com.hadoop.compression.lzo.LzopCodec -->
        </value>
    </property>

    <!-- <property> -->
        <!-- <name>io.compression.codec.lzo.class</name> -->
        <!-- <value>com.hadoop.compression.lzo.LzoCodec</value> -->
    <!-- </property> -->

</configuration>
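
A quick way to confirm that core-site.xml is being picked up (an extra check, not part of the original steps) is to query the effective configuration, since hdfs getconf reads the same files Hadoop itself loads:

[root@master hadoop-3.1.3]# hdfs getconf -confKey fs.defaultFS

This should print hdfs://master:8020 when the file above is in $HADOOP_HOME/etc/hadoop.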

(2) hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:9870</value>
    </property>

<!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave2:9868</value>
    </property>

<!-- DataNode data directory -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>

<!-- NameNode metadata directory -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/hadoop-3.1.3/name</value>
    </property>

<!-- SecondaryNameNode checkpoint directory -->
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/opt/module/hadoop-3.1.3/secondaryname/</value>
    </property>

<!-- HDFS replication factor: 3 for this test environment -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

<!-- Have clients reach DataNodes by hostname -->
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
        <description>only needs to be configured on clients</description>
    </property>


</configuration>
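
Once the cluster is running (section III), these settings can be verified by writing a small test file and asking HDFS for its replication factor; this is an optional check, not part of the original walkthrough:

[root@master hadoop-3.1.3]# hdfs dfs -mkdir -p /tmp/test
[root@master hadoop-3.1.3]# hdfs dfs -put etc/hadoop/core-site.xml /tmp/test/
[root@master hadoop-3.1.3]# hdfs dfs -stat %r /tmp/test/core-site.xml

With dfs.replication set to 3 and three live DataNodes, the last command should print 3.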

(3) mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

<!-- JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>

<!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>

</configuration>

(4) yarn-site.xml

<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->
<!-- Use the MapReduce shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

<!-- ResourceManager host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>slave1</value>
    </property>

<!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>

<!-- Minimum and maximum memory a YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>

<!-- Physical memory the NodeManager makes available to containers -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>

<!-- Disable the virtual-memory limit check -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

<!-- Disable the physical-memory limit check -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>

<!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

<!-- Log server URL (the JobHistory server) -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://master:19888/jobhistory/logs</value>
    </property>

<!-- Keep aggregated logs for 7 days (604800 seconds) -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>

<!-- ResourceManager web UI address -->
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>

</configuration>

III. Start the Hadoop Cluster

1. Format the NameNode

Note: Formatting the NameNode generates a new cluster ID. If it no longer matches the cluster ID stored on the DataNodes, the cluster cannot find its existing data. If the cluster fails while running and you need to reformat the NameNode, be sure to stop the namenode and datanode processes first and delete the data and logs directories on every machine before formatting again.
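
A sketch of that reformat procedure, using the directories configured in hdfs-site.xml above (only needed when an existing cluster must be wiped and reformatted):

# stop HDFS (and YARN) before touching anything
/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh

# remove data, metadata and logs on every node (destructive)
for host in master slave1 slave2
do
    ssh $host "rm -rf /opt/module/hadoop-3.1.3/data /opt/module/hadoop-3.1.3/name /opt/module/hadoop-3.1.3/secondaryname /opt/module/hadoop-3.1.3/logs"
done

# then format the NameNode again
hdfs namenode -format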

[root@master hadoop-3.1.3]# hdfs namenode -format
WARNING: /opt/module/hadoop-3.1.3/logs does not exist. Creating.
2023-04-16 10:50:18,743 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.158.31
......
2023-04-16 10:50:22,036 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2023-04-16 10:50:22,037 INFO snapshot.SnapshotManager: SkipList is disabled
2023-04-16 10:50:22,046 INFO util.GSet: Computing capacity for map cachedBlocks
2023-04-16 10:50:22,046 INFO util.GSet: VM type       = 64-bit
2023-04-16 10:50:22,046 INFO util.GSet: 0.25% max memory 479.5 MB = 1.2 MB
2023-04-16 10:50:22,046 INFO util.GSet: capacity      = 2^17 = 131072 entries
2023-04-16 10:50:22,071 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2023-04-16 10:50:22,071 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2023-04-16 10:50:22,071 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2023-04-16 10:50:22,074 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2023-04-16 10:50:22,074 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2023-04-16 10:50:22,076 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2023-04-16 10:50:22,076 INFO util.GSet: VM type       = 64-bit
2023-04-16 10:50:22,076 INFO util.GSet: 0.029999999329447746% max memory 479.5 MB = 147.3 KB
2023-04-16 10:50:22,076 INFO util.GSet: capacity      = 2^14 = 16384 entries
2023-04-16 10:50:22,166 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1693817006-192.168.158.31-1681613422158
2023-04-16 10:50:22,213 INFO common.Storage: Storage directory /opt/module/hadoop-3.1.3/name has been successfully formatted.
2023-04-16 10:50:22,300 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/hadoop-3.1.3/name/current/fsimage.ckpt_0000000000000000000 using no compression
2023-04-16 10:50:22,742 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/hadoop-3.1.3/name/current/fsimage.ckpt_0000000000000000000 of size 388 bytes saved in 0 seconds .
2023-04-16 10:50:22,766 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2023-04-16 10:50:22,816 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
2023-04-16 10:50:22,818 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.158.31
************************************************************/

2. Start HDFS

>> The first attempt fails

[root@master hadoop-3.1.3]# sbin/start-dfs.sh
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [slave2]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

>> Fix

Add the following near the top of start-dfs.sh (and stop-dfs.sh). Note: HADOOP_SECURE_DN_USER was renamed to HDFS_DATANODE_SECURE_USER in Hadoop 3, which is why the restart output below prints a deprecation warning.

[root@master hadoop-3.1.3]# vim sbin/start-dfs.sh

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root


[root@master hadoop-3.1.3]# vim sbin/stop-dfs.sh

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
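
>> Alternative
Instead of editing the start/stop scripts directly, Hadoop 3 also lets you export these user variables once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh; shown here as an equivalent option rather than what this guide does (the YARN variables cover the next step as well):

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root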

>> Start again

[root@master hadoop-3.1.3]# vim sbin/start-dfs.sh
[root@master hadoop-3.1.3]# sbin/start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sun Apr 16 09:12:34 CST 2023 from 192.168.158.1 on pts/4
Starting datanodes
Last login: Sun Apr 16 11:01:23 CST 2023 on pts/4
slave2: WARNING: /opt/module/hadoop-3.1.3/logs does not exist. Creating.
Starting secondary namenodes [slave2]
Last login: Sun Apr 16 11:01:26 CST 2023 on pts/4
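
>> Optional check
With HDFS up, you can confirm that all three DataNodes registered with the NameNode (an extra check, not in the original transcript):

[root@master hadoop-3.1.3]# hdfs dfsadmin -report | grep -E "Live datanodes|^Name:"

This should show "Live datanodes (3)" followed by one Name: line per node.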

3. Start YARN

Note: YARN must be started on the node where the ResourceManager is configured; in this cluster that is slave1.

>> As with HDFS, the first attempt fails

[root@slave1 hadoop-3.1.3]# sbin/start-yarn.sh
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.

>> Fix

Add the following near the top of start-yarn.sh (and stop-yarn.sh):

[root@slave1 hadoop-3.1.3]# vim sbin/start-yarn.sh

## add YARN USER
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root


[root@slave1 hadoop-3.1.3]# vim sbin/stop-yarn.sh

## add YARN USER
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

>> Start again

[root@slave1 hadoop-3.1.3]# sbin/start-yarn.sh
Starting resourcemanager
Last login: Sun Apr 16 09:12:58 CST 2023 from 192.168.158.1 on pts/3
Starting nodemanagers
Last login: Sun Apr 16 11:11:30 CST 2023 on pts/3
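
>> Optional check
With YARN up, the NodeManagers that registered with the ResourceManager can be listed from any node (an extra check, not in the original transcript):

[root@slave1 hadoop-3.1.3]# yarn node -list

Three nodes (master, slave1 and slave2) should be reported in the RUNNING state.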

4. Start the JobHistory server

Note: the JobHistory server must be started on the node where it is configured; in this cluster that is master.

[root@master hadoop-3.1.3]# mapred --daemon start historyserver
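
To give the JobHistory server something to display, you can run one of the example jobs bundled with the Hadoop distribution; a small smoke test using the examples jar that ships with Hadoop 3.1.3:

[root@master hadoop-3.1.3]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10

The job should appear on the ResourceManager UI (http://slave1:8088/cluster) while running and on the JobHistory UI (http://master:19888/jobhistory) after it finishes.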

IV. View the Hadoop Cluster in a Web Browser

1. NameNode web UI

http://master:9870

2. YARN ResourceManager web UI

http://slave1:8088/cluster

3. JobHistory web UI

http://master:19888/jobhistory
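
These URLs use the cluster hostnames, so the machine running the browser must be able to resolve master, slave1 and slave2 (for example via its hosts file); otherwise substitute the corresponding IP addresses. A quick reachability check from any cluster node:

[root@master hadoop-3.1.3]# curl -sI http://master:9870 | head -n 1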

V. Cluster Start/Stop Script

[root@master hadoop-3.1.3]# vim /bin/hdp.sh


#!/bin/bash
# Start or stop the whole Hadoop cluster from one place.
if [ $# -lt 1 ]
then
    echo "Usage: $0 {start|stop}"
    exit 1
fi
case $1 in
"start")
        echo " =================== Starting the Hadoop cluster ==================="

        echo " --------------- Starting HDFS ---------------"
        ssh master "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
        echo " --------------- Starting YARN ---------------"
        ssh slave1 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
        echo " --------------- Starting the historyserver ---------------"
        ssh master "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
        # echo " --------------- Starting the Spark history server ---------------"
        # ssh master "/opt/module/spark/sbin/start-history-server.sh"
        # echo " --------------- Starting the YARN ProxyServer ---------------"
        # ssh slave1 "/opt/module/hadoop-3.1.3/sbin/yarn-daemon.sh start proxyserver"
;;
"stop")
        echo " =================== Stopping the Hadoop cluster ==================="
        # echo " --------------- Stopping the YARN ProxyServer ---------------"
        # ssh slave1 "/opt/module/hadoop-3.1.3/sbin/yarn-daemon.sh stop proxyserver"
        # echo " --------------- Stopping the Spark history server ---------------"
        # ssh master "/opt/module/spark/sbin/stop-history-server.sh"

        echo " --------------- Stopping the historyserver ---------------"
        ssh master "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
        echo " --------------- Stopping YARN ---------------"
        ssh slave1 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
        echo " --------------- Stopping HDFS ---------------"
        ssh master "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
        echo "Usage: $0 {start|stop}"
;;
esac

>> Make the script executable

[root@master hadoop-3.1.3]# chmod +x /bin/hdp.sh

>> Test it

[root@master hadoop-3.1.3]# hdp.sh stop
 =================== Stopping the Hadoop cluster ===================
 --------------- Stopping the historyserver ---------------
 --------------- Stopping YARN ---------------
Stopping nodemanagers
Last login: Sun Apr 16 11:11:33 CST 2023 on pts/3
Stopping resourcemanager
Last login: Sun Apr 16 11:41:35 CST 2023
 --------------- Stopping HDFS ---------------
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping namenodes on [master]
Last login: Sun Apr 16 11:02:20 CST 2023 on pts/4
Stopping datanodes
Last login: Sun Apr 16 11:41:41 CST 2023
Stopping secondary namenodes [slave2]
Last login: Sun Apr 16 11:41:42 CST 2023


[root@master hadoop-3.1.3]# hdp.sh start
 =================== Starting the Hadoop cluster ===================
 --------------- Starting HDFS ---------------
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sun Apr 16 11:41:45 CST 2023
Starting datanodes
Last login: Sun Apr 16 11:42:08 CST 2023
Starting secondary namenodes [slave2]
Last login: Sun Apr 16 11:42:11 CST 2023
 --------------- Starting YARN ---------------
Starting resourcemanager
Last login: Sun Apr 16 11:41:37 CST 2023
Starting nodemanagers
Last login: Sun Apr 16 11:42:27 CST 2023
 --------------- Starting the historyserver ---------------

VI. Script to View Java Processes on All Three Servers

>> jpsall

[root@master hadoop-3.1.3]# vim /bin/jpsall.sh


#!/bin/bash
# Run jps on every node so all Java processes can be checked from one terminal.
for i in master slave1 slave2
do
    echo =============== $i ===============
    ssh $i jps
done

>> Make the script executable

[root@master hadoop-3.1.3]# chmod +x /bin/jpsall.sh

>> Sync the script to the other nodes

[root@master hadoop-3.1.3]# xsync /bin/jpsall.sh 

>> Test it

[root@master hadoop-3.1.3]# /bin/jpsall.sh
=============== master ===============
7120 JobHistoryServer
6843 NodeManager
6076 NameNode
6220 DataNode
7420 Jps
=============== slave1 ===============
5970 NodeManager
6626 Jps
5416 DataNode
5834 ResourceManager
=============== slave2 ===============
5172 DataNode
5477 NodeManager
5289 SecondaryNameNode
5931 Jps
