Offline Deployment of CDH 5.16.1

CDH is deployed on three Alibaba Cloud machines, each with 2 CPUs, 8 GB of memory, and a 40 GB disk.

Software packages required for the installation:

1. CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel (the CDH parcel)

2. CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1 (parcel checksum)

3. cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz (Cloudera Manager)

4. manifest.json

5. mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz (the MySQL package)

6. mysql-connector-java-5.1.47.jar (the MySQL JDBC driver)

1. Node initialization

1.1 hosts file
#Modify the hosts file on all 3 machines
[root@hadoop003 opt] vim /etc/hosts
#Add the IP-to-hostname mappings
172.19.35.160 hadoop001
172.19.35.159 hadoop002
172.19.35.158 hadoop003
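A quick check (a sketch) that the new hostnames resolve; run it from each of the three nodes:

ping -c 1 hadoop001
ping -c 1 hadoop002
ping -c 1 hadoop003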
1.2 Firewall

On cloud hosts the firewall is usually already disabled; Alibaba Cloud exposes firewall (security group) configuration through its web console, where you can adjust port access restrictions yourself.

For on-premises servers, the usual practice is to turn the firewall off before deploying; once deployment is complete, turn it back on and allow through the ports listed in the CDH web pages (a sketch of this follows below).
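A hedged sketch of re-opening the key ports afterwards, assuming firewalld is the firewall in use (7180 is the CM web UI port; add whichever other service ports your cluster exposes):

systemctl start firewalld
firewall-cmd --permanent --add-port=7180/tcp
firewall-cmd --reload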

#If you are not sure whether it is off, stop it manually; do this on all 3 machines
[root@hadoop001 cdh5.16.1] systemctl stop firewalld
[root@hadoop001 cdh5.16.1] systemctl disable firewalld
[root@hadoop001 cdh5.16.1] iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@hadoop001 cdh5.16.1] iptables -F
1.3 SELinux
#Check whether SELinux is disabled; do this on all 3 machines. Normally SELINUX=disabled; if it is not, change it to disabled and reboot the server
[root@hadoop001 cdh5.16.1] vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
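If you prefer not to edit the file interactively, a minimal sketch of the same change (setenforce 0 switches SELinux to permissive immediately; the sed edit makes the disabled setting permanent after the next reboot):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config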
1.4 Time zone and clock synchronization
#Check the server's time zone status
[root@hadoop001 cdh5.16.1] timedatectl
      Local time: Mon 2019-05-20 17:28:24 CST
  Universal time: Mon 2019-05-20 09:28:24 UTC
        RTC time: Mon 2019-05-20 17:28:22
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: yes
      DST active: n/a
#If the time zone is wrong, set it manually; do this on all 3 machines
[root@hadoop001 cdh5.16.1] timedatectl set-timezone Asia/Shanghai
#Clock synchronization is usually handled with the ntp service
#Install the ntp service on all 3 machines
[root@hadoop001 cdh5.16.1] yum install -y ntp
#Use hadoop001 as the time-sync master node; hadoop002 and hadoop003 are the slave nodes
[root@hadoop001 cdh5.16.1] vim /etc/ntp.conf
#Suitable time servers for this region can be found at https://www.ntppool.org/zone/asia
#Upstream time servers
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
#Fall back to the local clock when the external time servers are unreachable
server 127.127.1.0 iburst   # local clock
#Allow machines in the following subnet to query time from this server
restrict 172.19.35.0 mask 255.255.255.0 nomodify notrap
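Note: the local-clock entry above is commonly paired with a fudge line that marks it as a last-resort, high-stratum source; if your ntp.conf does not already have one (it is not shown in the original), something like the following is typical:

fudge 127.127.1.0 stratum 10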

#Start the ntp service
[root@hadoop001 cdh5.16.1] systemctl start ntpd
#Check its running status
[root@hadoop001 cdh5.16.1] systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-05-20 17:32:48 CST; 18min ago
 Main PID: 9481 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─9481 /usr/sbin/ntpd -u ntp:ntp -g

May 20 17:32:48 hadoop001 systemd[1]: Starting Network Time Service...
May 20 17:32:48 hadoop001 ntpd[9481]: proto: precision = 0.089 usec
May 20 17:32:48 hadoop001 ntpd[9481]: 0.0.0.0 c01d 0d kern kernel time sync enabled
May 20 17:32:48 hadoop001 systemd[1]: Started Network Time Service.

#Verify that the service is synchronizing correctly
[root@hadoop001 cdh5.16.1]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l 1161   64    0    0.000    0.000   0.000
x120.25.115.20   10.137.53.7      2 u   39   64  377   27.494   38.829  21.168
 10.143.33.49    .INIT.          16 u    - 1024    0    0.000    0.000   0.000
+100.100.3.1     10.137.55.181    2 u   24   64  377    6.211   53.978  11.191
*100.100.3.2     10.137.55.181    2 u    4   64  377    7.452   65.833  11.511
+100.100.3.3     10.137.55.181    2 u   62   64  377    7.991   48.435  12.723
+203.107.6.88    100.107.25.114   2 u    3   64  377   15.044   66.803  13.470
 10.143.33.50    .INIT.          16 u    - 1024    0    0.000    0.000   0.000
 10.143.33.51    .INIT.          16 u    - 1024    0    0.000    0.000   0.000
 10.143.0.44     .INIT.          16 u    - 1024    0    0.000    0.000   0.000
 10.143.0.45     .INIT.          16 u    - 1024    0    0.000    0.000   0.000
 10.143.0.46     .INIT.          16 u    - 1024    0    0.000    0.000   0.000
+100.100.5.1     10.137.55.181    2 u    1   64  377    4.914   67.908  12.758
+100.100.5.2     10.137.55.181    2 u   59   64  377    5.009   68.310  16.956
+100.100.5.3     10.137.55.181    2 u   58   64  377    7.912   65.652  14.960
#Clock-sync setup on the client (slave) nodes
[root@hadoop002 opt] /usr/sbin/ntpdate hadoop001
20 May 17:54:43 ntpdate[18557]: the NTP socket is in use, exiting
#If the NTP socket is reported as in use, check the status of the ntpd service
[root@hadoop002 opt] systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-05-20 17:32:56 CST; 22min ago
 Main PID: 18527 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─18527 /usr/sbin/ntpd -u ntp:ntp -g

May 20 17:32:56 hadoop002 systemd[1]: Starting Network Time Service...
May 20 17:32:56 hadoop002 ntpd[18527]: proto: precision = 0.087 usec
May 20 17:32:56 hadoop002 ntpd[18527]: 0.0.0.0 c01d 0d kern kernel time sync enabled
May 20 17:32:56 hadoop002 systemd[1]: Started Network Time Service.

#Stop the service and disable ntpd from starting at boot on the slave nodes
[root@hadoop002 opt] systemctl stop ntpd
[root@hadoop002 opt] systemctl disable ntpd
Removed symlink /etc/systemd/system/multi-user.target.wants/ntpd.service.

#Re-run the time sync; both slave nodes need this
[root@hadoop002 opt] /usr/sbin/ntpdate hadoop001
20 May 17:57:25 ntpdate[18585]: adjust time server 172.19.35.160 offset 0.000344 sec

On cloud hosts, the time zone and clock synchronization are generally already taken care of for you.

On non-cloud hosts they are needed; follow the steps above to set the time zone and synchronize the clocks.

In production, add the clock-sync command /usr/sbin/ntpdate hadoop001 to crontab so that it runs once an hour:

[root@hadoop002 opt]# crontab -l
00 * * * * /usr/sbin/ntpdate hadoop001
1.5 JDK
#Install the JDK under /usr/java; after extracting, remember to fix the user and group ownership. Install on all 3 machines
#Create the /usr/java directory
[root@hadoop001 cdh5.16.1] mkdir /usr/java
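#(Assumed step, not shown in the original: extract the JDK tarball into /usr/java.
# The archive name jdk-8u181-linux-x64.tar.gz is an assumption, inferred from the jdk1.8.0_181 directory listed below.)
[root@hadoop001 cdh5.16.1] tar -xzvf jdk-8u181-linux-x64.tar.gz -C /usr/java/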
[root@hadoop003 cdh5.16.1] ll /usr/java
total 4
drwxr-xr-x 7 10 143 4096 Jul  7  2018 jdk1.8.0_181
#The ownership changed during extraction; fix it
[root@hadoop003 cdh5.16.1] chown -R root:root /usr/java
#Check it
[root@hadoop003 cdh5.16.1] ll /usr/java
total 4
drwxr-xr-x 7 root root 4096 Jul  7  2018 jdk1.8.0_181
#Configure the environment variables
[root@hadoop001 cdh5.16.1] vim /etc/profile
#env
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
#Source the file so the environment variables take effect
[root@hadoop001 cdh5.16.1] source /etc/profile

#Sync the profile to the other two machines and source it there to apply the environment variables
[root@hadoop001 cdh5.16.1] scp /etc/profile hadoop002:/etc/
[root@hadoop002 cdh5.16.1] source /etc/profile
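A quick check (a sketch) that the JDK is picked up on each node:

which java
java -version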
1.6 MySQL offline deployment

The MySQL version deployed offline is 5.7.11.

#1.Extract the tarball and create directories
[root@hadoop001 local] tar -xzvf mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz -C /usr/local
[root@hadoop001 local] mv mysql-5.7.11-linux-glibc2.5-x86_64 mysql

[root@hadoop001 local] mkdir mysql/arch mysql/data mysql/tmp

#2.Edit the my.cnf configuration
[root@hadoop001 local] vi /etc/my.cnf
#Replace the entire file contents with the following configuration
[client]
port            = 3306
socket          = /usr/local/mysql/data/mysql.sock
default-character-set=utf8mb4

[mysqld]
port            = 3306
socket          = /usr/local/mysql/data/mysql.sock

skip-slave-start

skip-external-locking
key_buffer_size = 256M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
query_cache_size= 32M
max_allowed_packet = 16M
myisam_sort_buffer_size=128M
tmp_table_size=32M

table_open_cache = 512
thread_cache_size = 8
wait_timeout = 86400
interactive_timeout = 86400
max_connections = 600

# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 32 

#isolation level and default engine 
default-storage-engine = INNODB
transaction-isolation = READ-COMMITTED

server-id  = 1739
basedir     = /usr/local/mysql
datadir     = /usr/local/mysql/data
pid-file     = /usr/local/mysql/data/hostname.pid

#open performance schema
log-warnings
sysdate-is-now

binlog_format = ROW
log_bin_trust_function_creators=1
log-error  = /usr/local/mysql/data/hostname.err
log-bin = /usr/local/mysql/arch/mysql-bin
expire_logs_days = 7

innodb_write_io_threads=16

relay-log  = /usr/local/mysql/relay_log/relay-log
relay-log-index = /usr/local/mysql/relay_log/relay-log.index
relay_log_info_file= /usr/local/mysql/relay_log/relay-log.info

log_slave_updates=1
gtid_mode=OFF
enforce_gtid_consistency=OFF

# slave
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=4
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log_recovery=ON

#other logs
#general_log =1
#general_log_file  = /usr/local/mysql/data/general_log.err
#slow_query_log=1
#slow_query_log_file=/usr/local/mysql/data/slow_log.err

#for replication slave
sync_binlog = 500


#for innodb options 
innodb_data_home_dir = /usr/local/mysql/data/
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend

innodb_log_group_home_dir = /usr/local/mysql/arch
innodb_log_files_in_group = 4
innodb_log_file_size = 1G
innodb_log_buffer_size = 200M

#Adjust the buffer pool size according to production needs
innodb_buffer_pool_size = 2G
#innodb_additional_mem_pool_size = 50M #deprecated in 5.6
tmpdir = /usr/local/mysql/tmp

innodb_lock_wait_timeout = 1000
#innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit = 2

innodb_locks_unsafe_for_binlog=1

#innodb io features: add for mysql5.5.8
performance_schema
innodb_read_io_threads=4
innodb-write-io-threads=4
innodb-io-capacity=200
#purge threads change default(0) to 1 for purge
innodb_purge_threads=1
innodb_use_native_aio=on

#case-sensitive file names and separate tablespace
innodb_file_per_table = 1
lower_case_table_names=1

[mysqldump]
quick
max_allowed_packet = 128M

[mysql]
no-auto-rehash
default-character-set=utf8mb4

[mysqlhotcopy]
interactive-timeout

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

#3.Create the group and user
[root@hadoop001 local] groupadd -g 101 dba
[root@hadoop001 local] useradd -u 514 -g dba -G root -d /usr/local/mysql mysqladmin

#4.Copy the skeleton profile files into the mysqladmin user's home directory, so the per-user environment variables can be configured in the next step
[root@hadoop001 local] cp /etc/skel/.* /usr/local/mysql

#5.Configure the environment variables
[root@hadoop001 local] vim mysql/.bash_profile
# .bash_profile
# Get the aliases and functions

if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
export MYSQL_BASE=/usr/local/mysql
export PATH=${MYSQL_BASE}/bin:$PATH


unset USERNAME

#stty erase ^H
#set umask to 022
umask 022
PS1=`uname -n`":"'$USER'":"'$PWD'":>"; export PS1

## end

#6.Set permissions and ownership, switch to the mysqladmin user, and install
#Config file permissions
[root@hadoop001 local] chown  mysqladmin:dba /etc/my.cnf 
[root@hadoop001 local] chmod  640 /etc/my.cnf 
#MySQL directory permissions
[root@hadoop001 local] chown -R mysqladmin:dba /usr/local/mysql
[root@hadoop001 local] chmod -R 755 /usr/local/mysql 

#7.Configure the service and enable start at boot
[root@hadoop001 local] cd /usr/local/mysql
#Copy the service script into init.d and rename it to mysql
[root@hadoop001 mysql] cp support-files/mysql.server /etc/rc.d/init.d/mysql 
#Make it executable
[root@hadoop001 mysql] chmod +x /etc/rc.d/init.d/mysql
#Remove any existing service registration
[root@hadoop001 mysql] chkconfig --del mysql
#Register the service
[root@hadoop001 mysql] chkconfig --add mysql
[root@hadoop001 mysql] chkconfig --level 345 mysql on
#CentOS 7: enable start at boot
#Create a new systemd unit file
vi /usr/lib/systemd/system/mysql.service

[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target

[Service]
User=mysqladmin
Group=dba
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf
LimitNOFILE = 5000
#Restart=on-failure
#RestartPreventExitStatus=1
#PrivateTmp=false

#Edit /etc/init.d/mysql
vi /etc/init.d/mysql   #in the start block, add --user=mysqladmin to the $bindir/mysqld_safe invocation
#Start command
service mysql start

#8.Install libaio and initialize the MySQL data directory
[root@hadoop001 mysql] yum -y install libaio
[root@hadoop001 mysql] sudo su - mysqladmin

hadoop001:mysqladmin:/usr/local/mysql:> bin/mysqld \
--defaults-file=/etc/my.cnf \
--user=mysqladmin \
--basedir=/usr/local/mysql/ \
--datadir=/usr/local/mysql/data/ \
--initialize

#If you pass --initialize-insecure during initialization, a root@localhost account with an empty password is created; otherwise root@localhost gets a random password that is written to the log-error file (in 5.6 it was placed in ~/.mysql_secret instead, which is easy to miss if you do not know to look there)

#9.Look up the temporary password
hadoop001:mysqladmin:/usr/local/mysql/data:>cat hostname.err |grep password 
2017-07-22T02:15:29.439671Z 1 [Note] A temporary password is generated for root@localhost: kFCqrXeh2y(0

#10.Start MySQL
/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &
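#A quick sanity check (a sketch) that mysqld actually came up and is listening on 3306 before logging in:
ps -ef | grep mysqld | grep -v grep
ss -nlt | grep 3306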

#11.Log in and change the root password
hadoop001:mysqladmin:/usr/local/mysql/data:>mysql -uroot -p'kFCqrXeh2y(0'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.11-log

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> alter user root@localhost identified by '123456';
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' ;
Query OK, 0 rows affected, 1 warning (0.02 sec)


mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
1.7 Create the CDH metadata database and user, plus the database and user for the amon service
#Create the cmf database
create database cmf DEFAULT CHARACTER SET utf8;
#Create the amon database
create database amon DEFAULT CHARACTER SET utf8;
grant all on cmf.* TO 'cmf'@'%' IDENTIFIED BY '123456';
grant all on amon.* TO 'amon'@'%' IDENTIFIED BY '123456';
flush privileges;
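A quick check (a sketch; it assumes the passwords from the grant statements above and that the mysql client is on the PATH, e.g. when run as the mysqladmin user) that the two accounts can actually log in:

mysql -ucmf -p123456 -h hadoop001 -e "use cmf; select 1;"
mysql -uamon -p123456 -h hadoop001 -e "use amon; select 1;"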
1.8 Deploy the MySQL JDBC jar on the hadoop001 node
#Create the directory that holds the JDBC jar
[root@hadoop001 local] mkdir -p /usr/share/java/
#Copy the JDBC driver, renaming it; the target name must be exactly mysql-connector-java.jar
[root@hadoop001 local] cp mysql-connector-java-5.1.47.jar /usr/share/java/mysql-connector-java.jar
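#Verify the jar landed with the expected name (a sketch):
ls -l /usr/share/java/mysql-connector-java.jar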

2. CM offline deployment

#On all nodes, create the cloudera-manager directory and extract the tarball
mkdir /opt/cloudera-manager
tar -zxvf cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz -C /opt/cloudera-manager/
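#The tarball has to be present on hadoop002 and hadoop003 as well; a sketch of pushing it over from hadoop001 (assumes the nodes are reachable over SSH):
scp cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz hadoop002:/opt/
scp cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz hadoop003:/opt/
#then run the same mkdir and tar commands on each of those nodes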

#On all nodes, point the agent configuration at the server node hadoop001; sed does this quickly, or edit each file individually with vim
sed -i "s/server_host=localhost/server_host=hadoop001/g" /opt/cloudera-manager/cm-5.16.1/etc/cloudera-scm-agent/config.ini

#On the master node, edit the server's database configuration
vi /opt/cloudera-manager/cm-5.16.1/etc/cloudera-scm-server/db.properties 
com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=hadoop001
com.cloudera.cmf.db.name=cmf
com.cloudera.cmf.db.user=cmf
com.cloudera.cmf.db.password=123456
com.cloudera.cmf.db.setupType=EXTERNAL

#On all nodes, create the cloudera-scm user
useradd --system --home=/opt/cloudera-manager/cm-5.16.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm

#Change the owner and group of the cloudera-manager directory
chown -R cloudera-scm:cloudera-scm /opt/cloudera-manager

3. Offline parcel repository deployment

#1.On the master node
#Create the directory that holds the parcel files
mkdir -p /opt/cloudera/parcel-repo
#Copy the parcel file into that directory
cp CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel /opt/cloudera/parcel-repo/
#Important: when copying the .sha1 file, rename it to drop the trailing 1; otherwise CM assumes the parcel download is incomplete and keeps re-downloading it
cp CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1 /opt/cloudera/parcel-repo/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha
cp manifest.json /opt/cloudera/parcel-repo/

#Verify the parcel's integrity: first cat the checksum stored in the .sha file
cat CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha 
703728dfa7690861ecd3a9bcd412b04ac8de7148
#Then compute the parcel's checksum with sha1sum; the two values should match
sha1sum CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
703728dfa7690861ecd3a9bcd412b04ac8de7148  CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
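#A minimal sketch (assuming both files are in the current directory) that compares the two values automatically:
[ "$(cat CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha)" = "$(sha1sum CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel | awk '{print $1}')" ] && echo "checksum OK" || echo "checksum MISMATCH"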

#Change the owner and group of the /opt/cloudera/ directory
chown -R cloudera-scm:cloudera-scm /opt/cloudera/

#2.On all nodes
#On all nodes, create the directory where the parcels are extracted (the big-data software install directory) and fix its owner and group
mkdir -p /opt/cloudera/parcels
chown -R cloudera-scm:cloudera-scm /opt/cloudera/

4. Start the Server on the hadoop001 node

#Start the server on the master node
[root@hadoop001 init.d] pwd
/opt/cloudera-manager/cm-5.16.1/etc/init.d
[root@hadoop001 init.d] ./cloudera-scm-server start

#If it fails to start, check the logs in the log directory
[root@hadoop001 cloudera-scm-server] pwd
/opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-server
[root@hadoop001 cloudera-scm-server] tail -F cloudera-scm-server.log

#Start the agent; this is needed on all three machines
[root@hadoop001 init.d] pwd
/opt/cloudera-manager/cm-5.16.1/etc/init.d
[root@hadoop001 init.d] ./cloudera-scm-agent start

#Both of the starts above take about a minute; wait for port 7180 to come up, and check the logs if anything goes wrong
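A quick check (a sketch) that the server is listening once it has finished starting:

ss -nlt | grep 7180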

5. Everything else is done in the web UI

http://hadoop001:7180/
Username/password: admin/admin

1. Welcome to Cloudera Manager: end user license terms and conditions. Check the box.

2. Welcome to Cloudera Manager: which edition do you want to deploy? Choose the free Cloudera Express edition.
3. Thank you for choosing Cloudera Manager and CDH.

4. Specify hosts for the CDH cluster installation. Choose [Currently Managed Hosts] and check all of them.

5. Select the repository.

6. Cluster installation: installing the selected parcels.

If the local offline parcel repository is configured correctly, the "Download" phase finishes instantly; the remaining phases depend on the number of nodes and the internal network.

7. Inspect hosts for correctness.

#7.1 It is recommended to set /proc/sys/vm/swappiness to at most 10.
#swappiness controls how aggressively the OS swaps memory out to disk;
#swappiness=0: use physical memory as much as possible before touching swap;
#swappiness=100: swap aggressively, moving data from memory to swap promptly;
#on servers running mixed workloads, do not disable swap completely; lowering swappiness is the better option.
#Temporary adjustment:
sysctl vm.swappiness=10
#Permanent adjustment:
cat /etc/sysctl.conf
# Adjust swappiness value
vm.swappiness=10
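#After adding the vm.swappiness line to /etc/sysctl.conf as above, reload it and confirm the value (a sketch):
sysctl -p
cat /proc/sys/vm/swappiness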
#7.2 Transparent hugepage compaction is enabled, which can cause significant performance problems; it is recommended to disable this setting.
#Temporary adjustment:
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
#Permanent adjustment:
cat /etc/rc.d/rc.local
# Disable transparent_hugepage
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# On CentOS 7.x, the /etc/rc.d/rc.local file must be made executable
chmod +x /etc/rc.d/rc.local
#All of the above needs to be done on all 3 machines
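#A quick check (a sketch) that transparent hugepages are really disabled; the active setting is shown in square brackets:
cat /sys/kernel/mm/transparent_hugepage/enabled
#expected output: always madvise [never]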
8. Custom services: choose to deploy the ZooKeeper, HDFS, and YARN services.

9. Customize the role assignments.

10. Database setup.

11. Review the settings; the defaults are fine.

12. First run.

13. Congratulations!

14. Home page.
