Spark Learning Notes

Notes on setting up a cloud server and learning big data components.

Cloud Server Environment Setup (from Scratch)

I applied for a cloud server on Alibaba Cloud: 1 core, 2 GB RAM, currently on a one-month free trial.

Login lands in /root by default.

Linux Directory Structure

bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var

/bin  bin is short for Binaries (binary files); this directory holds the most frequently used commands.

/boot  Holds the core files used when booting Linux, including some link files and kernel images.

/dev  dev is short for Device; this directory holds Linux's external devices. In Linux, devices are accessed the same way files are.

/etc  etc is short for Etcetera; this directory holds all the configuration files and subdirectories needed for system administration.

/home  Users' home directories. In Linux every user gets a directory of their own, usually named after the user's account, e.g. alice, bob, and eve.

/lib  lib is short for Library; this directory holds the system's most basic dynamic shared libraries, which play a role similar to DLL files on Windows. Almost all applications need these shared libraries.

/media  Linux recognizes some devices automatically, e.g. USB drives and optical drives; once recognized, Linux mounts them under this directory.
ls /media is empty

/mnt  Provided so users can temporarily mount other filesystems; e.g. mount an optical drive on /mnt, then enter the directory to browse its contents.
ls /mnt is empty

/opt  opt is short for optional; this is where extra software installed on the host goes, e.g. an Oracle database could live here. Empty by default.
ls /opt is empty
I put Zookeeper and Spark under /opt later, along with the downloaded tarballs.

/proc  proc is short for Processes. /proc is a pseudo filesystem (i.e. a virtual filesystem) holding a set of special files that reflect the current state of the running kernel. The directory is virtual: it is a mapping of system memory, and you can read it directly to get system information.
Its contents live in memory rather than on disk, and some of its files can be modified directly. For example, the following command makes the host ignore ping, so others cannot ping your machine:

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
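This edit does not survive a reboot. A hedged persistent variant via sysctl (assumes your distro reads /etc/sysctl.conf):

echo 'net.ipv4.icmp_echo_ignore_all = 1' >> /etc/sysctl.conf
sysctl -p    # reload kernel parameters from the config file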

/root  The home directory of the system administrator, also called the superuser.
root logs in at /root; it is empty.

/sbin  The s stands for Super User: Superuser Binaries. This directory holds system-management programs used by the administrator.

/srv  Holds data that services need to access after they start.
ls /srv is empty

/sys  A big change in the Linux 2.6 kernel: this directory mounts sysfs, a filesystem new in 2.6.
sysfs integrates the information of three filesystems: proc for process information, devfs for devices, and devpts for pseudo-terminals.
It is a direct reflection of the kernel device tree.
When a kernel object is created, the corresponding files and directories are created in the kernel object subsystem.

/tmp  tmp is short for temporary; this directory holds temporary files.

/usr  usr is short for Unix Shared Resources. A very important directory: many user applications and files live under it, similar to Program Files on Windows.
/usr/bin  applications used by ordinary users.
/usr/sbin  more advanced management programs and system daemons used by the superuser.
/usr/src  the default location for kernel source code.

/var  var is short for variable; this directory holds things that keep growing, conventionally the frequently modified directories, including all kinds of log files.

/run  A tmpfs that stores information accumulated since boot. Files here should be deleted or cleared on reboot. If your system has a /var/run directory, it should point to /run.

Reference

Java 8 Installation

yum -y list java* lists the installable Java versions.
Packages whose names contain "-devel" are JDKs; the rest are JREs.

yum install -y java-1.8.0-openjdk-devel.x86_64

Find where the JDK was installed:

rpm -ql java-1.8.0-openjdk

It turns out to be under /usr/lib/jvm.
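Hadoop and Spark will both want JAVA_HOME later. A minimal sketch, assuming the yum layout above (the exact directory name under /usr/lib/jvm can differ; check with ls /usr/lib/jvm):

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk    # symlink created by the yum install
export PATH=$JAVA_HOME/bin:$PATH

Append the two lines to /etc/profile and run source /etc/profile to make them stick.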

With that, the yum-based JDK installation is done.

You can also install from the official tarball instead.

Reference

Zookeeper Installation

I installed 3.5.9.

wget https://dlcdn.apache.org/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz
tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz
cd apache-zookeeper-3.5.9-bin/conf
cp zoo_sample.cfg zoo.cfg
cd ..
cd bin
sh zkServer.sh start
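The sample config is enough for this quick start; note it ships with tickTime=2000, initLimit=10, syncLimit=5, clientPort=2181, and dataDir=/tmp/zookeeper. If the node is meant to live longer, move dataDir off /tmp and restart; a sketch from the install directory (the target path is my own choice):

sed -i 's|dataDir=/tmp/zookeeper|dataDir=/opt/zookeeper-data|' conf/zoo.cfg
mkdir -p /opt/zookeeper-data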

Check the server status:

sh zkServer.sh status

Gotcha: since 3.5.5, the tarball with "bin" in its name is the one you want; it contains the compiled binaries and can be used directly, while the plain tar.gz contains only source code and cannot be run as-is.

Start the client:

sh zkCli.sh
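Once the client connects, a few smoke-test commands (the znode name is just an example):

ls /
create /demo "hello"
get /demo
delete /demo
quit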

Reference

Hadoop Installation

Reference
I had planned on 2.7.5, but the Tsinghua mirror only carries 2.10 and 3.3, so 3.3.1 it is.
First set up passwordless SSH for localhost (Hadoop's start scripts need it); see the sketch below.
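A minimal sketch, matching the Hadoop single-node docs (default key type and paths):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
ssh localhost    # should log in without a password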
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/stable/hadoop-3.3.1.tar.gz
tar -zxvf hadoop-3.3.1.tar.gz
HDFS and YARN come bundled. To run them as root on 3.3.1, you first need to add the following to /etc/profile:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Then, from the Hadoop directory, run sbin/start-dfs.sh to launch the NameNode, DataNode, and SecondaryNameNode processes.
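Note that start-dfs.sh also assumes a pseudo-distributed config and a formatted NameNode. A minimal sketch following the Hadoop single-node docs (hdfs://localhost:9000 and replication 1 are the usual single-node assumptions):

In etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

In etc/hadoop/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Then format once and start:

bin/hdfs namenode -format
sbin/start-dfs.sh
jps    # expect NameNode, DataNode, SecondaryNameNode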

Spark 2.3 HA Cluster (Distributed) Installation

Prerequisites:
Java 8 installed
Zookeeper installed
Hadoop 2.7.5 HA installed
Scala installed

2.3 is gone from the mirror; only 2.4 can be downloaded now:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz

With Hadoop 3.3.1, download Spark 3.2.0 instead (built against the Hadoop 3.3 line):

wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-3.2.0/spark-3.2.0-bin-hadoop3.2.tgz
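Either way, extract the tarball after downloading (a sketch shown for the 2.4.8 build; I keep everything under /opt):

cd /opt
tar -zxvf spark-2.4.8-bin-hadoop2.7.tgz
cd spark-2.4.8-bin-hadoop2.7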
  1. Start Zookeeper first
    cd to ZooKeeper/bin and run sh zkServer.sh start
    sh zkServer.sh status shows the running state
    jps should show a QuorumPeerMain process (Zookeeper's entry class)
    If startup fails, run sh zkServer.sh start-foreground to keep it in the foreground with logs

  2. Start Spark
    cd to spark/sbin
    Run Hadoop's start-all.sh to bring up YARN and HDFS (careful: Spark's sbin has its own start-all.sh, which starts the Spark master and workers instead)
    Run sh start-master.sh to start the Spark master
    Run sh start-slave.sh spark://localhost:7077 to start a Spark worker
    (Remember to create spark/conf/spark-env.sh and add SPARK_MASTER_HOST=localhost; a sketch follows.)
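    A sketch of that last step (paths assume the /opt layout used above):

    cd /opt/spark-2.4.8-bin-hadoop2.7/conf
    cp spark-env.sh.template spark-env.sh
    echo 'SPARK_MASTER_HOST=localhost' >> spark-env.sh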

  3. Spark on YARN
    Test the Pi example.
    Run from the Spark directory:

    bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 500m \
    --executor-memory 500m \
    --executor-cores 1 \
    ./examples/jars/spark-examples_2.11-2.4.8.jar \
    10
    

    Reference

    Spark out of memory

    Not successful yet. After INFO yarn.Client: Application report for application_1640188927691_0003 (state: ACCEPTED), it sat forever at INFO yarn.Client: Application report for application_1640188927691_0003 (state: RUNNING); probably just too few resources (1 core / 2 GB).
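    If retrying on a machine this small, YARN's memory limits are one lever worth trying; a hedged sketch for etc/hadoop/yarn-site.xml (these values are my guesses sized for 2 GB of RAM, not verified here):

    <property><name>yarn.nodemanager.resource.memory-mb</name><value>1536</value></property>
    <property><name>yarn.scheduler.minimum-allocation-mb</name><value>128</value></property>
    <property><name>yarn.scheduler.maximum-allocation-mb</name><value>1536</value></property>
    <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>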

  4. Spark on standalone
    4.1. Run the Pi example

    bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master spark://localhost:7077 \
    --executor-memory 500m \
    --total-executor-cores 1 \
    ./examples/jars/spark-examples_2.11-2.4.8.jar \
    100
    

    (For Spark 3.2.0 the jar is spark-examples_2.12-3.2.0.jar.)
    It warned about having no resources; after a few minutes of waiting it succeeded.

    21/12/23 14:13:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    21/12/23 14:13:57 INFO spark.SparkContext: Running Spark version 2.4.8
    21/12/23 14:13:57 INFO spark.SparkContext: Submitted application: Spark Pi
    21/12/23 14:13:58 INFO spark.SecurityManager: Changing view acls to: root
    21/12/23 14:13:58 INFO spark.SecurityManager: Changing modify acls to: root
    21/12/23 14:13:58 INFO spark.SecurityManager: Changing view acls groups to: 
    21/12/23 14:13:58 INFO spark.SecurityManager: Changing modify acls groups to: 
    21/12/23 14:13:58 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
    21/12/23 14:13:58 INFO util.Utils: Successfully started service 'sparkDriver' on port 33713.
    21/12/23 14:13:58 INFO spark.SparkEnv: Registering MapOutputTracker
    21/12/23 14:13:58 INFO spark.SparkEnv: Registering BlockManagerMaster
    21/12/23 14:13:58 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    21/12/23 14:13:58 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    21/12/23 14:13:58 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-619fd2dc-0d04-414a-b314-0421d5c00934
    21/12/23 14:13:58 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
    21/12/23 14:13:58 INFO spark.SparkEnv: Registering OutputCommitCoordinator
    21/12/23 14:13:58 INFO util.log: Logging initialized @3145ms to org.spark_project.jetty.util.log.Slf4jLog
    21/12/23 14:13:58 INFO server.Server: jetty-9.4.z-SNAPSHOT; built: unknown; git: unknown; jvm 1.8.0_312-b07
    21/12/23 14:13:58 INFO server.Server: Started @3383ms
    21/12/23 14:13:59 INFO server.AbstractConnector: Started ServerConnector@79ab3a71{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
    21/12/23 14:13:59 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44ea608c{/jobs,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@515f4131{/jobs/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74518890{/jobs/job,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f3ddbd9{/jobs/job/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14c053c6{/stages,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6c2d4cc6{/stages/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30865a90{/stages/stage,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@71b1a49c{/stages/stage/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@73e132e0{/stages/pool,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3773862a{/stages/pool/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2472c7d8{/storage,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@589b028e{/storage/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@22175d4f{/storage/rdd,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@9fecdf1{/storage/rdd/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b809711{/environment,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b0f7d9d{/environment/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@236ab296{/executors,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c84624f{/executors/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@63034ed1{/executors/threadDump,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@232024b9{/executors/threadDump/json,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55a8dc49{/static,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e406694{/,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ab9b447{/api,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15b986cd{/jobs/job/kill,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6bb7cce7{/stages/stage/kill,null,AVAILABLE,@Spark}
    21/12/23 14:13:59 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://iZwz915iahvm5k8atqdtj2Z:4040
    21/12/23 14:13:59 INFO spark.SparkContext: Added JAR file:/opt/spark-2.4.8-bin-hadoop2.7/./examples/jars/spark-examples_2.11-2.4.8.jar at spark://iZwz915iahvm5k8atqdtj2Z:33713/jars/spark-examples_2.11-2.4.8.jar with timestamp 1640240039193
    21/12/23 14:13:59 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://localhost:7077...
    21/12/23 14:13:59 INFO client.TransportClientFactory: Successfully created connection to localhost/127.0.0.1:7077 after 67 ms (0 ms spent in bootstraps)
    21/12/23 14:13:59 INFO cluster.StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20211223141359-0000
    21/12/23 14:13:59 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37767.
    21/12/23 14:13:59 INFO netty.NettyBlockTransferService: Server created on iZwz915iahvm5k8atqdtj2Z:37767
    21/12/23 14:13:59 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
    21/12/23 14:13:59 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, iZwz915iahvm5k8atqdtj2Z, 37767, None)
    21/12/23 14:13:59 INFO storage.BlockManagerMasterEndpoint: Registering block manager iZwz915iahvm5k8atqdtj2Z:37767 with 413.9 MB RAM, BlockManagerId(driver, iZwz915iahvm5k8atqdtj2Z, 37767, None)
    21/12/23 14:13:59 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, iZwz915iahvm5k8atqdtj2Z, 37767, None)
    21/12/23 14:13:59 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, iZwz915iahvm5k8atqdtj2Z, 37767, None)
    21/12/23 14:14:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e48e859{/metrics/json,null,AVAILABLE,@Spark}
    21/12/23 14:14:00 INFO cluster.StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
    21/12/23 14:14:01 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 100 output partitions
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Parents of final stage: List()
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Missing parents: List()
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
    21/12/23 14:14:01 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 413.9 MB)
    21/12/23 14:14:01 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1381.0 B, free 413.9 MB)
    21/12/23 14:14:01 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on iZwz915iahvm5k8atqdtj2Z:37767 (size: 1381.0 B, free: 413.9 MB)
    21/12/23 14:14:01 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1184
    21/12/23 14:14:01 INFO scheduler.DAGScheduler: Submitting 100 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
    21/12/23 14:14:01 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 100 tasks
    21/12/23 14:14:16 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
    21/12/23 14:14:27 INFO client.StandaloneAppClient$ClientEndpoint: Master removed worker worker-20211223114534-172.18.92.61-40105: Not responding for recovery
    21/12/23 14:14:27 INFO cluster.StandaloneSchedulerBackend: Worker worker-20211223114534-172.18.92.61-40105 removed: Not responding for recovery
    21/12/23 14:14:27 INFO scheduler.TaskSchedulerImpl: Handle removed worker worker-20211223114534-172.18.92.61-40105: Not responding for recovery
    21/12/23 14:14:27 INFO scheduler.DAGScheduler: Shuffle files lost for worker worker-20211223114534-172.18.92.61-40105 on host 172.18.92.61
    21/12/23 14:14:27 INFO client.StandaloneAppClient$ClientEndpoint: Executor added: app-20211223141359-0000/0 on worker-20211223141327-172.18.92.61-34607 (172.18.92.61:34607) with 1 core(s)
    21/12/23 14:14:27 INFO cluster.StandaloneSchedulerBackend: Granted executor ID app-20211223141359-0000/0 on hostPort 172.18.92.61:34607 with 1 core(s), 500.0 MB RAM
    21/12/23 14:14:27 INFO client.StandaloneAppClient$ClientEndpoint: Executor updated: app-20211223141359-0000/0 is now RUNNING
    21/12/23 14:14:31 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
    21/12/23 14:14:31 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.18.92.61:40782) with ID 0
    21/12/23 14:14:31 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 172.18.92.61, executor 0, partition 0, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:14:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.18.92.61:35261 with 110.0 MB RAM, BlockManagerId(0, 172.18.92.61, 35261, None)
    21/12/23 14:14:34 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.18.92.61:35261 (size: 1381.0 B, free: 110.0 MB)
    21/12/23 14:19:36 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 172.18.92.61, executor 0, partition 1, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 172.18.92.61, executor 0, partition 2, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 172.18.92.61, executor 0, partition 3, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 172.18.92.61, executor 0, partition 4, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 172.18.92.61, executor 0, partition 5, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 172.18.92.61, executor 0, partition 6, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, 172.18.92.61, executor 0, partition 7, PROCESS_LOCAL, 7870 bytes)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 309907 ms on 172.18.92.61 (executor 0) (1/100)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 172 ms on 172.18.92.61 (executor 0) (2/100)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 144 ms on 172.18.92.61 (executor 0) (3/100)
    21/12/23 14:19:41 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 