Apache Flume

1. Overview

Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple, flexible architecture built around streaming log data flows, and it is robust and fault tolerant thanks to tunable reliability mechanisms and many failover and recovery mechanisms. This architecture can feed real-time, online analysis of log streams. Flume lets you plug custom data senders into a logging system to collect data, and it can perform simple processing on the data before writing it to various (customizable) data receivers. Flume currently has two major lines: the 0.9.x releases are collectively called Flume OG, and the 1.x releases are called Flume NG. Flume NG was heavily refactored and differs significantly from Flume OG, so be careful to distinguish between them. This chapter uses apache-flume-1.9.0-bin.tar.gz.

Official documentation: http://flume.apache.org/releases/content/1.9.0/FlumeUserGuide.html

2. Architecture


Core components: Agent { Source -> Channel -> Sink }. An agent is a JVM process that receives events through its Source, buffers them in its Channel, and forwards them through its Sink.

3. Installation

  • Install JDK 1.8+ and configure the environment variables
  • Install Flume
[root@CentOSA ~]# tar -zxf  apache-flume-1.9.0-bin.tar.gz -C /usr/
[root@CentOSA ~]# cd /usr/apache-flume-1.9.0-bin/
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng version
Flume 1.9.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: d4fcab4f501d41597bc616921329a4339f73585e
Compiled by fszabo on Mon Dec 17 20:45:25 CET 2018
From source with checksum 35db629a3bda49d23e9b3690c80737f9

4. Quick Start

(1) Agent Configuration

1. Configuration file path

/usr/apache-flume-1.9.0-bin/conf/XXX.properties

2. Configuration file template

# Declare the components
<Agent>.sources = <Source1> <Source2>
<Agent>.sinks = <Sink1> <Sink2>
<Agent>.channels = <Channel1> <Channel2>

# Configure the components
<Agent>.sources.<Source>.<someProperty> = <someValue>
<Agent>.channels.<Channel>.<someProperty> = <someValue>
<Agent>.sinks.<Sink>.<someProperty> = <someValue>

# Wire the components together
<Agent>.sources.<Source>.channels = <Channel1> <Channel2> ...
<Agent>.sinks.<Sink>.channel = <Channel1>

(2) Getting-Started Example


1. Configuration files

  • First machine
[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = TAILDIR
a1.sources.s1.filegroups = f1
a1.sources.s1.filegroups.f1 = /root/logs/userLoginFile.*

a1.channels.c1.type = memory

a1.sinks.sk1.type = avro
a1.sinks.sk1.hostname = 192.168.111.133
a1.sinks.sk1.port = 44444

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1
  • Second machine
[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.133
a1.sources.s1.port = 44444

a1.channels.c1.type = memory

a1.sinks.sk1.type = file_roll
a1.sinks.sk1.sink.directory = /root/file_roll
a1.sinks.sk1.sink.rollInterval = 0

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1

2. Start the agents

  • Start the second machine first (its Avro Source must be listening before the first machine's Avro Sink connects)
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties --name a1 -Dflume.root.logger=INFO,console
  • Start the first machine
[root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties  --name a1

Avro Source (Important)

  • An Avro Sink is typically used to write events directly to an Avro Source. This pattern usually means Flume is collecting local log files, in which case the application server generally must be deployed on the same physical host as the agent. (Server-side log collection.)
  • On mobile devices, the application calls the SDK that Flume exposes and sends data directly to an Avro Source, shipping the log data produced by the client application to the remote agent. (A minimal client sketch follows this list.)
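As a minimal sketch (the class name, host, port, and message body here are illustrative assumptions), a client application can use the flume-ng-sdk RpcClient to push an event straight into an Avro Source such as the one on the second machine above:

import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroSourceClient {
    public static void main(String[] args) throws EventDeliveryException {
        // Connect to the agent's Avro Source (host/port taken from the example config above)
        RpcClient client = RpcClientFactory.getDefaultInstance("192.168.111.133", 44444);
        try {
            // Build an event from a plain text body and send it
            Event event = EventBuilder.withBody("hello flume", StandardCharsets.UTF_8);
            client.append(event);
        } finally {
            client.close();
        }
    }
}

RpcClientFactory.getDefaultInstance creates a single-host Avro client; the load-balancing variant shown in section 6 spreads events across several Avro Sources.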

5. Common Components

(1) Common Component List

Source (collects data from external systems):
  • Avro Source: collects data remotely over the Avro protocol
  • Thrift Source: collects data remotely over the Thrift protocol
  • Exec Source: captures a command's console (standard output) and feeds it into the agent
  • Spooling Directory Source: collects static files dropped into a specified directory
  • Taildir Source: tails text log files and collects newly appended lines
  • Kafka Source: collects data from a Kafka message queue

Channel (buffers events):
  • Memory Channel: buffers events in memory
  • JDBC Channel: buffers events in an embedded Derby database file
  • Kafka Channel: buffers events in Kafka
  • File Channel: buffers events on the local file system

Sink (writes data out):
  • Avro Sink: writes events to an Avro Source over the Avro protocol
  • Thrift Sink: writes events to a Thrift Source over the Thrift protocol
  • HDFS Sink: writes collected data directly to HDFS
  • File Roll Sink: writes collected data directly to local files
  • Kafka Sink: writes collected data directly to Kafka

(2) Common Component Combinations

1. Spooling Directory Source | File Roll Sink | JDBC Channel

Collect static files from a specified directory | write the collected data directly to local files | buffer events in an embedded Derby database file

[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
[root@CentOSA ~]# mkdir /root/{spooldir,file_roll}
# Declare the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the components
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/spooldir
a1.sources.r1.deletePolicy = never
a1.sources.r1.fileSuffix = .DONE
a1.sources.r1.includePattern = ^.*\\.log$

a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /root/file_roll
a1.sinks.k1.sink.rollInterval = 0

a1.channels.c1.type = jdbc

# Wire the components together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties --name a1

2. Taildir Source | HDFS Sink | File Channel

Tail text log files and collect newly appended lines | write the collected data directly to HDFS | buffer events on the local file system

[root@CentOSA apache-flume-1.9.0-bin]# vi conf/demo01.properties
[root@CentOSA ~]# mkdir /root/{tail_dir1,tail_dir2}
# Declare the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1 f2
a1.sources.r1.filegroups.f1 = /root/tail_dir1/.*log.*
a1.sources.r1.filegroups.f2 = /root/tail_dir2/.*log.*

# Configure the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/logs/%Y-%m-%d/
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream

a1.channels.c1.type = file

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Note: the host running this agent must have HADOOP_HOME configured in its environment so that the HDFS Sink can find the Hadoop client libraries.
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties --name a1

3. Avro Source | Memory Channel | Logger Sink

Collect data remotely over the Avro protocol | buffer events in memory | write events to the log (console)

[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = CentOS
a1.sources.r1.port = 44444
# Configure the sink
a1.sinks.k1.type = logger
# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Wire the source to the channel
a1.sources.r1.channels = c1
# Wire the sink to the channel
a1.sinks.k1.channel = c1
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties --name a1

4. Avro Source | Memory Channel | Kafka Sink

Collect data remotely over the Avro protocol | buffer events in memory | write the collected data directly to Kafka

[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = CentOSA
a1.sources.r1.port = 44444
# Configure the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = CentOSA:9092,CentOSB:9092,CentOSC:9092
a1.sinks.k1.kafka.topic = topicflume
# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@CentOSA apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties --name a1

6. API Integration (Cluster)

  • pom.xml
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
  • Configuration file demo02.properties (Avro Source | Memory Channel | Kafka Sink)
# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

a1.channels.c1.type = memory

a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = topic01
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1

Note: the correct property key is a1.sinks.sk1.flumeBatchSize; the official documentation mistakenly writes it as a1.sinks.sk1.kafka.flumeBatchSize.

  • Start Flume
[root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo02.properties  --name a1
  • Java client code (uses a load-balancing RPC client across the configured hosts)
import java.util.Properties;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientConfigurationConstants;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeRpcClientDemo {
    public static void main(String[] args) throws EventDeliveryException {
        Properties props = new Properties();
        props.setProperty(RpcClientConfigurationConstants.CONFIG_CLIENT_TYPE, "avro");
        // Switch to a load-balancing client that spreads events across several Avro Sources
        props.put("client.type", "default_loadbalance");
        props.put("hosts", "h1 h2 h3");
        String host1 = "192.168.111.133:44444";
        String host2 = "192.168.111.133:44444";
        String host3 = "192.168.111.133:44444";
        props.put("hosts.h1", host1);
        props.put("hosts.h2", host2);
        props.put("hosts.h3", host3);
        props.put("host-selector", "random"); // or round_robin

        RpcClient client = RpcClientFactory.getInstance(props);
        Event event = EventBuilder.withBody("1 zhangsan true 28".getBytes());
        client.append(event);

        client.close();
    }
}

7. Integrating Flume with log4j

  • pom.xml
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.9.0</version>
</dependency>
  • log4j.properties
log4j.appender.flume = org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
log4j.appender.flume.Hosts = 192.168.111.132:44444 192.168.111.132:44444 192.168.111.132:44444
log4j.appender.flume.Selector = ROUND_ROBIN
log4j.appender.flume.MaxBackoff = 30000

log4j.logger.com.baizhi = DEBUG,flume
  • Test code
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class TestLog {
    private static Log log= LogFactory.getLog(TestLog.class);
    public static void main(String[] args) {
        log.debug("你好!_debug");
        log.info("你好!_info");
        log.warn("你好!_warn");
        log.error("你好!_error");
    }
}

8. Integrating Flume with logback

(1) Dependency Integration

  • pom.xml
<dependency>
   <groupId>com.teambytes.logback</groupId>
   <artifactId>logback-flume-appender_2.10</artifactId>
   <version>0.0.9</version>
</dependency>
  • Add logback.xml to the Spring Boot project
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">              
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">                     
            <fileNamePattern>logs/userLoginFile-%d{yyyyMMdd}.log</fileNamePattern>                    
            <maxHistory>30</maxHistory>                 
        </rollingPolicy>           
        <encoder>                    
            <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
            <charset>UTF-8</charset>                  
        </encoder>
    </appender>

    <appender name="FLUME" class="com.teambytes.logback.flume.FlumeLogstashV1Appender">
        <flumeAgents>
            192.168.150.101:44444
        </flumeAgents>
        <flumeProperties>
            connect-timeout=4000;
            request-timeout=8000
        </flumeProperties>
        <batchSize>1</batchSize>
        <reportingWindow>1000</reportingWindow>
        <additionalAvroHeaders>
            myHeader = myValue
        </additionalAvroHeaders>
        <application>demo01</application>
        <layout class="ch.qos.logback.classic.PatternLayout">             
            <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
        </layout>
    </appender>

    <!-- Console output log level -->
    <root level="ERROR">
        <appender-ref ref="STDOUT"/>
    </root>
    <!-- With additivity=false, these log events are not also propagated to the parent logger's appenders -->
    <logger name="com.baizhi.test" level="DEBUG" additivity="false">
        <appender-ref ref="FLUME"/>
    </logger>
</configuration>
  • Test code
package com.baizhi.test; // must match the logger name configured in logback.xml
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestSpringBootLog {
    private static final Logger LOG = LoggerFactory.getLogger(TestSpringBootLog.class);
    public static void main(String[] args) {
        LOG.info("-----------------------");
    }
}

(2) Source-Level Integration

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
  • Add the Flume appender to the project's logback.xml
<appender name="flume" class="com.gilt.logback.flume.FlumeLogstashV1Appender">
    <flumeAgents>
        192.168.111.132:44444,
        192.168.111.132:44444,
        192.168.111.132:44444
    </flumeAgents>
    <flumeProperties>
        connect-timeout=4000;
        request-timeout=8000
    </flumeProperties>
    <batchSize>1</batchSize>
    <reportingWindow>1000</reportingWindow>
    <additionalAvroHeaders>
        myHeader=myValue
    </additionalAvroHeaders>
    <application>smapleapp</application>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
    </layout>
</appender>

(3) Custom Appender Integration

  • Add the Flume SDK matching the current Flume version to the Spring Boot project
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
  • Custom appender implementation (modeled on the appender source code)
import java.nio.charset.Charset;
import java.util.Properties;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Layout;
import ch.qos.logback.core.UnsynchronizedAppenderBase;

public class BZFlumeLogAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {
    // Comma-separated list of host:port pairs, injected from logback.xml via <flumeAgents>
    private String flumeAgents;
    protected Layout<ILoggingEvent> layout;
    private static RpcClient rpcClient;

    @Override
    protected void append(ILoggingEvent eventObject) {
        // Format the logging event with the configured layout, falling back to the raw message
        String body = layout != null ? layout.doLayout(eventObject) : eventObject.getFormattedMessage();
        if (rpcClient == null) {
            rpcClient = buildRpcClient();
        }
        Event event = EventBuilder.withBody(body, Charset.forName("UTF-8"));
        try {
            rpcClient.append(event);
        } catch (EventDeliveryException e) {
            e.printStackTrace();
        }
    }

    public void setFlumeAgents(String flumeAgents) {
        this.flumeAgents = flumeAgents;
    }

    public void setLayout(Layout<ILoggingEvent> layout) {
        this.layout = layout;
    }

    private RpcClient buildRpcClient() {
        Properties props = new Properties();

        // Register each agent address as hosts.h0, hosts.h1, ...
        int i = 0;
        for (String agent : flumeAgents.split(",")) {
            String[] tokens = agent.trim().split(":");
            props.put("hosts.h" + (i++), tokens[0] + ':' + tokens[1]);
        }
        StringBuffer buffer = new StringBuffer(i * 4);
        for (int j = 0; j < i; j++) {
            buffer.append("h").append(j).append(" ");
        }
        props.put("hosts", buffer.toString());

        // Use a round-robin load-balancing client when more than one agent is configured
        if (i > 1) {
            props.put("client.type", "default_loadbalance");
            props.put("host-selector", "round_robin");
        }

        props.put("backoff", "true");
        props.put("maxBackoff", "10000");

        return RpcClientFactory.getInstance(props);
    }
}
  • Register the custom appender in the project's logback.xml
<appender name="bz" class="com.baizhi.flume.BZFlumeLogAppender">
    <flumeAgents>
        192.168.111.132:44444,192.168.111.132:44444
    </flumeAgents>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m</pattern>
    </layout>
</appender>

9. Flume to HDFS (Static Batch Processing)

Collect the log files in a directory into HDFS and delete each file once it has been fully collected (suited to batch-processing jobs).

Spooling Directory Source | JDBC Channel | HDFS Sink

# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = spooldir
a1.sources.s1.spoolDir = /root/spooldir
a1.sources.s1.deletePolicy = immediate
a1.sources.s1.includePattern = ^.*\\.log$

a1.channels.c1.type = jdbc

a1.sinks.sk1.type = hdfs
a1.sinks.sk1.hdfs.path= hdfs:///flume/%y-%m-%d/
a1.sinks.sk1.hdfs.filePrefix = events-
a1.sinks.sk1.hdfs.useLocalTimeStamp = true
a1.sinks.sk1.hdfs.rollInterval = 0
a1.sinks.sk1.hdfs.rollSize = 0 
a1.sinks.sk1.hdfs.rollCount = 0
a1.sinks.sk1.hdfs.fileType = DataStream

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1

10. Interceptors & Channel Selectors

Interceptors and channel selectors sit between the Source and the Channel.


Log-splitting example:

Requirement: collect only the log stream of the user module; send records that need evaluation to the evaluatetopic Kafka topic and all other user-module records to the usertopic topic. (A small test client is sketched after the configuration below.)

# Declare the components
a1.sources = s1
a1.sinks = sk1 sk2
a1.channels = c1 c2

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

# Interceptors
a1.sources.s1.interceptors = i1 i2
# Regex filtering interceptor (keep only UserController events)
a1.sources.s1.interceptors.i1.type = regex_filter
a1.sources.s1.interceptors.i1.regex = .*UserController.*
a1.sources.s1.interceptors.i1.excludeEvents = false
# Regex extractor interceptor (store the matched EVALUATE/SUCCESS value in the 'type' header)
a1.sources.s1.interceptors.i2.type = regex_extractor
a1.sources.s1.interceptors.i2.regex = .*(EVALUATE|SUCCESS).*
a1.sources.s1.interceptors.i2.serializers = s1
a1.sources.s1.interceptors.i2.serializers.s1.name = type
# Configure the channels
a1.channels.c1.type = memory
a1.channels.c2.type = memory

a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = evaluatetopic
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

a1.sinks.sk2.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk2.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk2.kafka.topic = usertopic
a1.sinks.sk2.flumeBatchSize = 20
a1.sinks.sk2.kafka.producer.acks = 1
a1.sinks.sk2.kafka.producer.linger.ms = 1

# Multiplexing channel selector: route events by the 'type' header
a1.sources.s1.selector.type = multiplexing
a1.sources.s1.selector.header = type
a1.sources.s1.selector.mapping.EVALUATE = c1
a1.sources.s1.selector.mapping.SUCCESS = c2
a1.sources.s1.selector.default = c2

# Wire the components together
a1.sources.s1.channels = c1 c2
a1.sinks.sk1.channel = c1
a1.sinks.sk2.channel = c2
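
To verify the routing above, a small hypothetical test client (class name, host, port, and message bodies are assumptions for illustration) can send one event of each kind to the Avro Source; the first should end up in evaluatetopic and the second in usertopic:

import java.nio.charset.StandardCharsets;

import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class SelectorTestClient {
    public static void main(String[] args) throws EventDeliveryException {
        // Connect to the agent's Avro Source (address taken from the config above)
        RpcClient client = RpcClientFactory.getDefaultInstance("192.168.111.132", 44444);
        try {
            // Passes the regex_filter and is tagged type=EVALUATE -> channel c1 -> evaluatetopic
            client.append(EventBuilder.withBody(
                    "INFO UserController#evaluate EVALUATE order=1001", StandardCharsets.UTF_8));
            // Passes the regex_filter and is tagged type=SUCCESS -> channel c2 -> usertopic
            client.append(EventBuilder.withBody(
                    "INFO UserController#login SUCCESS user=zhangsan", StandardCharsets.UTF_8));
        } finally {
            client.close();
        }
    }
}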

11. Sink Processors

A sink processor sits between the channel and the sinks.


# Declare the components
a1.sources = s1
a1.sinks = sk1 sk2 
a1.channels = c1

# Group sk1 and sk2 into a single sink group with a load-balancing processor
a1.sinkgroups = g1 
a1.sinkgroups.g1.sinks = sk1 sk2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin

# Configure the source
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

# Configure the sinks
a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = evaluatetopic
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

a1.sinks.sk2.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk2.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk2.kafka.topic = usertopic
a1.sinks.sk2.flumeBatchSize = 20
a1.sinks.sk2.kafka.producer.acks = 1
a1.sinks.sk2.kafka.producer.linger.ms = 1

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.transactionCapacity = 1

# Wire the source and sinks to the channel
a1.sources.s1.channels = c1 
a1.sinks.sk1.channel = c1
a1.sinks.sk2.channel = c1
