Apache Flink Chapter 2 Lecture Notes

Program Deployment

Local Execution

//1. Create the stream execution environment (local mode, parallelism 3)
val env = StreamExecutionEnvironment.createLocalEnvironment(3)

//2. Create the DataStream (source)
val text = env.socketTextStream("CentOS", 9999)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")

Remote Deployment

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val text = env.socketTextStream("CentOS", 9999)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")

StreamExecutionEnvironment.getExecutionEnvironment detects the runtime environment automatically. When the program runs inside an IDE such as IDEA, it switches to local mode and the default parallelism is the maximum number of available threads, equivalent to local[*] in Spark. In a production environment, the user should specify the parallelism with --parallelism when submitting the job.
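
The job-wide default can also be overridden in code. A minimal sketch (the value 4 is only illustrative); a value set with setParallelism takes precedence over the --parallelism passed to flink run:

//getExecutionEnvironment picks local or cluster mode automatically
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Optionally override the job-wide default parallelism in code;
//this takes precedence over the --parallelism submitted on the command line
env.setParallelism(4)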

  • Deployment methods
    • Web UI deployment (omitted)
    • Script-based deployment
[root@CentOS ~]# cd /usr/flink-1.10.0/
[root@CentOS flink-1.10.0]# ./bin/flink run \
                            --class com.baizhi.quickstart.FlinkWordCountQiuckStart \
                            --detached \
                            --parallelism 4 \
                            --jobmanager CentOS:8081 \
                            /root/flink-datastream-1.0-SNAPSHOT.jar
Job has been submitted with JobID f2019219e33261de88a1678fdc78c696

--detached submits the job in the background, --parallelism 4 sets the job's default parallelism, and --jobmanager CentOS:8081 is the target JobManager.

List existing jobs

[root@CentOS flink-1.10.0]# ./bin/flink list --running --jobmanager CentOS:8081 
Waiting for response...
------------------ Running/Restarting Jobs -------------------
01.03.2020 05:38:16 : f2019219e33261de88a1678fdc78c696 : Window Stream WordCount (RUNNING)
--------------------------------------------------------------
No scheduled jobs.

[root@CentOS flink-1.10.0]# ./bin/flink list --all  --jobmanager CentOS:8081  
Waiting for response...
------------------ Running/Restarting Jobs -------------------
01.03.2020 05:44:29 : ddfc2ddfb6dc05910a887d61a0c01392 : Window Stream WordCount (RUNNING)
--------------------------------------------------------------
No scheduled jobs.
---------------------- Terminated Jobs -----------------------
01.03.2020 05:36:28 : f216d38bfef7745b36e3151855a18ebd : Window Stream WordCount (CANCELED)
01.03.2020 05:38:16 : f2019219e33261de88a1678fdc78c696 : Window Stream WordCount (CANCELED)
--------------------------------------------------------------

Cancel a specific job

[root@CentOS flink-1.10.0]# ./bin/flink cancel  --jobmanager CentOS:8081 f2019219e33261de88a1678fdc78c696  
Cancelling job f2019219e33261de88a1678fdc78c696.
Cancelled job f2019219e33261de88a1678fdc78c696.

View the job execution plan

[root@CentOS flink-1.10.0]# ./bin/flink info --class com.baizhi.quickstart.FlinkWordCountQiuckStart  --parallelism 4   /root/flink-datastream-1.0-SNAPSHOT.jar 
----------------------- Execution Plan -----------------------
{"nodes":[{"id":1,"type":"Source: Socket Stream","pact":"Data Source","contents":"Source: Socket Stream","parallelism":1},{"id":2,"type":"Flat Map","pact":"Operator","contents":"Flat Map","parallelism":4,"predecessors":[{"id":1,"ship_strategy":"REBALANCE","side":"second"}]},{"id":3,"type":"Map","pact":"Operator","contents":"Map","parallelism":4,"predecessors":[{"id":2,"ship_strategy":"FORWARD","side":"second"}]},{"id":5,"type":"aggregation","pact":"Operator","contents":"aggregation","parallelism":4,"predecessors":[{"id":3,"ship_strategy":"HASH","side":"second"}]},{"id":6,"type":"Sink: Print to Std. Out","pact":"Data Sink","contents":"Sink: Print to Std. Out","parallelism":4,"predecessors":[{"id":5,"ship_strategy":"FORWARD","side":"second"}]}]}
--------------------------------------------------------------

No description provided.

You can open https://flink.apache.org/visualizer/ and paste the JSON output above into it to view the Flink execution plan as a graph.
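
The same JSON can also be printed from inside the program (as the UserDefinedSource example later does) by asking the environment for its plan before calling execute():

//Print the execution plan JSON; paste it into the visualizer to render the graph
println(env.getExecutionPlan)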

Cross-platform Submission

object FlinkWordCountQiuckStartCorssPlatform {
  def main(args: Array[String]): Unit = {
    //1. Create the stream execution environment, pointing at the remote cluster
    //   and shipping the job jar along with the submission
    val jars = "/Users/admin/IdeaProjects/20200203/flink-datastream/target/flink-datastream-1.0-SNAPSHOT.jar"
    val env = StreamExecutionEnvironment.createRemoteEnvironment("CentOS", 8081, jars)
    //Set the default parallelism
    env.setParallelism(4)

    //2. Create the DataStream (source)
    val text = env.socketTextStream("CentOS", 9999)

    //3. Apply the transformation operators
    val counts = text.flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .keyBy(0)
      .sum(1)

    //4. Print the result to the console
    counts.print()

    //5. Trigger the stream job
    env.execute("Window Stream WordCount")
  }
}

Before running, repackage the program with Maven (e.g. mvn clean package) so that the jar referenced above is up to date; then simply run the main method and the job is submitted to the remote cluster.

Streaming (DataStream API)

DataSource

A data source is where the program reads its data from. A source is added to the program with env.addSource(sourceFunction). Flink ships with many built-in SourceFunction implementations, and users can write their own by implementing SourceFunction (non-parallel) or ParallelSourceFunction (parallel); if state management or lifecycle hooks are also needed, extend RichParallelSourceFunction.

File-based
  • readTextFile(path) - Reads (once) text files, i.e. files that respect the TextInputFormat specification, line-by-line and returns them as Strings.
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val text: DataStream[String] = env.readTextFile("hdfs://CentOS:9000/demo/words")

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
  • readFile(fileInputFormat, path) - Reads (once) files as dictated by the specified file input format.
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val inputFormat: FileInputFormat[String] = new TextInputFormat(null)
val text: DataStream[String] = env.readFile(inputFormat, "hdfs://CentOS:9000/demo/words")

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
  • readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) - This is the method called internally by the two previous ones. It reads files in the path based on the given fileInputFormat. Depending on the provided watchType, this source may periodically monitor (every interval ms) the path for new data (FileProcessingMode.PROCESS_CONTINUOUSLY), or process once the data currently in the path and exit (FileProcessingMode.PROCESS_ONCE). Using the pathFilter, the user can further exclude files from being processed.
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source), re-scanning the path every 1000 ms
val inputFormat: FileInputFormat[String] = new TextInputFormat(null)
val text: DataStream[String] = env.readFile(inputFormat,
  "hdfs://CentOS:9000/demo/words", FileProcessingMode.PROCESS_CONTINUOUSLY, 1000)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")

This method monitors the files under the given directory; when a file changes, Flink re-reads the whole file, which can lead to records being processed more than once. In general, do not modify existing files in place; upload new files instead.
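
The pathFilter mentioned in the readFile description above can be supplied on the input format itself. A minimal sketch, assuming the same TextInputFormat as above (the ".tmp" rule is only an example), that skips files still being uploaded:

import org.apache.flink.api.common.io.FilePathFilter
import org.apache.flink.core.fs.Path

//Filter out files whose names end with ".tmp" (returning true excludes the path)
val filteredFormat: FileInputFormat[String] = new TextInputFormat(null)
filteredFormat.setFilesFilter(new FilePathFilter {
  override def filterPath(filePath: Path): Boolean = filePath.getName.endsWith(".tmp")
})

val filteredText = env.readFile(filteredFormat,
  "hdfs://CentOS:9000/demo/words", FileProcessingMode.PROCESS_CONTINUOUSLY, 1000)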

Socket-based
  • socketTextStream - Reads from a socket. Elements can be separated by a delimiter.
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source): host, port, delimiter, max retries
val text = env.socketTextStream("CentOS", 9999, '\n', 3)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
Collection-based
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source) from an in-memory collection
val text = env.fromCollection(List("this is a demo","hello word"))

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
UserDefinedSource
  • SourceFunction
import org.apache.flink.streaming.api.functions.source.SourceFunction

import scala.util.Random

class UserDefinedNonParallelSourceFunction extends SourceFunction[String]{
  @volatile //keep the flag visible to all threads (no per-thread caching)
  var isRunning: Boolean = true
  val lines: Array[String] = Array("this is a demo", "hello world", "ni hao ma")

  //run() produces the data and emits it downstream via sourceContext.collect
  override def run(sourceContext: SourceFunction.SourceContext[String]): Unit = {
    while (isRunning) {
      Thread.sleep(100)
      //emit one record downstream
      sourceContext.collect(lines(new Random().nextInt(lines.size)))
    }
  }
  //cancel() is called when the job is cancelled; stop the loop and release resources
  override def cancel(): Unit = {
    isRunning = false
  }
}
  • ParallelSourceFunction
import org.apache.flink.streaming.api.functions.source.{ParallelSourceFunction, SourceFunction}

import scala.util.Random

class UserDefinedParallelSourceFunction extends ParallelSourceFunction[String]{
  @volatile //keep the flag visible to all threads (no per-thread caching)
  var isRunning: Boolean = true
  val lines: Array[String] = Array("this is a demo", "hello world", "ni hao ma")

  //run() produces the data and emits it downstream via sourceContext.collect
  override def run(sourceContext: SourceFunction.SourceContext[String]): Unit = {
    while (isRunning) {
      Thread.sleep(100)
      //emit one record downstream
      sourceContext.collect(lines(new Random().nextInt(lines.size)))
    }
  }
  //cancel() is called when the job is cancelled; stop the loop
  override def cancel(): Unit = {
    isRunning = false
  }
}
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
//2. Create the DataStream (source) from the user-defined SourceFunction
val text = env.addSource[String](new UserDefinedParallelSourceFunction) //or new UserDefinedNonParallelSourceFunction

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

println(env.getExecutionPlan) //print the execution plan JSON

//5. Trigger the stream job
env.execute("Window Stream WordCount")

Kafka Integration
  • Add the Maven dependency
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
  • SimpleStringSchema

SimpleStringSchema only deserializes the value of the Kafka record; the key, partition, and offset are discarded.

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
val text = env.addSource(new FlinkKafkaConsumer[String]("topic01",new SimpleStringSchema(),props))
//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
  • KafkaDeserializationSchema
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.flink.api.scala._

class UserDefinedKafkaDeserializationSchema extends KafkaDeserializationSchema[(String,String,Int,Long)]{

  override def isEndOfStream(t: (String, String, Int, Long)): Boolean = false

  override def deserialize(consumerRecord: ConsumerRecord[Array[Byte], Array[Byte]]): (String, String, Int, Long) = {
    if(consumerRecord.key()!=null){
      (new String(consumerRecord.key()),new String(consumerRecord.value()),consumerRecord.partition(),consumerRecord.offset())
    }else{
      (null,new String(consumerRecord.value()),consumerRecord.partition(),consumerRecord.offset())
    }
  }

  override def getProducedType: TypeInformation[(String, String, Int, Long)] = {
    createTypeInformation[(String, String, Int, Long)]
  }
}

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
val text = env.addSource(new FlinkKafkaConsumer[(String,String,Int,Long)]("topic01",new UserDefinedKafkaDeserializationSchema(),props))
//3. Apply the transformation operators (the record value is the second tuple field)
val counts = text.flatMap(t => t._2.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to the console
counts.print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")
  • JSONKeyValueDeserializationSchema

This schema requires both the key and the value of the Kafka record to be JSON. The constructor flag controls whether record metadata (topic, partition, offset) is included in the resulting ObjectNode.

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
//value sent to Kafka: {"id":1,"name":"zhangsan"}
val text = env.addSource(new FlinkKafkaConsumer[ObjectNode]("topic01",new JSONKeyValueDeserializationSchema(true),props))
//t: {"value":{"id":1,"name":"zhangsan"},"metadata":{"offset":0,"topic":"topic01","partition":13}}
text.map(t => (t.get("value").get("id").asInt(), t.get("value").get("name").asText()))
  .print()

//5. Trigger the stream job
env.execute("Window Stream WordCount")

Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/kafka.html

Data Sinks

A data sink consumes a DataStream and forwards it to a file, a socket, an external system, or prints it. Flink ships with a variety of built-in output formats, exposed as operations on DataStream.

File-based
  • writeAsText() / TextOutputFormat - Writes elements line-wise as Strings. The Strings are obtained by calling the toString() method of each element.

  • writeAsCsv(…) / CsvOutputFormat - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the toString() method of the objects.

  • writeUsingOutputFormat() / FileOutputFormat - Method and base class for custom file outputs. Supports custom object-to-bytes conversion.

Note that the write*() methods on DataStream are mainly intended for debugging purposes.
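
For writeAsText and writeAsCsv, a minimal sketch (the local output paths are only examples), applied to a (String, Int) stream like the counts built in the example below:

//writeAsText writes one toString() per line; writeAsCsv writes tuple fields comma-separated
counts.writeAsText("file:///Users/admin/Desktop/wordcount-text")
counts.writeAsCsv("file:///Users/admin/Desktop/wordcount-csv")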

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val text = env.socketTextStream("CentOS", 9999)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Write the result to the local file system
counts.writeUsingOutputFormat(new TextOutputFormat[(String, Int)](new Path("file:///Users/admin/Desktop/flink-results")))

//5. Trigger the stream job
env.execute("Window Stream WordCount")

Note: if you write to HDFS instead, you need to produce a fairly large amount of data before anything becomes visible, because HDFS uses a large write buffer. Also, the write*() sinks above do not participate in Flink's checkpointing; in production, use flink-connector-filesystem to write to external file systems.

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-filesystem_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

val fileSink = StreamingFileSink.forRowFormat(new Path("hdfs://CentOS:9000/bucket-results"),
    new SimpleStringEncoder[(String,Int)]("UTF-8"))
  .withBucketAssigner(new DateTimeBucketAssigner[(String, Int)]("yyyy-MM-dd")) //one bucket (sub-directory) per day
  .build()

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Write the result to HDFS
counts.addSink(fileSink)

//5. Trigger the stream job
env.execute("Window Stream WordCount")

Legacy version (BucketingSink)

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

val bucketingSink = new BucketingSink[(String,Int)]("hdfs://CentOS:9000/bucket-results")
bucketingSink.setBucketer(new DateTimeBucketer[(String,Int)]("yyyy-MM-dd")) //one bucket per day
bucketingSink.setBatchSize(1024)

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Write the result to HDFS
counts.addSink(bucketingSink)

//5. Trigger the stream job
env.execute("Window Stream WordCount")
print()/printToErr()

Prints the toString() value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.

//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Print the result to standard error with a "test" prefix, using 2 parallel sink tasks
counts.printToErr("test").setParallelism(2)

//5. Trigger the stream job
env.execute("Window Stream WordCount")

UserDefinedSinkFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.{RichSinkFunction, SinkFunction}

class UserDefinedSinkFunction extends RichSinkFunction[(String,Int)]{

  override def open(parameters: Configuration): Unit = {
    println("opening connection...")
  }

  override def invoke(value: (String, Int), context: SinkFunction.Context[_]): Unit = {
    println("output: " + value)
  }

  override def close(): Unit = {
    println("closing connection")
  }
}
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Send the result to the user-defined sink
counts.addSink(new UserDefinedSinkFunction)

//5. Trigger the stream job
env.execute("Window Stream WordCount")
RedisSink

Reference: https://bahir.apache.org/docs/flink/current/flink-streaming-redis/

<dependency>
  <groupId>org.apache.bahir</groupId>
  <artifactId>flink-connector-redis_2.11</artifactId>
  <version>1.0</version>
</dependency>
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

//Redis connection pool configuration
val flinkJedisConf = new FlinkJedisPoolConfig.Builder()
  .setHost("CentOS")
  .setPort(6379)
  .build()

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Write the result to Redis
counts.addSink(new RedisSink(flinkJedisConf, new UserDefinedRedisMapper()))

//5. Trigger the stream job
env.execute("Window Stream WordCount")


import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}

class UserDefinedRedisMapper extends RedisMapper[(String,Int)]{
  override def getCommandDescription: RedisCommandDescription = {
      new RedisCommandDescription(RedisCommand.HSET,"wordcounts")
  }

  override def getKeyFromData(data: (String, Int)): String = data._1

  override def getValueFromData(data: (String, Int)): String = data._2+""
}
Kafka Integration
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
class UserDefinedKeyedSerializationSchema extends KeyedSerializationSchema[(String,Int)]{

  override def serializeKey(element: (String, Int)): Array[Byte] = {
    element._1.getBytes()
  }

  override def serializeValue(element: (String, Int)): Array[Byte] = {
    element._2.toString.getBytes()
  }

  //Optionally override the target topic; returning null writes to the default topic given to the producer
  override def getTargetTopic(element: (String, Int)): String = {
    null
  }
}
//1. Create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)

//Kafka producer configuration
val props = new Properties()
props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "CentOS:9092")
props.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "100")
props.setProperty(ProducerConfig.LINGER_MS_CONFIG, "500")
props.setProperty(ProducerConfig.ACKS_CONFIG, "all")
props.setProperty(ProducerConfig.RETRIES_CONFIG, "2")

//2. Create the DataStream (source)
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")

//3. Apply the transformation operators
val counts = text.flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .keyBy(0)
  .sum(1)

//4. Write the result to Kafka
counts.addSink(new FlinkKafkaProducer[(String, Int)]("topic01", new UserDefinedKeyedSerializationSchema, props))

//5. Trigger the stream job
env.execute("Window Stream WordCount")