Flink: consuming Kafka data and writing it to HDFS in Parquet format

When learning a technology, the official documentation is always the most authoritative, first-hand resource, especially for a well-known project. So let's walk through the relevant parts of the Flink documentation together.

Flink website: https://flink.apache.org/

On the left side of the site, pick a version under Documentation; the current one is 1.11. To read the docs for an earlier version, simply change the version number in the URL.

Here we will use the 1.10 documentation to work out how to write data to HDFS in Parquet format. In the docs, there are two places we mainly care about:

Opening the Hadoop FileSystem connector page, the first thing you see is this notice:

The BucketingSink has been deprecated since Flink 1.9 and will be removed in subsequent releases. Please use the StreamingFileSink instead.

Since that's the case, let's go straight to the Streaming File Sink (someday we'll do a separate write-up comparing Flink's various sink connectors). Ahem, from here on this is largely my translation of the official docs.

1. StreamingFileSink overview

The StreamingFileSink connector provides a sink that writes partitioned data into file systems supported by the Flink FileSystem abstraction.

A quick aside: in Flink, sources and sinks are collectively called connectors. A source is where data enters a job (the entry point), and a sink is where it leaves (the exit). If you know Flume, these concepts will feel familiar.
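Purely as a schematic illustration (kafkaSource and fileSink below are placeholders, not variables defined in this article):

// Schematic only: kafkaSource and fileSink stand for concrete connector instances,
// e.g. a FlinkKafkaConsumer and a StreamingFileSink.
DataStream<String> stream = env.addSource(kafkaSource); // the source is where data enters the job
stream.addSink(fileSink);                               // the sink is where data leaves the job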

StreamingFileSink has the following characteristics:

  • By default it uses a time-based bucketing policy: the unbounded stream is written into a new bucket every hour;
  • Within each bucket, data is organized into finite-size part files; different rolling policies can be configured, and by default part files are rolled based on size.

Checkpointing must be enabled when using StreamingFileSink. Part files are only finalized on successful checkpoints; if checkpointing is disabled, part files stay in the "in-progress" or "pending" state forever and cannot be safely read by downstream systems. See the JavaDoc for StreamingFileSink for details.
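The checkpoint requirement is easy to overlook, so here is a minimal sketch of enabling checkpointing before building the sink (the 60-second interval and exactly-once mode are illustrative choices, not recommendations from the docs):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Pending part files are only finalized on successful checkpoints,
// so checkpointing has to be enabled for the sink to produce readable output.
env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE); // 60 s is an arbitrary example interval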

2. File Formats:

StreamingFileSink supports two kinds of formats: row-encoded and bulk-encoded (e.g. Parquet):

  • Row-encoded sink:StreamingFileSink.forRowFormat(basePath, rowEncoder)
  • Bulk-encoded sink:StreamingFileSink.forBulkFormat(basePath, bulkWriterFactory)

When creating either kind of sink, you specify the base path where the files are written and the encoding logic for the data.

2.1)Row-Encoded Formats:

A row-encoded format requires an Encoder, which serializes individual records to the OutputStream of the in-progress part file.

import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

DataStream<String> input = ...;

final StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
            .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
            .withMaxPartSize(1024 * 1024 * 1024)
            .build())
    .build();

input.addSink(sink);

This example uses the default bucketing logic (bucket by time) and sets the default rolling policy, which rolls the in-progress part file when any of the following three conditions is met:

  • It contains at least 15 minutes worth of data
  • It hasn’t received new records for the last 5 minutes
  • The file size reached 1 GB (after writing the last record)

2.2) Bulk-Encoded Formats:

Similar to row-encoded formats, you specify a base path, but instead of an Encoder you specify a BulkWriter.Factory. The BulkWriter logic defines how new elements are added and flushed.

Flink ships with several built-in BulkWriter factories; the two we will look at here are the Parquet and Hadoop SequenceFile writers.

Important: bulk-encoded formats can only be combined with OnCheckpointRollingPolicy, which rolls the in-progress part file on every checkpoint (and only then). Columnar formats cannot be truncated to an arbitrary file offset on recovery, so the file must be rolled at each checkpoint.

1)Parquet format:

Flink provides convenience methods for creating Parquet writer factories for Avro data; these methods and their documentation can be found in the ParquetAvroWriters class.

To write other Parquet-compatible data formats, you create a ParquetWriterFactory with a custom implementation of the ParquetBuilder interface.
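As an illustration of that mechanism, the sketch below assembles a ParquetWriterFactory from a ParquetBuilder lambda. It mirrors what ParquetAvroWriters does internally for Avro GenericRecords, with Snappy compression added as an example tweak; the class name SnappyParquetAvroWriters is made up, and AvroParquetWriter / CompressionCodecName come from the Parquet libraries (you may need to add parquet-avro explicitly, depending on your build):

import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.parquet.ParquetBuilder;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.io.OutputFile;

public class SnappyParquetAvroWriters {

    // Factory for writers that encode Avro GenericRecords as Snappy-compressed Parquet.
    public static ParquetWriterFactory<GenericRecord> forGenericRecord(Schema schema) {
        // carry the schema as a String, mirroring ParquetAvroWriters (Schema is not reliably serializable)
        final String schemaString = schema.toString();
        final ParquetBuilder<GenericRecord> builder = out -> createWriter(schemaString, out);
        return new ParquetWriterFactory<>(builder);
    }

    private static ParquetWriter<GenericRecord> createWriter(String schemaString, OutputFile out)
            throws IOException {
        return AvroParquetWriter.<GenericRecord>builder(out)
            .withSchema(new Schema.Parser().parse(schemaString))
            .withDataModel(GenericData.get())
            .withCompressionCodec(CompressionCodecName.SNAPPY)
            .build();
    }
}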

Let's look at an example. First, add the dependency:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-parquet_2.11</artifactId>
  <version>1.10.0</version>
</dependency>

Writing data as Parquet according to an explicit Avro schema:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

Schema schema = ...;
DataStream<GenericRecord> stream = ...;

final StreamingFileSink<GenericRecord> sink = StreamingFileSink
    .forBulkFormat(outputBasePath, ParquetAvroWriters.forGenericRecord(schema))
    .build();

stream.addSink(sink);

Besides the explicit-schema approach, the Parquet writer can also be created via reflection. For example:

DataStream<MyBean> stream = ...;

final StreamingFileSink<MyBean> sink = StreamingFileSink
    .forBulkFormat(outputBasePath, ParquetAvroWriters.forReflectRecord(MyBean.class))
    .build();

stream.addSink(sink);
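The docs don't show MyBean; any plain Java class works with forReflectRecord, since Avro reflection derives the Parquet schema from its fields. A hypothetical minimal version:

// Hypothetical POJO for ParquetAvroWriters.forReflectRecord(MyBean.class);
// Avro reflection derives the Parquet schema from the field names and types.
public class MyBean {
    private String userId;
    private long eventTime;
    private double amount;

    public MyBean() {} // no-arg constructor required by Avro reflection

    // getters and setters omitted for brevity
}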

2) Hadoop SequenceFile format:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sequence-file</artifactId>
  <version>1.10.0</version>
</dependency>

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.formats.sequencefile.SequenceFileWriterFactory;
import org.apache.flink.runtime.util.HadoopUtils;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

DataStream<Tuple2<LongWritable, Text>> input = ...;
Configuration hadoopConf = HadoopUtils.getHadoopConfiguration(GlobalConfiguration.loadConfiguration());

final StreamingFileSink<Tuple2<LongWritable, Text>> sink = StreamingFileSink
  .forBulkFormat(
    outputBasePath,
    new SequenceFileWriterFactory<>(hadoopConf, LongWritable.class, Text.class))
  .build();

input.addSink(sink);

3. Bucket Assignment (bucketing strategy):

The bucketing logic defines how data is organized into sub-directories under the base path. Both row-encoded and bulk-encoded formats use DateTimeBucketAssigner as the default assigner. By default, DateTimeBucketAssigner creates hourly buckets using the format yyyy-MM-dd--HH, based on the system's default time zone. Both the date format and the time zone can be configured manually.
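For example, a sketch of daily buckets rendered in UTC (outputPath is the same placeholder used in the earlier row-format example):

import java.time.ZoneId;

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

// Daily buckets ("yyyy-MM-dd") evaluated in UTC instead of the system default time zone.
final StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
    .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy-MM-dd", ZoneId.of("UTC")))
    .build();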

Flink ships with two built-in BucketAssigners: DateTimeBucketAssigner (the time-based default) and BasePathBucketAssigner (which writes all part files directly under the base path, i.e. a single bucket).

Beyond these, you can implement the BucketAssigner interface yourself to define a custom bucketing strategy, as sketched below.
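A minimal sketch of such a custom assigner; MyEvent and getEventType() are hypothetical, and the returned string becomes the bucket sub-directory under the base path:

import org.apache.flink.core.io.SimpleVersionedSerializer;
import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;

// Buckets records by a field of the element instead of by time.
public class EventTypeBucketAssigner implements BucketAssigner<MyEvent, String> {

    private static final long serialVersionUID = 1L;

    @Override
    public String getBucketId(MyEvent element, Context context) {
        // the returned string becomes the sub-directory under the base path,
        // e.g. <basePath>/type=click/part-0-0
        return "type=" + element.getEventType();
    }

    @Override
    public SimpleVersionedSerializer<String> getSerializer() {
        // String bucket ids can reuse Flink's built-in serializer
        return SimpleVersionedStringSerializer.INSTANCE;
    }
}

It would then be plugged in with .withBucketAssigner(new EventTypeBucketAssigner()).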

4. Rolling Policy:

The RollingPolicy defines when a given in-progress part file is closed and moved to the pending state, and later to the finished state. Part files in the finished state are safe to read and are guaranteed to contain valid data that will not be reverted if a failure occurs.

Combined with the checkpoint interval (pending files are finalized on the next checkpoint), the rolling policy controls how quickly part files become available to downstream readers, as well as their size and number. Flink has two built-in rolling policies:

  • DefaultRollingPolicy: rolls the file when it exceeds the maximum size (128 MB by default), when the rollover interval is exceeded (60 seconds by default), or when no data has been written for longer than the inactivity timeout (60 seconds by default);
  • OnCheckpointRollingPolicy: rolls the file on every checkpoint.

5. Part file lifecycle:

5.1) A part file can be in one of three states:

  • In-progress : The part file that is currently being written to is in-progress
  • Pending : Closed (due to the specified rolling policy) in-progress files that are waiting to be committed
  • Finished : On successful checkpoints pending files transition to “Finished”

For an active bucket, each writer subtask has exactly one in-progress part file at any given time, but there can be several pending and finished files.

Note: downstream systems should only read data from finished part files, since those are guaranteed not to be modified later.

Important: for any given subtask, part file indices increase strictly in creation order, but they are not necessarily consecutive. When the job is restarted, the next part index for all subtasks will be "max part index + 1", where "max" is computed across all subtasks.

5.2) Example:
Suppose the sink has two subtasks:

└── 2019-08-25--12
    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
    └── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575

When part-1-0 is rolled (say because it has grown too large), it becomes pending but is not renamed yet. The sink then opens a new part file, part-1-1:

└── 2019-08-25--12
    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
    ├── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575
    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11

part-1-0 is now pending completion, so after the next successful checkpoint it is finalized:

└── 2019-08-25--12
    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
    ├── part-1-0
    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11

When a new bucket is created according to the bucketing policy, the currently in-progress files are not affected:

└── 2019-08-25--12
    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
    ├── part-1-0
    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11
└── 2019-08-25--13
    └── part-0-2.inprogress.2b475fec-1482-4dea-9946-eb4353b475f1

Since the bucketing policy is evaluated per record, old buckets can still receive new records.

6. Part file configuration:

Finished files can be distinguished from in-progress files by their names. The defaults are:

  • In-progress / Pending:part-<subtaskIndex>-<partFileIndex>.inprogress.uid
  • Finished:part-<subtaskIndex>-<partFileIndex>

Flink lets you specify a prefix and/or suffix for the part files via OutputFileConfig. For example, with prefix "prefix" and suffix ".ext", the sink creates files like:

└── 2019-08-25--12
    ├── prefix-0-0.ext
    ├── prefix-0-1.ext.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
    ├── prefix-1-0.ext
    └── prefix-1-1.ext.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11

The corresponding sink configuration (KeyBucketAssigner here stands for a custom BucketAssigner, such as the one sketched in section 3):

OutputFileConfig config = OutputFileConfig
    .builder()
    .withPartPrefix("prefix")
    .withPartSuffix(".ext")
    .build();

StreamingFileSink<Tuple2<Integer, Integer>> sink = StreamingFileSink
    .forRowFormat(new Path(outputPath), new SimpleStringEncoder<>("UTF-8"))
    .withBucketAssigner(new KeyBucketAssigner())
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .withOutputFileConfig(config)
    .build();

Alright, after all this translation we have to finish with a complete example (a technical write-up without an example is just hand-waving):

import java.io.IOException;
import java.util.Properties;

import org.apache.commons.codec.binary.Base64;
import org.apache.commons.lang3.StringUtils;
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

/**
 * Consume from Kafka and write the data to HDFS in Parquet format.
 * @author kevinliu
 * 2020-07-10
 */
public class App {
	
	private static final String kafkaTopic = "test";
	
	public static void main(String[] args) throws Exception {
		ParameterTool parameter = ParameterTool.fromArgs(args);
		String rootPath = parameter.get("rootPath", "hdfs://abc/data/test");
		long checkpointInterval = parameter.getLong("checkpointInterval", 5 * 60000);

		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		// part files are only finalized on checkpoints, so checkpointing must be enabled
		env.enableCheckpointing(checkpointInterval);

		// Kafka consumer configuration
		Properties properties = new Properties();
		properties.setProperty("bootstrap.servers", "10.19.80.82:9092");
		properties.setProperty("zookeeper.connect", "10.19.80.82:2181");
		properties.setProperty("group.id", "test");
        
		// Kafka source: pass the raw message bytes through unchanged
		DataStream<byte[]> stream = env.addSource(new FlinkKafkaConsumer<>(kafkaTopic, new AbstractDeserializationSchema<byte[]>() {
			private static final long serialVersionUID = 1L;
			@Override
			public byte[] deserialize(byte[] bytes) throws IOException {
				return bytes;
			}
		}, properties))
		.rebalance()//.setParallelism(50)
		.filter(new FilterFunction<byte[]>() {
			private static final long serialVersionUID = 1L;
			@Override
			public boolean filter(byte[] value) throws Exception {
				return value != null; // drop empty messages
			}
		});
        
		// parse the raw bytes into Mybean objects and drop records that failed to parse
		DataStream<Mybean> parseStream = stream.map(new MapFunction<byte[], Mybean>() {
			private static final long serialVersionUID = 1L;
			@Override
			public Mybean map(byte[] value) throws Exception {
				return parseMsg(value);
			}
		}).filter(new FilterFunction<Mybean>() {
			private static final long serialVersionUID = 1L;
			@Override
			public boolean filter(Mybean value) throws Exception {
				return value != null;
			}
		});

		DataStream<Mybean> stream2 = parseStream.map(new MapFunction<Mybean, Mybean>() {
			private static final long serialVersionUID = 1L;
			@Override
			public Mybean map(Mybean value) throws Exception {
				generateSchema(value);
				return value;
			}
		});
        

        // write to HDFS as Parquet, bucketed by time, rolling on every checkpoint
        StreamingFileSink<Mybean> stream2Sink = StreamingFileSink
                .forBulkFormat(new Path(rootPath), ParquetAvroWriters.forReflectRecord(Mybean.class))
                .withBucketAssigner(new DateTimeBucketAssigner<>())
                .withRollingPolicy(OnCheckpointRollingPolicy.build())
                .build();
        stream2.addSink(stream2Sink);

        env.execute("stream started...");
	}
}
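The example above calls parseMsg and generateSchema, which (as in the original) are not shown here. Purely for illustration, if the Kafka payload were UTF-8 JSON matching Mybean's fields, parseMsg could be a Jackson-based helper like the following hypothetical sketch (Jackson is not among the dependencies listed above):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.charset.StandardCharsets;

// Hypothetical helper inside App: turns a raw Kafka message into a Mybean,
// returning null on malformed input so the downstream filter drops it.
private static final ObjectMapper MAPPER = new ObjectMapper();

private static Mybean parseMsg(byte[] value) {
    try {
        return MAPPER.readValue(new String(value, StandardCharsets.UTF_8), Mybean.class);
    } catch (Exception e) {
        return null;
    }
}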

 
