Making data written by Flink evenly distributed across multiple Kafka partitions
Subclasses of FlinkKafkaProducerBase use the default KafkaPartitioner, FixedPartitioner, which pins each parallel sink instance to a single fixed partition (so with sink parallelism 1, every record lands in partition 0). Alternatively, you can supply your own partitioner by extending KafkaPartitioner, although I find implementing one somewhat involved.
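To make the "even distribution" idea concrete, here is a minimal round-robin sketch. It mimics the open()/partition() contract of Flink's KafkaPartitioner in plain Java (no Flink dependency), so the distribution logic can be run on its own; the class names are my own illustration, not Flink API. To use this idea in Flink you would extend KafkaPartitioner and pass the instance as the customPartitioner argument of the constructor shown below.

```java
// Hypothetical illustration (not Flink API): a round-robin partitioner that
// mimics the open()/partition() contract of Flink's KafkaPartitioner in plain
// Java, so the distribution logic can be exercised without Flink on the classpath.
public class Main {
    static class RoundRobinPartitioner<T> implements java.io.Serializable {
        private int[] partitions;
        private int next;

        // Flink calls open() once per parallel sink instance, passing the
        // instance id, total parallelism, and the topic's partition ids.
        void open(int parallelInstanceId, int parallelInstances, int[] partitions) {
            if (partitions == null || partitions.length == 0) {
                throw new IllegalArgumentException("no partitions");
            }
            this.partitions = partitions;
            // Stagger the starting offset so parallel instances do not all
            // begin writing to the same partition.
            this.next = parallelInstanceId % partitions.length;
        }

        // Cycle through every partition, one record at a time.
        int partition(T record) {
            int p = partitions[next];
            next = (next + 1) % partitions.length;
            return p;
        }
    }

    public static void main(String[] args) {
        RoundRobinPartitioner<String> rr = new RoundRobinPartitioner<>();
        rr.open(0, 1, new int[]{0, 1, 2});
        for (int i = 0; i < 6; i++) {
            System.out.print(rr.partition("msg" + i) + " ");   // 0 1 2 0 1 2
        }
        System.out.println();
    }
}
```

Unlike FixedPartitioner, every partition receives records regardless of the sink parallelism.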
The two ways to construct a FlinkKafkaProducerBase subclass
public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema,
		Properties producerConfig) {
	this(topicId, new KeyedSerializationSchemaWrapper<>(serializationSchema),
			producerConfig, new FixedPartitioner<IN>());
}

public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema,
		Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
	this(topicId, new KeyedSerializationSchemaWrapper<>(serializationSchema),
			producerConfig, customPartitioner);
}
The default FixedPartitioner
public class FixedPartitioner<T> extends KafkaPartitioner<T> implements Serializable {
	private static final long serialVersionUID = 1627268846962918126L;

	private int targetPartition = -1;

	@Override
	public void open(int parallelInstanceId, int parallelInstances, int[] partitions) {
		if (parallelInstanceId < 0 || parallelInstances <= 0 || partitions.length == 0) {
			throw new IllegalArgumentException();
		}
		// Each parallel sink instance is pinned to exactly one partition.
		this.targetPartition = partitions[parallelInstanceId % partitions.length];
	}

	@Override
	public int partition(T next, byte[] serializedKey, byte[] serializedValue, int numPartitions) {
		if (targetPartition == -1) {
			throw new RuntimeException("The partitioner has not been initialized properly");
		}
		return targetPartition;
	}
}
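This is why FixedPartitioner causes skew: open() pins each parallel sink instance to one partition, so when the sink parallelism is lower than the partition count, the remaining partitions never receive data. A minimal stand-alone check of the same modulo assignment (plain Java; the demo class is my own, not Flink code):

```java
// Stand-alone check (plain Java, no Flink): replicates FixedPartitioner's
// assignment, partitions[parallelInstanceId % partitions.length].
public class Main {
    static int targetPartition(int parallelInstanceId, int[] partitions) {
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2, 3};
        // With sink parallelism 2 and 4 partitions, only partitions 0 and 1
        // ever receive data; partitions 2 and 3 stay empty.
        for (int instance = 0; instance < 2; instance++) {
            System.out.println("sink instance " + instance
                    + " -> partition " + targetPartition(instance, partitions));
        }
    }
}
```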