KafkaProducer: Examples, Interceptors, Partitioners, and Serializers

Interceptors

A typical use case for producer interceptors is logging or recording send errors.

spring:
  kafka:
    producer:
      properties:
        interceptor:
          classes: com.du.producerInterceptor.MyProducerInterceptor
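Outside Spring, the same interceptor can be registered directly on the producer's Properties. A minimal sketch, assuming the class name from the config above; the raw key "interceptor.classes" is what ProducerConfig.INTERCEPTOR_CLASSES_CONFIG resolves to, and it accepts a comma-separated list of interceptors that run in the given order:

```java
import java.util.Properties;

public class InterceptorConfigDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "interceptor.classes" is the raw key behind
        // ProducerConfig.INTERCEPTOR_CLASSES_CONFIG; multiple interceptors
        // may be listed, comma-separated, and are invoked in that order.
        props.put("interceptor.classes",
                "com.du.producerInterceptor.MyProducerInterceptor");
        System.out.println(props.getProperty("interceptor.classes"));
    }
}
```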

The interceptor implements ProducerInterceptor<String, Object>:

import java.util.Map;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class MyProducerInterceptor implements ProducerInterceptor<String, Object> {

    /**
     * Counters for successful and failed sends.
     */
    private int successCount = 0;
    private int errorCount = 0;

    /**
     * The producer guarantees this method is called before the message is
     * serialized and before its partition is computed. The record may be
     * modified here, but it is best not to change its topic or partition,
     * since that would affect the target-partition calculation.
     */
    @Override
    public ProducerRecord<String, Object> onSend(ProducerRecord<String, Object> record) {
        return record;
    }

    /**
     * Called after the message has been successfully sent from the
     * RecordAccumulator to the Kafka broker, or when the send fails --
     * normally just before the producer's own callback fires.
     * onAcknowledgement runs on the producer's I/O thread, so keep the
     * logic here lightweight, or it will slow down message sending.
     */
    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {
        log.info("interceptor onAcknowledgement {}", recordMetadata, e);

        // count successes and failures
        if (e == null) {
            successCount++;
        } else {
            errorCount++;
        }
        log.info("successCount = {} ; errorCount = {}", successCount, errorCount);
    }

    /**
     * Closes the interceptor; mainly used for resource cleanup.
     * An interceptor may run on multiple threads, so the implementation
     * must ensure its own thread safety.
     *
     * Note also that if several interceptors are configured, the producer
     * invokes them in the specified order and merely logs any exception an
     * interceptor throws instead of propagating it upward. Keep this in
     * mind when using interceptors.
     */
    @Override
    public void close() {
        log.info("interceptor close");
    }

    /**
     * Called when the interceptor receives its configuration and initializes.
     */
    @Override
    public void configure(Map<String, ?> configs) {
        log.info("interceptor configure; map = {}", configs);
    }
}

Serializers

  1. a key serializer class implementing the org.apache.kafka.common.serialization.Serializer interface
  2. a value serializer class implementing the org.apache.kafka.common.serialization.Serializer interface
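A minimal sketch of the core logic such a value serializer might contain. The User fields and the hand-built JSON string are assumptions for illustration; a real implementation would implement Serializer<User> from kafka-clients and typically delegate to a JSON library such as Jackson or Fastjson:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical value type, mirroring the User used in the examples below.
class User {
    private final String name;
    private final int score;

    User(String name, int score) {
        this.name = name;
        this.score = score;
    }

    // Hand-built JSON purely for the sketch; real code would use a JSON library.
    String toJson() {
        return "{\"name\":\"" + name + "\",\"score\":" + score + "}";
    }
}

public class UserSerializerSketch {
    // The core of what Serializer<User>.serialize(String topic, User data)
    // would do: turn the object into UTF-8 bytes. Null stays null so Kafka
    // can still send tombstone records.
    public static byte[] serialize(String topic, User data) {
        if (data == null) {
            return null;
        }
        return data.toJson().getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = serialize("du-topic", new User("du", 90));
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```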

Partitioning

Source code

KafkaProducer

private int partition(ProducerRecord<K, V> record, byte[] serializedKey, byte[] serializedValue, Cluster cluster) {
    // an explicit partition set on the record wins; otherwise delegate
    // to the configured Partitioner
    Integer partition = record.partition();
    return partition != null ? partition : partitioner.partition(
            record.topic(), record.key(), serializedKey, record.value(), serializedValue, cluster);
}

DefaultPartitioner implements Partitioner

// no key: the sticky partitioner reuses a cached partition for the topic
if (keyBytes == null) {
    return stickyPartitionCache.partition(topic, cluster);
}
// hash the keyBytes to choose a partition
return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
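The `Utils.toPositive(...) % numPartitions` step can be reproduced in plain Java: toPositive simply clears the sign bit (h & 0x7fffffff), which keeps the modulo non-negative even when the murmur2 hash comes out negative. The hash value below is made up for illustration:

```java
public class PartitionHashDemo {
    // Same trick as Kafka's Utils.toPositive: clear the sign bit so the
    // result of % is never negative.
    static int toPositive(int hash) {
        return hash & 0x7fffffff;
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        int hash = -5; // stand-in for a murmur2 hash, which may be negative
        int partition = toPositive(hash) % numPartitions;
        System.out.println("partition = " + partition);
    }
}
```

Note that Math.abs would not be a safe substitute here: Math.abs(Integer.MIN_VALUE) is still negative.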

Custom partitioner

public class SamplePartition implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        /*
            Keys look like:
            key-1
            key-2
            key-3
         */
        String keyStr = key + "";
        // take the numeric suffix after "key-"
        String keyInt = keyStr.substring(4);
        System.out.println("keyStr : " + keyStr + " ; keyInt : " + keyInt);

        int i = Integer.parseInt(keyInt);

        // even suffixes go to partition 0, odd ones to partition 1
        return i % 2;
    }

    @Override
    public void close() {

    }

    @Override
    public void configure(Map<String, ?> configs) {

    }
}
public static void producerSendWithCallbackAndPartition() {

    // NOTE: this property must be set before the KafkaProducer instance is
    // created, or the custom partitioner will not take effect.
    properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "com.du.config.SamplePartition");

    for (int i = 0; i < 10; i++) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>(TOPIC_NAME, "key-" + i, "value-" + i);
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                System.out.println(
                        "partition : " + recordMetadata.partition() + " , offset : " + recordMetadata.offset());
            }
        });
    }
    producer.close();
}
 
public class keyPartition implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitionInfos = cluster.partitionsForTopic(topic);
        int numPartitions = partitionInfos.size();

        // this partitioner requires a String key
        if (null == keyBytes || !(key instanceof String)) {
            throw new InvalidRecordException("kafka message must have a key");
        }

        if (numPartitions == 1) {
            return 0;
        }

        // reserve the last partition for the "name" key
        if (key.equals("name")) {
            return numPartitions - 1;
        }

        // spread all other keys over the remaining partitions; Utils.toPositive
        // would be safer than Math.abs here, since Math.abs(Integer.MIN_VALUE)
        // is still negative
        return Math.abs(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

KafkaProducer

Example

public class ProducerSample {

  private final static String TOPIC_NAME = "du-topic";
  static Properties properties = new Properties();

  static {
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.237.10:9092");
    properties.put(ProducerConfig.ACKS_CONFIG, "all");
    properties.put(ProducerConfig.RETRIES_CONFIG, "0");
    properties.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
    properties.put(ProducerConfig.LINGER_MS_CONFIG, "1");
    properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432");
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
  }

  static Producer<String, String> producer = new KafkaProducer<>(properties);

  /**
   * Synchronous send: dispatch asynchronously, then block on the returned Future.
   */
  @Test
  public void syncSend() throws ExecutionException, InterruptedException {
    User du = new User("du", 90);
    ProducerRecord<String, String> stringUserProducerRecord = new ProducerRecord<>("du", JSON.toJSONString(du));
    Future<RecordMetadata> send = producer.send(stringUserProducerRecord);

    // block until the send completes
    RecordMetadata recordMetadata = send.get();

    System.out.println(recordMetadata.partition());
    System.out.println(recordMetadata.offset());
    producer.close();

  }
  /**
   * Asynchronous send with a callback.
   */
  @Test
  public void asyncSendCall() throws ExecutionException, InterruptedException {
    User du = new User("du", 90);
    ProducerRecord<String, String> stringUserProducerRecord = new ProducerRecord<>("du", JSON.toJSONString(du));

    producer.send(stringUserProducerRecord, (recordMetadata, ex) -> {
      if (null == ex) {
        System.out.println("message sent successfully");
      }
    });

    producer.close();

  }
}

ProducerRecord

Each message to be sent must be wrapped in a ProducerRecord object:

User du = new User("du", 90);
ProducerRecord<String, String> stringUserProducerRecord = new ProducerRecord<>("du", JSON.toJSONString(du));

KafkaProducer

A producer object must be created to send the data:

Producer<String, String> producer = new KafkaProducer<>(properties);

Future<RecordMetadata> send = producer.send(stringUserProducerRecord);

Synchronous acknowledgment

User du = new User("du", 90);
ProducerRecord<String, String> stringUserProducerRecord = new ProducerRecord<>("du", JSON.toJSONString(du));
Future<RecordMetadata> send = producer.send(stringUserProducerRecord);

// block until the send completes
RecordMetadata recordMetadata = send.get();

System.out.println(recordMetadata.partition());
System.out.println(recordMetadata.offset());
producer.close();

Asynchronous acknowledgment

  1. The callback is invoked when the producer receives the ack, and the invocation is asynchronous. It takes two parameters, RecordMetadata and Exception: if the Exception is null, the message was sent successfully; if it is non-null, the send failed.
  2. Note: a failed send is retried automatically (subject to the retries setting), so there is no need to retry manually inside the callback.
User du = new User("du", 90);
ProducerRecord<String, String> stringUserProducerRecord = new ProducerRecord<>("du", JSON.toJSONString(du));

producer.send(stringUserProducerRecord, (recordMetadata, ex) -> {
  if (null == ex) {
    System.out.println("message sent successfully");
  }
});

producer.close();