Publishing and Consuming Messages with Kafka

1. Introduction

Some of Kafka's characteristics differ noticeably from RabbitMQ's, and the differences become more apparent once you actually use it.

2. Sending and Receiving Messages with Kafka

a. Sending messages

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;

public class KafkaProducerService implements Runnable {
    private static final Logger log = LoggerFactory.getLogger(KafkaProducerService.class);

    private final KafkaProducer<String, String> producer;
    private final String topic;

    public KafkaProducerService(String topic) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "43.142.137.160:9092");
        properties.put("acks", "all");        // wait for all in-sync replicas to acknowledge
        properties.put("retries", 0);         // do not retry failed sends
        properties.put("batch.size", 16384);  // per-partition batch size in bytes
        properties.put("key.serializer", StringSerializer.class.getName());
        properties.put("value.serializer", StringSerializer.class.getName());
        this.topic = topic;
        this.producer = new KafkaProducer<>(properties);
    }

    @Override
    public void run() {
        try {
            for (int msgNo = 1; msgNo <= 1000; msgNo++) {
                String msg = "[" + msgNo + "]:hello, boys!";
                // Every record uses the fixed key "message"
                producer.send(new ProducerRecord<>(topic, "message", msg));
                System.out.println("Sent message #" + msgNo);
            }
        } catch (Exception e) {
            log.info("Failed to send message!", e);
        } finally {
            producer.close();
        }
    }

    public static void main(String[] args) {
        KafkaProducerService service = new KafkaProducerService(KafkaConstant.KAFKA_TOPIC);
        new Thread(service).start();
    }
}
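Because every record above carries the same fixed key "message", all of them land in the same partition. Kafka's default partitioner actually hashes the key with murmur2; the toy class below (an illustration only, not Kafka code, using plain `hashCode` instead of murmur2) shows the underlying idea: partition = hash(key) mod numPartitions, so a fixed key always maps to the same partition.

```java
// Simplified sketch of key-based partitioning (NOT Kafka's real partitioner,
// which uses murmur2): the same key always hashes to the same partition.
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the modulo result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int first = partitionFor("message", 3);
        int second = partitionFor("message", 3);
        System.out.println(first == second); // same key, same partition
    }
}
```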

A connection error occurred when publishing messages from the local machine:

[2023-02-16 22:44:23] [WARN ] o.a.k.c.NetworkClient:982 - [Producer clientId=producer-1] Error connecting to node VM-4-16-centos:9092 (id: 0 rack: null)
java.net.UnknownHostException: VM-4-16-centos
	at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
	at java.net.InetAddress.getAllByName(InetAddress.java:1193)
	at java.net.InetAddress.getAllByName(InetAddress.java:1127)
	at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
	at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:109)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:508)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:465)
	at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:170)
	at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:975)
	at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
	at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:354)
	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:256)
	at java.lang.Thread.run(Thread.java:748)

This happens because the node information returned by the cluster's metadata is the broker's hostname, so the local machine must be able to resolve that name before it can publish. For example, add a hosts-file entry:

***.142.137.***	VM-4-16-centos

With that entry in place, messages publish normally.
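Alternatively (assuming you can edit the broker's configuration and that the public IP is stable), the broker itself can advertise a resolvable address instead of its hostname via the standard `advertised.listeners` setting in `server.properties`, which avoids touching every client's hosts file:

```properties
# server.properties on the broker (sketch): bind on all interfaces,
# but advertise the public IP so clients never see the hostname VM-4-16-centos
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://43.142.137.160:9092
```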

b. Receiving messages

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerService implements Runnable {
    private static final String GROUP_ID = "groupA";

    private final KafkaConsumer<String, String> consumer;
    private final String topic;

    public KafkaConsumerService(String topic) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "43.142.137.160:9092");
        properties.put("group.id", GROUP_ID);
        properties.put("enable.auto.commit", "true");       // commit offsets automatically
        properties.put("auto.commit.interval.ms", "1000");  // commit once per second
        properties.put("session.timeout.ms", "30000");
        properties.put("auto.offset.reset", "earliest");    // start from the beginning when no committed offset exists
        properties.put("key.deserializer", StringDeserializer.class.getName());
        properties.put("value.deserializer", StringDeserializer.class.getName());
        this.consumer = new KafkaConsumer<>(properties);
        this.topic = topic;
        this.consumer.subscribe(Collections.singletonList(topic));
    }

    @Override
    public void run() {
        try {
            // A single poll for this demo; real consumers call poll() in a loop
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            int countNo = 0;
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Message #" + (countNo++) + ": key: " + record.key()
                        + ", value: " + record.value() + ", offset: " + record.offset());
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }

    public static void main(String[] args) {
        KafkaConsumerService consumerService = new KafkaConsumerService(KafkaConstant.KAFKA_TOPIC);
        new Thread(consumerService).start();
    }
}

You can publish two different batches of messages and then consume them multiple times on the receiving side. One consumption run printed:

Message #0: key: message, value: [1000]:hello, boys!, offset: 2000
Message #1: key: message, value: [1]:hello, boys!, offset: 2001
Message #2: key: message, value: [2]:hello, boys!, offset: 2002
Message #3: key: message, value: [3]:hello, boys!, offset: 2003
Message #4: key: message, value: [4]:hello, boys!, offset: 2004
Message #5: key: message, value: [5]:hello, boys!, offset: 2005
Message #6: key: message, value: [6]:hello, boys!, offset: 2006
Message #7: key: message, value: [7]:hello, boys!, offset: 2007
Message #8: key: message, value: [8]:hello, boys!, offset: 2008
Message #9: key: message, value: [9]:hello, boys!, offset: 2009
Message #10: key: message, value: [10]:hello, boys!, offset: 2010
Message #11: key: message, value: [11]:hello, boys!, offset: 2011
Message #12: key: message, value: [12]:hello, boys!, offset: 2012
Message #13: key: message, value: [13]:hello, boys!, offset: 2013
Message #14: key: message, value: [14]:hello, boys!, offset: 2014
Message #15: key: message, value: [15]:hello, boys!, offset: 2015
Message #16: key: message, value: [16]:hello, boys!, offset: 2016
....

The next consumption run:

Message #0: key: message, value: [500]:hello, boys!, offset: 2500
Message #1: key: message, value: [501]:hello, boys!, offset: 2501
Message #2: key: message, value: [502]:hello, boys!, offset: 2502
Message #3: key: message, value: [503]:hello, boys!, offset: 2503
Message #4: key: message, value: [504]:hello, boys!, offset: 2504
Message #5: key: message, value: [505]:hello, boys!, offset: 2505
Message #6: key: message, value: [506]:hello, boys!, offset: 2506
Message #7: key: message, value: [507]:hello, boys!, offset: 2507
Message #8: key: message, value: [508]:hello, boys!, offset: 2508

As the output shows, consumption proceeds by offset, and after reading through the messages once you can traverse and consume them again. Consumed messages are not deleted; retention is configurable instead, for example automatic deletion after one day.

Next up: studying some of Kafka's design ideas.
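The behavior above can be pictured with a toy in-memory model (an illustration only, not Kafka code): an append-only log assigns each record an offset, reading never removes a record, and a second pass simply starts again from offset 0.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of an append-only log: appends assign increasing offsets,
// reads never delete, so the log can be consumed repeatedly.
public class LogSketch {
    private final List<String> records = new ArrayList<>();

    int append(String value) {   // returns the offset of the new record
        records.add(value);
        return records.size() - 1;
    }

    String read(int offset) {    // reading does not remove the record
        return records.get(offset);
    }

    int endOffset() {            // one past the last record, like Kafka's log end offset
        return records.size();
    }

    public static void main(String[] args) {
        LogSketch log = new LogSketch();
        for (int i = 1; i <= 3; i++) {
            log.append("[" + i + "]:hello, boys!");
        }
        // First consumption pass
        for (int offset = 0; offset < log.endOffset(); offset++) {
            System.out.println("pass 1, offset " + offset + ": " + log.read(offset));
        }
        // Records are still present, so a second pass re-reads the same data
        for (int offset = 0; offset < log.endOffset(); offset++) {
            System.out.println("pass 2, offset " + offset + ": " + log.read(offset));
        }
    }
}
```

In real Kafka the "second pass" is driven either by a different consumer group (with its own committed offsets) or by seeking an existing consumer back to an earlier offset; deletion only happens via the retention settings.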
