Kafka Java Examples

    A project required upgrading Kafka, which meant requesting a new Kafka service and writing the producer and consumer against the new Java API, so this was also a chance to get better acquainted with Kafka. The only preparation is installing ZooKeeper and Kafka; there are plenty of articles on installation, so I won't go into detail here.

1. Some Issues

1.1 Kafka Error

Error:
WARN Error while fetching metadata with correlation id 0 : {test-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Solution: https://github.com/wurstmeister/kafka-docker/issues/85

config/server.properties:

>set advertised.host.name to the broker's externally reachable IP address.
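
For reference, the change is a single line (a sketch; 10.95.177.192 is just this article's sample broker address, substitute the IP that clients actually reach), followed by a broker restart so the corrected address is advertised in the metadata clients fetch:

advertised.host.name=10.95.177.192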

1.2 Kafka Command Line
>Create a topic:
sh kafka-topics.sh --create --zookeeper 10.95.177.100:2181,10.95.97.175:2181,10.95.176.44:2181 --replication-factor 1 --partitions 1 --topic topic-test
>List topics:
sh kafka-topics.sh --list --zookeeper 10.95.177.192:2181,10.95.97.175:2181,10.95.176.44:2181
>Delete a topic (only takes effect if the brokers run with delete.topic.enable=true; otherwise the topic is merely marked for deletion):
sh kafka-topics.sh --delete --topic topic-test --zookeeper 10.95.177.192:2181,10.95.97.175:2181,10.95.176.44:2181
>Produce messages:
sh kafka-console-producer.sh --broker-list 10.95.177.192:9092,10.95.97.175:9092 --topic topic-test
>Consume messages (the new console consumer takes --bootstrap-server; the old one takes --zookeeper 10.95.177.192:2181,10.95.97.175:2181,10.95.176.44:2181 instead; the two options cannot be combined):
sh kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-test --from-beginning
>Start the Kafka server:
./kafka-server-start.sh -daemon ../config/server.properties
>Stop the Kafka server:
./kafka-server-stop.sh

1.3 Java API Versions

One is kafka.javaapi.*:

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
The other is org.apache.kafka.*:

import org.apache.kafka.clients.producer.KafkaProducer; // KafkaProducer<K,V>
Before Kafka 0.8.2, kafka.javaapi.producer.Producer was the only official Java client, and it is implemented in Scala.
Since Kafka 0.8.2 there is a new producer API, org.apache.kafka.clients.producer.KafkaProducer, implemented entirely in Java. Examples of both follow in section 2.

2. Java Kafka Examples

pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.10.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.10.2.0</version>
    </dependency>
</dependencies>
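
Note that the two dependencies are not redundant: kafka-clients provides the new pure-Java org.apache.kafka.* producer and consumer used in sections 2.3 and 2.4, while the kafka_2.11 broker artifact is what supplies the old kafka.javaapi.* classes used in sections 2.1 and 2.2.
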
2.1 Old Producer

KafkaProducerOld.java:

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * kafka.javaapi.producer.Producer
 * */
public class KafkaProducerOld {

    private Producer<String, String> producer;

    public final static String TOPIC = "didi-topic-test";

    private KafkaProducerOld() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "10.95.177.192:9092,10.95.97.175:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ProducerConfig config = new ProducerConfig(props);
        producer = new Producer<String, String>(config);
    }

    public void produce() {
        int messageNo = 0;
        final int COUNT = 10;

        while(messageNo < COUNT) {
            String data = String.format("hello kafka.javaapi.producer message %d", messageNo);

            KeyedMessage<String, String> msg = new KeyedMessage<String, String>(TOPIC, data);
            
            try {
                producer.send(msg);
            } catch (Exception e) {
                e.printStackTrace();
            }

            messageNo++;
        }

        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducerOld().produce();
    }

}
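
The old API can also route messages by key: KeyedMessage has a (topic, key, message) constructor, and the default partitioner hashes the key to pick a partition. A minimal sketch against the same TOPIC (the key format here is made up for illustration):

// messages with the same key are hashed to the same partition
KeyedMessage<String, String> keyedMsg =
        new KeyedMessage<String, String>(TOPIC, "key-" + messageNo, data);
producer.send(keyedMsg);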
2.2 Old Consumer

KafkaConsumerOld.java:

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
* Consumer: kafka.javaapi.consumer.ConsumerConnector
* */
public class KafkaConsumerOld {

    private final ConsumerConnector consumerConnector;

    public final static String TOPIC = "didi-topic-test";

    private KafkaConsumerOld() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "10.95.177.192:2181,10.95.97.175:2181,10.95.176.44:2181");
        props.put("group.id", "group-1");

        // ZooKeeper session timeout
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        // offset auto-commit interval
        props.put("auto.commit.interval.ms", "1000");
        // start from the earliest available offset when there is no committed offset
        props.put("auto.offset.reset", "smallest");
        // string decoder (the decoders are also passed explicitly to createMessageStreams below)
        props.put("serializer.class", "kafka.serializer.StringDecoder");

        ConsumerConfig config = new ConsumerConfig(props);

        consumerConnector = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    public void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(TOPIC, 1); // one stream (consumer thread) for this topic

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());

        Map<String, List<KafkaStream<String, String>>> consumerMap = consumerConnector.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);

        List<KafkaStream<String, String>> streams = consumerMap.get(TOPIC);
        for (final KafkaStream<String, String> stream : streams) {
            ConsumerIterator<String, String> it = stream.iterator();
            while (it.hasNext()) {
                System.out.println("this is kafka consumer : " + it.next().message());
            }
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerOld().consume();
    }
}
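
As written, consume() never returns, so the connector's ZooKeeper session and fetcher threads are never released. If consumption stops (for example from a shutdown hook, or after breaking out of the loop), the old API cleans up through the connector:

// commits final offsets (when auto-commit is enabled) and closes the
// ZooKeeper connection and all fetcher threads
consumerConnector.shutdown();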
2.3 New Producer

KafkaProducerNew.java:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
* producer: org.apache.kafka.clients.producer.KafkaProducer;
 * */

public class KafkaProducerNew {

    private final KafkaProducer<String, String> producer;

    public final static String TOPIC = "didi-topic-test";

    private KafkaProducerNew() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.95.177.192:9092,10.95.97.175:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        producer = new KafkaProducer<String, String>(props);
    }

    public void produce() {
        int messageNo = 1;
        final int COUNT = 10;

        while (messageNo <= COUNT) {
            String key = String.valueOf(messageNo);
            String data = String.format("hello KafkaProducer message %s", key);
            
            try {
producer.send(new ProducerRecord<String, String>(TOPIC, key, data));
            } catch (Exception e) {
                e.printStackTrace();
            }

            messageNo++;
        }
        
        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducerNew().produce();
    }

}
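
One caveat with this producer: send() is asynchronous and returns a Future, so the try/catch above only catches errors raised while the record is enqueued locally, not broker-side failures. To observe those, pass a Callback; a minimal sketch using the same record (Callback and RecordMetadata come from org.apache.kafka.clients.producer):

producer.send(new ProducerRecord<String, String>(TOPIC, key, data),
        new Callback() {
            // invoked once the broker acknowledges the record, or the send fails
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace(); // delivery failed after retries
                } else {
                    System.out.println("sent to partition " + metadata.partition()
                            + " at offset " + metadata.offset());
                }
            }
        });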
2.4 New Consumer

KafkaConsumerNew.java:

import org.apache.kafka.clients.consumer.*;

import java.util.Arrays;
import java.util.Properties;

/**
* consumer: org.apache.kafka.clients.consumer.Consumer
* */
public class KafkaConsumerNew {

    private Consumer<String, String> consumer;

    private static String group = "group-1";

    private static String TOPIC = "didi-topic-test";

    private KafkaConsumerNew() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.95.177.192:9092,10.95.97.175:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"); // auto-commit offsets
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000"); // auto-commit interval
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<String, String>(props);
    }

    private void consume() {
        consumer.subscribe(Arrays.asList(TOPIC)); // several topics can be subscribed at once as a list

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s \n", record.offset(), record.key(), record.value());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerNew().consume();
    }
}
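
A note on the auto-commit settings above: offsets can be committed before the records returned by poll() are fully processed, so a crash may silently skip messages. For at-least-once behavior, set ENABLE_AUTO_COMMIT_CONFIG to "false" and commit manually after processing; a sketch of the poll loop under that assumption:

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n",
                record.offset(), record.key(), record.value());
    }
    consumer.commitSync(); // commit only after the whole batch is processed
}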

Author: 忆之独秀

Email: leaguenew@qq.com

Source: http://blog.csdn.net/lavorange/article/details/78970977

