Kafka cluster setup, and writing a Kafka producer and consumer in Java

Reposted from: http://chengjianxiaoxue.iteye.com/blog/2190488

1 Kafka cluster setup

1. Set up a ZooKeeper ensemble on nodes 110, 111, and 112.

2. Kafka runs on the same three nodes (110, 111, 112). Edit config/server.properties:

broker.id=110
host.name=192.168.1.110
log.dirs=/usr/local/kafka_2.10-0.8.2.0/logs

Copy the file to the other two nodes, then adjust config/server.properties on each node accordingly.

3. Start the broker; run this on each of the three nodes:

bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &
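The per-node edits in step 2 can be sketched programmatically. A minimal Java sketch (the class name `BrokerConfigs` and helper `configFor` are illustrative, not from the post) that prints the three keys each node must set:

```java
public class BrokerConfigs {

    // Build the server.properties fragment for one node; broker.id and host.name
    // vary per node, while log.dirs is the same path on every node (as in the post).
    static String configFor(int node) {
        return "broker.id=" + node + "\n"
             + "host.name=192.168.1." + node + "\n"
             + "log.dirs=/usr/local/kafka_2.10-0.8.2.0/logs\n";
    }

    public static void main(String[] args) {
        for (int node : new int[] {110, 111, 112}) {
            System.out.println("# config/server.properties on 192.168.1." + node);
            System.out.print(configFor(node));
            System.out.println();
        }
    }
}
```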

4. Create a topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test

5. Describe the topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Topic:test  PartitionCount:3  ReplicationFactor:3  Configs:
    Topic: test  Partition: 0  Leader: 110  Replicas: 110,111,112  Isr: 110,111,112
    Topic: test  Partition: 1  Leader: 111  Replicas: 111,112,110  Isr: 111,112,110
    Topic: test  Partition: 2  Leader: 112  Replicas: 112,110,111  Isr: 112,110,111
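To make the --describe columns concrete (Leader is the broker currently serving the partition, Replicas lists all brokers assigned to it, and Isr is the subset that is in sync), here is a small sketch that extracts a field from one output line. The class and method names are illustrative; the sample line is copied from the output above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DescribeLineParser {

    // Pull the value following "<key>:" out of one kafka-topics.sh --describe line.
    static String field(String line, String key) {
        Matcher m = Pattern.compile(key + ":\\s*(\\S+)").matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "Topic: test Partition: 0 Leader: 110 Replicas: 110,111,112 Isr: 110,111,112";
        System.out.println("leader   = " + field(line, "Leader"));
        System.out.println("replicas = " + field(line, "Replicas"));
        System.out.println("isr      = " + field(line, "Isr"));
    }
}
```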

6. Inspect the Kafka cluster in ZooKeeper

[zk: localhost:2181(CONNECTED) 5] ls /
[admin, zookeeper, consumers, config, controller, zk-fifo, storm, brokers, controller_epoch]

[zk: localhost:2181(CONNECTED) 6] ls /brokers ----> view the Kafka brokers registered in ZooKeeper

[topics, ids]

[zk: localhost:2181(CONNECTED) 7] ls /brokers/ids

[112, 110, 111]

[zk: localhost:2181(CONNECTED) 8] ls /brokers/ids/112
[]

[zk: localhost:2181(CONNECTED) 9] ls /brokers/topics

[test]

[zk: localhost:2181(CONNECTED) 10] ls /brokers/topics/test

[partitions]

[zk: localhost:2181(CONNECTED) 11] ls /brokers/topics/test/partitions

[2, 1, 0]

[zk: localhost:2181(CONNECTED) 12]

2 Calling Kafka from Java

2.1 Producing data from Java, consuming it on the Kafka cluster

1. Create a Maven project and add the following dependency to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.0</version>
</dependency>

2. Java code: write data into the topic test

import java.util.Properties;

import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;

import kafka.producer.KeyedMessage;

import kafka.producer.ProducerConfig;

import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread{

private String topic;

public kafkaProducer(String topic){

super();

this.topic = topic;

}

@Override

public void run() {

Producer<String, String> producer = createProducer();

int i=0;

while(true){

producer.send(new KeyedMessage<String, String>(topic, "message: " + i++));

try {

TimeUnit.SECONDS.sleep(1);

} catch (InterruptedException e) {

e.printStackTrace();

}

}

}

private Producer<String, String> createProducer() {

Properties properties = new Properties();

properties.put("zookeeper.connect", "192.168.1.110:2181,192.168.1.111:2181,192.168.1.112:2181");// declare the ZooKeeper ensemble

properties.put("serializer.class", StringEncoder.class.getName());

properties.put("metadata.broker.list", "192.168.1.110:9092,192.168.1.111:9093,192.168.1.112:9094");// declare the Kafka brokers

return new Producer<String, String>(new ProducerConfig(properties));

}

public static void main(String[] args) {

new kafkaProducer("test").start();// use the topic test already created on the cluster

}

}
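Beyond serializer.class and metadata.broker.list, the 0.8.x producer accepts further tuning keys such as request.required.acks and producer.type. A minimal sketch assembling such a configuration; the key names are from the Kafka 0.8 producer configuration, so verify them against your exact version (the class `ProducerTuning` is illustrative):

```java
import java.util.Properties;

public class ProducerTuning {

    // Assemble a 0.8.x producer configuration with a few optional tuning keys.
    static Properties tunedProducerProps() {
        Properties p = new Properties();
        p.put("metadata.broker.list", "192.168.1.110:9092,192.168.1.111:9093,192.168.1.112:9094");
        p.put("serializer.class", "kafka.serializer.StringEncoder");
        p.put("request.required.acks", "1"); // wait for the partition leader to acknowledge each send
        p.put("producer.type", "async");     // buffer and send messages from a background thread
        p.put("batch.num.messages", "200");  // messages per async batch
        return p;
    }

    public static void main(String[] args) {
        tunedProducerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

These Properties would replace the ones built in createProducer() above; with "producer.type" left at its default ("sync"), each send blocks until the broker responds.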

3. Consume the topic test on the Kafka cluster:

[root@h2master kafka]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

4. Start the Java producer, then watch the console consumer print:

message: 0

message: 1

message: 2

message: 3

message: 4

message: 5

message: 6

message: 7

message: 8

message: 9

message: 10

message: 11

message: 12

message: 13

message: 14

message: 15

message: 16

message: 17

message: 18

message: 19

message: 20

message: 21

3 Writing the consumer in Java: run kafkaProducer first, then kafkaConsumer, and the consumer receives the producer's data:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

/**
 * Receives data, e.g.:
 * Received: message: 10
 * Received: message: 11
 * Received: message: 12
 * Received: message: 13
 * Received: message: 14
 *
 * @author zm
 */
public class kafkaConsumer extends Thread {

private String topic;

public kafkaConsumer(String topic) {

super();

this.topic = topic;

}

@Override

public void run() {

ConsumerConnector consumer = createConsumer();

Map<String, Integer> topicCountMap = new HashMap<String, Integer>();

topicCountMap.put(topic, 1); // request one stream (consumer thread) for this topic

Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);

KafkaStream<byte[], byte[]> stream = messageStreams.get(topic).get(0); // the single stream requested above

ConsumerIterator<byte[], byte[]> iterator = stream.iterator();

while (iterator.hasNext()) {

String message = new String(iterator.next().message());

System.out.println("Received: " + message);

}

}

private ConsumerConnector createConsumer() {

Properties properties = new Properties();

properties.put("zookeeper.connect", "192.168.1.110:2181,192.168.1.111:2181,192.168.1.112:2181");// declare the ZooKeeper ensemble

properties.put("group.id", "group1");// consumer group id; consumers sharing a group id split the topic's partitions between them

return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));

}

public static void main(String[] args) {

new kafkaConsumer("test").start();// use the topic test already created on the cluster

}

}
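createMessageStreams returns as many KafkaStreams per topic as requested in topicCountMap, and each stream's iterator blocks until data arrives, so the usual pattern is one worker thread per stream. A cluster-free sketch of that pattern, with a BlockingQueue standing in for a KafkaStream (class and method names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamWorkers {

    // Block until one message is available, like ConsumerIterator.next().
    static String takeOne(BlockingQueue<String> stream) throws InterruptedException {
        return stream.take();
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> stream = new ArrayBlockingQueue<String>(10);
        stream.put("message: 0");

        // One worker thread per stream, mirroring topicCountMap.put(topic, 1).
        Thread worker = new Thread(() -> {
            try {
                System.out.println("Received: " + takeOne(stream));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        worker.join();
    }
}
```

When stopping a real consumer, call shutdown() on the ConsumerConnector so that blocked iterators are released instead of waiting forever.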
