Before you begin, make sure the JDK is installed; that process is not covered here.
1. ZooKeeper cluster installation and configuration (using a 3-node cluster as the example; the default install directory is /usr/local)
Start ZooKeeper: /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
Stop ZooKeeper: /usr/local/zookeeper-3.4.6/bin/zkServer.sh stop
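The commands above assume each of the three nodes already has an ensemble configuration in conf/zoo.cfg. A minimal sketch (the IPs, ports, and dataDir below are illustrative placeholders, not taken from this document):

```
# conf/zoo.cfg -- minimal 3-node ensemble (illustrative values)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=ip1:2888:3888
server.2=ip2:2888:3888
server.3=ip3:2888:3888
```

Each node additionally needs a myid file under dataDir containing that node's own id (1, 2, or 3) matching its server.N line.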
2. Kafka installation and configuration (using a 3-node cluster as the example; the default install directory is /usr/local)
Start Kafka: /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
Stop Kafka: /usr/local/kafka/bin/kafka-server-stop.sh
Note: after starting the services, run jps to check that the ZooKeeper and Kafka processes came up correctly:
25398 Kafka
637 QuorumPeerMain
II. Common Kafka topic operations
Create a topic (--zookeeper takes the ZooKeeper cluster; test is the topic name): /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper ip1:2181,ip2:2181,ip3:2181 --replication-factor 2 --partitions 20 --topic test
Send messages with the console producer (--broker-list takes the Kafka cluster): /usr/local/kafka/bin/kafka-console-producer.sh --broker-list ip1:9092,ip2:9092,ip3:9092 --topic test
Receive messages with the console consumer: /usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper ip1:2181,ip2:2181,ip3:2181 --topic test --from-beginning
List existing topics: /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper ip1:2181,ip2:2181,ip3:2181
Delete a topic: /usr/local/kafka/bin/kafka-topics.sh --delete --zookeeper ip1:2181,ip2:2181,ip3:2181 --topic test
Unless delete.topic.enable=true is set in server.properties, this command does not actually remove the topic; it only marks it for deletion. To remove it completely, start the ZooKeeper client and delete the topic's znodes, roughly as follows:
Start the client: /usr/local/zookeeper-3.4.6/bin/zkCli.sh -server ip1:2181
Delete the znodes: rmr /brokers/topics/test and rmr /admin/delete_topics/test (then clean up the topic's log directories on each broker's disk)
III. Notes on the production Kafka configuration (server.properties)
# unique integer id for this broker
broker.id=1921685162
host.name=
advertised.host.name=ip
log.dirs=/data/kafka
# keep data for 3 days
log.retention.hours=72
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=9000
# sync interval between the leader and followers in the ZooKeeper ensemble
zookeeper.sync.time.ms=3000
zookeeper.connect=ip1:2181,ip2:2181,ip3:2181
# number of threads used to replicate from partition leaders; raising this increases follower I/O
num.replica.fetchers=4
# maximum time a replica fetch request waits on the leader; failed fetches are retried
replica.fetch.wait.max.ms=500
# socket timeout for follower-to-leader requests
replica.socket.timeout.ms=30000
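One caveat when editing server.properties: Kafka reads it with java.util.Properties, which treats # as a comment marker only at the start of a line. A # placed after a value becomes part of the value, so explanatory comments must sit on their own lines. A minimal sketch illustrating this (the property name is taken from the listing above):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsCommentDemo {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // The full-line comment is ignored, but "#" after the value is NOT:
        p.load(new StringReader("# retention in hours\nlog.retention.hours=72 # three days\n"));
        // prints "72 # three days" -- the inline comment contaminated the value
        System.out.println(p.getProperty("log.retention.hours"));
    }
}
```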
IV. Using the Kafka APIs (the examples below use the old 0.8.x clients)
Producer example:
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
// kept from the original config; the 0.8 producer actually bootstraps from
// metadata.broker.list and does not need ZooKeeper
props.put("zk.connect", "ip1:2181,ip2:2181,ip3:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("metadata.broker.list", "ip1:9092,ip2:9092,ip3:9092");
props.put("producer.type", "async"); // batch sends on a background thread
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);
producer.send(new KeyedMessage<String, String>("clicki_test_topic", "aaaaaaaaaaa"));
producer.close();
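The send above carries no message key, in which case the 0.8 producer picks a partition and periodically rotates it. When a key is supplied (new KeyedMessage<>(topic, key, message)), the default partitioner routes by key hash so the same key always lands on the same partition. A rough, simplified sketch of that idea (not the exact Kafka class; the key name is made up):

```java
public class KeyPartitionDemo {
    // Simplified hash-based partitioning: non-negative hash modulo partition count.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // With the 20 partitions created earlier, a given key is stable across sends.
        int p1 = partitionFor("user-42", 20);
        int p2 = partitionFor("user-42", 20);
        System.out.println(p1 == p2 && p1 >= 0 && p1 < 20); // prints "true"
    }
}
```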
Consumer example:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerTest extends Thread {
    private final ConsumerConnector consumer;
    private final String topic;

    public static void main(String[] args) {
        ConsumerTest consumerThread = new ConsumerTest("clicki_test_topic");
        consumerThread.start();
    }

    public ConsumerTest(String topic) {
        consumer = kafka.consumer.Consumer
                .createJavaConsumerConnector(createConsumerConfig());
        this.topic = topic;
    }

    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "ip1:2181,ip2:2181,ip3:2181");
        props.put("group.id", "0");
        props.put("zookeeper.session.timeout.ms", "400000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000"); // commit offsets once per second
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        // Ask for one stream (i.e. one consuming thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, Integer.valueOf(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
                .createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        // Blocks waiting for messages; prints each payload as a string.
        while (it.hasNext())
            System.out.println(new String(it.next().message()));
    }
}