Kafka Study Notes, Part 1: Installing Kafka on CentOS 7, with Producer and Consumer Examples

1. Environment

CentOS 7 (VM) + JDK 1.8 + ZooKeeper 3.5.5 + Kafka 2.3.1 (Scala 2.11 build)

Download and extract JDK 8. The steps below assume it is installed at /usr/local/java/jdk1.8.0_231.

2. Installing ZooKeeper

Download and extract ZooKeeper:

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
tar -zxf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin /usr/local/zookeeper

Configuration

Create the data directory:

mkdir -p /var/lib/zookeeper

Create zoo.cfg with a minimal set of parameters:

cat > /usr/local/zookeeper/conf/zoo.cfg << EOF
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
EOF

Start ZooKeeper:

export JAVA_HOME=/usr/local/java/jdk1.8.0_231
/usr/local/zookeeper/bin/zkServer.sh start
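To confirm that ZooKeeper actually came up before moving on to Kafka, its own status command can be used (a quick check, assuming the /usr/local/zookeeper layout above):

```shell
# Reports the server state; a healthy single-node install prints "Mode: standalone"
/usr/local/zookeeper/bin/zkServer.sh status
```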

3. Installing Kafka

Download the desired version from the official downloads page, http://kafka.apache.org/downloads.html; these notes use kafka_2.11-2.3.1.

Extract and install:

tar -zxf kafka_2.11-2.3.1.tgz
mv kafka_2.11-2.3.1 /usr/local/kafka
mkdir /tmp/kafka-logs

Start the broker:

export JAVA_HOME=/usr/local/java/jdk1.8.0_231
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

To watch the startup output, drop the -daemon flag; without it, however, Kafka will also exit when the terminal that launched it is closed.

Shutting down

To stop a running broker:

/usr/local/kafka/bin/kafka-server-stop.sh
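The examples below produce to and consume from a topic named test.topic. Kafka 2.3 auto-creates topics on first use by default, but creating the topic explicitly makes the partition and replication settings predictable. A sketch, assuming the single-broker layout above:

```shell
# Create the topic used by the examples (1 partition, replication factor 1)
/usr/local/kafka/bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 \
  --topic test.topic

# Confirm it exists
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
```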

4. Producer Example

Create a new Maven project and add the following dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
</dependency>

Create the producer:

import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MessageProducer {

    private static Properties kafkaProps;
    private static Producer<String, String> kafkaProducer;

    static {
        kafkaProps = new Properties();
        kafkaProps.put("bootstrap.servers", "192.168.254.131:9092");
        kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProducer = new KafkaProducer<>(kafkaProps);
    }

    /**
     * 1. Fire-and-forget (do not care whether the message arrives)
     */
    public void sendMsgAndForget(ProducerRecord<String, String> producerRecord) {
        kafkaProducer.send(producerRecord);
    }

    /**
     * 2. Synchronous send (block on the returned Future)
     */
    public RecordMetadata sendSynMsg(ProducerRecord<String, String> producerRecord)
            throws ExecutionException, InterruptedException {
        return kafkaProducer.send(producerRecord).get();
    }

    /**
     * 3. Asynchronous send (the callback is invoked when the broker responds)
     */
    public void sendAsynMsg(ProducerRecord<String, String> producerRecord) {
        kafkaProducer.send(producerRecord, new ProducerCallback());
    }

    public static void main(String[] args) {
        MessageProducer messageProducer = new MessageProducer();
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            executorService.submit(new Runnable() {
                @Override
                public void run() {
                    Random random = new Random();
                    while (true) {
                        // Pick one of the three send styles at random
                        int randNum = random.nextInt(3) + 1;
                        ProducerRecord<String, String> record;
                        switch (randNum) {
                            case 1:
                                record = new ProducerRecord<>("test.topic", "smaf", "send and forget");
                                messageProducer.sendMsgAndForget(record);
                                break;
                            case 2:
                                record = new ProducerRecord<>("test.topic", "send", "send");
                                try {
                                    messageProducer.sendSynMsg(record);
                                } catch (ExecutionException | InterruptedException e) {
                                    e.printStackTrace();
                                }
                                break;
                            case 3:
                                record = new ProducerRecord<>("test.topic", "sendAsyn", "send asyn");
                                messageProducer.sendAsynMsg(record);
                                break;
                        }
                        try {
                            Thread.sleep(1000);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
        }
    }
}

/**
 * Callback for asynchronous sends
 */
class ProducerCallback implements Callback {
    @Override
    public void onCompletion(RecordMetadata recordMetadata, Exception e) {
        // Callback handling logic
        if (null != e) {
            e.printStackTrace();
        }
    }
}

5. Consumer Example

Create the consumer:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MessageConsumer {

    private static Properties kafkaProps;
    private static Consumer<String, String> kafkaConsumer;

    static {
        kafkaProps = new Properties();
        kafkaProps.put("bootstrap.servers", "192.168.254.131:9092");
        kafkaProps.put("group.id", "testGroup");
        kafkaProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaConsumer = new KafkaConsumer<>(kafkaProps);
    }

    public void consumeMessage(String topic) {
        kafkaConsumer.subscribe(Collections.singletonList(topic));
        Duration duration = Duration.ofSeconds(10L);
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(duration);
            System.out.println("new messages:");
            if (records.count() == 0) {
                System.out.println("empty");
            }
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("topic=%s, partition=%s, key=%s, value=%s%n",
                        record.topic(), record.partition(), record.key(), record.value());
            }
        }
    }

    public static void main(String[] args) {
        MessageConsumer messageConsumer = new MessageConsumer();
        messageConsumer.consumeMessage("test.topic");
    }
}
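Independently of the Java consumer, the console consumer that ships with Kafka is a quick way to verify that messages are actually arriving on the topic (broker address as configured in the examples above):

```shell
# Print every message on test.topic from the beginning of the log; Ctrl-C to stop
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.254.131:9092 \
  --topic test.topic --from-beginning
```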

6. Troubleshooting

If the producer fails with a connection timeout such as:

Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic test.topic not present in metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1269)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:933)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:856)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:743)
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic test.topic not present in metadata after 60000 ms.

it is often a firewall problem. Check whether the firewall is running:

firewall-cmd --state

Check whether port 9092 is open to clients:

firewall-cmd --list-ports

If it is not, open it:

firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload

If the port is open and the error persists, also check that listeners/advertised.listeners in server.properties advertises an address the client machine can resolve and reach.
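After opening the port, reachability can be verified from the client machine with a plain TCP probe (nc here is from the nmap-ncat package that CentOS 7 installs by default):

```shell
# Exits with status 0 if the broker port accepts TCP connections
nc -vz 192.168.254.131 9092
```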
