Kafka Setup and Java Client Operations


Kafka Cluster Setup and Usage

Preparing the environment
Kafka is written in Scala and runs on the JVM, so a JDK must be installed before installing Kafka.

yum install java-1.8.0-openjdk* -y

Kafka depends on ZooKeeper, so install ZooKeeper first:

wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz
tar -zxvf zookeeper-3.4.12.tar.gz
cd zookeeper-3.4.12
cp conf/zoo_sample.cfg conf/zoo.cfg

Start ZooKeeper:

bin/zkServer.sh start
bin/zkCli.sh 
ls /   # list the nodes under ZooKeeper's root

Step 1: download the package

wget https://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
tar -xzf kafka_2.11-1.1.0.tgz
cd kafka_2.11-1.1.0

Step 2: start the broker

Now start the Kafka service:

Startup script syntax: kafka-server-start.sh [-daemon] server.properties

bin/kafka-server-start.sh -daemon config/server.properties

Then open the ZooKeeper client again and inspect ZooKeeper's tree:

bin/zkCli.sh 
ls /            # list the Kafka-related nodes under the root
ls /brokers/ids # list the registered Kafka brokers

Kafka cluster configuration

First, create configuration files for the other two brokers:

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties

Edit them as follows. config/server-1.properties:

broker.id=1
listeners=PLAINTEXT://192.168.0.197:9093
log.dir=/tmp/kafka-logs-1

config/server-2.properties:

broker.id=2
listeners=PLAINTEXT://192.168.0.197:9094
log.dir=/tmp/kafka-logs-2

Start them:

bin/kafka-server-start.sh -daemon config/server-1.properties
bin/kafka-server-start.sh -daemon config/server-2.properties
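With all three brokers up, the target topic can be created before producing to it. A minimal sketch using the AdminClient that ships with kafka-clients (the topic name topic-replica-yang, the single partition, and replication factor 2 are example choices, not requirements):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.197:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replicated across 2 of the 3 brokers
            NewTopic topic = new NewTopic("topic-replica-yang", 1, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

Equivalently, bin/kafka-topics.sh --create can be used from the shell, and the broker will also auto-create topics if auto.create.topics.enable is left at its default.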

Java code

pom.xml

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.1.0</version>
</dependency>

MsgProducer

package com.jx.common.kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;
import java.util.concurrent.Future;

public class MsgProducer {

    public static void main(String[] args) throws Exception{

        Properties properties = new Properties();
        properties.put("bootstrap.servers", "192.168.0.197:9092,192.168.0.197:9093,192.168.0.197:9094");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 5; i++) {
            // send a message synchronously
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<>("topic-replica-yang", 0, Integer.toString(i), Integer.toString(i));
            Future<RecordMetadata> result = producer.send(producerRecord);
            // block until the broker acknowledges the send
            RecordMetadata metadata = result.get();
            System.out.println("sync send result: " + "topic-" + metadata.topic() + "|partition-"
                    + metadata.partition() + "|offset-" + metadata.offset());
        }
        // close the producer to flush buffers and release resources
        producer.close();
    }
}
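The loop above blocks on each send. For higher throughput the same producer can send asynchronously and handle the result in a callback instead; a sketch reusing the MsgProducer configuration:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class AsyncMsgProducer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "192.168.0.197:9092,192.168.0.197:9093,192.168.0.197:9094");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 5; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("topic-replica-yang", Integer.toString(i), Integer.toString(i));
            // the callback runs on the producer's I/O thread once the broker responds
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.println("async send: topic-" + metadata.topic()
                            + "|partition-" + metadata.partition()
                            + "|offset-" + metadata.offset());
                }
            });
        }
        // close() flushes any buffered records before returning
        producer.close();
    }
}
```

Note that without an explicit partition in the ProducerRecord, the partitioner chooses one from the key's hash.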

MsgConsumer

package com.jx.common.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;


public class MsgConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.0.197:9092,192.168.0.197:9093,192.168.0.197:9094");
        // consumer group id
        props.put("group.id", "testGroup");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // subscribe to the topic
        consumer.subscribe(Arrays.asList("topic-replica-yang"));
        // or consume a specific partition instead:
        //consumer.assign(Arrays.asList(new TopicPartition("topic-replica-yang", 0)));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            if (records.count() > 0) {
                // commit offsets synchronously
                consumer.commitSync();
            }

        }
    }
}
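The commented-out assign() call above replaces group-managed subscription with a fixed partition. Combined with seek(), this lets a consumer re-read a partition from an explicit offset; a sketch (partition 0 and offset 0 are illustrative values):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.0.197:9092,192.168.0.197:9093,192.168.0.197:9094");
        props.put("group.id", "testGroup");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition partition = new TopicPartition("topic-replica-yang", 0);
        // no group rebalancing: this consumer owns exactly this partition
        consumer.assign(Collections.singletonList(partition));
        // re-read the partition from the beginning
        consumer.seek(partition, 0);
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n",
                    record.offset(), record.key(), record.value());
        }
        consumer.close();
    }
}
```

Because assign() bypasses the group coordinator, two consumers assigned the same partition will both receive every record, unlike subscribe().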