Installing and Testing Kafka 1.1.0 on macOS, with a Java Producer and Consumer

Machine: localhost; topic: courier-gps

1. ZooKeeper setup

Create the following two directories:

Data directory (dataDir): /Users/jiangcaijun/technicalSoftware/zookeeper-3.4.12/zookeeperData

Log directory (dataLogDir): /Users/jiangcaijun/technicalSoftware/zookeeper-3.4.12/zookeeperLog

Copy zoo_sample.cfg and rename it to zoo.cfg: cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg and set dataDir and dataLogDir to the paths above.
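Under these assumptions, a minimal zoo.cfg might look like the fragment below (tickTime, initLimit, syncLimit, and clientPort are the defaults carried over from zoo_sample.cfg; only the two directory lines change):

```properties
# Minimal standalone ZooKeeper config; all values except the two
# directories are the zoo_sample.cfg defaults.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/Users/jiangcaijun/technicalSoftware/zookeeper-3.4.12/zookeeperData
dataLogDir=/Users/jiangcaijun/technicalSoftware/zookeeper-3.4.12/zookeeperLog
clientPort=2181
```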

Start it from the extracted directory: bin/zkServer.sh start &

After it starts, connect to ZooKeeper with the client:

bin/zkCli.sh -server localhost:2181

You can use ls / to list the znodes.

2. Kafka installation directory:

cd /Users/jiangcaijun/technicalSoftware/kafka_2.11-1.0.0

Edit config/server.properties and set:

zookeeper.connect=localhost:2181/kafka110

Start Kafka:

bin/kafka-server-start.sh config/server.properties &

This automatically creates the kafka110 node in ZooKeeper; you can confirm with the zk client that the node now exists.

3. List existing topics, then create one

bin/kafka-topics.sh --list --zookeeper localhost:2181/kafka110

Create courier-gps:

bin/kafka-topics.sh --create --zookeeper localhost:2181/kafka110 --replication-factor 1 --partitions 1 --topic courier-gps
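The topic above is created with a single partition, so every message lands in partition 0. As a rough illustration of how a keyed message maps to one of N partitions (Kafka's real default partitioner uses murmur2 hashing, not hashCode; this is only a simplified sketch):

```java
public class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner: map a key's
    // hash (forced non-negative) onto one of numPartitions buckets.
    // Kafka itself uses murmur2 on the serialized key bytes.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // With --partitions 1, every key maps to partition 0.
        System.out.println(partitionFor("courier-1101", 1)); // prints 0
    }
}
```

This is also why a custom partitioner (see the commented-out partitioner.class property in the producer below) only matters once a topic has more than one partition.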

4. Start a Kafka console producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic courier-gps

5. Start a Kafka console consumer

1. Via the broker (test did not pass; no messages were found)

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic courier-gps --from-beginning

2. Via ZooKeeper

bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181/kafka110 --topic courier-gps --from-beginning

Note: because the ZooKeeper root node kafka110 (a chroot) was configured, method 1 failed while method 2 succeeded.
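The chroot suffix is the key detail here: localhost:2181/kafka110 means all of Kafka's znodes live under /kafka110 instead of at the ZooKeeper root, so any ZooKeeper-based client must include that suffix. Kafka parses the connect string internally; the hypothetical helper below just illustrates how it splits into a host list and a chroot path:

```java
public class ChrootSketch {
    // Hypothetical helper (not a Kafka API): split a ZooKeeper connect
    // string into { host list, chroot path }. No '/' means chroot is "/".
    public static String[] split(String connect) {
        int i = connect.indexOf('/');
        return i < 0
                ? new String[]{connect, "/"}
                : new String[]{connect.substring(0, i), connect.substring(i)};
    }

    public static void main(String[] args) {
        String[] parts = split("localhost:2181/kafka110");
        System.out.println(parts[0] + " | " + parts[1]); // localhost:2181 | /kafka110
    }
}
```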

6. Connecting to Kafka from Java

1. Add the Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>1.1.0</version>
</dependency>

2. Producer:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * @Author: jiangcaijun
 * @Date: 2018/6/26 14:42
 */
public class MessageProducer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.23.1.130:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // To use a custom partitioner:
        // props.put("partitioner.class", "com.mr.partitioner.MyPartitioner");

        Producer<String, String> producer = new KafkaProducer<>(props);
        String str = "[{\"order_number\":\"1101\",\"location_action\":\"1\",\"loginname\":\"x\",\"user_id\":\"897\",\"c_time\":\"2018-06-15 18:29:26.0\",\"posistion_data\":\"2018-06-15 18:54:07.0\",\"lng\":\"116.446672\",\"lat\":\"39.895109\",\"dt_ymd\":\"20180615\"},\n" +
                "{\"order_number\":\"1102\",\"location_action\":\"1\",\"loginname\":\"y\",\"user_id\":\"897\",\"c_time\":\"2018-06-15 18:29:26.0\",\"posistion_data\":\"2018-06-15 18:43:49.0\",\"lng\":\"116.446245\",\"lat\":\"39.895983\",\"dt_ymd\":\"20180615\"}\n" +
                "]";
        producer.send(new ProducerRecord<>("courier-gps", null, str));
        producer.close();
    }
}
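The producer above packs two GPS points into a single JSON-array string and sends it as one message value. A stdlib-only sketch of building that payload (field names taken from the example message; a real project would use a JSON library rather than hand-formatted strings):

```java
public class GpsPayload {
    // Build one GPS record as a JSON object string. Only a few of the
    // fields from the example message are shown here for brevity.
    public static String record(String orderNumber, String loginname,
                                String lng, String lat) {
        return String.format(
                "{\"order_number\":\"%s\",\"loginname\":\"%s\",\"lng\":\"%s\",\"lat\":\"%s\"}",
                orderNumber, loginname, lng, lat);
    }

    public static void main(String[] args) {
        // Two records joined into one JSON array -- sent as a single value.
        String payload = "[" + record("1101", "x", "116.446672", "39.895109") + ","
                + record("1102", "y", "116.446245", "39.895983") + "]";
        System.out.println(payload);
    }
}
```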

3. Consumer (old ZooKeeper-based high-level consumer API):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class MessageConsumer {

    /* Kafka (version 1.1.0) */
    private final ConsumerConnector consumer;

    private final String TOPIC = "courier-gps";

    private MessageConsumer() {
        Properties props = new Properties();
        // ZooKeeper connection (note the /kafka110 chroot)
        props.put("zookeeper.connect", "172.23.0.13:2181/kafka110");
        // consumer group
        props.put("group.id", "group-courier-gps");
        // ZooKeeper timeouts
        props.put("zookeeper.session.timeout.ms", "1000");
        props.put("zookeeper.sync.time.ms", "1000");
        props.put("auto.commit.interval.ms", "1000");
        // serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put(TOPIC, 1);
        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        System.out.println("Received messages:");
        while (it.hasNext()) {
            System.out.println(it.next().message());
        }
    }

    public static void main(String[] args) {
        new MessageConsumer().consume();
    }
}
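Note that kafka.javaapi.consumer is the old ZooKeeper-based high-level consumer, which is deprecated as of Kafka 1.1.0. The equivalent configuration for the newer org.apache.kafka.clients.consumer.KafkaConsumer talks to the brokers directly instead of ZooKeeper. A sketch of just the property mapping (the broker address is an assumption; auto.offset.reset=earliest plays the role of --from-beginning):

```java
import java.util.Properties;

public class NewConsumerProps {
    // Sketch: configuration for the new KafkaConsumer API. It takes
    // bootstrap.servers (brokers) rather than zookeeper.connect, and
    // deserializer classes rather than Decoder objects.
    public static Properties newConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // brokers, not ZooKeeper
        props.put("group.id", "group-courier-gps");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest"); // read from the start, like --from-beginning
        return props;
    }

    public static void main(String[] args) {
        System.out.println(newConsumerProps().getProperty("bootstrap.servers"));
    }
}
```

Because the new consumer never touches ZooKeeper, the /kafka110 chroot issue from section 5 does not arise for it.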

Other notes:

1. To save terminal output to a file on Linux: ls > ls.txt # writes the output of ls into ls.txt

2. If other machines cannot reach this Kafka instance, change every localhost in kafka/config/server.properties to the machine's actual IP.
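An alternative to replacing every localhost is to set the listener properties explicitly in config/server.properties (the IP below is a placeholder for this machine's actual address):

```properties
# Bind on all interfaces, but advertise the externally reachable address
# to clients. Replace 192.168.1.100 with this machine's real IP.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.100:9092
```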
