I. Install the Java JDK
(Omitted.)
II. Install ZooKeeper
1. Download address:
2. After extracting, go into the conf directory inside it; rename the zoo_sample.cfg file there to zoo.cfg, open it, and append at the end:
# your own install path
dataDir=D:\zookeeper\apache-zookeeper-3.5.5-bin\apache-zookeeper-3.5.5-bin\data
dataLogDir=D:\zookeeper\apache-zookeeper-3.5.5-bin\apache-zookeeper-3.5.5-bin\log
(Note: the transaction-log key is dataLogDir; dataDirLog is not a valid ZooKeeper property.)
3. Configure environment variables
Create a new system variable named ZOOKEEPER_HOME whose value is your ZooKeeper install/extraction path (e.g. D:\zookeeper\apache-zookeeper-3.5.5-bin\apache-zookeeper-3.5.5-bin)
Add %ZOOKEEPER_HOME%\bin and %ZOOKEEPER_HOME%\conf to the Path environment variable
Press Win+R and type cmd to open a command window, then run zkServer; if startup information appears as in the screenshot, the configuration succeeded.
Note: keep this window open.
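To sanity-check the server from code, ZooKeeper answers the four-letter command ruok with imok on its client port. The sketch below is an illustration, not part of this setup: the class name, host, and port are assumptions matching the defaults above, and on ZooKeeper 3.5+ the ruok command must first be allowed via 4lw.commands.whitelist in zoo.cfg.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkHealthCheck {
    // Sends ZooKeeper's "ruok" four-letter command over a plain socket;
    // a healthy server replies with the 4 bytes "imok".
    static boolean isZooKeeperUp(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000);
            OutputStream out = socket.getOutputStream();
            out.write("ruok".getBytes("US-ASCII"));
            out.flush();
            socket.shutdownOutput();          // signal end of request
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4];
            int n = 0, r;
            while (n < 4 && (r = in.read(buf, n, 4 - n)) != -1) {
                n += r;                       // read until 4 bytes or EOF
            }
            return n == 4 && "imok".equals(new String(buf, "US-ASCII"));
        } catch (Exception e) {
            return false;                     // not reachable or not responding
        }
    }

    public static void main(String[] args) {
        System.out.println(isZooKeeperUp("localhost", 2181)
                ? "ZooKeeper is up" : "ZooKeeper is not reachable");
    }
}
```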
III. Install and start Kafka
1. Download the installation package
2. Extract it and enter the Kafka directory (this guide uses D:\kafka\kafka_2.12-2.3.0)
3. In the config directory, find and open the file server.properties
4. Find and edit log.dirs=D:\kafka\kafka_2.12-2.3.0\log
5. Find and edit zookeeper.connect=localhost:2181
6. By default, Kafka runs on port 9092 and connects to ZooKeeper on its default port 2181
7. Open a command window in the Kafka install directory and run:
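The zookeeper.connect value is not limited to a single host: it accepts a comma-separated host:port list, optionally followed by a chroot path that applies to the whole list (e.g. zk1:2181,zk2:2181/kafka). A small sketch of how such a string breaks down; the class and method names are illustrative, not part of Kafka:

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectStringParser {
    // Splits a zookeeper.connect string into {host, port} pairs.
    // A port defaults to "2181" when omitted; a trailing "/chroot" is stripped.
    static List<String[]> parse(String connect) {
        int slash = connect.indexOf('/');
        String hosts = slash >= 0 ? connect.substring(0, slash) : connect;
        List<String[]> result = new ArrayList<>();
        for (String hp : hosts.split(",")) {
            String[] parts = hp.split(":");
            result.add(new String[]{parts[0], parts.length > 1 ? parts[1] : "2181"});
        }
        return result;
    }

    public static void main(String[] args) {
        for (String[] hp : parse("zk1:2181,zk2:2182/kafka")) {
            System.out.println(hp[0] + " -> " + hp[1]);
        }
    }
}
```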
.\bin\windows\kafka-server-start.bat .\config\server.properties
Note: keep this window open.
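Before moving on to the client code, you can confirm that the broker is listening on 9092 (and ZooKeeper on 2181) with a plain TCP connect. This only proves the port is open, not that Kafka is fully healthy; the host and ports below are assumptions matching the defaults used in this guide.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if something accepts a TCP connection on host:port
    // within timeoutMs milliseconds.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("zookeeper 2181: " + isOpen("localhost", 2181, 2000));
        System.out.println("kafka     9092: " + isOpen("localhost", 9092, 2000));
    }
}
```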
IV. Calling Kafka from Java (code adapted from an online example)
Add the corresponding Maven dependency in IDEA:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>1.1.1</version>
</dependency>
1. Configuration class
/**
 * @author HuangZheng
 * @Date 2019/9/20 9:49
 */
public class KafkaProperties {
    public static final String ZK = "127.0.0.1:2181";
    public static final String TOPIC = "hello_topic";
    public static final String BROKER_LIST = "127.0.0.1:9092";
    public static final String GROUP_ID = "test_group1";
}
2. Producer API demo (note: this and the consumer below use the legacy Scala client API, which is deprecated and was removed in Kafka 2.0; it still compiles against the kafka_2.12 1.1.1 dependency above and can talk to a 2.3.0 broker)
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * @author HuangZheng
 * @Date 2019/9/20 9:50
 */
public class KafkaProducer extends Thread {

    private String topic;
    private Producer<Integer, String> producer;

    public KafkaProducer(String topic) {
        this.topic = topic;
        Properties properties = new Properties();
        properties.put("metadata.broker.list", KafkaProperties.BROKER_LIST);
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");
        producer = new Producer<Integer, String>(new ProducerConfig(properties));
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            String message = "message_" + messageNo;
            producer.send(new KeyedMessage<Integer, String>(topic, message));
            System.out.println("Sent: " + message);
            messageNo++;
            try {
                Thread.sleep(2000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        new KafkaProducer("test").start(); // uses the topic "test" already created on the Kafka cluster
    }
}
3. Consumer API demo
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * @author HuangZheng
 * @Date 2019/9/20 9:58
 */
public class KafkaConsumer extends Thread {

    private String topic;

    public KafkaConsumer(String topic) {
        this.topic = topic;
    }

    private ConsumerConnector createConnector() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", KafkaProperties.ZK);
        properties.put("group.id", KafkaProperties.GROUP_ID);
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    @Override
    public void run() {
        ConsumerConnector consumer = createConnector();
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 1);
        // topicCountMap.put(topic2, 1);
        // topicCountMap.put(topic3, 1);
        // key: topic name; value: List<KafkaStream<byte[], byte[]>>, its data streams
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStream = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = messageStream.get(topic).get(0); // the stream delivering the received data
        ConsumerIterator<byte[], byte[]> iterator = stream.iterator();
        while (iterator.hasNext()) {
            String message = new String(iterator.next().message());
            System.out.println("rec: " + message);
        }
    }
}
4. Start both classes from one main method
/**
 * @author HuangZheng
 * @Date 2019/9/20 14:50
 */
public class Test {
    public static void main(String[] args) {
        new KafkaProducer("test").start(); // uses the topic "test" already created on the Kafka cluster
        new KafkaConsumer("test").start();
    }
}
5. The run output (screenshot omitted): the producer prints Sent: message_N every two seconds, and the consumer echoes each one as rec: message_N.