Setting up Kafka
Related links:
https://blog.csdn.net/u010046908/article/details/62229015
https://www.cnblogs.com/BlueSkyyj/p/11425998.html
(Note: the second link omits the step of starting ZooKeeper.)
My own summary of the steps:
- brew install kafka
- Configuration file locations (on Apple Silicon Macs, Homebrew uses /opt/homebrew/etc/kafka/ instead of /usr/local/etc/kafka/)
/usr/local/etc/kafka/server.properties
/usr/local/etc/kafka/zookeeper.properties
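For orientation, the settings that matter in these two files look roughly like the fragment below. These are the upstream Kafka defaults; the Homebrew formula may relocate the data directories, so verify against your own copies:

```properties
# zookeeper.properties -- upstream defaults
dataDir=/tmp/zookeeper            # ZooKeeper snapshot/data directory
clientPort=2181                   # the port that zookeeper.connect below refers to

# server.properties -- upstream defaults
broker.id=0                       # unique id per broker in a cluster
log.dirs=/tmp/kafka-logs          # where topic/partition data is stored
num.partitions=1                  # default partition count for auto-created topics
zookeeper.connect=localhost:2181  # must match ZooKeeper's clientPort
```

The broker listens on port 9092 by default (the listeners key is commented out in the shipped file), which is why the console producer/consumer commands below target localhost:9092.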
- Start ZooKeeper (Homebrew links the Kafka scripts into PATH, so there is no need to cd into the install directory's bin/)
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties
- Start Kafka (alternatively, brew services start zookeeper and brew services start kafka run both in the background)
kafka-server-start /usr/local/etc/kafka/server.properties
- Create a topic, send messages, receive messages
# Create a topic (the --zookeeper flag was removed in Kafka 3.0; on Kafka 2.2+ use --bootstrap-server localhost:9092 instead)
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# List existing topics
kafka-topics --list --zookeeper localhost:2181
# Terminal 1: produce messages
kafka-console-producer --broker-list localhost:9092 --topic test
# Terminal 2: consume messages
kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning
Connecting Flink to Kafka
Reference link: https://blog.csdn.net/u014468095/article/details/103143275
pom configuration:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>UserBehaviorAnalysis</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>

    <modules>
        <module>HotItemsAnalysis</module>
    </modules>

    <properties>
        <flink.version>1.10.1</flink.version>
        <scala.binary.version>2.12</scala.binary.version>
        <kafka.version>2.8.0</kafka.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.12</artifactId>
            <version>1.10.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.7</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
Of these, the key dependency is the universal Kafka connector, which is what provides the unversioned FlinkKafkaConsumer class used in the test code below (the versioned flink-connector-kafka-0.11 artifact provides FlinkKafkaConsumer011 instead):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
Java test code:
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.junit.Test;

import java.util.Properties;

public class Test2 {
    @Test
    public void myTest() throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Minimal consumer configuration: broker address and consumer group id.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "consume_id");

        // Read from the "test" topic, deserializing each record as a plain string.
        DataStream<String> input = env.addSource(
                new FlinkKafkaConsumer<String>("test", new SimpleStringSchema(), props));

        input.print();
        env.execute("Flink Streaming Java API Skeleton");
    }
}
Watch out for the new FlinkKafkaConsumer here, it is a huge pitfall: use the plain class name with no 10/11 suffix. The versioned variants (FlinkKafkaConsumer010, FlinkKafkaConsumer011, ...) live in the matching versioned connector artifacts, while the unversioned FlinkKafkaConsumer comes from the universal flink-connector-kafka connector. Mixing them up cost me two hours.