Installing Kafka in a Virtual Machine (Ubuntu)
Hello, I'm Xiaohong (male, despite the name), a developer through and through. Today let's talk about installing Kafka in a virtual machine.
1. Preparation
Have VMware ready, along with the ZooKeeper and Kafka packages. My ZooKeeper version is 3.5.8 (be sure to download the package whose name ends in -bin), and my Kafka version is 2.12-2.5.0. Prerequisite: your VM already has a Java environment; I'm using JDK 8.
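A quick sanity check that the JDK is actually in place before you begin (the exact version string varies by distro and build):

```shell
# should print a 1.8.x version string for JDK 8
java -version
```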
2. Installing ZooKeeper
2.1 Download and extract
Download address: http://archive.apache.org/dist/zookeeper/
Put the package wherever you like and extract it; I extracted mine under /usr/local. Since my Ubuntu has a desktop environment you can extract it straight from the file manager, or of course use the command line (tar zxvf fileName).
2.2 Rename the config file
In the conf folder, rename zoo_sample.cfg to zoo.cfg (mv zoo_sample.cfg zoo.cfg).
2.3 Edit the config file
Open the zoo.cfg you just renamed in a text editor (sudo gedit zoo.cfg).
Change the dataDir and dataLogDir paths, then add a line server.0=<your-vm-ip>:2888:3888.
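Putting those edits together, a minimal zoo.cfg might look like this (the two directory paths and the IP are placeholders for your own values; the other keys come from the sample file):

```properties
tickTime=2000
initLimit=10
syncLimit=5
# replace with directories you created for ZooKeeper data and logs
dataDir=/usr/local/zk/data
dataLogDir=/usr/local/zk/logs
clientPort=2181
server.0=192.168.3.94:2888:3888
```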
2.4 Add ZooKeeper to the system environment
Edit /etc/profile (vim /etc/profile) and add the following:
export ZOOKEEPER_HOME=/usr/local/zk
export PATH=${ZOOKEEPER_HOME}/bin:$PATH
Then make the change take effect immediately:
source /etc/profile
2.5 Start ZooKeeper
In the bin directory, run:
./zkServer.sh start ../conf/zoo.cfg
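You can then check that it actually came up with the bundled status command (for a single-node setup like this it should report Mode: standalone):

```shell
./zkServer.sh status
```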
2.6 Verify that ZooKeeper started
I verified it with a bit of Java code (wrapped here in a class so it compiles as-is):
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkConnectionTest {

    private static String ip = "ip:2181";
    private static int session_timeout = 40000;
    private static CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        ZooKeeper zooKeeper = new ZooKeeper(ip, session_timeout, new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                if (watchedEvent.getState() == Event.KeeperState.SyncConnected) {
                    // only proceed once the connection is confirmed
                    latch.countDown();
                    System.out.println("Connected");
                }
            }
        });
        // wait until the connection is established
        latch.await();
        ZooKeeper.States states = zooKeeper.getState();
        System.out.println(states);
    }
}
Just swap ip for your VM's IP. A successful start looks like the screenshot below.
3. Installing Kafka
3.1 Download and extract
Download address: http://archive.apache.org/dist/kafka/
Extract it the same way as ZooKeeper; I extracted mine into /usr/local/kafka.
3.2 Edit the config file
In the config directory, open server.properties in a text editor.
Add:
advertised.host.name=<your-vm-ip>
Change:
listeners=PLAINTEXT://<your-vm-ip>:9092
log.dirs=<path where you want the logs stored>
zookeeper.connect=<your-vm-ip>:2181
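For reference, the relevant lines in server.properties might end up looking like this (the IP and log path are placeholders for your own values; note that advertised.host.name is deprecated in recent Kafka versions in favor of advertised.listeners, but it still works in 2.5):

```properties
listeners=PLAINTEXT://192.168.3.94:9092
advertised.host.name=192.168.3.94
log.dirs=/usr/local/kafka/kafka-logs
zookeeper.connect=192.168.3.94:2181
```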
3.3 Start Kafka
In the bin directory, run:
./kafka-server-start.sh ../config/server.properties
To run it in the background, insert -daemon right after ./kafka-server-start.sh, before the properties path.
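Before wiring up any code, you can smoke-test the broker with the scripts shipped in bin (the topic name test here is arbitrary; replace the IP with your VM's):

```shell
# create a topic, then list topics to confirm the broker responds
./kafka-topics.sh --bootstrap-server 192.168.3.94:9092 --create --topic test --partitions 1 --replication-factor 1
./kafka-topics.sh --bootstrap-server 192.168.3.94:9092 --list
```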
4. Open the firewall ports
Check the firewall status:
systemctl status firewalld
If you get Command 'firewall-cmd' not found, install it with apt install firewalld.
With the firewall running, open the two ports:
firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
After adding the rules, reload the firewall, or they won't take effect.
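Concretely, a reload plus a check that both ports are actually open:

```shell
firewall-cmd --reload
# should list 2181/tcp and 9092/tcp
firewall-cmd --zone=public --list-ports
```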
5. Testing
I used a Spring Boot project for the test.
5.1 The pom file
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.5.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.2.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.14</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
5.2 The yml file
server:
  port: 8881
spring:
  kafka:
    bootstrap-servers: 192.168.3.94:9092
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      acks: 1
    consumer:
      auto-commit-interval: 1S
      auto-offset-reset: earliest
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    listener:
      concurrency: 5
      ack-mode: manual_immediate
      missing-topics-fatal: false
5.3 The producer
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/product")
@Slf4j
public class KafkaProduct {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    /**
     * custom topic
     */
    public static final String TOPIC_TEST = "topic.test";
    /**
     * consumer group 1
     */
    public static final String TOPIC_GROUP1 = "topic.group1";
    /**
     * consumer group 2
     */
    public static final String TOPIC_GROUP2 = "topic.group2";

    @PostMapping("/send")
    public void send() {
        String obj2String = "hahaha";
        log.info("about to send message: {}", obj2String);
        // send the message
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(TOPIC_TEST, obj2String);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                // handle a failed send
                log.info(TOPIC_TEST + " - producer failed to send message: " + throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> stringObjectSendResult) {
                // handle a successful send
                log.info(TOPIC_TEST + " - producer sent message successfully: " + stringObjectSendResult.toString());
            }
        });
    }
}
5.4 The listener
import java.util.Optional;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class KafkaListeners2 {

    @KafkaListener(topics = KafkaProduct.TOPIC_TEST, groupId = KafkaProduct.TOPIC_GROUP1)
    public void consumerGroup1(ConsumerRecord<?, ?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            Object msg = message.get();
            log.info("consumerGroup1 consumed: Topic:" + topic + ", Message:" + msg);
            // commit the offset manually
            ack.acknowledge();
        }
    }

    @KafkaListener(topics = KafkaProduct.TOPIC_TEST, groupId = KafkaProduct.TOPIC_GROUP2)
    public void consumerGroup2(ConsumerRecord<?, ?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            Object msg = message.get();
            log.info("consumerGroup2 consumed: Topic:" + topic + ", Message:" + msg);
            // commit the offset manually
            ack.acknowledge();
        }
    }
}
5.5 Run the project
Send a POST request to localhost:8881/product/send with Postman and watch the console.
If you see output like this, everything worked.
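If you prefer the command line over Postman, curl works too:

```shell
curl -X POST http://localhost:8881/product/send
```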
6. Closing words
And that's it: a simple run-through of installing Kafka on Ubuntu plus a small example of using it. If anything is wrong or missing, corrections are welcome.