High-Availability Deployment of a Logstash Pseudo-Cluster with Kafka


Logstash ships with no built-in high-availability mechanism, but Kafka consumer groups can be used to build a Logstash pseudo-cluster that provides high availability. This post walks through a test of that approach.

Environment Setup

We use a single-node Kafka and deploy two Logstash instances that simulate a cluster for highly available consumption of Kafka messages. Prepare two virtual machines with JDK 1.8 installed:
192.168.195.11  kafka, logstash, jdk1.8
192.168.195.12  logstash, jdk1.8
Install the JDK with your distribution's package manager.

Kafka Deployment

Download and upload the release archive
Download a Kafka release archive and upload it to VM 192.168.195.11.

Unpack the archive

[root@192 ~]# tar -zvxf kafka_2.12-2.8.1.tgz 
kafka_2.12-2.8.1/
kafka_2.12-2.8.1/LICENSE
kafka_2.12-2.8.1/NOTICE
kafka_2.12-2.8.1/bin/
kafka_2.12-2.8.1/bin/kafka-delete-records.sh
kafka_2.12-2.8.1/bin/trogdor.sh

Enter the unpacked directory

[root@192 ~]# ln -s kafka_2.12-2.8.1 kafka
[root@192 ~]# cd kafka
[root@192 kafka]# ll
total 40
drwxr-xr-x. 3 root root  4096 Sep 14  2021 bin
drwxr-xr-x. 3 root root  4096 Sep 14  2021 config
drwxr-xr-x. 2 root root  8192 Apr 11 06:24 libs
-rw-r--r--. 1 root root 14520 Sep 14  2021 LICENSE
drwxr-xr-x. 2 root root   262 Sep 14  2021 licenses
-rw-r--r--. 1 root root   953 Sep 14  2021 NOTICE
drwxr-xr-x. 2 root root    44 Sep 14  2021 site-docs

Start the ZooKeeper bundled with Kafka

[root@192 kafka]# bin/zookeeper-server-start.sh   config/zookeeper.properties 
[2022-04-11 06:29:06,215] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,216] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,226] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,226] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,228] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-04-11 06:29:06,228] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-04-11 06:29:06,228] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-04-11 06:29:06,228] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2022-04-11 06:29:06,233] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2022-04-11 06:29:06,247] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,247] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,248] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,248] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-04-11 06:29:06,248] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2022-04-11 06:29:06,251] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2022-04-11 06:29:06,358] INFO Server environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2022-04-11 06:29:06,358] INFO Server environment:host.name=192.168.0.102 (org.apache.zookeeper.server.ZooKeeperServer)
[2022-04-11 06:29:06,358] INFO Server environment:java.version=1.8.0_191 (org.apache.zookeeper.server.ZooKeeperServer)
[2022-04-11 06:29:06,358] INFO Server environment:java.vendor=Oracle Corporation 

Start Kafka
Start Kafka with the configuration file shipped in the distribution. Its zookeeper.connect setting already points at the local ZooKeeper (localhost:2181), so no changes are needed:

bin/kafka-server-start.sh config/server.properties

Alternatively, pass the -daemon flag to run the broker in the background:

[root@192 kafka]# bin/kafka-server-start.sh -daemon  config/server.properties 
[root@192 kafka]# jps
5024 Kafka
4641 QuorumPeerMain
5105 Jps

Verify the services
A monitoring tool can connect to both ZooKeeper and Kafka, confirming that the services started correctly.
Create a topic and send messages
Create a topic named logstashHA with the monitoring tool, then send a few messages with the console producer:

[root@192 kafka]#  bin/kafka-console-producer.sh --broker-list localhost:9092 --topic  logstashHA
>qweqwe
>1
>2
>3
>4
>4
>5
>5
>6

The messages just sent are visible in the monitoring tool.
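Each record appended to a topic partition receives a monotonically increasing offset, and consumers track their position by offset (later in this post, Logstash's log shows it resetting to offset 25 on this partition). A toy Python model of the idea, purely illustrative and unrelated to Kafka's real storage code:

```python
class Partition:
    """Toy model of a Kafka topic partition: an append-only log
    where each record is assigned the next offset."""
    def __init__(self):
        self.log = []

    def append(self, value):
        self.log.append(value)
        return len(self.log) - 1  # offset assigned to this record

    def read_from(self, offset):
        # a consumer resuming at `offset` sees everything from there on
        return self.log[offset:]

p = Partition()
offsets = [p.append(v) for v in ["qweqwe", "1", "2", "3"]]
print(offsets)           # [0, 1, 2, 3]
print(p.read_from(2))    # ['2', '3']
```
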

Logstash Deployment

Upload and unpack the package
Download the Logstash archive, upload it to the server, and unpack it:

[root@192 ~]# tar -zvxf logstash-6.5.4.tar.gz 
......
[root@192 ~]# mv logstash-6.5.4 logstash
[root@192 ~]# cd logstash
[root@192 logstash]# ll
total 844
drwxr-xr-x.  2 root root   4096 Apr 11 06:55 bin
drwxr-xr-x.  2 root root    142 Apr 11 06:55 config
-rw-r--r--.  1 root root   2276 Dec 18  2018 CONTRIBUTORS
drwxr-xr-x.  2 root root      6 Dec 18  2018 data
-rw-r--r--.  1 root root   4056 Dec 18  2018 Gemfile
-rw-r--r--.  1 root root  21862 Dec 18  2018 Gemfile.lock
drwxr-xr-x.  6 root root     84 Apr 11 06:55 lib
-rw-r--r--.  1 root root  13675 Dec 18  2018 LICENSE.txt
drwxr-xr-x.  4 root root     90 Apr 11 06:55 logstash-core
drwxr-xr-x.  3 root root     57 Apr 11 06:55 logstash-core-plugin-api
drwxr-xr-x.  4 root root     55 Apr 11 06:55 modules
-rw-r--r--.  1 root root 808305 Dec 18  2018 NOTICE.TXT
drwxr-xr-x.  3 root root     30 Apr 11 06:55 tools
drwxr-xr-x.  4 root root     33 Apr 11 06:55 vendor
drwxr-xr-x. 10 root root    205 Apr 11 06:55 x-pack

Create a pipeline configuration
The configuration reads messages from the Kafka node and prints them to the terminal:

[root@192 logstash]# vi config/testHa.conf 
input{
    kafka {
        bootstrap_servers => "192.168.195.11:9092"
        topics => "logstashHA"
        consumer_threads => 1
        decorate_events => true
        group_id => "logstashGroup"
        client_id => "logstashCli"
    }

}
output{
    stdout{}
}

Start Logstash

[root@192 logstash]# bin/logstash -f config/testHa.conf 
Sending Logstash logs to /root/logstash/logs which is now configured via log4j2.properties
[2022-04-11T08:10:19,195][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-04-11T08:10:19,213][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}

Consume messages
After sending messages to Kafka again, Logstash consumes and prints them, confirming the service works.

High-Availability Test

Deploy a second Logstash
Following the same steps, deploy another Logstash on the 195.12 VM, but with a different Kafka consumer group id:

[root@192 logstash]# vi config/testHA.conf 
input{
    kafka {
        bootstrap_servers => "192.168.195.11:9092"
        topics => "logstashHA"
        consumer_threads => 1
        decorate_events => true
        group_id => "logstashGroup2"
        client_id => "logstashCli2"
    }

}
output{
    stdout{}
}

Start Logstash

[root@192 logstash]# bin/logstash -f config/testHA.conf 

......
Consumer clientId=logstashCli2-0, groupId=logstashGroup2] (Re-)joining group
[2022-05-01T20:24:07,766][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstashCli2-0, groupId=logstashGroup2] Successfully joined group with generation 1
[2022-05-01T20:24:07,776][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstashCli2-0, groupId=logstashGroup2] Setting newly assigned partitions [logstashHA-0]
[2022-05-01T20:24:07,803][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstashCli2-0, groupId=logstashGroup2] Resetting offset for partition logstashHA-0 to offset 25.
[2022-05-01T20:24:07,878][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

When the two Logstash instances are in different groups, both consume every message
Sending another message to Kafka now shows it printed by both Logstash instances.
When the two Logstash instances share one group, only one node consumes the messages
The behavior above occurred because the two instances belonged to different consumer groups. Now change the configuration on the 195.12 VM so that its group_id and client_id match those on 195.11:
[root@192 logstash]# vi config/testHA.conf 
input{
    kafka {
        bootstrap_servers => "192.168.195.11:9092"
        topics => "logstashHA"
        consumer_threads => 1
        decorate_events => true
        group_id => "logstashGroup"
        client_id => "logstashCli"
    }

}
output{
    stdout{}
}

Restart the service, send messages to Kafka, and watch the output of both Logstash instances.
Four messages were sent, and all of them were consumed by the Logstash on 195.11,
while the Logstash on 195.12 printed nothing and consumed no messages.
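What we just observed is Kafka's partition-assignment rule: within a single consumer group, each partition is owned by exactly one consumer, whereas separate groups are assigned independently and therefore each receive every message. A rough Python sketch of the rule (illustrative only; the member names are made up and real Kafka uses pluggable assignors):

```python
from collections import defaultdict

def assign(partitions, group_members):
    """Round-robin partition assignment within ONE consumer group:
    every partition goes to exactly one member."""
    owners = defaultdict(list)
    for i, partition in enumerate(partitions):
        owners[group_members[i % len(group_members)]].append(partition)
    return dict(owners)

partitions = ["logstashHA-0"]  # our topic has a single partition

# Same group ("logstashGroup") on both nodes: one member owns the
# partition, the other sits idle as a hot standby.
same_group = assign(partitions, ["node-195.11", "node-195.12"])
print(same_group)  # {'node-195.11': ['logstashHA-0']}

# Different groups: assignment is computed per group, so each node
# ends up owning the partition and both print every message.
print(assign(partitions, ["node-195.11"]))
print(assign(partitions, ["node-195.12"]))
```

With only one partition, a second consumer in the same group can never get work; adding partitions to the topic would let both nodes consume in parallel while keeping the failover property.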

Simulate a Logstash crash
Kill the Logstash process on 195.11 directly:

[root@192 ~]# kill -9 5770
[root@192 ~]# 

The Logstash on 195.11 is now gone.

Send messages again: the Kafka messages are now consumed normally by the other Logstash node.

With that, Logstash high availability is achieved.
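The failover relies on Kafka's group rebalance: once the group coordinator notices a member has died (session timeout after the kill -9), it recomputes the assignment over the surviving members, and the partition moves to the remaining Logstash. A self-contained sketch under the same simplified round-robin assumption as above (names are illustrative):

```python
def rebalance(partitions, members):
    """Recompute partition ownership over the current group members
    (simplified round-robin; real Kafka uses pluggable assignors)."""
    owners = {}
    for i, partition in enumerate(partitions):
        owners[partition] = members[i % len(members)]
    return owners

partitions = ["logstashHA-0"]
members = ["logstash-195.11", "logstash-195.12"]

before = rebalance(partitions, members)
print(before)  # {'logstashHA-0': 'logstash-195.11'}

# kill -9 on 195.11: the coordinator drops the dead member and rebalances
members.remove("logstash-195.11")
after = rebalance(partitions, members)
print(after)   # {'logstashHA-0': 'logstash-195.12'}
```

Because the surviving consumer resumes from the group's last committed offset, messages produced after the crash are picked up without manual intervention, which is exactly what the test showed.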
