Lab environment: three CentOS 7 virtual machines, kafka_2.12-3.0.1, apache-zookeeper-3.6.3
A Kafka cluster should really have at least three brokers; because of my machine's limited resources, only two Kafka brokers are used as the cluster here.
Hostname | IP |
---|---|
kafka01 | 192.168.1.100 |
kafka02 | 192.168.1.101 |
nginx-filebeat-01 | 192.168.1.102 |
Preparation:
Edit the hosts file on all three machines:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 kafka01
192.168.1.101 kafka02
192.168.1.102 nginx-filebeat-01
================================= ZooKeeper configuration ==============================
Download and configure ZooKeeper
Download ZooKeeper on both Kafka hosts.
On kafka01:
- Create the ZooKeeper data directory and the myid file
mkdir /tmp/zookeeper
touch /tmp/zookeeper/myid
echo 1 > /tmp/zookeeper/myid  # identifies this ZooKeeper node in the cluster
- Download and extract ZooKeeper
wget https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
tar xf apache-zookeeper-3.6.3-bin.tar.gz
cd apache-zookeeper-3.6.3-bin  # enter the ZooKeeper install directory
- Enter the ZooKeeper install directory and edit the configuration file
[root@manager10 apache-zookeeper-3.6.3-bin]# ls
bin conf docs lib LICENSE.txt logs NOTICE.txt README.md README_packaging.md
[root@localhost apache-zookeeper-3.6.3-bin]# cd conf/
[root@localhost conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg #copy the sample config to zoo.cfg
[root@localhost conf]# ls
configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg
- The main work is in zoo.cfg: append the cluster's server entries at the end of the file.
[root@manager10 conf]# vim zoo.cfg
server.1=kafka01:2888:3888
server.2=kafka02:2888:3888
# The 1 and 2 here must match the value written to each node's myid file
# Port 2888 is ZooKeeper's internal port for follower-to-leader communication
# Port 3888 is ZooKeeper's leader-election port
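Only the appended server lines are shown above; for reference, the full zoo.cfg would look roughly like this (the first five settings are the zoo_sample.cfg defaults; note that dataDir must point at the directory holding the myid file created earlier):

```properties
# zoo.cfg - sketch of the complete file after editing
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
# cluster membership: server.<myid>=<host>:<peer-port>:<election-port>
server.1=kafka01:2888:3888
server.2=kafka02:2888:3888
```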
Configure ZooKeeper on kafka02 the same way; the only difference is that myid on kafka02 should be set to 2, while kafka01's is 1.
Finally, start the ZooKeeper service on both hosts.
[root@manager10 apache-zookeeper-3.6.3-bin]# ls
bin conf docs lib LICENSE.txt logs NOTICE.txt README.md README_packaging.md
[root@manager10 apache-zookeeper-3.6.3-bin]# bin/zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /root/kafka_2.12-3.0.1/bin/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@manager10 apache-zookeeper-3.6.3-bin]#
Check the cluster state with bin/zkServer.sh status on kafka01 and kafka02: one node should report Mode: leader and the other Mode: follower.
====================================== Kafka configuration =======================
Download Kafka on both Kafka hosts.
# using an Aliyun mirror in China
[root@manager10 ~]# wget http://mirrors.aliyun.com/apache/kafka/3.0.1/kafka_2.12-3.0.1.tgz
[root@manager10 ~]# tar xf kafka_2.12-3.0.1.tgz
[root@manager10 ~]# cd kafka_2.12-3.0.1
[root@manager10 kafka_2.12-3.0.1]# ls
bin config libs LICENSE licenses logs NOTICE site-docs
[root@manager10 kafka_2.12-3.0.1]# cd config
================================= kafka01 host ==================
Edit the server.properties file in the config directory:
[root@manager10 config]# vim server.properties  # edit the broker settings for kafka01
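The original screenshot of the edits is not available; for kafka01, the key settings would look roughly like this (a sketch based on the hostnames and ports used elsewhere in this post, not the author's exact values):

```properties
# server.properties highlights for kafka01 (assumed values)
broker.id=1
listeners=PLAINTEXT://kafka01:9092
advertised.listeners=PLAINTEXT://kafka01:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=kafka01:2181,kafka02:2181
```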
======================== kafka02 host ===================
[root@manager10 kafka_2.12-3.0.1]# cd config/
[root@manager10 config]# vim server.properties  # edit the broker settings for kafka02
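Again the screenshot is not reproduced here; kafka02's server.properties would differ from kafka01's only in the broker id and the listener hostname (assumed values, since broker.id must be unique per broker):

```properties
# server.properties highlights for kafka02 (assumed values)
broker.id=2
listeners=PLAINTEXT://kafka02:9092
advertised.listeners=PLAINTEXT://kafka02:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=kafka01:2181,kafka02:2181
```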
Start Kafka (both brokers must be started):
From the Kafka install directory, run the following command to start the Kafka service.
[root@manager10 kafka_2.12-3.0.1]# bin/kafka-server-start.sh -daemon config/server.properties
[root@manager10 kafka_2.12-3.0.1]# lsof -i:9092
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 9868 root 164u IPv6 57917 0t0 TCP kafka02:XmlIpcRegSvc (LISTEN)
java 9868 root 183u IPv6 57924 0t0 TCP kafka02:49542->kafka02:XmlIpcRegSvc (ESTABLISHED)
java 9868 root 184u IPv6 57925 0t0 TCP kafka02:XmlIpcRegSvc->kafka02:49542 (ESTABLISHED)
Verify with a console producer and consumer:
Create a topic from any host in the Kafka cluster:
[root@manager10 kafka_2.12-3.0.1]# bin/kafka-topics.sh --bootstrap-server 192.168.1.101:9092 --create --topic test --partitions 2 --replication-factor 1
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Created topic test.
Start a console producer:
[root@manager10 kafka_2.12-3.0.1]# bin/kafka-console-producer.sh --broker-list 192.168.1.101:9092 --topic test
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>produced a car
>
Start a console consumer:
[root@manager10 kafka_2.12-3.0.1]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.101:9092 --topic test --from-beginning --group default_group
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
produced a car
============================== Filebeat + Nginx setup on host 192.168.1.102 ==============================
Disable the firewall and SELinux:
[root@localhost conf]# systemctl stop firewalld
[root@localhost conf]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service
[root@localhost conf]# setenforce 0
setenforce: SELinux is disabled
[root@localhost conf]# getenforce
Disabled
Install and start Nginx
yum install nginx -y
nginx  # start nginx
Browse to the Nginx page. The index.html on this host was modified earlier, so the homepage may look different from yours; as long as the page loads, Nginx started successfully.
Filebeat configuration
Install:
[root@localhost conf]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# create fb.repo pointing at the official Elastic yum repository
[root@localhost conf]# vim /etc/yum.repos.d/fb.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install Filebeat:
[root@localhost conf]# yum install filebeat -y
# enable start on boot
[root@localhost conf]# systemctl enable filebeat
Configure:
# list the modules Filebeat supports
[root@localhost conf]# filebeat modules list
# enable the needed modules
[root@localhost conf]# filebeat modules enable system nginx mysql
# edit /etc/filebeat/filebeat.yml to collect the nginx access log: keep only the following content and delete the rest
[root@localhost conf]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
# ============================kafka=======================
output.kafka:
hosts: ["192.168.1.100:9092","192.168.1.101:9092"]
topic: test
keep_alive: 10s
Finally, start Filebeat:
[root@localhost conf]# systemctl restart filebeat
[root@localhost conf]# ps aux|grep filebeat
root 2867 22.6 11.0 945100 109748 ? Ssl 10:44 0:01 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
root 2874 0.0 0.0 112824 988 pts/0 R+ 10:44 0:00 grep --color=auto filebeat
[root@localhost conf]#
To verify that the consumer can read the nginx log data, visit the Nginx page to generate new access-log entries; the console consumer started earlier then prints the events, showing that the Nginx logs are being pushed into the Kafka cluster.
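Note that the consumer does not print the bare nginx log line: Filebeat wraps each line in a JSON envelope before sending it to Kafka. A minimal Python sketch of unpacking such an event (the sample event and field names are assumptions based on Filebeat 7.x defaults, not output captured from this setup):

```python
import json

# A Filebeat-style event as it would arrive from Kafka
# (abbreviated; real events carry many more metadata fields)
sample_event = json.dumps({
    "@timestamp": "2022-04-01T02:44:00.000Z",
    "message": '192.168.1.1 - - "GET / HTTP/1.1" 200 4833',
    "log": {"file": {"path": "/var/log/nginx/access.log"}},
    "host": {"name": "nginx-filebeat-01"},
})

def extract_log_line(value):
    """Return the original nginx access-log line from a Filebeat event."""
    event = json.loads(value)
    return event.get("message", "")

print(extract_log_line(sample_event))
# prints: 192.168.1.1 - - "GET / HTTP/1.1" 200 4833
```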