Collecting logs with ELK and a Kafka buffer

Using Kafka as the buffer layer for ELK log collection

Kafka, originally developed at LinkedIn, is a distributed message system that supports partitions and multiple replicas, coordinated through ZooKeeper; its standout trait is that it can process large volumes of data in real time to satisfy a wide range of scenarios.

An earlier setup used Redis as the middleware to buffer logs, but message queuing is not Redis's strong suit: once the log volume grows it hits its own bottleneck and cannot effectively relieve pressure on the ES cluster. Serious production deployments bring in a dedicated Kafka message queue as the middleware instead, which can handle PB-scale log volumes.

Advantages of Kafka as a message system:

  • Decoupling: at project start it is extremely hard to predict what requirements will appear later. A message system inserts an implicit, data-based interface layer into the middle of processing; both sides implement that interface, so producers and consumers can change independently without breaking each other.

  • Redundancy: processing sometimes fails, and unless the data has been persisted it is lost. A message queue persists data until it has been fully processed, which removes that risk.

  • Scalability: because the queue decouples your processing, raising the enqueue and processing rates is easy; just add more processing nodes.

  • Flexibility & peak shaving: the application must keep working when traffic spikes, yet such bursts are rare; keeping resources on standby sized for peak load would be a huge waste. A message queue lets the key components absorb sudden bursts of traffic instead of collapsing completely under overload.

  • Asynchronous communication: often a message does not need, or is not wanted, to be processed immediately. The queue provides an asynchronous mechanism: put in as many messages as you like, then process them whenever needed.

  • Ordering guarantee: in most scenarios the processing order matters. Most message queues are inherently ordered and guarantee that data is processed in a specific order; Kafka guarantees ordering of messages within a partition.
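That last point deserves a concrete illustration: Kafka orders messages only within a partition, and messages that share a key always route to the same partition, so per-key order is preserved. The sketch below mimics that routing; cksum is a stand-in for Kafka's real key hash (murmur2), and the keys are made up:

```shell
# Sketch: same key -> same partition (cksum stands in for kafka's murmur2 hash).
partitions=3
for key in user-a user-b user-a; do
  h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)   # deterministic checksum of the key
  echo "$key -> partition $(( h % partitions ))"
done
# Both user-a messages land in the same partition, so their relative order is kept.
```

Messages with no key are spread round-robin across partitions, so global order across a multi-partition topic is not guaranteed.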

Kafka downloads: http://kafka.apache.org/downloads

ZooKeeper downloads: https://zookeeper.apache.org/releases.html

Note: Kafka cannot run on its own; it depends on ZooKeeper, which acts as its coordination and management layer.

ELK + Kafka architecture


  1. Filebeat is a lightweight log shipper; its advantage over Logstash is that it consumes far fewer resources
  2. Filebeat collects the logs produced by the app servers and ships them to the Kafka cluster
  3. Kafka receives the logs first, acting as middleware; its main job is peak shaving, relieving pressure on ES
  4. Logstash consumes from Kafka, filters and processes the events, then outputs them to the ES cluster
  5. Kibana displays the results in its web interface

Deploying the ZooKeeper cluster

Note: from ZooKeeper 3.5.5 onward, the package to download is apache-zookeeper-3.5.8-bin.tar.gz.

The -bin archive is the compiled binary release and can be used directly,

while the plain tar.gz archive contains only the source code and cannot be run as-is.

Since logs are collected in real time, the clocks must be kept in sync.

Run this on every node in the cluster:

yum -y install ntpdate
ntpdate ntp1.aliyun.com
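ntpdate is a one-shot sync, and clocks will drift again afterwards. A periodic re-sync can be sketched with cron; the half-hourly schedule and the ./ntpdate.cron staging file are illustrative, not from the original setup:

```shell
# Sketch: stage a half-hourly clock re-sync job for cron (schedule is illustrative).
cat > ./ntpdate.cron <<'EOF'
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1
EOF
cat ./ntpdate.cron
# On each node, apply it with:  crontab ./ntpdate.cron
```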

ES, Kibana, Logstash, and ZooKeeper all need a Java environment:

yum -y install java-1.8.0-openjdk*

Unpack the archive

tar zxf apache-zookeeper-3.5.8-bin.tar.gz -C /opt
cd /opt/
mv apache-zookeeper-3.5.8-bin/ ./zookeeper
cp zookeeper/conf/zoo_sample.cfg ./zookeeper/conf/zoo.cfg  ## copy the sample file and edit it

Add the main configuration

vim /opt/zookeeper/conf/zoo.cfg
dataDir=/data/zookeeper # data directory
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.10.2:2888:3888
server.2=192.168.10.1:2888:3888
server.3=192.168.10.7:2888:3888

Every node gets exactly the same configuration file; only myid differs.

The myid value comes from the server.N entries.

 mkdir -p /data/zookeeper
 echo '1' > /data/zookeeper/myid

Sync the ZooKeeper directories to the other nodes

 rsync -avh --delete /opt/zookeeper root@192.168.10.1:/opt/
 rsync -avh --delete /opt/zookeeper root@192.168.10.7:/opt/
 rsync -avh --delete /data root@192.168.10.1:/
 rsync -avh --delete /data root@192.168.10.7:/

Set myid on the other nodes (run the first command on 192.168.10.1, the second on 192.168.10.7):

sed -i 's#1#2#g' /data/zookeeper/myid    # on 192.168.10.1
sed -i 's#1#3#g' /data/zookeeper/myid    # on 192.168.10.7
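The server.N ↔ myid relationship can be sketched locally; the ./zk-data directory below is illustrative, while on the real nodes the file lives at /data/zookeeper/myid:

```shell
# Sketch: one myid file per node, matching the server.N lines in zoo.cfg.
ips=(192.168.10.2 192.168.10.1 192.168.10.7)   # same order as server.1, server.2, server.3
for i in 0 1 2; do
  id=$((i + 1))
  mkdir -p "./zk-data/${ips[$i]}"
  echo "$id" > "./zk-data/${ips[$i]}/myid"
done
cat ./zk-data/192.168.10.7/myid   # the node named in server.3 holds myid 3
```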

Start ZooKeeper

/opt/zookeeper/bin/zkServer.sh start    # start
/opt/zookeeper/bin/zkServer.sh stop     # stop
/opt/zookeeper/bin/zkServer.sh restart  # restart
/opt/zookeeper/bin/zkServer.sh status   # check status

Once all three are started, check the status on each node:

one node reports leader, the other two report follower.

[root@eskibana ~]# /opt/zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@filebeat data]# /opt/zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@localhost ~]# /opt/zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

Testing the ZooKeeper cluster

Data can be created on any one of the nodes and read from the others.

[root@localhost ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.10.2:2181
/usr/bin/java
Connecting to 192.168.10.2:2181
2020-06-27 17:48:53,074 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:07 GMT
2020-06-27 17:48:53,079 [myid:] - INFO  [main:Environment@109] - Client environment:host.name=localhost
2020-06-27 17:48:53,079 [myid:] - INFO  [main:Environment@109] - Client environment:java.version=1.8.0_102
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64/jre
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.class.path=/opt/zookeeper/bin/../zookeeper-server/target/classes:/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/zookeeper-jute-3.5.8.jar:/opt/zookeeper/bin/../lib/zookeeper-3.5.8.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/opt/zookeeper/bin/../lib/netty-transport-native-unix-common-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-transport-native-epoll-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-transport-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-resolver-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-handler-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-common-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-codec-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/netty-buffer-4.1.48.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/json-simple-1.1.1.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/jetty-servlet-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/jetty-server-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/jetty-security-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/jetty-io-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/jetty-http-9.4.24.v20191120.jar:/opt/zookeeper/bin/../lib/javax.servlet-api-3.1.0.jar:/opt/zookeeper/bin/../lib/jackson-databind-2.10.3.jar:/opt/zookeeper/bin/../lib/jackson-core-2.10.3.jar:/opt/zookeeper/bin/../lib/jackson-annotations-2.10.3.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper/bin/../zookeeper-*.jar:/opt/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/opt/zookeeper/bin/../conf:
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:java.compiler=<NA>
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:os.name=Linux
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:os.arch=amd64
2020-06-27 17:48:53,088 [myid:] - INFO  [main:Environment@109] - Client environment:os.version=3.10.0-514.el7.x86_64
2020-06-27 17:48:53,089 [myid:] - INFO  [main:Environment@109] - Client environment:user.name=root
2020-06-27 17:48:53,089 [myid:] - INFO  [main:Environment@109] - Client environment:user.home=/root
2020-06-27 17:48:53,089 [myid:] - INFO  [main:Environment@109] - Client environment:user.dir=/root
2020-06-27 17:48:53,089 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.free=24MB
2020-06-27 17:48:53,090 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.max=228MB
2020-06-27 17:48:53,090 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.total=29MB
2020-06-27 17:48:53,097 [myid:] - INFO  [main:ZooKeeper@868] - Initiating client connection, connectString=192.168.10.2:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7e774085
2020-06-27 17:48:53,108 [myid:] - INFO  [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2020-06-27 17:48:53,120 [myid:] - INFO  [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
2020-06-27 17:48:53,134 [myid:] - INFO  [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
Welcome to ZooKeeper!
JLine support is enabled
2020-06-27 17:48:53,223 [myid:192.168.10.2:2181] - INFO  [main-SendThread(192.168.10.2:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server 192.168.10.2/192.168.10.2:2181. Will not attempt to authenticate using SASL (unknown error)
2020-06-27 17:48:53,274 [myid:192.168.10.2:2181] - INFO  [main-SendThread(192.168.10.2:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /192.168.10.2:38026, server: 192.168.10.2/192.168.10.2:2181
2020-06-27 17:48:53,289 [myid:192.168.10.2:2181] - INFO  [main-SendThread(192.168.10.2:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server 192.168.10.2/192.168.10.2:2181, sessionid = 0x1000024bc5c0002, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.10.2:2181(CONNECTED) 0] 
[zk: 192.168.10.2:2181(CONNECTED) 0] create /test "haha"
Created /test
[zk: 192.168.10.2:2181(CONNECTED) 1] 

[root@filebeat ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.10.7:2181
Connecting to 192.168.10.7:2181
... (client environment and connection INFO lines, identical in form to the first transcript, trimmed) ...
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.10.7:2181(CONNECTED) 0] get /test
haha
[zk: 192.168.10.7:2181(CONNECTED) 1] 

[root@eskibana ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.10.1:2181
Connecting to 192.168.10.1:2181
... (client environment and connection INFO lines, identical in form to the first transcript, trimmed) ...
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.10.1:2181(CONNECTED) 1] get /test
haha
[zk: 192.168.10.1:2181(CONNECTED) 2] 

The ZooKeeper environment is now set up.

Deploying Kafka

tar zxf kafka_2.12-2.5.0.tgz -C /opt
cd /opt/
mv kafka_2.12-2.5.0/ ./kafka
cd kafka/
mkdir /opt/kafka/logs  # directory for kafka's log segments (the message data, not application logs)
vim /opt/kafka/config/server.properties

broker.id=1
listeners=PLAINTEXT://192.168.10.2:9092
log.dirs=/opt/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.10.1:2181,192.168.10.7:2181,192.168.10.2:2181


Copy the kafka directory to the other nodes

scp -rp /opt/kafka root@192.168.10.7:/opt/
scp -rp /opt/kafka root@192.168.10.1:/opt/

Two settings must change on each copied node.

On 192.168.10.1:

broker.id=2
listeners=PLAINTEXT://192.168.10.1:9092

On 192.168.10.7:

broker.id=3
listeners=PLAINTEXT://192.168.10.7:9092
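The per-node edits can also be generated instead of typed by hand. A sketch, with an illustrative ./kafka-conf staging directory (on the real nodes the file is /opt/kafka/config/server.properties):

```shell
# Sketch: render one server.properties per broker; only broker.id and listeners vary.
ips=(192.168.10.2 192.168.10.1 192.168.10.7)   # broker.id 1, 2, 3 respectively
mkdir -p ./kafka-conf
for i in 0 1 2; do
  id=$((i + 1))
  cat > "./kafka-conf/server-${id}.properties" <<EOF
broker.id=${id}
listeners=PLAINTEXT://${ips[$i]}:9092
log.dirs=/opt/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.10.1:2181,192.168.10.7:2181,192.168.10.2:2181
EOF
done
grep -H 'listeners' ./kafka-conf/server-*.properties
```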

Test startup

/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties 
[2020-06-27 18:10:27,192] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

If the last line shows id=1 and started, the broker came up successfully!

It can now be moved to the background.

Watch the log to follow startup:

/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties 

tail -f /opt/kafka/logs/server.log 
[2020-06-27 18:11:38,237] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

Start the other nodes

/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[2020-06-27 18:20:03,940] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)

/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties 
[2020-06-27 18:20:59,501] INFO [KafkaServer id=3] started (kafka.server.KafkaServer)

Create a topic

[root@localhost ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.10.1:2181,192.168.10.2:2181,192.168.10.7:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic kafkatest.
[root@localhost ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.10.1:2181,192.168.10.2:2181,192.168.10.7:2181 --topic kafkatest
Topic: kafkatest	PartitionCount: 3	ReplicationFactor: 3   Configs: 
	Topic: kafkatest	Partition: 0	Leader: 3	Replicas: 3,2,1	Isr: 3,2,1
	Topic: kafkatest	Partition: 1	Leader: 1	Replicas: 1,3,2	Isr: 1,3,2
	Topic: kafkatest	Partition: 2	Leader: 2	Replicas: 2,1,3	Isr: 2,1,3
## producer test (note: --broker-list takes broker addresses on port 9092, not the ZooKeeper port 2181)
[root@localhost ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.10.1:9092,192.168.10.2:9092,192.168.10.7:9092 --topic kafkatest
>
>hello
>hi
>lalalala
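The Replicas/Isr columns in the describe output above follow a simple pattern: with 3 partitions, 3 brokers, and replication factor 3, leaders rotate round-robin across the brokers starting from a randomly chosen offset. A sketch of that rotation, where start=2 is picked here to reproduce the leaders 3, 1, 2 seen above (Kafka chooses the start at random):

```shell
# Sketch: round-robin leader assignment over brokers 1..3 (start offset is illustrative).
brokers=(1 2 3)
n=${#brokers[@]}
start=2
for p in 0 1 2; do
  leader=${brokers[$(( (p + start) % n ))]}   # rotate through the broker list
  echo "partition $p -> leader $leader"
done
# -> partition 0 -> leader 3 / partition 1 -> leader 1 / partition 2 -> leader 2
```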

Note: on older versions the consumer was run as: /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.10.1:2181,192.168.10.2:2181,192.168.10.7:2181 --topic kafkatest --from-beginning

Trying that here fails:

[root@filebeat opt]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.10.1:2181,192.168.10.2:2181,192.168.10.7:2181 --topic kafkatest --from-beginning 
zookeeper is not a recognized option
Option                                   Description                            
------                                   -----------                            
--bootstrap-server <String: server to    REQUIRED: The server(s) to connect to. 
  connect to>                                                                   
--from-beginning                         If the consumer does not already have  
                                           an established offset to consume     
                                           from, start with the earliest        
                                           message present in the log rather    
                                           than the latest message.             
... (remaining options trimmed) ...

Checking the documentation shows that newer versions have removed the --zookeeper invocation style;

the consumer now connects to a broker directly, for example:

/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.1:9092 --topic kafkatest --from-beginning

[root@filebeat opt]#  /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.7:9092 --topic kafkatest --from-beginning


hello
hi
lalalala

[root@eskibana ~]# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.1:9092 --topic kafkatest --from-beginning


hello
hi
lalalala

Deploying the ES cluster

Download and install ES

wget https://mirror.tuna.tsinghua.edu.cn/elasticstack/yum/elastic6.x/6.6.0/elasticsearch-6.6.0.rpm
# install
rpm -ivh elasticsearch-6.6.0.rpm

Edit the configuration file

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml 
cluster.name: ES_cluster
node.name: node-1
path.data: /data/elasticsearch     # data directory (matches the mkdir below)
path.logs: /var/log/elasticsearch  # log directory
bootstrap.memory_lock: true   # lock memory; works together with /etc/elasticsearch/jvm.options
network.host: 192.168.10.1       
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.1", "192.168.10.2"]
discovery.zen.minimum_master_nodes: 2
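discovery.zen.minimum_master_nodes should be set to a quorum of master-eligible nodes, (n / 2) + 1, to avoid split brain; with the two nodes here that works out to 2 (which also means elections stop if either node dies, a known limitation of two-node clusters). The arithmetic:

```shell
# Sketch: quorum formula for minimum_master_nodes.
for nodes in 2 3 5; do
  echo "${nodes} master-eligible nodes -> minimum_master_nodes $(( nodes / 2 + 1 ))"
done
```

A three-node cluster is the usual recommendation, since it tolerates one node failure while keeping a quorum of 2.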

Create the data directory and set ownership

mkdir -p /data/elasticsearch 
chown -R elasticsearch:elasticsearch /data/elasticsearch/

Since memory locking is enabled, also adjust the heap settings in this memory-related file:

vim /etc/elasticsearch/jvm.options
-Xms2g		# minimum heap; set it to half of the node's physical memory
-Xmx2g		# maximum heap; officially recommended as half of physical memory, capped at 32G (beyond 32G it can actually backfire)
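The half-of-RAM, capped-at-32G rule can be computed instead of eyeballed. A sketch that reads /proc/meminfo (Linux-specific; the 4 GB fallback and the 1 GB floor are assumptions for small lab VMs):

```shell
# Sketch: heap = min(physical_ram / 2, 32) GB, floored at 1 GB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null || echo 4194304)
total_kb=${total_kb:-4194304}          # fallback (4 GB) if /proc/meminfo is unavailable
heap=$(( total_kb / 1024 / 1024 / 2 )) # half of physical memory, in GB
[ "$heap" -gt 32 ] && heap=32          # cap at 32 GB
[ "$heap" -lt 1 ]  && heap=1           # floor for tiny VMs
echo "-Xms${heap}g -Xmx${heap}g"
```

Keep -Xms and -Xmx equal so the JVM never resizes the heap at runtime.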

Then add the systemd override below (the official documentation explains the reasoning);
restart afterwards and everything is in place.

systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Save and exit the editor.

systemctl daemon-reload
systemctl start elasticsearch
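After the restart it is worth verifying that the lock actually took effect. A sketch: systemctl show reads the unit's effective property, and ES itself reports whether mlockall succeeded; the ulimit line is included only as a locally runnable stand-in:

```shell
# Sketch: check the effective memlock limit.
ulimit -l   # the current shell's own limit, shown as a runnable stand-in
# On the real node, query the unit itself:
# systemctl show elasticsearch --property=LimitMEMLOCK   # expect LimitMEMLOCK=infinity
# And ask es whether the lock succeeded:
# curl -s 'http://192.168.10.1:9200/_nodes?filter_path=**.mlockall'
```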


node-2 is configured exactly like node-1.
The ES cluster discovers its members via unicast, so make sure the node IPs are correct.

After installing ES on node-2, copy over the main configuration file and data directory

scp -p /etc/elasticsearch/elasticsearch.yml root@192.168.10.2:/etc/elasticsearch/
scp -rp /data/ root@192.168.10.2:/

On node-2, change:

node.name: node-2
network.host: 192.168.10.2

Start elasticsearch on node-2

Install and deploy Kibana

# install the rpm package
rpm -ivh kibana-6.6.0-x86_64.rpm 

Edit the configuration file

server.port: 5601
server.host: "192.168.10.1"
server.name: "db01"
elasticsearch.hosts: ["http://192.168.10.1:9200"]
Save and exit.

Start Kibana

systemctl start kibana

Install and deploy Filebeat

Install Filebeat and nginx

rpm -ivh filebeat-6.6.0-x86_64.rpm
yum -y install nginx
systemctl start nginx

Edit the Filebeat config

Add inputs that collect the nginx logs, with the Kafka cluster as the destination:

vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.template.settings:
  index.number_of_shards: 5

setup.kibana:
  hosts: ["192.168.10.1"]

output.kafka:
  hosts: ["192.168.10.1:9092","192.168.10.2:9092","192.168.10.7:9092"]
  topic: "elklog"
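Before restarting Filebeat it is worth a sanity check. Filebeat 6.x ships test subcommands for exactly this; the grep below runs against an illustrative local copy (./filebeat.yml) rather than the real /etc/filebeat/filebeat.yml:

```shell
# Sketch: confirm the kafka output section made it into the file.
cat > ./filebeat.yml <<'EOF'
output.kafka:
  hosts: ["192.168.10.1:9092","192.168.10.2:9092","192.168.10.7:9092"]
  topic: "elklog"
EOF
grep -c 'topic: "elklog"' ./filebeat.yml
# On the real node, validate the full config and the kafka connection:
# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml
```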


Install Logstash

rpm -ivh logstash-6.6.0.rpm

Edit the Logstash config

Consume the log stream from Kafka, filter it, then ship it to ES:

input {
  kafka {
    bootstrap_servers => "192.168.10.1:9092" # any one kafka broker works (this is a broker address, not zookeeper)
    topics => ["elklog"]
    group_id => "logstash"
    codec => "json"
  }
}

filter {
  mutate {
    convert => ["upstream_time","float"]
    convert => ["request_time","float"]
  }
}

output {
  stdout {}
   if "access" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.10.1:9200"]
      index => "nginx_access-%{+YYYY.MM}"
      manage_template => false
    }
   }
   if "error" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.10.1:9200"]
      index => "nginx_error-%{+YYYY.MM}"
      manage_template => false
     }
   }
}
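The %{+YYYY.MM} in the index names above is a Joda-style date pattern, so Logstash rolls to a new index each month. The stamp it would produce for the current month can be previewed with date(1):

```shell
# Sketch: preview this month's index names (local equivalent of %{+YYYY.MM}).
stamp=$(date +%Y.%m)
echo "nginx_access-${stamp}"
echo "nginx_error-${stamp}"
```

Monthly indices keep the index count low; switch the pattern to %{+YYYY.MM.dd} for daily rollover if the volume warrants it.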


Restart filebeat and start logstash

systemctl start filebeat
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf & # run in the background

Load testing

ab -n 1000 -c 100 http://192.168.10.7
ab -n 1000 -c 100 http://192.168.10.7/
ab -n 1000 -c 100 http://192.168.10.7/aaa
ab -n 1000 -c 100 http://192.168.10.7/bbb


nginx must be configured to write its access log as JSON, since the Filebeat config above sets json.keys_under_root.

Add the following to nginx.conf:

 log_format log_json '{ "@timestamp": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"up_resp_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
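A line produced by this format must parse as strict JSON, otherwise Filebeat's json.keys_under_root fails. Note that request_time is emitted as a quoted string, which is why the Logstash filter earlier converts it to float. A quick local check against a hand-written sample line (the field values are made up):

```shell
# Sketch: validate a sample log_json line with python's json module.
line='{ "@timestamp": "27/Jun/2020:22:29:00 +0800", "remote_addr": "192.168.10.100", "referer": "-", "request": "GET /aaa HTTP/1.0", "status": 404, "bytes": 3650, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-", "up_host": "-", "up_resp_time": "-", "request_time": "0.000" }'
echo "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])'
# prints the parsed status field: 404
```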

Finally, remember to switch the access_log directive from the default main format to log_json.
Restart nginx.
Rerun the load test and check the logs.

