ELK 7.5 Setup and Log Monitoring Tutorial


Introduction to ELK

Elasticsearch: a Lucene-based search server. It provides a distributed, multi-tenant full-text search engine exposed through a RESTful web interface. Elasticsearch is written in Java and released as open source under the Apache License, and it is one of the most popular enterprise search engines today. Designed for cloud environments, it delivers near-real-time search and is stable, reliable, fast, and easy to install and use.

Logstash: a data-collection engine with real-time pipelining capabilities. It is mainly used to collect and parse logs and ship them into Elasticsearch, with which it integrates very well.

Kibana: an Apache-licensed web platform written in JavaScript that provides analytics and visualization for Elasticsearch. It can search and interact with the data stored in Elasticsearch indices and generate charts and tables across various dimensions.

Workflow

Deploy Logstash on every server whose logs need to be collected. There it acts as a Logstash agent (a Logstash shipper) that monitors, filters, and collects logs, sending the filtered content to Redis. A Logstash indexer then gathers the logs from Redis and hands them to the full-text search service Elasticsearch, where you can run custom searches and use Kibana to present the results in the browser.
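The shipper/indexer split described above can be sketched as two Logstash pipeline configs (a sketch only; the Redis host, list key, and file path here are illustrative assumptions, and the single-node setup built later in this tutorial skips Redis entirely):

```
# shipper (runs on each app server): tail logs and push events to Redis
input  { file { path => "/var/log/messages" } }
output { redis { host => "192.168.56.132" data_type => "list" key => "logstash" } }

# indexer (runs centrally): pop events from Redis and index into Elasticsearch
input  { redis { host => "192.168.56.132" data_type => "list" key => "logstash" } }
output { elasticsearch { hosts => ["192.168.56.132:9200"] } }
```

Redis here is just a buffer: shippers stay lightweight, and the indexer absorbs bursts without overwhelming Elasticsearch.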

Environment

RHEL 7.1 virtual machine
firewalld disabled
SELinux disabled
logstash-7.5.0
kibana-7.5.0
elasticsearch-7.5.0

Setting Up ELK

Install the JDK

Check whether a JDK is already installed on the machine; if it is, skip this step.

[root@server1 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode) # JDK already installed
Install Elasticsearch

Configure the ELK yum repository

[root@server1 yum.repos.d]# vim elasticsearch.repo 

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@server1 yum.repos.d]# yum install elasticsearch

Configure Elasticsearch

Set the Elasticsearch startup heap size. My VM has little memory, so I set it fairly small.

[root@server1 yum.repos.d]# vim /etc/elasticsearch/jvm.options 

-Xms256M
-Xmx256M	# adjust to your available memory; increase it if the VM has more RAM
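As a rough rule of thumb (a sketch of common guidance, not an official sizing tool): heap is often set to about half of physical RAM and kept well below 32 GB so compressed object pointers stay enabled. A small helper that prints a suggested value:

```shell
# Suggest -Xms/-Xmx as ~50% of physical RAM, capped under the
# compressed-oops threshold (both numbers are conventions, not hard rules).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_mb=$(( mem_kb / 2 / 1024 ))
max_mb=26624                              # ~26 GB cap
if [ "$heap_mb" -gt "$max_mb" ]; then heap_mb=$max_mb; fi
echo "-Xms${heap_mb}M -Xmx${heap_mb}M"
```

Elastic also recommends keeping -Xms and -Xmx identical, as this tutorial's 256M/256M pair already does, to avoid heap resizing pauses.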

Configure the basic Elasticsearch settings

[root@server1 yum.repos.d]# vim /etc/elasticsearch/elasticsearch.yml 
node.name: es		# name this node "es"
path.data: /var/lib/elasticsearch	# data directory
#
# Path to log files:
#
path.logs: /var/log/elasticsearch	# log directory
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
network.host: 192.168.56.132	# IP address to bind to
http.port: 9200		# listen on port 9200

Start Elasticsearch

[root@server1 yum.repos.d]# systemctl start elasticsearch
[root@server1 yum.repos.d]# ps -ef |grep elasticsearch
elastic+   8994      1 48 21:31 ?        00:00:36 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms256M -Xmx256M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-6650605205529676914 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -XX:MaxDirectMemorySize=134217728 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+   9080   8994  0 21:32 ?        00:00:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root       9144   2808  0 21:33 pts/0    00:00:00 grep --color=auto elasticsearch

Visit the server's IP on port 9200 to check whether Elasticsearch is up.
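Before opening a browser, you can check the port from the shell. A quick sketch using bash's built-in /dev/tcp (the host below is a placeholder; substitute the address you set as network.host):

```shell
# Probe a TCP port without curl: bash opens /dev/tcp/<host>/<port>,
# and the subshell fails if nothing is listening there.
es_host=127.0.0.1   # placeholder; use your network.host value
es_port=9200
if (exec 3<>"/dev/tcp/${es_host}/${es_port}") 2>/dev/null; then
  status=open
else
  status=closed
fi
echo "port ${es_port} on ${es_host} is ${status}"
```

If the port is open, `curl http://<host>:9200` should then return Elasticsearch's JSON banner.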


Install Logstash
[root@server1 yum.repos.d]# yum install logstash -y
[root@server1 yum.repos.d]# rpm -qa |grep logstash
logstash-7.5.0-1.noarch

Modify the Logstash configuration

Set the Logstash startup heap size.

[root@server1 yum.repos.d]# vim /etc/logstash/jvm.options 
-Xms256M
-Xmx256M	# size this according to your VM's memory

Modify the basic Logstash settings

[root@server1 yum.repos.d]# vim /etc/logstash/logstash.yml 	# like elasticsearch.yml, this file specifies log paths and other settings; the defaults are fine here

Start Logstash

[root@server1 yum.repos.d]# systemctl start logstash
[root@server1 yum.repos.d]# ps -ef |grep logstash
logstash  10478      1 85 22:47 ?        00:00:09 /bin/java -Xms256M -Xmx256M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -Djruby.regexp.interruptible=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Dlog4j2.isThreadContextMapInheritable=true -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.11.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.9.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.9.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.9.3.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.9.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.11.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.24.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.8.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.11.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.11.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.11.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-c
ore/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/reflections-0.9.11.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash
Install Kibana

server1 has run out of disk space, so Kibana has to go on server2.
Here it is installed from the .tar.gz package.

Download the Kibana tar package and extract it into /usr/local:

tar -zxvf /root/Desktop/kibana-7.5.0-linux-x86_64.tar.gz -C /usr/local

Edit the configuration file

[root@server2 kibana-7.5.0-linux-x86_64]# vim config/kibana.yml 
 
server.port: 5601	# Kibana port
server.host: "0.0.0.0"	# Kibana listen address; "localhost" would only allow access from this machine
##elasticsearch.url: "http://192.168.56.132:9200"
elasticsearch.hosts: "http://192.168.56.132:9200"	# URL of the Elasticsearch instance
#kibana.index: ".kibana"
logging.dest: /root/Desktop/kibana.log	# where Kibana writes its log

Start Kibana (by default it refuses to run as the root user; to run it as root, add --allow-root):

[root@server2 kibana-7.5.0-linux-x86_64]# bin/kibana --allow-root

Log in to the Kibana page (the Kibana host's IP, port 5601).
My Kibana UI was missing the Dashboard option and others. After a long, fruitless search online, and just as I was about to give up, I noticed that the / filesystem on server1 was full, which turned out to be the cause. I expanded server1's disk; if you also need to grow a disk, see: https://blog.csdn.net/lucky_ykcul/article/details/105847785

Restart Kibana

[root@server2 kibana-7.5.0-linux-x86_64]# bin/kibana --allow-root

Visit Kibana again; it now works correctly.
The ELK platform is now in place. Next, feed logs into it.

Monitoring Logs with ELK

Configure Logstash
[root@server1 ~]# vim /etc/logstash/conf.d/system.conf

input {
  file {
    path => "/var/log/messages"		# log file to watch
    type => "systemlog"			# event type tag
    start_position => "beginning"	# where Logstash starts reading the file
    stat_interval => "2"		# how often (in seconds) Logstash checks the watched file for updates; the default is 1
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.132:9200"]	# Elasticsearch address
    index => "logstash-systemlog-%{+YYYY.MM.dd}"	# index name
  }
}
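The %{+YYYY.MM.dd} part of the index name is a sprintf-style date reference: Logstash fills it in from each event's @timestamp, so a fresh index is created per day. What today's index name looks like (computed with date(1) here purely to illustrate the pattern; Logstash itself uses the event timestamp, not the wall clock):

```shell
# The Joda-style YYYY.MM.dd in the config corresponds to %Y.%m.%d in date(1).
index="logstash-systemlog-$(date +%Y.%m.%d)"
echo "$index"
```

Daily indices make retention easy: old days can be dropped by deleting whole indices instead of individual documents.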

Check the configuration file for syntax errors:

[root@server1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2020-05-12 22:20:18.858 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2020-05-12 22:20:19.068 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2020-05-12 22:20:21.454 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-05-12 22:20:25.884 [LogStash::Runner] Reflections - Reflections took 108 ms to scan 1 urls, producing 20 keys and 40 values 
Configuration OK	# the configuration is valid

Restart Logstash

[root@server1 ~]# systemctl restart logstash
Create an index pattern in Kibana

Management → Kibana → Index Patterns
To create the index pattern: Index Patterns → Create index pattern
Enter the name of the index pattern you want to create.
Select the timestamp field as the time filter.
The index pattern is now created, and its index information can be seen.
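The same index pattern can also be created non-interactively through Kibana's saved-objects API (available in Kibana 7.x; the host below is this tutorial's, and the request is shown as a dry run that only prints the command, so check it before executing for real):

```shell
# Build (but do not send) a saved-objects API request that creates the
# index pattern; Kibana requires the kbn-xsrf header on write requests.
kibana_url="http://192.168.56.132:5601"   # adjust to your Kibana host
payload='{"attributes":{"title":"logstash-systemlog-*","timeFieldName":"@timestamp"}}'
cmd="curl -XPOST '${kibana_url}/api/saved_objects/index-pattern' -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '${payload}'"
echo "$cmd"   # dry run: copy-paste the printed command to execute it
```

Scripting this is handy when rebuilding the stack, since index patterns otherwise have to be re-clicked through the UI each time.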

Click Dashboard and search for the index pattern you just created.

You can see that data has already arrived.

This completes log monitoring with ELK.

Summary

ELK is extremely powerful: it turns complex, unordered logs into a clear, friendly display that helps IT staff locate problems quickly. Because resources were limited for this build, the architecture is simple and uses relatively few components. In production, Elasticsearch and Logstash are often deployed as clusters for better performance, and components such as Filebeat and Redis are commonly added to improve efficiency. If you are interested, go try it yourself!
