ELK 5.5 Logging System: RPM Deployment Guide (Single-Node Server)

Overview
  • ELK consists of three components: Elasticsearch, Logstash, and Kibana.
  • Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
  • Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
  • Kibana is a free, open-source tool that provides a friendly web interface for the logs held by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
  • Filebeat is a log shipper. Once installed on a server, it monitors the configured log directories or files, tails them continuously (tracking changes and reading new lines), and forwards the data to Elasticsearch or Logstash.

Package Preparation

Environment: CentOS 7; server IP: 10.168.11.10
logstash 5.5 download link
kibana 5.5 download link
JDK 8 download link
filebeat 5.5 download link
elasticsearch 5.5 download link

Note:
The packages are assumed to be placed in /usr/local/src.
The ELK server needs logstash | kibana | jdk | elasticsearch, but not filebeat.
Clients only need filebeat.


Server-Side Installation
Installing and configuring Java
cd /usr/local/src
rpm -ivh jdk-8_65-linux-x64.rpm  #install the Java RPM package
vim /etc/profile  #add the Java environment variables by appending the following at the end of the file
JAVA_HOME=/usr/java/jdk1.8.0_65
JRE_HOME=/usr/java/jdk1.8.0_65/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
source /etc/profile  #apply the environment variables immediately
Installing and configuring elasticsearch
cd /usr/local/src
rpm -ivh elasticsearch-5.5.0.rpm  #install the RPM package
systemctl enable elasticsearch  #start on boot
systemctl start elasticsearch  #start elasticsearch; netstat -lntp should now show listeners on ports 9200 and 9300

Test:
Visit http://10.168.11.10:9200

Note: the configuration file is /etc/elasticsearch/elasticsearch.yml; edit the following settings

network.host: 10.168.11.10  #the server's IP address
http.port: 9200  #the port elasticsearch listens on
bootstrap.system_call_filter: false  #system call filter; disabling this check is recommended, as it often causes startup errors

Note: edit /etc/elasticsearch/jvm.options as follows, otherwise elasticsearch is prone to out-of-memory problems

-Xms2g  #set to 50% of the server's physical memory, e.g. 8g on a 16 GB machine
-Xmx2g  #set to 50% of the server's physical memory, e.g. 8g on a 16 GB machine
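The 50% rule above can be sketched as a quick calculation. The helper below is a hypothetical illustration, not part of any ELK tooling; the 31 GB cap reflects common JVM guidance to stay below ~32 GB so compressed object pointers remain enabled.

```python
# Hypothetical helper: derive the -Xms/-Xmx value for jvm.options
# from total physical RAM, following the 50%-of-memory rule above.
def heap_size_gb(total_ram_gb: int) -> int:
    """Return the JVM heap size in GB."""
    heap = total_ram_gb // 2   # 50% of physical memory
    return min(heap, 31)       # common guidance: stay under ~32g for compressed oops

print(heap_size_gb(16))  # -> 8, i.e. -Xms8g / -Xmx8g
print(heap_size_gb(4))   # -> 2, matching the shipped -Xms2g/-Xmx2g default
```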
Installing and configuring logstash
cd /usr/local/src
rpm -ivh logstash-5.5.0.rpm  #install the RPM package
systemctl enable logstash  #start logstash on boot
systemctl start logstash  #start logstash; netstat -lntp should now show listeners on ports 9600 and 5044

Note: the configuration file is /etc/logstash/logstash.yml; usually nothing needs changing here, and the three main settings are:

 1. path.data: /var/lib/logstash  #where logstash stores its data
 2. path.config: /etc/logstash/conf.d  #directory from which logstash reads your custom log-collection pipeline files
 3. path.logs: /var/log/logstash  #where logstash writes its own logs
Installing and configuring kibana
cd /usr/local/src
rpm -ivh kibana-5.5.0-x86_64.rpm  #install the RPM package
systemctl enable kibana  #start kibana on boot
systemctl start kibana  #start kibana

Note: the configuration file is /etc/kibana/kibana.yml; edit the following settings

#server.port: 80  #commented out by default; the web port defaults to 5601 and can be changed to 80 here
server.host: "10.168.11.10"  #set to the server's IP address
elasticsearch.url: "http://10.168.11.10:9200"  #the elasticsearch address

Test:
Visit http://10.168.11.10:5601

  • The server side is now fully deployed. Add firewall rules as needed; you can also forward port 80 to kibana's port 5601 through the firewall:
firewall-cmd --add-forward-port=port=80:proto=tcp:toport=5601 --permanent
firewall-cmd --reload

Client-Side Installation
Installing and configuring filebeat
cd /usr/local/src
rpm -ivh filebeat-5.5.0-x86_64.rpm  #install the RPM package
systemctl enable filebeat  #start on boot
systemctl start filebeat  #start filebeat

Note: the client only ships the collected log lines to logstash on the server, so filebeat is the only component it needs.


Writing the Log Collection Configuration Files
Client-side filebeat configuration example

(edit /etc/filebeat/filebeat.yml)

filebeat:
  prospectors:
    -
      input_type: log
      paths:
        - "/usr/local/tomcat/logs/localhost_*.txt"  #path(s) of the logs to collect
      fields:
        tag: 11_18-ycwb-wcp-tomcatlog  #custom tag for this log (matched against the elasticsearch index name)

    -
      paths:
        - "/var/log/messages*"
      fields:
        tag: 11_18-ycwb-wcp-messageslog

    -
      paths:
        - "/var/log/secure*"
      fields:
        tag: 11_18-ycwb-wcp-securelog

    -
      paths:
        - "/var/log/cron*"
      fields:
        tag: 11_18-ycwb-wcp-cronlog

    -
      paths:
        - "/var/log/boot.log"
      fields:
        tag: 11_18-ycwb-wcp-bootlog

output:
  logstash:
    hosts: ["10.168.11.10:5044"]  #send output to the logstash service on 10.168.11.10, port 5044
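Each filebeat event carries the custom fields defined under `fields:`, and the server-side logstash pipeline routes on them. The following is a hypothetical sketch (simplified dicts, not the real beats wire format) of how the `fields.tag` value set here is matched by logstash's `if [fields][tag] == "..."` conditionals to pick an index:

```python
from typing import Optional

# The tags configured above (assumed set, mirroring this document's examples).
KNOWN_TAGS = {
    "11_18-ycwb-wcp-tomcatlog",
    "11_18-ycwb-wcp-messageslog",
    "11_18-ycwb-wcp-securelog",
    "11_18-ycwb-wcp-cronlog",
    "11_18-ycwb-wcp-bootlog",
}

def route_event(event: dict) -> Optional[str]:
    """Return the elasticsearch index name an event would be routed to."""
    tag = event.get("fields", {}).get("tag")
    return tag if tag in KNOWN_TAGS else None

event = {"message": "a raw log line", "fields": {"tag": "11_18-ycwb-wcp-securelog"}}
print(route_event(event))  # -> 11_18-ycwb-wcp-securelog
```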
Server-side logstash configuration example

(edit /etc/logstash/conf.d/logstash.conf)

input {
  beats {
    port => "5044"  #port for filebeat communication
  }
}

output {
  if [fields][tag] == "11_18-ycwb-wcp-tomcatlog" {  #this tag must match the tag name set in filebeat
    elasticsearch {
      hosts => "10.168.11.10:9200"  #the elasticsearch address
      index => "11_18-ycwb-wcp-tomcatlog"  #index name to create, matching the filebeat tag
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-messageslog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-messageslog-%{+YYYY.MM.dd}"  #this format appends the creation date to the index name, so a new index is created each day
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-securelog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-securelog"
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-cronlog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-cronlog"
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-bootlog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-bootlog"
    }
  }
}
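The `%{+YYYY.MM.dd}` suffix in the messageslog output produces one index per day. A quick sketch of the resulting names, using Python's strftime purely as an illustration (logstash itself renders Joda-style date patterns from the event's timestamp):

```python
from datetime import date

def daily_index(base: str, day: date) -> str:
    # logstash's %{+YYYY.MM.dd} renders the event date as e.g. 2017.08.01
    return "%s-%s" % (base, day.strftime("%Y.%m.%d"))

print(daily_index("11_18-ycwb-wcp-messageslog", date(2017, 8, 1)))
# -> 11_18-ycwb-wcp-messageslog-2017.08.01
```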
Custom Log Parsing Rules

Note: with the configuration above, everything arrives in kibana in the default raw format. Use a filter with the grok plugin to parse and structure the output, e.g. splitting each part of an nginx log line into its own field, or adding IP geolocation. This requires logstash's grok and geoip plugins, installed via the logstash-plugin script under logstash's bin directory:

logstash/bin/logstash-plugin install logstash-filter-geoip  #install the geoip plugin
logstash/bin/logstash-plugin install logstash-filter-grok  #install the grok plugin

Once installed, you can create a test pipeline to verify your field extraction. Below is a test file for tomcat logs:

input { stdin { } }  #read test log lines from the local console

filter {
    grok {
        match => { "message" => "%{IPORHOST:clientip} - - %{NOTSPACE:LogTime} %{NOTSPACE:timezone} %{NOTSPACE:Method} %{NOTSPACE:Path} %{NOTSPACE:httpversion} %{NOTSPACE:ReturnValue} %{NOTSPACE:Value}" }  #log parsing rule; see the official documentation for details, as the syntax is somewhat involved
    }
    geoip {
        source => "clientip"
    }
}

output {
    stdout { codec => rubydebug }  #print the parsed result to the console
}
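A rough Python equivalent of the grok rule above may make the captures easier to follow. This is an illustration only: the sample log line is made up, `\S+` stands in for grok's NOTSPACE, and `clientip` is also simplified to `\S+` rather than the full IPORHOST pattern.

```python
import re

# Named groups mirror the field names in the grok rule above.
pattern = re.compile(
    r'(?P<clientip>\S+) - - (?P<LogTime>\S+) (?P<timezone>\S+) '
    r'(?P<Method>\S+) (?P<Path>\S+) (?P<httpversion>\S+) '
    r'(?P<ReturnValue>\S+) (?P<Value>\S+)'
)

# Hypothetical tomcat access-log line; the quoted request is still split on
# spaces, which is why the grok rule uses three separate NOTSPACE captures.
line = '192.168.1.20 - - [01/Aug/2017:10:15:32 +0800] "GET /index.jsp HTTP/1.1" 200 1024'
m = pattern.match(line)
print(m.group("clientip"))     # -> 192.168.1.20
print(m.group("Path"))         # -> /index.jsp
print(m.group("ReturnValue"))  # -> 200
```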

A simple parsing-rule example for a tomcat project
input {
  beats {
    port => "5044"
  }
}

filter {
    grok {
	match => { "message" => "%{IPORHOST:clientip} - - %{NOTSPACE:LogTime} %{NOTSPACE:timezone} %{NOTSPACE:Method} %{NOTSPACE:Path} %{NOTSPACE:httpversion} %{NOTSPACE:ReturnValue} %{NOTSPACE:Value}" }
    }
    geoip {
        source => "clientip"
    }
}

output {
	if [fields][tag] == "11_18-ycwb-wcp-tomcatlog"{
		elasticsearch {
			hosts => "10.168.11.10:9200"
			index => "11_18-ycwb-wcp-tomcatlog"
		}
	}
}
Notes on the number of fields logstash parses

Parsing latency grows noticeably once the number of extracted fields passes a threshold. Measured results:
Up to 22 fields: latency under 1 second
23 fields: latency under 2 seconds
24 fields: latency around 2 seconds
25 fields: latency around 3 seconds
26 fields: latency around 5-6 seconds
More than 26 fields: latency of 30 seconds or more

Key grok parsing patterns (important)
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}
# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}
# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
# Shortcuts
QS %{QUOTEDSTRING}
# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
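Most of the base patterns above are plain regular expressions and can be tried directly outside logstash. For example, the IPV4 pattern (copied verbatim from the table, including its lookbehind/lookahead guards) works as-is in Python's re module:

```python
import re

# The IPV4 pattern from the table above, split across lines for readability.
IPV4 = (r'(?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.]'
        r'(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.]'
        r'(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.]'
        r'(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])')

print(bool(re.search(IPV4, "client 10.168.11.10 connected")))  # -> True
print(bool(re.search(IPV4, "version 999.1.2.3")))              # -> False (999 is not a valid octet)
```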
