Date: 2016-09-19
Author: mtsbv110
Deployment environment
OS: CentOS Linux release 7.2.1511 (Core) (check with: cat /etc/redhat-release)
MySQL version: mysql-5.6.26.tar.gz
User: root
System IP: 192.168.9.66
Hostname: jetsen-analysis-01
Specs: 8 cores, 8 GB RAM (check with: cat /proc/cpuinfo | grep "name" | cut -f2 -d: | uniq -c and grep MemTotal /proc/meminfo)
Elasticsearch download: https://www.elastic.co/downloads/elasticsearch
Logstash download: https://www.elastic.co/downloads/logstash
Kibana download: https://www.elastic.co/downloads/kibana
1. Setting up Elasticsearch
1.1. JDK environment variable configuration (JDK 1.7 or later)
JAVA_HOME=/usr/local/jdk1.7.0_79
PATH=$JAVA_HOME/bin:/usr/local/mysql/bin:/usr/local/mysql/lib:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH
Elastic recommends installing Java 8 update 20 or later, or Java 7 update 55 or later.
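To make the variables above take effect, they are typically appended to /etc/profile (or a user's ~/.bash_profile) and re-sourced. A minimal sketch, written to a temporary file here so nothing system-wide is touched; /usr/local/jdk1.7.0_79 is the install path assumed in this guide:

```shell
# Write the JDK variables to a profile snippet and source it.
# /usr/local/jdk1.7.0_79 is the install path assumed above.
cat > /tmp/jdk-env.sh <<'EOF'
JAVA_HOME=/usr/local/jdk1.7.0_79
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH
EOF
. /tmp/jdk-env.sh

# With the JDK actually installed, java -version should now report 1.7.0_79.
echo "JAVA_HOME=$JAVA_HOME"
```

In practice the same four lines would go at the end of /etc/profile, followed by `source /etc/profile`.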
1.2. Create a dedicated user (Elasticsearch should not be run as root)
[root@jetsen-analysis-01 src]# useradd elk
[root@jetsen-analysis-01 src]# passwd elk
Changing password for user elk.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
On Ubuntu, useradd and adduser are two different commands; on CentOS they are effectively the same command, with adduser existing as a symlink to useradd.
1.3. Elasticsearch configuration
https://www.elastic.co/guide/en/elasticsearch/reference/2.3/setup.html
[root@jetsen-analysis-01 src]# cp elasticsearch-2.3.0.tar.gz ../
[root@jetsen-analysis-01 local]# tar -zxvf elasticsearch-2.3.0.tar.gz
[root@jetsen-analysis-01 local]# cd elasticsearch-2.3.0/
[root@jetsen-analysis-01 elasticsearch-2.3.0]# ls
bin config lib LICENSE.txt modules NOTICE.txt README.textile
[root@jetsen-analysis-01 elasticsearch-2.3.0]# cd config/
[root@jetsen-analysis-01 config]# ls
elasticsearch.yml logging.yml
[root@jetsen-analysis-01 config]# vim elasticsearch.yml
Set network.host to the machine's IP. Watch the leading whitespace on this line: YAML is indentation-sensitive, and a space is required after the colon.
network.host: 192.168.9.66
1.4. Running Elasticsearch as a non-root user
[root@jetsen-analysis-01 local]# chmod -R 777 elasticsearch-2.3.0
(A tighter alternative to the blanket 777 is chown -R elk:elk elasticsearch-2.3.0.)
[elk@jetsen-analysis-01 elasticsearch-2.3.0]$ /usr/local/elasticsearch-2.3.0/bin/elasticsearch -d
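With -d the process daemonizes, so the prompt returns immediately. A quick way to confirm it actually came up is to look for the process; this sketch only reports either state rather than assuming the service is running:

```shell
# Check whether an elasticsearch process is present.
if ps -ef | grep -v grep | grep -q elasticsearch; then
  ES_PROC=running
else
  ES_PROC=stopped
fi
echo "elasticsearch process: $ES_PROC"
```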
1.5. Elasticsearch directory layout
bin: scripts for running an ES instance and managing plugins
config: configuration files, such as elasticsearch.yml and logging.yml
lib: the bundled library jars
plugins: installed plugin files
logs: log files
data: where index data is stored
1.6. Verify the installation
http://192.168.9.66:9200/
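Opening that URL should return a small JSON banner with the cluster name and version. The same check can be done from the command line; 192.168.9.66 is the host configured above, and the script only reports reachability rather than assuming the node is up:

```shell
ES_URL="http://192.168.9.66:9200"
# A healthy node answers with JSON containing "cluster_name".
if curl -s --connect-timeout 3 "$ES_URL" | grep -q '"cluster_name"'; then
  ES_STATUS=up
else
  ES_STATUS=unreachable
fi
echo "elasticsearch is $ES_STATUS"
```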
1.7. Installing the head plugin online
elasticsearch/bin/plugin install mobz/elasticsearch-head
http://192.168.9.66:9200/_plugin/head/
2. Setting up Logstash
https://www.elastic.co/guide/en/logstash/2.3/config-examples.html
2.1. Unpack the Logstash tarball
[root@jetsen-analysis-01 local]# tar -zxvf logstash-2.3.0.tar.gz
2.2. conf-logstash and logstash-patterns configuration
Reference links:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/java
https://www.elastic.co/guide/en/logstash/current/config-examples.html
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
https://www.elastic.co/guide/en/logstash/current/output-plugins.html
https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
https://www.elastic.co/guide/en/logstash/current/codec-plugins.html
tomcat-log-analysis.conf is configured as follows (note: log_ip corrected to this machine's IP, 192.168.9.66):
input {
  file {
    codec => multiline {
      #pattern => "^\s"
      pattern => "(^.+[^\[INFO\]]Exception:.+)|(^.+\[ERROR\].+)|(^[a-zA-Z])|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
      what => "previous"
    }
    path => "/usr/local/analysis-tomcat/logs/catalina.out"
    start_position => "beginning"
  }
}
filter {
  if "ERROR" in [message] {
    # if the message contains ERROR, replace type with a custom tag
    mutate { replace => { type => "tomcat_catalina_error" } }
  } else if "WARN" in [message] {
    mutate { replace => { type => "tomcat_catalina_warn" } }
  } else {
    mutate { replace => { type => "tomcat_catalina_info" } }
  }
  grok {
    patterns_dir => "/opt/elk/logstash-patterns"
    #match => { "message" => "%{MYLOG}" }
    match => [
      "message", "\[%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:%{MINUTE}:(%{SECOND})\]\[(?<thread_name>.+?)\]\[(?<log_level>\w+)\]\s*(?<content>.*)",
      "message", "%{TIMESTAMP_ISO8601:date} \[(?<thread_name>.+?)\]-\[(?<log_level>\w+)\]\s*(?<content>.*)"
    ]
    add_field => [ "log_ip", "192.168.9.66" ]
  }
}
output {
  elasticsearch {
    hosts => ["192.168.9.66"]
    index => "analysis_tomcat"
  }
}
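The multiline codec's pattern decides which physical lines get glued onto the previous event (stack-trace continuations, "Caused by:" lines, and so on). The regex can be sanity-checked against sample lines with grep -E; in this sketch \s and \d are rewritten as POSIX classes and the three dots are escaped, since plain grep does not take Oniguruma syntax:

```shell
# The codec's pattern, with \s -> [[:space:]], \d -> [0-9], and "..." escaped.
PATTERN='(^.+[^\[INFO\]]Exception:.+)|(^.+\[ERROR\].+)|(^[a-zA-Z])|(^[[:space:]]+at .+)|(^[[:space:]]+\.\.\. [0-9]+ more)|(^[[:space:]]*Caused by:.+)'

matches() { printf '%s\n' "$1" | grep -qE "$PATTERN" && echo yes || echo no; }

matches '    at com.example.Dao.query(Dao.java:42)'   # stack-trace continuation
matches 'Caused by: java.lang.NullPointerException'   # chained-exception line
matches '[09-19 10:00:01][main][INFO] started'        # fresh timestamped event
```

The first two lines are folded into the previous event ("previous" in the codec); the bracketed timestamp line starts a new one.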
Filters process Logstash events against match conditions; the main plugins are:
grok: parses arbitrary text and gives it structure
mutate: adds, removes, renames, replaces, and otherwise modifies event fields
Outputs are the final stage of the Logstash pipeline; one event can go to several outputs. Commonly used plugins:
elasticsearch: writes event data to Elasticsearch
file: writes event data to a file on disk
Codecs are stream filters that can be attached to an input or an output; the main ones are plain and json.
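The ERROR/WARN conditionals in the filter above amount to a three-way classification of the message text before mutate rewrites the type field. Sketched in shell (the classify helper is purely illustrative, not part of Logstash):

```shell
# Mirror of the filter's if / else if / else: ERROR wins over WARN,
# and anything else falls through to the info type.
classify() {
  case "$1" in
    *ERROR*) echo tomcat_catalina_error ;;
    *WARN*)  echo tomcat_catalina_warn ;;
    *)       echo tomcat_catalina_info ;;
  esac
}

classify '2016-09-19 10:00:01 [main]-[ERROR] query failed'
classify '2016-09-19 10:00:02 [main]-[WARN] slow query'
classify '2016-09-19 10:00:03 [main]-[INFO] started'
```

The branch order matters: a message containing both strings is tagged as an error, matching the filter's first-hit semantics.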
2.3. Starting Logstash
[root@jetsen-analysis-01 conf-logstash]# /usr/local/logstash-2.3.0/bin/logstash -f ../conf-logstash/tomcat-log-analysis.conf
The -e flag takes the pipeline configuration directly on the command line instead of pointing at a config file with -f, which is handy for quick tests.
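For example, a stdin-to-stdout pipeline makes a convenient smoke test; the actual invocation is commented out below because it needs the install from section 2.1:

```shell
# Inline pipeline config for a quick -e smoke test.
CONF='input { stdin { } } output { stdout { } }'
# echo "hello elk" | /usr/local/logstash-2.3.0/bin/logstash -e "$CONF"
echo "$CONF"
```

Typing a line and seeing it echoed back as an event confirms the install works before debugging the real config.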
Use the head plugin to verify that Logstash is shipping the Tomcat logs:
http://192.168.9.66:9200/_plugin/head/
Use the head plugin to query the 50 most recent error entries, sorted by time descending:
http://192.168.9.66:9200/analysis_tomcat/tomcat_catalina_error/_search
{
"fields": [
"_parent",
"_source"
],
"query": {
"bool": {
"must": [],
"must_not": [],
"should": [
{
"match_all": {}
}
]
}
},
"from": 0,
"size": 50,
"sort": {
"@timestamp": {
"order": "desc"
}
},
"aggs": {},
"version": true
}
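The same query can be issued from the command line. In this sketch the body is the query above with the empty must/must_not/aggs clauses trimmed; it is validated locally, and the curl call is commented out because it needs the running cluster:

```shell
# Request body for the "latest 50 errors" search shown above.
BODY='{
  "fields": ["_parent", "_source"],
  "query": { "bool": { "should": [ { "match_all": {} } ] } },
  "from": 0,
  "size": 50,
  "sort": { "@timestamp": { "order": "desc" } },
  "version": true
}'
# Against the live index from section 2:
# curl -s -XPOST "http://192.168.9.66:9200/analysis_tomcat/tomcat_catalina_error/_search" -d "$BODY"
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body is valid JSON"
```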
3. Setting up Kibana
3.1. Unpack the Kibana tarball
[root@jetsen-analysis-01 local]# tar -zxvf kibana-4.5.0-linux-x64.tar.gz
3.2. Editing kibana.yml
[root@jetsen-analysis-01 local]# cd /usr/local/kibana-4.5.0-linux-x64/config
In kibana.yml, point Kibana at the Elasticsearch instance:
elasticsearch.url: "http://192.168.9.66:9200"
3.3. Starting Kibana
[root@jetsen-analysis-01 bin]# ./kibana &
Open http://192.168.9.66:5601/ in a browser.
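As with Elasticsearch, reachability can be checked from a shell before opening the browser; the script reports either state rather than assuming the service is up:

```shell
KIBANA_URL="http://192.168.9.66:5601"
# Kibana answers on 5601 once it has connected to Elasticsearch.
if curl -s --connect-timeout 3 -o /dev/null "$KIBANA_URL"; then
  KIBANA_STATUS=up
else
  KIBANA_STATUS=unreachable
fi
echo "kibana is $KIBANA_STATUS"
```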