Installing the ELK Log Analysis System, Single Node (Part 1)
(CentOS 6.5)
I. Introduction
1. Core components
ELK consists of three components: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
Kibana is a free, open-source tool that provides a friendly web interface for the logs that Logstash and Elasticsearch process, helping you aggregate, analyze, and search important log data.
II. Installing Logstash 5.3.0
1. Install JDK 1.8.0_x
Logstash requires a Java runtime environment.
# yum -y install java-1.8.0
# java -version
openjdk version "1.8.0_141"
OpenJDK Runtime Environment (build 1.8.0_141-b16)
OpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)

If you install the JDK from a tarball instead, configure the environment variables in /etc/profile (set once, effective for all users):

JAVA_HOME=/usr/local/jdk1.8.0_144
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH

# source /etc/profile    # apply the changes to the current shell
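Before moving on, it is worth confirming that a JRE is actually visible to the shell; a minimal check (the fallback message is only illustrative):

```shell
# Print the Java version if available, otherwise hint at the fix above.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH; install it or source /etc/profile"
fi
```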
2. Download Logstash 5.3.0 from https://www.elastic.co/cn/downloads
3. Extract Logstash and move it to /usr/local
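The extract-and-move step can be sketched as follows. To keep the sketch runnable without the real download, it rehearses the commands in a scratch directory with a stand-in archive; with the real tarball, only the tar and mv lines apply, against /usr/local:

```shell
# Build a stand-in logstash-5.3.0.tar.gz, then extract it and move the
# directory into a stand-in for /usr/local.
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p logstash-5.3.0/bin
tar -zcf logstash-5.3.0.tar.gz logstash-5.3.0
rm -r logstash-5.3.0               # keep only the archive, as after a download
mkdir -p local                     # stand-in for /usr/local
tar -zxf logstash-5.3.0.tar.gz    # the real step: extract...
mv logstash-5.3.0 local/           # ...and move under /usr/local
ls local/
```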
III. Installing Elasticsearch 5.3.0
Installing the head plugin changed with version 5.0. In earlier versions it was a single command:

# elasticsearch/bin/plugin -install mobz/elasticsearch-head

Since 5.0, however, head runs as a standalone grunt service and must be packaged and installed with npm, which is somewhat more involved. The following walks through installing Elasticsearch and head.
1. Download Elasticsearch 5.3.0
2. Extract, move it to /usr/local, and create the data and log directories:

# mkdir -p /usr/local/elasticsearch-5.3.0/{data,logs}
3. Edit the Elasticsearch configuration file /usr/local/elasticsearch-5.3.0/config/elasticsearch.yml:
# note: every colon must be followed by a space
cluster.name: es_cluster
node.name: node-1
network.host: 192.168.32.44
http.port: 9200
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
path.data: /usr/local/elasticsearch-5.3.0/data
path.logs: /usr/local/elasticsearch-5.3.0/logs
http.cors.enabled: true
http.cors.allow-origin: "/.*/"

The last two settings enable cross-origin access; without them, the head plugin and Kibana installed later cannot reach the Elasticsearch query API.
4. Reduce the JVM heap; Elasticsearch allocates 2 GB by default.

# vim /usr/local/elasticsearch-5.3.0/config/jvm.options

Change

-Xms2g
-Xmx2g

to

-Xms512m
-Xmx512m

Otherwise, on a machine with little memory, startup fails with:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008a660000, 1973026816, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1973026816 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/elasticsearch-5.3.0/hs_err_pid11986.log
5、添加独立用户,并启动服务
#groupadd elsearch # useradd elsearch -g elsearch # chown -R elsearch:elsearch /usr/local/elasticsearch # su elsearch # cd /usr/local/elasticsearch/bin # ./elasticsearch
6. Start the service as a daemon:

# /usr/local/elasticsearch-5.3.0/bin/elasticsearch -d    # -d detaches the process, so the terminal is not occupied
# netstat -antpu    # check that the service is listening

Running it in the foreground instead surfaces startup errors directly:

# /usr/local/elasticsearch-5.3.0/bin/elasticsearch
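Once the daemon is up, a quick smoke test against the HTTP port confirms the node answers. The address is taken from the elasticsearch.yml above; the fallback message is only illustrative:

```shell
# Query cluster health; fall back to a message when the node is unreachable.
ES=http://192.168.32.44:9200
curl -s --max-time 3 "$ES/_cluster/health?pretty" || echo "Elasticsearch not reachable at $ES"
```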
Possible error (1)
# /usr/local/elasticsearch-5.3.0/bin/elasticsearch
ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
max number of threads [1024] for user [elsearch] likely too low, increase to at least [2048]
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

Fix: raise the limits.

# vi /etc/security/limits.conf    # permanent
elsearch soft nofile 65536
elsearch hard nofile 65536
elsearch soft nproc 2048
elsearch hard nproc 2048
# ulimit -n 65536    # temporary; lost after reboot
# ulimit -u 2048
# ulimit -a
# vi /etc/sysctl.conf    # permanent
vm.max_map_count=262144
# sysctl -w vm.max_map_count=262144    # temporary
# sysctl -a | grep "vm.max_map_count"
If the thread-count error persists:

max number of threads [1024] for user [work] likely too low, increase to at least [2048]

edit /etc/security/limits.d/90-nproc.conf and change

* soft nproc 1024

to

* soft nproc 2048
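After editing the limits files, a re-login as the elsearch user should show the new values; a quick way to verify them (the thresholds in the comments match the bootstrap checks above):

```shell
# Each value must meet or exceed its threshold, or the bootstrap checks fail again.
ulimit -n                         # open files, want >= 65536
ulimit -u                         # max user processes, want >= 2048
cat /proc/sys/vm/max_map_count    # want >= 262144
```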
Possible error (2)
Elasticsearch stops allocating replicas, or aborts, with disk-watermark warnings:
[2017-09-04T15:10:01,029][INFO ][o.e.c.r.a.DiskThresholdMonitor] [t104] low disk watermark [85%] exceeded on [grRFjvA_SjqjeRbvz5N0bw][t104] [/opt/elk/elasticsearch-5.1.1/bin/./data/nodes/0] free: 9.5gb[12.4%], replicas will not be assigned to this node
[2017-09-04T15:10:31,146][INFO ][o.e.c.r.a.DiskThresholdMonitor] [t104] rerouting shards: [one or more nodes has gone under the high or low watermark]
Fix: add the following lines to elasticsearch.yml (the first disables the disk threshold check entirely; alternatively, uncomment the other two and set explicit watermarks):
cluster.routing.allocation.disk.threshold_enabled: false
#cluster.routing.allocation.disk.watermark.low: 30gb
#cluster.routing.allocation.disk.watermark.high: 20gb
Installing the head plugin
For the detailed installation steps, see: http://www.cnblogs.com/xiaofei1205/p/6704239.html
IV. Installing Kibana 5.3.0
1. Download Kibana 5.3.0
2. Extract and move it to /usr/local
3. Edit the configuration file kibana-5.3.0-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"

elasticsearch.url is the HTTP address of the Elasticsearch node; with the elasticsearch.yml above, which binds to 192.168.32.44, use http://192.168.32.44:9200 instead of localhost. Kibana's own server listens on port 5601 by default.
4. Start Kibana

# ./bin/kibana &    # run it in the background
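Kibana exposes a status endpoint that serves as a smoke test once it is started (port taken from the kibana.yml above; the fallback message is only illustrative):

```shell
# /api/status returns JSON when Kibana is up; otherwise print a hint.
KIBANA=http://localhost:5601
curl -s --max-time 3 "$KIBANA/api/status" || echo "Kibana not reachable at $KIBANA"
```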
V. Collecting logs with Logstash
Create a configuration file, logstash.conf, with the following content:
input {
  file {
    path => "/var/log/yum.log"       # the log file to collect
    start_position => "beginning"    # start reading from the beginning of the file
  }
}
filter {
  # empty: no filtering
}
output {
  elasticsearch {
    hosts => "192.168.32.44:9200"
  }
  stdout { codec => rubydebug }
}
# ./bin/logstash -f logstash.conf
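Before leaving the pipeline running, Logstash 5.x can check the configuration syntax with --config.test_and_exit; the guard below only makes the sketch runnable outside the install directory:

```shell
# Validate logstash.conf without starting the pipeline.
if [ -x ./bin/logstash ]; then
  ./bin/logstash -f logstash.conf --config.test_and_exit
else
  echo "logstash not found in ./bin; run this from the install directory"
fi
```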
Check the stdout output and the index in Elasticsearch to verify collection works.
Collecting multiple logs
input {
  file {
    path => "D:\elk\logs\logs.log"
    start_position => "beginning"
    type => "one"
  }
  file {
    path => "D:\elk\logs\u_ex%date:~2,2%%date:~5,2%%date:~8,2%.log"
    start_position => "beginning"
    type => "two"
  }
}
filter {
  # for Nginx-style access logs; add further conditionals if other formats are mixed in
  grok {
    # COMBINEDAPACHELOG is one of Logstash's built-in patterns
    match => ["message", "%{COMBINEDAPACHELOG}"]
  }
}
output {
  if "_grokparsefailure" in [tags] {
    # drop events that failed to parse
  } else {
    if [type] == "one" {
      elasticsearch {
        hosts => "192.168.32.44:9200"
        index => "one-%{+YYYY.MM.dd}"    # index name
      }
      stdout { codec => rubydebug }
    }
    if [type] == "two" {
      elasticsearch {
        hosts => "192.168.32.44:9200"
        index => "two-%{+YYYY.MM.dd}"    # index name
      }
      stdout { codec => rubydebug }
    }
  }
}
Reprinted from:
http://blog.csdn.net/buqutianya/article/details/72019264
http://www.cnblogs.com/fengli9998/p/7152822.html
http://blog.csdn.net/wangyangzhizhou/article/details/53314022
Reprinted from: https://blog.51cto.com/hellvenus/1967239