Setting up ELK + Redis + Filebeat
Everything here is a bare-bones, single-node deployment, recorded for reference. Note that this document targets the feature-limited X-Pack that comes with the free license; if you want to try the full feature set, see the supplementary section.
Environment preparation
Packages
Package | Version |
---|---|
OS | Redhat6.8 x86_64 (no specific requirement) |
java | 1.8 or later |
redis-server | 4.0.2 (no specific requirement) |
elasticsearch | https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.3.tar.gz |
logstash | https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.tar.gz |
kibana | https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-linux-x86_64.tar.gz |
x-pack | https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.2.3.zip |
filebeat | https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-x86_64.rpm |
elasticsearch, logstash, kibana, and redis-server are deployed on the same server; filebeat is deployed on each client machine whose logs are collected
Adjust kernel and user-limit settings
Create an elk user
useradd elk
# vim /etc/security/limits.conf
elk soft nproc 65536
elk hard nproc 65536
elk soft nofile 65536
elk hard nofile 65536
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p
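To confirm the settings above actually took effect, log in again as the elk user and read them back. The snippet below is a hypothetical helper (not part of the setup) that performs the same checks programmatically on Linux:

```python
import resource

def check_limits():
    """Read back the open-files soft limit and vm.max_map_count."""
    # Soft limit on open file descriptors for the current process/user
    nofile_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Kernel setting written to /etc/sysctl.conf above
    with open("/proc/sys/vm/max_map_count") as f:
        max_map_count = int(f.read())
    return {"nofile": nofile_soft, "max_map_count": max_map_count}

print(check_limits())
```

Elasticsearch refuses to start (or logs bootstrap-check failures) when these are too low, so checking them up front saves a restart later.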
Deploying the ELK platform
Deploy redis
yum install redis
Key redis settings
#bind 127.0.0.1  # do not bind the loopback address: either leave bind commented out or bind the LAN IP
port 6379
daemonize yes
protected-mode no
pidfile /var/run/redis_6379.pid
logfile /var/log/redis/redis.log
dbfilename dump.rdb
dir /var/lib/redis/
Start redis-server
redis-server /etc/redis.conf
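For context, filebeat's redis output and logstash's redis input speak the plain RESP protocol: filebeat RPUSHes each event onto a list and logstash pops it off the other end. The hypothetical helper below sketches how a RESP command is encoded on the wire; it is an illustration only, since any real redis client library does this for you:

```python
def encode_resp(*args):
    """Encode a redis command in RESP: an array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]  # array header: number of arguments
    for a in args:
        data = a if isinstance(a, bytes) else str(a).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string: $<len>, payload
    return b"".join(out)

# Filebeat's output.redis effectively issues RPUSH <key> <json event>:
print(encode_resp("RPUSH", "elk_redis", '{"message":"..."}'))
```

Because the protocol is this simple, `redis-cli llen elk_redis` is a handy way to watch the queue depth once filebeat starts shipping events.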
Deploy ELK
Unpack the archives and install the x-pack plugin
tar -zxvf elasticsearch-6.2.3.tar.gz
tar -zxvf logstash-6.2.3.tar.gz
tar -zxvf kibana-6.2.3-linux-x86_64.tar.gz
# cd into each package's directory
[elk@elk elasticsearch-6.2.3]$ bin/elasticsearch-plugin install file:///home/elk/x-pack-6.2.3.zip
[elk@elk logstash-6.2.3]$ bin/logstash-plugin install file:///home/elk/x-pack-6.2.3.zip
Configure logstash
Create the logstash config file, leaving the filter block empty
[elk@elk logstash-6.2.3]$ vim config/logstash.conf
input {
  redis {
    host => "192.168.17.5"
    port => "6379"
    key => "elk_redis"
    data_type => "list"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => "192.168.17.5:9200"
  }
}
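What makes this pipeline work is that each event crosses redis as a JSON string on the elk_redis list: filebeat RPUSHes it, the redis input pops it and decodes the JSON, and the decoded event flows on to the elasticsearch output. A minimal sketch of that handoff, with a plain Python list standing in for the redis server:

```python
import json

queue = []  # stands in for the redis list "elk_redis"

# filebeat side: RPUSH a JSON-encoded event onto the list
event = {
    "@timestamp": "2018-04-04T02:20:05.582Z",
    "tags": ["nginx_access"],
    "message": '192.168.7.19 - - [04/Apr/2018:10:20:00 +0800] "GET / HTTP/1.1" 200 612',
}
queue.append(json.dumps(event))

# logstash side: pop (BLPOP in the real input), decode, hand to the output
popped = json.loads(queue.pop(0))
print(popped["tags"])
```

Using `data_type => "list"` gives exactly this queue semantics, so redis acts as a buffer that absorbs bursts when logstash falls behind.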
Edit logstash.yml to change the bind IP, so that logs can later be sent directly to logstash if needed
[root@elk ~]# cat /home/elk/logstash-6.2.3/config/logstash.yml | sed '/^#/d'
http.host: "192.168.17.5"
Start logstash
[elk@elk logstash-6.2.3]$ bin/logstash -f config/logstash.conf
Configure elasticsearch
Edit elasticsearch.yml
[elk@elk elasticsearch-6.2.3]$ vim config/elasticsearch.yml
network.host: 192.168.17.5
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
Install the plugins required by the filebeat modules (the nginx module's ingest pipeline uses both geoip and user-agent processing)
bin/elasticsearch-plugin install ingest-geoip
bin/elasticsearch-plugin install ingest-user-agent
Start elasticsearch
[elk@elk elasticsearch-6.2.3]$ bin/elasticsearch
Apply the free license
x-pack is a paid plugin: the full-feature trial lasts only one month and enables authentication, so you would need to configure credentials for the logstash output and for kibana, then change that configuration again once the trial expires. Rather than deal with all that, I apply the free license directly.
Register for and download a free license: https://license.elastic.co/registration
Set the initial elastic passwords; this must be done only after elasticsearch has been started
elasticsearch-6.2.3/bin/x-pack/setup-passwords interactive
# Interactively enter passwords for the elastic, logstash, and kibana users; they can all be the same — only the elastic password is actually used later
Upload the free license file, then run the following command from the directory containing it
curl -XPUT -u elastic "http://192.168.17.5:9200/_xpack/license?acknowledge=true" -H "Content-Type: application/json" -d @huangwj-work-94846770-1b0f-45a6-b404-e2f407708e91-v5.json
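If you prefer scripting the upload, the curl call maps onto stdlib urllib. The helper below is a hypothetical sketch that only builds the authenticated PUT request (host, user, password, and license path are all placeholders; calling `urllib.request.urlopen(req)` would actually send it):

```python
import base64
import urllib.request

def build_license_request(host, user, password, license_path):
    """Build (but do not send) the PUT request that installs an x-pack license."""
    with open(license_path, "rb") as f:
        body = f.read()  # the downloaded license JSON
    url = f"http://{host}:9200/_xpack/license?acknowledge=true"
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    # Same basic auth that curl -u elastic prompts for
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req
```

`acknowledge=true` matters: downgrading to a basic license disables paid features, and without that flag elasticsearch rejects the request and asks for confirmation.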
Supplement: configuration for trying the full x-pack feature set
Add elastic authentication to the output block in logstash.conf
user => "elastic"
password => "<the password set earlier>"
Add authentication settings to kibana.yml
elasticsearch.username: "user"
elasticsearch.password: "pass"
Deploy filebeat to collect logs
There are two approaches to collecting logs with filebeat.
The first handles conventional logs (redis, Nginx, MySQL, and so on) with filebeat's built-in modules: the module does the grok-style parsing itself and ships the result straight to elasticsearch, so no logstash grok patterns are needed. The prerequisite is that the log uses the default format, e.g. the default Nginx log_format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
The second is for non-standard logs: filebeat only collects, and all parsing work is handed off to logstash.
About modules: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules-overview.html
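To see roughly what the nginx module's ingest pipeline extracts from that default log_format, here is a hedged regex sketch; the field names mirror the nginx.access.* fields shown later, but the pattern is an illustration, not the module's actual grok definition:

```python
import re

# One named group per $variable in the default log_format (illustrative only)
ACCESS_RE = re.compile(
    r'(?P<remote_ip>\S+) - (?P<user_name>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) HTTP/(?P<http_version>\S+)" '
    r'(?P<response_code>\d+) (?P<body_sent>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('192.168.7.19 - - [04/Apr/2018:10:20:00 +0800] '
        '"POST /api/console/proxy?path=_mapping&method=GET HTTP/1.1" '
        '200 6075 "http://192.168.17.5/app/kibana" "Mozilla/5.0"')
fields = ACCESS_RE.match(line).groupdict()
print(fields["remote_ip"], fields["response_code"])
```

If a custom log_format is in use, this one-to-one mapping breaks, which is exactly when the second approach (raw collection plus logstash parsing) becomes necessary.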
Using modules for simple log parsing
Enable the desired module, e.g. nginx
[root@elk ~]# filebeat modules enable nginx
Configure filebeat.yml
[root@elk ~]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
filebeat.prospectors:
- type: log
  paths:
    - "/var/log/nginx/access.log_hardlink"
  tags:
    - "nginx_access"
output.elasticsearch:
  hosts: "http://192.168.17.5:9200"
setup.kibana:
  host: "http://192.168.17.5:5601"
setup.dashboards.enabled: true
Configure modules.d/nginx.yml
[root@elk ~]# cat /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.paths: ["/var/log/nginx/access.log"]
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.paths: ["/var/log/nginx/error.log"]
Without modules
Configure filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - "/var/log/nginx/access.log_hardlink"
  tags:
    - "nginx_access"
output.redis:
  hosts: "192.168.17.5:6379"
  key: "elk_redis"
Log comparison
Both documents below come from the same default Nginx log_format.
Document produced without modules
{
  "_index": "filebeat-6.2.3-2018.04.04",
  "_type": "doc",
  "_id": "zUx0jmIBfEByYdtGQsI4",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-04-04T02:20:05.582Z",
    "tags": [
      "nginx_access"
    ],
    "prospector": {
      "type": "log"
    },
    "beat": {
      "version": "6.2.3",
      "name": "elk",
      "hostname": "elk"
    },
    "source": "/var/log/nginx/access.log_hardlink",
    "offset": 537263,
    "message": "192.168.7.19 - - [04/Apr/2018:10:20:00 +0800] \"POST /api/console/proxy?path=_mapping&method=GET HTTP/1.1\" 200 6075 \"http://192.168.17.5/app/kibana\" \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36\" \"-\""
  },
  "fields": {
    "@timestamp": [
      "2018-04-04T02:20:05.582Z"
    ]
  },
  "sort": [
    1522808405582
  ]
}
Document produced with modules
{
  "_index": "filebeat-6.2.3-2018.04.04",
  "_type": "doc",
  "_id": "z0x0jmIBfEByYdtGQsI4",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-04-04T02:20:00.000Z",
    "offset": 537263,
    "nginx": {
      "access": {
        "referrer": "http://192.168.17.5/app/kibana",
        "response_code": "200",
        "remote_ip": "192.168.7.19",
        "method": "POST",
        "user_name": "-",
        "http_version": "1.1",
        "body_sent": {
          "bytes": "6075"
        },
        "remote_ip_list": [
          "192.168.7.19"
        ],
        "url": "/api/console/proxy?path=_mapping&method=GET",
        "user_agent": {
          "patch": "3112",
          "major": "60",
          "minor": "0",
          "os": "Windows 10",
          "name": "Chrome",
          "os_name": "Windows 10",
          "device": "Other"
        }
      }
    },
    "beat": {
      "hostname": "elk",
      "name": "elk",
      "version": "6.2.3"
    },
    "prospector": {
      "type": "log"
    },
    "read_timestamp": "2018-04-04T02:20:05.591Z",
    "source": "/var/log/nginx/access.log",
    "fileset": {
      "module": "nginx",
      "name": "access"
    }
  },
  "fields": {
    "@timestamp": [
      "2018-04-04T02:20:00.000Z"
    ]
  },
  "sort": [
    1522808400000
  ]
}
Viewing logs and dashboards in kibana
Add an index pattern
management —> index patterns —> Create Index Pattern —> @timestamp
Because kibana dashboards were set up as part of the filebeat configuration (setup.dashboards.enabled), some simple prebuilt dashboards can be viewed under Dashboards