EFK Log Collection
Elasticsearch: the database; stores the data (Java)
Logstash: log collection and filtering (Java)
Kibana: analysis, filtering, and visualization (Java)
Filebeat: collects logs and ships them to ES or Logstash (Go)
Filebeat official documentation:
https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Environment:
ES host: 192.168.1.104 (RAM: 4 GB minimum)
elasticsearch
kibana
filebeat
nginx
##################################################################
Install on the ES host: 192.168.1.104
1. Install Elasticsearch:
Prerequisite: jdk-1.8.0
Copy elasticsearch-6.6.0.rpm to the VM
rpm -ivh elasticsearch-6.6.0.rpm
2. Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.1.104,127.0.0.1
http.port: 9200
3. Create the data directory and fix its ownership
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
4. Set the heap size to be locked:
vim /etc/elasticsearch/jvm.options
-Xms1g #minimum heap size
-Xmx1g #maximum heap size; the official recommendation is half of physical RAM, capped at 32 GB
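The sizing rule above (half of physical RAM, capped near 32 GB) can be sketched as a shell snippet; the 31g cap below is an assumption, a common safety margin under the 32 GB limit, not a value from this guide:

```shell
# Sketch: derive an ES heap size as half of physical RAM, capped at 31g
# (31g is an assumed margin below the 32 GB compressed-oops threshold)
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
[ "$half_gb" -gt 31 ] && half_gb=31
[ "$half_gb" -lt 1 ] && half_gb=1
echo "-Xms${half_gb}g -Xmx${half_gb}g"
```

Whatever value comes out, -Xms and -Xmx should always be set equal so the heap never resizes at runtime.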
5. With memory locking enabled, the service will fail to restart; fix it as follows:
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Save and exit
systemctl daemon-reload
systemctl restart elasticsearch
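To confirm the override took effect, a process's memlock limit can be read from /proc; the pgrep pattern below is an assumption about how the ES java process appears, so it is left commented out, with the current shell's own limits file used to illustrate the output format:

```shell
# With LimitMEMLOCK=infinity applied, the elasticsearch process's
# "Max locked memory" line should read "unlimited".
# pid=$(pgrep -f elasticsearch)                   # hypothetical PID lookup
# grep -i "max locked memory" /proc/$pid/limits
grep -i "max locked memory" /proc/self/limits     # same format, for this shell
```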
##################################################################
Install Kibana on the ES host
(1) Install kibana
cd /data/soft
rpm -ivh kibana-6.6.0-x86_64.rpm
(2) Edit the configuration file
vim /etc/kibana/kibana.yml
Change:
server.port: 5601
server.host: "192.168.1.104"
server.name: "db01" #hostname of this machine
elasticsearch.hosts: ["http://192.168.1.104:9200"] #IP of the ES server Kibana pulls data from
Save and exit
(3) Start kibana
systemctl start kibana
###################################################################
Install Filebeat on the ES host
1. Install filebeat
cd /data/soft
rpm -ivh filebeat-6.6.0-x86_64.rpm
2. Edit the configuration file
vim /etc/filebeat/filebeat.yml
Change:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.elasticsearch:
  hosts: ["192.168.1.104:9200"]
Save and exit
3. Start filebeat
systemctl start filebeat
######################################################################
Install nginx and httpd-tools on the ES host
1. Configure the yum repo and install nginx and httpd-tools
yum -y install epel-release
yum -y install nginx httpd-tools
2. Start nginx
systemctl start nginx
3. Generate traffic with the ab load-testing tool
ab -n 100 -c 20 http://192.168.1.104/
4. View the filebeat index and its data in ES from a browser
5. Add the index pattern in Kibana
Management -> Create index pattern
Discover -> top-right corner -> select Today
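The index to look for in step 5 follows Filebeat 6.x's default naming scheme, filebeat-&lt;version&gt;-&lt;date&gt;; a quick sketch of the expected name (version 6.6.0 assumed from the RPM installed above):

```shell
# Default Filebeat 6.x index name: filebeat-<beat.version>-<yyyy.MM.dd>
echo "filebeat-6.6.0-$(date +%Y.%m.%d)"
```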
6. Change the nginx log format to JSON
vim /etc/nginx/nginx.conf
Add inside the http {} block:
log_format log_json '{ "@timestamp": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"up_resp_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
access_log /var/log/nginx/access.log log_json;
Save and exit
systemctl restart nginx
Truncate the old (non-JSON) log entries: > /var/log/nginx/access.log
Run ab again to generate JSON-formatted log entries
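One way to sanity-check the new format is to pipe a log line through a JSON parser; the sample line below is a hypothetical record in the log_json layout above, not real traffic:

```shell
# Validate that a line in the log_json layout parses as JSON
# (hypothetical sample; in practice: tail -1 /var/log/nginx/access.log)
line='{ "@timestamp": "07/Mar/2019:12:00:00 +0800", "remote_addr": "192.168.1.1", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 612, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-", "up_host": "-", "up_resp_time": "-", "request_time": "0.000" }'
echo "$line" | python3 -c 'import sys, json; json.load(sys.stdin); print("valid JSON")'
```

If the parser rejects the real access.log lines, the usual culprit is leftover pre-JSON entries, which is why the log is truncated before re-testing.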
7. Edit the filebeat configuration file
vim /etc/filebeat/filebeat.yml
Change to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
output.elasticsearch:
  hosts: ["192.168.1.104:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
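Filebeat expands %{[beat.version]} from the Beat's own version and %{+yyyy.MM} from the event timestamp, so the index setting above resolves to one index per month; a sketch of the expansion (6.6.0 assumed from the installed RPM):

```shell
# Sketch of how "nginx-%{[beat.version]}-%{+yyyy.MM}" resolves
version="6.6.0"          # %{[beat.version]}
month=$(date +%Y.%m)     # %{+yyyy.MM}
echo "nginx-${version}-${month}"
```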
8. Split access.log and error.log into separate indices
vim /etc/filebeat/filebeat.yml
Change to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.elasticsearch:
  hosts: ["192.168.1.104:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
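The indices conditions above route each event by its tags; the same selection logic can be sketched as a small shell function (the expanded index names and the fallback are assumptions based on the config above and Filebeat's default index):

```shell
# Sketch of the when.contains routing: map an event's tags to an index name
pick_index() {
  case "$1" in
    *access*) echo "nginx-access-6.6.0-$(date +%Y.%m)" ;;
    *error*)  echo "nginx-error-6.6.0-$(date +%Y.%m)" ;;
    *)        echo "filebeat-6.6.0-$(date +%Y.%m)" ;;  # no condition matched: default index
  esac
}
pick_index "access"
pick_index "error"
```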