Filebeat + ELK, version 5.1.1 (Logstash is not used).
Prepare the tarballs and unpack each of them.
Because Filebeat is small and lightweight, it is used directly as the agent on the collection side.
1 Original nginx log configuration:
nginx.conf
log_format main '$remote_addr - [$time_local] $request_method "$uri" "$query_string" '
                '$status $body_bytes_sent "$http_referer" $upstream_status '
                '"$http_user_agent"';
access_log /var/log/nginx/access.log main;
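To make the "main" format concrete, here is a hypothetical access.log line in that layout and a whitespace-based field extraction (the line itself and the `line` variable are made up for illustration):

```shell
# Hypothetical access.log line in the "main" format above (all values made up):
line='10.0.0.1 - [10/Jan/2018:12:00:00 +0800] GET "/index.html" "id=1" 200 612 "-" 200 "Mozilla/5.0"'
# Splitting on whitespace, the bracketed $time_local spans fields 3-4,
# so $status lands in field 8 and $body_bytes_sent in field 9:
status=$(echo "$line" | awk '{print $8}')
bytes=$(echo "$line" | awk '{print $9}')
echo "$status $bytes"   # → 200 612
```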
2 Configure Filebeat
filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - '/var/log/nginx/access.log'
    #- c:\programdata\elasticsearch\logs\*
  json.message_key: log
  json.keys_under_root: true

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.130.24.111:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

setup.dashboards:
  enabled: true
setup.kibana:
  host: "10.130.24.111:5601"

template.enabled: true
template.path: "filebeat.template.json"
template.overwrite: false
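Note that the json.* settings assume each input line is a JSON object (e.g. Docker json-file logs); the plain "main" nginx format above is not JSON. With json.keys_under_root: true, Filebeat lifts the JSON keys to top-level event fields instead of nesting them under a json object, and json.message_key: log names the field holding the message. A sketch of what that input looks like (the sample line is made up; sed is used only to illustrate the extraction):

```shell
# Hypothetical JSON log line of the shape json.message_key: log expects:
json='{"log":"10.0.0.1 - GET /index.html 200","stream":"stdout"}'
# With json.keys_under_root: true, "log" and "stream" become top-level
# fields of the event. Pull out the "log" value here with sed:
msg=$(echo "$json" | sed -n 's/.*"log":"\([^"]*\)".*/\1/p')
echo "$msg"
```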
3 Start Filebeat (enable nginx log collection):
./filebeat modules enable nginx &
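Before backgrounding Filebeat it is worth sanity-checking the configuration; a sketch, assuming Filebeat 5.x is unpacked under /usr/local/filebeat (the install path is an assumption, and the flags are the 5.x command line):

```shell
cd /usr/local/filebeat                        # assumed install path; adjust to yours
./filebeat -configtest -e -c filebeat.yml     # parse-check the config, log to stderr
nohup ./filebeat -e -c filebeat.yml &         # then run Filebeat in the background
```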
4 Configure Elasticsearch
https://blog.csdn.net/xiaoying0531/article/details/78941631
Note: configure the memory here to match what is allocated to your account.
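The memory note above refers to the JVM heap; in Elasticsearch 5.x it is set in config/jvm.options, keeping the minimum and maximum equal. A sketch (the install path and the 2g size are examples, not values from this setup):

```
# /usr/local/elasticsearch/config/jvm.options  (path and sizes are assumptions)
-Xms2g
-Xmx2g
```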
5 Start Elasticsearch (a non-root user is required).
Run Elasticsearch in the background:
./elasticsearch -d
6 Configure Kibana
vim /usr/local/kibana/config/kibana.yml and add:
elasticsearch.url: "http://10.149.11.226:9200"
server.host: "0.0.0.0"
logging.dest: "/usr/local/kibana/kibana.log"
7 Start Kibana
Run Kibana in the background:
./bin/kibana &
Test: open Kibana in a browser at http://10.130.24.111:5601 (the host configured above) and check that nginx access-log events are arriving.
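A quick end-to-end check from any host that can reach the servers above (hosts and ports are taken from the configs in this document; these commands only read status):

```shell
curl -s http://10.130.24.111:9200                     # Elasticsearch banner, should report 5.1.1
curl -s "http://10.130.24.111:9200/_cat/indices?v"    # a filebeat-* index should appear
curl -s http://10.130.24.111:5601/api/status          # Kibana status API
```

If these look healthy, create the filebeat-* index pattern in Kibana to browse the events.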