1. ELK Introduction
ELK official site: https://www.elastic.co/
Using zookeeper+logstash: https://blog.51cto.com/tchuairen/1855090
2. Processing Flow
The processing flow is: Nginx --syslog--> Rsyslog --omkafka--> Kafka --> Logstash --> Elasticsearch --> Kibana
Nginx sends its logs to the Rsyslog server via the syslog facility; on receipt, Rsyslog writes them into Kafka through its omkafka module; Logstash consumes the Kafka topic and writes to Elasticsearch; users then search the stored logs through Kibana.
Rsyslog ships with the operating system and needs no installation, so the client side of this pipeline requires no additional software.
The server side also has Rsyslog preinstalled, but the omkafka module is not included by default; it must be installed before Rsyslog can write to Kafka.
The omkafka module is only available in rsyslog v8.7.0 and later, so first check the installed version with rsyslogd -v and upgrade if it is older.
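The version gate above can be scripted; a minimal sketch, assuming GNU coreutils' sort -V is available for version-string comparison (the 8.7.0 threshold comes from the note above):

```shell
# Returns success if the given rsyslog version is >= 8.7.0,
# the first release that ships the omkafka output module.
supports_omkafka() {
    # sort -V orders version strings numerically; if 8.7.0 is the
    # smaller of the two, the installed version is new enough.
    [ "$(printf '8.7.0\n%s\n' "$1" | sort -V | head -n1)" = "8.7.0" ]
}

# In practice, extract the version from rsyslogd itself:
#   version=$(rsyslogd -v | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
supports_omkafka "8.24.0" && echo "8.24.0: omkafka available"
supports_omkafka "7.4.7"  || echo "7.4.7: too old, upgrade first"
```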
3. Install omkafka
cd /etc/yum.repos.d/
wget http://rpms.adiscon.com/v8-stable/rsyslog.repo
yum -y install rsyslog
yum -y install rsyslog-kafka
4. Configure rsyslog to write logs to Kafka
/etc/rsyslog.d/rsyslog_nginx_kafka_cluster.conf
module(load="imudp")
input(type="imudp" port="514")

# nginx access log ==> rsyslog server(local) ==> kafka
module(load="omkafka")
template(name="nginxLog" type="string" string="%msg%")

if $inputname == "imudp" then {
    if ($programname == "nginx_access_log") then
        action(type="omkafka"
            template="nginxLog"
            broker=["120.79.36.134:9092"]
            topic="rsyslog_nginx"
            partitions.auto="on"
            confParam=[
                "socket.keepalive.enable=true"
            ]
        )
}
:rawmsg, contains, "nginx_access_log" ~
Configuration notes:
- imudp listens for syslog messages on UDP port 514.
- The nginxLog template keeps only the raw message body (%msg%), i.e. the JSON line nginx produced.
- Messages arriving over imudp whose syslog tag ($programname) is nginx_access_log are sent to the Kafka topic rsyslog_nginx on broker 120.79.36.134:9092, with automatic partition assignment.
- The final :rawmsg, contains, "nginx_access_log" ~ rule discards the matched messages afterwards, so they are not also written to local log files.
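What nginx actually sends to the imudp listener is an RFC 3164 syslog datagram of the form &lt;PRI&gt;timestamp host tag: message, and rsyslog derives $programname from the tag. A self-contained sketch of that wire format, using a throwaway UDP listener on port 5514 (an assumed port, to avoid needing root for 514):

```shell
# Start a one-shot UDP listener that prints the first datagram it receives.
python3 - <<'EOF' > /tmp/received.txt &
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 5514))
data, _ = s.recvfrom(4096)   # wait for exactly one datagram
print(data.decode())
EOF
sleep 1

# facility=local7 (23), severity=info (6) -> PRI = 23*8 + 6 = 190;
# the tag nginx_access_log is what the rsyslog rule matches as $programname.
printf '<190>Sep  7 12:00:00 web01 nginx_access_log: {"status":200}' |
python3 -c 'import socket,sys; socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(sys.stdin.buffer.read(), ("127.0.0.1", 5514))'

wait
cat /tmp/received.txt
```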
5. Configure the nginx log format
log_format jsonlog '{'
    '"host": "$host",'
    '"server_addr": "$server_addr",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"remote_addr":"$remote_addr",'
    '"time_local":"$time_local",'
    '"request_method":"$request_method",'
    '"request_uri":"$request_uri",'
    '"status":$status,'
    '"body_bytes_sent":$body_bytes_sent,'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"upstream_addr":"$upstream_addr",'
    '"upstream_status":"$upstream_status",'
    '"upstream_response_time":"$upstream_response_time",'
    '"request_time":$request_time'
'}';
access_log syslog:server=120.79.36.134:514,facility=local7,tag=nginx_access_log,severity=info jsonlog;
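A hand-filled sample line in the jsonlog format above can confirm the format really produces valid JSON; note that $status, $body_bytes_sent and $request_time are deliberately unquoted so they index as numbers (all sample values below are invented):

```shell
# One access-log line as the jsonlog format would render it (values invented).
line='{"host": "example.com","server_addr": "10.0.0.1","http_x_forwarded_for":"-","remote_addr":"203.0.113.7","time_local":"07/Sep/2018:12:00:00 +0800","request_method":"GET","request_uri":"/index.html","status":200,"body_bytes_sent":512,"http_referer":"-","http_user_agent":"curl/7.29.0","upstream_addr":"-","upstream_status":"-","upstream_response_time":"-","request_time":0.003}'

# python3 -m json.tool exits non-zero if the input is not valid JSON.
echo "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```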
6. Install Kafka
(1) Install the Java environment:
yum -y install java-1.8.0-openjdk.x86_64
Find the exact JDK path under /usr/lib/jvm, then export the environment variables (e.g. in /etc/profile):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
cd /usr/local
wget http://mirrors.hust.edu.cn/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz
Extract and enter the directory:
tar -zvxf ./kafka_2.12-1.1.0.tgz
cd kafka_2.12-1.1.0
Start Zookeeper
Start a single-node Zookeeper instance with the script shipped in the package (the -daemon flag runs it in the background):
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
Start the Kafka service
bin/kafka-server-start.sh config/server.properties
To run Kafka in the background, start it with nohup and &.
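The nohup-and-& pattern looks like this (a sketch; a short sleep stands in for the long-running kafka-server-start.sh so the snippet is self-contained):

```shell
# Real invocation (run from the kafka_2.12-1.1.0 directory):
#   nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &

# Self-contained demonstration of the same pattern:
nohup sh -c 'sleep 1; echo kafka started' > /tmp/kafka-nohup.log 2>&1 &
wait $!                      # in real use you would NOT wait, just leave it running
cat /tmp/kafka-nohup.log     # stdout/stderr survive terminal hangup via the redirect
```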
7. Install ELK
Install the ELK components from the official RPM packages; the website has an installation guide for each component.
Logstash configuration:
/etc/logstash/logstash.conf
input {
    kafka {
        bootstrap_servers => "10.82.9.202:9092"
        topics => ["rsyslog_nginx"]
    }
}

filter {
    mutate {
        # escape the \x sequences nginx emits for unsafe characters,
        # so the json filter below does not choke on them
        gsub => ["message", "\\x", "\\\x"]
    }
    json {
        # parse the JSON body produced by the jsonlog format
        source => "message"
    }
    date {
        # map nginx's $time_local onto the event's @timestamp
        match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
        target => "@timestamp"
    }
}

output {
    elasticsearch {
        hosts => ["10.82.9.205:9200"]
        index => "rsyslog-nginx-%{+YYYY.MM.dd}"
    }
}
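The date filter's pattern dd/MMM/yyyy:HH:mm:ss Z matches nginx's $time_local. A quick sanity check of that mapping using Python's equivalent strptime directives (the sample timestamp is invented):

```shell
python3 - <<'EOF'
from datetime import datetime

# %d/%b/%Y:%H:%M:%S %z mirrors Logstash's dd/MMM/yyyy:HH:mm:ss Z
ts = datetime.strptime("07/Sep/2018:12:00:00 +0800", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())   # this parsed value becomes @timestamp in the event
EOF
```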