Log Analysis Platform - ELK
This guide builds an ELK log analysis platform with a Redis caching layer in front of Logstash.
Environment:
node1: 172.16.1.152
node2: 172.16.1.153
1. Configure the base environment
Disable SELinux on all nodes
setenforce 0
vim /etc/selinux/config
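To keep SELinux disabled after a reboot, set the mode in /etc/selinux/config:
SELINUX=disabled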
Raise the vm.max_map_count limit on all nodes.
max_map_count defines the maximum number of memory map areas a process may have; Elasticsearch requires at least 262144.
vim /etc/sysctl.conf
vm.max_map_count=262144
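Apply the setting without rebooting and confirm the new value:
sysctl -p
sysctl vm.max_map_count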
Install the JDK on all nodes.
The installation steps are not covered in detail here.
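As a rough sketch only (assuming a JDK 8 tarball; the archive name is illustrative, but the target path matches the /usr/local/jdk1.8.0_171 used later in this guide):
tar -xf jdk-8u171-linux-x64.tar.gz -C /usr/local/
/usr/local/jdk1.8.0_171/bin/java -version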
2. Install Elasticsearch (all nodes)
Upload the RPM package
rpm -ivh elasticsearch-oss-6.5.4.rpm
Edit the configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk    # cluster name
node.name: node-1    # node name (use node-2 on the second node)
path.data: /elk/elasticsearch/data    # data directory
path.logs: /elk/elasticsearch/log    # log directory
network.host: 172.16.1.152    # listen address (this node's IP)
http.port: 9200    # listen port
discovery.zen.ping.unicast.hosts: ["172.16.1.152", "172.16.1.153"]    # cluster node IPs
http.cors.enabled: true
http.cors.allow-origin: "*"
Create the data and log directories
mkdir /elk/elasticsearch/data -p
mkdir /elk/elasticsearch/log -p
chown -R elasticsearch:elasticsearch /elk/elasticsearch/
Create a symlink for the java command
ln -s /usr/local/jdk1.8.0_171/bin/java /usr/bin/
Restart Elasticsearch
systemctl restart elasticsearch
Open the firewall ports
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --reload
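To confirm the two nodes formed a cluster, query the cluster health API:
curl http://172.16.1.152:9200/_cluster/health?pretty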
3. Install elasticsearch-head
Download the package
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
Unzip the archive and enter the directory
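(The master-branch archive unpacks to elasticsearch-head-master.)
unzip master.zip
cd elasticsearch-head-master/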
Install npm and Node.js
yum -y install npm
npm install grunt
npm install
nohup npm run start &
firewall-cmd --add-port=9100/tcp --permanent
firewall-cmd --reload
Restart Elasticsearch
systemctl restart elasticsearch
Open http://172.16.1.152:9100 in a browser to reach the elasticsearch-head console
4. Install Kibana
rpm -ivh kibana-oss-6.5.4-x86_64.rpm
Edit the configuration file
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: 172.16.1.152
elasticsearch.url: "http://172.16.1.152:9200"
Start Kibana
systemctl start kibana
Open the firewall port
firewall-cmd --add-port=5601/tcp --permanent
firewall-cmd --reload
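A quick check that Kibana is responding (it may take a short while to start):
curl -I http://172.16.1.152:5601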
5. Install Redis
tar -xf redis-5.0.0.tar.gz
cd redis-5.0.0/
mkdir /usr/local/redis
make MALLOC=libc
make install PREFIX=/usr/local/redis
Create a directory for the configuration file
mkdir /usr/local/redis/etc
cp /tools/redis-5.0.0/redis.conf /usr/local/redis/etc
Edit /usr/local/redis/etc/redis.conf. Run Redis as a background daemon:
daemonize yes
Allow remote connections
Comment out the bind directive
#bind 127.0.0.1
Disable protected mode
protected-mode no
Open the firewall port
firewall-cmd --add-port=6379/tcp --permanent
firewall-cmd --reload
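Start Redis with this configuration file and make sure it answers (paths follow the PREFIX used above):
/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf
/usr/local/redis/bin/redis-cli ping    # should return PONG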
6. Install Logstash
rpm -ivh logstash-6.5.4.rpm
Add JAVA_HOME to the startup script
vim /usr/share/logstash/bin/logstash
export JAVA_HOME=/usr/local/jdk1.8.0_171
Create the pipeline configuration file (Logstash loads every file ending in .conf under this directory)
vim /etc/logstash/conf.d/logstash.conf
On the remote end (the nginx node that ships the logs):
input {
  file {
    path => "/usr/local/nginx/logs/access.log"
    type => "nginx01-access.log"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  redis {
    host => '172.16.1.152'
    data_type => 'list'
    key => 'elk:redis'
    codec => json { charset => "UTF-8" }
  }
}
On the local end (the ELK node):
input {
  redis {
    host => '172.16.1.152'
    data_type => 'list'
    key => 'elk:redis'
    type => 'redis-input'
    codec => plain { charset => "UTF-8" }
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["172.16.1.152:9200"]
    index => "nginx01-%{+YYYY.MM.dd}"
  }
}
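Both pipeline files can be syntax-checked before starting the service:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit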
Change the nginx log format to JSON (in nginx.conf):
log_format access_json '{"@timestamp":"$time_iso8601",'
    '"host":"$server_addr",'
    '"clientip":"$remote_addr",'
    '"size":"$body_bytes_sent",'
    '"responsetime":"$request_time",'
    '"upstreamtime":"$upstream_response_time",'
    '"upstreamhost":"$upstream_addr",'
    '"http_host":"$host",'
    '"url":"$uri",'
    '"domain":"$host",'
    '"xff":"$http_x_forwarded_for",'
    '"referer":"$http_referer",'
    '"status":"$status"}';
access_log logs/access.log access_json;
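Test and reload nginx so the JSON log format takes effect (assuming nginx was built under /usr/local/nginx, as the log path above suggests):
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx -s reload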
Grant read permission on the log file
chmod 644 /usr/local/nginx/logs/access.log
Start Logstash
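With the RPM install, Logstash runs as a systemd service; it can also be run in the foreground for debugging:
systemctl start logstash
# or: /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf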
Add the index pattern in Kibana.
Check that the data is being pushed through correctly.
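To trace the pipeline end to end, check that events are queued in the Redis list and that the daily index appears in Elasticsearch (the key and index names follow the configuration above):
/usr/local/redis/bin/redis-cli -h 172.16.1.152 llen elk:redis
curl 'http://172.16.1.152:9200/_cat/indices?v'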