Collecting Logs with EFK
1. Prepare two virtual machines
1) 192.168.88.132 (server)
2) 192.168.88.133 (client)
2. Stop the firewall (on both machines)
systemctl stop firewalld
3. Put SELinux in permissive mode
setenforce 0
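Note that setenforce 0 only lasts until the next reboot; the boot-time mode lives in /etc/selinux/config. A minimal sketch of making the change persistent, run here against a temporary copy so it is safe to try (on the real machine, point sed at /etc/selinux/config as root):

```shell
cfg=$(mktemp)                      # stand-in for /etc/selinux/config
echo 'SELINUX=enforcing' > "$cfg"
# Flip the boot-time mode from enforcing to permissive
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
mode=$(grep '^SELINUX=' "$cfg")
echo "$mode"
rm -f "$cfg"
```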
Installation on the first machine (server)
1. Install the Elasticsearch cluster
1) Install the JDK first (Elasticsearch requires Java)
rpm -ivh jdk-8u131-linux-x64_.rpm
2) Install Elasticsearch
yum -y install elasticsearch-6.6.2.rpm
3) Edit the Elasticsearch config file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-1803A
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.88.132
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.88.132", "192.168.88.133"]
4) Start Elasticsearch
systemctl start elasticsearch
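Once both nodes are up, the cluster state can be checked over HTTP, e.g. `curl -s http://192.168.88.132:9200/_cluster/health` (assuming the network.host value configured above). A sketch of pulling the status field out of that JSON with plain shell tools, using a sample response:

```shell
# Sample of what /_cluster/health returns for a healthy two-node cluster
resp='{"cluster_name":"my-1803A","status":"green","number_of_nodes":2}'
# Extract "status": green = all shards allocated, yellow = replicas unassigned, red = trouble
status=$(echo "$resp" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"
```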
2. Install Logstash and Kibana
1) Install Logstash
yum -y install logstash-6.6.0.rpm
2) Create the httpd pipeline config (read events from Redis, write them to Elasticsearch)
vim /etc/logstash/conf.d/httpd.conf
input {
  redis {
    data_type => "list"
    host      => "192.168.88.132"
    password  => "123321"
    port      => "6379"
    db        => "1"
    key       => "filebeat-httpd"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.88.132:9200"]
    index => "redis-httpdlog-%{+YYYY.MM.dd}"
  }
}
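The `%{+YYYY.MM.dd}` in the index option is a Logstash date pattern expanded from each event's @timestamp, so one index is created per day. (The pipeline file can also be syntax-checked before starting the service with `/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/httpd.conf --config.test_and_exit`.) What today's index name would look like, sketched with the shell equivalent:

```shell
# %{+YYYY.MM.dd} expands to the event's date; for an event arriving today:
printf 'redis-httpdlog-%s\n' "$(date +%Y.%m.%d)"
```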
3) Install Kibana
yum -y install kibana-6.6.2-x86_64.rpm
4) Edit the Kibana config file
vim /etc/kibana/kibana.yml
To list only the settings that differ from the defaults:
egrep -v "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.88.132"
elasticsearch.hosts: ["http://192.168.88.132:9200"]
5) Start Kibana
systemctl start kibana
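A quick way to confirm that Elasticsearch (9200) and Kibana (5601) are actually listening is a TCP connect test using bash's built-in /dev/tcp, no extra tools required. The addresses below are this lab's assumed server IP; Kibana can take a minute after `systemctl start` before its port opens:

```shell
# Prints "open" if a TCP connection to host:port succeeds within 2 seconds
check_port() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}
check_port 192.168.88.132 9200   # elasticsearch
check_port 192.168.88.132 5601   # kibana
```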
3. Install Redis
1) Unpack the Redis source tarball
tar zxf redis-5.0.0.tar.gz
2) Copy redis-5.0.0 to /usr/local/redis
cp -r redis-5.0.0 /usr/local/redis
3) Enter the redis directory
cd /usr/local/redis/
4) Install the compiler toolchain needed to build Redis
yum -y install gcc-c++
5) Build from source
make
6) Create symlinks so the binaries are on the PATH
ln -s /usr/local/redis/src/redis-server /usr/bin/redis-server
ln -s /usr/local/redis/src/redis-cli /usr/bin/redis-cli
7) Edit the Redis config file
vim /usr/local/redis/redis.conf
Around line 69, bind Redis to this server's address:
bind 192.168.88.132
Around line 508, set a password:
requirepass 123321
8) Start Redis:
redis-server ./redis.conf
9) Raise the somaxconn backlog to 511 (Redis logs a warning when it is lower):
echo 511 > /proc/sys/net/core/somaxconn
10) Add the following line at the end of /etc/sysctl.conf:
vim /etc/sysctl.conf
vm.overcommit_memory = 1
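The sysctl change only takes effect after running `sysctl -p` (or rebooting). A guard like the one below also keeps the line from being appended twice if the step is re-run; sketched here against a temporary file rather than the real /etc/sysctl.conf:

```shell
conf=$(mktemp)        # stand-in for /etc/sysctl.conf
add_param() {         # append the line only if the key is not already present
  grep -q '^vm.overcommit_memory' "$conf" || echo 'vm.overcommit_memory = 1' >> "$conf"
}
add_param; add_param  # run twice: the line is still added only once
count=$(grep -c '^vm.overcommit_memory' "$conf")
echo "$count"
rm -f "$conf"
# Against the real file, follow up with: sysctl -p
```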
11) Add the following to /etc/rc.local so transparent huge pages stay disabled after reboots:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
12) In the Redis config file, line 136, change daemonize no to daemonize yes so Redis runs as a background daemon:
vim /usr/local/redis/redis.conf
daemonize yes
13) Restart Redis:
redis-server ./redis.conf
14) Test with redis-cli:
redis-cli -h 192.168.88.132
192.168.88.132:6379> auth 123321
OK
192.168.88.132:6379> keys *
(empty list or set)
Installation on the second machine (client)
1. Install Elasticsearch (second cluster node)
1) Install the JDK first
rpm -ivh jdk-8u131-linux-x64_.rpm
2) Install Elasticsearch
yum -y install elasticsearch-6.6.2.rpm
3) Edit the Elasticsearch config file (same cluster.name, but this machine's own node.name and network.host)
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-1803A
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.88.133
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.88.132", "192.168.88.133"]
4) Start Elasticsearch
systemctl start elasticsearch
2. Install Apache
1) Install
yum -y install httpd
2) Start httpd
systemctl start httpd
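The entries Filebeat will ship come from /var/log/httpd/access_log in Apache's combined log format; knowing the field layout helps later when searching in Kibana. A sketch pulling the client IP and HTTP status code out of a sample line (the line itself is illustrative, not real traffic):

```shell
# A sample combined-format access_log line
line='192.168.88.132 - - [01/Jan/2019:12:00:00 +0800] "GET / HTTP/1.1" 403 4897 "-" "curl/7.29.0"'
# Whitespace-split fields: $1 = client IP, $9 = HTTP status code
echo "$line" | awk '{print $1, $9}'
```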
3. Install Filebeat
1) Install
rpm -ivh filebeat-6.8.1-x86_64.rpm
2) Edit the config file
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.ilm.enabled: false
setup.template.name: "filebeat-httpd"
setup.template.pattern: "filebeat-httpd-*"
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.redis:
  hosts: ["192.168.88.132:6379"]   # Redis server and port
  key: "filebeat-httpd"            # custom key name, consumed later by Logstash
  db: 1                            # which Redis database to use
  timeout: 5                       # connection timeout (seconds)
  password: 123321                 # Redis password

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
3) Start Filebeat
systemctl start filebeat
On the first machine, restart Logstash and Kibana:
systemctl restart logstash
systemctl restart kibana
On the second machine, start Filebeat:
systemctl start filebeat
Finally, open Kibana in a browser (http://192.168.88.132:5601, per the config above), create an index pattern matching redis-httpdlog-*, and browse the collected httpd logs.