EFK-redis-logstash
Preface
Prepare two virtual machines.
I. Configure elasticsearch
Both machines follow the steps below; first machine:
Install the JDK
rpm -ivh jdk-8u131-linux-x64_.rpm
Install elasticsearch
rpm -ivh elasticsearch-6.8.1.rpm
Edit the elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK-Cluster #cluster name; nodes with the same name join the same cluster
node.name: elk-node1 #uncomment and set the node name
network.host: 192.168.1.100 #IP address to listen on
http.port: 9200 #port the service listens on
discovery.zen.ping.unicast.hosts: ["192.168.1.100", "192.168.1.220"] #this node's IP and the other node's IP
Second machine
Install the JDK
rpm -ivh jdk-8u131-linux-x64_.rpm
Install elasticsearch
rpm -ivh elasticsearch-6.8.1.rpm
Edit the elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK-Cluster #same cluster name as the first node
node.name: elk-node2 #uncomment and set the node name
network.host: 192.168.1.220 #this node's own IP (not the first node's)
http.port: 9200 #port the service listens on
discovery.zen.ping.unicast.hosts: ["192.168.1.100", "192.168.1.220"] #same list on both nodes
Start elasticsearch on both machines, wait a few seconds, then check whether port 9200 is up:
systemctl start elasticsearch
netstat -ntpl
Open a browser and visit:
http://<node-ip>:9200
Then check cluster health and connectivity:
curl http://192.168.1.100:9200/_cluster/health?pretty=true
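Beyond eyeballing the pretty-printed JSON, the health check can be scripted. A minimal sketch, assuming the cluster answers at 192.168.1.100:9200; the sample response below is abridged to the fields a healthy two-node cluster reports:

```shell
# Abridged sample of a _cluster/health response; on a live cluster use:
#   resp=$(curl -s http://192.168.1.100:9200/_cluster/health)
resp='{"cluster_name":"ELK-Cluster","status":"green","number_of_nodes":2}'

# Pull out the interesting fields without jq
status=$(echo "$resp" | grep -o '"status":"[^"]*"' | cut -d'"' -f4)
nodes=$(echo "$resp" | grep -o '"number_of_nodes":[0-9]*' | cut -d: -f2)
echo "status=$status nodes=$nodes"
```

A cluster reporting "green" (or at least "yellow") with number_of_nodes equal to 2 confirms the two nodes found each other.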
II. Add a message queue
1. Install redis and logstash
Upload the redis tarball to the first machine, then extract and build it:
tar -xvzf redis-5.0.0.tar.gz
cd /usr/local/redis-5.0.0/
make distclean #clean out any previously generated files first
make
Once the build finishes, configure redis:
vim redis.conf
daemonize yes
bind xx.xx.x.xx #your own IP
requirepass 123456
With the configuration done, log in to redis:
./redis-5.0.0/src/redis-cli -c -h <your-ip>
xxx.xxx.x.xxx:6379> AUTH 123456
OK
xxx.xxx.x.xxx:6379> ping
PONG
xxx.xxx.x.xxx:6379> KEYS *
(empty list or set)
xxx.xxx.x.xxx:6379> quit
2. Install logstash
vim /etc/logstash/conf.d/httpd.conf
input {
  file {
    path => "/var/log/messages"
    type => "systemlog" #must match the conditional in the output block below
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  if [type] == "systemlog" {
    redis {
      data_type => "list"
      host => "192.168.6.130"
      password => "123456"
      port => "6379"
      db => "0"
      key => "systemlog"
    }
  }
}
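For orientation, each matching event lands in the redis list as one JSON string. The sketch below uses a hypothetical, abridged entry to show the type field that the output conditional routes on:

```shell
# Hypothetical, abridged example of one list entry logstash pushes to redis
event='{"@timestamp":"2019-07-01T12:00:00.000Z","type":"systemlog","message":"redis-test","path":"/var/log/messages"}'

# The output block's "if [type] == ..." conditional routes on this field
event_type=$(echo "$event" | grep -o '"type":"[^"]*"' | cut -d'"' -f4)
echo "$event_type"
```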
After configuring, check that the syntax is correct:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/httpd.conf -t
Restart logstash
systemctl restart logstash
Append test lines to the messages log:
echo "redis-test" >> /var/log/messages
echo "systemlog" >> /var/log/messages
Log in to redis and check:
./redis-5.0.0/src/redis-cli -c -h <your-ip>
192.168.6.130:6379> AUTH 123456
OK
192.168.6.130:6379> SELECT 0
OK
192.168.6.130:6379> KEYS *
1) "systemlog"
192.168.6.130:6379> LLEN systemlog
(integer) 126
Configure logstash to pull the data out of redis and into elasticsearch:
vim /etc/logstash/conf.d/redis-read.conf
input {
  redis {
    data_type => "list"
    host => "192.168.6.130" #the redis server configured above
    password => "123456"
    port => "6379"
    db => "0"
    key => "systemlog"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.100:9200"]
    index => "redis-systemlog-%{+YYYY.MM.dd}"
  }
}
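The %{+YYYY.MM.dd} suffix in the index name is expanded from each event's @timestamp (in UTC), so events ingested today go to a dated index. A sketch of the name as built for today's events:

```shell
# Mirror logstash's Joda-style YYYY.MM.dd date suffix with date(1) in UTC
index="redis-systemlog-$(date -u +%Y.%m.%d)"
echo "$index"
# On the live cluster, confirm the index was created:
#   curl -s http://192.168.1.100:9200/_cat/indices?v | grep redis-systemlog
```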
Test whether the logstash configuration is correct:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-read.conf -t
systemctl restart logstash
Log in to redis to verify:
./redis-5.0.0/src/redis-cli -h 192.168.6.130
192.168.6.130:6379> AUTH 123456
OK
192.168.6.130:6379> SELECT 0
OK
192.168.6.130:6379> KEYS *
(empty list or set) #the data has already been consumed by logstash
192.168.6.130:6379> SELECT 1
OK
192.168.6.130:6379[1]> KEYS *
(empty list or set)
3. Install filebeat
Install filebeat on the other machine.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.8.1-x86_64.rpm
yum -y localinstall filebeat-6.8.1-x86_64.rpm
Edit the configuration file:
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log # "log" (the default) reads lines from log files
  enabled: true # set to false to disable this input
  paths:
    - /var/log/messages # absolute paths of the logs to collect; multiple entries allowed
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.file:
  path: "/tmp"
  filename: "filebeat.txt"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
systemctl start filebeat
Test and verify the data:
echo "test" >> /var/log/messages
tail /tmp/filebeat.txt
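Each line of /tmp/filebeat.txt is one JSON document that wraps the original log line in a message field. A sketch of pulling it back out, shown on a hypothetical abridged sample (on the real box, replace the sample with line=$(tail -1 /tmp/filebeat.txt)):

```shell
# Hypothetical, abridged line of filebeat's JSON file output
line='{"@timestamp":"2019-07-01T12:00:00.000Z","message":"test","source":"/var/log/messages"}'

# Extract the "message" field without jq
msg=$(echo "$line" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
echo "$msg"
```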
Configure filebeat to collect the system logs and ship them to redis:
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.6.130:6379"] #redis server and port
  key: "httpd-log-33" #custom key name, used when logstash reads the queue later
  db: 1 #which redis database to use
  timeout: 5 #connection timeout in seconds
  password: 123456 #redis password
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
systemctl restart filebeat
Log in to redis to test:
./redis-5.0.0/src/redis-cli -h 192.168.6.130
192.168.6.130:6379> AUTH 123456
OK
192.168.6.130:6379> SELECT 1
OK
192.168.6.130:6379[1]> KEYS *
1) "httpd-log-33"
192.168.6.130:6379[1]> LLEN httpd-log-33
(integer) 3
On the logstash server, configure it to pull the data from the redis server:
cat /etc/logstash/conf.d/redis-filebeat.conf
input {
  redis {
    data_type => "list"
    host => "192.168.6.130"
    password => "123456"
    port => "6379"
    db => "1"
    key => "httpd-log-33"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.100:9200"]
    index => "file-systemlog-%{+YYYY.MM.dd}"
  }
}
systemctl restart logstash
Write test lines to the file:
echo "11111111111111" >> /var/log/messages
echo "2222222222" >> /var/log/messages
echo "33333333" >> /var/log/messages
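Once logstash drains the queue, the three test lines should land in today's file-systemlog index on elasticsearch. A sketch of building that index name (the live query is shown as a comment; it assumes the node at 192.168.1.100):

```shell
# Per-day index name as generated by the output section above (UTC date)
index="file-systemlog-$(date -u +%Y.%m.%d)"
echo "$index"
# Search for one of the test lines on the live cluster:
#   curl -s "http://192.168.1.100:9200/${index}/_search?q=message:33333333&pretty"
```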
III. Install kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.8.1-x86_64.rpm
yum -y localinstall kibana-6.8.1-x86_64.rpm
vim /etc/kibana/kibana.yml
server.port: 5601 #port to listen on
server.host: "192.168.1.100" #address to listen on
elasticsearch.hosts: ["http://192.168.1.100:9200"] #elasticsearch server address
systemctl start kibana
ss -nlt |grep 5601
LISTEN 0 128 192.168.1.100:5601
Finally, create the index patterns in the kibana web UI.