1. Prerequisites
Prepare three machines:
elasticsearch on all three machines
redis on the server
filebeat on the client machine
logstash on the server
kibana for visualization
2. Install the packages
rpm -ivh jdk-8u131-linux-x64_.rpm
rpm -ivh elasticsearch-6.6.2.rpm
rpm -ivh kibana-6.6.2-x86_64.rpm
tar xzf redis-5.0.0.tar.gz [building from source requires: yum -y install gcc gcc-c++]
3. Edit the es (elasticsearch) configuration
vim /etc/elasticsearch/elasticsearch.yml [the configuration file]
cluster.name: my-application [line 17; any name will do, but it must be the same on all nodes]
node.name: node-1 [line 23; on the second machine change the 1 to 2, and so on]
network.host: 192.168.242.133 [line 55; set to this machine's own IP]
http.port: 9200 [line 59; the HTTP port]
discovery.zen.ping.unicast.hosts: ["192.168.242.133", "192.168.242.134", "192.168.242.135"] [line 68; list every IP in the cluster; only the server needs this line, so after sending the file to the other two machines remember to comment it out there]
4. Distribute the configuration file
[root@localhost ~]# scp /etc/elasticsearch/elasticsearch.yml root@192.168.242.134:/etc/elasticsearch/elasticsearch.yml [the output below is printed by scp itself]
The authenticity of host '192.168.242.134 (192.168.242.134)' can't be established.
ECDSA key fingerprint is SHA256:PvRK+yunxPG/AeETCzZ7LUb73pJJLF1JJCql5/3+geU.
ECDSA key fingerprint is MD5:52:f9:0d:d0:69:b5:36:df:9e:a7:4c:0e:17:39:ce:ab.
Are you sure you want to continue connecting (yes/no)? yes [type yes]
Warning: Permanently added '192.168.242.134' (ECDSA) to the list of known hosts.
root@192.168.242.134's password: [enter the remote root password]
elasticsearch.yml 100% 2906 1.4MB/s
**After sending the file (repeat the scp for 192.168.242.135), remember to edit it on each remote machine: set network.host to that machine's IP and change node-1 to node-2 [line 23], and so on**
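With the configuration in place on all three nodes, the cluster can be started and verified. A quick sketch, assuming the systemd units installed by the RPM and the IPs used above:

```shell
# Start elasticsearch on each of the three nodes
systemctl daemon-reload
systemctl enable --now elasticsearch

# From any machine, confirm that all three nodes joined and the cluster is healthy
curl -s 'http://192.168.242.133:9200/_cluster/health?pretty'
curl -s 'http://192.168.242.133:9200/_cat/nodes?v'
```

_cluster/health should report "number_of_nodes": 3; a red status usually means the nodes cannot reach each other on the transport port 9300.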
5. Install redis on the server
yum -y install gcc gcc-c++
tar xzf redis-5.0.0.tar.gz
cp -r redis-5.0.0 /usr/local/redis
cd /usr/local/redis
make distclean [without this the build may fail]
make
ln -s /usr/local/redis/src/redis-server /usr/bin/redis-server
ln -s /usr/local/redis/src/redis-cli /usr/bin/redis-cli
vim /usr/local/redis/redis.conf
bind 192.168.242.133 (line 69; set to this machine's IP)
requirepass 123321 (line 508; add a new line with the password)
redis-server /usr/local/redis/redis.conf
echo 511 > /proc/sys/net/core/somaxconn
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
vim /usr/local/redis/redis.conf
daemonize yes (line 136; change no to yes so redis runs in the background)
redis-server /usr/local/redis/redis.conf
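A quick sanity check that redis came up with the address and password configured above (the -a flag passes the password non-interactively):

```shell
# Should print PONG if redis is up and the password matches
redis-cli -h 192.168.242.133 -a 123321 ping

# vm.overcommit_memory was appended to /etc/sysctl.conf above;
# apply it to the running kernel without a reboot
sysctl -p
```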
6. Install filebeat on the client machine
yum -y install httpd
systemctl start httpd
Open the Apache test page in a browser and refresh it several times to generate entries in the access log.
1) yum -y install filebeat-6.8.1-x86_64.rpm [install filebeat to collect the httpd logs]
2) vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.ilm.enabled: false
setup.template.name: "filebeat-httpd"
setup.template.pattern: "filebeat-httpd-*"
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.242.133:6379"]   # redis server and port
  key: "filebeat-httpd"             # custom key name, read later by logstash
  db: 1                             # which redis database to write to
  timeout: 5                        # timeout in seconds
  password: 123321                  # redis password
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
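Before starting the service, filebeat can validate the file and probe the redis output itself; these subcommands exist in filebeat 6.x:

```shell
# Check the YAML for syntax errors
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify filebeat can connect and authenticate to the redis output
filebeat test output -c /etc/filebeat/filebeat.yml

# Then enable and start the service
systemctl enable --now filebeat
```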
Check whether filebeat's logs have reached the redis cache:
1) redis-cli -h 192.168.242.133
192.168.242.133:6379> auth 123321    # authenticate with the redis password
OK
192.168.242.133:6379> get *          # GET reads one key; nothing is literally named "*", so nil is expected
(nil)
192.168.242.133:6379> KEYS *         # list the keys in the current database (db 0)
(empty list or set)                  # empty here is expected: filebeat writes to db 1 (see filebeat.yml)
192.168.242.133:6379> SELECT 1       # switch to db 1
OK
192.168.242.133:6379[1]> KEYS *
1) "filebeat-httpd"                  # this key confirms the httpd logs were collected; if it is missing, refresh the httpd page and run KEYS * again
7. Install logstash on the server
1) yum -y install logstash-6.6.0.rpm
2) vim /etc/logstash/conf.d/httpd.conf
input {
  redis {
    data_type => "list"
    host => "192.168.242.133"
    password => "123321"
    port => "6379"
    db => "1"
    key => "filebeat-httpd"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.242.133:9200"]
    index => "redis-httpdlog-%{+YYYY.MM.dd}"
  }
}
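The pipeline can be syntax-checked before starting the service; a sketch, assuming the default RPM install path:

```shell
# -t (--config.test_and_exit) parses the pipeline configuration and exits
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

systemctl enable --now logstash

# After logstash has run for a moment, the daily index should appear in elasticsearch
curl -s 'http://192.168.242.133:9200/_cat/indices?v' | grep redis-httpdlog
```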
8. Kibana display
yum -y install kibana-6.6.2-x86_64.rpm
vim /etc/kibana/kibana.yml
1. server.port: 5601 (line 2)
2. server.host: "192.168.242.133" (line 7; set to this machine's IP)
3. elasticsearch.hosts: ["http://192.168.242.133:9200"] (line 28; set to your elasticsearch IP)
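After editing kibana.yml, start the service and confirm it answers on port 5601:

```shell
systemctl enable --now kibana

# The status API returns JSON once kibana is up (startup can take ~30 s)
curl -s 'http://192.168.242.133:5601/api/status'
```

Then open http://192.168.242.133:5601 in a browser and create an index pattern matching redis-httpdlog-* under Management → Index Patterns to view the httpd logs.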