Technical Plan for Deploying ELK 7.17.18 on RedHat 7/8

Deployment environment

To save resources, a single test machine is used to simulate a 3-node elasticsearch cluster. The host runs 3 elasticsearch instances, 1 logstash instance, 1 kibana instance, and 1 filebeat instance at the same time. In a production environment, these service instances should be deployed in a distributed fashion across separate hosts.

ELK-TEST1 192.168.10.11

This plan has been verified in the following operating system environments:

  • OS: RHEL 7.9, OpenSSL 1.1.1w
  • OS: RHEL 8.8, OpenSSL 3.1.2

Operating system tuning and configuration

Hostname-to-IP mapping

cat << EOF >> /etc/hosts
192.168.10.11 node-1
192.168.10.11 node-2
192.168.10.11 node-3
192.168.10.11 ELK-TEST1
EOF

Disable swap

swapoff -a
sed -i '/swap/d' /etc/fstab

Adjust system resource limits

File handles and maximum number of concurrent processes/threads:

cat << EOF > /etc/security/limits.d/usercustom.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 4096
* hard nproc 4096
* soft fsize unlimited
* hard fsize unlimited
* soft memlock unlimited
* hard memlock unlimited
EOF

Virtual memory and network connection retries

cat << EOF > /etc/sysctl.d/98-usercustom.conf
vm.max_map_count=262144
net.ipv4.tcp_retries2 = 5
EOF

sysctl -p /etc/sysctl.d/98-usercustom.conf

Create a dedicated system user for elasticsearch

useradd elastic

Review the system security configuration

In a production network it is not advisable to disable the system firewall; instead, simply allow all network traffic between the hosts that run the ELK services.
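
If firewalld is running, one option is to allow all traffic from the other ELK node addresses instead of turning the firewall off entirely (a sketch; replace the source address with the IPs of your ELK hosts):

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.10.11" accept'
firewall-cmd --reload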

Disable SELinux

setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Reboot the system so that all of the settings above take effect.
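
After the reboot, a quick sanity check (a minimal sketch) confirms that the settings took effect:

su - elastic -c 'ulimit -n'  # expect 65535
sysctl vm.max_map_count      # expect 262144
getenforce                   # expect Disabled
swapon --show                # expect no output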

Deploy the ELK packages

Upload the following 4 packages to the /opt directory on the host and extract them:
elasticsearch-7.17.18-linux-x86_64.tar.gz
filebeat-7.17.18-linux-x86_64.tar.gz
kibana-7.17.18-linux-x86_64.tar.gz
logstash-7.17.18-linux-x86_64.tar.gz

cd /opt
tar zxf elasticsearch-7.17.18-linux-x86_64.tar.gz 
tar zxf filebeat-7.17.18-linux-x86_64.tar.gz 
tar zxf kibana-7.17.18-linux-x86_64.tar.gz 
tar zxf logstash-7.17.18-linux-x86_64.tar.gz
mkdir soft
mv *.gz soft

Since a single host is used to simulate the whole ELK deployment, and the elasticsearch cluster must run in production mode with at least 3 es instances, adjust the deployment directories as follows:

cd /opt
mv elasticsearch-7.17.18/ elastic-node1
cp -r elastic-node1/ elastic-node2
cp -r elastic-node1/ elastic-node3
mv logstash-7.17.18/ logstash
mv kibana-7.17.18-linux-x86_64/ kibana
mv filebeat-7.17.18-linux-x86_64/ filebeat
chown -R elastic.elastic *

The final layout of /opt looks like this:

elastic-node1  elastic-node2  elastic-node3  filebeat  kibana  logstash  soft

Initial configuration of the ELK cluster services

All of the configuration below must be performed as the ordinary elastic user!
Because all services are deployed on one host here, every command below is run on that same host. If your planned ELK cluster spans multiple hosts, run each step on the host where the corresponding service instance is actually deployed.

Create the certificates and keys used by the ELK cluster

cd /opt/elastic-node1
mkdir makecerts
./bin/elasticsearch-certutil ca --out ./makecerts/elastic-stack-ca.p12 --days 36500  # issue the CA root certificate, valid for 100 years
./bin/elasticsearch-certutil cert --ca ./makecerts/elastic-stack-ca.p12 --out ./makecerts/elastic-certificates.p12 --dns node-1,ELK-TEST1,node-2,node-3 --ip 192.168.10.11 --days 36500  # record the passwords for both certificates above
/opt/elastic-node1/jdk/bin/keytool -keystore ./makecerts/elastic-stack-ca.p12 -list  # inspect the CA certificate
/opt/elastic-node1/jdk/bin/keytool -keystore ./makecerts/elastic-certificates.p12 -list  # inspect the elasticsearch service certificate

Issue certificates for the other service instances:

./bin/elasticsearch-certutil cert --ca ./makecerts/elastic-stack-ca.p12 --out ./makecerts/logstash.zip --name logstash --dns node-1,ELK-TEST1,node-2,node-3 --ip 192.168.10.11 --pem --days 36500 
./bin/elasticsearch-certutil cert --ca ./makecerts/elastic-stack-ca.p12 --out ./makecerts/kibana.zip --name kibana --dns node-1 --ip 192.168.10.11 --pem --days 36500 
./bin/elasticsearch-certutil cert --ca ./makecerts/elastic-stack-ca.p12 --out ./makecerts/filebeat-10.11.zip --name filebeat-10.11 --dns node-1,ELK-TEST1,node-2,node-3 --pem --days 36500 

./bin/elasticsearch-certutil cert --ca ./makecerts/elastic-stack-ca.p12 --out ./makecerts/metricbeat.zip --name metricbeat --dns node-1 --ip 192.168.10.11 --pem --days 36500 

openssl pkcs12 -nocerts -nodes -in ./makecerts/elastic-stack-ca.p12 -out ./makecerts/private.pem
openssl pkcs12 -clcerts -nokeys -in ./makecerts/elastic-stack-ca.p12 -out ./makecerts/cacert.pem  # export a PEM-format copy of the CA certificate
openssl x509 -in ./makecerts/cacert.pem -noout -text  # inspect the PEM CA certificate

Store the elasticsearch certificate passwords in the keystore and truststore settings:

./bin/elasticsearch-keystore create  
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
./bin/elasticsearch-keystore list  # list the entries stored in the keystore
./bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password  # show a specific password stored in the keystore
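
If you prefer to script this step instead of typing the passwords interactively, elasticsearch-keystore can also read the value from standard input (a sketch; the placeholder below must be replaced with the real certificate password):

echo 'CERT_PASSWORD_HERE' | ./bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password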

Unzip kibana.zip, logstash.zip, and filebeat-10.11.zip, then prepare the certificate files required by the logstash service:

cd /opt/elastic-node1/makecerts
unzip filebeat-10.11.zip 
unzip kibana.zip 
unzip logstash.zip
rm -rf *.zip

The logstash beats input plugin requires a PKCS#8-format private key, and the elasticsearch output plugin requires the CA certificate to be stored in a PKCS#12 truststore:

cd /opt/elastic-node1/makecerts/logstash
openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.p8  
/opt/elastic-node1/jdk/bin/keytool -import -file /opt/elastic-node1/makecerts/cacert.pem -keystore truststore.p12 -storepass ueyf36456fh -noprompt -storetype pkcs12
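
To confirm the truststore was created correctly, list its contents (using the same store password as above):

/opt/elastic-node1/jdk/bin/keytool -list -keystore truststore.p12 -storetype pkcs12 -storepass ueyf36456fh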

Distribute the certificate and key files needed by each instance:

cd /opt/elastic-node1/makecerts
cp elastic-certificates.p12 /opt/elastic-node1/config
cp elastic-certificates.p12 /opt/elastic-node2/config
cp elastic-certificates.p12 /opt/elastic-node3/config
cp ../config/elasticsearch.keystore /opt/elastic-node2/config/
cp ../config/elasticsearch.keystore /opt/elastic-node3/config/
cp cacert.pem logstash/*  /opt/logstash/config/
cp cacert.pem kibana/kibana.* /opt/kibana/config/
cp cacert.pem filebeat-10.11/* /opt/filebeat/

Note: archive the makecerts directory as a backup: tar zcf makecerts.tgz makecerts/

Configure the elasticsearch instances

Set the JVM heap size for each elasticsearch instance; adjust the values to your environment:

cat << EOF > /opt/elastic-node1/config/jvm.options.d/jvm-heap.conf
-Xms4g
-Xmx4g
EOF

cat << EOF > /opt/elastic-node2/config/jvm.options.d/jvm-heap.conf
-Xms4g
-Xmx4g
EOF

cat << EOF > /opt/elastic-node3/config/jvm.options.d/jvm-heap.conf
-Xms4g
-Xmx4g
EOF

In this example, the elasticsearch.yml files of the 3 es instances differ in only 5 settings: node.name, path.data, path.logs, http.port, and transport.port. When deploying on 3 separate hosts with a single elasticsearch instance per host, the instances differ only in the node.name and network.host settings.

cat << EOF > /opt/elastic-node1/config/elasticsearch.yml
cluster.name: elk-application
node.name: node-1
node.master: true
node.data: true
path.data: /opt/elastic-node1/data
path.logs: /opt/elastic-node1/logs
bootstrap.memory_lock: true
network.host: 192.168.10.11
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["192.168.10.11:9300", "192.168.10.11:9301", "192.168.10.11:9302"]
# Comment out cluster.initial_master_nodes promptly after the cluster has started for the first time!
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12
EOF

cat << EOF > /opt/elastic-node2/config/elasticsearch.yml
cluster.name: elk-application
node.name: node-2
node.master: true
node.data: true
path.data: /opt/elastic-node2/data
path.logs: /opt/elastic-node2/logs
bootstrap.memory_lock: true
network.host: 192.168.10.11
http.port: 9201
transport.port: 9301
discovery.seed_hosts: ["192.168.10.11:9300", "192.168.10.11:9301", "192.168.10.11:9302"]
# Comment out cluster.initial_master_nodes promptly after the cluster has started for the first time!
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12
EOF

cat << EOF > /opt/elastic-node3/config/elasticsearch.yml
cluster.name: elk-application
node.name: node-3
node.master: true
node.data: true
path.data: /opt/elastic-node3/data
path.logs: /opt/elastic-node3/logs
bootstrap.memory_lock: true
network.host: 192.168.10.11
http.port: 9202
transport.port: 9302
discovery.seed_hosts: ["192.168.10.11:9300", "192.168.10.11:9301", "192.168.10.11:9302"]
# Comment out cluster.initial_master_nodes promptly after the cluster has started for the first time!
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12
EOF
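
As a quick check that the three instance configurations differ only in the expected settings, you can diff them:

diff /opt/elastic-node1/config/elasticsearch.yml /opt/elastic-node2/config/elasticsearch.yml
diff /opt/elastic-node1/config/elasticsearch.yml /opt/elastic-node3/config/elasticsearch.yml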

Configure the logstash instance

logstash service configuration file:

cat << EOF > /opt/logstash/config/logstash.yml
node.name: logstash-10-11
xpack.monitoring.enabled: false
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: hDagwy141d
#xpack.monitoring.elasticsearch.hosts: ["https://node-1:9200", "https://node-2:9201", "https://node-3:9202"]
xpack.monitoring.elasticsearch.hosts: ["https://node-1:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/opt/logstash/config/cacert.pem"
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
EOF

logstash data-forwarding (pipeline) configuration file:

cat << EOF > /opt/logstash/config/logstash.conf
input {
  beats {
    id => "logstash-10-11"
    port => 5044
    ssl => true
    ssl_certificate_authorities => "/opt/logstash/config/cacert.pem"
    ssl_certificate => "/opt/logstash/config/logstash.crt"
    ssl_key => "/opt/logstash/config/logstash.p8"
    ssl_verify_mode => "force_peer"
  }
}
output {
  elasticsearch {
    id => "elk-application"
    hosts => ["https://node-1:9200", "https://node-2:9201", "https://node-3:9202"]
    manage_template => true
    template_overwrite => true
    index => "test-logs-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "iwuHBG865"
    ssl_certificate_verification => true
    truststore => "/opt/logstash/config/truststore.p12"
    truststore_password => "ueyf36456fh"
  }
}
EOF
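
Before starting the service, the pipeline syntax can be validated with logstash's built-in config test (a sketch):

cd /opt/logstash
./bin/logstash -f ./config/logstash.conf --config.test_and_exit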

Configure the kibana instance

cat << EOF > /opt/kibana/config/kibana.yml
server.host: "node-1"
server.publicBaseUrl: "https://192.168.10.11:5601/"
elasticsearch.hosts: ["https://192.168.10.11:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "hfrr53df64"
server.ssl.enabled: true
server.ssl.certificate: /opt/kibana/config/kibana.crt
server.ssl.key: /opt/kibana/config/kibana.key
elasticsearch.ssl.certificateAuthorities: [ "/opt/kibana/config/cacert.pem" ]
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "dfe2435fdsdfg2424wegrcvnjhgfr5678909iju"
xpack.security.sessionTimeout: 1800000
xpack.monitoring.elasticsearch.hosts: [ "https://192.168.10.11:9200" ]
xpack.monitoring.elasticsearch.ssl.certificateAuthorities: config/cacert.pem
EOF
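
The xpack.security.encryptionKey value above is only a placeholder; any random string of at least 32 characters will do. One way to generate one (a sketch):

openssl rand -hex 32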

Configure filebeat

The filebeat.yml configuration is listed below. Because it contains characters that the shell would expand, it cannot be written directly with a cat heredoc; copy the content below into the configuration file, replacing its existing contents:

filebeat.inputs:
- type: filestream
  id: ELK-TEST1-id
  enabled: true
  paths:
    - /var/log/test-logs/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.logstash:
  hosts: ["node-1:5044"]
  ssl.certificate_authorities: ["/opt/filebeat/cacert.pem"]
  ssl.certificate: "/opt/filebeat/filebeat-10.11.crt"
  ssl.key: "/opt/filebeat/filebeat-10.11.key"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
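
Once filebeat.yml is in place, filebeat's built-in checks can validate the configuration and, after logstash is up, the output connection (a sketch):

cd /opt/filebeat
./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml   # run this after logstash is listening on port 5044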

Configure the rsyslogd service

Many security and network appliances can only forward their logs to a syslog server, so we configure an rsyslogd service to receive these device logs. Once the logs are written to disk, filebeat collects them and ships them to the ELK platform.

Check /etc/rsyslog.conf and enable the following parameters:

module(load="imudp") # needs to be done just once
input(type="imudp" port="514")

module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")

Note: the configuration syntax differs slightly between rhel7 and rhel8, but the difference is minor; just locate and enable these entries.
Create the configuration file /etc/rsyslog.d/test-logs.conf with the following content:

$template remote-incoming-logs,"/var/log/test-logs/%fromhost-ip%_%$YEAR%.log"
*.* ?remote-incoming-logs
& ~

Then create the log directory and restart rsyslog:

mkdir -p /var/log/test-logs
systemctl restart rsyslog
systemctl status rsyslog

Pay attention to the rsyslog service logs and status.
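
To verify the setup end to end, you can send a test message to the rsyslog listener with the logger utility and check that a file appears under /var/log/test-logs/ (a sketch using util-linux logger options):

logger -n 192.168.10.11 -P 514 -T "rsyslog tcp test message"
ls /var/log/test-logs/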

Start each service component and watch the logs

1) Start the elasticsearch cluster and set the built-in account passwords

Note: start the 3 instances one by one. After starting each instance, watch the elk-application.log output first, and only start the next instance once the previous one has fully started.

cd /opt/elastic-node1
./bin/elasticsearch -d

cd /opt/elastic-node2
./bin/elasticsearch -d

cd /opt/elastic-node3
./bin/elasticsearch -d

After confirming that the log output and the cluster state are normal for all of the instances above, promptly comment out the cluster.initial_master_nodes setting in each elasticsearch.yml file!
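
One way to comment out the setting on all three instances in a single step (a sketch using sed):

sed -i 's/^cluster.initial_master_nodes/#&/' /opt/elastic-node{1,2,3}/config/elasticsearch.yml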

Run the following command (from any of the elasticsearch instance directories) to set the passwords of the built-in users:

./bin/elasticsearch-setup-passwords interactive

Note: the passwords set here must match the usernames and passwords already used in the various service configuration files above.
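
At this point the cluster health can be checked over HTTPS (a sketch; -k skips CA verification, or point --cacert at cacert.pem, and use the elastic password you just set):

curl -k -u elastic "https://192.168.10.11:9200/_cluster/health?pretty"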

2) Start the kibana service

cd /opt/kibana
nohup ./bin/kibana &

Note: watch the log output and confirm that the service is running normally.

Open https://192.168.10.11:5601 and log in as the elastic administrator user created above.

3) Start the logstash service

cd /opt/logstash
./bin/logstash -f ./config/logstash.conf &

4) Start the filebeat service
Run the following as the root user:

cd /opt/filebeat
chown root.root filebeat.yml
./filebeat -e -c filebeat.yml &

Note: in our use case filebeat collects some system logs under /var/log, which requires root privileges, hence the ownership change above.

Log in to the kibana console and configure index management

After logging in, open the Management page and create an index pattern.

Name: test-logs-*
On the Discover page you can now search the log data that has already been collected.

Create an index lifecycle management policy

Name: test-logs-policy
Enabling two lifecycle phases is sufficient (a sketch of an equivalent Dev Tools request follows the list):

  • hot phase: manages indices up to 30 days old
  • cold phase: manages indices older than 180 days
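
If you prefer to create the policy through the API instead of the Kibana UI, a minimal sketch in Dev Tools could look like this (the exact actions, e.g. rollover or allocation, should be adjusted to your needs):

PUT _ilm/policy/test-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": { "max_age": "30d" }
        }
      },
      "cold": {
        "min_age": "180d",
        "actions": {}
      }
    }
  }
}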

Create an index template

Open the Dev Tools page and run the following:

PUT _index_template/test-logs-template?pretty
{
        "index_patterns" : [
          "test-logs-*"
        ],
        "template" : {
          "settings" : {
            "index" : {
              "lifecycle" : {
                "name" : "test-logs-policy",
                "rollover_alias" : "test-logs"
              },
              "number_of_shards" : "1",
              "number_of_replicas" : "2"
            }
          },
          "aliases": {
            "test-logs": {}
          }
        }
}

View the index template:

GET _index_template/test-logs-template?pretty

At this point, the main configuration work is complete.
