Single-node ELK deployment from RPM packages

In production we deploy a single-node ELK stack and will eventually ship logs from all Linux and Windows application servers to it. This walkthrough connects only one Windows application server and one Linux application server to the ELK log-collection system as a demonstration; the remaining application servers can be onboarded the same way later.
I. Server environment
ELK server: 10.12.93.146 (4 cores, 8 GB RAM)
Linux application server: 10.12.93.151
Windows application server: 10.12.93.130

Software versions:
elasticsearch:elasticsearch-7.17.7-x86_64.rpm
logstash:logstash-7.17.7-x86_64.rpm
kibana:kibana-7.17.7-x86_64.rpm
filebeat:filebeat-7.17.7-x86_64.rpm
winlogbeat:winlogbeat-7.17.7-windows-x86_64.msi
Note: winlogbeat is installed on the Windows server; filebeat is installed on the CentOS servers.

II. Deployment procedure
1. On the ELK server (10.12.93.146):
(1) Disable SELinux, set the hostname, and add a hosts entry
[root@elk-log-server ~]# hostnamectl set-hostname elk-log-server --static
[root@elk-log-server ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.12.93.146  elk-log-server
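
The step title also calls for disabling SELinux, but no commands are shown above; a minimal sketch of the usual approach (only use it if permanently disabling SELinux is acceptable in your environment):
[root@elk-log-server ~]# setenforce 0                                                           (switch to permissive mode immediately)
[root@elk-log-server ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    (keep it disabled after reboot)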

(2) Install the JDK, elasticsearch, logstash, and kibana
[root@elk-log-server ~]# yum -y install java-1.8.0-openjdk*
[root@elk-log-server ~]# yum -y install elasticsearch-7.17.7-x86_64.rpm
[root@elk-log-server ~]# yum -y install kibana-7.17.7-x86_64.rpm
[root@elk-log-server ~]# yum -y install logstash-7.17.7-x86_64.rpm

(3) Edit the elasticsearch configuration file
[root@elk-log-server ~]# vim /etc/elasticsearch/elasticsearch.yml

cluster.name: xiantao-elk                        (cluster name; any value works for a single-node ELK)
node.name: elk-log-server                        (hostname of the ELK server)
path.data: /home/data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.12.93.146                       (IP of the ELK server)
http.port: 9200
discovery.seed_hosts: ["elk-log-server"]         (hostname of the ELK server)
cluster.initial_master_nodes: ["10.12.93.146"]   (IP of the ELK server)
Append the following at the end to enable certificate-based security:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.monitoring.collection.enabled: true

[root@elk-log-server ~]# vim /etc/elasticsearch/jvm.options
Give the JVM heap roughly half of the machine's physical RAM. The values below assume a 16 GB machine; on the 4-core/8 GB server listed above, -Xms4g/-Xmx4g would be the matching setting.

-Xms8g
-Xmx8g

(4) Set up the certificates
[root@elk-log-server ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil ca
[root@elk-log-server ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
(Press Enter through every prompt for both commands. When they finish, two files are generated, elastic-certificates.p12 and elastic-stack-ca.p12, either in the directory the commands were run from or in /usr/share/elasticsearch/.)
Move elastic-certificates.p12 and elastic-stack-ca.p12 into /etc/elasticsearch/:
[root@elk-log-server ~]# mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/
[root@elk-log-server ~]# chown -R elasticsearch:elasticsearch /etc/elasticsearch/
[root@elk-log-server ~]# mkdir -p /home/data/elasticsearch
[root@elk-log-server ~]# chown -R elasticsearch:elasticsearch /home/data/elasticsearch

(5) Start elasticsearch
[root@elk-log-server ~]# systemctl start elasticsearch
[root@elk-log-server ~]# systemctl enable elasticsearch
[root@elk-log-server ~]# systemctl status elasticsearch
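
As a quick check that elasticsearch actually came up (a sketch; the log file name is assumed to follow the cluster.name set above):
[root@elk-log-server ~]# ss -lntp | grep 9200                                  (port 9200 should be listening on 10.12.93.146)
[root@elk-log-server ~]# tail -n 50 /var/log/elasticsearch/xiantao-elk.log    (look for "started" and no errors)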

(6) Edit the logstash configuration file
[root@elk-log-server ~]# vim /etc/logstash/logstash.yml

node.name: elk-log-server            (hostname of the ELK server)
path.data: /home/data/logstash       (logstash data directory)
pipeline.ordered: auto
path.config: /etc/logstash/conf.d    (pipeline configuration directory)
log.level: info
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "123456"   (set your own; it must match the logstash_system password created in step (8))
xpack.monitoring.elasticsearch.hosts: ["http://elk-log-server:9200"]
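
The logstash data directory configured above is not created anywhere else in this guide; presumably it needs to exist and be writable by the logstash user, along the same lines as the elasticsearch data directory:
[root@elk-log-server ~]# mkdir -p /home/data/logstash
[root@elk-log-server ~]# chown -R logstash:logstash /home/data/logstash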

(7) Edit the kibana configuration file
[root@elk-log-server ~]# vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "10.12.93.146"
server.name: "elk-log-server"
elasticsearch.hosts: ["http://10.12.93.146:9200"]  (address kibana uses to reach elasticsearch)
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "123456"
i18n.locale: "zh-CN"
Append the following at the end:
xpack.reporting.encryptionKey: "a_random_string"
xpack.security.encryptionKey: "something_at_least_32_characters"
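
Both keys should be random strings of at least 32 characters; one way to generate them (openssl is assumed to be available, as it normally is on CentOS):
[root@elk-log-server ~]# openssl rand -hex 32   (run twice and paste one value into each encryptionKey setting)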

(8) Set login passwords for the built-in users
The same password (123456) is used for every user in this example.
[root@elk-log-server ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]: 123456
Reenter password for [elastic]: 123456
Enter password for [apm_system]: 123456
Reenter password for [apm_system]: 123456
Enter password for [kibana]: 123456
Reenter password for [kibana]: 123456
Enter password for [logstash_system]: 123456
Reenter password for [logstash_system]: 123456
Enter password for [beats_system]: 123456
Reenter password for [beats_system]: 123456
Enter password for [remote_monitoring_user]: 123456
Reenter password for [remote_monitoring_user]:123456
The following output indicates the passwords were set successfully:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
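
A quick sanity check that security is working, using the elastic password just set:
[root@elk-log-server ~]# curl -u elastic:123456 'http://10.12.93.146:9200/_cluster/health?pretty'   (expect "status" : "green" or "yellow")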

[root@elk-log-server ~]# /usr/share/kibana/bin/kibana-keystore --allow-root create
A Kibana keystore already exists. Overwrite? [y/N] y
Created Kibana keystore in /var/lib/kibana/kibana.keystore
[root@elk-log-server ~]# /usr/share/kibana/bin/kibana-keystore --allow-root add elasticsearch.username
Enter value for elasticsearch.username: kibana
[root@elk-log-server ~]# /usr/share/kibana/bin/kibana-keystore --allow-root add elasticsearch.password
Enter value for elasticsearch.password: ******

Create the logstash pipeline that receives Beats data on two ports (5044 for Linux, 5045 for Windows) and writes each to its own index:
[root@elk-log-server ~]# vim /etc/logstash/conf.d/filebeats.conf

input {
  beats {
    port => 5044
    add_field => { "OS_type" => "linux" }
  }

  beats {
    port => 5045
    add_field => { "OS_type" => "windows" }
  }
}

output {
  if [OS_type] == "linux" {
    elasticsearch {
      hosts => ["elk-log-server:9200"]
      user => "elastic"
      password => "123456"             # the elastic password set in step (8)
      manage_template => true
      index => "filebeat-%{+YYYY.MM}"
    }
  }
  if [OS_type] == "windows" {
    elasticsearch {
      hosts => ["elk-log-server:9200"]
      user => "elastic"
      password => "123456"             # the elastic password set in step (8)
      manage_template => true
      index => "winlogbeat-%{+YYYY.MM}"
    }
  }
}
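
Before starting logstash, the pipeline can be syntax-checked with the stock config-test flag:
[root@elk-log-server ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/filebeats.conf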

(9) Start kibana and logstash
[root@elk-log-server ~]# systemctl start kibana
[root@elk-log-server ~]# systemctl enable kibana
[root@elk-log-server ~]# systemctl status kibana
[root@elk-log-server ~]# systemctl start logstash
[root@elk-log-server ~]# systemctl enable logstash
[root@elk-log-server ~]# systemctl status logstash
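
Once both services are up, the listening ports can be verified (5601 for kibana, 5044/5045 for the Beats inputs):
[root@elk-log-server ~]# ss -lntp | egrep '5601|5044|5045'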

(10) Install filebeat (on the ELK server itself, so its own system logs are collected too)
[root@elk-log-server ~]# yum -y install filebeat-7.17.7-x86_64.rpm

(11) Edit the filebeat configuration file
[root@elk-log-server ~]# vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages*
    - /var/log/secure*
    - /var/log/cron*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
fields:
  ip: 10.12.93.146
setup.kibana:
output.logstash:
  hosts: ["10.12.93.146:5044"]
  codec: json
processors:
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.type", "agent.version", "ecs.version", "event.code", "event.created", "event.kind", "event.provider", "host.architecture", "host.id", "host.name", "host.os.build", "host.os.family", "host.os.kernel", "host.os.platform", "host.os.version", "process.name", "user.domain", "winlog.activity_id", "winlog.api", "winlog.computer_name", "winlog.event_data.CallerProcessId", "winlog.event_data.SubjectDomainName", "winlog.event_data.SubjectLogonId", "winlog.event_data.SubjectUserName", "winlog.event_data.SubjectUserSid", "winlog.event_data.TargetDomainName", "winlog.event_data.TargetSid", "winlog.event_data.TargetUserName", "winlog.logon.id", "winlog.opcode", "winlog.process.pid", "winlog.process.thread.id", "winlog.provider_name", "winlog.record_id"]
      ignore_missing: false
logging.level: info
monitoring.enabled: false
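
filebeat can validate the file and test connectivity to logstash before the service is started (these subcommands are built into filebeat):
[root@elk-log-server ~]# filebeat test config -c /etc/filebeat/filebeat.yml
[root@elk-log-server ~]# filebeat test output -c /etc/filebeat/filebeat.yml   (checks the connection to 10.12.93.146:5044)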

(12) Start filebeat
[root@elk-log-server ~]# systemctl start filebeat
[root@elk-log-server ~]# systemctl enable filebeat
[root@elk-log-server ~]# systemctl status filebeat

(13) Open the firewall ports
[root@elk-log-server ~]# firewall-cmd --zone=public --add-port=9200/tcp --permanent   (elasticsearch port)
[root@elk-log-server ~]# firewall-cmd --zone=public --add-port=5601/tcp --permanent   (kibana port)
[root@elk-log-server ~]# firewall-cmd --zone=public --add-port=5044/tcp --permanent   (beats input for linux)
[root@elk-log-server ~]# firewall-cmd --zone=public --add-port=5045/tcp --permanent   (beats input for windows)
[root@elk-log-server ~]# systemctl restart firewalld
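
The active rules can then be confirmed with:
[root@elk-log-server ~]# firewall-cmd --zone=public --list-ports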

At this point the single-node ELK + Filebeat deployment is complete. Open http://10.12.93.146:5601 in a browser and log in as the elastic user with the password 123456.

III. Deploy Filebeat on the Linux application server (10.12.93.151) to ship its logs to the ELK server
1. Install filebeat
[root@elk-log-server ~]# rpm -ivh filebeat-7.17.7-x86_64.rpm
[root@elk-log-server ~]# vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages*
    - /var/log/secure*
    - /var/log/cron*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
fields:
  ip: 10.12.93.151               (IP of this server)
setup.kibana:
output.logstash:
  hosts: ["10.12.93.146:5044"]   (ELK port that receives linux system logs)
  codec: json
processors:
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.type", "agent.version", "ecs.version", "event.code", "event.created", "event.kind", "event.provider", "host.architecture", "host.id", "host.name", "host.os.build", "host.os.family", "host.os.kernel", "host.os.platform", "host.os.version", "process.name", "user.domain", "winlog.activity_id", "winlog.api", "winlog.computer_name", "winlog.event_data.CallerProcessId", "winlog.event_data.SubjectDomainName", "winlog.event_data.SubjectLogonId", "winlog.event_data.SubjectUserName", "winlog.event_data.SubjectUserSid", "winlog.event_data.TargetDomainName", "winlog.event_data.TargetSid", "winlog.event_data.TargetUserName", "winlog.logon.id", "winlog.opcode", "winlog.process.pid", "winlog.process.thread.id", "winlog.provider_name", "winlog.record_id"]
      ignore_missing: false
logging.level: info
monitoring.enabled: false
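
From the application server, connectivity to the ELK Beats port can be tested before the service is started:
filebeat test output -c /etc/filebeat/filebeat.yml   (should report a successful connection to 10.12.93.146:5044)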

[root@elk-log-server ~]# systemctl start filebeat
[root@elk-log-server ~]# systemctl enable filebeat
[root@elk-log-server ~]# systemctl status filebeat
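
Before moving to the browser, the new index can also be checked from the command line against elasticsearch (assuming the elastic password set earlier):
curl -u elastic:123456 'http://10.12.93.146:9200/_cat/indices/filebeat-*?v'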

2. In Kibana, check that the new index has arrived and add an index pattern for it
3. Check the Discover page for incoming logs
At this point, Linux system logs are flowing into ELK.

IV. Install winlogbeat on the Windows server

1. Install winlogbeat
Upload winlogbeat-7.17.7-windows-x86_64.msi to the server and run the installer.
2. Edit the configuration file
Configuration file path: C:\ProgramData\Elastic\Beats\winlogbeat\winlogbeat.yml

setup.template.settings:
  index.number_of_shards: 3
fields:
  ip: 10.12.93.130               (add this line; IP of the Windows server)
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
output.logstash:
  hosts: ["10.12.93.146:5045"]   (ELK port that receives windows logs)
logging.level: info

3. Start the service
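The MSI installer normally registers winlogbeat as a Windows service; a minimal sketch of starting and checking it from PowerShell (the service name winlogbeat is assumed from the default MSI install):
PS C:\> Start-Service winlogbeat        (start the service)
PS C:\> Get-Service winlogbeat          (status should show Running)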
Then add the winlogbeat index pattern in Kibana and check the Discover page for incoming logs.
At this point, Windows system logs are flowing into ELK.
