Installing and Deploying ELK 8.3 on CentOS 7 (Single Node)


Architecture: Beats + Kafka + Logstash + Elasticsearch + Kibana
Version: 8.3.3
Download: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/yum/8.3.3/
Compatibility matrix:
https://www.elastic.co/cn/support/matrix#matrix_compatibility

一、Elasticsearch

  1. Install
    [root@node1 ~]# rpm -ivh /opt/elasticsearch-8.3.3-x86_64.rpm
    Installation prints a set of generated passwords; save them to a file for later use.
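If the generated elastic password is lost, it can be reset later with the bundled tool; a sketch (requires the node to be running):

```
[root@node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i    # -i prompts for a new password; -a auto-generates one
```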

  2. ES Configuration
    [root@node1 ~]# mkdir /localdata/esdata/
    [root@node1 ~]# chown elasticsearch:elasticsearch /localdata/esdata/
    [root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
    cluster.name: es
    node.name: node1
    path.data: /localdata/esdata                 # data directory
    path.logs: /var/log/elasticsearch
    bootstrap.memory_lock: true                  # lock the process address space into RAM, preventing any Elasticsearch heap memory from being swapped out
    network.host: 192.168.1.10                   # ES server IP
    http.port: 9200                              # ES server port
    xpack.security.enabled: true
    xpack.security.enrollment.enabled: true
    xpack.security.http.ssl:
      enabled: true
      keystore.path: certs/http.p12
    xpack.security.transport.ssl:
      enabled: true
      verification_mode: certificate
      keystore.path: certs/transport.p12
      truststore.path: certs/transport.p12
    discovery.type: single-node                  # note: discovery.seed_hosts and cluster.initial_master_nodes must NOT be set in single-node mode
    http.host: 0.0.0.0
    transport.host: 0.0.0.0
    ingest.geoip.downloader.enabled: false
    [root@node1 ~]# cd /etc/elasticsearch/
    [root@node1 elasticsearch]# vim jvm.options
    -XX:HeapDumpPath=/localdata/esdata    # dump the heap to this directory on an out-of-memory exception
    [root@node1 tmp]# vim /etc/elasticsearch/log4j2.properties
    appender.rolling.strategy.action.condition.nested_condition.age = 7D    # keep logs for 7 days

  3. System Configuration
    Swap is very bad for performance and node stability and should be avoided at all costs: it can make garbage collection last minutes instead of milliseconds and can cause nodes to respond slowly or even disconnect from the cluster.
    Note: if the system runs systemd rather than SysV init, modify the limit as shown below.
    Check whether PID 1 is init or systemd:
    [root@node1 ~]# ps -p 1
    PID TTY TIME CMD
    1 ? 00:00:49 systemd
    [root@node1 ~]# systemctl edit elasticsearch    # create a systemd override
    [Service]
    LimitMEMLOCK=infinity
    [root@node1 ~]# ulimit -a    # view the current limits, including max open files
    Check the maximum number of file descriptors Elasticsearch is using:
    [root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://node1:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty"
    Enter host password for user 'elastic':
    {
      "nodes" : {
        "x6StOmqzMO9GSNpYtLyM8A" : {
          "process" : {
            "max_file_descriptors" : 65535
          }
        }
      }
    }
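The RPM's systemd unit ships with LimitNOFILE=65535; if a higher limit is needed it can be raised with a systemd override, in the same way as LimitMEMLOCK above (the value 131072 is only an example):

```
[root@node1 ~]# systemctl edit elasticsearch
[Service]
LimitNOFILE=131072
[root@node1 ~]# systemctl daemon-reload
```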
    Elasticsearch uses an mmapfs directory to store its indices. The default operating-system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.
    [root@node1 ~]# sysctl -w vm.max_map_count=524288    # change the mmap limit temporarily
    [root@node1 ~]# vim /etc/sysctl.conf                 # make the change permanent
    vm.max_map_count=524288
    After a reboot, verify that it took effect:
    [root@node1 elasticsearch]# sysctl vm.max_map_count
    vm.max_map_count = 524288
    In case the service has no access to /tmp, point JNA and libffi at a dedicated temporary directory:
    [root@node1 elasticsearch]# vim jvm.options
    -Djna.tmpdir=/usr/share/elasticsearch/tmp    # add this line
    [root@node1 elasticsearch]# mkdir /usr/share/elasticsearch/tmp/
    [root@node1 elasticsearch]# chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp/
    [root@node1 elasticsearch]# export LIBFFI_TMPDIR=/usr/share/elasticsearch/tmp
    [root@node1 elasticsearch]# vim /etc/profile
    export LIBFFI_TMPDIR=/usr/share/elasticsearch/tmp    # add this line
    Reduce the kernel's TCP retransmission count from the default 15 to 5, so the cluster detects node failures quickly:
    [root@node1 elasticsearch]# sysctl net.ipv4.tcp_retries2
    net.ipv4.tcp_retries2 = 15
    [root@node1 elasticsearch]# sysctl -w net.ipv4.tcp_retries2=5
    net.ipv4.tcp_retries2 = 5
    [root@node1 elasticsearch]# vim /etc/sysctl.conf    # make it permanent
    net.ipv4.tcp_retries2=5
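Instead of editing /etc/sysctl.conf directly, the kernel settings above can be kept in one drop-in file; a sketch (the file name is arbitrary):

```
# /etc/sysctl.d/90-elasticsearch.conf  (applied at boot, or immediately with: sysctl --system)
vm.max_map_count = 524288
net.ipv4.tcp_retries2 = 5
```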

  4. Start
    [root@node1 ~]# systemctl daemon-reload
    [root@node1 ~]# systemctl enable elasticsearch.service
    [root@node1 ~]# systemctl start elasticsearch.service

    Check that Elasticsearch is running:
    [root@node1 elasticsearch]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://node1:9200
    Enter host password for user 'elastic':
    {
      "name" : "node1",
      "cluster_name" : "es",
      "cluster_uuid" : "_xD7vC…Q",
      "version" : {
        "number" : "8.3.3",
        "build_flavor" : "default",
        "build_type" : "rpm",
        "build_hash" : "801f…f6",
        "build_date" : "2022-07-23T19:30:09.227964828Z",
        "build_snapshot" : false,
        "lucene_version" : "9.2.0",
        "minimum_wire_compatibility_version" : "7.17.0",
        "minimum_index_compatibility_version" : "7.0.0"
      },
      "tagline" : "You Know, for Search"
    }
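Cluster health can be checked the same way; on a healthy single node the status should be green, or yellow if indices with replicas exist:

```
[root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://node1:9200/_cluster/health?pretty"
```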

二、Kibana

  1. Install
    [root@node1 ~]# rpm -ivh /opt/kibana-8.3.3-x86_64.rpm

  2. Configure
    [root@node1 elasticsearch]# cd /usr/share/elasticsearch/
    [root@node1 elasticsearch]# ./bin/elasticsearch-create-enrollment-token -s kibana
    warning: ignoring JAVA_HOME=/usr/local/jdk-18.0.2.1; using bundled JDK
    eyJ2ZXIiOiI4LjMuMyIsImFkciI6WyIxMC4yOS4yMTYuMTAxOuuuuuuuuuuuuuuuuuuuuuuuuMjUyOTRjNThmMDNlZTRiZDliNThjMzFkMzJkMjFiNTFlOGIzYjAttttttttttttttttttttttttttttttttttttttttY4VEl5WTBIQTRlZmN1LWcifQ==

    [root@node1 bin]# grep -v ^# /etc/kibana/kibana.yml
    server.port: 5601
    server.host: 0.0.0.0
    server.maxPayload: 1048576
    server.name: node1
    elasticsearch.hosts: ['https://192.168.1.10:9200']
    elasticsearch.requestTimeout: 90000
    logging.appenders.file.type: file
    logging.appenders.file.fileName: /var/log/kibana/kibana.log
    logging.appenders.file.layout.type: json
    logging.root.appenders: [default, file]
    pid.file: /run/kibana/kibana.pid
    ops.interval: 5000
    [root@node1 bin]# systemctl start kibana
    Browse to http://node1:5601. If the web UI cannot be opened after startup, enroll the token manually:
    [root@node1 bin]# ./kibana-setup --enrollment-token eyJ2ZXIiOiI4LjMuMyIsImFkciI6WyIxMC4yOS4yMTYuMTAxOuuuuuuuuuuuuuuuuuuuuuuuuMjUyOTRjNThmMDNlZTRiZDliNThjMzFkMzJkMjFiNTFlOGIzYjAttttttttttttttttttttttttttttttttttttttttY4VEl5WTBIQTRlZmN1LWcifQ==

    ✔ Kibana configured successfully.

    To start Kibana run:
    bin/kibana

    Restart the kibana service, refresh the browser, and log in as the elastic user with its password.

  3. Increase the maximum shard count
    Raise the cluster-wide maximum number of shards per node (the default is 1000), then verify in the Kibana UI that the new limit took effect.
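As a sketch, the limit can be raised from Kibana Dev Tools (the value 2000 is only an example):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}
```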

  4. Encrypt traffic between your browser and kibana
    [root@node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil csr -name node1 -dns test.com
    [root@node1 ~]# cd /usr/share/elasticsearch/
    [root@node1 elasticsearch]# unzip csr-bundle.zip
    [root@node1 elasticsearch]# cp node1/* /etc/kibana/
    [root@node1 elasticsearch]# openssl x509 -req -in /etc/kibana/node1.csr -signkey /etc/kibana/node1.key -out /etc/kibana/node1.crt
    [root@node1 elasticsearch]# vim /etc/kibana/kibana.yml    # uncomment and edit the following 3 lines
    server.ssl.enabled: true
    server.ssl.certificate: /etc/kibana/node1.crt
    server.ssl.key: /etc/kibana/node1.key
    [root@node1 elasticsearch]# systemctl restart kibana
    Then access Kibana over HTTPS:
    https://node1:5601
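To confirm that the self-signed certificate is actually being served on port 5601, one hedged check:

```
[root@node1 ~]# echo | openssl s_client -connect node1:5601 2>/dev/null | openssl x509 -noout -subject -issuer -dates
```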

三、Logstash

  1. Install
    [root@node1 ~]# rpm -ivh /opt/logstash-8.3.3-x86_64.rpm

  2. Configuration
    [root@node1 logstash]# vim /etc/logstash/jvm.options    # comment out the following 3 lines
    #11-13:-XX:+UseConcMarkSweepGC
    #11-13:-XX:CMSInitiatingOccupancyFraction=75
    #11-13:-XX:+UseCMSInitiatingOccupancyOnly
    [root@node1 ~]# cat /etc/logstash/logstash.yml
    node.name: node1
    path.data: /var/lib/logstash
    pipeline.workers: 8
    pipeline.batch.size: 1000
    pipeline.batch.delay: 50
    path.logs: /var/log/logstash
    [root@node1 ~]# vim /etc/logstash/jvm.options
    -Xms4g
    -Xmx4g

  3. Test standard input and output
    [root@node1 ~]# cd /usr/share/logstash/bin/
    [root@node1 bin]# ./logstash -e 'input { stdin { } } output { stdout { } }'
    Type 123456; the output is:
    {
        "@version" => "1",
        "host" => {
            "hostname" => "node1"
        },
        "message" => "123456",
        "@timestamp" => 2022-10-28T06:43:57.676416Z,
        "event" => {
            "original" => "123456"
        }
    }

  4. Test output to a file
    [root@node1 ~]# cat /etc/logstash/conf.d/logstash-sample.conf
    input {
      stdin {}
    }
    output {
      file {
        path => "/tmp/test.log"
      }
    }
    [root@node1 bin]# ./logstash -f /etc/logstash/conf.d/logstash-sample.conf    # start the pipeline

  5. File input, output to Elasticsearch
    [root@node1 ~]# cat /etc/logstash/conf.d/logstash-sample.conf
    input {
      file {
        path => "/tmp/test"
      }
    }

    output {
      elasticsearch {
        hosts => ["https://node1:9200"]
        index => "test20221031"
        user => "elastic"
        password => "qv…"
        cacert => "/etc/logstash/http_ca.crt"
      }
    }
    Copy the CA certificate so Logstash can verify the connection:
    [root@node1 ~]# cp /etc/elasticsearch/certs/http_ca.crt /etc/logstash/
    [root@node1 ~]# chmod 644 /etc/logstash/http_ca.crt
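A filter block can sit between input and output to parse lines before they are indexed; a minimal sketch (the grok pattern is only an example and assumes syslog-style lines):

```
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:ts} %{SYSLOGHOST:src} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["ts", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
  }
}
```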

  6. Verify your configuration
    [root@node1 elasticsearch]# cd /usr/share/logstash/bin/
    [root@node1 bin]# ./logstash -f /etc/logstash/conf.d/logstash-sample.conf --config.test_and_exit

  7. Start Logstash
    [root@node1 bin]# systemctl start logstash
    [root@node1 bin]# echo 123456 >> /tmp/test
    Check the ES indices:
    [root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://node1:9200/_cat/indices"
    Enter host password for user 'elastic':
    green open test20221031 iaBs7cCUTi2Y7Tlwc8DsXA 1 1 2 0 7.1kb 7.1kb

四、Auditbeat

You can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use it to detect changes to critical files, such as binaries and configuration files, and identify potential security policy violations.

  1. Install
    [root@node1 ~]# rpm -ivh /opt/auditbeat-8.3.3-x86_64.rpm
  2. Configure
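A minimal /etc/auditbeat/auditbeat.yml sketch (the rules, paths, host, and credentials below are assumptions):

```
auditbeat.modules:
- module: auditd
  audit_rules: |
    -w /etc/passwd -p wa -k identity
- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /etc

output.elasticsearch:
  hosts: ["https://node1:9200"]
  username: "elastic"
  password: "<the saved elastic password>"
  ssl.certificate_authorities: ["/etc/auditbeat/http_ca.crt"]
```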
  3. Verify your configuration
    [root@node1 bin]# auditbeat test config -e
  4. Stop Auditd service
    [root@node1 ~]# service auditd stop
    [root@node1 ~]# systemctl disable auditd.service
  5. Start auditbeat Service
    [root@node1 ~]# systemctl start auditbeat
  6. View the audit rules
    [root@node1 auditbeat]# auditbeat show auditd-rules
  7. View the ES indices
    [root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cat/indices"

五、Kibana Dashboards

Click the menu icon at the top left to expand the navigation list and see the available features.

  1. Create an index pattern
    Click Stack Management in the list, then under Kibana click Data Views to add ES indices to Kibana.
    Entering auditbeat* adds every index whose name starts with auditbeat.
  2. Search the data
    Click Discover, select fields and a time range, and search for the logs you need.
  3. Create a visualization
    Click Visualize Library and create a visualization from the data view.
  4. Create a dashboard
    After creating a visualization you are taken straight to a dashboard; just save it.
  5. Setup Dashboards
    This step imports the default dashboards. There are dozens of them and importing them can affect machine performance, so do so with discretion.
    [root@node1 modules.d]# auditbeat setup --dashboards
    Loading dashboards (Kibana must be running and reachable)
    Loaded dashboards
  6. View the imported Auditbeat dashboards in Kibana

六、Filebeat

  1. Install
    [root@node1 modules.d]# rpm -ivh /opt/filebeat-8.3.3-x86_64.rpm
  2. Enable Module
    [root@node1 ~]# cd /etc/filebeat/modules.d/
    [root@node1 modules.d]# filebeat modules list
    [root@node1 modules.d]# filebeat modules enable system    # enable the system module
  3. Configure
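A minimal /etc/filebeat/filebeat.yml sketch (host, credentials, and CA path are assumptions):

```
output.elasticsearch:
  hosts: ["https://node1:9200"]
  username: "elastic"
  password: "<the saved elastic password>"
  ssl.certificate_authorities: ["/etc/filebeat/http_ca.crt"]
```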
  4. Start Service
    [root@node1 modules.d]# systemctl start filebeat.service
    Check whether ES now has a filebeat index:
    [root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cat/indices"
  5. Setup Dashboards
    This step imports the default dashboards. There are dozens of them and importing them can affect machine performance, so do so with discretion.
    [root@node1 modules.d]# filebeat setup --dashboards
  6. View the imported dashboards in Kibana
    Check the System Overview dashboard.

七、Kafka

This Beats output works with all Kafka versions between 0.8.2.0 and 2.6.0.

  1. Download
    https://archive.apache.org/dist/kafka/2.6.0/kafka_2.13-2.6.0.tgz
    [root@node1 ~]# tar -zxvf kafka_2.13-2.6.0.tgz -C /opt/
  2. Start the services
    [root@node1 ~]# cd /opt/kafka_2.13-2.6.0
    [root@node1 kafka_2.13-2.6.0]# nohup bin/zookeeper-server-start.sh config/zookeeper.properties > /tmp/zookeeper.log 2>&1 &
    [root@node1 kafka_2.13-2.6.0]# nohup bin/kafka-server-start.sh config/server.properties > /tmp/kafka.log 2>&1 &
  3. Create a topic
    [root@node1 kafka_2.13-2.6.0]# bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test    # test topic
    [root@node1 kafka_2.13-2.6.0]# bin/kafka-topics.sh --list --bootstrap-server localhost:9092
    test
  4. Send some messages
    [root@node1 kafka_2.13-2.6.0]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
    This is a message
    This is another message
    In another terminal, consume and check the output:
    [root@node1 kafka_2.13-2.6.0]# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
    This is a message
    This is another message
  5. Configuration
    [root@node1 kafka_2.13-2.6.0]# vim config/server.properties
    log.dirs=/opt/elk/kafka                   # data directory
    log.retention.hours=168                   # keep segments for 168 h = 7 days; older data is deleted
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000    # check every 5 minutes
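Before wiring Filebeat to Kafka in the next section, the topic it will publish to can be created the same way as the test topic (the name elk-logs is only an example):

```
[root@node1 kafka_2.13-2.6.0]# bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic elk-logs
```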

八、Filebeat+Kafka+Logstash+ES

  1. Filebeat sends logs to Kafka
    Comment out the previous output section and add a Kafka output.
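A sketch of the Kafka output added to /etc/filebeat/filebeat.yml (broker address and topic name are assumptions; the previous output.elasticsearch block must be commented out):

```
output.kafka:
  hosts: ["node1:9092"]
  topic: "elk-logs"
  required_acks: 1
  compression: gzip
```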
  2. Logstash forwards Kafka messages to ES
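A sketch of the Logstash pipeline (broker, topic, index name, and credentials are assumptions):

```
input {
  kafka {
    bootstrap_servers => "node1:9092"
    topics => ["elk-logs"]
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["https://node1:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "<the saved elastic password>"
    cacert => "/etc/logstash/http_ca.crt"
  }
}
```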