Step-by-Step Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 4: Logstash Cluster Installation

Continued from the previous part:

Step-by-Step Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 3

https://blog.csdn.net/lwlfox/article/details/119802258

Deploying Logstash

The following steps must be performed on every Logstash node.

  • Create the logstash user
    useradd logstash
  • Upload logstash-7.14.0-linux-x86_64.tar.gz to the server and extract it into the /data directory
    tar -zxvf logstash-7.14.0-linux-x86_64.tar.gz -C /data/
    chown -R logstash:logstash /data/logstash-7.14.0
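    As a quick sanity check (a sketch using the paths above; Logstash needs a JDK, here the one referenced in the service unit further down), confirm the unpacked installation starts:
      su - logstash -c "JAVA_HOME=/data/jdk1.8.0_301 /data/logstash-7.14.0/bin/logstash --version"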
  • Copy the elasticsearch-ca.pem file into the /data/logstash-7.14.0/config/ directory on the server; this CA file was generated when the ES cluster was created (a copy command is sketched below)
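    A sketch of copying it over from one of the ES nodes (the source path is an assumption; use wherever the CA file was saved when the ES cluster was set up):
      scp root@10.228.82.9:/path/to/elasticsearch-ca.pem /data/logstash-7.14.0/config/
      chown logstash:logstash /data/logstash-7.14.0/config/elasticsearch-ca.pem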
  • Write the following into /data/logstash-7.14.0/config/kafka_client_jaas.conf. The username and password are the SCRAM credentials that were created in Kafka and authorized for Logstash (a sketch of creating them follows the block):
    KafkaClient {
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="logstash"
      password="123456";
    };
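    In case the credential still needs to be created, a minimal sketch (assuming Kafka's bin directory is on the PATH and ZooKeeper is reachable at localhost:2181; adjust to how the Kafka part of this series set things up):
      kafka-configs.sh --zookeeper localhost:2181 --alter \
        --add-config 'SCRAM-SHA-256=[password=123456]' \
        --entity-type users --entity-name logstash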
    
  • Write the following into /data/logstash-7.14.0/config/logstash-kafka.conf. This pipeline reads messages from the Kafka topic and ships them to the Elasticsearch cluster; the ES username and password in the output section were created and authorized on the cluster (a syntax check is sketched after the block):
    input {
      # Consume JSON messages from the Kafka topic over SASL_PLAINTEXT with SCRAM-SHA-256
      kafka {
        bootstrap_servers => "10.228.82.156:9092,10.228.82.214:9092,10.228.82.57:9092"
        security_protocol => "SASL_PLAINTEXT"
        sasl_mechanism => "SCRAM-SHA-256"
        jaas_path => "/data/logstash-7.14.0/config/kafka_client_jaas.conf"
        topics => ["Test"]
        enable_metric => false
        codec => json
        group_id => "logstash"
        # client_id only identifies this consumer in broker logs/metrics; it can be varied per node
        client_id => "logstash-node-1"
      }
    }

    output {
      # Ship events to the Elasticsearch cluster over HTTPS into a daily index
      elasticsearch {
        user => "logstash_system"
        password => "NVHFG56123!"
        hosts => ["https://10.228.82.9:9200/","https://10.228.82.232:9200/","https://10.228.82.36:9200/"]
        index => "logstash--%{+YYYY.MM.dd}"
        ssl => true
        ssl_certificate_verification => false
        cacert => "/data/logstash-7.14.0/config/elasticsearch-ca.pem"
      }
    }
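    Before wiring it into a service, the pipeline syntax can be checked (a sketch, using the paths above):
      su - logstash -c "JAVA_HOME=/data/jdk1.8.0_301 /data/logstash-7.14.0/bin/logstash -f /data/logstash-7.14.0/config/logstash-kafka.conf --config.test_and_exit"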
    
  • Write the following into /data/logstash-7.14.0/bin/logstash-server-stop.sh (it needs to be executable; see the commands after the script):
    #!/bin/bash
    SIGNAL=${SIGNAL:-TERM}
    
    OSNAME=$(uname -s)
    if [[ "$OSNAME" == "OS/390" ]]; then
        if [ -z $JOBNAME ]; then
            JOBNAME="LOGSTASHSTRT"
        fi
        PIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk '{print $1}')
    elif [[ "$OSNAME" == "OS400" ]]; then
        PIDS=$(ps -Af | grep -i 'logstash\.Logstash' | grep java | grep -v grep | awk '{print $2}')
    else
        PIDS=$(ps ax | grep 'logstash\.Logstash' | grep java | grep -v grep | awk '{print $1}')
    fi
    
    if [ -z "$PIDS" ]; then
      echo "No logstash server to stop"
      exit 1
    else
      kill -s $SIGNAL $PIDS
    fi
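    Make the script executable and owned by the logstash user so that the ExecStop= line in the service unit below can run it:
      chmod +x /data/logstash-7.14.0/bin/logstash-server-stop.sh
      chown logstash:logstash /data/logstash-7.14.0/bin/logstash-server-stop.sh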
    
  • Install Logstash as a systemd service by creating /etc/systemd/system/logstash.service with the following content:
    [Unit]
    Description=logstash.service
    After=network.target
     
    [Service]
    User=logstash
    Group=logstash
    Type=simple
    Environment=JAVA_HOME=/data/jdk1.8.0_301
    ExecStart=/data/logstash-7.14.0/bin/logstash -f /data/logstash-7.14.0/config/logstash-kafka.conf
    ExecStop=/data/logstash-7.14.0/bin/logstash-server-stop.sh
     
    [Install]
    WantedBy=multi-user.target
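    To have the service start automatically at boot, it can also be enabled:
      systemctl enable logstash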
    
  • Start the logstash service
    systemctl daemon-reload && systemctl start logstash
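    To confirm the pipeline is consuming from Kafka and writing to Elasticsearch, a sketch of some checks (the ES address comes from the output section above; the elastic superuser from the ES part of this series, or any user with sufficient privileges, can list the indices):
      systemctl status logstash
      journalctl -u logstash -f
      curl -k -u elastic "https://10.228.82.9:9200/_cat/indices/logstash-*?v"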

Continued in the next part:

Step-by-Step Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 5

https://blog.csdn.net/lwlfox/article/details/119803391 
