(2) ES upgrade for the log4j vulnerability

Versions

Why upgrade: to eliminate the risk posed by the log4j vulnerability.
Why unify versions: running the same version across the whole stack avoids avoidable surprises.
Versions chosen:
elasticsearch: 7.16.2
logstash: 7.16.2
filebeat: 7.16.2


Download

Go to the official Elasticsearch website.

Choose the packages and versions you need, and download them.


filebeat

Configuration

Go into filebeat-7.16.2, open filebeat.yml, and configure it as follows:

###################### Filebeat Configuration Example #########################

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The log input collects log messages from files.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
- E:\<monitored file path>\monitor.data
  encoding: utf-8
  fields:
    logType: monitorApi
  fields_under_root: true

- type: log
  enabled: true
  paths:
- <monitored file path>\monitor.data
  encoding: utf-8
  fields:
    logType: monitorErr
  fields_under_root: true
# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  reload.period: 5s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================
# ================================= Dashboards =================================
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# =============================== Elastic Cloud ================================
# ================================== Outputs ===================================
# ---------------------------- Elasticsearch Output ----------------------------
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================
# ============================= X-Pack Monitoring ==============================
# ============================== Instrumentation ===============================
# ================================= Migration ==================================
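Before starting, it is worth making sure the edited filebeat.yml is still valid YAML, since indentation mistakes in the inputs section are the most common cause of a silent failure (filebeat itself also offers `./filebeat test config -c filebeat.yml`). A minimal sketch, assuming python3 with PyYAML is available; the path below is a placeholder, not the real monitored file:

```shell
# write a minimal inputs section, shaped like the one above, to a scratch file
cat > /tmp/filebeat-check.yml <<'EOF'
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/monitor.data
    encoding: utf-8
    fields:
      logType: monitorApi
    fields_under_root: true
EOF

# parse it as YAML; any indentation error raises an exception here
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("yaml ok")' /tmp/filebeat-check.yml
```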

Start

### start in the foreground
./filebeat -e -c <filebeat config file>
### start in the background with a log file; run from the filebeat-*.*.* directory
nohup ./filebeat -e -c filebeat.yml -d "Publish" > nohup.out 2>&1 &
### start in the background without a log file
./filebeat -e -c filebeat.yml -d "Publish" >/dev/null 2>&1 &
  1. The key is the trailing >/dev/null 2>&1. /dev/null is a virtual null device (think of it as a black hole): anything redirected to it disappears without a trace.
  2. >/dev/null redirects standard output to that "black hole".
  3. 2>&1 then redirects standard error to standard output. Since standard output already points at /dev/null, standard error ends up in the "black hole" as well.
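The order of the two redirections matters. A quick self-contained shell demo (nothing here is filebeat-specific):

```shell
# helper that writes one line to stdout and one to stderr
noisy() { echo "out"; echo "err" >&2; }

# stdout -> /dev/null first, then stderr -> (current) stdout, i.e. also /dev/null
noisy > /dev/null 2>&1

# order reversed: stderr is duplicated onto the terminal's stdout first,
# and only then is stdout sent to /dev/null -- so "err" still appears
noisy 2>&1 > /dev/null
```

The first call prints nothing; the second still prints "err".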

logstash

Configuration

Go into logstash-7.16.2/bin and create a config file named ***.conf (aaa.conf here), with the following contents:

input {
    beats {
        port => 5044
        codec => json
    }
}

filter {

    date {
        match => [ "operateTime", "yyyy-MM-dd HH:mm:ss" ]
        target => "operateTime"
        timezone => "Asia/Shanghai"
    }

    geoip {
        source => "operatorIp"
        target => "geoip"
        database => "/opt/elk-7.16.2/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.8-java/vendor/GeoLite2-City.mmdb"
    }
}

output {
    #stdout { codec => rubydebug }
    if [logType] == "monitorApi" {
        elasticsearch {
            # one index per month
            index => "monitorapi-%{+YYYY.MM}"
            hosts => ["192.168.8.35:9200"]
        }
    } else if [logType] == "monitorErr" {
        elasticsearch {
            # one index per month
            index => "monitorerr-%{+YYYY.MM}"
            hosts => ["192.168.8.35:9200"]
        }
    } else {
        elasticsearch {
            # one index per month
            index => "arlog-%{+YYYY.MM}"
            hosts => ["192.168.8.35:9200"]
            #document_type => "%{logType}"
            #document_type => "%{[fields][logType]}"
            #type refers to the type configured in filebeat
            #if ES has authentication enabled, also add:
            #username => "admin"
            #password => "admin"
        }
    }
}
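The `%{+YYYY.MM}` part of the index names above is a logstash sprintf date pattern: it expands from the event's timestamp, producing one index per calendar month. As a sketch, the shell equivalent of the name written for the current month (using the current date in place of an event timestamp):

```shell
# e.g. "monitorapi-2022.02" for an event from February 2022
echo "monitorapi-$(date +%Y.%m)"
```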

Start

### foreground start: ./bin/logstash -f bin/aaa.conf
### background start with a log file; run from the logstash-*.*.* directory
### (for convenience the aaa.conf file is kept in the bin directory here)
nohup ./bin/logstash -f bin/aaa.conf > nohup.out 2>&1 &
### background start without a log file
./bin/logstash -f bin/aaa.conf >/dev/null 2>&1 &

elasticsearch

Configuration

Go into elasticsearch-7.16.2/config, open elasticsearch.yml, and configure:

# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/es-7.16.2/data
#
# Path to log files:
#
path.logs: /data/es-7.16.2/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
# ---------------------------------- Various -----------------------------------
# ---------------------------------- Security ----------------------------------
#------ this configuration is for es-head ------
http.cors.enabled: true
http.cors.allow-origin: "*"
# note: node.master and node.data are deprecated in 7.x in favor of
# node.roles, but they still work in 7.16 (with a deprecation warning)
node.master: true
node.data: true

Start

Create a dedicated user to start it; Elasticsearch cannot be started as root.

### elasticsearch foreground start
### background start with a log file; run from the elasticsearch-*.*.*/bin directory
nohup ./elasticsearch > nohup.out 2>&1 &
### background start without a log file
./elasticsearch >/dev/null 2>&1 &

Problems encountered

1. How to view ES data, and how to develop against it

To make deployment and debugging easier, download es-head (full name: elasticsearch-head-master; any version works). It gives a visual view of the data in ES and is handy while writing code:

  1. Download es-head, unzip it, go into elasticsearch-head-master, open index.html, and set the ES address to http://localhost:9200/. After data has been ingested, the indices appear under "Overview" and the documents under "Data Browse".
  2. During development, the query conditions built under "Basic Query" correspond to the query conditions set in code, so they can serve as a reference.

2. An index exists but does not show up

Troubleshooting steps:

  1. Check the filebeat log for file-change events: lines containing [input.harvester] and the monitored file path mean filebeat has read the newly appended log entries, e.g.:

    2022-02-11T09:09:56.783+0800    INFO    [input.harvester]       log/harvester.go:309    Harvester started for file.
            {"input_id": "c51d15c6-d4c0-4f36-b556-cefc1e8e8340", "source": "E:\\project\\***\\data\\monitorApi\\monitor.data", "state_id": "native::1376256-202864-1710438242", "finished": false, "os_id": "1376256-202864-1710438242", "old_source": "E:\\project\\***\\data\\monitorApi\\monitor.data", "old_finished": true, "old_os_id": "1376256-202864-1710438242", "harvester_id": "e91e944d-0039-48f4-b830-a543d8c11bb0"}
    
  2. Enable logstash console output by uncommenting #stdout { codec => rubydebug } in the output section, then check whether logstash logs the events it is about to ship.

  3. Check the ES logs; the index name appears after the node name. If it is not the expected one, adjust the output section of the logstash config. Note: 1) the if/else syntax is Java-like, just drop the parentheses around the condition; 2) square brackets reference fields set in filebeat, so put the field name directly inside them.

  4. If the index still cannot be found, first check whether the documents landed in a different index in ES; if so, go back to step 3 and fix the output index configuration.
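Step 1 above boils down to grepping the filebeat log for harvester lines. A self-contained sketch, using a fake log file in place of the real one (which lives under the filebeat directory):

```shell
# fake filebeat log line, shaped like the sample above
cat > /tmp/filebeat-sample.log <<'EOF'
2022-02-11T09:09:56.783+0800 INFO [input.harvester] log/harvester.go:309 Harvester started for file.
EOF

# count harvester start events; 0 means filebeat never picked the file up
grep -c "Harvester started" /tmp/filebeat-sample.log
```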

Project configuration

See (1) ES upgrade for the log4j vulnerability.
