Monitoring Office Network Traffic with Suricata and an ELK Cluster

Original post: https://security.blog.csdn.net/article/details/115214781

Background

We need Suricata as an IDS to monitor egress traffic at the office network boundary, with an ELK (Elasticsearch + Logstash + Kibana) cluster for data storage and visualization.

Preparation: configure port mirroring on the office egress core switch and connect the mirror port to a server. Everything below is done on that server.

1. Environment Preparation

Update the base system:

yum update
yum upgrade

Install base dependencies:

yum -y install gcc libpcap-devel pcre-devel libyaml-devel file-devel zlib-devel jansson-devel nss-devel libcap-ng-devel libnet-devel tar make libnetfilter_queue-devel lua-devel

Any other components can be installed later, as you discover they are missing.

Software to install: suricata, LuaJIT, Hyperscan, elasticsearch (as a cluster), elasticsearch-head, logstash, filebeat, kibana

elasticsearch, logstash, filebeat, and kibana must all be the same version.

Target machines: 192.168.1.101 (primary), 192.168.1.102, 192.168.1.103

Note: for performance reasons, none of the monitoring components should be installed via Docker.

2. Suricata Deployment

Deployment target: 192.168.1.101

Suricata official site: https://suricata-ids.org/
Suricata and ELK traffic detection: https://zhuanlan.zhihu.com/p/64742715

Install dependencies:

yum install wget libpcap-devel libnet-devel pcre-devel gcc-c++ automake autoconf libtool make libyaml-devel zlib-devel file-devel jansson-devel nss-devel epel-release lz4-devel rustc cargo

Download and configure Suricata:

wget https://www.openinfosecfoundation.org/download/suricata-6.0.2.tar.gz
tar -xvf suricata-6.0.2.tar.gz
cd suricata-6.0.2

# Note: avoid changing the default paths here; otherwise troubleshooting later becomes a nightmare
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-geoip --enable-luajit --with-libluajit-includes=/usr/local/include/luajit-2.0/ --with-libluajit-libraries=/usr/local/lib/ --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib/ --enable-profiling

The configure summary:

Suricata Configuration:
  AF_PACKET support:                       yes
  eBPF support:                            no
  XDP support:                             no
  PF_RING support:                         no
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  hiredis support:                         no
  hiredis async with libevent:             no
  Prelude support:                         no
  PCRE jit:                                yes
  LUA support:                             yes, through luajit
  libluajit:                               yes
  GeoIP2 support:                          yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          yes

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /usr/bin/rustc
  Rust compiler version:                   rustc 1.50.0 (Red Hat 1.50.0-1.el7)
  Cargo path:                              /usr/bin/cargo
  Cargo version:                           cargo 1.50.0
  Cargo vendor:                            yes

  Python support:                          yes
  Python path:                             /usr/bin/python3
  Python distutils                         yes
  Python yaml                              yes
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       yes
  Profiling locks enabled:                 no

  Plugin support (experimental):           yes

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -std=gnu99 -march=native -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS
  SECCFLAGS

Build and install:

make && make install

Install the remaining components:

# This single target is enough; it installs the configuration plus an initial ruleset (equivalent to install-conf and install-rules)
make install-full

Edit the configuration file:

vim /etc/suricata/suricata.yaml

# Change 1
# Adjust the variables below and comment out the ones you don't use

vars:
  address-groups:
    HOME_NET: "[10.10.11.0/24,172.16.10.0/24]"
    DNS_NET: "[10.10.10.100,10.10.10.101,10.10.10.102]"
#    HOME_NET: "[10.0.0.0/8]"
#    HOME_NET: "[172.16.0.0/12]"
#    HOME_NET: "any"

    EXTERNAL_NET: "!$HOME_NET"
#    EXTERNAL_NET: "any"

    HTTP_SERVERS: "$HOME_NET"
#    SMTP_SERVERS: "$HOME_NET"
#    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$DNS_NET"
#    TELNET_SERVERS: "$HOME_NET"
#    AIM_SERVERS: "$EXTERNAL_NET"
#    DC_SERVERS: "$HOME_NET"
#    DNP3_SERVER: "$HOME_NET"
#    DNP3_CLIENT: "$HOME_NET"
#    MODBUS_CLIENT: "$HOME_NET"
#    MODBUS_SERVER: "$HOME_NET"
#    ENIP_CLIENT: "$HOME_NET"
#    ENIP_SERVER: "$HOME_NET"

  port-groups:
    HTTP_PORTS: "80,443"
#    SHELLCODE_PORTS: "!80"
#    ORACLE_PORTS: 1521
    SSH_PORTS: 22
#    DNP3_PORTS: 20000
#    MODBUS_PORTS: 502
#    FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
    FTP_PORTS: 21
#    GENEVE_PORTS: 6081
#    VXLAN_PORTS: 4789
#    TEREDO_PORTS: 3544
# Change 2
# Move the log directory: the /home partition is usually hundreds of GB, while the root partition is only tens of GB

default-log-dir: /home/suricata/log/suricata/
# Change 3
# Tune some of the eve.json output parameters

        - http:
            extended: yes     # enable this for extended logging information
            custom: [ accept, accept_charset, accept_datetime, accept_encoding, accept_language, accept_range, age,
              allow, authorization, cache_control, connection, content_encoding, content_language, content_length,
              content_location, content_md5, content_range, content_type, cookie, date, dnt, etag, expires, from,
              last_modified, link, location, max_forwards, org_src_ip, origin, pragma, proxy_authenticate,
              proxy_authorization, range, referrer, refresh, retry_after, server, set_cookie, te, trailer,
              transfer_encoding, true_client_ip, upgrade, vary, via, warning, www_authenticate, x_authenticated_user,
              x_bluecoat_via, x_flash_version, x_forwarded_proto, x_requested_with ]
            dump-all-headers: [both]

        - dns:
            enabled: yes
            version: 1
            requests: yes
            responses: yes

        - tls:
            extended: yes
            session-resumption: yes
            custom: [ subject, issuer, session_resumed, serial, fingerprint, sni, version, not_before, not_after,
              certificate, chain, ja3 ]
# Change 4
# Search globally and replace every interface name with your own. My NIC is p2p1, so every eth0/eth1/eth2... becomes p2p1

- interface: p2p1
# Change 5
# Search globally and disable all checksum validation; otherwise you get a flood of false positives that also wastes disk space

checksum-checks: no
# Change 6
# Enable only the rules we use and comment out the rest

default-rule-path: /var/lib/suricata/rules
# Copy the rules from /usr/share/suricata/rules into this directory first

rule-files:
#  - suricata.rules
#  - app-layer-events.rules
#  - decoder-events.rules
#  - dhcp-events.rules
#  - dnp3-events.rules
  - dns-events.rules
  - files.rules
  - http-events.rules
#  - ipsec-events.rules
#  - kerberos-events.rules
#  - modbus-events.rules
#  - nfs-events.rules
#  - ntp-events.rules
#  - smb-events.rules
#  - smtp-events.rules
#  - stream-events.rules
  - tls-events.rules

Update the ruleset:

pip install --upgrade suricata-update
suricata-update
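
Before wiring up ELK, it can help to confirm end-to-end detection with a quick local test rule. The sid, path, and rule text below are illustrative additions, not part of the original setup:

```shell
# Write a hypothetical test rule that alerts on a curl User-Agent.
# sid 1000001 is in the local-rules range; adjust the path to your rule directory.
cat > /tmp/local.rules <<'EOF'
alert http any any -> any any (msg:"LOCAL test - curl user agent"; http.user_agent; content:"curl"; sid:1000001; rev:1;)
EOF
# Sanity-check the file before listing it under rule-files in suricata.yaml:
grep -c 'sid:1000001' /tmp/local.rules
```

Once the file is listed under rule-files and Suricata is reloaded, a `curl` request crossing the mirrored link should raise the alert in eve.json.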

Test the configuration; if no errors are reported, it is good:

/usr/bin/suricata -T

Start normally:

/usr/bin/suricata -c /etc/suricata/suricata.yaml -i p2p1 --init-errors-fatal

It can also be run under a supervisord daemon:

vim /etc/supervisord.d/suricata.conf

[program:suricata]
directory=/usr/bin
command=suricata -c /etc/suricata/suricata.yaml -i p2p1 --init-errors-fatal
autostart=true
autorestart=false
#stderr_logfile=/tmp/test_stderr.log
#stdout_logfile=/tmp/test_stdout.log
user=root

3. LuaJIT Deployment

Deployment target: 192.168.1.101

Overview:

LuaJIT is a just-in-time compiler for Lua, written in C. It aims to preserve the essence of Lua: lightweight, efficient, and extensible.

Install:

wget http://luajit.org/download/LuaJIT-2.0.5.tar.gz
tar -zxf LuaJIT-2.0.5.tar.gz
cd LuaJIT-2.0.5/
make && sudo make install

Edit the dynamic linker config:

vim /etc/ld.so.conf

# Append this path, then save and exit
/usr/local/lib

Reload the linker cache:

sudo ldconfig

4. Hyperscan Deployment

Deployment target: 192.168.1.101

Overview:

Hyperscan is a high-performance library for matching many regular expressions simultaneously. In Suricata it can perform multi-pattern matching. It targets deployments such as DPI/IPS/IDS/firewalls and is already used in many production network-security products worldwide.

Using Hyperscan as Suricata's MPM (multi-pattern matcher; the mpm-algo setting) can greatly improve performance, especially fast-pattern matching. Hyperscan also takes depth and offset into account during fast-pattern matching.
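
If you want to force the choice rather than rely on auto-detection, the matcher is selected in suricata.yaml. A sketch; note that the default `auto` already prefers Hyperscan when it was compiled in:

```yaml
# In /etc/suricata/suricata.yaml: pattern-matcher selection.
# "auto" picks Hyperscan automatically when support is compiled in;
# "hs" forces Hyperscan and errors out if it is unavailable.
mpm-algo: hs
spm-algo: hs   # the single-pattern matcher can also use Hyperscan
```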

Install dependencies:

yum install cmake ragel libtool python-devel GeoIP-devel
yum install boost boost-devel boost-doc
yum install libquadmath libquadmath-devel bzip2-devel

Build and install:

wget http://downloads.sourceforge.net/project/boost/boost/1.66.0/boost_1_66_0.tar.gz
tar xvzf boost_1_66_0.tar.gz
cd boost_1_66_0/
./bootstrap.sh --prefix=/home/suricata/boost-1.66
./b2 install

# do not leave this directory
git clone https://github.com/intel/hyperscan.git
cd hyperscan
cmake -DBUILD_STATIC_AND_SHARED=1 -DBOOST_ROOT=/home/suricata/boost-1.66
make
make install

Edit the dynamic linker config:

vim /etc/ld.so.conf

# Append this path, then save and exit
/usr/local/lib64

Reload the linker cache:

sudo ldconfig

The resulting file layout:

(screenshot)

5. Elasticsearch Cluster Deployment

Deployment targets: 192.168.1.101, 192.168.1.102, 192.168.1.103

Overview:

Why an ES cluster needs at least 3 nodes: https://www.cnblogs.com/xiaohanlin/p/14155964.html
ELFK column: https://blog.csdn.net/miss1181248983/category_8872481.html
ELK cluster installation: https://blog.csdn.net/q1009020096/article/details/111591154

Elasticsearch (ES) is a real-time distributed search and analytics engine that supports full-text search, structured search, and analytics. It is built on the Apache Lucene full-text search library and written in Java.

Base configuration, identical on all three machines:

vim /etc/security/limits.conf

# Don't skip these settings; otherwise startup fails and you will have to come back and add them anyway
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

vim /etc/sysctl.conf

vm.max_map_count=655360

Install Java, identical on all three machines:

tar zxf jdk-8u271-linux-x64.tar.gz
mv jdk1.8.0_271/ /usr/local/java

vim /etc/profile

export JAVA_HOME=/usr/local/java
export JRE_HOME=/usr/local/java/jre
export PATH=$PATH:/usr/local/java/bin
export CLASSPATH=./:/usr/local/java/lib:/usr/local/java/jre/lib
# apply the environment variables
source !$
java -version

# Best to symlink java straight into /bin; otherwise a later logstash failure is very hard to diagnose
which java
ln -s /usr/local/java/bin/* /bin

Install elasticsearch, identical on all three machines:

tar zxf elasticsearch-7.5.1.tar.gz
mv elasticsearch-7.5.1 /usr/local/elasticsearch

mkdir /usr/local/elasticsearch/data
chown -R admin:admin /usr/local/elasticsearch

Edit the configuration file on 192.168.1.101:

vim /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: ELK         # cluster name; must match on every node in the cluster
node.name: es-1           # node name; arbitrary, but unique per node

transport.tcp.compress: true
path.data: /usr/local/elasticsearch/data         # data directory
path.logs: /usr/local/elasticsearch/logs         # log directory

network.host: 192.168.1.101           # listen address
http.port: 9200
transport.tcp.port: 9300

discovery.seed_hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
cluster.initial_master_nodes: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]

network.publish_host: 192.168.1.101
node.master: true          # eligible to be elected master
node.data: true            # stores data

#xpack.security.enabled: true       # recommended off/unset; enabling it brings a lot of extra hassle
http.cors.enabled: true
http.cors.allow-origin: "*"

indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000

Edit the configuration file on 192.168.1.102 and 192.168.1.103:

The file is identical apart from three values. On 192.168.1.102 set node.name: es-2, network.host: 192.168.1.102, and network.publish_host: 192.168.1.102; on 192.168.1.103 set node.name: es-3, network.host: 192.168.1.103, and network.publish_host: 192.168.1.103.

Make every machine master- and data-eligible where possible, unless a machine is under heavy load.

Configure log rotation, identical on all three machines:

vim /usr/local/elasticsearch/config/log4j2.properties

appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
# keep only 7 days of logs
appender.rolling.strategy.action.condition.nested_condition.age = 7D

Start the elasticsearch cluster, same on all three machines:

cd  /usr/local/elasticsearch/bin
# start in the background
./elasticsearch -d
# foreground start, mainly for debugging
./elasticsearch

Check cluster health:

# curl '192.168.1.101:9200/_cluster/health?pretty'
# curl '192.168.1.102:9200/_cluster/health?pretty'
# curl '192.168.1.103:9200/_cluster/health?pretty' 

{
  "cluster_name" : "ELK",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Run the following as root on all three machines; otherwise errors will appear after the cluster has been running for a while:

# substitute your own IPs
curl -XPUT -H 'Content-Type: application/json' http://192.168.1.101:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
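
For context, that read-only block is usually ES protecting itself: when a node's disk usage passes the flood-stage watermark (95% by default), ES marks indices as read-only-allow-delete, and the call above clears the flag. To make this less likely to recur, the watermarks can be tuned in elasticsearch.yml. A sketch; the percentages are example values, not from the original setup:

```yaml
# elasticsearch.yml: disk-based shard allocation watermarks (example values)
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 97%
```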

6. elasticsearch-head Deployment

Deployment target: 192.168.1.101

It gives a very intuitive view of ES cluster state; beyond that it isn't much use, so it's optional.

Install Node.js:

The head plugin is built with Node.js, so that runtime is required.

tar -Jxf node-v14.15.4-linux-x64.tar.xz
mv node-v14.15.4-linux-x64/ /usr/local/node

vim /etc/profile

export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules:$PATH
source !$
node -v

Install the head plugin:

wget  https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
mv elasticsearch-head-master/ /usr/local/elasticsearch-head
cd /usr/local/elasticsearch-head

npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install -g grunt-cli
cnpm install -g grunt
cnpm install grunt-contrib-clean
cnpm install grunt-contrib-concat
cnpm install grunt-contrib-watch
cnpm install grunt-contrib-connect
cnpm install grunt-contrib-copy
cnpm install grunt-contrib-jasmine				# if this errors, just run it again

vim /usr/local/elasticsearch-head/Gruntfile.js

connect: {
        server: {
                options: {
hostname: '0.0.0.0',     // add this line; don't forget the trailing comma
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}

Start it in the background:

cd /usr/local/elasticsearch-head
nohup grunt server &
eval "cd /usr/local/elasticsearch-head/ ; nohup  npm run start >/dev/null 2>&1 & "

The web UI: http://192.168.1.101:9100/

(screenshot)

7. Logstash Deployment

Deployment target: 192.168.1.101

logstash installation: https://blog.csdn.net/jeikerxiao/article/details/84403437
logstash troubleshooting: https://blog.csdn.net/weixin_40163498/article/details/80453123
synesis_lite_suricata project: https://github.com/orright/synesis_lite_suricata

Overview:

Logstash is a data collection engine with real-time pipelining. It ingests data (for example by tailing files), parses and filters it, and ships it to ES.

Because we need to load data templates, installation via yum is preferred.

Install logstash via yum:

vim /etc/yum.repos.d/logstash.repo

[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Import the signing key first, or the download will fail:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install logstash-7.5.1

Install x-pack (note: x-pack ships bundled with the Elastic stack from 6.3 onward, so this step may be unnecessary on 7.5.1):

./logstash-plugin install x-pack

Edit the configuration files:

vim /etc/logstash/jvm.options

-Xms4g
-Xmx4g

vim /etc/logstash/log4j2.properties

appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:ls.logs}/logstash-${sys:ls.log.format}
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D

vim /etc/logstash/logstash.yml

http.host: "192.168.1.101"
http.port: 9600
path.data: /usr/share/logstash/data2
path.logs: /usr/share/logstash/logs
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200" ]

vim /etc/logstash/pipelines.yml

This setup points at the synesis_lite_suricata pipeline configs, which are fairly heavy and hurt data freshness. If you need near-real-time data, point path.config at your own config file instead.

- pipeline.id: synlite_suricata
  path.config: "/etc/logstash/synlite_suricata/conf.d/*.conf"

# If you use your own config file instead, for example:

- pipeline.id: synlite_suricata
  path.config: "/etc/logstash/logstash.conf"
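
For reference, a minimal hand-written logstash.conf for this pipeline might look like the following sketch. The port and hosts follow this post's setup; the index name is an example, not taken from the synesis_lite_suricata templates:

```
# /etc/logstash/logstash.conf - minimal sketch (index name is an example)
input {
  beats {
    port => 5044          # must match filebeat's output.logstash setting
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.101:9200", "192.168.1.102:9200", "192.168.1.103:9200"]
    index => "logstash-suricata_log-%{+YYYY.MM.dd}"   # daily index, example name
  }
}
```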

Import the synesis_lite_suricata data template:

Project home: https://github.com/orright/synesis_lite_suricata

In practice, do NOT run the following step. It fails with errors that are nearly impossible to diagnose; it cost me a whole weekend:

cd /usr/share/logstash
./logstash-plugin update logstash-filter-dns

Move the synlite_suricata directory from the synesis_lite_suricata repo into logstash's config directory:

mv synesis_lite_suricata/logstash/synlite_suricata /etc/logstash

The directory structure at this point (logstash.conf is unused; it was only for my testing, though it becomes useful if you go the custom-config route):

(screenshot)

vim logstash.service.d/synlite_suricata.conf

[Service]
# Synesis Lite for Suricata global configuration
Environment="SYNLITE_SURICATA_DICT_PATH=/etc/logstash/synlite_suricata/dictionaries"
Environment="SYNLITE_SURICATA_TEMPLATE_PATH=/etc/logstash/synlite_suricata/templates"
Environment="SYNLITE_SURICATA_GEOIP_DB_PATH=/etc/logstash/synlite_suricata/geoipdbs"
Environment="SYNLITE_SURICATA_GEOIP_CACHE_SIZE=8192"
Environment="SYNLITE_SURICATA_GEOIP_LOOKUP=true"
Environment="SYNLITE_SURICATA_ASN_LOOKUP=true"
Environment="SYNLITE_SURICATA_CLEANUP_SIGS=false"

# Name resolution option
Environment="SYNLITE_SURICATA_RESOLVE_IP2HOST=false"
Environment="SYNLITE_SURICATA_NAMESERVER=127.0.0.1"
Environment="SYNLITE_SURICATA_DNS_HIT_CACHE_SIZE=25000"
Environment="SYNLITE_SURICATA_DNS_HIT_CACHE_TTL=900"
Environment="SYNLITE_SURICATA_DNS_FAILED_CACHE_SIZE=75000"
Environment="SYNLITE_SURICATA_DNS_FAILED_CACHE_TTL=3600"

# Elasticsearch connection settings
Environment="SYNLITE_SURICATA_ES_HOST=[192.168.1.101:9200, 192.168.1.102:9200, 192.168.1.103:9200]"
# With open-source ES the username and password are ignored automatically, so leave them as-is
Environment="SYNLITE_SURICATA_ES_USER=elastic"
Environment="SYNLITE_SURICATA_ES_PASSWD=changeme"

# Beats input
Environment="SYNLITE_SURICATA_BEATS_HOST=192.168.1.101"
Environment="SYNLITE_SURICATA_BEATS_PORT=5044"

Move synlite_suricata.conf into place:

mv logstash.service.d/synlite_suricata.conf /etc/systemd/system/logstash.service.d/synlite_suricata.conf
# reload systemd so the drop-in takes effect
systemctl daemon-reload
# start logstash
systemctl start logstash

Validate a config file (this is what I used for testing):

./logstash --path.settings /etc/logstash/ -f /etc/logstash/logstash.conf --config.test_and_exit

8. Filebeat Installation

Deployment target: 192.168.1.101

Download and install:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.1-linux-x86_64.tar.gz
tar -zxvf filebeat-7.5.1-linux-x86_64.tar.gz
mv filebeat-7.5.1-linux-x86_64 filebeat
cd filebeat

Edit the configuration file:

vim filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  # Must match the log path configured in /etc/suricata/suricata.yaml
  paths:
    - /home/suricata/log/suricata/eve.json

  fields:
    event.type: suricata

  json.keys_under_root: true
  json.overwrite_keys: true    

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: ["192.168.1.101:5601"]

output.logstash:
  hosts: ["192.168.1.101:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

Run it:

./filebeat -c ./filebeat.yml
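
The eve.json file that filebeat tails here is NDJSON: one event object per line, each with an event_type field. A quick sketch for eyeballing the event mix without the full pipeline; the sample lines below are fabricated:

```shell
# Fabricated sample of Suricata eve.json output (NDJSON, one event per line)
cat > /tmp/eve-sample.json <<'EOF'
{"timestamp":"2021-03-25T10:00:00.000000+0800","event_type":"dns","src_ip":"10.10.11.5"}
{"timestamp":"2021-03-25T10:00:01.000000+0800","event_type":"http","src_ip":"10.10.11.6"}
{"timestamp":"2021-03-25T10:00:02.000000+0800","event_type":"dns","src_ip":"10.10.11.7"}
EOF
# Rough per-type event counts (use jq for anything serious)
grep -o '"event_type":"[a-z]*"' /tmp/eve-sample.json | sort | uniq -c | sort -rn
```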

It can also be run under a supervisord daemon:

vim /etc/supervisord.d/filebeat.conf

[program:filebeat]
directory=/home/suricata/filebeat
command=/home/suricata/filebeat/filebeat -e -c /home/suricata/filebeat/filebeat.yml
autostart=true
autorestart=false
stderr_logfile=/tmp/test_stderr.log
stdout_logfile=/tmp/test_stdout.log
user=root

9. Kibana Deployment

Deployment target: 192.168.1.101

Kibana provides an analytics and visualization web platform for Elasticsearch. It can query and interact with data in ES indices and build tables and charts across many dimensions.

Install kibana via yum:

vim /etc/yum.repos.d/kibana.repo

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Import the signing key first, or the download will fail:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install kibana-7.5.1

Edit the configuration file:

vi /etc/kibana/kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.101:9200","http://192.168.1.102:9200","http://192.168.1.103:9200"]

logging.dest: /usr/share/kibana/logs/kibana.log
kibana.index: ".kibana"
i18n.locale: "zh-CN"

Start kibana:

systemctl start kibana

The web UI: http://192.168.1.101:5601/

Import the dashboard template file from the synesis_lite_suricata repo: /kibana/synlite_suricata.kibana.7.1.x.json

Where to import it:
(screenshot)

Create index patterns:

Create at least two index patterns: suricata-* and suricata_stats-*

(screenshot)

The resulting dashboards:

(screenshot)

Cleaning up junk data

Once Suricata runs on the intranet, alerts pile up in no time, so the rules need tuning: disable the ones you don't care about. After a rule is disabled, no new alerts are generated for it, but how do you delete the alerts that rule already wrote to ES? You can delete them directly from Kibana, using the Dev Tools panel.

(screenshot)

When the alert volume is small, delete like this:

POST logstash-suricata_log-*/_delete_by_query
{
  "query": {
    "match": {
      "alert.signature": "SURICATA STREAM 3way handshake wrong seq wrong ack"
    }
  }
}

If the volume is large, that request times out; delete asynchronously instead:

POST logstash-suricata_log-*/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match": {
      "alert.signature": "SURICATA STREAM bad window update"
    }
  }
}

On success this returns a task ID; check whether the purge has finished with:

GET _tasks/NQtjLxAaTiig6ZDZ3nK-cw:126846320

When the deletion is complete, the response contains "completed": true.

Deleting index data from ES

This one is simple, as shown below:

(screenshot)
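
For the record, deleting a whole index is a single DELETE request. A sketch that prints the command for review before you run it against the cluster; the daily index name is an example:

```shell
# Deleting an index is irreversible; double-check the name first.
ES_HOST="192.168.1.101:9200"
INDEX="logstash-suricata_log-2021.03.25"   # example daily index
# Print the command so it can be reviewed, then run it by hand:
echo curl -XDELETE "http://${ES_HOST}/${INDEX}"
```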
