Initial ELK Setup

A first pass at setting up ELK 7.10

Set up Elasticsearch, Kibana, Logstash, and Beats, and import logs stored in AWS S3.

Prerequisites:

1. Prepare four servers (Elasticsearch, Kibana, Logstash, and Beats each get their own machine)

All running CentOS 8

Java 11 installed on each

The fourth machine also hosts an nginx site

sudo yum install java-11-openjdk.x86_64

2. Download the packages from the official site, or install via yum (the official docs cover yum installation):

https://www.elastic.co/guide/en/elasticsearch/reference/7.10/rpm.html#rpm-repo

https://www.elastic.co/guide/en/kibana/7.10/rpm.html#rpm-repo

https://www.elastic.co/guide/en/logstash/7.10/installing-logstash.html#_yum

 

Taking Elasticsearch as an example:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

sudo vi /etc/yum.repos.d/elastic.repo

with the following contents:

[elastic-7.x]

name=Elastic repository for 7.x packages

baseurl=https://artifacts.elastic.co/packages/7.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

Then install:

sudo yum install elasticsearch

If yum is too slow over an unstable network, install from the RPM package instead:

rpm -ivh <package>.rpm
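As a sketch of the manual route (assuming version 7.10.2 here; substitute whichever 7.10.x release you want), the RPM and its published checksum can be fetched and verified before installing:

```shell
# Download the Elasticsearch RPM and its SHA-512 checksum from Elastic's artifact server
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.2-x86_64.rpm.sha512

# Verify the download against the checksum, then install
sha512sum -c elasticsearch-7.10.2-x86_64.rpm.sha512
sudo rpm -ivh elasticsearch-7.10.2-x86_64.rpm
```

The same pattern works for the kibana and logstash RPMs under the corresponding `downloads/` paths.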

I. Elasticsearch configuration

sudo su (I'm in the habit of working directly as root)

systemctl enable elasticsearch

Edit the configuration file:

vi /etc/elasticsearch/elasticsearch.yml

Configuration file contents:

# Node name
node.name: example
# Path for Elasticsearch's own logs
path.logs: /var/log/elasticsearch
# Data path; I expect a large volume of logs, so I point this at a mounted 1 TB disk
path.data: /mnt/example
# IPs allowed to connect; set according to your needs
network.host: 0.0.0.0
# Elasticsearch HTTP port; change as needed
http.port: 9200
# Single-node setup, so make this machine the master
cluster.initial_master_nodes: ["example"]


# Enable the basic X-Pack features so username/password access can be turned on
xpack.security.enabled: true
# Can stay off if the whole ELK stack runs on one machine; my components are
# split across machines, so it must be enabled
xpack.security.transport.ssl.enabled: true
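With transport SSL enabled, the transport layer also needs certificates before the node will start. A minimal sketch using the bundled elasticsearch-certutil tool (the file paths and empty passphrases here are my own choices, not part of the original setup):

```shell
# Generate a CA, then a node certificate signed by it (empty passphrases for brevity)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca \
  --out /etc/elasticsearch/elastic-ca.p12 --pass ""
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca /etc/elasticsearch/elastic-ca.p12 \
  --out /etc/elasticsearch/elastic-certificates.p12 --pass ""
```

The resulting keystore is then referenced from elasticsearch.yml via `xpack.security.transport.ssl.keystore.path` and `xpack.security.transport.ssl.truststore.path`.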

Set the passwords:

cd /usr/share/elasticsearch/

bin/elasticsearch-setup-passwords interactive

Then set each password in turn (the elastic user's password is needed in many places later).
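To confirm that security is actually on, an unauthenticated request should now be rejected while the elastic user gets through (substitute the password you just set):

```shell
# Anonymous request: expect HTTP 401 once security is enabled
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200

# Authenticated request: expect the cluster info JSON
curl -s -u elastic:your_password http://localhost:9200
```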

 

II. Kibana configuration

sudo su

Keystore (encrypted) configuration:
 

/usr/share/kibana/bin/kibana-keystore --allow-root create

/usr/share/kibana/bin/kibana-keystore --allow-root add elasticsearch.username

/usr/share/kibana/bin/kibana-keystore --allow-root add elasticsearch.password

(each add command prompts for the value; enter elastic and the matching password)

Edit the configuration file:

vi /etc/kibana/kibana.yml

Configuration file contents:

server.port: 5601
server.host: "0.0.0.0"
server.name: "your_server_name"
elasticsearch.hosts: ["ip:port"]
xpack.reporting.encryptionKey: "a_random_string"
xpack.security.encryptionKey: "something_at_least_32_characters"
i18n.locale: "zh-CN"

For a plain-text configuration instead, add:
 

elasticsearch.username: "elastic"

elasticsearch.password: "your_password"

After the keystore setup, a Security section appears in Kibana, where you can add users and assign permissions.

Once this is in place, both Kibana and Elasticsearch require login credentials.
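Kibana can then be started and checked from the command line (the /api/status endpoint returns JSON once Kibana has connected to Elasticsearch):

```shell
systemctl daemon-reload
systemctl enable --now kibana

# Status endpoint; expect a JSON document describing Kibana's state
curl -s http://localhost:5601/api/status
```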

 

III. Logstash configuration

sudo su

1. Pipeline for importing logs from S3

Edit the configuration file:

vi /etc/logstash/conf.d/aws.conf

Configuration file contents:

The s3 input settings follow this blog post; create a user with read access to the bucket in the AWS S3 console and fill in its credentials:

https://geektechstuff.com/2020/07/17/aws-using-logstash-to-ingest-logs-from-s3-bucket-into-elastic/

Since the logs are in JSON format, the json filter plugin is used.

The geoip filter enables maps in Kibana (my output index names start with logstash-, so the default logstash index template applies and the map visualizations work).

The date filter makes each imported event carry the time the log line was generated, replacing the default @timestamp (which would otherwise be the server's ingest time).

All of these are covered in the official docs:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html

https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html

input {
  s3 {
    access_key_id => ""
    secret_access_key => ""
    bucket => ""
    region => ""
    prefix => ""
    additional_settings => {
      "force_path_style" => true
      "follow_redirects" => false
    }
  }
}
filter {
  json {
    source => "message"
  }
  geoip {
    source => "ClientIP"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
  date {
    match => ["EdgeStartTimestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601"]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["ip:port"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "your_elastic_password"
  }
}
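Before starting the service, the pipeline syntax can be checked (`-t` is short for `--config.test_and_exit`, `-f` points at the config file):

```shell
# Validate the S3 pipeline's syntax without starting Logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -t -f /etc/logstash/conf.d/aws.conf
```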

2. Pipeline for importing the nginx logs

vi /etc/logstash/conf.d/nginx.conf

The nginx pipeline configuration:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  if "nginx_access" in [tags] {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]}%{DATA:[nginx][access][host]}\ - \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
    }
  }
  # Note: this branch reuses the access-log pattern; nginx error logs actually
  # have a different format, so it may need its own grok pattern.
  if "nginx_error" in [tags] {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]}%{DATA:[nginx][access][host]}\ - \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
    }
  }
}
output {
  if "nginx_access" in [tags] {
    elasticsearch {
      hosts => ["ip:port"]
      user => "elastic"
      password => "your_password"
      index => "3nginx-access-logstash-%{+YYYY.MM.dd}"
    }
  }
  if "nginx_error" in [tags] {
    elasticsearch {
      hosts => ["ip:port"]
      user => "elastic"
      password => "your_password"
      index => "3nginx-error-logstash-%{+YYYY.MM.dd}"
    }
  }
}
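With both pipeline files in conf.d, Logstash can be validated as a whole and then started as a service:

```shell
# Test every pipeline referenced by /etc/logstash/pipelines.yml
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

# Then start and enable the service
systemctl enable --now logstash
```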

 

IV. Beats configuration

Filebeat => Logstash: shipping the nginx logs

1. Filebeat

vi /etc/filebeat/filebeat.yml

Configuration file:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx_access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["nginx_error"]

output.logstash:
  hosts: ["ip:port"]
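Filebeat ships with built-in self-checks that are worth running before enabling the service:

```shell
filebeat test config    # validates filebeat.yml syntax
filebeat test output    # checks connectivity to the configured Logstash endpoint

systemctl enable --now filebeat
```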

 

Other Beats => Elasticsearch (direct import)

Edit the corresponding configuration file for each beat.

Reference: https://www.cnblogs.com/llwxhn/category/1663454.html

1. Heartbeat

vi /etc/heartbeat/heartbeat.yml
heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: true
  reload.period: 5s

heartbeat.monitors:
- type: http
  id: my-monitor
  name: My Monitor
  urls: ["url"]
  schedule: '@every 10s'

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "ip:port"
output.elasticsearch:
  hosts: ["ip:port"]
  username: "elastic"
  password: "your_password"

2. Packetbeat

vi /etc/packetbeat/packetbeat.yml
packetbeat.interfaces.device: any


packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s


packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. Default: false
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  #Cassandra port for traffic monitoring.
  ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL
    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch

- type: sip
  # Configure the ports where to listen for SIP traffic. You can disable
  # the SIP protocol by commenting out the list of ports.
  ports: [5060]


setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "ip:port"
output.elasticsearch:
  hosts: ["IP:port"]
  username: "elastic"
  password: "your_password"

processors:
  # Drop the local host fields for forwarded (tapped/mirrored) traffic,
  # otherwise enrich events with host metadata
  - if.contains.tags: forwarded
    then:
      - drop_fields:
          fields: [host]
    else:
      - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

3. Metricbeat

vi /etc/metricbeat/metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "ip:port"
output.elasticsearch:
  hosts: ["ip:port"]
  username: "elastic"
  password: "your_password"
  
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

4. Auditbeat

vi /etc/auditbeat/auditbeat.yml
auditbeat.modules:

- module: auditd
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
    

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

- module: system
  datasets:
    - package # Installed, updated, and removed packages

  period: 2m # The frequency at which the datasets check for changes

- module: system
  datasets:
    - host    # General host information, e.g. uptime, IPs
    - login   # User logins, logouts, and system boots.
    - process # Started and stopped processes
    - socket  # Opened and closed sockets
    - user    # User information

 
  state.period: 12h

 
  user.detect_password_changes: true

 
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

setup.template.settings:
  index.number_of_shards: 1
  

setup.kibana:
  host: "ip:port"

output.elasticsearch:
  hosts: ["ip:port"]
  username: "elastic"
  password: "your_password"


processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
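Each beat follows the same pattern once configured: run its setup command to load the index template and Kibana dashboards, then enable the service (note that heartbeat's systemd unit is named heartbeat-elasticsearch):

```shell
heartbeat setup -e  && systemctl enable --now heartbeat-elasticsearch
packetbeat setup -e && systemctl enable --now packetbeat
metricbeat setup -e && systemctl enable --now metricbeat
auditbeat setup -e  && systemctl enable --now auditbeat
```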

Security settings

References:

http://www.eryajf.net/3500.html

https://blog.csdn.net/qq_27639777/article/details/98470844

Kibana dashboard setup

See the official tutorial and other online guides.

Creating an index pattern:

https://www.elastic.co/guide/cn/kibana/current/tutorial-define-index.html

1. Open Stack Management → Index Patterns and click Create index pattern.

2. Enter a pattern matching your indices (e.g. logstash-*).

3. Select @timestamp as the time filter field and finish.

You can then filter data by the index patterns you created.

Create users and manage their permissions:

https://blog.csdn.net/cui884658/article/details/106805325

After the earlier keystore/security setup, Kibana shows a Security section where permissions can be managed, e.g. creating a read-only user.
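As a sketch of what the Kibana UI does under the hood, the same read-only user can be created through the Elasticsearch security API (the role and user names here are my own examples):

```shell
# Role limited to reading the logstash-* indices
curl -u elastic:your_password -X PUT "localhost:9200/_security/role/logs_read" \
  -H 'Content-Type: application/json' \
  -d '{"indices":[{"names":["logstash-*"],"privileges":["read","view_index_metadata"]}]}'

# User holding that role plus basic Kibana access
curl -u elastic:your_password -X PUT "localhost:9200/_security/user/log_viewer" \
  -H 'Content-Type: application/json' \
  -d '{"password":"choose_a_password","roles":["logs_read","kibana_user"]}'
```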

Summary

This is as far as I've gotten working through the official docs and online tutorials; recording it here for reference.

 
