Homework 2023-11-05

1. Summarize the main functions of each ELK component

elasticsearch:
stores and retrieves the data
logstash:
collects logs, processes them, and ships them to elasticsearch
kibana:
reads data from ES for visualization and data management

2. Deploy an ES cluster and enable XPACK authentication
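
This item is only a heading in these notes; as a minimal sketch (the cluster name, node names, and IPs below are assumptions, and an 8.x cluster additionally requires transport TLS, whose xpack.security.transport.ssl.* settings are omitted here), one node of a three-node cluster might be configured like this:

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: magedu-es-cluster    # must be identical on every node
node.name: node1                   # unique per node
network.host: 172.31.2.101         # this node's IP
discovery.seed_hosts: ["172.31.2.101","172.31.2.102","172.31.2.103"]
cluster.initial_master_nodes: ["172.31.2.101","172.31.2.102","172.31.2.103"]
xpack.security.enabled: true       # enable XPACK authentication

Once the cluster is up, built-in user passwords can be set with the bundled tool, e.g. /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic.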

3. Master basic usage of the ES head plugin
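
One common way to run head (an assumption; not covered in the original notes) is as a container, pointing its browser UI at the cluster:

docker run -d -p 9100:9100 mobz/elasticsearch-head:5

Then open http://<docker-host>:9100 and enter the cluster address (e.g. http://172.31.2.101:9200). For head to connect, the cluster usually also needs http.cors.enabled: true and http.cors.allow-origin: "*" in elasticsearch.yml.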

4. Master Logstash deployment and configuration, collect multiple log files, and write them to different ES indices

Configure the yum repository for Logstash

vim /etc/yum.repos.d/logstash.repo

[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
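
Elastic's documented install steps also import the package signing key first (the same key the repo's gpgkey entry points at):

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch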

List the installable versions

yum --showduplicates list logstash

Install a specific version, matching the elasticsearch version

yum install -y logstash-8.5.3

Collect multiple files with logstash and output them to different elasticsearch indices

Add a configuration file

vim /etc/logstash/conf.d/syslog-to-es.conf

input {
  file {  # file input plugin
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"  # read the file from the start; default is "end"
    stat_interval => "1"  # default 1 second: how long to sleep between loops that discover new files and check whether existing files have grown or shrunk; grown files are read and queued, so the full loop time is stat_interval plus the read/queue time, especially when the pipeline is congested
  }
  file {
    path => "/var/log/auth.log"
    type => "authlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}
output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "magedu-systemlog-%{+YYYY.MM.dd}"  # target index
      password => "123456"
      user => "magedu"
  }}
  if [type] == "authlog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "magedu-authlog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu"
  }}
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog-to-es.conf -t

Configuration OK

systemctl start logstash && systemctl enable logstash
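
To confirm the two indices are being created, query the standard _cat API with the same credentials used in the output blocks:

curl -u magedu:123456 "http://172.31.2.101:9200/_cat/indices/magedu-*?v"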

5. Install and use Kibana; install and use heartbeat and metricbeat

Install Kibana

yum install -y kibana-8.5.1

vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.220.106"
elasticsearch.hosts: ["http://192.168.220.107:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "111111"
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
i18n.locale: "zh-CN"

systemctl restart kibana.service
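
A quick sanity check that Kibana is up (/api/status is a standard Kibana API):

ss -ntlp | grep 5601
curl -s http://192.168.220.106:5601/api/status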

Create a data view:
Stack Management --> Data Views --> Create data view
Verify the data:
Discover --> select your data view

Install and configure metricbeat:

dpkg -i metricbeat-8.5.1-amd64.deb

vim /etc/metricbeat/metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "172.31.2.101:5601"
  username: "magedu"
  password: "123456"
output.elasticsearch:
  hosts: ["172.31.2.101:9200"]
  username: "magedu"
  password: "123456"
processors: 
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
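
The file above only wires metricbeat to ES and Kibana; a module still has to be enabled and the service started (the system module is usually enabled out of the box):

metricbeat modules enable system   # collect CPU, memory, disk, etc.
metricbeat setup                   # load index templates and sample dashboards
systemctl enable --now metricbeat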
Verify metricbeat data in Kibana: Observability --> Overview
Heartbeat installation and configuration: https://www.elastic.co/cn/beats/heartbeat
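Assuming the same Debian-based host used for metricbeat above, the matching install would be:

dpkg -i heartbeat-8.5.1-amd64.deb
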
vim /etc/heartbeat/heartbeat.yml 
heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: false
  reload.period: 5s
heartbeat.monitors: 
  - type: http
    enabled: true
    id: http-monitor
    name: http-domain-monitor
    urls: ["http://www.magedu.com","http://www.baidu.com"]
    schedule: '@every 10s' 
  - type: icmp
    enabled: true
    id: icmp-monitor
    name: icmp-ip-monitor
    schedule: '*/5 * * * * * *'
    hosts: ["172.31.2.101","172.31.2.101"]
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "172.31.2.101:5601"
  username: "magedu"
  password: "123456"
output.elasticsearch:
  hosts: ["172.31.2.103:9200"]
  username: "magedu"
  password: "123456"
processors: 
  - add_observer_metadata: ~

systemctl restart heartbeat-elastic.service

Verify heartbeat data in Kibana: Observability --> Uptime
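
The raw heartbeat data can also be checked directly against the ES node configured in output.elasticsearch:

curl -u magedu:123456 "http://172.31.2.103:9200/_cat/indices?v" | grep heartbeat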

6. Summarize Logstash filter plugins; use Grok to parse Nginx default-format access and error logs into JSON, write them to Elasticsearch, and display them in Kibana

Overview of logstash plugins
/usr/share/logstash/bin/logstash-plugin --help
/usr/share/logstash/bin/logstash-plugin list #list all logstash plugins
/usr/share/logstash/bin/logstash-plugin install logstash-output-jdbc #install a specific plugin
Overview of logstash filters:
Filter plugins process events coming from the input stage according to specified conditions: parsing data, removing fields, converting data types, and so on, before the events are sent from the output stage to a destination server such as elasticsearch for storage and display. The filter stage relies on different plugins for different functions; official link:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
Filter plugin modules:
aggregate: aggregates multiple log lines that belong to the same event, https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html
bytes: converts storage units such as MB, GB, and TB into bytes, https://www.elastic.co/guide/en/logstash/current/plugins-filters-bytes.html
date: parses dates from events and uses them as the logstash timestamp, https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
..........
geoip: resolves IP addresses to geographic information and adds it to the event, https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html
grok: matches events against regular expressions and outputs them in JSON format. grok is often used to restructure system error logs, logs from middleware such as MySQL and ZooKeeper, and network device logs (converting non-JSON logs to JSON), after which the converted logs are written to elasticsearch for storage and visualized in kibana; see the short example after this list. https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns 

View the grok plugin's built-in regular expressions

vim /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-patterns-core-4.3.4/patterns/legacy/grok-patterns
mutate: renames, deletes, and modifies fields in events, https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
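
As a tiny illustration of the grok item above (the sample line and field names are invented for this example): a raw event like

55.3.244.1 GET /index.html 15824 0.043

matched by the pattern

%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}

becomes structured fields such as {"client":"55.3.244.1","method":"GET","request":"/index.html","bytes":"15824","duration":"0.043"}.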
Configuration steps for log collection with logstash filters:
Install the nginx web service
Configure nginx to serve requests for a domain
Start nginx
Configure logstash to collect the nginx access log and filter/process it
Restart logstash and verify the nginx access logs in kibana
Configure logstash to collect the nginx error log and filter/process it
Restart logstash and verify the nginx error logs in kibana
Logstash configuration:
vim /etc/logstash/conf.d/nginxlog-to-es.conf
input {
  file {
    path => "/apps/nginx/logs/access.log"
    type => "nginx-accesslog"
    stat_interval => "1"
    start_position => "beginning"
  }

  file {
    path => "/apps/nginx/logs/error.log"
    type => "nginx-errorlog"
    stat_interval => "1"
    start_position => "beginning"
  }

}

filter {
  if [type] == "nginx-accesslog" {
  grok {
    match => { "message" => ["%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\""] }
    remove_field => "message"
    add_field => { "project" => "magedu"}
  }
  mutate {
    convert => [ "[response_code]", "integer"]
    }
  }
  if [type] == "nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"
    }
  }
}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "magedu-nginx-accesslog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
  }}

  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "magedu-nginx-errorlog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
  }}

}
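
As with the syslog pipeline above, syntax-check the file and restart logstash:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginxlog-to-es.conf -t
systemctl restart logstash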

7. Collect Nginx JSON-format access logs with logstash
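
This item is only a heading in these notes. A minimal sketch of the usual approach (the log format name, file paths, and index name below are assumptions): have nginx emit the access log as JSON, then let logstash parse it with the json codec so no grok is needed.

nginx (http block):

log_format access_json escape=json '{"@timestamp":"$time_iso8601",'
  '"clientip":"$remote_addr",'
  '"request":"$request",'
  '"status":"$status",'
  '"bytes":"$body_bytes_sent",'
  '"referer":"$http_referer",'
  '"useragent":"$http_user_agent"}';
access_log /apps/nginx/logs/access_json.log access_json;

vim /etc/logstash/conf.d/nginx-json-to-es.conf

input {
  file {
    path => "/apps/nginx/logs/access_json.log"
    type => "nginx-json-accesslog"
    start_position => "beginning"
    stat_interval => "1"
    codec => "json"  # each line is parsed into top-level fields
  }
}
output {
  if [type] == "nginx-json-accesslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "magedu-nginx-json-accesslog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
  }}
}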
