ELK Cluster Deployment (Part 8): Monitoring nginx Logs

Monitoring nginx logs

Install nginx on host 103 (192.168.56.103)
yum install -y nginx

Start nginx
systemctl start nginx

Visit http://192.168.56.103/ to confirm the default nginx page is served.
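The same check can be done from the command line:
curl -I http://192.168.56.103/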

Configure the nginx log format

cd /etc/nginx
cp nginx.conf nginx.conf.bk

Edit the nginx log format (the log_format directive lives in the http {} block):
vi nginx.conf
log_format  main  "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | $request_body | $content_length | $http_referer | $http_user_agent |"
                  "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time";

    access_log  /var/log/nginx/access.log  main;
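
Before restarting, the configuration syntax can be validated:
nginx -t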

Restart nginx
systemctl restart nginx
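
To confirm the new format is in effect, send a request and look at the last line of the access log:
curl -s http://192.168.56.103/ > /dev/null
tail -n 1 /var/log/nginx/access.log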
                        

filebeat configuration

Collect the nginx access log (access.log) and error log (error.log), and route them to separate Kafka topics:
cat <<EOF > /usr/local/src/filebeat-7.7.1-linux-x86_64/nginx2kafka.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    log_topics: nginx-access
    
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    log_topics: nginx-error
  
output.kafka:
  hosts: ["192.168.56.103:9092"]
  topic: '%{[fields][log_topics]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
EOF
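
filebeat can check both the configuration file and the connection to the Kafka output before it is started:
/usr/local/src/filebeat-7.7.1-linux-x86_64/filebeat test config -c /usr/local/src/filebeat-7.7.1-linux-x86_64/nginx2kafka.yml
/usr/local/src/filebeat-7.7.1-linux-x86_64/filebeat test output -c /usr/local/src/filebeat-7.7.1-linux-x86_64/nginx2kafka.yml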

Start filebeat

/usr/local/src/filebeat-7.7.1-linux-x86_64/filebeat -e -c /usr/local/src/filebeat-7.7.1-linux-x86_64/nginx2kafka.yml -d "publish"
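
To confirm events are reaching Kafka, consume one message from a topic. The Kafka install path below is an assumption; adjust it to wherever Kafka was unpacked in the earlier parts of this series:
/usr/local/src/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.56.103:9092 --topic nginx-access --from-beginning --max-messages 1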

logstash configuration

Sample access.log line:
- | 23/May/2022:01:26:25 +0000 | GET / HTTP/1.1 | 304 | 0 | - | - | - | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.67 |- | 192.168.56.1 | node-3 | - | - | 0.000

Sample error.log line:
2022/05/23 01:22:07 [error] 3824#3824: *5 open() "/usr/share/nginx/html/help" failed (2: No such file or directory), client: 192.168.56.1, server: _, request: "GET /help HTTP/1.1", host: "192.168.56.103"


cat > /usr/local/src/logstash-7.7.1/config/nginx.yml <<EOF
input {
  kafka {
    bootstrap_servers => ["192.168.56.103:9092"]
    topics => ["nginx-access"]
    type => "nginx-access"
  }
  kafka {
    bootstrap_servers => ["192.168.56.103:9092"]
    topics => ["nginx-error"]
    type => "nginx-error"
  }
}
filter {
  if [type] == "nginx-access" {
    # split the pipe-delimited access log line into named fields
    ruby {
      init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
      code => "
        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('message').split('|'))])
        new_event.remove('@timestamp')
        event.append(new_event)
      "
    }

    # split the request field (e.g. GET / HTTP/1.1) into method, uri and protocol
    if [request] {
      ruby {
        init => "@kname = ['method','uri','verb']"
        code => "
          new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
          new_event.remove('@timestamp')
          event.append(new_event)
        "
      }
    }

    # split the uri into path and query string
    if [uri] {
      ruby {
        init => "@kname = ['url_path','url_args']"
        code => "
          new_event = LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))])
          new_event.remove('@timestamp')
          event.append(new_event)
        "
      }
    }

    kv {
      prefix => "url_"
      source => "url_args"
      field_split => "&"
      include_keys => ["uid","cip"]
      remove_field => ["url_args","uri","request"]
    }

    mutate {
      convert => [
        "body_bytes_sent","integer",
        "content_length","integer",
        "upstream_response_time","float",
        "request_time","float"
      ]
    }

    # the pipe split leaves surrounding spaces; trim time_local so the date filter can parse it
    mutate {
      strip => ["time_local"]
    }

    date {
      match => [ "time_local","dd/MMM/yyyy:HH:mm:ss Z" ]
      locale => "en"
    }
  }

  if [type] == "nginx-error" {
    json {
      source => "message"       # events from Kafka arrive as JSON; parse them into fields
    }
    grok {
      match => { "message" => ['(?<DATETIME>%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{DATA}) \[%{LOGLEVEL:logLevel}\] %{DATA:pid}: %{GREEDYDATA:mess}, client: %{IP:client}, server: %{WORD:server}, request: \\"%{GREEDYDATA:request}\\", host: \\"%{HOSTNAME:servername}\\"']}
    }
  }
}
output {
  # uncomment to dump events to the console while debugging
  #stdout {
  #  codec => rubydebug
  #}
  if [type] == "nginx-access" {
    elasticsearch {
      hosts => ["192.168.56.101:9200","192.168.56.102:9200","192.168.56.103:9200"]
      index => "nginx-access-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "nginx-error" {
    elasticsearch {
      hosts => ["192.168.56.101:9200","192.168.56.102:9200","192.168.56.103:9200"]
      index => "nginx-error-%{+YYYY-MM-dd}"
    }
    stdout {
      codec => rubydebug
    }
  }
}
EOF

Start logstash

/usr/local/src/logstash-7.7.1/bin/logstash -f /usr/local/src/logstash-7.7.1/config/nginx.yml
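
To check the pipeline configuration without actually starting it, add logstash's test flag:
/usr/local/src/logstash-7.7.1/bin/logstash -f /usr/local/src/logstash-7.7.1/config/nginx.yml --config.test_and_exit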

Filtering the error log as plain text

Filter syntax for the error log. Because everything pulled from Kafka is in JSON format, the following syntax can be used to parse the message content. Note that the literal double quotes have to be escaped with two backslashes.

filter {
  grok {
    match => { "message" => ['(?<DATETIME>%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{DATA}) \[%{LOGLEVEL:logLevel}\] %{DATA:pid}: %{GREEDYDATA:mess}, client: %{IP:client}, server: %{WORD:server}, request: \\"%{GREEDYDATA:request}\\", host: \\"%{HOSTNAME:servername}\\"']}
  } 
}

The example above produces output like the following:

{
              "@version" => "1",
               "message" => "2022/05/25 01:51:52 [error] 3653#3653: *47 open() \"/usr/share/nginx/html/kk\" failed (2: No such file or directory), client: 192.168.56.1, server: _, request: \"GET /kk HTTP/1.1\", host: \"192.168.56.103\"",
            "servername" => "192.168.56.103",
                  "host" => {
        "name" => "node-3"
    },
                   "ecs" => {
        "version" => "1.5.0"
    },
                  "type" => "nginx-error",
              "logLevel" => "error",
               "request" => "GET /kk HTTP/1.1",
                "fields" => {
        "log_topics" => "nginx-error"
    },
              "DATETIME" => "2022/05/25 01:51:52",
            "@timestamp" => 2022-05-25T01:51:53.531Z,
                 "agent" => {
        "ephemeral_id" => "196c2730-9e80-4b7d-ae35-4087d7716085",
            "hostname" => "node-3",
                  "id" => "b4acb658-0d87-4903-a8e1-01539ad03e27",
             "version" => "7.7.1",
                "type" => "filebeat"
    },
                   "log" => {
        "offset" => 14531,
          "file" => {
            "path" => "/var/log/nginx/error.log"
        }
    },
                   "pid" => "3653#3653",
    "message1" => "*47 open() \"/usr/share/nginx/html/kk\" failed (2: No such file or directory)",
                "client" => "192.168.56.1",
                "server" => "_",
                 "input" => {
        "type" => "log"
    }
}
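
Once both pipelines are running and a few requests have been made, the daily indices should appear in Elasticsearch:
curl 'http://192.168.56.101:9200/_cat/indices/nginx-*?v'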
