Introduction
ELK is currently one of the mainstream stacks for distributed log collection and processing.
A common log collection pipeline: Filebeat → Kafka cluster → Logstash → ES → Kibana
Grafana (a visualization and monitoring tool) can be configured with ES as a data source for real-time monitoring.
Implementation:
Filebeat is deployed on the application servers (it only reads log files and forwards them to Logstash, which keeps CPU overhead low and ensures it does not compete with the application for resources). Logstash, ES, and Kibana run on a separate server (Logstash performs the log filtering there, which does consume some CPU; consider optimizing the filter logic to reduce that load).
Architecture diagrams:
Common architectures
1. Filebeat + Elasticsearch + Kibana
Filebeat ships directly to ES; Kibana provides search and visualization.
2. Filebeat + Kafka + Logstash + Elasticsearch + Kibana
N Filebeat instances ship to a Kafka cluster.
1-3 Logstash instances consume the logs from Kafka and write them to the ES cluster; losing one Logstash instance does not affect log ingestion into Kafka.
Kibana provides search and visualization.
Adding a message queue in the middle avoids data loss: if Logstash goes down, the logs are still held in the queue, and once Logstash restarts it catches up on the backlog from the queue.
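In architecture 2 the Filebeat output points at the Kafka cluster instead of ES. A minimal sketch of such an output (the broker address and topic name are placeholders and must match what the Logstash kafka input consumes):
output.kafka:
  hosts: ["192.168.1.100:9092"]   # Kafka broker list (placeholder)
  topic: "kafkaTopic"             # must match the topic read by the Logstash kafka input
  required_acks: 1
  compression: gzip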
Filebeat
Deploy Filebeat with Docker, shipping directly to ES
Note: the Filebeat version must match the ES version.
The application's logback.xml writes logs as JSON; a sketch of such a configuration follows.
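A minimal logback.xml sketch for JSON log lines, assuming the logstash-logback-encoder dependency is available (appender name and file paths are placeholders):
<configuration>
  <appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/home/logs/bop-fms-api/logs/app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/home/logs/bop-fms-api/logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <!-- writes each event as a single JSON line that Filebeat can decode -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_FILE"/>
  </root>
</configuration>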
docker run --privileged --name filebeat --net=host -d -m 1000M \
--log-driver json-file --log-opt max-size=1024m \
-v /data0/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /data0/filebeat/logs:/root \
-v /data0/filebeat/data:/data \
-v /data0:/home/logs \
registry.api.ww.com/bop_ci/filebeat:6.6.0
vi filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /home/logs/bop-fms-account-info/logs/*.log
- /home/logs/bop-fms-advertiser-info/logs/*.log
- /home/logs/bop-fms-agent-web/logs/*.log
- /home/logs/bop-fms-api/logs/*.log
- /home/logs/bop-fms-config/logs/*.log
- /home/logs/bop-fms-vip-api/logs/*.log
ignore_older: 12h
clean_inactive: 14h
tags: ["fms-log"]
- type: log
enabled: true
paths:
- /home/logs/bop-cmc-strategy/logs/*.log
- /home/logs/qualification/logs/*.log
- /home/logs/bop-cmc-customer/logs/*.log
- /home/logs/bop-mdm-cmc-diplomat/logs/*.log
- /home/logs/bop-asm-api/logs/*.log
- /home/logs/bop-asm-message/logs/*.log
- /home/logs/bop-asm-notice/logs/*.log
ignore_older: 12h
clean_inactive: 14h
tags: ["others-log"]
json.keys_under_root: true
json.overwrite_keys: true
setup.ilm.enabled: false
setup.template.name: "bop-log"
setup.template.pattern: "bop-log-*"
setup.template.enabled: false
setup.template.overwrite: true
setup.template.settings:
index.number_of_shards: 1
index.number_of_replicas: 0
index.codec: best_compression
output.elasticsearch:
hosts: ["10.13.177.206:9201"]
#index: "bop-log-%{+yyyy.MM.dd}"
pipeline: "timestamp-pipeline-id"
indices:
- index: "bop-log-fms-%{+yyyy.MM.dd}"
when.contains:
tags: "fms-log"
- index: "bop-log-others-%{+yyyy.MM.dd}"
when.contains:
tags: "others-log"
processors:
- decode_json_fields:
fields: ["message"]
target: ""
overwrite_keys: true
- rename:
fields:
- from: "error"
to: "run_error"
- drop_fields:
fields: ["input_type", "log.offset","log.file.path","beat.version","prospector.type","beat.name", "host.name", "input.type", "agent.hostname"]
#ignore_missing: false
Output to ES, example 2:
# output to ES
output.elasticsearch:
#username: "elastic"
#password: "xxxxxxxxxxx"
#worker: 1
#bulk_max_size: 1500
#pipeline: "timestamp-pipeline-id" # @timestamp processing
hosts: ["elasticsearch1:9200"]
index: "pb-%{[fields.index_name]}-*"
indices:
- index: "pb-nginx-%{+yyyy.MM.dd}"
when.equals:
fields.index_name: "nginx_log"
- index: "pb-log4j-%{+yyyy.MM.dd}"
when.equals:
fields.index_name: "log4j_log"
- index: "pb-biz-%{+yyyy.MM.dd}"
when.equals:
fields.index_name: "biz_log"
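For the conditional routing in example 2 to work, each input has to attach the matching fields.index_name value. A sketch of one such input (the path is a placeholder):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log        # placeholder path
  fields:
    index_name: "nginx_log"       # matched by "when.equals: fields.index_name" above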
Multiline merging
Merging multi-line exception stack traces into a single event
Add the following properties under the corresponding "- type" input:
multiline:
# continuation lines: start with whitespace + "at"/"..." or with "Caused by:"
pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
negate: false
match: after
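With negate: false and match: after, every line matching the pattern is appended to the preceding non-matching line, so a stack trace like the following is shipped as one event:
Exception in thread "main" java.lang.IllegalStateException: boom
    at com.example.Demo.handle(Demo.java:42)
    at com.example.Demo.main(Demo.java:10)
Caused by: java.lang.NullPointerException
    at com.example.Demo.load(Demo.java:17)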
Supporting custom index field types in ES
Add the following to filebeat.yml:
setup.template.json.enabled: true
setup.template.json.path: "/usr/share/filebeat/logs_template.json"
setup.template.json.name: "logs_template"
Add these mounts to the docker run command:
-v /data0/filebeat/fields.yml:/usr/share/filebeat/fields.yml
-v /data0/filebeat/logs_template.json:/usr/share/filebeat/logs_template.json
Create fields.yml:
- key: bop-log
title: bop-log
description: >
custom fields
fields:
# some desc
- name: args
type: text
- name: serviceuri
type: text
- name: serviceurl
type: text
- name: beat.hostname
type: text
- name: class
type: text
Create logs_template.json:
{
"index_patterns": [
"bop-log-*"
],
"mappings": {
"doc": {
"dynamic_templates": [
{
"strings_as_keyword": {
"mapping": {
"type": "text",
"analyzer": "standard",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"properties": {
"httpmethod": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"responseheader": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"function": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"servicename": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"serviceuri": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"serviceurl": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"responsebody": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"args": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
},
"requestheader": {
"type": "text",
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
}
}
}
}
}
}
Filebeat @timestamp date handling
In Kibana, go to Management → Advanced Settings, search for "Date format", and set it to:
YYYY-MM-DD HH:mm:ss.SSS
Or
write and execute the following pipeline in the Kibana Dev Tools console
(to inspect it later: GET _ingest/pipeline/timestamp-pipeline-id)
PUT _ingest/pipeline/timestamp-pipeline-id
{
"description": "timestamp-pipeline-id",
"processors": [
{
"grok": {
"field": "message",
"patterns": [
"%{TIMESTAMP_ISO8601:timestamp}"
],
"ignore_failure": true
},
"date": {
"field": "timestamp",
"timezone": "Asia/Shanghai",
"formats": [
"yyyy-MM-dd HH:mm:ss.SSS"
],
"ignore_failure": true
}
}
]
}
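Before pointing Filebeat at the pipeline, it can be verified with the simulate API (the sample message below is made up):
POST _ingest/pipeline/timestamp-pipeline-id/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2024-01-01 12:00:00.123 INFO com.example.Demo - started"
      }
    }
  ]
}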
Setting the number of index shards and replicas for the Filebeat indices
Run in Kibana Dev Tools.
Set 1 shard and 0 replicas:
PUT /_template/filebeat
{
"index_patterns": ["bop-log-*"],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
}
}
GET /_template/filebeat
Related: Filebeat 7.7.0 detailed configuration reference - processors
Related: Collecting multiple logs with Filebeat (shipping to ES or Logstash)
Related: Collecting clean business logs with Filebeat + Elasticsearch
Logstash
Deploy Logstash with Docker
mkdir /data0/logstash/log -p
cd /data0/logstash
vi logstash.conf
input {
kafka {
topics => ["kafkaTopic"]                    # Kafka topic
bootstrap_servers => "192.168.1.100:9092"   # Kafka broker address
codec => "json"                             # consume messages as JSON
}
}
output {
elasticsearch {
hosts => ["192.168.1.110:9009"]   # ES address
index => "errorlog"               # ES index name, must be lowercase
user => "elastic"                 # the built-in elastic user is recommended here
password => "**********"
}
}
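The conf above only forwards events; if filtering is needed (the Logstash CPU cost mentioned in the introduction), a filter block goes between input and output. A minimal sketch that drops unused fields and parses the log timestamp (the "timestamp" field name is an assumption about the JSON log format):
filter {
  mutate {
    remove_field => ["agent", "ecs"]                    # drop fields that are never queried
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]   # parse the application timestamp
    target => "@timestamp"
    timezone => "Asia/Shanghai"
  }
}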
vi logstash.yml
http.host: "0.0.0.0"
# ES address
xpack.monitoring.elasticsearch.hosts: ["http://192.168.1.110:9009"]
xpack.monitoring.enabled: true
# built-in ES account and password, configured in ES
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: *****************
docker pull logstash:6.7.0
docker run --name logstash --privileged=true -p 9007:9600 -d \
-v /data0/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /data0/logstash/log/:/home/public/ \
-v /data0/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml \
logstash:6.7.0
Kibana
Deploy Kibana with Docker
Note: Kibana must be the same version as ES, otherwise they are incompatible.
cd /data0
mkdir kibana
cd kibana
docker run --name kibana -p 5601:5601 -d kibana:6.6.0
docker cp kibana:/usr/share/kibana/config/kibana.yml .
Write the following into kibana.yml, then save and exit (:wq):
server.name: kibana
server.host: "0"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
elasticsearch.hosts: [ "http://自己的elasticsearch的IP:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
# display the Kibana UI in Chinese
i18n.locale: zh-CN
Restart:
docker rm -f kibana
docker run --name kibana -p 5601:5601 -d -v /data0/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.6.0
Automatically creating the Kibana index pattern for ES indices
vi auto_add_index.sh
#!/bin/bash
today=`date +%Y.%m.%d`
yesterday=`date -d "1 days ago" +%Y.%m.%d`
pattern='bop-log-'${today}
old_pattern='bop-log-'${yesterday}
index='bop-log-'${today}
echo ${pattern} ${old_pattern}
# create today's index pattern
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' \
"http://localhost:5601/api/saved_objects/index-pattern/${pattern}" -d"{\"attributes\":{\"title\":\"${index}\",\"timeFieldName\":\"@timestamp\"}}"
# set it as the default index pattern
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' http://localhost:5601/api/kibana/settings/defaultIndex -d "{\"value\":\"${pattern}\"}"
# delete yesterday's index pattern
curl -XDELETE "http://localhost:5601/api/saved_objects/index-pattern/${old_pattern}" -H 'kbn-xsrf: true'
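The script can then be scheduled with cron so a fresh index pattern is created shortly after midnight (the path and schedule below are assumptions):
# crontab -e
10 0 * * * /bin/bash /data0/kibana/auto_add_index.sh >> /data0/kibana/auto_add_index.log 2>&1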
Deleting indices on a schedule
Run in Kibana Dev Tools.
Delete indices after 30 days:
PUT _ilm/policy/logs_policy
{
"policy": {
"phases": {
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
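Creating the policy alone does nothing; it still has to be attached to the indices, for example through a template setting (a sketch reusing the bop-log-* naming from above):
PUT _template/bop-log-ilm
{
  "index_patterns": ["bop-log-*"],
  "settings": {
    "index.lifecycle.name": "logs_policy"
  }
}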
Related: ELK series (2) - How to change the Kibana Date format
iLogtail
iLogtail → Kafka → Logstash → Elasticsearch
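A rough sketch of an iLogtail (2.x-style) pipeline config that tails files and flushes to Kafka; plugin and field names follow the open-source iLogtail docs but should be verified against the installed version, and the paths, brokers, and topic are placeholders:
# example iLogtail pipeline config -- sketch, verify keys against your iLogtail version
enable: true
inputs:
  - Type: input_file
    FilePaths:
      - /home/logs/*/logs/*.log
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.1.100:9092
    Topic: kafkaTopic
Downstream, the Logstash kafka input and Elasticsearch output shown above stay the same.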