EFK components:
Elasticsearch:
Elasticsearch is a distributed, free and open-source search and analytics engine that handles all types of data, including textual, numerical, geospatial, structured, and unstructured data.
Filebeat (or alternatively Logstash):
A lightweight log shipper. Together with its modules it parses incoming log messages into structured fields and forwards the results to Elasticsearch.
Logstash is considerably more powerful than Filebeat, at the cost of a heavier footprint.
Kibana:
The graphical front end for Elasticsearch. It ships with a set of classic visualizations: bar charts, line charts, pie charts, and so on. Beyond these, you can also define your own custom visualizations.
EFK architecture diagram:
EFK container installation:
On the CentOS 8 host:
First configure the time-related files; they will later be mounted into the EFK containers:
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
Required file layout:
docker-compose.yml:
version: '2.2'
services:
  # Elasticsearch container definition
  elasticsearch:
    # Image name
    image: elastic/elasticsearch:7.8.1
    privileged: true
    environment: # ES settings; no cluster is formed here
      - discovery.type=single-node
      - node.name=netdevops_es
      - cluster.name=netdevops_es_cluster
      - network.host=0.0.0.0
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g" # Resource limit: 4 GB of heap here; adjust to your hardware
    volumes:
      - /usr/share/elasticsearch/data # Data persistence
      - /etc/timezone:/etc/timezone:ro # Align the container's time with the host
      - /etc/localtime:/etc/localtime:ro
    networks:
      - efk_net # Network shared by the EFK containers
    ports: # Port mappings
      - "9200:9200"
      - "9300:9300"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    restart: always
  # Kibana container definition
  kibana:
    image: elastic/kibana:7.8.1
    privileged: true
    environment:
      - SERVER_NAME=netdevops_kibana
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - PATH_DATA=/usr/share/kibana/data
      - NODE_OPTIONS="--max_old_space_size=4096"
    volumes:
      - /usr/share/kibana/data # Data persistence
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - efk_net
    ports:
      - "5601:5601"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    depends_on:
      - "elasticsearch"
    restart: always
  # Filebeat container definition
  filebeat:
    image: elastic/filebeat:7.8.1
    privileged: true
    volumes:
      - ./filebeat/cisco.yml:/usr/share/filebeat/modules.d/cisco.yml # Mount cisco.yml into the container's modules.d directory
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml # Mount filebeat.yml into the container
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - efk_net
    ports:
      - "514:9002/udp" # Map host port 514 to container port 9002, where the Cisco IOS module listens for, parses, and tokenizes syslog messages
    depends_on:
      - "elasticsearch"
      - "kibana"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
# Bridge network definition
networks:
  efk_net:
    driver: bridge
filebeat.yml: Filebeat's configuration file:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
# Connection settings from Filebeat to Elasticsearch
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  # Name of the index to write to
  index: "netdevops-ios-%{+yyyy.MM.dd}"
# Kibana-related settings
setup.kibana.host: "http://kibana:5601"
setup.template.name: "netdevops-ios"
setup.template.pattern: "netdevops-ios-*"
setup.ilm.enabled: false
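The `%{+yyyy.MM.dd}` in the index setting is Elastic's date-math syntax: Filebeat writes to one index per day, and all of them match the `netdevops-ios-*` template pattern. A minimal Python sketch of how a day's index name resolves (the helper name is illustrative, not part of Filebeat):

```python
from datetime import date, datetime, timezone

def resolve_index(prefix: str, day: date) -> str:
    """Resolve a daily index name in the style of Filebeat's
    "netdevops-ios-%{+yyyy.MM.dd}" for a given date.
    (Helper name is hypothetical, for illustration only.)"""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

if __name__ == "__main__":
    today = datetime.now(timezone.utc).date()
    # e.g. "netdevops-ios-2020.09.01" -- one index per day,
    # all matched by the pattern "netdevops-ios-*"
    print(resolve_index("netdevops-ios", today))
```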
cisco.yml: as you can see, this is the configuration for collecting Cisco logs. We only collect from the IOS platform, so only the ios section is set to enabled: true. Note that the listening port here is 9002, matching the container port mapped in docker-compose.yml:
# Module: cisco
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.9/filebeat-module-cisco.html
- module: cisco
  asa:
    enabled: false
    # Set which input to use between syslog (default) or file.
    #var.input: syslog
    # The interface to listen to UDP based syslog traffic. Defaults to
    # localhost. Set to 0.0.0.0 to bind to all available interfaces.
    #var.syslog_host: 0.0.0.0
    # The UDP port to listen for syslog traffic. Defaults to 9001.
    #var.syslog_port: 9001
    # Set the log level from 1 (alerts only) to 7 (include all messages).
    # Messages with a log level higher than the specified will be dropped.
    # See https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/b_syslog/syslogs-sev-level.html
    #var.log_level: 7
  ftd:
    enabled: false
    # Set which input to use between syslog (default) or file.
    #var.input: syslog
    # The interface to listen to UDP based syslog traffic. Defaults to
    # localhost. Set to 0.0.0.0 to bind to all available interfaces.
    #var.syslog_host: localhost
    # The UDP port to listen for syslog traffic. Defaults to 9003.
    #var.syslog_port: 9003
    # Set the log level from 1 (alerts only) to 7 (include all messages).
    # Messages with a log level higher than the specified will be dropped.
    # See https://www.cisco.com/c/en/us/td/docs/security/firepower/Syslogs/b_fptd_syslog_guide/syslogs-sev-level.html
    #var.log_level: 7
  ios:
    enabled: true
    # Set which input to use between syslog (default) or file.
    #var.input: syslog
    # The interface to listen to UDP based syslog traffic. Defaults to
    # localhost. Set to 0.0.0.0 to bind to all available interfaces.
    var.syslog_host: 0.0.0.0
    # The UDP port to listen for syslog traffic. Defaults to 9002.
    var.syslog_port: 9002
    # Set custom paths for the log files when using file input. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  nexus:
    enabled: false
    # Set which input to use between udp (default), tcp or file.
    #var.input: udp
    #var.syslog_host: localhost
    #var.syslog_port: 9506
    # Set paths for the log files when file input is used.
    #var.paths:
    # Toggle output of non-ECS fields (default true).
    #var.rsa_fields: true
    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    #var.tz_offset: local
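Before pointing a real device at the collector, you can sanity-check the path from host port 514 to the ios module with a hand-crafted syslog datagram. A minimal sketch, assuming a simplified IOS-style message format (the priority value, sequence number, and message text are made up for testing):

```python
import socket

def build_ios_syslog(pri: int, seq: int, msg: str) -> bytes:
    """Build a rough Cisco IOS-style syslog datagram:
    "<PRI>SEQ: message". This is a simplified approximation
    of the real IOS format, for pipeline testing only."""
    return f"<{pri}>{seq}: {msg}".encode()

def send_udp(payload: bytes, host: str, port: int = 514) -> None:
    """Fire a single UDP datagram at the collector (host:514,
    which docker-compose maps to Filebeat's port 9002)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))

if __name__ == "__main__":
    line = build_ios_syslog(
        189, 42,
        "*Sep  1 08:00:00.000: %SYS-5-CONFIG_I: Configured from console by admin")
    send_udp(line, "192.168.0.166")  # host IP used in this lab; adjust to yours
```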
In the directory containing docker-compose.yml, run:
docker-compose up -d
to deploy EFK, then open an interactive shell inside the filebeat container:
Inside the directory shown in the figure you can find the device types that can be monitored; simply run cp xxx.yml.disabled xxx.yml to activate the corresponding monitoring module.
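After docker-compose up -d, Elasticsearch can take a minute or two to become ready. A small sketch that polls its cluster-health endpoint on port 9200 until it reports yellow or green (the host IP is the lab address used in this article):

```python
import json
import time
import urllib.request

def health_url(host: str, port: int = 9200) -> str:
    """URL of Elasticsearch's _cluster/health endpoint."""
    return f"http://{host}:{port}/_cluster/health"

def wait_for_es(host: str, timeout: float = 120.0) -> str:
    """Poll until the cluster reports yellow or green; return the status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(health_url(host), timeout=5) as r:
                status = json.load(r).get("status", "")
                if status in ("yellow", "green"):
                    return status
        except OSError:
            pass  # container not up yet; keep polling
        time.sleep(3)
    raise TimeoutError("Elasticsearch did not become healthy in time")

if __name__ == "__main__":
    print(wait_for_es("192.168.0.166"))  # lab host IP; adjust to yours
```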
EFK log collection:
Step 1: Prepare a CSR1000v and configure its logging server address to the host's address, 192.168.0.166; the port stays at the default of 514 and needs no change.
Step 2: Browse to 192.168.0.166:5601 to view the log messages:
The logs are displayed in the format configured in Filebeat:
Step 3: Create an index pattern in Kibana Discover.
Create the Index pattern:
Once it is created, return to the Discover view to see the parsed log fields:
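The index pattern can also be created programmatically through Kibana's saved-objects HTTP API (present in Kibana 7.x; the endpoint path, the kbn-xsrf header requirement, and the payload shape below are this API's conventions, while the helper name is illustrative):

```python
import json
import urllib.request

def index_pattern_request(kibana: str, pattern: str,
                          time_field: str = "@timestamp") -> urllib.request.Request:
    """Build a POST to Kibana's saved-objects API that creates an
    index pattern. kibana: base URL, e.g. "http://192.168.0.166:5601".
    (Helper name is hypothetical; endpoint is the Kibana 7.x API.)"""
    body = json.dumps(
        {"attributes": {"title": pattern, "timeFieldName": time_field}}
    ).encode()
    return urllib.request.Request(
        f"{kibana}/api/saved_objects/index-pattern",
        data=body,
        headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = index_pattern_request("http://192.168.0.166:5601", "netdevops-ios-*")
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # 200 on success
```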
Step 4: Build a Dashboard in Kibana:
You can pick a pie chart, for example:
As a simple illustration, here we count logs by severity level:
Step 5 (optional): Export the dashboard and embed it into your own page.
After saving the dashboard, use Share to embed it into your own management system (make sure the management system can reach the EFK stack).
References:
Quick containerized ELK setup: https://www.bilibili.com/video/BV1N54y1m7zx (includes shared code).