I have recently been learning Docker and set up an ELK environment on CentOS 7, trying two ways of collecting logs with ELK + Filebeat.
The first way is to simply follow these references:
https://elk-docker.readthedocs.io/#installation
https://juejin.im/post/5ba4c8ef6fb9a05d082a1f53
The second way is to write a simple docker-compose file yourself and adjust the configuration files. The rest of this post walks through that.
1. Write docker-compose.yml

```yaml
version: '3'  # compose file format version; Elastic images: https://www.docker.elastic.co/#
services:
  elasticsearch01:  # service name (not the container name; avoid special characters — underscores have caused startup errors for me)
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}  # image to run
    container_name: elasticsearch01  # container name
    volumes:  # bind mounts
      - ./elasticsearch/logs/:/usr/share/logs/
      # - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"  # published ports, same effect as docker run -d -p 80:80
      - "9300:9300"
    #restart: "always"  # restart policy keeps the service running; recommended in production
    environment:  # environment variables passed into the container
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:  # attach to the named network
      - elk
  logstash_test:
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    container_name: logstash01
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:  # expresses the dependency so containers start in order
      - elasticsearch01
  kibana_test:
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    container_name: kibana01
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch01
networks:
  elk:
    driver: bridge
```
2. Create a .env file to pin ELK_VERSION; the latest release at the time of writing (2019-02-22) is 6.6.1.

```
ELK_VERSION=6.6.1
```
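docker-compose reads the .env file in the same directory and substitutes ${ELK_VERSION} into every image reference in the compose file. A minimal sketch of the substitution, using Python's string.Template as a stand-in for compose's variable expansion:

```python
from string import Template

# docker-compose substitutes variables from .env into the compose file;
# the effect on an image reference is equivalent to this:
image = Template("docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}")
print(image.substitute(ELK_VERSION="6.6.1"))
# docker.elastic.co/elasticsearch/elasticsearch:6.6.1
```

Upgrading the whole stack to a new release is then a one-line change in .env.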
3. Edit the configuration files

kibana.yml:

```yaml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch01:9200
```
logstash.yml:

```yaml
http.host: "0.0.0.0"
```
logstash.conf is shown below (see method one for details); Filebeat ships logs to Logstash over the beats input. One pitfall: using the host's IP address in the elasticsearch hosts entry failed with [Manticore::SocketException] No route to host (Host unreachable). Use the service name instead — containers on the same compose network resolve each other by name.

```
input {
  beats {
    port => 5044
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch01:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
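The index option uses Logstash's sprintf format: %{[@metadata][beat]} expands to the shipper's name and %{+YYYY.MM.dd} to the event's @timestamp date in Joda-style format, giving one index per beat per day. A rough Python sketch of how a concrete index name is produced (strftime stands in for the Joda pattern):

```python
from datetime import datetime

# Sketch of Logstash's "%{[@metadata][beat]}-%{+YYYY.MM.dd}" expansion:
# beat name plus the event's @timestamp date.
def index_name(beat: str, timestamp: datetime) -> str:
    return f"{beat}-{timestamp.strftime('%Y.%m.%d')}"

print(index_name("filebeat", datetime(2019, 2, 22)))
# filebeat-2019.02.22
```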
elasticsearch.yml:

```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
```
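With Zen discovery (Elasticsearch 6.x), minimum_master_nodes should be set to a quorum of the master-eligible nodes, (N / 2) + 1, to avoid split-brain; for the single node this compose file starts, that is 1. A quick sketch of the rule:

```python
# Quorum rule for discovery.zen.minimum_master_nodes in Elasticsearch 6.x:
# a majority of master-eligible nodes, i.e. (N // 2) + 1.
def minimum_master_nodes(master_eligible: int) -> int:
    return master_eligible // 2 + 1

print(minimum_master_nodes(1))  # 1 — the single-node case in this post
print(minimum_master_nodes(3))  # 2
```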
4. Install Filebeat

I did not run Filebeat in Docker; following the link in method one, I downloaded the latest release directly from the official download page (www.elastic.co/downloads/b…) and installed it.
filebeat.yml is configured as below. After editing it, restart Filebeat and watch its log with tail -f /var/log/filebeat/filebeat to confirm events are flowing.
```yaml
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/lib/docker/containers/*/*.log

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  multiline.pattern: ^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after
  multiline.match: after
  multiline.max_lines: 1000
  multiline.timeout: 30s

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["172.28.104.235:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.28.104.235:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
```
enabled: since Filebeat 6.0 inputs default to disabled; this must be set to true.
paths: the log files you want to collect and analyze.
multiline: without this merging step, long entries such as Java stack traces or multi-line XML output get truncated or split into several events.
pattern: the regular expression that marks the start of an event (here, lines beginning with a timestamp such as 2017-11-15 08:04:23:889); lines that do not match are merged into the previous event.
Comment out the Elasticsearch output and enable the Logstash output.
hosts: the IP address of the machine running the ELK stack.
To ship logs straight to Elasticsearch, edit the Elasticsearch output section.
To ship logs to Logstash, edit the Logstash output section.
Only one output may be active at a time; comment out the others.
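The multiline settings can be sanity-checked offline: the pattern marks lines that start a new event, and with negate: true / match: after every non-matching line is appended to the previous event. A small Python check using the same regex:

```python
import re

# The multiline.pattern from filebeat.yml: an event starts with a date
# such as 2017-11-15; with negate: true and match: after, every line
# that does NOT match is merged into the preceding matching line.
pattern = re.compile(r"^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})")

lines = [
    "2017-11-15 08:04:23:889 ERROR something failed",   # starts a new event
    "java.lang.NullPointerException",                   # merged into previous
    "    at com.example.Foo.bar(Foo.java:42)",          # merged into previous
    "2017-11-15 08:04:24:001 INFO recovered",           # starts a new event
]
for line in lines:
    print(bool(pattern.match(line)), line)
```

The first and last lines print True (new events); the two stack-trace lines print False, so Filebeat folds them into the preceding ERROR event.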