ELK (1): Building a Log Platform with ELK 5.x and logback

I. Background

Once the production web servers sit behind a load-balanced cluster, digging through the logs of every Tomcat instance becomes a chore. ELK gives us a fast way to search all of those logs in one place. This post covers a self-hosted setup; a later post will cover Alibaba Cloud's Log Service.

II. Environment

CentOS 7, JDK 8, and ELK 5.0.0 (that is: Elasticsearch 5.0.0, Logstash 5.0.0, Kibana 5.0.0). The same steps also work for Elasticsearch 6.2.0 (personally verified).

III. Notes

  • ELK requires JDK 8 or later.

  • Stop the firewall (or open the required ports): systemctl stop firewalld.service

  • Do not start Elasticsearch as root. Elasticsearch refuses to run as root for security reasons, so create a dedicated user for it.

  • Past releases can be downloaded from: https://www.elastic.co/downloads/past-releases

IV. Installing Elasticsearch

1. Log in to CentOS as root, create an elk group, create an elk user in that group, and set the user's password:

    [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# groupadd elk
    [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# useradd -g elk elk
    [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# passwd elk
    Changing password for user elk.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.

2. Go to /usr/local, create an elk directory, and give the elk user ownership of it. Then download Elasticsearch, either from the official site or directly with wget (used here):

    cd /usr/local
    mkdir elk
    chown -R elk:elk /usr/local/elk   # grant ownership to the elk user
    cd elk
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.0.tar.gz
    tar -zxvf elasticsearch-5.0.0.tar.gz   # unpack elasticsearch

3. As root, go to /usr/local/elk/elasticsearch-5.0.0/config, edit elasticsearch.yml, and change the following settings:

    cluster.name: nmtx-cluster                # this node's name within the cluster
    node.name: node-1                         # name of this node
    path.data: /data/elk/elasticsearch-data   # where ES stores its data
    path.logs: /data/elk/elasticsearch-logs   # where ES writes its logs
    network.host: 0.0.0.0
    http.port: 9200                           # HTTP port to expose

Create /data/elk and give the elk user ownership of it:

    mkdir -p /data/elk
    chown -R elk:elk /data/elk

4. As root, edit /etc/sysctl.conf, append vm.max_map_count=262144 as the last line, and apply it with sysctl -p /etc/sysctl.conf. Without this setting, Elasticsearch fails to start with:

    max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
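
If you manage several hosts, it can be handy to check the file before reloading. The helper below is only an illustrative sketch (not part of the original setup): it scans sysctl.conf-style text for the vm.max_map_count line and compares it against the 262144 threshold used above.

```python
import re

def max_map_count_ok(sysctl_text, required=262144):
    """Return True if the text sets vm.max_map_count to at least `required`."""
    # Accept "vm.max_map_count=262144" as well as spaced "vm.max_map_count = 262144".
    m = re.search(r"^\s*vm\.max_map_count\s*=\s*(\d+)\s*$", sysctl_text, re.MULTILINE)
    return m is not None and int(m.group(1)) >= required

# The exact line appended to /etc/sysctl.conf above:
print(max_map_count_ok("vm.max_map_count=262144"))  # True
```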

5. As root, edit /etc/security/limits.conf and add (or adjust) the following lines:

    * soft nproc 65536
    * hard nproc 65536
    * soft nofile 65536
    * hard nofile 65536

Without these limits, startup fails with:

    max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

6. Switch to the elk user and start Elasticsearch (the -d flag runs it in the background; with steps 4 and 5 in place it should start without trouble):

    /usr/local/elk/elasticsearch-5.0.0/bin/elasticsearch -d

7. Verify that it started by running:

    curl -XGET 'localhost:9200/?pretty'

Startup succeeded if you see something like:

    {
      "name" : "node-1",
      "cluster_name" : "nmtx-cluster",
      "cluster_uuid" : "WdX1nqBPQJCPQniObzbUiQ",
      "version" : {
        "number" : "5.0.0",
        "build_hash" : "253032b",
        "build_date" : "2016-10-26T04:37:51.531Z",
        "build_snapshot" : false,
        "lucene_version" : "6.2.0"
      },
      "tagline" : "You Know, for Search"
    }

Alternatively, open 192.168.1.66:9200 in a browser.

Note that version 6.x bundles the x-pack plugin, so the same request may be rejected:

    [elk@localhost bin]$ curl -XGET 'localhost:9200/?pretty'
    {
      "error" : {
        "root_cause" : [
          {
            "type" : "security_exception",
            "reason" : "missing authentication token for REST request [/?pretty]",
            "header" : {
              "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
            }
          }
        ],
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      },
      "status" : 401
    }

In that case, pass the default username and password with the request:

    [elk@localhost bin]$ curl --user elastic:changeme -XGET 'localhost:9200/_cat/health?v&pretty'
    epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1556865276 06:34:36 my-application green 1 1 1 1 0 0 0 0 - 100.0%
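
The `_cat/health?v` output is a whitespace-separated table (header row plus data row), so scripting a status check is straightforward. Here is a minimal Python sketch of parsing it; the sample text is the output shown above, and the function name is my own:

```python
def cat_health_status(cat_output):
    """Extract the `status` column from `_cat/health?v` output (header + one data row)."""
    lines = [ln for ln in cat_output.strip().splitlines() if ln.strip()]
    header = lines[0].split()   # column names: epoch, timestamp, cluster, status, ...
    row = lines[1].split()      # values for this cluster
    return row[header.index("status")]

sample = (
    "epoch timestamp cluster status node.total node.data shards pri relo init "
    "unassign pending_tasks max_task_wait_time active_shards_percent\n"
    "1556865276 06:34:36 my-application green 1 1 1 1 0 0 0 0 - 100.0%"
)
print(cat_health_status(sample))  # green
```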

V. Installing Logstash

1. Download Logstash 5.0.0, again with wget, and unpack it:

    cd /usr/local/elk
    wget https://artifacts.elastic.co/downloads/logstash/logstash-5.0.0.tar.gz
    tar -zxvf logstash-5.0.0.tar.gz

2. Create a new file /usr/local/elk/logstash-5.0.0/config/logstash.conf with the following content:

    input {
      tcp {
        port => 4567
        mode => "server"
        codec => json_lines
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["192.168.1.66:9200"]
        index => "operation-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }

A logstash config file must contain three sections:

  • input{}: collects the logs. Logstash can read them from files or redis, or open a port that the systems producing the logs write to directly; the last option is used here.

  • filter{}: filters the collected logs and defines which fields to expose after filtering.

  • output{}: ships the filtered logs to elasticsearch, a file, redis, and so on. In production you can remove the stdout block to stop events being printed to the console.
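
The `%{+YYYY.MM.dd}` part of the `index` setting is a date pattern that Logstash expands from each event's timestamp, so events land in one index per day. As a quick sanity check, the name a given day maps to can be sketched in Python; the `operation-` prefix is the one from the config above:

```python
from datetime import date

def daily_index(day, prefix="operation-"):
    """Mirror the index name produced by index => "operation-%{+YYYY.MM.dd}"."""
    # Logstash's +YYYY.MM.dd corresponds to strftime's %Y.%m.%d here.
    return prefix + day.strftime("%Y.%m.%d")

print(daily_index(date(2018, 3, 11)))  # operation-2018.03.11
```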

3. Start Logstash in the foreground first, to confirm that it comes up cleanly:

    /usr/local/elk/logstash-5.0.0/bin/logstash -f /usr/local/elk/logstash-5.0.0/config/logstash.conf

It started successfully when the console shows output like:

    Sending Logstash logs to /elk/logstash-5.0.0/logs which is now configured via log4j2.properties.
    [2018-03-11T12:12:14,588][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:4567"}
    [2018-03-11T12:12:14,892][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
    [2018-03-11T12:12:14,894][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
    [2018-03-11T12:12:15,425][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    [2018-03-11T12:12:15,445][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
    [2018-03-11T12:12:15,729][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
    [2018-03-11T12:12:15,732][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
    [2018-03-11T12:12:15,735][INFO ][logstash.pipeline ] Pipeline main started
    [2018-03-11T12:12:15,768][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Press Ctrl+C to stop Logstash, then restart it in the background:

    nohup /usr/local/elk/logstash-5.0.0/bin/logstash -f /usr/local/elk/logstash-5.0.0/config/logstash.conf > /data/elk/logstash-log.file 2>&1 &

VI. Installing Kibana

1. Download and unpack Kibana with wget:

    cd /usr/local/elk
    wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.0-linux-x86_64.tar.gz
    tar zxvf kibana-5.0.0-linux-x86_64.tar.gz

2. Edit /usr/local/elk/kibana-5.0.0-linux-x86_64/config/kibana.yml and set:

    server.port: 5601
    server.host: "192.168.1.66"
    elasticsearch.url: "http://192.168.1.66:9200"
    kibana.index: ".kibana"

3. Start Kibana in the background:

    nohup /usr/local/elk/kibana-5.0.0-linux-x86_64/bin/kibana > /data/elk/kibana-log.file 2>&1 &

4. Open x.x.x.x:5601 in a browser and configure the index pattern to match (for example, operation-*).

VII. Writing logs to Logstash with logback

1. Create a new Spring Boot project and add the logstash-logback-encoder dependency:

    <!-- logstash -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.11</version>
    </dependency>

2. Add a logback.xml file with the following content:

    <!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
    <configuration scan="true" scanPeriod="10 seconds">
      <include resource="org/springframework/boot/logging/logback/base.xml" />
      <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${LOG_PATH}/info.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${LOG_PATH}/info-%d{yyyyMMdd}.log.%i</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>500MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
          <maxHistory>2</maxHistory>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} -%msg%n</Pattern>
        </layout>
      </appender>
      <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>ERROR</level>
        </filter>
        <File>${LOG_PATH}/error.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${LOG_PATH}/error-%d{yyyyMMdd}.log.%i</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>500MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
          <maxHistory>2</maxHistory>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} -%msg%n</Pattern>
        </layout>
      </appender>
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <!-- logstash host and port -->
      <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.1.66:4567</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
      </appender>
      <root level="INFO">
        <appender-ref ref="INFO_FILE" />
        <appender-ref ref="ERROR_FILE" />
        <appender-ref ref="STDOUT" />
        <!-- send logs to logstash -->
        <appender-ref ref="LOGSTASH" />
      </root>
      <logger name="org.springframework.boot" level="INFO"/>
    </configuration>

3. In application.properties, tell Spring Boot to use the logback.xml file we just added:

    # logging
    logging.config=classpath:logback.xml
    logging.path=/data/springboot-log

4. Create a test class:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.junit4.SpringRunner;

    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class LogTest {

        public static final Logger logger = LoggerFactory.getLogger(LogTest.class);

        @Test
        public void testLog() {
            logger.info("==66666=={}", "this is an info-level log");
            logger.error("==66666=={}", "this is an error-level log");
        }
    }

Open 192.168.1.66:5601 in a browser, choose Discover in the left-hand menu, and search for '66666' (note: single quotes give a fuzzy match, double quotes an exact match). The entries written by LogTest should come back, which confirms that logback is now shipping logs into logstash.
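
Under the hood, LogstashTcpSocketAppender simply writes one JSON object per line to the TCP port that the `json_lines` codec is listening on. The sketch below emulates that wire format in Python so you can probe the pipeline without a Spring Boot app; the field names approximate LogstashEncoder's defaults (treat the exact set as an assumption), and the host and port are the ones configured above:

```python
import json
import socket
from datetime import datetime, timezone

def make_event(message, level="INFO", logger_name="LogTest"):
    """Build an event roughly in LogstashEncoder's default shape (an approximation)."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "@version": "1",
        "message": message,
        "logger_name": logger_name,
        "level": level,
    }

def send_event(event, host="192.168.1.66", port=4567):
    """Ship one event as a single JSON line, which the json_lines codec expects."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

event = make_event("==66666==a test log line")
print(json.dumps(event))
```

After running `send_event(event)` against a live Logstash, the same '66666' search in Kibana should also find this hand-crafted event.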
