Installing ELK with Docker

ELK Log Management

1. Why use ELK for log management? In a microservice architecture the systems are spread across many machines, each writing its logs locally. To track down an error you would have to log in to the servers and search them one by one; with 100 servers that means digging through 100 machines, and the day is over before you find the problem. ELK collects all the logs into one searchable place.

1.1 Installing ELK (with a docker-compose.yml, started via docker-compose):

Create the mount directories:

mkdir -p /mydata/logstash

mkdir -p /mydata/elasticsearch/data

mkdir -p /mydata/elasticsearch/plugins

chmod 777 /mydata/elasticsearch/data  # grant full permissions, otherwise Elasticsearch may fail to start with a permission error

cd /usr/soft/logs

vi docker-compose.yml

version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.4.0
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch" # set the cluster name to elasticsearch
      - "discovery.type=single-node" # start in single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # JVM heap size
    volumes:
      - /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins # plugin mount
      - /mydata/elasticsearch/data:/usr/share/elasticsearch/data # data mount
    ports:
      - 9200:9200
  kibana:
    image: kibana:6.4.0
    container_name: kibana
    links:
      - elasticsearch:es # the elasticsearch service is reachable under the hostname es
    depends_on:
      - elasticsearch # start kibana only after elasticsearch
    environment:
      - "elasticsearch.url=http://es:9200" # address kibana uses to reach elasticsearch (in 6.x this setting is elasticsearch.url)
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.4.0
    container_name: logstash
    volumes:
      - /mydata/logstash/upms-logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch # start logstash only after elasticsearch
    links:
      - elasticsearch:es # the elasticsearch service is reachable under the hostname es
    ports:    # expose ports 4560 through 4564
      - 4560:4560
      - 4561:4561
      - 4562:4562
      - 4563:4563
      - 4564:4564

1.2 Writing the Logstash collection config

Create upms-logstash.conf in the /mydata/logstash directory (the file that docker-compose mounts into the container):

input {
  tcp {
    add_field => { "service" => "upms" }
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
  tcp {
    add_field => { "service" => "auth" }
    mode => "server"
    host => "0.0.0.0"
    port => 4561
    codec => json_lines
  }
  tcp {
    add_field => { "service" => "customer" }
    mode => "server"
    host => "0.0.0.0"
    port => 4562
    codec => json_lines
  }
  tcp {
    add_field => { "service" => "employee" }
    mode => "server"
    host => "0.0.0.0"
    port => 4563
    codec => json_lines
  }
  tcp {
    add_field => { "service" => "employee_service" }
    mode => "server"
    host => "0.0.0.0"
    port => 4564
    codec => json_lines
  }
}
output {
  if [service] == "upms" {
    elasticsearch {
      hosts => "es:9200"
      index => "upms-logstash-%{+YYYY.MM.dd}"
    }
  }
  if [service] == "auth" {
    elasticsearch {
      hosts => "es:9200"
      index => "auth-logstash-%{+YYYY.MM.dd}"
    }
  }
  if [service] == "customer" {
    elasticsearch {
      hosts => "es:9200"
      index => "customer-logstash-%{+YYYY.MM.dd}"
    }
  }
  if [service] == "employee" {
    elasticsearch {
      hosts => "es:9200"
      index => "employee-logstash-%{+YYYY.MM.dd}"
    }
  }
  if [service] == "employee_service" {
    elasticsearch {
      hosts => "es:9200"
      index => "employee_service-logstash-%{+YYYY.MM.dd}"
    }
  }
}
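Each tcp input expects newline-delimited JSON (the json_lines codec). As a rough sanity check, a sketch like the following could hand-build one event and push it to the upms port. This is illustrative, not part of the original setup: the host IP is an assumption from the examples below, and Logstash must already be running for the send to succeed.

```python
import json
import socket

def make_event(message, level="INFO"):
    """Build one json_lines event: a JSON object terminated by a newline."""
    return json.dumps({"message": message, "level": level}) + "\n"

def send_event(host, port, event):
    """Send a single event to a Logstash tcp input; returns True on success."""
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(event.encode("utf-8"))
        return True
    except OSError:
        return False  # Logstash not reachable

if __name__ == "__main__":
    # 4560 is the upms port from upms-logstash.conf; the IP is assumed
    send_event("192.168.1.163", 4560, make_event("hello from upms"))
```

If the send succeeds, the event should appear in that day's upms-logstash index.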

1.3 Starting the ELK stack

   Run docker-compose up -d in the directory containing docker-compose.yml.

   Note: Elasticsearch can take several minutes to start; be patient.
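Since startup takes a while, a small polling loop saves guessing. This is an illustrative sketch, not part of the original setup; it assumes Elasticsearch answers on port 9200 of the host.

```python
import json
import time
import urllib.error
import urllib.request

def health_url(host, port=9200):
    """URL of the Elasticsearch cluster-health endpoint."""
    return f"http://{host}:{port}/_cluster/health"

def is_ready(body):
    """A single-node cluster is usable once its status is yellow or green."""
    return json.loads(body).get("status") in ("yellow", "green")

def wait_for_es(host, timeout=300, interval=5):
    """Poll cluster health until ready or timeout; returns True if ready."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(health_url(host), timeout=5) as resp:
                if is_ready(resp.read()):
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # host IP is assumed from the examples in this article
    print("ready" if wait_for_es("192.168.1.163") else "timed out")
```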

 1.4 Installing the json_lines codec plugin in Logstash

# enter the logstash container
docker exec -it logstash /bin/bash
# change to the bin directory
cd /bin/
# install the json_lines codec plugin
logstash-plugin install logstash-codec-json_lines
# leave the container
exit
# restart logstash
docker restart logstash

2. After installation

  2.1 Visit http://<host-ip>:9200 to check Elasticsearch, e.g. 192.168.1.163:9200.

  2.2 Visit http://<host-ip>:5601 to open Kibana on the host.

3. Integrating the pigx services with Logstash (using the UPMS module as an example)

3.1 Add the pom dependency

<!-- logstash integration -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>

   3.2 Add a new appender in logback-spring.xml

<!-- appender that ships logs to logstash -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- reachable logstash log-collection host and port -->
    <destination>192.168.0.31:4560</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH"/>
</root>
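Which Logstash port a service writes its logs to decides which index those logs land in, so the destination in each service's logback-spring.xml must match the mapping in upms-logstash.conf. As a sketch, the mapping and the daily index naming from the config above look like this (ports and names are taken from the config; the helper itself is illustrative):

```python
from datetime import date

# service -> Logstash TCP port, mirroring upms-logstash.conf
SERVICE_PORTS = {
    "upms": 4560,
    "auth": 4561,
    "customer": 4562,
    "employee": 4563,
    "employee_service": 4564,
}

def index_name(service, day=None):
    """Daily index written by the Logstash output, e.g. upms-logstash-2024.01.31."""
    day = day or date.today()
    return f"{service}-logstash-{day.strftime('%Y.%m.%d')}"
```

For example, the auth service would point its LOGSTASH appender at port 4561 and its logs would show up under auth-logstash-<date>.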

3.3 Start the pigx project and query the logs in Kibana.

 3.4 Create an index pattern in Kibana.

 3.5 Browse the log entries under Discover.

 3.6 Run queries in Dev Tools:

GET _search
{
  "query": {
    "match_all": {}
  }
}


# GET /my_index/my_type/_search


# search the .kibana index (holds saved objects such as index patterns)
GET /.kibana/doc/_search
{
  "query": {
    "match_all": {}
  }
}


# list indices
GET _cat/indices?v


# view all documents in an index (the usual quick check)
GET /auth-logstash*/_search?q=*




# query documents, keeping only hits with a relevance score of at least 1
GET /auth-logstash*/_search
{
  "min_score": 1,
  "query": {
    "match_all": {}
  }
}


# return every document in the index
GET /auth-logstash*/_search
{
  "query": {
    "match_all": {
    }
  }
}


# search, returning only port and service in _source
POST /auth-logstash*/_search
{
  "query": { "match_all": {} },
  "_source": ["port", "service"]
}

# search by field value
POST /auth-logstash*/_search
{
  "query": {
    "match": {
      "host": "192.168.1.102"
    }
  }
}
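The same queries can be sent to the REST API directly instead of through Dev Tools. A stdlib-only sketch, assuming the host IP and index pattern used above and a reachable Elasticsearch for the actual request:

```python
import json
import urllib.request

def build_search(query, source_fields=None):
    """Build a _search request body; optionally restrict returned _source fields."""
    body = {"query": query}
    if source_fields:
        body["_source"] = source_fields
    return json.dumps(body).encode("utf-8")

def search(host, index, body):
    """POST a search body to http://<host>:9200/<index>/_search and decode the hits."""
    req = urllib.request.Request(
        f"http://{host}:9200/{index}/_search",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # equivalent of the Dev Tools query above; host IP is assumed
    body = build_search({"match_all": {}}, ["port", "service"])
    print(search("192.168.1.163", "auth-logstash*", body))
```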
