ELK + Kafka Log Collection

Scenario

Use Kafka to collect logs, have Logstash consume the Kafka messages and write them into Elasticsearch, then browse the results in Kibana (app → Kafka → Logstash → Elasticsearch → Kibana).

Directory layout

.
├── data
│   ├── es
│   │   ├── data
│   │   │   └── nodes
│   │   └── plugins
│   ├── kibana
│   │   └── data
│   │       └── uuid
│   └── logstash
│       └── config
│           └── logstash.conf
└── docker-compose.yml
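
A minimal sketch for creating this layout up front. The chown is an assumption based on the official Elasticsearch and Kibana images running as UID 1000, which needs write access to the mounted data directories:

$ mkdir -p data/es/data data/es/plugins data/kibana/data data/logstash/config
$ sudo chown -R 1000:1000 data/es data/kibana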

The docker-compose.yml file

version: '3.1'
services:  
   zk:
      image: zookeeper
      container_name: zk
      ports:
        - 2181:2181
      volumes:
        - "./zoo/data:/data"
        - "./zoo/datalog:/datalog"
      environment:
        ZOO_MY_ID: 1
        ZOO_SERVERS: server.1=zk:2888:3888;2181   ## must match this service's name (zk)
   kafka:
      image: wurstmeister/kafka
      container_name: kafka
      ports:
        - "9092:9092"
      environment:
        KAFKA_BROKER_ID: 1
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.xx.x:9092    ## host machine IP
        KAFKA_ADVERTISED_HOST_NAME: 192.168.xx.x
        KAFKA_ADVERTISED_PORT: 9092
        KAFKA_ZOOKEEPER_CONNECT: "zk:2181"
      volumes:
        - "./kafka/data/:/kafka"
   elasticsearch:    
      image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
      container_name: elasticsearch
      environment:
         - discovery.type=single-node
         - http.port=9200
         - http.cors.enabled=true
         - http.cors.allow-origin=*
         - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
         - http.cors.allow-credentials=false
         - bootstrap.memory_lock=true
         - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
         - TZ=Asia/Shanghai    
      volumes:      
         - $PWD/data/es/plugins:/usr/share/elasticsearch/plugins
         - $PWD/data/es/data:/usr/share/elasticsearch/data
      ports:      
         - 9200:9200
         - 9300:9300
   kibana:    
      image: docker.elastic.co/kibana/kibana:7.10.1
      container_name: kibana
      links:
         - elasticsearch:es    
      depends_on:      
         - elasticsearch  
      environment:      
         - ELASTICSEARCH_HOSTS=http://es:9200   ## the Kibana image reads settings from upper-case env vars
         - TZ=Asia/Shanghai    
      ports:      
         - 5601:5601
      volumes:
#         - $PWD/data/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
         - $PWD/data/kibana/data:/usr/share/kibana/data
   logstash:    
      image: docker.elastic.co/logstash/logstash:7.10.1
      container_name: logstash
      environment:
         - TZ=Asia/Shanghai
      volumes:
         - $PWD/data/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf 
         # - $PWD/data/logstash/tpl:/usr/share/logstash/config/tpl 
      depends_on:
         - elasticsearch     
         - kafka
      links:      
         - elasticsearch:es   
      ports:      
         - 9600:9600
         - 5044:5044
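
Two host-side prerequisites worth checking (standard Elasticsearch requirements, not specific to this setup): the kernel setting vm.max_map_count must be at least 262144, and bootstrap.memory_lock=true only takes effect if the container is allowed to lock memory, e.g. via a ulimits: memlock stanza on the elasticsearch service.

$ sudo sysctl -w vm.max_map_count=262144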

The logstash.conf pipeline file

input {
  # consume from Kafka
  kafka {
    bootstrap_servers => ["192.168.xx.x:9092"]
    group_id => "kafka_elk_group"
    topics => ["kafka_elk_log"]
    auto_offset_reset => "earliest"
    codec => "plain"
  }
}
filter {

}
output {
  # write to Elasticsearch
  elasticsearch {
    hosts => "192.168.xx.x:9200"
    index => "kafka_elk_log-%{+YYYY.MM.dd}"
    codec => "plain"
  }
  # also print to the console
  stdout { codec => rubydebug }
}
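
A quick way to sanity-check the pipeline syntax inside the running container (assuming the container name and mount path from the compose file above; --path.data points at an unused directory so the test run does not clash with the live instance's data directory):

$ docker exec logstash bin/logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit --path.data /tmp/logstash-test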

Start the containers
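
Bring the whole stack up in the background, then verify:

$ docker-compose up -d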

$ docker ps
CONTAINER ID   IMAGE                                                  PORTS                                  NAMES
5a17a2602172   docker.elastic.co/logstash/logstash:7.10.1             :::5044->5044/tcp, :::9600->9600/tcp   logstash
3b9acb045908   zookeeper                                              :::2181->2181/tcp, 8080/tcp            zk
c29acecd5444   wurstmeister/kafka                                     :::9092->9092/tcp                      kafka
a7fdde8ee4ce   docker.elastic.co/kibana/kibana:7.10.1                 :::5601->5601/tcp                      kibana
4416852d2815   docker.elastic.co/elasticsearch/elasticsearch:7.10.1   :::9200->9200/tcp, :::9300->9300/tcp   elasticsearch
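
Once everything is up, it is worth confirming that Elasticsearch responds before testing the pipeline:

$ curl http://localhost:9200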

Produce a test message with Kafka
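
The bash-5.1# prompts below come from a shell inside the Kafka container; /opt/kafka/bin is where the wurstmeister image keeps the scripts (an assumption that may vary across image versions):

$ docker exec -it kafka bash
bash-5.1# cd /opt/kafka/bin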


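# list consumer groups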
bash-5.1# ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
kafka_elk_group

# describe the consumer group
bash-5.1# ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group kafka_elk_group --describe

GROUP            TOPIC          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID          HOST         CLIENT-ID
kafka_elk_group  kafka_elk_log  0          0               0               0    logstash-0-bd4d3309  /172.25.0.1  logstash-0

# produce a message
bash-5.1# ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic kafka_elk_log
>hello kafka
>

Check the Logstash console output:
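
$ docker logs -f logstash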

{
          "message" => "hello kafka",
         "@version" => "1",
       "@timestamp" => 2022-04-28T15:17:39.655Z
}

## If index creation fails, run this in Elasticsearch to enable automatic index creation
PUT /_cluster/settings
{
    "persistent" : {
        "action": {
          "auto_create_index": "true"
        }
    }
}
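
The snippet above is Kibana Dev Tools syntax; the same setting can be applied from the shell:

$ curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{"persistent":{"action":{"auto_create_index":"true"}}}'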

Query Elasticsearch from Kibana

The Kibana UI is available at localhost:5601.

Check the index that was just created.

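You can also confirm from the shell that the daily index exists:

$ curl "http://localhost:9200/_cat/indices?v"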

Create an index pattern in Kibana under [Stack Management] > Index Patterns.


Query the index in Discover.

If no data shows up, set the time range to Today.

Good luck!
