Logstash: pulling messages from Kafka into Elasticsearch

Installing Logstash

Download page: https://www.elastic.co/cn/downloads/logstash

Installing Logstash from the binary package

#change to the workspace directory
cd /home/service/app

#download the package
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.13.3-linux-x86_64.tar.gz

#extract the package
tar -zxvf logstash-7.13.3-linux-x86_64.tar.gz
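
A quick sanity check after extracting is to print the version from the unpacked directory (the path below simply follows the workspace layout used above):

#verify the installation
/home/service/app/logstash-7.13.3/bin/logstash --version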

Starting Logstash

nohup /home/service/app/logstash-7.13.3/bin/logstash -f /home/service/app/logstash-7.13.3/config/platform-apisix.conf --config.reload.automatic &

-f specifies the pipeline configuration file

--config.reload.automatic makes Logstash reload the configuration automatically whenever the file changes
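
Because the process is started in the background with nohup, a simple way to confirm it is running and to follow its startup output (nohup writes to nohup.out in the directory the command was launched from, unless redirected):

#confirm the logstash process is running
ps -ef | grep logstash

#follow the startup output written by nohup
tail -f nohup.out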

Directory layout

bin contains the Logstash executables

config contains the configuration files

#directory layout
[root@10-144-131-19.platform-apisix.gzly app]# ll logstash-7.13.3
total 508
drwxr-xr-x 2 root root    4096 Jul 15 14:50 bin
drwxr-xr-x 2 root root    4096 Jul 15 15:01 config
-rw-r--r-- 1 root wheel   2276 Jul  2 19:29 CONTRIBUTORS
drwxr-xr-x 2 root wheel   4096 Jul  2 19:29 data
-rw-r--r-- 1 root wheel   4041 Jul  2 19:30 Gemfile
-rw-r--r-- 1 root wheel  23939 Jul  2 19:30 Gemfile.lock
drwxr-xr-x 9 root root    4096 Jul 15 14:50 jdk
drwxr-xr-x 6 root root    4096 Jul 15 14:50 lib
-rw-r--r-- 1 root wheel  13675 Jul  2 19:31 LICENSE.txt
drwxr-xr-x 4 root root    4096 Jul 15 14:50 logstash-core
drwxr-xr-x 3 root root    4096 Jul 15 14:50 logstash-core-plugin-api
drwxr-xr-x 4 root root    4096 Jul 15 14:50 modules
-rw-r--r-- 1 root wheel 424030 Jul  2 19:29 NOTICE.TXT
drwxr-xr-x 3 root root    4096 Jul 15 14:50 tools
drwxr-xr-x 4 root root    4096 Jul 15 14:50 vendor
drwxr-xr-x 9 root root    4096 Jul 15 14:50 x-pack

Pulling messages from Kafka into Elasticsearch

Edit the pipeline configuration file config/kafka-to-es.conf (Logstash pipeline files use Logstash's own config syntax rather than YAML)

Pulling messages from a single topic into ES

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics => ["filebeat-plugin-log"]
    type => "access-log"
  }
}

output {
  elasticsearch {
    hosts => ["http://es1:9200","http://es2:9200","http://es3:9200"]
    index => "access-log-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
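
Before starting (or reloading) Logstash, you can have it validate the pipeline syntax and exit; the path below assumes the config file created above:

#check the pipeline config for syntax errors without starting it
/home/service/app/logstash-7.13.3/bin/logstash -f /home/service/app/logstash-7.13.3/config/kafka-to-es.conf --config.test_and_exit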

Pulling data from multiple topics into different ES indices

Use an if condition on the type field to route each topic to its own index

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics => ["filebeat-plugin-log"]
    type => "access-log"
  }
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics => ["logstash-plugin-log"]
    type => "error-log"
  }
}

output {
  if [type] == "access-log" {
    elasticsearch {
      hosts => ["http://es1:9200","http://es2:9200","http://es3:9200"]
      index => "access-log-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "changeme"
    }
  }
  if [type] == "error-log" {
    elasticsearch {
      hosts => ["http://es1:9200","http://es2:9200","http://es3:9200"]
      index => "error-log-%{+YYYY.MM.dd}"
    }
  }
}
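
Once messages are flowing, the daily indices should appear in Elasticsearch. A quick way to confirm, using one of the hosts from the output block (the index patterns match the examples above):

#list the matching indices and their document counts
curl "http://es1:9200/_cat/indices/access-log-*,error-log-*?v"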

In the kafka input, if group_id is not configured it defaults to logstash, meaning the consumer group used to consume the topic is named logstash.
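
If several pipelines read from Kafka, or you want to scale consumers per topic, it is usually better to set group_id explicitly. A minimal sketch of the kafka input with an explicit consumer group (the group name, thread count, and offset policy below are illustrative values, not from the original setup):

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics => ["filebeat-plugin-log"]
    #consumer group for this pipeline; defaults to "logstash" when omitted
    group_id => "access-log-consumer"
    #number of consumer threads in this Logstash instance
    consumer_threads => 3
    #where to start when the group has no committed offset yet
    auto_offset_reset => "latest"
    type => "access-log"
  }
}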
