ELK Deployment

1. First, make sure a usable yum repository is available

# Disable SELinux
setenforce 0
sed -i.bak 's@^SELINUX=.*@SELINUX=disabled@' /etc/selinux/config

# Disable the firewall
# CentOS 7
systemctl disable firewalld
systemctl stop firewalld

vim /etc/yum.repos.d/ELK.repo

[ELK]
name=ELK-Elasticstack
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-7.x/
gpgcheck=0
enabled=1
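
With the repo file in place, a quick cache refresh confirms yum can actually see it (a minimal check, using the [ELK] repo id defined above):

# rebuild the yum metadata cache and confirm the repo is listed
yum makecache
yum repolist | grep ELK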

2. Deploy Elasticsearch

1. Install Elasticsearch 7.13.3

The Elasticsearch version must be 7.13.3, so pin it when installing
yum install elasticsearch-7.13.3
 
Enable start on boot
systemctl daemon-reload
systemctl enable elasticsearch.service

# make sure the service user owns the installation directory
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch


Raise the memlock limits for the elasticsearch user
vim /etc/security/limits.conf
Add the following lines to the file:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
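
Note that the RPM install runs Elasticsearch as a systemd service, and systemd services do not read /etc/security/limits.conf. If you enable bootstrap.memory_lock, the limit is normally raised with a systemd drop-in instead (a sketch, assuming the stock elasticsearch.service unit):

# create a systemd override so the service may lock memory
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload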


2. Cluster setup

Modify the configuration file

vim /etc/elasticsearch/elasticsearch.yml

Settings to change

# Cluster name; every node in the cluster must use the same name
cluster.name: my-application
# Node name; must be unique per node
node.name: node-1
# Data path
path.data: /var/lib/elasticsearch/nodes
# Log path
path.logs: /var/log/elasticsearch

# HTTP port
http.port: 9200

# IPs of the other cluster nodes; internal IPs are fine
discovery.seed_hosts: ["10.12.27.230", "10.12.25.159"]

# node.name values of the initial master-eligible nodes; IP addresses also work
cluster.initial_master_nodes: ["node-1", "node-2"]
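
One caveat: by default Elasticsearch binds only to loopback, so the nodes listed above cannot reach each other. For the cluster to form, each node usually also needs network.host set; a sketch for the first node (keeping _local_ so the localhost curl below keeps working):

# bind to loopback plus this node's internal address
network.host: ["_local_", "10.12.27.230"]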

Start Elasticsearch on each node

# start
systemctl start elasticsearch.service


# list all nodes in the cluster
curl -X GET http://127.0.0.1:9200/_cat/nodes
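
The _cluster/health endpoint is another quick sanity check; a green or yellow status with the expected node count means the cluster has formed:

# check overall cluster health and node count
curl -X GET 'http://127.0.0.1:9200/_cluster/health?pretty'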

3. Deploy Kibana

1. Install Kibana

yum install kibana
# the version must match Elasticsearch

2. Configure Kibana

vim /etc/kibana/kibana.yml

server.port: 5601
# this machine's IP
server.host: "10.12.27.230"
# Elasticsearch addresses; they must all belong to the same cluster
elasticsearch.hosts: ["http://10.12.27.230:9200", "http://10.12.25.159:9200"]
kibana.index: ".kibana"
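
Kibana then needs to be enabled and started like the other services (the original jumps straight to the browser; this assumes the stock kibana.service unit from the RPM):

# enable start on boot and launch Kibana
systemctl daemon-reload
systemctl enable kibana.service
systemctl start kibana.service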

3. Access Kibana

http://10.12.27.230:5601/app/
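
Availability can also be checked from the shell before opening a browser (same host and port as configured above):

# Kibana reports its own state at /api/status
curl -s http://10.12.27.230:5601/api/status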

4. Deploy Logstash

1. Install Logstash

yum install logstash


# the version should ideally match Elasticsearch

2. Configure Logstash

vim /etc/logstash/logstash.yml

# bind the monitoring API to the local machine
http.host: "127.0.0.1"
http.port: 9600-9700

For the meaning of each setting, see logstash.yml | Logstash Reference [7.13] | Elastic


3. Add a Logstash pipeline configuration

Create a pipeline configuration that uses Kafka as the input

cd /etc/logstash/conf.d/
touch kafka.conf

input {
  kafka {
    codec => "json"
    topics => ["filebeat_df_base","filebeat_df_detail"]
    bootstrap_servers => "10.12.27.165:9092,10.12.26.165:9092,10.12.24.166:9092"
    auto_offset_reset => "latest"
    decorate_events => "basic"
  }
}


filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => ["message", "path", "host", "@version", "tags", "level", "agent", "ecs", "log", "pathline", "@timestamp", "input", "fields"]
  }
}


output {
  elasticsearch {
    index => "%{[@metadata][kafka][topic]}"
    hosts => ["127.0.0.1:9200"]
  }
}

Explanation

# input stage
input {
  # use Kafka as the input
  kafka {
    # decode each payload as JSON
    codec => "json"
    # topics to consume
    topics => ["filebeat_df_base","filebeat_df_detail"]
    # Kafka cluster
    bootstrap_servers => "10.12.27.165:9092,10.12.26.165:9092,10.12.24.166:9092"
    # offset policy: start consuming from the latest messages
    auto_offset_reset => "latest"
    # whether to capture Kafka metadata (off by default); needed here because
    # the Elasticsearch output below uses the topic to name the index
    decorate_events => "basic"
  }
}


filter {
  json {
    # parse the message field as JSON
    source => "message"
  }
  mutate {
    # drop fields that are not needed
    remove_field => ["message", "path", "host", "@version", "tags", "level", "agent", "ecs", "log", "pathline", "@timestamp", "input", "fields"]
  }
}


output {
  # write to Elasticsearch
  elasticsearch {
    # name the index after the Kafka topic each event came from
    index => "%{[@metadata][kafka][topic]}"
    # Elasticsearch cluster
    hosts => ["127.0.0.1:9200"]
  }
}

Test that the configuration file is valid

# enter the bin directory
cd /usr/share/logstash/bin


# test the configuration file
./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kafka.conf  --config.test_and_exit
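
If the file parses cleanly, Logstash prints Configuration OK before exiting; otherwise the error output points at the offending section.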

4. Start Logstash

# enable start on boot
systemctl daemon-reload
systemctl enable logstash.service


# start Logstash
systemctl start logstash.service
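
Since http.host and http.port were set in logstash.yml above, the local monitoring API can confirm the pipeline is running and consuming:

# pipeline event counts; in/out should grow once Kafka has traffic
curl -s 'http://127.0.0.1:9600/_node/stats/pipelines?pretty'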

5. Deploy Filebeat

1. Install Filebeat

yum install filebeat
# again, keep the version in line with Elasticsearch

2. Modify the Filebeat configuration file

cd /etc/filebeat/
vim filebeat.yml


filebeat.inputs:

- type: log
  # whether this input is enabled
  enabled: true
  # path to collect logs from
  paths:
    - /home/rong/www/cube_original/decision-engine/logs/df_base.log
  fields:
    log_topic: filebeat_df_base

- type: log
  # whether this input is enabled
  enabled: true
  # path to collect logs from
  paths:
    - /home/rong/www/cube_original/decision-engine/logs/df_detail.log
  fields:
    log_topic: filebeat_df_detail


output.kafka:
  # Kafka cluster
  hosts: ["10.12.27.165:9092", "10.12.26.165:9092", "10.12.24.166:9092"]

  # Kafka topic, taken from the log_topic value under fields
  topic: '%{[fields.log_topic]}'

  partition.round_robin:
    reachable_only: false

  required_acks: 1

  compression: gzip

  max_message_bytes: 1000000

  keep_alive: 10s
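
Filebeat ships with built-in checks, so both the configuration file and the Kafka connection can be verified before starting the service:

# validate the configuration file
filebeat test config
# verify Filebeat can reach the configured Kafka output
filebeat test output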



3. Start Filebeat

# enable start on boot
systemctl daemon-reload
systemctl enable filebeat.service


# start Filebeat
systemctl start filebeat.service
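
Once logs flow end to end, indices named after the Kafka topics should appear in Elasticsearch:

# the topic-named indices should show up with growing doc counts
curl 'http://127.0.0.1:9200/_cat/indices/filebeat_df_*?v'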
