ELK Cluster Setup

ELK downloads (past releases):

https://www.elastic.co/cn/downloads/past-releases

 

ELK version compatibility:

Elasticsearch 6.5.4 is not compatible with Kibana 7.9.3 (Kibana's major version must match Elasticsearch's).

Configuring Elasticsearch in Kibana:

Versions before 7.x only accept a single Elasticsearch URL:
elasticsearch.url: "http://192.168.1.1:9200"

7.x and later accept a list of cluster addresses:

elasticsearch.hosts: ["http://172.16.4.241:9000","http://172.16.4.123:9000","http://172.16.4.59:9000"]

 

Download the IK analyzer (the plugin version must exactly match your Elasticsearch version):
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.10.1/elasticsearch-analysis-ik-7.10.1.zip

Elasticsearch's default analyzer is the Standard analyzer.
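Once the plugin is unzipped into plugins/ik and the node restarted, the _analyze API can confirm IK is active. A minimal sketch of the request body follows; "ik_max_word" is the fine-grained analyzer IK registers (it also adds "ik_smart"), and the node address is taken from this document's cluster. Sending the request is shown only as a commented curl, since it needs a running node:

```python
import json

# Request body for Elasticsearch's _analyze API. With the IK plugin
# installed, "ik_max_word" tokenizes Chinese text at fine granularity.
payload = {"analyzer": "ik_max_word", "text": "中华人民共和国"}

# Equivalent curl against one node of this document's cluster:
#   curl -H 'Content-Type: application/json' \
#        'http://172.16.4.123:9000/_analyze' -d '{"analyzer":"ik_max_word","text":"..."}'
print(json.dumps(payload, ensure_ascii=False))
```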

 

Elasticsearch cluster setup:

Three servers are required: node-1, node-2, node-3

# Start as a daemon

bin/elasticsearch -d

 

# node-1 configuration (config/elasticsearch.yml):
cluster.name: warf
node.name: node-1
node.master: true
node.data: true
path.data: /data/elk/elasticsearch-6.5.4/data
path.logs: /data/elk/elasticsearch-6.5.4/logs
network.host: 172.16.4.123
bootstrap.memory_lock: false
http.port: 9000
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9800
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["172.16.4.241", "172.16.4.59","172.16.4.123"]
discovery.zen.minimum_master_nodes: 2

 

# node-2 configuration:
cluster.name: warf
node.name: node-2
node.master: true
node.data: true
path.data: /data/elk/elasticsearch-6.5.4/data
path.logs: /data/elk/elasticsearch-6.5.4/logs
network.host: 172.16.4.59
bootstrap.memory_lock: false
http.port: 9000
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9800
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["172.16.4.241", "172.16.4.59","172.16.4.123"]
discovery.zen.minimum_master_nodes: 2

 

# node-3 configuration:
cluster.name: warf
node.name: node-3
node.master: true
node.data: true
path.data: /data/elk/elasticsearch-6.5.4/data
path.logs: /data/elk/elasticsearch-6.5.4/logs
network.host: 172.16.4.241
bootstrap.memory_lock: false
http.port: 9000
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9800
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["172.16.4.241", "172.16.4.59","172.16.4.123"]
discovery.zen.minimum_master_nodes: 2
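All three nodes set discovery.zen.minimum_master_nodes: 2. In pre-7.x Elasticsearch this must be a majority of the master-eligible nodes, i.e. (master_eligible / 2) + 1, or the cluster risks split brain. A quick sketch of the formula (illustration only, not part of Elasticsearch):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum rule used by pre-7.x Elasticsearch Zen discovery:
    a majority of master-eligible nodes must agree on a master."""
    return master_eligible // 2 + 1

# Three master-eligible nodes, as in this cluster -> quorum of 2,
# matching discovery.zen.minimum_master_nodes: 2 above.
print(minimum_master_nodes(3))
```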

 

Elasticsearch Head setup:

1. Edit the hostname and port in Gruntfile.js

connect: {
        server: {
                options: {
                        hostname: '172.16.4.59',
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}

2. Edit _site/app.js

     Set the IP to any node in the ES cluster:

     this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://172.16.4.59:9000";

3. Start the head plugin:

     /elasticsearch-head/node_modules/grunt/bin/grunt server &

 

Kibana standalone setup

1. Edit config/kibana.yml

server.port: 5601
server.host: "172.16.4.54"
server.name: "warf_kibana"
elasticsearch.hosts: ["http://172.16.4.241:9000","http://172.16.4.123:9000","http://172.16.4.59:9000"]
elasticsearch.requestTimeout: 99999  # milliseconds

2. Start

./kibana &

 

Filebeat:

1. Create filebeat-dashboard.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /itcast/logs/*.log
setup.template.settings:
  index.number_of_shards: 3
output.logstash:
  hosts: ["192.168.40.133:5044"]

2. Start
./filebeat -e -c filebeat-dashboard.yml
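The Logstash filter further down expects pipe-delimited log lines with the user id, visit type, and date at positions 1 to 3. A hypothetical generator for such test logs follows; the exact field layout and the leading "APP" tag are assumptions inferred from the filter config, so adjust them to your real log format:

```python
import time

def make_log_line(user_id: int, visit: str) -> str:
    """Build a pipe-delimited line in the layout the Logstash filter
    expects: <tag>|<userId>|<visit>|<date>. The leading "APP" tag is a
    placeholder -- index 0 is unused by the filter."""
    date = time.strftime("%Y-%m-%d %H:%M:%S")
    return f"APP|{user_id}|{visit}|{date}"

line = make_log_line(1001, "login")
print(line)  # e.g. APP|1001|login|2024-01-01 12:00:00
```

Writing such lines into files under /itcast/logs/ (the path Filebeat watches above) will feed them through the whole pipeline.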

 

Logstash:

1. Create logstash-dashboard.conf

input {
    beats {
        port => "5044"
    }
}

filter {
    mutate {
        split => {"message"=>"|"}
    }
    mutate {
        add_field => {
            "userId" => "%{[message][1]}"
            "visit" => "%{[message][2]}"
            "date" => "%{[message][3]}"
        }
    }
    mutate {
        convert => {
            "userId" => "integer"
            "visit" => "string"
            "date" => "string"
        }
    }
}

output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => [ "http://172.16.4.241:9000","http://172.16.4.123:9000","http://172.16.4.59:9000" ]
    }
}

 

2. Start

./bin/logstash -f logstash-dashboard.conf &
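For reference, the mutate filter above is equivalent to the following plain-Python transformation of a single event. Field names and positions are taken directly from the config; this is a sketch of the data flow, not how Logstash runs internally:

```python
def apply_filter(event: dict) -> dict:
    """Mirror the mutate filter: split "message" on "|", copy
    positions 1-3 into named fields, and convert userId to int."""
    parts = event["message"].split("|")
    event["message"] = parts              # mutate split replaces the field in place
    event["userId"] = int(parts[1])       # convert => "integer"
    event["visit"] = parts[2]             # convert => "string" is a no-op here
    event["date"] = parts[3]
    return event

event = apply_filter({"message": "APP|1001|login|2024-01-01"})
print(event["userId"], event["visit"], event["date"])
# 1001 login 2024-01-01
```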

 

 

 

 
