ELK Series 9: Scaling Elasticsearch (from a single node to a single-machine pseudo-cluster)

Contents

1. Introduction

2. Upgrade procedure

2.1 Elasticsearch configuration

2.2 Logstash configuration

2.3 Supervisor configuration

2.4 Checking the result


1. Introduction

A single-machine pseudo-cluster simply means running several Elasticsearch nodes on the same server.

Some background first: the original environment had Elasticsearch, Kibana, and Logstash installed on one server, 192.168.0.15, with the services started by a supervisor daemon, i.e. a single machine running a single node. We now upgrade it to multiple nodes on the same machine.

 

2. Upgrade procedure

Check the running service processes:

supervisorctl status

First, stop all services:

supervisorctl stop all

Go into the Elasticsearch installation directory and make a copy of the program:

cd /data

[root@i-iehivbeb data]# cp elasticsearch-6.5.3 elasticsearch-6.5.3-node0 -r

[root@i-iehivbeb data]# ls
elasticsearch-6.5.3  elasticsearch-6.5.3-node0

 

2.1 Elasticsearch configuration

Enable certificate-based verification for node-to-node (transport) communication and generate the certificates.

Reference:

https://www.elastic.co/guide/en/elasticsearch/reference/6.5/configuring-tls.html#node-certificates

[root@i-iehivbeb data]# cd /data/elasticsearch-6.5.3-node0/

Run the following command.

# generate the CA certificate

bin/elasticsearch-certutil ca

The first prompt asks for the name of the generated CA file, the second for a password for that file; I pressed Enter for both, keeping the default name and no password.

Check that it has been generated.
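
If the default file name was kept, a quick ls in the installation directory should show it, something like:

[root@i-iehivbeb elasticsearch-6.5.3-node0]# ls *.p12
elastic-stack-ca.p12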

Next, generate the certificate used for intra-cluster communication, signed by that CA:

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

OK, both files have been generated:

elastic-certificates.p12  elastic-stack-ca.p12    

I like to keep these two files in their own directory. elastic-certificates.p12 is the one every cluster node uses for transport communication; the other file is not needed for now.

[root@i-iehivbeb elasticsearch-6.5.3-node0]# mkdir config/certs
[root@i-iehivbeb elasticsearch-6.5.3-node0]# mv elastic-certificates.p12  config/certs
[root@i-iehivbeb elasticsearch-6.5.3-node0]# mv elastic-stack-ca.p12 config/certs/
[root@i-iehivbeb elasticsearch-6.5.3-node0]# ls config/certs/
elastic-certificates.p12  elastic-stack-ca.p12

Looking back at the file permissions, everything is still owned by root.

Grant ownership of the whole Elasticsearch directory to the elasticsearch user:

[root@i-iehivbeb data]# chown -R elasticsearch:elasticsearch /data/elasticsearch-6.5.3-node0

Check again.
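
A quick way to verify, using the same paths as above; both should now show elasticsearch:elasticsearch as owner and group:

[root@i-iehivbeb data]# ls -ld /data/elasticsearch-6.5.3-node0
[root@i-iehivbeb data]# ls -l /data/elasticsearch-6.5.3-node0/config/certs/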

The original configuration:

cat /data/elasticsearch-6.5.3-node0/config/elasticsearch.yml

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
network.host: 0.0.0.0

The modified configuration:

cluster.name: es-cluster-dev
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
node.name: node-0
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.0.15:9300", "192.168.0.15:9301", "192.168.0.15:9302"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

Nodes with the same cluster.name belong to the same cluster. Because X-Pack security is enabled on the cluster, TLS must be configured for transport communication.

node.name is the node's name; it must be different for every node.

network.host controls which addresses the node binds to and listens on; 0.0.0.0 means all interfaces.

http.port is the port that clients and applications connect to.

transport.tcp.port is the port used for communication inside the cluster.

discovery.zen.ping.unicast.hosts is the list of addresses used for cluster discovery.

discovery.zen.minimum_master_nodes: at least 2 master-eligible nodes must be visible before a master is elected (see the quorum note after this list).
http.cors.enabled: whether to enable CORS (needed, for example, by the elasticsearch-head plugin).
http.cors.allow-origin: "*" places no restriction on the origin.
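
As a rule of thumb in Elasticsearch 6.x, discovery.zen.minimum_master_nodes should be set to a quorum of the master-eligible nodes, which is where the value 2 comes from:

minimum_master_nodes = (master_eligible_nodes / 2) + 1 = (3 / 2) + 1 = 2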

OK, the first node is configured. Now set up the other two nodes.

[root@i-iehivbeb data]# cp elasticsearch-6.5.3-node0 elasticsearch-6.5.3-node1 -r 
[root@i-iehivbeb data]# cp elasticsearch-6.5.3-node0 elasticsearch-6.5.3-node2 -r  

[root@i-iehivbeb data]# ll
total 254000
drwxr-xr-x  9 elasticsearch elasticsearch      4096 Dec 17 09:52 elasticsearch-6.5.3
drwxr-xr-x  9 elasticsearch elasticsearch      4096 May 15 10:25 elasticsearch-6.5.3-node0
drwxr-xr-x  9 root          root               4096 May 15 10:38 elasticsearch-6.5.3-node1
drwxr-xr-x  9 root          root               4096 May 15 10:39 elasticsearch-6.5.3-node2

Grant ownership to the elasticsearch user:

[root@i-iehivbeb data]# chown -R elasticsearch:elasticsearch elasticsearch-6.5.3-node0 elasticsearch-6.5.3-node1 elasticsearch-6.5.3-node2

[root@i-iehivbeb data]# ll
total 254000
drwxr-xr-x  9 elasticsearch elasticsearch      4096 Dec 17 09:52 elasticsearch-6.5.3
drwxr-xr-x  9 elasticsearch elasticsearch      4096 May 14 11:23 elasticsearch-6.5.3-node0
drwxr-xr-x  9 elasticsearch elasticsearch      4096 May 14 11:35 elasticsearch-6.5.3-node1
drwxr-xr-x  9 elasticsearch elasticsearch      4096 May 14 11:40 elasticsearch-6.5.3-node2

Modify the Elasticsearch configuration files of node1 and node2:

[root@i-iehivbeb data]# vim /data/elasticsearch-6.5.3-node1/config/elasticsearch.yml

cluster.name: es-cluster-dev
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
node.name: node-1
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9301
discovery.zen.ping.unicast.hosts: ["192.168.0.15:9300", "192.168.0.15:9301", "192.168.0.15:9302"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@i-iehivbeb data]# vim /data/elasticsearch-6.5.3-node2/config/elasticsearch.yml

cluster.name: es-cluster-dev
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
node.name: node-2
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9302
discovery.zen.ping.unicast.hosts: ["192.168.0.15:9300", "192.168.0.15:9301", "192.168.0.15:9302"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

Note: this step is important. Delete the data directories of the two copied nodes (node1 and node2). The copies still contain node-0's existing data (node metadata and cluster state), and if it is not removed the cluster will fail to start, with errors such as: failed to send join request to master

rm  /data/elasticsearch-6.5.3-node1/data/ -rf

rm  /data/elasticsearch-6.5.3-node2/data/ -rf

 

OK, the configuration of all three cluster nodes is done. Next, add the supervisor configurations and start Elasticsearch.

2.2 Logstash configuration

Since Elasticsearch is now a cluster, it is best to point log ingestion at the HTTP ports of all the nodes.

Change the hosts parameter to hosts => ["192.168.0.15:9200","192.168.0.15:9201","192.168.0.15:9202"]

Of course, pointing at a single node would still work.

Here is the full Logstash configuration:

# listen on port 5044 for input from Beats
input {
    beats {
        port => "5044"
    }
}
# data filtering
filter {
  if [logtype] == "otosaas_app_xingneng" {
    grok {
        match => { "message" => "%{DATA:logDate}\[\|\]%{DATA:requestId}\[\|\]%{DATA:appName}\[\|\]%{DATA:requestMethod}\[\|\]%{DATA:apiName}\[\|\]%{DATA:hostIP}\[\|\]%{DATA:hostPort}\[\|\]%{DATA:sourceIP}\[\|\]%{DATA:costTime}\[\|\]%{DATA:bizCode}\[\|\]" }
    }
    geoip {
        source => "clientip"
    }
  }

  if [logtype] == "otosaas_app_yunxing" {
    grok {
        match => { "message" => "%{DATA:logDate}\[\|\]%{DATA:requestId}\[\|\]%{DATA:appName}\[\|\]%{DATA:apiName}\[\|\]%{DATA:hostIP}\[\|\]%{DATA:sourceIP}\[\|\]%{DATA:requestParams}\[\|\]%{DATA:logType}\[\|\]%{DATA:logContent}\[\|\]" }
    }
    geoip {
        source => "clientip"
    }
  }

  if [logtype] == "otosaas_konglog" {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
  }
  # full-chain trace logs
  if [logtype] == "otosaas_qlllog" {
    grok {
        match => { "message" => '\[%{TIMESTAMP_ISO8601:timestamp}\] %{NUMBER:mesc} %{DATA:httphost} %{IPORHOST:client_ip} \"%{DATA:server_ip}\" %{NUMBER:request_status} %{BASE16FLOAT:request_time} %{NUMBER:bytes_sent} \"(?:HTTP/%{NUMBER:httpversion})\" \"%{WORD:request_method}\" \"%{DATA:request_uri}\" \"%{DATA:http_referer}\" \"(%{DATA:request_body}?|-)\" \"%{DATA:upstream_ip}\" (?:%{NUMBER:upstream_time}|-) %{QS:agent} %{QS:referrer} \"%{WORD:request_id}\" \"%{DATA:http_rid}\" \"%{DATA:http_lbid}\"' }
    }
    mutate{
        convert => {
         "request_time" => "float"
         "request_status" => "integer"
         "bytes_sent" => "integer"
         "httpversion" => "float"
        }
    }
    geoip {
        source => "clientip"
    }
  }
}
# output to the HTTP ports of the local Elasticsearch cluster nodes
output {
  if [logtype] == "otosaas_app_xingneng" {
    elasticsearch {
        user => "elastic"
        password => "your-password"
        hosts => ["192.168.0.15:9200","192.168.0.15:9201","192.168.0.15:9202"]
        index => "otosaas_app_xingneng-%{+YYYY.MM.dd}"
    }
  }
  if [logtype] == "otosaas_app_yunxing" {
    elasticsearch {
        user => "elastic"
        password => "your-password"
        hosts => ["192.168.0.15:9200","192.168.0.15:9201","192.168.0.15:9202"]
        index => "otosaas_app_yunxing-%{+YYYY.MM.dd}"
    }
  }
  if [logtype] == "otosaas_konglog" {
    elasticsearch {
        user => "elastic"
        password => "your-password"
        hosts => ["192.168.0.15:9200","192.168.0.15:9201","192.168.0.15:9202"]
        index => "otosaas_konglog-%{+YYYY.MM.dd}"
    }
  }
  if [logtype] == "otosaas_qlllog" {
    elasticsearch {
        user => "elastic"
        password => "your-password"
        hosts => ["192.168.0.15:9200","192.168.0.15:9201","192.168.0.15:9202"]
        index => "otosaas_qlllog-%{+YYYY.MM.dd}"
    }
  }
}
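
Before restarting Logstash it is worth validating the pipeline syntax. A minimal sketch, run from the Logstash installation directory; the path /etc/logstash/conf.d/beats.conf below is just an example, substitute wherever your pipeline file actually lives:

bin/logstash -f /etc/logstash/conf.d/beats.conf --config.test_and_exit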

2.3 Supervisor configuration

Add the supervisor configurations:

vim /etc/supervisord.d/es-cluster-node0.ini 

[program:es-cluster-node0]
command=/data/elasticsearch-6.5.3-node0/bin/elasticsearch
directory=/data/elasticsearch-6.5.3-node0/bin
user=elasticsearch
redirect_stderr=true
stdout_logfile=/data/elkrunlog/es-cluster-node0.log
autostart=true
autorestart=true
;startsecs=10000
;stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

vim /etc/supervisord.d/es-cluster-node1.ini  

[program:es-cluster-node1]
command=/data/elasticsearch-6.5.3-node1/bin/elasticsearch
directory=/data/elasticsearch-6.5.3-node1/bin
user=elasticsearch
redirect_stderr=true
stdout_logfile=/data/elkrunlog/es-cluster-node1.log
autostart=true
autorestart=true
;startsecs=10000
;stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

vim /etc/supervisord.d/es-cluster-node2.ini   

[program:es-cluster-node2]
command=/data/elasticsearch-6.5.3-node2/bin/elasticsearch
directory=/data/elasticsearch-6.5.3-node2/bin
user=elasticsearch
redirect_stderr=true
stdout_logfile=/data/elkrunlog/es-cluster-node2.log
autostart=true
autorestart=true
;startsecs=10000
;stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

[root@i-iehivbeb supervisord.d]# supervisorctl update
elasticsearch: stopped
elasticsearch: removed process group
es-cluster-node0: added process group
es-cluster-node1: added process group
es-cluster-node2: added process group

[root@i-iehivbeb data]# supervisorctl status
es-cluster-node0                 RUNNING   pid 23703, uptime 0:00:18
es-cluster-node1                 RUNNING   pid 23783, uptime 0:00:11
es-cluster-node2                 RUNNING   pid 23849, uptime 0:00:05

 

[root@i-iehivbeb data]# netstat  -tunlp|grep java |sort
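
With all three nodes listening, the cluster health API is a quick sanity check (use whatever password was set for the built-in elastic user):

curl -u elastic 'http://192.168.0.15:9200/_cluster/health?pretty'

A healthy cluster should report "number_of_nodes" : 3.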

2.4 Checking the result

Note: Kibana 6.5 cannot yet connect to an Elasticsearch cluster, only to a single node. A common approach is therefore to install Kibana on every Elasticsearch server, point each Kibana at its local node, and put a load balancer in front of the Kibanas. From 6.8 onward, Kibana can connect to the cluster directly by adding the following to kibana.yml:

elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
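
For 6.5 on this machine, by contrast, kibana.yml keeps pointing at a single node, roughly like this (the credentials shown are placeholders and depend on how security was set up):

elasticsearch.url: "http://192.168.0.15:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "your-password"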

Log in to Kibana and take a look.
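
For example, in Kibana's Dev Tools console, listing the nodes should now show all three:

GET _cat/nodes?v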

This completes the upgrade and expansion.
