Elasticsearch Cluster Installation and Configuration
2017-08-15 景峯 Netkiller
This article is excerpted from 《Netkiller Database 手札》. Author: netkiller. Website: http://www.netkiller.cn
23.1.2. Elasticsearch Cluster
Cluster mode requires two or more nodes: typically one master node and several data nodes.

First install elasticsearch on every node, then edit each node's configuration file. With 5.5.1 there is no need to specify which nodes act as master nodes and which as data nodes; by default every node is master-eligible and stores data.
curl -s https://raw.githubusercontent.com/oscm/shell/master/search/elasticsearch/elasticsearch-5.x.sh | bash
Configuration file:
cluster.name: elasticsearch-cluster    # Cluster name; must be identical on every server
node.name: node-1                      # Unique identifier per node; only this line changes from node to node: node-1, node-2, node-3 ...
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["172.16.0.20", "172.16.0.21", "172.16.0.22"]    # List the IP addresses of all nodes here
discovery.zen.minimum_master_nodes: 2  # Quorum of master-eligible nodes: (N / 2) + 1, so 2 for this three-node cluster
http.cors.enabled: true
http.cors.allow-origin: "*"
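The Elasticsearch documentation recommends setting discovery.zen.minimum_master_nodes to a quorum of the master-eligible nodes, (N / 2) + 1, so that a partitioned cluster cannot elect two masters at once (split-brain). A minimal sketch of that calculation (the helper name is mine, not part of Elasticsearch):

```python
def minimum_master_nodes(master_eligible_nodes: int) -> int:
    """Quorum recommended by the Elasticsearch docs: (N / 2) + 1,
    using integer division. With fewer nodes reachable than this,
    no master can be elected, which prevents split-brain."""
    return master_eligible_nodes // 2 + 1

# For the three-node cluster configured above:
print(minimum_master_nodes(3))  # 2
```

Note that for a two-node cluster the quorum is also 2, so losing either node makes the cluster unavailable for writes; three master-eligible nodes is the practical minimum for high availability.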
Check node status with the curl tool:

curl 'http://localhost:9200/_nodes/process?pretty'
root@netkiller /var/log/elasticsearch % curl 'http://localhost:9200/_nodes/process?pretty'
{
  "_nodes" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "cluster_name" : "my-application",
  "nodes" : {
    "-lnKCmBXRpiwExLns0jc9g" : {
      "name" : "node-1",
      "transport_address" : "10.104.3.2:9300",
      "host" : "10.104.3.2",
      "ip" : "10.104.3.2",
      "version" : "5.5.1",
      "build_hash" : "19c13d0",
      "roles" : [ "master", "data", "ingest" ],
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 23669,
        "mlockall" : false
      }
    },
    "WVsgYi2HT8GWnZU1kUwFwA" : {
      "name" : "node-2",
      "transport_address" : "10.186.7.221:9300",
      "host" : "10.186.7.221",
      "ip" : "10.186.7.221",
      "version" : "5.5.1",
      "build_hash" : "19c13d0",
      "roles" : [ "master", "data", "ingest" ],
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 12641,
        "mlockall" : false
      }
    }
  }
}
After a node starts, it creates a log file named after cluster.name.
The node that starts first becomes the master:
[2017-08-11T17:42:46,018][INFO ][o.e.c.s.ClusterService ] [node-1] new_master {node-1}{-lnKCmBXRpiwExLns0jc9g}{rZcJDIynSzq2Td3yP2kN5A}{10.104.3.2}{10.104.3.2:9300}, added {{node-2}{WVsgYi2HT8GWnZU1kUwFwA}{X13ShUpAQa2zA1Mgcsm3bQ}{10.186.7.221}{10.186.7.221:9300},}, reason: zen-disco-elected-as-master ([1] nodes joined)[{node-2}{WVsgYi2HT8GWnZU1kUwFwA}{X13ShUpAQa2zA1Mgcsm3bQ}{10.186.7.221}{10.186.7.221:9300}]
If the master fails, one of the remaining nodes takes over:
[2017-08-11T17:44:52,797][INFO ][o.e.c.s.ClusterService ] [node-2] master {new {node-2}{WVsgYi2HT8GWnZU1kUwFwA}{vl8kQx8sQdGVVohrNQnZOQ}{10.186.7.221}{10.186.7.221:9300}}, removed {{node-1}{-lnKCmBXRpiwExLns0jc9g}{rZcJDIynSzq2Td3yP2kN5A}{10.104.3.2}{10.104.3.2:9300},}, added {{node-1}{-lnKCmBXRpiwExLns0jc9g}{odnoG9kpQpeX1ltx5KYTSw}{10.104.3.2}{10.104.3.2:9300},}, reason: zen-disco-elected-as-master ([1] nodes joined)[{node-1}{-lnKCmBXRpiwExLns0jc9g}{odnoG9kpQpeX1ltx5KYTSw}{10.104.3.2}{10.104.3.2:9300}]
[2017-08-11T17:44:53,184][INFO ][o.e.c.r.DelayedAllocationService] [node-2] scheduling reroute for delayed shards in [59.5s] (11 delayed shards)
[2017-08-11T17:44:53,929][INFO ][o.e.c.r.a.AllocationService] [node-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[information][0]] ...]).
When the old master comes back online, it detects the new master and rejoins:
[2017-08-11T17:44:52,855][INFO ][o.e.c.s.ClusterService ] [node-1] detected_master {node-2}{WVsgYi2HT8GWnZU1kUwFwA}{vl8kQx8sQdGVVohrNQnZOQ}{10.186.7.221}{10.186.7.221:9300}, added {{node-2}{WVsgYi2HT8GWnZU1kUwFwA}{vl8kQx8sQdGVVohrNQnZOQ}{10.186.7.221}{10.186.7.221:9300},}, reason: zen-disco-receive(from master [master {node-2}{WVsgYi2HT8GWnZU1kUwFwA}{vl8kQx8sQdGVVohrNQnZOQ}{10.186.7.221}{10.186.7.221:9300} committed version [44]])
23.1.3. Load Balancing Configuration
Because elasticsearch has no built-in user authentication, it is normally accessed only from the internal network. If you expose it publicly, you need to add authentication in front of it, for example with nginx.
$ printf "john:$(openssl passwd -crypt s3cr3t)\n" > /etc/nginx/passwords
Create the nginx configuration file /etc/nginx/conf.d/elasticsearch.conf:
upstream elasticsearch {
    server 172.16.0.10:9200;
    server 172.16.0.20:9200;
    server 172.16.0.30:9200;
    keepalive 15;
}

server {
    listen 9200;

    auth_basic "Protected Elasticsearch";
    auth_basic_user_file passwords;

    location ~* ^(/_cluster|/_nodes) {
        return 403;
        break;
    }

    location / {
        if ($request_filename ~ _shutdown) {
            return 403;
            break;
        }

        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
Repeat the request below a few times; you will eventually see total_opened reach the connection count configured in nginx (keepalive 15 above):
$ curl 'http://john:s3cr3t@localhost:9200/_nodes/stats/http?pretty' | grep total_opened
# "total_opened" : 15
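curl translates the user:password@ portion of the URL into an HTTP Basic Authorization header, which nginx's auth_basic checks against the passwords file. A sketch of how that header is formed, using the john / s3cr3t credentials created earlier (stdlib only; the helper name is mine):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP Basic auth: base64-encode "user:password" and prefix with "Basic ".
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("john", "s3cr3t"))  # Basic am9objpzM2NyM3Q=
```

Note the credentials are only base64-encoded, not encrypted, so this scheme should be combined with TLS when exposed to the public internet.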
23.2.1. elasticsearch-analysis-ik
Install the plugin:
root@netkiller ~ % /usr/share/elasticsearch/bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.5.1/elasticsearch-analysis-ik-5.5.1.zip
-> Downloading https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.5.1/elasticsearch-analysis-ik-5.5.1.zip
[=================================================] 100%
-> Installed analysis-ik
Create a mapping that uses the ik_max_word analyzer:

curl -XPOST http://localhost:9200/index/fulltext/_mapping -d'
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "ik_max_word",
      "search_analyzer": "ik_max_word"
    }
  }
}'
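When submitting a mapping from application code rather than the shell, building the body with a JSON library avoids the quoting mistakes that are easy to make in curl one-liners. A minimal sketch (field and analyzer names taken from the request above):

```python
import json

# Same mapping as the curl request above, built as a Python dict.
mapping = {
    "properties": {
        "content": {
            "type": "text",
            "analyzer": "ik_max_word",         # index-time analyzer
            "search_analyzer": "ik_max_word",  # query-time analyzer
        }
    }
}

# Serialize to the JSON body that would be POSTed to /index/fulltext/_mapping.
body = json.dumps(mapping)
print(body)
```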
23.3. Node Management
23.3.1. Listing Indices
root@netkiller ~ % curl 'http://localhost:9200/_cat/indices?v'
health status index        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   information  oygxIi-dR1eB9NoIZtJrxQ   5   1         45           42      731kb          731kb
green  open   .kibana      9jBBaOomTO2EakZlZqnE-g   1   1          5            1     62.5kb         31.2kb
green  open   logstash-api WHXZhn3vRWiuVbhR8rGoEg   5   1        565            0      3.8mb          1.9mb
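The _cat APIs return whitespace-delimited text, and with ?v the first line is a header, so the listing can be parsed into records without any JSON handling. A sketch using the output above (stdlib only; the `raw` sample is pasted from that listing):

```python
# Sample taken from the _cat/indices?v output above.
raw = """health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open information oygxIi-dR1eB9NoIZtJrxQ 5 1 45 42 731kb 731kb
green open .kibana 9jBBaOomTO2EakZlZqnE-g 1 1 5 1 62.5kb 31.2kb
green open logstash-api WHXZhn3vRWiuVbhR8rGoEg 5 1 565 0 3.8mb 1.9mb"""

lines = raw.splitlines()
header = lines[0].split()
# Zip each data row against the header to get one dict per index.
rows = [dict(zip(header, line.split())) for line in lines[1:]]

# Indices whose replicas are not fully allocated show up as yellow.
yellow = [r["index"] for r in rows if r["health"] == "yellow"]
print(yellow)  # ['information']
```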
23.3.2. Cluster Health Status
root@netkiller ~ % curl 'http://localhost:9200/_cat/health?v'
epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1502445967 18:06:07  my-application yellow          2         2     17  11    0    0        5             0                  -                 77.3%
root@netkiller ~ % curl 'http://localhost:9200/_cluster/health' {"cluster_name":"my-application","status":"yellow","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":11,"active_shards":17,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":5,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":77.27272727272727}
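In this output, active_shards_percent_as_number is simply active shards over the total number of shards the cluster should have (here active plus unassigned plus initializing). A quick check against the numbers above:

```python
# Figures from the /_cluster/health response above.
active, unassigned, initializing = 17, 5, 0

# Total shards the cluster expects to allocate.
total = active + unassigned + initializing

percent = active / total * 100
print(round(percent, 2))  # 77.27
```

This matches the 77.27272727272727 reported by the API and the 77.3% shown by _cat/health; the 5 unassigned shards are the replicas that cannot be placed, which is why the cluster is yellow rather than green.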
23.3.3. Node HTTP Status
root@VM_3_2_centos ~ % curl 'localhost:9200/_nodes/stats/http?pretty'
{
  "_nodes" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "cluster_name" : "my-application",
  "nodes" : {
    "-lnKCmBXRpiwExLns0jc9g" : {
      "timestamp" : 1502446878773,
      "name" : "node-1",
      "transport_address" : "10.104.3.2:9300",
      "host" : "10.104.3.2",
      "ip" : "10.104.3.2:9300",
      "roles" : [ "master", "data", "ingest" ],
      "http" : {
        "current_open" : 4,
        "total_opened" : 29
      }
    },
    "WVsgYi2HT8GWnZU1kUwFwA" : {
      "timestamp" : 1502446878782,
      "name" : "node-2",
      "transport_address" : "10.186.7.221:9300",
      "host" : "10.186.7.221",
      "ip" : "10.186.7.221:9300",
      "roles" : [ "master", "data", "ingest" ],
      "http" : {
        "current_open" : 0,
        "total_opened" : 2
      }
    }
  }
}
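When monitoring HTTP connections cluster-wide, the per-node figures can be tallied from the _nodes/stats/http response. A sketch using a trimmed sample of the output above (stdlib only; no live cluster needed):

```python
import json

# Trimmed sample of a _nodes/stats/http response, with the structure
# and values from the output shown above.
raw = """
{
  "cluster_name": "my-application",
  "nodes": {
    "-lnKCmBXRpiwExLns0jc9g": {"name": "node-1", "http": {"current_open": 4, "total_opened": 29}},
    "WVsgYi2HT8GWnZU1kUwFwA": {"name": "node-2", "http": {"current_open": 0, "total_opened": 2}}
  }
}
"""

stats = json.loads(raw)

# Sum total_opened over every node in the cluster.
total = sum(node["http"]["total_opened"] for node in stats["nodes"].values())
print(total)  # 31
```

In a real deployment the `raw` string would come from an HTTP request to the cluster, e.g. via urllib or the elasticsearch client library.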