Deploying an Elasticsearch Cluster with Rancher
The previous post covered deploying a single-node Canal with Rancher (k8s): https://blog.csdn.net/iceliooo/article/details/113880448. This post continues the series by deploying an Elasticsearch cluster through Rancher (k8s).
Elasticsearch
Baidu Baike: https://baike.baidu.com/item/elasticsearch/3411206?fr=aladdin/
Deployment environment
[root@uat-master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
uat-master Ready controlplane,etcd,worker 41d v1.17.4 172.18.30.215 <none> CentOS Linux 7 (Core) 3.10.0-1062.18.1.el7.x86_64 docker://18.9.9
uat-w1 Ready worker 41d v1.17.4 172.18.30.216 <none> CentOS Linux 7 (Core) 3.10.0-1062.18.1.el7.x86_64 docker://18.9.9
uat-w2 Ready worker 40d v1.17.4 172.18.30.214 <none> CentOS Linux 7 (Core) 3.10.0-1062.18.1.el7.x86_64 docker://18.9.9
[root@uat-master ~]#
1 Prepare NFS
See Rancher NFS persistent storage deployment: https://blog.csdn.net/iceliooo/article/details/111461651
2 Deploy the Services
2.1 Headless service
A headless service is a Service without a ClusterIP; it has only a service name. Resolving that name returns the IPs of the backing Pods rather than a cluster IP for the Service, so anything that accesses the service reaches the Pods directly.
Each Pod also gets an in-cluster DNS name of the form $(podname).$(service name).$(namespace).svc.cluster.local
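To make the pattern concrete, the DNS name the first Pod in this article gets can be assembled like this (the pod, service, and namespace names match the manifests below; the naming pattern itself is standard Kubernetes behavior):

```shell
# In-cluster DNS name for pod es-cluster-0 behind the headless
# service es-cluster in namespace middleware:
POD=es-cluster-0
SVC=es-cluster
NS=middleware
echo "${POD}.${SVC}.${NS}.svc.cluster.local"
```

Any Pod in the cluster can resolve this name (e.g. with nslookup) and reach es-cluster-0 directly.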
2.2 Deploy the Services
Here we create the headless service es-cluster for the Pods, plus an es-cluster-client service that provides external access.
Port 9200 is used for the REST API and port 9300 for inter-node communication.
Deployment file es-cluster-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: es-cluster
  namespace: middleware
  labels:
    app: es-cluster
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 9200
    name: rest
    targetPort: 9200
  - port: 9300
    name: internal-node
    targetPort: 9300
  clusterIP: None
  selector:
    app: es
---
## expose ports outside the cluster so local applications can reach the ES service
apiVersion: v1
kind: Service
metadata:
  name: es-cluster-client
  namespace: middleware
  labels:
    app: es-cluster
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 9200
    name: rest
    targetPort: 9200
  - port: 9300
    name: internal-node
    targetPort: 9300
  selector:
    app: es
  type: NodePort
Check the Services:
Note: host ports 32224 and 31274 must be opened before external networks can reach the services.
[root@uat-master es]# kubectl get svc -n middleware |grep es-cluster
es-cluster ClusterIP None <none> 9200/TCP,9300/TCP 6h15m
es-cluster-client NodePort 10.43.232.32 <none> 9200:32224/TCP,9300:31274/TCP 6h15m
[root@uat-master es]#
3 Deploy the ES cluster
Next we create the actual Elasticsearch Pods with a StatefulSet.
Watch out for the "split-brain" problem in multi-node clusters. Split brain happens when one or more nodes cannot communicate with the others, and several master nodes may end up being elected. To avoid it, set discovery.zen.minimum_master_nodes = N/2 + 1, where N is the number of master-eligible nodes in the Elasticsearch cluster; with 3 nodes, discovery.zen.minimum_master_nodes = 2. That way, if one node is temporarily disconnected from the cluster, the other two can still elect a master and the cluster keeps running while the lost node tries to rejoin. Remember to revisit this parameter when scaling the Elasticsearch cluster.
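The quorum rule is plain integer arithmetic; a quick sanity check for a few cluster sizes:

```shell
# minimum_master_nodes = N/2 + 1 (integer division),
# where N is the number of master-eligible nodes
for N in 1 2 3 5; do
  echo "N=$N -> minimum_master_nodes=$(( N / 2 + 1 ))"
done
```

For the single-node deployment below the value is therefore 1, which is exactly what the StatefulSet sets.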
Deployment file es-cluster-statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: middleware
spec:
  selector:
    matchLabels:
      app: es # has to match .spec.template.metadata.labels
  serviceName: "es-cluster" # the headless service this StatefulSet belongs to
  replicas: 1 # only one node here to save resources
  template:
    metadata:
      labels:
        app: es
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
        resources:
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: internal-node
          protocol: TCP
        env:
        - name: cluster.name
          value: es-cluster
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        #- name: discovery.zen.ping.unicast.hosts
        #  value: single-node
        - name: discovery.zen.minimum_master_nodes
          value: "1"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        #- name: discovery.type
        #  value: single-node
        # allow cross-origin requests; "*" matches any origin
        - name: http.cors.enabled
          value: "true"
        - name: http.cors.allow-origin
          value: "*"
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
          subPath: data
        - name: data
          mountPath: /usr/share/elasticsearch/plugins
          subPath: plugins
      initContainers:
      - name: install-ik
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "cd /usr/share/elasticsearch/plugins && rm -rf ik && wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.8.12/elasticsearch-analysis-ik-6.8.12.zip && unzip elasticsearch-analysis-ik-6.8.12.zip -d ik && rm -rf elasticsearch-analysis-ik-6.8.12.zip"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/plugins
          subPath: plugins
      - name: fix-permission
        image: busybox
        imagePullPolicy: IfNotPresent
        # chown the NFS-backed dirs to uid/gid 1000 (the elasticsearch user in the image);
        # both paths are mounted here so the command can reach them
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data && chown -R 1000:1000 /usr/share/elasticsearch/plugins"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
          subPath: data
        - name: data
          mountPath: /usr/share/elasticsearch/plugins
          subPath: plugins
      - name: fix-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: fix-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
      namespace: middleware
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: managed-nfs-storage
      volumeMode: Filesystem
[root@uat-master es]# kubectl apply -f es-cluster-statefulset.yaml
statefulset.apps/es-cluster created
[root@uat-master es]# kubectl get pods -n middleware |grep es
es-cluster-0 1/1 Running 6 54m
[root@uat-master es]#
4 Access the ES cluster
4.1 Check cluster state from a browser
Here we use Chrome to access host port 32224 (mapped to 9200): http://ip:32224/_cluster/state?pretty
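The same check works from the command line. The sketch below assumes one of the node IPs from the environment listing (172.18.30.215) and the 9200 NodePort 32224 shown earlier; it prints the request it would make and only sends it when the endpoint is actually reachable:

```shell
ES="http://172.18.30.215:32224"   # any node IP plus the 9200 NodePort; adjust for your cluster
echo "GET $ES/_cluster/health?pretty"
# only send the request when the endpoint is reachable:
if curl -s --connect-timeout 2 -o /dev/null "$ES"; then
  curl -s "$ES/_cluster/health?pretty"
fi
```

A green or yellow status means the node is up; yellow is normal on a single node because replica shards cannot be allocated anywhere.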
4.2 Kibana
Deploying and using Kibana on k8s will be covered in a later post.
At this point the ik analyzer works correctly.
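One way to confirm the plugin actually loaded is the _analyze API. Below is a sketch of the request body (the sample text is just an illustration; ik_max_word is one of the analyzers the ik plugin registers):

```shell
# _analyze request body exercising the ik_max_word analyzer
BODY='{"analyzer": "ik_max_word", "text": "中华人民共和国"}'
echo "$BODY"
# POST it through the NodePort service from section 2.2 (replace the IP with one of your nodes):
#   curl -s -H 'Content-Type: application/json' -XPOST "http://172.18.30.215:32224/_analyze" -d "$BODY"
```

If the plugin is installed, the response lists the individual tokens instead of an "analyzer not found" error.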