First, setting aside the configuration file for the moment:
apiVersion: apps/v1
kind: StatefulSet  # fixed hostnames; use this for stateful services
# Caveat: if a pod is not in Running state, its hostname cannot be resolved,
# which creates a chicken-and-egg problem: when I used sed to substitute
# hostnames, a pod that was not yet Running could only resolve its own
# hostname, not the other pods', so for ZooKeeper I switched to using IPs.
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper  # so the three generated pods are named zookeeper-0, zookeeper-1, zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:  # required for a StatefulSet
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath:
          path: /var/log/zookeeper
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_NAME  # expose a built-in Kubernetes field; inside the pod, echo ${MY_POD_NAME} prints the hostname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper  # this is the cluster name: from any of the generated pods
  # you can ping zookeeper, so it acts as the cluster_name of the three pods.
  # The address returned varies from ping to ping; nslookup zookeeper returns
  # the pod IPs of all three pods, i.e. three records.
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None  # this line is required (it makes the Service headless)
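Besides the round-robin record on the Service name, a headless Service also gives every StatefulSet replica a stable per-pod DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A minimal sketch of the names the three replicas get, assuming the default namespace and the default cluster.local domain:

```shell
#!/bin/sh
# Build the stable per-pod DNS names that the headless Service "zookeeper"
# gives each StatefulSet replica. Namespace "default" and cluster domain
# "cluster.local" are assumptions matching a stock cluster.
SERVICE=zookeeper
NAMESPACE=default
REPLICAS=3

i=0
while [ "$i" -lt "$REPLICAS" ]; do
  echo "${SERVICE}-${i}.${SERVICE}.${NAMESPACE}.svc.cluster.local"
  i=$((i + 1))
done
```

These per-pod names only resolve once the pod is Running and ready, which is exactly the deadlock described in the StatefulSet comment above.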
[root@host5 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default zookeeper-0 1/1 Running 0 12m 192.168.55.69 host3
default zookeeper-1 1/1 Running 0 12m 192.168.31.93 host4
default zookeeper-2 1/1 Running 0 12m 192.168.55.70 host3
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve
Name: zookeeper
Address 1: 192.168.55.70 zo
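The IP-based workaround mentioned in the StatefulSet comment can be sketched as a small script that renders the server.N ensemble entries for zoo.cfg from a list of pod IPs. The IPs below are hard-coded from the kubectl output above purely for illustration; in a real init script they would be discovered at startup (e.g. via the API server or DNS):

```shell
#!/bin/sh
# Render ZooKeeper ensemble entries (server.N=ip:2888:3888) from pod IPs.
# The IPs are copied from the sample kubectl output above for illustration.
IPS="192.168.55.69 192.168.31.93 192.168.55.70"

id=1
for ip in $IPS; do
  echo "server.${id}=${ip}:2888:3888"
  id=$((id + 1))
done
```

The printed lines would be appended to zoo.cfg on each pod, with each pod also writing its own id (its ordinal + 1) into the myid file.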