We should not expect Kubernetes Pods to be robust; rather, we should assume that the containers in a Pod are likely to fail and die for all sorts of reasons. Controllers such as Deployment keep the application as a whole healthy by dynamically creating and destroying Pods. In other words, Pods are fragile, but the application is robust.
Every Pod has its own IP address. When a controller replaces a failed Pod with a new one, the new Pod is assigned a new IP. This raises a problem:
If a group of Pods provides a service (HTTP, for example), their IPs are likely to change — so how does a client find and access that service?
Kubernetes' answer is the Service.
Creating a Service
A Kubernetes Service logically represents a group of Pods; exactly which Pods is determined by labels. A Service has its own IP, and that IP does not change. Clients only need to access the Service's IP; Kubernetes takes care of establishing and maintaining the mapping between the Service and its Pods. No matter how the backend Pods change, clients are unaffected, because the Service itself stays the same.
[root@master deployment]# vim httpd-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deploy
  labels:
    run: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apache
  template:
    metadata:
      labels:
        run: apache
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
Apply the httpd-deploy.yml file:
[root@master deployment]# kubectl apply -f httpd-deploy.yml
deployment.apps/httpd-deploy created
kubectl get pod shows that each Pod has been assigned an IP. These IPs are only reachable from inside the cluster, i.e. from the Kubernetes nodes and containers:
[root@master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
httpd-deploy-6cbfc784b4-5sjdp 1/1 Running 0 4m2s 10.244.4.155 node2 <none>
httpd-deploy-6cbfc784b4-hgccs 1/1 Running 0 4m2s 10.244.0.20 master <none>
httpd-deploy-6cbfc784b4-qn6k8 1/1 Running 0 4m2s 10.244.3.127 node1 <none>
[root@master deployment]# curl 10.244.4.155
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 10.244.0.20
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 10.244.3.127
<html><body><h1>It works!</h1></body></html>
On the master host, write a YAML manifest, create the Service from it, and then check the Service's status:
[root@master deployment]# vim httpd-ser.yml
apiVersion: v1
kind: Service
metadata:
  name: httpd-ser
spec:
  selector:
    run: apache
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
selector must match the labels on the Pods created by the httpd Deployment, i.e. run: apache
protocol specifies the TCP protocol
port specifies the virtual port the Service listens on
targetPort specifies the port exposed by the Deployment's Pods
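A side note, not part of the example above: if a Service ever exposes more than one port, Kubernetes requires every entry under ports to carry a name. A hypothetical sketch, assuming an extra metrics port on the Pods:

```yaml
# Hypothetical multi-port variant of httpd-ser: with more than one
# ports entry, each one must be named.
apiVersion: v1
kind: Service
metadata:
  name: httpd-ser
spec:
  selector:
    run: apache
  ports:
  - name: http          # required once there are multiple ports
    protocol: TCP
    port: 8000
    targetPort: 80
  - name: metrics       # assumed second port, for illustration only
    protocol: TCP
    port: 9000
    targetPort: 9090
```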
After creating the Service with kubectl apply -f httpd-ser.yml, inspect it in detail:
[root@master deployment]# kubectl describe service httpd-ser
Name: httpd-ser
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"httpd-ser","namespace":"default"},"spec
Selector: run=apache
Type: ClusterIP
IP: 10.96.34.32
Port: <unset> 8000/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.20:80,10.244.3.127:80,10.244.4.155:80
Session Affinity: None
Events: <none>
The Endpoints field lists exactly the IPs of our three Pods.
Access the Service IP:
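Kubernetes tracks this Pod list in a separate Endpoints object that always has the same name as the Service and is updated automatically as Pods come and go. A rough sketch of what that object looks like here (addresses taken from the describe output above):

```yaml
# Illustrative sketch of the auto-managed Endpoints object for httpd-ser;
# you never write this by hand when the Service has a selector.
apiVersion: v1
kind: Endpoints
metadata:
  name: httpd-ser      # matches the Service name
subsets:
- addresses:
  - ip: 10.244.0.20
  - ip: 10.244.3.127
  - ip: 10.244.4.155
  ports:
  - port: 80
    protocol: TCP
```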
[root@master deployment]# curl 10.96.34.32:8000
<html><body><h1>It works!</h1></body></html>
Accessing the Service via DNS
[root@master deployment]# kubectl get deployments.apps -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 5d19h
Pods in the cluster can access the Service by its name plus namespace:
[root@master deployment]# kubectl run -it --image=busybox:1.28 sh
Inside the pod, wget can fetch the page:
/ # wget httpd-ser.default:8000
Connecting to httpd-ser.default:8000 (10.96.34.32:8000)
index.html 100% |***************************************************| 45 0:00:00 ETA
View the downloaded file:
/ # cat index.html
<html><body><h1>It works!</h1></body></html>
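The kubectl run one-liner above is roughly equivalent to applying a Pod manifest like the following (the pod name sh comes from the command). Note that inside the pod the Service also resolves under its fully qualified name httpd-ser.default.svc.cluster.local — the general pattern is <service>.<namespace>.svc.cluster.local.

```yaml
# Rough equivalent of "kubectl run -it --image=busybox:1.28 sh"
apiVersion: v1
kind: Pod
metadata:
  name: sh
spec:
  containers:
  - name: sh
    image: busybox:1.28
    stdin: true     # -i
    tty: true       # -t
```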
Accessing the Service from outside the cluster:
Add type: NodePort under spec in httpd-ser.yml:
[root@master deployment]# vim httpd-ser.yml
apiVersion: v1
kind: Service
metadata:
  name: httpd-ser
spec:
  type: NodePort
  selector:
    run: apache
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
Re-apply the httpd-ser.yml file, then check:
[root@master deployment]# kubectl get svc httpd-ser
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-ser NodePort 10.96.34.32 <none> 8000:31864/TCP 49m
The type has changed to NodePort. Each node now listens on port 31864; a request arriving there is forwarded to port 8000 of the cluster IP 10.96.34.32 and from there on to the Pods, following the mapping described above.
If you choose the node port yourself, it must lie in the range 30000-32767; other ports are not allowed.
Test:
[root@master deployment]# curl 192.168.3.100:31864
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 192.168.3.150:31864
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 192.168.3.200:31864
<html><body><h1>It works!</h1></body></html>
The page is now reachable on port 31864 at any of the three nodes' IPs.
Specifying the node port yourself
[root@master deployment]# vim httpd-ser.yml
apiVersion: v1
kind: Service
metadata:
  name: httpd-ser
spec:
  type: NodePort
  selector:
    run: apache
  ports:
  - protocol: TCP
    nodePort: 30000
    port: 8000
    targetPort: 80
nodePort: 30000 is the port opened on each node
port: 8000 is the Service's own port
targetPort: 80 is the Pod's port
Re-apply:
kubectl apply -f httpd-ser.yml
Test:
[root@master deployment]# curl 192.168.3.100:30000
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 192.168.3.150:30000
<html><body><h1>It works!</h1></body></html>
[root@master deployment]# curl 192.168.3.200:30000
<html><body><h1>It works!</h1></body></html>