kubernetes
- 1. Service
- 2. Services are implemented jointly by the kube-proxy component and iptables
- 3. Access from inside the cluster
- 4. IPVS-mode Services let a K8s cluster support a larger number of Pods
- 5. How Flannel vxlan mode communicates across hosts
- 6. External access
- 7. Kubernetes provides a DNS add-on Service, accessed via name resolution
- 8. Headless Services
- 9. Names still resolve after a rolling update of Pods
- 10. LoadBalancer-type Services
- 11. A third way to access from outside: ExternalName (Pods reaching resources outside the cluster)
- 12. A Service can be assigned a public IP
- 13. Ingress controllers
- 14. Letting ingress-nginx reach internal services
1. Service
A Service can be seen as the external access interface for a group of Pods that provide the same service. With a Service, applications get service discovery and load balancing with little effort.
By default a Service only provides layer-4 load balancing; it has no layer-7 features (those can be added with an Ingress).
Service types:
ClusterIP: the default. A virtual IP assigned automatically by Kubernetes, reachable only from inside the cluster.
NodePort: exposes the Service on a port of each Node; a request to any NodeIP:nodePort is routed to the ClusterIP.
LoadBalancer: builds on NodePort; a cloud provider creates an external load balancer that forwards requests to NodeIP:NodePort. Only usable on cloud platforms.
ExternalName: forwards the service to a given domain name via a DNS CNAME record (set with spec.externalName).
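As a minimal sketch of the ExternalName type described above (the Service name and target domain here are made-up placeholders, not from these notes):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-external-db            # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com    # CNAME target; placeholder domain
```

Pods that look up my-external-db inside the cluster receive a CNAME pointing at db.example.com; no proxying or ClusterIP is involved.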
2. Services are implemented jointly by the kube-proxy component and iptables
When kube-proxy implements Services through iptables, it must maintain a large number of iptables rules on each host. With many Pods, constantly refreshing these rules consumes a lot of CPU.
IPVS-mode Services let a K8s cluster support a larger number of Pods.
Enable kube-proxy's IPVS mode:
# yum install -y ipvsadm                        # install on all nodes
$ kubectl edit cm kube-proxy -n kube-system     # switch to IPVS mode
mode: "ipvs"
$ kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'    # recreate the kube-proxy pods
3. Access from inside the cluster
[kubeadm@server2 manifest]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        ports:
        - containerPort: 80
[kubeadm@server2 manifest]$ cat service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
[kubeadm@server2 manifest]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5c58fb7c46-fm54f 1/1 Running 0 15m 10.244.1.20 server3 <none> <none>
nginx-deployment-5c58fb7c46-qxqbr 1/1 Running 0 15m 10.244.2.22 server4 <none> <none>
[kubeadm@server2 manifest]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
myservice ClusterIP 10.106.59.243 <none> 80/TCP 3m18s
[kubeadm@server2 manifest]$ kubectl describe svc myservice
Name: myservice
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx
Type: ClusterIP
IP: 10.106.59.243
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.20:80,10.244.2.22:80
Session Affinity: None
Events: <none>
[kubeadm@server2 manifest]$ kubectl run test -it --image=busyboxplus
To attach again later: [kubeadm@server2 manifest]$ kubectl attach -it test
/ # curl 10.106.59.243
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server3 ~]# iptables -t nat -nL|grep 10.106.59.243
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.106.59.243 /* default/myservice: cluster IP */ tcp dpt:80
KUBE-SVC-DN4K6DJYBW27OJYO tcp -- 0.0.0.0/0 10.106.59.243 /* default/myservice: cluster IP */ tcp dpt:80
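The KUBE-SVC-* chain seen above can be followed one level further; as a sketch (the chain name is taken from the grep output above and will differ on every cluster):

```shell
# Each endpoint gets a KUBE-SEP-* jump rule in this chain; the iptables
# "statistic" module selects one at random for load balancing.
iptables -t nat -nL KUBE-SVC-DN4K6DJYBW27OJYO
```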
4. IPVS-mode Services let a K8s cluster support a larger number of Pods
Install ipvsadm on nodes server2, server3 and server4:
yum install -y ipvsadm
[kubeadm@server2 ~]$ kubectl -n kube-system get cm
NAME DATA AGE
coredns 1 2d20h
extension-apiserver-authentication 6 2d20h
kube-flannel-cfg 2 2d19h
kube-proxy 2 2d20h
kubeadm-config 2 2d20h
kubelet-config-1.18 1 2d20h
[kubeadm@server2 ~]$ kubectl -n kube-system edit cm kube-proxy
    mode: "ipvs"
Recreate the kube-proxy pods:
[kubeadm@server2 ~]$ kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
[root@server3 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.60.2:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.6:9153 Masq 1 0 0
-> 10.244.0.7:9153 Masq 1 0 0
TCP 10.106.59.243:80 rr
-> 10.244.1.20:80 Masq 1 0 0
-> 10.244.2.22:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
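In IPVS mode, kube-proxy also binds every ClusterIP to a dummy network interface so the kernel accepts that traffic locally. This can be checked on any node (an illustrative command, not part of the original transcript):

```shell
# All Service ClusterIPs (10.96.0.1, 10.96.0.10, 10.106.59.243, ...)
# should appear as addresses on the kube-ipvs0 dummy device.
ip addr show kube-ipvs0
```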
5. How Flannel vxlan mode communicates across hosts
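The notes give only the heading here; as a brief, hedged recap of the standard mechanism: each node runs a VXLAN device, flannel.1, which encapsulates pod-to-pod traffic in UDP packets addressed between node IPs, so a cross-host packet travels cni0 → flannel.1 → node NIC → peer node's flannel.1 → peer's cni0 → target Pod. The devices and routes can be inspected on any node (illustrative commands):

```shell
# Show the VXLAN device flannel creates (VNI, UDP port, local endpoint)
ip -d link show flannel.1
# Routes to other nodes' Pod subnets point at flannel.1
ip route | grep flannel.1
```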
6. External access
Option 1: edit the Service in place and change its type:
[kubeadm@server2 ~]$ kubectl edit svc myservice
    type: NodePort
[kubeadm@server2 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d21h
myservice NodePort 10.106.59.243 <none> 80:31701/TCP 93m # a node port is now open to the outside
[kubeadm@server2 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5c58fb7c46-fm54f 1/1 Running 0 108m 10.244.1.20 server3 <none> <none>
nginx-deployment-5c58fb7c46-qxqbr 1/1 Running 0 108m 10.244.2.22 server4 <none> <none>
test 1/1 Running 4 107m 10.244.1.21 server3 <none> <none>
[root@server3 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.60.3:31701 rr
-> 10.244.1.20:80 Masq 1 0 0
-> 10.244.2.22:80 Masq 1 0 0
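With the NodePort open, the Service should now answer on any node's address; for example, using server3's IP from the listing above (the response would be the same "Hello MyApp" page as the earlier ClusterIP test):

```shell
curl 172.25.60.3:31701
```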
Option 2: specify type: NodePort directly in the YAML file
[kubeadm@server2 manifest]$ cat service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
[kubeadm@server2 manifest]$ kubectl apply -f service.yaml
service/myservice created
[kubeadm@server2 manifest]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
myservice NodePort 10.109.224.100 <none> 80:32480/TCP 7s
[kubeadm@server2 manifest]$ kubectl describe svc myservice
Name: myservice
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx
Type: NodePort
IP: 10.109.224.100
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32480/TCP
Endpoints: 10.244.1.20:80,10.244.2.22:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
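The "External Traffic Policy: Cluster" line above means any node may forward NodePort traffic to Pods on other nodes, masquerading the client source IP along the way. If preserving the client IP matters, the policy can be switched to Local; a minimal fragment (an option sketch, not part of the original manifest):

```yaml
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only route to Pods on the receiving node; preserves client IP
```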
[root@server4 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-