一、Introduction to Service
# 10.96.0.10:53 is the access entry provided by the Service.
# Behind this entry, two pod backends are waiting to be called.
# kube-proxy distributes requests to one of the pods using the rr (round-robin) policy.
# These rules are generated on every node in the cluster, so the entry can be accessed from any node.
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.10:53 rr
  -> 10.244.58.196:53             Masq    1      0          0
  -> 10.244.58.197:53             Masq    1      0          0
kube-proxy currently supports three working modes: userspace mode, iptables mode, and ipvs mode.
# ipvs mode requires the ipvs kernel modules to be installed; otherwise kube-proxy falls back to iptables.
# Enable ipvs:
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
# change: mode: "ipvs"
# Then recreate the kube-proxy pods so the new mode takes effect:
[root@k8s-master01 ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-65vrr" deleted
pod "kube-proxy-fl8f8" deleted
pod "kube-proxy-xwkjp" deleted
[root@k8s-node01 ~]# ipvsadm -Ln
TCP 10.96.0.10:53 rr
-> 10.244.58.196:53 Masq 1 0 0
-> 10.244.58.197:53 Masq 1 0 0
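The edit above changes a single field inside the kube-proxy ConfigMap. As a sketch only (field names follow the kube-proxy configuration API; check your cluster's ConfigMap for the exact layout), the relevant fragment of the `config.conf` key looks like:

```yaml
# Fragment of the kube-proxy ConfigMap (namespace kube-system), key config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # "" or "iptables" selects iptables; "ipvs" needs the ip_vs kernel modules
ipvs:
  scheduler: "rr"   # round-robin; ipvs also supports wrr, lc, sh and other schedulers
```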
二、Service Types
The Service resource manifest:

kind: Service        # resource type
apiVersion: v1       # resource version
metadata:            # metadata
  name: service      # resource name
  namespace: dev     # namespace
spec:                # description
  selector:          # label selector: determines which pods this Service proxies
    app: nginx
  type:              # Service type: specifies how the Service is accessed
  clusterIP:         # IP address of the virtual service
  sessionAffinity:   # session affinity; supports the ClientIP and None options
  ports:             # port information
  - protocol: TCP
    port: 3017       # Service port
    targetPort: 5003 # pod port
    nodePort: 31122  # host (node) port
- ClusterIP: the default. Kubernetes automatically allocates a virtual IP that is reachable only from inside the cluster.
- NodePort: exposes the Service on a specified port of each Node, so the service can be accessed from outside the cluster.
- LoadBalancer: uses an external load balancer to distribute traffic to the service; note that this mode requires support from an external cloud environment.
- ExternalName: brings a service outside the cluster into the cluster for direct use.
三、Using Services
3.1 Preparing the Test Environment
[root@k8s-master01 Service]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
[root@k8s-master01 Service]# kubectl create -f deployment.yaml
deployment.apps/pc-deployment created
# View pod details
[root@k8s-master01 Service]# kubectl get pods -n dev -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pc-deployment-5cb65f68db-ln95v 1/1 Running 0 3m40s 192.168.58.194 k8s-node02 <none> <none> app=nginx-pod,pod-template-hash=5cb65f68db
pc-deployment-5cb65f68db-mqtm8 1/1 Running 0 3m40s 192.168.85.193 k8s-node01 <none> <none> app=nginx-pod,pod-template-hash=5cb65f68db
pc-deployment-5cb65f68db-sqzcw 1/1 Running 0 3m40s 192.168.85.194 k8s-node01 <none> <none> app=nginx-pod,pod-template-hash=5cb65f68db
# To simplify the tests that follow, modify the index.html page of each of the three
# nginx pods so that each one serves its own IP address:
[root@k8s-master01 Service]# kubectl exec -it pc-deployment-5cb65f68db-ln95v -n dev /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# echo "192.168.58.194" > /usr/share/nginx/html/index.html
# exit
[root@k8s-master01 Service]# kubectl exec -it pc-deployment-5cb65f68db-mqtm8 -n dev /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# echo "192.168.85.193" > /usr/share/nginx/html/index.html
# exit
[root@k8s-master01 Service]# kubectl exec -it pc-deployment-5cb65f68db-sqzcw -n dev /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# echo "192.168.85.194" > /usr/share/nginx/html/index.html
# exit
# After the modifications, test access:
[root@k8s-master01 Service]# curl 192.168.58.194
192.168.58.194
[root@k8s-master01 Service]# curl 192.168.85.193
192.168.85.193
[root@k8s-master01 Service]# curl 192.168.85.194
192.168.85.194
3.2 ClusterIP Service
[root@k8s-master01 Service]# vim service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP:        # the Service IP; if omitted, one is generated automatically
  type: ClusterIP
  ports:
  - port: 80        # Service port
    targetPort: 80  # pod port
# Create the service
[root@k8s-master01 Service]# kubectl create -f service-clusterip.yaml
service/service-clusterip created
# View the service
[root@k8s-master01 Service]# kubectl get svc -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service-clusterip ClusterIP 10.103.182.229 <none> 80/TCP 13s app=nginx-pod
# View the service details
# The Endpoints list shows the backend endpoints this service can distribute requests to
[root@k8s-master01 Service]# kubectl describe svc service-clusterip -n dev
Name: service-clusterip
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.182.229
IPs: 10.103.182.229
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.58.194:80,192.168.85.193:80,192.168.85.194:80
Session Affinity: None
Events: <none>
# View the ipvs mapping rules
[root@k8s-master01 Service]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.103.182.229:80 rr
-> 192.168.58.194:80 Masq 1 0 0
-> 192.168.85.193:80 Masq 1 0 0
-> 192.168.85.194:80 Masq 1 0 0
# Access 10.103.182.229:80 and observe the round-robin effect
[root@k8s-master01 Service]# curl 10.103.182.229:80
192.168.85.194
[root@k8s-master01 Service]# curl 10.103.182.229:80
192.168.58.194
[root@k8s-master01 Service]# curl 10.103.182.229:80
192.168.85.193
3.2.1 Endpoint

Endpoint is a Kubernetes resource object, stored in etcd, that records the access addresses (IP:port) of all the pods a Service proxies. It is generated from the pods matched by the Service's label selector; a Service and its pods are connected through the Endpoints object.
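Endpoints can be listed with kubectl get endpoints -n dev. As an illustrative sketch (not captured in the original session; the addresses are taken from the describe output above), the Endpoints object backing service-clusterip would look roughly like:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: service-clusterip   # an Endpoints object shares its Service's name
  namespace: dev
subsets:
- addresses:
  - ip: 192.168.58.194
  - ip: 192.168.85.193
  - ip: 192.168.85.194
  ports:
  - port: 80
    protocol: TCP
```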
3.2.2 Load Distribution Policy

A Service supports two load-distribution behaviors:
- If none is defined, kube-proxy's default policy is used, e.g. random or round-robin.
- Session affinity based on the client address: all requests from the same client are forwarded to one fixed Pod. This mode is enabled by adding the sessionAffinity: ClientIP option to the spec.
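For reference, the change applied in this subsection can be sketched as the service-clusterip.yaml from 3.2 with one extra field (only sessionAffinity is new; everything else matches the earlier manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: ClusterIP
  sessionAffinity: ClientIP   # requests from the same client IP always hit the same pod
  ports:
  - port: 80
    targetPort: 80
```

After re-applying the manifest, the ipvs rule gains the persistent flag shown in the listing below.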
# View the ipvs mapping rules (rr = round-robin)
[root@k8s-master01 Service]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.103.182.229:80 rr
-> 192.168.58.194:80 Masq 1 0 0
-> 192.168.85.193:80 Masq 1 0 0
-> 192.168.85.194:80 Masq 1 0 0
# Loop access test
[root@k8s-master01 Service]# while true; do curl 10.103.182.229:80; sleep 5; done;
192.168.58.194
192.168.85.194
192.168.85.193
192.168.58.194
192.168.85.194
192.168.85.193
192.168.58.194
192.168.85.194
192.168.85.193
192.168.58.194
192.168.85.194
192.168.85.193
# Change the distribution policy to sessionAffinity: ClientIP
# View the ipvs rules again (persistent indicates session persistence)
[root@k8s-master01 Service]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.103.182.229:80 rr persistent 10800
-> 192.168.58.194:80 Masq 1 0 0
-> 192.168.85.193:80 Masq 1 0 0
-> 192.168.85.194:80 Masq 1 0 0
# Loop access test
[root@k8s-master01 Service]# while true; do curl 10.103.182.229:80 ; sleep 3 ; done;
192.168.85.194
192.168.85.194
192.168.85.194
192.168.85.194
# Delete the service
[root@k8s-master01 Service]# kubectl delete -f service-clusterip.yaml
service "service-clusterip" deleted
3.3 Headless Service
[root@k8s-master01 Service]# vim service-headliness.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None   # setting clusterIP to None creates a headless Service
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
# Create the service
[root@k8s-master01 Service]# kubectl create -f service-headliness.yaml
service/service-headliness created
# Get the service; note that no CLUSTER-IP is allocated
[root@k8s-master01 Service]# kubectl get svc service-headliness -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service-headliness ClusterIP None <none> 80/TCP 54s app=nginx-pod
# View the service details
[root@k8s-master01 Service]# kubectl describe svc service-headliness -n dev
Name: service-headliness
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.58.194:80,192.168.85.193:80,192.168.85.194:80
Session Affinity: None
Events: <none>
# Check how the service domain name resolves
[root@k8s-master01 Service]# kubectl exec -it pc-deployment-5cb65f68db-ln95v -n dev /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@k8s-master01 Service]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
;; ANSWER SECTION:
service-headliness.dev.svc.cluster.local. 30 IN A 192.168.58.194
service-headliness.dev.svc.cluster.local. 30 IN A 192.168.85.193
service-headliness.dev.svc.cluster.local. 30 IN A 192.168.85.194
3.4 NodePort Service
Create service-nodeport.yaml:
[root@k8s-master01 Service]# vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort      # Service type
  ports:
  - port: 80
    nodePort: 30002   # node port to bind (default range 30000-32767); allocated automatically if omitted
    targetPort: 80
# Create the service
[root@k8s-master01 Service]# kubectl create -f service-nodeport.yaml
service/service-nodeport created
# View the service
[root@k8s-master01 Service]# kubectl get svc -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service-nodeport NodePort 10.104.75.225 <none> 80:30002/TCP 116s app=nginx-pod
# Now the pods can be reached from a browser on the host machine through port 30002 on any node IP:
k8s-node01(http://192.168.186.101:30002/)
k8s-node02(http://192.168.186.102:30002/)
3.5 LoadBalancer Service
Three open-source Kubernetes load balancers: MetalLB vs PureLB vs OpenELB.
3.5.1 What Is OpenELB
Kubernetes offers three ways to expose services outside the cluster: NodePort, Ingress, and LoadBalancer. NodePort exposes TCP (layer-4) services, but since it occupies host ports on cluster nodes it is not suited to large-scale use; Ingress exposes HTTP (layer-7) services and can route requests by domain name; LoadBalancer is normally tied to cloud providers, which dynamically allocate a public gateway. Can a private-cloud cluster, without any public-cloud services, still use LoadBalancer to expose services? The answer is yes: OpenELB was created precisely to provide LoadBalancer services for bare-metal servers. OpenELB, the load-balancer plugin open-sourced by the KubeSphere container team at QingCloud, has officially passed the review of the CNCF (Cloud Native Computing Foundation) TOC technical committee.
3.5.2 Installation and Configuration
1. Install OpenELB
# Note: this is an edit of an existing field; strictARP defaults to false
# kubectl edit configmap kube-proxy -n kube-system
......
ipvs:
  strictARP: true
......

# Download the manifest (deploy/openelb.yaml at master in the OpenELB/OpenELB repository on github.com):
# wget -c https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml
[root@k8s-master01 Service]# vim openelb.yaml
Change the image address: replace the two occurrences of
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
with
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

# Install
[root@k8s-master01 Service]# kubectl apply -f openelb.yaml
# Check
[root@k8s-master01 Service]# kubectl get po -n openelb-system
NAME READY STATUS RESTARTS AGE
openelb-admission-create-lg5td 0/1 ImagePullBackOff 0 21m
openelb-admission-patch-gr276 0/1 ImagePullBackOff 0 21m
openelb-controller-64f7fb77f8-9xhq2 0/1 ContainerCreating 0 21m
openelb-speaker-7gztc 1/1 Running 0 21m
openelb-speaker-9xkmt 1/1 Running 0 21m
openelb-speaker-cc8d2 1/1 Running 0 21m
2. Add an EIP Pool
The EIP addresses must be on the same network segment as the cluster nodes and must not be bound to any network interface.
[root@master ~]# cat ip_pool.yml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  address: 172.16.90.231-172.16.90.238
  protocol: layer2
  disable: false
  interface: eth0
# kubectl apply -f ip_pool.yml
[root@master ~]# kubectl get eip
NAME CIDR USAGE TOTAL
eip-pool 172.16.90.231-172.16.90.238 0 8
3. Configure the Service as LoadBalancer
The Service must carry the following annotations:

lb.kubesphere.io/v1alpha1: openelb
protocol.openelb.kubesphere.io/v1alpha1: layer2
eip.openelb.kubesphere.io/v1alpha2: layer2-eip
The full configuration manifest:
[root@master test]# cat svc-lb.yml
apiVersion: v1
kind: Service
metadata:
  name: svc-lb
  namespace: dev
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
[root@master test]# kubectl apply -f svc-lb.yml
[root@master test]# kubectl get svc svc-lb -n dev
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
svc-lb   LoadBalancer   10.103.128.30   172.16.90.231   80:32532/TCP   22m
Test the load balancing:
[root@node2 ~]# for i in {1..9}
> do
> curl 172.16.90.231
> done
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
3.6 ExternalName Service
[root@k8s-master01 Service]# vim service-externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName            # Service type
  externalName: www.baidu.com   # can also be changed to an IP address
# Create the service
[root@k8s-master01 Service]# kubectl create -f service-externalname.yaml
service/service-externalname created

# DNS resolution
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.       30 IN CNAME www.a.shifen.com.
www.a.shifen.com.    30 IN A     39.156.66.18
www.a.shifen.com.    30 IN A     39.156.66.14