IPVS and iptables
Pod-to-pod communication: pods on the same node forward packets to each other through the CNI bridge; pods on different nodes can only communicate with support from the network plugin.
Pod-to-Service communication: implemented with iptables or IPVS. IPVS cannot fully replace iptables, because IPVS only does load balancing and cannot do NAT, so kube-proxy still needs iptables for SNAT even in IPVS mode.
Pod-to-external communication: iptables MASQUERADE.
Service-to-external-client communication: Ingress, NodePort, LoadBalancer.
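A quick way to see which mechanism is active on a node is to look at the NAT table and the IPVS table directly; a minimal sketch (run as root on any node, output varies per cluster):
# in iptables mode, Service VIPs show up as KUBE-SERVICES rules in the NAT table
iptables -t nat -L KUBE-SERVICES -n | head
# in IPVS mode, Service VIPs show up as virtual servers with real-server backends
ipvsadm -Ln | head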
https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs
IPVS mode was introduced as alpha in Kubernetes v1.8, went beta in v1.9, and reached GA (general availability, i.e. the release version) in v1.11. iptables mode was added in v1.1 and has been the default mode since v1.2. Both IPVS and iptables are built on netfilter. The main difference between the two modes: iptables matches rules sequentially, so its cost grows with the number of Services, while IPVS looks up services in kernel hash tables, scales much better in large clusters, and supports more load-balancing algorithms.
1. Switching kube-proxy from iptables to IPVS and setting the scheduling algorithm
systemd method
Configure Kubernetes to use IPVS:
https://kubernetes.io/zh/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
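IPVS mode requires the IPVS kernel modules on every node; a minimal check/load sketch (on kernels older than 4.19 the conntrack module is named nf_conntrack_ipv4 instead of nf_conntrack):
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  modprobe -- $mod    # load the module if it is not already loaded
done
lsmod | grep -e ip_vs -e nf_conntrack    # confirm the modules are present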
If kube-proxy runs as a DaemonSet, edit its ConfigMap with this command (see the ConfigMap method below):
# kubectl edit cm kube-proxy -n kube-system
If kube-proxy runs as a systemd service, edit the unit file instead:
vim /etc/systemd/system/kube-proxy.service
--proxy-mode=ipvs \
--ipvs-scheduler=sh
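After changing the flags, reload systemd and restart kube-proxy so the new mode takes effect:
systemctl daemon-reload
systemctl restart kube-proxy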
Scheduling algorithms:
rr: round-robin
lc: least connection
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
Verify:
[root@k8s-master service]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
………………
TCP 10.102.246.104:8080 rr
-> 10.244.2.111:80 Masq 1 0 0
-> 10.244.3.164:80 Masq 1 0 0
-> 10.244.3.165:80 Masq 1 0 0
ConfigMap method
Switch kube-proxy to IPVS mode. Because the IPVS configuration was commented out when the cluster was initialized, it has to be changed manually:
Run on the master01 node:
root@k8s-master01:~# curl 127.0.0.1:10249/proxyMode
iptables
root@k8s-master01:~# kubectl edit cm kube-proxy -n kube-system
...
mode: "ipvs"
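The same ConfigMap also carries the scheduling algorithm; a minimal sketch of the relevant part of the kube-proxy configuration (field names per the KubeProxyConfiguration API, rr is just an example):
ipvs:
  scheduler: "rr"    # rr/lc/dh/sh/sed/nq, the same algorithms listed above
...
mode: "ipvs"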
Update the kube-proxy Pods so they pick up the new ConfigMap:
root@k8s-master01:~# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}" -n kube-system
daemonset.apps/kube-proxy patched
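On newer clusters the same restart can be done with a single built-in command (rollout restart was added in kubectl v1.15):
kubectl rollout restart daemonset kube-proxy -n kube-system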
Verify the kube-proxy mode
root@k8s-master01:~# curl 127.0.0.1:10249/proxyMode
ipvs
root@k8s-master01:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30005 rr
-> 192.169.111.132:8443 Masq 1 0 0
TCP 172.17.0.1:31101 rr
-> 192.167.195.129:80 Masq 1 0 0
-> 192.169.111.129:80 Masq 1 0 0
TCP 172.31.3.101:30005 rr
-> 192.169.111.132:8443 Masq 1 0 0
TCP 172.31.3.101:31101 rr
-> 192.167.195.129:80 Masq 1 0 0
-> 192.169.111.129:80 Masq 1 0 0
TCP 192.162.55.64:30005 rr
-> 192.169.111.132:8443 Masq 1 0 0
TCP 192.162.55.64:31101 rr
-> 192.167.195.129:80 Masq 1 0 0
-> 192.169.111.129:80 Masq 1 0 0
TCP 10.96.0.1:443 rr
-> 172.31.3.101:6443 Masq 1 0 0
-> 172.31.3.102:6443 Masq 1 0 0
-> 172.31.3.103:6443 Masq 1 1 0
TCP 10.96.0.10:53 rr
-> 192.162.55.65:53 Masq 1 0 0
-> 192.162.55.67:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 192.162.55.65:9153 Masq 1 0 0
-> 192.162.55.67:9153 Masq 1 0 0
TCP 10.101.15.4:8000 rr
-> 192.170.21.198:8000 Masq 1 0 0
TCP 10.102.2.33:443 rr
-> 192.170.21.197:4443 Masq 1 0 0
TCP 10.106.167.169:80 rr
-> 192.167.195.129:80 Masq 1 0 0
-> 192.169.111.129:80 Masq 1 0 0
TCP 10.110.52.91:443 rr
-> 192.169.111.132:8443 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 192.162.55.65:53 Masq 1 0 0
-> 192.162.55.67:53 Masq 1 0 0
2. Setting the dashboard token session TTL
vim dashboard/kubernetes-dashboard.yaml
image: 192.168.200.110/baseimages/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --token-ttl=43200    # token TTL in seconds; 43200 s = 12 h
# kubectl apply -f .
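To confirm the new argument is live after the apply, one option (assuming the standard k8s-app=kubernetes-dashboard label from the v1.10.1 manifests):
kubectl -n kube-system describe pod -l k8s-app=kubernetes-dashboard | grep token-ttl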
3. Session affinity
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
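These fields live under the Service spec; for context, a minimal sketch of a complete Service (the name, selector and ports are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: myapp               # hypothetical Service name
spec:
  selector:
    app: myapp              # hypothetical pod label
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP           # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800           # affinity expires after 3 idle hours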
Issue: ClusterIP Service access fails in IPVS proxy mode
With the IPVS proxy mode, when a Service is of type ClusterIP you can hit the situation where the Service address is reachable but requests never arrive at the backend pods; the checks below help narrow that down.
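A minimal triage sketch (in IPVS mode kube-proxy binds every ClusterIP to the kube-ipvs0 dummy interface, so that is the first thing to check; the ClusterIP and service name are placeholders):
# 1. is the ClusterIP bound to the dummy interface on this node?
ip addr show kube-ipvs0 | grep <cluster-ip>
# 2. does the IPVS table list real servers behind the virtual server?
ipvsadm -Ln | grep -A3 <cluster-ip>
# 3. does the Service actually have endpoints?
kubectl get endpoints <service-name>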
DNS service:
The commonly used DNS components are kube-dns and CoreDNS; they resolve the Service names of a Kubernetes cluster to the corresponding IP addresses (for example, kubernetes.default.svc.cluster.local resolves to the ClusterIP of the kubernetes Service).
1. Deploying kube-dns
1. skyDNS/kube-dns/coreDNS
The kube-dns pod runs three containers:
kube-dns: resolves service-name domains
dns-dnsmasq: provides a DNS cache in front of kubedns, lowering its load and improving performance
dns-sidecar: periodically health-checks kubedns and dnsmasq
2. Import the images and push them to the local harbor
docker load -i k8s-dns-kube-dns-amd64_1.14.13.tar.gz
docker images
docker tag gcr.io/google-containers/k8s-dns-kube-dns-amd64:1.14.13 harbor.magedu.net/baseimages/k8s-dns-kube-dns-amd64:1.14.13
docker push harbor.magedu.net/baseimages/k8s-dns-kube-dns-amd64:1.14.13
docker load -i k8s-dns-sidecar-amd64_1.14.13.tar.gz
docker images
docker tag gcr.io/google-containers/k8s-dns-sidecar-amd64:1.14.13 harbor.magedu.net/baseimages/k8s-dns-sidecar-amd64:1.14.13
docker push harbor.magedu.net/baseimages/k8s-dns-sidecar-amd64:1.14.13
docker load -i k8s-dns-dnsmasq-nanny-amd64_1.14.13.tar.gz
docker images
docker tag gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.13 harbor.magedu.net/baseimages/k8s-dns-dnsmasq-nanny-amd64:1.14.13
docker push harbor.magedu.net/baseimages/k8s-dns-dnsmasq-nanny-amd64:1.14.13
3. Change the image addresses in the YAML file to the local harbor addresses
vim kube-dns.yaml
- name: kubedns
image: harbor.magedu.net/baseimages/k8s-dns-kube-dns-amd64:1.14.13
- name: dnsmasq
image: harbor.magedu.net/baseimages/k8s-dns-dnsmasq-nanny-amd64:1.14.13
- name: sidecar
image: harbor.magedu.net/baseimages/k8s-dns-sidecar-amd64:1.14.13
4. Create the service
kubectl apply -f kube-dns.yaml
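To check that the deployment came up (the kube-dns manifests use the standard k8s-app=kube-dns label):
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns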
2. Deploying CoreDNS
For CoreDNS 1.2/1.3/1.4/1.5:
docker tag gcr.io/google-containers/coredns:1.2.6 harbor.magedu.net/baseimages/coredns:1.2.6
docker push harbor.magedu.net/baseimages/coredns:1.2.6
# Deployment method for CoreDNS 1.6
https://github.com/coredns/deployment/tree/master/kubernetes
unzip deployment-master.zip
./deploy.sh 10.20.0.0/16 > coredns.yaml
vim coredns.yaml    # change the cluster domain
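The domain to change sits in the kubernetes plugin block of the Corefile embedded in coredns.yaml; roughly this stanza (assuming the linux36.local cluster domain used in the test below):
kubernetes linux36.local in-addr.arpa ip6.arpa {
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
}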
3. Testing DNS resolution
Delete kube-dns first:
kubectl delete -f /etc/ansible/manifests/dns/kube-dns/kube-dns.yaml
# Deploy CoreDNS
kubectl apply -f coredns.yaml
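The tests below assume a busybox test pod already exists; one way to create it (busybox:1.28 is the usual choice because nslookup is broken in later busybox images):
kubectl run busybox --image=busybox:1.28 --restart=Never --command -- sleep 3600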
kubectl exec busybox -- nslookup kubernetes
kubectl exec busybox -- nslookup kubernetes.default.svc.linux36.local