k8s cluster: ClusterIP Services not reachable

Cause: in iptables mode there is no network device that actually answers on the ClusterIP, so the virtual IP itself does not respond; kube-proxy needs to run with --proxy-mode=ipvs (IPVS binds the Service IPs to a local dummy interface on each node).
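Before changing anything it is worth confirming the diagnosis: check that the Service actually has endpoints, and ask kube-proxy which proxy mode it is really running. A quick sketch; it assumes the default metricsBindAddress of 127.0.0.1:10249 (the same value shown in the ConfigMap below) and that your kube-proxy version exposes the /proxyMode endpoint. Run the curl on a cluster node.

# Does the Service have healthy endpoints? (if this is empty, the problem is the pods/selector, not kube-proxy)
kubectl get svc,endpoints -n default

# Which proxy mode is kube-proxy actually using on this node?
curl -s http://127.0.0.1:10249/proxyMode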

[root@kubernetes bak4]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6694fb884c-mgn79             1/1     Running   0          146m
coredns-6694fb884c-ncqh6             1/1     Running   0          146m
etcd-kubernetes                      1/1     Running   11         49d
kube-apiserver-kubernetes            1/1     Running   10         49d
kube-controller-manager-kubernetes   1/1     Running   6          49d
kube-flannel-ds-amd64-5cv9n          1/1     Running   6          49d
kube-flannel-ds-amd64-6tzvm          1/1     Running   5          49d
kube-flannel-ds-amd64-827f9          1/1     Running   6          49d
kube-proxy-7ndzn                     1/1     Running   6          49d
kube-proxy-ft6wc                     1/1     Running   5          49d
kube-proxy-nvc4l                     1/1     Running   6          49d
kube-scheduler-kubernetes            1/1     Running   6          49d

kube-proxy is logging warnings: the IPVS kernel modules fail to load, the proxy-mode flag is empty, and it falls back to the iptables proxier:
[root@kubernetes bak4]# kubectl logs -n kube-system kube-proxy-7ndzn
W1110 09:13:34.247156       1 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1110 09:13:34.248189       1 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1110 09:13:34.250441       1 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1110 09:13:34.251811       1 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1110 09:13:34.256361       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I1110 09:13:34.264897       1 server_others.go:148] Using iptables Proxier.
I1110 09:13:34.265123       1 server_others.go:178] Tearing down inactive rules.
I1110 09:13:34.282035       1 server.go:464] Version: v1.13.3
I1110 09:13:34.288611       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1110 09:13:34.288637       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1110 09:13:34.290663       1 conntrack.go:83] Setting conntrack hashsize to 32768
I1110 09:13:34.290831       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1110 09:13:34.290892       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1110 09:13:34.291011       1 config.go:102] Starting endpoints config controller
I1110 09:13:34.291023       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1110 09:13:34.291040       1 config.go:202] Starting service config controller
I1110 09:13:34.291044       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1110 09:13:34.391530       1 controller_utils.go:1034] Caches are synced for service config controller
I1110 09:13:34.391624       1 controller_utils.go:1034] Caches are synced for endpoints config controller


[root@kubernetes bak4]# kubectl edit cm kube-proxy -n kube-system
The portion to modify (only the relevant part of the ConfigMap is shown):
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
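Alternatively, the same change can be scripted instead of using the interactive editor. A minimal sketch, assuming the mode field currently contains an empty string as shown above:

kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -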

Every node needs this change: load the IPVS kernel modules.
[root@kubernetes bak4]# cat  /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

[root@kubernetes bak4]# chmod 755 /etc/sysconfig/modules/ipvs.modules 
[root@kubernetes bak4]# bash /etc/sysconfig/modules/ipvs.modules 
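The /etc/sysconfig/modules mechanism is RHEL/CentOS specific. On any systemd-based node the same modules can also be made persistent across reboots with a modules-load.d drop-in; a sketch (the file name is arbitrary):

cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
# on kernels >= 4.19 this functionality was merged into nf_conntrack
nf_conntrack_ipv4
EOF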

[root@kubernetes bak4]# lsmod |grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 141092  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  6 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          133387  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack


Restart kube-proxy:
[root@kubernetes bak4]# kubectl get pods -n kube-system |grep kube-proxy
kube-proxy-7ndzn                     1/1     Running   6          49d
kube-proxy-ft6wc                     1/1     Running   5          49d
kube-proxy-nvc4l                     1/1     Running   6          49d

[root@kubernetes bak4]# kubectl get pods -n kube-system |grep kube-proxy|awk '{print $1}'| xargs kubectl delete pod  -n kube-system
pod "kube-proxy-7ndzn" deleted
pod "kube-proxy-ft6wc" deleted
pod "kube-proxy-nvc4l" deleted
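Deleting the pods works because the kube-proxy DaemonSet immediately recreates them with the updated ConfigMap. On kubectl 1.15 or newer the same restart can be done in one command (not available with the 1.13 client used here):

kubectl -n kube-system rollout restart daemonset kube-proxy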


No more errors: kube-proxy is now running the IPVS proxier.
[root@kubernetes-node2 ~]# kubectl logs -n kube-system kube-proxy-h6kwp
I1110 15:30:58.565092       1 server_others.go:189] Using ipvs Proxier.
W1110 15:30:58.565309       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I1110 15:30:58.565420       1 server_others.go:216] Tearing down inactive rules.
I1110 15:30:58.603102       1 server.go:464] Version: v1.13.3
I1110 15:30:58.608057       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1110 15:30:58.608802       1 config.go:202] Starting service config controller
I1110 15:30:58.608813       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1110 15:30:58.608910       1 config.go:102] Starting endpoints config controller
I1110 15:30:58.608915       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1110 15:30:58.709075       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1110 15:30:58.709124       1 controller_utils.go:1034] Caches are synced for service config controller

[root@kubernetes-node2 ~]# kubectl logs -n kube-system kube-proxy-kbxcr
I1110 15:30:55.564636       1 server_others.go:189] Using ipvs Proxier.
W1110 15:30:55.564845       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I1110 15:30:55.565141       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.1:443/TCP/192.168.73.133:6443
I1110 15:30:55.565179       1 graceful_termination.go:174] Deleting rs: 10.96.0.1:443/TCP/192.168.73.133:6443
I1110 15:30:55.565208       1 graceful_termination.go:160] Trying to delete rs: 192.168.73.172:31247/TCP/10.244.1.30:80
I1110 15:30:55.565220       1 graceful_termination.go:174] Deleting rs: 192.168.73.172:31247/TCP/10.244.1.30:80
I1110 15:30:55.565247       1 graceful_termination.go:160] Trying to delete rs: 192.168.73.168:31247/TCP/10.244.1.30:80
I1110 15:30:55.565259       1 graceful_termination.go:174] Deleting rs: 192.168.73.168:31247/TCP/10.244.1.30:80
I1110 15:30:55.565277       1 graceful_termination.go:160] Trying to delete rs: 192.168.73.133:31247/TCP/10.244.1.30:80
I1110 15:30:55.565288       1 graceful_termination.go:174] Deleting rs: 192.168.73.133:31247/TCP/10.244.1.30:80
I1110 15:30:55.565310       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:9153/TCP/10.244.2.22:9153
I1110 15:30:55.565325       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:9153/TCP/10.244.2.22:9153
I1110 15:30:55.565335       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:9153/TCP/10.244.1.24:9153
I1110 15:30:55.565346       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:9153/TCP/10.244.1.24:9153
I1110 15:30:55.565364       1 graceful_termination.go:160] Trying to delete rs: 192.168.73.101:31247/TCP/10.244.1.30:80
I1110 15:30:55.565376       1 graceful_termination.go:174] Deleting rs: 192.168.73.101:31247/TCP/10.244.1.30:80
I1110 15:30:55.565394       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:53/TCP/10.244.2.22:53
I1110 15:30:55.565428       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:53/TCP/10.244.2.22:53
I1110 15:30:55.565441       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:53/TCP/10.244.1.24:53
I1110 15:30:55.565455       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:53/TCP/10.244.1.24:53
I1110 15:30:55.565474       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:53/UDP/10.244.2.22:53
I1110 15:30:55.565487       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:53/UDP/10.244.2.22:53
I1110 15:30:55.565497       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.10:53/UDP/10.244.1.24:53
I1110 15:30:55.565509       1 graceful_termination.go:174] Deleting rs: 10.96.0.10:53/UDP/10.244.1.24:53
I1110 15:30:55.565558       1 graceful_termination.go:160] Trying to delete rs: 10.97.64.43:80/TCP/10.244.1.30:80
I1110 15:30:55.565592       1 graceful_termination.go:174] Deleting rs: 10.97.64.43:80/TCP/10.244.1.30:80
I1110 15:30:55.565616       1 graceful_termination.go:160] Trying to delete rs: 10.244.0.0:31247/TCP/10.244.1.30:80
I1110 15:30:55.565629       1 graceful_termination.go:174] Deleting rs: 10.244.0.0:31247/TCP/10.244.1.30:80
I1110 15:30:55.565648       1 graceful_termination.go:160] Trying to delete rs: 127.0.0.1:31247/TCP/10.244.1.30:80
I1110 15:30:55.565664       1 graceful_termination.go:174] Deleting rs: 127.0.0.1:31247/TCP/10.244.1.30:80
I1110 15:30:55.565682       1 graceful_termination.go:160] Trying to delete rs: 172.17.0.1:31247/TCP/10.244.1.30:80
I1110 15:30:55.565693       1 graceful_termination.go:174] Deleting rs: 172.17.0.1:31247/TCP/10.244.1.30:80
I1110 15:30:55.565713       1 graceful_termination.go:160] Trying to delete rs: 10.96.0.3:10051/TCP/10.244.2.26:10051
I1110 15:30:55.565726       1 graceful_termination.go:174] Deleting rs: 10.96.0.3:10051/TCP/10.244.2.26:10051
I1110 15:30:55.565750       1 server_others.go:216] Tearing down inactive rules.
E1110 15:30:55.594545       1 proxier.go:432] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed
)
I1110 15:30:55.597338       1 server.go:464] Version: v1.13.3
I1110 15:30:55.602835       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1110 15:30:55.605060       1 config.go:102] Starting endpoints config controller
I1110 15:30:55.605073       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1110 15:30:55.605293       1 config.go:202] Starting service config controller
I1110 15:30:55.605300       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1110 15:30:55.705688       1 controller_utils.go:1034] Caches are synced for service config controller
I1110 15:30:55.705688       1 controller_utils.go:1034] Caches are synced for endpoints config controller
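The single iptables-restore error on this node shows up while kube-proxy is tearing down the old iptables-mode rules; as long as it does not repeat on later syncs and the IPVS rules appear (see the ipvsadm check below), it can usually be ignored. One way to confirm it is not recurring:

kubectl -n kube-system logs kube-proxy-kbxcr | grep -c 'Failed to execute iptables-restore'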

[root@kubernetes-node2 ~]# kubectl logs -n kube-system kube-proxy-s86dr 
I1110 15:31:00.779612       1 server_others.go:189] Using ipvs Proxier.
W1110 15:31:00.779923       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I1110 15:31:00.779999       1 server_others.go:216] Tearing down inactive rules.
I1110 15:31:00.820185       1 server.go:464] Version: v1.13.3
I1110 15:31:00.824642       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1110 15:31:00.825227       1 config.go:202] Starting service config controller
I1110 15:31:00.825237       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1110 15:31:00.825247       1 config.go:102] Starting endpoints config controller
I1110 15:31:00.825249       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1110 15:31:00.925362       1 controller_utils.go:1034] Caches are synced for service config controller
I1110 15:31:00.925368       1 controller_utils.go:1034] Caches are synced for endpoints config controller
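To confirm the switch at the node level, list the IPVS virtual servers that kube-proxy has programmed. This needs the ipvsadm tool on the node (a sketch; install it if missing):

yum install -y ipvsadm
ipvsadm -Ln        # one virtual server per Service IP:port, with the backend pods as real servers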

Testing from inside a container succeeds: both ClusterIPs and Service DNS names now respond.
bash-5.0$ ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10) 56(84) bytes of data.
64 bytes from 10.96.0.10: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 10.96.0.10: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 10.96.0.10: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.96.0.10: icmp_seq=4 ttl=64 time=0.055 ms
^C
--- 10.96.0.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.051/0.052/0.055/0.007 ms


bash-5.0$ ping zabbix-server
PING zabbix-server.default.svc.cluster.local (10.96.0.3) 56(84) bytes of data.
64 bytes from zabbix-server.default.svc.cluster.local (10.96.0.3): icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from zabbix-server.default.svc.cluster.local (10.96.0.3): icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from zabbix-server.default.svc.cluster.local (10.96.0.3): icmp_seq=3 ttl=64 time=0.047 ms
^C
--- zabbix-server.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.037/0.045/0.052/0.008 ms


bash-5.0$ ping zabbix-web
PING zabbix-web.default.svc.cluster.local (10.97.64.43) 56(84) bytes of data.
64 bytes from zabbix-web.default.svc.cluster.local (10.97.64.43): icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from zabbix-web.default.svc.cluster.local (10.97.64.43): icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from zabbix-web.default.svc.cluster.local (10.97.64.43): icmp_seq=3 ttl=64 time=0.047 ms
^C
--- zabbix-web.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 4ms
rtt min/avg/max/mdev = 0.047/0.051/0.056/0.003 ms


bash-5.0$ ping mysql-server
PING mysql-server.default.svc.cluster.local (10.99.100.149) 56(84) bytes of data.
64 bytes from mysql-server.default.svc.cluster.local (10.99.100.149): icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from mysql-server.default.svc.cluster.local (10.99.100.149): icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from mysql-server.default.svc.cluster.local (10.99.100.149): icmp_seq=3 ttl=64 time=0.065 ms
^C
--- mysql-server.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.046/0.054/0.065/0.010 ms


bash-5.0$ ping 10.99.100.149
PING 10.99.100.149 (10.99.100.149) 56(84) bytes of data.
64 bytes from 10.99.100.149: icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from 10.99.100.149: icmp_seq=2 ttl=64 time=0.044 ms
^C
--- 10.99.100.149 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.044/0.090/0.137/0.047 ms


bash-5.0$ ping 10.96.0.3
PING 10.96.0.3 (10.96.0.3) 56(84) bytes of data.
64 bytes from 10.96.0.3: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 10.96.0.3: icmp_seq=2 ttl=64 time=0.053 ms
^C
--- 10.96.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.053/0.055/0.057/0.002 ms
bash-5.0$ 


bash-5.0$ ping 10.97.64.43
PING 10.97.64.43 (10.97.64.43) 56(84) bytes of data.
64 bytes from 10.97.64.43: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 10.97.64.43: icmp_seq=2 ttl=64 time=0.044 ms
^C
--- 10.97.64.43 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.044/0.047/0.051/0.007 ms
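Note that ping succeeding against a ClusterIP is largely a side effect of IPVS mode: the Service IPs are bound to the local kube-ipvs0 dummy interface on every node, which is what answers the ICMP requests (in iptables mode a ClusterIP only exists as DNAT rules for the Service ports, so ping to it normally fails). A more meaningful test is to hit the actual Service port. For example, using the Services from the outputs above and assuming curl and nc are available in the test container (the mysql-server port 3306 is an assumption):

bash-5.0$ curl -sI http://zabbix-web/
bash-5.0$ nc -zv mysql-server 3306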
