Kubernetes Ingress Resources

1.   Enabling IPVS mode in Kubernetes

Note: in this exercise we switch kube-proxy to IPVS mode on a Kubernetes cluster that is already running. This is not recommended in production; for production, customize the deployment parameters during cluster initialization and set the kube-proxy mode to ipvs there.

 

What we need to do now is modify the content of the kube-proxy configuration, which is stored in a ConfigMap in the kube-system namespace:

[root@master ~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      14d
extension-apiserver-authentication   6      14d
kube-flannel-cfg                     2      14d
kube-proxy                           2      14d
kubeadm-config                       2      14d
kubelet-config-1.19                  1      14d

Earlier we covered how to create and delete resources. Modifying a resource is just as straightforward: some attributes can be changed dynamically, while others cannot be changed after creation and the resource has to be deleted and recreated. We can use kubectl edit to modify the configuration content:

[root@master ~]# kubectl edit cm kube-proxy -n kube-system

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:     # note: there is an ipvs section here
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s

    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ""       # whether iptables or ipvs rules are used depends on this mode value; an empty string means iptables. We need to change it to ipvs, but we cannot do that yet, because the IPVS kernel modules are not loaded on each node. That does not happen automatically, so we have to load them by hand first. Alternatively, the kube-proxy mode can be set to ipvs explicitly when the cluster is initialized.
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""

We can also explicitly set the kube-proxy mode to ipvs when initializing the cluster with kubeadm. If you plan to use IPVS, it is best to enable it from the start. The method is as follows:

kubeadm can also load its configuration from a file, which allows more deployment options to be customized. The following example matches the command-line setup described earlier, except that it explicitly sets the kube-proxy mode to ipvs; the registry used for the system images can also be changed by modifying the imageRepository value.

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.13.3
api:
  advertiseAddress: 172.20.0.71
  bindPort: 6443
  controlPlaneEndpoint: ""
imageRepository: k8s.gcr.io
kubeProxy:
  config:
    mode: "ipvs"
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s    
kubeletConfiguration:
  baseConfig:
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    failSwapOn: false
    resolvConf: /etc/resolv.conf
    staticPodPath: /etc/kubernetes/manifests
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
 
Save the content above into a configuration file, for example kubeadm-config.yaml, and then run the corresponding command:
    
   ~]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap
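
The MasterConfiguration format above belongs to an early kubeadm config API (v1alpha2). Newer kubeadm releases, such as the 1.19 cluster used in this walkthrough, expect the v1beta2 API, with the kube-proxy settings supplied as a separate KubeProxyConfiguration document in the same file. The following is only a sketch; the version number, subnets and registry are placeholders taken from this environment:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: k8s.gcr.io
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"
  syncPeriod: 30s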

To make sure these modules are loaded again whenever a node reboots, we put the loading logic into a script and place it in a specific directory:

[root@master ~]# cd /etc/sysconfig/modules/

[root@master modules]# vim ipvs.modules

#!/bin/bash
# Load every IPVS-related kernel module shipped with the running kernel.
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    # Only try to load modules that modinfo can resolve for this kernel.
    /sbin/modinfo -F filename $mod  &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $mod
    fi
done

[root@master modules]# chmod +x ipvs.modules    # make the script executable
[root@master modules]# bash ipvs.modules    # run it once right away

[root@master modules]# lsmod | grep ip_vs   # check which IPVS-related modules are now loaded
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12697  0
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  4 ip_vs_ftp,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          133387  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack
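
As an alternative way to make the modules persistent (a sketch, not part of the original setup): on systemd-based systems the required modules can be listed under /etc/modules-load.d/, and systemd-modules-load.service loads them at every boot. Whether the conntrack module is named nf_conntrack or nf_conntrack_ipv4 depends on the kernel version:

~]# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
~]# systemctl restart systemd-modules-load.service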

 

The ipvs.modules script needs to be copied to the other nodes as well:

[root@master modules]# scp -p ipvs.modules node1:/etc/sysconfig/modules/
The authenticity of host 'node1 (172.21.96.13)' can't be established.
ECDSA key fingerprint is SHA256:RAAnxG51mtapB4TKjCZ887N9v0iLljTriue/9YMHI7s.
ECDSA key fingerprint is MD5:23:91:f8:c0:0e:4c:de:1f:c3:c8:e2:c0:7f:52:2f:22.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,172.21.96.13' (ECDSA) to the list of known hosts.
root@node1's password:
ipvs.modules                                                                                                                                                             100%  253   955.3KB/s   00:00 

[root@master modules]# scp -p ipvs.modules node2:/etc/sysconfig/modules/
The authenticity of host 'node2 (172.21.16.33)' can't be established.
ECDSA key fingerprint is SHA256:5DRIbLHDv4zftrxtgCYYvTkA7hB1edr3iyPET5EwaOs.
ECDSA key fingerprint is MD5:29:5e:01:05:23:5f:e6:b5:e2:dd:61:0d:f0:0e:93:75.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.21.16.33' (ECDSA) to the list of known hosts.
root@node2's password:
ipvs.modules                                                                                                                                                             100%  253   179.8KB/s   00:00
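
If there are many nodes, the copy-and-load steps can be batched in a small loop (a sketch; node1 and node2 are the hostnames used in this lab, adjust the list to your own nodes):

~]# for n in node1 node2; do
      scp -p /etc/sysconfig/modules/ipvs.modules $n:/etc/sysconfig/modules/
      ssh $n 'bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -c ip_vs'
    done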

Now run the script manually on node1 and node2:

[root@node1 ~]# cd /etc/sysconfig/modules/
[root@node1 modules]# ls
ipvs.modules
[root@node1 modules]# bash ipvs.modules
[root@node1 modules]# lsmod | grep ip_vs
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12697  0
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  36 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  4 ip_vs_ftp,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          133387  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack


Repeat the same steps on node2.

Only after all the steps above are complete can we modify the kube-proxy configuration and change mode to ipvs:

[root@master modules]# kubectl edit cm kube-proxy -n kube-system
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
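
If you prefer not to edit interactively, the same change can be scripted. The one-liner below is a sketch that simply rewrites the empty mode value in the ConfigMap and re-applies it (it assumes mode is still the empty string):

~]# kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -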

 

Here we can see that the kube-proxy pods have been running for 14 days. Once the kube-proxy configuration is changed with kubectl edit, the pods have to be rebuilt for the change to take effect, because this particular setting is not reloaded on the fly: after changing it, the pods must be restarted or recreated. Whether a rebuild is needed at all, and whether it happens automatically or has to be triggered by hand, depends on whether the configuration can be reloaded and applied automatically. We have to wait for the rebuild to finish before checking whether the proxy is actually running in IPVS mode.

[root@master modules]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6c76c8bb89-425s2         1/1     Running   0          14d
coredns-6c76c8bb89-kfn67         1/1     Running   0          14d
etcd-master                      1/1     Running   0          14d
kube-apiserver-master            1/1     Running   0          14d
kube-controller-manager-master   1/1     Running   0          14d
kube-flannel-ds-75txh            1/1     Running   0          14d
kube-flannel-ds-pb69v            1/1     Running   0          14d
kube-flannel-ds-rx782            1/1     Running   0          14d
kube-proxy-2lx79                 1/1     Running   0          14d
kube-proxy-7cvv8                 1/1     Running   0          14d
kube-proxy-qt52t                 1/1     Running   0          14d
kube-scheduler-master            1/1     Running   0          14d

 

 

Now let's verify whether IPVS rules have been generated for our services.

Go to node1 or node2 and install ipvsadm so we can inspect the rules:

[root@node2 modules]# yum -y install ipvsadm
                                     

Checking here, no IPVS rules show up, which means the mode change has not taken effect yet; our services certainly exist, so rules would have been generated if it had:

[root@node2 modules]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

 

Since this is a lab environment, we simply delete the kube-proxy pods by hand. Do not do this in production.

[root@master modules]# kubectl get pods -n kube-system --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
coredns-6c76c8bb89-425s2         1/1     Running   0          14d   k8s-app=kube-dns,pod-template-hash=6c76c8bb89
coredns-6c76c8bb89-kfn67         1/1     Running   0          14d   k8s-app=kube-dns,pod-template-hash=6c76c8bb89
etcd-master                      1/1     Running   0          14d   component=etcd,tier=control-plane
kube-apiserver-master            1/1     Running   0          14d   component=kube-apiserver,tier=control-plane
kube-controller-manager-master   1/1     Running   0          14d   component=kube-controller-manager,tier=control-plane
kube-flannel-ds-75txh            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-flannel-ds-pb69v            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-flannel-ds-rx782            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-proxy-2lx79                 1/1     Running   0          14d   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-7cvv8                 1/1     Running   0          14d   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-qt52t                 1/1     Running   0          14d   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-master            1/1     Running   0          14d   component=kube-scheduler,tier=control-plane
[root@master modules]# kubectl delete pods -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-2lx79" deleted
pod "kube-proxy-7cvv8" deleted
pod "kube-proxy-qt52t" deleted

After the pods are deleted, the DaemonSet creates new kube-proxy pods, which load our updated configuration.
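
As an alternative to deleting the pods by label, newer kubectl versions can restart the DaemonSet directly; this sketch achieves the same rolling recreation:

~]# kubectl rollout restart daemonset kube-proxy -n kube-system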

[root@master modules]# kubectl get pods -n kube-system --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
coredns-6c76c8bb89-425s2         1/1     Running   0          14d   k8s-app=kube-dns,pod-template-hash=6c76c8bb89
coredns-6c76c8bb89-kfn67         1/1     Running   0          14d   k8s-app=kube-dns,pod-template-hash=6c76c8bb89
etcd-master                      1/1     Running   0          14d   component=etcd,tier=control-plane
kube-apiserver-master            1/1     Running   0          14d   component=kube-apiserver,tier=control-plane
kube-controller-manager-master   1/1     Running   0          14d   component=kube-controller-manager,tier=control-plane
kube-flannel-ds-75txh            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-flannel-ds-pb69v            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-flannel-ds-rx782            1/1     Running   0          14d   app=flannel,controller-revision-hash=56df9fd6f9,pod-template-generation=2,tier=node
kube-proxy-f6pph                 1/1     Running   0          92s   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-nrqrj                 1/1     Running   0          92s   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-vqkbr                 1/1     Running   0          89s   controller-revision-hash=75f58d84d7,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-master            1/1     Running   0          14d   component=kube-scheduler,tier=control-plane
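
A quick way to confirm that the new pods really picked up IPVS mode is to check their logs (the exact log wording varies between versions; the pod name below is taken from the listing above):

[root@master modules]# kubectl logs -n kube-system kube-proxy-f6pph | grep -i ipvs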

 

Since the kube-proxy pods were newly created, they should have loaded the new configuration, and IPVS rules should now exist: access to a particular service is forwarded by IPVS, and the service schedules the traffic to its backend pods. The default scheduling algorithm is rr (round robin); a different algorithm can be set in the configuration with kubectl edit cm kube-proxy -n kube-system by changing the scheduler field under ipvs (see the sketch after the rule listing below).

[root@node2 modules]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:32223 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  172.17.0.1:32405 rr      
  -> 10.244.2.3:80                Masq    1      0          0         
TCP  172.21.16.33:32223 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  172.21.16.33:32405 rr
  -> 10.244.2.3:80                Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 172.21.96.32:6443            Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.1.2:53                Masq    1      0          0         
  -> 10.244.2.2:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.1.2:9153              Masq    1      0          0         
  -> 10.244.2.2:9153              Masq    1      0          0         
TCP  10.96.205.132:80 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  10.99.54.64:80 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  10.105.229.228:80 rr
  -> 10.244.2.3:80                Masq    1      0          0         
TCP  10.107.27.178:80 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  10.244.2.0:32223 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  10.244.2.0:32405 rr
  -> 10.244.2.3:80                Masq    1      0          0         
TCP  10.244.2.1:32223 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  10.244.2.1:32405 rr
  -> 10.244.2.3:80                Masq    1      0          0         
TCP  127.0.0.1:32223 rr
  -> 10.244.1.6:80                Masq    1      0          0         
  -> 10.244.1.19:80               Masq    1      0          0         
  -> 10.244.2.23:80               Masq    1      0          0         
  -> 10.244.2.24:80               Masq    1      0          0         
TCP  127.0.0.1:32405 rr
  -> 10.244.2.3:80                Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.1.2:53                Masq    1      0          0         
  -> 10.244.2.2:53                Masq    1      0          0         

 

Note: all services share the same scheduling algorithm; it cannot be set per service.
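
To change the algorithm, set the scheduler field in the ipvs section of the kube-proxy ConfigMap and recreate the kube-proxy pods again. The excerpt below is a sketch that uses wrr as an example; valid values include rr, wrr, lc, wlc, sh and dh:

    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: "wrr"    # weighted round robin instead of the default rr
      strictARP: false
      syncPeriod: 0s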

Check the services, then access one of the service addresses to test whether the backend pods can be reached. If the request succeeds, the IPVS-based services are working properly.

[root@master modules]# kubectl get svc
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP         PORT(S)        AGE
external-www-svc     ExternalName   <none>           www.kubernetes.io   6379/TCP       19h
kubernetes           ClusterIP      10.96.0.1        <none>              443/TCP        14d
myapp-headless-svc   ClusterIP      None             <none>              80/TCP         19h
myapp-svc            ClusterIP      10.107.27.178    <none>              80/TCP         20h
myapp-svc-lb         ClusterIP      10.99.54.64      <none>              80/TCP         20h
myapp-svc-nodeport   NodePort       10.96.205.132    <none>              80:32223/TCP   20h
ng-dep               NodePort       10.105.229.228   <none>              80:32405/TCP   14d
[root@master modules]# curl 10.107.27.178
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>
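
To observe the round-robin behaviour rather than a single response, the request can be repeated in a loop. This sketch assumes the demo image also serves /hostname.html, as the link in the response above suggests, returning the name of the pod that answered:

[root@master modules]# for i in $(seq 1 4); do curl -s 10.107.27.178/hostname.html; done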
