Kubernetes Dual-Stack Deployment

1. What is dual-stack

IPv4/IPv6 dual-stack networking allows both an IPv4 and an IPv6 address to be assigned to Pods and Services.

Starting with version 1.21, Kubernetes clusters enable IPv4/IPv6 dual-stack networking by default, so that IPv4 and IPv6 addresses can be assigned at the same time (per the official documentation).

Dual-stack support graduated to stable in 1.23; in this walkthrough we deploy version 1.23.1.

2. Configuration

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

Configure name resolution (on every node). Add the following entries to /etc/hosts:

2000::171 master
2000::172 node1
192.168.124.5 master
192.168.124.7 node1

Pass bridged IPv4 and IPv6 traffic to the iptables chains (on every node):
cat >  /etc/sysctl.conf<< EOF
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding=1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
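
If sysctl -p complains that the bridge-nf-call keys do not exist, the br_netfilter kernel module has not been loaded yet. A minimal sketch of loading it now and at every boot (assuming a systemd-based distro that reads /etc/modules-load.d):

modprobe br_netfilter                              # load the module immediately
echo br_netfilter > /etc/modules-load.d/k8s.conf   # reload it automatically after a reboot
sysctl -p                                          # re-apply the settings above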

Docker installation is omitted here. Note that /etc/docker/daemon.json must be configured so that Docker uses the systemd cgroup driver, matching the kubelet:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
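
After editing daemon.json, restart Docker and confirm that the cgroup driver actually changed (the grep is just a quick illustration):

systemctl daemon-reload
systemctl restart docker
docker info | grep -i cgroup    # should report "Cgroup Driver: systemd"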


Configure the Alibaba Cloud package repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl, pinning version 1.23.1:
yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1
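
The kubeadm preflight check later warns if the kubelet service is not enabled, so enable it right away (kubeadm starts it during init):

systemctl enable kubelet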



3. Bootstrapping the cluster

Create the kubeadm configuration file kubeadmin.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 2001::/64,10.39.0.0/16       # no need to change
  serviceSubnet: 2002::/110,10.96.0.0/16  # no need to change
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://[::]:2381
controllerManager:
  extraArgs:
    "node-cidr-mask-size-ipv4": "25"
    "node-cidr-mask-size-ipv6": "80"
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"
clusterName: "rhel-k8scluster" # you can customize the cluster name here
kubernetesVersion: "v1.23.1"   # cluster version
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "2000::171" # only one address can be set here (either v4 or v6 works, the official docs lean toward v6; use your own master address)
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 2000::171,192.168.124.5 # both the host's IPv4 and IPv6 addresses must be set here
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: "::"
clusterCIDR: "2001::/64,10.39.0.0/16"    # the Pod address range; no need to change if you kept the defaults above
mode: "ipvs"

Note: in a cluster where kube-proxy runs in iptables mode, a ClusterIP is a virtual IP that exists only as the target of iptables forwarding rules, so it does not respond to ping; the reliable way to test that a ClusterIP works is to curl the service IP and port and check for a response (see the example below). In a cluster where kube-proxy runs in ipvs mode, a ClusterIP can be pinged.
In the advertiseAddress field, if you configure two addresses, only the first one takes effect.
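
For example, once the cluster is up, the built-in kubernetes Service sits on the first address of the primary (IPv6) Service range configured above, i.e. 2002::1. A sketch of the curl test, assuming the default anonymous access to /version is still enabled:

curl -k https://[2002::1]:443/version   # any HTTP response (even a 403) proves the ClusterIP is being forwarded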

Run kubeadm init --config kubeadmin.yaml to bootstrap the cluster:

[root@master ~]# kubeadm init --config kubeadmin.yaml 
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [2002::1 2000::171]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [2000::171 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [2000::171 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.002703 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zcdy8g.nby6c0ogzyor3cej
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join [2000::171]:6443 --token zcdy8g.nby6c0ogzyor3cej \
	--discovery-token-ca-cert-hash sha256:6667d2bb4203739dca360fa62f0cba12a5b49a930c60870228ab22c4ceb9aba9 

Save this output; the token and the CA certificate hash are needed below.

Next, configure the worker node. On node1, create a join configuration file named kube-admin.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "[2000::171]:6443"
    token: "zcdy8g.nby6c0ogzyor3cej"
    caCertHashes:
    - "sha256:6667d2bb4203739dca360fa62f0cba12a5b49a930c60870228ab22c4ceb9aba9"
    # change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 2000::172,192.168.124.7

There is a big pitfall here: unlike a traditional IPv4-only cluster, you cannot simply paste the join command printed by the control plane; if you do, the node only gets a single-stack IP. Joining with a configuration file is the safer approach:

kubeadm join --config=kube-admin.yaml

If you have lost the token, regenerate it with:

kubeadm token create --print-join-command

and copy the new token and CA certificate hash into the JoinConfiguration above.
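
Once node1 has joined, you can check from the control plane that it registered with both address families; the exact output varies, but both an IPv4 and an IPv6 InternalIP should be listed:

kubectl get nodes                                          # node1 should appear (Ready only after the CNI is installed)
kubectl get node node1 -o jsonpath='{.status.addresses}'   # expect both an IPv4 and an IPv6 InternalIP entry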

4. Network plugin configuration

Calico and Flannel are the most commonly used network plugins, but Flannel does not currently appear to support Kubernetes dual-stack networking, so we use Calico here.
The official configuration guide is at:
https://docs.tigera.io/archive/v3.21/networking/ipv6#enable-dual-stack
Following that guide, first download the Calico manifest:
https://docs.projectcalico.org/v3.21/manifests/calico-typha.yaml
Then modify the ipam section of the CNI configuration in the manifest so that both address families are assigned:

"ipam": {
              "type": "calico-ipam",
              "assign_ipv4": "true",
              "assign_ipv6": "true"
          },

Then set the following environment variables on the calico-node container:

            - name: FELIX_IPV6SUPPORT
              value: "true"
            - name: IP6
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
            - name: CALICO_IPV6POOL_CIDR
              value: "2001::/64"

Change the IPIP mode from the default "Always" to "CrossSubnet" (see the snippet below). Note that IPv4 traffic supports IPIP or VXLAN encapsulation and defaults to IPIP in Always mode, while IPv6 supports neither IPIP nor VXLAN, so Pods on nodes in different subnets cannot communicate over IPv6.
This is another big pitfall: if you skip this change, IPv6 addresses inside the cluster cannot reach IPv6 addresses outside the cluster, although communication inside the cluster still works.
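
The IPIP mode is controlled by the CALICO_IPV4POOL_IPIP environment variable of the calico-node container in the same manifest; a sketch of the change described above:

            - name: CALICO_IPV4POOL_IPIP
              value: "CrossSubnet"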
The manifest also contains two PodDisruptionBudget objects; change their apiVersion from policy/v1beta1 to policy/v1.

Finally, apply the Calico manifest to complete the installation.
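
Assuming the edited manifest was saved as calico-typha.yaml, apply it and wait for the calico-node Pods to come up:

kubectl apply -f calico-typha.yaml
kubectl -n kube-system get pods -l k8s-app=calico-node -w   # Ctrl-C once every pod is Running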

Verify the Pod CIDR allocation on a node:

[root@master ~]# kubectl get nodes  node1  -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}'
2001::1:0:0:0/80
10.39.0.128/25

Both an IPv6 and an IPv4 Pod CIDR were allocated, so the dual-stack configuration succeeded.
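
As a final end-to-end check, you can verify that Pods and Services themselves get both address families; a minimal sketch using a throwaway nginx deployment (the names here are illustrative only):

kubectl create deployment nginx --image=nginx
kubectl get pod -l app=nginx -o jsonpath='{.items[0].status.podIPs}'   # expect one IPv4 and one IPv6 entry

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 80
EOF
kubectl get svc nginx -o jsonpath='{.spec.clusterIPs}'                 # expect two ClusterIPs, one per family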
