1. Install the virtual machines
We need three VM nodes. We can install one node first, then use VirtualBox's clone feature to produce the other two.
The minimum requirements are:
- OS: CentOS 7.7
- Memory: 2G
- CPU: 2 cores
- Disk: 20G
1.1 Install VirtualBox
The steps are straightforward, so they are skipped here.
1.2 Download the CentOS installation image
CentOS download mirrors; as of 2019-09-21, the two mirrors are:
- East China Normal University mirror: http://mirrors.ecnu.edu.cn/centos/ (has versions before 7.7)
- Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/centos/ (only the latest 7.7)
We can download the latest CentOS 7.7 from the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/centos/7.7.1908/isos/x86_64/CentOS-7-x86_64-DVD-1908.iso
1.3 Create the virtual machine
- (1) Open the VirtualBox main window and click "New"
- (2) Enter a name; make sure to select "Linux" as the type and "Red Hat (64-bit)" as the version
- (3) Create a virtual hard disk of 20G, then click "Create"
- (4) In the main window, right-click the VM and click "Settings". First go to Network and set the attachment mode to "Bridged Adapter"; this gives the VM an IP on the same subnet as the host, so the host and the VM can reach each other
- (5) Under System, set the number of processors to 2
- (6) Under Storage, attach the CentOS 7.7 installation image we downloaded
- (7) When the settings are done, start the VM and choose "Install CentOS 7"
- (8) For software selection, choose "Minimal Install" to avoid installing unnecessary packages
- (9) Click "Begin Installation"; while the installation runs, set the root password on the same screen
After a short wait the installation finishes and prompts for a reboot; after rebooting we can log in to the VM. A minimal install ships no graphical interface, so we land on a console and log in as root with the password we just set.
2. Configure the VM environment
2.1 Set up remote access
Working directly in the VirtualBox console is cumbersome, so we first set up remote access. The system has ssh-server enabled by default, so once we know the VM's IP we can ssh into it from the host.
2.1.1 Install net-tools
At first the VM has no IP address assigned: it cannot ping anything, nor install any software via yum. The dhclient command makes the VM obtain an IP address automatically:
dhclient
In addition, change ONBOOT=no at the end of /etc/sysconfig/network-scripts/ifcfg-{device name} to ONBOOT=yes. (The device name can be found with ifconfig; if ifconfig is missing, first install net-tools as described in the next step.) For the enp0s3 device here:
sed -i '/ONBOOT=n\|ONBOOT=y/c\ONBOOT=yes' /etc/sysconfig/network-scripts/ifcfg-enp0s3
Then run:
yum -y install net-tools
2.1.2 Get the VM's IP address
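With net-tools installed, ifconfig lists the network interfaces; the inet field of the bridged adapter is the VM's IP (on my setup the device is enp0s3, but the name may differ on yours):
ifconfig enp0s3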
2.1.3 Log in to the VM over ssh from the host
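From the host, ssh in as root with the password set during installation (192.168.0.12 is my master node's address; substitute the IP you found in the previous step):
ssh root@192.168.0.12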
From here on, everything can be done from the host!
2.2 Configure the yum repository
Run the following to switch the yum repository to the Aliyun mirror:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
2.3 Disable the firewall
The firewall must be disabled up front, or it will cause trouble when we install the K8S cluster later. Stop it and disable it at boot:
systemctl stop firewalld && systemctl disable firewalld
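To verify that it is really off, the following should print "inactive":
systemctl is-active firewalld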
2.4 Disable swap
Linux's swap mechanism must be disabled when installing a K8S cluster, since swapping hurts performance and stability. We can set this up in advance:
- swapoff -a disables swap temporarily; it comes back after a reboot.
- Editing /etc/fstab and commenting out the line containing swap disables it permanently. The following command comments out that line:
sed -i '/ swap / s/^/#/' /etc/fstab
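To confirm that swap is gone, check that the Swap row shows all zeros:
free -h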
2.5 Install docker
Installing docker follows the official docs at https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites. Run the following commands in order:
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Configure the Aliyun repository:
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker:
yum install -y docker-ce docker-ce-cli containerd.io
Start the docker service and enable it at boot:
systemctl start docker && systemctl enable docker
After a successful install, check the version with:
docker version
The latest release at the time of writing was 19.03.
You can run the hello-world image to verify that docker works:
docker run --rm hello-world
2.6 Install kubernetes
We use the kubeadm tool provided by kubernetes to install the cluster; see the official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
2.6.1 Configure the kubernetes yum repository
The official repository is unreachable, so use the Aliyun mirror instead. Run the following to add the kubernetes.repo repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.6.2 Disable SELinux
- setenforce 0 disables it temporarily.
- To disable it permanently, modify /etc/sysconfig/selinux:
sed -i '/SELINUX=e\|SELINUX=p\|SELINUX=d/c\SELINUX=disabled' /etc/sysconfig/selinux
2.6.3 Install the kubernetes components
yum install -y kubelet kubeadm kubectl
2.6.4 Start kubelet
systemctl enable kubelet && systemctl start kubelet
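Note that at this point kubelet will keep restarting in a crash loop because the cluster has not been initialized yet; this is expected and resolves itself after kubeadm init. You can observe it with:
systemctl status kubelet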
3. Clone the virtual machines
Once the first VM is fully set up, use VirtualBox's clone feature to create the other two.
3.1 Shut down the first VM
Shut down the first VM before cloning:
shutdown -h now
3.2 Clone
Right-click the VM, then click "Clone".
- We named the first VM centos-master; it will be the k8s cluster's master node. Name this clone centos-worker1; it will be worker node 1.
- Choose "Full clone" as the clone type.
- For the MAC address policy, choose "Generate new MAC addresses for all network adapters".
After clicking clone you get an identical VM; clone centos-worker2 from the master the same way.
3.3 Start the three VMs
Start the three VMs and run ifconfig on each; the three nodes have the following IP addresses:
- centos-master: 192.168.0.12
- centos-worker1: 192.168.0.13
- centos-worker2: 192.168.0.14
All three IPs are on the same subnet as the host, and the machines can all ping each other. Next we can ssh into all three VMs from the host at the same time.
3.4 Configure the VMs
Taking the centos-master VM as an example, set the hostname and hosts entry with:
cat > /etc/hostname <<< 'k8s-master' && cat >> '/etc/hosts' <<< '192.168.0.12 k8s-master'
Set the hostnames of the other two VMs to k8s-worker1 and k8s-worker2, and configure their hosts files:
cat > /etc/hostname <<< 'k8s-worker1' && cat >> '/etc/hosts' <<< '192.168.0.13 k8s-worker1'
cat > /etc/hostname <<< 'k8s-worker2' && cat >> '/etc/hosts' <<< '192.168.0.14 k8s-worker2'
The changes take effect only after the VMs are rebooted.
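If you prefer to avoid the reboot, hostnamectl applies the new hostname immediately (shown for the master; use the matching name on each worker):
hostnamectl set-hostname k8s-master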
4. Create the cluster
With all the preparation done, we can now actually create the cluster. We use the official kubeadm tool; see the docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
4.1 Initialize the k8s cluster
Run the following on the k8s-master node:
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository="gcr.azk8s.cn/google_containers" --kubernetes-version=v1.16.0 --apiserver-advertise-address=192.168.0.12
- The --pod-network-cidr=10.244.0.0/16 option specifies the pod subnet. We will use the flannel network plugin later, so this must be the CIDR flannel expects.
- The --image-repository="gcr.azk8s.cn/google_containers" option specifies the container image registry, since Google's official registry cannot be reached from mainland China (you know why).
- The --kubernetes-version=v1.16.0 option specifies the k8s version to install. I used the latest version at the time (2019-09-22), v1.16.0; it keeps moving, so pin it to the latest version when you install.
- The --apiserver-advertise-address=192.168.0.12 option is the address the api-server binds to, i.e. this k8s-master node's IP.
Note: if kubeadm init fails or is force-terminated, run kubeadm reset before executing it again.
Running the command produced an error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
We set the content of /proc/sys/net/bridge/bridge-nf-call-iptables to 1, run kubeadm reset, and then initialize again:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
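To keep this setting across reboots, it can also be persisted through sysctl (a sketch; the file name k8s.conf is arbitrary):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system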
Once it completes, we get the message that the k8s cluster initialized successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.12:6443 --token 2i80rk.ca8tjurnpp0yf8h8 \
--discovery-token-ca-cert-hash sha256:60bac2e3c44d074669801486c9f3a10ef60633dbfebbffb5db8ccf7ebe2bed88
First run the following three commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The output above also reminds us to deploy a network, and shows the kubeadm join command other nodes can use to join the cluster.
4.2 Create the network
If we skip creating a network and check the pod status, the coredns pods are stuck in the Pending state and the cluster is unusable.
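The pod status can be checked from the master node with:
kubectl get pods -n kube-system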
Consult the official documentation to choose a suitable network plugin. Here I use flannel; its project page is https://github.com/coreos/flannel .
We can install flannel by running kubectl apply on the kube-flannel.yml file from the repository (https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml). Note, however, that you will likely be unable to pull the flannel image, so modify the image in that file before applying it: replace quay.io/coreos/flannel:v0.11.0-amd64 with quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64, a mirror provided by Qiniu Cloud that is reachable from mainland China.
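Instead of editing the file by hand, a sed one-liner can rewrite the image references (assuming the manifest has been saved locally as kube-flannel.yml):
sed -i 's#quay.io/coreos/flannel#quay-mirror.qiniu.com/coreos/flannel#g' kube-flannel.yml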
Once the image in the yml has been changed, create the flannel network with:
kubectl apply -f- <<\EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
    # SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
EOF
4.3 Fixing the "No networks found in /etc/cni/net.d" error
After creating flannel I waited quite a while, yet the coredns pods were still Pending and the master node remained NotReady.
Checking the status with systemctl status kubelet, the kubelet log contains errors like Unable to update cni config: No networks found in /etc/cni/net.d.
The error messages look like this:
kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:dock... uninitialized
cni.go:202] Error validating CNI config &{cbr0 false [0xc001659e40 0xc001659ec0] [123 10 32 32 34 110 97 109 101 ...12 101 34 58 3
cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
After a long search I finally found the solution to this problem on Stack Overflow: the /etc/cni/net.d/10-flannel.conflist config file is missing the "cniVersion": "0.2.0" field.
Add "cniVersion": "0.2.0" and rewrite the file:
cat > /etc/cni/net.d/10-flannel.conflist <<\EOF
{
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
EOF
After a short while, all the pods reach the running state, and the master node becomes Ready as well.
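Both can be verified from the master node:
kubectl get pods --all-namespaces
kubectl get nodes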
4.4 Join the k8s-worker1 and k8s-worker2 nodes
On each of the two worker nodes, run the kubeadm join command printed when the cluster initialized successfully. Before running it, also set the content of /proc/sys/net/bridge/bridge-nf-call-iptables to 1:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Then run the kubeadm join command:
kubeadm join 192.168.0.12:6443 --token 2i80rk.ca8tjurnpp0yf8h8 \
--discovery-token-ca-cert-hash sha256:60bac2e3c44d074669801486c9f3a10ef60633dbfebbffb5db8ccf7ebe2bed88
Afterwards the worker nodes also stayed NotReady. Running kubectl describe node k8s-worker1 on the master node showed it was once again the /etc/cni/net.d/10-flannel.conflist problem with the flannel network, so we run the same command on both worker nodes:
cat > /etc/cni/net.d/10-flannel.conflist <<\EOF
{
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
EOF
A little later, running kubectl get nodes on the master node shows that all nodes are Ready.
Our three-node k8s cluster has been built successfully!
5. Install an older version of k8s
In the steps above we did not pin versions when installing docker, kubelet, kubeadm, and the other components, so we always got the latest releases. To install a specific version, pin it during installation.
For example, to install k8s v1.11.9:
Pin the docker version to 18.03.0.ce-1.el7.centos:
yum install -y docker-ce-18.03.0.ce-1.el7.centos docker-ce-cli-18.03.0.ce-1.el7.centos containerd.io
Pin kubelet, kubeadm, and kubectl to v1.11.9:
yum install -y kubelet-1.11.9 kubeadm-1.11.9 kubectl-1.11.9
Since kubeadm 1.11.9 cannot take an image registry via the --image-repository flag, pull the k8s images onto the machine beforehand and retag them:
docker pull gcr.azk8s.cn/google_containers/kube-proxy:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-proxy:v1.11.9 k8s.gcr.io/kube-proxy-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-apiserver:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-apiserver:v1.11.9 k8s.gcr.io/kube-apiserver-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-controller-manager:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-controller-manager:v1.11.9 k8s.gcr.io/kube-controller-manager-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-scheduler:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-scheduler:v1.11.9 k8s.gcr.io/kube-scheduler-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/pause:3.1
docker tag gcr.azk8s.cn/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull gcr.azk8s.cn/google_containers/etcd:3.2.18
docker tag gcr.azk8s.cn/google_containers/etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull gcr.azk8s.cn/google_containers/coredns:1.1.3
docker tag gcr.azk8s.cn/google_containers/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
Finally run:
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.9 --apiserver-advertise-address=10.177.106.145
with --kubernetes-version=v1.11.9 specified.
Updated at 20210124
Creating the k8s cluster
Use the Aliyun image registry:
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" --kubernetes-version=v1.20.2 --apiserver-advertise-address=10.177.0.60
Creating the flannel network
Replace the quay.io/coreos/flannel:v0.13.1-rc1 image with a Docker Hub mirror:
kubectl apply -f- <<\EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: gejunqiang/flannel:v0.13.1-rc1
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: gejunqiang/flannel:v0.13.1-rc1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
EOF