1 Target environment
The target environment consists of 3 nodes on the same LAN, each a server running Ubuntu Server 18.04 x86_64. The hostname-to-IP mapping of the three machines is shown in the table below.
No. | Node | IP address |
---|---|---|
1 | node0 | 172.24.149.225 |
2 | node1 | 172.24.144.245 |
3 | node2 | 172.24.152.90 |
2 Installing k8s
2.1 Disable swap
# Disable swap temporarily; it comes back after a reboot
swapoff -a
# Remove the swap partition / swap file entry from /etc/fstab (back it up first)
cp /etc/fstab /etc/fstab.bak
grep -v "swap" /etc/fstab > /etc/fstab.new
mv /etc/fstab.new /etc/fstab
# Tune the swappiness parameter
# Takes effect immediately, lost on reboot
echo 0 > /proc/sys/vm/swappiness
# Persist the setting across reboots
echo 'vm.swappiness=0' >> /etc/sysctl.conf
sysctl -p
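The `grep -v "swap"` filter above can be rehearsed on a throwaway copy before touching the real /etc/fstab; a minimal sketch (the sample data and paths under /tmp are illustrative, and note the filter also drops comment lines that merely mention swap):

```shell
#!/bin/sh
# Demonstrate the swap-line filter on a sample fstab copy (illustrative data).
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
# Same filter as in the section above: drop any line mentioning swap.
grep -v "swap" /tmp/fstab.sample > /tmp/fstab.filtered
cat /tmp/fstab.filtered
```

Only the root filesystem entry survives; verify the real result with `swapon --show` (empty output means no active swap).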
2.2 Install docker
Install the docker components following the docker installation guide. If the download is slow, try the docker-ce mirror provided by Alibaba Cloud.
# Add the current user to the docker group so sudo is not needed every time
gpasswd -a $USER docker
# Configure registry mirrors
mkdir -p /etc/docker
cat <<EOF >/etc/docker/daemon.json
{
"registry-mirrors": [
"https://ccr.ccs.tencentyun.com/",
"https://docker.mirrors.ustc.edu.cn/"
],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
# Start the docker daemon
systemctl restart docker
# Enable docker at boot
systemctl enable docker
## Configure the systemd cgroup driver for containerd (when using containerd as the runtime)
# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml
# sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options\]/a\ SystemdCgroup = true" /etc/containerd/config.toml
2.3 Configure the kubeadm package source
To speed up package downloads, this article installs from Alibaba Cloud's kubernetes apt source.
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
2.4 Install kubeadm and related tools
apt -y install kubelet kubeadm kubectl
# Enable kubelet at boot
systemctl enable kubelet
2.5 Download the kubernetes images
mkdir -p ~/k8s_setup/ && cd ~/k8s_setup/
# Dump the default init parameters for reference
kubeadm config print init-defaults > init.default.yaml
# Save a customized config with the image-pull mirror; note that older kubeadm versions need the apiServerExtraArgs entry
cat <<EOF > ~/k8s_setup/init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.22.0
networking:
podSubnet: "172.31.121.0/24"
#apiServerExtraArgs:
# service-node-port-range: 1-65535
EOF
# Pull the images
kubeadm config images pull --config=init-config.yaml
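Before pulling images it is worth sanity-checking the `podSubnet` value, since a malformed CIDR surfaces only later at init time. A minimal shape check (the temp file path is illustrative; it extracts and validates the same value used in init-config.yaml above):

```shell
#!/bin/sh
# Sanity-check the podSubnet in the kubeadm config: it must be valid
# a.b.c.d/prefix CIDR notation and must not overlap the node LAN.
cat > /tmp/init-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "172.31.121.0/24"
EOF
cidr=$(sed -n 's/.*podSubnet: *"\(.*\)".*/\1/p' /tmp/init-config.yaml)
# Crude shape check only; it does not validate octet ranges.
echo "$cidr" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$' \
  && echo "podSubnet ok: $cidr" \
  || echo "podSubnet malformed: $cidr" >&2
```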
2.6 Install the master (run on node0)
cd ~/k8s_setup/
kubeadm init --config=init-config.yaml
# Output like the following indicates success:
# Your Kubernetes control-plane has initialized successfully!
#
# To start using your cluster, you need to run the following as a regular user:
#
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#
# Alternatively, if you are the root user, you can run:
#
# export KUBECONFIG=/etc/kubernetes/admin.conf
#
# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
# https://kubernetes.io/docs/concepts/cluster-administration/addons/
#
# Then you can join any number of worker nodes by running the following on each as root:
#
# kubeadm join 172.24.149.225:6443 --token 67q5jn.hhpzbq9u5wxtofsz \
# --discovery-token-ca-cert-hash sha256:2133d2361e12f7f6052df2aa723a87197495bee40e5776835c878d64134602c2
# Prepare the kubectl config files as instructed above
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
2.7 Install the worker nodes (run on node1 and node2)
cd ~/k8s_setup/
# Generate the join config; apiServerEndpoint, token and tlsBootstrapToken come from the kubeadm join command printed in section 2.6
cat <<EOF > ~/k8s_setup/join-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
bootstrapToken:
apiServerEndpoint: 172.24.149.225:6443
token: 67q5jn.hhpzbq9u5wxtofsz
unsafeSkipCAVerification: true
tlsBootstrapToken: 67q5jn.hhpzbq9u5wxtofsz
EOF
# 执行加入命令
kubeadm join --config=join-config.yaml
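If the original `kubeadm join` output is lost, a new token can be created on the master with `kubeadm token create`, and the `--discovery-token-ca-cert-hash` can be recomputed from /etc/kubernetes/pki/ca.crt: it is the sha256 digest of the CA's DER-encoded public key. The sketch below assumes openssl is installed, and generates a throwaway self-signed cert so it can be run anywhere; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/sh
# Recompute a discovery-token-ca-cert-hash: sha256 over the DER-encoded
# public key of the cluster CA certificate. A throwaway cert stands in
# for /etc/kubernetes/pki/ca.crt here.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The result is a 64-character hex digest; prefix it with `sha256:` when passing it to kubeadm join.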
2.8 Install the network plugin (calico, run on node0)
2.8.1 Install calicoctl
# -L is required because GitHub release downloads redirect
curl -L -o calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.17.1/calicoctl
chmod +x calicoctl && mv calicoctl /usr/local/bin
2.8.2 Install the calico containers
# Download the calico deployment manifest; if the images pull slowly, grep the image fields out of it and pull them manually
wget https://docs.projectcalico.org/manifests/calico-etcd.yaml
# Edit the calico config with cluster-specific values; see: https://www.cnblogs.com/Christine-ting/p/12837250.html
# Replace the etcd endpoint address
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.24.149.225:2379\"@gi' calico-etcd.yaml
# Replace the etcd certificates
# File used by the apiserver --etcd-certfile flag
export ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 -w 0`
# File used by the apiserver --etcd-keyfile flag
export ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 -w 0`
# File used by the apiserver --etcd-cafile flag
export ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 -w 0`
sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico-etcd.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico-etcd.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico-etcd.yaml
sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico-etcd.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico-etcd.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico-etcd.yaml
### Replace the IPPOOL CIDR
sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' calico-etcd.yaml
sed -i 's@# value: "192.168.0.0/16"@ value: "172.31.121.0/24"@g' calico-etcd.yaml
# Relax the configMap mount mode so the etcd credential files can be opened
sed -i 's#defaultMode: 0400#defaultMode: 0440#g' calico-etcd.yaml
# Bring up calico
kubectl apply -f calico-etcd.yaml
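The long sed one-liners above are easy to mistype, and a bad substitution silently produces a broken manifest. They can be rehearsed on a small excerpt of calico-etcd.yaml first; a sketch (the excerpt file under /tmp is illustrative):

```shell
#!/bin/sh
# Rehearse the etcd_endpoints substitution on a two-line excerpt of
# calico-etcd.yaml before editing the real manifest.
cat > /tmp/calico-excerpt.yaml <<'EOF'
kind: ConfigMap
data:
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
EOF
# Identical sed expression to the one used on the full manifest above.
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.24.149.225:2379\"@gi' /tmp/calico-excerpt.yaml
cat /tmp/calico-excerpt.yaml
```

The excerpt should now carry the real endpoint; run the same `cat`/`grep` spot-check on the full manifest after each substitution.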
2.9 Verify the installation
# Watch until all pods are ready
watch kubectl get pods -n kube-system
# The final output looks like:
# NAMESPACE NAME READY STATUS RESTARTS AGE
# kube-system calico-kube-controllers-7bc67ff9d9-w45rx 1/1 Running 0 4m21s
# kube-system calico-node-f78tj 1/1 Running 0 4m21s
# kube-system calico-node-th6pj 1/1 Running 0 4m21s
# kube-system calico-node-tr87t 1/1 Running 0 4m21s
# kube-system coredns-7f89b7bc75-2tk85 1/1 Running 0 11h
# kube-system coredns-7f89b7bc75-lzvxr 1/1 Running 0 11h
# kube-system etcd-node0 1/1 Running 1 11h
# kube-system kube-apiserver-node0 1/1 Running 1 11h
# kube-system kube-controller-manager-node0 1/1 Running 1 11h
# kube-system kube-proxy-p4z7z 1/1 Running 1 10h
# kube-system kube-proxy-pbl4g 1/1 Running 1 10h
# kube-system kube-proxy-vh9ps 1/1 Running 1 11h
# kube-system kube-scheduler-node0 1/1 Running 1 11h
If any pod is in an error state, run kubectl --namespace=kube-system describe pod <pod_name> to find the cause. If the installation fails, run kubeadm reset to restore the host to its original state, then run kubeadm init again to retry the installation.
3 Possible problems
# Problem 1:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# Fix
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF
sysctl --system
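After `sysctl --system`, the values kubeadm preflight checks can be read back straight from /proc to confirm they took effect; a small sketch that prints one line per setting (both must read 1 on a prepared node):

```shell
#!/bin/sh
# Read back the two settings kubeadm preflight checks; a missing
# bridge-nf-call-iptables file means the br_netfilter module is not loaded.
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/ipv4/ip_forward; do
  if [ -r "$f" ]; then
    echo "$f = $(cat "$f")"
  else
    echo "$f missing (is the br_netfilter module loaded?)"
  fi
done
```

If the bridge file is missing, `modprobe br_netfilter` (and an entry in /etc/modules-load.d/) loads the module before re-running sysctl.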
Miscellaneous
A deployment for testing
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx-test
labels:
app: nginx-test
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy-test
template:
metadata:
name: nginx-deploy-test-pod
labels:
app: nginx-deploy-test
spec:
containers:
- name: nginx
image: nginx:1.19.6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
hostPort: 8080
restartPolicy: Always
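A common reason a Deployment is rejected or creates no Pods is a mismatch between spec.selector.matchLabels and the Pod template labels. A minimal grep/sed spot-check on a saved manifest, using an abridged copy of the deployment above (the /tmp path is illustrative; `kubectl apply --dry-run=client -f` gives a fuller check when a cluster is available):

```shell
#!/bin/sh
# Spot-check: spec.selector.matchLabels must match the Pod template labels,
# or the apps/v1 Deployment is rejected by the API server.
cat > /tmp/nginx-test.yaml <<'EOF'
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-deploy-test
  template:
    metadata:
      labels:
        app: nginx-deploy-test
EOF
# In this abridged manifest the first "app:" value is the selector and the
# second is the template label.
vals=$(sed -n 's/.*app: *//p' /tmp/nginx-test.yaml)
first=$(echo "$vals" | sed -n 1p)
second=$(echo "$vals" | sed -n 2p)
[ "$first" = "$second" ] && echo "selector matches template labels" \
  || echo "MISMATCH: $first vs $second" >&2
```

Once deployed, the hostPort mapping can be exercised with `curl http://<node_ip>:8080` against whichever node the Pod landed on.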
References
- https://docs.docker.com/engine/install/ubuntu/
- https://developer.aliyun.com/mirror/docker-ce
- https://developer.aliyun.com/mirror/kubernetes
- https://github.com/kubernetes/kubernetes/releases
- https://www.cnblogs.com/sweetchildomine/p/8823722.html
- https://blog.csdn.net/dejunyang/article/details/97972399
- https://blog.csdn.net/jinguangliu/article/details/82792617
- https://www.cnblogs.com/kevingrace/p/12778066.html
- https://kubernetes.io/docs/setup/production-environment/container-runtimes/
- https://docs.projectcalico.org/getting-started/clis/calicoctl/install
- https://docs.projectcalico.org/getting-started/kubernetes/quickstart
- https://www.cnblogs.com/Christine-ting/p/12837250.html
- https://blog.csdn.net/cui_cui_666/article/details/102460555
- https://blog.csdn.net/abcaxyzx/article/details/111976785
- https://github.com/kubernetes/kubeadm/issues/1334
- kubeadm: modifying the configMap
- node port range issue