1. Node deployment list:
Hostname | IP | Role | OS | CPU/Memory | Disk
---|---|---|---|---|---
K8S-master | 192.168.200.50 | Master | 64-bit CentOS 7.9 (Linux kernel 3.10.0) | 2 cores / 4 GB | 40 GB
K8S-node01 | 192.168.200.51 | Node | 64-bit CentOS 7.9 (Linux kernel 3.10.0) | 2 cores / 4 GB | 40 GB
K8S-node02 | 192.168.200.52 | Node | 64-bit CentOS 7.9 (Linux kernel 3.10.0) | 2 cores / 4 GB | 40 GB
2. Initialize the system environment. Run all of the commands below on all three servers.
2.1 Install Docker
#!/bin/sh
# Log in to CentOS as root and make sure yum packages are up to date.
yum -y update
# Remove old Docker versions
yum remove -y docker docker-common docker-selinux docker-engine
# Install required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker CE yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Refresh the yum cache
yum makecache fast
# Install Docker CE
yum install -y docker-ce-18.03.1.ce-1.el7.centos
# Start the Docker daemon
systemctl start docker
# Enable Docker at boot
systemctl enable docker
echo '{"registry-mirrors": ["https://jn0vay2w.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
Then edit /etc/docker/daemon.json so that it also sets the systemd cgroup driver:
{
  "registry-mirrors": [
    "https://jn0vay2w.mirror.aliyuncs.com/",
    "https://dockerhub.azk8s.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart Docker to apply the changes:
systemctl daemon-reload
systemctl restart docker
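To confirm that the registry mirrors and the systemd cgroup driver are actually in effect, docker info can be checked after the restart:
docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd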
2.2 Disable the firewall
To avoid communication problems between the Kubernetes Master node and the worker (Node) nodes, disable the firewall on the locally built CentOS virtual machines.
systemctl disable firewalld
systemctl stop firewalld
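A quick check that the firewall is really stopped and will not come back after a reboot:
systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled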
2.3 Disable swap
Swap is virtual memory that the operating system falls back on when physical memory runs low. According to the Kubernetes documentation, swap hurts Kubernetes performance, so it should be disabled.
echo "vm.swappiness = 0">> /etc/sysctl.conf
sed -i 's/.*swap.*/#&/' /etc/fstab
swapoff -a
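To verify that swap is now off:
free -m      # the Swap line should show 0 total
swapon -s    # should print nothing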
2.4 Disable SELinux so containers can read the host filesystem without restrictions
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
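To verify: getenforce reports Permissive until the next reboot and Disabled afterwards.
getenforce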
2.5 Set the hostnames and add all three servers to /etc/hosts
hostnamectl set-hostname K8S-master   # on 192.168.200.50
hostnamectl set-hostname K8S-node01   # on 192.168.200.51
hostnamectl set-hostname K8S-node02   # on 192.168.200.52
cat >> /etc/hosts << EOF
192.168.200.50 K8S-master
192.168.200.51 K8S-node01
192.168.200.52 K8S-node02
EOF
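A quick check that the names resolve on each machine (run on every host, adjusting the targets as needed):
ping -c 2 K8S-master
ping -c 2 K8S-node01
ping -c 2 K8S-node02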
2.6 Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
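One assumption worth checking: these bridge sysctls only take effect when the br_netfilter kernel module is loaded, which is not always the case on a minimal CentOS 7 install. Loading it explicitly and making it persistent before re-running sysctl --system is a safe extra step:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load automatically at boot
sysctl --system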
2.7 Sync the system time
yum install -y ntpdate
ntpdate time.windows.com
# If the time zone is wrong, run the following command, then sync again
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
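Optionally, the one-shot ntpdate sync can be repeated periodically via cron; this is only a sketch, assuming ntpdate was installed as above and root's crontab is used:
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -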
2.8 Configure a domestic (Aliyun) Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.9 Install kubelet, kubeadm, and kubectl:
# kubectl-1.18.0 is the command-line management tool, kubeadm-1.18.0 bootstraps the K8S cluster, and kubelet-1.18.0 manages the containers on each node
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
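To confirm that the expected 1.18.0 binaries were installed:
kubeadm version -o short   # expect v1.18.0
kubelet --version          # expect Kubernetes v1.18.0
kubectl version --client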
2.10 Enable kubelet at boot and start it:
systemctl enable kubelet && systemctl start kubelet
3. Deploy the Master node
3.1 Export the default kubeadm configuration and modify it
# Export the default configuration file
kubeadm config print init-defaults > kubeadm.yml
Modify the configuration as follows:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change to the master node's IP
  advertiseAddress: 192.168.200.50
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # Set to this node's hostname (lowercase)
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is not reachable from China; use the Aliyun mirror instead
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Set the version number
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  # Pod subnet; must not overlap with the VM network (this is the Flannel default subnet)
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
3.2 List the required images
kubeadm config images list --config kubeadm.yml
The output looks like this:
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7
3.3 Pull the required images
kubeadm config images pull --config kubeadm.yml
The output looks like this:
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7
3.4 Initialize the master node
Run the following command to initialize the master node. It points kubeadm at the configuration file prepared above; the --upload-certs flag lets certificate files be distributed automatically when nodes are joined later, and piping through tee kubeadm-init.log saves the output to a log file.
Note: if the installed Kubernetes version does not match the downloaded image versions, initialization fails with a "timed out waiting for the condition" error. If initialization fails partway, or you want to change the configuration, run kubeadm reset to reset the node, delete the $HOME/.kube directory with rm -rf $HOME/.kube, and then run the initialization again (see the short reset sketch after the init output below).
kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
When it starts successfully, you will see output similar to the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.200.50:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ebfb9c16284581ee7a1c06725e00fbce31ac61922e9a3b9af798212d7783f535
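For reference, the recovery sequence mentioned in the note above is just these two commands, run on the master before retrying kubeadm init:
kubeadm reset        # roll back everything kubeadm set up on this node (asks for confirmation)
rm -rf $HOME/.kube   # remove the stale kubectl configuration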
3.5 Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# needed when running as a non-root user
chown $(id -u):$(id -g) $HOME/.kube/config
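Alternatively, when working as root, kubectl can simply be pointed at the admin kubeconfig instead:
export KUBECONFIG=/etc/kubernetes/admin.conf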
3.6 Verify the master node
kubectl get node
# Output:
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 3m24s v1.18.0
The master shows NotReady because no Pod network add-on has been installed yet; it becomes Ready after the network plugin is deployed in section 5.
4. Deploy the worker nodes
Run the following command on node01 and node02 to join them to the Master:
kubeadm join 192.168.200.50:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:124f57abe13b9d529a539cb6edb7f78bb2755d41741eab9f2425bf783a3959ef
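The token in this join command is only valid for 24 hours (the ttl set in kubeadm.yml). If it has expired by the time a node joins, a fresh join command can be printed on the master with:
kubeadm token create --print-join-command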
5. Install the network plugin
5.1 Download and modify the Calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
vi calico.yaml
In the CALICO_IPV4POOL_CIDR section, uncomment the setting and change 192.168.0.0/16 to 10.244.0.0/16 (the podSubnet configured in kubeadm.yml).
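For reference, a minimal sed sketch of that edit, assuming the CALICO_IPV4POOL_CIDR block in calico.yaml is commented out in the usual '# - name: ...' / '#   value: "192.168.0.0/16"' form; check with grep first and edit by hand if the layout differs:
grep -n "CALICO_IPV4POOL_CIDR" calico.yaml
sed -i 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|; s|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml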
5.2 Install the Calico network plugin
kubectl apply -f calico.yaml
5.3 Verify the installation
The network plugin is installed successfully once the Calico pods are in the Running state:
kubectl get pods --all-namespaces
The cluster is ready once every node's status shows Ready:
kubectl get nodes