I. Environment Configuration
1.1 System tuning
Configure a static IP address
vim /etc/netplan/50-cloud-init.yaml
network:
  ethernets:
    ens33:
      addresses: [192.168.236.177/24]
      dhcp4: false
      gateway4: 192.168.236.2
      nameservers:
        addresses: [192.168.236.2]
      optional: true
  version: 2
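Editing the file alone changes nothing until netplan applies it; a minimal check-and-apply sketch for the interface above:
```shell
# netplan try rolls back automatically if the new settings break connectivity
sudo netplan try
sudo netplan apply
# confirm the static address landed on ens33
ip addr show ens33
```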
Disable the firewall
sudo ufw disable
sudo systemctl stop ufw
sudo systemctl disable ufw
Disable swap
sudo swapoff -a
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
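A quick optional check that swap is really gone:
```shell
# the Swap line should read 0B total / 0B used
free -h
# prints nothing once all swap devices are off
swapon --show
```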
Disable SELinux
sudo apt install -y selinux-utils
sudo setenforce 0
sudo getenforce
Adjust kernel parameters for Kubernetes
vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
sudo modprobe br_netfilter
sudo sysctl -p /etc/sysctl.d/k8s.conf
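The modprobe above only lasts until the next reboot; a small sketch (assuming systemd's modules-load.d mechanism) to load br_netfilter persistently and verify the settings:
```shell
# load br_netfilter automatically at boot
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
# verify the module is loaded and the bridge sysctls took effect
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```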
# Note: for the remaining tuning items see the related blog posts; Alibaba Cloud ECS instances need no further tuning.
II. Docker Deployment
1 Install system dependencies
apt-get update && apt-get install -y curl telnet wget man \
apt-transport-https \
ca-certificates \
software-properties-common vim libltdl7
2 Configure the Kubernetes apt source
Configure the Aliyun mirrors
vi /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
You may hit a NO_PUBKEY error when running apt-get update; fix it with:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
apt-get update && apt-get install -y apt-transport-https
3 Install Docker from a domestic mirror
Reference: https://www.runoob.com/docker/ubuntu-docker-install.html
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/ \
$(lsb_release -cs) \
stable"
apt-get update
apt-get -y install docker-ce
To install a specific Docker version:
List available docker-ce versions: apt-cache madison docker-ce
Install a pinned docker-ce version: apt-get install docker-ce=18.06.3~ce~3-0~ubuntu
Check the installed version: docker version
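To keep a later apt-get upgrade from silently replacing the pinned Docker version, the package can be held (optional sketch):
```shell
# prevent unattended upgrades of docker-ce; apt-mark unhold reverses this
sudo apt-mark hold docker-ce
apt-mark showhold
```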
Start and enable Docker:
sudo systemctl enable docker
sudo systemctl start docker
Tune Docker:
Configure a registry mirror accelerator and log rotation
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://o4sfv1b.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "1"
  }
}
Restart Docker so the changes take effect
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
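To confirm the daemon actually picked up daemon.json, a quick check (output wording may differ slightly between Docker versions):
```shell
# the configured mirror should show up under "Registry Mirrors"
docker info | grep -A1 "Registry Mirrors"
# the logging driver should report json-file
docker info --format '{{.LoggingDriver}}'
```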
III. Deploy Kubernetes
1 Install kubelet and kubeadm (run on all nodes)
apt-get install -y kubelet=1.18.6-00 kubeadm=1.18.6-00 kubectl=1.18.6-00   # version 1.18.6
sudo systemctl enable kubelet && sudo systemctl start kubelet
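As with Docker, holding the Kubernetes packages avoids accidental version drift during later upgrades (optional sketch):
```shell
# keep kubelet/kubeadm/kubectl at 1.18.6 until explicitly unheld
sudo apt-mark hold kubelet kubeadm kubectl
```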
2 Master node deployment
mkdir /App/kubeadm/ -p
cd /App/kubeadm/
Create the kubeadm.conf configuration file
kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm.conf
Below is my tuned configuration file, for reference:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.10
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: prod-wlhyos-k8s1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.100.10:8443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.128.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
3 Cluster initialization
Check which images need to be pulled:
kubeadm config images list --config kubeadm.conf
Pull the images:
kubeadm config images pull --config ./kubeadm.conf
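Optionally, kubeadm's --dry-run flag can validate the config without touching the host; a minimal sketch:
```shell
# prints the manifests and actions kubeadm would perform, but changes nothing
sudo kubeadm init --config ./kubeadm.conf --dry-run
```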
Initialize the cluster:
sudo kubeadm init --config ./kubeadm.conf
**A successful initialization prints a lot of output; keep the final lines shown below**
```shell
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.10:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:944d62003e4cfe849263af919389ceb31fcf15e17419f757e350b91abec5c1e5
```
Following the official prompts, perform the steps below.
1. Run the following commands
```shell
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
2. Enable and start the kubelet service
```shell
# enable kubelet to start at boot
$ sudo systemctl enable kubelet
# start the Kubernetes node agent
$ sudo systemctl start kubelet
```
Notes:
For multiple master nodes, modify the configuration file:
controlPlaneEndpoint: "192.168.100.10:8443" is an added entry that points at the VIP address and port; an nginx proxy can serve this role (see the sketch after this note).
To switch kube-proxy to IPVS mode:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
If cluster initialization fails, reset and re-run it:
kubeadm reset -f
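A minimal sketch of such an nginx TCP proxy for the 8443 control-plane endpoint; the upstream master addresses and the nginx host are assumptions, not values taken from this deployment:
```shell
# hypothetical config on the host that answers 192.168.100.10:8443
# requires nginx with the stream module (e.g. nginx-full on Ubuntu)
sudo tee /etc/nginx/stream-apiserver.conf <<'EOF'
stream {
    upstream kube_apiserver {
        server 192.168.100.11:6443;   # master 1 (assumed address)
        server 192.168.100.12:6443;   # master 2 (assumed address)
    }
    server {
        listen 8443;
        proxy_pass kube_apiserver;
    }
}
EOF
# include this file from nginx.conf at the top level (outside the http{} block):
#   include /etc/nginx/stream-apiserver.conf;
sudo nginx -t && sudo systemctl reload nginx
```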
4 Deploy the Flannel network plugin (to install Calico instead, see the other blog post)
Check the nodes: they are NotReady at this point because no network plugin has been installed yet
kubectl get nodes
kubectl get cs
Deploy the network plugin
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Edit the Pod network CIDR
vi kube-flannel.yml
```yaml
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```
kubectl apply -f kube-flannel.yml
The cluster should now be healthy.
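A quick way to confirm flannel is up and the nodes flipped to Ready (namespace and label assumed from the coreos manifest used above):
```shell
# flannel runs as a DaemonSet: expect one Running pod per node
kubectl get pods -n kube-system -l app=flannel -o wide
# nodes should change from NotReady to Ready within a minute or so
kubectl get nodes
```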
Configure multiple master nodes and multiple etcd nodes
vi join-kubeadm.sh
#!/bin/bash
USER=root
CONTROL_PLANE_IPS="192.168.3.43 192.168.3.44"
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
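The script assumes passwordless root SSH from this master to the other control-plane hosts; a small sketch of the prerequisite and of running it (IPs as listed in the script):
```shell
# one-time: distribute the local SSH key so scp does not prompt for a password
ssh-copy-id root@192.168.3.43
ssh-copy-id root@192.168.3.44
# copy the certificates and admin.conf to the other masters
bash join-kubeadm.sh
```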
Join the additional master nodes:
kubeadm join 192.168.100.10:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e952d592a31eadbb7c2a02a1d9b6d2592c4794ed23f0d3f091eacec54665020a \
--control-plane
IV. Deploy and Configure the Worker Nodes
Copy the master's files
# copy admin.conf to node1
sudo scp /etc/kubernetes/admin.conf root@192.168.236.178:/etc/kubernetes/
# copy admin.conf to node2
sudo scp /etc/kubernetes/admin.conf root@192.168.236.179:/etc/kubernetes/
Run on each node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the nodes to the cluster
sudo kubeadm join 192.168.100.10:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e778d3665e52f5a680a87b00c6d54df726c2eda601c0db3bfa4bb198af2262a8
The cluster deployment is now complete.
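A final sanity check from the master (optional sketch):
```shell
# all masters and workers should be listed and report Ready
kubectl get nodes -o wide
# apiserver, etcd, coredns, kube-proxy and flannel pods should be Running
kubectl get pods -n kube-system
```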
Note:
If you lose the join token above, print a fresh join command with:
kubeadm token create --print-join-command --ttl 0