k8s Cluster Installation and Deployment

Server Planning

Server (hostname)   IP              Docker version   k8s version   Calico version   Kuboard version
k8s-master          172.16.13.111   20.10.7          1.23.16       v3.25            v3
k8s-node1           172.16.13.112   20.10.7          1.23.16       -                -
k8s-node2           172.16.13.113   20.10.7          1.23.16       -                -

Environment Preparation

Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Disable SELinux

# Disable temporarily
setenforce 0
# Disable permanently (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config

Disable swap

# Disable temporarily
swapoff -a
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
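
The sed expression comments out every fstab line that mentions swap. It can be sanity-checked on a scratch copy first (the file contents below are illustrative):

```shell
# Scratch fstab with one swap entry (illustrative contents)
cat > /tmp/fstab.test << 'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# Same substitution as above: prefix '#' to any line containing "swap"
sed -ri 's/.*swap.*/#&/' /tmp/fstab.test

# The root line is untouched; the swap line is now commented out
cat /tmp/fstab.test
```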

Set hostnames

hostnamectl set-hostname k8s-master # run on the master
hostnamectl set-hostname k8s-node1  # run on node1
hostnamectl set-hostname k8s-node2  # run on node2

Add hosts entries on the master

cat >> /etc/hosts << EOF
172.16.13.111 k8s-master
172.16.13.112 k8s-node1
172.16.13.113 k8s-node2
EOF

Add a hosts entry on node1

cat >> /etc/hosts << EOF
127.0.0.1 k8s-node1
EOF

Add a hosts entry on node2

cat >> /etc/hosts << EOF
127.0.0.1 k8s-node2
EOF

Enable the br_netfilter module so iptables can inspect bridged traffic

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the sysctl settings
sysctl --system

Synchronize time; Kubernetes relies on certificates, which are sensitive to clock skew

yum install -y chrony
systemctl enable chronyd && systemctl start chronyd
# chronyc sources takes no server argument; to sync against a specific server
# such as time.windows.com, add a "server" line to /etc/chrony.conf first
chronyc sources -v

Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Set the Docker cgroup driver to systemd

cat > /etc/docker/daemon.json << EOF
{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"registry-mirrors": [
		"https://jsiyxhlk.mirror.aliyuncs.com",
		"https://registry.docker-cn.com",
		"http://hub-mirror.c.163.com",
		"https://docker.mirrors.ustc.edu.cn"
	]
}
EOF

# Restart Docker
systemctl daemon-reload
systemctl restart docker
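
A malformed daemon.json will keep Docker from starting after the restart, so it is worth validating the JSON syntax first; shown here on a scratch copy:

```shell
# Scratch copy of the cgroup-driver setting (same key as above)
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' > /tmp/daemon.json

# json.tool pretty-prints the file and exits non-zero on a syntax error
python3 -m json.tool /tmp/daemon.json
```

On the real host, run python3 -m json.tool /etc/docker/daemon.json before restarting.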

Installation and Deployment

Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.23.16 kubeadm-1.23.16 kubectl-1.23.16
systemctl enable kubelet && systemctl start kubelet

Master Configuration

Generate the init configuration file on the master node

kubeadm config print init-defaults > kubeadm-init.yaml

On the master, edit kubeadm-init.yaml. Four fields must be changed:

  • localAPIEndpoint.advertiseAddress: the address this node advertises
  • nodeRegistration.name: the hostname registered with the cluster
  • imageRepository: the container registry used to pull images
  • kubernetesVersion: must match the installed version

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.13.111
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.16
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
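
Note that the file above sets no networking.podSubnet. If you want Calico to use a dedicated pod CIDR rather than reusing the service subnet, one can be added; the value below is an illustrative choice, not from the original setup, and it must not overlap serviceSubnet. If set, CALICO_IPV4POOL_CIDR should match this value.

```
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16   # illustrative pod CIDR, distinct from the service subnet
```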

Pull the images kubeadm needs on the master node

kubeadm config images pull --config kubeadm-init.yaml

Run the initialization on the master node

kubeadm init --config kubeadm-init.yaml

If initialization fails, fix the problem, then reset before running init again

kubeadm reset

On success, output like the following is printed; save the last two lines (the kubeadm join command) for when the nodes join the cluster

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.13.111:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:011b1dd34429896b33eaf22b3aa8c5680b93ef7ee2ea48b7f099e693e18ad5b7

Download the network plugin manifest

wget https://docs.projectcalico.org/v3.25/manifests/calico.yaml

Find the CALICO_IPV4POOL_CIDR property in calico.yaml, uncomment it, and set its value to the networking.serviceSubnet from kubeadm-init.yaml. (Note: Calico's pool is the pod network CIDR; normally it should be a dedicated range that does not overlap the service subnet.)

- name: CALICO_IPV4POOL_CIDR
  value: "10.96.0.0/12"

Following the init output, run these in order

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
source /etc/profile
kubectl apply -f calico.yaml

Copy admin.conf to the nodes so that cluster state can also be checked from them

scp /etc/kubernetes/admin.conf root@172.16.13.112:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf root@172.16.13.113:/etc/kubernetes/admin.conf
# Run on each node
export KUBECONFIG=/etc/kubernetes/admin.conf
source /etc/profile

Node Configuration

Run the kubeadm join command on each node to join the cluster

kubeadm join 172.16.13.111:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:011b1dd34429896b33eaf22b3aa8c5680b93ef7ee2ea48b7f099e693e18ad5b7

If join reports that the token has expired, create a new one

kubeadm token create --print-join-command

If join fails with other errors, you can also reset and retry

kubeadm reset

Finally, check the node STATUS; the cluster is deployed successfully once every node is Ready

kubectl get node
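
With all nodes joined and the network plugin running, the output should resemble the following (ages and ordering are illustrative):

```
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   20m   v1.23.16
k8s-node1    Ready    <none>                 5m    v1.23.16
k8s-node2    Ready    <none>                 5m    v1.23.16
```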

Common Commands

Check cluster node status

kubectl get node

Check pods in the kube-system namespace; a pod in ImagePullBackOff means its image pull failed, so adjust the registry mirrors promptly

kubectl get pods -n kube-system

Removing a node, step 1 (drain the node)

# --delete-emptydir-data replaces the deprecated --delete-local-data flag
kubectl drain k8s-node1 --delete-emptydir-data --force --ignore-daemonsets

Removing a node, step 2 (delete the node)

kubectl delete node k8s-node1

Kuboard Installation

Install on the master node with docker run

docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://172.16.13.111:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -e KUBOARD_ADMIN_DERAULT_PASSWORD="newland" \
  -v /home/kuboard/data:/data \
  eipwork/kuboard:v3.5.2.3
  # The image swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3.5.2.3 can also be used for a faster download.
  # Do not use 127.0.0.1 or localhost as the internal IP.
  # Kuboard does not need to be on the same subnet as the K8S cluster; the Kuboard Agent can even reach the Kuboard Server through a proxy.

Open http://your-host-ip:80 in a browser to access the Kuboard v3.x UI

NFS Service Installation

Install with yum

yum -y install nfs-utils 
systemctl restart rpcbind && systemctl enable rpcbind 
systemctl restart nfs && systemctl enable nfs

Configure NFS

cat >> /etc/exports << EOF
/home/nfs *(insecure,rw,async,no_root_squash)
EOF

exportfs -rf
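
Once exported, the share can be consumed from the cluster, for example as a PersistentVolume. The name and capacity below are illustrative, as is the assumption that the NFS server runs on the master:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                # illustrative name
spec:
  capacity:
    storage: 10Gi             # illustrative size
  accessModes:
    - ReadWriteMany           # rw access from multiple clients matches the export above
  nfs:
    server: 172.16.13.111     # assumes the NFS server is the master node
    path: /home/nfs
```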

Export options reference

Option            Description
ro                read-only access
rw                read-write access
sync              writes are committed to memory and disk synchronously
async             writes are buffered in memory instead of going straight to disk
secure            NFS uses secure TCP/IP ports below 1024
insecure          NFS may use ports above 1024
wdelay            group writes together when multiple users write to the export (default)
no_wdelay         write immediately instead of grouping; unnecessary with async
hide              do not share subdirectories of the exported directory
no_hide           share subdirectories of the exported directory
subtree_check     when exporting a subdirectory such as /usr/bin, force NFS to check parent directory permissions (default)
no_subtree_check  the opposite: do not check parent directory permissions
all_squash        map the UID and GID of shared files to the anonymous user; suitable for public directories
no_all_squash     preserve the UID and GID of shared files (default)
root_squash       map all requests from root to the anonymous user's privileges (default)
no_root_squash    root gets full administrative access to the exported directory
anonuid=xxx       UID of the anonymous user in the NFS server's /etc/passwd
anongid=xxx       GID of the anonymous user in the NFS server's /etc/passwd

MetalLB Installation and Configuration

Edit the kube-proxy configuration in the current cluster

kubectl edit configmap -n kube-system kube-proxy

Enable strict ARP mode

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

Apply the MetalLB manifest

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml

Example manifests combining MetalLB with a Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx-pod
  type: LoadBalancer

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cluster-address-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.16.13.114-172.16.13.120
    - 172.16.14.0/24
    - fc00:f853:0ccd:e799::/124

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
    - cluster-address-pool
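
After applying the manifests, running kubectl get svc nginx-svc should show an EXTERNAL-IP assigned from the address pool; the values below are illustrative:

```
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.108.3.77   172.16.13.114   80:31234/TCP   1m
```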