Installation
- Create three virtual machines: build one and clone it twice. OS: Ubuntu 22.04, 2 cores / 8 GB RAM, 40 GB disk. A fourth machine runs the MySQL cluster, Elasticsearch and other services, so there are four instances in total.
- Give each node a fixed IP. master: 192.168.222.129, slave1: 192.168.222.132, slave2: 192.168.222.133.
vim /etc/netplan/00-installer-config.yaml
netplan apply
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.222.133/24]
      routes:
        - to: default
          via: 192.168.222.2
      nameservers:
        addresses: [192.168.222.2]
  version: 2
- Install Docker.
vim /etc/docker/daemon.json
{"registry-mirrors":["[https://dockerhub.azk8s.cn","https://reg-mirror.qiniu.com","https://quay-mirror.qiniu.com"],"exec-opts":](https://dockerhub.azk8s.cn","https://reg-mirror.qiniu.com","https://quay-mirror.qiniu.com"],"exec-opts":) ["native.cgroupdriver=systemd"]}
systemctl daemon-reload
systemctl restart docker
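A quick way to confirm Docker picked up the systemd cgroup driver (a sanity check, not part of the original notes):
docker info | grep -i cgroup
The output should include Cgroup Driver: systemd.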
- Turn off swap.
vim /etc/fstab
Find the swap line, comment it out with #, and reboot. Run free and check that swap shows 0 everywhere; if so, it worked.
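A non-interactive equivalent (standard Ubuntu tooling; double-check /etc/fstab afterwards):
sed -i '/swap/ s/^/#/' /etc/fstab    # comment out the swap entry
swapoff -a                           # turn swap off right away, no reboot needed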
- k8s expects the control-plane node to be able to log in to the worker nodes without a password.
- From the master, ping the two slaves; if they respond, connectivity is fine.
- Set up passwordless SSH. On the master, run:
ssh-keygen
Append the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on both slaves.
- Verify:
ssh root@192.168.222.132
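ssh-copy-id does the same copy in one step, assuming password login is still enabled on the slaves (not part of the original notes):
ssh-copy-id root@192.168.222.132
ssh-copy-id root@192.168.222.133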
- Install kubelet, kubeadm and kubectl.
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00
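To keep apt from upgrading these packages during later updates (standard practice, not from the original notes):
apt-mark hold kubelet kubeadm kubectl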
Mind the version numbers here.
- Initialize the master node:
kubeadm init --kubernetes-version=1.23.1 --apiserver-advertise-address=192.168.222.129 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.244.0.0/16
Change the advertise address to your master's IP.
- If this step fails with:
[ERROR CRI]: container runtime is not running: output:
vim /etc/containerd/config.toml
- Change disabled_plugin to enabled_plugin so that the CRI plugin is no longer disabled, then restart containerd.
- https://github.com/containerd/containerd/issues/8139#issuecomment-1478375386
- If something goes wrong during this step, run
kubeadm reset
to wipe the state, then initialize again.
- When it finishes, the output ends with a join command like:
kubeadm join 192.168.222.129:6443 --token 3b2pqq.fe3sjyd96ol0y564 --discovery-token-ca-cert-hash sha256:4188dca1cf2b7bc527ef2e6c4adbe631b36d1b6c388ecbfb145f7f2d1a768450
Copy it; the slaves will use it to join the cluster.
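The token embedded in the join command expires (24 hours by default). If it has expired before the slaves join, print a fresh join command on the master:
kubeadm token create --print-join-command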
- Configure the kubectl tool:
mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config
- Use the following two commands to check that kubectl works.
- List the nodes that have joined:
kubectl get nodes
- Check the cluster component status:
kubectl get cs
- Install Calico on the master:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
- Change the cidr to the pod network used during kubeadm init above (see the snippet below).
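For reference, the part of custom-resources.yaml to edit looks roughly like this (field layout from the Calico operator's Installation resource; verify against the file you downloaded):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16    # must match --pod-network-cidr from kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()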
kubectl create -f custom-resources.yaml
- Join the slave nodes to the cluster
- Repeat steps 2–6 on each slave.
vim /etc/hostname
Change it to slave1.
vim /etc/hosts
Change the entry to 127.0.0.1 slave1.
kubeadm join 192.168.222.129:6443 --token 3b2pqq.fe3sjyd96ol0y564 --discovery-token-ca-cert-hash sha256:4188dca1cf2b7bc527ef2e6c4adbe631b36d1b6c388ecbfb145f7f2d1a768450
Just run it.
- If it errors with:
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
Just delete that ca.crt file (the join had already been run once before the hostname was changed, so the node was partly initialized).
- When the output shows
This node has joined the cluster:
the join succeeded; run kubectl get nodes on the master again to confirm.
- Install the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
- At line 40 of the original file add
type: NodePort
and add nodePort: 30000 below targetPort.
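After the edit, the kubernetes-dashboard Service should look roughly like this (layout recalled from the v2.0.0-rc7 manifest; double-check against your copy of recommended.yaml):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard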
kubectl apply -f recommended.yaml
- Open
https://192.168.222.129:30000/
and type thisisunsafe directly into the browser window to get past the certificate warning.
- Create the config file dashboard-adminuser.yaml (contents below):
kubectl apply -f dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
- Copy the printed token into the browser to log in.
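Note: on Kubernetes 1.24 and later the ServiceAccount token Secret is no longer created automatically, so the command above prints nothing; there you would use
kubectl -n kubernetes-dashboard create token admin-user
instead. Not needed on the 1.23.1 cluster built here.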
cat <<EOF > dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
- The built-in dashboard is not very convenient; Kuboard is an alternative:
docker run -d --restart=unless-stopped --name=kuboard -p 801:80/tcp -p 10081:10081/tcp -e KUBOARD_ENDPOINT="http://192.168.222.129:801" -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" -v /usr/local/kuboard-data:/data eipwork/kuboard:v3
- After Kuboard is initialized, install metrics-server and metrics-scraper on the master node. Kuboard provides the YAML files; applying them with kubectl create -f xxxx.yaml is enough.
Basic usage
- Install ingress-nginx: it routes external traffic to services inside the cluster.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
sed -i 's/registry.k8s.io\/ingress-nginx\/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974/registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.3.1/g' ./deploy.yaml
sed -i 's/registry.k8s.io\/ingress-nginx\/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47/registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v1.3.0/g' ./deploy.yaml
kubectl apply -f deploy.yaml
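To check that the controller comes up (a standard verification step, not in the original notes):
kubectl get pods -n ingress-nginx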
- Install MetalLB for load balancing:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
- Create an IP address pool object for MetalLB. Use addresses the nodes are not already using. Use layer 2 mode. https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb
- From the official docs: in layer 2 mode, all traffic for a service IP goes to one node; from there, kube-proxy spreads it to all of the service's pods. In that sense, layer 2 does not implement a load balancer; rather, it implements a failover mechanism so that a different node can take over if the current leader node fails for some reason.
- L2 mode is not recommended for self-hosted production clusters: https://www.lixueduan.com/posts/cloudnative/01-metallb/#%E5%B1%80%E9%99%90%E6%80%A7
kubectl apply -f xxxx.yaml
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
to verify that an IP has been assigned.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.222.134-192.168.222.135
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool
Once MetalLB is in effect, the service gets an IP: 192.168.222.134
root@localhost:~# kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.10.158.184 192.168.222.134 80:31079/TCP,443:30881/TCP 17h
ingress-nginx-controller-admission ClusterIP 10.10.29.235 <none> 443/TCP 17h
With MTR/traceroute you can see that traffic passes through 192.168.222.132 on its way to 192.168.222.134, because 192.168.222.132 is the node holding the leader role.
- Create a test service:
kubectl create deployment deployment-demo --image=httpd --port=80
kubectl expose deployment deployment-demo
kubectl create ingress ingress-demo --class=nginx --rule="test.test.com/*=deployment-demo:80"
- The ingress rule only accepts a hostname, not an IP.
- Find the node where the
ingress-nginx-controller
service lives, edit the hosts file to map test.test.com to that node's IP, then visit test.test.com.
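The hosts entry might look like this (the original notes point it at the node's IP; with MetalLB in place the LoadBalancer EXTERNAL-IP shown above also works):
192.168.222.134 test.test.com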
- The browser shows:
It works!
NFS/PV/PVC
apt install nfs-common
on all nodes.
- On the master node:
apt install nfs-kernel-server
mkdir /nfs/
chmod 777 /nfs/
echo "/nfs *(rw,sync,no_subtree_check,no_root_squash)" > /etc/exports
service nfs-kernel-server restart
or systemctl start nfs-server
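To confirm the export is visible (a standard check with showmount from nfs-common, not in the original notes):
showmount -e 192.168.222.129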
- On the slave nodes:
mkdir -p /mnt/nfs/
- Add the following line to /etc/fstab:
192.168.222.129:/nfs /mnt/nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
mount -a
- Create a file on a slave node to check that it shows up on the master (see below).
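For example (the file name is just an illustration):
touch /mnt/nfs/test-from-slave    # on a slave
ls /nfs/                          # on the master; the file should show up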
- Provision PVs automatically: apply each of the following manifests with kubectl apply -f xxx.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: easzlab/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.222.129
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.222.129
            path: /nfs
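To confirm the provisioner pod is running and the StorageClass is registered (standard checks, not in the original notes):
kubectl get pods -l app=nfs-client-provisioner
kubectl get storageclass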
- Test it with kubectl apply -f xxx.yaml and the manifest below:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
- Scale the application up and down or delete it; the NFS data on the host is still there (see the commands below).
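For example (standard kubectl commands, not from the original notes):
kubectl scale statefulset web --replicas=3    # scale up
kubectl scale statefulset web --replicas=1    # scale down
kubectl delete statefulset web                # the PVCs created from volumeClaimTemplates are kept
ls /nfs/                                      # on the NFS server the per-PVC directories are still there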
Moving applications to the cluster
- Full source of the project: https://github.com/MQPearth/spring-boot-backend
- Middleware on the cluster
- MySQL (not recommended): when MySQL is deployed in containers, disk and network become the performance bottleneck; run it as an independent cluster for high availability instead.
- nacos:
- Cluster configuration
- Apply the configuration files, adjusting the settings as needed:
kubectl create -f xxx.yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"app":"nacos"},"name":"nacos"}}
  labels:
    app: nacos
  name: nacos
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nacos
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"name":"nfs-client-provisioner","namespace":"nacos"}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: nacos
data:
  mysql.host: "10.11.38.190"
  mysql.db.name: "nacos"
  mysql.port: "3307"
  mysql.user: "root"
  mysql.password: "123456"