Kubernetes Cloud-Native in Practice: The Complete Walkthrough

Installation

  1. Create three virtual machines: build one, then clone it twice. OS: Ubuntu 22.04, 2 CPU cores / 8 GB RAM, 40 GB disk. One more machine runs a MySQL cluster, ElasticSearch, and other services, so there are four instances in total.
  2. Assign a static IP to each node: master 192.168.222.129, slave1 192.168.222.132, slave2 192.168.222.133. Edit vim /etc/netplan/00-installer-config.yaml, then run netplan apply (a quick check follows the config below):
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.222.133/24]
      routes:
        - to: default
          via: 192.168.222.2
      nameservers:
        addresses: [192.168.222.2]
  version: 2
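A quick check after netplan apply, assuming the interface name ens33 from the config above:

ip -4 addr show ens33        # should list the static address, e.g. 192.168.222.133/24 on slave2
ping -c 3 192.168.222.2      # the gateway should answer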
  3. Install Docker (a quick verification sketch follows the sub-steps)
    1. vim /etc/docker/daemon.json
    2. {"registry-mirrors":["[https://dockerhub.azk8s.cn","https://reg-mirror.qiniu.com","https://quay-mirror.qiniu.com"],"exec-opts":](https://dockerhub.azk8s.cn","https://reg-mirror.qiniu.com","https://quay-mirror.qiniu.com"],"exec-opts":) ["native.cgroupdriver=systemd"]}
    3. systemctl daemon-reload
    4. systemctl restart docker
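To confirm that Docker picked up the mirror list and the systemd cgroup driver (a quick sanity check, not part of the original steps):

docker info | grep -i "cgroup driver"       # should print: Cgroup Driver: systemd
docker info | grep -iA3 "registry mirrors"  # should list the mirrors configured above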
  4. Disable swap. Edit /etc/fstab with vim, find the swap line, comment it out with #, then reboot. If free shows swap at 0 everywhere, it worked (a no-reboot alternative is sketched below).
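A minimal sketch for turning swap off without waiting for a reboot (assumes the swap entry in /etc/fstab contains the word "swap"):

swapoff -a                                      # disable swap for the running system
sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap line so it stays off after reboot
free -h                                         # the Swap line should now read 0B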
  5. Kubernetes expects the management node to be able to log in to the worker nodes without a password.
    1. From the master, ping both slaves. If the pings succeed, connectivity is fine.
    2. Passwordless login: on the master run ssh-keygen, then append the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on both slaves (an ssh-copy-id sketch follows this list).
    3. Verify: ssh root@192.168.222.132
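A sketch of the same key distribution using ssh-copy-id (assumes password SSH login as root is still allowed on the slaves):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate the key pair non-interactively (skip if one already exists)
ssh-copy-id root@192.168.222.132           # append the public key to slave1's authorized_keys
ssh-copy-id root@192.168.222.133           # same for slave2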
  6. Install kubelet, kubeadm, and kubectl.
    1. apt-get update && apt-get install -y apt-transport-https
    2. curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
    3. apt-get update
    4. apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00 (mind the versions; an optional version pin is sketched below)
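Optionally, hold the packages so a routine apt upgrade cannot move the cluster components to a different version:

apt-mark hold kubelet kubeadm kubectl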
  7. Initialize the master node
    1. kubeadm init --kubernetes-version=1.23.1 --apiserver-advertise-address=192.168.222.129 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.244.0.0/16 (change the advertise address to your master's IP)
      1. If this step fails with: [ERROR CRI]: container runtime is not running: output:
      2. vim /etc/containerd/config.toml
      3. Change disabled_plugins to enabled_plugins (so the cri plugin is no longer disabled), then restart containerd
      4. https://github.com/containerd/containerd/issues/8139#issuecomment-1478375386
    2. If something goes wrong along the way, run kubeadm reset to reset, then initialize again
    3. On completion it prints: kubeadm join 192.168.222.129:6443 --token 3b2pqq.fe3sjyd96ol0y564 --discovery-token-ca-cert-hash sha256:4188dca1cf2b7bc527ef2e6c4adbe631b36d1b6c388ecbfb145f7f2d1a768450. Copy this and keep it; the slaves will need it to join the cluster.
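If the join command gets lost (or the token expires; tokens are valid for 24 hours by default), a fresh one can be printed on the master:

kubeadm token create --print-join-command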
  8. Configure the kubectl tool: mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config
    1. Use the two commands below to check that kubectl works
    2. List the nodes that have joined: kubectl get nodes
    3. Check cluster component status: kubectl get cs
  9. Install Calico on the master (verification commands follow the sub-steps)
    1. kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
    2. wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
    3. Change the cidr in that file to the pod network CIDR used during init (10.244.0.0/16)
    4. kubectl create -f custom-resources.yaml
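The rollout can be watched until everything is Running; the namespaces below are the ones created by the tigera-operator manifests:

kubectl get pods -n tigera-operator          # the operator itself
kubectl get pods -n calico-system --watch    # calico-node/typha pods, wait for Running
kubectl get nodes                            # nodes should turn Ready once the CNI is up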
  10. Join the slave nodes to the cluster
  11. Repeat steps 2~6 on each slave
  12. vim /etc/hostname and change it to slave1; vim /etc/hosts and change the 127.0.0.1 entry to slave1
  13. Run the join command copied earlier: kubeadm join 192.168.222.129:6443 --token 3b2pqq.fe3sjyd96ol0y564 --discovery-token-ca-cert-hash sha256:4188dca1cf2b7bc527ef2e6c4adbe631b36d1b6c388ecbfb145f7f2d1a768450
  14. If it reports [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists, just delete that ca.crt file (the command had been run once before the hostname was changed, so the node was already partially initialized)
  15. Seeing "This node has joined the cluster:" means it succeeded; run kubectl get nodes on the master again to verify
  16. Install the dashboard
  17. wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
  18. At line 40 of the downloaded file add type: NodePort, and add nodePort: 30000 below targetPort (an equivalent kubectl patch is sketched after the apply step)
  19. kubectl apply -f recommended.yaml
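Instead of editing recommended.yaml by hand, the same NodePort change can be applied to the stock manifest afterwards; the port/targetPort values below are the ones shipped in this dashboard release, so treat it as a sketch:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30000}]}}'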
  20. Open https://192.168.222.129:30000/; if the browser blocks the self-signed certificate, type thisisunsafe directly into the page
  21. Create the config file dashboard-adminuser.yaml; its contents are given below
  22. kubectl apply -f dashboard-adminuser.yaml
  23. kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
  24. Copy the printed token into the browser to log in
cat <<EOF > dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
  25. The built-in dashboard is not very convenient; Kuboard can be used instead
  26. docker run -d --restart=unless-stopped --name=kuboard -p 801:80/tcp -p 10081:10081/tcp -e KUBOARD_ENDPOINT="http://192.168.222.129:801" -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" -v /usr/local/kuboard-data:/data eipwork/kuboard:v3
  27. After initialization, install metrics-server and metrics-scraper on the master node; Kuboard provides the YAML, so kubectl create -f xxxx.yaml is enough

Basic Usage

  1. Install ingress-nginx: routes external traffic to services inside the cluster (a status check follows the sub-steps)
    1. wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
    2. sed -i 's/registry.k8s.io\/ingress-nginx\/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974/registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.3.1/g' ./deploy.yaml
    3. sed -i 's/registry.k8s.io\/ingress-nginx\/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47/registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v1.3.0/g' ./deploy.yaml
    4. kubectl apply -f deploy.yaml
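A quick status check for the controller (the admission jobs should end up Completed and the controller pod Running):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx    # EXTERNAL-IP stays <pending> until MetalLB is set up below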
  2. Install MetalLB: load balancing
    1. kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
    2. kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
    3. Create an IP address pool object for MetalLB. Use addresses not already taken by the nodes. Use layer 2 mode. https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb
      1. From the official docs: in layer 2 mode, all traffic for a service IP goes to one node; from there, kube-proxy spreads the traffic to all of the service's pods. In that sense, layer 2 does not implement a load balancer. Rather, it implements a failover mechanism, so that a different node can take over should the current leader node fail for some reason.
      2. L2 mode is not recommended for self-built production clusters: https://www.lixueduan.com/posts/cloudnative/01-metallb/#%E5%B1%80%E9%99%90%E6%80%A7
    4. kubectl apply -f xxxx.yaml (the manifest is shown below)
    5. kubectl get service ingress-nginx-controller --namespace=ingress-nginx to verify that an external IP has been assigned
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.222.134-192.168.222.135
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool

After MetalLB takes effect, the service is assigned an IP: 192.168.222.134

root@localhost:~# kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.10.158.184   192.168.222.134   80:31079/TCP,443:30881/TCP   17h
ingress-nginx-controller-admission   ClusterIP      10.10.29.235    <none>            443/TCP                      17h

With MTR/traceroute you can see that traffic passes through 192.168.222.132 before reaching 192.168.222.134, because the leader node is 192.168.222.132 (a way to check which node is answering is sketched below).
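A rough way to see which node currently answers ARP for the service IP in layer 2 mode, run from another machine on the same subnet (the interface name ens33 and the arping package are assumptions):

arping -c 3 -I ens33 192.168.222.134    # the MAC in the replies belongs to the current leader node
ip neigh show 192.168.222.134           # or inspect the ARP cache after hitting the service once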

  3. Create a test service
    1. kubectl create deployment deployment-demo --image=httpd --port=80
    2. kubectl expose deployment deployment-demo
    3. kubectl create ingress ingress-demo --class=nginx --rule="test.test.com/*=deployment-demo:80"
      1. The --rule flag only accepts a host name
      2. Find the node where the ingress-nginx-controller service lives, edit your hosts file to map test.test.com to that node's IP, then open test.test.com (or use curl as sketched below)
      3. The browser shows: It works!
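The hosts-file edit can be skipped by sending the Host header with curl against the MetalLB address assigned earlier:

curl -H "Host: test.test.com" http://192.168.222.134/
# expected body from the httpd image: <html><body><h1>It works!</h1></body></html>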

NFS/PV/PVC

  1. apt install nfs-common (on all nodes)
  2. On the master node
    1. apt install nfs-kernel-server
    2. mkdir /nfs/
    3. chmod 777 /nfs/
    4. echo "/nfs *(rw,sync,no_subtree_check,no_root_squash)" > /etc/exports
    5. service nfs-kernel-server restart / systemctl start nfs-server
  3. On the slave nodes
    1. mkdir -p /mnt/nfs/
    2. Add a line to /etc/fstab: 192.168.222.129:/nfs /mnt/nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
    3. mount -a
    4. Create a file on a slave node to check that it shows up on the master (see the sketch below)
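A minimal round-trip test (first command on a slave, second on the master; the file name is arbitrary):

touch /mnt/nfs/hello-from-slave1    # on the slave: create a file inside the NFS mount
ls -l /nfs/                         # on the master: the file should appear in the exported directory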
  4. Automatically provision PVs: apply each of the following manifests with kubectl apply -f xxx.yaml (verification commands follow the manifests)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: easzlab/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.222.129
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.222.129
            path: /nfs
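After applying the manifests above, the provisioner pod and the StorageClass can be verified (names as defined above):

kubectl get pods -l app=nfs-client-provisioner    # should be Running
kubectl get storageclass managed-nfs-storage      # the PROVISIONER column should show fuseim.pri/ifs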
  5. Test it: kubectl apply -f xxx.yaml with the manifest below
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  6. Scale the application up or down, or delete it; the NFS data on the host is still there (commands sketched below)
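For example, scaling the StatefulSet and then deleting it leaves the per-replica directories under /nfs on the master intact (deleting a StatefulSet does not delete its PVCs):

kubectl scale statefulset web --replicas=3    # adds web-2 and provisions another PV on the NFS share
kubectl delete statefulset web                # pods go away, but the PVCs and the data under /nfs remain
kubectl get pvc                               # www-web-0, www-web-1, ... are still Bound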

Moving the Application to the Cloud

  1. Full project source code: https://github.com/MQPearth/spring-boot-backend
  2. Middleware in the cluster
    1. MySQL (not recommended): when MySQL runs in containers, disk and network become the performance bottleneck; deploy a separate cluster for high availability instead
    2. Nacos:
      1. Cluster configuration
      2. Apply the configuration files and adjust the settings: kubectl create -f xxx.yaml (manifest below, followed by a quick check)
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"app":"nacos"},"name":"nacos"}}
  labels:
    app: nacos
  name: nacos
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nacos
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"name":"nfs-client-provisioner","namespace":"nacos"}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: nacos
data:
  mysql.host: "10.11.38.190"
  mysql.db.name: "nacos"
  mysql.port: "3307"
  mysql.user: "root"
  mysql.password: "123456"
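Once applied, the namespace and ConfigMap can be checked (a quick verification step, not part of the original walkthrough):

kubectl get namespace nacos
kubectl -n nacos get configmap nacos-cm -o yaml    # the mysql.* keys should match the values above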
  