National Vocational Skills Competition: Private Cloud / Container Cloud / Edge Computing Study Notes

Building a Kubernetes Container Cloud Platform

This is part of a series of notes; guidance and discussion are welcome.
Round 1, Module 1: OpenStack Private Cloud Platform Deployment
Round 1, Module 2: OpenStack Private Cloud Service Operations
Round 1, Module 3: Private Cloud Python DevOps
Round 2, Module 1: Kubernetes Container Cloud Platform Deployment
Round 2, Module 2: Kubernetes Container Cloud Service Operations
Round 2, Module 3: Kubernetes Container Cloud DevOps
Round 3, Module 1: Edge Computing System Operations
Round 3, Module 2: Edge Computing Cloud Application Development


Preface

This section: building the Kubernetes container cloud platform.

Resources used and node plan:

CentOS-7-x86_64-DVD-2009.iso
chinaskills_cloud_paas_v2.1.iso

Node     IP               NIC               Notes
master   192.168.25.100   eth0              NAT, for host access
         192.168.200.10   eth1 (internal)   spare
node     192.168.25.200   eth0              NAT, for host access
         192.168.200.20   eth1 (internal)   spare
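For reference, the master's eth0 settings in the table map onto a CentOS 7 ifcfg file roughly as follows. This is only a sketch: the GATEWAY value is an assumption for a typical VMware NAT network, and the file is written to /tmp here rather than the live network-scripts directory.

```shell
# Sketch of a static-IP config for the master's eth0 per the plan above.
# GATEWAY is an assumed VMware-NAT default, not taken from these notes.
cat <<'EOF' > /tmp/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.25.100
NETMASK=255.255.255.0
GATEWAY=192.168.25.2
EOF
# Real location: /etc/sysconfig/network-scripts/ifcfg-eth0,
# then `systemctl restart network` to apply.
```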

Questions

[Question 1] 2.1.1 Deploy the container cloud platform [5 points]

1. Complete the deployment of a Kubernetes cluster.
2. Complete the deployment of the Istio service mesh, KubeVirt virtualization, and the Harbor image registry.

On the OpenStack private cloud platform, create two cloud hosts using the 4vCPU/12G/100G flavor
to serve as the Master node and node (worker) of a Kubernetes cluster. Deploy the Kubernetes
cluster, then deploy the Istio service mesh, KubeVirt virtualization, and the Harbor image registry.
When finished, submit the Master node's username, password, and IP in the answer box.


Answer:

Part 1: Deploy the Kubernetes cluster

Step 1: Upload chinaskills_cloud_paas_v2.0.2.iso

[root@master ~]# ls
anaconda-ks.cfg  chinaskills_cloud_paas_v2.0.2.iso

Step 2: Mount the ISO and copy the deployment files

[root@master ~]# mount chinaskills_cloud_paas_v2.0.2.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only

[root@master ~]# cp -rfv /mnt/* /opt/
‘/mnt/dependencies’ -> ‘/opt/dependencies’
‘/mnt/dependencies/base-rpms.tar.gz’ -> ‘/opt/dependencies/base-rpms.tar.gz’
‘/mnt/dependencies/packages-list.txt’ -> ‘/opt/dependencies/packages-list.txt’
‘/mnt/extended-images’ -> ‘/opt/extended-images’
‘/mnt/extended-images/busybox_latest.tar’ -> ‘/opt/extended-images/busybox_latest.tar’
‘/mnt/extended-images/centos_7.9.2009.tar’ -> ‘/opt/extended-images/centos_7.9.2009.tar’
‘/mnt/extended-images/httpd_latest.tar’ -> ‘/opt/extended-images/httpd_latest.tar’
‘/mnt/extended-images/mysql_latest.tar’ -> ‘/opt/extended-images/mysql_latest.tar’
‘/mnt/extended-images/nginx_latest.tar’ -> ‘/opt/extended-images/nginx_latest.tar’
‘/mnt/extended-images/php_latest.tar’ -> ‘/opt/extended-images/php_latest.tar’
‘/mnt/extended-images/wordpress_latest.tar’ -> ‘/opt/extended-images/wordpress_latest.tar’
‘/mnt/harbor-offline.tar.gz’ -> ‘/opt/harbor-offline.tar.gz’
‘/mnt/helm-v3.7.1-linux-amd64.tar.gz’ -> ‘/opt/helm-v3.7.1-linux-amd64.tar.gz’
‘/mnt/istio.tar.gz’ -> ‘/opt/istio.tar.gz’
‘/mnt/kubeeasy’ -> ‘/opt/kubeeasy’
‘/mnt/kubernetes.tar.gz’ -> ‘/opt/kubernetes.tar.gz’
‘/mnt/kubevirt.tar.gz’ -> ‘/opt/kubevirt.tar.gz’
[root@master ~]# umount /mnt/

[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy

Step 3: Install dependencies

Each line of the command below ends with a backslash (\) so it can continue on the next line (some renderers may not display the backslashes).
[root@master ~]# kubeeasy install depend \
> --host 192.168.100.20,192.168.100.21 \
> --user root \
> --password 000000 \
> --offline-file /opt/dependencies/base-rpms.tar.gz 
[2024-03-01 23:51:21] INFO:    [start] bash kubeeasy install depend --host 192.168.25.220,192.168.25.221 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2024-03-01 23:51:21] INFO:    [offline] unzip offline dependencies package on local.
[2024-03-01 23:51:23] INFO:    [offline] unzip offline dependencies package succeeded.
[2024-03-01 23:51:23] INFO:    [install] install dependencies packages on local.
[2024-03-01 23:51:59] INFO:    [install] install dependencies packages succeeded.
[2024-03-01 23:51:59] INFO:    [offline] 192.168.100.20: load offline dependencies file
[2024-03-01 23:52:01] INFO:    [offline] load offline dependencies file to 192.168.25.220 succeeded.
[2024-03-01 23:52:01] INFO:    [install] 192.168.100.20: install dependencies packages
[2024-03-01 23:52:02] INFO:    [install] 192.168.100.20: install dependencies packages succeeded.
[2024-03-01 23:52:02] INFO:    [offline] 192.168.100.21: load offline dependencies file
[2024-03-01 23:52:06] INFO:    [offline] load offline dependencies file to 192.168.25.221 succeeded.
[2024-03-01 23:52:06] INFO:    [install] 192.168.100.21: install dependencies packages
[2024-03-01 23:52:44] INFO:    [install] 192.168.100.21: install dependencies packages succeeded.

  See detailed log >> /var/log/kubeinstall.log 

[root@master ~]# 

Step 4: Configure passwordless SSH

[root@master ~]# kubeeasy create ssh-keygen \
> --master 192.168.100.20 \
> --worker 192.168.100.21 \
> --user root --password 000000
[2024-03-01 23:55:46] INFO:    [start] bash kubeeasy create ssh-keygen --master 192.168.25.220 --worker 192.168.25.221 --user root --password ******
[2024-03-01 23:55:46] INFO:    [check] sshpass command exists.
[2024-03-01 23:55:46] INFO:    [check] ssh 192.168.100.20 connection succeeded.
[2024-03-01 23:55:46] INFO:    [check] ssh 192.168.100.21 connection succeeded.
[2024-03-01 23:55:47] INFO:    [create] create ssh keygen 192.168.100.20
[2024-03-01 23:55:47] INFO:    [create] create ssh keygen 192.168.100.20 succeeded.
[2024-03-01 23:55:47] INFO:    [create] create ssh keygen 192.168.100.21
[2024-03-01 23:55:47] INFO:    [create] create ssh keygen 192.168.100.21 succeeded.


  See detailed log >> /var/log/kubeinstall.log 

[root@master ~]# 
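What this subcommand automates is, roughly, generating one key pair and pushing the public key to every node. Below is a dry-run sketch of that distribution loop; the sshpass/ssh-copy-id form is an assumption about kubeeasy's internals, and nothing here actually touches the nodes.

```shell
# Dry run: build the list of key-distribution commands without executing them.
NODES="192.168.100.20 192.168.100.21"
PLAN=/tmp/ssh-keygen-plan.txt
: > "$PLAN"                                   # start with an empty plan file
for ip in $NODES; do
  # A real run would be: sshpass -p 000000 ssh-copy-id root@"$ip"
  echo "sshpass -p ****** ssh-copy-id root@$ip" >> "$PLAN"
done
cat "$PLAN"
```

Afterwards, `ssh root@192.168.100.21 hostname` should succeed without a password prompt.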

Step 5: Install the Kubernetes cluster

[root@master ~]# kubeeasy install kubernetes \
> --master 192.168.100.20 \
> --worker 192.168.100.21 \
> --user root \
> --password 000000 \
> --version 1.22.1 \
> --offline-file /opt/kubernetes.tar.gz 
[2024-03-01 14:01:40] INFO:    [start] bash kubeeasy install kubernetes --master 192.168.100.20 --worker 192.168.100.21 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2024-03-01 14:01:40] INFO:    [check] sshpass command exists.
[2024-03-01 14:01:40] INFO:    [check] rsync command exists.
[2024-03-01 14:01:40] INFO:    [check] ssh 192.168.100.20 connection succeeded.
[2024-03-01 14:01:41] INFO:    [check] ssh 192.168.100.21 connection succeeded.
[2024-03-01 14:01:41] INFO:    [offline] unzip offline package on local.
[2024-03-01 14:01:47] INFO:    [offline] unzip offline package succeeded.
[2024-03-01 14:01:47] INFO:    [offline] master 192.168.100.20: load offline file
[2024-03-01 14:01:47] INFO:    [offline] load offline file to 192.168.100.20 succeeded.
[2024-03-01 14:01:47] INFO:    [offline] master 192.168.100.20: disable the firewall
[2024-03-01 14:01:48] INFO:    [offline] 192.168.100.20: disable the firewall succeeded.
[2024-03-01 14:01:48] INFO:    [offline] worker 192.168.100.21: load offline file
[2024-03-01 14:02:19] INFO:    [offline] load offline file to 192.168.100.21 succeeded.
[2024-03-01 14:02:19] INFO:    [offline] worker 192.168.100.21: disable the firewall
[2024-03-01 14:02:19] INFO:    [offline] 192.168.100.21: disable the firewall succeeded.
[2024-03-01 14:02:19] INFO:    [get] Get 192.168.100.20 InternalIP.
[2024-03-01 14:02:20] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2024-03-01 14:02:20] INFO:    [result] MGMT_NODE_IP is 192.168.100.20
[2024-03-01 14:02:20] INFO:    [init] master: 192.168.100.20
[2024-03-02 03:02:22] INFO:    [init] init master 192.168.100.20 succeeded.
[2024-03-02 03:02:22] INFO:    [init] master: 192.168.100.20 set hostname and hosts
[2024-03-02 03:02:22] INFO:    [init] 192.168.100.20 set hostname and hosts succeeded.
[2024-03-02 03:02:22] INFO:    [init] worker: 192.168.100.21
[2024-03-02 03:02:25] INFO:    [init] init worker 192.168.100.21 succeeded.
[2024-03-02 03:02:25] INFO:    [init] master: 192.168.100.21 set hostname and hosts
[2024-03-02 03:02:26] INFO:    [init] 192.168.100.21 set hostname and hosts succeeded.
[2024-03-02 03:02:26] INFO:    [install] install docker on 192.168.100.20.
[2024-03-02 03:03:24] INFO:    [install] install docker on 192.168.100.20 succeeded.
[2024-03-02 03:03:25] INFO:    [install] install kube on 192.168.100.20
[2024-03-02 03:03:25] INFO:    [install] install kube on 192.168.100.20 succeeded.
[2024-03-02 03:03:25] INFO:    [install] install docker on 192.168.100.21.
[2024-03-02 03:05:49] INFO:    [install] install docker on 192.168.100.21 succeeded.
[2024-03-02 03:05:49] INFO:    [install] install kube on 192.168.100.21
[2024-03-02 03:05:50] INFO:    [install] install kube on 192.168.100.21 succeeded.
[2024-03-02 03:05:50] INFO:    [kubeadm init] kubeadm init on 192.168.100.20
[2024-03-02 03:05:50] INFO:    [kubeadm init] 192.168.100.20: set kubeadm-config.yaml
[2024-03-02 03:05:50] INFO:    [kubeadm init] 192.168.100.20: set kubeadm-config.yaml succeeded.
[2024-03-02 03:05:50] INFO:    [kubeadm init] 192.168.100.20: kubeadm init start.
[2024-03-02 03:06:02] INFO:    [kubeadm init] 192.168.100.20: kubeadm init succeeded.
[2024-03-02 03:06:05] INFO:    [kubeadm init] 192.168.100.20: set kube config.
[2024-03-02 03:06:05] INFO:    [kubeadm init] 192.168.100.20: set kube config succeeded.
[2024-03-02 03:06:05] INFO:    [kubeadm init] 192.168.100.20: delete master taint
[2024-03-02 03:06:05] INFO:    [kubeadm init] 192.168.100.20: delete master taint succeeded.
[2024-03-02 03:06:06] INFO:    [kubeadm init] Auto-Approve kubelet cert csr succeeded.
[2024-03-02 03:06:06] INFO:    [kubeadm join] master: get join token and cert info
[2024-03-02 03:06:06] INFO:    [result] get CACRT_HASH value succeeded.
[2024-03-02 03:06:06] INFO:    [result] get INTI_CERTKEY value succeeded.
[2024-03-02 03:06:07] INFO:    [result] get INIT_TOKEN value succeeded.
[2024-03-02 03:06:07] INFO:    [kubeadm join] worker 192.168.100.21 join cluster.
[2024-03-02 03:06:26] INFO:    [kubeadm join] worker 192.168.100.21 join cluster succeeded.
[2024-03-02 03:06:26] INFO:    [kubeadm join] set 192.168.100.21 worker node role.
[2024-03-02 03:06:26] INFO:    [kubeadm join] set 192.168.100.21 worker node role succeeded.
[2024-03-02 03:06:26] INFO:    [network] add flannel network
[2024-03-02 03:06:26] INFO:    [calico] change flannel pod subnet succeeded.
[2024-03-02 03:06:26] INFO:    [apply] apply kube-flannel.yaml file
[2024-03-02 03:06:27] INFO:    [apply] apply kube-flannel.yaml file succeeded.
[2024-03-02 03:06:30] INFO:    [waiting] waiting kube-flannel-ds
[2024-03-02 03:06:31] INFO:    [waiting] kube-flannel-ds pods ready succeeded.
[2024-03-02 03:06:31] INFO:    [apply] apply coredns-cm.yaml file
[2024-03-02 03:06:31] INFO:    [apply] apply coredns-cm.yaml file succeeded.
[2024-03-02 03:06:31] INFO:    [apply] apply metrics-server.yaml file
[2024-03-02 03:06:32] INFO:    [apply] apply metrics-server.yaml file succeeded.
[2024-03-02 03:06:35] INFO:    [waiting] waiting metrics-server
[2024-03-02 03:06:35] INFO:    [waiting] metrics-server pods ready succeeded.
[2024-03-02 03:06:35] INFO:    [apply] apply dashboard.yaml file
[2024-03-02 03:06:35] INFO:    [apply] apply dashboard.yaml file succeeded.
[2024-03-02 03:06:38] INFO:    [waiting] waiting dashboard-agent
[2024-03-02 03:06:39] INFO:    [waiting] dashboard-agent pods ready succeeded.
[2024-03-02 03:06:42] INFO:    [waiting] waiting dashboard-en
[2024-03-02 03:06:42] INFO:    [waiting] dashboard-en pods ready succeeded.
[2024-03-02 03:06:57] INFO:    [cluster] kubernetes cluster status
+ kubectl get node -o wide
NAME               STATUS   ROLES                         AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master-node1   Ready    control-plane,master,worker   59s   v1.22.1   192.168.100.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8s-worker-node1   Ready    worker                        36s   v1.22.1   192.168.100.21   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
+ kubectl get pods -A -o wide
NAMESPACE      NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE               NOMINATED NODE   READINESS GATES
dashboard-cn   dashboard-agent-cd88cf454-js5sq            1/1     Running   1 (13s ago)   22s   10.244.1.5       k8s-worker-node1   <none>           <none>
dashboard-cn   dashboard-cn-64bd46887f-jvgzq              1/1     Running   0             22s   10.244.1.4       k8s-worker-node1   <none>           <none>
dashboard-en   dashboard-en-55596d469-vsqx7               1/1     Running   0             22s   10.244.1.6       k8s-worker-node1   <none>           <none>
kube-system    coredns-78fcd69978-bncsj                   1/1     Running   0             41s   10.244.1.2       k8s-worker-node1   <none>           <none>
kube-system    coredns-78fcd69978-qhplr                   1/1     Running   0             41s   10.244.1.3       k8s-worker-node1   <none>           <none>
kube-system    etcd-k8s-master-node1                      1/1     Running   0             55s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0             55s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0             55s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-5cv6l                      1/1     Running   0             30s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-wp5pj                      1/1     Running   0             30s   192.168.100.21   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-lswkz                           1/1     Running   0             36s   192.168.100.21   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-qjhwh                           1/1     Running   0             41s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0             55s   192.168.100.20   k8s-master-node1   <none>           <none>
kube-system    metrics-server-77564bc84d-jbzb6            1/1     Running   0             26s   192.168.100.21   k8s-worker-node1   <none>           <none> 

  See detailed log >> /var/log/kubeinstall.log 

The cluster is now deployed; reconnect the SSH session (e.g. in SecureCRT) to pick up the new login banner.

[root@master ~]# logout

Last login: Sat Mar  2 02:51:49 2024 from 192.168.100.1


  ██╗  ██╗ █████╗ ███████╗
  ██║ ██╔╝██╔══██╗██╔════╝
  █████╔╝ ╚█████╔╝███████╗
  ██╔═██╗ ██╔══██╗╚════██║
  ██║  ██╗╚█████╔╝███████║
  ╚═╝  ╚═╝ ╚════╝ ╚══════╝ 

 Information as of: 2024-03-02 03:09:21

 Product............: VMware Virtual Platform None
 OS.................: CentOS Linux release 7.9.2009 (Core)
 Kernel.............: Linux 3.10.0-1160.el7.x86_64 x86_64 GNU/Linux
 CPU................: AMD Ryzen 7 5800H with Radeon Graphics 4P 1C 4L

 Hostname...........: k8s-master-node1
 IP Addresses.......: 192.168.100.20

 Uptime.............: 0 days, 00h 17m 58s
 Memory.............: 1.49GiB of 7.62GiB RAM used (19.54%)
 Load Averages......: 0.40 / 0.39 / 0.26 with 4 core(s) at 3193.874Hz
 Disk Usage.........: 17G of 95G disk space used (18%)

 Users online.......: 1
 Running Processes..: 167
 Container Info.....: Exited:2 Running:6 Images:27
****************************************************************************************************
Last login: Sat Mar  2 02:49:33 2024 from 192.168.100.1


  ██╗  ██╗ █████╗ ███████╗
  ██║ ██╔╝██╔══██╗██╔════╝
  █████╔╝ ╚█████╔╝███████╗
  ██╔═██╗ ██╔══██╗╚════██║
  ██║  ██╗╚█████╔╝███████║
  ╚═╝  ╚═╝ ╚════╝ ╚══════╝ 

 Information as of: 2024-03-02 03:09:25

 Product............: VMware Virtual Platform None
 OS.................: CentOS Linux release 7.9.2009 (Core)
 Kernel.............: Linux 3.10.0-1160.el7.x86_64 x86_64 GNU/Linux
 CPU................: AMD Ryzen 7 5800H with Radeon Graphics 4P 1C 4L

 Hostname...........: k8s-worker-node1
 IP Addresses.......: 192.168.100.21

 Uptime.............: 0 days, 00h 23m 54s
 Memory.............: 1.33GiB of 7.62GiB RAM used (17.41%)
 Load Averages......: 0.25 / 0.53 / 0.32 with 4 core(s) at 3193.874Hz
 Disk Usage.........: 8.1G of 93G disk space used (9%)

 Users online.......: 2
 Running Processes..: 183
 Container Info.....: Exited:3 Running:8 Images:27

Check the cluster status on the master node:

[root@k8s-master-node1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check node resource usage:

[root@k8s-master-node1 ~]# kubectl top nodes --use-protocol-buffers
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master-node1   149m         3%     2155Mi          27%       
k8s-worker-node1   79m          1%     1751Mi          22%       

The YiDaoYun (一道云) cloud development platform is now reachable at $masterIP:30080.


Part 2: Deploy the Istio service mesh, KubeVirt virtualization, and the Harbor image registry

Step 1: Deploy the Istio service mesh
1.1 Install the Istio service mesh into Kubernetes

[root@k8s-master-node1 ~]# kubeeasy add --istio istio
[2024-03-02 03:41:48] INFO:    [start] bash kubeeasy add --istio istio
[2024-03-02 03:41:48] INFO:    [check] sshpass command exists.
[2024-03-02 03:41:48] INFO:    [check] wget command exists.
[2024-03-02 03:41:48] INFO:    [check] conn apiserver succeeded.
[2024-03-02 03:41:49] INFO:    [istio] add istio
✔ Istio core installed                                                                                                                                            
✔ Istiod installed                                                                                                                                                
✔ Ingress gateways installed                                                                                                                                      
✔ Egress gateways installed                                                                                                                                       
✔ Installation complete                                                                                                                                           Making this installation the default for injection and validation.

Thank you for installing Istio 1.12.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/FegQbc9UvePd4Z9z7
[2024-03-02 03:42:03] INFO:    [waiting] waiting istio-egressgateway
[2024-03-02 03:42:03] INFO:    [waiting] istio-egressgateway pods ready succeeded.
[2024-03-02 03:42:06] INFO:    [waiting] waiting istio-ingressgateway
[2024-03-02 03:42:06] INFO:    [waiting] istio-ingressgateway pods ready succeeded.
[2024-03-02 03:42:09] INFO:    [waiting] waiting istiod
[2024-03-02 03:42:09] INFO:    [waiting] istiod pods ready succeeded.
[2024-03-02 03:42:13] INFO:    [waiting] waiting grafana
[2024-03-02 03:42:13] INFO:    [waiting] grafana pods ready succeeded.
[2024-03-02 03:42:16] INFO:    [waiting] waiting jaeger
[2024-03-02 03:42:16] INFO:    [waiting] jaeger pods ready succeeded.
[2024-03-02 03:42:19] INFO:    [waiting] waiting kiali
[2024-03-02 03:42:40] INFO:    [waiting] kiali pods ready succeeded.
[2024-03-02 03:42:43] INFO:    [waiting] waiting prometheus
[2024-03-02 03:42:43] INFO:    [waiting] prometheus pods ready succeeded.
[2024-03-02 03:42:43] INFO:    [cluster] kubernetes istio status
+ kubectl get pod -n istio-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
grafana-6ccd56f4b6-m5nms               1/1     Running   0          34s   10.244.1.15   k8s-worker-node1   <none>           <none>
istio-egressgateway-7f4864f59c-n2rn8   1/1     Running   0          48s   10.244.1.14   k8s-worker-node1   <none>           <none>
istio-ingressgateway-55d9fb9f-cfqds    1/1     Running   0          48s   10.244.1.13   k8s-worker-node1   <none>           <none>
istiod-555d47cb65-t4pjv                1/1     Running   0          52s   10.244.1.12   k8s-worker-node1   <none>           <none>
jaeger-5d44bc5c5d-7gb6h                1/1     Running   0          34s   10.244.0.2    k8s-master-node1   <none>           <none>
kiali-9f9596d69-7xjj5                  1/1     Running   0          33s   10.244.0.3    k8s-master-node1   <none>           <none>
prometheus-64fd8ccd65-8c2d9            2/2     Running   0          33s   10.244.0.4    k8s-master-node1   <none>           <none> 

  See detailed log >> /var/log/kubeinstall.log 

1.2 Create a namespace named "images", so the Istio service mesh can be used within it

[root@k8s-master-node1 ~]# kubectl create namespace images
namespace/images created

1.3 Label the "images" namespace to enable automatic injection of the Envoy proxy as a sidecar.
Every new pod created in this namespace will then receive the Envoy sidecar automatically, making it part of the Istio service mesh.

[root@k8s-master-node1 ~]# kubectl label namespace images istio-injection=enabled
namespace/images labeled

The graphical console is reachable at $masterIP:33000.
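One way to sanity-check the injection label is to create a throwaway Pod in "images" and confirm it starts with two containers (the app plus the injected istio-proxy). A sketch follows; the Pod name and image are illustrative, not part of the task.

```shell
# Write a minimal test-pod manifest (name and image are example values).
cat <<'EOF' > /tmp/injection-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: injection-test
  namespace: images
spec:
  containers:
  - name: app
    image: nginx:latest
EOF
# On the cluster (not executed here):
#   kubectl apply -f /tmp/injection-test.yaml
#   kubectl get pod injection-test -n images
# READY should report 2/2: nginx plus the injected istio-proxy sidecar.
```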
Step 2: KubeVirt virtualization

[root@k8s-master-node1 ~]# kubeeasy add --virt kubevirt 
[2024-03-02 03:52:33] INFO:    [start] bash kubeeasy add --virt kubevirt
[2024-03-02 03:52:33] INFO:    [check] sshpass command exists.
[2024-03-02 03:52:33] INFO:    [check] wget command exists.
[2024-03-02 03:52:33] INFO:    [check] conn apiserver succeeded.
[2024-03-02 03:52:34] INFO:    [virt] add kubevirt
[2024-03-02 03:52:34] INFO:    [apply] apply kubevirt-operator.yaml file
[2024-03-02 03:52:34] INFO:    [apply] apply kubevirt-operator.yaml file succeeded.
[2024-03-02 03:52:37] INFO:    [waiting] waiting kubevirt
[2024-03-02 03:52:44] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-03-02 03:52:44] INFO:    [apply] apply kubevirt-cr.yaml file
[2024-03-02 03:52:44] INFO:    [apply] apply kubevirt-cr.yaml file succeeded.
[2024-03-02 03:53:17] INFO:    [waiting] waiting kubevirt
[2024-03-02 03:53:23] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-03-02 03:53:26] INFO:    [waiting] waiting kubevirt
[2024-03-02 03:53:48] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-03-02 03:53:51] INFO:    [waiting] waiting kubevirt
[2024-03-02 03:53:51] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-03-02 03:53:51] INFO:    [apply] apply multus-daemonset.yaml file
[2024-03-02 03:53:51] INFO:    [apply] apply multus-daemonset.yaml file succeeded.
[2024-03-02 03:53:54] INFO:    [waiting] waiting kube-multus
[2024-03-02 03:53:54] INFO:    [waiting] kube-multus pods ready succeeded.
[2024-03-02 03:53:54] INFO:    [apply] apply multus-cni-macvlan.yaml file
[2024-03-02 03:53:54] INFO:    [apply] apply multus-cni-macvlan.yaml file succeeded.
[2024-03-02 03:53:54] INFO:    [cluster] kubernetes kubevirt status
+ kubectl get pod -n kubevirt -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
virt-api-86f9d6d4f-6xkbr          1/1     Running   0          52s   10.244.0.6    k8s-master-node1   <none>           <none>
virt-api-86f9d6d4f-8s2c7          1/1     Running   0          52s   10.244.1.18   k8s-worker-node1   <none>           <none>
virt-controller-54b79f5db-dt54h   1/1     Running   0          26s   10.244.0.8    k8s-master-node1   <none>           <none>
virt-controller-54b79f5db-jm4ms   1/1     Running   0          26s   10.244.1.20   k8s-worker-node1   <none>           <none>
virt-handler-blrbr                1/1     Running   0          26s   10.244.0.7    k8s-master-node1   <none>           <none>
virt-handler-pwf9c                1/1     Running   0          26s   10.244.1.19   k8s-worker-node1   <none>           <none>
virt-operator-6fbd74566c-wbsxk    1/1     Running   0          80s   10.244.1.16   k8s-worker-node1   <none>           <none>
virt-operator-6fbd74566c-x8wtb    1/1     Running   0          80s   10.244.0.5    k8s-master-node1   <none>           <none> 

  See detailed log >> /var/log/kubeinstall.log 
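With KubeVirt running, virtual machines are declared as ordinary Kubernetes resources. Below is a minimal VirtualMachineInstance sketch; the name, memory size, and disk image are illustrative values, not taken from the task.

```shell
# Minimal KubeVirt VMI manifest booting from a containerDisk (example values).
cat <<'EOF' > /tmp/vmi-demo.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-demo
spec:
  domain:
    devices:
      disks:
      - name: rootdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
  - name: rootdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
EOF
# On the cluster: kubectl apply -f /tmp/vmi-demo.yaml && kubectl get vmi
```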

Step 3: Deploy the Harbor image registry

[root@k8s-master-node1 ~]# kubeeasy add --registry harbor
[2024-03-02 03:56:45] INFO:    [start] bash kubeeasy add --registry harbor
[2024-03-02 03:56:45] INFO:    [check] sshpass command exists.
[2024-03-02 03:56:45] INFO:    [check] wget command exists.
[2024-03-02 03:56:46] INFO:    [check] conn apiserver succeeded.
[2024-03-02 03:56:46] INFO:    [offline] unzip offline harbor package on local.
[2024-03-02 03:56:52] INFO:    [offline] installing docker-compose on local.
[2024-03-02 03:56:52] INFO:    [offline] Installing harbor on local.

[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.14

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 2.2.1

[Step 2]: loading Harbor images ...


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir



[Step 5]: starting Harbor ...
[+] Running 10/10
 ⠿ Network harbor_harbor        Created                                                                                                                      0.0s
 ⠿ Container harbor-log         Started                                                                                                                      0.5s
 ⠿ Container harbor-portal      Started                                                                                                                      1.6s
 ⠿ Container redis              Started                                                                                                                      1.5s
 ⠿ Container registry           Started                                                                                                                      1.7s
 ⠿ Container harbor-db          Started                                                                                                                      1.5s
 ⠿ Container registryctl        Started                                                                                                                      1.7s
 ⠿ Container harbor-core        Started                                                                                                                      2.4s
 ⠿ Container harbor-jobservice  Started                                                                                                                      3.3s
 ⠿ Container nginx              Started                                                                                                                      3.3s
✔ ----Harbor has been installed and started successfully.----
[2024-03-02 03:58:09] INFO:    [cluster] kubernetes Harbor status
+ docker-compose -f /opt/harbor/docker-compose.yml ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
harbor-core         "/harbor/entrypoint.…"   core                running (healthy)   
harbor-db           "/docker-entrypoint.…"   postgresql          running (healthy)   
harbor-jobservice   "/harbor/entrypoint.…"   jobservice          running (healthy)   
harbor-log          "/bin/sh -c /usr/loc…"   log                 running (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       "nginx -g 'daemon of…"   portal              running (healthy)   
nginx               "nginx -g 'daemon of…"   proxy               running (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp
redis               "redis-server /etc/r…"   redis               running (healthy)   
registry            "/home/harbor/entryp…"   registry            running (healthy)   
registryctl         "/home/harbor/start.…"   registryctl         running (healthy)    

  See detailed log >> /var/log/kubeinstall.log 

Harbor is now reachable at $masterIP.
Default credentials: admin / Harbor12345
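Because this Harbor instance serves plain HTTP, the Docker daemon on each node must trust it as an insecure registry before logins and pushes will work (the kubeeasy installer may already have configured this). A sketch, written to /tmp rather than the live config; the registry IP follows the master IP used above.

```shell
# Allow the HTTP-only Harbor registry in the Docker daemon config.
# Real file: /etc/docker/daemon.json (merge with any existing keys,
# then `systemctl restart docker`).
cat <<'EOF' > /tmp/daemon.json
{
  "insecure-registries": ["192.168.100.20"]
}
EOF
# Typical first push (not executed here):
#   docker login 192.168.100.20 -u admin -p Harbor12345
#   docker tag nginx:latest 192.168.100.20/library/nginx:latest
#   docker push 192.168.100.20/library/nginx:latest
```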

Closing

The Kubernetes cluster is deployed, which concludes the platform-build portion.

Everyone is welcome to discuss and exchange resources.
