KubeSphere all-in-one hands-on experience

Deployment process

In this post, KubeSphere and Kubernetes are deployed together on a single machine running CentOS 7.9.
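
Before running the installer, a quick sanity check of the machine is worth doing. This is only a rough sketch; the authoritative list is the KubeKey requirements page linked later in the installer output, and the 2-core / 4 GB figures are the commonly cited minimum for a minimal all-in-one install:

cat /etc/redhat-release    # expect CentOS Linux release 7.9
nproc                      # at least 2 CPU cores recommended
free -h                    # at least 4 GB of memory recommended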

[root@k8s-test ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -


Downloading kubekey v1.1.1 from https://github.com/kubesphere/kubekey/releases/download/v1.1.1/kubekey-v1.1.1-linux-amd64.tar.gz ...


Kubekey v1.1.1 Download Complete!

[root@k8s-test ~]# 
[root@k8s-test ~]# chmod +x kk
[root@k8s-test ~]# ./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name     | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| k8s-test | y    | y    | y       |          |       |       |           |        | y          |             |                  | CST 14:55:12 |
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
k8s-test: conntrack is required. 
[root@k8s-test ~]# yum install conntrack
# Specify the region, otherwise kubeadm and the other binaries cannot be downloaded
[root@k8s-test ~]# export KKZONE=cn
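
The first pre-check table also showed ebtables, socat and ipset as missing. kubeadm later only warns about ebtables and socat, so the run below works without them, but if you prefer a clean pre-flight you can install them up front (an optional step, not something this run actually did):

yum install -y ebtables socat ipset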

Docker was not installed on my CentOS 7.9 by default; KubeSphere's installer (KubeKey) installs Docker for us, as described in the official documentation.
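
If you want to confirm the Docker that KubeKey installs once the "Installing docker ..." step below has run, a small optional check (not part of the original session; this run ended up with Docker 20.10.8):

docker version --format '{{.Server.Version}}'
systemctl is-active docker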

[root@k8s-test ~]# ./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name     | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| k8s-test | y    | y    | y       |          |       |       | y         |        | y          |             |                  | CST 15:09:13 |
+----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[15:09:16 CST] Downloading Installation Files               
INFO[15:09:16 CST] Downloading kubeadm ...                      
INFO[15:09:58 CST] Downloading kubelet ...                      
INFO[15:11:54 CST] Downloading kubectl ...                      
INFO[15:12:32 CST] Downloading helm ...                         
INFO[15:13:15 CST] Downloading kubecni ...                      
INFO[15:13:50 CST] Configuring operating system ...             
[k8s-test 192.168.5.233] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[15:13:51 CST] Installing docker ...                        
INFO[15:14:27 CST] Start to download images on all nodes        
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-test] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
INFO[15:15:40 CST] Generating etcd certs                        
INFO[15:15:41 CST] Synchronizing etcd certs                     
INFO[15:15:41 CST] Creating etcd service                        
[k8s-test 192.168.5.233] MSG:
etcd will be installed
INFO[15:15:42 CST] Starting etcd cluster                        
[k8s-test 192.168.5.233] MSG:
Configuration file will be created
INFO[15:15:42 CST] Refreshing etcd configuration                
[k8s-test 192.168.5.233] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
INFO[15:15:48 CST] Backup etcd data regularly                   
INFO[15:15:54 CST] Get cluster status                           
[k8s-test 192.168.5.233] MSG:
Cluster will be created.
INFO[15:15:55 CST] Installing kube binaries                     
Push /root/kubekey/v1.20.4/amd64/kubeadm to 192.168.5.233:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.20.4/amd64/kubelet to 192.168.5.233:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.20.4/amd64/kubectl to 192.168.5.233:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.20.4/amd64/helm to 192.168.5.233:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.5.233:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[15:15:59 CST] Initializing kubernetes cluster              
[k8s-test 192.168.5.233] MSG:
W0908 15:16:00.248343    5035 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.8. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
# This part is quite important and worth a close look
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test k8s-test.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.5.233 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
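# (Hedged aside, not part of the kubeadm output: to double-check the SANs baked into the apiserver cert above, you can run
#  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
#  once the node is up; /etc/kubernetes/pki is the certificateDir shown a few lines earlier.)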
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
# The legendary control plane
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 73.502453 seconds
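# (Hedged aside, not part of the kubeadm output: the static Pod manifests written above live in /etc/kubernetes/manifests on this node;
#  ls /etc/kubernetes/manifests  should show kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml.
#  There is no etcd.yaml here because KubeKey runs etcd as a systemd service, as seen earlier in the log.)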
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-test as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kohk2x.fox13xzhuhbltrgw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token kohk2x.fox13xzhuhbltrgw \
    --discovery-token-ca-cert-hash sha256:9325ffdd81e36c260acda5cf4ff28a8189aeda601e633df9a67e841e0a79b49a \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token kohk2x.fox13xzhuhbltrgw \
    --discovery-token-ca-cert-hash sha256:9325ffdd81e36c260acda5cf4ff28a8189aeda601e633df9a67e841e0a79b49a
[k8s-test 192.168.5.233] MSG:
node/k8s-test untainted
[k8s-test 192.168.5.233] MSG:
node/k8s-test labeled
[k8s-test 192.168.5.233] MSG:
service "kube-dns" deleted
[k8s-test 192.168.5.233] MSG:
service/coredns created
[k8s-test 192.168.5.233] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[k8s-test 192.168.5.233] MSG:
configmap/nodelocaldns created
[k8s-test 192.168.5.233] MSG:
I0908 15:17:42.838760    7407 version.go:254] remote version is much newer: v1.22.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6e9ee9b920395fc9d416758e75af6b034fc1e8425736a672d806c6e2f37fccd0
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
secret/kubeadm-certs patched
[k8s-test 192.168.5.233] MSG:
kubeadm join lb.kubesphere.local:6443 --token bxicgc.0s9w7x3jthle4c6f     --discovery-token-ca-cert-hash sha256:9325ffdd81e36c260acda5cf4ff28a8189aeda601e633df9a67e841e0a79b49a
[k8s-test 192.168.5.233] MSG:
k8s-test   v1.20.4   [map[address:192.168.5.233 type:InternalIP] map[address:k8s-test type:Hostname]]
INFO[15:17:44 CST] Joining nodes to cluster                     
INFO[15:17:44 CST] Deploying network plugin ...                 
[k8s-test 192.168.5.233] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[k8s-test 192.168.5.233] MSG:
storageclass.storage.k8s.io/local created
serviceaccount/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/openebs-localpv-provisioner created
INFO[15:17:46 CST] Deploying KubeSphere ...                     
v3.1.1
[k8s-test 192.168.5.233] MSG:
namespace/kubesphere-system created
namespace/kubesphere-monitoring-system created
[k8s-test 192.168.5.233] MSG:
secret/kube-etcd-client-certs created
[k8s-test 192.168.5.233] MSG:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.5.233:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-09-08 15:21:57
#####################################################
INFO[15:22:00 CST] Installation is complete.

Please check the result using the command:

       kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Following the hint above, let's read the logs to learn what the KubeSphere all-in-one install actually does on Kubernetes; it appears to drive the deployment with Ansible.

[root@k8s-test ~]#  kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2021-09-08T15:18:23+08:00 INFO     : shell-operator latest
2021-09-08T15:18:23+08:00 INFO     : Use temporary dir: /tmp/shell-operator
2021-09-08T15:18:23+08:00 INFO     : Initialize hooks manager ...
2021-09-08T15:18:23+08:00 INFO     : Search and load hooks ...
2021-09-08T15:18:23+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2021-09-08T15:18:23+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2021-09-08T15:18:24+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
2021-09-08T15:18:24+08:00 INFO     : Initializing schedule manager ...
2021-09-08T15:18:24+08:00 INFO     : KUBE Init Kubernetes client
2021-09-08T15:18:24+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
2021-09-08T15:18:24+08:00 INFO     : MAIN: run main loop
2021-09-08T15:18:24+08:00 INFO     : MAIN: add onStartup tasks
2021-09-08T15:18:24+08:00 INFO     : Running schedule manager ...
2021-09-08T15:18:24+08:00 INFO     : MSTOR Create new metric shell_operator_live_ticks
2021-09-08T15:18:24+08:00 INFO     : QUEUE add all HookRun@OnStartup
2021-09-08T15:18:24+08:00 INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
2021-09-08T15:18:24+08:00 INFO     : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2021-09-08T15:18:24+08:00 INFO     : EVENT Kube event '943951fa-36fe-444a-8466-5e21ad8d5e55'
2021-09-08T15:18:24+08:00 INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2021-09-08T15:18:27+08:00 INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2021-09-08T15:18:27+08:00 INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Downloading items] ********************************************
skipping: [localhost]

TASK [download : Synchronizing container] **************************************
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] ***
ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] *****************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [preinstall : KubeSphere | Checking Kubernetes version] *******************
changed: [localhost]

TASK [preinstall : KubeSphere | Initing Kubernetes version] ********************
ok: [localhost]

TASK [preinstall : KubeSphere | Stopping if Kubernetes version is nonsupport] ***
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [preinstall : KubeSphere | Checking StorageClass] *************************
changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if StorageClass was not found] ********
skipping: [localhost]

TASK [preinstall : KubeSphere | Checking default StorageClass] *****************
changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if default StorageClass was not found] ***
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [preinstall : KubeSphere | Checking KubeSphere component] *****************
changed: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] **********
skipping: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] **********
skipping: [localhost] => (item=ks-openldap) 
skipping: [localhost] => (item=ks-redis) 
skipping: [localhost] => (item=ks-minio) 
skipping: [localhost] => (item=ks-openpitrix) 
skipping: [localhost] => (item=elasticsearch-logging) 
skipping: [localhost] => (item=elasticsearch-logging-curator) 
skipping: [localhost] => (item=istio) 
skipping: [localhost] => (item=istio-init) 
skipping: [localhost] => (item=jaeger-operator) 
skipping: [localhost] => (item=ks-jenkins) 
skipping: [localhost] => (item=ks-sonarqube) 
skipping: [localhost] => (item=logging-fluentbit-operator) 
skipping: [localhost] => (item=uc) 
skipping: [localhost] => (item=metrics-server) 

PLAY RECAP *********************************************************************
localhost                  : ok=9    changed=4    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0   

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Downloading items] ********************************************
skipping: [localhost]

TASK [download : Synchronizing container] **************************************
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] ***
ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] *****************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] ********************
skipping: [localhost] => (item={'file': 'metrics-server.yaml'}) 

TASK [metrics-server : Metrics-Server | Checking Metrics-Server] ***************
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Uninstalling old metrics-server] *******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Installing new metrics-server] *********
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for metrics.k8s.io ready] *****
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Importing metrics-server status] *******
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=10   rescued=0    ignored=0   

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Downloading items] ********************************************
skipping: [localhost]

TASK [download : Synchronizing container] **************************************
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] ***
ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] *****************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [common : KubeSphere | Checking kube-node-lease namespace] ****************
changed: [localhost]

TASK [common : KubeSphere | Getting system namespaces] *************************
ok: [localhost]

TASK [common : set_fact] *******************************************************
ok: [localhost]

TASK [common : debug] **********************************************************
ok: [localhost] => {
    "msg": [
        "kubesphere-system",
        "kubesphere-controls-system",
        "kubesphere-monitoring-system",
        "kubesphere-monitoring-federated",
        "kube-node-lease"
    ]
}

TASK [common : KubeSphere | Creating KubeSphere namespace] *********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kubesphere-monitoring-federated)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kubesphere-monitoring-federated)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Creating ImagePullSecrets] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kubesphere-monitoring-federated)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Labeling namespace for network policy] *************
changed: [localhost]

TASK [common : KubeSphere | Getting Kubernetes master num] *********************
changed: [localhost]

TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]

TASK [common : KubeSphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)

TASK [common : KubeSphere | Creating KubeSphere crds] **************************
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/app.k8s.io_applications.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/application.kubesphere.io_helmapplications.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/application.kubesphere.io_helmapplicationversions.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/application.kubesphere.io_helmcategories.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/application.kubesphere.io_helmreleases.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/application.kubesphere.io_helmrepos.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/cluster.kubesphere.io_clusters.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_devopsprojects.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_pipelines.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_s2ibinaries.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_s2ibuilders.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_s2ibuildertemplates.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/devops.kubesphere.io_s2iruns.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_globalrolebindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_globalroles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_groupbindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_groups.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_loginrecords.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_rolebases.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_users.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_workspacerolebindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/iam.kubesphere.io_workspaceroles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/network.kubesphere.io_ipamblocks.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/network.kubesphere.io_ipamhandles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/network.kubesphere.io_ippools.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/network.kubesphere.io_namespacenetworkpolicies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/quota.kubesphere.io_resourcequotas.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/servicemesh.kubesphere.io_servicepolicies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/servicemesh.kubesphere.io_strategies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/storage.kubesphere.io_provisionercapabilities.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/storage.kubesphere.io_storageclasscapabilities.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/tenant.kubesphere.io_workspaces.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-crds/tenant.kubesphere.io_workspacetemplates.yaml)

TASK [common : KubeSphere | Creating Storage ProvisionerCapability] ************
changed: [localhost]

TASK [common : KubeSphere | Checking Kubernetes version] ***********************
changed: [localhost]

TASK [common : KubeSphere | Getting common component installation files] *******
changed: [localhost] => (item=snapshot-controller)

TASK [common : KubeSphere | Creating snapshot controller values] ***************
changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})

TASK [common : KubeSphere | Removing old snapshot crd] *************************
changed: [localhost]

TASK [common : KubeSphere | Deploying snapshot controller] *********************
changed: [localhost]

TASK [common : KubeSphere | Checking openpitrix common component] **************
changed: [localhost]

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'}) 
skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'}) 

TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Checking mysql PersistentVolumeClaim] **************
changed: [localhost]

TASK [common : KubeSphere | Setting mysql db pv size] **************************
skipping: [localhost]

TASK [common : KubeSphere | Checking redis PersistentVolumeClaim] **************
changed: [localhost]

TASK [common : KubeSphere | Setting redis db pv size] **************************
skipping: [localhost]

TASK [common : KubeSphere | Checking minio PersistentVolumeClaim] **************
changed: [localhost]

TASK [common : KubeSphere | Setting minio pv size] *****************************
skipping: [localhost]

TASK [common : KubeSphere | Checking openldap PersistentVolumeClaim] ***********
changed: [localhost]

TASK [common : KubeSphere | Setting openldap pv size] **************************
skipping: [localhost]

TASK [common : KubeSphere | Checking etcd db PersistentVolumeClaim] ************
changed: [localhost]

TASK [common : KubeSphere | Setting etcd pv size] ******************************
skipping: [localhost]

TASK [common : KubeSphere | Checking redis ha PersistentVolumeClaim] ***********
changed: [localhost]

TASK [common : KubeSphere | Setting redis ha pv size] **************************
skipping: [localhost]

TASK [common : KubeSphere | Checking es-master PersistentVolumeClaim] **********
changed: [localhost]

TASK [common : KubeSphere | Setting es master pv size] *************************
skipping: [localhost]

TASK [common : KubeSphere | Checking es data PersistentVolumeClaim] ************
changed: [localhost]

TASK [common : KubeSphere | Setting es data pv size] ***************************
skipping: [localhost]

TASK [common : KubeSphere | Creating common component manifests] ***************
changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})

TASK [common : KubeSphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml) 
skipping: [localhost] => (item=mysql.yaml) 

TASK [common : KubeSphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'}) 

TASK [common : KubeSphere | Checking minio] ************************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying minio] ***********************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Checking ha-redis] *********************************
skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 

TASK [common : KubeSphere | Checking old redis status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] *****************
skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : KubeSphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Creating redis migration script] *******************
skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 

TASK [common : KubeSphere | Checking redis-ha status] **************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : KubeSphere | Disabling old redis] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml) 

TASK [common : KubeSphere | Importing redis status] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'}) 

TASK [common : KubeSphere | Checking old openldap status] **********************
skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] **************
skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] *********************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying openldap] ********************************
skipping: [localhost]

TASK [common : KubeSphere | Loading old openldap data] *************************
skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] ***********************
skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] **********************
skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] *************************
skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Checking ha-redis] *********************************
skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 

TASK [common : KubeSphere | Checking old redis status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] *****************
skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : KubeSphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Creating redis migration script] *******************
skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 

TASK [common : KubeSphere | Checking redis-ha status] **************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : KubeSphere | Disabling old redis] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml) 

TASK [common : KubeSphere | Importing redis status] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'}) 

TASK [common : KubeSphere | Checking old openldap status] **********************
skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] **************
skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] *********************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying openldap] ********************************
skipping: [localhost]

TASK [common : KubeSphere | Loading old openldap data] *************************
skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] ***********************
skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] **********************
skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] *************************
skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha) 

TASK [common : KubeSphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'}) 

TASK [common : KubeSphere | Checking minio] ************************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying minio] ***********************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Getting elasticsearch and curator installation files] ***
skipping: [localhost]

TASK [common : KubeSphere | Creating custom manifests] *************************
skipping: [localhost] => (item={'name': 'custom-values-elasticsearch', 'file': 'custom-values-elasticsearch.yaml'}) 
skipping: [localhost] => (item={'name': 'custom-values-elasticsearch-curator', 'file': 'custom-values-elasticsearch-curator.yaml'}) 

TASK [common : KubeSphere | Checking elasticsearch data StatefulSet] ***********
skipping: [localhost]

TASK [common : KubeSphere | Checking elasticsearch storageclass] ***************
skipping: [localhost]

TASK [common : KubeSphere | Commenting elasticsearch storageclass parameter] ***
skipping: [localhost]

TASK [common : KubeSphere | Creating elasticsearch credentials secret] *********
skipping: [localhost]

TASK [common : KubeSphere | Checking internal es] ******************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging] *******************
skipping: [localhost]

TASK [common : KubeSphere | Getting PersistentVolume Name] *********************
skipping: [localhost]

TASK [common : KubeSphere | Patching PersistentVolume (persistentVolumeReclaimPolicy)] ***
skipping: [localhost]

TASK [common : KubeSphere | Deleting elasticsearch] ****************************
skipping: [localhost]

TASK [common : KubeSphere | Waiting for seconds] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging] *******************
skipping: [localhost]

TASK [common : KubeSphere | Importing es status] *******************************
skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging-curator] ***********
skipping: [localhost]

TASK [common : KubeSphere | Getting fluentbit installation files] **************
skipping: [localhost]

TASK [common : KubeSphere | Creating custom manifests] *************************
skipping: [localhost] => (item={'path': 'fluentbit', 'file': 'custom-fluentbit-fluentBit.yaml'}) 
skipping: [localhost] => (item={'path': 'init', 'file': 'custom-fluentbit-operator-deployment.yaml'}) 
skipping: [localhost] => (item={'path': 'migrator', 'file': 'custom-migrator-job.yaml'}) 

TASK [common : KubeSphere | Checking fluentbit-version] ************************
skipping: [localhost]

TASK [common : KubeSphere | Backuping old fluentbit crd] ***********************
skipping: [localhost]

TASK [common : KubeSphere | Deleting old fluentbit operator] *******************
skipping: [localhost] => (item={'type': 'deploy', 'name': 'logging-fluentbit-operator'}) 
skipping: [localhost] => (item={'type': 'fluentbits.logging.kubesphere.io', 'name': 'fluent-bit'}) 
skipping: [localhost] => (item={'type': 'ds', 'name': 'fluent-bit'}) 
skipping: [localhost] => (item={'type': 'crd', 'name': 'fluentbits.logging.kubesphere.io'}) 

TASK [common : KubeSphere | Preparing fluentbit operator setup] ****************
skipping: [localhost]

TASK [common : KubeSphere | Migrating fluentbit operator old config] ***********
skipping: [localhost]

TASK [common : KubeSphere | Deploying new fluentbit operator] ******************
skipping: [localhost]

TASK [common : KubeSphere | Importing fluentbit status] ************************
skipping: [localhost]

TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
skipping: [localhost]

TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=30   changed=24   unreachable=0    failed=0    skipped=119  rescued=0    ignored=0   

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Downloading items] ********************************************
skipping: [localhost]

TASK [download : Synchronizing container] **************************************
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] ***
ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] *****************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [ks-core/prepare : KubeSphere | Checking core components (1)] *************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (2)] *************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (3)] *************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (4)] *************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Updating ks-core status] ******************
skipping: [localhost]

TASK [ks-core/prepare : set_fact] **********************************************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Creating KubeSphere directory] ************
ok: [localhost]

TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
changed: [localhost] => (item=ks-init)

TASK [ks-core/prepare : KubeSphere | Checking account init] ********************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Initing account] **************************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Initing KubeSphere] ***********************
changed: [localhost] => (item=role-templates.yaml)
changed: [localhost] => (item=webhook-secret.yaml)
changed: [localhost] => (item=network.kubesphere.io.yaml)
changed: [localhost] => (item=iam.kubesphere.io.yaml)
changed: [localhost] => (item=quota.kubesphere.io.yaml)

TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
changed: [localhost] => (item={'name': 'kubesphere-controls-system', 'file': 'kubesphere-controls-system.yaml'})

TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Generating kubeconfig-admin] **************
skipping: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere directory] *********
ok: [localhost]

TASK [ks-core/init-token : KubeSphere | Getting installation init files] *******
changed: [localhost] => (item=jwt-script)

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
ok: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
skipping: [localhost]

TASK [ks-core/init-token : KubeSphere | Enabling Token Script] *****************
changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Getting KubeSphere Token] **************
changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Checking KubeSphere secrets] ***********
changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Deleting KubeSphere secret] ************
skipping: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating components token] *************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Getting Kubernetes version] ***************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Setting Kubernetes version] ***************
ok: [localhost]

TASK [ks-core/ks-core : KubeSphere | Getting Kubernetes master num] ************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Setting master num] ***********************
ok: [localhost]

TASK [ks-core/ks-core : KubeSphere | Override master num] **********************
skipping: [localhost]

TASK [ks-core/ks-core : ks-console | Checking ks-console svc] ******************
changed: [localhost]

TASK [ks-core/ks-core : ks-console | Getting ks-console svc port] **************
skipping: [localhost]

TASK [ks-core/ks-core : ks-console | Setting console_port] *********************
skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Getting Ingress installation files] *******
changed: [localhost] => (item=ingress)
changed: [localhost] => (item=ks-apiserver)
changed: [localhost] => (item=ks-console)
changed: [localhost] => (item=ks-controller-manager)

TASK [ks-core/ks-core : KubeSphere | Creating manifests] ***********************
changed: [localhost] => (item={'path': 'ingress', 'file': 'ingress-controller.yaml', 'type': 'config'})
changed: [localhost] => (item={'path': 'ks-apiserver', 'file': 'ks-apiserver.yml', 'type': 'deploy'})
changed: [localhost] => (item={'path': 'ks-controller-manager', 'file': 'ks-controller-manager.yaml', 'type': 'deploy'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-config.yml', 'type': 'config'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-deployment.yml', 'type': 'deploy'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-svc.yml', 'type': 'svc'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'sample-bookinfo-configmap.yaml', 'type': 'config'})

TASK [ks-core/ks-core : KubeSphere | Deleting Ingress-controller configmap] ****
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Creating Ingress-controller configmap] ****
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Creating ks-core] *************************
changed: [localhost] => (item={'path': 'ks-apiserver', 'file': 'ks-apiserver.yml'})
changed: [localhost] => (item={'path': 'ks-controller-manager', 'file': 'ks-controller-manager.yaml'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-config.yml'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'sample-bookinfo-configmap.yaml'})
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-deployment.yml'})

TASK [ks-core/ks-core : KubeSphere | Checking ks-console svc] ******************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Creating ks-console svc] ******************
changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-svc.yml'})

TASK [ks-core/ks-core : KubeSphere | Patching ks-console svc] ******************
skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Importing ks-core status] *****************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=32   changed=25   unreachable=0    failed=0    skipped=14   rescued=0    ignored=0   

Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.5.233:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-09-08 15:21:57
#####################################################

Out of curiosity, I then checked which containers are actually running.

[root@k8s-test ~]# find / -name "ansible"
/var/lib/docker/overlay2/bb0be2514f7efccf6c4ed47e88bd93663ffe2f4d17d1608de6789ed067c563a5/diff/usr/local/bin/ansible
/var/lib/docker/overlay2/bb0be2514f7efccf6c4ed47e88bd93663ffe2f4d17d1608de6789ed067c563a5/diff/usr/local/lib/python3.9/site-packages/ansible
/var/lib/docker/overlay2/b801a6ae33ab97ee1d84ec40a188f93f7d75e69ef3917a7bbe24561a6472b086/merged/usr/local/bin/ansible
/var/lib/docker/overlay2/b801a6ae33ab97ee1d84ec40a188f93f7d75e69ef3917a7bbe24561a6472b086/merged/usr/local/lib/python3.9/site-packages/ansible
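# (Hedged aside: that overlay layer presumably belongs to the ks-installer image, which is where the Ansible roles live;
#  one way to cross-check is  docker ps | grep ks-installer  or
#  kubectl -n kubesphere-system get pods -l app=ks-install -o wide )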
[root@k8s-test ~]# docker ps
CONTAINER ID   IMAGE                                                                         COMMAND                  CREATED             STATUS             PORTS     NAMES
3302edb5ba16   registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl                         "entrypoint.sh"          About an hour ago   Up About an hour             k8s_kubectl_kubectl-admin-68dc989bf8-klp4w_kubesphere-controls-system_44f0faa6-0ca9-4ba1-9b78-60b20cff4986_0
19027b65969f   6d6859d1a42a                                                                  "/bin/prometheus --w…"   About an hour ago   Up About an hour             k8s_prometheus_prometheus-k8s-0_kubesphere-monitoring-system_75d362ab-91b9-4e7d-aff5-d0509102d106_1
00dbdeebb2fd   7ec24a279487                                                                  "/configmap-reload -…"   About an hour ago   Up About an hour             k8s_rules-configmap-reloader_prometheus-k8s-0_kubesphere-monitoring-system_75d362ab-91b9-4e7d-aff5-d0509102d106_0
7716e3dead12   registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader      "/bin/prometheus-con…"   About an hour ago   Up About an hour             k8s_prometheus-config-reloader_prometheus-k8s-0_kubesphere-monitoring-system_75d362ab-91b9-4e7d-aff5-d0509102d106_0
88fe76f40270   registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager            "/notification-manag…"   About an hour ago   Up About an hour             k8s_notification-manager_notification-manager-deployment-97dfccc89-jc7vg_kubesphere-monitoring-system_3706ea15-6eab-41d6-8d80-70e6d8f876be_0
7ecbaddb3535   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kubectl-admin-68dc989bf8-klp4w_kubesphere-controls-system_44f0faa6-0ca9-4ba1-9b78-60b20cff4986_0
eb9322948e60   registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager           "controller-manager …"   About an hour ago   Up About an hour             k8s_ks-controller-manager_ks-controller-manager-5b4d5d95b7-s92fg_kubesphere-system_047f622c-03fa-4e2a-b080-7d461980e7c4_0
8e1424920b66   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_notification-manager-deployment-97dfccc89-jc7vg_kubesphere-monitoring-system_3706ea15-6eab-41d6-8d80-70e6d8f876be_0
8977ced6f185   registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator   "/notification-manag…"   About an hour ago   Up About an hour             k8s_notification-manager-operator_notification-manager-operator-59cbfc566b-9zdnn_kubesphere-monitoring-system_078e675c-a46d-4293-b722-2b53602070ca_0
ba02231bc92d   905d6f5ae7d4                                                                  "ks-apiserver --logt…"   About an hour ago   Up About an hour             k8s_ks-apiserver_ks-apiserver-dc84cb4f8-4kxxw_kubesphere-system_2b7a3e79-4f40-4e6a-99ab-5999faa38e16_0
e878556a9afd   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_ks-controller-manager-5b4d5d95b7-s92fg_kubesphere-system_047f622c-03fa-4e2a-b080-7d461980e7c4_0
364a81dff4ee   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_ks-apiserver-dc84cb4f8-4kxxw_kubesphere-system_2b7a3e79-4f40-4e6a-99ab-5999faa38e16_0
2dd18f08dc69   registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload                "/configmap-reload -…"   About an hour ago   Up About an hour             k8s_config-reloader_alertmanager-main-0_kubesphere-monitoring-system_36dffb91-d42d-4009-9bc7-731804b5a6d8_0
7b27ca0b6283   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_prometheus-k8s-0_kubesphere-monitoring-system_75d362ab-91b9-4e7d-aff5-d0509102d106_0
37ae2a242314   registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-rbac-proxy_notification-manager-operator-59cbfc566b-9zdnn_kubesphere-monitoring-system_078e675c-a46d-4293-b722-2b53602070ca_0
7dec23d0746d   ad393d6a4d1b                                                                  "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-rbac-proxy-self_kube-state-metrics-577b8b4cf-68j9l_kubesphere-monitoring-system_67a9df8e-f525-49a7-9faa-d73c386fcee0_0
bea0f3fd1bea   registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-rbac-proxy-main_kube-state-metrics-577b8b4cf-68j9l_kubesphere-monitoring-system_67a9df8e-f525-49a7-9faa-d73c386fcee0_0
d7a22384abfd   registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager                    "/bin/alertmanager -…"   About an hour ago   Up About an hour             k8s_alertmanager_alertmanager-main-0_kubesphere-monitoring-system_36dffb91-d42d-4009-9bc7-731804b5a6d8_0
1aba0db91e84   registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-rbac-proxy_node-exporter-r499q_kubesphere-monitoring-system_7c61bbfa-31d7-49b4-9e94-86ea17aa7c10_0
594e8064532b   registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy                 "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-rbac-proxy_prometheus-operator-8f97cb8c6-z4ml8_kubesphere-monitoring-system_64d83546-a4b1-4d74-ad5b-ac450a667671_0
65143a621293   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_notification-manager-operator-59cbfc566b-9zdnn_kubesphere-monitoring-system_078e675c-a46d-4293-b722-2b53602070ca_0
3d6160e5ed4a   registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics              "/kube-state-metrics…"   About an hour ago   Up About an hour             k8s_kube-state-metrics_kube-state-metrics-577b8b4cf-68j9l_kubesphere-monitoring-system_67a9df8e-f525-49a7-9faa-d73c386fcee0_0
ae57d8e077d8   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_alertmanager-main-0_kubesphere-monitoring-system_36dffb91-d42d-4009-9bc7-731804b5a6d8_0
d1dceb7af073   registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter                   "/bin/node_exporter …"   About an hour ago   Up About an hour             k8s_node-exporter_node-exporter-r499q_kubesphere-monitoring-system_7c61bbfa-31d7-49b4-9e94-86ea17aa7c10_0
46384ea221c9   registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator             "/bin/operator --kub…"   About an hour ago   Up About an hour             k8s_prometheus-operator_prometheus-operator-8f97cb8c6-z4ml8_kubesphere-monitoring-system_64d83546-a4b1-4d74-ad5b-ac450a667671_0
bfa9014b92d6   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kube-state-metrics-577b8b4cf-68j9l_kubesphere-monitoring-system_67a9df8e-f525-49a7-9faa-d73c386fcee0_0
96a16088f105   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_node-exporter-r499q_kubesphere-monitoring-system_7c61bbfa-31d7-49b4-9e94-86ea17aa7c10_0
a71d82d33687   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_prometheus-operator-8f97cb8c6-z4ml8_kubesphere-monitoring-system_64d83546-a4b1-4d74-ad5b-ac450a667671_0
be95c70edf24   registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console                      "docker-entrypoint.s…"   About an hour ago   Up About an hour             k8s_ks-console_ks-console-58b965dbf5-58d6q_kubesphere-system_f36df02e-051f-4428-bfbe-bb2e85b1a10a_0
bce19d3a088e   registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64            "/server"                About an hour ago   Up About an hour             k8s_default-http-backend_default-http-backend-5d7c68c698-fcrm2_kubesphere-controls-system_927f3a3e-3313-43d9-ad35-66fba29d501b_0
f4c85647ad9f   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_ks-console-58b965dbf5-58d6q_kubesphere-system_f36df02e-051f-4428-bfbe-bb2e85b1a10a_0
7d06a15caad0   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_default-http-backend-5d7c68c698-fcrm2_kubesphere-controls-system_927f3a3e-3313-43d9-ad35-66fba29d501b_0
9261d64cb549   registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller             "/snapshot-controlle…"   About an hour ago   Up About an hour             k8s_snapshot-controller_snapshot-controller-0_kube-system_7fa7414c-f3d5-46f9-abf9-33d60f2b8626_0
15fce1e05c61   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_snapshot-controller-0_kube-system_7fa7414c-f3d5-46f9-abf9-33d60f2b8626_0
79e284ef0773   registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv             "/usr/local/bin/prov…"   About an hour ago   Up About an hour             k8s_openebs-provisioner-hostpath_openebs-localpv-provisioner-5cddd6cbfc-p6rt6_kube-system_32250797-215e-4d69-9ab9-0a6c08b8d17f_0
e912bb8c395e   registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer                    "/shell-operator sta…"   About an hour ago   Up About an hour             k8s_installer_ks-installer-769994b6ff-74wtk_kubesphere-system_3bc1c1d5-10c8-4484-b421-4bc652791f7c_0
c52bed8c4002   75c8849ca840                                                                  "/usr/bin/kube-contr…"   About an hour ago   Up About an hour             k8s_calico-kube-controllers_calico-kube-controllers-8545b68dd4-nkm5p_kube-system_f3046267-258b-4a18-8e16-f3b3aff765c1_0
38cf16f58b54   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_openebs-localpv-provisioner-5cddd6cbfc-p6rt6_kube-system_32250797-215e-4d69-9ab9-0a6c08b8d17f_0
daf5c4c244b9   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_calico-kube-controllers-8545b68dd4-nkm5p_kube-system_f3046267-258b-4a18-8e16-f3b3aff765c1_0
6656ffdf4951   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_ks-installer-769994b6ff-74wtk_kubesphere-system_3bc1c1d5-10c8-4484-b421-4bc652791f7c_0
3a03d4e76cd3   faac9e62c0d6                                                                  "/coredns -conf /etc…"   About an hour ago   Up About an hour             k8s_coredns_coredns-7f87749d6c-knw8r_kube-system_3cca53b5-1c6a-4df3-a0ad-5096d4b49cd9_0
cf537ff25479   faac9e62c0d6                                                                  "/coredns -conf /etc…"   About an hour ago   Up About an hour             k8s_coredns_coredns-7f87749d6c-ct9fv_kube-system_4f3400f3-dbf7-4b33-9db2-7577c42a3805_0
96af1423aa96   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_coredns-7f87749d6c-ct9fv_kube-system_4f3400f3-dbf7-4b33-9db2-7577c42a3805_0
9fa13c2d58d2   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_coredns-7f87749d6c-knw8r_kube-system_3cca53b5-1c6a-4df3-a0ad-5096d4b49cd9_0
2836eb96f405   f0d3b0d0e32c                                                                  "start_runit"            About an hour ago   Up About an hour             k8s_calico-node_calico-node-h26c5_kube-system_d09ef8b2-578b-4b9d-84c6-b5898743e231_0
bb38dddbbfe8   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_calico-node-h26c5_kube-system_d09ef8b2-578b-4b9d-84c6-b5898743e231_0
c4dbf71105c8   5340ba194ec9                                                                  "/node-cache -locali…"   About an hour ago   Up About an hour             k8s_node-cache_nodelocaldns-nk8vh_kube-system_7e863542-1263-4554-9de1-d1ef2f9000d8_0
9afb0d7d7fde   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_nodelocaldns-nk8vh_kube-system_7e863542-1263-4554-9de1-d1ef2f9000d8_0
a9574ac0615d   c29e6c583067                                                                  "/usr/local/bin/kube…"   About an hour ago   Up About an hour             k8s_kube-proxy_kube-proxy-bp6xl_kube-system_5a8cdb7e-54dd-4263-b2be-954a7f1e6f8f_0
65610821fe53   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kube-proxy-bp6xl_kube-system_5a8cdb7e-54dd-4263-b2be-954a7f1e6f8f_0
1dbd75332774   ae5eb22e4a9d                                                                  "kube-apiserver --ad…"   About an hour ago   Up About an hour             k8s_kube-apiserver_kube-apiserver-k8s-test_kube-system_1d329797fe80a2b2e76fd5d00379ee9b_0
bc091211e653   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kube-apiserver-k8s-test_kube-system_1d329797fe80a2b2e76fd5d00379ee9b_0
096b83364e95   5f8cb769bd73                                                                  "kube-scheduler --au…"   About an hour ago   Up About an hour             k8s_kube-scheduler_kube-scheduler-k8s-test_kube-system_d9d342aadf98711fa7593d41fd00773b_0
12b0eaf31a2c   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kube-scheduler-k8s-test_kube-system_d9d342aadf98711fa7593d41fd00773b_0
cdc2433fa259   0a41a1414c53                                                                  "kube-controller-man…"   About an hour ago   Up About an hour             k8s_kube-controller-manager_kube-controller-manager-k8s-test_kube-system_93bf8d46bbc7206cf3327fa70a9adb2a_0
82fa0cd29ac7   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2                       "/pause"                 About an hour ago   Up About an hour             k8s_POD_kube-controller-manager-k8s-test_kube-system_93bf8d46bbc7206cf3327fa70a9adb2a_0
90907f76222b   registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13                    "/usr/local/bin/etcd"    About an hour ago   Up About an hour             etcd1

Then I took a look at the listening ports. This is about as clean and minimal as KubeSphere gets, an all-in-one install on a single node, and it still runs quite a lot. No wonder Kubernetes work pays well; it is currently the most popular technology in this space, and there is a reason for that.

[root@k8s-test ~]# netstat  -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      702/rpcbind         
tcp        0      0 169.254.25.10:53        0.0.0.0:*               LISTEN      7555/node-cache     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1169/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1126/master         
tcp        0      0 0.0.0.0:30880           0.0.0.0:*               LISTEN      7131/kube-proxy     
tcp        0      0 169.254.25.10:9254      0.0.0.0:*               LISTEN      7555/node-cache     
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      6521/kubelet        
tcp        0      0 127.0.0.1:44680         0.0.0.0:*               LISTEN      6521/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      7131/kube-proxy     
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      8626/calico-node    
tcp        0      0 192.168.5.233:2379      0.0.0.0:*               LISTEN      4609/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      4609/etcd           
tcp        0      0 192.168.5.233:9100      0.0.0.0:*               LISTEN      24923/kube-rbac-pro 
tcp        0      0 127.0.0.1:9100          0.0.0.0:*               LISTEN      23465/node_exporter 
tcp        0      0 192.168.5.233:2380      0.0.0.0:*               LISTEN      4609/etcd           
tcp6       0      0 :::111                  :::*                    LISTEN      702/rpcbind         
tcp6       0      0 :::10256                :::*                    LISTEN      7131/kube-proxy     
tcp6       0      0 :::10257                :::*                    LISTEN      6052/kube-controlle 
tcp6       0      0 :::10259                :::*                    LISTEN      6215/kube-scheduler 
tcp6       0      0 :::22                   :::*                    LISTEN      1169/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1126/master         
tcp6       0      0 :::9253                 :::*                    LISTEN      7555/node-cache     
tcp6       0      0 :::9353                 :::*                    LISTEN      7555/node-cache     
tcp6       0      0 :::10250                :::*                    LISTEN      6521/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      6376/kube-apiserver 
udp        0      0 169.254.25.10:53        0.0.0.0:*                           7555/node-cache     
udp        0      0 0.0.0.0:111             0.0.0.0:*                           702/rpcbind         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           713/chronyd         
udp        0      0 0.0.0.0:873             0.0.0.0:*                           702/rpcbind         
udp6       0      0 :::111                  :::*                                702/rpcbind         
udp6       0      0 ::1:323                 :::*                                713/chronyd         
udp6       0      0 :::873                  :::*                                702/rpcbind 
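Looking at that list, the 30880 listener owned by kube-proxy is the NodePort that fronts the KubeSphere console. A quick way to confirm the mapping, assuming the Service is named ks-console in the kubesphere-system namespace (which matches the containers listed above):

[root@k8s-test ~]# kubectl get svc -n kubesphere-system ks-console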

Log in to the web console and take a look.
[Screenshot: KubeSphere console]
Great, all components show a normal status.
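If any component were shown as not ready, a rough way to dig in from the shell is to list all pods and tail the installer log. This is only a sketch: the app=ks-install label selector is the one the official KubeSphere docs use, so adjust it if your release labels the installer pod differently.

[root@k8s-test ~]# kubectl get pods -A
[root@k8s-test ~]# kubectl logs -n kubesphere-system \
    $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f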

A deeper look at K8s
K8s cluster architecture diagram
The following architecture diagram shows how the parts of a Kubernetes cluster relate to one another:
[Diagram: K8s cluster architecture]

The following article is from Red Hat.

What is the K8s architecture and how does it work?

The nerve center of the K8s cluster: the control plane
Let's start with the nerve center of the Kubernetes cluster: the control plane. This is where we find the Kubernetes components that control the cluster, along with data about the cluster's state and configuration. These core components handle the important work of making sure containers run in sufficient numbers and with the resources they need.

The control plane stays in constant contact with your compute machines. You have configured the cluster to run a certain way; the control plane makes sure it does.

The K8s cluster API: kube-apiserver
If you need to interact with your Kubernetes cluster, you go through the API. The Kubernetes API is the front end of the control plane and handles both internal and external requests. The API server determines whether a request is valid and, if it is, processes it. You can reach the API through REST calls, the kubectl command-line interface, or other command-line tools such as kubeadm.
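Both access paths are easy to try on this all-in-one node. This is a minimal sketch assuming kubectl is already configured for root here; kubectl get --raw simply reuses those same credentials to hit the REST API directly:

# Through the kubectl command-line interface
[root@k8s-test ~]# kubectl get nodes -o wide
# Through a raw REST call against the same API server
[root@k8s-test ~]# kubectl get --raw /api/v1/namespaces/kube-system/pods | head -c 300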

The K8s scheduler: kube-scheduler
Is the cluster healthy? If new containers are needed, where should they run? These are the questions the Kubernetes scheduler worries about.

The scheduler considers a pod's resource requests (CPU and memory, for example) together with the health of the cluster, and then assigns the pod to a suitable compute node.
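To watch the scheduler do its job, here is a minimal pod with explicit resource requests; the name, image, and sizes are purely illustrative. The scheduler will only bind it to a node that can satisfy the requests:

[root@k8s-test ~]# kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  containers:
  - name: web
    image: nginx:1.21
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
EOF
# The -o wide output shows which node it was scheduled onto
[root@k8s-test ~]# kubectl get pod sched-demo -o wide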

K8s controllers: kube-controller-manager
Controllers are what actually run the cluster, and the Kubernetes controller manager bundles several controller functions into one. Controllers consult the scheduler and make sure the right number of pods is running. If a pod goes down, another controller notices and responds. Controllers also wire services up to pods so that requests reach the right endpoints, and some controllers create accounts and API access tokens.
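A small experiment makes the reconciliation loop visible; the deployment name and image here are arbitrary. Delete the pods of a two-replica Deployment and the ReplicaSet controller immediately creates replacements to restore the desired count:

[root@k8s-test ~]# kubectl create deployment cm-demo --image=nginx:1.21 --replicas=2
[root@k8s-test ~]# kubectl delete pod -l app=cm-demo --wait=false
# New pods are already being created to bring the count back to 2
[root@k8s-test ~]# kubectl get pods -l app=cm-demo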

The key-value store: etcd
Configuration data and information about the cluster's state live in etcd, a key-value store database. Distributed and fault-tolerant, etcd is designed to be the ultimate source of truth about the cluster.
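If you are curious what actually lives in there, you can list the keys the API server writes under /registry. Treat this as a sketch: etcdctl must be present on the node, and the certificate locations differ between deployment tools, so the paths below are placeholders to point at wherever your etcd certificates really are:

[root@k8s-test ~]# ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/path/to/etcd/ca.pem \
    --cert=/path/to/etcd/client.pem \
    --key=/path/to/etcd/client-key.pem \
    get /registry --prefix --keys-only | head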

What happens inside a Kubernetes node?

K8s nodes
A Kubernetes cluster needs at least one compute node, and normally has many. Pods are scheduled and orchestrated to run on those nodes; if you need more capacity, you add more nodes.
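On this single-node experiment that is exactly one node, but KubeKey can grow the cluster later. A rough sketch of the flow per the KubeKey docs, where config-sample.yaml is just the sample file name and needs your new hosts filled in:

[root@k8s-test ~]# ./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1 -f config-sample.yaml
# Edit config-sample.yaml to add the new machines, then:
[root@k8s-test ~]# ./kk add nodes -f config-sample.yaml
[root@k8s-test ~]# kubectl get nodes -o wide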

Pods
A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application and consists of one container (or a group of tightly coupled containers) plus the options that control how those containers run. Pods can be attached to persistent storage in order to run stateful applications.
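This also explains all the pause containers in the docker ps output earlier: each k8s_POD_ entry is the sandbox container that holds a pod's network namespace, so their count should roughly match the number of pods on this node:

[root@k8s-test ~]# docker ps --filter "name=k8s_POD_" --format '{{.Names}}' | wc -l
[root@k8s-test ~]# kubectl get pods -A --no-headers | wc -l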

Container runtime engine
To actually run containers, every compute node has a container runtime engine. Docker is one example, but Kubernetes also supports other Open Container Initiative (OCI) compliant runtimes such as rkt and CRI-O.
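Here the runtime is the Docker Engine that the installer set up. A quick one-liner to see its version and cgroup driver (which the kubelet configuration must agree with):

[root@k8s-test ~]# docker info --format 'version={{.ServerVersion}} cgroup-driver={{.CgroupDriver}}'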

kubelet
Each compute node runs a kubelet, a small agent that communicates with the control plane. The kubelet makes sure the containers of a pod are actually running; when the control plane needs something done on a node, the kubelet carries it out.
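On this node the kubelet runs as a systemd service, and the 127.0.0.1:10248 listener in the netstat output above is its health endpoint; both are easy to poke at:

[root@k8s-test ~]# systemctl status kubelet --no-pager | head -n 5
[root@k8s-test ~]# curl -s http://127.0.0.1:10248/healthz; echo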

kube-proxy
Each compute node also runs kube-proxy, a network proxy that implements Kubernetes networking services. kube-proxy handles network traffic inside and outside the cluster, either by relying on the operating system's packet-filtering layer or by forwarding the traffic itself.
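Its metrics listener on 127.0.0.1:10249 (also visible in the netstat output) reports which mode it is running in, iptables or IPVS. The endpoint path below is what recent kube-proxy versions expose, so verify it on yours:

[root@k8s-test ~]# curl -s http://127.0.0.1:10249/proxyMode; echo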

What else does a K8s cluster need?

Persistent storage
Beyond managing the containers that run an application, Kubernetes can also manage the application data attached to the cluster. It lets users request storage resources without knowing the details of the underlying storage infrastructure. Persistent volumes belong to the cluster rather than to a pod, so they can outlive any individual pod.
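On this install the openebs-localpv-provisioner seen in docker ps backs a hostpath StorageClass, so a plain PersistentVolumeClaim is enough to ask for storage without caring where it physically lands. The claim name and size below are arbitrary:

[root@k8s-test ~]# kubectl get storageclass
[root@k8s-test ~]# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# With a local hostpath provisioner the claim may stay Pending until a pod actually mounts it
[root@k8s-test ~]# kubectl get pvc demo-pvc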

Container registry
The container images Kubernetes depends on are stored in a container registry. It can be a registry you run yourself or one provided by a third party.
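In this deployment the images all came from the registry.cn-beijing.aliyuncs.com/kubesphereio mirror visible in docker ps; pulling from it by hand works like any other registry:

[root@k8s-test ~]# docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2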

Underlying infrastructure
Where you run Kubernetes is up to you: bare-metal servers, virtual machines, a public cloud provider, a private cloud, or a hybrid environment. One of Kubernetes' big advantages is that it runs on many different kinds of infrastructure.

Further reading
Understanding service discovery in Kubernetes in one article
