Basic TKG Commands
Viewing the clusters created earlier
The previous post created a management cluster and workload clusters; both can be inspected with the tkg command.
Viewing the management cluster
Use the tkg get management-cluster command to view the management cluster:
[root@hop-172 ~]# tkg get management-cluster
MANAGEMENT-CLUSTER-NAME CONTEXT-NAME STATUS
tkg-mgmt-vsphere-20210207152959 * tkg-mgmt-vsphere-20210207152959-admin@tkg-mgmt-vsphere-20210207152959 Success
If there are multiple management clusters, they are all listed here.
To switch between management clusters, use the tkg set management-cluster command.
Viewing Tanzu Kubernetes (workload) clusters
Use tkg get cluster to view workload clusters in the default namespace; for clusters in other namespaces, add the --namespace option.
[root@hop-172 ~]# tkg get cluster
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
contour-test default running 1/1 3/3 v1.19.3+vmware.1 <none>
my-cluster-dev-1 default running 1/1 1/1 v1.19.3+vmware.1 <none>
As shown above, two workload clusters, contour-test and my-cluster-dev-1, have been created, and their parameters are listed.
The STATUS column describes the current state of the cluster, defined as follows:
- creating: the control plane is being created
- createStalled: the control plane creation process has stalled
- deleting: the cluster is being deleted
- failed: control plane creation failed
- running: the control plane is fully initialized
- updating: the cluster is rolling out an update or scaling nodes
- updateFailed: the cluster update process failed
- updateStalled: the cluster update process has stalled
- No status: cluster creation has not yet started
If a cluster is in a stalled state, check network connectivity to the external registry, make sure the target platform has sufficient resources to complete the operation, and confirm that DHCP is issuing IPv4 addresses correctly.
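For scripting against this state table, a small helper can flag the states that need operator attention. This is a sketch, not a tkg feature; the function name is an assumption:

```shell
# Sketch: flag the cluster states from the table above that need operator
# attention. The helper name is an assumption, not part of the tkg CLI.
needs_attention() {
  case "$1" in
    createStalled|failed|updateFailed|updateStalled) echo "yes" ;;
    *) echo "no" ;;
  esac
}

needs_attention running        # prints "no"
needs_attention createStalled  # prints "yes"
```

A helper like this could feed a monitoring loop that polls tkg get cluster output.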
Use the --include-management-cluster option to view the management and workload clusters together.
[root@hop-172 ~]# tkg get cluster --include-management-cluster
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
contour-test default running 1/1 3/3 v1.19.3+vmware.1 <none>
my-cluster-dev-1 default running 1/1 1/1 v1.19.3+vmware.1 <none>
tkg-mgmt-vsphere-20210207152959 tkg-system running 1/1 1/1 v1.19.3+vmware.1 management
Exporting Tanzu Kubernetes cluster configuration details
Use tkg get cluster --output json (or yaml) to export the details:
[root@hop-172 ~]# tkg get cluster --output yaml >1.yaml
[root@hop-172 ~]# cat 1.yaml
- name: contour-test
namespace: default
status: running
plan: ""
controlplane: 1/1
workers: 3/3
kubernetes: v1.19.3+vmware.1
roles: []
- name: my-cluster-dev-1
namespace: default
status: running
plan: ""
controlplane: 1/1
workers: 1/1
kubernetes: v1.19.3+vmware.1
roles: []
The example above exports the cluster details to the file 1.yaml.
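The exported file is easy to post-process with standard tools. As a sketch (assuming the flat key layout shown above), this awk one-liner prints each cluster's name and status; the sample file stands in for the real export:

```shell
# Sketch: extract name/status pairs from the exported YAML. The sample
# file below mirrors the layout of 1.yaml shown above.
cat > clusters.yaml <<'EOF'
- name: contour-test
  namespace: default
  status: running
- name: my-cluster-dev-1
  namespace: default
  status: running
EOF
awk '/- name:/ {n=$3} /status:/ {print n": "$2}' clusters.yaml
# contour-test: running
# my-cluster-dev-1: running
```

For anything more involved than this, parsing the --output json form with a JSON-aware tool is more robust than line matching.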
Switching to a workload cluster
Getting Tanzu Kubernetes cluster credentials
Running tkg get credentials automatically adds the cluster's credentials to the kubeconfig file; specify the cluster name when running it.
[root@hop-172 ~]# tkg get credentials contour-test
Credentials of workload cluster 'contour-test' have been saved
You can now access the cluster by running 'kubectl config use-context contour-test-admin@contour-test'
Connecting to the workload cluster
As the prompt suggests, run kubectl config use-context contour-test-admin@contour-test to switch to the contour-test cluster:
[root@hop-172 ~]# kubectl config use-context contour-test-admin@contour-test
Switched to context "contour-test-admin@contour-test".
[root@hop-172 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
contour-test-control-plane-trnnk Ready master 5m41s v1.19.3+vmware.1
contour-test-md-0-96dcc5c79-ft78t Ready <none> 4m38s v1.19.3+vmware.1
contour-test-md-0-96dcc5c79-h7gzg Ready <none> 4m37s v1.19.3+vmware.1
contour-test-md-0-96dcc5c79-l8gj5 Ready <none> 4m43s v1.19.3+vmware.1
Viewing system containers in the workload cluster
Use kubectl get pod -A to view the system pods:
[root@hop-172 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system antrea-agent-8b9bk 2/2 Running 0 7m
kube-system antrea-agent-hmt7q 2/2 Running 0 6m2s
kube-system antrea-agent-k56qs 2/2 Running 0 5m56s
kube-system antrea-agent-sdmnn 2/2 Running 0 5m57s
kube-system antrea-controller-5d594c5cc7-j8tc7 1/1 Running 0 7m2s
kube-system coredns-5d6f7c958-5xwjk 1/1 Running 0 6m57s
kube-system coredns-5d6f7c958-ljrtc 1/1 Running 0 6m57s
kube-system etcd-contour-test-control-plane-trnnk 1/1 Running 0 6m47s
kube-system kube-apiserver-contour-test-control-plane-trnnk 1/1 Running 0 6m47s
kube-system kube-controller-manager-contour-test-control-plane-trnnk 1/1 Running 0 6m47s
kube-system kube-proxy-45n6k 1/1 Running 0 6m2s
kube-system kube-proxy-7gnrw 1/1 Running 0 5m56s
kube-system kube-proxy-s5pgb 1/1 Running 0 5m57s
kube-system kube-proxy-vwvfn 1/1 Running 0 6m58s
kube-system kube-scheduler-contour-test-control-plane-trnnk 1/1 Running 0 6m46s
kube-system kube-vip-contour-test-control-plane-trnnk 1/1 Running 0 6m46s
kube-system vsphere-cloud-controller-manager-zq2s4 1/1 Running 0 6m59s
kube-system vsphere-csi-controller-5b6f54ccc5-hc7j8 5/5 Running 0 7m2s
kube-system vsphere-csi-node-95x9x 3/3 Running 0 6m2s
kube-system vsphere-csi-node-js9tr 3/3 Running 0 5m57s
kube-system vsphere-csi-node-krntq 3/3 Running 0 5m56s
kube-system vsphere-csi-node-vrgwg 3/3 Running 0 7m
Antrea, CNI (container network interface) networking
coredns, internal DNS
etcd, key-value store
kube-apiserver, the Kubernetes API server
kube-proxy, the Kubernetes network proxy
kube-scheduler, scheduling and availability
vsphere-cloud-controller-manager, the Kubernetes cloud provider for vSphere
kube-vip, load balancing for the cluster's API server
vsphere-csi-controller and vsphere-csi-node, CSI (container storage interface) for vSphere
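To spot problems quickly in a listing like the one above, the READY and STATUS columns can be filtered. This is a sketch using sample lines piped through awk; in practice you would pipe `kubectl get pod -A --no-headers` instead:

```shell
# Sketch: print pods that are not Running or not fully ready, from a
# `kubectl get pod -A` style listing (sample lines stand in for live output).
printf '%s\n' \
  'kube-system antrea-agent-8b9bk 2/2 Running 0 7m' \
  'kube-system coredns-5d6f7c958-5xwjk 0/1 Pending 0 6m57s' |
awk '{split($3, r, "/"); if ($4 != "Running" || r[1] != r[2]) print $2" "$4" "$3}'
# coredns-5d6f7c958-5xwjk Pending 0/1
```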
Accessing from a third-party machine
To save the credentials in a separate kubeconfig file (for example, to distribute to developers), specify the --export-file option.
For example: tkg get credentials my-cluster --export-file my-cluster-credentials
On the third-party machine, simply point the KUBECONFIG environment variable at the my-cluster-credentials file:
export KUBECONFIG=my-cluster-credentials
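KUBECONFIG also accepts a colon-separated list of files, so several exported credential files can be used together without merging them by hand; the file paths here are assumptions for illustration:

```shell
# Sketch: combine the default kubeconfig with an exported credentials file.
# kubectl will see the contexts from both files. Paths are illustrative.
export KUBECONFIG="$HOME/.kube/config:$PWD/my-cluster-credentials"
echo "$KUBECONFIG"
```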
You can also copy ~/.kube/config to ~/.kube/config on the third-party machine, after which you can switch to any of the contexts. If there is no .kube directory, create one first (kubectl and tkg are already in /usr/local/bin; see Step by step: Installing Tanzu Kubernetes Grid).
[root@localhost ~]# cd .kube
-bash: cd: .kube: No such file or directory
[root@localhost ~]# mkdir .kube
[root@localhost ~]# cp config.bak ./.kube/config
[root@localhost ~]# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
contour-test-admin@contour-test contour-test contour-test-admin
my-cluster-dev-1-admin@my-cluster-dev-1 my-cluster-dev-1 my-cluster-dev-1-admin
tkg-mgmt-vsphere-20210207152959-admin@tkg-mgmt-vsphere-20210207152959 tkg-mgmt-vsphere-20210207152959 tkg-mgmt-vsphere-20210207152959-admin
* tkg-wld-admin@tkg-wld tkg-wld tkg-wld-admin
[root@localhost ~]# kubectl config use-context tkg-mgmt-vsphere-20210207152959-admin@tkg-mgmt-vsphere-20210207152959
Switched to context "tkg-mgmt-vsphere-20210207152959-admin@tkg-mgmt-vsphere-20210207152959".
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
tkg-mgmt-vsphere-20210207152959-control-plane-2xdwx Ready master 10d v1.19.3+vmware.1
tkg-mgmt-vsphere-20210207152959-md-0-78f475f9c8-6qmpl Ready <none> 10d v1.19.3+vmware.1
[root@localhost ~]# kubectl config use-context tkg-wld-admin@tkg-wld
Switched to context "tkg-wld-admin@tkg-wld".
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
tkg-wld-control-plane-vv7m9 Ready master 13h v1.19.3+vmware.1
tkg-wld-md-0-7cc4fb8888-vnzbd Ready <none> 13h v1.19.3+vmware.1
That's all.