Kubernetes (k8s) Cluster Setup and Deployment (Detailed) - Continuously Updated
Installation
1. Installation Requirements
Machines used to deploy a Kubernetes cluster must meet the following requirements:
# 3 or more machines running a 64-bit CentOS 7.7+ system
# Hardware: 2 GB+ RAM, 2+ CPUs, 30 GB+ disk
# Full network connectivity between all machines in the cluster
# Outbound internet access, needed to pull images
# Swap disabled
2. Deployment Scope
# Kubernetes 1.16.2, Docker 19.03
# Install Docker, kubeadm, and kubelet on all nodes
# Deploy the Kubernetes master
# Deploy the container network plugin
# Deploy the Kubernetes nodes and join them to the cluster
3. Environment Preparation
Three machines, CentOS 7.7 or later:
ip: 192.168.63.130 hostname: Kubernetes-Master-130 OS: centos7.9 spec: 2 cores / 2 GB
ip: 192.168.63.131 hostname: Kubernetes-Node-131 OS: centos7.9 spec: 2 cores / 2 GB
ip: 192.168.63.132 hostname: Kubernetes-Node-132 OS: centos7.9 spec: 2 cores / 2 GB
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable the swap partition:
swapoff -a # temporary, until reboot
vim /etc/fstab # comment out the swap line to disable it permanently
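If you prefer not to edit fstab by hand, a non-interactive one-liner should do the same job (a sketch using GNU sed; verify with cat /etc/fstab afterwards):
sed -ri '/\sswap\s/s/^/#/' /etc/fstab # comment out every swap entry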
Add hostname-to-IP mappings (run on all three hosts):
cat >> /etc/hosts << EOF
192.168.63.130 Kubernetes-Master-130
192.168.63.131 Kubernetes-Node-131
192.168.63.132 Kubernetes-Node-132
EOF
Set the hostnames (run the matching command on each machine):
hostnamectl set-hostname Kubernetes-Master-130
hostnamectl set-hostname Kubernetes-Node-131
hostnamectl set-hostname Kubernetes-Node-132
Pass bridged IPv4 traffic to the iptables chains (run on all machines).
If an existing config sets net.ipv4.ip_forward = 0, change it to 1. (Note: net.ipv4.tcp_tw_recycle was removed in kernel 4.12+, so that key can simply be dropped once the kernel upgrade below is done.)
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
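The two bridge-nf-call keys above only exist while the br_netfilter kernel module is loaded; if sysctl --system reports them as unknown keys, load the module and make it persistent across reboots (a short sketch):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf # load automatically at boot
sysctl --system # re-apply; the bridge keys should now be accepted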
Upgrade the kernel to the latest version on all machines
# Check the current kernel version
uname -a
# Install the ELRepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# List the available kernel versions (here we install the 5.4.103 long-term-support release)
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# Install the latest lt (long-term support) kernel
yum --disablerepo='*' --enablerepo=elrepo-kernel install kernel-lt -y
# List the grub boot entries; entry 0, the 5.4.103 lt version, is the newly installed kernel
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# Make the newly installed kernel (entry 0) the default boot kernel
grub2-set-default 0
# Remove the old kernel
yum remove kernel -y
# Reboot so the machine starts on the new kernel
reboot
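After the reboot, confirm the machine actually came up on the new kernel before continuing:
uname -r # should now print a 5.4.x version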
4. Install Docker/kubeadm/kubelet on all nodes
Install Docker on CentOS 7
Install dependencies:
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Aliyun repository:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker:
yum install docker-ce -y # to pin a specific version, e.g. yum install -y docker-ce-18.09
Start Docker and enable it at boot:
systemctl start docker
systemctl enable docker
Configure a registry mirror for faster pulls:
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://01xxgaft.mirror.aliyuncs.com"]
}
Restart Docker to load the new configuration:
systemctl restart docker
# Basic usage
Run docker --help for the full command list; the most common commands are below.
Fetching and inspecting images
Search for an image:
docker search <image name> (e.g. docker search nginx)
Pull an image: docker pull <image name> (e.g. docker pull nginx)
List all local images: docker image ls
Starting and managing containers
Start a container: docker run -d <image name> (starts the container directly in the background, e.g. docker run -d nginx)
Start a container in interactive mode: docker run -d -ti <image name> /bin/bash (e.g. docker run -d -ti centos /bin/bash)
Start a container with a name: docker run -d --name=<name> <image name> (tags the container with a name, e.g. docker run -d --name=test_nginx nginx)
Start a container with a port mapping: docker run -d -p <host port>:<container port> <image name> (maps a container port to the host, e.g. docker run -d -p 8080:80 nginx, which maps container port 80 to host port 8080)
Enter a running container: docker exec -ti <container ID or name> /bin/bash (e.g. docker exec -ti test_nginx /bin/bash. To detach safely, hold Ctrl and press p then q; the container keeps running. If the container was started in interactive mode, you can simply type exit.)
Inspecting, stopping, and deleting images and containers
List running containers with their details: docker ps
List all containers, including stopped ones: docker ps -a
Stop a container: docker stop <container ID or name> (the ID comes from docker ps; the name is the one set at start time, e.g. docker stop 7baea3ea0701 or docker stop test_nginx)
Delete a container: docker rm <container ID or name> (mainly used for containers that are no longer needed; a container must be stopped before it can be deleted, e.g. docker rm 7baea3ea0701 or docker rm test_nginx)
Delete an image: docker rmi <image name> (e.g. docker rmi nginx)
Packaging images
Commit a running container's current state as a new image: docker commit -m '<message>' -a '<author>' <container name> <new image name:tag> (used after entering a running container and modifying it, to package the result as a new image)
For example, packaging the test_nginx container from above: docker commit -m 'test' -a 'test' test_nginx nginx:testv1.0.0
Build from a Dockerfile: docker build -t <new image name:tag> <directory> (create a directory and place a file named Dockerfile inside it)
For example, create an nginxfile directory: mkdir nginxfile && cd nginxfile && touch Dockerfile
Edit the Dockerfile; the format is:
FROM <base image>
RUN <commands executed during the build; when there are several, chain them in a single RUN using && and \, otherwise each RUN creates an extra image layer>
CMD <command executed when the container starts; only one CMD takes effect>
Note: the CMD command must run in the foreground, otherwise the container will exit. The container records the PID of the CMD process as PID 1, and once that process dies the container considers its job finished and stops. The example below therefore starts nginx in the foreground.
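As a concrete sketch of the format above (the index.html content is just a placeholder):
# base image
FROM nginx
# build-time commands, chained in a single RUN to avoid extra layers
RUN echo 'hello from a custom image' > /usr/share/nginx/html/index.html && \
    nginx -t
# start command; "daemon off" keeps nginx in the foreground as PID 1
CMD ["nginx", "-g", "daemon off;"]
From inside the nginxfile directory, build and run it with docker build -t nginx:testv1.0.0 . and docker run -d -p 8080:80 nginx:testv1.0.0.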
Exporting and importing images
Export an image to a file: docker save -o <output path> <image name> (e.g. docker save -o /tmp/nginx.tar.gz nginx)
Import an image from a file: docker load < <archive path> (e.g. docker load < /tmp/nginx.tar.gz)
Exchanging data between containers and the host
Copy files: docker cp <host path> <container ID or name>:<path inside the container> (copies a host file into the container; it also works in reverse to copy files out, e.g. docker cp /tmp/1.txt test_nginx:/tmp/ and, in reverse, docker cp test_nginx:/tmp/1.txt /tmp/)
Mount a host directory: docker run -d -v <host dir>:<container dir> <image name> (maps a host directory into the container, e.g. docker run -d -v /tmp/logs:/data/logs nginx)
Modify the Docker configuration to set the cgroup driver; here we use systemd.
Change the configuration to the following:
vim /etc/docker/daemon.json
{
"graph": "/data/docker",
"registry-mirrors": ["https://01xxgaft.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
Restart Docker:
systemctl restart docker
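To confirm the new settings took effect, docker info reports the active drivers:
docker info | grep -iE 'cgroup driver|storage driver' # expect systemd and overlay2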
5. Add the Aliyun Kubernetes YUM repository
# Run on all machines
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet and kubectl
Run on all machines:
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2
systemctl start kubelet # kubelet will restart in a loop until kubeadm init/join runs; that is expected
systemctl enable kubelet
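A quick sanity check that all three components landed at the pinned version:
kubeadm version -o short # expect v1.16.2
kubelet --version # expect Kubernetes v1.16.2
kubectl version --client --short # expect Client Version: v1.16.2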
6. Deploy the Kubernetes master and node roles
Deploy the master node; run on 192.168.63.130.
Initialize the master:
kubeadm init --apiserver-advertise-address=192.168.63.130 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.2 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16
Mind the CIDRs: the service and pod networks (both /16 here) must not overlap with each other or with the machine's own network.
This command prints a kubeadm join command for the worker nodes; write it down. Then run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
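kubectl can now reach the API server; the master will report NotReady until the network plugin from step 7 below is installed, which is expected at this stage:
kubectl get nodes # STATUS shows NotReady until the CNI plugin is deployed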
Deploy the worker nodes; run on 192.168.63.131 and 192.168.63.132.
Copy the join command generated above. The one below is only an example; use the command from your own init output:
kubeadm join 192.168.63.130:6443 --token 69x9mm.sjvn0r2b64bcel1e \
--discovery-token-ca-cert-hash sha256:c6a3f14f3988b1bf2ade1a07204cadd70c0a037afa2b757ce65747786e17bf6f
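If the join command was lost, or its token has expired (tokens are valid for 24 hours by default), generate a fresh one on the master:
kubeadm token create --print-join-command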
7. Install a network plugin (CNI)
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz
cp flannel /opt/cni/bin/
Pick one of the two plugins below and install it on the master. On cloud servers flannel is recommended, since calico may conflict with the cloud network environment.
Install the flannel plugin (lightweight, good for quick setups; recommended for beginners)
Download the yaml file (the manifest used in this guide is reproduced below):
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"cniVersion": "0.2.0",
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: lizhenliang/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: lizhenliang/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
Change the Network value in net-conf.json to the pod-network-cidr passed to kubeadm init above (it must match, otherwise the cluster network will break):
sed -i 's/10.244.0.0/10.240.0.0/' kube-flannel.yml
Then apply the manifest and check the result:
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
kubectl get nodes
Install the calico plugin (for more complex network environments)
Download the yaml file (use the raw link; wget on the GitHub blob page would fetch HTML instead of the manifest):
wget https://raw.githubusercontent.com/xuwei777/xw_yaml/main/calico-3.9.2.yaml
Change the CIDR in the file to the pod-network-cidr passed to kubeadm init above (it must match, otherwise the cluster network will break):
sed -i 's/192.168.0.0/10.240.0.0/g' calico-3.9.2.yaml
Then apply the manifest and check the result:
kubectl apply -f calico-3.9.2.yaml
kubectl get pod --all-namespaces -o wide
8. Test the Kubernetes cluster
Create a pod in the cluster and verify that it runs normally.
Create a deployment and expose it outside the cluster; a random NodePort will be assigned:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
Check the pod status (it must be Running with READY 1/1) and see which NodePort the nginx service's port 80 was mapped to:
kubectl get pod,svc
Open the mapped port on any machine's IP and check that nginx responds.
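A quick shell test that looks up the assigned NodePort instead of reading it off manually (a sketch; the service name nginx matches the deployment created above):
NODEPORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.63.130:$NODEPORT # any node IP works; expect the nginx welcome page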
9. Common Kubernetes commands
Check the status of pods, services, endpoints, secrets, and so on:
kubectl get <resource> # e.g. kubectl get pod; add -o wide for details, and -n <namespace> for another namespace
Create or update the resources in a yaml file, or in a directory containing a set of yaml files. In practice nearly everything is driven by yaml files and pods are rarely created directly on the command line, so prefer yaml files (a minimal example follows):
kubectl apply -f xxx.yaml # e.g. kubectl apply -f nginx.yaml; creates the resource if absent and updates it otherwise, which makes it handier than create
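For reference, a minimal nginx.yaml matching the test deployment from step 8 might look like this (a sketch: one Deployment plus one NodePort Service):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80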
Delete the resources in a yaml file, or in a directory containing a set of yaml files:
kubectl delete -f xxx.yaml # e.g. kubectl delete -f nginx.yaml
Inspect a resource's state and events, e.g. when pods of a deployment fail to come up; mostly used to debug scheduling problems:
kubectl describe pod <pod name> # run kubectl get pod first and copy the name of the failing pod
View a pod's logs, for troubleshooting pods that are not ready:
kubectl logs <pod name> # run kubectl get pod first and copy the name of the failing pod
Check CPU and memory usage of nodes or pods (requires the metrics-server addon):
kubectl top <node|pod> # e.g. kubectl top node, kubectl top pod
Open a shell inside a pod:
kubectl exec -ti <pod name> /bin/bash # run kubectl get pod first and copy the pod name