Installing Kubernetes 1.18.0 with the Dashboard UI on CentOS 7: A Complete Walkthrough (Commands Only)

Commands for setting up a Kubernetes 1.18.0 cluster with one master and two nodes.

Contents

1. Environment Requirements (All Nodes)

2. System Configuration (All Nodes)

2.1 Disable the Firewall

2.2 Disable SELinux

2.3 Disable the Swap Partition

2.4 Hostnames

2.5 Add hosts Entries

2.6 Pass Bridged IPv4 Traffic to iptables Chains

2.7 Time Synchronization

2.8 Enable IPVS

3. Container Environment Setup

3.1 Overview

3.2 Install Docker

3.3 Add the Alibaba Cloud YUM Repository

3.4 Install kubeadm, kubelet, and kubectl

3.5 Deploy the Kubernetes Master Node

3.6 Join the Kubernetes Nodes

3.7 Deploy the CNI Network Plugin

3.8 Verification

4. Deploy the Dashboard

4.1 Install the Dashboard

4.2 Expose via NodePort

4.3 Check the Exposed Port

4.4 Authorization

5. Miscellaneous

5.1 Removing Docker

5.2 Removing kubelet

5.3 Other Useful Commands

6. Related Files

6.1 kube-flannel.yml

6.2 recommended.yaml



1. Environment Requirements (All Nodes)


CentOS 7, with at least 4 CPUs, 8 GB RAM, and a 50 GB disk per node.
Full network connectivity between all machines in the cluster.
Outbound internet access, needed for pulling images.
Swap must be disabled.

2. System Configuration (All Nodes)


2.1 Disable the Firewall


Stop the firewall:

systemctl stop firewalld


Prevent the firewall from starting on boot:

systemctl disable firewalld
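
To double-check, the commands below should confirm firewalld is off on each node (a quick sanity check, assuming firewalld is the only firewall in use):

systemctl is-active firewalld    # should print "inactive"
systemctl is-enabled firewalld   # should print "disabled"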

2.2 Disable SELinux


Permanently:
# permanent

sed -i 's/enforcing/disabled/' /etc/selinux/config


# reboot

reboot


Temporarily:
# temporary

setenforce 0
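
To verify, getenforce should now report Permissive (or Disabled after rebooting with the edited config):

getenforce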

2.3 Disable the Swap Partition


Permanently disable swap:
# permanent

sed -ri 's/.*swap.*/#&/' /etc/fstab


# reboot

reboot


Temporarily disable swap:

swapoff -a
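
To verify that swap is really off, the Swap line should show all zeros and no swap devices should be listed:

free -h       # the Swap line should read 0B 0B 0B
swapon -s     # no output means no active swap devices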

2.4 Hostnames


Set the hostname:

hostnamectl set-hostname <hostname>


Set the hostname on 192.168.1.195:

hostnamectl set-hostname k8s-master


Set the hostname on 192.168.1.190:

hostnamectl set-hostname k8s-node1


Set the hostname on 192.168.1.180:

hostnamectl set-hostname k8s-node2

2.5 Add hosts Entries


Add hosts entries on every node:

cat >> /etc/hosts << EOF
192.168.1.195 k8s-master
192.168.1.190 k8s-node1
192.168.1.180 k8s-node2
EOF
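
A quick check that the new entries resolve (assuming the nodes can already reach one another):

ping -c 1 k8s-master
ping -c 1 k8s-node1
ping -c 1 k8s-node2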

2.6 Pass Bridged IPv4 Traffic to iptables Chains

Run the following on every node:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF


# load the br_netfilter module

modprobe br_netfilter


# check that the module is loaded

lsmod | grep br_netfilter


# apply the settings

sysctl --system  
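
To confirm the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1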

2.7 Time Synchronization


Set up time synchronization on every node:

yum install ntpdate -y
ntpdate time.windows.com


If you have your own NTP server you can use it instead; that is what I do here.

ntpdate 192.168.1.169
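
To keep the clocks from drifting again, a cron entry can repeat the sync periodically (a sketch; adjust the address to your own NTP server):

# sync every 30 minutes against the internal NTP server
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate 192.168.1.169 >/dev/null 2>&1") | crontab -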

2.8 Enable IPVS


Install ipset and ipvsadm on every node:

yum -y install ipset ipvsadm


Run the following script on all nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


Make the script executable, run it, and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


Check again that the modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4
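
Note that loading these modules does not by itself switch kube-proxy to IPVS mode. Once the cluster is up (section 3.5), you can set the mode in the kube-proxy ConfigMap; a sketch:

# on the master: find "mode:" and set it to "ipvs"
kubectl edit configmap kube-proxy -n kube-system

# recreate the kube-proxy pods so they pick up the new mode
kubectl delete pod -n kube-system -l k8s-app=kube-proxy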

3. Container Environment Setup


Install Docker, kubeadm, kubelet, and kubectl on all nodes.


3.1 Overview


Kubernetes uses Docker as its default CRI (container runtime), so Docker must be installed first.

3.2 Install Docker


Install Docker:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.3.ce-3.el7
systemctl enable docker && systemctl start docker
docker version

Configure a Docker registry mirror (and set the systemd cgroup driver):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],    
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker
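
To confirm Docker picked up the systemd cgroup driver from daemon.json:

docker info | grep -i "cgroup driver"
# Cgroup Driver: systemd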

3.3 Add the Alibaba Cloud YUM Repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.4 Install kubeadm, kubelet, and kubectl


Because releases move quickly, pin the version explicitly:

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0


To keep the cgroup driver used by Docker consistent with the one used by kubelet, edit the /etc/sysconfig/kubelet file:

vim /etc/sysconfig/kubelet


# change to
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Enable kubelet at boot. It will not start yet because no config file has been generated; it starts automatically once the cluster is initialized:

systemctl enable kubelet
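
A quick version check to confirm the pinned packages were installed:

kubeadm version -o short   # v1.18.0
kubelet --version          # Kubernetes v1.18.0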

3.5 Deploy the Kubernetes Master Node


Deploy the Kubernetes master node (192.168.1.195):
# the default registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Alibaba Cloud mirror

kubeadm init \
  --apiserver-advertise-address=192.168.1.195 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

Following the prompts in the init output, set up the kubectl tool on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
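
Alternatively, when working as root you can point kubectl at the admin config directly (this option is also shown in kubeadm's own output):

export KUBECONFIG=/etc/kubernetes/admin.conf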

3.6 Join the Kubernetes Nodes


Run the following on 192.168.1.190 and 192.168.1.180:
# join the node to the cluster

kubeadm join 192.168.1.195:6443 --token 4016im.eg4e10yamcbxjm59 \
    --discovery-token-ca-cert-hash sha256:ce2111ce594e5189255144a72268250e5eedda87470cc3a1f69f8c973927699e


    Note: the command above is not fixed; it is generated by the init step in 3.5. Read your own output carefully.

The default token is valid for 24 hours; once it expires it can no longer be used. Create a new one with:

kubeadm token create --print-join-command


# generate a token that never expires (can be used for the join in 3.6, or just ignore this)

kubeadm token create --ttl 0
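
To review existing tokens and their remaining lifetime:

kubeadm token list   # a TTL of <forever> marks a non-expiring token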

3.7 Deploy the CNI Network Plugin


Check node status with the kubectl tool on the master node:
kubectl get nodes
All nodes will show as NotReady.

Deploy the CNI network plugin on the master node (the download may fail; if it does, save the file locally and apply it from there):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

(The file contents are provided at the end of this article.)
The installation takes around two minutes to finish; be patient.
Check the rollout progress of the CNI network plugin:

kubectl get pods -n kube-system


All pods showing Running means everything is fine (this also takes a while; if a pod errors out, check the logs under /var/log/messages).
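
To follow the rollout, you can watch the pods or inspect the flannel pods directly (the app=flannel label comes from the manifest in section 6.1):

kubectl get pods -n kube-system -w            # watch until everything is Running
kubectl logs -n kube-system -l app=flannel    # flannel pod logs if something is stuck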

3.8 Verification


Check the node status again with kubectl on the master node:

kubectl get nodes


All nodes showing Ready means everything is fine.

Check the cluster component health:

kubectl get cs


Healthy means everything is fine.

View cluster info:

kubectl cluster-info

4. Deploy the Dashboard


4.1 Install the Dashboard


The Dashboard is the official web UI for basic management of Kubernetes resources.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml


When downloading the Dashboard, pick the release that matches your Kubernetes version (check it with kubectl version); my Kubernetes version is 1.18.0, and the matching Dashboard release is v2.0.3.
The download usually fails; if so, open the URL in a browser, save recommended.yaml from there, copy it to the server, and run:

kubectl apply -f recommended.yaml

(The file contents are provided at the end of this article.)
That completes the installation.
Once installed, the Dashboard is reachable only from inside the cluster network.
You can reach the login page but cannot log in, because the Dashboard allows HTTP access only from localhost and 127.0.0.1, while all other addresses must use HTTPS. So to reach the Dashboard from another machine, you need a different access method.

4.2 Expose via NodePort


NodePort exposes the node directly to the outside network; it is recommended only for development environments and single-node installations.
Enabling NodePort is simple; just run kubectl edit against the service (note that the v2.0.3 manifest in section 6.2 places it in the kubernetes-dashboard namespace):

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard


The output looks like this:

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "1750"
  uid: 9329577a-4d10-11e8-a548-00155d000529
spec:
  clusterIP: 10.103.5.139
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Change type: ClusterIP above to type: NodePort, then save and quit:

:wq

Afterwards, use the kubectl get service command to see the automatically assigned port.
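
Equivalently, the same change can be made non-interactively with a patch (a one-line sketch of the edit above):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'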

4.3 Check the Exposed Port

kubectl -n kubernetes-dashboard get service kubernetes-dashboard


The output looks like this:
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.5.139   <none>        443:31795/TCP   4h
As shown above, the Dashboard is now exposed on port 31795 and can be reached externally at https://<node-ip>:31795. Note that in a multi-node cluster you must use the IP of the node actually running the Dashboard, not the master's IP.

4.4 Authorization


Create a service account and bind it to the default cluster-admin cluster role:
# create the user

kubectl create serviceaccount dashboard-admin -n kube-system


# grant permissions

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin


# retrieve the user's token

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')


The output looks like this:
Name:         dashboard-admin-token-sph56
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 372862a7-13a0-4584-8237-cc29f9974711

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImV4NmxncUVRZERFZEFUSS1zWHpEV2xwcDduMDhSOTQta3AtaUdIaExMNzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc3BoNTYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzcyODYyYTctMTNhMC00NTg0LTgyMzctY2MyOWY5OTc0NzExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.IXUT5vjK6O9TyCpPjW8ox3E4VGE9v18SNvJqQDJ9Qo5NTcK6AHBJdg9JibOKoMt3taUIlr0ikPZUNw09Fo8Y84loLnMBq6FNWtg5LKRbg0xKmCgXea3Id87ZXjKXJPjjQrpp-if2IeFiT2dOEk8B_w3BSKU_yo2GhqyhzpsUDqiWQpoGmwFDzroeEHJjBPn8y4mL1yj7X5JzNiKh9VbcaEWBuEaCTogTINN4GxAgvl_GogPnY_lI_rKIiJqDqxa5ax4hztH0SfylTC2fGu9sg7tNwM6-fKCuPsFY9MVVyVF6X0123kqUmgApRsdPA2fm67AFQyEQvU_jd75UZZ8tAQ


Use the token from the output to log in to the Dashboard.

5. Miscellaneous (I'm getting tired, so the rest is not in formatted code blocks)


5.1 Removing Docker


How do you remove Docker after a bad install or a version conflict?
Uninstalling Docker:

Check the current Docker status:

systemctl status docker


If it is running, stop it:

systemctl stop docker


List the Docker packages installed via yum:

yum list installed |grep docker

Remove every installed Docker package:

yum -y remove docker.XXXXXX (the packages found above)
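
The same can be done in one pass (a sketch, equivalent to removing each package found above):

yum list installed | grep -i docker | awk '{print $1}' | xargs -r yum -y remove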

Check for Docker-related RPM files:

rpm -qa |grep docker

Docker's image files live under /var/lib/docker by default.

Delete that directory:

rm -rf /var/lib/docker

Remove any remaining Docker packages.
To be safe, run the removal once more:
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

find / -name docker (locate any remaining docker files and delete them)

rm -rf /etc/systemd/system/docker.service.d
rm -rf /var/lib/docker
rm -rf /var/run/docker

Docker is now completely uninstalled.

5.2 Removing kubelet


yum remove -y kubelet kubeadm kubectl
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd

5.3 Other Useful Commands

Creating objects
Create from a YAML file:

kubectl create -f xxx.yaml (not recommended: it cannot update, you must delete first)

kubectl apply -f xxx.yaml (create + update, can be run repeatedly)

Deleting objects
Delete via a YAML file:

kubectl delete -f xxx.yaml

List pods/svc/deployments etc. in the kube-system namespace
(the -o wide option also shows which node each object runs on)

kubectl get pod/svc/deployment -n kube-system

List pods/svc/deployments etc. across all namespaces
kubectl get pod/svc/deployment --all-namespaces

Restarting a pod
(you cannot remove the application this way: a replica controller such as a deployment/rc exists, so a deleted pod is pulled back up)

kubectl get pod -n kube-system

Describe a pod:
kubectl describe pod XXX -n kube-system

View pod logs (add -c <container-name> if the pod has multiple containers)
kubectl logs xxx -n kube-system

Delete an application (first determine what created it, then delete the corresponding kind):
kubectl delete deployment xxx -n kube-system

Delete by label:
kubectl delete pod -l app=flannel -n kube-system

Scaling up
kubectl scale deployment spark-worker-deployment --replicas=8

Exporting configuration files:
Export kube-proxy:
kubectl get ds -n kube-system -l k8s-app=kube-proxy -o yaml > kube-proxy-ds.yaml
Export kube-dns:
kubectl get deployment -n kube-system -l k8s-app=kube-dns -o yaml > kube-dns-dp.yaml
kubectl get services -n kube-system -l k8s-app=kube-dns -o yaml > kube-dns-services.yaml
Export all configmaps:
kubectl get configmap -n kube-system -o wide -o yaml > configmap.yaml

View cluster info
kubectl cluster-info

View component status
kubectl get componentstatuses

# view the kubelet process startup arguments
ps -ef | grep kubelet

View logs:
journalctl -u kubelet -f

Mark a node unschedulable:
kubectl cordon node1

Drain pods onto other nodes:
kubectl drain node1

Mark the node schedulable again:
kubectl uncordon node1

Allow pods to run on the master:
kubectl taint nodes master.k8s node-role.kubernetes.io/master-
Forbid pods on the master:
kubectl taint nodes master.k8s node-role.kubernetes.io/master=:NoSchedule

Viewing and finding resources
Get commands with basic output
$ kubectl get services # list all services in the current namespace
$ kubectl get pods --all-namespaces # list all pods in all namespaces
$ kubectl get pods -o wide # list all pods with extra detail
$ kubectl get deployment my-dep # list a particular deployment
$ kubectl get pods --include-uninitialized # list all pods in the namespace, including uninitialized ones

Describing resources with verbose output
$ kubectl describe nodes my-node
$ kubectl describe pods my-pod
$ kubectl get services --sort-by=.metadata.name # list services sorted by name

List pods sorted by restart count
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

Get the version label of all pods with the label app=cassandra
$ kubectl get pods --selector=app=cassandra rc -o jsonpath='{.items[*].metadata.labels.version}'

Get the ExternalIP of all nodes
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

List the names of the pods belonging to a particular RC
The "jq" command is useful for transforms that are too complex for jsonpath; see https://stedolan.github.io/jq/
$ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

Check which nodes are Ready
$ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

List the Secrets currently in use by pods
$ kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

Updating resources
$ kubectl rolling-update frontend-v1 -f frontend-v2.json # rolling update of the frontend-v1 pods
$ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # rename the resource and update the image
$ kubectl rolling-update frontend --image=image:v2 # update the image of the frontend pods
$ kubectl rolling-update frontend-v1 frontend-v2 --rollback # abort an in-progress rolling update
$ cat pod.json | kubectl replace -f - # replace a pod based on JSON passed in via stdin

Force replace: deletes and then recreates the resource. Causes a service outage.
$ kubectl replace --force -f ./pod.json

Create a service for the nginx RC, serving on local port 80 and connecting to port 8000 in the containers
$ kubectl expose rc nginx --port=80 --target-port=8000

Update the image version (tag) of a single-container pod to v4
$ kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
$ kubectl label pods my-pod new-label=awesome # add a label
$ kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # add an annotation
$ kubectl autoscale deployment foo --min=2 --max=10 # autoscale deployment "foo"

Patching resources
Partially update resources using a strategic merge patch.

$ kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # partially update a node

Update a container's image; spec.containers[*].name is required because it is the merge key
$ kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

Update a container's image using a JSON patch with positional arrays
$ kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

Disable a deployment's livenessProbe using a JSON patch with positional arrays
$ kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'

Editing resources
Edit any API resource in an editor.

$ kubectl edit svc/docker-registry # edit the service named docker-registry
$ KUBE_EDITOR="nano" kubectl edit svc/docker-registry # use an alternative editor
Scaling resources
$ kubectl scale --replicas=3 rs/foo # scale a replicaset named 'foo' to 3
$ kubectl scale --replicas=3 -f foo.yaml # scale a resource specified in "foo.yaml" to 3
$ kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # if the deployment named mysql's current size is 2, scale mysql to 3
$ kubectl scale --replicas=5 rc/foo rc/bar rc/baz # scale multiple replication controllers

Deleting resources
$ kubectl delete -f ./pod.json # delete a pod of the type and name specified in pod.json
$ kubectl delete pod,service baz foo # delete the pod named "baz" and the service named "foo"
$ kubectl delete pods,services -l name=myLabel # delete pods and services with the label name=myLabel
$ kubectl delete pods,services -l name=myLabel --include-uninitialized # delete pods and services with the label name=myLabel, including uninitialized ones
$ kubectl -n my-ns delete po,svc --all # delete all pods and services in the my-ns namespace, including uninitialized ones
Interacting with running pods
$ kubectl logs my-pod # dump pod logs (stdout)
$ kubectl logs my-pod -c my-container # dump the logs of a container in the pod (stdout; for multi-container pods)
$ kubectl logs -f my-pod # stream pod logs (stdout)
$ kubectl logs -f my-pod -c my-container # stream the logs of a container in the pod (stdout; for multi-container pods)
$ kubectl run -i --tty busybox --image=busybox -- sh # run a pod as an interactive shell
$ kubectl attach my-pod -i # attach to a running container
$ kubectl port-forward my-pod 5000:6000 # forward port 6000 of the pod to local port 5000
$ kubectl exec my-pod -- ls / # run a command in an existing container (single-container pod)
$ kubectl exec my-pod -c my-container -- ls / # run a command in an existing container (multi-container pod)
$ kubectl top pod POD_NAME --containers # show metrics for a given pod and its containers
Interacting with nodes and the cluster
$ kubectl cordon my-node # mark my-node as unschedulable
$ kubectl drain my-node # drain my-node in preparation for maintenance
$ kubectl uncordon my-node # mark my-node as schedulable again
$ kubectl top node my-node # show metrics for my-node
$ kubectl cluster-info # show the addresses of the master and services
$ kubectl cluster-info dump # dump the current cluster state to stdout
$ kubectl cluster-info dump --output-directory=/path/to/cluster-state # dump the current cluster state to /path/to/cluster-state

If a taint with that key and effect already exists, its value is replaced as specified
$ kubectl taint nodes foo dedicated=special-user:NoSchedule

6. Related Files

6.1 kube-flannel.yml (create a new kube-flannel.yml file and paste in the contents below)

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

6.2 recommended.yaml (likewise, create a recommended.yaml file and paste in the contents below; for use with Kubernetes 1.18.0)

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
