Detailed Plan for Offline Deployment of a K8s Cluster

K8s deployment

Installation architecture:

k8s v1.23.4
ingress-controller v1.1.2
helm v3.8.1
docker v20.10.14
harbor ****:5000
shared storage: local-path

@@@.221 master (installation media under /root/)
@@@.222 master
@@@.223 master
@@@.224 node
@@@.225 node

1. Host environment initialization

Install Python 3
yum -y install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel zlib-devel
yum -y install python3
ll /usr/bin/python3

Configure passwordless SSH and remote login for the root user

Clock synchronization
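A hedged sketch of these two steps (not part of the original text): the node IPs follow the plan above, and the use of chrony is an assumption.
# On the deploy node (@@@.221), generate a key pair and push it to every node so kubeasz/ansible can log in without a password
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
for ip in @@@.221 @@@.222 @@@.223 @@@.224 @@@.225; do ssh-copy-id -o StrictHostKeyChecking=no root@${ip}; done
# Clock synchronization (assumption: chrony is installable from the local yum source; kubeasz can also manage this via the [chrony] group in the hosts file)
yum -y install chrony
systemctl enable --now chronyd
chronyc sources -v   # confirm time sources are reachable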

2. Downloading the installation packages:

https://github.com/easzlab/kubeasz/blob/3.6.2/docs/setup/offline_install.md
#Download the ezdown tool script; this example uses kubeasz version 3.6.2
export release=3.6.2
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown

#If kube* files already exist under /etc/kubeasz/bin, delete them first: rm -f /etc/kubeasz/bin/kube*
#Pin the k8s version to v1.23.4 and docker to 20.10.14 (edit ezdown directly with vi to set the versions), then run the download:
./ezdown -D
#The download first starts a kubeasz-k8s-bin container; all files are fetched inside containers, copied to the host with docker cp, and the containers are then removed.
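A hedged note on pinning the versions: recent ezdown releases expose them as shell variables near the top of the script (e.g. DOCKER_VER and K8S_BIN_VER; names and value formats vary between kubeasz releases, so check your copy before editing). A sketch:
grep -n 'DOCKER_VER\|K8S_BIN_VER' ./ezdown                 # confirm the variable names/values in this release
sed -i 's/^DOCKER_VER=.*/DOCKER_VER=20.10.14/' ./ezdown    # or simply edit with vi
vi ./ezdown                                                # set the k8s binary version to the v1.23.4 build offered by this release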

#Download offline system packages (for hosts that cannot use yum/apt repositories)
./ezdown -P centos_7

#[Optional] Download extra container images if more components are needed (cilium, flannel, prometheus, etc.)
./ezdown -X flannel
./ezdown -X prometheus

#After the script completes successfully, everything (kubeasz code, binaries, offline images) is laid out under /etc/kubeasz

3. Offline installation of the K8s cluster:

After the downloads above complete, copy the entire /etc/kubeasz directory to the same path on the target offline server, then run the following commands on the offline server from that directory.
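A hedged example of the transfer: rsync over SSH (or scp -r) works, with <offline-host> as a placeholder for the target offline server.
rsync -avz /etc/kubeasz/ root@<offline-host>:/etc/kubeasz/
# or, without rsync:
# scp -r /etc/kubeasz root@<offline-host>:/etc/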

#Install K8s/docker offline; this checks the local files and should report that everything has already been downloaded. Run on the offline installation machine:
./ezdown -D
./ezdown -X

Start the kubeasz container
./ezdown -S

Enter the container to set up the cluster
docker exec -it kubeasz bash
ezctl new zzk8s  # ${cluster name}
#The setup uses the orchestration templates under /etc/kubeasz/example/.
Running the command creates a clusters directory under /etc/kubeasz containing a directory for the new cluster.
Then, following the prompts, configure the hosts file and config.yml:
modify the hosts file according to the node plan above, along with the main cluster-level options (network, ports, etc.);
other cluster component settings can be changed in config.yml.

#Modify hosts as prompted, as shown below. The server IPs in the etcd, kube_master, kube_node, and ex_lb sections were adjusted per the node plan; note that only IPs can be used here, not hostnames.
#CONTAINER_RUNTIME should be set to containerd (note: CONTAINER_RUNTIME in the generated hosts file must NOT be docker; do not get this wrong). Other settings can stay at their defaults.
#In this deployment flannel was chosen as the network plugin and NODE_PORT_RANGE was adjusted to "30-32767".
cat hosts
############################

[etcd]
@@@.221
@@@.222
@@@.223

# master node(s)
[kube_master]
@@@.221
@@@.222
@@@.223

# work node(s)
[kube_node]
@@@.224
@@@.225

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#@@@168.1.8 NEW_INSTALL=false
@@@.221 NEW_INSTALL=true

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
@@@.221 LB_ROLE=master EX_APISERVER_VIP=@@@.226 EX_APISERVER_PORT=8443
@@@.222 LB_ROLE=backup EX_APISERVER_VIP=@@@.226 EX_APISERVER_PORT=8443
#10.0.1.121 LB_ROLE=backup EX_APISERVER_VIP=10.0.1.120 EX_APISERVER_PORT=8443
#10.0.1.122 LB_ROLE=master EX_APISERVER_VIP=10.0.1.120 EX_APISERVER_PORT=8443
#10.0.1.123 LB_ROLE=master EX_APISERVER_VIP=10.0.1.120 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
@@@.221

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported; this must NOT be docker!!!
# CRI component to enable
CONTAINER_RUNTIME="containerd"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
# network plugin used by the cluster: choose flannel on public cloud, calico on private cloud
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
# mode that kube-proxy runs in
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
# service address range
SERVICE_CIDR="10.100.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
# pod address range; must not overlap with the service range
CLUSTER_CIDR="10.200.0.0/16"

# NodePort Range
# port range for services exposed via NodePort
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
# suffix used for services created in the cluster
CLUSTER_DNS_DOMAIN="zzcluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/zzk8s"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

cat config.yml
############################
# prepare
############################
# [optional] install system packages offline (offline|online); modified for this deployment
INSTALL_SOURCE: "offline"

# [optional] apply OS security hardening: github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# It configures:
# - removing unused yum repositories and enabling GPG key checking
# - removing packages with known issues
# - configuring PAM for strong password checking
# - installing and configuring auditd
# - disabling core dumps via soft limits
# - setting a restrictive umask
# - configuring execute permissions for files in system paths
# - hardening access to the shadow and passwd files
# - disabling unused filesystems
# - disabling rhosts
# - configuring secure ttys
# - configuring kernel parameters via sysctl
# - enabling selinux on EL-based systems
# - removing SUID and SGID bits
# - configuring logins and passwords for system accounts

############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig settings: cluster name and context
CLUSTER_NAME: "zzk8s"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.4"

############################
# role:etcd
############################
# setting a separate wal directory avoids disk IO contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""

############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.]enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd]base (pause) container image
SANDBOX_IMAGE: "easzlab.io.local:5000/easzlab/pause:3.7"

# [containerd]container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker]container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker]enable the remote RESTful API
ENABLE_REMOTE_API: false

# [docker]trusted HTTP (insecure) registries
INSECURE_REG: '["http://easzlab.io.local:5000"]'

############################
# role:kube-master
############################
# k8s cluster master node cert hosts; more IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "@@@.10"
  - "@@@.5"
  - "@@@.6"
  - "@@@.7"
  - "k8s.easzlab.io"
  #- "www.test.com"

# pod network mask length on each node (determines how many pod IPs a node can allocate)
# if flannel uses the --kube-subnet-mgr flag, it reads this setting to assign a pod subnet to each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24

############################
# role:kube-node
############################
# Kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the actual values
KUBE_RESERVED_ENABLED: "no"

# k8s upstream does not recommend enabling system-reserved casually unless you understand the
# system's resource usage from long-term monitoring; the reservation may also need to grow as the
# system keeps running. See templates/kubelet-config.yaml.j2 for the values. The defaults assume a
# 4c/8g VM with a minimal set of system services; increase the reservation on high-performance
# physical machines. Also note that apiserver and other components briefly consume a lot of
# resources during cluster installation, so reserving at least 1g of memory is recommended.
SYS_RESERVED_ENABLED: "no"

############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel]flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel]flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab.io.local:5000/easzlab/flannel:{{ flannelVer }}"

# ------------------------------------------- calico
# [calico]setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the prerequisites
CALICO_IPV4POOL_IPIP: "Always"

# [calico]host IP used by calico-node; BGP neighbors are established over this address; can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico]calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico]whether calico uses route reflectors
# recommended if the cluster exceeds 50 nodes
CALICO_RR_ENABLED: false

# CALICO_RR_NODES sets the route reflector nodes; if unset, the cluster master nodes are used by default
# CALICO_RR_NODES: ["@@@168.1.1", "@@@168.1.2"]
CALICO_RR_NODES: []

# [calico]supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.4"

# [calico]calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# ------------------------------------------- cilium
# [cilium]image version
cilium_ver: "1.11.6"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false

# ------------------------------------------- kube-ovn
# [kube-ovn]node running the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn]offline image tarball
kube_ovn_ver: "v1.5.3"

# ------------------------------------------- kube-router
# [kube-router]public clouds have restrictions and usually need ipinip enabled at all times; in your own environment this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router]NetworkPolicy support switch
FIREWALL_ENABLE: true

# [kube-router]kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

############################
# role:cluster-addon
############################
# coredns: install automatically
dns_install: "yes"
corednsVer: "1.9.3"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# metrics-server: install automatically
metricsserver_install: "yes"
metricsVer: "v0.5.2"

# dashboard: install automatically
dashboard_install: "yes"
dashboardVer: "v2.5.1"
dashboardMetricsScraperVer: "v1.0.8"

# prometheus: install automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "35.5.1"

# nfs-provisioner: install automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "@@@168.1.10"
nfs_path: "/data/nfs"

# network-check: install automatically
network_check_enabled: false
network_check_schedule: "*/5 * * * *"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra components
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

One-shot cluster installation:
docker exec -it kubeasz bash
ezctl setup zzk8s all  # ${cluster name}
Equivalent to: docker exec -it kubeasz ezctl setup zzk8s all

Step-by-step installation (run playbooks 01-07 one at a time; a rough mapping of the steps follows the commands):
ezctl setup zzk8s 01
ezctl setup zzk8s 02
ezctl setup zzk8s 03
ezctl setup zzk8s 04
ezctl setup zzk8s 05
ezctl setup zzk8s 06
ezctl setup zzk8s 07
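For reference, the numbered steps correspond roughly to kubeasz's playbooks as listed below (based on the playbook names shipped with kubeasz; confirm against /etc/kubeasz/playbooks/ for your release):
# 01.prepare       - system preparation, CA and certificates
# 02.etcd          - etcd cluster
# 03.runtime       - container runtime (containerd in this deployment)
# 04.kube-master   - master components
# 05.kube-node     - worker node components
# 06.network       - CNI network plugin
# 07.cluster-addon - coredns, metrics-server, dashboard, etc.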

4. Verifying the cluster

#Ready status, role, uptime and version of each node
kubectl get nodes -o wide
#scheduler/controller-manager/etcd and other components report Healthy
kubectl get cs
#kubernetes master (apiserver) component is running
kubectl cluster-info
#check all cluster pods; the network plugin (flannel), coredns, metrics-server, etc. are installed by default
kubectl get po --all-namespaces
#check all cluster services
kubectl get svc --all-namespaces
#retrieve the token from a master node and use it to log in
kubectl get secret -n kube-system
kubectl describe secret -n kube-system admin-user
#log in to kubernetes-dashboard with the admin-user token
When listing the cluster services during verification we can see that kubernetes-dashboard is installed by default; it is reachable through its NodePort at https://{IP}:{port}/, where {IP} can be any node IP.
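A hedged sketch for extracting the login token (assumption: the admin-user token secret may carry a random suffix such as admin-user-token-xxxxx, so it is looked up by prefix first):
SECRET=$(kubectl -n kube-system get secret | awk '/admin-user/{print $1; exit}')
kubectl -n kube-system get secret "${SECRET}" -o jsonpath='{.data.token}' | base64 -d; echo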
#verify the etcd cluster status
systemctl status etcd   # check the service status
journalctl -u etcd      # view the runtime logs
Run the following on any etcd cluster node.
Set the shell variable $NODE_IPS according to the hosts configuration:
export NODE_IPS="@@@.5 @@@.6 "
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    endpoint health
done
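Optionally (not in the original steps), the same variables can be reused to print a status table showing the leader, DB size and raft term of each member:
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    endpoint status --write-out=table
done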

5. Installing and configuring cluster components

Restore scheduling on the masters:
By default pods are not scheduled onto the master nodes; the following commands re-enable scheduling on the masters so that they can run pods as well.
[root@hx-dssn-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-01 Ready,SchedulingDisabled master 61m v1.24.15
master-02 Ready,SchedulingDisabled master 61m v1.24.15
master-03 Ready,SchedulingDisabled master 61m v1.24.15
worker-01 Ready node 61m v1.24.15
worker-02 Ready node 61m v1.24.15
[root@hx-dssn-master1 ~]# kubectl uncordon master-01 master-02 master-03
node/master-01 uncordoned
node/master-02 uncordoned
node/master-03 uncordoned

kubeasz bundles many commonly used components, including helm, nginx-ingress, prometheus, and more.
Installing harbor:
Offline download: upload harbor-offline-installer-v2.6.4.tgz to /etc/kubeasz/down/
docker exec -it kubeasz bash
ezctl setup zzk8s 11
Equivalent to: dk ezctl setup zzk8s 11
#The harbor playbook is /etc/kubeasz/playbooks/11.harbor.yml;
#its default configuration template is /etc/kubeasz/roles/harbor/templates/harbor-v2.1.yml.j2. To customize the password, edit that file.
After it finishes, run docker ps | grep harbor to check whether everything is running normally.
[root@hx-dssn-master1 harbor]# docker images | grep harbor
goharbor/harbor-exporter v2.6.4 6580b20af112 7 months ago 96.3MB
goharbor/chartmuseum-photon v2.6.4 f0c5731ffd55 7 months ago 227MB
goharbor/redis-photon v2.6.4 d8c7faabd44d 7 months ago 127MB
goharbor/trivy-adapter-photon v2.6.4 cd967bf97d68 7 months ago 442MB
goharbor/notary-server-photon v2.6.4 f4deee780a9e 7 months ago 113MB
goharbor/notary-signer-photon v2.6.4 bb4ffa2e8fcb 7 months ago 110MB
goharbor/harbor-registryctl v2.6.4 2900490af828 7 months ago 139MB
goharbor/registry-photon v2.6.4 65b79e20be53 7 months ago 78.1MB
goharbor/nginx-photon v2.6.4 ce433f9e1584 7 months ago 126MB
goharbor/harbor-log v2.6.4 e5bb5eaa3f91 7 months ago 133MB
goharbor/harbor-jobservice v2.6.4 8ef7e582d2e2 7 months ago 251MB
goharbor/harbor-core v2.6.4 4668209cd469 7 months ago 214MB
goharbor/harbor-portal v2.6.4 58a3c22b3c5e 7 months ago 135MB
goharbor/harbor-db v2.6.4 22e1a69e8b6c 7 months ago 204MB
goharbor/prepare v2.6.4 54b5cee77c99 7 months ago 164MB
[root@hx-dssn-master1 harbor]# docker ps | grep harbor
e5001e431fbb goharbor/harbor-jobservice:v2.6.4 "/harbor/entrypoint.…" 41 minutes ago Up 41 minutes (healthy) harbor-jobservice
9ee5ce9c8211 goharbor/nginx-photon:v2.6.4 "nginx -g 'daemon of…" 41 minutes ago Up 41 minutes (healthy) 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 0.0.0.0:80->8080/tcp, :::80->8080/tcp nginx
d7ed4909b35f goharbor/harbor-core:v2.6.4 "/harbor/entrypoint.…" 41 minutes ago Up 41 minutes (healthy) harbor-core
bc8961fa2e1e goharbor/harbor-db:v2.6.4 "/docker-entrypoint.…" 41 minutes ago Up 41 minutes (healthy) harbor-db
5213578c4b57 goharbor/harbor-registryctl:v2.6.4 "/home/harbor/start.…" 41 minutes ago Up 41 minutes (healthy) registryctl
76a2d7cd1740 goharbor/registry-photon:v2.6.4 "/home/harbor/entryp…" 41 minutes ago Up 41 minutes (healthy) registry
a70ce077f9ca goharbor/redis-photon:v2.6.4 "redis-server /etc/r…" 41 minutes ago Up 41 minutes (healthy) redis
6bbf7cc86948 goharbor/chartmuseum-photon:v2.6.4 "./docker-entrypoint…" 41 minutes ago Up 41 minutes (healthy) chartmuseum
333f5637fad9 goharbor/harbor-portal:v2.6.4 "nginx -g 'daemon of…" 41 minutes ago Up 41 minutes (healthy) harbor-portal
79a4cb616ddd goharbor/harbor-log:v2.6.4 "/bin/sh -c /usr/loc…" 41 minutes ago Up 41 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log

[root@hx-dssn-master1 harbor]# cat /etc/hosts | grep harbor
@@@.221 harbor.easzlab.io.local

[root@hx-dssn-master1 harbor]# cat /var/data/harbor/harbor.yml | grep admin_password
harbor_admin_password: q55AK3lFhqiEsCMi

Check the docker configuration
cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.nju.edu.cn/",
    "https://kuamavit.mirror.aliyuncs.com"
  ],
  "insecure-registries": ["http://easzlab.io.local:5000"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/var/lib/docker"
}

Log in to harbor
[root@harbor harbor]# docker login http://easzlab.io.local:5000
Username: admin
Password: q55AK3lFhqiEsCMi
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Verify:
docker tag cbb01a7bd410 easzlab.io.local:5000/easzlab/registry:2
docker push easzlab.io.local:5000/easzlab/registry:2

Reinstalling:
The data directory is /var/data, and the most important subdirectories are /var/data/database and /var/data/registry; to reinstall harbor from scratch, delete these two directories. The log directory is /var/log/harbor.
Pause harbor:  docker-compose stop     # stops the docker containers without removing them
Resume harbor: docker-compose start    # starts the stopped containers again
Stop harbor:   docker-compose down -v  # stops and removes the docker containers
Start harbor:  docker-compose up -d    # starts all docker containers
To change harbor's runtime configuration, follow these steps:
Stop harbor
docker-compose down -v
Edit the configuration
vim harbor.yml
Run ./prepare to propagate the updated configuration into docker-compose.yml
./prepare
Start harbor
docker-compose up -d

helm, nginx-ingress, and storage (local-path) were installed manually.
Install helm
Download: https://github.com/helm/helm/releases?page=3
Extract: tar -zxvf helm-v3.8.1-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
Install: cp linux-amd64/helm /usr/bin
Verify: helm version
version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}

Install nginx-ingress
Download: https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.1.2
tar -zxvf ingress-nginx-controller-v1.1.2.tar.gz
cat /home/kbs/pkg/ingress-nginx-controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
Images required:
k8s.gcr.io/ingress-nginx/controller:v1.1.2
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
Pull them from a domestic (Aliyun) mirror registry instead:
docker pull registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.2
docker pull registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

Load the images:
docker load -i nginx-ingress-controller.tar
docker load -i kube-webhook-certgen.tar
docker tag 7e5c1cecb086 easzlab.io.local:5000/easzlab/nginx-ingress-controller:v1.1.2
docker push easzlab.io.local:5000/easzlab/nginx-ingress-controller:v1.1.2
docker tag c41e9fcadf5a easzlab.io.local:5000/easzlab/kube-webhook-certgen:v1.1.1
docker push easzlab.io.local:5000/easzlab/kube-webhook-certgen:v1.1.1
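Because the images were retagged into the private registry, deploy.yaml still references k8s.gcr.io and should be pointed at the new tags before applying it. A hedged sed sketch (the image lines in deploy.yaml may carry an @sha256 digest suffix, so check with grep first):
grep -n 'image:' deploy.yaml
sed -i -e 's#k8s.gcr.io/ingress-nginx/controller:v1.1.2\(@sha256:[a-f0-9]*\)\?#easzlab.io.local:5000/easzlab/nginx-ingress-controller:v1.1.2#g' \
       -e 's#k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1\(@sha256:[a-f0-9]*\)\?#easzlab.io.local:5000/easzlab/kube-webhook-certgen:v1.1.1#g' deploy.yaml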

Apply deploy.yaml to install:
kubectl apply -f deploy.yaml
After it completes, check the status of the ingress-nginx-controller pod:
kubectl get po -n ingress-nginx -o wide
kubectl get svc -n ingress-nginx

kubectl get pods -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-fq2kq 0/1 Completed 0 11s 10.32.1.89 centos06
ingress-nginx-admission-patch-fkphb 0/1 Completed 1 11s 10.32.1.90 centos06
ingress-nginx-controller-5c79d9494c-rh5rn 0/1 Running 0 11s 10.32.1.91 centos06
Access the ingress-nginx-controller IP as follows:
curl http://172.20.32.197

404 Not Found

404 Not Found


nginx
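To go beyond the 404 check, a throwaway backend and Ingress can be created; this is a hedged sketch, and the deployment name, image and host test.local are illustrative assumptions (the image must be reachable or preloaded in an offline cluster):
kubectl create deployment web-test --image=nginx:alpine
kubectl expose deployment web-test --port=80
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-test
spec:
  ingressClassName: nginx
  rules:
  - host: test.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-test
            port:
              number: 80
EOF
# The controller should now route requests carrying the test host header:
curl -H 'Host: test.local' http://172.20.32.197/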

Install local-path
Download the package:
https://github.com/rancher/local-path-provisioner/releases/tag/v0.0.24
tar -zxvf local-path-provisioner-0.0.24.tar.gz
cat /home/kbs/pkg/local-path-provisioner-0.0.24/deploy/local-path-storage.yaml
cat /home/kbs/pkg/local-path-provisioner-0.0.24/deploy/local-path-storage.yaml | grep image
image: rancher/local-path-provisioner:v0.0.24
imagePullPolicy: IfNotPresent
image: busybox
imagePullPolicy: IfNotPresent

docker pull rancher/local-path-provisioner:v0.0.24
docker pull busybox
docker save image_name -o ***.tar   # save the images and upload the tarballs to the host
docker load -i ***.tar
docker tag b29384aeb4b1 easzlab.io.local:5000/easzlab/local-path-provisioner:v0.0.24
docker push easzlab.io.local:5000/easzlab/local-path-provisioner:v0.0.24
docker tag a416a98b71e2 easzlab.io.local:5000/easzlab/busybox
docker push easzlab.io.local:5000/easzlab/busybox
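As with the ingress manifest, local-path-storage.yaml still references the upstream image names after the retag; a hedged sed sketch to point it at the private registry copies:
grep -n 'image:' /home/kbs/pkg/local-path-provisioner-0.0.24/deploy/local-path-storage.yaml
sed -i -e 's#rancher/local-path-provisioner:v0.0.24#easzlab.io.local:5000/easzlab/local-path-provisioner:v0.0.24#' \
       -e 's#image: busybox#image: easzlab.io.local:5000/easzlab/busybox#' \
       /home/kbs/pkg/local-path-provisioner-0.0.24/deploy/local-path-storage.yaml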

Install:
kubectl apply -f /home/kbs/pkg/local-path-provisioner-0.0.24/deploy/local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created

Set local-path as the default storage class:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 2m29s

Verify local-path
kubectl apply -f /home/kbs/pkg/local-path-provisioner-0.0.24/examples/pod/pod.yaml,/home/kbs/pkg/local-path-provisioner-0.0.24/examples/pvc/pvc.yaml
pod/volume-test created
persistentvolumeclaim/local-path-pvc created
kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/local-path-pvc Bound pvc-1432e8ff-da6d-4bce-8349-3773ca594bda 128Mi RWO local-path 17s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-1432e8ff-da6d-4bce-8349-3773ca594bda 128Mi RWO Delete Bound default/local-path-pvc local-path 11s
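Because local-path is now the default StorageClass, a PVC that omits storageClassName also binds to it; a minimal hedged sketch (name and size are illustrative):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
EOF
kubectl get pvc demo-pvc   # stays Pending until a consuming pod is scheduled (VOLUMEBINDINGMODE is WaitForFirstConsumer)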

################################# Appendix: commands ###################################

Cluster teardown and cleanup (run this only when you really need to wipe the cluster!!!):
#ezctl destroy zzk8s

./ezctl
Usage: ezctl COMMAND [args]
Cluster setups:
    list          to list all of the managed clusters
    checkout      to switch default kubeconfig of the cluster
    new           to start a new k8s deploy with name 'cluster'
    setup         to setup a cluster, also supporting a step-by-step way
    start         to start all of the k8s services stopped by 'ezctl stop'
    stop          to stop all of the k8s services temporarily
    upgrade       to upgrade the k8s cluster
    destroy       to destroy the k8s cluster
    backup        to backup the cluster state (etcd snapshot)
    restore       to restore the cluster state from backups
    start-aio     to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd      to add a etcd-node to the etcd cluster
    add-master    to add a master node to the k8s cluster
    add-node      to add a work node to the k8s cluster
    del-etcd      to delete a etcd-node from the etcd cluster
    del-master    to delete a master node from the k8s cluster
    del-node      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm      to manage client kubeconfig of the k8s cluster

Use "ezctl help <command>" for more information about a given command.
Command group 1: cluster installation operations
list: show all currently managed clusters
checkout: switch the default cluster kubeconfig
new: create a new cluster configuration
setup: install a new cluster
start: start a temporarily stopped cluster
stop: temporarily stop a cluster (including the pods running in it)
upgrade: upgrade the cluster's k8s component versions
destroy: destroy the cluster
backup: back up the cluster (etcd data only; PV data and application data are not included)
restore: restore the cluster from a backup
start-aio: create an all-in-one single-node cluster (similar to minikube)

Command group 2: cluster node operations
add-etcd: add an etcd node
add-master: add a master node
add-node: add a worker node
del-etcd: remove an etcd node
del-master: remove a master node
del-node: remove a worker node

Command group 3: extra operations
kcfg-adm: manage client kubeconfigs
