Setting up KubeSphere

1. Install the KubeSphere prerequisites

NFS file system

1. Install nfs-server

# Run on every machine
yum install -y nfs-utils

# Run the following on the master only: export the shared directory
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory
mkdir -p /nfs/data

# Enable and start the NFS services (on the master)
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Reload the export configuration
exportfs -r

# Check that the configuration took effect
exportfs
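The options in the exports entry above determine how clients may use the share. A minimal sketch of what the entry means, run against a scratch file rather than the real /etc/exports (which needs root):

```shell
# Write the same export entry to a scratch file and pull out the options.
EXPORTS_FILE=$(mktemp)
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > "$EXPORTS_FILE"

# Extract the option list between the parentheses.
OPTS=$(sed -n 's/.*(\(.*\)).*/\1/p' "$EXPORTS_FILE")
echo "$OPTS"   # -> insecure,rw,sync,no_root_squash

# insecure       - accept client requests from ports above 1024
# rw             - clients may read and write
# sync           - commit writes to disk before replying
# no_root_squash - remote root keeps root privileges on the share
rm -f "$EXPORTS_FILE"
```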

2. Configure the nfs-client (optional; run on the node machines)

# Check that the master's NFS export is visible (use the master's actual IP)
showmount -e 192.168.1.11

# Create the local mount point
mkdir -p /nfs/data

# Mount the share (adjust the IP to your environment)
mount -t nfs 192.168.1.11:/nfs/data /nfs/data
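The mount above does not survive a reboot. To make it persistent, the same mount can be added to /etc/fstab; a sketch on a scratch file (append the line to the real /etc/fstab as root, with your own IP):

```shell
# fstab entry for the NFS mount, written to a scratch file for illustration.
FSTAB=$(mktemp)
echo "192.168.1.11:/nfs/data /nfs/data nfs defaults 0 0" >> "$FSTAB"

# Fields: source, mount point, fs type, mount options, dump flag, fsck order
awk '{print $3}' "$FSTAB"   # -> nfs
rm -f "$FSTAB"
```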

3. Configure the default storage

Create the following YAML file to set up a default StorageClass for dynamic provisioning (change the NFS server IP addresses below to your own):

## Creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.4 ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.4
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the YAML file:

kubectl apply -f nfs.yaml

Confirm that the storage class was created:

kubectl get sc
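With nfs-storage shown as the default class, dynamic provisioning can be exercised with a small test claim. A sketch, where the claim name nfs-pvc and the 200Mi size are illustrative; omitting storageClassName makes the claim fall back to the default class:

```shell
# Generate a minimal test PVC.
cat > nfs-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF

# On the cluster:
#   kubectl apply -f nfs-pvc.yaml
#   kubectl get pvc nfs-pvc    # should reach STATUS Bound if provisioning works
grep -c 'ReadWriteMany' nfs-pvc.yaml   # -> 1
rm -f nfs-pvc.yaml
```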

4. metrics-server

Cluster metrics monitoring component. Create the following YAML file and apply it:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
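Once the v1beta1.metrics.k8s.io APIService reports Available, "kubectl top nodes" and "kubectl top pods -A" should return usage data. The CPU column is reported in millicores (the "m" suffix); a sketch parsing a hypothetical output row:

```shell
# A hypothetical row from `kubectl top nodes` (the real columns are
# NAME, CPU(cores), CPU%, MEMORY(bytes), MEMORY%):
ROW="node1 250m 12% 1024Mi 27%"

# Strip the millicore suffix from the CPU column (1000m = 1 core).
CPU_MILLI=$(echo "$ROW" | awk '{print $2}' | tr -d 'm')
echo "$CPU_MILLI"   # -> 250
```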

5. Install KubeSphere

KubeSphere official site (kubesphere.io): a hybrid container cloud for cloud-native applications, a PaaS container platform solution that supports Kubernetes multi-cluster management.

1. Download the core files

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2. Modify the cluster-configuration file

Change the components' "enabled: false" entries in the file to "true" to enable all features
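The edit above amounts to flipping each pluggable component's "enabled: false" flag. A sketch with sed, shown on a scratch copy (the "devops" key is one example component; run the sed line against cluster-configuration.yaml after reviewing which components you actually want):

```shell
# Scratch copy standing in for cluster-configuration.yaml.
CFG=$(mktemp)
printf 'devops:\n  enabled: false\n' > "$CFG"

# Flip every pluggable component on.
sed -i 's/enabled: false/enabled: true/g' "$CFG"

grep 'enabled' "$CFG"   # now reads: enabled: true
rm -f "$CFG"
```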

3. Run the installation

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

4. Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

When the installation finishes, access port 30880 on any host.

Account: admin
Password: P@88w0rd

Fixing the problem of the etcd monitoring certificate not being found:

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
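The secret bundles three files from the kubeadm default PKI directory; if your control plane stores them elsewhere, adjust the paths. A sketch that checks the expected files exist before creating the secret:

```shell
# Default kubeadm locations of the etcd client certificates; override PKI
# if your control plane uses a different directory.
PKI=${PKI:-/etc/kubernetes/pki}
for f in "$PKI/etcd/ca.crt" "$PKI/apiserver-etcd-client.crt" "$PKI/apiserver-etcd-client.key"; do
  if [ -f "$f" ]; then echo "found: $f"; else echo "MISSING: $f"; fi
done
```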
