Kubernetes + Ceph

Environment Requirements

Virtual Machine

Docker

Docker-compose

Kubernetes 1.19.15

MySQL 5.7

MongoDB 4.2.3

InfluxDB

Redis

Kafka

Ceph 14.2.10 Nautilus

Harbor

Nginx

Nacos

Server Requirements
  • CentOS 7.6+
  • APP

    Kubernetes

    master 2C 4G 50GiB

    node01 4C 12G 100GiB

    node02 4C 12G 100GiB

    node03 4C 12G 100GiB

  • DB

    Kubernetes

    master 2C 4G 50GiB

    node01 4C 8G 50GiB

    node02 4C 8G 50GiB

  • Harbor

    2C 8G 200GiB

  • Ceph

    ceph1 2C 4G 50GiB (ceph-deploy ceph-mon)

    ceph2 2C 8G 50GiB (ceph-mon node), plus an extra 1 TB disk

    ceph3 2C 8G 50GiB (ceph-mon node), plus an extra 1 TB disk

    Each OSD node must have one additional disk attached

  • Nginx Nacos

    2C 4G 50GiB

Virtual Machine Installation

Basic Service Deployment

Nginx

Nacos

https://nacos.io/zh-cn/docs/quick-start-docker.html

Download the Nacos files from GitHub
git clone --depth 1 https://github.com/nacos-group/nacos-docker.git
cd nacos-docker/example
# The example directory contains several files: cluster-embedded.yaml  cluster-hostname.yaml  cluster-ip.yaml  init.d  mysql  prometheus  standalone-derby.yaml  standalone-logs  standalone-mysql-5.7.yaml  standalone-mysql-8.yaml


Standalone Deployment

MySQL 5.7 is recommended.

docker-compose -f example/standalone-mysql-5.7.yaml up

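Once the containers are up, the Nacos console should answer on port 8848. A quick reachability check (run on the Docker host; adjust the address if Docker runs remotely):

curl -I http://127.0.0.1:8848/nacos/

An HTTP success or redirect response indicates the console started correctly.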

Harbor

Ceph

Configure the Deployment Environment
Configure the EPEL Repository

Run the following command on every Ceph cluster node to configure the EPEL repository.

yum install epel-release -y


Disable the Firewall

Disable the firewall on all Ceph nodes:

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld


Disable SELinux

Disable SELinux on every host and client node.

  • Temporary: effective immediately but lost after a reboot; use it together with the next step.

    setenforce 0
    


  • Permanent: takes effect after a reboot

    sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
    


Configure Hostnames

Set a permanent static hostname on each node; name the Ceph nodes ceph1 through ceph3.

hostnamectl --static set-hostname ceph1


Add hostname resolution to /etc/hosts on every node

cat << EOF >> /etc/hosts
192.168.0.6   ceph1
192.168.0.7   ceph2
192.168.0.8   ceph3
EOF


Configure the Time Zone

Set the time zone to UTC+8:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
hwclock -w


Configure Passwordless SSH

ceph1 must be able to log in to every node without a password (including ceph1 itself). Generate a key pair on ceph1 and distribute the public key to each node:

ssh-keygen -t rsa
for i in {1..3}; do ssh-copy-id ceph$i; done


Configure the Ceph Package Repository
  1. On every Ceph node, create ceph.repo

    vi /etc/yum.repos.d/ceph.repo
    


    and add the following content:

    [Ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    
    [Ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    
    [ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    


  2. Rebuild the yum cache

    yum clean all && yum makecache
    


Install Ceph
Install the Ceph Packages
  1. Install Ceph on every Ceph node

    yum -y install librados2-14.2.10 ceph-14.2.10
    


  2. On ceph1, additionally install ceph-deploy

    yum -y install ceph-deploy
    


  3. Check the version on each node

    ceph -v
    


    Expected output:

    ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
    


Deploy the MON Nodes

Run these steps on the primary node ceph1 only.

  1. Create the cluster

    cd /etc/ceph
    ceph-deploy new ceph1 ceph2 ceph3
    


  2. In the ceph.conf file automatically generated under /etc/ceph, configure mon_host, public_network, and cluster_network

    vi /etc/ceph/ceph.conf
    


    Edit ceph.conf so that it reads as follows:

    [global]
    fsid = f6b3c38c-7241-44b3-b433-52e276dd53c6
    mon_initial_members = ceph1, ceph2, ceph3
    mon_host = 192.168.0.6,192.168.0.7,192.168.0.8
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    # The topology is simple, so a single network serves as both public and cluster network
    public_network = 192.168.0.0/24
    cluster_network = 192.168.0.0/24
    
    [mon]
    mon_allow_pool_delete = true
    


  3. Initialize the monitors and gather the keys

    ceph-deploy mon create-initial
    


  4. Copy ceph.client.admin.keyring to every node

    ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
    


  5. Verify the configuration

    ceph -s
    


    Expected output:

    cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK
    
    services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
    


Deploy the MGR Nodes
  1. Deploy the MGR daemons

    ceph-deploy mgr create ceph1 ceph2 ceph3
    


  2. Check whether the MGR daemons deployed successfully

    ceph -s
    


    Expected output:

    cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK
    
    services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
    mgr: ceph1(active, since 2d), standbys: ceph2, ceph3
    


Deploy the OSD Nodes
  1. Prepare the disks
    List the disks available on the data nodes

    ceph-deploy disk list ceph2 ceph3
    


    Zap the data disks

    ceph-deploy disk zap ceph2 /dev/sdb # adjust the device name to your environment
    ceph-deploy disk zap ceph3 /dev/sdb
    


  2. Create the OSDs

    ceph-deploy osd create ceph2 --data /dev/sdb
    ceph-deploy osd create ceph3 --data /dev/sdb
    


  3. Check the cluster status
    After creation, verify that the cluster is healthy, i.e. that both OSDs are up

    ceph -s
    


Kubernetes

Install Kubernetes

Prerequisites

Install Docker on all nodes and perform the following steps.

Disable Swap

swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab


Disable the Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config


Configure Kernel Network Parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system


Configure Docker to use the Aliyun registry mirror

mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
 "registry-mirrors": ["https://trtnvh4p.mirror.aliyuncs.com"],
 "insecure-registries": [ "harbor"] 
} 
EOF
sudo systemctl daemon-reload  && sudo systemctl restart docker


Install

Configure the yum Repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


Install kubelet, kubeadm, and kubectl

yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8 --disableexcludes=kubernetes
systemctl enable --now kubelet


On the Master node, run kubeadm init; set --apiserver-advertise-address to the Master's IP address.

kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.201.128


On every Node, run the kubeadm join command printed at the end of the Master initialization (format sketch below).
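
The exact token and certificate hash come from your own kubeadm init output; the command below is only a format sketch with placeholder values:

kubeadm join 192.168.201.128:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>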

Copy the admin kubeconfig into the administrative user's home directory (required for non-root users):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config


On the Master node, install the Flannel network add-on.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

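After the network add-on is applied it can take a minute or two for all nodes to reach the Ready state; verify from the Master with:

kubectl get nodes -o wide
kubectl get pods -n kube-system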

Ceph For Kubernetes

The APP cluster uses a StorageClass backed by CephFS, and the DB cluster uses a StorageClass backed by RBD.

Cephfs

Notes:

  • CephFS needs two pools, one for data and one for metadata; the example sizing below uses an fs_data pool and an fs_metadata pool.
  • The two numbers at the end of the pool-creation command, e.g. the two 1024s in ceph osd pool create fs_data 1024 1024, are the pool's pg_num and pgp_num, i.e. its number of placement groups (PGs). The Ceph documentation suggests that the total PG count across all pools should be roughly (number of OSDs * 100) / redundancy factor, where the redundancy factor is the replica count for replicated pools, or data chunks + coding chunks for erasure-coded pools (3 for three-way replication, 6 for EC 4+2).
  • In that sizing example the cluster has 3 servers with 12 OSDs each, 36 OSDs in total, so the formula gives 1200; PG counts are usually rounded to a power of two. Because fs_data holds far more data than the other pools, it is given proportionally more PGs.

So in that example fs_data gets 1024 PGs and fs_metadata gets 128 or 256; see the calculation below for the much smaller cluster used in this guide.
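
As a rough sanity check for the cluster built in this guide (2 OSDs, default 3-way replication), the same formula can be evaluated directly; the shell arithmetic below is only an illustration:

# total PGs ≈ (OSD count × 100) / replication factor
# (2 × 100) / 3 ≈ 66, rounded to a power of two -> 64
echo $(( (2 * 100) / 3 ))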

Create the CephFS Filesystem

The MDS (Metadata Server) manages the files and directories of a CephFS cluster. CephFS needs at least one MDS service to hold the metadata it depends on; if resources allow, create two and they will automatically form an active/standby pair.
Note: install the MDS inside the Ceph cluster.

  1. Create the MDS service with ceph-deploy
sudo ceph-deploy mds create ceph2 ceph3


  2. Create the storage pools

A CephFS filesystem needs at least two RADOS pools, one for data and one for metadata. When configuring these pools, consider:

  • using a higher replication level for the metadata pool, because any data loss there can make the whole filesystem unusable;
  • placing the metadata pool on low-latency storage such as SSDs, because it directly affects the operation latency observed by clients.
sudo ceph osd pool create cephfs-data 64 64
sudo ceph osd pool create cephfs-metadata 16 16
sudo ceph fs new cephfs cephfs-metadata cephfs-data


After creation, check the status of the MDS and the filesystem:

# sudo ceph mds stat
e6: 1/1/1 up {0=ceph2=up:active}, 1 up:standby
# sudo ceph fs ls
name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]


  3. Get the Ceph auth key

On the ceph-deploy host, print the admin key (base64-encoded so it can be pasted into a Kubernetes Secret):

sudo ceph auth get-key client.admin | base64


Integrate CephFS with Kubernetes

Note: perform these steps in the Kubernetes APP cluster.

  1. Install ceph-common

    sudo yum install -y ceph-common
    


  2. Create the namespace
    cephfs-ns.yaml

    apiVersion: v1
    kind: Namespace
    metadata:
       name: cephfs
       labels:
         name: cephfs
    


  3. Create the ceph-secret
    ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: cephfs
data:
  key: xxxx=   # paste the base64-encoded key obtained above


  4. Cluster role
    clusterRole.yaml

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cephfs-provisioner
      namespace: cephfs
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["kube-dns","coredns"]
        verbs: ["list", "get"]
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["list", "get", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "create", "delete"]
    


  5. Cluster role binding
    clusterRoleBinding.yaml

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cephfs-provisioner
    subjects:
      - kind: ServiceAccount
        name: cephfs-provisioner
        namespace: cephfs
    roleRef:
      kind: ClusterRole
      name: cephfs-provisioner
      apiGroup: rbac.authorization.k8s.io
    


  6. Role binding
    roleBinding.yaml

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: cephfs-provisioner
      namespace: cephfs
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: cephfs-provisioner
    subjects:
    - kind: ServiceAccount
      name: cephfs-provisioner
      namespace: cephfs
    


  7. Service account
    serviceAccount.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cephfs-provisioner
      namespace: cephfs
    


  8. Deployment
    cephfs-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cephfs-provisioner
      namespace: cephfs
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cephfs-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: cephfs-provisioner
        spec:
          containers:
          - name: cephfs-provisioner
            image: "quay.io/external_storage/cephfs-provisioner:latest"
            env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
            command:
            - "/usr/local/bin/cephfs-provisioner"
            args:
            - "-id=cephfs-provisioner-1"
          serviceAccount: cephfs-provisioner
    


  9. Create the StorageClass

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cephfs
      namespace: cephfs
    provisioner: ceph.com/cephfs
    parameters:
      monitors: ip1:6789,ip2:6789,ip3:6789 # list every monitor in the cluster
      adminId: admin
      adminSecretName: ceph-secret
      adminSecretNamespace: cephfs
    

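To confirm that dynamic provisioning works end to end, a small test PVC can be created against the new StorageClass (a sketch with an assumed name; it should reach the Bound state shortly after creation):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
  namespace: cephfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs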

Integrate Ceph RBD with Kubernetes

Initialize the storage pool in the Ceph cluster

ceph osd pool create esdb 64 64  # create the pool
rbd pool init esdb  # initialize the pool
rbd create esdb/img --size 4096 --image-feature layering -k /etc/ceph/ceph.client.admin.keyring  # create an image
rbd map esdb/img --name client.admin -k /etc/ceph/ceph.client.admin.keyring  # map the image
cd /etc/ceph
ceph auth get-or-create client.esdb mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=esdb' -o ceph.client.esdb.keyring  # create the esdb auth key


Install the Ceph RBD client in the Kubernetes DB cluster

rpm -Uvh https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum repolist && yum install ceph-common -y
yum -y install librbd1 && modprobe rbd


Get the Ceph keys that will be used to create the Secrets

cd /etc/ceph
cat ceph.client.admin.keyring | grep key  #admin key
cat ceph.client.esdb.keyring | grep key   # esdb key

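Note that the key value placed in a Secret's data field must be base64-encoded. The same keys can be produced already encoded with:

ceph auth get-key client.admin | base64
ceph auth get-key client.esdb | base64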

Create the Secrets

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: k8s-ceph
data:
  key: **admin key** # base64-encoded admin key; it can also be edited later through Rancher
type: kubernetes.io/rbd


apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-esdb
  namespace: k8s-ceph
data:
  key: **esdb key** # base64-encoded esdb key; it can also be edited later through Rancher
type: kubernetes.io/rbd


Create the rbd-provisioner

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: k8s-ceph
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner

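The Deployment above and the ClusterRoleBinding below both reference a rbd-provisioner ServiceAccount that is not defined elsewhere in this guide; a minimal manifest for it (an assumption, mirroring the cephfs-provisioner one) would be:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: k8s-ceph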

Create the RBAC Objects and StorageClass

# 1. Create the ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
# 2. Create the ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: k8s-ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
# 3. Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Delete
parameters:
  monitors: <mon1-ip>:6789,<mon2-ip>:6789,<mon3-ip>:6789 # list every mon node in the cluster
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: k8s-ceph
  pool: esdb
  userId: esdb
  userSecretName: ceph-secret-esdb
  userSecretNamespace: k8s-ceph
  imageFormat: "2"
  imageFeatures: layering


If a PVC created in Rancher reaches the Bound state, the integration is working; a test claim like the one below can be used.
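
A minimal test claim (illustrative name and namespace) that should bind against the rbd StorageClass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
  namespace: k8s-ceph
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rbd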

Create the Database Services

Tip:
The command to create a Harbor Secret and save it as a YAML file is:

kubectl create secret docker-registry harbor --docker-server=harbor --docker-username=devops --docker-password=password -n websocket-qa --dry-run=client -o yaml > harbor-secret.yaml


Adjust the namespace, username, and password to match your environment!

MySQL

Create the Harbor Secret used to pull images

apiVersion: v1
data:
  .dockerconfigjson: xxxxx=
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor
  namespace: mysql-dev
type: kubernetes.io/dockerconfigjson


Create the namespace

apiVersion: v1
kind: Namespace
metadata:
  name: mysql-dev


Mysql-config

Create the ConfigMap

apiVersion: v1
data:
  mysqld.cnf: |-
    [mysqld]
    pid-file=/var/run/mysqld/mysqld.pid
    socket=/var/run/mysqld/mysqld.sock
    datadir=/var/lib/mysql
    #log-error=/var/log/mysql/error.log
    # By default we only accept connections from localhost
    #bind-address=127.0.0.1
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    lower_case_table_names=1
    query_cache_type=1
    max_connections=20000
kind: ConfigMap
metadata:
  name: dev-mysql-config
  namespace: mysql-dev


Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-mysql-config-pvc # PVC Name
  namespace: mysql-dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi # storage space required for the data
  storageClassName: rbd


Create the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mysql-config-dev # Pod label
  name: mysql-config-dev
  namespace: mysql-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-config-dev # Pod label
  serviceName: mysql-config-dev
  template:
    metadata:
      labels:
        app: mysql-config-dev # Pod label
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: password # MySQL root password
        - name: TZ
          value: Asia/Shanghai
        image: mysql:5.7
        name: mysql
        volumeMounts:
        - mountPath: /etc/mysql/mysql.conf.d/
          name: mysql-config
        - mountPath: /var/lib/mysql
          name: mysql-data
          subPath: mysql
      volumes:
      - configMap:
          defaultMode: 420
          name: dev-mysql-config # ConfigMap mounted as a volume; must match the name created above
          optional: false
        name: mysql-config
      - name: mysql-data
        persistentVolumeClaim:
          claimName: dev-mysql-config-pvc # PVC mounted as a volume; must match the name created above


Create the Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: database-mysql-config-svc
  namespace: mysql-dev
spec:
  ports:
  - name: mysql
    nodePort: 32709 # port reachable via any k8s node IP; adjust to an open port
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-config-dev # selects the Pods this Service targets; must match the StatefulSet labels above
  type: NodePort # NodePort Service, reachable from outside the cluster via a Node IP

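Once the Pod is Running, connectivity can be checked from any machine with a MySQL client, using a node IP and the NodePort configured above (placeholder address below):

mysql -h <node-ip> -P 32709 -u root -p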

Mysql-Process

Create the ConfigMap

apiVersion: v1
data:
  mysqld.cnf: |-
    [mysqld]
    pid-file=/var/run/mysqld/mysqld.pid
    socket=/var/run/mysqld/mysqld.sock
    datadir=/var/lib/mysql
    #log-error=/var/log/mysql/error.log
    # By default we only accept connections from localhost
    #bind-address=127.0.0.1
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    lower_case_table_names=1
    query_cache_type=1
    max_connections=20000
kind: ConfigMap
metadata:
  name: dev-mysql-process
  namespace: mysql-dev


Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-mysql-process-pvc # PVC Name
  namespace: mysql-dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi # storage space required for the data
  storageClassName: rbd


Create the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mysql-process-dev # Pod label
  name: mysql-process-dev
  namespace: mysql-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-process-dev # Pod label
  serviceName: mysql-process-dev
  template:
    metadata:
      labels:
        app: mysql-process-dev # Pod label
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: password # MySQL root password
        - name: TZ
          value: Asia/Shanghai
        image: mysql:5.7
        name: mysql
        volumeMounts:
        - mountPath: /etc/mysql/mysql.conf.d/
          name: mysql-config
        - mountPath: /var/lib/mysql
          name: mysql-data
          subPath: mysql
      volumes:
      - configMap:
          defaultMode: 420
          name: dev-mysql-process # ConfigMap mounted as a volume; must match the name created above
          optional: false
        name: mysql-config
      - name: mysql-data
        persistentVolumeClaim:
          claimName: dev-mysql-process-pvc # PVC mounted as a volume; must match the name created above


Create the Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: database-mysql-process-svc
  namespace: mysql-dev
spec:
  ports:
  - name: mysql
    nodePort: 32710 # port reachable via any k8s node IP; adjust to an open port
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-process-dev # selects the Pods this Service targets; must match the StatefulSet labels above
  type: NodePort # NodePort Service, reachable from outside the cluster via a Node IP


Redis

Create the Secret used to pull images from Harbor

apiVersion: v1
data:
  .dockerconfigjson: xxxxx=
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor
  namespace: redis-dev
type: kubernetes.io/dockerconfigjson


Create the namespace

apiVersion: v1
kind: Namespace
metadata:
  name: redis-dev


Create the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database-redis # StatefulSet name
  namespace: redis-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database-redis # Pod label
  serviceName: database-redis
  template:
    metadata:
      labels:
        app: database-redis # Pod label
    spec:
      containers:
      - image: redis:latest
        name: redis
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP


Create the Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: database-redis
  name: database-redis-svc
  namespace: redis-dev
spec:
  ports:
  - name: redis
    nodePort: 32712 # port reachable via any k8s node IP; adjust to an open port
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: database-redis # selects the Pods this Service targets; must match the StatefulSet labels above
  type: NodePort # NodePort Service, reachable from outside the cluster via a Node IP

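A quick connectivity check with a Redis client against any node IP and the NodePort above (placeholder address); a PONG reply confirms the service is reachable:

redis-cli -h <node-ip> -p 32712 ping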

InfluxDB

Create the Secret used to pull images from Harbor

apiVersion: v1
data:
  .dockerconfigjson: xxxxx=
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor
  namespace: influxdb-dev
type: kubernetes.io/dockerconfigjson


Create the namespace

apiVersion: v1
kind: Namespace
metadata:
  name: influxdb-dev


Create the Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb # Pod label
  name: database-influxdb
  namespace: influxdb-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb # Pod label
  template:
    metadata:
      labels:
        app: influxdb # Pod label
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - image: influxdb # influx image from docker hub
        imagePullPolicy: Always
        name: influxdb
        ports:
        - containerPort: 8086 # Pod port
          name: influxdb
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: influxdb-data-vol
      volumes:
      - name: influxdb-data-vol
        persistentVolumeClaim:
          claimName: database-influxdb-data-pvc # must match the PVC created in the next step


Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-influxdb-data-pvc # PVC Name
  namespace: influxdb-dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi # storage space required for InfluxDB data
  storageClassName: rbd # the StorageClass configured on the k8s cluster


Create the Service

apiVersion: v1
kind: Service
metadata:
  name: database-influxdb-svc
  namespace: influxdb-dev
spec:
  ports:
  - name: influxdb
    nodePort: 32713 # port reachable via any k8s node IP; adjust to an open port
    port: 8086
    protocol: TCP
    targetPort: 8086
  selector:
    app: influxdb # selects the Pods this Service targets; must match the Deployment labels above
  type: NodePort # NodePort Service, reachable from outside the cluster via a Node IP

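A quick reachability check against the NodePort above (placeholder address); the /ping endpoint should return 204 No Content:

curl -i http://<node-ip>:32713/ping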

MongoDB

Create the Harbor Secret

apiVersion: v1
data:
  .dockerconfigjson: xxxxx=
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor
  namespace: mongodb-dev
type: kubernetes.io/dockerconfigjson


Create the Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: mongodb-dev


Create the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mongo # Pod label
  name: database-mongo
  namespace: mongodb-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo # Pod label
  serviceName: database-mongo
  template:
    metadata:
      labels:
        app: mongo # Pod label
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - command:
        - mongod
        - --bind_ip_all
        - --replSet
        - rs0
        image: mongo:4.2.3
        imagePullPolicy: Always
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
          protocol: TCP
        volumeMounts:
        - mountPath: /data/db
          name: mongo-data
      volumes:
      - name: mongo-data
        persistentVolumeClaim:
          claimName: database-mongo-data-pvc # must match the PVC created in the next step


Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-mongo-data-pvc # PVC Name
  namespace: mongodb-dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi # storage space required for the data
  storageClassName: rbd # the StorageClass configured on the k8s cluster


Create the Service

apiVersion: v1
kind: Service
metadata:
  name: database-mongo-svc
  namespace: mongodb-dev
spec:
  ports:
  - name: mongo
    nodePort: 32714 # port reachable via any k8s node IP; adjust to an open port
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: mongo # selects the Pods this Service targets; must match the StatefulSet labels above
  type: NodePort # NodePort Service, reachable from outside the cluster via a Node IP


After the resources are created, connect to MongoDB and run the following command; it initializes the replica set and removes the "not master and slaveOk=false" error that otherwise prevents logging in:

rs.initiate();

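To confirm the replica set is initialized, the status can be queried through the mongo shell shipped in the image (pod name follows StatefulSet naming, database-mongo-0); a result of 1 means the replica set is up:

kubectl -n mongodb-dev exec -it database-mongo-0 -- mongo --eval 'rs.status().ok'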

Kafka

Create the Secret and Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: kafka-dev
---
apiVersion: v1
data:
  .dockerconfigjson: xxxxx=
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor
  namespace: kafka-dev
type: kubernetes.io/dockerconfigjson


Create the StatefulSets and Services

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka-1
  name: kafka-1
  namespace: kafka-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-1
  serviceName: kafka-1
  template:
    metadata:
      labels:
        app: kafka-1
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - env:
        - name: KAFKA_SESSION_TIMEOUT_MS
          value: "6000"
        - name: KAFKA_HEARTBEAT_INTERVAL_MS
          value: "2000"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: 192.168.0.6
        - name: KAFKA_ADVERTISED_PORT
          value: "32715"
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_AUTO_LEADER_REBALANCE_ENABLE
          value: "true"
        - name: KAFKA_BACKGROUND_THREADS
          value: "10"
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_COMPRESSION_TYPE
          value: producer
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_LEADER_IMBALANCE_CHECK_INTERVAL_SECONDS
          value: "300"
        - name: KAFKA_LEADER_IMBALANCE_PER_BROKER_PERCENTAGE
          value: "10"
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_LOG_CLEANER_BACKOFF_MS
          value: "15000"
        - name: KAFKA_LOG_CLEANER_DEDUPE_BUFFER_SIZE
          value: "134217728"
        - name: KAFKA_LOG_CLEANER_DELETE_RETENTION_MS
          value: "86400000"
        - name: KAFKA_LOG_CLEANER_ENABLE
          value: "true"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_LOAD_FACTOR
          value: "0.9"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_SIZE
          value: "524288"
        - name: KAFKA_LOG_CLEANER_IO_MAX_BYTES_PER_SECOND
          value: "1.7976931348623157E308"
        - name: KAFKA_LOG_CLEANER_MIN_CLEANABLE_RATIO
          value: "0.5"
        - name: KAFKA_LOG_CLEANER_MIN_COMPACTION_LAG_MS
          value: "0"
        - name: KAFKA_LOG_CLEANER_THREADS
          value: "1"
        - name: KAFKA_LOG_CLEANUP_POLICY
          value: delete
        - name: KAFKA_LOG_INDEX_INTERVAL_BYTES
          value: "4096"
        - name: KAFKA_LOG_INDEX_SIZE_MAX_BYTES
          value: "10485760"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_DIFFERENCE_MAX_MS
          value: "9223372036854775807"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
          value: CreateTime
        - name: KAFKA_LOG_PREALLOCATE
          value: "false"
        - name: KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS
          value: "300000"
        - name: KAFKA_MAX_CONNECTIONS_PER_IP
          value: "2147483647"
        - name: KAFKA_NUM_PARTITIONS
          value: "4"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_PRODUCER_PURGATORY_PURGE_INTERVAL_REQUESTS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_BACKOFF_MS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_MAX_BYTES
          value: "1048576"
        - name: KAFKA_RESERVED_BROKER_MAX_ID
          value: "1000"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: kafka-zookeeper:2181
        image: kafka:latest
        name: kafka-1
        ports:
        - containerPort: 9092
          name: 9092tcp02
          protocol: TCP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka-2
  name: kafka-2
  namespace: kafka-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-2
  serviceName: kafka-2
  template:
    metadata:
      labels:
        app: kafka-2
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - env:
        - name: KAFKA_SESSION_TIMEOUT_MS
          value: "6000"
        - name: KAFKA_HEARTBEAT_INTERVAL_MS
          value: "2000"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: 192.168.0.6
        - name: KAFKA_ADVERTISED_PORT
          value: "32716"
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_AUTO_LEADER_REBALANCE_ENABLE
          value: "true"
        - name: KAFKA_BACKGROUND_THREADS
          value: "10"
        - name: KAFKA_BROKER_ID
          value: "2"
        - name: KAFKA_COMPRESSION_TYPE
          value: producer
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_LEADER_IMBALANCE_CHECK_INTERVAL_SECONDS
          value: "300"
        - name: KAFKA_LEADER_IMBALANCE_PER_BROKER_PERCENTAGE
          value: "10"
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_LOG_CLEANER_BACKOFF_MS
          value: "15000"
        - name: KAFKA_LOG_CLEANER_DEDUPE_BUFFER_SIZE
          value: "134217728"
        - name: KAFKA_LOG_CLEANER_DELETE_RETENTION_MS
          value: "86400000"
        - name: KAFKA_LOG_CLEANER_ENABLE
          value: "true"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_LOAD_FACTOR
          value: "0.9"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_SIZE
          value: "524288"
        - name: KAFKA_LOG_CLEANER_IO_MAX_BYTES_PER_SECOND
          value: "1.7976931348623157E308"
        - name: KAFKA_LOG_CLEANER_MIN_CLEANABLE_RATIO
          value: "0.5"
        - name: KAFKA_LOG_CLEANER_MIN_COMPACTION_LAG_MS
          value: "0"
        - name: KAFKA_LOG_CLEANER_THREADS
          value: "1"
        - name: KAFKA_LOG_CLEANUP_POLICY
          value: delete
        - name: KAFKA_LOG_INDEX_INTERVAL_BYTES
          value: "4096"
        - name: KAFKA_LOG_INDEX_SIZE_MAX_BYTES
          value: "10485760"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_DIFFERENCE_MAX_MS
          value: "9223372036854775807"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
          value: CreateTime
        - name: KAFKA_LOG_PREALLOCATE
          value: "false"
        - name: KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS
          value: "300000"
        - name: KAFKA_MAX_CONNECTIONS_PER_IP
          value: "2147483647"
        - name: KAFKA_NUM_PARTITIONS
          value: "4"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_PRODUCER_PURGATORY_PURGE_INTERVAL_REQUESTS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_BACKOFF_MS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_MAX_BYTES
          value: "1048576"
        - name: KAFKA_RESERVED_BROKER_MAX_ID
          value: "1000"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: kafka-zookeeper:2181
        image: kafka:latest
        imagePullPolicy: Always
        name: kafka-2
        ports:
        - containerPort: 9092
          name: 9092tcp02
          protocol: TCP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka-3
  name: kafka-3
  namespace: kafka-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-3
  serviceName: kafka-3
  template:
    metadata:
      labels:
        app: kafka-3
    spec:
      containers:
      - env:
        - name: KAFKA_SESSION_TIMEOUT_MS
          value: "6000"
        - name: KAFKA_HEARTBEAT_INTERVAL_MS
          value: "2000"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: 192.168.0.6
        - name: KAFKA_ADVERTISED_PORT
          value: "32717"
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_AUTO_LEADER_REBALANCE_ENABLE
          value: "true"
        - name: KAFKA_BACKGROUND_THREADS
          value: "10"
        - name: KAFKA_BROKER_ID
          value: "3"
        - name: KAFKA_COMPRESSION_TYPE
          value: producer
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_LEADER_IMBALANCE_CHECK_INTERVAL_SECONDS
          value: "300"
        - name: KAFKA_LEADER_IMBALANCE_PER_BROKER_PERCENTAGE
          value: "10"
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_LOG_CLEANER_BACKOFF_MS
          value: "15000"
        - name: KAFKA_LOG_CLEANER_DEDUPE_BUFFER_SIZE
          value: "134217728"
        - name: KAFKA_LOG_CLEANER_DELETE_RETENTION_MS
          value: "86400000"
        - name: KAFKA_LOG_CLEANER_ENABLE
          value: "true"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_LOAD_FACTOR
          value: "0.9"
        - name: KAFKA_LOG_CLEANER_IO_BUFFER_SIZE
          value: "524288"
        - name: KAFKA_LOG_CLEANER_IO_MAX_BYTES_PER_SECOND
          value: "1.7976931348623157E308"
        - name: KAFKA_LOG_CLEANER_MIN_CLEANABLE_RATIO
          value: "0.5"
        - name: KAFKA_LOG_CLEANER_MIN_COMPACTION_LAG_MS
          value: "0"
        - name: KAFKA_LOG_CLEANER_THREADS
          value: "1"
        - name: KAFKA_LOG_CLEANUP_POLICY
          value: delete
        - name: KAFKA_LOG_INDEX_INTERVAL_BYTES
          value: "4096"
        - name: KAFKA_LOG_INDEX_SIZE_MAX_BYTES
          value: "10485760"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_DIFFERENCE_MAX_MS
          value: "9223372036854775807"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
          value: CreateTime
        - name: KAFKA_LOG_PREALLOCATE
          value: "false"
        - name: KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS
          value: "300000"
        - name: KAFKA_MAX_CONNECTIONS_PER_IP
          value: "2147483647"
        - name: KAFKA_NUM_PARTITIONS
          value: "4"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_PRODUCER_PURGATORY_PURGE_INTERVAL_REQUESTS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_BACKOFF_MS
          value: "1000"
        - name: KAFKA_REPLICA_FETCH_MAX_BYTES
          value: "1048576"
        - name: KAFKA_RESERVED_BROKER_MAX_ID
          value: "1000"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: kafka-zookeeper:2181
        image: kafka:latest
        imagePullPolicy: Always
        name: kafka-3
        ports:
        - containerPort: 9092
          name: 9092tcp02
          protocol: TCP
      imagePullSecrets:
      - name: harbor

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    cattle.io/creator: norman
    app: kafka-zookeeper
  name: kafka-zookeeper
  namespace: kafka-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-zookeeper
  template:
    metadata:
      labels:
        app: kafka-zookeeper
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - image: zookeeper:latest
        imagePullPolicy: Always
        name: kafka-zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  namespace: kafka-dev
spec:
  ports:
  - name: 9092tcp
    nodePort: 32715
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka-1
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-2
  namespace: kafka-dev
spec:
  ports:
  - name: 9092tcp
    nodePort: 32716
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka-2
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-3
  namespace: kafka-dev
spec:
  ports:
  - name: 9092tcp
    nodePort: 32717
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka-3
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-zookeeper
  namespace: kafka-dev
spec:
  ports:
  - name: default
    nodePort: 32718
    port: 2181
    protocol: TCP
    targetPort: 2181
  selector:
    app: kafka-zookeeper
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: kafka-dev
  labels:
    app: kafka-zookeeper
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: kafka-zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: kafka-dev
  labels:
    app: kafka-zookeeper
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: kafka-zookeeper

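A rough smoke test once the Pods are up (a sketch: it assumes the kafka image ships the standard Kafka CLI scripts on its PATH and that the brokers can reach ZooKeeper; pod names follow StatefulSet naming):

kubectl -n kafka-dev get pods
kubectl -n kafka-dev exec -it kafka-1-0 -- kafka-topics.sh --bootstrap-server localhost:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
kubectl -n kafka-dev exec -it kafka-1-0 -- kafka-topics.sh --bootstrap-server localhost:9092 --list
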
Elasticsearch Cluster Setup

Deploy ECK in the Kubernetes Cluster

Install the custom resource definitions and the operator with its RBAC rules:

kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml
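
The operator runs in the elastic-system namespace; before continuing, confirm it is up:

kubectl -n elastic-system get pods
kubectl -n elastic-system logs statefulset.apps/elastic-operator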

Deploy Elasticsearch
Configure the StorageClass to use Ceph as storage

Note: Ceph must be integrated with Kubernetes first (see above).

Create the es.yaml file

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic-cluster #Cluster name
  namespace: elastic-system
spec:
  version: 7.2.0
  nodeSets:
  - name: master-nodes #node name
    count: 1
    config:
      node.master: true
      node.data: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        - name: plugins
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch http://192.168.0.6/elasticsearch-analysis-ik-7.2.0.zip # install the Elasticsearch plugin via an initContainer
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: rbd
  - name: data-nodes #node name
    count: 2
    config:
      node.master: false
      node.data: true
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        - name: plugins
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch http://192.168.0.6/elasticsearch-analysis-ik-7.2.0.zip  # install the Elasticsearch plugin via an initContainer
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: rbd
  http:
    service:
      spec:
        type: NodePort

kubectl apply -f es.yaml

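After a few minutes the cluster should report green health. ECK stores the password of the default elastic user in a Secret named <cluster-name>-es-elastic-user; a verification sketch for the elastic-cluster name used above (node IP and NodePort are placeholders):

kubectl -n elastic-system get elasticsearch
PASSWORD=$(kubectl -n elastic-system get secret elastic-cluster-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k https://<node-ip>:<nodeport>/_cluster/health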

Deploy Kibana
Create the kibana.yaml file

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elastic-kibana
  namespace: elastic-system
spec:
  version: 7.2.0
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        type: NodePort
  count: 1
  elasticsearchRef:
    name: elastic-cluster
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m


kubectl apply -f kibana.yaml
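
ECK exposes Kibana through a Service named <kibana-name>-kb-http, so with the NodePort setting above it can be reached on the assigned node port (a sketch assuming the elastic-kibana name used here); log in with the elastic user and the password retrieved earlier:

kubectl -n elastic-system get kibana
kubectl -n elastic-system get service elastic-kibana-kb-http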