TiDB Deployment Guide

 

Contents

 

1. Local PV Configuration

2. Deploy CRD Resources

3. Install TiDB Operator

1. Add the PingCAP repository

2. Create a namespace for TiDB Operator

3. Install TiDB Operator

4. Create the TiDB Cluster

5. Connect to the TiDB Cluster

6. Create TiDB Cluster Monitoring

7. Cluster Initialization (Optional)

Resource Configurations

local-volume-provisioner.yaml

tidb-cluster.yaml

tidb-secret.yaml

tidb-monitor.yaml

tidb-initializer.yaml

Images


1. Local PV Configuration

https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner

https://github.com/pingcap/tidb-operator/blob/master/manifests/local-dind/local-volume-provisioner.yaml

 

kubectl apply -f local-volume-provisioner.yaml
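
Optional sanity check (a sketch; the StorageClass name and pod label come from local-volume-provisioner.yaml in the Resource Configurations section below):

$ kubectl get sc local-storage
$ kubectl get pods -n kube-system -l app=local-volume-provisioner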

 

2. Deploy CRD Resources

https://github.com/pingcap/tidb-operator/blob/master/manifests/crd.yaml

 

kubectl apply -f crd.yaml

Expected output:

backups.pingcap.com

backupschedules.pingcap.com

restores.pingcap.com

tidbclusterautoscalers.pingcap.com

tidbclusters.pingcap.com

tidbinitializers.pingcap.com

tidbmonitors.pingcap.com
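
To confirm the CRDs were registered, a check along these lines can be used:

$ kubectl get crd | grep pingcap.com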

 

3. Install TiDB Operator

https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/get-started#%E9%83%A8%E7%BD%B2-tidb-operator

TiDB Operator is installed with Helm 3. (The error hit below, "This command needs 1 argument: chart name", is Helm 2 behavior; the --name flag and helm del --purge used in the workaround are likewise Helm 2 syntax.)

1. Add the PingCAP repository

helm repo add pingcap https://charts.pingcap.org/

Expected output:

"pingcap" has been added to your repositories

 

2. Create a namespace for TiDB Operator

kubectl create namespace tidb-admin

Expected output:

namespace/tidb-admin created

 

3. Install TiDB Operator

# 1. The command from the official docs (Helm 3 syntax):
$ helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.7
# On Helm 2 this fails with: Error: This command needs 1 argument: chart name

# 2. Modified command (Helm 2 syntax):
$ helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.7
# Deployment failed: the images could not be pulled
# Delete the failed release:
$ helm del --purge tidb-operator

# 3. Modified command, pointing at the private registry:
$ helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.7 \
    --set operatorImage=172.0.0.1/surliya/tidb-k8s/operator:v1.1.7 \
    --set tidbBackupManagerImage=172.0.0.1/surliya/tidb-k8s/backup-manager:v1.1.7 \
    --set scheduler.kubeSchedulerImageName=172.0.0.1/surliya/tidb-k8s/kube-scheduler \
    --set advancedStatefulset.image=172.0.0.1/surliya/tidb-k8s/advanced-statefulset:v0.4.0

# 4. Or install from a local values file:
$ mkdir -p ${HOME}/tidb-operator && \
  helm inspect values pingcap/tidb-operator --version=v1.1.7 > ${HOME}/tidb-operator/values-tidb-operator.yaml
# After fixing the image references in the values file, run:
$ helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=v1.1.7 -f ${HOME}/tidb-operator/values-tidb-operator.yaml

# 5. Upgrade:
$ helm upgrade tidb-operator pingcap/tidb-operator -f ${HOME}/tidb-operator/values-tidb-operator.yaml
$ helm upgrade tidb-operator pingcap/tidb-operator -f ./tidb-operator/values.yaml

 

Proceed to the next step once all pods are in the Running state.

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
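
Instead of polling by hand, the wait can be scripted (a sketch using kubectl wait):

$ kubectl -n tidb-admin wait --for=condition=Ready pod \
    -l app.kubernetes.io/instance=tidb-operator --timeout=300s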

 

4. Create the TiDB Cluster

Configuration using official images: https://github.com/pingcap/tidb-operator/blob/master/examples/basic/tidb-cluster.yaml

Configuration using Alibaba Cloud mirror images: https://github.com/pingcap/tidb-operator/blob/master/examples/basic-cn/tidb-cluster.yaml

 

$ kubectl create namespace tidb-cluster && \
  kubectl -n tidb-cluster apply -f tidb-cluster.yaml

 

Note:

  • If the root user's password is changed manually after the TiDB cluster is created, initialization will fail.

  • The following takes effect only the first time it runs after the cluster is created; changes made afterwards have no effect.

Initialize accounts and passwords

# tidb-secret must be configured as soon as the cluster is deployed
$ kubectl create secret generic tidb-secret --from-literal=root=secret123 --from-literal=developer=secret1234 --namespace=tidb-cluster
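
To verify what was stored, the Secret values can be decoded (a sketch):

$ kubectl -n tidb-cluster get secret tidb-secret -o jsonpath='{.data.root}' | base64 -d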

 

Problem:

# kubectl describe pod shows:
Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "basic-pd-0": pod has unbound immediate PersistentVolumeClaims

Cause: no PV had been created; creating one manually fixes it.
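
Whether the claims are bound can be checked as follows (a sketch):

$ kubectl get pvc -n tidb-cluster
$ kubectl get pv
# Each PVC should reach STATUS Bound once a matching local-storage PV exists.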

 

5. Connect to the TiDB Cluster

List the services in the tidb-cluster namespace:

kubectl get svc -n tidb-cluster

 

Expected output:

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
basic-discovery    ClusterIP      10.101.69.5      <none>        10261/TCP                      10m
basic-grafana      ClusterIP      10.106.41.250    <none>        3000/TCP                       10m
basic-pd           ClusterIP      10.104.43.232    <none>        2379/TCP                       10m
basic-pd-peer      ClusterIP      None             <none>        2380/TCP                       10m
basic-prometheus   ClusterIP      10.106.177.227   <none>        9090/TCP                       10m
basic-tidb         LoadBalancer   10.99.24.91      <pending>     4000:7271/TCP,10080:7825/TCP   4m
basic-tidb-peer    ClusterIP      None             <none>        10080/TCP                      8m4s
basic-tikv-peer    ClusterIP      None             <none>        20160/TCP                      9m3s

 

# Install the mysql client on any server
$ yum -y install mysql
# Connect using the Service address and port (enter the root password from tidb-secret)
$ mysql -h 10.97.224.180 -P 4000 -u root -p
# Or use the mapped host and port
$ mysql -h 192.168.xx.xx -P 7271 -u root -p
# List the databases
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| app                |
| mysql              |
| test               |
+--------------------+
6 rows in set (0.00 sec)

# From here, IDEA can connect via jdbc:mysql://192.168.xx.xx:7271/test
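
# A minimal smoke test once connected (illustrative SQL only):
mysql> USE test;
mysql> CREATE TABLE smoke (id INT PRIMARY KEY);
mysql> INSERT INTO smoke VALUES (1);
mysql> SELECT * FROM smoke;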

 

Problem 1:

[root@node3 data]# mysql -h 127.0.0.1 -P 4000 -u root
-bash: mysql: command not found

Cause: the mysql client is not installed on the server.

 

Problem 2:

[root@node2 lib]# mysql -h 10.97.224.180 -P 4000 -u root
ERROR 1045 (28000): Access denied for user 'root'@'192.168.51.132' (using password: NO)
# Cause: passwordless login is not allowed; a password must be supplied

[root@node2 lib]# mysql -h 10.97.224.180 -P 4000 -u root -p
Enter password: 
ERROR 2003 (HY000): Can't connect to MySQL server on '10.97.224.180' (111)

[root@node2 lib]# mysql -h 191.168.xx.xx -P 4000 -u root -p
Enter password: 
ERROR 1045 (28000): Access denied for user 'root'@'192.168.xx.xx' (using password: YES)

Fix: the cluster's tidb.service.type must be LoadBalancer, and a LoadBalancer-type Service must map port 4000.
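
The actual Service type and port mapping can be confirmed with (a sketch):

$ kubectl -n tidb-cluster get svc basic-tidb
# TYPE should be LoadBalancer and PORT(S) should include 4000:7271/TCP.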

 

6. Create TiDB Cluster Monitoring

Configuration using official images: https://github.com/pingcap/tidb-operator/blob/master/examples/basic/tidb-monitor.yaml

Configuration using Alibaba Cloud mirror images: https://github.com/pingcap/tidb-operator/blob/master/examples/basic-cn/tidb-monitor.yaml

 

# Deploy monitoring
$ kubectl -n tidb-cluster apply -f tidb-monitor.yaml

# Check pod status
$ kubectl get po -n tidb-cluster

 

Expected output:

NAME                              READY   STATUS    RESTARTS   AGE
basic-discovery-6bb656bfd-xl5pb   1/1     Running   0          9m9s
basic-monitor-5fc8589c89-gvgjj    3/3     Running   0          8m58s
basic-pd-0                        1/1     Running   0          9m8s
basic-tidb-0                      2/2     Running   0          7m14s
basic-tikv-0                      1/1     Running   0          8m13s

Wait until all component pods have started, then proceed to the next step.
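
If no external IP is available yet, Grafana can also be reached through a port-forward (a sketch; the service name follows the basic- prefix above):

$ kubectl -n tidb-cluster port-forward svc/basic-grafana 3000:3000
# Then open http://localhost:3000 (Grafana's default account is admin/admin).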

 

7. Cluster Initialization (Optional)

https://github.com/pingcap/docs-tidb-operator/blob/release-1.1/zh/initialize-a-cluster.md

TidbInitializer

$ kubectl -n tidb-cluster apply -f tidb-initializer.yaml
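
Initialization progress can be watched through the TidbInitializer object and its Job pod (a sketch):

$ kubectl -n tidb-cluster get tidbinitializer tidb-init
$ kubectl -n tidb-cluster get pods | grep initializer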

 

Problem:

$ kubectl -n tidb-cluster logs -f --tail=200 basic-tidb-initializer-cbwgk

Traceback (most recent call last):
  File "/usr/local/bin/start_script.py", line 5, in <module>
    conn = MySQLdb.connect(host=host, port=port, user='root', connect_timeout=5)
  File "/usr/local/lib/python3.8/site-packages/MySQLdb/__init__.py", line 84, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 179, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
MySQLdb._exceptions.OperationalError: (1045, "Access denied for user 'root'@'10.43.0.7' (using password: NO)")

Cause: the Secret takes effect only the first time initialization runs after the TiDB cluster is created; changes made afterwards are ignored. Redeploying under a different cluster name works around this.
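
After a successful run, the grants from initSql can be verified with the developer account (illustrative; host and port as in step 5):

$ mysql -h 192.168.xx.xx -P 7271 -u developer -p
mysql> SHOW DATABASES;   -- the app database should now be listed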

 

Resource Configurations

local-volume-provisioner.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-storage"  # this name must not be changed; StorageClass is cluster-scoped, so no namespace
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  setPVOwnerRef: "true"
  nodeLabelsForPV: |
    - kubernetes.io/hostname
  storageClassMap: |
    local-storage:
      hostDir: /data/tidb/mnt/disks
      mountDir: /mnt/disks
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: kube-system
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
        - image: "172.0.0.1/surliya/tidb-k8s/local-volume-provisioner:v2.3.4"  #"quay.io/external_storage/local-volume-provisioner:v2.3.4"
          name: provisioner
          securityContext:
            privileged: true
          env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: JOB_CONTAINER_IMAGE
            value: "172.0.0.1/surliya/tidb-k8s/local-volume-provisioner:v2.3.4"  #"quay.io/external_storage/local-volume-provisioner:v2.3.4"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            # mounting /dev in DinD environment would fail
            # - mountPath: /dev
            #   name: provisioner-dev
            - mountPath: /mnt/disks
              name: local-disks
              mountPropagation: "HostToContainer"
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        # - name: provisioner-dev
        #   hostPath:
        #     path: /dev
        - name: local-disks
          hostPath:
            path: /data/tidb/mnt/disks
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
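
The provisioner only turns mount points under hostDir into PVs, so each node needs at least one directory bind-mounted under /data/tidb/mnt/disks (a per-node sketch following the sig-storage-local-static-provisioner convention; disk0 is an arbitrary name):

$ mkdir -p /data/tidb/mnt/disks/disk0
$ mount --bind /data/tidb/mnt/disks/disk0 /data/tidb/mnt/disks/disk0
# Add a matching /etc/fstab entry so the bind mount survives reboots.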


tidb-cluster.yaml

# A PV must be created manually first: either one automatically-bound PV, or several PVs with designated mounts
---  
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wal
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /data/tidb
    type: DirectoryOrCreate

# Preferably deploy the Service first, then the TidbCluster
---
apiVersion: v1
kind: Service
metadata:
  name: basic-tidb  # <cluster name>-tidb
  namespace: tidb-cluster
spec:
  selector:
    app: basic
  type: LoadBalancer
  ports:
  - name: mysql
    port: 4000
    targetPort: 4000
    nodePort: 7271
    protocol: TCP
  - name: status
    port: 10080
    targetPort: 10080
    nodePort: 7825
    protocol: TCP
  externalIPs:
  - 192.168.xx.xx
  externalTrafficPolicy: Cluster
  sessionAffinity: None
status:
  loadBalancer: {}
  
---
# IT IS NOT SUITABLE FOR PRODUCTION USE.
# This YAML describes a basic TiDB cluster with minimum resource requirements,
# which should be able to run in any Kubernetes cluster with storage support.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
  namespace: tidb-cluster
spec:
  version: v4.0.8
  timezone: UTC
  pvReclaimPolicy: Retain
  enableDynamicConfiguration: true
  configUpdateStrategy: RollingUpdate
  discovery: {}
  pd:
    baseImage: 172.0.0.1/surliya/tidb-k8s/pd 
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    storageClassName: local-storage
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: 172.0.0.1/surliya/tidb-k8s/tikv 
    replicas: 1
    storageClassName: local-storage
    requests:
      storage: "1Gi"
    config:
      storage:
        # In basic examples, we set this to avoid using too much storage.
        reserve-space: "0MB"
  tidb:
    baseImage: 172.0.0.1/surliya/tidb-k8s/tidb 
    replicas: 1
    config: {}
    service:
      type: LoadBalancer
      externalTrafficPolicy: Cluster 
      #mysqlNodePort: 33306  # default 4000
      #statusNodePort: 3001  # default 10080
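
After applying the file, the rollout can be watched until all members are ready (a sketch):

$ kubectl -n tidb-cluster get tidbcluster basic
$ kubectl -n tidb-cluster get pods -o wide --watch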

 

tidb-secret.yaml

# The YAML below can be generated with:
# kubectl create secret generic tidb-secret --from-literal=root=secret123 --from-literal=developer=secret1234 --namespace=tidb-cluster --dry-run=true -o yaml
---
apiVersion: v1
data:
  developer: c2VjcmV0MTIzNA==
  root: c2VjcmV0MTIz
kind: Secret
metadata:
  creationTimestamp: null
  name: tidb-secret
  namespace: tidb-cluster
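
The data values are plain base64 encodings of the passwords; they can be produced or checked like this:

$ echo -n 'secret123' | base64    # c2VjcmV0MTIz
$ echo -n 'secret1234' | base64   # c2VjcmV0MTIzNA==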

 

tidb-monitor.yaml


# Random web port: Grafana is reachable at 192.168.51.130:46417 (46417 is randomly assigned); account/password admin, changed to monitor123
# Fixed mapping: create a separate Service with metadata.name=basic-grafana (the TidbMonitor's ${metadata.name}-grafana), map port 3000, and match the TidbMonitor's spec.grafana.service.portName
---
apiVersion: v1
kind: Service
metadata:
  name: basic-grafana
  namespace: tidb-cluster
spec:
  selector:
    app: basic
  type: LoadBalancer
  ports:
  - name: grafana
    port: 3000
    targetPort: 3030
    nodePort: 3030
    protocol: TCP
  externalIPs:
  - 192.168.xx.xx
  externalTrafficPolicy: Cluster
  sessionAffinity: None
status:
  loadBalancer: {}

---
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic
  namespace: tidb-cluster
spec:
  clusters: # metadata of the cluster from tidb-cluster.yaml
  - name: basic
    namespace: tidb-cluster
  prometheus:
    baseImage: 172.0.0.1/surliya/tidb-k8s/prom-prometheus
    version: v2.18.1
  grafana:
    baseImage: 172.0.0.1/surliya/tidb-k8s/grafana 
    version: 6.1.6
    service:
      type: LoadBalancer
      loadBalancerIP: 192.168.xx.xx
      portName: grafana
  initializer:
    baseImage: 172.0.0.1/surliya/tidb-k8s/monitor-initializer
    version: v4.0.8
  reloader:
    baseImage: 172.0.0.1/surliya/tidb-k8s/monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
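
Whether the monitor came up can be checked through its object and the Grafana Service (a sketch):

$ kubectl -n tidb-cluster get tidbmonitor basic
$ kubectl -n tidb-cluster get svc basic-grafana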

 

tidb-initializer.yaml

---
apiVersion: pingcap.com/v1alpha1
kind: TidbInitializer
metadata:
  name: tidb-init
  namespace: tidb-cluster
spec:
  image: 172.0.0.1/surliya/tidb-k8s/tnir-mysqlclient
  # imagePullPolicy: IfNotPresent
  cluster:
    namespace: tidb-cluster
    name: basic
  initSql: |-
    create database app;
    GRANT ALL PRIVILEGES ON app.* TO 'developer'@'%';
  # initSqlConfigMap: tidb-initsql
  passwordSecret: tidb-secret
  #permitHost: 192.168.xx.xx  # only allow this host_name to access TiDB
  # resources:
  #   limits:
  #     cpu: 1000m
  #     memory: 500Mi
  #   requests:
  #     cpu: 100m
  #     memory: 50Mi
  # timezone: "Asia/Shanghai"

 

Images

# You can use the official images directly, or the Alibaba Cloud mirrors recommended by PingCAP
# Or pull them, push them to a private registry, and reference the private images in the config files

$ docker pull <official-image>
$ docker tag <official-image> <private-registry-image>
$ docker push <private-registry-image>
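
# A worked example with one image (pd at the v4.0.8 tag used in tidb-cluster.yaml;
# the registry host is this guide's private registry):
$ docker pull pingcap/pd:v4.0.8
$ docker tag pingcap/pd:v4.0.8 172.0.0.1/surliya/tidb-k8s/pd:v4.0.8
$ docker push 172.0.0.1/surliya/tidb-k8s/pd:v4.0.8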

172.0.0.1/surliya/tidb-k8s/prom-prometheus              # prom/prometheus
172.0.0.1/surliya/tidb-k8s/grafana                      # grafana/grafana
172.0.0.1/surliya/tidb-k8s/monitor-initializer          # pingcap/tidb-monitor-initializer
172.0.0.1/surliya/tidb-k8s/monitor-reloader             # pingcap/tidb-monitor-reloader
172.0.0.1/surliya/tidb-k8s/tnir-mysqlclient             # tnir/mysqlclient
172.0.0.1/surliya/tidb-k8s/pd                           # pingcap/pd
172.0.0.1/surliya/tidb-k8s/tikv                         # pingcap/tikv
172.0.0.1/surliya/tidb-k8s/tidb                         # pingcap/tidb
172.0.0.1/surliya/tidb-k8s/operator:v1.1.7              # pingcap/tidb-operator:v1.1.7
172.0.0.1/surliya/tidb-k8s/backup-manager:v1.1.7        # pingcap/tidb-backup-manager:v1.1.7
172.0.0.1/surliya/tidb-k8s/kube-scheduler               # registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler
172.0.0.1/surliya/tidb-k8s/advanced-statefulset:v0.4.0  # pingcap/advanced-statefulset:v0.4.0
