Deploying a TiDB Test Cluster on Cloud Hosts

Environment Preparation

Four pay-as-you-go cloud hosts were purchased:

Host name  Public IP        Private IP
---------  ---------------  -------------
tidb0      47.109.27.111    172.16.69.205
tidb1      47.108.114.70    172.16.69.207
tidb2      47.108.213.190   172.16.69.206
tidb3      47.109.183.173   172.16.69.208

tidb0 will be used to deploy the PD server, the TiDB server, and the monitoring components.
tidb1, tidb2, and tidb3 will form the TiKV server cluster.

All four hosts are configured with the same root SSH account and the same password. Besides being used for the first login to the control machine, this SSH password is also used during cluster configuration so that the control machine can communicate with the other hosts in the cluster.
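If you would rather not type this password repeatedly, an alternative (not used in this walkthrough) is to set up key-based login from tidb0 to all four hosts once you are logged in to tidb0. A minimal sketch, assuming the private IPs listed above:

# Generate a key pair on tidb0 (accept the defaults)
ssh-keygen -t rsa
# Copy the public key to every host in the cluster (tidb0 included)
for ip in 172.16.69.205 172.16.69.206 172.16.69.207 172.16.69.208; do ssh-copy-id root@$ip; done

With keys in place, the -p flag could be dropped from the tiup commands used later; this article sticks with the password prompt (-p) approach.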

All of the instances above had already been released by the time this article was published.


SSH into tidb0. All of the following operations are performed on this control machine, and tidb0 will automatically carry out the deployment on tidb1, tidb2, and tidb3.

Take a look at the system:

[root@tidb0 ~]# hostnamectl
   Static hostname: tidb0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 20190711105006363114529432776998
           Boot ID: 73cbbc178f38445c96e86df65e3663ca
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.21.3.el7.x86_64
      Architecture: x86-64

Refer to the official documentation:

https://docs.pingcap.com/zh/tidb/v5.4/production-deployment-using-tiup#第-2-步在中控机上部署-tiup-组件

This deployment is also done online. The TiDB version deployed is 5.4.1.

Online Installation

  1. Install the TiUP tool with the following command:
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
[root@tidb0 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5149k  100 5149k    0     0  3341k      0  0:00:01  0:00:01 --:--:-- 3341k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
  2. Reload the environment variables with source so that the tiup command is available in the current session:
source /root/.bash_profile
  3. Verify that the tiup command is available, i.e. confirm that the TiUP tool is installed:
which tiup
  4. Install the TiUP cluster component:
tiup cluster

The download starts; wait until the following line appears:
Use "tiup cluster help [command]" for more information about a command.

Then update TiUP itself and the TiUP cluster component to the latest versions:

tiup update --self && tiup update cluster

The expected output contains "Updated successfully!".

  5. Verify the current TiUP cluster version. Run the following command to check the TiUP cluster component version:
[root@tidb0 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.16.0/tiup-cluster

Initialize the Cluster Topology File

  • Generate a topology.yaml file with the following command:

tiup cluster template > topology.yaml

[root@tidb0 ~]# tiup cluster template > topology.yaml
[root@tidb0 ~]# ls
topology.yaml

Open topology.yaml with vim and change the IPs accordingly. The tiflash section configures TiFlash, TiDB's columnar storage engine for accelerating analytical queries; it is not needed for this test environment, so that section is removed.
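For reference, the removed section of the template looks roughly like this (the host below is hypothetical, not one of the four machines purchased above):

tiflash_servers:
  - host: 172.16.69.209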

The modified file:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 172.16.69.205

tidb_servers:
  - host: 172.16.69.205

tikv_servers:
  - host: 172.16.69.207
  - host: 172.16.69.206
  - host: 172.16.69.208
monitoring_servers:
  - host: 172.16.69.205
grafana_servers:
  - host: 172.16.69.205
alertmanager_servers:
  - host: 172.16.69.205

Save and exit.

  • Check the cluster for potential risks.
    Command template: tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]

Where:

topology.yaml is the configuration file initialized above.
--user root means logging in to the target hosts as the root user to carry out the deployment; this user must be able to SSH to the target machines and have sudo privileges on them. Another user with SSH and sudo privileges can also be used.
[-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise choose one of the two: [-i] specifies the private key of the root user (or the user given by --user) that can log in to the target machines, and [-p] prompts interactively for that user's password.

The command actually executed:

tiup cluster check ./topology.yaml --user root -p

The password prompted for is the SSH password used above.

The check results contain many Warn and Fail items. Since this is only a test environment, they are ignored here; a production environment would need to address them.

Try to automatically fix the potential risks found in the cluster:

tiup cluster check ./topology.yaml --apply --user root -p

The auto-fix cannot resolve everything: items reported as 'auto fixing not supported' (such as the nodelalloc/noatime mount options and the missing numactl) still have to be handled manually, as sketched below.
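A rough sketch of those manual fixes for a production host, assuming CentOS 7 and a dedicated ext4 data disk (the device and mount point here are examples, not this test setup's actual layout):

# Install numactl, which the check reports as missing
yum install -y numactl
# TiDB recommends ext4 data disks mounted with nodelalloc and noatime.
# Example /etc/fstab entry:  /dev/vdb1  /tidb-data  ext4  defaults,nodelalloc,noatime  0 2
mount -o remount,nodelalloc,noatime /tidb-data
# Disable transparent huge pages for the current boot (persist it via rc.local or a tuned profile)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag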

  • Prepare to deploy the TiDB cluster

First, take a look at the output of tiup cluster list before any cluster has been deployed:

[root@tidb0 ~]# tiup cluster list
Name  User  Version  Path  PrivateKey
----  ----  -------  ----  ----------
  • Deploy the TiDB cluster:
tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p

Where:

tidb-test is the name of the cluster being deployed,
v5.4.1 is the version of the cluster being deployed,
topology.yaml is the configuration file initialized above.
--user root means logging in to the target hosts as the root user to carry out the deployment; this user must be able to SSH to the target machines and have sudo privileges on them. Another user with SSH and sudo privileges can also be used.
[-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise choose one of the two: [-i] specifies the private key of the root user (or the user given by --user) that can log in to the target machines, and [-p] prompts interactively for that user's password.

The password prompted for is the SSH password used above. The command then connects to the target hosts in the cluster and completes the deployment automatically.

The log is expected to end with Cluster tidb-test deployed successfully, you can start it with command: tiup cluster start tidb-test --init,
which indicates that the deployment succeeded.

tiup cluster start tidb-test --init is the command that will be used later to start the cluster; before running it, check the cluster again with tiup cluster list.

[root@tidb0 ~]# tiup cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  tidb  v5.4.1   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
  • Check the status of the tidb-test cluster
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8

The expected output lists, for each instance in the tidb-test cluster, its ID, role, host, listening ports, status (Down/inactive, since the cluster has not been started yet), and directories.

Start the Cluster

  1. Normal start. The database can then be accessed as the root user with no password.
tiup cluster start tidb-test

The expected output contains Started cluster tidb-test successfully, indicating a successful start.

  2. Secure start. A password is required to log in to the database, so record the password returned on the command line at startup; the automatically generated password is shown only once.
tiup cluster start tidb-test --init

The expected output is as follows, indicating a successful start.

...
	Start 172.16.69.206 success
	Start 172.16.69.205 success
	Start 172.16.69.208 success
	Start 172.16.69.207 success
...
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: 'U719-^8@FHGM0Ln4*p'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

The temporary password U719-^8@FHGM0Ln4*p above is what is used to log in to TiDB. (By the time this article was published, this password was no longer valid.)
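To try the login and replace the generated password with one of your own, a minimal sketch using the MySQL command-line client from the control machine (this assumes a mysql client is installed; the new password below is just a placeholder):

# Connect to the TiDB server on tidb0 (MySQL protocol, port 4000), entering the generated password when prompted
mysql -h 172.16.69.205 -P 4000 -u root -p
-- Then, at the MySQL prompt, set a password of your own (TiDB's default root account is 'root'@'%'):
ALTER USER 'root'@'%' IDENTIFIED BY 'MyNewPassword123!';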

  3. Checking the tidb-test cluster status again, the statuses are now Up instead of Down, meaning the cluster is running normally.
tiup cluster display tidb-test
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.69.205:2379/dashboard
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------   --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8

TiDB's port 4000 must be opened in the cloud hosts' security group.
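Before the client test below, it may be worth confirming that the port is actually reachable from outside. A quick sketch from a local machine, using tidb0's public IP from the table at the top:

# Check TCP connectivity to TiDB's MySQL port through the security group
nc -zv 47.109.27.111 4000
# Grafana (port 3000) and the TiDB Dashboard (served at /dashboard on PD's port 2379)
# also need security group rules if they are to be reached over the public network.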

  4. Run a simple connection test. The snippet below assumes MySQL Connector/J 8.x (which provides com.mysql.cj.jdbc.Driver) is on the classpath.
import java.sql.*;
public class TidbConnectionTest {
    public static void main(String[] args) throws Exception {
        // Connect to TiDB through its MySQL-compatible port 4000 on tidb0's public IP
        Class.forName("com.mysql.cj.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://47.109.27.111:4000/test?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai",
                "root", "U719-^8@FHGM0Ln4*p");
        System.out.println("MySQL connection established");
        conn.close();
    }
}

MySQL connection established

Full Deployment Process and Commands

Xshell 6 (Build 0204)
Copyright (c) 2002 NetSarang Computer, Inc. All rights reserved.

Type `help' to learn how to use Xshell prompt.
[C:\~]$ 

Connecting to 47.109.27.111:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

WARNING! The remote SSH server rejected X11 forwarding request.
Last login: Sat Aug 17 17:27:46 2024 from 118.112.72.89

Welcome to Alibaba Cloud Elastic Compute Service !

[root@tidb0 ~]# hostnamectl
   Static hostname: tidb0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 20190711105006363114529432776998
           Boot ID: 73cbbc178f38445c96e86df65e3663ca
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.21.3.el7.x86_64
      Architecture: x86-64
[root@tidb0 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5149k  100 5149k    0     0  3341k      0  0:00:01  0:00:01 --:--:-- 3341k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
[root@tidb0 ~]# source /root/.bash_profile
[root@tidb0 ~]# which tiup
/root/.tiup/bin/tiup
[root@tidb0 ~]# tiup cluster
Checking updates for component cluster... Timedout (after 2s)
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.16.0-linux-amd64.tar.gz 8.83 MiB / 8.83 MiB 100.00% 55.35 MiB/s         
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
[root@tidb0 ~]# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.16.0-linux-amd64.tar.gz 5.03 MiB / 5.03 MiB 100.00% 36.52 MiB/s            
Updated successfully!
component cluster version v1.16.0 is already installed
Updated successfully!
[root@tidb0 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.16.0/tiup-cluster
[root@tidb0 ~]# tiup cluster template > topology.yaml
[root@tidb0 ~]# ls
topology.yaml
[root@tidb0 ~]# vim topology.yaml 
[root@tidb0 ~]# tiup cluster check ./topology.yaml --user root -p
Input SSH password: 




+ Detect CPU Arch Name
  - Detecting node 172.16.69.205 Arch info ... Done
  - Detecting node 172.16.69.207 Arch info ... Done
  - Detecting node 172.16.69.206 Arch info ... Done
  - Detecting node 172.16.69.208 Arch info ... Done




+ Detect CPU OS Name
  - Detecting node 172.16.69.205 OS info ... Done
  - Detecting node 172.16.69.207 OS info ... Done
  - Detecting node 172.16.69.206 OS info ... Done
  - Detecting node 172.16.69.208 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
  - Getting system info of 172.16.69.205:22 ... ⠼ CopyComponent: component=insight, version=, remote=172.16.69.205:/tmp/ti...
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 172.16.69.205:22 ... Done
  - Getting system info of 172.16.69.207:22 ... Done
  - Getting system info of 172.16.69.206:22 ... Done
  - Getting system info of 172.16.69.208:22 ... Done
+ Check time zone
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
+ Check system requirements
  - Checking node 172.16.69.205 ... ⠦ CheckSys: host=172.16.69.205 type=exist
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
+ Cleanup check files
  - Cleanup check files on 172.16.69.205:22 ... Done
  - Cleanup check files on 172.16.69.207:22 ... Done
  - Cleanup check files on 172.16.69.206:22 ... Done
  - Cleanup check files on 172.16.69.208:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
172.16.69.207  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.207  memory        Pass    memory size is 8192MB
172.16.69.207  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.207  selinux       Pass    SELinux is disabled
172.16.69.207  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.207  service       Fail    service irqbalance is not running
172.16.69.207  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.207  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.207  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.207  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.207  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.207  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.207  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.207  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.207  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.207  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.206  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.206  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.206  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.206  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.206  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.206  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.206  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.206  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.206  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.206  memory        Pass    memory size is 8192MB
172.16.69.206  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.206  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.206  selinux       Pass    SELinux is disabled
172.16.69.206  service       Fail    service irqbalance is not running
172.16.69.208  selinux       Pass    SELinux is disabled
172.16.69.208  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.208  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.208  service       Fail    service irqbalance is not running
172.16.69.208  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.208  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.208  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.208  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.208  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.208  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.208  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.208  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.208  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.208  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.208  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.208  memory        Pass    memory size is 8192MB
172.16.69.208  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.205  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.205  service       Fail    service irqbalance is not running
172.16.69.205  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.205  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.205  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.205  memory        Pass    memory size is 8192MB
172.16.69.205  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.205  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.205  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.205  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.205  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.205  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.205  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.205  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.205  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.205  selinux       Pass    SELinux is disabled
[root@tidb0 ~]# tiup cluster check ./topology.yaml --apply --user root -p
Input SSH password: 




+ Detect CPU Arch Name
  - Detecting node 172.16.69.205 Arch info ... Done
  - Detecting node 172.16.69.207 Arch info ... Done
  - Detecting node 172.16.69.206 Arch info ... Done
  - Detecting node 172.16.69.208 Arch info ... Done




+ Detect CPU OS Name
  - Detecting node 172.16.69.205 OS info ... Done
  - Detecting node 172.16.69.207 OS info ... Done
  - Detecting node 172.16.69.206 OS info ... Done
  - Detecting node 172.16.69.208 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 172.16.69.205:22 ... ⠴ CopyComponent: component=insight, version=, remote=172.16.69.205:/tmp/ti...
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 172.16.69.205:22 ... Done
  - Getting system info of 172.16.69.207:22 ... Done
  - Getting system info of 172.16.69.206:22 ... Done
  - Getting system info of 172.16.69.208:22 ... Done
+ Check time zone
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
+ Check system requirements
  - Checking node 172.16.69.205 ... ⠦ CheckSys: host=172.16.69.205 type=exist
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.205 ... Done
  - Checking node 172.16.69.207 ... Done
  - Checking node 172.16.69.206 ... Done
  - Checking node 172.16.69.208 ... Done
+ Cleanup check files
  - Cleanup check files on 172.16.69.205:22 ... Done
  - Cleanup check files on 172.16.69.207:22 ... Done
  - Cleanup check files on 172.16.69.206:22 ... Done
  - Cleanup check files on 172.16.69.208:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
172.16.69.205  memory        Pass    memory size is 8192MB
172.16.69.205  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.205  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.205  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.205  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.205  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.205  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.205  service       Fail    will try to 'start irqbalance.service'
172.16.69.205  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.205  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.205  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.205  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.205  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.205  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.205  selinux       Pass    SELinux is disabled
172.16.69.205  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.207  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.207  memory        Pass    memory size is 8192MB
172.16.69.207  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.207  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.207  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.207  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.207  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.207  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.207  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.207  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.207  service       Fail    will try to 'start irqbalance.service'
172.16.69.207  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.207  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.207  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.207  selinux       Pass    SELinux is disabled
172.16.69.206  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.206  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.206  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.206  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.206  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.206  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.206  service       Fail    will try to 'start irqbalance.service'
172.16.69.206  memory        Pass    memory size is 8192MB
172.16.69.206  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.206  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.206  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.206  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.206  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.206  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.206  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.206  selinux       Pass    SELinux is disabled
172.16.69.206  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.208  selinux       Pass    SELinux is disabled
172.16.69.208  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.208  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.208  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.208  memory        Pass    memory size is 8192MB
172.16.69.208  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.208  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.208  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.208  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.208  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.208  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.208  service       Fail    will try to 'start irqbalance.service'
172.16.69.208  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.208  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.208  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.208  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.208  thp           Fail    will try to disable THP, please check again after reboot
+ Try to apply changes to fix failed checks
  - Applying changes on 172.16.69.205 ... ⠙ Sysctl: host=172.16.69.205 net.ipv4.tcp_syncookies = 0
  - Applying changes on 172.16.69.207 ... ⠙ Sysctl: host=172.16.69.207 net.ipv4.tcp_syncookies = 0
  - Applying changes on 172.16.69.206 ... ⠙ Sysctl: host=172.16.69.206 net.ipv4.tcp_syncookies = 0
+ Try to apply changes to fix failed checks
  - Applying changes on 172.16.69.205 ... ⠹ Shell: host=172.16.69.205, sudo=true, command=`if [ -d /sys/kernel/mm/transpar...
  - Applying changes on 172.16.69.207 ... ⠹ Sysctl: host=172.16.69.207 net.ipv4.tcp_syncookies = 0
  - Applying changes on 172.16.69.206 ... ⠹ Shell: host=172.16.69.206, sudo=true, command=`if [ -d /sys/kernel/mm/transpar...
+ Try to apply changes to fix failed checks
  - Applying changes on 172.16.69.205 ... Done
  - Applying changes on 172.16.69.207 ... Done
  - Applying changes on 172.16.69.206 ... Done
  - Applying changes on 172.16.69.208 ... Done
[root@tidb0 ~]# tiup cluster list
Name  User  Version  Path  PrivateKey
----  ----  -------  ----  ----------
[root@tidb0 ~]# tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p
Input SSH password: 




+ Detect CPU Arch Name
  - Detecting node 172.16.69.205 Arch info ... Done
  - Detecting node 172.16.69.207 Arch info ... Done
  - Detecting node 172.16.69.206 Arch info ... Done
  - Detecting node 172.16.69.208 Arch info ... Done




+ Detect CPU OS Name
  - Detecting node 172.16.69.205 OS info ... Done
  - Detecting node 172.16.69.207 OS info ... Done
  - Detecting node 172.16.69.206 OS info ... Done
  - Detecting node 172.16.69.208 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v5.4.1
Role          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.69.205  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.69.207  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.69.206  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.69.208  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          172.16.69.205  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus    172.16.69.205  9090/12020   linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.69.205  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.69.205  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v5.4.1 (linux/amd64) ... Done
  - Download tikv:v5.4.1 (linux/amd64) ... Done
  - Download tidb:v5.4.1 (linux/amd64) ... Done
  - Download prometheus:v5.4.1 (linux/amd64) ... Done
  - Download grafana:v5.4.1 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.69.205:22 ... Done
  - Prepare 172.16.69.207:22 ... Done
  - Prepare 172.16.69.206:22 ... Done
  - Prepare 172.16.69.208:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 172.16.69.205 ... Done
  - Copy tikv -> 172.16.69.207 ... Done
  - Copy tikv -> 172.16.69.206 ... Done
  - Copy tikv -> 172.16.69.208 ... Done
  - Copy tidb -> 172.16.69.205 ... Done
  - Copy prometheus -> 172.16.69.205 ... Done
  - Copy grafana -> 172.16.69.205 ... Done
  - Copy alertmanager -> 172.16.69.205 ... Done
  - Deploy node_exporter -> 172.16.69.205 ... Done
  - Deploy node_exporter -> 172.16.69.207 ... Done
  - Deploy node_exporter -> 172.16.69.206 ... Done
  - Deploy node_exporter -> 172.16.69.208 ... Done
  - Deploy blackbox_exporter -> 172.16.69.205 ... Done
  - Deploy blackbox_exporter -> 172.16.69.207 ... Done
  - Deploy blackbox_exporter -> 172.16.69.206 ... Done
  - Deploy blackbox_exporter -> 172.16.69.208 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 172.16.69.205:2379 ... Done
  - Generate config tikv -> 172.16.69.207:20160 ... Done
  - Generate config tikv -> 172.16.69.206:20160 ... Done
  - Generate config tikv -> 172.16.69.208:20160 ... Done
  - Generate config tidb -> 172.16.69.205:4000 ... Done
  - Generate config prometheus -> 172.16.69.205:9090 ... Done
  - Generate config grafana -> 172.16.69.205:3000 ... Done
  - Generate config alertmanager -> 172.16.69.205:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 172.16.69.206 ... Done
  - Generate config node_exporter -> 172.16.69.208 ... Done
  - Generate config node_exporter -> 172.16.69.205 ... Done
  - Generate config node_exporter -> 172.16.69.207 ... Done
  - Generate config blackbox_exporter -> 172.16.69.205 ... Done
  - Generate config blackbox_exporter -> 172.16.69.207 ... Done
  - Generate config blackbox_exporter -> 172.16.69.206 ... Done
  - Generate config blackbox_exporter -> 172.16.69.208 ... Done
Enabling component pd
	Enabling instance 172.16.69.205:2379
	Enable instance 172.16.69.205:2379 success
Enabling component tikv
	Enabling instance 172.16.69.208:20160
	Enabling instance 172.16.69.206:20160
	Enabling instance 172.16.69.207:20160
	Enable instance 172.16.69.206:20160 success
	Enable instance 172.16.69.208:20160 success
	Enable instance 172.16.69.207:20160 success
Enabling component tidb
	Enabling instance 172.16.69.205:4000
	Enable instance 172.16.69.205:4000 success
Enabling component prometheus
	Enabling instance 172.16.69.205:9090
	Enable instance 172.16.69.205:9090 success
Enabling component grafana
	Enabling instance 172.16.69.205:3000
	Enable instance 172.16.69.205:3000 success
Enabling component alertmanager
	Enabling instance 172.16.69.205:9093
	Enable instance 172.16.69.205:9093 success
Enabling component node_exporter
	Enabling instance 172.16.69.208
	Enabling instance 172.16.69.207
	Enabling instance 172.16.69.206
	Enabling instance 172.16.69.205
	Enable 172.16.69.205 success
	Enable 172.16.69.206 success
	Enable 172.16.69.208 success
	Enable 172.16.69.207 success
Enabling component blackbox_exporter
	Enabling instance 172.16.69.208
	Enabling instance 172.16.69.205
	Enabling instance 172.16.69.207
	Enabling instance 172.16.69.206
	Enable 172.16.69.205 success
	Enable 172.16.69.206 success
	Enable 172.16.69.207 success
	Enable 172.16.69.208 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test --init`
[root@tidb0 ~]# tiup cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  tidb  v5.4.1   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8
[root@tidb0 ~]# tiup cluster start tidb-test --init
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.206
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.208
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.207
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [ Serial ] - StartCluster
Starting component pd
	Starting instance 172.16.69.205:2379
	Start instance 172.16.69.205:2379 success
Starting component tikv
	Starting instance 172.16.69.208:20160
	Starting instance 172.16.69.207:20160
	Starting instance 172.16.69.206:20160
	Start instance 172.16.69.206:20160 success
	Start instance 172.16.69.208:20160 success
	Start instance 172.16.69.207:20160 success
Starting component tidb
	Starting instance 172.16.69.205:4000
	Start instance 172.16.69.205:4000 success
Starting component prometheus
	Starting instance 172.16.69.205:9090
	Start instance 172.16.69.205:9090 success
Starting component grafana
	Starting instance 172.16.69.205:3000
	Start instance 172.16.69.205:3000 success
Starting component alertmanager
	Starting instance 172.16.69.205:9093
	Start instance 172.16.69.205:9093 success
Starting component node_exporter
	Starting instance 172.16.69.207
	Starting instance 172.16.69.206
	Starting instance 172.16.69.208
	Starting instance 172.16.69.205
	Start 172.16.69.206 success
	Start 172.16.69.205 success
	Start 172.16.69.208 success
	Start 172.16.69.207 success
Starting component blackbox_exporter
	Starting instance 172.16.69.207
	Starting instance 172.16.69.205
	Starting instance 172.16.69.208
	Starting instance 172.16.69.206
	Start 172.16.69.206 success
	Start 172.16.69.205 success
	Start 172.16.69.208 success
	Start 172.16.69.207 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: 'U719-^8@FHGM0Ln4*p'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.69.205:2379/dashboard
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------   --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8
[root@tidb0 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        605M        5.1G        508K        1.7G        6.5G
Swap:            0B          0B          0B
[root@tidb0 ~]# 
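
For completeness: when the test is finished, the cluster can be stopped or removed with TiUP before releasing the instances, for example:

# Stop all services in the cluster
tiup cluster stop tidb-test
# Or tear the cluster down entirely, removing its data and deploy directories on every host
tiup cluster destroy tidb-test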
