1. TiDB 7.2 Community Edition Installation and Deployment Example

1. Download the TiDB 7.2 community edition package

tidb-community-server-v7.2.0-linux-amd64.tar.gz 

2. Pre-installation preparation

(1) Environment planning

This layout is for testing only; a production environment must not be deployed this way.

Node 54 serves as the control machine (the hostname mappings are sketched below).
10.168.111.54 tidb1 (control machine)  PD, TiDB, TiKV, TiFlash
10.168.111.55 tidb2                    PD, TiDB, TiKV, TiFlash
10.168.111.56 tidb3                    PD, TiDB, TiKV, TiFlash
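
For convenience, the three hostnames can be resolved on every node. A minimal /etc/hosts sketch, assuming the hostnames above:

cat >> /etc/hosts <<EOF
10.168.111.54 tidb1
10.168.111.55 tidb2
10.168.111.56 tidb3
EOF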

(2) Install dependency packages

mount /dev/cdrom /mnt 
yum -y install make automake libtool pkgconfig libaio-devel libtool openssl-devel.x86_64 openssl.x86_64 epel-release git curl sshpass python2-pip ntp numactl 
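
After the packages are installed, time synchronization should be running on all nodes. A sketch, assuming a systemd-based OS such as CentOS 7 with the ntp package from the yum line above:

systemctl start ntpd
systemctl enable ntpd
ntpstat    # should report "synchronised to NTP server"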

(3) Create the user and directories, and configure sudo privileges

groupadd tidb 
useradd -m -d /home/tidb tidb 
passwd tidb 


mkdir /tidb
chown -R tidb:tidb /tidb 



cd /etc/
chmod +w sudoers

vi sudoers     --add the following line:
tidb ALL=(ALL) NOPASSWD: ALL
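
A quick check that the sudo rule works for the tidb user (it should print root without prompting for a password):

su - tidb
sudo whoami    # expected output: root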

(4) Install TiUP

su - root
tar xvf tidb-community-server-v7.2.0-linux-amd64.tar.gz
mv tidb-community-server-v7.2.0-linux-amd64 /soft
chown -R tidb:tidb /soft/tidb-community-server-v7.2.0-linux-amd64
su - tidb

cd /soft/tidb-community-server-v7.2.0-linux-amd64
sh local_install.sh   --install TiUP
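
local_install.sh sets up a local offline mirror and prints a reminder to reload the shell profile. A quick verification sketch as the tidb user (the profile path may differ on your system):

source /home/tidb/.bash_profile
tiup --version
tiup mirror show    # should point to the local offline mirror directory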

(5) Write the topology file

cat config.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb/tidb-deploy"
  data_dir: "/tidb/tidb-data"
pd_servers:
  - host: 10.168.111.54
  - host: 10.168.111.55
  - host: 10.168.111.56
tikv_servers:
  - host: 10.168.111.54
  - host: 10.168.111.55
  - host: 10.168.111.56
monitoring_servers:
  - host: 10.168.111.54
grafana_servers:
  - host: 10.168.111.54
alertmanager_servers:
  - host: 10.168.111.54
tiflash_servers:
  - host: 10.168.111.54
  - host: 10.168.111.55
  - host: 10.168.111.56
tidb_servers:
  - host: 10.168.111.54
  - host: 10.168.111.55
  - host: 10.168.111.56
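
Before running deploy, the topology and target hosts can be pre-checked with tiup's built-in checker. A sketch (it reports items such as CPU governor, swap, firewall, and numactl; --apply attempts to fix some of them automatically):

tiup cluster check /soft/config.yaml --user root -p
tiup cluster check /soft/config.yaml --apply --user root -p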

(6) Deploy the cluster

A cluster with the same name has already been deployed on my machines, so it must be destroyed before redeploying. Skip this step for a first-time deployment.

Destroy the old cluster: tidbcluster

[root@tidb1 soft]# tiup cluster destroy tidbcluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster destroy tidbcluster

  ██     ██  █████  ██████  ███    ██ ██ ███    ██  ██████
  ██     ██ ██   ██ ██   ██ ████   ██ ██ ████   ██ ██
  ██  █  ██ ███████ ██████  ██ ██  ██ ██ ██ ██  ██ ██   ███
  ██ ███ ██ ██   ██ ██   ██ ██  ██ ██ ██ ██  ██ ██ ██    ██
   ███ ███  ██   ██ ██   ██ ██   ████ ██ ██   ████  ██████

This operation will destroy tidb v7.2.0 cluster tidbcluster and its data.
Are you sure to continue?
(Type "Yes, I know my cluster and data will be deleted." to continue)
: Yes, I know my cluster and data will be deleted.     --type this sentence after the colon.
Destroying cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [ Serial ] - StopCluster
Stopping component alertmanager
        Stopping instance 10.168.111.54
        Stop alertmanager 10.168.111.54:9093 success
Stopping component grafana
        Stopping instance 10.168.111.54
        Stop grafana 10.168.111.54:3000 success
Stopping component prometheus
        Stopping instance 10.168.111.54
        Stop prometheus 10.168.111.54:9090 success
Stopping component tiflash
        Stopping instance 10.168.111.55
        Stop tiflash 10.168.111.55:9000 success
Stopping component tikv
        Stopping instance 10.168.111.56
        Stopping instance 10.168.111.54
        Stopping instance 10.168.111.55
        Stop tikv 10.168.111.56:20160 success
        Stop tikv 10.168.111.55:20160 success
        Stop tikv 10.168.111.54:20160 success
Stopping component pd
        Stopping instance 10.168.111.56
        Stopping instance 10.168.111.54
        Stopping instance 10.168.111.55
        Stop pd 10.168.111.56:2379 success
        Stop pd 10.168.111.55:2379 success
        Stop pd 10.168.111.54:2379 success
Stopping component node_exporter
        Stopping instance 10.168.111.56
        Stopping instance 10.168.111.54
        Stopping instance 10.168.111.55
        Stop 10.168.111.56 success
        Stop 10.168.111.55 success
        Stop 10.168.111.54 success
Stopping component blackbox_exporter
        Stopping instance 10.168.111.56
        Stopping instance 10.168.111.54
        Stopping instance 10.168.111.55
        Stop 10.168.111.56 success
        Stop 10.168.111.55 success
        Stop 10.168.111.54 success
+ [ Serial ] - DestroyCluster
Destroying component alertmanager
        Destroying instance 10.168.111.54
Destroy 10.168.111.54 success
- Destroy alertmanager paths: [/tidb/tidb-data/alertmanager-9093 /tidb/tidb-deploy/alertmanager-9093/log /tidb/tidb-deploy/alertmanager-9093 /etc/systemd/system/alertmanager-9093.service]
Destroying component grafana
        Destroying instance 10.168.111.54
Destroy 10.168.111.54 success
- Destroy grafana paths: [/tidb/tidb-deploy/grafana-3000 /etc/systemd/system/grafana-3000.service]
Destroying component prometheus
        Destroying instance 10.168.111.54
Destroy 10.168.111.54 success
- Destroy prometheus paths: [/tidb/tidb-data/prometheus-9090 /tidb/tidb-deploy/prometheus-9090/log /tidb/tidb-deploy/prometheus-9090 /etc/systemd/system/prometheus-9090.service]
Destroying component tiflash
        Destroying instance 10.168.111.55
Destroy 10.168.111.55 success
- Destroy tiflash paths: [/tidb/tidb-data/tiflash-9000 /tidb/tidb-deploy/tiflash-9000/log /tidb/tidb-deploy/tiflash-9000 /etc/systemd/system/tiflash-9000.service]
Destroying component tikv
        Destroying instance 10.168.111.54
Destroy 10.168.111.54 success
- Destroy tikv paths: [/tidb/tidb-deploy/tikv-20160/log /tidb/tidb-deploy/tikv-20160 /etc/systemd/system/tikv-20160.service /tidb/tidb-data/tikv-20160]
        Destroying instance 10.168.111.55
Destroy 10.168.111.55 success
- Destroy tikv paths: [/tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160/log /tidb/tidb-deploy/tikv-20160 /etc/systemd/system/tikv-20160.service]
        Destroying instance 10.168.111.56
Destroy 10.168.111.56 success
- Destroy tikv paths: [/tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160/log /tidb/tidb-deploy/tikv-20160 /etc/systemd/system/tikv-20160.service]
Destroying component pd
        Destroying instance 10.168.111.54
Destroy 10.168.111.54 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
        Destroying instance 10.168.111.55
Destroy 10.168.111.55 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
        Destroying instance 10.168.111.56
Destroy 10.168.111.56 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Destroying monitored 10.168.111.54
        Destroying instance 10.168.111.54
Destroy monitored on 10.168.111.54 success
Destroying monitored 10.168.111.55
        Destroying instance 10.168.111.55
Destroy monitored on 10.168.111.55 success
Destroying monitored 10.168.111.56
        Destroying instance 10.168.111.56
Destroy monitored on 10.168.111.56 success
Clean global directories 10.168.111.54
        Clean directory /tidb/tidb-deploy on instance 10.168.111.54
        Clean directory /tidb/tidb-data on instance 10.168.111.54
Clean global directories 10.168.111.54 success
Clean global directories 10.168.111.55
        Clean directory /tidb/tidb-deploy on instance 10.168.111.55
        Clean directory /tidb/tidb-data on instance 10.168.111.55
Clean global directories 10.168.111.55 success
Clean global directories 10.168.111.56
        Clean directory /tidb/tidb-deploy on instance 10.168.111.56
        Clean directory /tidb/tidb-data on instance 10.168.111.56
Clean global directories 10.168.111.56 success
Delete public key 10.168.111.54
Delete public key 10.168.111.54 success
Delete public key 10.168.111.55
Delete public key 10.168.111.55 success
Delete public key 10.168.111.56
Delete public key 10.168.111.56 success
Destroyed cluster `tidbcluster` successfully

The word "successfully" indicates that the previous cluster has been removed.

Now deploy the cluster:

[root@tidb1 soft]# tiup cluster deploy tidbcluster v7.2.0 /soft/config.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster deploy tidbcluster v7.2.0 /soft/config.yaml --user root -p
Input SSH password: rootroot   (my operating system root password is rootroot)
+ Detect CPU Arch Name
  - Detecting node 10.168.111.54 Arch info ... Done
  - Detecting node 10.168.111.55 Arch info ... Done
  - Detecting node 10.168.111.56 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 10.168.111.54 OS info ... Done
  - Detecting node 10.168.111.55 OS info ... Done
  - Detecting node 10.168.111.56 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidbcluster
Cluster version: v7.2.0
Role          Host           Ports                            OS/Arch       Directories
----          ----           -----                            -------       -----------
pd            10.168.111.54  2379/2380                        linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
pd            10.168.111.55  2379/2380                        linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
pd            10.168.111.56  2379/2380                        linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
tikv          10.168.111.54  20160/20180                      linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tikv          10.168.111.55  20160/20180                      linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tikv          10.168.111.56  20160/20180                      linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tidb          10.168.111.54  4000/10080                       linux/x86_64  /tidb/tidb-deploy/tidb-4000
tidb          10.168.111.55  4000/10080                       linux/x86_64  /tidb/tidb-deploy/tidb-4000
tidb          10.168.111.56  4000/10080                       linux/x86_64  /tidb/tidb-deploy/tidb-4000
tiflash       10.168.111.54  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb/tidb-deploy/tiflash-9000,/tidb/tidb-data/tiflash-9000
tiflash       10.168.111.55  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb/tidb-deploy/tiflash-9000,/tidb/tidb-data/tiflash-9000
tiflash       10.168.111.56  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb/tidb-deploy/tiflash-9000,/tidb/tidb-data/tiflash-9000
prometheus    10.168.111.54  9090/12020                       linux/x86_64  /tidb/tidb-deploy/prometheus-9090,/tidb/tidb-data/prometheus-9090
grafana       10.168.111.54  3000                             linux/x86_64  /tidb/tidb-deploy/grafana-3000
alertmanager  10.168.111.54  9093/9094                        linux/x86_64  /tidb/tidb-deploy/alertmanager-9093,/tidb/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y  (enter y to continue)
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v7.2.0 (linux/amd64) ... Done
  - Download tikv:v7.2.0 (linux/amd64) ... Done
  - Download tidb:v7.2.0 (linux/amd64) ... Done
  - Download tiflash:v7.2.0 (linux/amd64) ... Done
  - Download prometheus:v7.2.0 (linux/amd64) ... Done
  - Download grafana:v7.2.0 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 10.168.111.54:22 ... Done
  - Prepare 10.168.111.55:22 ... Done
  - Prepare 10.168.111.56:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 10.168.111.54 ... Done
  - Copy pd -> 10.168.111.55 ... Done
  - Copy pd -> 10.168.111.56 ... Done
  - Copy tikv -> 10.168.111.54 ... Done
  - Copy tikv -> 10.168.111.55 ... Done
  - Copy tikv -> 10.168.111.56 ... Done
  - Copy tidb -> 10.168.111.54 ... Done
  - Copy tidb -> 10.168.111.55 ... Done
  - Copy tidb -> 10.168.111.56 ... Done
  - Copy tiflash -> 10.168.111.54 ... Done
  - Copy tiflash -> 10.168.111.55 ... Done
  - Copy tiflash -> 10.168.111.56 ... Done
  - Copy prometheus -> 10.168.111.54 ... Done
  - Copy grafana -> 10.168.111.54 ... Done
  - Copy alertmanager -> 10.168.111.54 ... Done
  - Deploy node_exporter -> 10.168.111.54 ... Done
  - Deploy node_exporter -> 10.168.111.55 ... Done
  - Deploy node_exporter -> 10.168.111.56 ... Done
  - Deploy blackbox_exporter -> 10.168.111.54 ... Done
  - Deploy blackbox_exporter -> 10.168.111.55 ... Done
  - Deploy blackbox_exporter -> 10.168.111.56 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 10.168.111.54:2379 ... Done
  - Generate config pd -> 10.168.111.55:2379 ... Done
  - Generate config pd -> 10.168.111.56:2379 ... Done
  - Generate config tikv -> 10.168.111.54:20160 ... Done
  - Generate config tikv -> 10.168.111.55:20160 ... Done
  - Generate config tikv -> 10.168.111.56:20160 ... Done
  - Generate config tidb -> 10.168.111.54:4000 ... Done
  - Generate config tidb -> 10.168.111.55:4000 ... Done
  - Generate config tidb -> 10.168.111.56:4000 ... Done
  - Generate config tiflash -> 10.168.111.54:9000 ... Done
  - Generate config tiflash -> 10.168.111.55:9000 ... Done
  - Generate config tiflash -> 10.168.111.56:9000 ... Done
  - Generate config prometheus -> 10.168.111.54:9090 ... Done
  - Generate config grafana -> 10.168.111.54:3000 ... Done
  - Generate config alertmanager -> 10.168.111.54:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 10.168.111.54 ... Done
  - Generate config node_exporter -> 10.168.111.55 ... Done
  - Generate config node_exporter -> 10.168.111.56 ... Done
  - Generate config blackbox_exporter -> 10.168.111.54 ... Done
  - Generate config blackbox_exporter -> 10.168.111.55 ... Done
  - Generate config blackbox_exporter -> 10.168.111.56 ... Done
Enabling component pd
        Enabling instance 10.168.111.56:2379
        Enabling instance 10.168.111.54:2379
        Enabling instance 10.168.111.55:2379
        Enable instance 10.168.111.56:2379 success
        Enable instance 10.168.111.55:2379 success
        Enable instance 10.168.111.54:2379 success
Enabling component tikv
        Enabling instance 10.168.111.56:20160
        Enabling instance 10.168.111.54:20160
        Enabling instance 10.168.111.55:20160
        Enable instance 10.168.111.56:20160 success
        Enable instance 10.168.111.54:20160 success
        Enable instance 10.168.111.55:20160 success
Enabling component tidb
        Enabling instance 10.168.111.56:4000
        Enabling instance 10.168.111.54:4000
        Enabling instance 10.168.111.55:4000
        Enable instance 10.168.111.56:4000 success
        Enable instance 10.168.111.55:4000 success
        Enable instance 10.168.111.54:4000 success
Enabling component tiflash
        Enabling instance 10.168.111.56:9000
        Enabling instance 10.168.111.54:9000
        Enabling instance 10.168.111.55:9000
        Enable instance 10.168.111.56:9000 success
        Enable instance 10.168.111.55:9000 success
        Enable instance 10.168.111.54:9000 success
Enabling component prometheus
        Enabling instance 10.168.111.54:9090
        Enable instance 10.168.111.54:9090 success
Enabling component grafana
        Enabling instance 10.168.111.54:3000
        Enable instance 10.168.111.54:3000 success
Enabling component alertmanager
        Enabling instance 10.168.111.54:9093
        Enable instance 10.168.111.54:9093 success
Enabling component node_exporter
        Enabling instance 10.168.111.56
        Enabling instance 10.168.111.54
        Enabling instance 10.168.111.55
        Enable 10.168.111.56 success
        Enable 10.168.111.55 success
        Enable 10.168.111.54 success
Enabling component blackbox_exporter
        Enabling instance 10.168.111.56
        Enabling instance 10.168.111.54
        Enabling instance 10.168.111.55
        Enable 10.168.111.56 success
        Enable 10.168.111.55 success
        Enable 10.168.111.54 success
Cluster `tidbcluster` deployed successfully, you can start it with command: `tiup cluster start tidbcluster --init`

The last line prompts us to initialize the cluster: tiup cluster start tidbcluster --init

(7) Initialize the cluster

[root@tidb1 soft]# tiup cluster start tidbcluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster start tidbcluster --init
Starting cluster tidbcluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.54
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.55
+ [Parallel] - UserSSH: user=tidb, host=10.168.111.56
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 10.168.111.56:2379
        Starting instance 10.168.111.55:2379
        Starting instance 10.168.111.54:2379
        Start instance 10.168.111.56:2379 success
        Start instance 10.168.111.55:2379 success
        Start instance 10.168.111.54:2379 success
Starting component tikv
        Starting instance 10.168.111.56:20160
        Starting instance 10.168.111.54:20160
        Starting instance 10.168.111.55:20160
        Start instance 10.168.111.54:20160 success
        Start instance 10.168.111.56:20160 success
        Start instance 10.168.111.55:20160 success
Starting component tidb
        Starting instance 10.168.111.56:4000
        Starting instance 10.168.111.54:4000
        Starting instance 10.168.111.55:4000
        Start instance 10.168.111.56:4000 success
        Start instance 10.168.111.55:4000 success
        Start instance 10.168.111.54:4000 success
Starting component tiflash
        Starting instance 10.168.111.55:9000
        Starting instance 10.168.111.56:9000
        Starting instance 10.168.111.54:9000
        Start instance 10.168.111.56:9000 success
        Start instance 10.168.111.55:9000 success
        Start instance 10.168.111.54:9000 success
Starting component prometheus
        Starting instance 10.168.111.54:9090
        Start instance 10.168.111.54:9090 success
Starting component grafana
        Starting instance 10.168.111.54:3000
        Start instance 10.168.111.54:3000 success
Starting component alertmanager
        Starting instance 10.168.111.54:9093
        Start instance 10.168.111.54:9093 success
Starting component node_exporter
        Starting instance 10.168.111.56
        Starting instance 10.168.111.54
        Starting instance 10.168.111.55
        Start 10.168.111.56 success
        Start 10.168.111.55 success
        Start 10.168.111.54 success
Starting component blackbox_exporter
        Starting instance 10.168.111.56
        Starting instance 10.168.111.54
        Starting instance 10.168.111.55
        Start 10.168.111.56 success
        Start 10.168.111.55 success
        Start 10.168.111.54 success
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Started cluster `tidbcluster` successfully
Failed to set root password of TiDB database to '9D4^q3-25+JpBn6bE*'
Error: dial tcp 10.168.111.56:4000: connect: connection refused
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2023-07-22-14-57-05.log.

As shown in the last step, setting the root password failed with a connection refused error.

(8) Check the cluster status

[root@tidb1 soft]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster list
Name         User  Version  Path                                              PrivateKey
----         ----  -------  ----                                              ----------
tidbcluster  tidb  v7.2.0   /root/.tiup/storage/cluster/clusters/tidbcluster  /root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa

[root@tidb1 soft]# tiup cluster display tidbcluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster display tidbcluster
Cluster type:       tidb
Cluster name:       tidbcluster
Cluster version:    v7.2.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://10.168.111.56:2379/dashboard
Grafana URL:        http://10.168.111.54:3000
ID                   Role          Host           Ports                            OS/Arch       Status  Data Dir                           Deploy Dir
--                   ----          ----           -----                            -------       ------  --------                           ----------
10.168.111.54:9093   alertmanager  10.168.111.54  9093/9094                        linux/x86_64  Up      /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
10.168.111.54:3000   grafana       10.168.111.54  3000                             linux/x86_64  Up      -                                  /tidb/tidb-deploy/grafana-3000
10.168.111.54:2379   pd            10.168.111.54  2379/2380                        linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.55:2379   pd            10.168.111.55  2379/2380                        linux/x86_64  Up|L    /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.56:2379   pd            10.168.111.56  2379/2380                        linux/x86_64  Up|UI   /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.54:9090   prometheus    10.168.111.54  9090/12020                       linux/x86_64  Up      /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
10.168.111.54:4000   tidb          10.168.111.54  4000/10080                       linux/x86_64  Down    -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.55:4000   tidb          10.168.111.55  4000/10080                       linux/x86_64  Down    -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.56:4000   tidb          10.168.111.56  4000/10080                       linux/x86_64  Down    -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.54:9000   tiflash       10.168.111.54  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.55:9000   tiflash       10.168.111.55  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.56:9000   tiflash       10.168.111.56  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.54:20160  tikv          10.168.111.54  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
10.168.111.55:20160  tikv          10.168.111.55  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
10.168.111.56:20160  tikv          10.168.111.56  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 15

There are 15 nodes in total, and the three TiDB nodes are all in the Down state, which is why the root password could not be set.
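
When a TiDB instance stays Down, its logs are the first place to look. A sketch, assuming the default layout where logs live under the deploy directory:

tail -n 50 /tidb/tidb-deploy/tidb-4000/log/tidb.log
tail -n 50 /tidb/tidb-deploy/tidb-4000/log/tidb_stderr.log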

(9) Try to start the three TiDB server nodes

su - root 
cd /tidb/tidb-deploy/tidb-4000/scripts
sh run_tidb.sh
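
Running the service script by hand works, but the same thing can be done through tiup itself by limiting the start to the tidb role, which keeps systemd and tiup's metadata consistent (a sketch):

tiup cluster start tidbcluster -R tidb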

(10) Check the cluster status again

[root@tidb1 bin]# tiup cluster display tidbcluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.3/tiup-cluster display tidbcluster
Cluster type:       tidb
Cluster name:       tidbcluster
Cluster version:    v7.2.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://10.168.111.56:2379/dashboard
Grafana URL:        http://10.168.111.54:3000
ID                   Role          Host           Ports                            OS/Arch       Status  Data Dir                           Deploy Dir
--                   ----          ----           -----                            -------       ------  --------                           ----------
10.168.111.54:9093   alertmanager  10.168.111.54  9093/9094                        linux/x86_64  Up      /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
10.168.111.54:3000   grafana       10.168.111.54  3000                             linux/x86_64  Up      -                                  /tidb/tidb-deploy/grafana-3000
10.168.111.54:2379   pd            10.168.111.54  2379/2380                        linux/x86_64  Up|L    /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.55:2379   pd            10.168.111.55  2379/2380                        linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.56:2379   pd            10.168.111.56  2379/2380                        linux/x86_64  Up|UI   /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
10.168.111.54:9090   prometheus    10.168.111.54  9090/12020                       linux/x86_64  Up      /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
10.168.111.54:4000   tidb          10.168.111.54  4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.55:4000   tidb          10.168.111.55  4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.56:4000   tidb          10.168.111.56  4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
10.168.111.54:9000   tiflash       10.168.111.54  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.55:9000   tiflash       10.168.111.55  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.56:9000   tiflash       10.168.111.56  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
10.168.111.54:20160  tikv          10.168.111.54  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
10.168.111.55:20160  tikv          10.168.111.55  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
10.168.111.56:20160  tikv          10.168.111.56  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 15

The three TiDB server nodes (port 4000) have now been started manually. Why the --init step failed and the three TiDB server instances had to be started by hand is not yet clear.

(11) Restart the cluster

Restart the cluster to see whether the three TiDB server nodes come up normally.

tiup cluster display tidbcluster 
tiup cluster stop tidbcluster
tiup cluster start tidbcluster 

After the restart, the TiDB servers are brought up automatically, which indicates the cluster is currently healthy.

(12) Login test

Because setting the root password failed during initialization, the first login requires no password. After a password is set, subsequent logins must provide it.

mysql -u root -h 10.168.111.54 -P 4000  
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
mysql> create database sjzt;
Query OK, 0 rows affected (0.11 sec)
mysql> use sjzt;
Database changed
mysql> create table test1(id int,name varchar(100));
Query OK, 0 rows affected (0.11 sec)
mysql> insert into test1 values(1,'薛双奇');
Query OK, 1 row affected (0.01 sec)
mysql> insert into test1 values(2,'薛双奇2');
Query OK, 1 row affected (0.01 sec)
mysql> insert into test1 values(3,'薛双奇3');
Query OK, 1 row affected (0.01 sec)
mysql> select * from test1;
+------+---------+
| id   | name    |
+------+---------+
|    1 | 薛双奇  |
|    2 | 薛双奇2 |
|    3 | 薛双奇3 |
+------+---------+
3 rows in set (0.00 sec)
mysql> alter user user() identified by 'rootroot';   --change the password.
Query OK, 0 rows affected (0.05 sec)
[root@tidb2 log]#  mysql -u root -h 10.168.111.54 -P 4000 -prootroot
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| sjzt               |
| test               |
+--------------------+
6 rows in set (0.00 sec)
mysql> select user,host from mysql.user;
+------+------+
| user | host |
+------+------+
| root | %    |
+------+------+
1 row in set (0.00 sec)



Try logging in on all three nodes to confirm they work:
 mysql -u root -h 10.168.111.54 -P 4000 -prootroot
 mysql -u root -h 10.168.111.55 -P 4000 -prootroot
 mysql -u root -h 10.168.111.56 -P 4000 -prootroot
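
A non-interactive sketch to confirm that every TiDB server returns the same data (using the table created above):

mysql -u root -h 10.168.111.54 -P 4000 -prootroot -e "select * from sjzt.test1;"
mysql -u root -h 10.168.111.55 -P 4000 -prootroot -e "select * from sjzt.test1;"
mysql -u root -h 10.168.111.56 -P 4000 -prootroot -e "select * from sjzt.test1;"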

Data can be read and written normally, and the root password has been changed to rootroot.

(13) Check TiKV capacity and status

mysql> select STORE_ID,ADDRESS,STORE_STATE,STORE_STATE_NAME,CAPACITY,AVAILABLE,UPTIME from INFORMATION_SCHEMA.TIKV_STORE_STATUS;
+----------+---------------------+-------------+------------------+----------+-----------+-----------------+
| STORE_ID | ADDRESS             | STORE_STATE | STORE_STATE_NAME | CAPACITY | AVAILABLE | UPTIME          |
+----------+---------------------+-------------+------------------+----------+-----------+-----------------+
|      230 | 10.168.111.54:3930  |           0 | Up               | 149GiB   | 104.6GiB  | 3m19.849420252s |
|        1 | 10.168.111.54:20160 |           0 | Up               | 149GiB   | 104.6GiB  | 3m40.035903886s |
|        4 | 10.168.111.55:20160 |           0 | Up               | 149GiB   | 107.3GiB  | 4m51.763975952s |
|        5 | 10.168.111.56:20160 |           0 | Up               | 149GiB   | 107.3GiB  | 3m40.719707648s |
|      228 | 10.168.111.56:3930  |           0 | Up               | 149GiB   | 107.3GiB  | 3m20.482273237s |
|      229 | 10.168.111.55:3930  |           0 | Up               | 149GiB   | 107.3GiB  | 4m31.509983588s |
+----------+---------------------+-------------+------------------+----------+-----------+-----------------+

(14) Check the TiDB version

mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v7.2.0
Edition: Community
Git Commit Hash: 9fd5f4a8e4f273a60fbe7d3848f85a1be8f0600b
Git Branch: heads/refs/tags/v7.2.0
UTC Build Time: 2023-06-27 15:04:42
GoVersion: go1.20.5
Race Enabled: false
Check Table Before Drop: false
Store: tikv
1 row in set (0.00 sec)

The current TiDB version is v7.2.0.

At this point a simple test environment has been set up and can be used for further testing. A production environment would additionally involve Region distribution, high availability, and disaster recovery planning.

3. Summary

Installing the database is only the first step, and this environment is suitable for testing only. For production, details such as AZ planning and Region distribution need to be worked out carefully.
