TiDB 7.5 LTS Cluster Installation and Configuration Manual


  • Introduction

A project was recently being prepared for launch. While evaluating the architecture, we discussed the requirements with the development team and learned that the application is OLTP but with highly concentrated data: a few very large tables. To keep transactions efficient on a MySQL cluster we would inevitably have to shard databases and tables, which would create significant operational challenges later on. TiDB, by contrast, is a distributed cluster, and TiKV's row-store model is well suited to transactional workloads on large tables, so TiDB was chosen as the underlying database architecture for this application.

For the basic hardware and software requirements of a cluster installation, refer to the official documentation; they are not repeated here:

https://docs.pingcap.com/zh/tidb/stable/hardware-and-software-requirements

The features of the TiDB 7.5 LTS (long-term support) release are introduced at the following link:

TiDB 7.5 LTS release: improving the stability of critical applications and cost flexibility at scale (墨天轮)

PS: by convention, commands are shown in bold italics.

  • Pre-installation preparation

This installation is a test environment with 1 TiDB/PD node and 3 TiKV nodes.

Configuration:

Role     Spec                                  IP
----     ----                                  --
TiDB/PD  1 × 8C/16GB, 200GB disk, CentOS 7.9   10.189.60.201
TiKV     3 × 8C/32GB, 200GB disk, CentOS 7.9   10.189.60.202/203/204

  1. Internet access is required, with an external yum repository configured (dependency packages, TiUP, the MySQL client, etc. are all pulled from the internet).
  2. Install the dependency packages.

Libraries and tools required to compile and build TiDB:

Dependency  Version
----------  -------
Golang      1.21 or later
Rust        nightly-2022-07-31 or later
GCC         7.x
LLVM        13.0 or later
NTP         (no specific version)
ntpdate     (no specific version)
sshpass     1.06 or later
numactl     2.0.12 or later

2.1 Install dependency packages

yum install -y gcc llvm sshpass numactl ntp ntpdate

2.2 Install the Go package (version 1.21 or later)

Download from the official Go website (All releases - The Go Programming Language): go1.21.5.linux-amd64.tar.gz

Upload it to every host in the cluster.

chown root:root go1.21.5.linux-amd64.tar.gz    ## fix file ownership
tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz  ## extract to the install directory

vi .bash_profile   ## edit root's environment variables

PATH=$PATH:$HOME/bin:/usr/local/go/bin

# go version   ## check the Go version once the environment is reloaded
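The updated PATH does not apply to the current shell until the profile is re-read. A minimal sketch, assuming the go1.21.5 tarball above:

source ~/.bash_profile   ## reload the profile in the current shell
go version               ## expected: go version go1.21.5 linux/amd64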

2.3 Install the Rust toolchain

curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh

After the installation completes, confirm the version:

# rustc --version 
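If rustc is not found at this point, the cargo environment has likely not been loaded into the current shell yet. Assuming the default rustup install location:

source $HOME/.cargo/env   ## put the Rust toolchain on PATH
rustc --version           ## prints e.g.: rustc 1.x.x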

3. Set up temporary space

sudo mkdir /tmp/tidb

If the directory /tmp/tidb already exists, make sure it is writable.

sudo chmod -R 777 /tmp/tidb

4. Disable the firewall

Check the firewall status (CentOS 7.x as an example):

sudo firewall-cmd --state

sudo systemctl status firewalld.service

Stop the firewall service:

sudo systemctl stop firewalld.service

Disable the firewall service from starting at boot:

sudo systemctl disable firewalld.service

Check the firewall status again:

sudo systemctl status firewalld.service

5. Configure the NTP service

yum install -y ntp ntpdate

systemctl start ntpd.service

systemctl enable ntpd.service

systemctl status ntpd.service
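To confirm the node is actually synchronized with an upstream server, ntpstat (shipped with the ntp package installed above) gives a quick check:

ntpstat   ## "synchronised to NTP server ..." indicates a healthy state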

6. Check and disable swap

echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff –a
sysctl –p

vi /etc/fstab

# Comment out the line that mounts the swap partition:

#UUID=4f863b5f-20b3-4a99-a680-ddf84a3602a4 swap                    swap    defaults        0 0
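To verify that swap is fully off after these changes, a quick check (free is available by default on CentOS 7):

free -h   ## the Swap line should show 0B total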

  • Check and configure OS optimization parameters

For TiDB in production, the following OS configuration optimizations are recommended:

  1. Disable Transparent Huge Pages (THP). Database memory access patterns tend to be sparse rather than contiguous. When high-order memory is heavily fragmented, allocating THP pages incurs noticeable latency.
  2. Set the I/O scheduler of the storage media to noop. For high-speed SSD storage, the kernel's I/O scheduling causes a performance loss. With noop, the kernel passes I/O requests straight to the hardware without further scheduling, which yields better performance. The noop scheduler also has good general applicability.
  3. Use the performance mode for the cpufreq CPU frequency-scaling module. Pinning the CPU at its highest supported frequency, with no dynamic scaling, gives the best performance.

Because this deployment uses virtual machines without SSDs, items 2 and 3 need no adjustment here.

Modify the live kernel configuration to disable THP immediately:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Check the state after the change:
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
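Note that the echo commands above only change the running kernel; THP reverts to its default after a reboot. One common way to persist the setting on CentOS 7 (an assumption of this manual, not the only option; a tuned profile works as well) is rc.local:

cat << EOF >> /etc/rc.d/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.d/rc.local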
 

  1. Run the following commands to modify the sysctl parameters:
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p
  2. Run the following commands to configure the tidb user's limits.conf entries:
cat << EOF >>/etc/security/limits.conf
tidb           soft    nofile          1000000
tidb           hard    nofile          1000000
tidb           soft    stack          32768
tidb           hard    stack          32768
EOF
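The new limits take effect at the tidb user's next login. A quick confirmation, run as the tidb user after logging in again:

ulimit -n   ## expect 1000000
ulimit -s   ## expect 32768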

  • Manually configure SSH mutual trust and passwordless sudo

This section applies only if you need to configure mutual trust from the control machine to the target nodes by hand. The usual recommendation is to let the TiUP deployment tool configure SSH mutual trust and passwordless login automatically, in which case you can skip this section.

Configuring mutual trust here is similar to doing so for Oracle 11g RAC.

Log in to each deployment target machine as root, create the tidb user, and set its login password.

useradd tidb && \
passwd tidb

Configure passwordless sudo:

visudo
tidb ALL=(ALL) NOPASSWD: ALL

Configure mutual trust (as the tidb user):
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.189.60.201   ## repeat for each node IP
Because TiDB and PD share one host in this deployment, mutual trust must also be configured for the local machine itself.
Note: since the default SSH port was changed, ssh must be invoked with the port flag:
ssh -p xxxx ip/hostname
To avoid passing -p every time, edit:
vi /etc/ssh/ssh_config
Uncomment the Port line and change it to the new port number.
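For example, if the custom port is 11122 (the port that appears in the check output later in this manual), the relevant line in /etc/ssh/ssh_config would read:

Port 11122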
Confirm that mutual trust works:
[tidb@YZPTLTIDB01T ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.189.60.204
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/tidb/.ssh/id_rsa.pub"
The authenticity of host '[10.189.60.204]:11122 ([10.189.60.204]:11122)' can't be established.
ECDSA key fingerprint is SHA256:bQ5xO2+G76dkFqSjX+hEZNUWuTKnsfuKyY6WrWu3lyc.
ECDSA key fingerprint is MD5:5f:dc:02:69:20:92:cf:4d:56:26:f0:5c:bd:f5:56:ee.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
tidb@10.189.60.204's password: 
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh '10.189.60.204'"
and check to make sure that only the key(s) you wanted were added.
[tidb@YZPTLTIDB01T ~]$ ssh 10.189.60.204  ## log in to the target host directly

[tidb@YZPTLTIKV03T ~]$

  • Set up the cluster service

TiUP is the cluster operations tool introduced in TiDB 4.0. TiUP cluster, written in Golang, is the cluster management component provided by TiUP. With it you can carry out day-to-day operations work, including deploying, starting, stopping, destroying, scaling in and out, and upgrading a TiDB cluster, as well as managing TiDB cluster parameters.

TiUP currently supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring stack.

This document walks through the concrete steps of deploying a TiDB cluster topology.

5.1 Download and install TiUP

Download and install TiUP on the control machine; in this test cluster that is the TiDB/PD host.

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

[root@YZPTLTIDB01T ~]# source /root/.bash_profile  ## reload the environment variables

[root@YZPTLTIDB01T ~]# tiup cluster   ## install the tiup cluster component

tiup is checking updates for component cluster ...timeout(2s)!

The component `cluster` version  is not installed; downloading from repository.

download https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz 8.75 MiB / 8.75 MiB 100.00% 38.12 MiB/s     ## the cluster component package is fetched here; I also downloaded it manually below, though other writers report it works without doing so

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster

Deploy a TiDB cluster for production

Usage:

  tiup cluster [command]

Available Commands:

  check       Perform preflight checks for the cluster.

  deploy      Deploy a cluster for production

  start       Start a TiDB cluster

  stop        Stop a TiDB cluster

  restart     Restart a TiDB cluster

  scale-in    Scale in a TiDB cluster

  scale-out   Scale out a TiDB cluster

  destroy     Destroy a specified cluster

  clean       (EXPERIMENTAL) Cleanup a specified cluster

  upgrade     Upgrade a specified TiDB cluster

  display     Display information of a TiDB cluster

  prune       Destroy and remove instances that is in tombstone state

  list        List all clusters

  audit       Show audit log of cluster operation

  import      Import an exist TiDB cluster from TiDB-Ansible

  edit-config Edit TiDB cluster config

  show-config Show TiDB cluster config

  reload      Reload a TiDB cluster's config and restart if needed

  patch       Replace the remote package with a specified package and restart the service

  rename      Rename the cluster

  enable      Enable a TiDB cluster automatically at boot

  disable     Disable automatic enabling of TiDB clusters at boot

  replay      Replay previous operation and skip successed steps

  template    Print topology template

  tls         Enable/Disable TLS between TiDB components

  meta        backup/restore meta information

  rotatessh   rotate ssh keys on all nodes

  help        Help about any command

  completion  Generate the autocompletion script for the specified shell

Flags:

  -c, --concurrency int     max number of parallel tasks allowed (default 5)

      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")

  -h, --help                help for tiup

      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.

      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)

  -v, --version             version for tiup

      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)

  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

[root@YZPTLTIDB01T ~]# wget https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz  ## download the package manually

--2023-12-18 16:41:12--  https://tiup-mirrors.pingcap.com/cluster-v1.14.0-linux-amd64.tar.gz

Resolving tiup-mirrors.pingcap.com (tiup-mirrors.pingcap.com)... 120.240.109.47, 120.241.84.45, 111.48.217.20

Connecting to tiup-mirrors.pingcap.com (tiup-mirrors.pingcap.com)|120.240.109.47|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 9178241 (8.8M) [application/x-compressed]

Saving to: ‘cluster-v1.14.0-linux-amd64.tar.gz’

100%[======================================================================================================================>] 9,178,241   16.2MB/s   in 0.5s  

2023-12-18 16:41:13 (16.2 MB/s) - ‘cluster-v1.14.0-linux-amd64.tar.gz’ saved [9178241/9178241]

[root@YZPTLTIDB01T ~]# tar -xzf cluster-v1.14.0-linux-amd64.tar.gz

[root@YZPTLTIDB01T ~]#

5.2 Update TiUP

[root@YZPTLTIDB01T ~]# tiup update --self && tiup update cluster

download https://tiup-mirrors.pingcap.com/tiup-v1.14.0-linux-amd64.tar.gz 4.83 MiB / 4.83 MiB 100.00% 26.31 MiB/s                                              

Updated successfully!

component cluster version v1.14.0 is already installed

Updated successfully!

[root@YZPTLTIDB01T ~]#

[root@YZPTLTIDB01T ~]# tiup --binary cluster   ## show the component path after the update

/root/.tiup/components/cluster/v1.14.0/tiup-cluster

5.3 Configure the topology file

  [root@YZPTLTIDB01T ~]# cat topo.yaml

# # Global variables are applied to all deployments and used as the default value of

# # the deployments if a specific deployment value is missing.

global:

  user: "tidb"

  ssh_port: 11122

  deploy_dir: "/tidb-deploy"

  data_dir: "/tidb-data"

pd_servers:

  - host: 10.189.60.201

tidb_servers:

  - host: 10.189.60.201

tikv_servers:

  - host: 10.189.60.202

  - host: 10.189.60.203

  - host: 10.189.60.204

monitoring_servers:

  - host: 10.189.60.201

grafana_servers:

  - host: 10.189.60.201

alertmanager_servers:

  - host: 10.189.60.201
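The YAML above is a minimal topology. To start from a complete annotated example instead, tiup can print one (the template subcommand appears in the help output above):

tiup cluster template > topo-full.yaml   ## then trim it down to your hosts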

5.4 Pre-installation checks

tiup cluster check ./topo.yaml --user root -p

[root@YZPTLTIDB01T ~]# tiup cluster check ./topo.yaml --user root -p

tiup is checking updates for component cluster ...

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster check ./topo.yaml --user root -p

Input SSH password:

+ Detect CPU Arch Name

  - Detecting node 10.189.60.201 Arch info ... Done

  - Detecting node 10.189.60.202 Arch info ... Done

  - Detecting node 10.189.60.203 Arch info ... Done

  - Detecting node 10.189.60.204 Arch info ... Done

+ Detect CPU OS Name

  - Detecting node 10.189.60.201 OS info ... Done

  - Detecting node 10.189.60.202 OS info ... Done

  - Detecting node 10.189.60.203 OS info ... Done

  - Detecting node 10.189.60.204 OS info ... Done

+ Download necessary tools

  - Downloading check tools for linux/amd64 ... Done

+ Collect basic system information

  - Getting system info of 10.189.60.201:11122 ... Done

  - Getting system info of 10.189.60.202:11122 ... Done

  - Getting system info of 10.189.60.203:11122 ... Done

  - Getting system info of 10.189.60.204:11122 ... Done

+ Check time zone

  - Checking node 10.189.60.201 ... Done

  - Checking node 10.189.60.202 ... Done

  - Checking node 10.189.60.203 ... Done

  - Checking node 10.189.60.204 ... Done

+ Check system requirements

  - Checking node 10.189.60.201 ... Done

  - Checking node 10.189.60.202 ... Done

  - Checking node 10.189.60.203 ... Done

  - Checking node 10.189.60.204 ... Done

+ Cleanup check files

  - Cleanup check files on 10.189.60.201:11122 ... Done

  - Cleanup check files on 10.189.60.202:11122 ... Done

  - Cleanup check files on 10.189.60.203:11122 ... Done

  - Cleanup check files on 10.189.60.204:11122 ... Done

Node           Check         Result  Message

----           -----         ------  -------

10.189.60.202  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai

10.189.60.202  cpu-governor  Warn    Unable to determine current CPU frequency governor policy

10.189.60.202  swap          Warn    swap is enabled, please disable it for best performance

10.189.60.202  memory        Pass    memory size is 32768MB

10.189.60.202  thp           Pass    THP is disabled

10.189.60.202  command       Pass    numactl: policy: default

10.189.60.202  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009

10.189.60.202  cpu-cores     Pass    number of CPU cores / threads: 8

10.189.60.202  network       Pass    network speed of ens192 is 10000MB

10.189.60.202  disk          Warn    mount point / does not have 'noatime' option set

10.189.60.202  selinux       Pass    SELinux is disabled

10.189.60.203  swap          Warn    swap is enabled, please disable it for best performance

10.189.60.203  memory        Pass    memory size is 32768MB

10.189.60.203  disk          Warn    mount point / does not have 'noatime' option set

10.189.60.203  command       Pass    numactl: policy: default

10.189.60.203  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai

10.189.60.203  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009

10.189.60.203  network       Pass    network speed of ens192 is 10000MB

10.189.60.203  selinux       Pass    SELinux is disabled

10.189.60.203  thp           Pass    THP is disabled

10.189.60.203  cpu-cores     Pass    number of CPU cores / threads: 8

10.189.60.203  cpu-governor  Warn    Unable to determine current CPU frequency governor policy

10.189.60.204  selinux       Pass    SELinux is disabled

10.189.60.204  cpu-cores     Pass    number of CPU cores / threads: 8

10.189.60.204  swap          Warn    swap is enabled, please disable it for best performance

10.189.60.204  network       Pass    network speed of ens192 is 10000MB

10.189.60.204  disk          Warn    mount point / does not have 'noatime' option set

10.189.60.204  thp           Pass    THP is disabled

10.189.60.204  command       Pass    numactl: policy: default

10.189.60.204  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai

10.189.60.204  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009

10.189.60.204  cpu-governor  Warn    Unable to determine current CPU frequency governor policy

10.189.60.204  memory        Pass    memory size is 32768MB

10.189.60.201  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009

10.189.60.201  cpu-governor  Warn    Unable to determine current CPU frequency governor policy

10.189.60.201  swap          Warn    swap is enabled, please disable it for best performance

10.189.60.201  memory        Pass    memory size is 16384MB

10.189.60.201  sysctl        Fail    vm.swappiness = 60, should be 0

10.189.60.201  command       Fail    numactl not usable, bash: numactl: command not found

## swap is not disabled and numactl is not installed on this node

10.189.60.201  cpu-cores     Pass    number of CPU cores / threads: 8

10.189.60.201  network       Pass    network speed of ens192 is 10000MB

10.189.60.201  disk          Warn    mount point / does not have 'noatime' option set

10.189.60.201  selinux       Pass    SELinux is disabled

10.189.60.201  thp           Pass    THP is disabled


After addressing these items, run the check again and make sure there are no Fail results.
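A minimal sketch of clearing the two Fail items on 10.189.60.201 before re-checking (same topology file as above; check also offers --apply, which attempts automatic repairs):

yum install -y numactl                    ## install the missing numactl
swapoff -a && sysctl -w vm.swappiness=0   ## disable swap, as in section 6 above
tiup cluster check ./topo.yaml --apply --user root -p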

5.5 Install TiDB

[root@YZPTLTIDB01T ~]# tiup list tidb  ## list the versions available for installation

v6.1.2                                               2022-10-24T15:16:17+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.1.3                                               2022-12-05T11:50:23+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.1.4                                               2023-02-08T11:34:10+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.1.5                                               2023-02-28T11:23:57+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.1.6                                               2023-04-12T11:05:35+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.1.7                                               2023-07-12T11:22:57+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.2.0                                               2022-08-23T09:14:36+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.3.0                                               2022-09-30T10:59:36+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.4.0                                               2022-11-17T11:26:23+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.0                                               2022-12-29T11:32:06+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.1                                               2023-03-10T13:36:50+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.2                                               2023-04-21T10:52:46+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.3                                               2023-06-14T14:36:43+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.4                                               2023-08-28T11:40:24+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.5                                               2023-09-21T11:51:14+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.5.6                                               2023-12-07T07:12:10Z                 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v6.6.0                                               2023-02-20T16:43:16+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.0.0                                               2023-03-30T10:33:19+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.1.0                                               2023-05-31T14:49:49+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.1.1                                               2023-07-24T11:39:38+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.1.2                                               2023-10-25T03:58:13Z                 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.2.0                                               2023-06-29T11:57:48+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.3.0                                               2023-08-14T12:41:31+08:00            darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.4.0                                               2023-10-12T04:07:12Z                 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.5.0                                               2023-12-01T03:55:55Z                 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

v7.6.0-alpha-nightly-20231216                        2023-12-16T15:17:07Z                 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64

Install TiDB:

# This installation uses TiDB v7.5.0; 7.5 is the second LTS (long-term support) release of the 7.x series

tiup cluster deploy test-tidb v7.5.0 ./topo.yaml --user root -p

## test-tidb is the name of the cluster being deployed

[root@YZPTLTIDB01T ~]# tiup cluster deploy test-tidb v7.5.0 ./topo.yaml --user root -p

tiup is checking updates for component cluster ...

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster deploy test-tidb v7.5.0 ./topology.yaml --user root -p

Input SSH password:

+ Detect CPU Arch Name

  - Detecting node 10.189.60.201 Arch info ... Done

  - Detecting node 10.189.60.202 Arch info ... Done

  - Detecting node 10.189.60.203 Arch info ... Done

  - Detecting node 10.189.60.204 Arch info ... Done

+ Detect CPU OS Name

  - Detecting node 10.189.60.201 OS info ... Done

  - Detecting node 10.189.60.202 OS info ... Done

  - Detecting node 10.189.60.203 OS info ... Done

  - Detecting node 10.189.60.204 OS info ... Done

Please confirm your topology:

Cluster type:    tidb

Cluster name:    test-tidb

Cluster version: v7.5.0

Role          Host           Ports        OS/Arch       Directories

----          ----           -----        -------       -----------

pd            10.189.60.201  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379

tikv          10.189.60.202  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tikv          10.189.60.203  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tikv          10.189.60.204  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160

tidb          10.189.60.201  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000

prometheus    10.189.60.201  9090/12020   linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090

grafana       10.189.60.201  3000         linux/x86_64  /tidb-deploy/grafana-3000

alertmanager  10.189.60.201  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093

Attention:

    1. If the topology is not what you expected, check your yaml file.

    2. Please confirm there is no port/directory conflicts in same host.

Do you want to continue? [y/N]: (default=N) y

+ Generate SSH keys ... Done

+ Download TiDB components

  - Download pd:v7.5.0 (linux/amd64) ... Done

  - Download tikv:v7.5.0 (linux/amd64) ... Done

  - Download tidb:v7.5.0 (linux/amd64) ... Done

  - Download prometheus:v7.5.0 (linux/amd64) ... Done

  - Download grafana:v7.5.0 (linux/amd64) ... Done

  - Download alertmanager: (linux/amd64) ... Done

  - Download node_exporter: (linux/amd64) ... Done

  - Download blackbox_exporter: (linux/amd64) ... Done

+ Initialize target host environments

  - Prepare 10.189.60.203:11122 ... Done

  - Prepare 10.189.60.204:11122 ... Done

  - Prepare 10.189.60.201:11122 ... Done

  - Prepare 10.189.60.202:11122 ... Done

+ Deploy TiDB instance

  - Copy pd -> 10.189.60.201 ... Done

  - Copy tikv -> 10.189.60.202 ... Done

  - Copy tikv -> 10.189.60.203 ... Done

  - Copy tikv -> 10.189.60.204 ... Done

  - Copy tidb -> 10.189.60.201 ... Done

  - Copy prometheus -> 10.189.60.201 ... Done

  - Copy grafana -> 10.189.60.201 ... Done

  - Copy alertmanager -> 10.189.60.201 ... Done

  - Deploy node_exporter -> 10.189.60.201 ... Done

  - Deploy node_exporter -> 10.189.60.202 ... Done

  - Deploy node_exporter -> 10.189.60.203 ... Done

  - Deploy node_exporter -> 10.189.60.204 ... Done

  - Deploy blackbox_exporter -> 10.189.60.202 ... Done

  - Deploy blackbox_exporter -> 10.189.60.203 ... Done

  - Deploy blackbox_exporter -> 10.189.60.204 ... Done

  - Deploy blackbox_exporter -> 10.189.60.201 ... Done

+ Copy certificate to remote host

+ Init instance configs

  - Generate config pd -> 10.189.60.201:2379 ... Done

  - Generate config tikv -> 10.189.60.202:20160 ... Done

  - Generate config tikv -> 10.189.60.203:20160 ... Done

  - Generate config tikv -> 10.189.60.204:20160 ... Done

  - Generate config tidb -> 10.189.60.201:4000 ... Done

  - Generate config prometheus -> 10.189.60.201:9090 ... Done

  - Generate config grafana -> 10.189.60.201:3000 ... Done

  - Generate config alertmanager -> 10.189.60.201:9093 ... Done

+ Init monitor configs

  - Generate config node_exporter -> 10.189.60.204 ... Done

  - Generate config node_exporter -> 10.189.60.201 ... Done

  - Generate config node_exporter -> 10.189.60.202 ... Done

  - Generate config node_exporter -> 10.189.60.203 ... Done

  - Generate config blackbox_exporter -> 10.189.60.203 ... Done

  - Generate config blackbox_exporter -> 10.189.60.204 ... Done

  - Generate config blackbox_exporter -> 10.189.60.201 ... Done

  - Generate config blackbox_exporter -> 10.189.60.202 ... Done

Enabling component pd

        Enabling instance 10.189.60.201:2379

        Enable instance 10.189.60.201:2379 success

Enabling component tikv

        Enabling instance 10.189.60.204:20160

        Enabling instance 10.189.60.202:20160

        Enabling instance 10.189.60.203:20160

        Enable instance 10.189.60.202:20160 success

        Enable instance 10.189.60.204:20160 success

        Enable instance 10.189.60.203:20160 success

Enabling component tidb

        Enabling instance 10.189.60.201:4000

        Enable instance 10.189.60.201:4000 success

Enabling component prometheus

        Enabling instance 10.189.60.201:9090

        Enable instance 10.189.60.201:9090 success

Enabling component grafana

        Enabling instance 10.189.60.201:3000

        Enable instance 10.189.60.201:3000 success

Enabling component alertmanager

        Enabling instance 10.189.60.201:9093

        Enable instance 10.189.60.201:9093 success

Enabling component node_exporter

        Enabling instance 10.189.60.204

        Enabling instance 10.189.60.202

        Enabling instance 10.189.60.201

        Enabling instance 10.189.60.203

        Enable 10.189.60.204 success

        Enable 10.189.60.203 success

        Enable 10.189.60.201 success

        Enable 10.189.60.202 success

Enabling component blackbox_exporter

        Enabling instance 10.189.60.204

        Enabling instance 10.189.60.202

        Enabling instance 10.189.60.201

        Enabling instance 10.189.60.203

        Enable 10.189.60.204 success

        Enable 10.189.60.203 success

        Enable 10.189.60.201 success

        Enable 10.189.60.202 success

Cluster `test-tidb` deployed successfully, you can start it with command: `tiup cluster start test-tidb --init`

Installation succeeded.

[root@YZPTLTIDB01T ~]# tiup cluster list

tiup is checking updates for component cluster ...

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster list

Name       User  Version  Path                                            PrivateKey

----       ----  -------  ----                                            ----------

test-tidb  tidb  v7.5.0   /root/.tiup/storage/cluster/clusters/test-tidb  /root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa

Check the status of the newly deployed cluster:

[root@YZPTLTIDB01T ~]# tiup cluster display test-tidb

tiup is checking updates for component cluster ...

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster display test-tidb

Cluster type:       tidb

Cluster name:       test-tidb

Cluster version:    v7.5.0

Deploy user:        tidb

SSH type:           builtin

Grafana URL:        http://10.189.60.201:3000

ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir

--                   ----          ----           -----        -------       ------  --------                      ----------

10.189.60.201:9093   alertmanager  10.189.60.201  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093

10.189.60.201:3000   grafana       10.189.60.201  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000

10.189.60.201:2379   pd            10.189.60.201  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379

10.189.60.201:9090   prometheus    10.189.60.201  9090/12020   linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090

10.189.60.201:4000   tidb          10.189.60.201  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000

10.189.60.202:20160  tikv          10.189.60.202  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160

10.189.60.203:20160  tikv          10.189.60.203  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160

10.189.60.204:20160  tikv          10.189.60.204  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160

Total nodes: 8

As shown, every component in the cluster is still Down or N/A; the cluster has not been started yet.

5.6 Initialize and start the cluster

[root@YZPTLTIDB01T ~]# tiup cluster start test-tidb --init

tiup is checking updates for component cluster ...

Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster start test-tidb --init

Starting cluster test-tidb...

+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/test-tidb/ssh/id_rsa.pub

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.201

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.202

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.201

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.204

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.201

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.201

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.203

+ [Parallel] - UserSSH: user=tidb, host=10.189.60.201

+ [ Serial ] - StartCluster

Starting component pd

        Starting instance 10.189.60.201:2379

        Start instance 10.189.60.201:2379 success

Starting component tikv

        Starting instance 10.189.60.204:20160

        Starting instance 10.189.60.202:20160

        Starting instance 10.189.60.203:20160

        Start instance 10.189.60.204:20160 success

        Start instance 10.189.60.202:20160 success

        Start instance 10.189.60.203:20160 success

Starting component tidb

        Starting instance 10.189.60.201:4000

        Start instance 10.189.60.201:4000 success

Starting component prometheus

        Starting instance 10.189.60.201:9090

        Start instance 10.189.60.201:9090 success

Starting component grafana

        Starting instance 10.189.60.201:3000

        Start instance 10.189.60.201:3000 success

Starting component alertmanager

        Starting instance 10.189.60.201:9093

        Start instance 10.189.60.201:9093 success

Starting component node_exporter

        Starting instance 10.189.60.204

        Starting instance 10.189.60.201

        Starting instance 10.189.60.202

        Starting instance 10.189.60.203

        Start 10.189.60.204 success

        Start 10.189.60.203 success

        Start 10.189.60.202 success

        Start 10.189.60.201 success

Starting component blackbox_exporter

        Starting instance 10.189.60.204

        Starting instance 10.189.60.202

        Starting instance 10.189.60.203

        Starting instance 10.189.60.201

        Start 10.189.60.202 success

        Start 10.189.60.203 success

        Start 10.189.60.201 success

        Start 10.189.60.204 success

+ [ Serial ] - UpdateTopology: cluster=test-tidb

Started cluster `test-tidb` successfully

The root password of TiDB database has been changed.

The new password is: 'N_Mz@vp^17tG6E+504'.

Copy and record it to somewhere safe, it is only displayed once, and will not be stored.

The generated password can NOT be get and shown again.

[root@YZPTLTIDB01T ~]#

Record the password generated at initialization (similar to MySQL's generated temporary password); as the output says, it is displayed only once.

5.7 Check the cluster status

tiup cluster display test-tidb

Now every component in the cluster shows Up.

  • Commands to start and stop the cluster

Start the cluster:
tiup cluster start test-tidb

Stop the cluster:
tiup cluster stop test-tidb

Stop a single component. For example, the following command stops only the TiDB component:
tiup cluster stop test-tidb -R tidb

Stop a single node of a component (TiKV):
tiup cluster stop test-tidb -N 10.189.60.202:20160

The cluster status after the stop can be confirmed with tiup cluster display test-tidb.

Start a single node of a component (TiKV):
tiup cluster start test-tidb -N 10.189.60.202:20160

After changing a component's configuration, reload the component:
tiup cluster reload ${cluster-name} -R <component>
for example:
tiup cluster reload test-tidb -R tidb
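The configuration change itself is made with the edit-config subcommand (listed in the tiup cluster help output above); a typical flow, sketched:

tiup cluster edit-config test-tidb      ## edit the cluster configuration in an editor
tiup cluster reload test-tidb -R tidb   ## push the change and restart the affected component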
  • Connect to the TiDB cluster from the command line

TiDB is compatible with the MySQL protocol, so you connect with a MySQL client, which must be installed first.

CentOS 7 systems ship with MariaDB libraries preinstalled; remove them first.

 
Remove the bundled MariaDB:
[root@YZPTLTIDB01T ~]# rpm -qa |grep mariadb
mariadb-libs-5.5.68-1.el7.x86_64
[root@YZPTLTIDB01T ~]# 
[root@YZPTLTIDB01T ~]# rpm -e --nodeps mariadb-libs-5.5.68-1.el7.x86_64
[root@YZPTLTIDB01T ~]# rpm -qa |grep mariadb

Install the MySQL client:

#yum -y install http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

# rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023

# yum -y install mysql

[root@YZPTLTIDB01T ~]# yum -y install http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

Loaded plugins: fastestmirror

mysql80-community-release-el7-10.noarch.rpm                                                                                              |  14 kB  00:00:00     

Examining /var/tmp/yum-root-26cqAu/mysql80-community-release-el7-10.noarch.rpm: mysql80-community-release-el7-11.noarch

Marking /var/tmp/yum-root-26cqAu/mysql80-community-release-el7-10.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package mysql80-community-release.noarch 0:el7-11 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================

 Package                                     Arch                     Version                  Repository                                                  Size

================================================================================================================================================================

Installing:

 mysql80-community-release                   noarch                   el7-11                   /mysql80-community-release-el7-10.noarch                    17 k

Transaction Summary

================================================================================================================================================================

Install  1 Package

Total size: 17 k

Installed size: 17 k

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Warning: RPMDB altered outside of yum.

** Found 2 pre-existing rpmdb problem(s), 'yum check' output follows:

2:postfix-2.10.1-9.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)

2:postfix-2.10.1-9.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)

  Installing : mysql80-community-release-el7-11.noarch                                                                                                      1/1

  Verifying  : mysql80-community-release-el7-11.noarch                                                                                                      1/1

Installed:

  mysql80-community-release.noarch 0:el7-11                                                                                                                    

Complete!

[root@YZPTLTIDB01T ~]# rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023

[root@YZPTLTIDB01T ~]#  yum -y install mysql

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

 * base: mirrors.bfsu.edu.cn

 * extras: mirrors.bfsu.edu.cn

 * updates: mirrors.bfsu.edu.cn

base                                                                                                                                     | 3.6 kB  00:00:00    

Not using downloaded base/repomd.xml because it is older than what we have:

  Current   : Mon Mar 20 23:22:29 2023

  Downloaded: Fri Oct 30 04:03:00 2020

extras                                                                                                                                   | 2.9 kB  00:00:00    

mysql-connectors-community                                                                                                               | 2.6 kB  00:00:00    

mysql-tools-community                                                                                                                    | 2.6 kB  00:00:00    

mysql80-community                                                                                                                        | 2.6 kB  00:00:00    

updates                                                                                                                                  | 2.9 kB  00:00:00    

(1/3): mysql-tools-community/x86_64/primary_db                                                                                           |  95 kB  00:00:00    

(2/3): mysql-connectors-community/x86_64/primary_db                                                                                      | 102 kB  00:00:00    

(3/3): mysql80-community/x86_64/primary_db                                                                                               | 266 kB  00:00:00    

Resolving Dependencies

--> Running transaction check

---> Package mysql-community-client.x86_64 0:8.0.35-1.el7 will be installed

--> Processing Dependency: mysql-community-client-plugins = 8.0.35-1.el7 for package: mysql-community-client-8.0.35-1.el7.x86_64

--> Processing Dependency: mysql-community-libs(x86-64) >= 8.0.11 for package: mysql-community-client-8.0.35-1.el7.x86_64

--> Running transaction check

---> Package mysql-community-client-plugins.x86_64 0:8.0.35-1.el7 will be installed

---> Package mysql-community-libs.x86_64 0:8.0.35-1.el7 will be installed

--> Processing Dependency: mysql-community-common(x86-64) >= 8.0.11 for package: mysql-community-libs-8.0.35-1.el7.x86_64

--> Running transaction check

---> Package mysql-community-common.x86_64 0:8.0.35-1.el7 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================

 Package                                             Arch                        Version                           Repository                              Size

================================================================================================================================================================

Installing:

 mysql-community-client                              x86_64                      8.0.35-1.el7                      mysql80-community                       16 M

Installing for dependencies:

 mysql-community-client-plugins                      x86_64                      8.0.35-1.el7                      mysql80-community                      3.5 M

 mysql-community-common                              x86_64                      8.0.35-1.el7                      mysql80-community                      665 k

 mysql-community-libs                                x86_64                      8.0.35-1.el7                      mysql80-community                      1.5 M

Transaction Summary

================================================================================================================================================================

Install  1 Package (+3 Dependent packages)

Total download size: 22 M

Installed size: 116 M

Downloading packages:

warning: /var/cache/yum/x86_64/7/mysql80-community/packages/mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY

Public key for mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm is not installed

(1/4): mysql-community-client-plugins-8.0.35-1.el7.x86_64.rpm                                                                            | 3.5 MB  00:00:00    

(2/4): mysql-community-common-8.0.35-1.el7.x86_64.rpm                                                                                    | 665 kB  00:00:00    

(3/4): mysql-community-client-8.0.35-1.el7.x86_64.rpm                                                                                    |  16 MB  00:00:01    

(4/4): mysql-community-libs-8.0.35-1.el7.x86_64.rpm                                                                                      | 1.5 MB  00:00:00    

----------------------------------------------------------------------------------------------------------------------------------------------------------------

Total                                                                                                                            13 MB/s |  22 MB  00:00:01    

Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql-2023

Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql-2022

Importing GPG key 0x3A79BD29:

 Userid     : "MySQL Release Engineering <mysql-build@oss.oracle.com>"

 Fingerprint: 859b e8d7 c586 f538 430b 19c2 467b 942d 3a79 bd29

 Package    : mysql80-community-release-el7-11.noarch (@/mysql80-community-release-el7-10.noarch)

 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql-2022

Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

Importing GPG key 0x5072E1F5:

 Userid     : "MySQL Release Engineering <mysql-build@oss.oracle.com>"

 Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5

 Package    : mysql80-community-release-el7-11.noarch (@/mysql80-community-release-el7-10.noarch)

 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : mysql-community-client-plugins-8.0.35-1.el7.x86_64                                                                                           1/4

  Installing : mysql-community-common-8.0.35-1.el7.x86_64                                                                                                   2/4

  Installing : mysql-community-libs-8.0.35-1.el7.x86_64                                                                                                     3/4

  Installing : mysql-community-client-8.0.35-1.el7.x86_64                                                                                                   4/4

  Verifying  : mysql-community-client-plugins-8.0.35-1.el7.x86_64                                                                                           1/4

  Verifying  : mysql-community-libs-8.0.35-1.el7.x86_64                                                                                                     2/4

  Verifying  : mysql-community-client-8.0.35-1.el7.x86_64                                                                                                   3/4

  Verifying  : mysql-community-common-8.0.35-1.el7.x86_64                                                                                                   4/4

Installed:

  mysql-community-client.x86_64 0:8.0.35-1.el7                                                                                                                 

Dependency Installed:

  mysql-community-client-plugins.x86_64 0:8.0.35-1.el7       mysql-community-common.x86_64 0:8.0.35-1.el7       mysql-community-libs.x86_64 0:8.0.35-1.el7     

Complete!

[root@YZPTLTIDB01T ~]#

Connect to the database and change the initial root password:

mysql -h 10.189.60.201 -P 4000 -uroot -p

[root@YZPTLTIDB01T ~]# mysql -h 10.189.60.201 -P 4000 -uroot -p

Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 1543503878

Server version: 8.0.11-TiDB-v7.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| INFORMATION_SCHEMA |

| METRICS_SCHEMA     |

| PERFORMANCE_SCHEMA |

| mysql              |

| test               |

+--------------------+

5 rows in set (0.00 sec)

mysql> use mysql

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

mysql> alter user 'root'@'%' identified by 'tidb';

Query OK, 0 rows affected (0.03 sec)

mysql>

mysql>
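To confirm the new password works, reconnect non-interactively and query the server version (tidb_version() is a built-in TiDB function; the password is the one set above):

mysql -h 10.189.60.201 -P 4000 -uroot -ptidb -e "select tidb_version()\G"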

  • Cluster monitoring

The cluster status output (tiup cluster display) includes the monitoring URLs.

The cluster Dashboard monitors the state of the whole cluster; its login is the database root account:

root/tidb

It shows the overall state of the entire cluster.
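For reference, the TiDB Dashboard is served by the PD component; with the topology above it should be reachable at http://10.189.60.201:2379/dashboard (assuming the default PD client port).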

Grafana's default credentials are admin/admin.

You are required to change the password at first login.

On first use, create a new dashboard as needed.
