Deploying a TiDB 6.5 Online HTAP Platform on CentOS 7

This article also applies to deploying TiDB 5.2.2.

1. Cluster Planning
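
The cluster consists of four CentOS 7 servers, bigdata1 through bigdata4. The role layout below follows the topology file configured in section 3.2:

Host      PD   TiDB   TiKV   TiFlash   Monitoring/Grafana/Alertmanager
bigdata1  yes  yes    yes    -         yes
bigdata2  yes  yes    yes    -         -
bigdata3  yes  yes    -      yes       -
bigdata4  yes  yes    -      yes       -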

2. Environment and System Configuration (all 4 servers)

2.1 Disable System Swap

Using swap as a buffer when memory runs low degrades performance, so swap should be disabled.

Disable swap temporarily:

[root@bigdata1 ~]#
[root@bigdata1 ~]# swapoff -a
[root@bigdata1 ~]#
[root@bigdata1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:            972         163         576           7         232         666
Swap:             0           0           0
[root@bigdata1 ~]# 
[root@bigdata1 ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf
[root@bigdata1 ~]# 
[root@bigdata1 ~]# sysctl -p
vm.swappiness = 0
[root@bigdata1 ~]#

To disable swap permanently, comment out the following line in /etc/fstab:

/dev/mapper/centos_centos-swap swap                    swap    defaults        0 0

The change takes effect after the next reboot.
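
Alternatively, the fstab entry can be commented out with a one-liner; a minimal sketch (it comments every uncommented line whose fields include swap, so verify the file afterwards):

# Comment out active swap entries in /etc/fstab
sed -i '/^[^#].* swap / s/^/#/' /etc/fstab
# Confirm the swap line is now commented
grep swap /etc/fstab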

2.2 Synchronize Time with NTP

TiDB requires clock synchronization between servers to guarantee the linear consistency of transactions under the ACID model.

For the setup, see the companion article on configuring time synchronization with Chrony on CentOS 7 servers.
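
A quick way to confirm that chronyd is running and synchronized (assuming chrony was set up per that article):

# Show synchronization status; Leap status should be Normal
chronyc tracking
# List the time sources and their reachability
chronyc sources -v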

2.3 Disable Transparent Huge Pages (THP)

TiDB's memory access pattern is sparse, so enabling Transparent Huge Pages adds latency.

Check the THP status; [always] means THP is enabled:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@bigdata1 ~]#

Use tuned to configure the system optimization parameters. First, check the currently active tuned profile:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# tuned-adm list
Available profiles:
- balanced                    - General non-specialized tuned profile
- desktop                     - Optimize for the desktop use-case
- hpc-compute                 - Optimize for HPC compute workloads
- latency-performance         - Optimize for deterministic performance at the cost of increased power consumption
- network-latency             - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput          - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave                   - Optimize for low power consumption
- throughput-performance      - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest               - Optimize for running inside a virtual guest
- virtual-host                - Optimize for running KVM guests
Current active profile: virtual-guest
[root@bigdata1 ~]# 

Current active profile: virtual-guest indicates that the OS is currently using the virtual-guest profile.

Create a new profile that layers the optimization settings on top of the current one:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# mkdir /etc/tuned/virtual-guest-tidb-optimal
[root@bigdata1 ~]# 
[root@bigdata1 ~]# touch /etc/tuned/virtual-guest-tidb-optimal/tuned.conf
[root@bigdata1 ~]# 
[root@bigdata1 ~]# cat /etc/tuned/virtual-guest-tidb-optimal/tuned.conf
[main]
include=virtual-guest


[vm]
transparent_hugepages=never
[root@bigdata1 ~]#

Apply the new tuned profile:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# tuned-adm profile virtual-guest-tidb-optimal
[root@bigdata1 ~]# 
[root@bigdata1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@bigdata1 ~]#

Next, disable transparent_hugepage defrag.

Check the defrag status:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
[root@bigdata1 ~]# 

Disable defrag temporarily:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@bigdata1 ~]# 
[root@bigdata1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
[root@bigdata1 ~]#

Disable defrag permanently:

[root@bigdata1 opt]# 
[root@bigdata1 opt]# touch disable_transparent_hugepage_defrag.sh
[root@bigdata1 opt]# 
[root@bigdata1 opt]# cat disable_transparent_hugepage_defrag.sh
echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@bigdata1 opt]# 
[root@bigdata1 opt]# chmod +x disable_transparent_hugepage_defrag.sh
[root@bigdata1 opt]# 
[root@bigdata1 opt]# cat /etc/rc.d/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/opt/disable_transparent_hugepage_defrag.sh

[root@bigdata1 opt]# 
[root@bigdata1 opt]# chmod +x /etc/rc.d/rc.local
[root@bigdata1 opt]# 

The change takes effect after the next reboot.
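
As the rc.local header above itself suggests, a systemd oneshot service is the more modern alternative; a minimal sketch (the unit name disable-thp-defrag.service is illustrative):

# /etc/systemd/system/disable-thp-defrag.service
[Unit]
Description=Disable transparent hugepage defrag at boot
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable disable-thp-defrag.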

2.4 Tune sysctl Parameters

[root@bigdata1 ~]# 
[root@bigdata1 ~]# echo "fs.file-max = 1000000">> /etc/sysctl.conf
[root@bigdata1 ~]# echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
[root@bigdata1 ~]# echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
[root@bigdata1 ~]# echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
[root@bigdata1 ~]# echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
[root@bigdata1 ~]# 
[root@bigdata1 ~]# sysctl -p
vm.swappiness = 0
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
[root@bigdata1 ~]#

2.5 Modify limits.conf

Add the following lines to /etc/security/limits.conf:

root           soft    nofile          1000000
root           hard    nofile          1000000
root           soft    stack          32768
root           hard    stack          32768
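
The new limits take effect on the next login session; a quick check after logging in again as root:

ulimit -n   # open files, expect 1000000
ulimit -s   # stack size in KB, expect 32768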

2.6 Install numactl and sshpass

Hardware specs often exceed what a single instance needs, so you may deploy multiple TiDB or TiKV instances on one machine. The NUMA core-binding tool mainly prevents contention for CPU resources; see the sketch after the install commands below.

[root@bigdata1 ~]# 
[root@bigdata1 ~]# yum -y install numactl
[root@bigdata1 ~]#
[root@bigdata1 ~]# yum -y install sshpass
[root@bigdata1 ~]#
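
To inspect the NUMA layout and see what binding looks like, a short sketch (the node number 0 and the <command> placeholder are illustrative):

# Show NUMA nodes with their CPUs and memory
numactl --hardware
# Illustrative: run a process bound to node 0's CPUs and memory
numactl --cpunodebind=0 --membind=0 <command>

When deploying with TiUP, binding can instead be declared per instance via the numa_node field in the topology file.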

2.7 Configure Passwordless SSH Login for root
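
A minimal sketch using the sshpass tool installed in section 2.6 (replace <password> with the actual root password of the target servers):

# Generate a key pair on bigdata1 (accept the defaults)
ssh-keygen -t rsa
# Push the public key to all four servers
for host in bigdata1 bigdata2 bigdata3 bigdata4; do
  sshpass -p '<password>' ssh-copy-id -o StrictHostKeyChecking=no root@$host
done
# Verify: this should print the hostname without prompting for a password
ssh root@bigdata2 hostname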

2.8 Disable SELinux

Disable SELinux temporarily:

[root@bigdata1 ~]# 
[root@bigdata1 ~]# getenforce
Enforcing
[root@bigdata1 ~]# 
[root@bigdata1 ~]# setenforce 0
[root@bigdata1 ~]# 
[root@bigdata1 ~]# getenforce
Permissive
[root@bigdata1 ~]# 
[root@bigdata1 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@bigdata1 ~]# 

To disable SELinux permanently, edit /etc/selinux/config as follows:

SELINUX=disabled

The change takes effect after the next reboot.
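
The same edit can be made non-interactively; a one-liner sketch, assuming the default SELINUX=enforcing entry:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Confirm the change
grep '^SELINUX=' /etc/selinux/config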

2.9 Enable irqbalance for NIC Interrupt Balancing

[root@bigdata1 ~]#
[root@bigdata1 ~]# systemctl restart irqbalance
[root@bigdata1 ~]# 
[root@bigdata1 ~]# systemctl status irqbalance
● irqbalance.service - irqbalance daemon
   Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2023-01-02 18:48:07 CST; 2min 19s ago
 Main PID: 1129 (irqbalance)
   CGroup: /system.slice/irqbalance.service
           └─1129 /usr/sbin/irqbalance --foreground

Jan 02 18:48:07 bigdata1 systemd[1]: Stopped irqbalance daemon.
Jan 02 18:48:07 bigdata1 systemd[1]: Started irqbalance daemon.
[root@bigdata1 ~]#
  • Servers enabling this feature need at least 2 CPUs

3. Deploy the Cluster with TiUP (run on bigdata1)

The TiUP cluster component can deploy, start, stop, destroy, scale out/in, and upgrade a TiDB cluster, as well as manage TiDB cluster parameters.

TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system.

3.1 Install the TiUP Component

Download the TiDB server offline mirror package, which includes TiUP:

[root@bigdata1 ~]#
[root@bigdata1 ~]# mkdir tidb-community-server-v6.5.0
[root@bigdata1 ~]#
[root@bigdata1 ~]# cd tidb-community-server-v6.5.0
[root@bigdata1 tidb-community-server-v6.5.0]#
[root@bigdata1 tidb-community-server-v6.5.0]# wget https://download.pingcap.org/tidb-community-server-v6.5.0-linux-amd64.tar.gz
[root@bigdata1 tidb-community-server-v6.5.0]#

Extract and run the installer:

[root@bigdata1 tidb-community-server-v6.5.0]#
[root@bigdata1 tidb-community-server-v6.5.0]# tar -zxvf tidb-community-server-v6.5.0-linux-amd64.tar.gz
[root@bigdata1 tidb-community-server-v6.5.0]# 
[root@bigdata1 tidb-community-server-v6.5.0]# cd tidb-community-server-v6.5.0-linux-amd64
[root@bigdata1 tidb-community-server-v6.5.0-linux-amd64]# 
[root@bigdata1 tidb-community-server-v6.5.0-linux-amd64]# sh local_install.sh
Disable telemetry success
Successfully set mirror to /root/tidb-community-server-v6.5.0/tidb-community-server-v6.5.0-linux-amd64
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try:   tiup playground
===============================================
[root@bigdata1 tidb-community-server-v6.5.0-linux-amd64]# 
[root@bigdata1 tidb-community-server-v6.5.0-linux-amd64]# source /root/.bash_profile
[root@bigdata1 tidb-community-server-v6.5.0-linux-amd64]# 

3.2 Configure the Cluster Topology File

  1. Generate the cluster topology file
[root@bigdata1 tidb-community-server-v6.5.0]# 
[root@bigdata1 tidb-community-server-v6.5.0]# mkdir tidb-v6.5.0-install
[root@bigdata1 tidb-community-server-v6.5.0]# 
[root@bigdata1 tidb-community-server-v6.5.0]# tiup cluster template > /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/topology.yaml
tiup is checking updates for component cluster ...
A new version of cluster is available:
   The latest version:         v1.11.1
   Local installed version:    
   Update current component:   tiup update cluster
   Update all components:      tiup update --all

The component `cluster` version  is not installed; downloading from repository.
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster template
[root@bigdata1 tidb-community-server-v6.5.0]# 
[root@bigdata1 tidb-community-server-v6.5.0]# ll tidb-v6.5.0-install/
total 12
-rw-r--r-- 1 root root 10671 Jan  2 19:03 topology.yaml
[root@bigdata1 tidb-community-server-v6.5.0]# 

Comment out all of the generated content in topology.yaml, then, using it as a reference, add the following to topology.yaml:

global:
  user: "root"
  ssh_port: 22
  deploy_dir: "/root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy"
  data_dir: "/root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data"
  arch: "amd64"
  
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  
server_configs:
  pd:
    replication.enable-placement-rules: true

pd_servers:
  - host: bigdata1
  - host: bigdata2
  - host: bigdata3
  - host: bigdata4

tidb_servers:
  - host: bigdata1
  - host: bigdata2
  - host: bigdata3
  - host: bigdata4

tikv_servers:
  - host: bigdata1
  - host: bigdata2
  
tiflash_servers:
  - host: bigdata3
    data_dir: /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000
    deploy_dir: /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000
  - host: bigdata4
    data_dir: /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000
    deploy_dir: /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000

monitoring_servers:
  - host: bigdata1

grafana_servers:
  - host: bigdata1

alertmanager_servers:
  - host: bigdata1

Notes on the parameters:

  • TiKV and TiFlash must be deployed on different servers, or at least on different disks (not merely different directories)
  • TiFlash requires PD's Placement Rules feature, hence the setting replication.enable-placement-rules: true
  2. Check the planned deployment for problems
[root@bigdata1 ~]# 
[root@bigdata1 ~]# tiup cluster check /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/topology.yaml --user root
[root@bigdata1 ~]# 
  • Even if global.user is set to another user, the --user value here must still be root
  • If any check result is Fail, resolve the errors according to the messages

3.3 Deploy the Cluster

[root@bigdata1 ~]# 
[root@bigdata1 ~]# tiup cluster deploy tidb-cluster v6.5.0 /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/topology.yaml --user root
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster deploy tidb-cluster v6.5.0 /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/topology.yaml --user root
...... output omitted ......
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
...... output omitted ......
Cluster `tidb-cluster` deployed successfully, you can start it with command: `tiup cluster start tidb-cluster --init`
[root@bigdata1 ~]#
[root@bigdata1 ~]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
tidb-cluster  root  v6.5.0   /root/.tiup/storage/cluster/clusters/tidb-cluster  /root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
[root@bigdata1 ~]#
  • If global.user is another user, that user is created automatically on the target servers; the --user value here must still be root
  • A cluster can be destroyed with tiup cluster destroy tidb-cluster

3.4 View the Cluster Status

[root@bigdata1 ~]#
[root@bigdata1 ~]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v6.5.0
Deploy user:        root
SSH type:           builtin
Grafana URL:        http://bigdata1:3000
ID              Role          Host      Ports                            OS/Arch       Status  Data Dir                                                                            Deploy Dir
--              ----          ----      -----                            -------       ------  --------                                                                            ----------
bigdata1:9093   alertmanager  bigdata1  9093/9094                        linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/alertmanager-9093  /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/alertmanager-9093
bigdata1:3000   grafana       bigdata1  3000                             linux/x86_64  Down    -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/grafana-3000
bigdata1:2379   pd            bigdata1  2379/2380                        linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata2:2379   pd            bigdata2  2379/2380                        linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata3:2379   pd            bigdata3  2379/2380                        linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata4:2379   pd            bigdata4  2379/2380                        linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata1:9090   prometheus    bigdata1  9090/12020                       linux/x86_64  Down    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/prometheus-9090    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/prometheus-9090
bigdata1:4000   tidb          bigdata1  4000/10080                       linux/x86_64  Down    -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata2:4000   tidb          bigdata2  4000/10080                       linux/x86_64  Down    -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata3:4000   tidb          bigdata3  4000/10080                       linux/x86_64  Down    -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata4:4000   tidb          bigdata4  4000/10080                       linux/x86_64  Down    -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata3:9000   tiflash       bigdata3  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000       /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000
bigdata4:9000   tiflash       bigdata4  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000       /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000
bigdata1:20160  tikv          bigdata1  20160/20180                      linux/x86_64  N/A     /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tikv-20160         /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tikv-20160
bigdata2:20160  tikv          bigdata2  20160/20180                      linux/x86_64  N/A     /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tikv-20160         /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tikv-20160
Total nodes: 15
[root@bigdata1 ~]#

3.5 Start the Cluster

At least 3 GB of memory per server is recommended.

[root@bigdata1 ~]#
[root@bigdata1 ~]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
...... output omitted ......
Started cluster `tidb-cluster` successfully
[root@bigdata1 ~]#
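
With the cluster up, you can confirm that the replication.enable-placement-rules setting from section 3.2 took effect; a sketch, assuming the tiup ctl component for v6.5.0 is available from the offline mirror:

# Query PD's replication config; expect "enable-placement-rules": "true"
tiup ctl:v6.5.0 pd -u http://bigdata1:2379 config show replication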

4. Verify the Cluster

Method 1: Check with the tiup cluster command

[root@bigdata1 ~]# 
[root@bigdata1 ~]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v6.5.0
Deploy user:        root
SSH type:           builtin
Dashboard URL:      http://bigdata1:2379/dashboard
Grafana URL:        http://bigdata1:3000
ID              Role          Host      Ports                            OS/Arch       Status  Data Dir                                                                            Deploy Dir
--              ----          ----      -----                            -------       ------  --------                                                                            ----------
bigdata1:9093   alertmanager  bigdata1  9093/9094                        linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/alertmanager-9093  /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/alertmanager-9093
bigdata1:3000   grafana       bigdata1  3000                             linux/x86_64  Up      -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/grafana-3000
bigdata1:2379   pd            bigdata1  2379/2380                        linux/x86_64  Up|UI   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata2:2379   pd            bigdata2  2379/2380                        linux/x86_64  Up|L    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata3:2379   pd            bigdata3  2379/2380                        linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata4:2379   pd            bigdata4  2379/2380                        linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/pd-2379            /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/pd-2379
bigdata1:9090   prometheus    bigdata1  9090/12020                       linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/prometheus-9090    /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/prometheus-9090
bigdata1:4000   tidb          bigdata1  4000/10080                       linux/x86_64  Up      -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata2:4000   tidb          bigdata2  4000/10080                       linux/x86_64  Up      -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata3:4000   tidb          bigdata3  4000/10080                       linux/x86_64  Up      -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata4:4000   tidb          bigdata4  4000/10080                       linux/x86_64  Up      -                                                                                   /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tidb-4000
bigdata3:9000   tiflash       bigdata3  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000       /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000
bigdata4:9000   tiflash       bigdata4  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tiflash-9000       /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tiflash-9000
bigdata1:20160  tikv          bigdata1  20160/20180                      linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tikv-20160         /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tikv-20160
bigdata2:20160  tikv          bigdata2  20160/20180                      linux/x86_64  Up      /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-data/tikv-20160         /root/tidb-community-server-v6.5.0/tidb-v6.5.0-install/tidb-deploy/tikv-20160
Total nodes: 15
[root@bigdata1 ~]# 
  • Note that only the PD on bigdata1 serves the UI, so http://bigdata1:2379/dashboard is the address to open

Method 2: Check with TiDB Dashboard

Visit http://bigdata1:2379/dashboard; log in with the TiDB database root user and password (the password is empty by default). (Screenshot: TiDB Dashboard)

Method 3: Check with Grafana

Visit http://bigdata1:3000; the default username and password are admin / admin. (Screenshots: Home Dashboard, TiDB)

Method 4: Log in to the database and run DDL, DML, and SQL queries

Connect to any of the addresses defined under tidb_servers (bigdata1, bigdata2, bigdata3, bigdata4); the default port is 4000, the default user is root, and the default password is empty.

[root@bigdata1 ~]# 
[root@bigdata1 ~]#  mysql -h 192.168.28.21 -P 4000 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 405
Server version: 5.7.25-TiDB-v6.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v6.5.0
Edition: Community
Git Commit Hash: 706c3fa3c526cdba5b3e9f066b1a568fb96c56e3
Git Branch: heads/refs/tags/v6.5.0
UTC Build Time: 2022-12-27 03:50:44
GoVersion: go1.19.3
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv
1 row in set (0.00 sec)

mysql>
mysql> select store_id, address, store_state, store_state_name, capacity, available, uptime from information_schema.tikv_store_status;
+----------+----------------+-------------+------------------+----------+-----------+-----------------+
| store_id | address        | store_state | store_state_name | capacity | available | uptime          |
+----------+----------------+-------------+------------------+----------+-----------+-----------------+
|    20001 | bigdata3:3930  |           0 | Up               | 49.98GiB | 47.25GiB  | 2m20.938404917s |
|    20002 | bigdata4:3930  |           0 | Up               | 49.98GiB | 47.25GiB  | 2m20.620190925s |
|        1 | bigdata2:20160 |           0 | Up               | 49.98GiB | 46.68GiB  | 3m10.957098978s |
|        2 | bigdata1:20160 |           0 | Up               | 49.98GiB | 43.06GiB  | 3m10.113911682s |
+----------+----------------+-------------+------------------+----------+-----------+-----------------+
4 rows in set (0.02 sec)

mysql>
mysql> create database test_db;
Query OK, 0 rows affected (0.16 sec)

mysql> 
mysql> use test_db;
Database changed
mysql> 
mysql> create table test_tb(id int, name varchar(64));
Query OK, 0 rows affected (0.15 sec)

mysql> 
mysql> insert into test_tb(id, name) values(1, 'yi'), (2, 'er');
Query OK, 2 rows affected (0.01 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> 
mysql> select * from test_tb;
+------+------+
| id   | name |
+------+------+
|    1 | yi   |
|    2 | er   |
+------+------+
2 rows in set (0.00 sec)

mysql> 
mysql> exit
Bye
[root@bigdata1 ~]#
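
Since two TiFlash nodes were deployed, you can optionally exercise the HTAP path by creating TiFlash replicas of the test table; a sketch (the replica count 2 matches the two TiFlash nodes, and the IP is bigdata1's address from the example above):

# Create 2 TiFlash replicas of the test table
mysql -h 192.168.28.21 -P 4000 -u root -e "ALTER TABLE test_db.test_tb SET TIFLASH REPLICA 2"
# Check replication progress; AVAILABLE = 1 and PROGRESS = 1 mean the replicas are ready
mysql -h 192.168.28.21 -P 4000 -u root -e "SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS FROM information_schema.tiflash_replica"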