Environment: CentOS 7 + MySQL 8.0.16 (the mysql client is used to connect to TiDB)
I. Set up the MySQL service:
II. Download the TiDB Community Edition package:
III. Prepare the environment:
1. Install the numactl tool:
yum -y install numactl
2. Adjust sysctl parameters:
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p   # reload the settings so they take effect
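Optionally, you can confirm the kernel parameters actually took effect by querying the same keys that were just written:
sysctl fs.file-max net.core.somaxconn net.ipv4.tcp_tw_recycle net.ipv4.tcp_syncookies vm.overcommit_memory   # print the current values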
3. Configure /etc/security/limits.conf (append the following at the end of the file):
vi /etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
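These limits only apply to new login sessions, and only when pam_limits is loaded for the login path (it is by default on CentOS 7); a quick sanity check:
grep -r pam_limits /etc/pam.d/   # should match at least system-auth / password-auth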
4. Disable transparent huge pages by editing /etc/rc.local (append the following at the end of the file):
vi /etc/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
chmod +x /etc/rc.local
source /etc/rc.local
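To confirm transparent huge pages are actually off (both now and after a reboot), check the flag directly:
cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]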
5. Configure passwordless SSH login:
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa   # press Enter three times to accept the defaults
ssh-keygen -t dsa   # press Enter three times to accept the defaults
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
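authorized_keys must not be group/world writable or sshd will ignore it; a quick permission fix plus a local login test (assuming you are verifying against this same host and sshd allows root key login):
chmod 600 ~/.ssh/authorized_keys
ssh -o StrictHostKeyChecking=no 127.0.0.1 hostname   # should print the hostname without asking for a password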
6. Create the tidb group and tidb user:
groupadd tidb
useradd -g tidb tidb
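Optionally confirm the account exists and that the limits from step 3 are picked up when the tidb user logs in:
id tidb                               # user and primary group should both be tidb
su - tidb -c 'ulimit -n; ulimit -s'   # expect 1000000 and 32768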
IV. Install TiDB:
1. Create a directory for the TiDB files and extract the TiDB archive into it:
mkdir /tidb
tar -xvf /tidb/tidb-community-server-v6.1.1-linux-amd64.tar.gz -C /tidb
2. Deploy tiup:
sh /tidb/tidb-community-server-v6.1.1-linux-amd64/local_install.sh
source /root/.bash_profile
[root@localhost tidb]# sh /tidb/tidb-community-server-v6.1.1-linux-amd64/local_install.sh
Disable telemetry success
Successfully set mirror to /tidb/tidb-community-server-v6.1.1-linux-amd64
Detected shell: bash
Shell profile: /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try: tiup playground
===============================================
[root@localhost tidb]# source /root/.bash_profile
[root@localhost tidb]# which tiup
/root/.tiup/bin/tiup
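Optionally verify that tiup works and is pointing at the local offline mirror (the directory extracted above):
tiup --version     # print the tiup client version
tiup mirror show   # should print /tidb/tidb-community-server-v6.1.1-linux-amd64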
3. Generate the initial topology template:
tiup cluster template > /tidb/topology.yaml
4. Edit the template (you can replace the entire contents of /tidb/topology.yaml with the following, or just change the IP addresses in the existing file):
vi /tidb/topology.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 192.168.200.57
tidb_servers:
  - host: 192.168.200.57
tikv_servers:
  - host: 192.168.200.57
monitoring_servers:
  - host: 192.168.200.57
grafana_servers:
  - host: 192.168.200.57
alertmanager_servers:
  - host: 192.168.200.57
tiflash_servers:
  - host: 192.168.200.57
5. Check whether the host meets the TiDB requirements:
tiup cluster check /tidb/topology.yaml
If this check reports an SSH error complaining that tidb has no execute permission, running yum -y update resolves it.
The errors below can be ignored in my case because I am using very little storage; in a normal deployment you should expand storage and put the TiKV and TiFlash data on different partitions (i.e., change the data-directory parameters in /tidb/topology.yaml).
[root@localhost ~]# tiup cluster check /tidb/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster check /tidb/topology.yaml
+ Detect CPU Arch Name
- Detecting node 192.168.200.57 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 192.168.200.57 OS info ... Done
+ Download necessary tools
- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
- Getting system info of 192.168.200.57:22 ... Done
+ Check time zone
- Checking node 192.168.200.57 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
- Checking node 192.168.200.57 ... Done
+ Cleanup check files
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
- Cleanup check files on 192.168.200.57:22 ... Done
Node Check Result Message
---- ----- ------ -------
192.168.200.57 disk Warn mount point / does not have 'noatime' option set
192.168.200.57 disk Fail multiple components tikv:/tidb-data/tikv-20160,tiflash:/tidb-data/tiflash-9000 are using the same partition 192.168.200.57:/ as data dir
192.168.200.57 sysctl Fail vm.swappiness = 30, should be 0
192.168.200.57 selinux Pass SELinux is disabled
192.168.200.57 thp Fail THP is enabled, please disable it for best performance
192.168.200.57 cpu-cores Pass number of CPU cores / threads: 2
192.168.200.57 memory Pass memory size is 4096MB
192.168.200.57 epoll-exclusive Fail epoll exclusive is not supported
192.168.200.57 command Pass numactl: policy: default
192.168.200.57 network Pass network speed of eno16777984 is 10000MB
192.168.200.57 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
192.168.200.57 cpu-governor Warn Unable to determine current CPU frequency governor policy
192.168.200.57 swap Warn swap is enabled, please disable it for best performance
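Besides fixing items by hand, tiup can try to repair the failed items itself via the --apply flag; note the disk Fail (TiKV and TiFlash sharing one partition) still has to be fixed by editing the topology as described above:
tiup cluster check /tidb/topology.yaml --apply   # attempt to auto-fix items such as vm.swappiness, THP and swap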
6. Install TiDB:
tiup cluster deploy mytidb_cluster v6.1.1 /tidb/topology.yaml
[root@localhost ~]# tiup cluster deploy mytidb_cluster v6.1.1 /tidb/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster deploy mytidb_cluster v6.1.1 /tidb/topology.yaml
+ Detect CPU Arch Name
- Detecting node 192.168.200.57 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 192.168.200.57 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: mytidb_cluster
Cluster version: v6.1.1
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.200.57 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.200.57 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb 192.168.200.57 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 192.168.200.57 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 192.168.200.57 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.200.57 3000 linux/x86_64 /tidb-deploy/grafana-3000
alertmanager 192.168.200.57 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v6.1.1 (linux/amd64) ... Done
- Download tikv:v6.1.1 (linux/amd64) ... Done
- Download tidb:v6.1.1 (linux/amd64) ... Done
- Download tiflash:v6.1.1 (linux/amd64) ... Done
- Download prometheus:v6.1.1 (linux/amd64) ... Done
- Download grafana:v6.1.1 (linux/amd64) ... Done
- Download alertmanager: (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.200.57:22 ... Done
+ Deploy TiDB instance
- Copy pd -> 192.168.200.57 ... Done
- Copy tikv -> 192.168.200.57 ... Done
- Copy tidb -> 192.168.200.57 ... Done
- Copy tiflash -> 192.168.200.57 ... Done
- Copy prometheus -> 192.168.200.57 ... Done
- Copy grafana -> 192.168.200.57 ... Done
- Copy alertmanager -> 192.168.200.57 ... Done
- Deploy node_exporter -> 192.168.200.57 ... Done
- Deploy blackbox_exporter -> 192.168.200.57 ... Done
+ Copy certificate to remote host
+ Init instance configs
- Generate config pd -> 192.168.200.57:2379 ... Done
- Generate config tikv -> 192.168.200.57:20160 ... Done
- Generate config tidb -> 192.168.200.57:4000 ... Done
- Generate config tiflash -> 192.168.200.57:9000 ... Done
- Generate config prometheus -> 192.168.200.57:9090 ... Done
- Generate config grafana -> 192.168.200.57:3000 ... Done
- Generate config alertmanager -> 192.168.200.57:9093 ... Done
+ Init monitor configs
- Generate config node_exporter -> 192.168.200.57 ... Done
- Generate config blackbox_exporter -> 192.168.200.57 ... Done
Enabling component pd
Enabling instance 192.168.200.57:2379
Enable instance 192.168.200.57:2379 success
Enabling component tikv
Enabling instance 192.168.200.57:20160
Enable instance 192.168.200.57:20160 success
Enabling component tidb
Enabling instance 192.168.200.57:4000
Enable instance 192.168.200.57:4000 success
Enabling component tiflash
Enabling instance 192.168.200.57:9000
Enable instance 192.168.200.57:9000 success
Enabling component prometheus
Enabling instance 192.168.200.57:9090
Enable instance 192.168.200.57:9090 success
Enabling component grafana
Enabling instance 192.168.200.57:3000
Enable instance 192.168.200.57:3000 success
Enabling component alertmanager
Enabling instance 192.168.200.57:9093
Enable instance 192.168.200.57:9093 success
Enabling component node_exporter
Enabling instance 192.168.200.57
Enable 192.168.200.57 success
Enabling component blackbox_exporter
Enabling instance 192.168.200.57
Enable 192.168.200.57 success
Cluster `mytidb_cluster` deployed successfully, you can start it with command: `tiup cluster start mytidb_cluster --init`
7. Start TiDB:
tiup cluster start mytidb_cluster --init
Error 1 (the tidb instance on port 4000 fails to start):
[error="context deadline exceeded"]
[stack="github.com/pingcap/tidb/session.mustExecute\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:2477\ngithub.com/pingcap/tidb/session.insertBuiltinBindInfoRow\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:1636\ngithub.com/pingcap/tidb/session.initBindInfoTable\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:1632\ngithub.com/pingcap/tidb/session.doDDLWorks\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:2287\ngithub.com/pingcap/tidb/session.bootstrap\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:527\ngithub.com/pingcap/tidb/session.runInBootstrapSession\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:3428\ngithub.com/pingcap/tidb/session.BootstrapSession\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:3264\nmain.createStoreAndDomain\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/tidb-server/main.go:318\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/tidb-server/main.go:218\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"]
Solution: the disk is out of space; either add disk capacity or delete unneeded files to free up space.
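Before retrying, it is worth confirming how much space is actually left on the partitions that hold the deploy and data directories, for example:
df -h /tidb-data /tidb-deploy   # free space on the data and deploy mount points
du -sh /tidb-data/*             # which component is consuming the space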
Error 2 (the TiFlash instance on port 9000 fails to start):
[root@localhost tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.200.57:2379
Start instance 192.168.200.57:2379 success
Starting component tikv
Starting instance 192.168.200.57:20160
Start instance 192.168.200.57:20160 success
Starting component tidb
Starting instance 192.168.200.57:4000
Start instance 192.168.200.57:4000 success
Starting component tiflash
Starting instance 192.168.200.57:9000
Error: failed to start tiflash: failed to start: 192.168.200.57 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2023-11-01-08-53-59.log.
cat /root/.tiup/logs/tiup-cluster-debug-2023-11-01-08-53-59.log
2023-11-01T08:53:59.723+0800 DEBUG retry error {"error": "operation timed out after 2m0s"}
2023-11-01T08:53:59.723+0800 DEBUG TaskFinish {"task": "StartCluster", "error": "failed to start tiflash: failed to start: 192.168.200.57 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s", "errorVerbose": "timed out waiting for port 9000 to be started after 2m0s\ngithub.com/pingcap/tiup/pkg/cluster/module.(*WaitFor).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/module/wait_for.go:91\ngithub.com/pingcap/tiup/pkg/cluster/spec.PortStarted\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/instance.go:116\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*TiFlashInstance).Ready\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/tiflash.go:803\ngithub.com/pingcap/tiup/pkg/cluster/operation.startInstance\n\tgithub.com/pingcap/tiup/pkg/cluster/operation/action.go:404\ngithub.com/pingcap/tiup/pkg/cluster/operation.StartComponent.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/operation/action.go:533\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\tgolang.org/x/sync@v0.0.0-20220513210516-0976fa681c29/errgroup/errgroup.go:74\nruntime.goexit\n\truntime/asm_amd64.s:1571\nfailed to start: 192.168.200.57 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.\nfailed to start tiflash"}
2023-11-01T08:53:59.723+0800 INFO Execute command finished {"code": 1, "error": "failed to start tiflash: failed to start: 192.168.200.57 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s", "errorVerbose": "timed out waiting for port 9000 to be started after 2m0s\ngithub.com/pingcap/tiup/pkg/cluster/module.(*WaitFor).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/module/wait_for.go:91\ngithub.com/pingcap/tiup/pkg/cluster/spec.PortStarted\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/instance.go:116\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*TiFlashInstance).Ready\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/tiflash.go:803\ngithub.com/pingcap/tiup/pkg/cluster/operation.startInstance\n\tgithub.com/pingcap/tiup/pkg/cluster/operation/action.go:404\ngithub.com/pingcap/tiup/pkg/cluster/operation.StartComponent.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/operation/action.go:533\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\tgolang.org/x/sync@v0.0.0-20220513210516-0976fa681c29/errgroup/errgroup.go:74\nruntime.goexit\n\truntime/asm_amd64.s:1571\nfailed to start: 192.168.200.57 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.\nfailed to start tiflash"}
[root@localhost log]# cat /tidb-deploy/tiflash-9000/log/tiflash_error.log
[2023/11/01 09:48:04.625 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
[2023/11/01 09:48:20.123 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
[2023/11/01 09:48:35.621 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
[2023/11/01 09:48:51.120 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
[2023/11/01 09:49:06.623 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
[2023/11/01 09:49:22.128 +08:00] [ERROR] [<unknown>] ["Application:Exception: Could not determine time zone from TZ variable value: '/etc/localtime': custom time zone file used."] [thread_id=1]
Solution (caused by a time zone problem):
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
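An equivalent fix is to let systemd recreate /etc/localtime as a proper symlink, and then only the TiFlash component needs to be started again (the -R flag restricts the start to one role):
timedatectl set-timezone Asia/Shanghai         # recreates /etc/localtime pointing at the zoneinfo file
tiup cluster start mytidb_cluster -R tiflash   # retry just the tiflash instances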
Error 3 (blackbox_exporter-9115.service fails to start):
[root@localhost log]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.200.57:2379
Start instance 192.168.200.57:2379 success
Starting component tikv
Starting instance 192.168.200.57:20160
Start instance 192.168.200.57:20160 success
Starting component tidb
Starting instance 192.168.200.57:4000
Start instance 192.168.200.57:4000 success
Starting component tiflash
Starting instance 192.168.200.57:9000
Start instance 192.168.200.57:9000 success
Starting component prometheus
Starting instance 192.168.200.57:9090
Start instance 192.168.200.57:9090 success
Starting component grafana
Starting instance 192.168.200.57:3000
Start instance 192.168.200.57:3000 success
Starting component alertmanager
Starting instance 192.168.200.57:9093
Start instance 192.168.200.57:9093 success
Starting component node_exporter
Starting instance 192.168.200.57
Start 192.168.200.57 success
Starting component blackbox_exporter
Starting instance 192.168.200.57
Error: failed to start: 192.168.200.57 blackbox_exporter-9115.service, please check the instance's log() for more detail.: timed out waiting for port 9115 to be started after 2m0s
[root@localhost log]# cat /root/.tiup/logs/tiup-cluster-debug-2023-11-01-09-53-02.log
2023-11-01T09:53:02.262+0800 DEBUG retry error {"error": "operation timed out after 2m0s"}
2023-11-01T09:53:02.262+0800 DEBUG TaskFinish {"task": "StartCluster", "error": "failed to start: 192.168.200.57 blackbox_exporter-9115.service, please check the instance's log() for more detail.: timed out waiting for port 9115 to be started after 2m0s", "errorVerbose": "timed out waiting for port 9115 to be started after 2m0s\ngithub.com/pingcap/tiup/pkg/cluster/module.(*WaitFor).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/module/wait_for.go:91\ngithub.com/pingcap/tiup/pkg/cluster/spec.PortStarted\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/instance.go:116\ngithub.com/pingcap/tiup/pkg/cluster/operation.systemctlMonitor.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/operation/action.go:335\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\tgolang.org/x/sync@v0.0.0-20220513210516-0976fa681c29/errgroup/errgroup.go:74\nruntime.goexit\n\truntime/asm_amd64.s:1571\nfailed to start: 192.168.200.57 blackbox_exporter-9115.service, please check the instance's log()for more detail."}
Solution:
Comment out the AmbientCapabilities=CAP_NET_RAW line in /etc/systemd/system/multi-user.target.wants/blackbox_exporter-9115.service.
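After editing the unit file, systemd has to reload it before the exporter is restarted:
systemctl daemon-reload
systemctl restart blackbox_exporter-9115.service
systemctl status blackbox_exporter-9115.service   # should now show active (running)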
Successful result:
[root@localhost log]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [Parallel] - UserSSH: user=tidb, host=192.168.200.57
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.200.57:2379
Start instance 192.168.200.57:2379 success
Starting component tikv
Starting instance 192.168.200.57:20160
Start instance 192.168.200.57:20160 success
Starting component tidb
Starting instance 192.168.200.57:4000
Start instance 192.168.200.57:4000 success
Starting component tiflash
Starting instance 192.168.200.57:9000
Start instance 192.168.200.57:9000 success
Starting component prometheus
Starting instance 192.168.200.57:9090
Start instance 192.168.200.57:9090 success
Starting component grafana
Starting instance 192.168.200.57:3000
Start instance 192.168.200.57:3000 success
Starting component alertmanager
Starting instance 192.168.200.57:9093
Start instance 192.168.200.57:9093 success
Starting component node_exporter
Starting instance 192.168.200.57
Start 192.168.200.57 success
Starting component blackbox_exporter
Starting instance 192.168.200.57
Start 192.168.200.57 success
+ [ Serial ] - UpdateTopology: cluster=mytidb_cluster
Started cluster `mytidb_cluster` successfully
The root password of TiDB database has been changed.
The new password is: 'BZ$6K587Yt4+C!3z@T'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
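At this point you can check that every component is up:
tiup cluster display mytidb_cluster   # all instances should report status Up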
8. Log in to the database and change the initial password:
/mysqlsoft/mysql/bin/mysql -h 192.168.200.57 -P4000 -uroot -p   # enter the temporary password generated at startup to log in to the database
Change the database password and flush privileges:
set password for 'root'@'%' = 'tidb';
flush privileges;
[root@localhost log]# /mysqlsoft/mysql/bin/mysql -h 192.168.200.57 -P4000 -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 407
Server version: 5.7.25-TiDB-v6.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> set password for 'root'@'%' = 'tidb';
Query OK, 0 rows affected (0.04 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.05 sec)
mysql>
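To confirm the new password works, you can reconnect non-interactively with the same mysql client (the password 'tidb' is the one just set above):
/mysqlsoft/mysql/bin/mysql -h 192.168.200.57 -P4000 -uroot -ptidb -e "select tidb_version();"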
9. Steps to uninstall TiDB:
Stop the cluster:
tiup cluster stop mytidb_cluster
Clean up the data:
tiup cluster clean mytidb_cluster --all
Destroy (uninstall) the cluster:
tiup cluster destroy mytidb_cluster
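After destroy, the cluster should no longer be listed by tiup:
tiup cluster list   # mytidb_cluster should be gone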