[TiDB] Deploying TiDB with TiUP

1. Configure DNS

[root@mysql1 ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
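
DNS must work so the host can reach the TiUP mirror in the next step. A quick check that the mirror host resolves (a minimal sketch added here, not part of the original transcript; assumes nslookup or getent is available on the host):

# Confirm the TiUP mirror host resolves through the nameservers above
nslookup tiup-mirrors.pingcap.com
# Alternative that uses the system resolver without bind-utils
getent hosts tiup-mirrors.pingcap.com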

2. Download the TiUP tool

[tidb@mysql1 bin]$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5149k  100 5149k    0     0  13.7M      0 --:--:-- --:--:-- --:--:-- 13.8M
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /home/tidb/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /home/tidb/.bash_profile
Installed path: /home/tidb/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
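
The installer modifies the shell profile shown above; reloading it and printing the TiUP version is a quick sanity check (added here, not part of the original transcript):

# Pick up the PATH change made by the installer
source /home/tidb/.bash_profile
# Confirm TiUP is callable and print its version
tiup --version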

3. Install the TiUP cluster component

[tidb@mysql1 .tiup]$ tiup cluster
Checking updates for component cluster... Timedout (after 2s)
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.16.0-linux-amd64.tar.gz 8.83 MiB / 8.83 MiB 100.00% 55.39 MiB/s                                                                              
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

-- Check the installed version (the binary path shows the cluster component version).
[tidb@mysql1 .tiup]$ tiup --binary cluster
/home/tidb/.tiup/components/cluster/v1.16.0/tiup-cluster

4. Back up the TiUP installation directory

[tidb@mysql1 .tiup]$ which tiup 
~/.tiup/bin/tiup
-- Make the backup.
cp -r ~/.tiup  ~/mytiup.bak
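
Should ~/.tiup ever be corrupted or deleted, the backup can simply be copied back into place (a hypothetical recovery sketch using the paths above):

# Restore TiUP from the backup taken above (hypothetical recovery)
rm -rf ~/.tiup
cp -r ~/mytiup.bak ~/.tiup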

5. Deploy the cluster

-- Confirm that no cluster is deployed yet.
[tidb@mysql1 ~]$  tiup cluster list tidb
Name  User  Version  Path  PrivateKey
----  ----  -------  ----  ----------
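
The deploy command below reads a topology file, ~/config.yaml, which the original post does not show. The following is a reconstructed sketch consistent with the topology confirmed by the deploy output further down (three PD/TiKV/TiDB nodes, monitoring on 192.168.1.11, deploy user tidb); treat the exact contents as assumptions:

# Reconstructed topology sketch -- the actual ~/config.yaml was not shown
cat > ~/config.yaml <<'EOF'
global:
  user: "tidb"
  deploy_dir: "/tidb/tidb-deploy"
  data_dir: "/tidb/tidb-data"
pd_servers:
  - host: 192.168.1.11
  - host: 192.168.1.12
  - host: 192.168.1.13
tidb_servers:
  - host: 192.168.1.11
  - host: 192.168.1.12
  - host: 192.168.1.13
tikv_servers:
  - host: 192.168.1.11
  - host: 192.168.1.12
  - host: 192.168.1.13
monitoring_servers:
  - host: 192.168.1.11
grafana_servers:
  - host: 192.168.1.11
alertmanager_servers:
  - host: 192.168.1.11
EOF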

-- Deploy the cluster.
tiup cluster deploy tidbcluster v4.0.7 ~/config.yaml --user root -p  

[tidb@mysql1 ~]$ tiup cluster deploy tidbcluster v4.0.7 ~/config.yaml --user root -p  
Input SSH password: 

+ Detect CPU Arch Name
  - Detecting node 192.168.1.11 Arch info ... Done
  - Detecting node 192.168.1.12 Arch info ... Done
  - Detecting node 192.168.1.13 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.1.11 OS info ... Done
  - Detecting node 192.168.1.12 OS info ... Done
  - Detecting node 192.168.1.13 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidbcluster
Cluster version: v4.0.7
Role          Host          Ports        OS/Arch       Directories
----          ----          -----        -------       -----------
pd            192.168.1.11  2379/2380    linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
pd            192.168.1.12  2379/2380    linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
pd            192.168.1.13  2379/2380    linux/x86_64  /tidb/tidb-deploy/pd-2379,/tidb/tidb-data/pd-2379
tikv          192.168.1.11  20160/20180  linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tikv          192.168.1.12  20160/20180  linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tikv          192.168.1.13  20160/20180  linux/x86_64  /tidb/tidb-deploy/tikv-20160,/tidb/tidb-data/tikv-20160
tidb          192.168.1.11  4000/10080   linux/x86_64  /tidb/tidb-deploy/tidb-4000
tidb          192.168.1.12  4000/10080   linux/x86_64  /tidb/tidb-deploy/tidb-4000
tidb          192.168.1.13  4000/10080   linux/x86_64  /tidb/tidb-deploy/tidb-4000
prometheus    192.168.1.11  9090         linux/x86_64  /tidb/tidb-deploy/prometheus-9090,/tidb/tidb-data/prometheus-9090
grafana       192.168.1.11  3000         linux/x86_64  /tidb/tidb-deploy/grafana-3000
alertmanager  192.168.1.11  9093/9094    linux/x86_64  /tidb/tidb-deploy/alertmanager-9093,/tidb/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.7 (linux/amd64) ... Done
  - Download tikv:v4.0.7 (linux/amd64) ... Done
  - Download tidb:v4.0.7 (linux/amd64) ... Done
  - Download prometheus:v4.0.7 (linux/amd64) ... Done
  - Download grafana:v4.0.7 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.1.13:22 ... Done
  - Prepare 192.168.1.11:22 ... Done
  - Prepare 192.168.1.12:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.1.11 ... Done
  - Copy pd -> 192.168.1.12 ... Done
  - Copy pd -> 192.168.1.13 ... Done
  - Copy tikv -> 192.168.1.11 ... Done
  - Copy tikv -> 192.168.1.12 ... Done
  - Copy tikv -> 192.168.1.13 ... Done
  - Copy tidb -> 192.168.1.11 ... Done
  - Copy tidb -> 192.168.1.12 ... Done
  - Copy tidb -> 192.168.1.13 ... Done
  - Copy prometheus -> 192.168.1.11 ... Done
  - Copy grafana -> 192.168.1.11 ... Done
  - Copy alertmanager -> 192.168.1.11 ... Done
  - Deploy node_exporter -> 192.168.1.11 ... Done
  - Deploy node_exporter -> 192.168.1.12 ... Done
  - Deploy node_exporter -> 192.168.1.13 ... Done
  - Deploy blackbox_exporter -> 192.168.1.13 ... Done
  - Deploy blackbox_exporter -> 192.168.1.11 ... Done
  - Deploy blackbox_exporter -> 192.168.1.12 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.1.11:2379 ... Done
  - Generate config pd -> 192.168.1.12:2379 ... Done
  - Generate config pd -> 192.168.1.13:2379 ... Done
  - Generate config tikv -> 192.168.1.11:20160 ... Done
  - Generate config tikv -> 192.168.1.12:20160 ... Done
  - Generate config tikv -> 192.168.1.13:20160 ... Done
  - Generate config tidb -> 192.168.1.11:4000 ... Done
  - Generate config tidb -> 192.168.1.12:4000 ... Done
  - Generate config tidb -> 192.168.1.13:4000 ... Done
  - Generate config prometheus -> 192.168.1.11:9090 ... Done
  - Generate config grafana -> 192.168.1.11:3000 ... Done
  - Generate config alertmanager -> 192.168.1.11:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.1.11 ... Done
  - Generate config node_exporter -> 192.168.1.12 ... Done
  - Generate config node_exporter -> 192.168.1.13 ... Done
  - Generate config blackbox_exporter -> 192.168.1.13 ... Done
  - Generate config blackbox_exporter -> 192.168.1.11 ... Done
  - Generate config blackbox_exporter -> 192.168.1.12 ... Done
Enabling component pd
	Enabling instance 192.168.1.13:2379
	Enabling instance 192.168.1.11:2379
	Enabling instance 192.168.1.12:2379
	Enable instance 192.168.1.11:2379 success
	Enable instance 192.168.1.13:2379 success
	Enable instance 192.168.1.12:2379 success
Enabling component tikv
	Enabling instance 192.168.1.13:20160
	Enabling instance 192.168.1.11:20160
	Enabling instance 192.168.1.12:20160
	Enable instance 192.168.1.11:20160 success
	Enable instance 192.168.1.13:20160 success
	Enable instance 192.168.1.12:20160 success
Enabling component tidb
	Enabling instance 192.168.1.13:4000
	Enabling instance 192.168.1.11:4000
	Enabling instance 192.168.1.12:4000
	Enable instance 192.168.1.13:4000 success
	Enable instance 192.168.1.12:4000 success
	Enable instance 192.168.1.11:4000 success
Enabling component prometheus
	Enabling instance 192.168.1.11:9090
	Enable instance 192.168.1.11:9090 success
Enabling component grafana
	Enabling instance 192.168.1.11:3000
	Enable instance 192.168.1.11:3000 success
Enabling component alertmanager
	Enabling instance 192.168.1.11:9093
	Enable instance 192.168.1.11:9093 success
Enabling component node_exporter
	Enabling instance 192.168.1.13
	Enabling instance 192.168.1.11
	Enabling instance 192.168.1.12
	Enable 192.168.1.12 success
	Enable 192.168.1.13 success
	Enable 192.168.1.11 success
Enabling component blackbox_exporter
	Enabling instance 192.168.1.13
	Enabling instance 192.168.1.11
	Enabling instance 192.168.1.12
	Enable 192.168.1.12 success
	Enable 192.168.1.13 success
	Enable 192.168.1.11 success
Cluster `tidbcluster` deployed successfully, you can start it with command: `tiup cluster start tidbcluster --init`


-- Start the cluster and initialize it (this generates the root password).
[tidb@mysql1 ~]$ tiup cluster start tidbcluster --init
Starting cluster tidbcluster...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [ Serial ] - StartCluster
Starting component pd
	Starting instance 192.168.1.13:2379
	Starting instance 192.168.1.11:2379
	Starting instance 192.168.1.12:2379
	Start instance 192.168.1.13:2379 success
	Start instance 192.168.1.11:2379 success
	Start instance 192.168.1.12:2379 success
Starting component tikv
	Starting instance 192.168.1.13:20160
	Starting instance 192.168.1.11:20160
	Starting instance 192.168.1.12:20160
	Start instance 192.168.1.11:20160 success
	Start instance 192.168.1.13:20160 success
	Start instance 192.168.1.12:20160 success
Starting component tidb
	Starting instance 192.168.1.13:4000
	Starting instance 192.168.1.11:4000
	Starting instance 192.168.1.12:4000
	Start instance 192.168.1.11:4000 success
	Start instance 192.168.1.13:4000 success
	Start instance 192.168.1.12:4000 success
Starting component prometheus
	Starting instance 192.168.1.11:9090
	Start instance 192.168.1.11:9090 success
Starting component grafana
	Starting instance 192.168.1.11:3000
	Start instance 192.168.1.11:3000 success
Starting component alertmanager
	Starting instance 192.168.1.11:9093
	Start instance 192.168.1.11:9093 success
Starting component node_exporter
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Starting instance 192.168.1.11
	Start 192.168.1.13 success
	Start 192.168.1.12 success
	Start 192.168.1.11 success
Starting component blackbox_exporter
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Starting instance 192.168.1.11
	Start 192.168.1.12 success
	Start 192.168.1.13 success
	Start 192.168.1.11 success
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Started cluster `tidbcluster` successfully
The root password of TiDB database has been changed.
The new password is: '6+X^793y_*hPS5Zq2Y'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

-- This password is displayed only once. The root password is: '6+X^793y_*hPS5Zq2Y'
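
Because the generated password cannot be recovered later, one option is to replace it immediately with a password of your own (a sketch; 'MyNewPass123' is a placeholder, not from the original):

# Replace the one-time generated root password (placeholder new password)
mysql -u root -h 192.168.1.11 -P 4000 -p'6+X^793y_*hPS5Zq2Y' \
  -e "ALTER USER 'root'@'%' IDENTIFIED BY 'MyNewPass123';"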


-- List the clusters.
[tidb@mysql1 ~]$ tiup cluster list
Checking updates for component cluster...
Name         User  Version  Path                                                   PrivateKey
----         ----  -------  ----                                                   ----------
tidbcluster  tidb  v4.0.7   /home/tidb/.tiup/storage/cluster/clusters/tidbcluster  /home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa


-- Display the cluster topology and status.
[tidb@mysql1 ~]$ tiup cluster display tidbcluster
Cluster type:       tidb
Cluster name:       tidbcluster
Cluster version:    v4.0.7
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.1.13:2379/dashboard
Grafana URL:        http://192.168.1.11:3000
ID                  Role          Host          Ports        OS/Arch       Status  Data Dir                           Deploy Dir
--                  ----          ----          -----        -------       ------  --------                           ----------
192.168.1.11:9093   alertmanager  192.168.1.11  9093/9094    linux/x86_64  Up      /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
192.168.1.11:3000   grafana       192.168.1.11  3000         linux/x86_64  Up      -                                  /tidb/tidb-deploy/grafana-3000
192.168.1.11:2379   pd            192.168.1.11  2379/2380    linux/x86_64  Up|L    /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
192.168.1.12:2379   pd            192.168.1.12  2379/2380    linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
192.168.1.13:2379   pd            192.168.1.13  2379/2380    linux/x86_64  Up|UI   /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
192.168.1.11:9090   prometheus    192.168.1.11  9090         linux/x86_64  Up      /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
192.168.1.11:4000   tidb          192.168.1.11  4000/10080   linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
192.168.1.12:4000   tidb          192.168.1.12  4000/10080   linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
192.168.1.13:4000   tidb          192.168.1.13  4000/10080   linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
192.168.1.11:20160  tikv          192.168.1.11  20160/20180  linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
192.168.1.12:20160  tikv          192.168.1.12  20160/20180  linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
192.168.1.13:20160  tikv          192.168.1.13  20160/20180  linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 12
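
The Dashboard URL shown above is served by the PD node marked `UI`, and it accepts the TiDB root account created during `start --init`. Besides `tiup cluster display`, PD itself can be queried over HTTP for member and health information, which is handy for scripted checks (assumes curl on the host; these are PD's standard v1 API endpoints):

# List PD members through any PD endpoint
curl http://192.168.1.11:2379/pd/api/v1/members
# Report the health of each PD member
curl http://192.168.1.11:2379/pd/api/v1/health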

-- Start the cluster (already running after --init; shown here for reference).
[tidb@mysql1 ~]$ tiup cluster start tidbcluster
Starting cluster tidbcluster...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [ Serial ] - StartCluster
Starting component pd
	Starting instance 192.168.1.13:2379
	Starting instance 192.168.1.11:2379
	Starting instance 192.168.1.12:2379
	Start instance 192.168.1.13:2379 success
	Start instance 192.168.1.11:2379 success
	Start instance 192.168.1.12:2379 success
Starting component tikv
	Starting instance 192.168.1.13:20160
	Starting instance 192.168.1.11:20160
	Starting instance 192.168.1.12:20160
	Start instance 192.168.1.13:20160 success
	Start instance 192.168.1.11:20160 success
	Start instance 192.168.1.12:20160 success
Starting component tidb
	Starting instance 192.168.1.13:4000
	Starting instance 192.168.1.11:4000
	Starting instance 192.168.1.12:4000
	Start instance 192.168.1.13:4000 success
	Start instance 192.168.1.11:4000 success
	Start instance 192.168.1.12:4000 success
Starting component prometheus
	Starting instance 192.168.1.11:9090
	Start instance 192.168.1.11:9090 success
Starting component grafana
	Starting instance 192.168.1.11:3000
	Start instance 192.168.1.11:3000 success
Starting component alertmanager
	Starting instance 192.168.1.11:9093
	Start instance 192.168.1.11:9093 success
Starting component node_exporter
	Starting instance 192.168.1.11
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Start 192.168.1.12 success
	Start 192.168.1.13 success
	Start 192.168.1.11 success
Starting component blackbox_exporter
	Starting instance 192.168.1.11
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Start 192.168.1.12 success
	Start 192.168.1.13 success
	Start 192.168.1.11 success
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Started cluster `tidbcluster` successfully

6. Log in and use the cluster

-- Log in to the database with the password generated above.
[tidb@mysql1 ~]$ mysql -u root -h 192.168.1.11 -P 4000 -p'6+X^793y_*hPS5Zq2Y'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.25-TiDB-v4.0.7 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases; 
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> use test 
Database changed
mysql> show tables; 
Empty set (0.00 sec)

mysql> create table my_test1 (id int,name varchar(20));
Query OK, 0 rows affected (0.10 sec)

mysql> insert into my_test1 values(1,'薛双奇1'),(2,'薛双奇2');
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from my_test1;
+------+------------+
| id   | name       |
+------+------------+
|    1 | 薛双奇1    |
|    2 | 薛双奇2    |
+------+------------+
2 rows in set (0.01 sec)
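
From here, routine operations go through the same `tiup cluster` subcommands listed in the help output earlier, for example:

# Check cluster status at any time
tiup cluster display tidbcluster
# Stop the whole cluster, then bring it back
tiup cluster stop tidbcluster
tiup cluster start tidbcluster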
