CentOS 7 TiDB Database Installation and Deployment
1. Resource Preparation
The official resource requirements are fairly high. I prepared a single VM with 8 cores, 20 GB RAM, and a 200 GB disk (one IP), and will install the entire cluster on that one node.
Component  | CPU       | Memory | Disk                   | Instances (min)
---------- | --------- | ------ | ---------------------- | ---------------
TiDB       | 8 cores+  | 16 GB+ | No special requirement | 1
PD         | 4 cores+  | 8 GB+  | SAS, 200 GB+           | 1
TiKV       | 8 cores+  | 32 GB+ | SSD, 200 GB+           | 3
TiFlash    | 32 cores+ | 64 GB+ | SSD, 200 GB+           | 1
TiCDC      | 8 cores+  | 16 GB+ | SAS, 200 GB+           | 1
Monitoring | 8 cores+  | 16 GB+ | SAS                    | 1
TiKV requires at least 3 instances. Since everything here runs on a single node (one IP), the ports of the three TiKV instances must be incremented to avoid conflicts. When deploying across multiple hosts (multiple IPs), the default ports can stay the same on every node.
2. Firewall Configuration
systemctl stop firewalld
systemctl disable firewalld
Note: if the firewall must stay enabled, open the required ports instead of stopping it; the ports are listed in the topo.yaml configuration file shown later.
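If you keep firewalld running, the ports used by the single-node topology in this article can be opened like this (a sketch: the list matches the defaults of the components deployed below, plus the TiKV ports configured in topo.yaml; adjust it to your own topology, e.g. add the TiFlash ports if needed):

```shell
# Open the default TiDB component ports (TiDB 4000/10080, PD 2379/2380,
# Prometheus 9090, Grafana 3000, exporters 9100/9115, TiKV 20160-20162/20180-20182).
for p in 4000 10080 2379 2380 9090 3000 9100 9115 \
         20160 20161 20162 20180 20181 20182; do
  firewall-cmd --permanent --add-port=${p}/tcp
done
firewall-cmd --reload
```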
Also disable SELinux (the config-file change takes effect after a reboot; setenforce 0 applies it immediately):
vi /etc/selinux/config
SELINUX=disabled
setenforce 0
3. Raise the default SSH session limit (needed when simulating a multi-node cluster on a single machine):
vim /etc/ssh/sshd_config
# raise the default MaxSessions limit of 10 to 20
MaxSessions 20
Restart sshd so the change takes effect:
service sshd restart
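To confirm the new limit is active, you can dump the effective configuration of the running sshd:

```shell
# print the effective MaxSessions value from the running sshd config
sshd -T | grep -i maxsessions
```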
4. Kernel Parameter Tuning
Optional; for an ordinary test environment this step can be skipped.
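For reference, the tuning usually applied on production nodes looks like the following (values taken from the TiDB deployment recommendations; verify them against the docs for the version you install):

```shell
# kernel parameters commonly recommended for TiDB production nodes
cat >> /etc/sysctl.conf <<'EOF'
fs.file-max = 1000000
net.core.somaxconn = 32768
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
sysctl -p

# disable transparent huge pages (effective until next reboot;
# make it persistent via rc.local or a tuned profile if needed)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```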
5. Pre-installation Preparation (Component Installation)
5.1 Download and install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
5.2 Reload the shell profile so the tiup binary is on PATH:
source /root/.bash_profile
5.3 Install the cluster component (running the command installs it on first use):
tiup cluster
If it is already installed and you want the latest version, run the update command:
tiup update --self && tiup update cluster
5.4 Install the MySQL client (convenient for connecting to the database for testing; on CentOS 7 this pulls in the MariaDB client, which works fine):
yum install -y mysql
5.5 Create the deployment configuration file topo.yaml
mkdir -p /opt/tidb_install
cd $_
touch topo.yaml
vi topo.yaml
The contents are as follows:
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/opt/tidb-deploy"
  data_dir: "/opt/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.1.36

tidb_servers:
  - host: 192.168.1.36

tikv_servers:
  - host: 192.168.1.36
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 192.168.1.36
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 192.168.1.36
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 192.168.1.36

monitoring_servers:
  - host: 192.168.1.36

grafana_servers:
  - host: 192.168.1.36
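Because every instance shares one IP, a duplicated port is the easiest mistake to make in this file. A quick sanity check (a homegrown one-liner, not part of TiUP; it only inspects ports written explicitly in the file, not component defaults):

```shell
# list any port value that appears more than once in topo.yaml;
# no output means all explicitly configured ports are unique
awk '/port:/ {print $2}' topo.yaml | sort | uniq -d
```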
6. Run the Deployment Command
tiup cluster deploy tidb-cluster v6.1.0 ./topo.yaml --user root -p
Parameter notes:
tidb-cluster: the TiDB cluster name (the same name is reused in every later tiup cluster command)
v6.1.0: the TiDB version to deploy
./topo.yaml: the deployment configuration file
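Before deploying, TiUP can also audit the target machine against the topology (optional; --apply lets it try to fix the issues it reports automatically):

```shell
# audit the host(s) described in topo.yaml against TiDB's requirements
tiup cluster check ./topo.yaml --user root -p
# optionally let tiup apply fixes for the problems it found
tiup cluster check ./topo.yaml --apply --user root -p
```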
The installer prompts for the root user's password; enter it and the deployment runs to completion. With every cluster component on the same VM (8 cores / 20 GB, single IP), the whole deployment took less than a minute.
When deployment finishes, the output tells you to run the first-time start command: tiup cluster start tidb-cluster --init
--init: Initialize a secure root password for the database. The first start uses --init so that the database user root is given an initial password; you can change it after logging in.
7. Start the Database as Prompted
tiup cluster start tidb-cluster --init
Use --init only on the first start; afterwards start the cluster with plain tiup cluster start tidb-cluster. On a successful initialization and start, the generated password for the database user root is printed.
8. Log In to the Database and Change the Password
mysql -uroot -h127.0.0.1 -P4000 -p
alter user root identified by 'dba.1q2w3e';
Create a new user and grant privileges:
create user tidb_dba identified by '1q2w3e';
grant all privileges on *.* to tidb_dba;
-- the privileges keyword can be omitted
\q
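To confirm the new account works, reconnect as tidb_dba and query the server version (the password is the one set above):

```shell
# connect as the new user and print the TiDB version string
mysql -h 127.0.0.1 -P 4000 -u tidb_dba -p'1q2w3e' -e 'SELECT tidb_version();'
```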
9. Create a tidbcmd.sh Script to Wrap Common Management Commands
cd /opt
touch tidbcmd.sh
vi tidbcmd.sh
chmod u+x tidbcmd.sh
#!/bin/sh
# cmds for tidb operation by sunny05296
target_cluster="tidb-cluster"

if [ "$#" != 1 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
    echo "input error! Please use: $0 {start|stop|status|list|clean}"
    echo '
    start:  start tidb cluster
    stop:   stop tidb cluster
    status: display tidb cluster status
    list:   list tidb clusters
    clean:  clean tidb data with arg "--all"
    '
    exit 1
fi

if [ "$1" = "start" ]; then
    # start (first-time start: tiup cluster start tidb-cluster --init)
    tiup cluster start "$target_cluster"
elif [ "$1" = "stop" ]; then
    # stop
    tiup cluster stop "$target_cluster"
elif [ "$1" = "status" ]; then
    # display status
    tiup cluster display "$target_cluster"
elif [ "$1" = "list" ]; then
    # list clusters
    tiup cluster list
elif [ "$1" = "clean" ]; then
    # clean data
    tiup cluster clean "$target_cluster" --all
else
    echo "not supported arg: $1"
fi
[root@localhost opt]# ./tidbcmd.sh status
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-cluster
Cluster type: tidb
Cluster name: tidb-cluster
Cluster version: v6.1.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.1.36:2379/dashboard
Grafana URL: http://192.168.1.36:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.1.36:3000 grafana 192.168.1.36 3000 linux/x86_64 Up - /opt/tidb-deploy/grafana-3000
192.168.1.36:2379 pd 192.168.1.36 2379/2380 linux/x86_64 Up|L|UI /opt/tidb-data/pd-2379 /opt/tidb-deploy/pd-2379
192.168.1.36:9090 prometheus 192.168.1.36 9090/12020 linux/x86_64 Up /opt/tidb-data/prometheus-9090 /opt/tidb-deploy/prometheus-9090
192.168.1.36:4000 tidb 192.168.1.36 4000/10080 linux/x86_64 Up - /opt/tidb-deploy/tidb-4000
192.168.1.36:9000 tiflash 192.168.1.36 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /opt/tidb-data/tiflash-9000 /opt/tidb-deploy/tiflash-9000
192.168.1.36:20160 tikv 192.168.1.36 20160/20180 linux/x86_64 Up /opt/tidb-data/tikv-20160 /opt/tidb-deploy/tikv-20160
192.168.1.36:20161 tikv 192.168.1.36 20161/20181 linux/x86_64 Up /opt/tidb-data/tikv-20161 /opt/tidb-deploy/tikv-20161
192.168.1.36:20162 tikv 192.168.1.36 20162/20182 linux/x86_64 Up /opt/tidb-data/tikv-20162 /opt/tidb-deploy/tikv-20162
Total nodes: 8
[root@localhost opt]#