TiDB 3.0 Installation

In production, if CDC is not used for data synchronization and you want to run a data warehouse / HTAP workload, you need at least 11 machines; TiDB servers are usually deployed as 3 nodes, which brings the total to 12 machines.
Cloud environments are not recommended: direct communication between cluster nodes can be problematic there.

Each machine can get by with 2 CPU cores and 2 GB of RAM.

1. On the control machine: install the MySQL client dependencies and RPM packages. Do not install the server.
yum -y install make automake libtool pkgconfig libaio-devel libtool openssl-devel.x86_64 openssl.x86_64

rpm -qa |grep mariadb
rpm -e mariadb-5.5.60-1.el7_5.x86_64 --nodeps
rpm -e mariadb-server-5.5.60-1.el7_5.x86_64 --nodeps
rpm -e mariadb-libs-5.5.60-1.el7_5.x86_64 --nodeps
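The three rpm -e calls above can also be driven by the query itself. A sketch, written as a dry run so nothing is erased by accident:

```shell
# List installed mariadb packages and print a removal command for each.
# Dry run: delete the leading "echo" inside the loop to actually erase them.
rpm -qa | grep -i mariadb | while read -r pkg; do
  echo rpm -e --nodeps "$pkg"
done
```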

rpm -ivh mysql-community-common-8.0.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-8.0.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-8.0.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-devel-8.0.21-1.el7.x86_64.rpm

2. On the control machine: download and install TiDB-Ansible.
The download needs Internet access. Shut the VM down, add a second NIC, and configure it for NAT networking;
change the original ens33 NIC's gateway to 192.168.0.1, then:
systemctl restart network

Configure the Aliyun mirror:
echo "[EL7-2]" > /etc/yum.repos.d/RHEL7.repo
echo "name=Linux-2" >> /etc/yum.repos.d/RHEL7.repo
echo "baseurl=http://mirrors.aliyun.com/epel/7Server/x86_64/" >> /etc/yum.repos.d/RHEL7.repo
echo "gpgcheck=0" >> /etc/yum.repos.d/RHEL7.repo
echo "enabled=1" >> /etc/yum.repos.d/RHEL7.repo
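The five echo lines can equivalently be written as one heredoc. A sketch that writes the file into the current directory first so it can be reviewed before being installed as root (the local filename is just an illustration):

```shell
# Same repo definition, written in one go with a heredoc.
cat > ./RHEL7.repo <<'EOF'
[EL7-2]
name=Linux-2
baseurl=http://mirrors.aliyun.com/epel/7Server/x86_64/
gpgcheck=0
enabled=1
EOF
# Then install it as root:
# sudo cp ./RHEL7.repo /etc/yum.repos.d/RHEL7.repo
```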

Online installation:
yum -y install epel-release git curl sshpass python2-pip wget

wget https://bootstrap.pypa.io/get-pip.py
which python          # check where python lives
/usr/bin/python -V    # check the version (capital V; lowercase -v starts verbose mode)

Note (screenshot omitted): Python 2.7.5 is sufficient, so the symlink below is not needed:
###cd /tidb/soft
###ln -sf /usr/bin/python2.7 /usr/bin/python

python get-pip.py
This may fail with an error.

Note (screenshot omitted): just wget the URL shown in the error message and retry.

[root@fgtidb09 tidb]# pip -V
pip 20.2.3 from /usr/lib/python2.7/site-packages/pip (python 2.7)

At this point the required Python environment is ready. Next, download the package:
tidb-ansible-4.0.7.tar.gz (tidb-ansible-master)
Online TiDB-Ansible: https://github.com/pingcap/tidb-ansible

Note (screenshot omitted): download it the way shown above, otherwise you may end up with the latest master instead of 4.0.7.

su - root
cd /tidb/soft
tar zxvf /opt/tidb-ansible-4.0.7.tar.gz
mv tidb-ansible-4.0.7 /tidb/tidb-ansible
cd /tidb/tidb-ansible
pip install -r ./requirements.txt

Note (screenshot omitted): if pip times out, just retry; you may need a proxy to reach the Internet.
ansible --version

TiDB 3.0 is always installed online; 4.0 ships a new tool that supports offline installation. 3.0 is still worth learning, though, because a large number of deployments still run it.

3. On the control machine: configure SSH trust and sudo rules for the deployment machines.
####useradd -m -d /home/tidb tidb
####passwd tidb

visudo
tidb ALL=(ALL) NOPASSWD: ALL

su - root
chown -R tidb:tidb /tidb
chmod -R 775 /tidb
su - tidb
cd /tidb/tidb-ansible
vi hosts.ini
[servers]
192.168.1.80
192.168.1.81
192.168.1.82
192.168.1.83
192.168.1.84
192.168.1.85
192.168.1.86
192.168.1.87
192.168.1.88
192.168.1.89
[all:vars]
username = tidb
ntp_server = 127.127.1.0

########## SSH mutual trust may run into problems here
Provision all hosts: create the user on each target machine, set up mutual trust, and apply the related rules.
su - tidb
cd /tidb/tidb-ansible
# The command below prompts for the root password
ansible-playbook -i hosts.ini create_users.yml -u root -k
########################
Manual alternative: run the 11 commands below on every host; the control machine must also establish trust with itself.
## Run in every window; double-check the result afterwards, this step is error-prone
ssh-keygen -t rsa
## Then run the commands below one by one
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.80
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.81
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.82
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.83
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.84
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.85
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.86
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.87
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.88
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.89

su - tidb
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.80
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.81
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.82
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.83
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.84
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.85
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.86
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.87
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.88
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@192.168.1.89
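The ten ssh-copy-id calls can be generated with a loop instead of being typed out. A sketch, written as a dry run:

```shell
# Hosts .80-.89 as listed in hosts.ini; "tidb" is the deployment user.
# Dry run: the loop only prints each command; delete the leading "echo"
# to actually push the keys (each host still prompts for the password).
for i in $(seq 80 89); do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "tidb@192.168.1.$i"
done
```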

If mutual trust was set up but you still get "ERROR: ECDSA host key for 192.168.1.140 has changed and you have requested strict checking.",
remove the stale host key for the IP named in the error and retry:
ssh-keygen -R 192.168.1.139

Verification: verify every host now, so that a first-connection "yes" prompt doesn't hang things later.
sudo su root
su - tidb
#### Run the commands below one at a time as well; type "yes" at each prompt.
ssh zxhtidb80 date
ssh zxhtidb01 date
ssh zxhtidb02 date
ssh zxhtidb03 date
ssh zxhtidb04 date
ssh zxhtidb05 date
ssh zxhtidb06 date
ssh zxhtidb07 date
ssh zxhtidb08 date
ssh zxhtidb09 date

4. Configure NTP (skip if it was already configured earlier).
echo "server 127.127.1.0 iburst" >> /etc/ntp.conf
systemctl restart ntpd
systemctl enable ntpd
ntpq -p
ntpstat
On the other hosts, as root:
echo "server 192.168.1.80" >> /etc/ntp.conf
echo "restrict 192.168.1.80 nomodify notrap noquery" >> /etc/ntp.conf
ntpdate -u 192.168.1.80
hwclock -w
systemctl restart ntpd
systemctl enable ntpd
ntpq -p

cd /tidb/tidb-ansible
ansible -i hosts.ini all -m shell -a "systemctl disable chronyd.service" -b
ansible -i hosts.ini all -m shell -a "systemctl enable ntpd.service" -b
ansible -i hosts.ini all -m shell -a "systemctl start ntpd.service" -b

Check:
su - tidb
ansible -i hosts.ini all -m shell -a "ntpdate -u 192.168.1.80" -b
ansible -i hosts.ini all -m shell -a "ntpstat" -b
ansible -i hosts.ini all -m shell -a "ntpq -p" -b
ansible -i hosts.ini all -m shell -a "date" -b

5. Set the CPU frequency governor (skip this if the hardware does not support it).
cpupower frequency-info --governors

analyzing CPU 0:
available cpufreq governors: performance powersave

Set the governor to maximum-performance mode:
cpupower frequency-set --governor performance
ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -u tidb -b
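To confirm the setting took effect, the active governor can also be read straight from sysfs. A sketch; the cpufreq directory simply does not exist on VMs without frequency scaling:

```shell
# Print the governor currently active on each CPU; fall back gracefully
# when the kernel exposes no cpufreq interface (common inside VMs).
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  [ -e "$f" ] || { echo "cpufreq not supported on this machine"; break; }
  echo "$f: $(cat "$f")"
done
```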

6. Plan and write the deployment configuration on the control machine.
su - tidb
cd /tidb/tidb-ansible
cp inventory.ini inventory.ini.bak

## In the template file, delete everything above and including the highlighted (red) line, then paste in the content below.
vi inventory.ini

## TiDB Cluster Part
[tidb_servers]
192.168.1.84
192.168.1.85
192.168.1.86

[tikv_servers]
192.168.1.81
192.168.1.82
192.168.1.83

[pd_servers]
192.168.1.87
192.168.1.88
192.168.1.89

[spark_master]
192.168.1.81

[spark_slaves]
192.168.1.82
192.168.1.83

## Monitoring Part
## prometheus and pushgateway servers
[monitoring_servers]
192.168.1.80

[grafana_servers]
192.168.1.80

## node_exporter and blackbox_exporter servers
[monitored_servers]
192.168.1.80
192.168.1.81
192.168.1.82
192.168.1.83
192.168.1.84
192.168.1.85
192.168.1.86
192.168.1.87
192.168.1.88
192.168.1.89

[alertmanager_servers]
192.168.1.80

[kafka_exporter_servers]

## Binlog Part
[pump_servers]

[drainer_servers]

## For TiFlash Part, please contact us for beta-testing and user manual
[tiflash_servers]

## Group variables
[pd_servers:vars]
location_labels = ["zone","rack","host"]

## Global variables
[all:vars]
deploy_dir = /tidb/deploy

## Connection
## ssh via normal user
ansible_user = tidb

cluster_name = fgeducluster

## CPU architecture: amd64, arm64
cpu_architecture = amd64

tidb_version = v4.0.7

## process supervision, [systemd, supervise]
process_supervision = systemd

timezone = Asia/Shanghai
enable_firewalld = False
## check NTP service
enable_ntpd = True
set_hostname = False
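A quick way to sanity-check the edited file is to count the hosts in each core group before deploying. A sketch; the awk-based helper is illustrative, not part of tidb-ansible:

```shell
# Count the non-comment, non-blank lines inside one [group] of an
# ini-style inventory. Usage: count_hosts <group> <file>
count_hosts() {
  awk -v g="$1" '
    $0 == "[" g "]"       { f = 1; next }  # enter the target group
    /^\[/                 { f = 0 }        # any other header ends it
    f && NF && $0 !~ /^#/ { n++ }          # a host line
    END { print n + 0 }
  ' "$2"
}

# Example, run from /tidb/tidb-ansible:
# for grp in tidb_servers tikv_servers pd_servers; do
#   echo "$grp: $(count_hosts "$grp" inventory.ini) host(s)"
# done
```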

7. Install the cluster (run on the control machine).
ansible-playbook uses 5 forks by default; when there are many target machines, add the -f option to raise the concurrency.

su - tidb
Verify that SSH access works:
ansible -i inventory.ini all -m shell -a 'whoami'
ansible -i inventory.ini all -m shell -a 'whoami' -b

# Download resources
ansible-playbook local_prepare.yml

# Configure kernel parameters and initialize the environment
On all nodes: yum install -y yum-utils
ansible-playbook bootstrap.yml

If it fails complaining that 8 CPU cores are required, just comment out that check line in bootstrap.yml.

Note (screenshot omitted): this error means SSD disks are expected; it is safe to ignore.

For the monitoring components, install these packages:
sudo yum install fontconfig open-sans-fonts -y

Deploy the cluster according to inventory.ini:
ansible-playbook deploy.yml

8. Start and stop the TiDB cluster.
On 192.168.1.80:
cd /tidb/tidb-ansible
ansible-playbook start.yml
cd /tidb/tidb-ansible
ansible-playbook stop.yml

Grafana monitoring:
http://192.168.1.80:3000
admin/admin

http://192.168.1.87:2379/dashboard   (no password needed; any PD node's IP works)

http://192.168.1.88:2379/dashboard
http://192.168.1.89:2379/dashboard
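PD also serves an HTTP status API on the same port, so liveness can be checked from a script. A sketch using the standard /pd/api/v1/members endpoint, written as a dry run:

```shell
# Probe each PD node's status port (2379); a healthy PD answers
# /pd/api/v1/members with JSON describing the PD cluster.
# Dry run: delete the leading "echo" to actually send the requests.
for ip in 192.168.1.87 192.168.1.88 192.168.1.89; do
  echo curl -s "http://$ip:2379/pd/api/v1/members"
done
```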

9. Log in and run SQL; any of the three TiDB server IPs accepts connections.
mysql -uroot -h192.168.1.84 -P4000
mysql> select tidb_version()\G

Create the fgedu database:
create database fgedu;
use fgedu;
Create the itpuxt1 table:
CREATE TABLE itpuxt1 (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(20) NOT NULL DEFAULT '',
  age int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (id),
  KEY idx_age (age));
Insert data:
insert into itpuxt1 values (1,'itpux01',21);
insert into itpuxt1 values (2,'itpux02',22);
insert into itpuxt1 values (3,'itpux03',23);
insert into itpuxt1 values (4,'itpux04',24);
insert into itpuxt1 values (5,'itpux05',25);
commit;
select * from itpuxt1;
select STORE_ID,ADDRESS,STORE_STATE,STORE_STATE_NAME,CAPACITY,AVAILABLE,UPTIME
  from INFORMATION_SCHEMA.TIKV_STORE_STATUS;

Note (screenshot omitted): shows the storage status of the TiKV nodes.
