Cluster Middleware Installation

Server Preparation

Hardware Requirements

System       CPU (cores)  Memory (GB)  IP             Services
CentOS 7.4   4            16           192.168.1.241  Redis, nginx
CentOS 7.4   4            8            192.168.1.242  Redis, RocketMQ
CentOS 7.4   4            8            192.168.1.243  Redis, ZooKeeper, MySQL
CentOS 7.4   4            8            192.168.1.244  application services
CentOS 7.4   8            16           192.168.1.246  TiDB, PD
CentOS 7.4   8            16           192.168.1.247  PD
CentOS 7.4   4            8            192.168.1.249  TiDB, PD
CentOS 7.4   8            16           192.168.1.251  TiKV
CentOS 7.4   8            16           192.168.1.252  TiKV
CentOS 7.4   8            16           192.168.1.253  TiKV

Server Configuration

#set the hostname; use a different name on each server
[root@localhost ~]# vim /etc/hostname
	bme241
#configure the network interface
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens192
	TYPE=Ethernet
	PROXY_METHOD=none
	BROWSER_ONLY=no
	#use a static IP
	BOOTPROTO=static
	DEFROUTE=yes
	IPV4_FAILURE_FATAL=no
	IPV6INIT=yes
	IPV6_AUTOCONF=yes
	IPV6_DEFROUTE=yes
	IPV6_FAILURE_FATAL=no
	IPV6_ADDR_GEN_MODE=stable-privacy
	NAME=ens192
	UUID=cec5a0f3-1ef5-4569-91dc-095e05ac3dab
	DEVICE=ens192
	#bring the interface up on boot
	ONBOOT=yes
	#gateway
	GATEWAY=192.168.1.1
	#static IP address
	IPADDR=192.168.1.241
	#subnet mask
	NETMASK=255.255.255.0

#configure DNS
[root@localhost ~]# vim /etc/resolv.conf
	# Generated by NetworkManager
	nameserver 192.1.69.1
	nameserver 192.168.1.1
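
After editing these files, the changes can be applied without a reboot; a minimal sketch, assuming the CentOS 7 defaults (hostnamectl and the legacy network service are available):

#apply the new hostname immediately
[root@localhost ~]# hostnamectl set-hostname bme241
#restart networking so the static IP takes effect, then verify
[root@bme241 ~]# systemctl restart network
[root@bme241 ~]# ip addr show ens192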

Redis

Tools / Environment

IPs: 192.168.1.241, 192.168.1.242, 192.168.1.243 (note: a Redis cluster requires at least three master nodes)

Installing Redis

1. Download and configure a single node

#install the GNU compiler (gcc)
[root@bme241 software]# yum -y install gcc
#download the Redis source tarball
[root@bme241 software]# wget http://download.redis.io/releases/redis-4.0.11.tar.gz
[root@bme241 software]# tar -zxvf redis-4.0.11.tar.gz -C /opt/module/
[root@bme241 software]# cd /opt/module/redis-4.0.11
#compile and install
[root@bme241 redis-4.0.11]# make && make install
[root@bme241 redis-4.0.11]# mkdir -p /opt/module/redis-cluster/prod
[root@bme241 redis-4.0.11]# mkdir -p /opt/module/redis-cluster/bin
[root@bme241 redis-4.0.11]# cp redis.conf /opt/module/redis-cluster/prod
#copy redis-check-aof, redis-benchmark, redis-cli, redis-trib.rb, redis-server and redis-sentinel into /opt/module/redis-cluster/bin/
[root@bme241 redis-4.0.11]# cp src/redis-check-aof /opt/module/redis-cluster/bin/
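
Only the first copy is shown above; the remaining binaries named in the comment can be copied the same way, for example:

[root@bme241 redis-4.0.11]# cp src/redis-benchmark src/redis-cli src/redis-server src/redis-sentinel src/redis-trib.rb /opt/module/redis-cluster/bin/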
[root@bme241 redis-cluster]# vim prod/redis.conf
	#bind IP and port
	bind 192.168.1.241
	protected-mode yes
	port 9000
	#pid file and log file
	pidfile "./redis.pid"
	logfile "./logs/redis.log"
	#enable cluster mode
	cluster-enabled yes
	cluster-config-file "./nodes.conf"
	cluster-node-timeout 15000
	#password settings (optional)
	masterauth "1qaz@WSX"
	requirepass "1qaz@WSX"
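
Note that the logfile path above points to a ./logs directory that Redis will not create on its own; assuming the instance is started from /opt/module/redis-cluster/prod as shown below, create it first on every node:

[root@bme241 redis-cluster]# mkdir -p /opt/module/redis-cluster/prod/logs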

2. Copy Redis to the other nodes and configure them

#copy to the 242 and 243 nodes
[root@bme241 module]# scp -r /opt/module/redis-4.0.11 root@192.168.1.242:/opt/module/
[root@bme241 module]# scp -r /opt/module/redis-4.0.11 root@192.168.1.243:/opt/module/

On the 242 and 243 nodes, repeat the steps from node 241 to create the /opt/module/redis-cluster directory, and change the bind parameter in redis.conf to each node's own IP.

3. Start the nodes

#start Redis from the prod directory on each of the three nodes
[root@bme241 prod]# nohup ../bin/redis-server ./redis.conf > ./run.log 2>&1 &
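
A quick way to confirm each instance is up and running in cluster mode (password as configured above):

[root@bme241 prod]# ../bin/redis-cli -h 192.168.1.241 -p 9000 -a 1qaz@WSX ping
[root@bme241 prod]# ../bin/redis-cli -h 192.168.1.241 -p 9000 -a 1qaz@WSX cluster info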

4. From Redis 5.0 onward a cluster can be created directly with redis-cli; before 5.0 it is created with the Ruby script redis-trib.rb. The Ruby version that yum installs on CentOS 7 is 2.0, which is too old for this Redis version ("redis requires Ruby version >= 2.3.0").
References:
https://www.cnblogs.com/ding2016/p/7892542.html
https://www.cnblogs.com/ding2016/p/7903147.html

#this adds a CentOS-SCLo-scl-rh.repo repository under /etc/yum.repos.d/
[root@bme241 redis-cluster]# yum install centos-release-scl-rh
#then Ruby 2.3 can be installed directly with yum
[root@bme241 redis-cluster]# yum install rh-ruby23  -y
#must be run to activate the Ruby 2.3 environment
[root@bme241 redis-cluster]# scl enable rh-ruby23 bash
#check the Ruby version
[root@bme241 redis-cluster]# ruby -v   
#install the redis gem dependency
[root@bme241 redis-cluster]# gem install redis
[root@bme241 redis-cluster]# cp /opt/module/redis-4.0.11/src/redis-trib.rb /opt/module/redis-cluster/bin
#create the Redis cluster with redis-trib.rb; the following creates three master nodes
[root@bme241 redis-cluster]# bin/redis-trib.rb create 192.168.1.241:9000 192.168.1.242:9000 192.168.1.243:9000 
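
If the script finishes successfully, cluster membership can be checked from any node, for example:

[root@bme241 redis-cluster]# bin/redis-cli -h 192.168.1.241 -p 9000 -a 1qaz@WSX cluster nodes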

Common Errors

Sorry, can't connect to node: caused by redis.conf having a bind address configured or a password set.

1) If caused by bind: comment out the bind line in redis.conf, or change it to the node's own IP:

#bind 127.0.0.1
protected-mode no
or
bind 127.0.0.1 192.168.1.241
protected-mode yes

2) If caused by the password:

#locate the client.rb file of the redis gem
[root@bme241 ~]# find / -name client.rb
	/usr/share/ruby/xmlrpc/client.rb
	/opt/rh/rh-ruby23/root/usr/local/share/gems/gems/redis-4.1.3/lib/redis/client.rb
	/opt/rh/rh-ruby23/root/usr/share/ruby/xmlrpc/client.rb
#set the default password to match requirepass
[root@bme241 ~]# vim /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/redis-4.1.3/lib/redis/client.rb
    DEFAULTS = {
      :url => lambda { ENV["REDIS_URL"] },
      :scheme => "redis",
      :host => "127.0.0.1",
      :port => 6379,
      :path => nil,
      :timeout => 5.0,
      :password => "1qaz@WSX",
      :db => 0,
      :driver => nil,
      :id => nil,
      :tcp_keepalive => 0,
      :reconnect_attempts => 1,
      :reconnect_delay => 0,
      :reconnect_delay_max => 0.5,
      :inherit_socket => false
    }

[ERR] Node xxxxx is not empty. Either the node already knows other nodes (check with CLUSTER NODES)

Resolution:
1) Stop all Redis processes and delete the local AOF/RDB backup files on the node to be added;
2) Also delete the new node's cluster configuration file, i.e. the file named by cluster-config-file in redis.conf (usually nodes.conf);
3) If adding the node still fails, log in to the new node with ./redis-cli -h <ip> -p <port> and clear its data:
192.168.15.102:6000> flushdb #flush the current database
4) Restart the Redis processes and re-run the cluster-create command:

[root@bme241 bin]# ./redis-trib.rb create 192.168.1.241:9000 192.168.1.242:9000 192.168.1.243:9000 
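
If the error persists even after the files above are removed, the in-memory cluster state can also be cleared on each non-empty node; a hedged sketch (password as configured earlier):

[root@bme241 bin]# ./redis-cli -h 192.168.1.242 -p 9000 -a 1qaz@WSX flushall
[root@bme241 bin]# ./redis-cli -h 192.168.1.242 -p 9000 -a 1qaz@WSX cluster reset soft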
The Redis cluster keeps showing "Waiting for the cluster to join…"

Check the firewall and open the required ports.
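
With firewalld on CentOS 7 this means opening both the client port and the cluster bus port (client port + 10000, i.e. 19000 for the configuration above) on every node, for example:

[root@bme241 ~]# firewall-cmd --permanent --add-port=9000/tcp
[root@bme241 ~]# firewall-cmd --permanent --add-port=19000/tcp
[root@bme241 ~]# firewall-cmd --reload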

Redis with three masters and three replicas

Reference: https://www.cnblogs.com/ivictor/p/9768010.html

nginx Installation

Environment: 192.168.1.241

#install the build tools and libraries: gcc-c++ for compiling, zlib for gzip compression, openssl for SSL support; the -devel packages provide the development headers
[root@bme241 software]# yum -y install make zlib zlib-devel gcc-c++ libtool  openssl openssl-devel

#PCRE enables Nginx's rewrite functionality
[root@bme241 software]# wget https://jaist.dl.sourceforge.net/project/pcre/pcre/8.42/pcre-8.42.tar.gz
[root@bme241 software]# tar -zxvf pcre-8.42.tar.gz
[root@bme241 software]# cd pcre-8.42 
[root@bme241 pcre-8.42]# ./configure
[root@bme241 pcre-8.42]# make && make install
[root@bme241 pcre-8.42]# pcre-config --version


[root@bme241 software]# wget https://nginx.org/download/nginx-1.15.9.tar.gz
[root@bme241 software]# tar -xvf nginx-1.15.9.tar.gz
[root@bme241 software]# cd nginx-1.15.9
#configure: --prefix sets the install path, --with-http_stub_status_module and --with-http_ssl_module add the status page and HTTPS modules, --with-stream adds the stream module, --with-pcre points to the PCRE source directory
[root@bme241 nginx-1.15.9]# ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-pcre=/opt/software/pcre-8.42  --with-stream=dynamic
#compile and install
[root@bme241 nginx-1.15.9]# make && make install

#list the modules that can be built, including optional ones; options beginning with --with are optional modules, enabled at configure time with --with-<module name>, the rest are built by default
[root@bme241 nginx-1.15.9]# cat auto/options | grep YES  
#show the configure arguments used, i.e. the optional and third-party modules compiled in
[root@bme241 nginx-1.15.9]# sbin/nginx -V
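
A quick smoke test after installation, assuming the default configuration under /usr/local/nginx:

#check the configuration syntax, then start nginx and confirm it answers on port 80
[root@bme241 nginx-1.15.9]# /usr/local/nginx/sbin/nginx -t
[root@bme241 nginx-1.15.9]# /usr/local/nginx/sbin/nginx
[root@bme241 nginx-1.15.9]# curl -I http://192.168.1.241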

TiDB Installation

Machine Preparation

Machine        Services
192.168.1.246  TiDB, PD
192.168.1.247  PD
192.168.1.249  TiDB, PD
192.168.1.251  TiKV
192.168.1.252  TiKV
192.168.1.253  TiKV

Installation

#install dependencies on the control machine
[root@bme246 software]# yum -y install epel-release git curl sshpass && yum -y install python2-pip

#create the tidb user on the control machine and generate an ssh key
[root@bme246 software]# useradd -m -d /home/tidb tidb
[root@bme246 ~]# passwd tidb

#allow the tidb user passwordless sudo by appending 'tidb ALL=(ALL) NOPASSWD: ALL' to the end of the file
[root@bme246 ~]# visudo
	tidb ALL=(ALL) NOPASSWD: ALL

#generate the ssh key: use su to switch from the root user to the tidb user
[root@bme246 ~]# su - tidb
[tidb@bme246 tidb]# ssh-keygen -t rsa

#download TiDB Ansible on the control machine
[tidb@bme246 tidb]# git clone -b v3.0.1 https://github.com/pingcap/tidb-ansible.git
#install Ansible and its dependencies on the control machine
[tidb@bme246 tidb]# cd /home/tidb/tidb-ansible && sudo pip install -r ./requirements.txt && ansible --version
[tidb@bme246 tidb-ansible]# vi hosts.ini	
	[servers]
	192.168.1.246
	192.168.1.247
	192.168.1.249
	192.168.1.251
	192.168.1.252
	192.168.1.253
	
	[all:vars]
	username = tidb
	ntp_server = pool.ntp.org
#create the tidb user on the target machines, configure their sudo rules, and set up ssh trust between the control machine and the targets
[tidb@bme246 tidb-ansible]# ansible-playbook -i hosts.ini create_users.yml -u root -k

#install the NTP service on the target machines
[tidb@bme246 tidb-ansible]# ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b

#add the ext4 mount options for the data disk on the target machines (the disk commands below are run as root on each target)
[tidb@bme246 tidb-ansible]# fdisk -l
Disk /dev/sdb: 1000 GB
#create the partition table
[tidb@bme246 tidb-ansible]# parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1
#format the filesystem
[tidb@bme246 tidb-ansible]# mkfs.ext4 /dev/sdb
[tidb@bme246 tidb-ansible]# lsblk -f
sdb             ext4                        425966d6-f439-4fc3-b55c-405f222dfe74   /bme

#add the nodelalloc mount option
[tidb@bme246 tidb-ansible]# vi /etc/fstab
	UUID=425966d6-f439-4fc3-b55c-405f222dfe74 /bme  ext4    defaults,nodelalloc,noatime 0 2
#mount the data disk
[tidb@bme246 tidb-ansible]# mkdir /bme && mount -a
[tidb@bme246 tidb-ansible]# mount -t ext4

#allocate machine roles by editing the inventory.ini file
[tidb@bme246 tidb-ansible]# vi inventory.ini
	## TiDB Cluster Part
	[tidb_servers]
	192.168.1.246
	192.168.1.249
	
	[tikv_servers]
	192.168.1.251
	192.168.1.252
	192.168.1.253
	
	[pd_servers]
	192.168.1.246
	192.168.1.247
	192.168.1.249
	
	[spark_master]
	
	[spark_slaves]
	
	[lightning_server]
	
	[importer_server]
	
	## Monitoring Part
	# prometheus and pushgateway servers
	[monitoring_servers]
	192.168.1.246
	
	[grafana_servers]
	192.168.1.246
	
	# node_exporter and blackbox_exporter servers
	[monitored_servers]
	192.168.1.246
	192.168.1.247
	192.168.1.249
	192.168.1.251
	192.168.1.252
	192.168.1.253
	
	[alertmanager_servers]
	192.168.1.246
	[kafka_exporter_servers]
	## Binlog Part
	[pump_servers]
	[drainer_servers]
	## Group variables
	[pd_servers:vars]
	# location_labels = ["zone","rack","host"]
	## Global variables
	[all:vars]
	deploy_dir = /bme/deploy
	## Connection
	# ssh via normal user
	ansible_user = tidb
	cluster_name = test-cluster
	tidb_version = v3.0.1
	# process supervision, [systemd, supervise]
	process_supervision = systemd
	timezone = Asia/Shanghai
	enable_firewalld = False
	# check NTP service
	enable_ntpd = True
	set_hostname = False
	## binlog trigger
	enable_binlog = False
	# kafka cluster address for monitoring, example:
	# kafka_addrs = "192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092"
	kafka_addrs = ""
	# zookeeper address of kafka cluster for monitoring, example:
	# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
	zookeeper_addrs = ""
	# enable TLS authentication in the TiDB cluster
	enable_tls = False
	# KV mode
	deploy_without_tidb = False
	# wait for region replication complete before start tidb-server.
	wait_replication = True
	# Optional: Set if you already have a alertmanager server.
	# Format: alertmanager_host:alertmanager_port
	alertmanager_target = ""
	grafana_admin_user = "admin"
	grafana_admin_password = "admin"
	### Collect diagnosis
	collect_log_recent_hours = 2
	enable_bandwidth_limit = True
	# default: 10Mb/s, unit: Kbit/s
	collect_bandwidth_limit = 10000

#if every server returns tidb, ssh trust is configured correctly
[tidb@bme246 tidb-ansible]# ansible -i inventory.ini all -m shell -a 'whoami'
#if every server returns root, passwordless sudo for the tidb user is configured correctly
[tidb@bme246 tidb-ansible]# ansible -i inventory.ini all -m shell -a 'whoami' -b
#download the TiDB binaries to the control machine (requires internet access)
[tidb@bme246 tidb-ansible]# ansible-playbook local_prepare.yml
#initialize the system environment and adjust kernel parameters
[tidb@bme246 tidb-ansible]# ansible-playbook bootstrap.yml
#deploy the TiDB cluster software
[tidb@bme246 tidb-ansible]# ansible-playbook deploy.yml
#start the TiDB cluster
[tidb@bme246 tidb-ansible]# ansible-playbook start.yml

Monitoring dashboard: http://192.168.1.246:3000
Default credentials: admin/admin
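
To verify the cluster is serving SQL, connect to one of the tidb_servers with any MySQL client; TiDB listens on port 4000 by default and the root user initially has no password:

[tidb@bme246 ~]$ mysql -u root -h 192.168.1.246 -P 4000
mysql> select tidb_version();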

Common Errors

playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server’s CPU; message: {“changed”: false, “msg”: “This machine does not have sufficient CPU to run TiDB, at least 8 cores.”}

The preflight check fails because the machines do not have enough CPU cores.

[tidb@bme246 tidb-ansible]# vim  bootstrap.yml
- name: check system
  hosts: all
  any_errors_fatal: true
  roles:
    - check_system_necessary
#   - { role: check_system_optional, when: not dev_mode }   #comment out this role

fio: randread iops of tikv_data_dir disk is too low: 24054 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues

The servers do not use SSDs, so the disk speed check fails.

[tidb@bme246 tidb-ansible]# vim  bootstrap.yml
- name: tikv_servers machine benchmark
  hosts: tikv_servers
  gather_facts: false
  roles:
#    - { role: machine_benchmark, when: not dev_mode }   #comment out this role

#alternatively, skip these optional checks by running bootstrap in dev mode
[tidb@bme246 tidb-ansible]# ansible-playbook bootstrap.yml --extra-vars "dev_mode=True"

playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {“changed”: false, “msg”: “Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal .”}

CentOS 7 enables the chronyd service by default, and it prevents ntpd from being set to start at boot.

#check the chronyd service
[tidb@bme246 tidb-ansible]# systemctl status chronyd
#if it is running, disable it
[tidb@bme246 tidb-ansible]# systemctl disable chronyd.service
#enable ntpd to start at boot
[tidb@bme246 tidb-ansible]# systemctl enable ntpd.service
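#(additional step, assuming ntpd was installed by deploy_ntp.yml) enabling the unit only applies at the next boot, so also start it now
[tidb@bme246 tidb-ansible]# systemctl start ntpd.service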
#check the NTP synchronization status
[tidb@bme246 tidb-ansible]# ntpstat

playbook: deploy.yml; TASK: check_system_dynamic : Preflight check -Check swap; message: {“changed”: false, “msg”: “Swap is on, for best performance, turn swap off”}

Swap has not been turned off.

#check the current swap usage
[tidb@bme246 tidb-ansible]# free -m    
#check the swap entry in /etc/fstab
[tidb@bme246 tidb-ansible]# cat /etc/fstab
#disable all active swap (the swap device path may differ per machine)
[tidb@bme246 tidb-ansible]# swapoff -a
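
To keep swap off after a reboot, the swap entry in /etc/fstab should be commented out as well; a sketch that keeps a backup of the file:

[tidb@bme246 tidb-ansible]# sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab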

Zookeeper

#extract zookeeper-3.4.13.tar.gz
[root@bme244 software]# tar -zxvf zookeeper-3.4.13.tar.gz -C /opt/module/
#create the configuration file from the sample
[root@bme244  zookeeper-3.4.13]# mv conf/zoo_sample.cfg conf/zoo.cfg
#start ZooKeeper
[root@bme244  zookeeper-3.4.13]# bin/zkServer.sh start
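
A quick check that the instance is serving (it runs standalone here, since zoo.cfg is just the unmodified sample with the default client port 2181):

#show whether the server is running and in which mode
[root@bme244 zookeeper-3.4.13]# bin/zkServer.sh status
#connect with the command-line client
[root@bme244 zookeeper-3.4.13]# bin/zkCli.sh -server 127.0.0.1:2181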

MySQL

Reference: https://blog.csdn.net/yutao_Struggle/article/details/100575389

RocketMQ

Reference: https://blog.csdn.net/wangmx1993328/article/details/81536168

Apache RocketMQ download: http://rocketmq.apache.org/release_notes/release-notes-4.3.0/

#download the RocketMQ binary release
[root@bme242 software]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/rocketmq/4.3.0/rocketmq-all-4.3.0-bin-release.zip
#unzip into the target directory
[root@bme242 software]# unzip -d /opt/module/ rocketmq-all-4.3.0-bin-release.zip 

[root@bme242 software]# cd /opt/module/rocketmq-all-4.3.0-bin-release/
#start the name server
[root@bme242 rocketmq-all-4.3.0-bin-release]# nohup sh bin/mqnamesrv &
#start the broker
[root@bme242 rocketmq-all-4.3.0-bin-release]# nohup sh bin/mqbroker -n localhost:9876 &
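
To confirm both processes are up, check the logs and query the name server with mqadmin (log paths assume RocketMQ's default location under the user's home directory):

#name server and broker logs
[root@bme242 rocketmq-all-4.3.0-bin-release]# tail -n 20 ~/logs/rocketmqlogs/namesrv.log
[root@bme242 rocketmq-all-4.3.0-bin-release]# tail -n 20 ~/logs/rocketmqlogs/broker.log
#list the broker cluster registered with the name server
[root@bme242 rocketmq-all-4.3.0-bin-release]# sh bin/mqadmin clusterList -n localhost:9876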
