1. Prepare the Installation Package
Download the GBase 8c installation package and copy it to the target directory on the gtm node (/home/gbase/deploy in this example).
Note: installation and deployment must be performed on a node configured with passwordless SSH login; in this example that is the gtm node, so the steps below are all run there.
Extract the installation package:
$ tar xvf GBase8cV5_S2.0.0B38.tar.gz
Run gb_install.sh in the bin directory to set the default installation path (to run it again in the same directory, delete .gb_install.sh.completed first):
$ /home/gbase/deploy/bin/gb_install.sh
2. Deploy the DCS Cluster
To deploy the DCS cluster, list the IP address and port of every planned DCS node.
Syntax:
gha_ctl create dcs host:port ...
Because DCS provides the high-availability function, it should be deployed on at least three nodes.
The deployment command for this example is:
$ /home/gbase/deploy/bin/gha_ctl create dcs 10.0.7.17:2379 10.0.7.18:2379 10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
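Every gha_ctl command replies with JSON of this shape, which makes the remaining steps easy to script. Below is a minimal sketch of a ret-code check; the check_ret helper is hypothetical (not part of the gha_ctl tooling) and assumes python3 is available on the deploy node:

```shell
# Hypothetical helper: succeed only when a gha_ctl JSON reply has "ret" == 0.
check_ret() {
  python3 -c 'import json,sys; sys.exit(0 if json.load(sys.stdin).get("ret") == 0 else 1)'
}

# Usage sketch; the inline JSON stands in for real gha_ctl output:
echo '{"ret":0,"msg":"Success"}' | check_ret && echo "step succeeded"
```

A deployment script can pipe each gha_ctl call through such a check and abort on the first failure instead of scanning the output by eye.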
Register the installation path and package name in DCS (the cluster name defaults to gbase8c):
gha_ctl prepare version package installpath -l dcslist [-c cluster]
- version is the version string of the installation package;
- package is the path to the installation package (tar.gz);
- installpath is the installation path;
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
- [-c cluster] is the optional cluster name; it defaults to gbase8c;
The deployment command for this example is:
$ /home/gbase/deploy/bin/gha_ctl prepare GBase8cV5_S2.1.0B10 /home/gbase/deploy/GBase8cV5_S2.1.0B10.tar.gz /home/gbase/install -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
Copy the installation package to each node, extract it to the target path, and set the environment variables:
gha_ctl deploy host ... -l dcslist
- host is a node IP; list every node to be deployed;
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
The deployment command for this example is:
$ /home/gbase/deploy/bin/gha_ctl deploy 10.0.7.17 10.0.7.18 10.0.7.19 10.0.7.16 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
3. Add the GTM Node
Add the gtm node and record its information in DCS:
gha_ctl add gtm name host port dir rest_port -l dcslist [-c cluster]
- name is the node name;
- host, port and dir are the node IP, port and data directory;
- rest_port is the REST service port of the node;
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
- [-c cluster] is the optional cluster name; it defaults to gbase8c;
The deployment command for this example is:
$ /home/gbase/deploy/bin/gha_ctl add gtm gtm1 10.0.7.16 6666 /home/gbase/data/gtm 8008 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
4. Add the Coordinator Nodes
Add a Coordinator node and record its information in DCS:
gha_ctl add coordinator name host port pooler dir proxy_port rest_port -l dcslist [-c cluster]
Parameters:
- name is the node name;
- host, port, pooler and dir are the node IP, port, connection-pool port and data directory;
- proxy_port is the proxy port;
- rest_port is the REST service port of the node;
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
- [-c cluster] is the optional cluster name; it defaults to gbase8c;
The deployment command for cn1 is:
$ /home/gbase/deploy/bin/gha_ctl add coordinator cn1 10.0.7.17 5432 6667 /home/gbase/data/coord 6666 8009 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
Add cn2 and cn3 in the same way:
$ /home/gbase/deploy/bin/gha_ctl add coordinator cn2 10.0.7.18 5432 6667 /home/gbase/data/coord 6666 8009 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
$ /home/gbase/deploy/bin/gha_ctl add coordinator cn3 10.0.7.19 5432 6667 /home/gbase/data/coord 6666 8009 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
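The three coordinator calls differ only in the node name and host, so they can be generated from a small loop. The sketch below only prints the commands for review (pipe the output to sh to execute); the ports, directories and DCS list mirror the example values above:

```shell
# Generate the "add coordinator" command for each (name, host) pair.
DCS_LIST=http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
cmds=$(for pair in cn1:10.0.7.17 cn2:10.0.7.18 cn3:10.0.7.19; do
  name=${pair%%:*}; host=${pair#*:}
  echo "/home/gbase/deploy/bin/gha_ctl add coordinator $name $host 5432 6667 /home/gbase/data/coord 6666 8009 -l $DCS_LIST"
done)
echo "$cmds"    # review the generated commands, then: echo "$cmds" | sh
```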
5. Add the Datanode Nodes
Add a Datanode node and record its information in DCS:
gha_ctl add datanode group name host port pooler dir proxy_port rest_port -l dcslist [-c cluster]
- group and name are the datanode group name and the node name;
- host, port, pooler and dir are the node IP, port, connection-pool port and data directory;
- proxy_port is the proxy port;
- rest_port is the REST service port of the node;
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
- [-c cluster] is the optional cluster name; it defaults to gbase8c;
The deployment command for dn1 is:
$ /home/gbase/deploy/bin/gha_ctl add datanode dn1 dn1_1 10.0.7.17 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
On success, the command returns:
{
"ret":0,
"msg":"Success"
}
Add dn2 and dn3 in the same way:
$ /home/gbase/deploy/bin/gha_ctl add datanode dn2 dn2_1 10.0.7.18 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
$ /home/gbase/deploy/bin/gha_ctl add datanode dn3 dn3_1 10.0.7.19 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
6. Check the Cluster Status
After all nodes have been added, check the cluster status:
gha_ctl monitor all/gtm/coordinator/datanode/dcs -l dcslist [-c cluster] [-H]
- dcslist is the list of DCS addresses. A single node address is usually enough, since the other nodes synchronize automatically; to guarantee high availability you may list all node addresses;
- [-c cluster] is the optional cluster name; it defaults to gbase8c;
- [-H] prints the cluster status as tables;
The command for this example is:
$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379
The deployment is successful when every component reports the running state:
{
"cluster":"gbase8c",
"version":"GBase8cV5_S2.1.0B10",
"gtm":[
{
"name":"gtm1",
"host":"10.0.7.16",
"port":"6666",
"workDir":"/home/gbase/data/gtm1",
"restPort":"8008",
"state":"running",
"role":"master"
}
],
"coordinator":[
{
"name":"cn1",
"host":"10.0.7.17",
"port":"5432",
"pooler":"6667",
"workDir":"/home/gbase/data/coord",
"proxyPort":"6666",
"restPort":"8009",
"state":"running"
},
{
"name":"cn2",
"host":"10.0.7.18",
"port":"5432",
"pooler":"6667",
"workDir":"/home/gbase/data/coord",
"proxyPort":"6666",
"restPort":"8009",
"state":"running"
},
{
"name":"cn3",
"host":"10.0.7.19",
"port":"5432",
"pooler":"6667",
"workDir":"/home/gbase/data/coord",
"proxyPort":"6666",
"restPort":"8009",
"state":"running"
}
],
"datanode":{
"dn1":[
{
"name":"dn1_1",
"host":"10.0.7.17",
"port":"5433",
"pooler":"6668",
"workDir":"/home/gbase/data/dn1_1",
"proxyPort":"6789",
"restPort":"8011",
"role":"master",
"state":"running"
}
],
"dn2":[
{
"name":"dn2_1",
"host":"10.0.7.18",
"port":"5433",
"pooler":"6668",
"workDir":"/home/gbase/data/dn2_1",
"proxyPort":"6789",
"restPort":"8011",
"role":"master",
"state":"running"
}
],
"dn3":[
{
"name":"dn3_1",
"host":"10.0.7.19",
"port":"5433",
"pooler":"6668",
"workDir":"/home/gbase/data/dn3_1",
"proxyPort":"6789",
"restPort":"8011",
"role":"master",
"state":"running"
}
]
},
"dcs":{
"clusterState":"healthy",
"members":[
{
"url":"http://10.0.7.17:2379",
"id":"2e548b6394a12d14",
"name":"node_0",
"state":"healthy",
"isLeader":false
},
{
"url":"http://10.0.7.19:2379",
"id":"b6466580277576a2",
"name":"node_2",
"state":"healthy",
"isLeader":false
},
{
"url":"http://10.0.7.18:2379",
"id":"f527f60a643af12e",
"name":"node_1",
"state":"healthy",
"isLeader":true
}
]
}
}
The cluster now contains 1 GTM node, 3 Coordinator nodes and 3 Datanode nodes (the Datanodes have no standbys).
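Beyond eyeballing the JSON, the monitor output can also be checked from a script. The check_running helper below is a sketch, not part of gha_ctl; it assumes python3 is available and succeeds only when every gtm, coordinator and datanode instance reports state "running" (pipe the output of gha_ctl monitor all into it):

```shell
# Hypothetical helper: exit 0 only if every instance in the monitor JSON
# reports state "running" (and at least one instance exists).
check_running() {
  python3 -c '
import json, sys
c = json.load(sys.stdin)
nodes = list(c.get("gtm", [])) + list(c.get("coordinator", []))
for group in c.get("datanode", {}).values():
    nodes.extend(group)
sys.exit(0 if nodes and all(n.get("state") == "running" for n in nodes) else 1)
'
}

# Usage sketch with a trimmed sample standing in for real monitor output:
echo '{"gtm":[{"name":"gtm1","state":"running"}],"coordinator":[{"name":"cn1","state":"running"}],"datanode":{"dn1":[{"name":"dn1_1","state":"running"}]}}' | check_running && echo "cluster healthy"
```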
With the -H option, the same status is shown as tables:
$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://10.0.7.17:2379,http://10.0.7.18:2379,http://10.0.7.19:2379 -H
+----+------+-----------+------+-----------------------+---------+--------+
| No | name | host | port | workDir | state | role |
+----+------+-----------+------+-----------------------+---------+--------+
| 0 | gtm1 | 10.0.7.16 | 6666 | /home/gbase/data/gtm1 | running | master |
+----+------+-----------+------+-----------------------+---------+--------+
+----+------+-----------+------+------------------------+---------+
| No | name | host | port | workDir | state |
+----+------+-----------+------+------------------------+---------+
| 0 | cn1 | 10.0.7.17 | 5432 | /home/gbase/data/coord | running |
| 1 | cn2 | 10.0.7.18 | 5432 | /home/gbase/data/coord | running |
| 2 | cn3 | 10.0.7.19 | 5432 | /home/gbase/data/coord | running |
+----+------+-----------+------+------------------------+---------+
+----+-------+-------+-----------+------+------------------------+---------+--------+
| No | group | name | host | port | workDir | state | role |
+----+-------+-------+-----------+------+------------------------+---------+--------+
| 0 | dn1 | dn1_1 | 10.0.7.17 | 5433 | /home/gbase/data/dn1_1 | running | master |
| 1 | dn2 | dn2_1 | 10.0.7.18 | 5433 | /home/gbase/data/dn2_1 | running | master |
| 2 | dn3 | dn3_1 | 10.0.7.19 | 5433 | /home/gbase/data/dn3_1 | running | master |
+----+-------+-------+-----------+------+------------------------+---------+--------+
+----+-----------------------+--------+---------+----------+
| No | url | name | state | isLeader |
+----+-----------------------+--------+---------+----------+
| 0 | http://10.0.7.17:2379 | node_0 | healthy | False |
| 1 | http://10.0.7.19:2379 | node_2 | healthy | False |
| 2 | http://10.0.7.18:2379 | node_1 | healthy | True |
+----+-----------------------+--------+---------+----------+