Removing a node from a cluster by manually deleting OMS database data (high-risk operation)

This document walks through a high-risk operation: removing a node from the cluster, including stopping the OMS service, backing up the database, and deleting the node's records. It must be performed by qualified personnel; a mistake can leave the cluster in an abnormal, unrecoverable state.

Note: this is a high-risk operation. It can cause cluster anomalies, and an operator error may make the cluster unrecoverable. It must be carried out only by qualified personnel.

1. Before starting, stop OMS: sh /opt/huawei/Bigdata/om-0.0.1/sbin/stop-oms.sh

2. Before starting, back up the Manager database data under /srv/Bigdata/dbdata_om/db: pack the entire directory and copy the archive to another location (for example /opt/bak).
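The backup step above can be sketched as a small shell helper. The function name `backup_dir` and the timestamp format are illustrative assumptions, not part of the product's tooling; in real use, point it at /srv/Bigdata/dbdata_om/db and /opt/bak. The demo below runs against a throwaway directory so it is safe to try anywhere.

```shell
# Sketch, assuming tar and gzip are available on the node.
# backup_dir SRC DEST: pack SRC into DEST as a timestamped tarball and
# print the archive path.
backup_dir() {
  src=$1; dest=$2
  mkdir -p "$dest"
  out="$dest/$(basename "$src")_$(date +%Y%m%d%H%M%S).tar.gz"
  # -C keeps paths inside the archive relative to the parent of SRC.
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
  echo "$out"
}

# Demo on a throwaway directory (stand-in for the real DB dir):
demo_src=$(mktemp -d)/db
mkdir -p "$demo_src"
echo data > "$demo_src/om.dat"
demo_bak=$(mktemp -d)
archive=$(backup_dir "$demo_src" "$demo_bak")
tar -tzf "$archive"
```

Verify the archive lists the expected files before proceeding; the backup is your only way back if a later delete goes wrong.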

3. After the backup completes, switch to the ommdba user and start gaussdb: gs_ctl start -M primary

4. Connect to the database (C30 shown here; the port may differ in other versions): gsql -p 20015 -U omm -W om**uawei@123

5. Record the node's NODE_ID from the OM_NODES table.

select * from OM_NODES;

           NODE_ID             |   NODE_NAME    |       IP       |   BUSINESSIP   |    HOSTNAME    |    RACKNAME    | IS_DETACHED
-------------------------------+----------------+----------------+----------------+----------------+----------------+-------------
 189.39.235.8@189-39-235-8     | 189.39.235.8   | 189.39.235.8   | 189.39.235.8   | 189-39-235-8   | /default/rack0 |           0
 189.39.234.176@189-39-234-176 | 189.39.234.176 | 189.39.234.176 | 189.39.234.176 | 189-39-234-176 | /default/rack0 |           0
 189.39.235.118@189-39-235-118 | 189.39.235.118 | 189.39.235.118 | 189.39.235.118 | 189-39-235-118 | /default/rack0 |           0

NODE_ID of the node to be removed: 189.39.235.118@189-39-235-118

6. Find all role instances on that node in the OM_TOPOLOGY table.

select * from OM_TOPOLOGY where NODE_ID='189.39.235.118@189-39-235-118';

 ROLE_INSTANCE_ID | ROLE_ID |            NODE_ID            | CURRENT_PATCH_VERSION | IS_DETACHED | ROLE_INSTANCE_LIVE
------------------+---------+-------------------------------+-----------------------+-------------+--------------------
 roleinstance_2   | role_1  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_5   | role_2  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_7   | role_3  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_9   | role_4  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_11  | role_5  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_14  | role_6  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_17  | role_7  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_19  | role_8  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_21  | role_9  | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_24  | role_10 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_27  | role_11 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_30  | role_12 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_33  | role_14 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_35  | role_15 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_38  | role_16 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0
 roleinstance_41  | role_17 | 189.39.235.118@189-39-235-118 |                       |           0 |                  0

7. Delete the node's configuration from OM_CONFIGURATIONS.

Delete the configuration of every role instance on the node:

delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_2';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_5';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_7';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_9';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_11';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_14';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_17';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_19';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_21';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_24';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_27';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_30';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_33';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_35';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_38';
delete from OM_CONFIGURATIONS where OWNER_ROLE_INSTANCE_ID='roleinstance_41';
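When the role-instance list is long, the per-instance deletes above can be collapsed into one statement driven by OM_TOPOLOGY. This is a sketch in standard SQL, on the assumption that the gsql client accepts an IN-subquery here; try it on a test system first.

```sql
-- Sketch: delete configuration for every role instance of the node in one shot.
delete from OM_CONFIGURATIONS
 where OWNER_ROLE_INSTANCE_ID in (
       select ROLE_INSTANCE_ID from OM_TOPOLOGY
        where NODE_ID = '189.39.235.118@189-39-235-118');
```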

8. Delete the state information from OM_FSMTRANSITIONS.

Delete the state of every role instance on the node:

delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_2';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_5';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_7';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_9';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_11';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_14';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_17';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_19';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_21';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_24';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_27';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_30';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_33';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_35';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_38';
delete from OM_FSMTRANSITIONS where ROLE_INSTANCE_ID='roleinstance_41';
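As with the configuration table, the state deletes can be driven by a subquery on OM_TOPOLOGY. This sketch assumes the client accepts an IN-subquery, and it must be run before step 9 below, since that step removes the OM_TOPOLOGY rows the subquery reads.

```sql
-- Sketch: delete FSM state for every role instance of the node in one shot.
delete from OM_FSMTRANSITIONS
 where ROLE_INSTANCE_ID in (
       select ROLE_INSTANCE_ID from OM_TOPOLOGY
        where NODE_ID = '189.39.235.118@189-39-235-118');
```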

9. Delete the topology information from the OM_TOPOLOGY table.

delete from OM_TOPOLOGY where NODE_ID='189.39.235.118@189-39-235-118';

10. Delete the node record from the OM_NODES table.

delete from OM_NODES where NODE_ID='189.39.235.118@189-39-235-118';
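Before moving on to the restart, a quick sanity check in the same gsql session (a sketch, not a product-documented step) confirms nothing for the node is left behind; both counts should be 0.

```sql
-- Sketch: both queries should return 0 after steps 9 and 10.
select count(*) from OM_NODES    where NODE_ID = '189.39.235.118@189-39-235-118';
select count(*) from OM_TOPOLOGY where NODE_ID = '189.39.235.118@189-39-235-118';
```

If either count is non-zero, stop and restore the backup from step 2 rather than continuing.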

11. Restart the controller.

Run: sh /opt/huawei/Bigdata/om-0.0.1/sbin/restart-controller.sh
