Practice Makes Perfect | Upgrading a Two-Node MogDB + CM + VIP Cluster

This article walks through the full process of upgrading a MogDB database from version 5.0.5 to 5.0.7. Feel free to follow along hands-on in a non-production environment.


Check the PTK version

[root@mogdb1 ~]# ptk --version
PTK Version:    v1.4.6 release
Go Version:     go1.19.10
Build Date:     2024-05-28T11:00:47
Git Hash:       5723e886
OS/Arch:        linux/amd64 
[root@mogdb1 ~]#


Check the cluster status

[root@mogdb1 ~]# ptk cluster status -n ebhk 
[   Cluster State   ]
cluster_name                   : ebhk
cluster_state                  : Normal
database_version               : MogDB 5.0.5 (build b77f1a82)
cm_version                     : 5.0.5 (build 3a8b3616)
active_vip                     : 192.168.115.83

[  CMServer State   ]
  id |       ip       | port  |    hostname     |  role    
-----+----------------+-------+-----------------+----------
  1  | 192.168.115.81 | 15300 | mogdb1.mark.com | primary  
  2  | 192.168.115.82 | 15300 | mogdb2.mark.com | standby  

[  Datanode State   ]
  cluster_name |  id  |       ip       | port  | user | nodename | db_role | state  |  uptime  | upstream  
---------------+------+----------------+-------+------+----------+---------+--------+----------+-----------
  ebhk         | 6001 | 192.168.115.81 | 26000 | omm  | dn_6001  | primary | Normal | 00:01:01 | -         
               | 6002 | 192.168.115.82 | 26000 | omm  | dn_6002  | standby | Normal | 00:01:05 | -   

[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever
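The `ip a` listing confirms that the VIP 192.168.115.83 is bound as a secondary address on ens33 (label `ens33:26000`). Since this check is repeated after every step below, it scripts nicely; the helper here is a sketch of mine, not a PTK feature, and simply greps for an `inet` entry in `ip` output read from stdin:

```shell
# Succeed when the given address appears as an "inet" entry in the input.
# -F matches a fixed string; -w avoids matching a longer address by prefix.
vip_in_output() {
  grep -qwF "inet $1"
}

# Typical use while following along:
#   ip -4 addr show ens33 | vip_in_output 192.168.115.83 && echo "VIP bound"
```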


Upgrade PTK

Note: MogDB v5.0.7 requires PTK version 1.4.7 or later.

[root@mogdb1 u01]# cd /u01
[root@mogdb1 u01]# tar -xzvf ptk_1.5.2_linux_x86_64.tar.gz -C /tmp/
CHANGELOG/.gitkeep
README.md
ptk
[root@mogdb1 u01]# which ptk
/root/.ptk/bin/ptk
[root@mogdb1 u01]# cp -a /tmp/ptk /root/.ptk/bin/
cp: overwrite ‘/root/.ptk/bin/ptk’? y
[root@mogdb1 u01]# ptk --version
PTK Version:    v1.5.2 release
Go Version:     go1.19.10
Build Date:     2024-06-18T10:23:48
Git Hash:       812d1612
OS/Arch:        linux/amd64
[root@mogdb1 u01]#
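The "1.4.7 or later" requirement can also be checked by script rather than by eye. A minimal sketch, assuming GNU `sort -V` is available: dotted versions sort correctly under `-V`, so A >= B exactly when A sorts last. The function name and the `awk` extraction (based on the `PTK Version: v1.4.6 release` output format shown above) are mine:

```shell
# Return success when version $1 >= version $2, using GNU version sort.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

required=1.4.7
# Extract "1.5.2" from a line like "PTK Version:    v1.5.2 release".
current=$(ptk --version 2>/dev/null | awk '/PTK Version/{print $3}' | tr -d v)
version_ge "${current:-0}" "$required" || echo "PTK too old: ${current:-unknown} < $required"
```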


Upgrade MogDB to 5.0.7

01

Uninstall the old CM

[root@mogdb1 ~]# ptk cluster -n ebhk uninstall-cm
INFO[2024-06-23T20:18:30.705] [192.168.115.82][omm] remove cron task: om_monitor 
INFO[2024-06-23T20:18:30.710] [192.168.115.81][omm] remove cron task: om_monitor 
INFO[2024-06-23T20:18:30.784] [192.168.115.82][omm] kill user "omm" processes if exist: [om_monitor cm_agent cm_server fenced UDF] 
INFO[2024-06-23T20:18:30.785] [192.168.115.81][omm] kill user "omm" processes if exist: [om_monitor cm_agent cm_server fenced UDF] 
INFO[2024-06-23T20:18:30.835] [192.168.115.82][omm] remove files: /u01/mogdb/app/bin/om_monitor*,/u01/mogdb/app/bin/cm_agent*,/u01/mogdb/app/bin/cm_server*,/u01/mogdb/app/bin/cm_ctl*,/u01/mogdb/app/bin/cm_persist*,/u01/mogdb/app/bin/*manual*start*,/u01/mogdb/app/bin/promote_mode_cms 
INFO[2024-06-23T20:18:30.846] [192.168.115.81][omm] remove files: /u01/mogdb/app/bin/om_monitor*,/u01/mogdb/app/bin/cm_agent*,/u01/mogdb/app/bin/cm_server*,/u01/mogdb/app/bin/cm_ctl*,/u01/mogdb/app/bin/cm_persist*,/u01/mogdb/app/bin/*manual*start*,/u01/mogdb/app/bin/promote_mode_cms 
INFO[2024-06-23T20:18:30.896] [192.168.115.82][omm] remove files: /u01/mogdb/log/cm,/u01/mogdb/app/share/sslcert/cm,/u01/mogdb/cm/cm_server,/u01/mogdb/cm/cm_agent,/u01/mogdb/cm/dcf_data,/u01/mogdb/cm/gstor 
INFO[2024-06-23T20:18:30.905] [192.168.115.81][omm] remove files: /u01/mogdb/log/cm,/u01/mogdb/app/share/sslcert/cm,/u01/mogdb/cm/cm_server,/u01/mogdb/cm/cm_agent,/u01/mogdb/cm/dcf_data,/u01/mogdb/cm/gstor 
INFO[2024-06-23T20:18:30.953] [192.168.115.82][omm] clear cm dir           
INFO[2024-06-23T20:18:30.962] [192.168.115.81][omm] clear cm dir           
INFO[2024-06-23T20:18:31.003] [192.168.115.82][omm] generate cluster_static_config file 
INFO[2024-06-23T20:18:31.011] [192.168.115.81][omm] generate cluster_static_config file 
INFO[2024-06-23T20:18:31.140] uninstall succeeded                          
INFO[2024-06-23T20:18:31.140] time elapsed: 0s

02

Cluster status after uninstalling CM

[root@mogdb1 ~]# ptk cluster status -n ebhk 
[   Cluster State   ]
cluster_name                   : ebhk
cluster_state                  : Normal
database_version               : MogDB 5.0.5 (build b77f1a82)

[  Datanode State   ]
  cluster_name |  id  |       ip       | port  | user | nodename | db_role | state  |  uptime  | upstream  
---------------+------+----------------+-------+------+----------+---------+--------+----------+-----------
  ebhk         | 6001 | 192.168.115.81 | 26000 | omm  | dn_6001  | primary | Normal | 00:02:57 | -         
               | 6002 | 192.168.115.82 | 26000 | omm  | dn_6002  | standby | Normal | 00:03:01 | -         
[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever
[root@mogdb1 ~]#

03

Upgrade the MogDB database cluster

[root@mogdb1 ~]# ptk cluster -n ebhk upgrade -p /u01/MogDB-5.0.7-CentOS-x86_64-all.tar.gz
INFO[2024-06-23T20:20:18.120] PTK Version: 1.5.2 release                   
INFO[2024-06-23T20:20:18.127] new step recorder, current step: "start"     
Tip:
        The upgrade process relies on SSH.
        Please ensure that the network is always worked
        during the upgrade process.
INFO[2024-06-23T20:20:18.128] parsing package from /u01/MogDB-5.0.7-CentOS-x86_64-all.tar.gz 
INFO[2024-06-23T20:20:26.022] kernel package name: MogDB-5.0.7-CentOS-64bit.tar.gz 
INFO[2024-06-23T20:20:26.022] package version: MogDB-5.0.7,92.924,c4707384 
INFO[2024-06-23T20:20:26.022] check db version                             
INFO[2024-06-23T20:20:26.022] big upgrade: true                            
INFO[2024-06-23T20:20:26.022] current version: 5.0.5, target version: 5.0.7 
INFO[2024-06-23T20:20:26.022] current number: 92.901, target number: 92.924 
INFO[2024-06-23T20:20:26.207] version is ok                                
INFO[2024-06-23T20:20:26.208] prepare all upgrade sql                      
INFO[2024-06-23T20:20:26.954] extract upgrade_sql.tar.gz successfully      
INFO[2024-06-23T20:20:27.190] no created plugins found                     
INFO[2024-06-23T20:20:27.209] create temp dir                              
INFO[2024-06-23T20:20:27.228] upload upgrade_extensions.tar.gz             
INFO[2024-06-23T20:20:27.250] extract upgrade_extensions.tar.gz to dir /u01/mogdb/tmp/upgrade 
INFO[2024-06-23T20:20:27.439] check cluster status                         
INFO[2024-06-23T20:20:27.616] cluster state is ok                          
INFO[2024-06-23T20:20:27.867] create remote temporary upgrade directory    
✔ PTK will set cluster to readonly and restart cluster during upgrade, are you ready (default=n) [y/n]: y
INFO[2024-06-23T20:20:35.229] set upgrade_mode to 2                        
INFO[2024-06-23T20:20:35.271] set cluster readonly                         
INFO[2024-06-23T20:20:35.319] check disk size                              
INFO[2024-06-23T20:20:35.442] check and wait xlog sync                     
INFO[2024-06-23T20:20:35.488] setting temporary guc values                 
INFO[2024-06-23T20:20:35.488] set guc param support_extended_features value to "on" 
INFO[2024-06-23T20:20:35.522] set guc param enable_wdr_snapshot value to "off" 
INFO[2024-06-23T20:20:35.563] stop all datanodes                           
INFO[2024-06-23T20:20:35.563] operation: stop                              
INFO[2024-06-23T20:20:35.563] ========================================     
INFO[2024-06-23T20:20:35.627] stop db [192.168.115.82:26000] ...           
INFO[2024-06-23T20:20:36.661] stop db [192.168.115.82:26000] successfully  
INFO[2024-06-23T20:20:36.744] stop db [192.168.115.81:26000] ...           
INFO[2024-06-23T20:20:38.779] stop db [192.168.115.81:26000] successfully  
INFO[2024-06-23T20:20:38.779] ========================================     
INFO[2024-06-23T20:20:38.779] stop successfully                            
INFO[2024-06-23T20:20:38.780] backup old app and tool                      
INFO[2024-06-23T20:20:51.116] checking cluster state before start          
INFO[2024-06-23T20:20:51.186] operation: start                             
INFO[2024-06-23T20:20:51.186] ========================================     
INFO[2024-06-23T20:20:51.186] start db [192.168.115.81:26000] ...          
INFO[2024-06-23T20:20:53.326] start db [192.168.115.81:26000] successfully 
INFO[2024-06-23T20:20:53.326] start db [192.168.115.82:26000] ...          
INFO[2024-06-23T20:20:56.071] start db [192.168.115.82:26000] successfully 
INFO[2024-06-23T20:20:56.297] ========================================     
INFO[2024-06-23T20:20:56.298] start cluster successfully                   
INFO[2024-06-23T20:20:56.413] call hook script: before_pre_upgrade_script  
INFO[2024-06-23T20:20:57.105] execute rollback sql                         
INFO[2024-06-23T20:20:57.105] execute rollback sql on postgres             
INFO[2024-06-23T20:20:57.373] execute rollback sql on otherdb: [template1 template0], parallel num: 10 
INFO[2024-06-23T20:20:57.665] execute upgrade sql                          
INFO[2024-06-23T20:20:57.665] execute upgrade sql on postgres              
INFO[2024-06-23T20:20:57.889] execute upgrade sql on otherdb: [template1 template0], parallel num: 10 
INFO[2024-06-23T20:20:58.120] execute checkpoint                           
INFO[2024-06-23T20:20:58.481] call hook script: after_pre_upgrade_script   
INFO[2024-06-23T20:20:58.520] stop all datanodes                           
INFO[2024-06-23T20:20:58.520] operation: stop                              
INFO[2024-06-23T20:20:58.520] ========================================     
INFO[2024-06-23T20:20:58.617] stop db [192.168.115.82:26000] ...           
INFO[2024-06-23T20:21:01.683] stop db [192.168.115.82:26000] successfully  
INFO[2024-06-23T20:21:01.784] stop db [192.168.115.81:26000] ...           
INFO[2024-06-23T20:21:02.826] stop db [192.168.115.81:26000] successfully  
INFO[2024-06-23T20:21:02.826] ========================================     
INFO[2024-06-23T20:21:02.826] stop successfully                            
INFO[2024-06-23T20:21:02.827] check need update config in postgresql.conf  
INFO[2024-06-23T20:21:02.827] backup old configs                           
INFO[2024-06-23T20:21:02.875] install new package files                    
INFO[2024-06-23T20:21:02.875] extract new package locally                  
INFO[2024-06-23T20:21:03.828] distribute new package files to all datanodes 
INFO[2024-06-23T20:21:03.828] [192.168.115.81][omm] upload kernel package... 
INFO[2024-06-23T20:21:03.829] [192.168.115.82][omm] upload kernel package... 
INFO[2024-06-23T20:21:06.735] [192.168.115.81][omm] upload om package...   
INFO[2024-06-23T20:21:07.028] [192.168.115.81][omm] upload cm package...   
INFO[2024-06-23T20:21:09.403] [192.168.115.82][omm] upload om package...   
INFO[2024-06-23T20:21:10.336] [192.168.115.82][omm] upload cm package...   
INFO[2024-06-23T20:21:12.249] [192.168.115.81][omm] validate and try to fix ld library for gs_initdb 
INFO[2024-06-23T20:21:12.283] [192.168.115.81][omm] validate and try to fix ld library for mogdb 
INFO[2024-06-23T20:21:12.318] [192.168.115.81][omm] try to fix psutil python lib 
INFO[2024-06-23T20:21:12.396] [192.168.115.81][omm] write file /u01/mogdb/tool/script/py_pstree.py 
INFO[2024-06-23T20:21:15.341] [192.168.115.82][omm] validate and try to fix ld library for gs_initdb 
INFO[2024-06-23T20:21:15.383] [192.168.115.82][omm] validate and try to fix ld library for mogdb 
INFO[2024-06-23T20:21:15.429] [192.168.115.82][omm] try to fix psutil python lib 
INFO[2024-06-23T20:21:15.509] [192.168.115.82][omm] write file /u01/mogdb/tool/script/py_pstree.py 
INFO[2024-06-23T20:21:15.639] update upgrade_version                       
INFO[2024-06-23T20:21:15.670] restore old configs to new app               
INFO[2024-06-23T20:21:15.711] delete deprecated guc                        
INFO[2024-06-23T20:21:15.891] update config in postgresql.conf             
INFO[2024-06-23T20:21:15.892] start with old number                        
INFO[2024-06-23T20:21:15.892] checking cluster state before start          
INFO[2024-06-23T20:21:15.972] operation: start                             
INFO[2024-06-23T20:21:15.972] ========================================     
INFO[2024-06-23T20:21:15.972] start db [192.168.115.81:26000] ...          
INFO[2024-06-23T20:21:18.258] start db [192.168.115.81:26000] successfully 
INFO[2024-06-23T20:21:18.258] start db [192.168.115.82:26000] ...          
INFO[2024-06-23T20:21:20.458] start db [192.168.115.82:26000] successfully 
INFO[2024-06-23T20:21:20.731] ========================================     
INFO[2024-06-23T20:21:20.731] start cluster successfully                   
INFO[2024-06-23T20:21:20.732] call hook script: before_post_upgrade_script 
INFO[2024-06-23T20:21:20.762] checking cluster status                      
INFO[2024-06-23T20:21:21.154] cluster status is normal                     
INFO[2024-06-23T20:21:21.154] execute rollback-post sql                    
INFO[2024-06-23T20:21:21.154] execute rollback-post sql on postgres        
INFO[2024-06-23T20:21:21.447] execute rollback-post sql on otherdb: [template1 template0], parallel num: 10 
INFO[2024-06-23T20:21:21.824] execute upgrade-post sql                     
INFO[2024-06-23T20:21:21.824] execute upgrade-post sql on postgres         
INFO[2024-06-23T20:21:22.114] execute upgrade-post sql on otherdb: [template1 template0], parallel num: 10 
INFO[2024-06-23T20:21:22.420] call hook script: after_post_upgrade_script  
INFO[2024-06-23T20:21:22.454] after_post_upgrade_script output:
This version of whale doesn't support upgrade 
INFO[2024-06-23T20:21:22.455] restore guc value                            
INFO[2024-06-23T20:21:22.455] restore guc param support_extended_features value to "off" 
INFO[2024-06-23T20:21:22.499] restore guc param enable_wdr_snapshot value to "on" 
INFO[2024-06-23T20:21:22.564] stop all datanodes                           
INFO[2024-06-23T20:21:22.564] operation: stop                              
INFO[2024-06-23T20:21:22.564] ========================================     
INFO[2024-06-23T20:21:22.677] stop db [192.168.115.82:26000] ...           
INFO[2024-06-23T20:21:25.722] stop db [192.168.115.82:26000] successfully  
INFO[2024-06-23T20:21:25.834] stop db [192.168.115.81:26000] ...           
INFO[2024-06-23T20:21:29.892] stop db [192.168.115.81:26000] successfully  
INFO[2024-06-23T20:21:29.892] ========================================     
INFO[2024-06-23T20:21:29.893] stop successfully                            
INFO[2024-06-23T20:21:29.893] start all datanodes                          
INFO[2024-06-23T20:21:29.893] checking cluster state before start          
INFO[2024-06-23T20:21:29.986] operation: start                             
INFO[2024-06-23T20:21:29.986] ========================================     
INFO[2024-06-23T20:21:29.986] start db [192.168.115.81:26000] ...          
INFO[2024-06-23T20:21:31.199] start db [192.168.115.81:26000] successfully 
INFO[2024-06-23T20:21:31.199] start db [192.168.115.82:26000] ...          
INFO[2024-06-23T20:21:33.347] start db [192.168.115.82:26000] successfully 
INFO[2024-06-23T20:21:33.574] wait following dn to Normal:                 
INFO[2024-06-23T20:21:33.574] dn: 192.168.115.82 state: Catchup            
INFO[2024-06-23T20:21:36.814] ========================================     
INFO[2024-06-23T20:21:36.814] start cluster successfully                   
INFO[2024-06-23T20:21:36.815] upgrade successfully                         

If you confirm that the upgrade is correct, you can run this command to finish the upgrade:
    ptk cluster -n ebhk upgrade-commit
If you want to rollback the upgrade, you can run:
    ptk cluster -n ebhk upgrade-rollback
INFO[2024-06-23T20:21:36.815] time elapsed: 1m19s                          
[root@mogdb1 ~]#
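PTK leaves the upgrade in a pending state here on purpose: you are expected to verify the cluster yourself before choosing `upgrade-commit` or `upgrade-rollback`. One simple verification is to extract the reported version from `ptk cluster status` and compare it with the target. The helper below is a sketch of mine; the `awk` pattern follows the `database_version : MogDB 5.0.7 (build ...)` line format shown in the status listings in this article:

```shell
# Print the version that follows the word "MogDB" on the database_version line.
db_version() {
  awk '/database_version/ {for (i = 1; i <= NF; i++) if ($i == "MogDB") print $(i + 1)}'
}

# e.g.:  ptk cluster status -n ebhk | db_version    # expect 5.0.7 before committing
```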

04

Commit the MogDB cluster upgrade

[root@mogdb1 ~]# ptk cluster -n ebhk upgrade-commit
INFO[2024-06-23T20:22:40.746] PTK Version: 1.5.2 release                   
INFO[2024-06-23T20:22:40.754] new step recorder, current step: "restart_dn_finally" 
INFO[2024-06-23T20:22:40.754] check cluster status                         
INFO[2024-06-23T20:22:40.755] checking cluster status                      
INFO[2024-06-23T20:22:41.100] cluster status is normal                     
INFO[2024-06-23T20:22:41.297] set upgrade_mode to 0                        
INFO[2024-06-23T20:22:41.353] enable cluster read-write                    
INFO[2024-06-23T20:22:41.456] create temp dir                              
INFO[2024-06-23T20:22:41.479] upload upgrade_extensions.tar.gz             
INFO[2024-06-23T20:22:41.509] extract upgrade_extensions.tar.gz to dir /u01/mogdb/tmp/upgrade 
INFO[2024-06-23T20:22:41.758] call hook script: on_commit_script           
INFO[2024-06-23T20:22:41.804] clear temporary directories                  
INFO[2024-06-23T20:22:42.106] commit success                               
INFO[2024-06-23T20:22:42.106] time elapsed: 1s                             
[root@mogdb1 ~]#

05

Cluster status after the MogDB upgrade

[root@mogdb1 ~]# ptk cluster status -n ebhk 
[   Cluster State   ]
cluster_name                   : ebhk
cluster_state                  : Normal
database_version               : MogDB 5.0.7 (build c4707384)

[  Datanode State   ]
  cluster_name |  id  |       ip       | port  | user | nodename | db_role | state  |  uptime  | upstream  
---------------+------+----------------+-------+------+----------+---------+--------+----------+-----------
  ebhk         | 6001 | 192.168.115.81 | 26000 | omm  | dn_6001  | primary | Normal | 00:02:09 | -         
               | 6002 | 192.168.115.82 | 26000 | omm  | dn_6002  | standby | Normal | 00:02:08 | -         
[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever

06

Install the new CM

[root@mogdb1 ~]# ptk cluster -n ebhk install-cm ebhk -p  /u01/MogDB-5.0.7-CentOS-x86_64-all.tar.gz
INFO[2024-06-23T20:25:43.523] cm enabled but cm_servers is empty, use db servers as cm servers 
The cluster will contains 2 cm nodes, so please confirm following cms configs:
- (Optional) db_service_vip="192.168.115.83"
- (Required) third_party_gateway_ip="192.168.115.2"
- (Optional) cms_enable_failover_on2nodes="True"
- (Optional) cms_enable_db_crash_recovery="False"
Now, these configs are:
- db_service_vip="192.168.115.83"
- third_party_gateway_ip="192.168.115.2"
- cms_enable_failover_on2nodes="True"
- cms_enable_db_crash_recovery="False"
✔ Do you want to modify them (default=n) [y/n]: n
INFO[2024-06-23T20:25:52.166] check cluster state                          
INFO[2024-06-23T20:25:52.423] current cluster state: Normal                
INFO[2024-06-23T20:25:52.423] [192.168.115.81][omm] check cm_server port: 15300 
INFO[2024-06-23T20:25:52.423] [192.168.115.82][omm] check cm_server port: 15300 
INFO[2024-06-23T20:25:52.512] check the package from command line          
INFO[2024-06-23T20:25:52.512] parse package: "/u01/MogDB-5.0.7-CentOS-x86_64-all.tar.gz" 
INFO[2024-06-23T20:25:59.618] compare cm package version with cluster version 
INFO[2024-06-23T20:26:00.301] extract MogDB-5.0.7-CentOS-64bit-cm.tar.gz to dir /tmp/tmp.USJnvgUraq 
INFO[2024-06-23T20:26:00.493] execute command: cm_ctl --version | awk '{print $2,$4}' 
INFO[2024-06-23T20:26:00.562] got cm version is MogDB-5.0.7                
INFO[2024-06-23T20:26:00.562] distributing cm package...                   
INFO[2024-06-23T20:26:01.351] generate ssl files for cm                    
INFO[2024-06-23T20:26:02.893] distribute ssl files                         
INFO[2024-06-23T20:26:03.204] [192.168.115.81][omm] add user omm to cron.allow 
INFO[2024-06-23T20:26:03.204] [192.168.115.82][omm] add user omm to cron.allow 
INFO[2024-06-23T20:26:03.211] [192.168.115.81][omm] create file cluster_manual_start 
INFO[2024-06-23T20:26:03.232] [192.168.115.81][omm] make user omm's dir(s): /u01/mogdb/cm,/u01/mogdb/cm/cm_server,/u01/mogdb/cm/cm_agent,/u01/mogdb/log/cm/om_monitor 
INFO[2024-06-23T20:26:03.318] [192.168.115.82][omm] create file cluster_manual_start 
INFO[2024-06-23T20:26:03.342] [192.168.115.82][omm] make user omm's dir(s): /u01/mogdb/cm,/u01/mogdb/cm/cm_server,/u01/mogdb/cm/cm_agent,/u01/mogdb/log/cm/om_monitor 
INFO[2024-06-23T20:26:03.820] [192.168.115.81][omm] extract MogDB-5.0.7-CentOS-64bit-cm.tar.gz to dir /u01/mogdb/app 
INFO[2024-06-23T20:26:03.929] [192.168.115.82][omm] extract MogDB-5.0.7-CentOS-64bit-cm.tar.gz to dir /u01/mogdb/app 
INFO[2024-06-23T20:26:04.029] [192.168.115.81][omm] change /u01/mogdb/app owner to omm 
INFO[2024-06-23T20:26:04.029] [192.168.115.81][omm] copy /u01/mogdb/app/share/config/cm_server.conf.sample to /u01/mogdb/cm/cm_server/cm_server.conf 
INFO[2024-06-23T20:26:04.049] [192.168.115.81][omm] copy /u01/mogdb/app/share/config/cm_agent.conf.sample to /u01/mogdb/cm/cm_agent/cm_agent.conf 
INFO[2024-06-23T20:26:04.069] [192.168.115.81][omm] copy /u01/mogdb/app/script/cm_manage_vip.py to /u01/mogdb/app/bin 
INFO[2024-06-23T20:26:04.089] [192.168.115.81][omm] change /u01/mogdb/cm owner to omm 
INFO[2024-06-23T20:26:04.089] [192.168.115.81][omm] update cm config file: /u01/mogdb/cm/cm_agent/cm_agent.conf 
INFO[2024-06-23T20:26:04.128] [192.168.115.82][omm] change /u01/mogdb/app owner to omm 
INFO[2024-06-23T20:26:04.128] [192.168.115.82][omm] copy /u01/mogdb/app/share/config/cm_server.conf.sample to /u01/mogdb/cm/cm_server/cm_server.conf 
INFO[2024-06-23T20:26:04.141] [192.168.115.81][omm] update cm config file: /u01/mogdb/cm/cm_server/cm_server.conf 
INFO[2024-06-23T20:26:04.151] [192.168.115.82][omm] copy /u01/mogdb/app/share/config/cm_agent.conf.sample to /u01/mogdb/cm/cm_agent/cm_agent.conf 
INFO[2024-06-23T20:26:04.173] [192.168.115.82][omm] copy /u01/mogdb/app/script/cm_manage_vip.py to /u01/mogdb/app/bin 
INFO[2024-06-23T20:26:04.194] [192.168.115.82][omm] change /u01/mogdb/cm owner to omm 
INFO[2024-06-23T20:26:04.194] [192.168.115.82][omm] update cm config file: /u01/mogdb/cm/cm_agent/cm_agent.conf 
INFO[2024-06-23T20:26:04.226] [192.168.115.81][omm] generate cluster_static_config file 
INFO[2024-06-23T20:26:04.251] [192.168.115.82][omm] update cm config file: /u01/mogdb/cm/cm_server/cm_server.conf 
INFO[2024-06-23T20:26:04.321] [192.168.115.81][omm] start om_monitor       
INFO[2024-06-23T20:26:04.351] [192.168.115.82][omm] generate cluster_static_config file 
INFO[2024-06-23T20:26:04.352] [192.168.115.81][omm] remove cron task: om_monitor 
INFO[2024-06-23T20:26:04.379] [192.168.115.81][omm] set omm cron task: om_monitor 
INFO[2024-06-23T20:26:04.403] [192.168.115.82][omm] start om_monitor       
INFO[2024-06-23T20:26:04.432] [192.168.115.82][omm] remove cron task: om_monitor 
INFO[2024-06-23T20:26:04.460] [192.168.115.82][omm] set omm cron task: om_monitor 
INFO[2024-06-23T20:26:04.508] starting cm servers with cm_ctl              
INFO[2024-06-23T20:26:12.337] start successfully                           
INFO[2024-06-23T20:26:12.337] install finished                             
[root@mogdb1 ~]#
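The prompt in the middle of the install log lists the four settings that matter in a two-node CM deployment. In particular, `third_party_gateway_ip` (192.168.115.2 here) is, as I understand the CM two-node design, the address cm_server pings to decide which side of the cluster is network-isolated, so it should stay reachable from both hosts. A quick pre-flight sanity check, using plain `ping` flags; adjust the address to your environment:

```shell
# Arbitration target from the install-cm prompt above (assumption: this is
# the address CM probes for two-node isolation detection).
gateway=192.168.115.2
if ping -c 1 -W 1 "$gateway" >/dev/null 2>&1; then
  echo "gateway $gateway reachable"
else
  echo "WARNING: gateway $gateway unreachable; two-node failover arbitration may misjudge"
fi
```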

07

Cluster status after installing the new CM

[root@mogdb1 ~]# ptk cluster status -n ebhk 
[   Cluster State   ]
cluster_name                   : ebhk
cluster_state                  : Normal
database_version               : MogDB 5.0.7 (build c4707384)
cm_version                     : 5.0.7 (build 39e2ce35)

[  CMServer State   ]
  id |       ip       | port  |    hostname     |  role    
-----+----------------+-------+-----------------+----------
  1  | 192.168.115.81 | 15300 | mogdb1.mark.com | primary  
  2  | 192.168.115.82 | 15300 | mogdb2.mark.com | standby  

[  Datanode State   ]
  cluster_name |  id  |       ip       | port  | user | nodename | db_role | state  |  uptime  | upstream  
---------------+------+----------------+-------+------+----------+---------+--------+----------+-----------
  ebhk         | 6001 | 192.168.115.81 | 26000 | omm  | dn_6001  | primary | Normal | 00:06:28 | -         
               | 6002 | 192.168.115.82 | 26000 | omm  | dn_6002  | standby | Normal | 00:06:26 | -         
[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever
[root@mogdb1 ~]#

08

Load the CM VIP

[root@mogdb1 ~]# ptk cluster -n ebhk load-cm-vip --vip 192.168.115.83 --action install  --log-level debug
DEBU[2024-06-23T20:29:18.621] cmd: ptk cluster -n ebhk load-cm-vip --vip 192.168.115.83 --action install --log-level debug 
DEBU[2024-06-23T20:29:18.623] tidy config successfully                     
DEBU[2024-06-23T20:29:18.623] [192.168.115.81][omm] new sudo user executor with config option: <SSHOption host=192.168.115.81, user=root, port=22> 
DEBU[2024-06-23T20:29:18.623] [192.168.115.81][omm] new local executor     
DEBU[2024-06-23T20:29:18.625] [192.168.115.81][omm] try to init db user executor with auth.db 
DEBU[2024-06-23T20:29:18.625] [192.168.115.81][omm] new ssh executor       
DEBU[2024-06-23T20:29:18.625] [192.168.115.82][omm] new sudo user executor with config option: <SSHOption host=192.168.115.82, user=root, port=22> 
DEBU[2024-06-23T20:29:18.626] [192.168.115.82][omm] new ssh executor       
DEBU[2024-06-23T20:29:18.627] [192.168.115.82][omm] try to init db user executor with auth.db 
DEBU[2024-06-23T20:29:18.627] [192.168.115.82][omm] new ssh executor       
DEBU[2024-06-23T20:29:18.628] clusterinfo: save config: 
{"global":{"cluster_name":"ebhk","user":"omm","group":"omm","enable_cm":true,"enable_dss":false},"db_servers":[{"id":6001,"dn_name":"dn_6001","host":"192.168.115.81","role":"primary","db_port":26000},{"id":6002,"dn_name":"dn_6002","host":"192.168.115.82","role":"standby","db_port":26000}],"cm_servers":[{"id":1,"host":"192.168.115.81","port":15300},{"id":2,"host":"192.168.115.82","port":15300}]} 
DEBU[2024-06-23T20:29:18.630] complete new clusterinfo                     
DEBU[2024-06-23T20:29:18.631] [192.168.115.81][omm] connecting to omm@192.168.115.81 ... 
DEBU[2024-06-23T20:29:18.631] [192.168.115.82][omm] connecting to omm@192.168.115.82 ... 
DEBU[2024-06-23T20:29:18.676] [192.168.115.81][omm] connect to omm@192.168.115.81 success 
DEBU[2024-06-23T20:29:18.692] [192.168.115.82][omm] connect to omm@192.168.115.82 success 
DEBU[2024-06-23T20:29:18.767] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ -f ~/.ptk_mogdb_env ]; then . ~/.ptk_mogdb_env; fi; cm_ctl --version | awk '{print \$2,\$4,\$6}'"
[STDOUT]:
(MogDB 5.0.7 39e2ce35)
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.779] [192.168.115.82][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.82
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ -f ~/.ptk_mogdb_env ]; then . ~/.ptk_mogdb_env; fi; cm_ctl --version | awk '{print \$2,\$4,\$6}'"
[STDOUT]:
(MogDB 5.0.7 39e2ce35)
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.803] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ ! -f /u01/mogdb/app/bin/cm_manage_vip.py ];then echo notexist;fi"
[STDOUT]:

[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.827] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;
if [[ -e /u01/mogdb/cm/cm_agent/cm_resource.json ]];then
    awk '{if (\$0 ~ /float_ip/) print \$0}' /u01/mogdb/cm/cm_agent/cm_resource.json
else
    echo \"notexist\"
fi
        "
[STDOUT]:
notexist
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.875] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;sudo -n ifconfig 2>&1; echo \$?"
[STDOUT]:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.81  netmask 255.255.255.0  broadcast 192.168.115.255
        inet6 fe80::20c:29ff:feb2:14fb  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b2:14:fb  txqueuelen 1000  (Ethernet)
        RX packets 36263  bytes 6758808 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 140384  bytes 276321419 (263.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:26000: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.83  netmask 255.255.255.0  broadcast 192.168.115.255
        ether 00:0c:29:b2:14:fb  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 61276  bytes 153878620 (146.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61276  bytes 153878620 (146.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

0
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.921] [192.168.115.82][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.82
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;sudo -n ifconfig 2>&1; echo \$?"
[STDOUT]:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.82  netmask 255.255.255.0  broadcast 192.168.115.255
        inet6 fe80::20c:29ff:fe80:89a1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:80:89:a1  txqueuelen 1000  (Ethernet)
        RX packets 137609  bytes 177644823 (169.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31902  bytes 4569367 (4.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 13840  bytes 3619501 (3.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13840  bytes 3619501 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

0
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:29:18.921] [192.168.115.81][omm] new sftp client        
DEBU[2024-06-23T20:29:18.950] [192.168.115.81][omm] open remote file /u01/mogdb/app/bin/cm_manage_vip.py 
DEBU[2024-06-23T20:29:18.950] [192.168.115.81][omm] upload remote file /u01/mogdb/app/bin/cm_manage_vip.py 
DEBU[2024-06-23T20:29:18.953] [192.168.115.81][omm] download "/u01/mogdb/app/bin/cm_manage_vip.py" to "/tmp/cm_manage_vip.py" success 
DEBU[2024-06-23T20:29:18.955] [192.168.115.81][omm] new sftp client        
DEBU[2024-06-23T20:29:18.982] [192.168.115.81][omm] create remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:29:18.983] [192.168.115.81][omm] upload remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:29:18.984] [192.168.115.81][omm] chmod on remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:29:18.985] [192.168.115.81][omm] upload "/tmp/cm_manage_vip.py" to "/u01/mogdb/tmp/cm_manage_vip.py" success 
INFO[2024-06-23T20:29:18.986] execute load vip script, action: install     
DEBU[2024-06-23T20:29:21.347] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;python3 /u01/mogdb/tmp/cm_manage_vip.py --install --db_service_vip=192.168.115.83 --db_service_vip_port=26000"
[STDOUT]:
cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
pssh -H omm@192.168.115.82 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
Error: the 192.168.115.83 is reachable, the 192.168.115.83 has been occupied. Please check.
[STDERR]:

[ERROR]:
Process exited with status 1 
INFO[2024-06-23T20:29:21.347] load vip script output:
cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
pssh -H omm@192.168.115.82 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
Error: the 192.168.115.83 is reachable, the 192.168.115.83 has been occupied. Please check. 
INFO[2024-06-23T20:29:21.348] time elapsed: 2s                             
DEBU[2024-06-23T20:29:21.348] stop signal listener                         
DEBU[2024-06-23T20:29:21.348] global cleaner called                        
Process exited with status 1
DEBU[2024-06-23T20:29:21.349] exit with code: 1                            
[root@mogdb1 ~]#

Note: the load-cm-vip operation failed because VIP 192.168.115.83 was still reachable (pingable), i.e. the address was already occupied.
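
Before retrying, it helps to confirm on which node the stale address is still configured. A minimal sketch (the VIP and the sample `ip addr` line below mirror this cluster; in practice pipe live `ip -4 addr show` output into the function):

```shell
#!/bin/bash
# Check whether a VIP is still configured locally by scanning `ip -4 addr`
# style output on stdin.
vip_present() {
  # returns 0 if the given VIP appears as an inet address on stdin
  grep -q "inet $1/"
}

# Sample line copied from the `ip a` output on node 1:
sample='inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000'

if printf '%s\n' "$sample" | vip_present 192.168.115.83; then
  echo "VIP still configured on this node"
fi
# live usage: ip -4 addr show | vip_present 192.168.115.83 && echo occupied
```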

09

Remove the occupied cm-vip

[root@mogdb1 ~]# ifconfig -a
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.81  netmask 255.255.255.0  broadcast 192.168.115.255
        inet6 fe80::20c:29ff:feb2:14fb  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b2:14:fb  txqueuelen 1000  (Ethernet)
        RX packets 48047  bytes 8904590 (8.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 150228  bytes 277350226 (264.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:26000: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.83  netmask 255.255.255.0  broadcast 192.168.115.255
        ether 00:0c:29:b2:14:fb  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 83922  bytes 161093100 (153.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83922  bytes 161093100 (153.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@mogdb1 ~]# ifconfig  ens33:26000 down
[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever
[root@mogdb1 ~]#
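
The `ifconfig ens33:26000 down` call above uses the legacy interface-alias syntax; on systems with iproute2 the same cleanup is an `ip addr del`. A sketch that prints the command for review instead of executing it (the device name and /24 prefix are assumptions matching this cluster):

```shell
#!/bin/bash
# Build the iproute2 equivalent of `ifconfig ens33:26000 down` for a stale
# CM VIP. Printing rather than executing lets the command be reviewed first.
cleanup_cmd() {
  printf 'ip addr del %s dev %s\n' "$1" "$2"
}

cleanup_cmd 192.168.115.83/24 ens33
# then run the printed command as root on the node still holding the address
```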

10

Reload the cm-vip

[root@mogdb1 ~]# ptk cluster -n ebhk load-cm-vip --vip 192.168.115.83 --action install  --log-level debug
DEBU[2024-06-23T20:42:33.183] cmd: ptk cluster -n ebhk load-cm-vip --vip 192.168.115.83 --action install --log-level debug 
DEBU[2024-06-23T20:42:33.184] tidy config successfully                     
DEBU[2024-06-23T20:42:33.185] [192.168.115.81][omm] new sudo user executor with config option: <SSHOption host=192.168.115.81, user=root, port=22> 
DEBU[2024-06-23T20:42:33.185] [192.168.115.81][omm] new local executor     
DEBU[2024-06-23T20:42:33.187] [192.168.115.81][omm] try to init db user executor with auth.db 
DEBU[2024-06-23T20:42:33.187] [192.168.115.81][omm] new ssh executor       
DEBU[2024-06-23T20:42:33.187] [192.168.115.82][omm] new sudo user executor with config option: <SSHOption host=192.168.115.82, user=root, port=22> 
DEBU[2024-06-23T20:42:33.187] [192.168.115.82][omm] new ssh executor       
DEBU[2024-06-23T20:42:33.190] [192.168.115.82][omm] try to init db user executor with auth.db 
DEBU[2024-06-23T20:42:33.190] [192.168.115.82][omm] new ssh executor       
DEBU[2024-06-23T20:42:33.190] clusterinfo: save config: 
{"global":{"cluster_name":"ebhk","user":"omm","group":"omm","enable_cm":true,"enable_dss":false},"db_servers":[{"id":6001,"dn_name":"dn_6001","host":"192.168.115.81","role":"primary","db_port":26000},{"id":6002,"dn_name":"dn_6002","host":"192.168.115.82","role":"standby","db_port":26000}],"cm_servers":[{"id":1,"host":"192.168.115.81","port":15300},{"id":2,"host":"192.168.115.82","port":15300}]} 
DEBU[2024-06-23T20:42:33.193] complete new clusterinfo                     
DEBU[2024-06-23T20:42:33.193] [192.168.115.81][omm] connecting to omm@192.168.115.81 ... 
DEBU[2024-06-23T20:42:33.194] [192.168.115.82][omm] connecting to omm@192.168.115.82 ... 
DEBU[2024-06-23T20:42:33.238] [192.168.115.81][omm] connect to omm@192.168.115.81 success 
DEBU[2024-06-23T20:42:33.241] [192.168.115.82][omm] connect to omm@192.168.115.82 success 
DEBU[2024-06-23T20:42:33.325] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ -f ~/.ptk_mogdb_env ]; then . ~/.ptk_mogdb_env; fi; cm_ctl --version | awk '{print \$2,\$4,\$6}'"
[STDOUT]:
(MogDB 5.0.7 39e2ce35)
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.330] [192.168.115.82][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.82
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ -f ~/.ptk_mogdb_env ]; then . ~/.ptk_mogdb_env; fi; cm_ctl --version | awk '{print \$2,\$4,\$6}'"
[STDOUT]:
(MogDB 5.0.7 39e2ce35)
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.367] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;if [ ! -f /u01/mogdb/app/bin/cm_manage_vip.py ];then echo notexist;fi"
[STDOUT]:

[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.408] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;
if [[ -e /u01/mogdb/cm/cm_agent/cm_resource.json ]];then
    awk '{if (\$0 ~ /float_ip/) print \$0}' /u01/mogdb/cm/cm_agent/cm_resource.json
else
    echo \"notexist\"
fi
        "
[STDOUT]:
notexist
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.453] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;sudo -n ifconfig 2>&1; echo \$?"
[STDOUT]:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.81  netmask 255.255.255.0  broadcast 192.168.115.255
        inet6 fe80::20c:29ff:feb2:14fb  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b2:14:fb  txqueuelen 1000  (Ethernet)
        RX packets 56291  bytes 10426880 (9.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 157087  bytes 278028677 (265.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 99407  bytes 166003198 (158.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 99407  bytes 166003198 (158.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

0
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.501] [192.168.115.82][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.82
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;sudo -n ifconfig 2>&1; echo \$?"
[STDOUT]:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.115.82  netmask 255.255.255.0  broadcast 192.168.115.255
        inet6 fe80::20c:29ff:fe80:89a1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:80:89:a1  txqueuelen 1000  (Ethernet)
        RX packets 153924  bytes 179244291 (170.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51350  bytes 9333273 (8.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 40746  bytes 10938093 (10.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40746  bytes 10938093 (10.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

0
[STDERR]:

[ERROR]:
<nil> 
DEBU[2024-06-23T20:42:33.501] [192.168.115.81][omm] new sftp client        
DEBU[2024-06-23T20:42:33.528] [192.168.115.81][omm] open remote file /u01/mogdb/app/bin/cm_manage_vip.py 
DEBU[2024-06-23T20:42:33.529] [192.168.115.81][omm] upload remote file /u01/mogdb/app/bin/cm_manage_vip.py 
DEBU[2024-06-23T20:42:33.533] [192.168.115.81][omm] download "/u01/mogdb/app/bin/cm_manage_vip.py" to "/tmp/cm_manage_vip.py" success 
DEBU[2024-06-23T20:42:33.535] [192.168.115.81][omm] new sftp client        
DEBU[2024-06-23T20:42:33.560] [192.168.115.81][omm] create remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:42:33.561] [192.168.115.81][omm] upload remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:42:33.562] [192.168.115.81][omm] chmod on remote file /u01/mogdb/tmp/cm_manage_vip.py 
DEBU[2024-06-23T20:42:33.563] [192.168.115.81][omm] upload "/tmp/cm_manage_vip.py" to "/u01/mogdb/tmp/cm_manage_vip.py" success 
INFO[2024-06-23T20:42:33.564] execute load vip script, action: install     
DEBU[2024-06-23T20:43:31.423] [192.168.115.81][omm] SHELL LOG:
[ADDRESS]: omm@192.168.115.81
[COMMAND]:
bash -c "export LANG=en_US.UTF-8;python3 /u01/mogdb/tmp/cm_manage_vip.py --install --db_service_vip=192.168.115.83 --db_service_vip_port=26000"
[STDOUT]:
cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
pssh -H omm@192.168.115.82 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
ifconfig
hostname -I
ifconfig
cm_ctl show | tail -5 | awk '{{print $(NF)}}'
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:26000 192.168.115.83 netmask 255.255.255.0 up"
whoami
pssh -H omm@192.168.115.81 -i "cm_ctl res --add --res_name="VIP_az661421" --res_attr="resources_type=VIP,float_ip=192.168.115.83""
pssh -H omm@192.168.115.81 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=1,res_instance_id=6001" --inst_attr="base_ip=192.168.115.81" "
pssh -H omm@192.168.115.81 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=2,res_instance_id=6002" --inst_attr="base_ip=192.168.115.82" "
pssh -H omm@192.168.115.82 -i "cm_ctl res --add --res_name="VIP_az661421" --res_attr="resources_type=VIP,float_ip=192.168.115.83""
pssh -H omm@192.168.115.82 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=1,res_instance_id=6001" --inst_attr="base_ip=192.168.115.81" "
pssh -H omm@192.168.115.82 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=2,res_instance_id=6002" --inst_attr="base_ip=192.168.115.82" "
gs_guc set -N all -h "host all all 192.168.115.83/32 sha256"
cm_ctl res --check
timeout 120 cm_ctl stop && cm_ctl start
 cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl show | grep 192.168.115.83
cm_ctl show | tail -5 | awk '{{print $(NF)}}'
cm_ctl set --param --agent  -k db_service_vip="'192.168.115.83'"
cm_ctl set --param --agent  -k enable_fence_dn="off"
cm_ctl reload --param --agent
load 192.168.115.83 successfully.
[STDERR]:

[ERROR]:
<nil> 
INFO[2024-06-23T20:43:31.424] load vip script output:
cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
hostname -I
ifconfig
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
pssh -H omm@192.168.115.82 -i "sudo ifconfig ens33:28542 17.23.1.200 netmask 255.255.255.0 up  && sudo ifconfig ens33:28542 down" 
cm_ctl query -Cvi | tail -n 1
ifconfig
hostname -I
ifconfig
ifconfig
hostname -I
ifconfig
cm_ctl show | tail -5 | awk '{{print $(NF)}}'
whoami
pssh -H omm@192.168.115.81 -i "sudo ifconfig ens33:26000 192.168.115.83 netmask 255.255.255.0 up"
whoami
pssh -H omm@192.168.115.81 -i "cm_ctl res --add --res_name="VIP_az661421" --res_attr="resources_type=VIP,float_ip=192.168.115.83""
pssh -H omm@192.168.115.81 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=1,res_instance_id=6001" --inst_attr="base_ip=192.168.115.81" "
pssh -H omm@192.168.115.81 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=2,res_instance_id=6002" --inst_attr="base_ip=192.168.115.82" "
pssh -H omm@192.168.115.82 -i "cm_ctl res --add --res_name="VIP_az661421" --res_attr="resources_type=VIP,float_ip=192.168.115.83""
pssh -H omm@192.168.115.82 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=1,res_instance_id=6001" --inst_attr="base_ip=192.168.115.81" "
pssh -H omm@192.168.115.82 -i "cm_ctl res --edit --res_name="VIP_az661421" --add_inst="node_id=2,res_instance_id=6002" --inst_attr="base_ip=192.168.115.82" "
gs_guc set -N all -h "host all all 192.168.115.83/32 sha256"
cm_ctl res --check
timeout 120 cm_ctl stop && cm_ctl start
 cm_ctl query -Cv | grep cluster_state | grep Normal
cm_ctl show | grep 192.168.115.83
cm_ctl show | tail -5 | awk '{{print $(NF)}}'
cm_ctl set --param --agent  -k db_service_vip="'192.168.115.83'"
cm_ctl set --param --agent  -k enable_fence_dn="off"
cm_ctl reload --param --agent
load 192.168.115.83 successfully. 
INFO[2024-06-23T20:43:31.424] script executed successful                   
INFO[2024-06-23T20:43:31.424] time elapsed: 58s                            
DEBU[2024-06-23T20:43:31.424] stop signal listener                         
DEBU[2024-06-23T20:43:31.426] global cleaner called                        
DEBU[2024-06-23T20:43:31.426] clear debug log: /root/.ptk/log/ptk-2024-06-23T20_42_33-debug.log
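
The script's final steps register the VIP as a CM resource, so the float_ip entry that PTK probed for earlier in /u01/mogdb/cm/cm_agent/cm_resource.json should now exist. A sketch of extracting it for verification (the sample JSON fragment is hypothetical, shaped after the `resources_type=VIP,float_ip=...` attributes shown in the log):

```shell
#!/bin/bash
# Pull the float_ip value out of cm_resource.json-style text on stdin.
find_float_ip() {
  grep -o '"float_ip"[[:space:]]*:[[:space:]]*"[^"]*"' | grep -o '[0-9][0-9.]*'
}

# Hypothetical fragment following the attributes registered above:
sample='{"resources_type":"VIP","float_ip":"192.168.115.83"}'
printf '%s\n' "$sample" | find_float_ip
# on the cluster: find_float_ip < /u01/mogdb/cm/cm_agent/cm_resource.json
```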

11

Check the cluster status and confirm the CM version and that the VIP is active

[root@mogdb1 ~]# ptk cluster status -n ebhk 
[   Cluster State   ]
cluster_name                   : ebhk
cluster_state                  : Normal
database_version               : MogDB 5.0.7 (build c4707384)
cm_version                     : 5.0.7 (build 39e2ce35)
active_vip                     : 192.168.115.83

[  CMServer State   ]
  id |       ip       | port  |    hostname     |  role    
-----+----------------+-------+-----------------+----------
  1  | 192.168.115.81 | 15300 | mogdb1.mark.com | primary  
  2  | 192.168.115.82 | 15300 | mogdb2.mark.com | standby  

[  Datanode State   ]
  cluster_name |  id  |       ip       | port  | user | nodename | db_role | state  |  uptime  | upstream  
---------------+------+----------------+-------+------+----------+---------+--------+----------+-----------
  ebhk         | 6001 | 192.168.115.81 | 26000 | omm  | dn_6001  | primary | Normal | 00:00:53 | -         
               | 6002 | 192.168.115.82 | 26000 | omm  | dn_6002  | standby | Normal | 00:00:53 | -         
[root@mogdb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b2:14:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.81/24 brd 192.168.115.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.83/24 brd 192.168.115.255 scope global secondary ens33:26000
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb2:14fb/64 scope link 
       valid_lft forever preferred_lft forever
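
The same confirmation can be scripted by parsing the status output, e.g. asserting that `active_vip` matches the expected address (the sample line is copied from the `ptk cluster status` output above):

```shell
#!/bin/bash
# Parse the active_vip field from `ptk cluster status` style output on stdin.
active_vip() {
  awk -F'[: ]+' '/^active_vip/{print $2}'
}

# Sample line from the status output above:
sample='active_vip                     : 192.168.115.83'
vip=$(printf '%s\n' "$sample" | active_vip)
[ "$vip" = "192.168.115.83" ] && echo "VIP active: $vip"
# live usage: ptk cluster status -n ebhk | active_vip
```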


Data drives the future — Enmotech (云和恩墨) lives up to the trust placed in it!


Founded in 2011, Enmotech is an industry-leading provider of intelligent data technology. Headquartered in Beijing, the company operates local offices and conducts business in 35 regions in China and abroad.

With the mission of "data driven, achieving the future", Enmotech brings innovative data technology products and solutions to enterprises and organizations worldwide, helping customers build secure, efficient, agile, and cost-effective data environments, strengthening their competitive edge in data insight and decision-making, and enabling data-driven business innovation and growth.

Since its founding, Enmotech has focused on the data technology field, developing a series of software products in step with evolving market demand, covering databases, database storage, database cloud management, and intelligent data analytics. These products are widely used by group-scale, large and mid-sized, and high-growth customers as well as in industry cloud scenarios, demonstrating the company's technical and commercial competitiveness and its strength in end-to-end data technology solutions.

In the era of cloud, digitalization, and intelligence, Enmotech pursues mutually beneficial outcomes, is grateful for the trust and support of every customer and partner, puts "benefiting others first", and keeps investing in core data technology capabilities to build a data-driven, intelligent future.

We look forward to joining hands with you to explore the power of data and embrace an intelligent future.
