Changing the IP of a CM Cluster Node in openGauss

Overview

Environment

#OS information
[omm@testosa ~]$ arch 
x86_64
[omm@testosa ~]$ uname -a
Linux testosa 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[omm@testosa ~]$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
 
[omm@testosa ~]$ cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[omm@testosa ~]$ 


#Cluster information
[omm@testosa ~]$ cm_ctl query -Cvdip
[  CMServer State   ]

node       node_ip         instance                           state
---------------------------------------------------------------------
1  testosa 192.168.1.80    1    /opengauss/cmserver/cm_server Primary
2  testosb 192.168.1.81    2    /opengauss/cmserver/cm_server Standby
3  testosc 192.168.1.82    3    /opengauss/cmserver/cm_server Standby
4  testosd 192.168.1.83    4    /opengauss/cmserver/cm_server Standby

[   Cluster State   ]

cluster_state   : Normal
redistributing  : No
balanced        : Yes
current_az      : AZ_ALL

[  Datanode State   ]

node       node_ip         instance                       state            
--------------------------------------------------------------------------
1  testosa 192.168.1.80    6001 12345  /opengauss/data/dn P Primary Normal
2  testosb 192.168.1.81    6002 12345  /opengauss/data/dn S Standby Normal
3  testosc 192.168.1.82    6003 12345  /opengauss/data/dn S Standby Normal
4  testosd 192.168.1.83    6004 12345  /opengauss/data/dn C Cascade Standby Normal
[omm@testosa ~]$ 


#Version information
[omm@testosa ~]$ gs_ssh -c "gsql -d postgres -p 12345 -c \"select regexp_substr(version(),'.*\d{2} ') as version\""
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
                              version                              
-------------------------------------------------------------------
 (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 
(1 row)

[SUCCESS] testosb:
                              version                              
-------------------------------------------------------------------
 (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 
(1 row)

[SUCCESS] testosc:
                              version                              
-------------------------------------------------------------------
 (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 
(1 row)

[SUCCESS] testosd:
                              version                              
-------------------------------------------------------------------
 (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 
(1 row)


[omm@testosa ~]$ 


[omm@testosa ~]$ gs_ssh -c "gaussdb -V"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
gaussdb (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 commit 0 last mr  
[SUCCESS] testosb:
gaussdb (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 commit 0 last mr  
[SUCCESS] testosc:
gaussdb (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 commit 0 last mr  
[SUCCESS] testosd:
gaussdb (openGauss 3.0.3 build 46134f73) compiled at 2022-12-30 20:48:06 commit 0 last mr  

[omm@testosa ~]$ gs_ssh -c "cm_ctl -V"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
cm_ctl (openGauss CM 3.0.3 build 313feba8) compiled at 2022-12-30 20:58:21 Release
[SUCCESS] testosb:
cm_ctl (openGauss CM 3.0.3 build 313feba8) compiled at 2022-12-30 20:58:21 Release
[SUCCESS] testosc:
cm_ctl (openGauss CM 3.0.3 build 313feba8) compiled at 2022-12-30 20:58:21 Release
[SUCCESS] testosd:
cm_ctl (openGauss CM 3.0.3 build 313feba8) compiled at 2022-12-30 20:58:21 Release

[omm@testosa ~]$ gs_ssh -c "gs_om -V"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
gs_om (openGauss OM 3.0.3 build 698397ea) compiled at 2022-12-30 20:58:12 commit 0 last mr
[SUCCESS] testosb:
gs_om (openGauss OM 3.0.3 build 698397ea) compiled at 2022-12-30 20:58:12 commit 0 last mr
[SUCCESS] testosc:
gs_om (openGauss OM 3.0.3 build 698397ea) compiled at 2022-12-30 20:58:12 commit 0 last mr
[SUCCESS] testosd:
gs_om (openGauss OM 3.0.3 build 698397ea) compiled at 2022-12-30 20:58:12 commit 0 last mr

[omm@testosa ~]$ 

Test Scenario

[root@testosb ~]# hostname
testosb
[root@testosb ~]# ifconfig|awk '/broadcast/ {print $2}'
192.168.1.81
[root@testosb ~]# cat /etc/hosts|grep testosb
192.168.1.81  testosb  #Gauss OM IP Hosts Mapping
[root@testosb ~]# 
#As shown above, the current IP of node testosb is 192.168.1.81; it needs to be changed to 192.168.1.90

Procedure

Stop the database cluster

[omm@testosa ~]$ cm_ctl stop

Change the host IP

#Update the IP address
[root@testosb ~]# grep IPADDR /etc/sysconfig/network-scripts/ifcfg-ens33
IPADDR="192.168.1.81"
[root@testosb ~]# sed -i '/IPADDR/s/192.168.1.81/192.168.1.90/' /etc/sysconfig/network-scripts/ifcfg-ens33
[root@testosb ~]# grep IPADDR /etc/sysconfig/network-scripts/ifcfg-ens33
IPADDR="192.168.1.90"
#Restart the network service
systemctl restart network
#Verify the IP address
[root@testosb ~]# ifconfig|awk '/broadcast/ {print $2}'
192.168.1.90
[root@testosb ~]# 

Update the /etc/hosts file

#Replace 192.168.1.81 with 192.168.1.90 in /etc/hosts on every node
[root@testosa ~]# sed -i 's/192.168.1.81/192.168.1.90/' /etc/hosts
[root@testosb ~]# sed -i 's/192.168.1.81/192.168.1.90/' /etc/hosts
[root@testosc ~]# sed -i 's/192.168.1.81/192.168.1.90/' /etc/hosts
[root@testosd ~]# sed -i 's/192.168.1.81/192.168.1.90/' /etc/hosts
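The same in-place `sed` substitution recurs for every file edited below (ifcfg, /etc/hosts, known_hosts, postgresql.conf, pg_hba.conf). A slightly safer variant is sketched here: `replace_ip` is a hypothetical helper, not part of the openGauss tooling; it keeps a `.bak` copy first and anchors the match so a longer address such as 192.168.1.810 is not partially rewritten.

```shell
#!/bin/bash
# Hypothetical helper (not openGauss tooling): replace an old IP with a new
# one in a file, keeping a backup copy first.
replace_ip() {
  local file="$1" old="$2" new="$3"
  cp "$file" "${file}.bak"        # keep a backup before editing
  local esc=${old//./\\.}         # escape the dots so they match literally
  # \b (GNU sed) keeps 192.168.1.81 from matching inside 192.168.1.810
  sed -i "s/${esc}\b/${new}/g" "$file"
}

# Example (run on each node): replace_ip /etc/hosts 192.168.1.81 192.168.1.90
```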

Repair SSH mutual trust for the omm user

#Update the omm user's known_hosts file
[omm@testosa ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
192.168.1.81 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJUFWSx4LdDZPSiG2YJL8e7elgSWhT7BiVADsZ1rVML1 #OM
[omm@testosa ~]$ sed -i 's/192.168.1.81/192.168.1.90/' /home/omm/.ssh/known_hosts
[omm@testosa ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
[omm@testosa ~]$ 


[omm@testosb ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
192.168.1.81 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJUFWSx4LdDZPSiG2YJL8e7elgSWhT7BiVADsZ1rVML1 #OM
[omm@testosb ~]$ sed -i 's/192.168.1.81/192.168.1.90/' /home/omm/.ssh/known_hosts
[omm@testosb ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
[omm@testosb ~]$ 


[omm@testosc ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
192.168.1.81 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJUFWSx4LdDZPSiG2YJL8e7elgSWhT7BiVADsZ1rVML1 #OM
[omm@testosc ~]$ sed -i 's/192.168.1.81/192.168.1.90/' /home/omm/.ssh/known_hosts
[omm@testosc ~]$ grep '192.168.1.81' /home/omm/.ssh/known_hosts
[omm@testosc ~]$ 


[root@testosd ~]# grep '192.168.1.81' /home/omm/.ssh/known_hosts
192.168.1.81 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJUFWSx4LdDZPSiG2YJL8e7elgSWhT7BiVADsZ1rVML1 #OM
[root@testosd ~]# sed -i 's/192.168.1.81/192.168.1.90/' /home/omm/.ssh/known_hosts
[root@testosd ~]# grep '192.168.1.81' /home/omm/.ssh/known_hosts
[root@testosd ~]# 


#Check mutual trust (run from all four nodes) to confirm the omm user's mutual trust is back to normal
for i in {testosa,testosb,testosc,testosd,192.168.1.80,192.168.1.90,192.168.1.82,192.168.1.83};do
ssh $i hostname
done
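Rewriting the IP inside known_hosts (as above) keeps the existing host key, which is valid here because only the IP changed and the machine's key did not. If the node had been reinstalled and its host key had changed as well, it would be safer to drop the stale entries and let SSH re-learn them on the next connection. A sketch, assuming plain (unhashed) known_hosts entries with a single host per line as in the transcripts above; `prune_known_hosts` is a hypothetical helper:

```shell
#!/bin/bash
# Hypothetical helper: drop known_hosts lines whose host field equals old_ip.
# Assumes unhashed entries with a single host per line.
prune_known_hosts() {
  local file="$1" old_ip="$2"
  awk -v ip="$old_ip" '$1 != ip' "$file" > "${file}.tmp" && mv "${file}.tmp" "$file"
}

# Example (hypothetical usage): prune_known_hosts /home/omm/.ssh/known_hosts 192.168.1.81
```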

Update the database configuration file (postgresql.conf) on each node

#Update the database configuration file on node testosb (change 192.168.1.81 to the new 192.168.1.90)
[omm@testosb dn]$ grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
listen_addresses = 'localhost,192.168.1.81'             # what IP address(es) to listen on;
local_bind_address = '192.168.1.81'
replconninfo1 = 'localhost=192.168.1.81 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.80 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349'              # replication connection information used to connect primary on standby, or standby on primary,
replconninfo2 = 'localhost=192.168.1.81 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.82 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349'              # replication connection information used to connect secondary on primary or standby,
replconninfo3 = 'localhost=192.168.1.81 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.83 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349 iscascade=true'             # replication connection information used to connect primary on standby, or standby on primary,

[omm@testosb dn]$ sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/postgresql.conf
[omm@testosb dn]$ grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
[omm@testosb dn]$ 


#Update the database configuration files on the other nodes (change 192.168.1.81 to the new 192.168.1.90)
[omm@testosa ~]$ grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
replconninfo1 = 'localhost=192.168.1.80 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.81 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349'              # replication connection information used to connect primary on standby, or standby on primary,
[omm@testosa ~]$ sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/postgresql.conf
[omm@testosa ~]$ grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
[omm@testosa ~]$ 


[root@testosc ~]# grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
replconninfo2 = 'localhost=192.168.1.82 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.81 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349'              # replication connection information used to connect secondary on primary or standby,
[root@testosc ~]# sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/postgresql.conf
[root@testosc ~]# grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
[root@testosc ~]# 


[root@testosd ~]# grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
replconninfo2 = 'localhost=192.168.1.83 localport=12346 localheartbeatport=12350 localservice=12349 remotehost=192.168.1.81 remoteport=12346 remoteheartbeatport=12350 remoteservice=12349'              # replication connection information used to connect secondary on primary or standby,
[root@testosd ~]# sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/postgresql.conf
[root@testosd ~]# grep '192.168.1.81' /opengauss/data/dn/postgresql.conf
[root@testosd ~]# 
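After each file edit, the transcripts verify success by re-running grep by hand. That check can be wrapped up; `check_no_old_ip` is a hypothetical helper that prints any surviving references and returns non-zero if the old IP is still present:

```shell
#!/bin/bash
# Hypothetical helper: return non-zero (and show the offending lines) if a
# file still references the old IP. -F matches the address literally.
check_no_old_ip() {
  local file="$1" old_ip="$2"
  if grep -Fn "$old_ip" "$file"; then
    echo "ERROR: $file still references $old_ip" >&2
    return 1
  fi
}

# Example (hypothetical usage): check_no_old_ip /opengauss/data/dn/postgresql.conf 192.168.1.81
```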

Update pg_hba.conf on each node

[omm@testosa dn]$ grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
host    all    omm    192.168.1.81/32    trust
host    all    all    192.168.1.81/32    sha256
[omm@testosa dn]$ sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/pg_hba.conf
[omm@testosa dn]$ grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
[omm@testosa dn]$ 


[omm@testosb dn]$ grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
host    all    omm    192.168.1.81/32    trust
host    all    all    192.168.1.81/32    sha256
[omm@testosb dn]$ sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/pg_hba.conf
[omm@testosb dn]$ grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
[omm@testosb dn]$ 


[root@testosc ~]# grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
host    all    omm    192.168.1.81/32    trust
host    all    all    192.168.1.81/32    sha256
[root@testosc ~]# sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/pg_hba.conf
[root@testosc ~]# grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
[root@testosc ~]# 


[root@testosd ~]# grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
host    all    omm    192.168.1.81/32    trust
host    all    all    192.168.1.81/32    sha256
[root@testosd ~]# sed -i 's/192.168.1.81/192.168.1.90/' /opengauss/data/dn/pg_hba.conf
[root@testosd ~]# grep '192.168.1.81' /opengauss/data/dn/pg_hba.conf
[root@testosd ~]# 

Regenerate the static configuration file

#As shown below, the static configuration file still stores the old IP (192.168.1.81); it needs to be changed to 192.168.1.90
[omm@testosa dn]$ gs_om -t view|grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
sshChannel 1:192.168.1.80
datanodeListenIP 1:192.168.1.80
datanodeLocalHAIP 1:192.168.1.80
datanodePeer0HAIP 1:192.168.1.81
datanodePeer1HAIP 1:192.168.1.82
datanodePeer2HAIP 1:192.168.1.83
sshChannel 1:192.168.1.81
datanodeListenIP 1:192.168.1.81
datanodeLocalHAIP 1:192.168.1.81
datanodePeer0HAIP 1:192.168.1.80
datanodePeer1HAIP 1:192.168.1.82
datanodePeer2HAIP 1:192.168.1.83
sshChannel 1:192.168.1.82
datanodeListenIP 1:192.168.1.82
datanodeLocalHAIP 1:192.168.1.82
datanodePeer0HAIP 1:192.168.1.80
datanodePeer1HAIP 1:192.168.1.81
datanodePeer2HAIP 1:192.168.1.83
sshChannel 1:192.168.1.83
datanodeListenIP 1:192.168.1.83
datanodeLocalHAIP 1:192.168.1.83
datanodePeer0HAIP 1:192.168.1.80
datanodePeer1HAIP 1:192.168.1.81
datanodePeer2HAIP 1:192.168.1.82
[omm@testosa dn]$ 

#Update the IP in the cluster XML file
[omm@testosa dn]$ grep '192.168.1.81' /tmp/3standby_cm.xml 
                <PARAM name="backIp1s" value="192.168.1.80,192.168.1.81,192.168.1.82,192.168.1.83"/>
                        <PARAM name="cmServerListenIp1" value="192.168.1.80,192.168.1.81,192.168.1.82,192.168.1.83"/>
                        <PARAM name="cmServerHaIp1" value="192.168.1.80,192.168.1.81,192.168.1.82,192.168.1.83"/>
                        <PARAM name="backIp1" value="192.168.1.81"/>
                        <PARAM name="sshIp1" value="192.168.1.81"/>
[omm@testosa dn]$ sed -i 's/192.168.1.81/192.168.1.90/' /tmp/3standby_cm.xml
[omm@testosa dn]$ grep '192.168.1.81' /tmp/3standby_cm.xml 
[omm@testosa dn]$ 

#Generate and distribute the static configuration files
[omm@testosa ~]$ gs_om -t generateconf -X /tmp/3standby_cm.xml --distribute
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opengauss/om/script/static_config_files.
Distributing static configuration files to all nodes.
Successfully distributed static configuration files.
[omm@testosa ~]$ 

#As shown below, the static configuration on all four nodes no longer contains the IP 192.168.1.81
[omm@testosa ~]$ gs_ssh -c "gs_om -t view|grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'"|awk -F : '{print $2}'|sort|uniq

192.168.1.80
192.168.1.82
192.168.1.83
192.168.1.90
[omm@testosa ~]$ 
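The verification pipeline above (grep for dotted quads, sort, uniq) is handy enough to keep as a function; `unique_ips` is a hypothetical helper that lists the distinct IPv4 addresses in saved `gs_om -t view` output, so a stale address stands out at a glance:

```shell
#!/bin/bash
# Hypothetical helper: print the unique IPv4 addresses found in a file,
# e.g. saved `gs_om -t view` output.
unique_ips() {
  grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' "$1" | sort -u
}

# Example (hypothetical path): gs_om -t view > /tmp/view.out && unique_ips /tmp/view.out
```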

CM configuration cleanup

#Delete the DCF metadata
[omm@testosa ~]$ gs_ssh -c "ls -ld /opengauss/cmserver/dcf_data/metadata"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
drwx------ 2 omm dbgrp 72 Feb  1 22:33 /opengauss/cmserver/dcf_data/metadata
[SUCCESS] testosb:
drwx------ 2 omm dbgrp 72 Feb  1 22:33 /opengauss/cmserver/dcf_data/metadata
[SUCCESS] testosc:
drwx------ 2 omm dbgrp 72 Feb  1 22:33 /opengauss/cmserver/dcf_data/metadata
[SUCCESS] testosd:
drwx------ 2 omm dbgrp 72 Feb  1 22:33 /opengauss/cmserver/dcf_data/metadata

[omm@testosa ~]$ 

[omm@testosa ~]$ gs_ssh -c "rm -rf /opengauss/cmserver/dcf_data/metadata"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
[SUCCESS] testosb:
[SUCCESS] testosc:
[SUCCESS] testosd:

[omm@testosa ~]$ gs_ssh -c "ls -ld /opengauss/cmserver/dcf_data/metadata"
Failed to execute command on all nodes.

Output:
[GAUSS-51400] : Failed to execute the command: sh /opengauss/tmp/ClusterCall_71151.sh. Error:
[FAILURE] testosa:
ls: cannot access /opengauss/cmserver/dcf_data/metadata: No such file or directory
[FAILURE] testosb:
ls: cannot access /opengauss/cmserver/dcf_data/metadata: No such file or directory
[FAILURE] testosc:
ls: cannot access /opengauss/cmserver/dcf_data/metadata: No such file or directory
[FAILURE] testosd:
ls: cannot access /opengauss/cmserver/dcf_data/metadata: No such file or directory
[omm@testosa ~]$ 


#Delete the dynamic configuration file
[omm@testosa ~]$ gs_ssh -c "ls -l /opengauss/app_46134f73/bin/cluster_dynamic_config"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
-rw------- 1 omm dbgrp 9068 Feb  1 22:38 /opengauss/app_46134f73/bin/cluster_dynamic_config
[SUCCESS] testosb:
-rw------- 1 omm dbgrp 9068 Feb  1 22:53 /opengauss/app_46134f73/bin/cluster_dynamic_config
[SUCCESS] testosc:
-rw------- 1 omm dbgrp 9068 Feb  1 22:53 /opengauss/app_46134f73/bin/cluster_dynamic_config
[SUCCESS] testosd:
-rw------- 1 omm dbgrp 9068 Feb  1 22:53 /opengauss/app_46134f73/bin/cluster_dynamic_config

[omm@testosa ~]$ 

[omm@testosa ~]$ gs_ssh -c "rm -rf  /opengauss/app_46134f73/bin/cluster_dynamic_config"
Successfully execute command on all nodes.

Output:
[SUCCESS] testosa:
[SUCCESS] testosb:
[SUCCESS] testosc:
[SUCCESS] testosd:

[omm@testosa ~]$ 

Start the cluster

[omm@testosa ~]$ cm_ctl start
cm_ctl: checking cluster status.
cm_ctl: checking cluster status.
cm_ctl: checking finished in 480 ms.
cm_ctl: start cluster. 
cm_ctl: start nodeid: 1
cm_ctl: start nodeid: 2
cm_ctl: start nodeid: 3
cm_ctl: start nodeid: 4
................
cm_ctl: start cluster successfully.
[omm@testosa ~]$ 

#Check the cluster status
[omm@testosa ~]$ cm_ctl query -Cvdip
[  CMServer State   ]

node       node_ip         instance                           state
---------------------------------------------------------------------
1  testosa 192.168.1.80    1    /opengauss/cmserver/cm_server Primary
2  testosb 192.168.1.90    2    /opengauss/cmserver/cm_server Standby
3  testosc 192.168.1.82    3    /opengauss/cmserver/cm_server Standby
4  testosd 192.168.1.83    4    /opengauss/cmserver/cm_server Standby

[   Cluster State   ]

cluster_state   : Normal
redistributing  : No
balanced        : Yes
current_az      : AZ_ALL

[  Datanode State   ]

node       node_ip         instance                       state            
---------------------------------------------------------------------------
1  testosa 192.168.1.80    6001 12345  /opengauss/data/dn P Primary Normal 
2  testosb 192.168.1.90    6002 12345  /opengauss/data/dn S Standby Normal 
3  testosc 192.168.1.82    6003 12345  /opengauss/data/dn S Standby Normal 
4  testosd 192.168.1.83    6004 12345  /opengauss/data/dn C Cascade Standby Normal
[omm@testosa ~]$ 
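As a final smoke test, the cluster_state line of the query output can be checked mechanically rather than by eye. A sketch; `cluster_is_normal` is a hypothetical helper that operates on saved `cm_ctl query` output:

```shell
#!/bin/bash
# Hypothetical helper: succeed only if saved `cm_ctl query` output reports
# "cluster_state : Normal".
cluster_is_normal() {
  grep -qE '^cluster_state[[:space:]]*:[[:space:]]*Normal' "$1"
}

# Example (hypothetical path):
#   cm_ctl query -Cvdip > /tmp/cluster_query.out && cluster_is_normal /tmp/cluster_query.out
```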