EsgynDB Troubleshooting: Backup already exists

Symptom

When importing a backup set into EsgynDB, the IMPORT BACKUP command fails with "Backup full20190702_00212428826064850102 already exists":

SQL>import backup from location 'hdfs://172.31.234.16:8020/tmp/fulldb12parallel',tag 'full20190702_00212428826064850102';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details. [2019-07-04 11:36:45]
*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: Backup full20190702_00212428826064850102 already exists.
org.apache.hadoop.hbase.pit.BackupRestoreClient.importBackup(BackupRestoreClient.java:4378)
org.apache.hadoop.hbase.pit.BackupRestoreClient.exportOrImportBackup(BackupRestoreClient.java:4517). [2019-07-04 11:36:45]

Solution

First run "get all backup tags" to check whether the backup tag already exists; it does not:

sqlci -> get all backup tags;

This suggests an earlier import was interrupted partway through and left intermediate directories behind. Running "hadoop fs -ls /user/trafodion/backupsys" indeed shows a leftover backup set under that directory; delete it:

[trafodion@cs02 ~]$ hadoop fs -ls /user/trafodion/backupsys
Found 2 items
drwxrwx---   - trafodion hbase          0 2019-07-03 09:46 /user/trafodion/backupsys/full20190702_00212428826064850102
[trafodion@cs02 ~]$ hadoop fs -rmr /user/trafodion/backupsys/full20190702_00212428826064850102
rmr: DEPRECATED: Please use 'rm -r' instead.
19/07/04 11:42:49 INFO fs.TrashPolicyDefault: Moved: 'hdfs://nameservice1/user/trafodion/backupsys/full20190702_00212428826064850102' to trash at: hdfs://nameservice1/user/trafodion/.Trash/Current/user/trafodion/backupsys/full20190702_00212428826064850102
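The check-and-delete step above can be wrapped in a small helper. This is only a sketch: it prints the cleanup command for review instead of executing it, and assumes the default /user/trafodion/backupsys layout shown in the listing above.

```shell
# Print the cleanup command for a stale IMPORT BACKUP intermediate directory.
# Assumption: the default backupsys path from the listing above; review the
# printed command before running it ('rm -r' replaces the deprecated 'rmr').
stale_import_cleanup() {
  local tag="$1"
  echo "hadoop fs -rm -r /user/trafodion/backupsys/${tag}"
}

stale_import_cleanup "full20190702_00212428826064850102"
```

Once the printed path has been verified, piping the output to sh (`stale_import_cleanup <tag> | sh`) would execute it.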

After deleting it, the import still failed, this time with the following error:

>import backup from location 'hdfs://172.31.234.16:8020/tmp/fulldb12parallel',tag 'full20190702_00212428826064850102';
cli: do_get_servers process type TMID err=0, num_servers=1

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details.

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: java.lang.Exception: doImport thread 120 FAILED with error:1 Error Detail: Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
The snapshot 'cf8492b79f8843b494ee491cefe146a0' already exists in the destination: hdfs://nameservice1/hbase/.hbase-snapshot/cf8492b79f8843b494ee491cefe146a0

Based on the error above, delete the offending snapshot under the hbase directory:

hadoop fs -rmr hdfs://nameservice1/hbase/.hbase-snapshot/cf8492b79f8843b494ee491cefe146a0
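A similarly hedged sketch for the snapshot side: it only prints the inspection and removal commands, with the hdfs://nameservice1/hbase/.hbase-snapshot location taken from the error text above.

```shell
# Print commands to inspect and then remove a leftover HBase snapshot.
# Assumption: the snapshot root is the one reported in the error message.
SNAPSHOT_ROOT="hdfs://nameservice1/hbase/.hbase-snapshot"

snapshot_cleanup() {
  local snap="$1"
  echo "hadoop fs -ls ${SNAPSHOT_ROOT}"             # confirm the snapshot exists
  echo "hadoop fs -rm -r ${SNAPSHOT_ROOT}/${snap}"  # then remove it
}

snapshot_cleanup "cf8492b79f8843b494ee491cefe146a0"
```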

Note: the same method also applies when an earlier import stopped halfway through:

>>import backup from location 'hdfs://172.31.234.16:8020/tmp/full12backup',tag 'full4backup_00212429333584910646';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details.

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: Backup full4backup_00212429333584910646 already imported.
org.apache.hadoop.hbase.pit.BackupRestoreClient.importBackup(BackupRestoreClient.java:4863)
org.apache.hadoop.hbase.pit.BackupRestoreClient.exportOrImportBackup(BackupRestoreClient.java:5001).

--- SQL operation failed with errors.

>>get all backup tags;

--- SQL operation complete.

[trafodion@cs02 ~]$ hadoop fs -ls /user/trafodion/backupsys 
Found 1 items
drwxrwx---   - trafodion hbase          0 2019-07-09 18:06 /user/trafodion/backupsys/full4backup_00212429333584910646
[trafodion@cs02 ~]$ 
[trafodion@cs02 ~]$ 
[trafodion@cs02 ~]$ hadoop fs -rmr /user/trafodion/backupsys/full4backup_00212429333584910646
rmr: DEPRECATED: Please use 'rm -r' instead.
19/07/09 20:48:42 INFO fs.TrashPolicyDefault: Moved: 'hdfs://nameservice1/user/trafodion/backupsys/full4backup_00212429333584910646' to trash at: hdfs://nameservice1/user/trafodion/.Trash/Current/user/trafodion/backupsys/full4backup_00212429333584910646

In addition, sometimes even after the contents under hdfs://nameservice1/hbase/.hbase-snapshot and /user/trafodion/backupsys have been removed, the import may still fail as follows:

SQL>import backup from location 'hdfs://10.19.41.29:8020/tmp/chenlong', tag 'fb_test01_00212438653245090402';

*** ERROR[5050] IMPORT BACKUP command could not be completed. Reason: Error returned from exportOrImportBackup method. See next error for details. [2019-10-24 15:29:33]
*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::exportOrImportBackup returned error HBASE_EXPORT_IMPORT_BACKUP_ERROR(727). Cause: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: java.lang.Exception: doImport thread 943 FAILED with error:1 Error Detail: Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
19/10/24 15:29:32 INFO snapshot.ExportSnapshot: Copy Snapshot Manifest
19/10/24 15:29:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/.hbase-snapshot/.tmp/8a77abfae7f840b932784cbdec540991/.snapshotinfo (inode 9960793): File does not exist. Holder DFSClient_NONMAPREDUCE_418811023_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3755)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3556)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3412)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:688)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apac

Solution: restart YARN! This works because these import/export operations are all executed as MapReduce jobs.
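What "restart YARN" means depends on the distribution. On a Cloudera-managed cluster (as the nameservice1-style paths above suggest) the restart would be done from Cloudera Manager; on a plain Apache Hadoop 2.x install the daemon scripts below apply. This sketch only prints the command sequence so it can be adapted first.

```shell
# Print the YARN restart sequence for a plain Apache Hadoop 2.x layout.
# Assumption: yarn-daemon.sh is on PATH; on a CM-managed cluster, restart
# the YARN service from Cloudera Manager instead of running these directly.
yarn_restart_cmds() {
  echo "yarn-daemon.sh stop resourcemanager"
  echo "yarn-daemon.sh start resourcemanager"
  echo "yarn-daemon.sh stop nodemanager    # run on every NodeManager host"
  echo "yarn-daemon.sh start nodemanager   # run on every NodeManager host"
}

yarn_restart_cmds
```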
