Changing Cluster IPs on HDP 2.5

Scenario:
Linux CentOS 6.9, Ambari + HDP + Kerberos.
The cluster currently has three nodes and everything runs normally. The customer's IP addresses have changed, so all of the original IPs must be replaced.

Note: back up the data under the DataNode directories before starting.

1. Stop all services through the Ambari UI

2. Update the hosts files (Windows/Linux)

(1) Update the Linux /etc/hosts (on every node)
[root@hdp39 network-scripts]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.103 hdp39
192.168.22.40 hdp40
192.168.22.41 hdp41
192.168.22.103 hdp39.bigdata.com
192.168.22.40 hdp40.bigdata.com
192.168.22.41 hdp41.bigdata.com

Note: change only the IP addresses, not the hostnames. Many places in the existing environment reference the hostnames, so as long as those names stay the same, none of those places need further changes. The file above shows the hosts content after the update.
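
Every node needs the same file, so here is a minimal sketch to push it out from hdp39 (the hostnames are this cluster's; adjust to yours):

    # copy the updated hosts file to the remaining nodes
    for h in hdp40 hdp41; do
        scp /etc/hosts root@${h}:/etc/hosts
    done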
(2) Update the Windows hosts file

      C:\Windows\System32\drivers\etc\hosts

(3) Update the interface configuration that holds the current IP mapping

        /etc/sysconfig/network-scripts/ifcfg-Auto_eth0

Command to locate the configuration file that contains the current IP:
[root@hdp39 network-scripts]# cd /etc/sysconfig/network-scripts
[root@hdp39 network-scripts]# grep -rn 192.168.22.39           
ifcfg-Auto_eth0_bak:4:IPADDR=192.168.22.39
Binary file .ifcfg-Auto_eth0.swp matches
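
A minimal sketch of the edit itself, with the addresses taken from this environment (substitute your own old and new IPs):

    cd /etc/sysconfig/network-scripts
    # keep a backup outside this directory, so the network service
    # does not pick it up as an extra interface
    cp ifcfg-Auto_eth0 /root/ifcfg-Auto_eth0.orig
    # swap the old static address for the new one
    sed -i 's/^IPADDR=192.168.22.39$/IPADDR=192.168.22.103/' ifcfg-Auto_eth0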

(4) Restart the network service

 Command:
  service network restart (or /etc/init.d/network restart)

[root@hdp39 network-scripts]# service network restart
Shutting down interface Auto_eth0:  Device state: 3 (disconnected)
                                                           [  OK  ]
Shutting down interface Auto_eth0_bak:                     [  OK  ]
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface Auto_eth0:  Active connection state: activating
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/2
state: activated
Connection activated
                                                           [  OK  ]
Bringing up interface Auto_eth0_bak:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3
                                                           [  OK  ]
Bringing up interface eth0:  Error: No suitable device found: no device found for connection 'System eth0'.
                                                           [FAILED] 

Fix: the stale udev rule pins the interface name to the old NIC record, so eth0 can no longer be matched. Remove the rule and reboot:

cd /etc/udev/rules.d
rm 70-persistent-net.rules

Verify that network access works:

ping baidu.com

(5) Reboot the server

 init 6 (or reboot)

(6) Check whether the firewall is running

[root@hdp39 ~]# service iptables status
iptables: Firewall is not running.
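
If iptables turns out to be running, stop it and keep it off across reboots (standard CentOS 6 commands):

    service iptables stop
    # otherwise the firewall comes back after the next reboot
    chkconfig iptables off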

(7) Restart the LDAP service used by Knox

[root@hdp39 ~]# find / -name 'ldap.sh'
    /usr/hdp/2.5.3.0-37/knox/bin/ldap.sh
[root@hdp39 bin]# ./ldap.sh start
    Starting LDAP succeeded with PID 5330. 

(8) Start Ambari and log in

    Start Ambari:
     ambari-server start

    Open the Ambari UI:
     http://<your.ambari.server>:8080

Because Ambari was previously configured with Knox SSO, the browser automatically redirects to the corresponding Knox SSO login page.

To avoid landing in Knox, stop the browser quickly before the redirect completes, so that you stay on the Ambari home page itself at http://<your.ambari.server>:8080.

(9) Through the Ambari UI, update every configuration that contains a hard-coded IP address.

 Update the configuration of YARN, HDFS, MapReduce, ZooKeeper, and any other component whose config files have an IP address written in.
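
To double-check that nothing was missed, a small sketch that greps the deployed client configs for the old address (the directories are the usual HDP config locations and are assumptions; adjust to your install):

    # list any config files that still reference the old IP
    grep -rln '192\.168\.22\.39' /etc/hadoop/conf /etc/zookeeper/conf /etc/hive/conf 2>/dev/null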

Pitfalls encountered:

Pitfall 1: after updating Ambari and the hosts files, a server reboot left the Ambari UI unreachable
Cause 1: the firewall was re-enabled after the reboot.
Cause 2: the LDAP service that Knox depends on was not started.

Reference: the official Knox documentation
http://knox.apache.org/books/knox-0-12-0/user-guide.html

[root@hdp39 bin]# ./ldap.sh start
Starting LDAP succeeded with PID 46528.

Fix: with the cluster running normally, stop Knox first, then update the Knox host IP.


Pitfall 2: Ambari Metrics fails to start
The reported error:

 org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource:
 Unable to connect to HBase store using Phoenix.
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG.

This is usually due to the AMS data being corrupt.

1. Shut down the Ambari Metrics Monitors and Collector via Ambari.
2. Clear out the /var/lib/ambari-metrics-collector directory for a fresh restart.
3. From Ambari -> Ambari Metrics -> Config -> Advanced ams-hbase-site, note the hbase.rootdir and hbase-tmp directories.
4. Delete or move the hbase-tmp and hbase.rootdir directories to an archive folder.

Restart AMS. All services come back online and the graphs start displaying after a few minutes.
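
Steps 2-4 as a shell sketch, run on the Metrics Collector host while AMS is stopped. The two paths below are the embedded-mode defaults and are assumptions; substitute the real values shown in Advanced ams-hbase-site:

    AMS_ROOT=/var/lib/ambari-metrics-collector/hbase       # hbase.rootdir (assumed default)
    AMS_TMP=/var/lib/ambari-metrics-collector/hbase-tmp    # hbase.tmp.dir (assumed default)
    mkdir -p /ams-archive
    # archive instead of deleting, so the data can be restored if needed
    mv "$AMS_ROOT" /ams-archive/hbase-rootdir
    mv "$AMS_TMP" /ams-archive/hbase-tmp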

Reference:
https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html


Pitfall 3: the dfs.data.dir configuration prevents the DataNode from starting

The error, taken from the DataNode log (less /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hdp40.log):

2017-12-20 12:51:21,201 WARN  datanode.DataNode (DataNode.java:checkStorageLocations(2524)) - Invalid dfs.datanode.data.dir /bigdata/hadoop/hdfs/data : 
java.io.FileNotFoundException: File file:/bigdata/hadoop/hdfs/data does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2479)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2521)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
        at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
2017-12-20 12:51:21,203 ERROR datanode.DataNode (DataNode.java:secureMain(2630)) - Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/bigdata/hadoop/hdfs/data" 
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2530)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
        at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
2017-12-20 12:51:21,204 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-12-20 12:51:21,208 INFO  datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: 

Fix:
(1) Confirm that after the IP change the directory really no longer exists on this machine

[root@hdp40 hdfs]# cd /bigdata/hadoop/hdfs/data
-bash: cd: /bigdata/hadoop/hdfs/data: No such file or directory

(2) Manually create the directory

     mkdir -p /bigdata/hadoop/hdfs/data

(3) Recursively change the owner and group of the directory and everything under it

[root@hdp40 hdfs]# chown -R -v hdfs:hadoop data
changed ownership of `data' to hdfs:hadoop
[root@hdp40 hdfs]# ll
total 4
drwxr-xr-x 2 hdfs hadoop 4096 Dec 20 16:07 data

If the permissions are not 755, set them:
[root@hdp40 hdfs]# chmod -R 755 data/

Note: the steps above must be performed on every machine acting as a DataNode, as in the sketch below.
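
A minimal sketch that applies all three steps to every DataNode host in one pass (the hostnames and path are this cluster's; adjust as needed):

    for h in hdp39 hdp40 hdp41; do
        ssh root@${h} 'mkdir -p /bigdata/hadoop/hdfs/data &&
                       chown -R hdfs:hadoop /bigdata/hadoop/hdfs/data &&
                       chmod -R 755 /bigdata/hadoop/hdfs/data'
    done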

After completing the steps above, restart the DataNode; it runs normally. (Oddly, my first restart still failed, but the second attempt came up cleanly, just a bit slower to start.)

Pitfall 4: Ranger fails to start

The error is shown below:

Watcher org.apache.solr.common.cloud.ConnectionManager@7b1083f6 name:ZooKeeperConnection Watcher:hdp39:2181,hdp40:2181,hdp41:2181/infra-solr got event WatchedEvent state:AuthFailed type:None path:null
zkClient received AuthFailed
makePath: /configs/ranger_audits/managed-schema
Error uploading file /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/managed-schema to zookeeper path /configs/ranger_audits/managed-schema
java.io.IOException: Error uploading file /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/managed-schema to zookeeper path /configs/ranger_audits/managed-schema
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:69)
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:59)
    at java.nio.file.Files.walkFileTree(Files.java:2670)
    at java.nio.file.Files.walkFileTree(Files.java:2742)
    at org.apache.solr.common.cloud.ZkConfigManager.uploadToZK(ZkConfigManager.java:59)
    at org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:121)
    at org.apache.ambari.logsearch.solr.commands.UploadConfigZkCommand.executeZkConfigCommand(UploadConfigZkCommand.java:39)
    at org.apache.ambari.logsearch.solr.commands.UploadConfigZkCommand.executeZkConfigCommand(UploadConfigZkCommand.java:29)
    at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperConfigCommand.executeZkCommand(AbstractZookeeperConfigCommand.java:38)
    at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperRetryCommand.createAndProcessRequest(AbstractZookeeperRetryCommand.java:38)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.uploadConfiguration(AmbariSolrCloudClient.java:218)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:469)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /configs
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:294)
    at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:291)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
    at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:291)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:486)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:408)
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:67)
    ... 13 more

Fix:

  • The log above contains Kerberos-authentication errors (KeeperErrorCode = AuthFailed), so the first suspicion was that the servers' clocks were out of sync. Synchronizing the time across all servers resolved the problem.
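
Kerberos rejects requests once clock skew exceeds its tolerance (5 minutes by default), which is why unsynchronized clocks surface as AuthFailed. A minimal sketch of the sync on CentOS 6, run on every node (the NTP server is a placeholder; point it at your own time source):

    service ntpd stop
    # one-shot sync against an NTP server
    ntpdate 0.centos.pool.ntp.org
    # keep ntpd running and enabled so the clocks stay in sync
    service ntpd start
    chkconfig ntpd on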

Pitfall 5: some services start successfully and then fail immediately

  • When a service starts and then stops on its own shortly afterwards, first check whether an instance of it is already running and was never shut down, or whether its port is occupied.
  • Example: the Hive Metastore started and then stopped right away. Checking its port showed a matching process already listening; kill the stale process and restart, as in the sketch below.
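
A quick sketch of the port check (9083 is the default Hive Metastore port; <PID> stands for whatever the listing shows):

    # see whether something already listens on the Metastore port
    netstat -tnlp | grep 9083
    # kill the stale process, then restart the service via Ambari
    kill -9 <PID>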