Warning: $HADOOP_HOME is deprecated.

This article explains how to get rid of the "$HADOOP_HOME is deprecated" warning that appears when starting Hadoop: add the HADOOP_HOME_WARN_SUPPRESS environment variable to .bash_profile and reload the file.

Reposted from: http://blog.csdn.net/yangkai_hudong/article/details/18732421?utm_source=tuicool

I have tried the solution below myself and it works. Beginners, however, are bound to have one question: where exactly is this .bash_profile file? The answer is simple:

The current user's .bash_profile lives under /home/<user>/; the system-wide template that gets copied to new accounts lives in /etc/skel. The file may be hidden by default. If you are wondering how to open a hidden file, the easy way is to open it with vi by its full path, for example: vi /root/.bash_profile
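
If you just want to confirm that the file exists (dot-files are hidden from a plain ls), a quick check could look like this; these are standard shell commands, nothing Hadoop-specific:

Shell:
# List root's dot-files; .bash_profile should show up if it exists
ls -la /root/ | grep bash_profile
# Or test for it directly
[ -f /root/.bash_profile ] && echo "found" || echo "missing"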

In short, the fix takes only two steps (the commands below are for the root user; an equivalent non-interactive sketch follows the steps):

1. Edit the configuration file: vi /root/.bash_profile and add the line export HADOOP_HOME_WARN_SUPPRESS=1

2. Reload the configuration file: source /root/.bash_profile
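
For reference, the same two steps can be done non-interactively, without opening vi. This is a minimal sketch assuming the root user; adjust the path to ~/.bash_profile for other accounts:

Shell:
# Append the suppression variable and reload the profile in one go
echo 'export HADOOP_HOME_WARN_SUPPRESS=1' >> /root/.bash_profile
source /root/.bash_profile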

The cause and the solution are explained in detail below; the material comes from: http://chenzhou123520.iteye.com/blog/1826002

A warning was printed when I started Hadoop. The version I installed is Hadoop 1.0.4, and the warning reads as follows:

Shell:
[root@localhost hadoop-1.0.4]# ./bin/start-all.sh
Warning: $HADOOP_HOME is deprecated.

The explanation found online is that Hadoop itself checks HADOOP_HOME, specifically in bin/hadoop and bin/hadoop-config.sh. hadoop-config.sh contains the following:

Shell:
if [ "$HADOOP_HOME_WARN_SUPPRESS" = "" ] && [ "$HADOOP_HOME" != "" ]; then
  echo "Warning: \$HADOOP_HOME is deprecated." 1>&2
  echo 1>&2
fi
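
In other words, the warning fires only when HADOOP_HOME_WARN_SUPPRESS is empty and HADOOP_HOME is set, so giving the former any non-empty value silences it. You can reproduce the check in an ordinary shell to see how it evaluates against your environment (a standalone sketch, not part of Hadoop itself):

Shell:
# Mimic the hadoop-config.sh check against the current environment
if [ "$HADOOP_HOME_WARN_SUPPRESS" = "" ] && [ "$HADOOP_HOME" != "" ]; then
  echo "warning would be printed"
else
  echo "warning is suppressed"
fi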

There are two ways to deal with this warning:

1. Comment out the if ... fi block shown above in hadoop-config.sh (not recommended)

2. Add an environment variable to the current user's ~/.bash_profile:

export HADOOP_HOME_WARN_SUPPRESS=1

Note: after modifying .bash_profile you need to source it for the change to take effect:

Shell:
[root@localhost ~]# source .bash_profile
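
Before re-running the start script, a quick sanity check confirms the variable is visible in the current shell (it should print 1):

Shell:
[root@localhost ~]# echo $HADOOP_HOME_WARN_SUPPRESS
1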

Once that is done, we can verify the fix by running the start-all.sh script again:

Shell:
[root@localhost hadoop-1.0.4]# ./bin/start-all.sh
starting namenode, logging to /root/hadoop-1.0.4/libexec/../logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /root/hadoop-1.0.4/libexec/../logs/hadoop-root-datanode-localhost.out
localhost: starting secondarynamenode, logging to /root/hadoop-1.0.4/libexec/../logs/hadoop-root-secondarynamenode-localhost.out
starting jobtracker, logging to /root/hadoop-1.0.4/libexec/../logs/hadoop-root-jobtracker-localhost.out
localhost: starting tasktracker, logging to /root/hadoop-1.0.4/libexec/../logs/hadoop-root-tasktracker-localhost.out

The "Warning: $HADOOP_HOME is deprecated" message no longer appears, which means the problem is solved.
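
Independently of the warning, you can double-check that the daemons actually came up with jps, the JDK's process lister. On a single-node Hadoop 1.0.4 setup started via start-all.sh, the expected processes would look roughly like this (the PIDs here are illustrative):

Shell:
[root@localhost hadoop-1.0.4]# jps
2657 NameNode
2789 DataNode
2913 SecondaryNameNode
3054 JobTracker
3187 TaskTracker
3260 Jps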

