Notes on setting up Hadoop on Ubuntu 14.04

Hadoop 2.5.0
Install directory: /usr/local/hadoop-2.5.0
The first step is to get ssh localhost working.


PS:
1. If ssh reports something like "ssh: connect to host localhost port 22: Connection refused", the SSH server is not running. Install openssh-server and start it with (sudo) /etc/init.d/ssh start (alternatively: service ssh restart). You can check whether it started successfully with ps -e | grep ssh.
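
A minimal sketch of the whole sequence, assuming the stock Ubuntu 14.04 openssh-server package:

sudo apt-get install openssh-server   # install the SSH server
sudo /etc/init.d/ssh start            # or: sudo service ssh restart
ps -e | grep ssh                      # an sshd process should now be listed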

2. If, after running
       ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
       cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost still fails with "Agent admitted failure to sign using the key", simply run ssh-add (it picks up ~/.ssh/id_rsa by default).
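
For reference, the full passwordless-login sequence as one sketch (the key type and file paths are the defaults used above):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa           # key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize it for logins to this host
ssh-add ~/.ssh/id_rsa                              # hand the key to the running ssh-agent
ssh localhost                                      # should no longer ask for a password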
 
3. If after the two steps above you see "The authenticity of host '(127.0.0.1)' can't be established" and are then still asked for a password (usually this happens after the RSA key on this machine has been changed or regenerated), the cause is file permissions:
.ssh             700
authorized_keys  600
id_rsa           600
id_rsa.pub       644
known_hosts      644
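
A sketch of the corresponding chmod commands (file names are the ssh-keygen defaults):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts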



I. The format command
There are two ways to format. Because Hadoop has been added to the system PATH, the command behaves the same no matter which directory you run it from:
hadoop namenode -format (deprecated)
hdfs namenode -format
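
If the commands are not found, the PATH entry is missing. A sketch of what to append to ~/.bashrc, assuming the install directory above:

export HADOOP_HOME=/usr/local/hadoop-2.5.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin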


II. Starting Hadoop
sbin/start-dfs.sh
sbin/start-yarn.sh

PS:
1. If the following error appears:
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
then you need to set the JAVA_HOME path in hadoop-env.sh, for example:
export JAVA_HOME=/usr/local/jdk1.7.0_51
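
The file lives under etc/hadoop in the install directory; a quick way to check the setting, assuming the layout above:

grep JAVA_HOME /usr/local/hadoop-2.5.0/etc/hadoop/hadoop-env.sh   # should print your export line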

2. Note also that start-all.sh and stop-all.sh are both deprecated, and the start-mapred.sh mentioned in some books has been replaced by start-yarn.sh.

3. start-dfs.sh starts the NameNode, DataNodes and SecondaryNameNode;
start-yarn.sh starts the ResourceManager and NodeManagers. The usual start/stop order is sketched below.
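
A sketch, run from the install directory:

cd /usr/local/hadoop-2.5.0
sbin/start-dfs.sh     # NameNode, DataNode(s), SecondaryNameNode
sbin/start-yarn.sh    # ResourceManager, NodeManager(s)
# ...
sbin/stop-yarn.sh     # stop YARN first
sbin/stop-dfs.sh      # then HDFS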

III. Checking the daemon processes

1. The first method is jps; sample output:

flmeng@Tank:/usr/local/hadoop-2.5.0$ jps
3969 Jps
3732 NodeManager
3045 DataNode
3414 ResourceManager
2887 NameNode
3241 SecondaryNameNode


2. The second method is the web interfaces:
http://localhost:50070   check whether HDFS started successfully (NameNode web UI)
http://localhost:8088    view running applications (ResourceManager, the master)
http://localhost:50030   MapReduce JobTracker (1.x only, see note 0 below)
http://localhost:50075   DataNode web UI
PS:

0. Note that 2.x releases have no JobTracker; its responsibilities were split between the ResourceManager and the per-application ApplicationMaster, while the TaskTracker's role is now played by the NodeManager.

1. I have run into the case where localhost:50070 is reachable but localhost:50030 is not, even though jps shows every process started successfully. On 2.x this is expected, since no JobTracker is listening on 50030 (see note 0).


2. On Linux you can use curl to check whether a port is reachable, for example:
flmeng@Tank:/usr/local/hadoop-2.5.0$ curl localhost:50070
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="1;url=dfshealth.jsp" />
<title>Hadoop Administration</title>
</head>
<body>
<script type="text/javascript">
//<![CDATA[
window.location.href='dfshealth.html';
//]]>
</script>
<h1>Hadoop Administration</h1>
<ul>
<li><a href="dfshealth.jsp">DFS Health/Status</a></li>
</ul>
</body>
</html>


3. netstat -nap shows the state of every port.
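
For the Hadoop ports specifically, a sketch that filters the listening sockets (sudo is only needed to see the owning process names):

sudo netstat -ntlp | grep java     # TCP listeners owned by the JVM daemons
# 50070, 50075, 8088 etc. should appear in the Local Address column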