Hadoop 2.6.2 Pseudo-Distributed Installation

I. Installation Environment
Hardware: virtual machine
Operating system: CentOS 6.5, 64-bit
IP: 192.168.1.105 (adjust to your own environment)
Hostname: hadoop

Installation user: root


II. Installing the JDK
Install JDK 1.7 or later. Here jdk-7u75-linux-x64.gz is used.

1. Extract the archive: tar -zxvf jdk-7u75-linux-x64.gz (this creates jdk1.7.0_75; rename it to jdk1.7 so it matches the paths below, e.g. mv jdk1.7.0_75 jdk1.7)

2. chmod u+x jdk1.7
3. Add the following to ~/.bash_profile (vim ~/.bash_profile):
export JAVA_HOME=/home/zfh/tools/jdk1.7
export JAVA_BIN=/home/zfh/tools/jdk1.7/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
4. Apply the environment variables: source ~/.bash_profile
5. Verify the installation: java -version
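If the environment variables are in effect, java -version should report the installed release, for example a line like:

java version "1.7.0_75"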

III. Configuring Passwordless SSH Login

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Verify: ssh localhost
You should be logged in without being prompted for a password.

Note: it is best to run these commands so the keys end up in ~/.ssh/; otherwise, permission problems can cause ssh to still ask for a password.
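If ssh localhost still asks for a password, the usual culprit is file permissions; as a sketch (assuming the default OpenSSH setup), the following normally fixes it:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys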

IV. Setting the Hostname

1. Edit the hosts file: vi /etc/hosts

2. Make sure it contains the following entries:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.105 hadoop
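To confirm the mapping is in effect, the hostname should resolve to the configured address, for example:

ping -c 1 hadoop    # should show 192.168.1.105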

V. Configuring a Static IP (Optional)

1. Run ifconfig to see which network interface you are using.

2. Edit the interface configuration: vi /etc/sysconfig/network-scripts/ifcfg-eth0 (eth0 is the first NIC; use eth1 for a second NIC, or wlan0 for wireless)

Modify the IP settings as follows:
DEVICE=eth0            # eth1 if it is the second NIC
BOOTPROTO=static
IPADDR=192.168.0.11    # the IP address you want to set
NETMASK=255.255.255.0  # subnet mask
GATEWAY=192.168.0.1    # gateway
ONBOOT=yes
NM_CONTROLLED=no
Then restart the network service:
service network restart
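After the restart, a quick check that the new address is active:

ifconfig eth0    # the inet addr line should show the IP you configured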

VI. Disabling the Firewall (Optional)

1. service iptables stop (stops the firewall immediately)

2. chkconfig iptables off (keeps the firewall disabled after reboot)
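To confirm the firewall really is off (standard CentOS 6 service commands):

service iptables status      # should report that the firewall is not running
chkconfig --list iptables    # all runlevels should show "off"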

VII. Installing Hadoop 2.6.2

1. Extract the package: tar -zxvf hadoop-2.6.2.tar.gz (this creates the hadoop-2.6.2 directory)

2. Set the environment variables: vi ~/.bash_profile

export HADOOP_HOME=/home/zfh/apache/hadoop-2.6.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin    # without this, the scripts in sbin may not be found

3. Apply the environment variables: source ~/.bash_profile
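A quick check that the variables are in effect (assuming the paths above):

hadoop version    # should print "Hadoop 2.6.2"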

4. Go to the $HADOOP_HOME/etc/hadoop directory and edit the configuration files (hadoop-env.sh and the others). The files involved are:
hadoop-2.6.2/etc/hadoop/hadoop-env.sh 
hadoop-2.6.2/etc/hadoop/yarn-env.sh 
hadoop-2.6.2/etc/hadoop/core-site.xml 
hadoop-2.6.2/etc/hadoop/hdfs-site.xml 
hadoop-2.6.2/etc/hadoop/mapred-site.xml 
hadoop-2.6.2/etc/hadoop/yarn-site.xml


1) Configure hadoop-env.sh

# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/home/zfh/tools/jdk1.7
2) Configure yarn-env.sh

#export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/home/zfh/tools/jdk1.7
3) Configure core-site.xml
Add the following:


<configuration>
 <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop:9000</value>
    <description>HDFS URI: filesystem://namenode-host:port</description>
</property>


<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
    <description>Local Hadoop temporary directory on the namenode</description>
</property>
</configuration>
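Since hadoop.tmp.dir points to /root/hadoop/tmp, it does no harm to create the directory in advance:

mkdir -p /root/hadoop/tmp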
4) Configure hdfs-site.xml
Add the following:
<configuration>
<!-- hdfs-site.xml -->
<property>
    <name>dfs.name.dir</name>
    <value>file:/root/hadoop/hdfs/name</value>
    <description>Where the namenode stores the HDFS namespace metadata</description>
</property>


<property>
    <name>dfs.data.dir</name>
    <value>file:/root/hadoop/hdfs/data</value>
    <description>Physical storage location of data blocks on the datanode</description>
</property>


<property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Number of replicas; the default is 3, and it should not exceed the number of datanodes</description>
</property>
</configuration>

Note: the file: prefix is required; without it an error is reported.
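Likewise, the name and data directories referenced above can be created in advance:

mkdir -p /root/hadoop/hdfs/name /root/hadoop/hdfs/data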

5) Configure mapred-site.xml
Add the following:

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
</configuration>
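Note: the Hadoop 2.6.2 distribution normally ships only mapred-site.xml.template in etc/hadoop; if mapred-site.xml is missing, create it from the template first:

cp mapred-site.xml.template mapred-site.xml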
6) Configure yarn-site.xml
Add the following:
<configuration>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
</configuration>


5. Starting Hadoop
1) Format the namenode

hadoop namenode -format

Note: hdfs namenode –format seemed to cause a problem for me; Hadoop reported errors on startup (log excerpt below), and reformatting with hadoop namenode -format fixed it. (If the dash in –format was actually an en dash rather than a plain hyphen, the option would not have been recognized, which may explain the failure.)

2016-08-14 09:55:58,633 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:55:59,634 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:00,635 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:01,636 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:02,637 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:03,638 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:04,639 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:05,640 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:06,641 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:07,641 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:07,643 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop/192.168.1.105:9000
2016-08-14 09:56:13,645 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-14 09:56:14,645 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.105:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

2) Start Hadoop

Go to the hadoop/sbin directory and run: start-all.sh
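Note: in Hadoop 2.x start-all.sh is deprecated (it prints a warning and simply calls the other start scripts); starting HDFS and YARN separately works just as well:

start-dfs.sh
start-yarn.sh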

3) Check the running processes: jps
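For a healthy pseudo-distributed setup, jps should list roughly the following processes (PIDs omitted):

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps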

4) Access the Hadoop web UIs:

1. YARN: http://localhost:8088

2. NameNode: http://localhost:50070
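As a final smoke test, the bundled example job can be run; the jar path below assumes the stock 2.6.2 layout under $HADOOP_HOME:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 10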


