Hadoop 1.1.2 Pseudo-Distributed Installation (VMware + CentOS 6.5 x64)

1. Environment Preparation

1.1 Set the IP Address

[root@hadoop01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

TYPE=Ethernet

UUID=96d74e07-dd76-41c4-b2be-40b7e961766f

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=none

IPADDR=192.168.255.111

PREFIX=24

GATEWAY=192.168.255.2

DNS1=192.168.255.2

DEFROUTE=yes

IPV4_FAILURE_FATAL=yes

IPV6INIT=no

NAME="System eth0"

HWADDR=00:0C:29:A0:0C:A0

LAST_CONNECT=1448919762

Run the command:

[root@hadoop01 ~]#  service network restart

Shutting down interface eth0:                             [  OK  ]

Shutting down loopback interface:                         [  OK  ]

Bringing up loopback interface:                           [  OK  ]

Bringing up interface eth0:  Determining if ip address 192.168.255.111 is already in use for device eth0...

                                                          [  OK  ]

Verify with ifconfig:

[root@hadoop01 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:A0:0C:A0 

          inet addr:192.168.255.111  Bcast:192.168.255.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fea0:ca0/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1

          RX packets:48956 errors:0 dropped:0 overruns:0 frame:0

          TX packets:4699 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:70091382 (66.8 MiB)  TX bytes:341581 (333.5 KiB)

          Interrupt:19 Base address:0x2000

 

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436 Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
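As a quick sanity check, the address reported by ifconfig can be compared against the value configured in ifcfg-eth0. A minimal sketch, run here against a captured sample of the output above; on the live host you would use `ifc_out=$(ifconfig eth0)` instead:

```shell
# Captured sample of the `ifconfig eth0` output shown above.
ifc_out='eth0      Link encap:Ethernet  HWaddr 00:0C:29:A0:0C:A0
          inet addr:192.168.255.111  Bcast:192.168.255.255  Mask:255.255.255.0'

# Pull the IPv4 address out of the old-style ifconfig format.
ip_addr=$(echo "$ifc_out" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
echo "eth0 address: $ip_addr"
```

If `ip_addr` does not match the IPADDR line of ifcfg-eth0, the network restart did not pick up your edit.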

 

1.2 Disable the Firewall

1.2.1 Stop it temporarily

[root@hadoop01 app]# service iptables stop

iptables: Setting chains to policy ACCEPT: filter          [  OK  ]

iptables: Flushing firewall rules:                         [  OK  ]

iptables: Unloading modules:                               [  OK  ]

Verify with service iptables status:

[root@hadoop01 app]# service iptables status

iptables: Firewall is not running.

1.2.2 Disable its automatic startup (permanent)

Run the command chkconfig iptables off:

[root@hadoop01 app]# chkconfig iptables off

Verify with chkconfig --list | grep iptables:

[root@hadoop01 app]# chkconfig --list | grep iptables

iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
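The `chkconfig --list` line can also be checked programmatically: the firewall is fully disabled only if no runlevel shows `:on`. A small sketch, run against a captured sample of the line above; on the live host use `line=$(chkconfig --list | grep iptables)`:

```shell
# Captured sample of the `chkconfig --list | grep iptables` output above.
line='iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off'

# Any ":on" entry means iptables would still start in that runlevel.
if echo "$line" | grep -q ':on'; then
    fw_status="still enabled in some runlevel"
else
    fw_status="disabled in all runlevels"
fi
echo "iptables: $fw_status"
```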

 

1.3 Set the Hostname

Run the commands:

(1) hostname xxxxx (temporary; lost on reboot)

(2) vi /etc/sysconfig/network (permanent)

NETWORKING=yes

HOSTNAME=hadoop01

GATEWAY=192.168.255.2

After a reboot, check with hostname:

[root@hadoop01 ~]# hostname

hadoop01

1.4 Bind the IP to the Hostname

Run vi /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.255.111 hadoop01

Verify with ping hadoop01.
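Besides pinging, you can grep the hosts file directly for the exact IP-to-hostname line. A minimal sketch, demonstrated against a temp copy of the file contents above so it can be run anywhere; point `hosts_file` at /etc/hosts on the real host:

```shell
# Build a temp copy of the /etc/hosts contents from this guide.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.255.111 hadoop01
EOF

# The mapping is present only if the IP and hostname appear on one line.
if grep -qE '^192\.168\.255\.111[[:space:]]+hadoop01' "$hosts_file"; then
    mapping_status=OK
else
    mapping_status=MISSING
fi
echo "hadoop01 mapping: $mapping_status"
rm -f "$hosts_file"
```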

1.5 Set Up Passwordless SSH Login

Run the commands:

(1)

[root@hadoop01 ~]# ssh-keygen -t rsa

[root@hadoop01 ~]# cd /root/.ssh/

[root@hadoop01 .ssh]# ls

id_rsa  id_rsa.pub

(2)

[root@hadoop01 .ssh]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@hadoop01 .ssh]# vi authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu22cnbuq0sAA3B6ifh24JsX86VTd5JEPwCYO4ClFLElw
O8Wk5wnxbiPcZYIoIVDNlVLRzeGSeyj1YwQsW2xo+y7dmDqkAzwANvPfhNZtvdDrfcWj7uL46ZDPmaIF
xs+UN7h5JjpHCUkl5VDvBYXQ75D6SPnU8Uml2SHGcZq4eUWANlC8w5kYRnLkchNu3qsaPQ4MwASoGymQ
2uIwDK0JYAaC49TAbCIhj6RZOo291SuWbsPNCK0P61NVArK3VGDlgxE6RlNwTDFHg7iKq/18Mdepyzh3
hzi6ZqMX8CGLinw0xLbiG6hUWFtpFujXO7JXkQrtUbavWpYA+C6XogeTFw== root@hadoop01

Verify:

[root@hadoop01 ~]# ssh localhost

Last login: Sun Dec  6 06:39:32 2015 from localhost

[root@hadoop01 ~]#
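One detail worth noting: sshd's StrictModes check refuses keys when ~/.ssh or authorized_keys is too permissive, so the directory should be mode 700 and the file 600; appending with `cat >>` also preserves any keys already authorized, unlike the `cp` used above. A sketch under those assumptions, run in a temp dir with a placeholder key line so the real /root/.ssh is untouched:

```shell
# Temp stand-in for /root/.ssh; the pub-key line below is a placeholder for
# the real id_rsa.pub produced by `ssh-keygen -t rsa`.
ssh_dir=$(mktemp -d)
echo "ssh-rsa AAAAB3...placeholder root@hadoop01" > "$ssh_dir/id_rsa.pub"

# Append rather than overwrite, then set the modes sshd expects.
cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
chmod 700 "$ssh_dir"
chmod 600 "$ssh_dir/authorized_keys"

dir_mode=$(stat -c '%a' "$ssh_dir")
key_mode=$(stat -c '%a' "$ssh_dir/authorized_keys")
echo "dir mode: $dir_mode, authorized_keys mode: $key_mode"
```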

 

2. Software Installation

2.1 Install the JDK

[root@hadoop01 ~]# cd /usr/local/

[root@hadoop01 local]# mkdir java

[root@hadoop01 local]# cd java/

2.1.1 Put the JDK archive jdk-7u79-linux-x64.tar.gz in this directory (this is a 64-bit VM, so the JDK must also be 64-bit)

jdk-7u79-linux-x64.tar.gz download: http://pan.baidu.com/s/1nun1haT  password: dw02

[root@hadoop01 java]# tar -zxvf jdk-7u79-linux-x64.tar.gz (extract)

[root@hadoop01 java]# cd jdk1.7.0_79/

[root@hadoop01 jdk1.7.0_79]# pwd

/usr/local/java/jdk1.7.0_79

2.1.2 Add the environment variables

[root@hadoop01 java]# vi /etc/profile

export JAVA_HOME=/usr/local/java/jdk1.7.0_79

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

[root@hadoop01 jdk1.7.0_79]# source /etc/profile

2.1.3 Check that the JDK installed correctly

[root@hadoop01 jdk1.7.0_79]# java -version

java version "1.7.0_79"

Java(TM) SE Runtime Environment (build 1.7.0_79-b15)

Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
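Beyond `java -version`, it helps to confirm that JAVA_HOME itself points at a usable JDK (an executable bin/java underneath it), since Hadoop reads JAVA_HOME rather than PATH. A sketch of that check, demonstrated against a mock layout in a temp dir so it runs anywhere; on the real host set `java_home=/usr/local/java/jdk1.7.0_79` instead:

```shell
# Mock JDK layout standing in for /usr/local/java/jdk1.7.0_79.
java_home=$(mktemp -d)/jdk1.7.0_79
mkdir -p "$java_home/bin"
printf '#!/bin/sh\necho ok\n' > "$java_home/bin/java"
chmod +x "$java_home/bin/java"

# The check itself: JAVA_HOME is usable iff bin/java exists and is executable.
if [ -x "$java_home/bin/java" ]; then jdk_status=OK; else jdk_status=MISSING; fi
echo "JDK check: $jdk_status"
```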

 

2.2 Install and Configure Hadoop 1.1.2

2.2.1 Prepare the installation file

[root@hadoop01 jdk1.7.0_79]# cd /usr/share/app (I like to install frequently used software under /usr/share/app)

Upload hadoop-1.1.2.tar.gz to this directory. Download: http://pan.baidu.com/s/1qXjXn0S  password: kqoq

[root@hadoop01 app]# ls

hadoop-1.1.2.tar.gz

[root@hadoop01 app]# tar -zxvf hadoop-1.1.2.tar.gz

[root@hadoop01 app]# cd hadoop-1.1.2

[root@hadoop01 hadoop-1.1.2]# ls

bin        CHANGES.txt  docs                     hadoop-core-1.1.2.jar         hadoop-test-1.1.2.jar   ivy.xml  LICENSE.txt  sbin     webapps
build.xml  conf         hadoop-ant-1.1.2.jar     hadoop-examples-1.1.2.jar     hadoop-tools-1.1.2.jar  lib      NOTICE.txt   share
c++        contrib      hadoop-client-1.1.2.jar  hadoop-minicluster-1.1.2.jar  ivy                     libexec  README.txt   src

[root@hadoop01 hadoop-1.1.2]# pwd

/usr/share/app/hadoop-1.1.2

2.2.2 Add the environment variables

[root@hadoop01 hadoop-1.1.2]# vi /etc/profile

export HADOOP_HOME=/usr/share/app/hadoop-1.1.2

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/lib

[root@hadoop01 hadoop-1.1.2]# source /etc/profile

2.2.3 Modify the configuration files

Switch to the configuration directory and modify four files: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml.

[root@hadoop01 hadoop-1.1.2]# cd conf/

2.2.3.1 hadoop-env.sh

[root@hadoop01 conf]# vi hadoop-env.sh (around line 9, set the JDK installation directory)

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/usr/local/java/jdk1.7.0_79 (after modification)

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

2.2.3.2 core-site.xml

[root@hadoop01 conf]# vi core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/share/app/hadoop-1.1.2/tmp</value>
    </property>
</configuration>

 

2.2.3.3 hdfs-site.xml

[root@hadoop01 conf]# vi hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

 

2.2.3.4 mapred-site.xml

[root@hadoop01 conf]# vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop01:9001</value>
    </property>
</configuration>
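A typo in any of these `<value>` elements is a common source of startup failures, so it can be worth pulling the values back out after editing. A small sketch that writes a core-site.xml like the one above into a temp dir and extracts the NameNode URI with sed, as a quick sanity check of what you typed; on the real host point `conf_dir` at the conf/ directory instead:

```shell
# Write a core-site.xml matching this guide into a temp dir.
conf_dir=$(mktemp -d)
cat > "$conf_dir/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/share/app/hadoop-1.1.2/tmp</value>
    </property>
</configuration>
EOF

# Pull the hdfs:// URI back out of the <value> element.
namenode_uri=$(sed -n 's/.*<value>\(hdfs:[^<]*\)<\/value>.*/\1/p' "$conf_dir/core-site.xml")
echo "NameNode URI: $namenode_uri"
```

It should print exactly the `fs.default.name` value you intended, here hdfs://hadoop01:9000.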

 

2.2.4 Format the NameNode

[root@hadoop01 conf]# hadoop namenode -format

Warning: $HADOOP_HOME is deprecated.

15/12/06 09:19:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop01/192.168.255.111
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
15/12/06 09:19:13 INFO util.GSet: VM type       = 64-bit
15/12/06 09:19:13 INFO util.GSet: 2% max memory = 19.33375 MB
15/12/06 09:19:13 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/12/06 09:19:13 INFO util.GSet: recommended=2097152, actual=2097152
15/12/06 09:19:13 INFO namenode.FSNamesystem: fsOwner=root
15/12/06 09:19:13 INFO namenode.FSNamesystem: supergroup=supergroup
15/12/06 09:19:13 INFO namenode.FSNamesystem: isPermissionEnabled=false
15/12/06 09:19:13 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/12/06 09:19:13 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/12/06 09:19:14 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/12/06 09:19:14 INFO common.Storage: Image file of size 110 saved in 0 seconds.
15/12/06 09:19:14 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/share/app/hadoop-1.1.2/tmp/dfs/name/current/edits
15/12/06 09:19:14 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/share/app/hadoop-1.1.2/tmp/dfs/name/current/edits
15/12/06 09:19:14 INFO common.Storage: Storage directory /usr/share/app/hadoop-1.1.2/tmp/dfs/name has been successfully formatted.
15/12/06 09:19:14 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.255.111
************************************************************/

2.2.5 Start the Hadoop services

[root@hadoop01 conf]# start-all.sh

Warning: $HADOOP_HOME is deprecated (this warning is harmless, but it is annoying to look at; a fix is given at the end)

starting namenode, logging to /usr/share/app/hadoop-1.1.2/libexec/../logs/hadoop-root-namenode-hadoop01.out
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 85:81:73:a7:56:b4:b0:bd:af:c8:ea:70:ef:f8:f5:2a.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/share/app/hadoop-1.1.2/libexec/../logs/hadoop-root-datanode-hadoop01.out
localhost: starting secondarynamenode, logging to /usr/share/app/hadoop-1.1.2/libexec/../logs/hadoop-root-secondarynamenode-hadoop01.out
starting jobtracker, logging to /usr/share/app/hadoop-1.1.2/libexec/../logs/hadoop-root-jobtracker-hadoop01.out
localhost: starting tasktracker, logging to /usr/share/app/hadoop-1.1.2/libexec/../logs/hadoop-root-tasktracker-hadoop01.out

2.2.6 Check that the startup succeeded

(1) Run jps:

[root@hadoop01 conf]# jps

1407 NameNode
1629 SecondaryNameNode
1817 TaskTracker
1710 JobTracker
1862 Jps
1528 DataNode

(2) Check in a browser: http://hadoop01:50070 and http://hadoop01:50030
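The jps check can also be scripted: a pseudo-distributed Hadoop 1.x node should show all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker). A sketch, run here against a captured sample of the output above; on the live host replace the sample with `jps_out=$(jps)`:

```shell
# Captured sample of the `jps` output shown above.
jps_out='1407 NameNode
1629 SecondaryNameNode
1817 TaskTracker
1710 JobTracker
1862 Jps
1528 DataNode'

# Collect any daemon names missing from the output (-w avoids matching
# "NameNode" inside "SecondaryNameNode").
missing=""
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    echo "$jps_out" | grep -qw "$d" || missing="$missing $d"
done

if [ -z "$missing" ]; then
    echo "all daemons up"
else
    echo "missing:$missing"
fi
```

If anything is listed as missing, check the corresponding .out/.log file under the logs directory printed by start-all.sh.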

 

If these processes all appear, the pseudo-distributed Hadoop installation succeeded! Corrections of any shortcomings are welcome; let's improve together!

 

Appendix:

How to get rid of "Warning: $HADOOP_HOME is deprecated"

1. Comment out the if...fi block in hadoop-config.sh that prints the warning (not recommended).

2. Add an environment variable in the current user's ~/.bash_profile or in /etc/profile:

export HADOOP_HOME_WARN_SUPPRESS=1

[root@hadoop01 conf]# vi /etc/profile

[root@hadoop01 conf]# source /etc/profile
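Editing the profile can be scripted so that re-running it never duplicates the line. A minimal sketch, demonstrated on a temp file so it runs anywhere; on the real host set `profile=/etc/profile` (or ~/.bash_profile):

```shell
# Temp stand-in for /etc/profile.
profile=$(mktemp)
line='export HADOOP_HOME_WARN_SUPPRESS=1'

# Append only if the exact line is not already present (idempotent).
grep -qxF "$line" "$profile" || echo "$line" >> "$profile"
grep -qxF "$line" "$profile" || echo "$line" >> "$profile"  # second run is a no-op

count=$(grep -cxF "$line" "$profile")
echo "occurrences: $count"
```

However many times this runs, the variable is exported exactly once, so `source`-ing the profile stays clean.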
