Hadoop 2.7.2 HA Cluster Installation

1. Downloads

CentOS 7

JDK 1.7 or 1.8

hadoop-2.7.2.tar.gz

2. Preparation

Host planning

IP               Hostname   User     Roles
192.168.116.134  cancer01   hadoop   namenode, resourcemanager, zkfc
192.168.116.136  cancer02   hadoop   namenode, resourcemanager, zkfc
192.168.116.135  cancer03   hadoop   datanode, nodemanager, journalnode, quorumpeermain
192.168.116.131  cancer04   hadoop   datanode, nodemanager, journalnode, quorumpeermain
192.168.116.128  cancer05   hadoop   datanode, nodemanager, journalnode, quorumpeermain

Add the user and group, on every machine

useradd hadoop

passwd hadoop

Set the hostname, on every machine

vim /etc/sysconfig/network

vim /etc/hosts

hostnamectl set-hostname cancer01
hostnamectl set-hostname cancer02
hostnamectl set-hostname cancer03
hostnamectl set-hostname cancer04
hostnamectl set-hostname cancer05

Add host entries on every machine (/etc/hosts)

192.168.116.134    cancer01
192.168.116.136    cancer02
192.168.116.135    cancer03
192.168.116.131    cancer04
192.168.116.128    cancer05

Disable the firewall on every machine

systemctl stop firewalld.service        # stop firewalld on CentOS 7

systemctl disable firewalld.service     # prevent firewalld from starting at boot on CentOS 7

 

Disable Transparent Huge Pages on every machine

Check the current state

cat /sys/kernel/mm/transparent_hugepage/enabled

Expected output

[always] madvise never

Disable permanently

vim /etc/rc.local

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Or run the following commands directly:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/defrag

Reboot the machine, then check the state again

cat /sys/kernel/mm/transparent_hugepage/enabled

Expected output

always madvise [never]

 

Grant sudo rights to the hadoop user, on every machine

vim /etc/sudoers

hadoop ALL=(ALL)       ALL

 

3. Configure SSH

Set up passwordless SSH login between machines 01, 02, 03, 04 and 05.

Edit the sshd configuration, on every machine

vim /etc/ssh/sshd_config

Uncomment the following lines

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys

Switch to the hadoop user

su hadoop

Check whether SSH is installed, on every machine

rpm -qa | grep ssh

If it is missing, install it with one of the following:

apt-get install openssh-server

yum install ssh

On every machine, as the hadoop user, generate a key pair in the home directory (press Enter through all prompts; do not set a passphrase)

ssh-keygen -t rsa

Create the authorized_keys file on cancer01

scp ~/.ssh/id_rsa.pub hadoop@cancer01:/home/hadoop/.ssh/authorized_keys

Manually append the contents of the id_rsa.pub files from the other four machines to the authorized_keys file on cancer01.

From cancer01, copy authorized_keys into the .ssh directory of the hadoop user on every machine that needs to be reached

scp ~/.ssh/authorized_keys hadoop@cancer02:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer03:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer04:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer05:/home/hadoop/.ssh/authorized_keys
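
If ssh-copy-id is available, the manual collection and distribution above can be replaced by the following sketch, run as the hadoop user on every machine (it assumes password authentication is still enabled at this point):

for h in cancer01 cancer02 cancer03 cancer04 cancer05; do
  ssh-copy-id hadoop@"$h"    # appends this machine's public key to $h's authorized_keys
done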

 

Set permissions, on every machine

chmod 600 ~/.ssh/authorized_keys

Check from each machine that login works without a password

ssh localhost
ssh hadoop@cancer01
ssh hadoop@cancer02
ssh hadoop@cancer03
ssh hadoop@cancer04
ssh hadoop@cancer05

Problem: "The authenticity of host 'cancer04 (192.168.1.116)' can't be established. ECDSA key fingerprint is 86:c2:6b:12:68:b0:f8:5d:9b:96:35:e0:f1:8e:75:3a. Are you sure you want to continue connecting (yes/no)?" Answer yes.

Solution: ssh -o StrictHostKeyChecking=no 192.168.116.128

 

4. Install the JDK, on every machine

Download jdk-8u101-linux-x64.rpm and upload it with rz.

1) Before installing, it is best to remove the OpenJDK that ships with Linux (see the sketch after this list):

(1) Run java -version; it shows the bundled OpenJDK. Run rpm -qa | grep jdk to find the OpenJDK package names;

(2) Run rpm -e --nodeps <OpenJDK package name> to remove them;

2) Run rpm -ivh jdk-8u101-linux-x64.rpm;

3) Run vim /etc/profile and append the following lines:

export JAVA_HOME=/usr/java/jdk1.8.0_101

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

4) Run source /etc/profile to apply the changes;

5) Run java -version and check the output.
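
For step 1, a minimal sketch that removes all preinstalled OpenJDK packages in one pass (it assumes their package names contain "openjdk"):

rpm -qa | grep -i openjdk | xargs -r rpm -e --nodeps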

 

Alternatively, install via yum:

wget http://download.oracle.com/otn-pub/java/jdk/8u72-b15/jdk-8u72-linux-x64.rpm?AuthParam=1453706601_fb0540fefe22922be611d401fbbf4d75

Install the rpm with yum

yum localinstall jdk-8u72-linux-x64.rpm

Set the JAVA_HOME environment variable (adjust the path to the JDK version you actually installed)

vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_101

 

5. Install ZooKeeper

Upload zookeeper-3.4.9.tar.gz to cancer03, cancer04 and cancer05.

su hadoop

Extract it into /usr/local on each of the three machines

tar -xvf zookeeper-3.4.9.tar.gz

cd /usr/local/zookeeper-3.4.9/conf

cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg and change dataDir

sudo vim zoo.cfg

dataDir=/usr/local/zookeeper-3.4.9/data

Then add the following three lines

server.1=cancer03:2888:3888
server.2=cancer04:2888:3888
server.3=cancer05:2888:3888

cd ../
mkdir data
touch data/myid
echo 1 > data/myid
more data/myid
1

# On cancer04 and cancer05, apply the same configuration; only the myid value differs:
echo 2 > data/myid      # cancer04
echo 3 > data/myid      # cancer05

Configure the environment variables (on all three machines)

vim /etc/profile

export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.9

export PATH=$PATH:$ZOOKEEPER_HOME/bin
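
A minimal sketch that writes the right myid on each of the three ZooKeeper machines, keyed on the hostname (it assumes the hostnames above and the dataDir configured in zoo.cfg):

case "$(hostname)" in
  cancer03) echo 1 > /usr/local/zookeeper-3.4.9/data/myid ;;
  cancer04) echo 2 > /usr/local/zookeeper-3.4.9/data/myid ;;
  cancer05) echo 3 > /usr/local/zookeeper-3.4.9/data/myid ;;
esac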

 

6. Install Hadoop

Download Hadoop only on cancer01; configure it there and then copy it to the other nodes.

Download hadoop-2.7.2.tar.gz

wget http://apache.fayea.com/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

Move hadoop-2.7.2.tar.gz to /usr/local and extract it

cd /usr/local

tar -xvf hadoop-2.7.2.tar.gz

chown -R hadoop:hadoop ./hadoop-2.7.2    (on every machine)

ln -s /usr/local/hadoop-2.7.2  /usr/local/hadoop    (on every machine)

Check the Hadoop version

/usr/local/hadoop/bin/hadoop version

Configure the environment variables (on every machine)

vim /etc/profile

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile

Create the following directories on the local filesystem of cancer01 and cancer02:

mkdir /home/hadoop/name
mkdir /home/hadoop/data
mkdir /home/hadoop/temp
mkdir /home/hadoop/edits

Create the following directories on the local filesystem of cancer03, cancer04 and cancer05:

mkdir /home/hadoop/data
mkdir /home/hadoop/temp
mkdir /home/hadoop/jn
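
Equivalently, a one-line sketch per group of machines (it assumes the hadoop user's home directory is /home/hadoop):

mkdir -p /home/hadoop/{name,data,temp,edits}    # on cancer01 and cancer02
mkdir -p /home/hadoop/{data,temp,jn}            # on cancer03, cancer04 and cancer05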

 

7. Configure Hadoop

Seven configuration files are involved:

/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/yarn-env.sh
/usr/local/hadoop/etc/hadoop/slaves
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml
/usr/local/hadoop/etc/hadoop/yarn-site.xml

Edit the configuration (even if JAVA_HOME is already set system-wide, it must also be set in the env.sh files)

On cancer01, go to /usr/local/hadoop/etc/hadoop

Config file 1: hadoop-env.sh

Set JAVA_HOME

export JAVA_HOME=/usr/java/jdk1.8.0_101

Config file 2: yarn-env.sh

Set JAVA_HOME

export JAVA_HOME=/usr/java/jdk1.8.0_101

Config file 3: slaves (this file lists all slave nodes)

cancer03
cancer04
cancer05

Config file 4: core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cancer</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/temp</value>
  </property>

  <!-- ====== Trash settings ====== -->
  <property>
    <!-- How often (minutes) the checkpointer running on the NameNode creates a checkpoint from Current; default 0 means it follows fs.trash.interval -->
    <name>fs.trash.checkpoint.interval</name>
    <value>0</value>
  </property>
  <property>
    <!-- How many minutes a checkpoint under .Trash is kept before deletion; the server-side setting takes precedence over the client's; default 0 means never delete -->
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>

  <!-- HDFS superuser group
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>root</value>
  </property>
  -->
</configuration>

Config file 5: hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/data</value>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>file:/home/hadoop/edits</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

  <!-- ======= HDFS HA configuration ======= -->
  <property>
    <name>dfs.nameservices</name>
    <value>cancer</value>
  </property>
  <property>
    <!-- NameNode IDs; this version supports at most two NameNodes -->
    <name>dfs.ha.namenodes.cancer</name>
    <value>nn1,nn2</value>
  </property>
  <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID] - RPC address -->
  <property>
    <name>dfs.namenode.rpc-address.cancer.nn1</name>
    <value>cancer01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cancer.nn2</name>
    <value>cancer02:8020</value>
  </property>
  <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID] - HTTP address -->
  <property>
    <name>dfs.namenode.http-address.cancer.nn1</name>
    <value>cancer01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cancer.nn2</name>
    <value>cancer02:50070</value>
  </property>

  <!-- ========= NameNode edit log replication ========= -->
  <!-- Ensures the edit log can be recovered -->
  <property>
    <name>dfs.journalnode.http-address</name>
    <value>0.0.0.0:8480</value>
  </property>
  <property>
    <name>dfs.journalnode.rpc-address</name>
    <value>0.0.0.0:8485</value>
  </property>
  <property>
    <!-- JournalNode servers; the QuorumJournalManager stores the edit log here -->
    <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; the port matches dfs.journalnode.rpc-address -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://cancer03:8485;cancer04:8485;cancer05:8485/cancer</value>
  </property>
  <!-- Where the JournalNodes store their data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>file:/home/hadoop/jn</value>
  </property>

  <!-- ========= Client failover ========= -->
  <property>
    <!-- Strategy DataNodes and clients use to identify the active NameNode -->
    <name>dfs.client.failover.proxy.provider.cancer</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- ========= NameNode fencing ========= -->
  <!-- Prevents the old NameNode from coming back after a failover and leaving two active services -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <!-- Milliseconds after which fencing is considered to have failed -->
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>

  <!-- ======= NameNode automatic failover via ZKFC and ZooKeeper ======= -->
  <!-- Enable automatic failover based on ZooKeeper and the ZKFC process, which monitors whether the NameNode has died -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>cancer03:2181,cancer04:2181,cancer05:2181</value>
  </property>
  <property>
    <!-- ZooKeeper session timeout in milliseconds -->
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>2000</value>
  </property>

  <!--
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>cancer01:9001</value>
  </property>
  -->
</configuration>

Config file 6: mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>cancer01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>cancer01:19888</value>
  </property>
</configuration>

Config file 7: yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.nodemanager.localizer.address</name>
    <value>0.0.0.0:23344</value>
  </property>
  <property>
    <name>yarn.nodemanager.webapp.address</name>
    <value>0.0.0.0:23999</value>
  </property>

  <!-- =========== HA configuration =========== -->
  <!-- ResourceManager configs -->
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable embedded automatic failover; in an HA setup it works with the ZKRMStateStore to handle fencing -->
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <!-- Cluster name, so HA elections target the right cluster -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- The RM id can be set individually on each node (optional)
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm2</value>
  </property>
  -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
    <value>5000</value>
  </property>
  <!-- ZKRMStateStore configuration -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>cancer03:2181,cancer04:2181,cancer05:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk.state-store.address</name>
    <value>cancer03:2181,cancer04:2181,cancer05:2181</value>
  </property>
  <!-- RPC address clients use to reach the RM (applications manager interface) -->
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>cancer01:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>cancer02:23140</value>
  </property>
  <!-- RPC address application masters use to reach the RM (scheduler interface) -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>cancer01:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>cancer02:23130</value>
  </property>
  <!-- RM admin interface -->
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>cancer01:23141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>cancer02:23141</value>
  </property>
  <!-- RPC port NodeManagers use to reach the RM -->
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>cancer01:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>cancer02:23125</value>
  </property>
  <!-- RM web application addresses -->
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>cancer01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>cancer02:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>cancer01:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>cancer02:23189</value>
  </property>

  <!--
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>cancer01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>cancer01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>cancer01:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>cancer01:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>cancer01:8088</value>
  </property>
  -->
</configuration>

 

Copy Hadoop to the other nodes

scp -r /usr/local/hadoop hadoop@cancer02:/usr/local/
scp -r /usr/local/hadoop hadoop@cancer03:/usr/local/
scp -r /usr/local/hadoop hadoop@cancer04:/usr/local/
scp -r /usr/local/hadoop hadoop@cancer05:/usr/local/
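
Equivalently, a small loop sketch, run on cancer01 as the hadoop user:

for h in cancer02 cancer03 cancer04 cancer05; do
  scp -r /usr/local/hadoop hadoop@"$h":/usr/local/
done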

 

8. Start ZooKeeper

Start ZooKeeper on cancer03, cancer04 and cancer05

cd /usr/local/zookeeper-3.4.9/bin

./zkServer.sh start

jps

./zkServer.sh status
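
To confirm the quorum is healthy, a quick sketch using ZooKeeper's four-letter "stat" command (it assumes nc is installed; expect one leader and two followers):

for h in cancer03 cancer04 cancer05; do
  echo "$h: $(echo stat | nc "$h" 2181 | grep Mode)"
done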

 

9. Start the JournalNodes

Start the JournalNode on cancer03, cancer04 and cancer05

cd /usr/local/hadoop/sbin

hadoop-daemon.sh start journalnode

 

10. Format the NameNode

On cancer01:

cd /usr/local/hadoop/bin

hadoop namenode -format

 

11. Sync the NameNode metadata to the standby

cd /home/hadoop

scp -r name hadoop@cancer02:/home/hadoop/
scp -r edits hadoop@cancer02:/home/hadoop/
scp -r temp hadoop@cancer02:/home/hadoop/
scp -r data hadoop@cancer02:/home/hadoop/
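
Alternatively, instead of copying the directories by hand, the standby NameNode can bootstrap itself once the active NameNode has been formatted and started; a sketch, run on cancer02:

hdfs namenode -bootstrapStandby    # pulls the formatted namespace from the active NameNode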

 

12. Initialize ZKFC (on cancer01)

hdfs zkfc -formatZK

 

13. Start HDFS

On cancer01, start HDFS across the cluster

cd /usr/local/hadoop/sbin

start-dfs.sh

Alternatively, the daemons can be started one by one (namenode and zkfc on cancer01/cancer02; datanode and journalnode on cancer03-05)

cd /usr/local/hadoop/sbin

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
hadoop-daemon.sh start journalnode
hadoop-daemon.sh start zkfc

 

14. Verify HDFS

On cancer01 and cancer02, jps should show:

DFSZKFailoverController
NameNode

On cancer03, cancer04 and cancer05, jps should show:

JournalNode
DataNode
QuorumPeerMain

Web UIs:

http://cancer01:50070
http://cancer02:50070
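
The active/standby state of the two NameNodes can also be checked from the command line (a sketch, using the nameservice and NameNode IDs configured above):

hdfs haadmin -getServiceState nn1    # expect "active" or "standby"
hdfs haadmin -getServiceState nn2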

 

15. Start YARN

On cancer01, start YARN across the cluster

cd /usr/local/hadoop/sbin

start-yarn.sh

On the standby machine cancer02, start the ResourceManager

yarn-daemon.sh start resourcemanager

Alternatively, start the ResourceManager individually on cancer01 and cancer02

yarn-daemon.sh start resourcemanager

and the NodeManager individually on cancer03, cancer04 and cancer05

yarn-daemon.sh start nodemanager

 

16. Verify YARN

On cancer01 and cancer02, jps should show:

ResourceManager

On cancer03, cancer04 and cancer05, jps should show:

NodeManager

Web UIs:

http://cancer01:8088/cluster
http://cancer01:8088/cluster/cluster

Cluster monitoring

hdfs dfsadmin -report
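
Similarly, the ResourceManager HA state can be checked with (a sketch, using the rm-ids configured above):

yarn rmadmin -getServiceState rm1    # expect "active" or "standby"
yarn rmadmin -getServiceState rm2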

 

17. Prepare the debugging environment

On Windows 7, extract hadoop-2.7.2 to C:\princetechs\servers\hadoop-2.7.2 (referred to as $HADOOP_HOME below).

Add the following environment variables on Windows 7:

HADOOP_HOME=C:\princetechs\servers\hadoop-2.7.2
HADOOP_BIN_PATH=%HADOOP_HOME%\bin
HADOOP_PREFIX=C:\princetechs\servers\hadoop-2.7.2

In addition, append ;%HADOOP_HOME%\bin to the end of the PATH variable.

 

18. Build the Eclipse plugin

Download: https://github.com/winghc/hadoop2x-eclipse-plugin

Edit libraries.properties under hadoop2x-eclipse-plugin-2.6.0\ivy.

1) cd into the hadoop2x-eclipse-plugin-master directory

2) Run ant jar

On the command line, run:

ant jar -Dversion=2.6.0 -Declipse.home=[your Eclipse directory] -Dhadoop.home=[your Hadoop directory]

ant jar -Dversion=2.6.0 -Declipse.home=C:\sts-3.6.0.RELEASE -Dhadoop.home=E:\servers\hadoop-2.7.2

During the build you may get errors about missing jars. You can download the corresponding jars, or take them from hadoop-2.7.2/share/hadoop/common/lib; that directory has the same jars with different version numbers, and simply renaming them to the names ant expects also works.

 

19. Configure the Eclipse plugin

Download the 64-bit Windows native package for Hadoop 2.6 (hadoop.dll, winutils.exe).

Under hadoop-common-project\hadoop-common\src\main\winutils in the hadoop-2.7.2 source there is a Visual Studio project; building it produces these files. Of the outputs, hadoop.dll and winutils.exe are the most useful: copy winutils.exe into $HADOOP_HOME\bin and hadoop.dll into %windir%\system32 (this mainly keeps the plugin from throwing odd errors such as null reference exceptions).

 

Install the hadoop-eclipse-plugin

1) Put the jar into the Eclipse folder

Copy the freshly built hadoop-eclipse-plugin-2.6.0.jar into the plugins folder of the Eclipse directory, then restart Eclipse.

If the plugin installed successfully, a DFS Locations node appears in the Project Explorer.

2) Add the Hadoop installation directory

Open Window -> Preferences, click the Hadoop Map/Reduce entry, and point it at the hadoop-2.7.2 folder.

3) Configure Map/Reduce Locations

Click Window -> Show View -> Other -> MapReduce Tools -> Map/Reduce Locations, then click OK.

Open the new Map/Reduce Locations tab and click the small elephant button on the right.

A New Hadoop Location window pops up; fill in the fields as follows.

The 9001 port on the left must match the value of dfs.namenode.secondary.http-address in your hdfs-site.xml (see the earlier configuration); the 9000 port on the right must match the value of fs.defaultFS in your core-site.xml.

Location name: any name you like.

Map/Reduce (V2) Master Host: the IP address of the Hadoop master in the VM; the port below it corresponds to the port set by dfs.datanode.ipc.address in hdfs-site.xml.

DFS Master Port: the port specified by fs.defaultFS in core-site.xml.

User name: must match the user that runs Hadoop in the VM. I installed and run Hadoop 2.7.2 as hadoop, so I enter hadoop here; if you installed as root, change it to root.

Once these parameters are set, click Finish and Eclipse knows how to connect to Hadoop. If all goes well, the HDFS directories and files appear in the Project Explorer panel.

Right-click a file and try to delete it. The first attempt usually fails with a message that boils down to insufficient permissions, because the current Windows 7 login user is not the user running Hadoop in the VM. There are many fixes, for example creating a hadoop administrator account on Windows 7, logging in as hadoop, and then using Eclipse, but that is tedious. The simplest workaround:

Add to hdfs-site.xml:

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

Then, inside the VM, run hadoop dfsadmin -safemode leave

To be safe, also run hadoop fs -chmod 777 /

In short, this turns Hadoop's permission checks off completely (acceptable while learning, but never do this in production). Finally restart Hadoop, go back to Eclipse, and repeat the delete; it should now work.

If no window pops up after clicking the elephant button, open Window -> Show View -> Other -> General -> Error Log and look for errors. If there is a NoClassDefFoundError, find the corresponding jar, put it into the lib directory of the previously built hadoop-eclipse-plugin-2.6.0.jar, and add its entry to Bundle-ClassPath in the jar's META-INF/MANIFEST.MF.

 

20. Run WordCount

Create a new project and choose Map/Reduce Project.

Click Next through the rest of the wizard, then add a WordCount.java with the following code:

package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values)
                sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i)
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
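
Besides running it from Eclipse, the same class can be packaged into a jar and submitted to the cluster; a sketch (the jar name and HDFS paths are just examples):

hadoop jar wordcount.jar yjmyzz.WordCount /jimmy/input /jimmy/output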

 

Then add a log4j.properties with the following contents (so that the various outputs are easy to see once the job runs):

log4j.rootLogger=INFO, stdout
#log4j.logger.org.springframework=INFO
#log4j.logger.org.apache.activemq=INFO
#log4j.logger.org.apache.activemq.spring=WARN
#log4j.logger.org.apache.activemq.store.journal=INFO
#log4j.logger.org.activeio.journal=INFO
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n

 

The final project simply contains WordCount.java and log4j.properties.

You can now hit Run; of course it will not succeed yet, because WordCount has not been given any input arguments.

Set the run parameters

WordCount reads a file, counts the words in it, and writes the result to another directory, so it needs two arguments. In the run configuration, under Program arguments, enter:

hdfs://172.28.20.xxx:9000/jimmy/input/README.txt
hdfs://172.28.20.xxx:9000/jimmy/output/

Adjust these to your own setup (mainly, replace the IP with your VM's IP). Note that if input/README.txt does not exist yet, upload it manually first (see the sketch below), and that /output/ must not exist, otherwise the job fails at the end when it finds the target directory already there. Once that is done, set a breakpoint wherever you like and you can finally debug.
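
A small sketch for preparing the input and clearing the output directory before each run (the paths match the arguments above; adjust the IP/nameservice to your cluster):

hdfs dfs -mkdir -p /jimmy/input
hdfs dfs -put README.txt /jimmy/input/       # upload the file to count
hdfs dfs -rm -r -f /jimmy/output             # the output directory must not exist before the job runs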

 
