Hadoop 2.6.0 Pseudo-Distributed Installation on Mac


1. Configure passwordless SSH

(1) Enable SSH on the Mac (System Preferences > Sharing > Remote Login).

(2) With SSH enabled, run ssh-keygen -t rsa in a terminal; whenever it prompts for a passphrase, just press Enter. This generates an RSA public/private key pair. On success you will see output like:

Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/jia/.ssh/id_rsa.
Your public key has been saved in /Users/jia/.ssh/id_rsa.pub.
The key fingerprint is:
d4:85:aa:83:ae:db:50:48:0c:5b:dd:80:bb:fa:26:a7 jia@JIAS-MacBook-Pro.local
The key's randomart image is:
+--[ RSA 2048]----+
|. .o.o     ..    |
| =. . .  ...     |
|. o.    ...      |
| ...   ..        |
|  .... .S        |
|  ... o          |
| ...   .         |
|o oo.            |
|E*+o.            |
+-----------------+

Then change into the .ssh directory and import the public key:

cd ~/.ssh
cat id_rsa.pub >> authorized_keys

(>> appends, so any keys already in authorized_keys are preserved.) You can verify the setup with ssh localhost, which should now log in without prompting for a password.
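The two manual steps above (generate the key pair, import the public key) can be exercised non-interactively; here is a sketch that runs them in a throwaway temporary directory so it does not touch your real ~/.ssh:

```shell
# Sketch: generate an RSA key pair and build authorized_keys in a temp dir,
# mirroring the manual steps without modifying ~/.ssh.
tmpdir=$(mktemp -d)

# -N "" sets an empty passphrase (the "just press Enter" step); -q is quiet
ssh-keygen -t rsa -N "" -f "$tmpdir/id_rsa" -q

# Import the public key; >> appends, so existing keys would be kept
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"
```

For your real ~/.ssh the same commands apply, minus the temp-dir indirection.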

2. Configuration files

All of the configuration files below live in $HADOOP_HOME/etc/hadoop.

 

(1) hadoop-env.sh

Add the following:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home

export HADOOP_HOME=/Users/ycc/workspace/hadoop-2.6.0

export PATH=$PATH:$HADOOP_HOME/bin

export YARN_HOME=/Users/ycc/workspace/hadoop-2.6.0

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export YARN_CONF_DIR=$YARN_HOME/etc/hadoop
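If you are unsure what to put in JAVA_HOME, macOS ships a helper that prints the active JDK path; the sketch below uses it when available and otherwise falls back to resolving the java binary (it assumes some JDK is installed):

```shell
# Locate JAVA_HOME: macOS provides /usr/libexec/java_home; on other systems
# resolve the real path of the java binary and strip the trailing /bin/java.
if [ -x /usr/libexec/java_home ]; then
  JAVA_HOME=$(/usr/libexec/java_home)
else
  JAVA_HOME=$(dirname "$(dirname "$(readlink -f "$(command -v java)")")")
fi
echo "JAVA_HOME=$JAVA_HOME"
```

Paste the printed path into the export JAVA_HOME line above.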

 

(2) core-site.xml

Add the following between <configuration> and </configuration> (fs.defaultFS is the current name for the deprecated fs.default.name):

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>

(3) yarn-site.xml

Add the following between <configuration> and </configuration>:

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

(4) mapred-site.xml

By default etc/hadoop only contains mapred-site.xml.template; copy it to mapred-site.xml:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

Then add the following between <configuration> and </configuration>:

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

(5) hdfs-site.xml

Add the following between <configuration> and </configuration> (replace the file: paths with your own install directory, and note there must be no space after file:):

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/Users/ycc/workspace/hadoop-2.6.0/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/Users/ycc/workspace/hadoop-2.6.0/hdfs/data</value>
    </property>
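It does no harm to create the two local directories above before formatting; a sketch (HADOOP_HOME falls back to a temporary directory here only so the snippet runs anywhere — point it at your real install, e.g. /Users/ycc/workspace/hadoop-2.6.0):

```shell
# Create the local dirs referenced by dfs.namenode.name.dir and
# dfs.datanode.data.dir. The temp-dir fallback is for illustration only;
# use your actual HADOOP_HOME in practice.
HADOOP_HOME=${HADOOP_HOME:-$(mktemp -d)}
mkdir -p "$HADOOP_HOME/hdfs/name" "$HADOOP_HOME/hdfs/data"
ls "$HADOOP_HOME/hdfs"
```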

3. Running Hadoop

Everything is now ready to test.

(1) First, format HDFS:

bin/hdfs namenode -format

(2) Start DFS and YARN:

sbin/start-dfs.sh

sbin/start-yarn.sh

Then check the Java processes with jps; you should see the following five:

25361 NodeManager
24931 DataNode
25258 ResourceManager
24797 NameNode
25098 SecondaryNameNode

You can also get an HDFS status report with:

bin/hdfs dfsadmin -report

Normally this produces output like the following:

Configured Capacity: 48228589568 (44.92 GB)
Present Capacity: 36589916160 (34.08 GB)
DFS Remaining: 36589867008 (34.08 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: dc191
Decommission Status : Normal
Configured Capacity: 48228589568 (44.92 GB)
DFS Used: 49152 (48 KB)
Non DFS Used: 11638673408 (10.84 GB)
DFS Remaining: 36589867008 (34.08 GB)
DFS Used%: 0.00%
DFS Remaining%: 75.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue May 05 17:42:54 CST 2015

(3) Check the web management UIs:

http://localhost:50070/

http://localhost:8088/

(4) Create a directory x at the HDFS root:

bin/hdfs dfs -mkdir /x

This creates a directory named x in HDFS.

(5) Put a file into HDFS:

bin/hdfs dfs -put README.txt /x

This uploads README.txt from the current directory into the HDFS directory /x; the file is also visible in the web UI.

(6) Verify with WordCount:

bin/hdfs dfs -mkdir /input

bin/hdfs dfs -put README.txt /input

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output
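WordCount counts how many times each word occurs in the input files; on the cluster the result can be read back with bin/hdfs dfs -cat /output/part-r-00000 (the usual output file name with the default single reducer). As a quick local sanity check of what the job computes, the same count can be approximated with coreutils (the demo file below is made up for illustration):

```shell
# Local approximation of WordCount: split text on whitespace, count each word.
printf 'hello world\nhello hadoop\n' > /tmp/wc_demo.txt
tr -s ' \t' '\n' < /tmp/wc_demo.txt | sort | uniq -c | sort -rn
# "hello" occurs twice, so it sorts to the top of the counts
```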

 

4. References

macOS: http://wollt1992.blog.163.com/blog/static/57013456201411244394378

Linux (illustrated tutorials, somewhat dated):

       http://www.linuxidc.com/Linux/2013-01/77681p4.htm

       http://www.cnblogs.com/kinglau/p/3794433.html

Official installation guide:

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html

Note:

The following warning can reportedly be ignored: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

       http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-error-on-centos

 
