Setting up a Hadoop 2.2.0 environment (building from source)

Installation steps

1. Install the JDK

1.1 First check whether a JDK is already installed on the system

 

1.2 Unpack the JDK archive:

[root@guanyy2 jar]# tar -zxvf jdk-7u55-linux-x64.tar.gz

 

1.3 Move the unpacked directory to /opt/

[root@guanyy2 jar]# mv jdk1.7.0_55/ /opt

 

1.4 Configure environment variables in /etc/profile

[root@guanyy2 opt]# vim /etc/profile

Append the following at the end of the file:

############################################

##############JDK BEGIN

JAVA_HOME=/opt/jdk1.7.0_55

 

PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export JAVA_HOME PATH CLASSPATH

#############JDK  END

 

1.5 Load the environment variables

[root@guanyy2 opt]# source /etc/profile

 

1.6 Check that the PATH is set correctly

[root@guanyy2 opt]# echo $PATH

/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/jdk1.7.0_55/bin:/opt/jdk1.7.0_55/jre/bin

 

1.7 Check that java and javac produce output

[root@guanyy2 opt]# javac

[root@guanyy2 opt]# java

 

The JDK is now installed.
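As an extra sanity check, java -version should also report the newly installed build; a minimal check:

java -version      # should report java version "1.7.0_55"
javac -version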

 

 

 

2. Initial cluster configuration

2.1 The three virtual machines and their roles

192.168.183.128      guanyy      master

192.168.183.129      guanyy2     slave

192.168.183.130      guanyy3     slave

 

2.2 Change the hostname

Check the current hostname

[root@guanyy jdk1.7.0_55]# hostname

Guanyy

 

Change the hostname

[root@guanyy jdk1.7.0_55]# vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=guanyy
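Editing /etc/sysconfig/network only takes effect at the next boot; on CentOS/RHEL 6 the new name can also be applied to the running session:

hostname guanyy      # apply the hostname immediately, no reboot needed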

 

 

2.3 Edit the hosts file

Edit /etc/hosts (on every machine) and add the following entries:

192.168.183.128 guanyy

192.168.183.129 guanyy2

192.168.183.130 guanyy3

(Add these entries on all three machines.)
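A quick way to confirm that the entries resolve on every machine is to ping each hostname once:

ping -c 1 guanyy
ping -c 1 guanyy2
ping -c 1 guanyy3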

 

 

 

2.4 Create a dedicated hadoop account

Create a dedicated group and user for the Hadoop cluster:

[root@guanyy jdk1.7.0_55]# groupadd hadoop    # create the hadoop group

[root@guanyy jdk1.7.0_55]# useradd -s /bin/bash -d /home/hadoop -m hadoop -g hadoop    # add the hadoop user and put it in the hadoop group

[root@guanyy jdk1.7.0_55]# passwd hadoop        # set the user's password
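To confirm the account was created with the intended group, a quick check:

id hadoop      # should list the hadoop uid/gid and group membership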

 

 

2.5 Set up passwordless SSH access

(Passwordless SSH must work in both directions between the master and every slave; it is not needed between slaves.)

2.5.1 Configuring passwordless SSH login takes three steps:

1: Generate the public and private keys

2: Import the public key into the authorized keys file and fix permissions

3: Test

 

Install SSH:

yum install openssh-server

yum install openssh-clients

1: Generate the public and private keys

[root@guanyy jdk1.7.0_55]# su hadoop

[hadoop@guanyy ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

[hadoop@guanyy ~]$ cd .ssh/

[hadoop@guanyy .ssh]$ ls

id_dsa id_dsa.pub

id_dsa.pub is the public key and id_dsa is the private key. Next, append the public key to the authorized_keys file; this step is required.

2: Import the public key into the authorized keys file and fix permissions

2.1: Import it on the local machine

[hadoop@guanyy .ssh]$ cat id_dsa.pub >> authorized_keys

 

2.2: Test locally

[hadoop@guanyy .ssh]$ ssh localhost

The first SSH connection asks you to type yes; after that no password should be required.

(Note: if passwordless SSH login fails, fixing the relevant permissions usually solves it:

1: chmod 700 /home/hadoop/.ssh

2: chmod 600 /home/hadoop/.ssh/authorized_keys)

 

2.3: Do the same on the other machines

 

 

2.4: Let the master log in to the slave nodes over SSH without a password

On the guanyy2 node:

[hadoop@guanyy2 .ssh]$ scp hadoop@guanyy:/home/hadoop/.ssh/id_dsa.pub /home/hadoop/.ssh/master_id_dsa.pub

 

[hadoop@guanyy2 .ssh]$ ls

authorized_keys  id_dsa id_dsa.pub  known_hosts  master_id_dsa.pub

 

[hadoop@guanyy2 .ssh]$ cat master_id_dsa.pub >> authorized_keys

 

From the master guanyy, SSH to guanyy2 without a password:

[hadoop@guanyy .ssh]$ ssh guanyy2

The authenticity of host 'guanyy2 (192.168.183.129)' can't be established.

RSA key fingerprint is ec:15:46:53:df:69:8b:d7:71:ea:0f:08:13:85:1c:bd.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'guanyy2,192.168.183.129' (RSA) to the list of known hosts.

Last login: Tue May 20 08:34:03 2014 from localhost

 

[hadoop@guanyy2 ~]$ exit

logout

Connection to guanyy2 closed.

 

Do the same for the guanyy3 node.
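As an alternative to the scp-and-append steps above, openssh-clients also ships ssh-copy-id, which installs the key in one command (it prompts for the hadoop password once); a minimal sketch, run on the master as the hadoop user:

ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@guanyy2    # appends id_dsa.pub to guanyy2's authorized_keys
ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@guanyy3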

 

2.5: Do the same on the master node guanyy itself; the passwordless test succeeds:

[hadoop@guanyy .ssh]$ ssh guanyy

The authenticity of host 'guanyy (192.168.183.128)' can't be established.

RSA key fingerprint is 0b:cc:7d:7d:50:a5:c3:7f:07:18:f4:8e:06:bc:d0:fc.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'guanyy,192.168.183.128' (RSA) to the list of known hosts.

Last login: Tue May 20 10:36:46 2014 from localhost

[hadoop@guanyy ~]$ exit

logout

Connection to guanyy closed.

 

 

 

3. Hadoop installation

The binary packages on the official Hadoop site are built for 32-bit systems only; on a 64-bit system you may hit warnings such as: libhadoop.so.1.0.0 which might have disabled stackguard.

 

3.1 Building Hadoop for 64-bit

1: Install the 64-bit JDK;

2: Install Maven

The official Maven download page also offers a source build, but here we simply download the prebuilt binary:

wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.zip

 

Unpack the file: unzip apache-maven-3.1.1-bin.zip

Move it into place: mv apache-maven-3.1.1 /opt/apache-maven-3.1.1

 

Configure environment variables in /etc/profile

vim /etc/profile

MAVEN_HOME=/opt/apache-maven-3.1.1

PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$MAVEN_HOME/bin

CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$MAVEN_HOME/lib

export MAVEN_HOME PATH CLASSPATH

 

Verify that Maven is installed:

mvn -version

 

Since Maven's overseas servers may be unreachable, first configure a domestic mirror. In the Maven directory, edit conf/settings.xml and add the following inside <mirrors></mirrors>, leaving the existing entries untouched:

<mirror>

     <id>nexus-osc</id>

     <mirrorOf>*</mirrorOf>

     <name>Nexusosc</name>

     <url>http://maven.oschina.net/content/groups/public/</url>

</mirror>

Likewise, add the following new profile inside <profiles></profiles>:

<profile>
<id>jdk-1.7</id>
<activation>
<jdk>1.7</jdk>
</activation>
<repositories>
<repository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>

 

 

3.2 Install protoc 2.5.0

Building Hadoop 2.2.0 requires protoc 2.5.0, so protoc also has to be downloaded.
Download it from https://code.google.com/p/protobuf/downloads/list and make sure to take version 2.5.0. Before compiling and installing protoc, install a few dependencies (skip any that are already present):
yum install gcc
yum install gcc-c++
yum install make

Install protoc:
tar -xvf protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0
./configure
make && make install
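protoc installs under /usr/local by default, so it is worth confirming the new binary and library are picked up before building Hadoop; the ldconfig step is a precaution and is only needed if the protobuf shared library is not found:

ldconfig              # refresh the shared-library cache (assumes the default /usr/local prefix)
protoc --version      # should print: libprotoc 2.5.0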

 

3.3 Install the cmake, openssl-devel, and ncurses-devel dependencies (skip any that are already installed)

yum install cmake
yum install openssl-devel
yum install ncurses-devel

 

 

3.4 Build Hadoop

First download the Hadoop source from the official mirror and unpack it:
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
tar -zxvf hadoop-2.2.0-src.tar.gz
Now the build can be run:
cd hadoop-2.2.0-src
mvn package -Pdist,native -DskipTests -Dtar
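If the build aborts with an out-of-memory error (a common problem when compiling Hadoop from source, not something hit in this run), raising Maven's heap before re-running usually helps; the values below are only a suggestion for JDK 7:

export MAVEN_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=128m"
mvn package -Pdist,native -DskipTests -Dtar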

 

When the build finishes, output like the following appears:

[INFO] Apache Hadoop Main ................................ SUCCESS[2.891s]

[INFO] Apache Hadoop Project POM ......................... SUCCESS[1.647s]

[INFO] Apache Hadoop Annotations ......................... SUCCESS[8.078s]

[INFO] Apache Hadoop Assemblies .......................... SUCCESS[0.770s]

[INFO] Apache Hadoop Project Dist POM .................... SUCCESS[4.435s]

[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS[4.882s]

[INFO] Apache Hadoop Auth ................................ SUCCESS[4.688s]

[INFO] Apache Hadoop Auth Examples ....................... SUCCESS[6.450s]

[INFO] Apache Hadoop Common .............................. SUCCESS[8:26.398s]

[INFO] Apache Hadoop NFS ................................. SUCCESS[23.670s]

[INFO] Apache Hadoop Common Project ...................... SUCCESS[0.234s]

[INFO] Apache Hadoop HDFS ................................ SUCCESS[4:24.868s]

[INFO] Apache Hadoop HttpFS .............................. SUCCESS[45.613s]

[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS[43.589s]

[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS[5.950s]

[INFO] Apache Hadoop HDFS Project ........................ SUCCESS[0.109s]

[INFO] hadoop-yarn ....................................... SUCCESS[5.661s]

[INFO] hadoop-yarn-api ................................... SUCCESS[1:55.464s]

[INFO] hadoop-yarn-common ................................ SUCCESS[1:05.399s]

[INFO] hadoop-yarn-server ................................ SUCCESS[0.294s]

[INFO] hadoop-yarn-server-common ......................... SUCCESS[15.552s]

[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS[35.431s]

[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS[7.217s]

[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS[27.068s]

[INFO] hadoop-yarn-server-tests .......................... SUCCESS[1.429s]

[INFO] hadoop-yarn-client ................................ SUCCESS[7.912s]

[INFO] hadoop-yarn-applications .......................... SUCCESS[0.157s]

[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS[3.929s]

[INFO] hadoop-mapreduce-client ........................... SUCCESS[0.152s]

[INFO] hadoop-mapreduce-client-core ...................... SUCCESS[54.639s]

[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS[5.344s]

[INFO] hadoop-yarn-site .................................. SUCCESS[0.316s]

[INFO] hadoop-yarn-project ............................... SUCCESS[18.036s]

[INFO] hadoop-mapreduce-client-common .................... SUCCESS[40.871s]

[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS[6.862s]

[INFO] hadoop-mapreduce-client-app ....................... SUCCESS[22.669s]

[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS[11.098s]

[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS[14.853s]

[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS[4.789s]

[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS[11.794s]

[INFO] hadoop-mapreduce .................................. SUCCESS[11.366s]

[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS[10.329s]

[INFO] Apache Hadoop Distributed Copy .................... SUCCESS[28.604s]

[INFO] Apache Hadoop Archives ............................ SUCCESS[6.261s]

[INFO] Apache Hadoop Rumen ............................... SUCCESS[15.648s]

[INFO] Apache Hadoop Gridmix ............................. SUCCESS[10.497s]

[INFO] Apache Hadoop Data Join ........................... SUCCESS[5.987s]

[INFO] Apache Hadoop Extras .............................. SUCCESS[7.420s]

[INFO] Apache Hadoop Pipes ............................... SUCCESS[21.874s]

[INFO] Apache Hadoop Tools Dist .......................... SUCCESS[4.816s]

[INFO] Apache Hadoop Tools ............................... SUCCESS[0.090s]

[INFO] Apache Hadoop Distribution ........................ SUCCESS[50.424s]

[INFO] Apache Hadoop Client .............................. SUCCESS[17.937s]

[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS[0.730s]

[INFO]------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO]------------------------------------------------------------------------

[INFO] Total time: 26:37.500s

[INFO] Finished at: Tue May 20 16:15:41 CST 2014

[INFO] Final Memory: 99M/365M

[INFO]------------------------------------------------------------------------

Build successful!

The build output is under hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0.
The following command shows the Hadoop version.

Verify the version number:

[root@guanyy bin]# ./hadoop version

Hadoop 2.2.0

Subversion Unknown -r Unknown

Compiled by root on 2014-05-20T07:49Z

Compiled with protoc 2.5.0

From source with checksum 79e53ce7994d1628b240f09af91e1af4

This command was run using /home/hadoop/Download/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar

 

Verify that the native libraries are 64-bit:

[root@guanyy hadoop-2.2.0]# file lib//native/*

lib//native/libhadoop.a:        current ar archive

lib//native/libhadooppipes.a:   current ar archive

lib//native/libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'

lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

lib//native/libhadooputils.a:   current ar archive

lib//native/libhdfs.a:          current ar archive

lib//native/libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'

lib//native/libhdfs.so.0.0.0:   ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

The build is complete.

 

 

 

3.6 Hadoop configuration

All the configuration files only need to be set up on the master; they can then be copied to the slave machines.

3.6.1: Put the compiled Hadoop package under /opt/hadoop-2.2.0

[root@guanyy target]# cp -r hadoop-2.2.0 /opt/

[root@guanyy opt]# chown -R  hadoop:hadoop hadoop-2.2.0/

[root@guanyy opt]# ll

total 16

drwxr-xr-x. 6 root   root   4096 Sep 17  2013 apache-maven-3.1.1

drwxr-xr-x. 9 hadoop hadoop 4096 May 20 19:33 hadoop-2.2.0

drwxr-xr-x. 8 uucp      143 4096 Mar 18 11:04 jdk1.7.0_55

drwxr-xr-x. 10 109965   5000 4096 May 20 15:37 protobuf-2.5.0

 

3.6.2: Configure environment variables

[root@guanyy hadoop-2.2.0]# vim /etc/profile

############################################

##############HADOOP BEGIN

HADOOP_HOME=/opt/hadoop-2.2.0

 

PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$MAVEN_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$MAVEN_HOME/lib:$HADOOP_HOME/lib

export JAVA_HOME HADOOP_HOME PATH CLASSPATH

#############HADOOP  END

 

[root@guanyy hadoop-2.2.0]# source /etc/profile

[root@guanyy hadoop-2.2.0]# echo $PATH

/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/jdk1.7.0_55/bin:/opt/jdk1.7.0_55/jre/bin:/opt/apache-maven-3.1.1/bin:/root/bin:/opt/jdk1.7.0_55/bin:/opt/jdk1.7.0_55/jre/bin:/opt/apache-maven-3.1.1/bin:/opt/hadoop-2.2.0/bin:/opt/hadoop-2.2.0/sbin

 

3.7 Firewall settings

Disable SELinux

1. Temporarily (no reboot needed):

setenforce 0         ## put SELinux into permissive mode

                     ## setenforce 1 puts SELinux back into enforcing mode

2. Permanently, by editing the config file (requires a reboot):

Edit the /etc/selinux/config file

Change SELINUX=enforcing to SELINUX=disabled

 

Permanently disable the firewall

chkconfig iptables off

chkconfig ip6tables off
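chkconfig only disables the firewall for subsequent boots; to stop it on the running system right away (standard service commands on CentOS/RHEL 6):

service iptables stop
service ip6tables stop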

 

3.8 Hadoop configuration files

Seven configuration files are involved here:

/hadoop-2.2.0/etc/hadoop/hadoop-env.sh

/hadoop-2.2.0/etc/hadoop/yarn-env.sh

/hadoop-2.2.0/etc/hadoop/slaves

/hadoop-2.2.0/etc/hadoop/core-site.xml

/hadoop-2.2.0/etc/hadoop/hdfs-site.xml

/hadoop-2.2.0/etc/hadoop/mapred-site.xml

/hadoop-2.2.0/etc/hadoop/yarn-site.xml

[root@guanyy hadoop]# pwd

/opt/hadoop-2.2.0/etc/hadoop

[root@guanyy hadoop]# ls

capacity-scheduler.xml      hdfs-site.xml               mapred-site.xml.template

configuration.xsl           httpfs-env.sh               slaves

container-executor.cfg      httpfs-log4j.properties     ssl-client.xml.example

core-site.xml               httpfs-signature.secret     ssl-server.xml.example

hadoop-env.cmd              httpfs-site.xml             yarn-env.cmd

hadoop-env.sh               log4j.properties            yarn-env.sh

hadoop-metrics2.properties  mapred-env.cmd              yarn-site.xml

hadoop-metrics.properties   mapred-env.sh

hadoop-policy.xml           mapred-queues.xml.template

[root@guanyy hadoop]#

 

Configure hadoop-env.sh

export JAVA_HOME=/opt/jdk1.7.0_55

 

 

 

Configure yarn-env.sh and mapred-env.sh

(Set export JAVA_HOME=/opt/jdk1.7.0_55 in these the same way as in hadoop-env.sh.)

 

 

Configure slaves (this file lists all the slave nodes)

guanyy2

guanyy3

 

Configure core-site.xml

<configuration>

   <property>

       <name>fs.default.name</name>

       <value>hdfs://guanyy:9000</value>

   </property>

 

</configuration>

 

Configure hdfs-site.xml

<configuration>

   <property>

       <name>dfs.namenode.name.dir</name>

       <value>file:/home/hadoop/dfs/name</value>

   </property>

   <property>

       <name>dfs.datanode.data.dir</name>

       <value>file:/home/hadoop/dfs/data</value>

   </property>

   <property>

       <name>dfs.replication</name>

       <value>2</value>

   </property>

   <property>

       <name>dfs.webhdfs.enabled</name>

       <value>true</value>

   </property>

</configuration>

 

 

Configure mapred-site.xml
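The distribution only ships mapred-site.xml.template (see the directory listing above), so create mapred-site.xml from the template first:

cp mapred-site.xml.template mapred-site.xml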

<configuration>

   <property>

       <name>mapreduce.framework.name</name>

       <value>yarn</value>

       <description>

           Use YARN as the MapReduce execution framework, so MR jobs are submitted to the ResourceManager.

       </description>

   </property>

 <property>

        <name>mapreduce.job.tracker</name>

        <value>guanyy:9001</value>

</property>

   <property>

       <name>mapreduce.jobhistory.address</name>

       <value>guanyy:10020</value>

   </property>

   <property>

       <name>mapreduce.jobhistory.webapp.address</name>

       <value>guanyy:19888</value>

   </property>

</configuration>

 

 

Configure yarn-site.xml

<configuration>

 

<!-- Site specific YARN configuration properties -->

 

   <property>

       <name>yarn.nodemanager.aux-services</name>

       <value>mapreduce_shuffle</value>

   </property>

   <property>

       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

       <value>org.apache.hadoop.mapred.ShuffleHandler</value>

   </property>

   <property>

       <name>yarn.resourcemanager.address</name>

       <value>guanyy:8032</value>

   </property>

   <property>

       <name>yarn.resourcemanager.scheduler.address</name>

       <value>guanyy:8030</value>

   </property>

   <property>

       <name>yarn.resourcemanager.resource-tracker.address</name>

       <value>guanyy:8031</value>

   </property>

   <property>

       <name>yarn.resourcemanager.admin.address</name>

       <value>guanyy:8033</value>

   </property>

   <property>

       <name>yarn.resourcemanager.webapp.address</name>

       <value>guanyy:8088</value>

   </property>

 

 

</configuration>

 

 

3.9 Copy to the other nodes

[root@guanyy hadoop]# scp -r  /opt/hadoop-2.2.0  root@guanyy2:/opt/

 

Change ownership

[root@guanyy2 opt]# chown -R hadoop:hadoop hadoop-2.2.0/

 

[root@guanyy3 opt]# chown -R hadoop:hadoop hadoop-2.2.0/

[root@guanyy3 opt]# ll

total 8

drwxr-xr-x. 9 hadoop hadoop 4096 May 20 23:49 hadoop-2.2.0

drwxr-xr-x. 8 uucp      143 4096 Mar 18 11:04 jdk1.7.0_55

 

 

3.10 Start the cluster

Format HDFS

[root@guanyy ~]# hdfs namenode -format
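Since HDFS will be started as the hadoop user while the format above was run as root, it is safer to format as the hadoop user (or to fix ownership afterwards) so the dfs directories under /home/hadoop do not end up owned by root; a minimal sketch:

su - hadoop
hdfs namenode -format              # format as the user that will run the NameNode
# or, if it was already formatted as root:
# chown -R hadoop:hadoop /home/hadoop/dfs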

Start HDFS

[hadoop@guanyy hadoop]$ start-dfs.sh

Starting namenodes on [guanyy]

guanyy: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-hadoop-namenode-guanyy.out

guanyy3: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-hadoop-datanode-guanyy3.out

guanyy2: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-hadoop-datanode-guanyy2.out

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is 0b:cc:7d:7d:50:a5:c3:7f:07:18:f4:8e:06:bc:d0:fc.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-guanyy.out

 

3.11 Check the processes running on each node

Master node

[hadoop@guanyy hadoop]$ jps

2146 SecondaryNameNode

1988 NameNode

2294 Jps

 

Slave node

[hadoop@guanyy3 hadoop]$ jps

1261 DataNode

1345 Jps

 

Start YARN with sbin/start-yarn.sh:

[hadoop@guanyy hadoop]$ start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-guanyy.out

guanyy3: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-guanyy3.out

guanyy2: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-guanyy2.out

 

Check the processes on the master node

[hadoop@guanyy hadoop]$ jps

2146 SecondaryNameNode

2342 ResourceManager

1988 NameNode

2587 Jps

Check the processes on the slave nodes

[hadoop@guanyy2 hadoop]$ jps

1479 Jps

1391 NodeManager

1272 DataNode

 

Check the cluster status

[hadoop@guanyy hadoop]$ hdfs dfsadmin -report

Configured Capacity: 37139136512 (34.59 GB)

Present Capacity: 31511695360 (29.35 GB)

DFS Remaining: 31511646208 (29.35 GB)

DFS Used: 49152 (48 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

 

-------------------------------------------------

Datanodes available: 2 (2 total, 0 dead)

 

Live datanodes:

Name: 192.168.183.130:50010 (guanyy3)

Hostname: guanyy3

Decommission Status : Normal

Configured Capacity: 18569568256 (17.29 GB)

DFS Used: 24576 (24 KB)

Non DFS Used: 2813583360 (2.62 GB)

DFS Remaining: 15755960320 (14.67 GB)

DFS Used%: 0.00%

DFS Remaining%: 84.85%

Last contact: Wed May 21 01:43:11 CST 2014

 

 

Name: 192.168.183.129:50010 (guanyy2)

Hostname: guanyy2

Decommission Status : Normal

Configured Capacity: 18569568256 (17.29 GB)

DFS Used: 24576 (24 KB)

Non DFS Used: 2813857792 (2.62 GB)

DFS Remaining: 15755685888 (14.67 GB)

DFS Used%: 0.00%

DFS Remaining%: 84.85%

Last contact: Wed May 21 01:43:12 CST 2014

 

 

View HDFS (NameNode web UI):

http://192.168.183.128:50070/

 

View the ResourceManager (RM) web UI:

http://192.168.183.128:8088/

 

 

Run an example program

Create a directory on HDFS

[hadoop@guanyy sbin]$ hdfs dfs -mkdir /input
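To actually run a job on the new cluster, a typical follow-up is to upload a few files and run the wordcount example that ships with the build; the jar path below assumes the default 2.2.0 layout under /opt/hadoop-2.2.0:

hdfs dfs -put /opt/hadoop-2.2.0/etc/hadoop/*.xml /input
hadoop jar /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000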

 
