Reposted from: http://blog.csdn.net/aquester/article/details/24621005
1. Preface
The purpose of this article is to provide a detailed installation guide for Hadoop 2.4.0, the latest release at the time of writing, to help reduce the difficulties encountered during installation and to explain the causes of some errors. The installation covers only hadoop-common, hadoop-hdfs, hadoop-mapreduce and hadoop-yarn; it does not include HBase, Hive, Pig, etc.
2. Deployment
2.1. Machine List
Five machines in total, deployed as shown in the table below:
NameNode | SecondaryNameNode | DataNodes |
172.25.40.171 | 172.25.39.166 | 10.12.154.77, 10.12.154.78, 10.12.154.79 |
2.2. Hostnames
Machine IP | Hostname |
172.25.40.171 | VM-40-171-sles10-64 |
172.25.39.166 | VM-39-166-sles10-64 |
10.12.154.77 | DEVNET-154-77 |
10.12.154.78 | DEVNET-154-70 |
10.12.154.79 | DEVNET-154-79 |
Note that hostnames must not contain underscores; otherwise, at startup the SecondaryNameNode reports the following error (taken from the file hadoop-hadoop-secondarynamenode-VM_39_166_sles10_64.out):
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /data/hadoop/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. Exception in thread "main" java.lang.IllegalArgumentException: The value of property bind.address must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) at org.apache.hadoop.conf.Configuration.set(Configuration.java:971) at org.apache.hadoop.conf.Configuration.set(Configuration.java:953) at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:391) at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:344) at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:104) at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:292) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:264) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:651) |
2.2.1. Temporarily Changing the Hostname
The hostname command not only shows the hostname but can also change it, in the form: hostname <new-hostname>.
Before the change, the hostname of 172.25.40.171 was VM_40_171_sles10_64 and that of 172.25.39.166 was VM_39_166_sles10_64. Both contained underscores and therefore had to be changed. For simplicity, the underscores were simply replaced with hyphens:
hostname VM-40-171-sles10-64
hostname VM-39-166-sles10-64
These commands alone are not enough: much like environment variables, the change must also be made permanent by editing the system configuration files.
2.2.2. Permanently Changing the Hostname
Different Linux distributions use different configuration files for this; on SuSE 10.1 it is /etc/HOSTNAME:
# cat /etc/HOSTNAME
VM_39_166_sles10_64
Change "VM_39_166_sles10_64" in the file to "VM-39-166-sles10-64". On some Linux distributions the file is /etc/hostname, on others /etc/sysconfig/network.
The format may also differ; some use a name=value pair, such as HOSTNAME=<hostname>.
After editing the file, restart the network for the change to take effect, e.g. /etc/rc.d/boot.localnet start (the command varies by system; this is the SuSE way). Running hostname again then shows the new name.
Rebooting the system also makes the change take effect.
Note that after changing a hostname, passwordless ssh login must be re-verified: ssh <user>@<new-hostname>.
2.3. Scope of Passwordless Login
Passwordless login must work both by IP and by hostname, and is required in the following directions:
1) The NameNode can log in to all DataNodes without a password
2) The SecondaryNameNode can log in to all DataNodes without a password
3) The NameNode can log in to itself without a password
4) The SecondaryNameNode can log in to itself without a password
5) The NameNode can log in to the SecondaryNameNode without a password
6) The SecondaryNameNode can log in to the NameNode without a password
7) Each DataNode can log in to itself without a password
8) DataNodes do not need passwordless login to the NameNode, the SecondaryNameNode, or other DataNodes.
3. Conventions
3.1. Installation Directory Conventions
For ease of explanation, this article assumes the following Hadoop and JDK installation directories:
Component | Installation Directory | Version | Notes |
JDK | /data/jdk | 1.7.0 | ln -s /data/jdk1.7.0_55 /data/jdk |
Hadoop | /data/hadoop/current | 2.4.0 | ln -s /data/hadoop/hadoop-2.4.0 /data/hadoop/current |
During actual deployment, these can be adjusted as needed.
3.2. Service Port Conventions
Port | Purpose |
9000 | fs.defaultFS, e.g. hdfs://172.25.40.171:9000 |
9001 | dfs.namenode.rpc-address; DataNodes connect to this port |
50070 | dfs.namenode.http-address |
50470 | dfs.namenode.https-address |
50100 | dfs.namenode.backup.address |
50105 | dfs.namenode.backup.http-address |
50090 | dfs.namenode.secondary.http-address, e.g. 172.25.39.166:50090 |
50091 | dfs.namenode.secondary.https-address, e.g. 172.25.39.166:50091 |
50020 | dfs.datanode.ipc.address |
50075 | dfs.datanode.http.address |
50475 | dfs.datanode.https.address |
50010 | dfs.datanode.address, the DataNode data transfer port |
8480 | dfs.journalnode.http-address |
8481 | dfs.journalnode.https-address |
8032 | yarn.resourcemanager.address |
8088 | yarn.resourcemanager.webapp.address, the YARN web UI HTTP port |
8090 | yarn.resourcemanager.webapp.https.address |
8030 | yarn.resourcemanager.scheduler.address |
8031 | yarn.resourcemanager.resource-tracker.address |
8033 | yarn.resourcemanager.admin.address |
8042 | yarn.nodemanager.webapp.address |
8040 | yarn.nodemanager.localizer.address |
8188 | yarn.timeline-service.webapp.address |
10020 | mapreduce.jobhistory.address |
19888 | mapreduce.jobhistory.webapp.address |
2888 | ZooKeeper; used by the Leader to listen for Follower connections |
3888 | ZooKeeper; used for Leader election |
2181 | ZooKeeper; listens for client connections |
60010 | hbase.master.info.port, the HMaster HTTP port |
60000 | hbase.master.port, the HMaster RPC port |
60030 | hbase.regionserver.info.port, the HRegionServer HTTP port |
60020 | hbase.regionserver.port, the HRegionServer RPC port |
8080 | hbase.rest.port, the HBase REST server port |
10000 | hive.server2.thrift.port |
9083 | hive.metastore.uris |
4. Task Checklist
Checklist of tasks required to run Hadoop (HDFS, YARN and MapReduce):
Task | Notes |
JDK installation | Hadoop is written in Java, so a JDK is required. |
Passwordless ssh login | The NameNode controls the SecondaryNameNode and the DataNodes with the ssh and scp commands, which must run without passwords. |
Hadoop installation and configuration | This refers to HDFS, YARN and MapReduce; it does not cover installing HBase, Hive, etc. |
5. Installing the JDK
This article installs JDK 1.7.0. Installation also works with JDK 1.8, but JDK 1.7 is recommended: compiling the Hadoop 2.4.0 source with JDK 1.8 produces a large number of syntax errors, while JDK 1.7 compiles it cleanly; see 《在Linux上编译Hadoop-2.4.0》 for details.
5.1. Downloading the Package
Download page for the latest JDK binary packages:
http://www.oracle.com/technetwork/java/javase/downloads
Download page for the JDK 1.7 binary packages:
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
This article uses the 64-bit Linux JDK 1.7 package jdk-7u55-linux-x64.gz. Do not install JDK 1.8: it does not match Hadoop 2.4.0, and compiling the Hadoop 2.4.0 source with it reports many errors.
5.2. Installation Steps
Installing the JDK is simple: upload jdk-7u55-linux-x64.gz to the Linux machine, unpack it, and configure the environment variables (here jdk-7u55-linux-x64.gz was uploaded to the /data directory):
1) Enter the /data directory
2) Unpack the package: tar xzf jdk-7u55-linux-x64.gz, which creates the directory /data/jdk1.7.0_55
3) Create a symbolic link: ln -s /data/jdk1.7.0_55 /data/jdk
4) Edit /etc/profile, the profile in the user's home directory, or an equivalent file, and set the following environment variables:
export JAVA_HOME=/data/jdk
export CLASSPATH=$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
After this, log in again or source the profile file so the environment variables take effect; the export commands can also be run by hand for immediate effect. To double-check, run java or javac and verify the commands work. If they already worked before this step, a JDK is already installed and this step is unnecessary.
6. Passwordless ssh2 Login
The following applies to ssh2, not OpenSSH. The configuration has two parts: the login machine (the client) and the machine being logged in to (the server); the goal is passwordless login from client to server. The commands below can be copied directly into a Linux terminal; they have all been verified on SuSE 10.1.
Step 1: on every server machine, edit the sshd configuration file /etc/ssh2/sshd2_config:
1) Set PermitRootLogin to yes, i.e. remove the leading comment character #
2) Set AllowedAuthentications to publickey,password, i.e. remove the leading comment character #
3) Restart the sshd service: service ssh2 restart
Step 2: on every client machine, perform the following steps:
1) Enter the .ssh2 directory: cd ~/.ssh2
2) ssh-keygen2 -t dsa -P''
-P specifies the passphrase, so -P'' means an empty passphrase. The -P option can also be omitted, but then Enter has to be pressed three times, whereas with -P'' once is enough.
On success, the private key file id_dsa_2048_a and the public key file id_dsa_2048_a.pub are generated under the user's home directory.
3) Generate the identification file: echo "IdKey id_dsa_2048_a" >> identification. Note the space after IdKey, and make sure the identification file looks like this:
# cat identification
IdKey id_dsa_2048_a
4) Upload id_dsa_2048_a.pub to the ~/.ssh2 directory of every server machine: scp id_dsa_2048_a.pub root@192.168.0.1:/root/.ssh2, where 192.168.0.1 is assumed to be the IP of one of the server machines. Before running scp, make sure the directory /root/.ssh2 exists on 192.168.0.1, and replace /root/ with the actual HOME directory of the root user (the environment variable $HOME usually holds the user's home directory, ~ also denotes it, and cd with no arguments switches to it).
Step 3: on every server machine, perform the following steps:
1) Enter the .ssh2 directory: cd ~/.ssh2
2) Generate the authorization file: echo "Key id_dsa_2048_a.pub" >> authorization. Note the space after Key, and make sure the authorization file looks like this:
# cat authorization
Key id_dsa_2048_a.pub
Once the above is done, ssh login from the client machines to the server machines no longer needs a password. If passwordless login is not configured correctly, startup fails with errors like the following:
Starting namenodes on [172.25.40.171] 172.25.40.171: Host key not found from database. 172.25.40.171: Key fingerprint: 172.25.40.171: xofiz-zilip-tokar-rupyb-tufer-tahyc-sibah-kyvuf-palik-hazyt-duxux 172.25.40.171: You can get a public key's fingerprint by running 172.25.40.171: % ssh-keygen -F publickey.pub 172.25.40.171: on the keyfile. 172.25.40.171: warning: tcgetattr failed in ssh_rl_set_tty_modes_for_fd: fd 1: Invalid argument |
or with an error like this:
Starting namenodes on [172.25.40.171] 172.25.40.171: hadoop's password: |
It is recommended to include your own IP in the names of the generated private and public key files, otherwise things get confusing.
Configure all passwordless logins according to the scope described in section 2.3. For more on passwordless login, see these blog posts:
1) http://blog.chinaunix.net/uid-20682147-id-4212099.html (passwordless login between two SSH2 installations)
2) http://blog.chinaunix.net/uid-20682147-id-4212097.html (passwordless login from SSH2 to OpenSSH)
3) http://blog.chinaunix.net/uid-20682147-id-4212094.html (passwordless login from OpenSSH to SSH2)
7. Installing and Configuring Hadoop
This part covers only the installation of HDFS, MapReduce and YARN; it does not include HBase, Hive, etc.
7.1. Downloading the Package
Hadoop binary download page: http://hadoop.apache.org/releases.html#Download (or download directly from http://mirror.bit.edu.cn/apache/hadoop/common/). This article uses hadoop-2.4.0 (binary package: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0.tar.gz, source package: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz), which is not a stable release; the latest stable release is hadoop-2.2.0.
The official installation instructions are in the Cluster Setup guide:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
7.2. Installation and Environment Variables
1) Upload the Hadoop package hadoop-2.4.0.tar.gz to the /data/hadoop directory
2) Enter the /data/hadoop directory
3) In /data/hadoop, unpack the package: tar xzf hadoop-2.4.0.tar.gz
4) Create a symbolic link: ln -s /data/hadoop/hadoop-2.4.0 /data/hadoop/current
5) Edit .profile in the user's home directory (or /etc/profile) and set the Hadoop environment variables:
export HADOOP_HOME=/data/hadoop/current
export PATH=$HADOOP_HOME/bin:$PATH
Log in again for this to take effect, or run export HADOOP_HOME=/data/hadoop/current in the terminal for immediate effect.
7.3. Modifying hadoop-env.sh
On every node, edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add the following near the top of the file: export JAVA_HOME=/data/jdk
Note: even though JAVA_HOME has already been added to /etc/profile, hadoop-env.sh must still be modified on every node; otherwise startup reports the following error:
10.12.154.79: Error: JAVA_HOME is not set and could not be found. 10.12.154.77: Error: JAVA_HOME is not set and could not be found. 10.12.154.78: Error: JAVA_HOME is not set and could not be found. 10.12.154.78: Error: JAVA_HOME is not set and could not be found. 10.12.154.77: Error: JAVA_HOME is not set and could not be found. 10.12.154.79: Error: JAVA_HOME is not set and could not be found. |
7.4. Modifying /etc/hosts
To avoid unnecessary trouble, it is recommended to configure /etc/hosts on all nodes as follows:
172.25.40.171 VM-40-171-sles10-64   # NameNode
172.25.39.166 VM-39-166-sles10-64   # SecondaryNameNode
10.12.154.77 DEVNET-154-77          # DataNode
10.12.154.78 DEVNET-154-70          # DataNode
10.12.154.79 DEVNET-154-79          # DataNode
Do not map one IP to several different hostnames, otherwise the HTTP pages may not work properly.
A hostname such as VM-39-166-sles10-64 can be obtained with the hostname command. Since hostnames are configured everywhere, before starting HDFS or anything else, make sure you have already connected by ssh using the hostname at least once; otherwise startup fails with errors like the following:
VM-39-166-sles10-64: Host key not found from database. VM-39-166-sles10-64: Key fingerprint: VM-39-166-sles10-64: xofiz-zilip-tokar-rupyb-tufer-tahyc-sibah-kyvuf-palik-hazyt-duxux VM-39-166-sles10-64: You can get a public key's fingerprint by running VM-39-166-sles10-64: % ssh-keygen -F publickey.pub VM-39-166-sles10-64: on the keyfile. VM-39-166-sles10-64: warning: tcgetattr failed in ssh_rl_set_tty_modes_for_fd: fd 1: Invalid argument |
The error above means that VM-39-166-sles10-64 has never been reached via ssh by hostname. Fix it as follows:
ssh hadoop@VM-39-166-sles10-64 Host key not found from database. Key fingerprint: xofiz-zilip-tokar-rupyb-tufer-tahyc-sibah-kyvuf-palik-hazyt-duxux You can get a public key's fingerprint by running % ssh-keygen -F publickey.pub on the keyfile. Are you sure you want to continue connecting (yes/no)? yes Host key saved to /data/hadoop/.ssh2/hostkeys/key_36000_137vm_13739_137166_137sles10_13764.pub host key for VM-39-166-sles10-64, accepted by hadoop Thu Apr 17 2014 12:44:32 +0800 Authentication successful. Last login: Thu Apr 17 2014 09:24:54 +0800 from 10.32.73.69 Welcome to SuSE Linux 10 SP2 64Bit Nov 10,2010 by DIS Version v2.6.20101110 No mail. |
7.5. Modifying slaves
On the NameNode and the SecondaryNameNode, edit the file $HADOOP_HOME/etc/hadoop/slaves and add the slave node IPs (hostnames also work) one by one, one per line, as shown below:
> cat slaves
10.12.154.77
10.12.154.78
10.12.154.79
7.6. Preparing the Configuration Files
The configuration files live in $HADOOP_HOME/etc/hadoop. In Hadoop 2.3.0 and 2.4.0, core-site.xml, yarn-site.xml, hdfs-site.xml and mapred-site.xml in that directory are empty. Starting without configuring them first, e.g. by running start-dfs.sh, leads to all kinds of errors.
You can copy the default files from under $HADOOP_HOME/share/doc/hadoop into $HADOOP_HOME/etc/hadoop and modify them from there (the commands below can be run as-is; note that the default.xml paths in 2.3.0 differ from those in 2.4.0):
# Enter the $HADOOP_HOME directory
cd $HADOOP_HOME
cp ./share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml ./etc/hadoop/core-site.xml
cp ./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml ./etc/hadoop/hdfs-site.xml
cp ./share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml ./etc/hadoop/yarn-site.xml
cp ./share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml ./etc/hadoop/mapred-site.xml
Next, the default core-site.xml, yarn-site.xml, hdfs-site.xml and mapred-site.xml need to be modified appropriately, otherwise startup will still fail.
7.7. Modifying core-site.xml
Modifying core-site.xml involves the properties in the table below:
Property | Value | Applies to |
fs.defaultFS | hdfs://172.25.40.171:9000 | All nodes |
hadoop.tmp.dir | /data/hadoop/current/tmp | All nodes |
dfs.datanode.data.dir | /data/hadoop/current/data | All DataNodes; this property also appears in hdfs-site.xml |
Note that the configured directories must be created before startup, e.g. create /data/hadoop/current/tmp. For details see:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
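For reference, after editing, the three properties above might read roughly as follows in core-site.xml (a minimal sketch using the values from the table; adjust addresses and paths to your own deployment):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://172.25.40.171:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/current/tmp</value>
</property>
<!-- Per the table above, dfs.datanode.data.dir is set here as well as in hdfs-site.xml. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/hadoop/current/data</value>
</property>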
7.8. Modifying hdfs-site.xml
Modifying hdfs-site.xml involves the properties in the table below:
Property | Value | Applies to |
dfs.namenode.rpc-address | 172.25.40.171:9001 | All nodes |
dfs.namenode.secondary.http-address | 172.25.39.166:50090 | NameNode, SecondaryNameNode |
dfs.namenode.name.dir | /data/hadoop/current/dfs/name | NameNode, SecondaryNameNode |
dfs.datanode.data.dir | /data/hadoop/current/data | All DataNodes |
For detailed configuration see:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
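For reference, the four properties above might read roughly as follows in hdfs-site.xml (a minimal sketch using this document's addresses and paths; adjust them to your own deployment):
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>172.25.40.171:9001</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>172.25.39.166:50090</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/current/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/hadoop/current/data</value>
</property>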
7.8.1. dfs.namenode.rpc-address
If this property is not configured, startup reports the following error:
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured. |
Both an IP and a port must be specified here. If only the IP is given, e.g. <value>172.25.40.171</value>, startup prints:
Starting namenodes on [] |
After changing it to "<value>172.25.40.171:9001</value>", startup prints:
Starting namenodes on [172.25.40.171] |
7.9. Modifying mapred-site.xml
Modifying mapred-site.xml involves the properties in the table below:
Property | Value | Applies to |
mapreduce.framework.name | yarn |
For detailed configuration, refer to the mapred-default.xml documentation.
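The corresponding mapred-site.xml entry might look like the following (a minimal sketch; yarn is the value used throughout this document):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>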
7.10. Modifying yarn-site.xml
Modifying yarn-site.xml involves the properties in the table below:
Property | Value | Applies to |
yarn.resourcemanager.hostname | 172.25.40.171 | ResourceManager, NodeManager |
yarn.nodemanager.hostname | 0.0.0.0 | All NodeManagers |
If yarn.nodemanager.hostname is set to a specific IP such as 10.12.154.79, every NodeManager would need a different configuration file. For details see:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
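For reference, the two properties above might read roughly as follows in yarn-site.xml (a minimal sketch; keeping yarn.nodemanager.hostname at 0.0.0.0 lets all NodeManagers share the same configuration file):
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>172.25.40.171</value>
</property>
<property>
  <name>yarn.nodemanager.hostname</name>
  <value>0.0.0.0</value>
</property>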
8. Starting HDFS
Before starting HDFS, the NameNode must be formatted.
8.1. Formatting the NameNode
1) Enter the $HADOOP_HOME/bin directory
2) Run the format: ./hdfs namenode -format
If the output contains "INFO util.ExitUtil: Exiting with status 0", formatting succeeded.
If the hostname-to-IP mapping "172.25.40.171 VM-40-171-sles10-64" has not been added to /etc/hosts, formatting fails with the following error:
14/04/17 03:44:09 WARN net.DNS: Unable to determine local hostname -falling back to "localhost" java.net.UnknownHostException: VM-40-171-sles10-64: VM-40-171-sles10-64: unknown error at java.net.InetAddress.getLocalHost(InetAddress.java:1484) at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264) at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57) at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:945) at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:573) at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:144) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:845) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1256) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370) Caused by: java.net.UnknownHostException: VM-40-171-sles10-64: unknown error at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:907) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1302) at java.net.InetAddress.getLocalHost(InetAddress.java:1479) ... 8 more |
8.2. Starting HDFS
1) Enter the $HADOOP_HOME/sbin directory
2) Start HDFS: ./start-dfs.sh
If startup produces the error shown below, the NameNode cannot log in to itself without a password. If passwordless login by IP already works, the usual cause is that it has never logged in to itself by hostname; the fix is to ssh once using the hostname, e.g. ssh hadoop@VM_40_171_sles10_64, and then start again.
Starting namenodes on [VM_40_171_sles10_64] VM_40_171_sles10_64: Host key not found from database. VM_40_171_sles10_64: Key fingerprint: VM_40_171_sles10_64: xofiz-zilip-tokar-rupyb-tufer-tahyc-sibah-kyvuf-palik-hazyt-duxux VM_40_171_sles10_64: You can get a public key's fingerprint by running VM_40_171_sles10_64: % ssh-keygen -F publickey.pub VM_40_171_sles10_64: on the keyfile. VM_40_171_sles10_64: warning: tcgetattr failed in ssh_rl_set_tty_modes_for_fd: fd 1: Invalid argument |
8.3. Verifying the Startup
1) Use the jps command shipped with the JDK to check whether the expected processes are running
2) Check the log and out files under $HADOOP_HOME/logs for exceptions.
8.3.1. DataNode
Running jps shows the DataNode process:
$ jps
18669 DataNode
24542 Jps
8.3.2. NameNode
Running jps shows the NameNode process:
$ jps
18669 NameNode
24542 Jps
8.3.3. SecondaryNameNode
Running jps shows:
$ jps
24542 Jps
3839 SecondaryNameNode
8.4. Running HDFS Commands
Run some HDFS commands to further verify that the installation and configuration are correct. For usage information, simply run hdfs or hdfs dfs.
8.4.1. hdfs dfs -ls
"hdfs dfs -ls" takes one argument. If the argument starts with "hdfs://URI" it accesses HDFS; otherwise it behaves like ls. URI is the NameNode's IP or hostname and may include a port, i.e. the value of "dfs.namenode.rpc-address" in hdfs-site.xml.
"hdfs dfs -ls" assumes the default port 8020; if a different port is configured (9001 in this deployment), it must be specified in the URI, otherwise it can be omitted, much like accessing a URL in a browser. Example:
> hdfs dfs -ls hdfs://172.25.40.171:9001/ |
The trailing slash after 9001 is required, otherwise the argument is treated as a file. If port 9001 is omitted, the default 8020 is used; "172.25.40.171:9001" is the value of "dfs.namenode.rpc-address" in hdfs-site.xml.
It is easy to see that "hdfs dfs -ls" can operate on different HDFS clusters simply by specifying different URIs.
After upload, a file is stored under the DataNode's data directory (specified by the "dfs.datanode.data.dir" property in the DataNode's hdfs-site.xml), for example:
$HADOOP_HOME/data/current/BP-139798373-172.25.40.171-1397735615751/current/finalized/blk_1073741825
The "blk" in the file name stands for block. By default, blk_1073741825 here is a complete block of the file; Hadoop applies no additional processing to it.
8.4.2. hdfs dfs -put
Command for uploading a file, for example:
> hdfs dfs -put /etc/SuSE-release hdfs://172.25.40.171:9001/ |
8.4.3. hdfs dfs -rm
Command for deleting a file, for example:
> hdfs dfs -rm hdfs://172.25.40.171:9001/SuSE-release Deleted hdfs://172.25.40.171:9001/SuSE-release |
9. Starting YARN
9.1. Starting YARN
1) Enter the $HADOOP_HOME/sbin directory
2) Run start-yarn.sh to start YARN
If startup succeeds, running jps on the master node shows the ResourceManager:
> jps
24689 NameNode
30156 Jps
28861 ResourceManager
Running jps on the slave nodes shows the NodeManager:
$ jps
14019 NodeManager
23257 DataNode
15115 Jps
9.2. Running YARN Commands
9.2.1. yarn node -list
Lists all NodeManagers in the YARN cluster, for example:
> yarn node -list
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
localhost:45980 RUNNING localhost:8042 0
localhost:47551 RUNNING localhost:8042 0
localhost:58394 RUNNING localhost:8042 0
9.2.2. yarn node -status
Shows the status of the specified NodeManager, for example:
> yarn node -status localhost:47551 Node Report : Node-Id : localhost:47551 Rack : /default-rack Node-State : RUNNING Node-Http-Address : localhost:8042 Last-Health-Update : 星期五 18/四月/14 01:45:41:555GMT Health-Report : Containers : 0 Memory-Used : 0MB Memory-Capacity : 8192MB CPU-Used : 0 vcores CPU-Capacity : 8 vcores |
10. Running a MapReduce Program
Ready-made example programs ship under the share/hadoop/mapreduce subdirectory of the installation directory:
hadoop@VM-40-171-sles10-64:~/current> ls share/hadoop/mapreduce hadoop-mapreduce-client-app-2.4.0.jar hadoop-mapreduce-client-jobclient-2.4.0-tests.jar hadoop-mapreduce-client-common-2.4.0.jar hadoop-mapreduce-client-shuffle-2.4.0.jar hadoop-mapreduce-client-core-2.4.0.jar hadoop-mapreduce-examples-2.4.0.jar hadoop-mapreduce-client-hs-2.4.0.jar lib hadoop-mapreduce-client-hs-plugins-2.4.0.jar lib-examples hadoop-mapreduce-client-jobclient-2.4.0.jar sources |
Try running one of them:
hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount ./in ./out |
When wordcount finishes, the result is saved in the out directory, in files with names like "part-r-00000". Two points to note when running this example:
1) The in directory must contain text files, or in itself may be the text file to analyze; it can be a file or directory on HDFS or on the local filesystem
2) The out directory must not already exist; the program creates it itself and reports an error if it already exists.
The jar hadoop-mapreduce-examples-2.4.0.jar contains several example programs; run it without arguments to see the usage:
> hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount
Usage: wordcount <in> <out>
> hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar An example program must be given as the first argument. Valid program names are: aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files. aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files. bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi. dbcount: An example job that count the pageview counts from a database. distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi. grep: A map/reduce program that counts the matches of a regex in the input. join: A job that effects a join over sorted, equally partitioned datasets multifilewc: A job that counts words from several files. pentomino: A map/reduce tile laying program to find solutions to pentomino problems. pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method. randomtextwriter: A map/reduce program that writes 10GB of random textual data per node. randomwriter: A map/reduce program that writes 10GB of random data per node. secondarysort: An example defining a secondary sort to the reduce. sort: A map/reduce program that sorts the data written by the random writer. sudoku: A sudoku solver. teragen: Generate data for the terasort terasort: Run the terasort teravalidate: Checking results of terasort wordcount: A map/reduce program that counts the words in the input files. wordmean: A map/reduce program that counts the average length of the words in the input files. wordmedian: A map/reduce program that counts the median length of the words in the input files. wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files. |
11. Common Errors
11.1. ConnectException when running "hdfs dfs -ls"
A likely cause is that the specified port (9000 here) is wrong; the port is given by the "dfs.namenode.rpc-address" property in hdfs-site.xml, i.e. the NameNode's RPC service port.
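In other words, the port in the URI passed to "hdfs dfs -ls" must match the NameNode's RPC port. For the deployment in this document that is port 9001, set in hdfs-site.xml roughly as shown below (a sketch repeating the value from section 7.8), so the URI should be hdfs://172.25.40.171:9001/:
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>172.25.40.171:9001</value>
</property>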
hdfs dfs -ls hdfs://172.25.40.171:9000 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 14/04/17 12:04:02 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /data/hadoop/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. 14/04/17 12:04:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 14/04/17 12:04:03 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 14/04/17 12:04:03 WARN conf.Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. ls: Call From VM-40-171-sles10-64/172.25.40.171 to VM-40-171-sles10-64:9000 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused |
11.2. Incompatible clusterIDs
The "Incompatible clusterIDs" error is caused by not emptying the DataNodes' data directories before running "hdfs namenode -format".
Some articles and posts online say it is the tmp directory. That advice is not wrong in itself, but for Hadoop 2.4.0 it is the data directory, which the log itself already points out with "/data/hadoop/hadoop-2.4.0/data", so do not follow online fixes rigidly; observe carefully when a problem occurs.
From the above it is clear that the fix is to empty the data directory on every DataNode, taking care not to delete the data directory itself. The data directory is specified by the "dfs.datanode.data.dir" property in hdfs-site.xml.
2014-04-17 19:30:33,075 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/hadoop/hadoop-2.4.0/data/in_use.lock acquired by nodename 28326@localhost 2014-04-17 19:30:33,078 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to /172.25.40.171:9001 java.io.IOException: Incompatible clusterIDs in /data/hadoop/hadoop-2.4.0/data: namenode clusterID = CID-50401d89-a33e-47bf-9d14-914d8f1c4862; datanode clusterID = CID-153d6fcb-d037-4156-b63a-10d6be224091 at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472) at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225) at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249) at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929) at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815) at java.lang.Thread.run(Thread.java:744) 2014-04-17 19:30:33,081 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /172.25.40.171:9001 2014-04-17 19:30:33,184 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN java.lang.Exception: trace at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91) at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837) at java.lang.Thread.run(Thread.java:744) 2014-04-17 19:30:33,184 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned) 2014-04-17 19:30:33,184 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN java.lang.Exception: trace at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143) at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:861) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837) at java.lang.Thread.run(Thread.java:744) 2014-04-17 19:30:35,185 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode 2014-04-17 19:30:35,187 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0 2014-04-17 19:30:35,189 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1 
************************************************************/ |
11.3. Inconsistent checkpoint fields
The "Inconsistent checkpoint fields" error in the SecondaryNameNode is likely caused by "hadoop.tmp.dir" not being set properly in core-site.xml on the SecondaryNameNode.
2014-04-17 11:42:18,189 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :1000000 txns 2014-04-17 11:43:18,365 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint java.io.IOException: Inconsistent checkpoint fields. LV = -56 namespaceID = 1384221685 cTime = 0 ; clusterId = CID-319b9698-c88d-4fe2-8cb2-c4f440f690d4 ; blockpoolId = BP-1627258458-172.25.40.171-1397735061985. Expecting respectively: -56; 476845826; 0; CID-50401d89-a33e-47bf-9d14-914d8f1c4862; BP-2131387753-172.25.40.171-1397730036484. at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:135) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:518) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:383) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:349) at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:345) at java.lang.Thread.run(Thread.java:744)
In addition, make sure that "dfs.datanode.data.dir" in hdfs-site.xml on the SecondaryNameNode is set to a suitable value. The hadoop.tmp.dir setting mentioned above looks like this:
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/current/tmp</value>
<description>A base for other temporary directories.</description>
</property>
12. Related Documents
《HBase-0.98.0分布式安装指南》
《Hive 0.12.0安装指南》
《ZooKeeper-3.4.6分布式安装指南》
《Hadoop 2.3.0源码反向工程》
《在Linux上编译Hadoop-2.4.0》
《Accumulo-1.5.1安装指南》
《Drill 1.0.0安装指南》
《Shark 0.9.1安装指南》
For more, follow the technical blog at http://aquester.cublog.cn.