1.1 Installing Hadoop
1.1.1 Hadoop must be recompiled
The Hadoop distribution downloaded from the official site ships 32-bit native libraries, but our operating system is 64-bit CentOS, so we download the source and recompile. A pre-built copy is available here:
链接:http://pan.baidu.com/s/1sl97QDv 密码:i2hy
1.1.2 Versions
JDK version: java version "1.7.0_79"
Hadoop version: 2.6.0
1.1.3 Preparing the virtual machines
Step 1: Prepare three virtual machines.
Step 2: Manually configure each server's IP address.
Option 1: configure it directly in the desktop network settings window.
Option 2: edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file (the interface name may differ, e.g. ifcfg-eth1).
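For option 2, a static configuration in the ifcfg file might look like the sketch below. The device name and all addresses are assumptions chosen to match this guide's 192.168.1.x network; adjust them to your own environment.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (device name may differ)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes            # bring the interface up at boot
BOOTPROTO=static      # static address instead of DHCP
IPADDR=192.168.1.110  # this host's address (nameNode in this guide)
NETMASK=255.255.255.0
GATEWAY=192.168.1.1   # assumed gateway
DNS1=192.168.1.1      # assumed DNS server
```

After editing the file, restart the network service (service network restart) for the change to take effect.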
Step 3: Set each virtual machine's hostname to match its IP address.
Run: vim /etc/sysconfig/network
Change the value to the right of HOSTNAME= to: nameNode
Step 4: Edit /etc/hosts (vim /etc/hosts) and add a line of the form: <ip address> <hostname>
For example: 192.168.1.110 nameNode
Step 5: Restart the network service.
Run: service network restart
After running the steps above on all three virtual machines you should have:
192.168.1.110 nameNode
192.168.1.111 dataNode1
192.168.1.112 dataNode2
(If the hostname does not take effect immediately, set it in the running system as well, e.g. hostname nameNode; if it still does not take effect, reboot all nodes.)
1.1.4 Configuring passwordless SSH login
In a Hadoop cluster, the nameNode must be able to log in to the dataNode hosts over SSH without a password.
Step 1: Go to root's home directory (~):
[root@nameNode ~]# cd ~
[root@nameNode ~]# ll
total 84
-rw-------. 1 root root  2180 Nov 17 08:14 anaconda-ks.cfg
-rw-r--r--. 1 root root 62826 Nov 17 08:13 install.log
-rw-r--r--. 1 root root 11949 Nov 17 08:07 install.log.syslog
[root@nameNode ~]#
Step 2: Generate the public/private key pair.
[root@nameNode ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1e:d0:45:d9:69:c7:d5:8a:53:c7:a3:76:c9:92:c6:2b root@nameNode
The key's randomart image is:
+--[ RSA 2048]----+
|         .oo o oo|
|       . .. + +.+|
|       . . ..++oo|
|      . oB.+     |
|        S o.+    |
|     . . E .     |
|    . .          |
|                 |
|                 |
+-----------------+
[root@nameNode ~]#
Step 3: Two new files now exist under the .ssh directory.

[root@nameNode ~]# cd .ssh
[root@nameNode .ssh]# ll
total 8
-rw-------. 1 root root 1675 Feb 21 21:43 id_rsa
-rw-r--r--. 1 root root  395 Feb 21 21:43 id_rsa.pub
[root@nameNode .ssh]#

Private key file: id_rsa
Public key file: id_rsa.pub
Step 4: Append the contents of the public key file id_rsa.pub to the authorized_keys file:

cat id_rsa.pub >> authorized_keys
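sshd is typically strict about permissions on the .ssh directory and the authorized_keys file, so the append step is usually paired with permission fixes. The sketch below replays the append in a throwaway directory so it is safe to run anywhere; on the real nameNode the directory would be /root/.ssh, and the public-key line here is a stand-in, not a real key.

```shell
# Replay the append + permission fix in a scratch directory (safe to try anywhere).
dir=$(mktemp -d)                                       # stands in for /root/.ssh
echo "ssh-rsa AAAAB3NzaC1yc2Efake root@nameNode" > "$dir/id_rsa.pub"  # stand-in key
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"        # the append step from this guide
chmod 700 "$dir"                                       # .ssh itself: owner-only access
chmod 600 "$dir/authorized_keys"                       # key file: owner read/write only
ls -l "$dir/authorized_keys"
```

With looser permissions than this, sshd will usually refuse the key and fall back to password login.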
Step 5: Distribute authorized_keys to each data node.
Before doing this, the data nodes' LAN IP addresses must be configured in /etc/hosts on the nameNode host, i.e.:

192.168.1.110 nameNode
192.168.1.111 dataNode1
192.168.1.112 dataNode2

[root@nameNode .ssh]# scp authorized_keys root@dataNode1:/root/.ssh/
The authenticity of host 'datanode1 (192.168.1.111)' can't be established.
RSA key fingerprint is 5a:19:12:ac:62:f9:54:d1:2f:ba:35:94:1f:b2:bc:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'datanode1,192.168.1.111' (RSA) to the list of known hosts.
root@datanode1's password:
scp: /root/.ssh/: Is a directory
[root@nameNode .ssh]#

(The "Is a directory" message in this capture means the copy did not complete; give the destination file name explicitly, e.g. scp authorized_keys root@dataNode1:/root/.ssh/authorized_keys, and make sure /root/.ssh exists on the data node.)
Step 6: Verify passwordless SSH login; output like the following indicates success.

[root@nameNode ~]# ssh dataNode1
Last login: Wed Feb 22 22:34:24 2017 from namenode
[root@dataNode1 ~]#
1.1.5 Hadoop installation steps
Step 1: Use rz to upload the archive hadoop-2.6.0.tar.gz to the /export/software/ directory.
Step 2: Extract hadoop-2.6.0.tar.gz into the /export/servers/ directory.
Run: tar -zxvf hadoop-2.6.0.tar.gz -C /export/servers/
Step 3: Configure the environment variables.
Run: vim /etc/profile
Add the following lines to the file:

export HADOOP_HOME=/export/servers/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin

Make the changes take effect immediately:
Run: source /etc/profile
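A quick way to confirm the two variables took effect in the current shell is the sketch below; the export lines are repeated so it is self-contained (the paths are the ones this guide installs to):

```shell
# After `source /etc/profile` these exports are already in place;
# they are repeated here only so the check runs standalone.
export HADOOP_HOME=/export/servers/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin
echo "$HADOOP_HOME"
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH contains HADOOP_HOME/bin" ;;
  *)                      echo "PATH is missing HADOOP_HOME/bin" ;;
esac
```

Once the variables are in place, hadoop commands can be run from any directory.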
Step 4: Under /export/servers/hadoop-2.6.0/, create the data directories Hadoop will use (hadoop_save_dir is a directory of our own choosing; the names must match the paths referenced in core-site.xml and hdfs-site.xml below):

mkdir -p /export/servers/hadoop-2.6.0/hadoop_save_dir/hadoop_tmp
mkdir -p /export/servers/hadoop-2.6.0/hadoop_save_dir/dfs/name
mkdir -p /export/servers/hadoop-2.6.0/hadoop_save_dir/dfs/data
Step 5: Go to the /export/servers/hadoop-2.6.0/etc/hadoop directory and edit the configuration files below.
1. Configure core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameNode:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/servers/hadoop-2.6.0/hadoop_save_dir/hadoop_tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
</configuration>
2. Configure hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>nameNode:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///export/servers/hadoop-2.6.0/hadoop_save_dir/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///export/servers/hadoop-2.6.0/hadoop_save_dir/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
3. Configure mapred-site.xml (this file may not exist yet; create it with cp mapred-site.xml.template mapred-site.xml)

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>nameNode:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>nameNode:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>nameNode:19888</value>
  </property>
</configuration>
4. Configure yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>nameNode:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>nameNode:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>nameNode:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>nameNode:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>nameNode:8088</value>
  </property>
</configuration>
5. Open ${HADOOP_HOME}/etc/hadoop/slaves with vim (it sits in the same directory as the configuration files above) and write the worker host names into it:

dataNode1
dataNode2
6. Add a JAVA_HOME setting to both hadoop-env.sh and yarn-env.sh.
In hadoop-env.sh:

export JAVA_HOME=/export/servers/jdk

In yarn-env.sh:

export JAVA_HOME=/export/servers/jdk
1.1.6 Starting Hadoop
1. Format the file system (run from the HADOOP_HOME directory):
Run: bin/hdfs namenode -format
(bin/hadoop namenode -format is the older, deprecated form of the same command; only one of the two needs to be run.)
Output (the very long classpath listing has been trimmed for readability):

[root@nameNode hadoop-2.6.0]# bin/hdfs namenode –format
17/02/23 18:53:09 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = nameNode/192.168.1.110
STARTUP_MSG:   args = [–format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /export/servers/hadoop-2.6.0/etc/hadoop:...(trimmed)...:/export/servers/hadoop-2.6.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
17/02/23 18:53:09 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/02/23 18:53:09 INFO namenode.NameNode: createNameNode [–format]
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <rollback|downgrade|started> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ]
17/02/23 18:53:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at nameNode/192.168.1.110
************************************************************/
[root@nameNode hadoop-2.6.0]#

(Note: in the capture above the hyphen in -format was pasted as a Unicode en dash (–), so the NameNode did not recognize the option, printed its usage help, and shut down without formatting anything. Retype the command with a plain ASCII hyphen; a successful run ends with a message like "Storage directory ... has been successfully formatted.")
2. Start the NameNode and DataNode daemons.
Run: sbin/start-dfs.sh

[root@nameNode hadoop-2.6.0]# sbin/start-dfs.sh
17/02/23 19:23:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [nameNode]
nameNode: starting namenode, logging to /export/servers/hadoop-2.6.0/logs/hadoop-root-namenode-nameNode.out
dataNode2: starting datanode, logging to /export/servers/hadoop-2.6.0/logs/hadoop-root-datanode-dataNode2.out
dataNode1: starting datanode, logging to /export/servers/hadoop-2.6.0/logs/hadoop-root-datanode-dataNode1.out
17/02/23 19:23:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@nameNode hadoop-2.6.0]#
3. Start the ResourceManager and NodeManager daemons.
Run: sbin/start-yarn.sh

[root@nameNode hadoop-2.6.0]# sbin/start-yarn.sh
starting yarn daemons
resourcemanager running as process 7090. Stop it first.
dataNode1: starting nodemanager, logging to /export/servers/hadoop-2.6.0/logs/yarn-root-nodemanager-dataNode1.out
dataNode2: starting nodemanager, logging to /export/servers/hadoop-2.6.0/logs/yarn-root-nodemanager-dataNode2.out
[root@nameNode hadoop-2.6.0]#

("resourcemanager running as process 7090. Stop it first." simply means a ResourceManager was already running from a previous attempt.)
4. Check whether everything started successfully.
Run jps on each virtual machine. Combined output from the three hosts:

7485 DataNode
7794 Jps
7628 NodeManager
7013 DataNode
7155 NodeManager
7309 Jps
8636 ResourceManager
8973 Jps

Each data node should show DataNode and NodeManager; besides ResourceManager, the nameNode host should also show NameNode and SecondaryNameNode. If NameNode is missing, check that the format step above actually succeeded.

To stop everything: sbin/stop-all.sh (deprecated but still functional in 2.6.0; sbin/stop-dfs.sh followed by sbin/stop-yarn.sh does the same).