Hadoop 2 Environment Setup

# Hadoop Pseudo-Distributed Mode: HDFS + YARN

1. Download Hadoop 2.5.2

1.1. On the Apache Hadoop website http://hadoop.apache.org/, click the Download Hadoop link.
1.2. Choose the 2.5.2 release of Hadoop.
1.3. Click the binary link.
1.4. The site automatically suggests a mirror download URL.
1.5. Copy the download URL and download the hadoop-2.5.2.tar.gz package on the virtual machine.
Run pwd to check the current working directory.
Run mkdir software to create a new directory.
Run cd software to enter the software directory.
Run
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
to download the hadoop-2.5.2.tar.gz package (the VM must have Internet access for the download to work).
1.6. When the download finishes, run ls to see the downloaded package, then extract it.
Run tar zxvf hadoop-2.5.2.tar.gz
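
Putting the steps above together, the complete download-and-extract sequence (exactly the commands used in this guide) is:

pwd
mkdir software
cd software
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
ls
tar zxvf hadoop-2.5.2.tar.gz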

2. Configure hosts
2.1. Check the hostname
To see the current machine's hostname, run hostname
To see the hostname in the config file, run cat /etc/sysconfig/network
2.2. Check the IP address
Because the IP address is assigned automatically, it may change, so double-check it with the ifconfig command.
You can of course set a static IP instead; just make sure it is unique on the current network and does not clash with any other machine's IP.
Run ifconfig
2.3. Map the hostname to the IP
Run sudo vim /etc/hosts
Add the line 192.168.1.119 chinahadoop0
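
To confirm the mapping resolves, a quick check (the hostname and IP are the example values used throughout this guide):

ping -c 1 chinahadoop0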

3. Modify the Hadoop configuration files
3.1. Where the config files live
All configuration files are under etc/hadoop/ inside the Hadoop installation directory.

3.2. Modify the slaves file
In the slaves file, change localhost to chinahadoop0


3.3. Modify the hadoop-env.sh file
In hadoop-env.sh, set the value of JAVA_HOME to the JDK installation directory.
You can find the JDK installation directory with echo $JAVA_HOME.


Run vim etc/hadoop/hadoop-env.sh
and set JAVA_HOME to /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64
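
After the edit, the line in etc/hadoop/hadoop-env.sh should read (path from this example; yours may differ):

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64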


3.4. Modify the mapred-site.xml file
Run mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
to rename mapred-site.xml.template to mapred-site.xml.
Then run vim etc/hadoop/mapred-site.xml

Add the following inside the <configuration> element:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

3.5. Modify the core-site.xml file
First check the current machine's hostname; it is used in the value below.

Run vim etc/hadoop/core-site.xml
Add the following inside the <configuration> element (note: fs.default.name is the deprecated Hadoop 2.x alias of fs.defaultFS; either key works here):
<property>
  <name>fs.default.name</name>
  <value>hdfs://chinahadoop0:8020</value>
</property>


3.6. Modify the hdfs-site.xml file
Run vim etc/hadoop/hdfs-site.xml
Add the following inside the <configuration> element:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/chinahadoop/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/chinahadoop/dfs/data</value>
</property>

Notes: 1. The replication factor dfs.replication defaults to 3; on a single machine set it to 1.
2. By default, dfs.namenode.name.dir and dfs.datanode.data.dir live under hadoop.tmp.dir,
which itself defaults to a directory under /tmp, so the data would not survive a cleanup of /tmp.
3. Here they are moved to a non-tmp location. The directories do not need to exist beforehand;
they are created automatically when Hadoop is formatted and started.


3.7. Modify the yarn-site.xml file
Run vim etc/hadoop/yarn-site.xml
Add the following inside the <configuration> element:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
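
To double-check all four edits at once, here is a quick optional sanity check; it prints each configured <name> line together with the <value> line that follows it:

grep -A1 '<name>' etc/hadoop/{mapred-site,core-site,hdfs-site,yarn-site}.xml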

4. Set up the pseudo-distributed environment
4.1. Format the namenode
The first time you set up a Hadoop environment, the namenode must be formatted.
Run bin/hadoop namenode -format
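
Note: in Hadoop 2.x the hadoop namenode -format form still works but is deprecated (the appendix log below even prints a DEPRECATED warning); the current equivalent command is:

bin/hdfs namenode -format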


In the console output you should see
16/01/29 12:08:17 INFO common.Storage: Storage directory /home/chinahadoop/dfs/name has been successfully formatted.
which means the format succeeded and the directory /home/chinahadoop/dfs/name from the config file has been created.


4.2. Start the namenode
Run sbin/hadoop-daemon.sh start namenode
Then check with jps
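
jps should now list a NameNode process, roughly like the following (PIDs are illustrative and will differ on your machine):

$ jps
3042 NameNode
3110 Jps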



4.3. Start the datanode
Run sbin/hadoop-daemon.sh start datanode

Because the VM already maps the hostname to its IP, you can open the following URL in a browser inside the CentOS 6.6 VM:
http://chinahadoop0:50070


4.4. Start YARN
Run sbin/start-yarn.sh

In a browser inside the CentOS 6.6 VM, open http://chinahadoop0:8088
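
At this point jps should list all four daemons of this pseudo-distributed HDFS + YARN setup (PIDs are again illustrative):

$ jps
3042 NameNode
3190 DataNode
3356 ResourceManager
3470 NodeManager
3588 Jps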




5. Run a MapReduce job
Run bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 10
The job output looks like this:
[chinahadoop@chinahadoop0 hadoop-2.5.2]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 10
Number of Maps = 2
Samples per Map = 10
16/01/29 15:05:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
16/01/29 15:05:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/01/29 15:05:56 INFO input.FileInputFormat: Total input paths to process : 2
16/01/29 15:05:56 INFO mapreduce.JobSubmitter: number of splits:2
16/01/29 15:05:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1454050365480_0001
16/01/29 15:05:57 INFO impl.YarnClientImpl: Submitted application application_1454050365480_0001
16/01/29 15:05:57 INFO mapreduce.Job: The url to track the job: http://chinahadoop0:8088/proxy/application_1454050365480_0001/
16/01/29 15:05:57 INFO mapreduce.Job: Running job: job_1454050365480_0001
16/01/29 15:06:19 INFO mapreduce.Job: Job job_1454050365480_0001 running in uber mode : false
16/01/29 15:06:20 INFO mapreduce.Job: map 0% reduce 0%
16/01/29 15:08:58 INFO mapreduce.Job: map 100% reduce 0%
16/01/29 15:10:00 INFO mapreduce.Job: map 100% reduce 100%
16/01/29 15:10:03 INFO mapreduce.Job: Job job_1454050365480_0001 completed successfully
16/01/29 15:10:05 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=50
		FILE: Number of bytes written=292413
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=548
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=11
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters
		Launched map tasks=2
		Launched reduce tasks=1
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=346087
		Total time spent by all reduces in occupied slots (ms)=26951
		Total time spent by all map tasks (ms)=346087
		Total time spent by all reduce tasks (ms)=26951
		Total vcore-seconds taken by all map tasks=346087
		Total vcore-seconds taken by all reduce tasks=26951
		Total megabyte-seconds taken by all map tasks=354393088
		Total megabyte-seconds taken by all reduce tasks=27597824
	Map-Reduce Framework
		Map input records=2
		Map output records=4
		Map output bytes=36
		Map output materialized bytes=56
		Input split bytes=312
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=56
		Reduce input records=4
		Reduce output records=0
		Spilled Records=8
		Shuffled Maps =2
		Failed Shuffles=0
		Merged Map outputs=2
		GC time elapsed (ms)=6021
		CPU time spent (ms)=4270
		Physical memory (bytes) snapshot=383336448
		Virtual memory (bytes) snapshot=2925305856
		Total committed heap usage (bytes)=257433600
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=236
	File Output Format Counters
		Bytes Written=97
Job Finished in 250.186 seconds
Estimated value of Pi is 3.80000000000000000000
Refresh http://chinahadoop0:8088 in the browser inside the CentOS 6.6 VM;
you can see that a job has been submitted.

6. Access the Hadoop cluster from a browser on Windows 7
6.1. Turn off the firewall on the CentOS 6.6 VM
The hosts file (on the Windows side as well) should already map chinahadoop0 to the VM's IP.
To stop the firewall immediately: sudo service iptables stop
To check the firewall status: sudo service iptables status
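
If you also want the firewall to stay off across reboots (optional, CentOS 6 specific):

sudo chkconfig iptables off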


6.2. Access from the Windows 7 browser
Visit chinahadoop0:50070

Visit chinahadoop0:8088


7. Stop the cluster
Run sbin/stop-yarn.sh
Run sbin/hadoop-daemon.sh stop datanode
Run sbin/hadoop-daemon.sh stop namenode



8. Configure passwordless SSH login
8.1. Generate a key pair
As the walkthrough showed, starting and stopping YARN prompts for a password each time, so passwordless login needs to be configured.
An rsa key pair is generated by default, so you can simply run ssh-keygen
or, equivalently, ssh-keygen -t rsa

8.2. Copy the public key
Run ssh-copy-id
Using the usage message it prints, locate the ssh-copy-id script file.

Open the script with vim just to read it; the syntax highlighting makes it easier to follow.
First, note the default value of ID_FILE.


Next, you can see where the message printed by the earlier run comes from.


Finally, once you are done reading, force-quit without saving; do not modify the file. The force-quit command is :q!
Five takeaways: 1. The default identity_file is ${HOME}/.ssh/id_rsa.pub
2. Square brackets [ ] denote optional arguments.
3. Since we are logged in as the chinahadoop user, ${HOME}=/home/chinahadoop
4. If [user@] is omitted, it defaults to the current login user, chinahadoop.
5. For machine, use the hostname.
Conclusion: simply run ssh-copy-id chinahadoop0

To see what is in the .ssh directory, run ls ~/.ssh/
Since we are logged in as chinahadoop, this is equivalent to ls /home/chinahadoop/.ssh/


To inspect the authorized_keys file, run cat ~/.ssh/authorized_keys

Verify with ssh chinahadoop0

8.3. Start the cluster again
Start the cluster again; this time no password prompt appears.
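
In short, the whole passwordless-login setup boils down to the three commands used above:

ssh-keygen -t rsa        # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id chinahadoop0 # appends id_rsa.pub to ~/.ssh/authorized_keys on chinahadoop0
ssh chinahadoop0         # should now log in without a password prompt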


------------------------------------------------------------------------------------------------------------------------------
#yum install pip

# SecureCRT: enabling color and Chinese text
Go to Options -> Session Options -> Emulation, change the Terminal type to xterm, and check the ANSI Color box.
Font: Options -> Session Options -> Appearance -> Font, then pick whatever font you like.
Notes:
1) Choose the UTF-8 character set to avoid garbled Chinese characters.
2) When picking a font, choose a TrueType font (e.g. NSimSun); otherwise Chinese characters may render garbled.
3) Increase the scrollback buffer (e.g. 5000) so you can scroll back through earlier output; it helps a lot.
4) The terminal type must be xterm for colors to show when you SSH into the server, and ANSI Color must be checked.
5) I use the Windows or Traditional color scheme.

# The Windows SSH client's colors are always dark blue; how can that be changed?
https://zhidao.baidu.com/question/554631084218621852.html

# Logs: formatting the namenode && starting the cluster

[root@master hadoop-2.6.1]# cd bin
[root@master bin]# ./hadoop namenode -format    
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/08/04 10:44:46 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.183.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.1
STARTUP_MSG:   classpath = /usr/local/src/hadoop-2.6.1/etc/hadoop:/usr/local/src/hadoop-2.6.1/share/hadoop/common/lib/gson-2.2.4.jar:... (several hundred jars under /usr/local/src/hadoop-2.6.1/share/hadoop/{common,hdfs,yarn,mapreduce} and contrib/capacity-scheduler; full classpath omitted here)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b4d876d837b830405ccdb6af94742f99d49f9c04; compiled by 'jenkins' on 2015-09-16T21:07Z
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
17/08/04 10:44:46 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/08/04 10:44:46 INFO namenode.NameNode: createNameNode [-format]
17/08/04 10:44:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-868a2bee-bf93-492f-b270-8f0715abd865
17/08/04 10:44:49 INFO namenode.FSNamesystem: No KeyProvider found.
17/08/04 10:44:49 INFO namenode.FSNamesystem: fsLock is fair:true
17/08/04 10:44:49 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/08/04 10:44:49 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/08/04 10:44:49 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/08/04 10:44:49 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Aug 04 10:44:49
17/08/04 10:44:49 INFO util.GSet: Computing capacity for map BlocksMap
17/08/04 10:44:49 INFO util.GSet: VM type       = 64-bit
17/08/04 10:44:49 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/08/04 10:44:49 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/08/04 10:44:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/08/04 10:44:50 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/08/04 10:44:50 INFO blockmanagement.BlockManager: maxReplication             = 512
17/08/04 10:44:50 INFO blockmanagement.BlockManager: minReplication             = 1
17/08/04 10:44:50 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/08/04 10:44:50 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/08/04 10:44:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/08/04 10:44:50 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/08/04 10:44:50 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/08/04 10:44:50 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/08/04 10:44:50 INFO namenode.FSNamesystem: supergroup          = supergroup
17/08/04 10:44:50 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/08/04 10:44:50 INFO namenode.FSNamesystem: HA Enabled: false
17/08/04 10:44:50 INFO namenode.FSNamesystem: Append Enabled: true
17/08/04 10:44:50 INFO util.GSet: Computing capacity for map INodeMap
17/08/04 10:44:50 INFO util.GSet: VM type       = 64-bit
17/08/04 10:44:50 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/08/04 10:44:50 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/08/04 10:44:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/08/04 10:44:50 INFO util.GSet: Computing capacity for map cachedBlocks
17/08/04 10:44:50 INFO util.GSet: VM type       = 64-bit
17/08/04 10:44:50 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/08/04 10:44:50 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/08/04 10:44:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/08/04 10:44:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/08/04 10:44:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/08/04 10:44:50 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/08/04 10:44:50 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/08/04 10:44:50 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/08/04 10:44:50 INFO util.GSet: VM type       = 64-bit
17/08/04 10:44:50 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/08/04 10:44:50 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/08/04 10:44:50 INFO namenode.NNConf: ACLs enabled? false
17/08/04 10:44:50 INFO namenode.NNConf: XAttrs enabled? true
17/08/04 10:44:50 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /usr/local/src/hadoop-2.6.1/dfs/name ? (Y or N) y
17/08/04 10:44:59 INFO namenode.FSImage: Allocated new BlockPoolId: BP-555688475-192.168.183.100-1501868699307
17/08/04 10:44:59 INFO common.Storage: Storage directory /usr/local/src/hadoop-2.6.1/dfs/name has been successfully formatted.
17/08/04 10:45:00 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/08/04 10:45:00 INFO util.ExitUtil: Exiting with status 0
17/08/04 10:45:00 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.183.100
************************************************************/
[root@master bin]# cd ..
[root@master hadoop-2.6.1]# ls
bin  dfs  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[root@master hadoop-2.6.1]# cd sbin
[root@master sbin]# ls
distribute-exclude.sh  hdfs-config.sh           refresh-namenodes.sh  start-balancer.sh    start-yarn.cmd  stop-balancer.sh    stop-yarn.cmd
hadoop-daemon.sh       httpfs.sh                slaves.sh             start-dfs.cmd        start-yarn.sh   stop-dfs.cmd        stop-yarn.sh
hadoop-daemons.sh      kms.sh                   start-all.cmd         start-dfs.sh         stop-all.cmd    stop-dfs.sh         yarn-daemon.sh
hdfs-config.cmd        mr-jobhistory-daemon.sh  start-all.sh          start-secure-dns.sh  stop-all.sh     stop-secure-dns.sh  yarn-daemons.sh
[root@master sbin]# ./start-dfs.sh
17/08/04 10:45:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop-2.6.1/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /usr/local/src/hadoop-2.6.1/logs/hadoop-root-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/src/hadoop-2.6.1/logs/hadoop-root-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/local/src/hadoop-2.6.1/logs/hadoop-root-secondarynamenode-master.out
17/08/04 10:45:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@master sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop-2.6.1/logs/yarn-badou-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/local/src/hadoop-2.6.1/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /usr/local/src/hadoop-2.6.1/logs/yarn-root-nodemanager-slave1.out
[root@master sbin]# jps
5736 ResourceManager
5997 Jps
5424 NameNode
5595 SecondaryNameNode
[root@master sbin]# cd ..
[root@master hadoop-2.6.1]# cd /src/local/src
bash: cd: /src/local/src: No such file or directory
[root@master hadoop-2.6.1]# jps
5736 ResourceManager
5424 NameNode
5595 SecondaryNameNode
6007 Jps
[root@master hadoop-2.6.1]# jps
5736 ResourceManager
6017 Jps
5424 NameNode
5595 SecondaryNameNode
[root@master hadoop-2.6.1]# jps
5736 ResourceManager
5424 NameNode
5595 SecondaryNameNode
6027 Jps
[root@master hadoop-2.6.1]# 