Hadoop is a software platform for developing and running applications that process data at large scale. It is an open-source Apache framework implemented in Java that performs distributed computation over massive data sets on clusters built from large numbers of machines.
The two core pieces of the Hadoop framework are HDFS and MapReduce: HDFS provides storage for massive amounts of data, and MapReduce provides the computation over that data.
1 Hardware and software environment
Hardware: four CentOS 7.1 servers (one master node, three slave nodes)
Software: jdk-7u80-linux-x64.rpm, hadoop-2.7.3.tar.gz
NameNode:
Hostname: master
IP: 10.19.85.100
DataNode (3 nodes):
Hostname: slave1
IP: 10.19.85.101
Hostname: slave2
IP: 10.19.85.102
Hostname: slave3
IP: 10.19.85.103
2 Set static IPs, add a user, and configure hosts
2.1 Set a static IP
A static IP keeps the machines reachable at fixed addresses; otherwise a router or machine reboot can change an IP and break communication. Edit the interface configuration file:
[root@master ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
Change BOOTPROTO=dhcp to BOOTPROTO=none
Add the following at the end of the file:
ONBOOT=yes
IPADDR=10.19.85.100 (<color #ed1c24>use each machine's own IP address</color>)
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=10.19.85.1
DNS1=10.1.2.242
After saving, restart the network service so the new IP address takes effect:
[root@master ~]# service network restart
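To double-check that the static address really took effect (an optional sanity check, assuming the interface is still named eno16777736), list the interface's addresses:
[root@master ~]# ip addr show eno16777736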
2.2 Set the hostname
[root@master ~]# vi /etc/sysconfig/network
HOSTNAME=master (each machine gets its own name: the master node is set to master, and the slave nodes to slave1, slave2, and slave3)
Restart the network: service network restart
Reboot the machine (reboot), then run hostname to verify the change.
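On CentOS 7 the hostname can also be set with systemd's hostnamectl instead of editing files by hand; this is an optional alternative, shown here for the master (use slave1/slave2/slave3 on the other machines):
[root@master ~]# hostnamectl set-hostname master
[root@master ~]# hostname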
2.3 Disable the firewall
Note: if a node fails to come up later, check first whether the firewall is really off.
[root@master ~]# systemctl stop firewalld
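systemctl stop only turns the firewall off until the next reboot. For a lab cluster like this one it is usually also worth keeping it off across reboots (do not do this on an exposed machine):
[root@master ~]# systemctl disable firewalld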
2.4 Create a hadoop user
It is best not to operate Hadoop as root: root has superuser privileges, and letting the machines access each other as root is not recommended. On every node:
useradd hadoop (add the user)
passwd hadoop (set its password)
2.5 Configure the hosts file
This only needs to be done on the master; the file is then pushed to the other machines with scp.
[root@master ~]# vi /etc/hosts
Append the following to the file:
10.19.85.100 master
10.19.85.101 slave1
10.19.85.102 slave2
10.19.85.103 slave3
Distribute it to the other machines:
scp -r /etc/hosts root@slave1:/etc/
scp -r /etc/hosts root@slave2:/etc/
scp -r /etc/hosts root@slave3:/etc/
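As a quick optional check, make sure the new names resolve from the master:
[root@master ~]# ping -c 1 slave1
[root@master ~]# ping -c 1 slave2
[root@master ~]# ping -c 1 slave3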
3 Passwordless SSH
3.1 Generate the key pairs
On the master and on all slave machines, switch to the <color #ed1c24>hadoop user</color>, run ssh-keygen -t rsa, and just press Enter at every prompt:
[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a8:05:f9:88:98:a9:f3:f4:5e:38:47:09:35:f9:41:68 hadoop@master
The key's randomart image is:
+--[ RSA 2048]----+
| o+. |
| oE.. |
| +. . . |
| + . = o. |
|+ . . * S |
|. = |
|o . + o |
| + . + |
| ..o |
+-----------------+
On every machine, give the public key a unique name (so the copies do not overwrite each other later):
master node: cp ~/.ssh/id_rsa.pub ~/.ssh/id_rsa.pub.100
slave1 node: cp ~/.ssh/id_rsa.pub ~/.ssh/id_rsa.pub.101
slave2 node: cp ~/.ssh/id_rsa.pub ~/.ssh/id_rsa.pub.102
slave3 node: cp ~/.ssh/id_rsa.pub ~/.ssh/id_rsa.pub.103
3.2 Copy the public keys to the master node
slave1 node: scp -r ~/.ssh/id_rsa.pub.101 hadoop@master:/home/hadoop/.ssh
slave2 node: scp -r ~/.ssh/id_rsa.pub.102 hadoop@master:/home/hadoop/.ssh
slave3 node: scp -r ~/.ssh/id_rsa.pub.103 hadoop@master:/home/hadoop/.ssh
3.3 Merge all public keys on the master node
cat ~/.ssh/id_rsa.pub.100 >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub.101 >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub.102 >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub.103 >> ~/.ssh/authorized_keys
These commands must be run as the hadoop user on the master node.
3.4 Fix the permissions of authorized_keys
The group must not have write (w) permission on authorized_keys, or sshd will ignore it for security reasons; mode 644 is fine. Run:
chmod g-w authorized_keys
3.5 Distribute the authorized_keys file
On the master node, run:
scp -r ~/.ssh/authorized_keys hadoop@slave1:/home/hadoop/.ssh
scp -r ~/.ssh/authorized_keys hadoop@slave2:/home/hadoop/.ssh
scp -r ~/.ssh/authorized_keys hadoop@slave3:/home/hadoop/.ssh
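One more permission detail that often trips people up: sshd also refuses key login when the home directory or ~/.ssh is group- or world-writable. If a node still asks for a password after this step, tighten those as the hadoop user on that node (a precaution, not strictly required if the defaults were never changed):
chmod g-w ~
chmod 700 ~/.ssh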
Verify it: ssh slave1
And just like that, we can log in without a password, between any pair of machines no less.
4 Download & install the JDK
4.1 Download the JDK package
The experts recommend the official Oracle JDK 7; here we use the [[http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html|jdk-7u80-linux-x64.rpm]] package. After downloading, copy it to every node.
4.2 Remove the old JDK
Before installing, uninstall any JDK that shipped with the system. You may well ask: how do we know which JDKs are installed?
We can query the RPM database:
[root@master hadoop]# rpm -qa | grep jdk
java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.91-2.6.2.3.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.65-3.b17.el7.x86_64
java-1.7.0-openjdk-1.7.0.91-2.6.2.3.el7.x86_64
Run the removal commands on every node:
[root@master hadoop]# rpm -e --nodeps java-1.7.0-openjdk
[root@master hadoop]# rpm -e --nodeps java-1.8.0-openjdk
[root@master hadoop]# rpm -e --nodeps java-1.7.0-openjdk-headless
[root@master hadoop]# rpm -e --nodeps java-1.8.0-openjdk-headless
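Re-running the query from above is a harmless way to confirm the cleanup; it should no longer list any java-1.7.0 or java-1.8.0 OpenJDK packages:
[root@master hadoop]# rpm -qa | grep jdk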
4.3 Install the new JDK
In the directory containing jdk-7u80-linux-x64.rpm, run the install command:
[root@master opt]# rpm -ivh jdk-7u80-linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:jdk-2000:1.7.0_80-fcs ################################# [100%]
Unpacking JAR files...
rt.jar...
jsse.jar...
charsets.jar...
tools.jar...
localedata.jar...
jfxrt.jar...
If the following command runs without problems, congratulations, the installation succeeded:
[root@master opt]# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
4.4 Configure the Java path
After the JDK is installed, a java directory appears under /usr; that is the JDK install path. Next, set the environment variables by appending the following to /etc/profile:
export JAVA_HOME=/usr/java/jdk1.7.0_80
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
Save and exit, then run source /etc/profile so the new environment variables take effect.
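A quick optional check that the current shell picked the variables up:
[root@master opt]# echo $JAVA_HOME
[root@master opt]# java -version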
5 Download & unpack Hadoop
Everything in this chapter is done on the master node only.
5.1 Download the Hadoop package
Since Hadoop is an open-source project, we can download it straight from the official site; the version used here is [[http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz|hadoop-2.7.3]], the latest release at the time of writing.
5.2 Unpack the Hadoop package
The downloaded tarball can live anywhere; out of habit it goes under /opt here. In /opt, run:
tar -zxvf /opt/hadoop-2.7.3.tar.gz
chown -R hadoop:hadoop /opt/hadoop-2.7.3
5.3 Set the Hadoop environment variables
Append the following to /etc/profile:
export HADOOP_HOME=/opt/hadoop-2.7.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Save and exit, then apply the change: source /etc/profile.
If typing hadoop prints the usage message below, the configuration works:
[root@master ~]# hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
CLASSNAME run the class named CLASSNAME
or
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
note: please use "yarn jar" to launch
YARN applications, not this command.
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
credential interact with credential providers
daemonlog get/set the log level for each daemon
trace view and modify Hadoop tracing settings
Most commands print help when invoked w/o parameters.
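If you just want a one-line check instead of the full usage text, hadoop version should report the 2.7.3 release:
[root@master ~]# hadoop version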
6 Configure Hadoop
Everything in this chapter is done on the master node only. Given the unpack path from section 5.2, the Hadoop configuration files live in /opt/hadoop-2.7.3/etc/hadoop.
6.1 Configure hadoop-env.sh
Find the export JAVA_HOME line and remove the leading # comment marker. Using the JDK install path from section 4.4, change it to export JAVA_HOME=/usr/java/jdk1.7.0_80, then save and exit.
6.2 Configure core-site.xml
Set the NameNode address and the temporary-file directory. Inside the <configuration> tag, add:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<final>true</final>
</property>
6.3 Configure hdfs-site.xml
Set the directory where the NameNode keeps its metadata, the directory where DataNodes keep their blocks, and the block replication factor. Inside the <configuration> tag, add:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/dfs/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/dfs/data</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
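The three directories referenced above (/home/hadoop/tmp from core-site.xml plus the dfs name and data paths) live on each node's local filesystem. The format step and the daemons normally create them on their own, but if you prefer to create them up front (purely optional), do it as the hadoop user on every node:
mkdir -p /home/hadoop/tmp /home/hadoop/dfs/name /home/hadoop/dfs/data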
6.4 Configure slaves
List the DataNode hostnames. In the slaves file, add:
slave1
slave2
slave3
7 Distribute the Hadoop files to the other machines
Push the files modified in chapters 5 and 6 to every slave node.
7.1 Distribute the environment variables (/etc/profile)
As root on the master machine, run:
scp -r /etc/profile root@slave1:/etc/
scp -r /etc/profile root@slave2:/etc/
scp -r /etc/profile root@slave3:/etc/
7.2 Distribute the Hadoop files
As root on the master machine, run:
scp -r /opt/hadoop-2.7.3 root@slave1:/opt/
scp -r /opt/hadoop-2.7.3 root@slave2:/opt/
scp -r /opt/hadoop-2.7.3 root@slave3:/opt/
Log in to each slave machine and fix the ownership:
chown -R hadoop:hadoop /opt/hadoop-2.7.3
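If typing three scp commands per file gets tedious, a small shell loop (just a sketch, run as root on the master and assuming the slave1..slave3 names from /etc/hosts) covers 7.1, 7.2 and the ownership fix in one go. Note that each slave still needs a fresh login, or a source /etc/profile, before the new variables are visible there.
# run as root on master; copies the profile and the Hadoop tree, then fixes ownership on each slave
for node in slave1 slave2 slave3; do
  scp -r /etc/profile root@$node:/etc/
  scp -r /opt/hadoop-2.7.3 root@$node:/opt/
  ssh root@$node "chown -R hadoop:hadoop /opt/hadoop-2.7.3"
done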
8 Start Hadoop
8.1 Format HDFS
As the hadoop user on the master node (the format only needs to run where the NameNode lives), execute:
hadoop namenode -format
The following log output shows that the format succeeded:
16/11/29 16:21:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/11/29 16:21:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/11/29 16:21:27 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/11/29 16:21:27 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/11/29 16:21:27 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/11/29 16:21:27 INFO util.GSet: VM type = 64-bit
16/11/29 16:21:27 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/11/29 16:21:27 INFO util.GSet: capacity = 2^15 = 32768 entries
16/11/29 16:21:27 INFO namenode.NNConf: ACLs enabled? false
16/11/29 16:21:27 INFO namenode.NNConf: XAttrs enabled? true
16/11/29 16:21:27 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/11/29 16:21:27 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1721501275-10.19.85.100-1480407687297
16/11/29 16:21:27 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
16/11/29 16:21:27 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/11/29 16:21:27 INFO util.ExitUtil: Exiting with status 0
16/11/29 16:21:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.19.85.100
************************************************************/
8.2 Start the cluster
As the hadoop user on the master server, run start-all.sh. When it finishes, type jps at the command line.
If the master node shows something like the following, the NameNode started successfully:
[hadoop@localhost ~]$ jps
4132 SecondaryNameNode
4529 Jps
3941 NameNode
If a slave node shows something like the following, the DataNode started successfully:
[hadoop@localhost ~]$ jps
18239 Jps
18032 DataNode
If anything goes wrong during startup, check the Hadoop log files:
/opt/hadoop-2.7.3/logs/*****namenode.log
/opt/hadoop-2.7.3/logs/*****datanode.log
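Once start-all.sh has run, two optional checks make it easy to confirm that the three DataNodes actually registered with the NameNode: the dfsadmin report (run as the hadoop user on the master) should list three live datanodes, and the NameNode web UI in Hadoop 2.x listens on port 50070.
hdfs dfsadmin -report
Web UI: http://master:50070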