Hadoop Pseudo-Distributed Installation, Part 1 (HDFS)

1. Preparation Before Installation

Before starting the Hadoop pseudo-distributed installation, verify that the virtual machine has the following configuration in place:
1. The hostname has been changed
2. The hostname-to-IP mapping has been configured
3. The network is set up so the VM can reach the internet

2. Pseudo-Distributed Deployment (HDFS)

1. Create a User for the Hadoop Service
  • Create the hadoop user
[root@zydatahadoop001 ~]# useradd hadoop
[root@zydatahadoop001 ~]# id hadoop
uid=502(hadoop) gid=502(hadoop) groups=502(hadoop)
  • Grant the hadoop user sudo privileges
[root@zydatahadoop001 ~]# vi /etc/sudoers
Add the following line to the file (visudo is the safer way to edit /etc/sudoers, since it syntax-checks before saving):
hadoop  ALL=(root)      NOPASSWD:ALL
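
To confirm the grant took effect, a quick check is to switch to the new user and run a command through sudo; with NOPASSWD set, no password prompt should appear:

[root@zydatahadoop001 ~]# su - hadoop
[hadoop@zydatahadoop001 ~]$ sudo whoami
root
[hadoop@zydatahadoop001 ~]$ exit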
2. Deploy Java

Download JDK 1.8 from the official site (try not to use OpenJDK); click here to download the JDK.
You can refer to the previous post on compiling Hadoop for details.
1. Download JDK 1.8
2. Extract it
3. Update the environment variables (a sketch of all three steps follows)
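
A minimal sketch, assuming the downloaded archive is named jdk-8u45-linux-x64.tar.gz (adjust to your actual file) and installing under /usr/java, the path used later in this post:

[root@zydatahadoop001 ~]# mkdir -p /usr/java
[root@zydatahadoop001 ~]# tar -xzvf jdk-8u45-linux-x64.tar.gz -C /usr/java

Append the following to /etc/profile, then reload it:
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH

[root@zydatahadoop001 ~]# source /etc/profile
[root@zydatahadoop001 ~]# java -version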

3. Verify that the sshd service is running
[root@zydatahadoop001 ~]# service sshd status
openssh-daemon (pid  1372) is running...
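
If it is not running, start it with the same service syntax:

[root@zydatahadoop001 ~]# service sshd start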
4. Extract Hadoop
  • In the previous post we compiled hadoop-2.8.1; locate the compiled hadoop-2.8.1.tar.gz, usually under /opt/sourcecode/hadoop-2.8.1-src/hadoop-dist/target/, and extract it.
Copy hadoop-2.8.1.tar.gz to /opt/software/:
[root@zydatahadoop001 software]# cp /opt/sourcecode/hadoop-2.8.1-src/hadoop-dist/target/hadoop-2.8.1.tar.gz /opt/software/

[root@zydatahadoop001 software]# ll
total 208524
drwxr-xr-x.  6 root root      4096 Nov 10  2015 apache-maven-3.3.9
-rw-r--r--.  1 root root   8617253 Nov 30 16:26 apache-maven-3.3.9-bin.zip
drwxr-xr-x.  7 root root      4096 Aug 21  2009 findbugs-1.3.9
-rw-r--r--.  1 root root   7546219 Nov 30 16:25 findbugs-1.3.9.zip
-rw-r--r--.  1 root root 194939924 Dec 18 18:42 hadoop-2.8.1.tar.gz
drwxr-xr-x. 10 root root      4096 Dec 18 11:06 protobuf-2.5.0
-rw-r--r--.  1 root root   2401901 Nov 30 15:11 protobuf-2.5.0.tar.gz

Extract the archive:
[root@zydatahadoop001 software]# tar -xzvf hadoop-2.8.1.tar.gz
5. Fix Ownership
Create a symbolic link (alternatively, rename the directory with mv):
[root@zydatahadoop001 software]# ln -s /opt/software/hadoop-2.8.1 hadoop
[root@zydatahadoop001 software]# ll
total 208528
drwxr-xr-x.  6 root root      4096 Nov 10  2015 apache-maven-3.3.9
-rw-r--r--.  1 root root   8617253 Nov 30 16:26 apache-maven-3.3.9-bin.zip
drwxr-xr-x.  7 root root      4096 Aug 21  2009 findbugs-1.3.9
-rw-r--r--.  1 root root   7546219 Nov 30 16:25 findbugs-1.3.9.zip
lrwxrwxrwx.  1 root root        26 Dec 18 18:50 hadoop -> /opt/software/hadoop-2.8.1
drwxr-xr-x.  9 root root      4096 Dec 18 17:50 hadoop-2.8.1
-rw-r--r--.  1 root root 194939924 Dec 18 18:42 hadoop-2.8.1.tar.gz
drwxr-xr-x. 10 root root      4096 Dec 18 11:06 protobuf-2.5.0
-rw-r--r--.  1 root root   2401901 Nov 30 15:11 protobuf-2.5.0.tar.gz

Change the owner and group (three commands are needed here; after an mv rename, a single command would suffice):
[root@zydatahadoop001 software]# chown -R hadoop:hadoop hadoop
[root@zydatahadoop001 software]# chown -R hadoop:hadoop hadoop/*
[root@zydatahadoop001 software]# chown -R hadoop:hadoop hadoop-2.8.1

[root@zydatahadoop001 software]# ll
total 208528
drwxr-xr-x.  6 root   root        4096 Nov 10  2015 apache-maven-3.3.9
-rw-r--r--.  1 root   root     8617253 Nov 30 16:26 apache-maven-3.3.9-bin.zip
drwxr-xr-x.  7 root   root        4096 Aug 21  2009 findbugs-1.3.9
-rw-r--r--.  1 root   root     7546219 Nov 30 16:25 findbugs-1.3.9.zip
lrwxrwxrwx.  1 hadoop hadoop        26 Dec 18 18:50 hadoop -> /opt/software/hadoop-2.8.1
drwxr-xr-x.  9 hadoop hadoop        4096 Dec 18 17:50 hadoop-2.8.1
-rw-r--r--.  1 root   root   194939924 Dec 18 18:42 hadoop-2.8.1.tar.gz
drwxr-xr-x. 10 root   root        4096 Dec 18 11:06 protobuf-2.5.0
-rw-r--r--.  1 root   root     2401901 Nov 30 15:11 protobuf-2.5.0.tar.gz
[root@zydatahadoop001 software]# cd hadoop
[root@zydatahadoop001 hadoop]# ll
total 148
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 bin
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 etc
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 include
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 lib
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Dec 18 17:50 LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 15915 Dec 18 17:50 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Dec 18 17:50 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 sbin
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 share

Note: when changing ownership through the symlink, chown -R hadoop:hadoop hadoop only affects the directory entry itself; the files inside are left untouched, which is why the additional chown -R hadoop:hadoop hadoop/* is needed.
If you renamed with mv instead, a single chown -R hadoop:hadoop hadoop would change both the directory and everything inside it.
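
As an aside, GNU chown can also do this in one pass: the -H flag tells -R to traverse a symbolic link given on the command line, so something like the following should cover the target directory and its contents (the link's own owner may stay root, which normally does not matter):

[root@zydatahadoop001 software]# chown -R -H hadoop:hadoop hadoop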

6. Switch to the hadoop User and Edit the Configuration
Switch user:
[root@zydatahadoop001 hadoop]# su - hadoop
[hadoop@zydatahadoop001 ~]$ ll
total 0
[hadoop@zydatahadoop001 ~]$ cd /opt/software/hadoop
[hadoop@zydatahadoop001 hadoop]$ ll
total 148
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 bin
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 etc
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 include
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 lib
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Dec 18 17:50 LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 15915 Dec 18 17:50 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Dec 18 17:50 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Dec 18 17:50 sbin
drwxr-xr-x. 3 hadoop hadoop  4096 Dec 18 17:50 share
[hadoop@zydatahadoop001 hadoop]$ cd etc
[hadoop@zydatahadoop001 etc]$ ll
total 4
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 18 17:50 hadoop
[hadoop@zydatahadoop001 etc]$ cd hadoop
[hadoop@zydatahadoop001 hadoop]$ ll
total 156
-rw-r--r--. 1 hadoop hadoop  4942 Dec 18 17:50 capacity-scheduler.xml
-rw-r--r--. 1 hadoop hadoop  1335 Dec 18 17:50 configuration.xsl
-rw-r--r--. 1 hadoop hadoop   318 Dec 18 17:50 container-executor.cfg
-rw-r--r--. 1 hadoop hadoop   774 Dec 18 17:50 core-site.xml
-rw-r--r--. 1 hadoop hadoop  3719 Dec 18 17:50 hadoop-env.cmd
-rw-r--r--. 1 hadoop hadoop  4666 Dec 18 17:50 hadoop-env.sh
-rw-r--r--. 1 hadoop hadoop  2598 Dec 18 17:50 hadoop-metrics2.properties
-rw-r--r--. 1 hadoop hadoop  2490 Dec 18 17:50 hadoop-metrics.properties
-rw-r--r--. 1 hadoop hadoop  9683 Dec 18 17:50 hadoop-policy.xml
-rw-r--r--. 1 hadoop hadoop   775 Dec 18 17:50 hdfs-site.xml
-rw-r--r--. 1 hadoop hadoop  1449 Dec 18 17:50 httpfs-env.sh
-rw-r--r--. 1 hadoop hadoop  1657 Dec 18 17:50 httpfs-log4j.properties
-rw-r--r--. 1 hadoop hadoop    21 Dec 18 17:50 httpfs-signature.secret
-rw-r--r--. 1 hadoop hadoop   620 Dec 18 17:50 httpfs-site.xml
-rw-r--r--. 1 hadoop hadoop  3518 Dec 18 17:50 kms-acls.xml
-rw-r--r--. 1 hadoop hadoop  1611 Dec 18 17:50 kms-env.sh
-rw-r--r--. 1 hadoop hadoop  1631 Dec 18 17:50 kms-log4j.properties
-rw-r--r--. 1 hadoop hadoop  5546 Dec 18 17:50 kms-site.xml
-rw-r--r--. 1 hadoop hadoop 13661 Dec 18 17:50 log4j.properties
-rw-r--r--. 1 hadoop hadoop   931 Dec 18 17:50 mapred-env.cmd
-rw-r--r--. 1 hadoop hadoop  1383 Dec 18 17:50 mapred-env.sh
-rw-r--r--. 1 hadoop hadoop  4113 Dec 18 17:50 mapred-queues.xml.template
-rw-r--r--. 1 hadoop hadoop   758 Dec 18 17:50 mapred-site.xml.template
-rw-r--r--. 1 hadoop hadoop    10 Dec 18 17:50 slaves
-rw-r--r--. 1 hadoop hadoop  2316 Dec 18 17:50 ssl-client.xml.example
-rw-r--r--. 1 hadoop hadoop  2697 Dec 18 17:50 ssl-server.xml.example
-rw-r--r--. 1 hadoop hadoop  2191 Dec 18 17:50 yarn-env.cmd
-rw-r--r--. 1 hadoop hadoop  4567 Dec 18 17:50 yarn-env.sh
-rw-r--r--. 1 hadoop hadoop   690 Dec 18 17:50 yarn-site.xml

Configuration files:
hadoop-env.sh : Hadoop environment settings
core-site.xml : Hadoop core configuration file
hdfs-site.xml : HDFS service configuration -> starts processes
[mapred-site.xml : configuration for MapReduce jobs] only needed when running jar computations
yarn-site.xml : YARN service configuration -> starts processes
slaves : hostnames of the cluster machines

  • Edit core-site.xml
[hadoop@zydatahadoop001 hadoop]$ vi core-site.xml

Replace the empty <configuration> block at the end of the file with:
<configuration>
    <property>
         <name>fs.defaultFS</name>
         <value>hdfs://localhost:9000</value>
    </property>
</configuration>
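
To read back the value the client will actually use, hdfs getconf can query the effective configuration (run it against the Hadoop install directory):

[hadoop@zydatahadoop001 hadoop]$ /opt/software/hadoop/bin/hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000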
  • Edit hdfs-site.xml
[hadoop@zydatahadoop001 hadoop]$ vi hdfs-site.xml
Replace the empty <configuration> block with the following (replication is set to 1 because this pseudo-distributed setup has only a single DataNode; the default is 3):
<configuration>
    <property>
            <name>dfs.replication</name>
            <value>1</value>
     </property>
</configuration>
7. Configure Passwordless SSH for the hadoop User
  1. How SSH works
    SSH is secure because it uses public-key cryptography. The login flow is:
    (1) The remote host receives the user's login request and sends its public key to the user.
    (2) The user encrypts the login password with this public key and sends it back.
    (3) The remote host decrypts the password with its private key; if the password is correct, the login is accepted.
  2. Basic SSH usage
    If the username is java and the remote host is named linux, this command logs in:
    $ ssh java@linux
    SSH's default port is 22, i.e. the login request goes to port 22 on the remote host. The -p option selects a different port, e.g. port 88:
    $ ssh -p 88 java@linux
    Note: if you get the error "ssh: Could not resolve hostname linux: Name or service not known", the host linux is unknown to this machine's name service; add the host and its IP to /etc/hosts (IP first, then hostname):
    192.168.1.107 linux

  3. How passwordless SSH works
    The Master (NameNode | JobTracker) acts as the client. To connect to a Slave (DataNode | TaskTracker) with passwordless public-key authentication, generate a key pair on the Master, consisting of a public key and a private key, then copy the public key to every Slave. When the Master connects to a Slave over SSH, the Slave generates a random number, encrypts it with the Master's public key, and sends it to the Master. The Master decrypts it with its private key and returns the decrypted value; once the Slave confirms the value is correct, it allows the Master to connect. This is a public-key authentication handshake, and no password has to be typed by hand.

  4. Use ssh-keygen to generate a passphrase-less key pair.

(You can also run plain ssh-keygen and just press Enter at every prompt.)
[hadoop@zydatahadoop001 hadoop]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Generating public/private rsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
8b:ff:2a:58:43:9d:de:42:a7:f0:2d:0d:c3:96:ac:b6 hadoop@zydatahadoop001
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|       + o       |
|      o @ .      |
|     . BSO       |
|      =.*.+      |
|     +.o.o       |
|    . E.         |
|       .oo.      |
+-----------------+
The generated key pair, id_rsa (private key) and id_rsa.pub (public key), is stored under /home/<username>/.ssh by default.

5. On the master, append the public key to authorized_keys

[hadoop@zydatahadoop001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  • Restrict the permissions on the authorized_keys file (with sshd's default StrictModes, the file must not be writable by group or others):
[hadoop@zydatahadoop001 ~]$ chmod 0600 ~/.ssh/authorized_keys
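
As an aside, the append and the permission fix can be done in one step with OpenSSH's ssh-copy-id, which prompts for the account password once and installs the key with sane permissions:

[hadoop@zydatahadoop001 ~]$ ssh-copy-id hadoop@zydatahadoop001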

6. Inspect the .ssh directory

[hadoop@zydatahadoop001 ~]$ cd .ssh
[hadoop@zydatahadoop001 .ssh]$ ll
total 16
-rw-------. 1 hadoop hadoop  404 Dec 18 20:08 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Dec 18 20:08 id_rsa
-rw-r--r--. 1 hadoop hadoop  404 Dec 18 20:08 id_rsa.pub

7. Test with ssh zydatahadoop001 date

The first connection asks for host confirmation; type yes:
[hadoop@zydatahadoop001 .ssh]$ ssh zydatahadoop001 date
The authenticity of host 'zydatahadoop001 (192.168.137.200)' can't be established.
RSA key fingerprint is 8b:62:2d:cf:43:d8:e0:5e:d5:b2:71:55:b5:d4:ed:dc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zydatahadoop001,192.168.137.200' (RSA) to the list of known hosts.
Mon Dec 18 20:35:45 CST 2017

The second connection needs no confirmation:
[hadoop@zydatahadoop001 .ssh]$ ssh zydatahadoop001 date
Mon Dec 18 20:35:53 CST 2017
[hadoop@zydatahadoop001 .ssh]$ ll 
total 16
-rw-------. 1 hadoop hadoop  404 Dec 18 20:08 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Dec 18 20:08 id_rsa
-rw-r--r--. 1 hadoop hadoop  404 Dec 18 20:08 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop  413 Dec 18 20:35 known_hosts
8. Format the NameNode
[hadoop@zydatahadoop001 ~]$ cd /opt/software/hadoop

Run the format:
[hadoop@zydatahadoop001 hadoop]$ bin/hdfs namenode -format

The following output indicates the format succeeded:
17/12/18 20:41:10 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
17/12/18 20:41:10 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/12/18 20:41:10 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/12/18 20:41:10 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/12/18 20:41:10 INFO util.ExitUtil: Exiting with status 0
17/12/18 20:41:10 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zydatahadoop001/192.168.137.200
************************************************************/
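
Note that the format wrote its metadata under /tmp/hadoop-hadoop, which is the default for hadoop.tmp.dir (/tmp/hadoop-${user.name}). Since /tmp may be cleaned on reboot, a common precaution is to point hadoop.tmp.dir at a persistent directory in core-site.xml before formatting; a sketch, with an arbitrary path:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/software/hadoop/data/tmp</value>
</property>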
9. Start the HDFS Service
[hadoop@zydatahadoop001 sbin]$ ./start-dfs.sh 
Starting namenodes on [localhost]
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 8b:62:2d:cf:43:d8:e0:5e:d5:b2:71:55:b5:d4:ed:dc.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 8b:62:2d:cf:43:d8:e0:5e:d5:b2:71:55:b5:d4:ed:dc.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Error: JAVA_HOME is not set and could not be found.

A problem appears here: JAVA_HOME is already configured, yet startup still reports that it cannot be found. The reason is that start-dfs.sh launches each daemon over ssh, and those non-interactive shells do not source /etc/profile, so the JAVA_HOME exported there is invisible to them; it has to be set explicitly in hadoop-env.sh.

Solution:

  • Edit the etc/hadoop/hadoop-env.sh configuration file (inside the Hadoop install directory, not the system /etc):
[hadoop@zydatahadoop001 sbin]$ vi ../etc/hadoop/hadoop-env.sh

Set JAVA_HOME to the actual JDK path:
export JAVA_HOME=/usr/java/jdk1.8.0_45
  • 再次启动HDFS服务
[hadoop@zydatahadoop001 sbin]$ ./start-dfs.sh 
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-zydatahadoop001.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-zydatahadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-zydatahadoop001.out
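
A quick way to confirm all three daemons are up is jps, which ships with the JDK (the PIDs below are illustrative):

[hadoop@zydatahadoop001 sbin]$ jps
21967 NameNode
22110 DataNode
22243 SecondaryNameNode
22400 Jps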
  • Open the web UI on port 50070
On the virtual machine, visit:
http://localhost:50070/

Why localhost here?
Because that is what we configured in the core configuration file core-site.xml:

 <configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
    </property>
</configuration>
  • Edit core-site.xml so fs.defaultFS uses the machine's real IP
[hadoop@zydatahadoop001 hadoop]$ vi etc/hadoop/core-site.xml

<configuration>
    <property>
         <name>fs.defaultFS</name>
         <value>hdfs://192.168.137.200:9000</value>  <!-- change to your own IP -->
    </property>
</configuration>
  • Stop the service
[hadoop@zydatahadoop001 hadoop]$ cd sbin
[hadoop@zydatahadoop001 sbin]$ ./stop-dfs.sh 
Stopping namenodes on [zydatahadoop001]
zydatahadoop001: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
  • Format the NameNode again
[hadoop@zydatahadoop001 hadoop]$ bin/hdfs namenode -format

Output on success:
17/12/18 21:40:25 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
17/12/18 21:40:25 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/12/18 21:40:26 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/12/18 21:40:26 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/12/18 21:40:26 INFO util.ExitUtil: Exiting with status 0
17/12/18 21:40:26 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zydatahadoop001/192.168.137.200
************************************************************/
  • Start the HDFS service again
[hadoop@zydatahadoop001 hadoop]$ cd sbin
[hadoop@zydatahadoop001 sbin]$ ./start-dfs.sh
Starting namenodes on [zydatahadoop001]
zydatahadoop001: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-zydatahadoop001.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-zydatahadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-zydatahadoop001.out

Check port 9000; it now binds to the machine's IP instead of localhost:
[hadoop@zydatahadoop001 sbin]$ netstat -nlp|grep 9000
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 192.168.137.200:9000        0.0.0.0:*                   LISTEN      21967/java    
10. Make All HDFS Services Start Under zydatahadoop001 (the Hostname)
  • Currently, when we start the services:
    namenode: starts under zydatahadoop001
    datanode: starts under localhost
    secondarynamenode: starts under 0.0.0.0

  • The goal is for all three to start under zydatahadoop001. Running start-dfs.sh again while the old daemons are still up shows the current state:

[hadoop@zydatahadoop001 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [zydatahadoop001]
zydatahadoop001: namenode running as process 21967. Stop it first.
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-zydatahadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 22243. Stop it first.
  • Fix the datanode:
[hadoop@zydatahadoop001 hadoop]$ vi slaves 
Change localhost to zydatahadoop001.
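
After the edit, the file should contain only the hostname:

[hadoop@zydatahadoop001 hadoop]$ cat slaves
zydatahadoop001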
  • Fix the secondarynamenode:
[hadoop@zydatahadoop001 hadoop]$ vi hdfs-site.xml
Add the following inside the <configuration> element (zydatahadoop001 is this machine's hostname; remember to substitute your own):
       <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>zydatahadoop001:50090</value>
       </property>

       <property>
            <name>dfs.namenode.secondary.https-address</name>
            <value>zydatahadoop001:50091</value>
       </property>
  • Restart the services and verify
[hadoop@zydatahadoop001 hadoop]$ pwd
/opt/software/hadoop/etc/hadoop
[hadoop@zydatahadoop001 hadoop]$ cd ../../sbin/

Stop the services first:
[hadoop@zydatahadoop001 sbin]$ ./stop-dfs.sh 
Stopping namenodes on [zydatahadoop001]
zydatahadoop001: stopping namenode
zydatahadoop001: no datanode to stop
Stopping secondary namenodes [zydatahadoop001]
zydatahadoop001: stopping secondarynamenode

Start them again:
[hadoop@zydatahadoop001 sbin]$ ./start-dfs.sh
Starting namenodes on [zydatahadoop001]
zydatahadoop001: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-zydatahadoop001.out
zydatahadoop001: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-zydatahadoop001.out
Starting secondary namenodes [zydatahadoop001]
zydatahadoop001: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-zydatahadoop001.out
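
Finally, a quick smoke test that HDFS accepts writes; the directory name here is arbitrary:

[hadoop@zydatahadoop001 sbin]$ cd ..
[hadoop@zydatahadoop001 hadoop]$ bin/hdfs dfs -mkdir /test
[hadoop@zydatahadoop001 hadoop]$ bin/hdfs dfs -ls /

The new /test directory should show up in the listing.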