Hadoop 3.x distributed HA with Kerberos integration (step-by-step guide)

Prerequisite: install Kerberos first.

1. Create the keytab directory

Create the keytab directory on every node ahead of time:

[hadoop@tv3-hadoop-01 ~]$ sudo mkdir -p /BigData/run/hadoop/keytab/

[hadoop@tv3-hadoop-01 ~]$ sudo mkdir -p /opt/security/

[hadoop@tv3-hadoop-01 ~]$ sudo chown hadoop:hadoop /BigData/run/hadoop/keytab/

[hadoop@tv3-hadoop-01 ~]$ ls -lrt /BigData/run/hadoop/

drwxr-xr-x 2 hadoop hadoop  4096 Jun 26 23:22 keytab

2. Create the Kerberos principals

Log in to the admin node, e.g. tv3-hadoop-01 (in this guide, all Hadoop services run as the hadoop user).

# Enter kadmin

[root@tv3-hadoop-01 ~]# kadmin.local

Authenticating as principal hadoop/admin@EXAMPLE.COM with password.

kadmin.local:  

# List principals

kadmin.local:  listprincs

# Create a principal

addprinc -randkey hadoop/tv3-hadoop-01@EXAMPLE.COM

3. Add the remaining principals and export keytabs

Add principals for the other HDFS nodes in the same way, then export the service principals to /BigData/run/hadoop/keytab/hadoop.keytab and the HTTP principals to /BigData/run/hadoop/keytab/HTTP.keytab:

addprinc -randkey hadoop/tv3-hadoop-01@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-02@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-03@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-04@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-05@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-06@EXAMPLE.COM

addprinc -randkey HTTP/tv3-hadoop-01@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-02@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-03@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-04@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-05@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-06@EXAMPLE.COM

ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-01@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-02@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-03@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-04@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-05@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-06@EXAMPLE.COM

ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-01@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-02@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-03@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-04@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-05@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-06@EXAMPLE.COM
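To confirm the exports succeeded, list the entries in each keytab (klist -ket shows timestamps, KVNOs, and encryption types; each host should appear once per encryption type, and the KVNO must match what the KDC holds or authentication will fail):

klist -ket /BigData/run/hadoop/keytab/hadoop.keytab
klist -ket /BigData/run/hadoop/keytab/HTTP.keytab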

4. Fix permissions & sync the keytabs

Change ownership to the user that starts Hadoop (otherwise services will hit permission errors), then copy the keytabs to every node that runs an HDFS or YARN service (JN, DN, NN, RM, NM):

su - hadoop

sudo chown hadoop:hadoop /BigData/run/hadoop/keytab/*.keytab

scp /BigData/run/hadoop/keytab/hadoop.keytab  /BigData/run/hadoop/keytab/HTTP.keytab hadoop@tv3-hadoop-06:/BigData/run/hadoop/keytab
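The scp above covers a single node; a small loop can distribute the keytabs to all of them at once (the hostnames below are this cluster's; adjust to yours):

for h in tv3-hadoop-02 tv3-hadoop-03 tv3-hadoop-04 tv3-hadoop-05 tv3-hadoop-06; do
  scp /BigData/run/hadoop/keytab/hadoop.keytab /BigData/run/hadoop/keytab/HTTP.keytab hadoop@$h:/BigData/run/hadoop/keytab/
done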

5. Edit the configuration files

5.1 hdfs-site.xml

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
        <description>Enable HDFS block access tokens for secure operations</description>
    </property>

    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
        <description>The NameNode's Kerberos principal; _HOST is automatically replaced with the local hostname, giving hadoop/hostname@EXAMPLE.COM</description>
    </property>

    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
        <description>Principals created with -randkey have random, unknown passwords, so services log in with a keytab instead; this points the NameNode at its keytab file</description>
    </property>

    <property>
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal used for HTTP/SPNEGO, e.g. the NameNode web UI over HTTPS</description>
    </property>

    <property>
        <name>dfs.namenode.kerberos.internal.spnego.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>

    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
        <description>Principal used by the SecondaryNameNode</description>
    </property>
    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
        <description>Keytab for the SecondaryNameNode</description>
    </property>

    <property>
        <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal used for the SecondaryNameNode's HTTP endpoint</description>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.internal.spnego.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>

    <property>
        <name>dfs.journalnode.kerberos.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
    </property>

    <property>
        <name>dfs.journalnode.keytab.file</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
    </property>

    <property>
        <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.journalnode.kerberos.internal.spnego.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>

    <property>
        <name>dfs.encrypt.data.transfer</name>
        <value>true</value>
        <description>Encrypt data as it moves over the data transfer protocol</description>
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
        <description>Principal used by the DataNode</description>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
        <description>Path to the DataNode's keytab file</description>
    </property>

    <property>
        <name>dfs.data.transfer.protection</name>
        <value>integrity</value>
    </property>

    <property>
        <name>dfs.https.port</name>
        <value>50470</value>
    </property>

    <!-- required if hdfs support https -->
    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>

    <!-- WebHDFS security config -->
    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal used by WebHDFS</description>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
        <description>The corresponding keytab file</description>
    </property>

5.2 core-site.xml

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
        <description>Enable HDFS block access tokens for secure operations</description>
    </property>

    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
        <description>Enable service-level authorization in Hadoop</description>
    </property>
    
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
        <description>Use Kerberos as Hadoop's authentication mechanism</description>
    </property>
    <property>
        <name>hadoop.rpc.protection</name>
        <value>authentication</value>
        <description>authentication : authentication only (default); integrity : integrity check in addition to authentication; privacy : data encryption in addition to integrity</description>
    </property>
    <property>
        <name>hadoop.security.auth_to_local</name>
        <value>
            RULE:[2:$1@$0](hadoop@.*EXAMPLE.COM)s/.*/hadoop/
            RULE:[2:$1@$0](HTTP@.*EXAMPLE.COM)s/.*/hadoop/
            DEFAULT
        </value>
    </property>
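The two RULEs map any hadoop/... or HTTP/... principal in this realm to the local user hadoop. A quick sanity check is Hadoop's built-in principal mapper, which prints how a given principal resolves under the rules above:

hadoop org.apache.hadoop.security.HadoopKerberosName hadoop/tv3-hadoop-01@EXAMPLE.COM
# expected output: Name: hadoop/tv3-hadoop-01@EXAMPLE.COM to hadoop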

5.3 yarn-site.xml

    <property>
        <name>hadoop.http.authentication.type</name>
        <value>kerberos</value>
    </property>

    <property>
        <name>hadoop.http.filter.initializers</name>
        <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
        <value>false</value>
        <description>Flag to enable overriding the default Kerberos authentication filter with the RM authentication filter, which allows authentication via delegation tokens (falling back to Kerberos if a token is missing). Only applies when the HTTP authentication type is kerberos.</description>
    </property>

    <property>
        <name>hadoop.http.authentication.kerberos.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>

    <property>
        <name>hadoop.http.authentication.kerberos.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>

    <property>
        <name>yarn.acl.enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.web-proxy.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
 
    <property>
        <name>yarn.web-proxy.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.keytab</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
    </property>
 
    <!-- nodemanager -->
    <property>
        <name>yarn.nodemanager.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.nodemanager.keytab</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
    </property>
    <property>
        <name>yarn.nodemanager.container-executor.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>
 
    <property>
        <name>yarn.nodemanager.linux-container-executor.group</name>
        <value>hadoop</value>
    </property>

    <property>
        <name>yarn.nodemanager.linux-container-executor.path</name>
        <value>/BigData/run/hadoop/bin/container-executor</value>
    </property>

    <!-- webapp configs -->
    <property>
        <name>yarn.resourcemanager.webapp.spnego-principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.spnego-keytab-file</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>

    <property>
        <name>yarn.timeline-service.http-authentication.type</name>
        <value>kerberos</value>
        <description>Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#</description>
    </property>
    <property>
        <name>yarn.timeline-service.principal</name>
        <value>hadoop/_HOST@EXAMPLE.COM</value>
    </property>
 
    <property>
        <name>yarn.timeline-service.keytab</name>
        <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
    </property>
 
    <property>
        <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
 
    <property> 
        <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
        <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
    </property>
 
    <property>
        <name>yarn.nodemanager.container-localizer.java.opts</name>
        <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
    </property>
 
    <property>
        <name>yarn.nodemanager.health-checker.script.opts</name>
        <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
    </property>

    <property>
        <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
        <value>hadoop</value>
    </property>
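LinuxContainerExecutor additionally requires a container-executor.cfg alongside the binary (usually under ${HADOOP_HOME}/etc/hadoop). The values below are a minimal sketch matching this cluster's hadoop user; the banned.users list and min.user.id in particular are assumptions you should adapt to your distro (e.g. min.user.id=1000 on Ubuntu):

yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
allowed.system.users=hadoop

The container-executor binary itself must be root-owned with the setuid bit (chown root:hadoop container-executor && chmod 6050 container-executor), or the NodeManager will refuse to start.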

5.4 mapred-site.xml


<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
 
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3276M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
 
<property>
    <name>mapreduce.jobhistory.keytab</name>
    <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
 
<property>
    <name>mapreduce.jobhistory.principal</name>
    <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
 
<property>
    <name>mapreduce.jobhistory.webapp.spnego-keytab-file</name>
    <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
 
<property>
    <name>mapreduce.jobhistory.webapp.spnego-principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>

5.5 Sync the configuration files to every node

cd /BigData/run/hadoop/etc/hadoop
scp hdfs-site.xml  yarn-site.xml core-site.xml mapred-site.xml hadoop@tv3-hadoop-06:/BigData/run/hadoop/etc/hadoop/
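As with the keytabs, a loop can push the four files to every node in one pass (hostnames are this cluster's; adjust to yours):

for h in tv3-hadoop-02 tv3-hadoop-03 tv3-hadoop-04 tv3-hadoop-05 tv3-hadoop-06; do
  scp hdfs-site.xml yarn-site.xml core-site.xml mapred-site.xml hadoop@$h:/BigData/run/hadoop/etc/hadoop/
done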

6. Configure SSL (enable HTTPS)

6.1 Create the HTTPS certificate directory (run on every node; /opt/security is root-owned, so this is done as root)

[root@tv3-hadoop-01 hadoop]# mkdir -p /opt/security/kerberos_https

[root@tv3-hadoop-01 hadoop]# cd /opt/security/kerberos_https

6.2 Generate a CA certificate on any one Hadoop node

[root@tv3-hadoop-01 kerberos_https]# openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj /C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=tv3-hadoop01
Generating a 2048 bit RSA private key
...........................................................................................+++
.................................................................................+++
writing new private key to 'hdfs_ca_key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
[root@tv3-hadoop-01 kerberos_https]# ls -lrt 
total 8
-rw-r--r-- 1 root root 1834 Jun 29 09:45 hdfs_ca_key
-rw-r--r-- 1 root root 1302 Jun 29 09:45 hdfs_ca_cert

6.3 Copy the generated CA certificate to every node


scp -r /opt/security/kerberos_https root@tv3-hadoop-06:/opt/security/

6.4 Create per-node certificates on every Hadoop node

cd /opt/security/kerberos_https

   # Enter 123456 everywhere a password is requested (for convenience; change it if you have stricter requirements)

   # 1. Enter and confirm the password (123456); on success this produces the keystore file
     name="CN=$HOSTNAME, OU=hlk, O=hlk, L=xian, ST=shanxi, C=CN"
   # You will be asked for the password from step 1 four times
     keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "$name"

   # 2. Enter and confirm the password (123456) and answer yes when asked to trust the certificate; produces the truststore file
     keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert

   # 3. Enter and confirm the password (123456); produces the cert file
     keytool -certreq -alias localhost -keystore keystore -file cert

   # 4. Produces the cert_signed file (you will be asked for the CA key's PEM pass phrase)
     openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial

   # 5. Enter and confirm the password (123456) and answer yes to trust the certificate; updates the keystore file
     keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
     keytool -keystore keystore -alias localhost -import -file cert_signed

   
 
[root@tv3-hadoop-06 kerberos_https]# ls -lrt
total 28
-rw-r--r-- 1 root root 1302 Jun 29 09:57 hdfs_ca_cert
-rw-r--r-- 1 root root 1834 Jun 29 09:57 hdfs_ca_key
-rw-r--r-- 1 root root  984 Jun 29 10:03 truststore
-rw-r--r-- 1 root root 1085 Jun 29 10:03 cert
-rw-r--r-- 1 root root   17 Jun 29 10:04 hdfs_ca_cert.srl
-rw-r--r-- 1 root root 1188 Jun 29 10:04 cert_signed
-rw-r--r-- 1 root root 4074 Jun 29 10:04 keystore
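To double-check the result, list the keystore contents; both the CARoot and localhost entries should be present (same 123456 password as above):

keytool -list -keystore keystore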

6.5 Create the SSL server file

Create an ssl-server.xml file in the ${HADOOP_HOME}/etc/hadoop directory:

<configuration>

    <property>
        <name>ssl.server.truststore.location</name>
        <value>/opt/security/kerberos_https/truststore</value>
        <description>Truststore to be used by NN and DN. Must be specified.</description>
    </property>

    <property>
        <name>ssl.server.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "". </description>
    </property>

    <property>
        <name>ssl.server.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>

    <property>
        <name>ssl.server.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds). </description>
    </property>

    <property>
        <name>ssl.server.keystore.location</name>
        <value>/opt/security/kerberos_https/keystore</value>
        <description>Keystore to be used by NN and DN. Must be specified.</description>
    </property>

    <property>
        <name>ssl.server.keystore.password</name>
        <value>123456</value>
        <description>Must be specified.</description>
    </property>

    <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>123456</value>
        <description>Must be specified.</description>
    </property>

    <property>
        <name>ssl.server.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>

    <property>
        <name>ssl.server.exclude.cipher.list</name>
        <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_RC4_128_MD5</value>
        <description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
    </property>
   
</configuration>

6.6 Create the SSL client file

Create an ssl-client.xml file in the same directory:

<configuration>

    <property>
        <name>ssl.client.truststore.location</name>
        <value>/opt/security/kerberos_https/truststore</value>
        <description>Truststore to be used by clients like distcp. Must be specified.  </description>
    </property>

    <property>
        <name>ssl.client.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "". </description>
    </property>

    <property>
        <name>ssl.client.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>

    <property>
        <name>ssl.client.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds). </description>
    </property>

    <property>
        <name>ssl.client.keystore.location</name>
        <value>/opt/security/kerberos_https/keystore</value>
        <description>Keystore to be used by clients like distcp. Must be   specified.   </description>
    </property>

    <property>
        <name>ssl.client.keystore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "". </description>
    </property>

    <property>
        <name>ssl.client.keystore.keypassword</name>
        <value>123456</value>
        <description>Optional. Default value is "". </description>
    </property>

    <property>
        <name>ssl.client.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks". </description>
    </property>
    
</configuration>
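Both files embed plaintext passwords, so it is worth tightening their permissions before syncing them to the other nodes (the keystores themselves were generated per node in 6.4 and stay where they are):

chmod 600 /BigData/run/hadoop/etc/hadoop/ssl-server.xml /BigData/run/hadoop/etc/hadoop/ssl-client.xml
scp /BigData/run/hadoop/etc/hadoop/ssl-server.xml /BigData/run/hadoop/etc/hadoop/ssl-client.xml hadoop@tv3-hadoop-06:/BigData/run/hadoop/etc/hadoop/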

6.7 Configure HTTPS in HDFS (sync to every node after editing)

    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
        <description>All enabled web UIs use HTTPS only; the details are configured in the ssl-server and ssl-client files above</description>
    </property>

7. Start Hadoop and run basic tests

7.1 HA-mode startup order

Start the services in this order: JN, NN, ZKFC, DN, RM, NM.

7.2 Start the JournalNodes (run kinit on each node before starting its services)

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart JournalNode

hadoop-daemon.sh stop journalnode && hadoop-daemon.sh start journalnode

## Start JournalNode

hadoop-daemon.sh start journalnode

## Stop JournalNode

hadoop-daemon.sh stop journalnode
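A quick way to confirm the kinit above worked is to inspect the ticket cache; the default principal should be hadoop/<local hostname>@EXAMPLE.COM with a valid krbtgt ticket:

klist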

7.3 Start the NameNode and ZKFC services

If this is a new cluster, format the NameNode first (and, for a brand-new HA setup, initialize the ZKFC znode once with hdfs zkfc -formatZK):

hdfs namenode -format
kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart NN

hadoop-daemon.sh stop namenode && hadoop-daemon.sh start namenode

## Start NN

hadoop-daemon.sh start namenode

## Stop NN

hadoop-daemon.sh stop namenode


## Restart ZKFC

hadoop-daemon.sh stop zkfc && hadoop-daemon.sh start zkfc

## Start ZKFC

hadoop-daemon.sh start zkfc

## Stop ZKFC

hadoop-daemon.sh stop zkfc

7.4 Start the DataNode service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart DN

hadoop-daemon.sh stop datanode && hadoop-daemon.sh start datanode

## Start DN

hadoop-daemon.sh start datanode

## Stop DN

hadoop-daemon.sh stop datanode

7.5 Verify HA failover (multiple NameNodes)

[hadoop@tv3-hadoop-01 hadoop]$ hdfs haadmin -failover nn2 nn1
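Before and after the failover you can check each NameNode's role (nn1 and nn2 are the NameNode IDs from this cluster's HA configuration; each command prints active or standby):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2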

7.6 Verify HDFS reads and writes

[hadoop@tv3-hadoop-01 ~]$ echo '123' > b
[hadoop@tv3-hadoop-01 ~]$ hdfs dfs -put -f b /tmp/
[hadoop@tv3-hadoop-01 ~]$ hdfs dfs -cat /tmp/b
123
[hadoop@tv3-hadoop-01 ~]$ 

7.7 Web UI access after enabling HTTPS
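With HTTPS_ONLY plus SPNEGO enabled, the web UI rejects plain browser requests from clients without a Kerberos ticket. From a node that has run kinit, one way to probe the NameNode UI is curl (a sketch: it needs a curl build with GSS/SPNEGO support; -k accepts our self-signed CA, 50470 is dfs.https.port, and --negotiate -u : sends the ticket):

curl -k --negotiate -u : https://tv3-hadoop-01:50470/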

7.8 Start the ResourceManager service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart RM

yarn --daemon stop resourcemanager && yarn --daemon start resourcemanager

## Start RM

yarn --daemon start resourcemanager

## Stop RM

yarn --daemon stop resourcemanager

7.9 Start the NodeManager service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart NM

yarn --daemon stop nodemanager && yarn --daemon start nodemanager

## Start NM

yarn --daemon start nodemanager

## Stop NM

yarn --daemon stop nodemanager

7.10 Verify a MapReduce job

hadoop jar /BigData/run/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 5 10

Output like the following means YARN is deployed correctly:

Job Finished in 66.573 seconds
Estimated value of Pi is 3.28000000000000000000
[hadoop@tv3-hadoop-01 hadoop]$ 