CentOS 7 + Hadoop 3.3.4 + KDC 1.15 Integrated Authentication

I. Cluster Planning

This test uses three virtual machines running CentOS 7.6.

Hadoop is version 3.3.4, with the NameNode deployed in an HA (high-availability) architecture; ZooKeeper is version 3.8.0.

Kerberos is installed from the default YUM repository, version 1.15.1-55.

OS user: hadoop    OS group: hadoop

IP Address      | Hostname         | ZK       | HDFS                            | YARN                                     | KDC
192.168.121.101 | node101.cc.local | server.1 | NameNode, DataNode, JournalNode | ResourceManager, NodeManager, JobHistory | KDC master
192.168.121.102 | node102.cc.local | server.2 | NameNode, DataNode, JournalNode | ResourceManager, NodeManager             | KDC slave 1
192.168.121.103 | node103.cc.local | server.3 | DataNode, JournalNode           | NodeManager                              | KDC slave 2

This test uses a single, unified hadoop service principal rather than creating a separate principal for each service.

Services                                                                          | Host             | Principal
NameNode, DataNode, JournalNode, ResourceManager, NodeManager, JobHistory, Web UI | node101.cc.local | hadoop/node101.cc.local
NameNode, DataNode, JournalNode, ResourceManager, NodeManager, Web UI             | node102.cc.local | hadoop/node102.cc.local
DataNode, JournalNode, NodeManager, Web UI                                        | node103.cc.local | hadoop/node103.cc.local

II. Hadoop Kerberos Configuration

1. Create the Keytab Directory

### Run on every node ###

echo "####mkdir for keytab####"

mkdir /etc/security/keytab/

chown -R root:hadoop /etc/security/keytab/

chmod 770 /etc/security/keytab/

2.1. Create Principals via the kadmin Shell

Log in to the Kerberos database client kadmin as the Kerberos administrator.

##### Run on node101 #####
# kadmin -p kws/admin@CC.LOCAL
Authenticating as principal kws/admin@CC.LOCAL with password.
Password for kws/admin@CC.LOCAL: kws!101
kadmin:  addprinc -randkey hadoop/node101.cc.local
kadmin:  ktadd -k /etc/security/keytab/hadoop.keytab hadoop/node101.cc.local
##### Run on node102 #####
# kadmin -p kws/admin@CC.LOCAL
Authenticating as principal kws/admin@CC.LOCAL with password.
Password for kws/admin@CC.LOCAL: kws!101
kadmin:  addprinc -randkey hadoop/node102.cc.local
kadmin:  xst -k /etc/security/keytab/hadoop.keytab hadoop/node102.cc.local
##### Run on node103 #####
# kadmin -p kws/admin@CC.LOCAL
Authenticating as principal kws/admin@CC.LOCAL with password.
Password for kws/admin@CC.LOCAL: kws!101
kadmin:  addprinc -randkey hadoop/node103.cc.local
kadmin:  xst -k /etc/security/keytab/hadoop.keytab hadoop/node103.cc.local

Notes:
addprinc test/test: creates a new principal

addprinc: add a principal

-randkey: use a randomly generated key; since all Hadoop services authenticate with keytab files, the password can be random

xxx/xxx: the principal to create

xst -k /etc/security/keytab/test.keytab test/test: writes the principal's keys to a keytab file, creating the file; either ktadd or xst can be used

xst: write the principal's keys to a keytab file

-k /etc/security/keytab/test.keytab: the keytab file path and name

xxx/xxx: the principal
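
After the keytabs are generated, a quick sanity check (an optional step, not part of the original procedure) is to list the entries each node's keytab contains; that node's hadoop/<hostname> principal should appear, typically once per encryption type:

# klist -kt /etc/security/keytab/hadoop.keytab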

2.2. Create Principals Directly from the Command Line

##### Run on node101 #####
# kadmin -p kws/admin -w kws\!101 -q "addprinc -randkey hadoop/node101.cc.local"
# kadmin -p kws/admin -w kws\!101 -q "xst -k /etc/security/keytab/hadoop.keytab hadoop/node101.cc.local"

##### Run on node102 #####
# kadmin -p kws/admin -w kws\!101 -q "addprinc -randkey hadoop/node102.cc.local"
# kadmin -p kws/admin -w kws\!101 -q "xst -k /etc/security/keytab/hadoop.keytab hadoop/node102.cc.local"

##### Run on node103 #####
# kadmin -p kws/admin -w kws\!101 -q "addprinc -randkey hadoop/node103.cc.local"
# kadmin -p kws/admin -w kws\!101 -q "xst -k /etc/security/keytab/hadoop.keytab hadoop/node103.cc.local"

Notes:
-p: the principal
-w: the password; when it contains a ! character, escape it with \ for bash
-q: the query (command) to execute

chown -R root:hadoop /etc/security/keytab/

chmod 660 /etc/security/keytab/*
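
With ownership and permissions in place, a keytab-based login (a minimal verification sketch, using one of the principals created above) confirms that the keytab actually authenticates:

# kinit -kt /etc/security/keytab/hadoop.keytab hadoop/node101.cc.local
# klist
# kdestroy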

3. Configure core-site.xml

Edit hadoop-3.3.4/etc/hadoop/core-site.xml

### Run on every node ###

### Note: add the following inside <configuration></configuration> by hand with vi ###

  <!-- Mapping rules from Kerberos principals to OS users -->
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>
        RULE:[2:$1/$2@$0](hadoop/.*@CC.LOCAL)s/.*/hadoop/
        DEFAULT
    </value>
  </property>

  <!-- Enable Kerberos security authentication for the Hadoop cluster -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>

  <!-- Enable Hadoop cluster authorization -->
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>

  <!-- Set inter-node RPC protection to authentication only -->
  <property>
    <name>hadoop.rpc.protection</name>
    <value>authentication</value>
  </property>

Notes:

Hadoop maps Kerberos principals to OS accounts using the rules given in hadoop.security.auth_to_local:
RULE:[<principal translation>](<acceptance filter>)<short name substitution>
RULE:[2:$1/$2@$0](hadoop/.*@.*CC.LOCAL)s/.*/hadoop/

[<principal translation>]
The 2 means the principal contains two components before the @.
In the format string, $0 is the realm, $1 the first component, and $2 the second component.

(<acceptance filter>)
(hadoop/.*@.*CC.LOCAL) is the acceptance filter, a standard regular expression matched against the name produced by the principal translation. Only when the match succeeds is that name passed on to the third part, the short name substitution; otherwise this rule is skipped and the next rule is tried.

<short name substitution>
This works like the sed substitution command (s/.../.../g); its input is the name produced by the principal translation. This part is optional.

DEFAULT is the default rule: it outputs the first component of the principal as the short name.

Verify the mapping rules:

# hadoop org.apache.hadoop.security.HadoopKerberosName hadoop/node101.cc.local@CC.LOCAL
Name: hadoop/node101.cc.local@CC.LOCAL to hadoop
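
The same tool also shows the DEFAULT rule at work; testuser below is a hypothetical single-component principal in the default realm, which DEFAULT should map to its first component, producing output like:

# hadoop org.apache.hadoop.security.HadoopKerberosName testuser@CC.LOCAL
Name: testuser@CC.LOCAL to testuser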

4. Configure hdfs-site.xml

Edit hadoop-3.3.4/etc/hadoop/hdfs-site.xml

### Run on every node ###

### Note: add the following inside <configuration></configuration> by hand with vi ###
  <!-- Require Kerberos-backed block access tokens when accessing DataNode data blocks -->
  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>

  <!-- Kerberos principal for the NameNode service; _HOST resolves to the hostname the service runs on -->
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file path for the NameNode service -->
  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

  <!-- Kerberos principal for the DataNode service -->
  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file path for the DataNode service -->
  <property>
    <name>dfs.datanode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

  <!-- Kerberos principal for the JournalNode service -->
  <property>
    <name>dfs.journalnode.kerberos.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file path for the JournalNode service -->
  <property>
    <name>dfs.journalnode.keytab.file</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

  <!-- Kerberos principal for SPNEGO web authentication on the NameNode -->
  <property>
    <name>dfs.namenode.kerberos.internal.spnego.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Kerberos principal for the WebHDFS REST service -->
  <property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file path for the web UI -->
  <property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

  <!-- Serve the NameNode web UI over HTTPS only -->
  <property>
    <name>dfs.http.policy</name>
    <value>HTTPS_ONLY</value>
  </property>

  <!-- Set the DataNode data transfer protection policy to authentication only -->
  <property>
    <name>dfs.data.transfer.protection</name>
    <value>authentication</value>
  </property>

5. Configure yarn-site.xml

Edit hadoop-3.3.4/etc/hadoop/yarn-site.xml

### Run on every node ###

### Note: add the following inside <configuration></configuration> by hand with vi ###

  <!-- Kerberos principal for the ResourceManager service -->
  <property>
    <name>yarn.resourcemanager.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file for the ResourceManager service -->
  <property>
    <name>yarn.resourcemanager.keytab</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

  <!-- Kerberos principal for the NodeManager service -->
  <property>
    <name>yarn.nodemanager.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file for the NodeManager service -->
  <property>
    <name>yarn.nodemanager.keytab</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

6. Configure mapred-site.xml

Edit hadoop-3.3.4/etc/hadoop/mapred-site.xml

### Run on every node ###

### Note: add the following inside <configuration></configuration> by hand with vi ###

  <!-- Kerberos principal for the JobHistory server -->
  <property>
    <name>mapreduce.jobhistory.principal</name>
    <value>hadoop/_HOST@CC.LOCAL</value>
  </property>

  <!-- Keytab file for the JobHistory server -->
  <property>
    <name>mapreduce.jobhistory.keytab</name>
    <value>/etc/security/keytab/hadoop.keytab</value>
  </property>

III. Configure HDFS to Use HTTPS for Secure Transport

1. Generate the Private Key and Certificate Files

#  openssl req -new -x509 -keyout /etc/security/keytab/hdfs_ca_key -out /etc/security/keytab/hdfs_ca_cert -days 36500 -subj '/C=CC/ST=CC/L=CC/O=CC/OU=CC/CN=CC'
Generating a 2048 bit RSA private key
.....................................................................................................+++
.....................+++
writing new private key to '/etc/security/keytab/hdfs_ca_key'
Enter PEM pass phrase: hdp101
Verifying - Enter PEM pass phrase: hdp101
-----

The command above uses OpenSSL to create a self-signed X.509 CA certificate. When it completes, the private key file hdfs_ca_key and the certificate file hdfs_ca_cert are written to the target directory.
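
To inspect the resulting CA certificate (an optional check), print its subject and validity period:

# openssl x509 -in /etc/security/keytab/hdfs_ca_cert -noout -subject -dates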

2. Distribute the Certificate and Private Key to the Other Nodes

scp -rp /etc/security/keytab/hdfs_ca_* node102:/etc/security/keytab/

scp -rp /etc/security/keytab/hdfs_ca_* node103:/etc/security/keytab/

3. Generate the keystore Files

The keystore file stores the private key and the certificate chain used in the SSL handshake.

#####node101####
# keytool -keystore /etc/security/keytab/keystore -alias node101 -genkey -keyalg RSA -dname "CN=node101.cc.local, OU=CC, O=CC, L=CC, ST=CC, C=CC"

#####node102####
# keytool -keystore /etc/security/keytab/keystore -alias node102 -genkey -keyalg RSA -dname "CN=node102.cc.local, OU=CC, O=CC, L=CC, ST=CC, C=CC"

#####node103####
# keytool -keystore /etc/security/keytab/keystore -alias node103 -genkey -keyalg RSA -dname "CN=node103.cc.local, OU=CC, O=CC, L=CC, ST=CC, C=CC"

Setting CN to each node's hostname is recommended; otherwise hostname verification errors can occur.
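
To review what was generated (an optional check), list the keystore contents; keytool prompts for the keystore password chosen during generation:

# keytool -list -v -keystore /etc/security/keytab/keystore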

4. Generate the truststore File

The truststore file stores the trusted root certificates.

### Run on every node ###
# keytool -keystore /etc/security/keytab/truststore -alias CARoot -import -file /etc/security/keytab/hdfs_ca_cert
Enter keystore password:  
Re-enter new password: 
Owner: CN=CC, OU=CC, O=CC, L=CC, ST=CC, C=CC
Issuer: CN=CC, OU=CC, O=CC, L=CC, ST=CC, C=CC
Serial number: d8e316146bfe7317
Valid from: Thu Apr 11 15:37:59 CST 2024 until: Sat Mar 18 15:37:59 CST 2124
Certificate fingerprints:
         SHA1: 17:85:CA:D7:86:8C:8F:9F:F4:5F:30:B7:FB:43:E0:02:BF:19:D6:F2
         SHA256: 65:85:F0:29:87:0B:09:A6:BD:AD:6F:99:BE:20:3D:9D:FF:8D:7A:44:70:DB:95:C0:D4:13:49:36:27:1E:64:FA
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: F8 22 25 FF C1 89 F8 9D   7F 48 FF 3E AA E0 DF 75  ."%......H.>...u
0010: F6 B6 A7 AE                                        ....
]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: F8 22 25 FF C1 89 F8 9D   7F 48 FF 3E AA E0 DF 75  ."%......H.>...u
0010: F6 B6 A7 AE                                        ....
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

The command above uses the Java keytool utility to import the self-signed CA certificate hdfs_ca_cert generated earlier into the specified truststore under the alias CARoot. After it runs, a truststore file exists on each node.

5. Export a Certificate Request (cert) from the keystore

#####node101######
# keytool -certreq -alias node101 -keystore /etc/security/keytab/keystore -file /etc/security/keytab/cert
Enter keystore password:  

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

#####node102######
# keytool -certreq -alias node102 -keystore /etc/security/keytab/keystore -file /etc/security/keytab/cert
Enter keystore password:  

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

#####node103######
# keytool -certreq -alias node103 -keystore /etc/security/keytab/keystore -file /etc/security/keytab/cert
Enter keystore password:  

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

Note: -alias must match the alias specified when the keystore was generated on each node.

6. Generate the CA-Signed Certificates

On each node, use the hdfs_ca_cert certificate and hdfs_ca_key key files generated at the beginning to sign cert, producing the signed certificate cert_signed.

# openssl x509 -req -CA /etc/security/keytab/hdfs_ca_cert -CAkey /etc/security/keytab/hdfs_ca_key -in /etc/security/keytab/cert -out /etc/security/keytab/cert_signed -days 36500 -CAcreateserial
Signature ok
subject=/C=CC/ST=CC/L=CC/O=CC/OU=CC/CN=node101.cc.local
Getting CA Private Key
Enter pass phrase for /etc/security/keytab/hdfs_ca_key:
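
Before importing, the signed certificate can optionally be verified against the CA:

# openssl verify -CAfile /etc/security/keytab/hdfs_ca_cert /etc/security/keytab/cert_signed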

7. Import the CA Certificate into the keystore

Import the previously generated hdfs_ca_cert certificate into the keystore.

### Run on every node ###
# keytool -keystore /etc/security/keytab/keystore -alias CARoot -import -file /etc/security/keytab/hdfs_ca_cert
Enter keystore password:  
Owner: CN=CC, OU=CC, O=CC, L=CC, ST=CC, C=CC
Issuer: CN=CC, OU=CC, O=CC, L=CC, ST=CC, C=CC
Serial number: 96592715ce2f9bd9
Valid from: Thu Apr 11 16:14:28 CST 2024 until: Sat Mar 18 16:14:28 CST 2124
Certificate fingerprints:
         SHA1: 12:EF:50:6F:7F:91:71:13:21:E9:F6:5D:64:6A:14:13:A4:E7:9E:AC
         SHA256: F1:E5:92:5F:61:3B:D1:13:23:E1:1C:F8:ED:E1:0E:98:FD:25:10:66:C3:2B:87:B4:1F:BD:3A:50:2C:38:B9:8D
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 44 1F 19 D8 4A 22 FC AB   01 7B 18 3F FB 85 9B F2  D...J".....?....
0010: 33 D8 7A 1F                                        3.z.
]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 44 1F 19 D8 4A 22 FC AB   01 7B 18 3F FB 85 9B F2  D...J".....?....
0010: 33 D8 7A 1F                                        3.z.
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

8. Import the Signed Certificate into the keystore

Import the generated signed certificate cert_signed into the keystore, running the command on each node with that node's alias.

# keytool -keystore /etc/security/keytab/keystore -alias node101 -import -file /etc/security/keytab/cert_signed
Enter keystore password:  
Certificate reply was installed in keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

# keytool -keystore /etc/security/keytab/keystore -alias node102 -import -file /etc/security/keytab/cert_signed
Enter keystore password:  
Certificate reply was installed in keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

# keytool -keystore /etc/security/keytab/keystore -alias node103 -import -file /etc/security/keytab/cert_signed
Enter keystore password:  
Certificate reply was installed in keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/security/keytab/keystore -destkeystore /etc/security/keytab/keystore -deststoretype pkcs12".

9. Fix keystore and truststore Permissions

Ensure the hadoop user (the user that starts HDFS) has read access to the generated keystore files.

# chown -R root:hadoop /etc/security/keytab
# chmod 660 /etc/security/keytab/*

10. Configure ssl-server.xml

Copy and edit hadoop-3.3.4/etc/hadoop/ssl-server.xml

### Run on every node ###
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>

<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/security/keytab/truststore</value>
  <description>Truststore to be used by NN and DN. Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.truststore.password</name>
  <value>hdp101</value>
  <description>Optional. Default value is "".
  </description>
</property>

<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>

<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.
  Default value is 10000 (10 seconds).
  </description>
</property>

<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Keystore to be used by NN and DN. Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.password</name>
  <value>hdp101</value>
  <description>Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>hdp!101</value>
  <description>Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>

<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded
  from SSL communication.</description>
</property>

</configuration>
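
Hadoop reads client-side TLS settings from a companion file, ssl-client.xml, which some clients (for example, tools reading over swebhdfs) need in order to trust the CA. A minimal sketch pointing at the truststore built above (the property names are the standard Hadoop ones; adjust the password to your environment):

<configuration>
  <!-- Truststore holding the CA certificate generated above -->
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/etc/security/keytab/truststore</value>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>hdp101</value>
  </property>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
  </property>
</configuration>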

IV. Start and Verify HDFS

Note: start each service as its corresponding OS user.

1. Start Services Individually

### Switch to the hadoop user ###

# su - hadoop

### Start the NameNode ###

# hdfs --daemon start namenode

### Start the DataNode ###

# hdfs --daemon start datanode

### Start the JournalNode ###

# hdfs --daemon start journalnode

### Start the ZKFC ###

# hdfs --daemon start zkfc

2. Start Services in Batch

Prerequisite: passwordless SSH between nodes has been configured for the hadoop user.

Edit the $HADOOP_HOME/sbin/start-dfs.sh script and add the following environment variables at the top:

HDFS_NAMENODE_USER=hadoop
HDFS_DATANODE_USER=hadoop
HDFS_JOURNALNODE_USER=hadoop
HDFS_ZKFC_USER=hadoop

Likewise, add the same variables at the top of $HADOOP_HOME/sbin/stop-dfs.sh.

### Start HDFS (can be run as root) ###

# start-dfs.sh

3. View the HDFS Web UI

URLs:
https://192.168.121.101:9871/
https://192.168.121.102:9871/
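
Once HDFS is up, a keytab-based login followed by a simple filesystem command (a minimal sketch using the principal created earlier) confirms that Kerberos is enforced; the same command without a valid ticket should fail with a GSS authentication error:

# kinit -kt /etc/security/keytab/hadoop.keytab hadoop/node101.cc.local
# hdfs dfs -ls /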

V. HDFS and Local Filesystem Permissions

Set path access permissions as recommended by the official documentation.

Filesystem | Path                                       | User:Group    | Permissions | Test User
local      | dfs.namenode.name.dir                      | hdfs:hadoop   | drwx------  | hadoop
local      | dfs.datanode.data.dir                      | hdfs:hadoop   | drwx------  | hadoop
local      | $HADOOP_LOG_DIR                            | hdfs:hadoop   | drwxrwxr-x  | hadoop
local      | $YARN_LOG_DIR                              | yarn:hadoop   | drwxrwxr-x  | hadoop
local      | yarn.nodemanager.local-dirs                | yarn:hadoop   | drwxr-xr-x  | hadoop
local      | yarn.nodemanager.log-dirs                  | yarn:hadoop   | drwxr-xr-x  | hadoop
local      | container-executor                         | root:hadoop   | --Sr-s--*   | root
local      | conf/container-executor.cfg                | root:hadoop   | r-------*   | root
hdfs       | /                                          | hdfs:hadoop   | drwxr-xr-x  | hadoop
hdfs       | /tmp                                       | hdfs:hadoop   | drwxrwxrwxt | hadoop
hdfs       | /user                                      | hdfs:hadoop   | drwxr-xr-x  | hadoop
hdfs       | yarn.nodemanager.remote-app-log-dir        | yarn:hadoop   | drwxrwxrwxt | hadoop
hdfs       | mapreduce.jobhistory.intermediate-done-dir | mapred:hadoop | drwxrwxrwxt | hadoop
hdfs       | mapreduce.jobhistory.done-dir              | mapred:hadoop | drwxr-x---  | hadoop

### hdfs-site.xml: dfs.namenode.name.dir / dfs.datanode.data.dir / journalnode ###
# chown -R hadoop:hadoop /opt/hadoop
# chmod 700 /opt/hadoop/hadoop-3.3.4/data/namenode
# chmod 700 /opt/hadoop/hadoop-3.3.4/data/datanode
# chmod 700 /opt/hadoop/hadoop-3.3.4/data/journalnode

### hadoop-env.sh: HADOOP_LOG_DIR uses HADOOP_HOME/logs ###
# chmod 775 /opt/hadoop/hadoop-3.3.4/logs

### yarn-env.sh: YARN_LOG_DIR uses the default HADOOP_LOG_DIR ###
# chmod 775 /opt/hadoop/hadoop-3.3.4/logs

### yarn-site.xml: yarn.nodemanager.local-dirs uses the default hadoop.tmp.dir ###
# chmod 755 /opt/hadoop/tmp

### yarn-site.xml: yarn.nodemanager.log-dirs uses the default HADOOP_LOG_DIR/userlogs ###
# chmod 755 /opt/hadoop/hadoop-3.3.4/logs/userlogs

### container-executor ###
# chown root:hadoop /opt/hadoop/hadoop-3.3.4/bin/container-executor
# chmod 6050 /opt/hadoop/hadoop-3.3.4/bin/container-executor

### conf/container-executor.cfg ###
# chown root:hadoop /opt/hadoop/hadoop-3.3.4/etc/hadoop/container-executor.cfg
# chown root:hadoop /opt/hadoop/hadoop-3.3.4/etc/hadoop
# chown root:hadoop /opt/hadoop/hadoop-3.3.4/etc
# chown root:hadoop /opt/hadoop/hadoop-3.3.4
# chown root:hadoop /opt/hadoop
# chmod 400 /opt/hadoop/hadoop-3.3.4/etc/hadoop/container-executor.cfg

Create a user principal for performing the HDFS operations below, then authenticate with it:

##### Run on node101 #####
# kadmin -p kws/admin -w kws\!101 -q"addprinc hadoop/hadoop"
WARNING: no policy specified for hadoop/hadoop@CC.LOCAL; defaulting to no policy
Enter password for principal "hadoop/hadoop@CC.LOCAL": hdp101
Re-enter password for principal "hadoop/hadoop@CC.LOCAL": hdp101
Principal "hadoop/hadoop@CC.LOCAL" created.

 # kinit hadoop/hadoop
Password for hadoop/hadoop@CC.LOCAL: 

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hadoop/hadoop@CC.LOCAL

Valid starting       Expires              Service principal
04/23/2024 18:01:48  04/24/2024 18:01:48  krbtgt/CC.LOCAL@CC.LOCAL
        renew until 04/30/2024 18:01:48

# hadoop fs -chown hadoop:hadoop / /tmp /user
# hadoop fs -chmod 755 /
# hadoop fs -chmod 1777 /tmp
# hadoop fs -chmod 775 /user

# hadoop fs -chown hadoop:hadoop /tmp/logs
# hadoop fs -chmod 1777 /tmp/logs

# hadoop fs -chown -R hadoop:hadoop /tmp/hadoop-yarn
# hadoop fs -chmod -R 770 /tmp/hadoop-yarn/
# hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging/history/done_intermediate
# hadoop fs -chmod -R 750 /tmp/hadoop-yarn/staging/history/done

VI. Configure YARN to Use LinuxContainerExecutor

Edit container-executor.cfg

yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hadoop
min.user.id=1000
allowed.system.users=
feature.tc.enabled=false
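
container-executor.cfg alone does not switch executors; the NodeManager must also be configured to use LinuxContainerExecutor. A minimal sketch of the yarn-site.xml properties that normally accompany this file (standard Hadoop property names; the group matches the cfg above):

  <!-- Use LinuxContainerExecutor so containers run as the submitting user -->
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>

  <!-- Must match yarn.nodemanager.linux-container-executor.group in container-executor.cfg -->
  <property>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value>hadoop</value>
  </property>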

VII. Start and Verify YARN

Note: start each service as its corresponding OS user.

1. Start Services Individually

### Switch to the hadoop user ###

# su - hadoop

### Start the ResourceManager ###

# yarn --daemon start resourcemanager

### Start the NodeManager ###

# yarn --daemon start nodemanager

2. Start Services in Batch

Prerequisite: passwordless SSH between nodes has been configured for the hadoop user.

Edit the $HADOOP_HOME/sbin/start-yarn.sh script and add the following environment variables at the top:

YARN_RESOURCEMANAGER_USER=hadoop
YARN_NODEMANAGER_USER=hadoop

Likewise, add the same variables at the top of $HADOOP_HOME/sbin/stop-yarn.sh.

### Start YARN (can be run as root) ###

# start-yarn.sh

Note: you may hit the error /opt/hadoop/hadoop-3.3.4/bin/container-executor: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory. In that case, upgrade OpenSSL to 1.1.1: download the 1.1.1 source from the official site (/source/old/1.1.1/index.html), build it to produce libcrypto.so.1.1, and copy that file into /usr/lib64; a build sketch follows.
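
A minimal build sketch, assuming the OpenSSL 1.1.1 source tarball has been downloaded (the exact patch release, 1.1.1w here, is illustrative):

# tar xzf openssl-1.1.1w.tar.gz
# cd openssl-1.1.1w
# ./config shared
# make
# cp libcrypto.so.1.1 /usr/lib64/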

3. View the YARN Web UI

URLs:
http://192.168.121.101:8088/
http://192.168.121.102:8088/

VIII. Start and Verify the HistoryServer

Note: start each service as its corresponding OS user.

1. Start the Service Individually

### Switch to the hadoop user ###

# su - hadoop

### Start the HistoryServer ###

# mapred --daemon start historyserver

2. View the HistoryServer Web UI

URL: http://192.168.121.101:19888/
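
As a final end-to-end check (a sketch, not part of the original text), submit the bundled example job as an authenticated user; note that with LinuxContainerExecutor the submitting OS user must not appear in banned.users in container-executor.cfg and must have a UID of at least min.user.id:

# kinit -kt /etc/security/keytab/hadoop.keytab hadoop/node101.cc.local
# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar pi 2 10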
