Hadoop Pseudo-Distributed HDFS Installation (CDH 5.16.2)

Installation Steps

  1. Install the JDK as the root user.
  2. Create the directory: mkdir /usr/java, then upload the jdk-8u45-linux-x64.gz package into it.
  3. Extract it: tar -zxvf jdk-8u45-linux-x64.gz
  4. Fix the owner and group: chown -R root:root jdk1.8.0_45
  5. Create a symlink: ln -s jdk1.8.0_45 jdk
  6. Edit the environment variables: vi /etc/profile
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
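A minimal sketch of verifying the two export lines: write them to a throwaway file, source it, and check that they resolve. On the real box the target is /etc/profile, and /usr/java/jdk is the symlink created in step 5.

```shell
# Write the exports to a temp file standing in for /etc/profile.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
EOF
# Source it and confirm JAVA_HOME resolves and its bin/ leads the PATH.
. "$profile"
echo "$JAVA_HOME"
```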
  7. Create the user: useradd hadoop, then switch to it: su - hadoop
  8. Create the working directories: mkdir app software sourcecode log tmp data lib
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 app         extracted folders and symlinks
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 data        data
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 lib         third-party jars
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 log         log files
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 software    tarballs
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 sourcecode  source builds
drwxrwxr-x 2 hadoop hadoop 6 Nov 27 21:32 tmp         temporary files (in place of /tmp)
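Step 8 can be sketched as one loop. This uses a temp directory in place of /home/hadoop so it can be dry-run anywhere; on the real host, run the mkdir from the hadoop user's home.

```shell
# Stand-in for /home/hadoop.
base=$(mktemp -d)
# Create the seven working directories.
for d in app software sourcecode log tmp data lib; do
    mkdir -p "$base/$d"
done
ls "$base"
```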
  9. Upload the Hadoop tarball to the software directory.
  10. Extract it into the app directory: tar -zxvf hadoop-2.6.0-cdh5.16.2.tar.gz -C /home/hadoop/app/
  11. Enter the app directory and create a symlink: ln -s hadoop-2.6.0-cdh5.16.2 hadoop
  12. Set JAVA_HOME explicitly for Hadoop: vi /home/hadoop/app/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk
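The vi edit above can also be scripted with sed. This sketch runs against a stand-in file (assuming the stock `export JAVA_HOME=${JAVA_HOME}` line that Hadoop 2.x ships with); point it at etc/hadoop/hadoop-env.sh on the real install.

```shell
# Stand-in for hadoop-env.sh, seeded with the stock line.
f=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$f"
# Replace the JAVA_HOME line in place with the explicit path.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk|' "$f"
cat "$f"
```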
  13. Edit the configuration files.
vi /home/hadoop/app/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
</configuration>
vi /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- SecondaryNameNode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop01:50091</value>
    </property>
</configuration>
vi /home/hadoop/app/hadoop/etc/hadoop/slaves
hadoop01
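When scripting the install, the edits above can be written with heredocs instead of vi. A sketch for core-site.xml and slaves against a temp directory; on the real host the target is /home/hadoop/app/hadoop/etc/hadoop/.

```shell
# Stand-in for the etc/hadoop config directory.
conf=$(mktemp -d)
# Write core-site.xml with the NameNode address.
cat > "$conf/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
</configuration>
EOF
# The slaves file lists the DataNode hosts; one host in pseudo-distributed mode.
echo hadoop01 > "$conf/slaves"
grep fs.defaultFS "$conf/core-site.xml"
```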
  14. Set up a passwordless ssh trust relationship.
[hadoop@hadoop01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pBLxyxiDd/ou7f5LQjTsc81VTaqPbVuoboCqCZMcq+k ruoze@ruozedata001
The key's randomart image is:
+---[RSA 2048]----+
|    .         .o.|
|   . +       . ..|
|  . = * .   . .  |
|   . @ = o . .   |
|  . + B S.o .    |
| . + + o. .  + . |
|  *  .o..  .. = .|
| o o.ooo    .o o |
|+E  o=+.o. oo .  |
+----[SHA256]-----+
[hadoop@hadoop01 ~]$ ls -a
.  ..  app  .bash_history  .bash_logout  .bash_profile  .bashrc  data  lib  log  software  sourcecode  .ssh  tmp
[hadoop@hadoop01 ~]$ cd .ssh
[hadoop@hadoop01 .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@hadoop01 .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop01 .ssh]$ ll
total 12
-rw-rw-r-- 1 hadoop hadoop  397 Oct 31 22:46 authorized_keys
-rw------- 1 hadoop hadoop 1675 Oct 31 22:43 id_rsa
-rw-r--r-- 1 hadoop hadoop  397 Oct 31 22:43 id_rsa.pub
[hadoop@hadoop01 .ssh]$ chmod 600 authorized_keys 
[hadoop@hadoop01 .ssh]$ ll
total 12
-rw------- 1 hadoop hadoop  397 Oct 31 22:46 authorized_keys
-rw------- 1 hadoop hadoop 1675 Oct 31 22:43 id_rsa
-rw-r--r-- 1 hadoop hadoop  397 Oct 31 22:43 id_rsa.pub
[hadoop@hadoop01 .ssh]$ ssh hadoop01 date
The authenticity of host 'hadoop01 (172.17.0.5)' can't be established.
ECDSA key fingerprint is SHA256:RHKs1HfxMV8qdj04EqUgtl+xUH1ExUJqnnD6GrNg4+4.
ECDSA key fingerprint is MD5:85:0c:a2:eb:a4:79:fa:a9:ae:40:8c:20:17:82:b7:97.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop01,172.17.0.5' (ECDSA) to the list of known hosts.
Sun Oct 31 22:47:21 CST 2021
[hadoop@hadoop01 .ssh]$ ssh hadoop01 date
Sun Oct 31 22:48:34 CST 2021
[hadoop@hadoop01 .ssh]$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[hadoop@hadoop01 .ssh]$ cat known_hosts 
hadoop01,192.168.0.3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9N5IGRTqwqGGHZcyNJ2i7lG54isK19GMq+Zw3VDIr64dS2sqoZ79n+8Ibz8ZJsU1aNiaJJTzYUvuxZv5W4iHQ=

When you ssh into a machine for the first time, its IP and hostname are appended to known_hosts.
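The interactive session above can be done non-interactively: `-N ''` sets an empty passphrase and `-f` the key path. This sketch runs against a temp directory; on the real host the directory is ~/.ssh.

```shell
# Stand-in for ~/.ssh.
d=$(mktemp -d)
# Generate an RSA key pair with no passphrase, quietly.
ssh-keygen -t rsa -N '' -f "$d/id_rsa" -q
# Authorize the public key for login to this same host.
cat "$d/id_rsa.pub" >> "$d/authorized_keys"
# sshd refuses authorized_keys with loose permissions.
chmod 600 "$d/authorized_keys"
ls "$d"
```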

  15. Configure the environment variables: vi .bashrc

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
export HADOOP_CONF_DIR=/home/hadoop/app/hadoop/etc/hadoop
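A quick sanity check: source a throwaway copy of these exports and confirm the PATH picks up both bin and sbin (paths as configured above).

```shell
# Stand-in for ~/.bashrc.
rc=$(mktemp)
cat >> "$rc" <<'EOF'
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
export HADOOP_CONF_DIR=/home/hadoop/app/hadoop/etc/hadoop
EOF
. "$rc"
# Show the first two PATH entries: bin then sbin.
echo "$PATH" | tr ':' '\n' | sed -n '1,2p'
```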
  16. Apply it: source .bashrc
  17. Verify the binary is on the PATH: which hadoop
  18. Format the NameNode: hdfs namenode -format
  19. First start: start-dfs.sh
    start-dfs.sh
    21/11/01 21:19:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [hadoop01]
    hadoop01: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-namenode-hadoop01.out
    hadoop01: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-datanode-hadoop01.out
    Starting secondary namenodes [hadoop01]
    hadoop01: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-secondarynamenode-hadoop01.out
    21/11/01 21:19:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    
  20. The NameNode runs on hadoop01, controlled by fs.defaultFS; the DataNodes are controlled by the slaves file; the SecondaryNameNode is controlled by dfs.namenode.secondary.http-address and dfs.namenode.secondary.https-address.
  21. NameNode: the master, the name node (the boss); read and write requests go through it first. DataNode: worker node, the data node (the underling); stores and retrieves data. SecondaryNameNode: the second name node (the number two).
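Once the daemons are up, a quick smoke test on the live host confirms the roles above: jps (shipped with the JDK) should list the three daemons, and a round-trip through HDFS confirms the NameNode is serving. The /tmp/smoke path is just an example.

```shell
# Expect NameNode, DataNode and SecondaryNameNode in the output (plus Jps).
jps
# Round-trip a small file through HDFS.
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/hosts /tmp/smoke/
hdfs dfs -ls /tmp/smoke
```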