Hadoop Pseudo-Distributed Setup

Service   Version          Host IP       OS        Username   Password
hadoop    2.7.1            10.1.1.101    CentOS 7  root       password
SSH       system-bundled   10.1.1.101    CentOS 7  root       password
Java      1.8              10.1.1.101    CentOS 7  root       password

Note: all installation packages are under /h3cu.

I. Basic Configuration

  1. Set the hostname to master and apply it immediately

    [root@localhost ~]# hostnamectl set-hostname master 
    [root@localhost ~]# bash 
    [root@master ~]#
    
  2. Stop the firewall and disable it from starting at boot

    [root@master ~]# systemctl stop firewalld 
    [root@master ~]# systemctl disable firewalld 
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
    [root@master ~]# systemctl status  firewalld 
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
    
    Jun 19 03:14:25 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
    Jun 19 03:14:26 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
    Jun 19 07:46:21 master systemd[1]: Stopping firewalld - dynamic firewall daemon...
    Jun 19 07:46:22 master systemd[1]: Stopped firewalld - dynamic firewall daemon.
    
  3. Configure the /etc/hosts file

    1. Check the IP address

      [root@master ~]# ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:0c:29:5a:51:a2 brd ff:ff:ff:ff:ff:ff
          inet 10.1.1.101/24 brd 10.1.1.255 scope global eno16777736
             valid_lft forever preferred_lft forever
          inet6 fe80::20c:29ff:fe5a:51a2/64 scope link 
             valid_lft forever preferred_lft forever
      
    2. Edit the hosts file

      [root@master ~]# echo "10.1.1.101 master" >> /etc/hosts
      
    3. Test name resolution

      [root@master ~]# ping master -c 5
      PING master (10.1.1.101) 56(84) bytes of data.
      64 bytes from master (10.1.1.101): icmp_seq=1 ttl=64 time=0.011 ms
      64 bytes from master (10.1.1.101): icmp_seq=2 ttl=64 time=0.034 ms
      64 bytes from master (10.1.1.101): icmp_seq=3 ttl=64 time=0.025 ms
      64 bytes from master (10.1.1.101): icmp_seq=4 ttl=64 time=0.029 ms
      64 bytes from master (10.1.1.101): icmp_seq=5 ttl=64 time=0.032 ms
      
      --- master ping statistics ---
      5 packets transmitted, 5 received, 0% packet loss, time 4028ms
      rtt min/avg/max/mdev = 0.011/0.026/0.034/0.008 ms
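
      Each rerun of the `echo ... >> /etc/hosts` command above appends a duplicate
      line. A minimal idempotent sketch — demonstrated against a scratch file
      (`./hosts.demo`, a hypothetical stand-in for /etc/hosts) rather than the real
      system file:

```shell
# Idempotent append demo; hosts.demo stands in for /etc/hosts
HOSTS=./hosts.demo
: > "$HOSTS"                                  # start from an empty scratch file
add_host() {
  # append the entry only if the exact line is not already present
  grep -qxF "$1" "$HOSTS" || echo "$1" >> "$HOSTS"
}
add_host "10.1.1.101 master"
add_host "10.1.1.101 master"                  # second call changes nothing
grep -c "master" "$HOSTS"                     # prints 1
```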
      

II. Configuring SSH

  1. Set up passwordless SSH login

    1. Generate a key pair

      [root@master ~]# ssh-keygen -t rsa -P ''
      Generating public/private rsa key pair.
      Enter file in which to save the key (/root/.ssh/id_rsa): 
      Created directory '/root/.ssh'.
      Your identification has been saved in /root/.ssh/id_rsa.
      Your public key has been saved in /root/.ssh/id_rsa.pub.
      The key fingerprint is:
      9c:5e:0f:f2:6a:f6:c1:c4:5c:40:b9:e8:31:d3:9e:55 root@master
      The key's randomart image is:
      +--[ RSA 2048]----+
      |         .o.     |
      |          ..  E  |
      |         o ...   |
      |       .=+o..    |
      |       .S=*o     |
      |       ..*oo     |
      |        . + .    |
      |        o. .     |
      |       o...      |
      +-----------------+
      
    2. Copy the public key

      [root@master ~]# ssh-copy-id -i master 
      The authenticity of host 'master (10.1.1.101)' can't be established.
      ECDSA key fingerprint is a2:ec:2f:3a:b9:33:d3:a7:fd:51:9d:d7:cf:ce:fb:ea.
      Are you sure you want to continue connecting (yes/no)? yes
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
      root@master's password: 
      
      Number of key(s) added: 1
      
      Now try logging into the machine, with:   "ssh 'master'"
      and check to make sure that only the key(s) you wanted were added.
      
    3. Test the login

      [root@master ~]# ssh master 
      Last login: Sat Jun 19 03:20:35 2021 from 10.1.1.1
      

III. Installing Java

  1. Check whether OpenJDK is installed; uninstall it if present (no output here means it is not installed)

    [root@master ~]# rpm -qa |grep openjdk
    
  2. Extract the Java package from /h3cu into /usr/local/src

    [root@master ~]# tar -xzf /h3cu/jdk-8u144-linux-x64.tar.gz -C /usr/local/src/
    
  3. Rename the extracted directory to java

    [root@master ~]# ll /usr/local/src/
    total 4
    drwxr-xr-x. 8 10 143 4096 Jul 22  2017 jdk1.8.0_144
    [root@master ~]# mv /usr/local/src/jdk1.8.0_144 /usr/local/src/java
    [root@master ~]# ll /usr/local/src/
    total 4
    drwxr-xr-x. 8 10 143 4096 Jul 22  2017 java
    
  4. Configure the Java environment variables for the current user only

    [root@master ~]# vi /root/.bash_profile
    export JAVA_HOME=/usr/local/src/java
    export PATH=$PATH:$JAVA_HOME/bin
    
  5. Load the environment variables and check the Java version

    [root@master ~]# source /root/.bash_profile 
    [root@master ~]# java -version 
    java version "1.8.0_144"
    Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
    Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
    

IV. Deploying Hadoop in Pseudo-Distributed Mode

  1. Extract the Hadoop package from /h3cu into /usr/local/src

    [root@master ~]# tar -xzf /h3cu/hadoop-2.7.1.tar.gz -C /usr/local/src/
    
  2. Rename the extracted directory to hadoop

    [root@master ~]# ll /usr/local/src/
    total 8
    drwxr-xr-x. 9 10021 10021 4096 Jun 29  2015 hadoop-2.7.1
    drwxr-xr-x. 8    10   143 4096 Jul 22  2017 java
    [root@master ~]# mv /usr/local/src/hadoop-2.7.1 /usr/local/src/hadoop
    [root@master ~]# ll /usr/local/src/
    total 8
    drwxr-xr-x. 9 10021 10021 4096 Jun 29  2015 hadoop
    drwxr-xr-x. 8    10   143 4096 Jul 22  2017 java
    
  3. Configure the Hadoop environment variables for the current user only

    [root@master ~]# vi /root/.bash_profile 
    export HADOOP_HOME=/usr/local/src/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    
  4. Load the environment variables and check the Hadoop version

    [root@master ~]# source /root/.bash_profile 
    [root@master ~]# hadoop version 
    Hadoop 2.7.1
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
    Compiled by jenkins on 2015-06-29T06:04Z
    Compiled with protoc 2.5.0
    From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
    This command was run using /usr/local/src/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar
    
  5. Configure hadoop-env.sh

    [root@master ~]# vi /usr/local/src/hadoop/etc/hadoop/hadoop-env.sh 
    export JAVA_HOME=/usr/local/src/java
    
  6. Configure core-site.xml

    1. Command

      [root@master ~]# vi /usr/local/src/hadoop/etc/hadoop/core-site.xml 
      
    2. Add the following inside the <configuration> element

      <property>
        <!-- URI of the NameNode (required) -->
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
      </property>
      <property>
        <!-- Read/write buffer size used in SequenceFiles, in bytes; 131072 bytes = 128 KB (optional) -->
        <name>io.file.buffer.size</name>
        <value>131072</value>
      </property>
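
      Note that these <property> entries belong inside the file's single top-level
      <configuration> element; the same wrapper applies to hdfs-site.xml,
      mapred-site.xml, and yarn-site.xml below. A complete minimal core-site.xml
      would read roughly:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
```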
      
  7. Configure hdfs-site.xml

    1. Command

      [root@master ~]# vi /usr/local/src/hadoop/etc/hadoop/hdfs-site.xml 
      
    2. Add the following inside the <configuration> element

      <property>
        <!-- HDFS block replication factor; the default is 3 (required) -->
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <!-- Local filesystem path where the NameNode stores the namespace image and edit logs (required) -->
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/src/hadoop/dfs/name</value>
      </property>
      <property>
        <!-- Local filesystem path where the DataNode stores its blocks (required) -->
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/src/hadoop/dfs/data</value>
      </property>
      
  8. Configure mapred-site.xml

    1. Command

      [root@master ~]# cp /usr/local/src/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/src/hadoop/etc/hadoop/mapred-site.xml
      [root@master ~]# vi /usr/local/src/hadoop/etc/hadoop/mapred-site.xml
      
    2. Add the following inside the <configuration> element

      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      
  9. Configure yarn-site.xml

    1. Command

      [root@master ~]# vi /usr/local/src/hadoop/etc/hadoop/yarn-site.xml 
      
    2. Add the following inside the <configuration> element

      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
      </property>
      
  10. Format the NameNode

    [root@master ~]# hdfs namenode -format 
    21/06/19 08:27:08 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = master/10.1.1.101
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 2.7.1
    STARTUP_MSG:   classpath = /usr/local/src/hadoop/etc/hadoo(omitted)
    STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
    STARTUP_MSG:   java = 1.8.0_144
    ************************************************************/
    ...(omitted)
    21/06/19 08:27:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-83508879-10.1.1.101-1624062429594
    21/06/19 08:27:09 INFO common.Storage: Storage directory /usr/local/src/hadoop/dfs/name has been successfully formatted.
    21/06/19 08:27:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    21/06/19 08:27:09 INFO util.ExitUtil: Exiting with status 0
    21/06/19 08:27:09 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at master/10.1.1.101
    ************************************************************/
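
    Re-running `hdfs namenode -format` regenerates the cluster ID and orphans any
    existing DataNode data, so it should only be run once. A hedged guard sketch,
    using a scratch directory (`./dfs-name.demo`, hypothetical) in place of the
    real /usr/local/src/hadoop/dfs/name:

```shell
# Only format when the name directory has never been formatted.
# A formatted NameNode directory contains a current/VERSION file.
NAME_DIR=./dfs-name.demo        # stands in for /usr/local/src/hadoop/dfs/name
mkdir -p "$NAME_DIR"
if [ -f "$NAME_DIR/current/VERSION" ]; then
  echo "already formatted - skipping"
else
  echo "formatting (the real host would run: hdfs namenode -format)"
  mkdir -p "$NAME_DIR/current"
  touch "$NAME_DIR/current/VERSION"   # simulate the format's effect for the demo
fi
```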
    
  11. Start the Hadoop cluster and check the daemons

    1. Start the daemons

      [root@master ~]# start-all.sh 
      This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
      Starting namenodes on [master]
      master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-root-namenode-master.out
      The authenticity of host 'localhost (::1)' can't be established.
      ECDSA key fingerprint is a2:ec:2f:3a:b9:33:d3:a7:fd:51:9d:d7:cf:ce:fb:ea.
      Are you sure you want to continue connecting (yes/no)? yes
      localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
      localhost: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-root-datanode-master.out
      Starting secondary namenodes [0.0.0.0]
      The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
      ECDSA key fingerprint is a2:ec:2f:3a:b9:33:d3:a7:fd:51:9d:d7:cf:ce:fb:ea.
      Are you sure you want to continue connecting (yes/no)? es
      Please type 'yes' or 'no': yes
      0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
      0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-root-secondarynamenode-master.out
      starting yarn daemons
      starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-root-resourcemanager-master.out
      localhost: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-root-nodemanager-master.out
      
    2. Check with jps

      [root@master ~]# jps
      10802 SecondaryNameNode
      11042 NodeManager
      10948 ResourceManager
      10535 NameNode
      10651 DataNode
      11326 Jps
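
      A healthy pseudo-distributed node shows all five daemons. A small sketch
      that checks the list automatically — fed here with the sample output above;
      on the real host you would set `jps_out="$(jps)"` instead:

```shell
# Verify that every expected Hadoop daemon appears in the jps output
jps_out="10802 SecondaryNameNode
11042 NodeManager
10948 ResourceManager
10535 NameNode
10651 DataNode"                 # on the real host: jps_out="$(jps)"
missing=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w matches whole words, so "NameNode" does not match inside "SecondaryNameNode"
  echo "$jps_out" | grep -qw "$d" || missing="$missing $d"
done
[ -z "$missing" ] && echo "all daemons running" || echo "missing:$missing"
```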
      