5. Distributed Installation of Hadoop

Basic Information

Version: 2.7.3
Machines: three nodes
Account: hadoop
Source path: /opt/software/hadoop-2.7.3.tar.gz
Target path: /opt/hadoop -> /opt/hadoop-2.7.3
Dependency: ZooKeeper

Installation Steps

1. Switch to the hadoop account and extract Hadoop into the target installation directory with tar -zxvf:

[root@bgs-5p173-wangwenting opt]# su hadoop
[hadoop@bgs-5p173-wangwenting opt]$ cd /opt/software
[hadoop@bgs-5p173-wangwenting software]$ tar -zxvf hadoop-2.7.3.tar.gz -C /opt
[hadoop@bgs-5p173-wangwenting software]$ cd /opt
[hadoop@bgs-5p173-wangwenting opt]$ ln -s /opt/hadoop-2.7.3 /opt/hadoop
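The versioned directory plus a stable symlink keeps later upgrades cheap: every config in this guide references /opt/hadoop regardless of the installed version. A minimal sketch of the same idea (paths from this guide; `ln -sfn` is used so re-running it re-points a stale link instead of failing):

```shell
# Sketch: version-pinned unpack plus a stable symlink (paths from this guide).
BASE=/opt
VERSION=2.7.3
tar -zxf "$BASE/software/hadoop-$VERSION.tar.gz" -C "$BASE"
# -f replaces an existing link; -n treats an existing symlink as a plain file,
# so the link itself is replaced rather than a new link being created
# inside the old target directory.
ln -sfn "$BASE/hadoop-$VERSION" "$BASE/hadoop"
```

On an upgrade, only the `ln -sfn` line needs to run again against the new versioned directory.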

2. Create the tmpdir directory:

[hadoop@bgs-5p173-wangwenting opt]$ cd  /opt/hadoop
[hadoop@bgs-5p173-wangwenting hadoop]$ mkdir -p tmpdir

3. Configure the hadoop-env.sh file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$ mkdir -p /opt/hadoop/pids
[hadoop@bgs-5p173-wangwenting hadoop]$ vim hadoop-env.sh
Add the following settings to hadoop-env.sh:
export JAVA_HOME=/opt/java
export HADOOP_PID_DIR=/opt/hadoop/pids

4. Configure the mapred-env.sh file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$ vim mapred-env.sh
Add the following setting to mapred-env.sh:
export JAVA_HOME=/opt/java

5. Configure the core-site.xml file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$  vim core-site.xml
Add the following configuration to core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bgs-5p173-wangwenting:8020</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/tmpdir</value>
    </property>
    <property>
        <name>fs.file.impl</name>
        <value>org.apache.hadoop.fs.LocalFileSystem</value>
        <description>The FileSystem for file: uris.</description>
    </property>


    <property>
       <name>fs.hdfs.impl</name>
       <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
       <description>The FileSystem for hdfs: uris.</description>
    </property>


    <property>
        <name>io.compression.codecs</name>
        <value>
               org.apache.hadoop.io.compress.GzipCodec,
               org.apache.hadoop.io.compress.DefaultCodec,
               org.apache.hadoop.io.compress.BZip2Codec,
               org.apache.hadoop.io.compress.SnappyCodec
        </value>
    </property>
   
     <property>
         <name>io.file.buffer.size</name>
            <value>131072</value>
     </property>
    
     <property>
       <name>fs.trash.interval</name>
         <value>1440</value>
      </property>
</configuration>
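Before moving on, it is worth confirming that the critical keys actually landed in the file. A small helper (a sketch; the `grep -A1` pattern assumes each `<name>` is immediately followed by its `<value>`, as in the listing above):

```shell
# Sanity-check helper (sketch): print the <value> line that follows a
# given <name> entry in a Hadoop *-site.xml file.
conf_value() {
    # $1 = property name, $2 = path to the *-site.xml file
    grep -A1 "<name>$1</name>" "$2" | grep '<value>'
}
# Usage on this guide's layout:
#   conf_value fs.defaultFS      /opt/hadoop/etc/hadoop/core-site.xml
#   conf_value hadoop.tmp.dir    /opt/hadoop/etc/hadoop/core-site.xml
#   conf_value fs.trash.interval /opt/hadoop/etc/hadoop/core-site.xml
```

For a thorough check, `hdfs getconf -confKey fs.defaultFS` reports the value Hadoop itself resolves, XInclude and defaults included.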

6. Configure the hdfs-site.xml file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$ vim hdfs-site.xml
Add the following configuration to hdfs-site.xml:
<configuration>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/data/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/data/datanode</value>
</property>
<property> 
    <name>dfs.webhdfs.enabled</name> 
    <value>true</value> 
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>200</value>
</property>
<property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
</property>


<property>
    <name>dfs.permissions.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>true</value>
</property>
<property>
    <name>dfs.secondary.http.address</name>
    <value>bgs-5p174-wangwenting:50090</value>
</property>

</configuration>
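Two of the values above are easy to misread: dfs.blocksize is specified in bytes, and dfs.replication of 2 means every block is stored on both DataNodes. A quick unit check:

```shell
# dfs.blocksize is given in bytes; confirm 134217728 is the intended 128 MB.
blocksize_bytes=134217728
echo $((blocksize_bytes / 1024 / 1024))   # 128 (MB)
```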

7. Configure the mapred-site.xml file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@bgs-5p173-wangwenting hadoop]$ vim mapred-site.xml
A fresh 2.7.3 distribution ships only mapred-site.xml.template, so copy it first as above, then add the following configuration to mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapred.job.history.server.embedded</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>bgs-5p173-wangwenting:10020</value>
    </property>
    <property>
        <name>jobhistory.webapp.address</name>
        <value>bgs-5p173-wangwenting:19888</value>
    </property>
    <property>
        <name>hadoop.job.history.user.location</name>
        <value>/mapred/userhistory</value>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>/tmp/local</value>
    </property>

    <property>
       <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
       <value>0.05</value>
    </property>

   <property>
     <name>mapreduce.map.memory.mb</name>
     <value>1536</value>
   </property>
   <property>
     <name>mapreduce.map.java.opts</name>
     <value>-Xmx1024M</value>
   </property>
   <property>
     <name>mapreduce.reduce.memory.mb</name>
     <value>3072</value>
   </property>
   <property>
     <name>mapreduce.reduce.java.opts</name>
     <value>-Xmx2560M</value>
   </property>
</configuration>
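The memory pairs above follow the common rule of thumb that the JVM heap (`-Xmx`) is sized at roughly two thirds to five sixths of the YARN container, leaving headroom for off-heap memory. Checking the ratios for the values configured here:

```shell
# Heap-to-container ratios for the settings above (integer percent).
echo $((1024 * 100 / 1536))   # 66 -> map:    -Xmx1024M in a 1536 MB container
echo $((2560 * 100 / 3072))   # 83 -> reduce: -Xmx2560M in a 3072 MB container
```

If a container is later resized, the matching `-Xmx` should be adjusted to keep the ratio in this range, or tasks risk being killed for exceeding container limits.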

8. Configure the yarn-site.xml file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop/
[hadoop@bgs-5p173-wangwenting hadoop]$ vim yarn-site.xml
Add the following configuration to yarn-site.xml:
<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>  
        <name>yarn.log-aggregation-enable</name>  
        <value>true</value>  
    </property>  

    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>  
        <value>/logs</value> 
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>2592000</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>61440</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>22</value>
    </property>

    <property>
        <name>mapreduce.map.output.compress</name>  
        <value>true</value>
    </property>
        
    <property>
        <name>mapred.map.output.compress.codec</name>  
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>

    <property>  
        <name>yarn.app.mapreduce.am.env</name>  
        <value>LD_LIBRARY_PATH=$HADOOP_HOME/lib/native</value>  
    </property>  

    <property> 
        <name>yarn.log.server.url</name> 
        <value>http://bgs-5p173-wangwenting:19888/jobhistory/logs/</value> 
    </property> 

    <property>
        <name>yarn.nodemanager.delete.debug-delay-sec</name>
        <value>3600</value>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/tmp/nm-local-dir</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>22</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>10240</value>
    </property>

    <property>
        <name>yarn.resourcemanager.am.max-attempts</name>
        <value>4</value>
    </property>

    <!--RM HA-->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>                                       
        <value>rm-cluster</value>                                                          
        <description>Cluster name, used to pair the ResourceManagers during HA leader election</description>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.id</name>
        <value>rm1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>bgs-5p173-wangwenting:2181,bgs-5p174-wangwenting:2181,bgs-5p175-wangwenting:2181</value>
    </property>

    <!--rm1-->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>bgs-5p173-wangwenting:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>bgs-5p173-wangwenting:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>bgs-5p173-wangwenting:8090</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>bgs-5p173-wangwenting:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>bgs-5p173-wangwenting:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>bgs-5p173-wangwenting:8033</value>
    </property>

    <!--rm2-->
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>bgs-5p174-wangwenting:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>bgs-5p174-wangwenting:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>bgs-5p174-wangwenting:8090</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>bgs-5p174-wangwenting:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>bgs-5p174-wangwenting:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>bgs-5p174-wangwenting:8033</value>
    </property>

    <property>  
        <name>yarn.client.failover-proxy-provider</name>  
        <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>  
    </property>   
    <property>  
        <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>  
        <value>/yarn-leader-election</value>  
    </property>  
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
</configuration>

9. Configure the Hadoop environment variables:

[hadoop@bgs-5p173-wangwenting hadoop]$ vim /etc/profile
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
After saving (editing /etc/profile requires root; alternatively put the exports in ~/.bashrc), run source /etc/profile to apply the configuration:
[hadoop@bgs-5p173-wangwenting hadoop]$ source /etc/profile
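To verify the exports took effect, check that /opt/hadoop/bin is now on PATH; on a machine where Hadoop is unpacked, `hadoop version` should then print 2.7.3. A sketch of the PATH check:

```shell
# Same exports as above, then assert /opt/hadoop/bin made it onto PATH.
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
case ":$PATH:" in
    *":/opt/hadoop/bin:"*) echo "PATH ok" ;;
    *)                     echo "PATH missing /opt/hadoop/bin" ;;
esac
```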

10. Edit the slaves file:

[hadoop@bgs-5p173-wangwenting hadoop]$ cd /opt/hadoop/etc/hadoop
[hadoop@bgs-5p173-wangwenting hadoop]$ vim slaves
Add the DataNode hostnames to the slaves file, one per line:
bgs-5p174-wangwenting
bgs-5p175-wangwenting
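The same edit can be done non-interactively; the slaves file is simply one DataNode hostname per line. A sketch (hostnames from this guide; the helper name is illustrative):

```shell
# Non-interactive equivalent (sketch): write one DataNode hostname per line.
write_slaves() {
    # $1 = path to the slaves file; remaining args = DataNode hostnames
    out=$1; shift
    printf '%s\n' "$@" > "$out"
}
# Usage on this guide's layout:
#   write_slaves /opt/hadoop/etc/hadoop/slaves \
#       bgs-5p174-wangwenting bgs-5p175-wangwenting
```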

11. From bgs-5p173-wangwenting, copy hadoop-2.7.3 to hadoop@bgs-5p174-wangwenting and hadoop@bgs-5p175-wangwenting, set up the environment variables on each machine as in step 9, and create the symlink on each remote machine (note: yarn.resourcemanager.ha.id is a per-node setting, so on bgs-5p174-wangwenting change it from rm1 to rm2 in yarn-site.xml):

[hadoop@bgs-5p173-wangwenting hadoop]$ scp -r /opt/hadoop-2.7.3 hadoop@bgs-5p174-wangwenting:/opt/
[hadoop@bgs-5p173-wangwenting hadoop]$ ssh hadoop@bgs-5p174-wangwenting "ln -s /opt/hadoop-2.7.3 /opt/hadoop"
[hadoop@bgs-5p173-wangwenting hadoop]$ scp -r /opt/hadoop-2.7.3 hadoop@bgs-5p175-wangwenting:/opt/
[hadoop@bgs-5p173-wangwenting hadoop]$ ssh hadoop@bgs-5p175-wangwenting "ln -s /opt/hadoop-2.7.3 /opt/hadoop"

12. Format the NameNode (only required before the very first start!), start Hadoop, and start the JobHistory service:

# Format the NameNode; required only before the very first start!
# (the `hadoop namenode -format` form still works but is deprecated in 2.x)
[hadoop@bgs-5p173-wangwenting hadoop]$ hdfs namenode -format
# Start HDFS, YARN, and the JobHistory server
[hadoop@bgs-5p173-wangwenting hadoop]$ ${HADOOP_HOME}/sbin/start-all.sh
[hadoop@bgs-5p173-wangwenting hadoop]$ ${HADOOP_HOME}/sbin/mr-jobhistory-daemon.sh start historyserver
start-all.sh is a wrapper that invokes the start scripts of both modules, start-dfs.sh and start-yarn.sh, so HDFS and YARN can also be started individually.
Note: if a DataNode fails to come up, check whether tmpdir still holds stale data from an earlier deployment; if so, delete that directory, and delete it on the other two machines as well.

13. Check the services on each machine: run jps on bgs-5p173-wangwenting, bgs-5p174-wangwenting, and bgs-5p175-wangwenting:

[hadoop@bgs-5p173-wangwenting ~]$ jps
24429 Jps
22898 ResourceManager
24383 JobHistoryServer
22722 SecondaryNameNode
22488 NameNode
[hadoop@bgs-5p174-wangwenting ~]$ jps
7650 DataNode
7788 NodeManager
8018 Jps
[hadoop@bgs-5p175-wangwenting ~]$ jps
28407 Jps
28038 DataNode
28178 NodeManager
If all three machines show the processes above, the Hadoop cluster services are working correctly.

Access the Hadoop web UIs by opening the following addresses in a browser:

http://bgs-5p173-wangwenting:8088   (YARN ResourceManager)
http://bgs-5p173-wangwenting:50070  (HDFS NameNode)
http://bgs-5p173-wangwenting:19888  (MapReduce JobHistory)

 
