Configuring a High-Availability Cluster

Part 1: Configuration Overview

Set up four virtual machines named node1, node2, node3, and node4. ZooKeeper runs on node1, node2, and node3; the NameNodes run on node1 and node2; and the ResourceManagers run on node3 and node4.

Part 2: ZooKeeper Setup

1. Create the four virtual machines and configure their IP addresses, host mappings (on all four machines), and time synchronization.

2. Create an install directory under /opt and upload the Hadoop, JDK, and ZooKeeper archives to it via Xftp.

3. Set up passwordless SSH login for all nodes:

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub -p22 root@node1

...

You can now log in to the other machines, for example:

ssh -p 22 root@node1
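
To push the key to every node in one pass, a small loop like the one below works (a sketch; it assumes all four hostnames already resolve and the root password is entered at each prompt):

# copy the public key to all four nodes, including the local one
for host in node1 node2 node3 node4; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@$host
done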

4. Copy the /etc/hosts file from node1 to the other three machines:

scp /etc/hosts root@node2:/etc/
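
For reference, a minimal /etc/hosts for this layout could look like the lines below; the first three addresses match the ones used in zoo.cfg later, while node4's address is only an assumed example:

192.168.10.136 node1
192.168.10.137 node2
192.168.10.138 node3
192.168.10.139 node4   # assumed; use the address actually assigned to node4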

5. Extract the JDK into the soft directory, rename it to jdk180, and add the JDK environment variables:

# JAVA_HOME
export JAVA_HOME=/opt/soft/jdk180
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
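
A rough sketch of this step, where the archive name and the extracted directory name are placeholders for whatever was uploaded to /opt/install:

# unpack the JDK, rename it, then reload the profile and verify
tar -zxvf /opt/install/jdk-8u221-linux-x64.tar.gz -C /opt/soft/   # archive name assumed
mv /opt/soft/jdk1.8.0_221 /opt/soft/jdk180                        # extracted name assumed
source /etc/profile
java -version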

6. Extract ZooKeeper into the soft directory, rename it to zk345, add the ZooKeeper environment variables, source the profile, and scp it to node2, node3, and node4:

#zk
export ZOOKEEPER_HOME=/opt/soft/zk345
export PATH=$PATH:$ZOOKEEPER_HOME/bin
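
The same pattern applies here; a sketch, assuming a ZooKeeper 3.4.5 archive in /opt/install and that the variables above live in /etc/profile:

tar -zxvf /opt/install/zookeeper-3.4.5.tar.gz -C /opt/soft/   # archive name assumed
mv /opt/soft/zookeeper-3.4.5 /opt/soft/zk345
source /etc/profile
for host in node2 node3 node4; do
    scp /etc/profile root@$host:/etc/
done
# run `source /etc/profile` on node2, node3, and node4 afterwards as well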

7. In the zk345/conf directory, copy zoo_sample.cfg to zoo.cfg in the same folder, then edit it and add:

server.0=192.168.10.136:2287:3387
server.1=192.168.10.137:2287:3387
server.2=192.168.10.138:2287:3387
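
Put together, a complete zoo.cfg for this layout might look like the sketch below; tickTime, initLimit, syncLimit, and clientPort are the stock values from zoo_sample.cfg, and dataDir points at the zk345/tmp directory used in the next step:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/soft/zk345/tmp
clientPort=2181
server.0=192.168.10.136:2287:3387
server.1=192.168.10.137:2287:3387
server.2=192.168.10.138:2287:3387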

8. In the zk345/tmp directory, create a file named myid containing 0.
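
For example:

mkdir -p /opt/soft/zk345/tmp
echo 0 > /opt/soft/zk345/tmp/myid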

9. Copy zk345 from node1 to node2 and node3, and change the myid on each to 1 and 2 respectively.
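
A sketch of distributing the install and fixing the ids:

scp -r /opt/soft/zk345 root@node2:/opt/soft/
scp -r /opt/soft/zk345 root@node3:/opt/soft/
ssh root@node2 "echo 1 > /opt/soft/zk345/tmp/myid"
ssh root@node3 "echo 2 > /opt/soft/zk345/tmp/myid"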

10. ZooKeeper can now be started with zkServer.sh start.
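
Run the start command on node1, node2, and node3; once all three are up, each node's role (one leader, two followers) can be confirmed with:

zkServer.sh status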

Part 3: Hadoop Setup

1. Extract Hadoop into the soft directory and rename it to hadoop313.

2. Update the environment variables and scp them to node2, node3, and node4:

# HADOOP_HOME
export HADOOP_HOME=/opt/soft/hadoop313
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
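
After sourcing the profile on each node, a quick sanity check:

source /etc/profile
hadoop version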

3. Create a data directory under the hadoop313 directory.

4. In the etc/hadoop directory, edit the following files:

(1)core-site.xml

<configuration>
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://gky</value>
        <description>Logical name; must match the dfs.nameservices value in hdfs-site.xml</description>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop313/data</value>
        <description>Local Hadoop temporary directory on the NameNode</description>
</property>
<property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
</property>
<property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
</property>
<property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
</property>
<property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description>Size of read/write SequenceFiles buffer: 128K</description>
</property>
<property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>10000</value>
        <description>Timeout (ms) for Hadoop connections to ZooKeeper</description>
</property>
</configuration>

(2)hdfs-site.xml

<configuration>
<property>
        <name>dfs.replication</name>
        <value>3</value>
        <description>Number of replicas for each block in Hadoop</description>
</property>
<property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/name</value>
        <description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/data</value>
        <description>Physical storage location of data blocks on the DataNode</description>
</property>

<property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:9869</value>
</property>
<property>
        <name>dfs.nameservices</name>
        <value>gky</value>
        <description>The HDFS nameservice; must match the value in core-site.xml</description>
</property>
<property>
        <name>dfs.ha.namenodes.gky</name>
        <value>nn1,nn2</value>
        <description>Logical names of the two NameNodes under the cluster's logical name</description>
</property>
<property>
        <name>dfs.namenode.rpc-address.gky.nn1</name>
        <value>node1:9000</value>
        <description>RPC address of nn1 (node1)</description>
</property>
<property>
        <name>dfs.namenode.http-address.gky.nn1</name>
        <value>node1:9870</value>
        <description>HTTP address of nn1 (node1)</description>
</property>
<property>
        <name>dfs.namenode.rpc-address.gky.nn2</name>
        <value>node2:9000</value>
        <description>RPC address of nn2 (node2)</description>
</property>
<property>
        <name>dfs.namenode.http-address.gky.nn2</name>
        <value>node2:9870</value>
        <description>HTTP address of nn2 (node2)</description>
</property>
<property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485;node4:8485/gky</value>
        <description>Shared storage location (the JournalNode list) for the NameNode edits metadata</description>
</property>
<property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/soft/hadoop313/tmp/journaldata</value>
        <description>Local disk location where the JournalNodes store their data</description>
</property>
<!-- Failover -->
<property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic NameNode failover</description>
</property>
<property>
        <name>dfs.client.failover.proxy.provider.gky</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>Implementation class used for automatic failover</description>
</property>
<property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
        <description>Fencing method used to prevent split-brain</description>
</property>
<property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
        <description>The sshfence mechanism requires passwordless SSH</description>
</property>
<!-- Permission settings: avoid operations failing because of permission issues -->
<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
        <description>Disable permission checking</description>
</property>

<!-- Throttling: leave more memory and bandwidth to jobs -->
<property>
        <name>dfs.image.transfer.bandwidthPerSec</name>
        <value>1048576</value>
</property>
<property>
        <name>dfs.block.scanner.volume.bytes.per.second</name>
        <value>1048576</value>
</property>
<property>
        <name>dfs.datanode.balance.bandwidthPerSec</name>
        <value>20m</value>
</property>
</configuration>

(3)workers

node1
node2
node3
node4

(4)mapred-site.xml

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Job execution framework: local, classic, or yarn.</description>
</property>
<property>
        <name>mapreduce.application.classpath</name>
        <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/yarn:/opt/soft/hadoop313/share/hadoop/yarn/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*</value>
</property>
<!-- Job history only needs to be configured on a single node -->
<property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
</property>
<property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
</property>
<!-- Container memory limits, read and enforced by the NodeManager; containers exceeding them are killed by the NodeManager (Connection reset by peer) -->
<property>
        <name>mapreduce.map.memory.mb</name>
        <value>1024</value>
</property>
<property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
</property>
</configuration>
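
With the job history addresses above, the history server can later be started on node1 using the standard Hadoop 3 daemon command:

mapred --daemon start historyserver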

(5)yarn-site.xml

<configuration>

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- cluster-id of the RM -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrcabc</value>
  </property>
  <!-- Logical IDs of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
<!-- Hostname of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node4</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>node3:8088</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>node4:8088</value>
</property>
  <!-- ZooKeeper ensemble address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <!-- Auxiliary service required to run MapReduce programs -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
<property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/opt/soft/hadoop313/yarn/local</value>
</property>
<property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/opt/soft/hadoop313/yarn/log</value>
</property>

<!-- Resource tuning -->
<property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
</property>
<property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>2</value>
</property>
<property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
</property>

<!-- Log aggregation -->
<property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
</property>
<property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
</property>
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
<property>
        <name>yarn.application.classpath</name>
        <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/yarn:/opt/soft/hadoop313/share/hadoop/yarn/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*</value>
</property>
<property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>

5. Initialize the NameNode:

hadoop namenode -format
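
For an HA cluster, formatting alone is usually not enough. A sketch of the typical initialization sequence (not part of the original steps), assuming the install has already been copied to all four nodes as in the next step:

# 1. start the JournalNodes on node1-node4 (matching dfs.namenode.shared.edits.dir)
hdfs --daemon start journalnode
# 2. format and start the first NameNode on node1
hdfs namenode -format
hdfs --daemon start namenode
# 3. on node2, sync the metadata instead of formatting again
hdfs namenode -bootstrapStandby
# 4. initialize the HA state in ZooKeeper (run once, on node1)
hdfs zkfc -formatZK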

6. scp the hadoop313 directory to node2, node3, and node4.

7. Start all services:

start-all.sh

8. You can now check which services are running with jps.
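
If everything is up, jps on node1 and node2 typically shows NameNode, DFSZKFailoverController, DataNode, JournalNode, NodeManager, and QuorumPeerMain, while node3 and node4 show ResourceManager alongside DataNode, JournalNode, and NodeManager (and QuorumPeerMain on node3). The active/standby state can also be queried directly, for example:

hdfs haadmin -getServiceState nn1
yarn rmadmin -getServiceState rm1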
