Big Data Platform HA Architecture Setup

# create one directory per component under /home/cdh
mkdir hive
mkdir hadoop
mkdir hbase
mkdir scala
mkdir spark
mkdir zookeeper

# move the unpacked distributions into their target locations
mv apache-hive-2.1.0-bin hive/hive-2.1.0
mv hadoop-2.7.3 hadoop
mv hbase-1.2.5 hbase
mv scala-2.11.6 scala
mv spark-2.1.0-bin-hadoop2.7 spark/spark-2.1.0
mv zookeeper-3.4.10 zookeeper

export JAVA_HOME=/usr/java/jdk1.8.0_111
export JRE_HOME=/usr/java/jdk1.8.0_111/jre
export SCALA_HOME=/home/cdh/scala/scala-2.11.6
export SPARK_HOME=/home/cdh/spark/spark-2.1.0
export HADOOP_HOME=/home/cdh/hadoop/hadoop-2.7.3
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
export HIVE_HOME=/home/cdh/hive/hive-2.1.0
#export IDEA_HOME=/home/cdh/idea/idea-IC-141.178.9
export HBASE_HOME=/home/cdh/hbase/hbase-1.2.5
export ZOOKEEPER_HOME=/home/cdh/zookeeper/zookeeper-3.4.10
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin
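
These exports are assumed to be appended to ~/.bashrc (or an equivalent profile) on every node; after reloading the shell, a quick spot-check confirms the tools resolve:

source ~/.bashrc
java -version       # expect 1.8.0_111
hadoop version      # expect 2.7.3
scala -version      # expect 2.11.6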

Overview: the cluster consists of only three machines: cdh05, cdh06, and cdh07. All three run Hadoop and ZooKeeper. The two NameNodes run on cdh05 and cdh06, and the two DataNodes run on cdh06 and cdh07.
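
In table form, the role assignment described above is:

Host    NameNode    DataNode    ZooKeeper
cdh05   yes         no          yes
cdh06   yes         yes         yes
cdh07   no          yes         yes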


ZooKeeper configuration


Edit the zoo.cfg file:
cp ${ZOOKEEPER_HOME}/conf/zoo_sample.cfg ${ZOOKEEPER_HOME}/conf/zoo.cfg
vim ${ZOOKEEPER_HOME}/conf/zoo.cfg
Change (note that zoo.cfg does not expand shell variables, so the paths must be written out literally):
dataDir=/home/cdh/zookeeper/zookeeper-3.4.10/tmp
Add:
dataLogDir=/home/cdh/zookeeper/zookeeper-3.4.10/logs
Append at the end:
server.1=cdh05:2888:3888
server.2=cdh06:2888:3888
server.3=cdh07:2888:3888
Save and exit.
Create the directories:
mkdir ${ZOOKEEPER_HOME}/tmp
mkdir ${ZOOKEEPER_HOME}/logs
Set myid (use 1 on cdh05, 2 on cdh06, and 3 on cdh07, matching the server.N entries):
echo 1 > ${ZOOKEEPER_HOME}/tmp/myid

Equivalent script:
sed -i 's%dataDir=/tmp/zookeeper%dataDir='"$ZOOKEEPER_HOME"'/tmp%g' ${ZOOKEEPER_HOME}/conf/zoo.cfg
echo "dataLogDir=$ZOOKEEPER_HOME/logs" >> ${ZOOKEEPER_HOME}/conf/zoo.cfg
echo "server.1=cdh05:2888:3888
server.2=cdh06:2888:3888
server.3=cdh07:2888:3888" >> ${ZOOKEEPER_HOME}/conf/zoo.cfg

mkdir ${ZOOKEEPER_HOME}/logs
mkdir ${ZOOKEEPER_HOME}/tmp
# write this node's myid (1 for cdh05, 2 for cdh06, 3 for cdh07)
cdh=cdh0
host=$HOSTNAME
for x in 5 6 7
do
  value=${cdh}${x}
  if [ "$host" = "$value" ]; then
    echo $(($x-4)) > ${ZOOKEEPER_HOME}/tmp/myid
  fi
done
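
Once zoo.cfg and myid are in place on all three hosts, ZooKeeper can be started and verified per node (a suggested check, assuming the same layout on every host):

${ZOOKEEPER_HOME}/bin/zkServer.sh start
${ZOOKEEPER_HOME}/bin/zkServer.sh status    # one host should report Mode: leader, the other two Mode: follower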


Building the Hadoop cluster


*Note: in hadoop-env.sh, comment out the following line:
#export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
Then edit each of:
vim hadoop-env.sh
vim yarn-env.sh
vim mapred-env.sh
and add to each:
export JAVA_HOME=/usr/java/jdk1.8.0_111
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
Save and exit each file.
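
For reference, the same edits can be scripted; this is only a sketch, assuming the stock Hadoop 2.7.3 layout under ${HADOOP_HOME}/etc/hadoop:

cd ${HADOOP_HOME}/etc/hadoop
# comment out the preferIPv4Stack line in hadoop-env.sh
sed -i 's%^export HADOOP_OPTS="\$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"%#&%' hadoop-env.sh
# append JAVA_HOME and HADOOP_OPTS to all three env files
for f in hadoop-env.sh yarn-env.sh mapred-env.sh
do
  echo 'export JAVA_HOME=/usr/java/jdk1.8.0_111' >> "$f"
  echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"' >> "$f"
done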

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/cdh/hadoop/hadoop-2.7.3/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>cdh05:2181,cdh06:2181,cdh07:2181</value>
  </property>
</configuration>
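
A quick sanity check once this file is in place under ${HADOOP_HOME}/etc/hadoop (hdfs getconf only reads the client configuration, so it works before the cluster is started):

hdfs getconf -confKey fs.defaultFS           # expect hdfs://ns1
hdfs getconf -confKey ha.zookeeper.quorum    # expect cdh05:2181,cdh06:2181,cdh07:2181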

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>cdh05:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>cdh05:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>cdh06:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>cdh06:50070</value>
  </property>
</configuration>
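
The HA-specific properties beyond the NameNode addresses are not shown above. As a sketch only, a quorum-journal (QJM) setup typically also needs the block below inside the same <configuration> element; the JournalNode hosts, the edits directory, and the fencing key path here are assumptions for this three-node layout, not values taken from the original configuration.

  <!-- sketch: typical QJM HA properties; hosts and paths below are assumptions -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://cdh05:8485;cdh06:8485;cdh07:8485/ns1</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/cdh/hadoop/hadoop-2.7.3/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/cdh/.ssh/id_rsa</value>
  </property>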