Hadoop Cluster Installation

1      Cluster Planning

Hostname          IP              Role

Hadoop-master1    192.168.2.100   NameNode (active)

Hadoop-master2    192.168.2.101   NameNode (standby)

Hadoop-slave1     192.168.2.102   DataNode / ZooKeeper

Hadoop-slave2     192.168.2.103   DataNode / ZooKeeper

Hadoop-slave3     192.168.2.104   DataNode / ZooKeeper

 

ZooKeeper is assumed to be installed already; see document 3_zookeeper集群安装 (ZooKeeper cluster installation).

2      Hadoop Installation and Deployment

2.1   Software Package

hadoop-2.7.3.tar.gz

2.2   Installation Steps

2.2.1  Upload the Package

Create the directory /home/hadoop/hadoop, upload hadoop-2.7.3.tar.gz into that directory, and extract it there.
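
A minimal sketch of this step as shell commands (assuming the archive has already been uploaded to /home/hadoop/hadoop):

mkdir -p /home/hadoop/hadoop
cd /home/hadoop/hadoop
tar -zxvf hadoop-2.7.3.tar.gz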

2.2.2  Configure Environment Variables
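
The exact variables are not spelled out in this step; a minimal sketch of what is typically appended to ~/.bash_profile, assuming Hadoop is extracted to /home/hadoop/hadoop/hadoop-2.7.3 (as in the later steps):

export HADOOP_HOME=/home/hadoop/hadoop/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Then reload the profile: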

source ~/.bash_profile

2.2.3  Create the Required Directories

mkdir /home/hadoop/hadoop/data

cd /home/hadoop/hadoop/data

mkdir tmp

mkdir namenode

mkdir datanode

mkdir journal
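
Equivalently, the directories can be created in a single line with bash brace expansion:

mkdir -p /home/hadoop/hadoop/data/{tmp,namenode,datanode,journal}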

 

2.2.4  Edit hadoop-env.sh

cd /home/hadoop/hadoop/hadoop-2.7.3/etc/hadoop

vim hadoop-env.sh

Set: export JAVA_HOME=/usr/local/jdk

2.2.5  Edit core-site.xml

<configuration>

         <property>

                   <name>fs.defaultFS</name>

                   <value>hdfs://mycluster</value>

         </property>

         <property>

                   <name>hadoop.tmp.dir</name>

                   <value>/home/hadoop/hadoop/data/tmp</value>

         </property>

         <property>

                   <name>fs.trash.interval</name>

                   <value>1440</value>

         </property>

         <property>

                   <name>ha.zookeeper.quorum</name>

                   <value>hadoop-slave1:2181,hadoop-slave2:2181,hadoop-slave3:2181</value>

         </property>

</configuration>

2.2.6  Edit hdfs-site.xml

<configuration>

         <property>

                   <name>dfs.namenode.name.dir</name>

                   <value>/home/hadoop/hadoop/data/namenode</value>

         </property>

 

         <property>

                   <name>dfs.datanode.data.dir</name>

                   <value>/home/hadoop/hadoop/data/datanode</value>

         </property>

         <property>

                   <name>dfs.replication</name>

                   <value>3</value>

         </property>

         <property>

                   <name>dfs.permissions.enabled</name>

                   <value>false</value>

         </property>

         <property>

                   <name>dfs.webhdfs.enabled</name>

                   <value>true</value>

         </property>

         <property>

                   <name>dfs.nameservices</name>

                   <value>mycluster</value>

         </property>

         <property>

                   <name>dfs.ha.namenodes.mycluster</name>

                   <value>nn1,nn2</value>

         </property>

         <property>

                   <name>dfs.namenode.rpc-address.mycluster.nn1</name>

                   <value>hadoop-master1:8020</value>

         </property>

         <property>

                   <name>dfs.namenode.rpc-address.mycluster.nn2</name>

                   <value>hadoop-master2:8020</value>

         </property>

         <property>

                   <name>dfs.namenode.http-address.mycluster.nn1</name>

                   <value>hadoop-master1:50070</value>

         </property>

         <property>

                   <name>dfs.namenode.http-address.mycluster.nn2</name>

                   <value>hadoop-master2:50070</value>

         </property>

         <property>

                   <name>dfs.namenode.shared.edits.dir</name>

                   <value>qjournal://hadoop-slave1:8485;hadoop-slave2:8485;hadoop-slave3:8485/mycluster</value>

         </property>

         <property>

                   <name>dfs.journalnode.edits.dir</name>

                   <value>/home/hadoop/hadoop/data/journal</value>

         </property>

         <property>

                   <name>dfs.client.failover.proxy.provider.mycluster</name>

                   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

         </property>

         <property>

                   <name>dfs.ha.fencing.methods</name>

                   <value>sshfence</value>

         </property>

         <property>

                   <name>dfs.ha.fencing.ssh.private-key-files</name>

                   <value>/home/hadoop/.ssh/id_rsa</value>

         </property>

         <property>

                   <name>dfs.ha.automatic-failover.enabled</name>

                   <value>true</value>

         </property>

</configuration>

2.2.7  Edit mapred-site.xml

cp mapred-site.xml.template mapred-site.xml

<configuration>

         <property>

                   <name>mapreduce.framework.name</name>

                   <value>yarn</value>

         </property>

 

         <property>

                   <name>mapreduce.jobhistory.address</name>

                   <value>hadoop-master1:10020</value>

         </property>

         <property>

                   <name>mapreduce.job.ubertask.enable</name>

                   <value>true</value>

         </property>

         <property>

                   <name>mapreduce.job.ubertask.maxmaps</name>

                   <value>9</value>

         </property>

         <property>

                   <name>mapreduce.job.ubertask.maxreduces</name>

                   <value>1</value>

         </property>

</configuration>

2.2.8  Edit yarn-site.xml

<configuration>

         <property>

                   <name>yarn.nodemanager.aux-services</name>

                   <value>mapreduce_shuffle</value>

         </property>

         <property>

                   <name>yarn.web-proxy.address</name>

                   <value>hadoop-master2:8888</value>

         </property>

         <property>

                   <name>yarn.log-aggregation-enable</name>

                   <value>true</value>

         </property>

         <property>

                   <name>yarn.nodemanager.remote-app-log-dir</name>

                   <value>/logs</value>

         </property>

         <property>

                   <name>yarn.log-aggregation.retain-seconds</name>

                   <value>604800</value>

         </property>

         <property>

                   <name>yarn.nodemanager.resource.memory-mb</name>

                   <value>2048</value>

         </property>

         <property>

                   <name>yarn.nodemanager.resource.cpu-vcores</name>

                   <value>2</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.ha.enabled</name>

                   <value>true</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>

                   <value>true</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.cluster-id</name>

                   <value>yarncluster</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.ha.rm-ids</name>

                   <value>rm1,rm2</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.hostname.rm1</name>

                   <value>hadoop-master1</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.hostname.rm2</name>

                   <value>hadoop-master2</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.webapp.address.rm1</name>

                   <value>hadoop-master1:8088</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.webapp.address.rm2</name>

                   <value>hadoop-master2:8088</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.zk-address</name>

                   <value>hadoop-slave1:2181,hadoop-slave2:2181,hadoop-slave3:2181</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.zk-state-store.parent-path</name>

                   <value>/rmstore</value>

         </property>

         <property>

                   <name>yarn.resourcemanager.recovery.enabled</name>

                   <value>true</value>

         </property>

 

         <property>

                   <name>yarn.resourcemanager.store.class</name>

                   <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>

         </property>

         <property>

                   <name>yarn.nodemanager.recovery.enabled</name>

                   <value>true</value>

         </property>

         <property>

                   <name>yarn.nodemanager.address</name>

                   <value>0.0.0.0:45454</value>

         </property>

</configuration>

2.2.9  Edit slaves

vim slaves

hadoop-slave1

hadoop-slave2

hadoop-slave3

2.2.10 Sync to All Hosts

Copy the /home/hadoop/hadoop directory to /home/hadoop on each of the other hosts.

Update ~/.bash_profile on each host in the same way.
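
A minimal sketch of the sync, run from hadoop-master1 (this assumes passwordless SSH between the hosts is already in place, as the sshfence configuration above also requires):

for host in hadoop-master2 hadoop-slave1 hadoop-slave2 hadoop-slave3; do
    scp -r /home/hadoop/hadoop ${host}:/home/hadoop/
    scp ~/.bash_profile ${host}:~/
done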

3      Cluster Initialization

- Start the ZooKeeper cluster (run on slave1, slave2, and slave3):

zkServer.sh start

- Format the ZKFC znode in ZooKeeper (run on master1):

hdfs zkfc -formatZK

- Start the JournalNodes (run on slave1, slave2, and slave3):

hadoop-daemon.sh start journalnode

- Format HDFS (run on master1):

hdfs namenode -format

- After formatting, copy the metadata directory from master1's Hadoop working directory to master2 (an alternative bootstrap command is sketched after this list):

scp -r /home/hadoop/hadoop/data/namenode/* hadoop-master2:/home/hadoop/hadoop/data/namenode/

 

- After initialization is complete, the JournalNodes can be stopped (run on slave1, slave2, and slave3):

hadoop-daemon.sh stop journalnode
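
As an alternative to manually copying the NameNode metadata with scp above, the standby NameNode can be bootstrapped directly; this assumes the freshly formatted NameNode has already been started on master1 (hadoop-daemon.sh start namenode) so that master2 can pull the metadata from it. Run on master2:

hdfs namenode -bootstrapStandby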

4      Cluster Startup

Start the ZooKeeper cluster (run on slave1, slave2, and slave3):

zkServer.sh start
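
Optionally, verify that each ZooKeeper node has come up and taken a leader or follower role:

zkServer.sh status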

Start HDFS (run on master1):

start-dfs.sh

This command starts a NameNode and ZKFC on master1 and master2, and a DataNode and JournalNode on each of slave1, slave2, and slave3.
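
A quick, optional way to confirm which daemons are running on a node is the JDK's jps command; on master1, for example, NameNode and DFSZKFailoverController should appear in its output:

jps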

// Start YARN (run on master2)

$ start-yarn.sh

Note: this command starts the ResourceManager on master2 and a NodeManager on each of slave1, slave2, and slave3.

// Start the second ResourceManager for YARN HA (run on master1, used for failover)

$ yarn-daemon.sh start resourcemanager

// Start the YARN web proxy server (run on master2)

$ yarn-daemon.sh start proxyserver

Note: the proxy server acts as a firewall-like layer and improves the security of access to the cluster.

// Start the MapReduce job history server (run on master1)

$ mr-jobhistory-daemon.sh start historyserver

Note: yarn-daemon.sh start historyserver has been deprecated. The CDH distribution appears to have an issue where the mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address parameters configured in mapred-site.xml do not seem to take effect; the actual ports are 10200 and 8188, and the history server can be started on any node without extra configuration.
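
Once everything is running, the HA state of both NameNodes and ResourceManagers can be checked with the standard admin commands; the nn1/nn2 and rm1/rm2 IDs below come from the hdfs-site.xml and yarn-site.xml configured above:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

The NameNode web UIs should be reachable at http://hadoop-master1:50070 and http://hadoop-master2:50070, and the ResourceManager web UIs at http://hadoop-master1:8088 and http://hadoop-master2:8088, matching the addresses configured above.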
