CDH5 + ZooKeeper + Hive Installation

Overview of the installation steps:

  • JDK
  • Passwordless SSH login
  • NTPDATE time synchronization
  • Network configuration
  • CDH5 installation
  • ZooKeeper installation
  • Hive installation

Host IP          Hostname                    Role
172.21.25.100    namenode.yxnrtf.openpf      NameNode
172.21.25.104    datanode01.yxnrtf.openpf    DataNode
172.21.25.105    datanode02.yxnrtf.openpf    DataNode

1. JDK Installation

tar -zxvf jdk-7u80-linux-x64.gz -C /usr/local/java
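
Note that tar -C does not create the target directory, so create it first if it does not already exist:

mkdir -p /usr/local/java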

Configure the environment variables:

#java
export JAVA_HOME=/usr/local/java/jdk1.7.0_80
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Verify with java -version.

2. Passwordless SSH Login

Run the following command on every node; it creates a .ssh directory under /root/:

ssh-keygen -t rsa -P ''

On the NameNode, append id_rsa.pub to the authorized keys file:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Append each DataNode's id_rsa.pub to the NameNode's authorized_keys in turn:

scp ~/.ssh/id_rsa.pub root@172.21.25.100:~/    # run on each DataNode
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys     # run on the NameNode after each copy

Distribute the NameNode's authorized_keys to the DataNodes:

scp ~/.ssh/authorized_keys root@172.21.25.104:~/.ssh/
scp ~/.ssh/authorized_keys root@172.21.25.105:~/.ssh/
chmod 600 ~/.ssh/authorized_keys # fix the file permissions on each DataNode

On every node, edit /etc/ssh/sshd_config:

RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # path to the authorized keys file (the one generated above)
service sshd restart # restart the sshd service

Verify with ssh localhost and ssh <DataNode IP>; the nodes should now be able to log in to each other without a password.

3. NTPDate Time Synchronization

Time is kept in sync by a crontab job that runs ntpdate, rather than by the NTP daemon, because ntpd stops synchronizing once the clock offset exceeds a certain threshold. The job runs every 5 minutes.

crontab -e  # edit the cron table
*/5 * * * * /usr/sbin/ntpdate ntp.oss.XX && hwclock --systohc   # ntp.oss.XX is your NTP server
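
Before relying on the cron job, you can test the server manually; ntpdate -q only queries the offset without setting the clock:

/usr/sbin/ntpdate -q ntp.oss.XX   # dry run against your NTP server
crontab -l                        # confirm the job was saved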

4. Network Configuration

Disable the firewall on every machine and add the cluster hosts to /etc/hosts:

service iptables status
service iptables stop
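
service iptables stop only disables the firewall until the next reboot; on a RHEL/CentOS 6 system (which the CDH repository used below assumes) it can also be disabled permanently:

chkconfig iptables off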

Add to /etc/hosts:

172.21.25.100 namenode.yxnrtf.openpf
172.21.25.104 datanode01.yxnrtf.openpf
172.21.25.105 datanode02.yxnrtf.openpf
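
The Hadoop service scripts resolve the local hostname, so each machine's own hostname should match its entry above. On RHEL/CentOS 6 this is typically set in /etc/sysconfig/network; a sketch for the NameNode (adjust the name on each machine):

hostname namenode.yxnrtf.openpf
sed -i 's/^HOSTNAME=.*/HOSTNAME=namenode.yxnrtf.openpf/' /etc/sysconfig/network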

5. Installing CDH5

Run the following on the NameNode:

wget http://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm

Install the local package with GPG signature checking disabled:

yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm

Import the Cloudera repository GPG key:

rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera

On the NameNode, install the namenode, resourcemanager, nodemanager, datanode, mapreduce, historyserver, proxyserver, and hadoop-client packages:

yum install hadoop hadoop-hdfs hadoop-client hadoop-doc hadoop-debuginfo hadoop-hdfs-namenode hadoop-yarn-resourcemanager hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce hadoop-mapreduce-historyserver hadoop-yarn-proxyserver -y

On the DataNodes, install:

yum install hadoop hadoop-hdfs hadoop-client hadoop-doc hadoop-debuginfo hadoop-yarn hadoop-hdfs-datanode hadoop-yarn-nodemanager hadoop-mapreduce -y

Install the SecondaryNameNode. In this setup it is installed on the NameNode host; if you install it on a different server, some of the configuration below must be adjusted accordingly:

yum install hadoop-hdfs-secondarynamenode -y

Add the following to /etc/hadoop/conf/hdfs-site.xml:

<property>
		<name>dfs.namenode.checkpoint.check.period</name>
		<value>60</value>
	</property>
	<property>
		<name>dfs.namenode.checkpoint.txns</name>
		<value>1000000</value>
	</property>
	<property>
		<name>dfs.namenode.checkpoint.dir</name>
		<value>file:///data/cache1/dfs/namesecondary</value>
	</property>
	<property>
		<name>dfs.namenode.checkpoint.edits.dir</name>
		<value>file:///data/cache1/dfs/namesecondary</value>
	</property>
	<property>
		<name>dfs.namenode.num.checkpoints.retained</name>
		<value>2</value>
	</property>
	<!-- Make namenode.yxnrtf.openpf the SecondaryNameNode -->
	<property>
		<name>dfs.secondary.http.address</name>
		<value>namenode.yxnrtf.openpf:50090</value>
	</property>
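
The checkpoint directory referenced above is not created anywhere else in this guide; assuming the same ownership convention as the HDFS directories created below, it can be prepared with:

mkdir -p /data/cache1/dfs/namesecondary
chown -R hdfs:hadoop /data/cache1/dfs/namesecondary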

Create the directories on the NameNode:

mkdir -p /data/cache1/dfs/nn
chown -R hdfs:hadoop /data/cache1/dfs/nn
chmod 700 -R /data/cache1/dfs/nn

Create the directories on the DataNodes:

mkdir -p /data/cache1/dfs/dn
mkdir -p /data/cache1/dfs/mapred/local
chown -R hdfs:hadoop /data/cache1/dfs/dn
chmod 777 -R /data/
usermod -a -G mapred hadoop
chown -R mapred:hadoop /data/cache1/dfs/mapred/local

On every node, add the following to /etc/profile:

export HADOOP_HOME=/usr/lib/hadoop
export HIVE_HOME=/usr/lib/hive
export HBASE_HOME=/usr/lib/hbase
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH

Run source /etc/profile to apply the changes.

On the NameNode, add the following to /etc/hadoop/conf/core-site.xml:

<property>
		<name>fs.defaultFS</name>
		<value>hdfs://namenode.yxnrtf.openpf:9000</value>
	</property>

	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>

	<property>
		<name>hadoop.proxyuser.hadoop.hosts</name>
		<value>namenode.yxnrtf.openpf</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hadoop.groups</name>
		<value>hdfs</value>
	</property>
	<property>
		<name>hadoop.proxyuser.mapred.groups</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.mapred.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.yarn.groups</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.yarn.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.httpfs.hosts</name>
		<value>httpfs-host.foo.com</value>
	</property>
	<property>
		<name>hadoop.proxyuser.httpfs.groups</name>
		<value>*</value>
	</property>
	
	<property>
		<name>hadoop.proxyuser.hive.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hive.groups</name>
		  <value>*</value>
	</property>

Add the following to /etc/hadoop/conf/hdfs-site.xml:

<property>
		<name>dfs.namenode.name.dir</name>
		<value>/data/cache1/dfs/nn/</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/data/cache1/dfs/dn/</value>
	</property>
	<property>
	 	 <name>dfs.hosts</name>
		 <value>/etc/hadoop/conf/slaves</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
	<property>
		<name>dfs.permissions.superusergroup</name>
		<value>hdfs</value>
	</property>

Add the following to /etc/hadoop/conf/mapred-site.xml:

<property>
		<name>mapreduce.jobhistory.address</name>
		<value>namenode.yxnrtf.openpf:10020</value>
	</property>

	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>namenode.yxnrtf.openpf:19888</value>
	</property>

	<property>
		<name>mapreduce.jobhistory.joblist.cache.size</name>
		<value>50000</value>
	</property>

<!-- Job history directories on HDFS; see the creation sketch after the startup commands below -->
	<property>
		<name>mapreduce.jobhistory.done-dir</name>
		<value>/user/hadoop/done</value>
	</property>

	<property>
		<name>mapreduce.jobhistory.intermediate-done-dir</name>
		<value>/user/hadoop/tmp</value>
	</property>

	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>

Add the following to /etc/hadoop/conf/yarn-site.xml:

	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

	<property>
		<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>

	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>

	<property>
		<description>List of directories to store localized files in.</description>
		<name>yarn.nodemanager.local-dirs</name>
		<value>/var/lib/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
	</property>

	<property>
		<description>Where to store container logs.</description>
		<name>yarn.nodemanager.log-dirs</name>
		<value>/var/log/hadoop-yarn/containers</value>
	</property>

	<property>
		<description>Where to aggregate logs to.</description>
		<name>yarn.nodemanager.remote-app-log-dir</name>
		<value>hdfs://namenode.yxnrtf.openpf:9000/var/log/hadoop-yarn/apps</value>
	</property>

	<property>
		<name>yarn.resourcemanager.address</name>
		<value>namenode.yxnrtf.openpf:8032</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>namenode.yxnrtf.openpf:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>namenode.yxnrtf.openpf:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>namenode.yxnrtf.openpf:8031</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address</name>
		<value>namenode.yxnrtf.openpf:8033</value>
	</property>

	<property>
		<description>Classpath for typical applications.</description>
		<name>yarn.application.classpath</name>
		<value>
			$HADOOP_CONF_DIR,
			$HADOOP_COMMON_HOME/*,
			$HADOOP_COMMON_HOME/lib/*,
			$HADOOP_HDFS_HOME/*,
			$HADOOP_HDFS_HOME/lib/*,
			$HADOOP_MAPRED_HOME/*,
			$HADOOP_MAPRED_HOME/lib/*,
			$HADOOP_YARN_HOME/*,
			$HADOOP_YARN_HOME/lib/*
		</value>
	</property>

	<property>
		<name>yarn.web-proxy.address</name>
		<value>namenode.yxnrtf.openpf:54315</value>
	</property>

Edit /etc/hadoop/conf/slaves:

datanode01.yxnrtf.openpf
datanode02.yxnrtf.openpf

In /etc/hadoop/conf/yarn-env.sh, add the following below the commented-out JAVA_HOME line:

export JAVA_HOME=/usr/local/java/jdk1.7.0_80

Copy the /etc/hadoop/conf directory to the DataNodes:

scp -r conf/ root@172.21.25.104:/etc/hadoop/
scp -r conf/ root@172.21.25.105:/etc/hadoop/

Start the services on the NameNode:

hdfs namenode -format
service hadoop-hdfs-namenode init
service hadoop-hdfs-namenode start
service hadoop-yarn-resourcemanager start
service hadoop-yarn-proxyserver start
service hadoop-mapreduce-historyserver start
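
If the hadoop-hdfs-secondarynamenode package from the step above was installed on this host, it can be started the same way:

service hadoop-hdfs-secondarynamenode start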

Start the services on the DataNodes:

service hadoop-hdfs-datanode start
service hadoop-yarn-nodemanager start
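
Once HDFS is up, create the job history directories referenced in mapred-site.xml so the history server can write to them; a minimal sketch, assuming the hdfs superuser and the /user/hadoop paths configured above:

sudo -u hdfs hadoop fs -mkdir -p /user/hadoop/done /user/hadoop/tmp
sudo -u hdfs hadoop fs -chown -R mapred:hadoop /user/hadoop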

Check the web UIs in a browser:

http://172.21.25.100:50070                 # HDFS (NameNode)
http://172.21.25.100:8088                  # ResourceManager (YARN)
http://172.21.25.100:8088/cluster/nodes    # nodes currently online
http://172.21.25.100:8042                  # NodeManager (NameNode host)
http://172.21.25.104:8042                  # NodeManager (DataNode)
http://172.21.25.105:8042                  # NodeManager (DataNode)
http://172.21.25.100:19888                 # JobHistory

6. ZooKeeper Installation

Run the following command on every node:

yum install zookeeper* -y

On the NameNode, edit the configuration file /etc/zookeeper/conf/zoo.cfg and add:

#clean logs
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=namenode.yxnrtf.openpf:2888:3888
server.2=datanode01.yxnrtf.openpf:2888:3888
server.3=datanode02.yxnrtf.openpf:2888:3888
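
All three ZooKeeper servers need the same server.N list, so copy the edited file to the DataNodes before starting them:

scp /etc/zookeeper/conf/zoo.cfg root@172.21.25.104:/etc/zookeeper/conf/
scp /etc/zookeeper/conf/zoo.cfg root@172.21.25.105:/etc/zookeeper/conf/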

Start on the NameNode:

service zookeeper-server init --myid=1
service zookeeper-server start

Start on DataNode 1:

service zookeeper-server init --myid=2
service zookeeper-server start

Start on DataNode 2:

service zookeeper-server init --myid=3
service zookeeper-server start

Note that the --myid value must match the corresponding server.N id in the configuration file.

To verify that the ensemble started correctly, run the following on the NameNode:

zookeeper-client -server namenode.yxnrtf.openpf:2181
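
Each server's status can also be checked with ZooKeeper's four-letter-word commands over the client port, assuming nc is installed:

echo stat | nc namenode.yxnrtf.openpf 2181     # prints Mode: leader or follower
echo ruok | nc datanode01.yxnrtf.openpf 2181   # replies imok if the server is running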

 

Reposted from: https://my.oschina.net/u/1433803/blog/780039
