Hadoop Installation
Notes
Three servers: hadoop1, hadoop2, and hadoop3, with hostname mappings configured on each.
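The mapping means each node can resolve the others by hostname; entries like the following in /etc/hosts on every server are enough (the IP addresses below are placeholders, substitute your own):

```
192.168.1.101 hadoop1
192.168.1.102 hadoop2
192.168.1.103 hadoop3
```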
Install a JDK (keep the version consistent across the cluster).
Install Hadoop on hadoop2
-
Extract the archive:
tar -zxvf hadoop-2.7.7.tar.gz
mv hadoop-2.7.7 hadoop
-
Configure environment variables
vim /etc/profile
Add:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$SCALA_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
Then reload the profile:
source /etc/profile
Verify the installation:
hadoop version
It should print: Hadoop 2.7.7
-
vim etc/hadoop/hadoop-env.sh
Set JAVA_HOME to this server's JDK path:
export JAVA_HOME=/usr/local/JDK1.8
-
vim etc/hadoop/slaves
Add (one hostname per line):
hadoop1
hadoop2
hadoop3
-
vim etc/hadoop/core-site.xml
Add:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
  <!-- "ns" is an arbitrary nameservice name, but it must match dfs.nameservices in hdfs-site.xml -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
  <!-- where NameNode/DataNode data is kept -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  <!-- the ZooKeeper ensemble -->
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>1000</value>
  </property>
  <!-- timeout for Hadoop's ZooKeeper sessions -->
</configuration>
-
vim etc/hadoop/hdfs-site.xml
Add:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn3</value>
  </property>
  <!-- RPC and HTTP addresses of the NameNodes on hadoop1 and hadoop3 -->
  <property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>hadoop1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>hadoop1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.nn3</name>
    <value>hadoop3:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.nn3</name>
    <value>hadoop3:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns</value>
  </property>
  <!-- the journal ID at the end conventionally matches the nameservice name -->
  <!-- NameNode and DataNode working/data directories -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
  <!-- JournalNode storage directory -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop/journal</value>
  </property>
  <!-- enable automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- failover proxy provider used by clients -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- fence the old active NameNode over ssh during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- location of the ssh private key -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- ssh connect timeout -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
-
vim etc/hadoop/mapred-site.xml
Add (in Hadoop 2.7.7 this file does not exist by default; create it with cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
-
vim etc/hadoop/yarn-site.xml
Add:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm3</name>
    <value>hadoop3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
</configuration>
-
Copy the installation to the other servers (also add the same environment variables to /etc/profile on hadoop1 and hadoop3):
scp -r /usr/local/hadoop hadoop1:/usr/local
scp -r /usr/local/hadoop hadoop3:/usr/local
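The two copies can also be written as a loop; a minimal sketch that only prints the commands (remove the echo to actually copy):

```shell
# Dry run: prints one scp command per target node; drop 'echo' to execute.
for host in hadoop1 hadoop3; do
  echo scp -r /usr/local/hadoop "$host":/usr/local
done
```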
-
Start a JournalNode on each of the three servers:
sh sbin/hadoop-daemon.sh start journalnode
Check that it is running:
jps
The output should include lines like (QuorumPeerMain is the ZooKeeper process):
13908 JournalNode
13723 QuorumPeerMain
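If passwordless ssh is set up between the nodes (it is needed for fencing anyway), the three JournalNodes can be started from one machine; a sketch that only prints the commands (remove the echo to execute them remotely):

```shell
# Dry run: prints one ssh command per node; drop 'echo' to execute.
for host in hadoop1 hadoop2 hadoop3; do
  echo ssh "$host" /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
done
```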
-
On one of the configured NameNode servers (hadoop1 or hadoop3), run:
hdfs namenode -format
-
Then copy the formatted metadata to the other NameNode (run from /usr/local/hadoop):
scp -r hdfs hadoop3:/usr/local/hadoop
-
Format the ZKFC state in ZooKeeper
On one of the NameNodes, run:
hdfs zkfc -formatZK
After HDFS is started (next step), jps on the NameNodes will additionally show:
DFSZKFailoverController
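To confirm the format took effect, you can look for the HA znode in ZooKeeper; a sketch, assuming zkCli.sh is on the PATH of a ZooKeeper node (`ns` is the nameservice from the configs above, and should appear under the znode):

```shell
zkCli.sh ls /hadoop-ha
```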
-
Start HDFS
Run:
sh sbin/start-dfs.sh
-
Start YARN
Run:
sh sbin/start-yarn.sh
jps should additionally show:
ResourceManager
start-yarn.sh only starts the ResourceManager on the local node; if one did not start, run on that server: sh sbin/yarn-daemon.sh start resourcemanager
-
Verify
Open in a browser: http://hadoop1:50070 and http://hadoop1:8088
Test an upload:
hadoop fs -put /etc/profile /
List it back:
hadoop fs -ls /
Manually stop the active NameNode and check in the browser that the other NameNode becomes active automatically.
Then start the stopped NameNode again:
sh sbin/hadoop-daemon.sh start namenode
and check this node's state (it should now be standby).
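The active/standby state can also be checked from the command line instead of the browser; nn1 and nn3 are the NameNode IDs defined in hdfs-site.xml above:

```shell
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn3
```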