Hadoop 2 Installation

Pre-installation preparation for Hadoop (perform on all nodes):
1. OS: CentOS 6.4, 64-bit
2. Disable the firewall and SELinux
 service iptables status
 service iptables stop
 chkconfig iptables off
 vi /etc/sysconfig/selinux
 Set SELINUX=disabled
 (leave SELINUXTYPE at its default value, targeted; "SELINUXTYPE=disabled" is not a valid setting)
 Apply immediately without rebooting: setenforce 0
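 A quick check that both changes stuck (getenforce and sestatus ship with CentOS 6):
 getenforce                 # prints Permissive now, Disabled after a reboot
 sestatus                   # full SELinux status report
 chkconfig --list iptables  # every runlevel should show "off"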
3. Configure a static IP address
 vi /etc/sysconfig/network-scripts/ifcfg-eth0
 DEVICE=eth0
 BOOTPROTO=none
 HWADDR=00:0c:29:7a:50:d6
 ONBOOT=yes
 NETMASK=255.255.255.0
 IPADDR=192.168.18.111
 TYPE=Ethernet
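 Restart networking so the static address takes effect, and confirm it (HWADDR must match the NIC's real MAC address):
 service network restart
 ifconfig eth0   # the inet addr field should show 192.168.18.111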
4. Change the hostname
 hostname h1
 vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=h1
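The hostname command above renames the node only for the current session; the edit to /etc/sysconfig/network makes it permanent across reboots. To confirm:
 hostname   # should print h1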
5. Bind IP addresses to hostnames
 vi /etc/hosts
 192.168.18.111 h1
 192.168.18.112  h2
 192.168.18.113  h3
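Once /etc/hosts is identical on all three machines, name resolution can be spot-checked from any node:
 ping -c 1 h1
 ping -c 1 h2
 ping -c 1 h3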
6. Replace the JDK
  Check the current version: java -version
 List installed Java packages: rpm -qa | grep java
 Remove the old package, ignoring dependencies: rpm -e --nodeps java-1.4.2-gcj-compat-devel-1.4.2.0-40jpp.115
 Install Java: tar -zxvf jdk-8u11-linux-x64.tar.gz -C /usr/
 (a chmod u+x on the tarball is unnecessary; tar does not need the execute bit to extract it)
 
 Set the environment variables: vi /etc/profile
 export JAVA_HOME=/usr/jdk1.8.0_11
 export JAVA_BIN=/usr/jdk1.8.0_11/bin
 export PATH=$PATH:$JAVA_HOME/bin
 export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 
 export JAVA_HOME JAVA_BIN PATH CLASSPATH
 Apply the variables: reboot, or run source /etc/profile
 Check the version: java -version
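 To be sure the shell resolves the new JDK rather than a leftover one (paths as set above):
 echo $JAVA_HOME   # should print /usr/jdk1.8.0_11
 which java        # should resolve under /usr/jdk1.8.0_11/bin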
7. Create the hadoop user:
 useradd hadoop
 passwd hadoop
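 Confirm the account exists before continuing; everything from here on runs as this user:
 id hadoop
 su - hadoop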

Installing Hadoop: install on one node, then copy the tree to the others
8. Set up passwordless SSH with key exchange
[hadoop@h1 ~]$ ssh-keygen -t rsa
[hadoop@h2 ~]$ ssh-keygen -t rsa
[hadoop@h3 ~]$ ssh-keygen -t rsa
[hadoop@h1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h1
[hadoop@h1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h2
[hadoop@h1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h3
[hadoop@h2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h1
[hadoop@h2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h2
[hadoop@h2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h3
[hadoop@h3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h1
[hadoop@h3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h2
[hadoop@h3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h3
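Every node should now reach every other node without a password prompt; spot-check from h1:
[hadoop@h1 ~]$ ssh h2 hostname
[hadoop@h1 ~]$ ssh h3 hostname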
9. Install Hadoop
[hadoop@h1 ~]$ tar -zxvf hadoop-2.6.0-cdh5.5.2.tar.gz
[hadoop@h1 ~]$ vi .bash_profile
export JAVA_HOME=/usr/jdk1.8.0_11
export JAVA_BIN=/usr/jdk1.8.0_11/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.5.2
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH
[hadoop@h1 ~]$ source  .bash_profile
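With the profile sourced, the hadoop wrapper should now be on the PATH:
[hadoop@h1 ~]$ hadoop version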
Edit the Hadoop configuration files; five files usually need changes:
 hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml
Edit core-site.xml
[hadoop@h1 ~]$ cd hadoop-2.6.0-cdh5.5.2/etc/hadoop
[hadoop@h1 hadoop]$ vi core-site.xml
<configuration>
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://h1:9000</value>
   <description>NameNode URI.</description>
 </property>
 <property>
   <name>io.file.buffer.size</name>
   <value>131072</value>
   <description>Size of read/write buffer used in SequenceFiles.</description>
 </property>
</configuration>
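Malformed XML is a common cause of daemons failing to start; if xmllint (from the libxml2 package) is available, each of the *-site.xml files can be checked in place:
[hadoop@h1 hadoop]$ xmllint --noout core-site.xml && echo OK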

Edit hdfs-site.xml
[hadoop@h1  hadoop-2.6.0-cdh5.5.2]$ mkdir -p dfs/name
[hadoop@h1  hadoop-2.6.0-cdh5.5.2]$ mkdir -p dfs/data
[hadoop@h1  hadoop-2.6.0-cdh5.5.2]$ mkdir -p dfs/namesecondary
[hadoop@h1 hadoop]$ vi hdfs-site.xml
(the properties below go inside the <configuration> element)
 <property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>h1:50090</value>
   <description>The secondary namenode http server address and port.</description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/name</value>
   <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/data</value>
   <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
 </property>
 <property>
   <name>dfs.namenode.checkpoint.dir</name>
   <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/namesecondary</value>
   <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
 </property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

10. Edit mapred-site.xml
[hadoop@h1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@h1 hadoop]$ vi mapred-site.xml
<property>
   <name>mapreduce.framework.name</name>
<value>yarn</value>
<description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
  </property>
  <property>
   <name>mapreduce.jobhistory.address</name>
    <value>h1:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>
  <property>
   <name>mapreduce.jobhistory.webapp.address</name>
    <value>h1:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>
*****
The mapreduce.framework.name property selects the runtime framework used to execute MapReduce jobs; the default is local, and it must be changed to yarn here.
*****
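The two jobhistory addresses above take effect only when the JobHistory server is actually running; once the cluster is up (step 14), it can be started with the script bundled under sbin:
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ sbin/mr-jobhistory-daemon.sh start historyserver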
11. Edit yarn-site.xml
[hadoop@h1 hadoop]$ vi yarn-site.xml
<property>
   <name>yarn.resourcemanager.hostname</name>
  <value>h1</value>
  <description>The hostname of the RM.</description>
</property>
 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
   <description>Shuffle service that needs to be set for MapReduce applications.</description>
 </property>
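After the daemons start (step 14), the NodeManagers on h2 and h3 should register with the ResourceManager; a quick check:
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/yarn node -list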
12. Edit hadoop-env.sh
[hadoop@h1 hadoop]$ vi hadoop-env.sh
export JAVA_HOME=/usr/jdk1.8.0_11
13. Edit the slaves file (one worker hostname per line)
[hadoop@h1 hadoop]$ vi slaves
h2
h3
14. Copy the installation directory to the other nodes, then format the NameNode
[hadoop@h1 ~]$ scp -r ./hadoop-2.6.0-cdh5.5.2/ hadoop@h2:/home/hadoop/
[hadoop@h1 ~]$ scp -r ./hadoop-2.6.0-cdh5.5.2/ hadoop@h3:/home/hadoop/
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hadoop namenode -format
========================================================
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (this warning can be ignored)
To silence the warning:
cd /home/hadoop/hadoop-2.6.0-cdh5.5.2/etc/hadoop
   vim log4j.properties
  add: log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
Alternatively, replace the bundled native libraries with 64-bit builds:
[hadoop@h1 ~]$ tar -xvf hadoop-native-64-2.6.0.tar -C hadoop-2.6.0-cdh5.5.2/lib/native/
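Whether the native library now loads can be checked directly; checknative is built into the hadoop command in 2.x:
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hadoop checknative -a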
If the warning stems from a glibc version mismatch (see below), glibc must be rebuilt, which requires gcc; set up a local yum repository first:
[root@h1 ~]# vi /etc/yum.conf
[Server]
name=rhel_yum
baseurl=file:///mnt/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[root@h1 /]# yum -y install gcc*
The root cause: the system's glibc version does not match the version that libhadoop.so was built against.
[hadoop@h1 ~]$ ls -l /lib/libc.so.6
lrwxrwxrwx 1 root root 11 2015-07-04 /lib/libc.so.6 -> libc-2.5.so
[hadoop@h1 ~]$ tar -jxvf glibc-2.9.tar.bz2
[hadoop@h1 ~]$ cd glibc-2.9
Note: extract glibc-linuxthreads inside the glibc source directory:
[hadoop@h1 glibc-2.9]$ cp /ff/hadoopCDH5/glibc-linuxthreads-2.5.tar.bz2 .
[hadoop@h1 glibc-2.9]$ tar -jxvf glibc-linuxthreads-2.5.tar.bz2
[hadoop@h1 ~]$ export CFLAGS="-g -O2"
(the optimization flag is required; glibc does not build without -O)
[hadoop@h1 ~]$ ./glibc-2.9/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
[hadoop@h1 ~]$ make
(installing into /usr overwrites the system glibc and requires root)
[root@h1 ~]# make install
[hadoop@h1 ~]$ ls -l /lib/libc.so.6
Verify the installation:
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hdfs namenode -format
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ sbin/start-all.sh
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ jps
7054 SecondaryNameNode
7844 Jps
7318 NameNode
7598 ResourceManager
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hadoop fs -ls /
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hadoop fs -mkdir /aaa
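As an end-to-end smoke test, the bundled examples jar can run a small MapReduce job (the jar path below assumes the standard CDH 5.5.2 layout under share/hadoop/mapreduce; adjust the name if it differs):
[hadoop@h1 hadoop-2.6.0-cdh5.5.2]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.5.2.jar pi 2 5
Running jps on h2 or h3 should show the DataNode and NodeManager processes that are absent from h1's list above.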
After the installation completes, finish by setting the environment variables.
As root: vi /etc/profile
 export JAVA_HOME=/usr/jdk1.8.0_11
 export JAVA_BIN=/usr/jdk1.8.0_11/bin
 export PATH=$PATH:$JAVA_HOME/bin
 export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 export JAVA_HOME JAVA_BIN PATH CLASSPATH
 export HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.5.2
 export HADOOP_BIN=$HADOOP_HOME/bin
 export PATH=$PATH:$HADOOP_HOME/bin
 (add these only if Hive is installed:)
 export HIVE_HOME=/home/hadoop/hive-0.7.1-cdh3u5
 export PATH=$PATH:$HIVE_HOME/bin
As the hadoop user: vi .bash_profile
 PATH=$PATH:$HOME/bin:$HADOOP_HOME/bin
 (only if Sqoop is installed:)
 export SQOOP_HOME=/home/hadoop/sqoop-1.3.0-cdh3u5
Check the processes in a browser: http://192.168.18.111:50070/
If the page returns a 404, the configuration may not have taken effect (for example, after switching versions); re-apply it:
 source /etc/profile
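The ResourceManager also serves a web UI, by default on port 8088:
 http://192.168.18.111:8088/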