Compiling and Installing Hadoop 2.6.0 (deprecated: this document is no longer in use)

Compiling hadoop
1. Install the JDK
http://blog.csdn.net/u013619834/article/details/38894649

2. Install maven
wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
tar zxvf apache-maven-3.2.5-bin.tar.gz
mv apache-maven-3.2.5 /usr/local
Add the environment variables:
echo "export MAVEN_HOME=/usr/local/apache-maven-3.2.5" >> /etc/profile.d/app.sh
echo "export PATH=\$MAVEN_HOME/bin:\$PATH" >> /etc/profile.d/app.sh
source /etc/profile
mvn --version

3. Install protobuf
yum -y install gcc gcc-c++ make
Download protobuf-2.5.0.tar.gz from:
https://code.google.com/p/protobuf/downloads/list

tar zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make
make install
echo "export PROTOC_HOME=/usr/local/protobuf" >> /etc/profile.d/app.sh
echo "export PATH=\$PROTOC_HOME/bin:\$PATH" >> /etc/profile.d/app.sh
source /etc/profile
protoc --version
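
Hadoop 2.6.0's build requires protoc 2.5.0 exactly, so it is worth guarding against a version mismatch before starting the long maven build. A minimal sketch using only standard shell:

# Warn early if protoc is not the 2.5.0 release Hadoop 2.6.0 expects
v=$(protoc --version | awk '{print $2}')
if [ "$v" != "2.5.0" ]; then
    echo "protoc $v found, but 2.5.0 is required" >&2
fi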

4. Install the other dependencies
yum -y install cmake openssl-devel ncurses-devel autoconf automake libtool
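
Before compiling, a quick check (plain shell, nothing hadoop-specific) confirms that every tool the native build needs is on PATH:

# Report any build tool that is still missing
for c in javac mvn protoc cmake gcc g++ make autoconf automake libtool; do
    command -v "$c" >/dev/null 2>&1 || echo "missing: $c"
done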

Compile hadoop
tar zxvf hadoop-2.6.0-src.tar.gz
cd hadoop-2.6.0-src

Maven's servers overseas may be unreachable, so first configure a domestic mirror for Maven. Add the following inside <mirrors></mirrors>, leaving the existing content untouched:
vim /usr/local/apache-maven-3.2.5/conf/settings.xml
    <mirror>
      <id>nexus-osc</id>
      <mirrorOf>*</mirrorOf>
      <name>Nexusosc</name>
      <url>http://maven.oschina.net/content/groups/public/</url>
    </mirror>

Build with maven. -Pdist,native builds the distribution together with the native libraries, -Dtar packages the result as a tarball, and -DskipTests skips the lengthy test suite:
cd /usr/local/src/hadoop-2.6.0-src
mvn package -DskipTests -Pdist,native -Dtar

Check the build output:
ls hadoop-dist/target
cp hadoop-dist/target/hadoop-2.6.0.tar.gz  /usr/local/src
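
As a sanity check on the native build, hadoop ships a checknative command; running it from the freshly built tree (a sketch, assuming JAVA_HOME is set) should report the native hadoop library as loaded:

# Verify the compiled native libraries are picked up
hadoop-dist/target/hadoop-2.6.0/bin/hadoop checknative -a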



The following is the hadoop installation process


1. Install the JDK
http://blog.csdn.net/u013619834/article/details/38894649

2. Install zookeeper
http://blog.csdn.net/u013619834/article/details/41316957


3. Create the hadoop user (all nodes)
useradd hadoop
passwd hadoop


4. Set the hostname (all nodes)
vim /etc/sysconfig/network
Set HOSTNAME to one of
master/slave1/slave2/slave3

Apply it to the running system as well, using the matching name on each node:
hostname master    # or slave1/slave2/slave3 on the other nodes


5. Add hosts entries (all nodes)
vim /etc/hosts
10.200.3.151 slave1
10.200.3.152 slave2
10.200.3.153 slave3
10.200.3.154 master
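
A quick connectivity check confirms that every hostname resolves and answers before going further:

# One ping per node; any failure prints the offending host
for h in master slave1 slave2 slave3; do
    ping -c 1 "$h" >/dev/null || echo "cannot reach $h"
done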


6. Set up passwordless SSH for the hadoop user (master node)
su - hadoop
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@master
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave2
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave3
exit
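
Before exiting the hadoop account (or after switching back with su - hadoop), passwordless login can be verified with a loop like this; BatchMode makes ssh fail instead of prompting when a key has not been accepted:

# Each node should print its hostname without asking for a password
for h in master slave1 slave2 slave3; do
    ssh -o BatchMode=yes "hadoop@$h" hostname || echo "passwordless ssh to $h failed"
done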

7. Copy the hadoop distribution to /opt (master node)
tar zxvf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 /opt/hadoop
chown -R hadoop.hadoop /opt/hadoop


8. Create the hadoop data directories (all nodes)
mkdir -p /data/hadoop/dfs/name
mkdir -p /data/hadoop/dfs/data
mkdir -p /data/hadoop/temp
chown -R hadoop.hadoop /data/hadoop
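
Rather than logging in to each node by hand, the same directories can be created remotely from the master. This sketch assumes root can ssh to the slaves (adjust the user if not):

# Create and hand over the data directories on every slave
for h in slave1 slave2 slave3; do
    ssh "root@$h" 'mkdir -p /data/hadoop/dfs/name /data/hadoop/dfs/data /data/hadoop/temp && chown -R hadoop.hadoop /data/hadoop'
done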


9. Edit the configuration files
cd /opt/hadoop/etc/hadoop
If JAVA_HOME is already set in profile, the two files below need no changes:
vim hadoop-env.sh
vim yarn-env.sh

vim slaves
Add:
slave1
slave2
slave3


vim core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
        <description>NameNode URI</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/temp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>10.200.3.151:2181,10.200.3.152:2181,10.200.3.153:2181</value>
    </property>
</configuration>


vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>


cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>


vim yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
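
Hadoop gives unhelpful errors on malformed XML, so it can be worth validating the four edited files first. This sketch uses xmllint from the libxml2 tools (install libxml2 via yum if it is absent):

# Syntax-check each config file; silence from xmllint means the XML parses cleanly
cd /opt/hadoop/etc/hadoop
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    xmllint --noout "$f" && echo "$f OK"
done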

10. After configuring, copy the hadoop directory to the other 3 nodes
chown -R hadoop.hadoop /opt/hadoop    (all nodes)
su - hadoop
scp -r /opt/hadoop hadoop@slave1:/opt
scp -r /opt/hadoop hadoop@slave2:/opt
scp -r /opt/hadoop hadoop@slave3:/opt
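
Equivalently, as a loop that scales if more slaves are added later:

# Push the configured tree to every slave in turn
for h in slave1 slave2 slave3; do
    scp -r /opt/hadoop "hadoop@$h:/opt" || echo "copy to $h failed"
done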

11. Add the environment variables (all nodes)
echo "export HADOOP_HOME=/opt/hadoop" >>/etc/profile.d/app.sh
echo "export PATH=\$PATH:\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin" >>/etc/profile.d/app.sh
source /etc/profile

12. Initialize and start hadoop (master)
su - hadoop

Check the hadoop version
hadoop version

Format the namenode (first run only; reformatting wipes existing HDFS metadata)
hdfs namenode -format

Start hdfs
start-dfs.sh

Check the java processes
jps
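
On the master, jps should typically show NameNode and SecondaryNameNode; on each slave, DataNode. For a further check, the following sketch exercises HDFS end to end (the /smoketest path is just an illustrative name):

hdfs dfsadmin -report            # the three datanodes should be listed as live
hdfs dfs -mkdir -p /smoketest    # a simple write...
hdfs dfs -ls /                   # ...and a read back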


