Hadoop 0.23 single-node installation in a virtual machine

Installation environment:

VMware 7.1, CentOS 5.7, JDK 1.6, Hadoop 0.23

Installation steps:

1. Install the JDK

Install the JDK and configure the environment variables (a sketch of the variable setup follows the version check below).

Verify the installation:

#java -version

java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
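
If the JDK came from a tarball rather than an RPM, the variables are not set automatically; a minimal sketch, assuming the JDK is unpacked at /usr/java/jdk1.6.0_45 (the same path used later in this guide) and that a profile.d snippet is an acceptable place for it:

#vi /etc/profile.d/java.sh

export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH

#source /etc/profile.d/java.sh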


2. Download & install Hadoop

#wget http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz

#tar zxvf hadoop-0.23.9.tar.gz -C /opt/

#cd /opt/

#ln -s hadoop-0.23.9/ hadoop
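
A quick sanity check that the tarball unpacked correctly (assuming JAVA_HOME from step 1 is already in the environment):

#/opt/hadoop/bin/hadoop version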


3. Create the hadoop user and grant sudo rights

#groupadd hadoop

#useradd -g hadoop hadoop

#passwd hadoop

#vi /etc/sudoers

Add the following hadoop entry (the existing root line is shown for context):

root    ALL=(ALL)       ALL
# for hadoop
hadoop ALL=(ALL:ALL)   ALL
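
Editing /etc/sudoers with plain vi works, but visudo is the safer route because it checks the syntax before saving; the same edit can be made with:

#visudo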


4. Set up passwordless SSH login

#su - hadoop

$ssh-keygen -t rsa -P ""

$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$chmod 600 ~/.ssh/authorized_keys

Test the login:

$ssh localhost

If you are still prompted for a password, check the local sshd configuration (root privileges required):
# vi /etc/ssh/sshd_config
Find the following lines and remove the leading "#" comment marker:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile     .ssh/authorized_keys
Restart sshd:
#service sshd restart
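
If the password prompt persists even after fixing sshd_config, overly permissive modes on the hadoop user's home or .ssh directory are another common cause (a standard OpenSSH requirement, not specific to Hadoop):

$chmod 700 ~/.ssh

$chmod go-w ~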



5. Configure Hadoop

#chown  -R hadoop:hadoop /opt/hadoop

#chown  -R hadoop:hadoop /opt/hadoop-0.23.9

#su - hadoop

$vim .bashrc

export JAVA_HOME=/usr/java/jdk1.6.0_45
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
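
Reload the profile so the new variables take effect in the current shell, and spot-check one of them:

$source ~/.bashrc

$echo $HADOOP_HOME
/opt/hadoop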

$cd /opt/hadoop/etc/hadoop/

$vi yarn-env.sh

Append the following:

export HADOOP_PREFIX=/opt/hadoop
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop

$vi core-site.xml

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:12200</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/hadoop-root</value>
 </property>
 <property>
  <name>fs.arionfs.impl</name>
  <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
  <description>The FileSystem for arionfs.</description>
 </property>
</configuration>

$vi hdfs-site.xml

<configuration>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/opt/hadoop/data/dfs/name</value>
  <final>true</final>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/hadoop/data/dfs/data</value>
  <final>true</final>
 </property>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
 <property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
 </property>
</configuration>
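
The NameNode format step will normally create these directories, but creating them up front as the hadoop user avoids ownership surprises; a sketch matching the paths above:

$mkdir -p /opt/hadoop/data/dfs/name /opt/hadoop/data/dfs/data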

$cp mapred-site.xml.template mapred-site.xml

$vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.tracker</name>
        <value>hdfs://localhost:9001</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>    
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapreduce.system.dir</name>
        <value>file:/opt/hadoop/data/mapred/system</value>
    </property>
    <property>
        <name>mapreduce.local.dir</name>
        <value>file:/opt/hadoop/data/mapred/local</value>
        <final>true</final>
    </property>
</configuration>
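
The container sizes above (1536 MB for maps, 3072 MB for reduces) assume the VM has several GB of RAM to spare; check before moving on, and scale the *.memory.mb values and their matching -Xmx options down together if it does not:

$free -m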

$vim yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>user.name</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:54311</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:54312</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:54313</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:54314</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:54315</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost</value>
  </property>
</configuration>


6. Start Hadoop and run the wordcount example

Set JAVA_HOME:

$vim /opt/hadoop/libexec/hadoop-config.sh

Add the export JAVA_HOME line shown below; the rest of the block already exists in the script:

# Attempt to set JAVA_HOME if it is not set
export JAVA_HOME=/usr/java/jdk1.6.0_45
if [[ -z $JAVA_HOME ]]; then
  # On OSX use java_home (or /Library for older versions)
  if [ "Darwin" == "$(uname -s)" ]; then
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=($(/usr/libexec/java_home))
    else
      export JAVA_HOME=(/Library/Java/Home)
    fi
  fi


  # Bail if we did not detect it
  if [[ -z $JAVA_HOME ]]; then
    echo "Error: JAVA_HOME is not set and could not be found." 1>&2
    exit 1
  fi
fi

Format the NameNode:

$ hadoop namenode -format
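
The format command should finish with a "successfully formatted" message and populate the name directory configured in hdfs-site.xml; a quick check (exact directory contents vary by version):

$ls /opt/hadoop/data/dfs/name/current/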

Start HDFS and YARN:

$/opt/hadoop/sbin/start-dfs.sh

$/opt/hadoop/sbin/start-yarn.sh

Check the running daemons:

$jps

6365 SecondaryNameNode
7196 ResourceManager
6066 NameNode
7613 Jps
6188 DataNode
7311 NodeManager
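
With the HDFS and YARN daemons running, the wordcount program from the bundled examples jar makes a good end-to-end test; a sketch, assuming the jar sits under share/hadoop/mapreduce (the exact jar file name may differ in your build):

$hadoop fs -mkdir /input

$hadoop fs -put /opt/hadoop/etc/hadoop/*.xml /input

$hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.9.jar wordcount /input /output

$hadoop fs -cat /output/part-r-00000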


The installation is complete.




