Hadoop Installation

I. Installing Hadoop

     


<1> Install JDK 1.6

chmod the downloaded installer to 775 so it is executable, then run ./filename.bin;
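A minimal sketch of this step, assuming the self-extracting installer is named jdk-6u45-linux-x64.bin (the actual filename depends on your download) and that the JDK ends up under /home/hadoop/platform, matching the JAVA_HOME used later:

  # make the self-extracting JDK installer executable, then run it
  chmod 775 jdk-6u45-linux-x64.bin
  ./jdk-6u45-linux-x64.bin
  # move the extracted JDK to the directory JAVA_HOME will point to later
  mv jdk1.6.0_45 /home/hadoop/platform/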

<2> Install ssh and set up passwordless ssh login for hadoop
<2.1> Run sudo apt-get install ssh rsync, and run sudo apt-get install openjdk-6-jdk (for the jps command)
<2.2> Configuring passwordless ssh login to the local machine takes two steps:
Run ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
   -t specifies the key algorithm; both dsa and rsa are available;
   -P specifies the passphrase; the two single quotes '' mean an empty passphrase;
   -f specifies the file in which to store the key
Run cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
   This step appends the public key to the local authorized_keys file. Once both steps are done, run ssh localhost to check that login works without a password; the whole sequence is summarized in the sketch below.
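A minimal sketch of the passwordless-login setup and its verification (assuming ~/.ssh already exists for the hadoop user):

  # generate a DSA key pair with an empty passphrase
  ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
  # authorize the new public key for logins to this machine
  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  # this should now log in without asking for a password
  ssh localhost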
Next, go into the extracted hadoop-1.0.3 directory to do the configuration. Hadoop's pseudo-distributed mode mainly requires editing the following configuration files:
<3> conf/hadoop-env.sh: this file configures Hadoop's runtime environment; the only required change here is to point JAVA_HOME at your JDK 1.6 directory (see the JAVA_HOME line in the listing below)


  # Set Hadoop-specific environment variables here.

  # The only required environment variable is JAVA_HOME. All others are
  # optional. When running a distributed configuration it is best to
  # set JAVA_HOME in this file, so that it is correctly defined on
  # remote nodes.

  # The java implementation to use. Required.
  export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

  # Extra Java CLASSPATH elements. Optional.
  # export HADOOP_CLASSPATH=

  # The maximum amount of heap to use, in MB. Default is 1000.
  # export HADOOP_HEAPSIZE=2000

  # Extra Java runtime options. Empty by default.
  # export HADOOP_OPTS=-server

  # Command specific options appended to HADOOP_OPTS when specified
  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
  export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
  export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
  export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
  export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
  # export HADOOP_TASKTRACKER_OPTS=
  # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
  # export HADOOP_CLIENT_OPTS

  # Extra ssh options. Empty by default.
  # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

  # Where log files are stored. $HADOOP_HOME/logs by default.
  # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

  # File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
  # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

  # host:path where hadoop code should be rsync'd from. Unset by default.
  # export HADOOP_MASTER=master:/home/$USER/src/hadoop

  # Seconds to sleep between slave commands. Unset by default. This
  # can be useful in large clusters, where, e.g., slave rsyncs can
  # otherwise arrive faster than the master can service them.
  # export HADOOP_SLAVE_SLEEP=0.1

  # The directory where pid files are stored. /tmp by default.
  # export HADOOP_PID_DIR=/var/hadoop/pids

  # A string representing this instance of hadoop. $USER by default.
  # export HADOOP_IDENT_STRING=$USER

  # The scheduling priority for daemon processes. See 'man nice'.
  # export HADOOP_NICENESS=10
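As a quick sanity check (using the JDK path configured above; adjust to your own layout), you can confirm that JAVA_HOME points at a working JDK:

  # should print the 1.6.0_45 version banner if the path is correct
  /home/hadoop/platform/jdk1.6.0_45/bin/java -version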
<4> conf/core-site.xml
     This file mainly configures fs.default.name (which specifies the namenode) and hadoop.tmp.dir (the base location of HDFS's temporary data). You can leave hadoop.tmp.dir unset, in which case the data is kept under the default /tmp and is lost every time the machine reboots.


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
     <property>
         <name>hadoop.tmp.dir</name>
         <value>/home/hadoop/hdfs/tmp</value>
     </property>
  </configuration>
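Since hadoop.tmp.dir now points at a path under the home directory rather than /tmp, it is worth creating that directory up front; a minimal sketch, using the path configured above:

  # create the base directory for HDFS temporary data
  mkdir -p /home/hadoop/hdfs/tmp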
<5> conf/hdfs-site.xml
     Here dfs.replication sets the number of replicas kept for each data block; the default is 3, but since we are configuring pseudo-distributed mode on a single machine, set it to 1. dfs.name.dir and dfs.data.dir are very important: they set the local paths where the namenode and datanode store HDFS data. If they are set badly, several errors show up later. You can also leave them unset and use the defaults under /tmp, but again the data is lost on reboot.


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.name.dir</name>
         <value>/home/hadoop/hdfs/name</value>
     </property>
     <property>
         <name>dfs.data.dir</name>
         <value>/home/hadoop/hdfs/data</value>
     </property>
  </configuration>
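Likewise, the namenode and datanode directories configured above can be created ahead of time; a minimal sketch, assuming everything runs as the hadoop user:

  # local storage for namenode metadata and datanode blocks
  mkdir -p /home/hadoop/hdfs/name /home/hadoop/hdfs/data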
<6> conf/mapred-site.xml
     Here mapred.job.tracker specifies the host and port that the JobTracker runs at; in pseudo-distributed mode this is simply localhost:9001.


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
  </configuration>
      Then it is time for a test run. Add hadoop-1.0.3/bin to the PATH in /etc/profile so that Hadoop commands can be run directly. After running start-all.sh there was a problem: jps showed that the namenode had not started, and hadoop namenode -format did not succeed either, so check the logs:

    The log tells us that the HDFS storage directory either does not exist or is not accessible:
FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:Directory /home/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
A check shows that the hdfs directory has already been created, so the problem must be permissions. Change the permission of the hadoop directory under /home from 755 to 775, then rerun Hadoop, and this time it succeeds (a sketch of the full fix follows):
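Putting the fix together, a minimal sketch; the hadoop-1.0.3 install path below is an assumption, substitute wherever you extracted the tarball:

  # make the hadoop scripts available on the PATH (append this line to /etc/profile)
  export PATH=$PATH:/home/hadoop/platform/hadoop-1.0.3/bin
  # give the hadoop home directory 775 permissions so the HDFS directories are accessible
  chmod 775 /home/hadoop
  # re-format the namenode and restart all daemons
  hadoop namenode -format
  start-all.sh
  # in pseudo-distributed mode, jps should now list NameNode, DataNode,
  # SecondaryNameNode, JobTracker and TaskTracker
  jps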

      Before installing HBase, first test the pseudo-distributed Hadoop the official way to see whether the installation succeeded: copy all the files under conf into the input directory on HDFS, run the example jar and store the results in the output directory on HDFS, and finally view the results from the output directory:



  $ bin/hadoop fs -put conf input

  $ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

  Copy the output files from the distributed filesystem to the local filesystem and examine them:
  $ bin/hadoop fs -get output output
  $ cat output/*

  or

  View the output files on the distributed filesystem:
  $ bin/hadoop fs -cat output/*

  When you're done, stop the daemons with:
  $ bin/stop-all.sh
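Besides the command-line test, Hadoop 1.x also exposes web interfaces for a quick visual check of the cluster state; assuming the default ports have not been changed:

  # NameNode web UI
  http://localhost:50070/
  # JobTracker web UI
  http://localhost:50030/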