Hadoop Learning Notes (3): Installing Hadoop

1. Download Hadoop.
2. Upload it to the Linux machine via FTP.
3. Copy the archive into /usr/local, unpack it there, and rename the directory (to hadoop, matching the paths used below).
4. Configure the environment variables, much as we did for the JDK:
1) vi /etc/profile
2) Extend the JDK variables set earlier so they read:

export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

5. Save and exit, then run source /etc/profile to make the variables take effect.
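To confirm the variables took, the same three lines can be set in the current shell and checked; this is just a sanity-check sketch, assuming the /usr/local layout used above:

```shell
# Same lines as added to /etc/profile.
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

# Sanity check: PATH should now contain Hadoop's bin directory.
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "PATH updated"
```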

6. Next, edit Hadoop's configuration files:

1). hadoop-env.sh

vim /usr/local/hadoop/conf/hadoop-env.sh

Find the following lines and change them to:

# The java implementation to use.  Required.
 export JAVA_HOME=/usr/local/jdk
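If you prefer to script this edit instead of using vim, a sed one-liner does the same job. The block below demonstrates it on a stand-in copy of the file rather than the real conf/hadoop-env.sh (the commented-out sample value is an assumption about what ships in the file):

```shell
# Stand-in for conf/hadoop-env.sh, which ships with JAVA_HOME commented out.
cat > hadoop-env-demo.sh <<'EOF'
# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
EOF

# Uncomment the line and point it at our JDK.
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk|' hadoop-env-demo.sh
grep '^export JAVA_HOME' hadoop-env-demo.sh
```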

2). Next, configure core-site.xml

[root@localhost conf]# vim /usr/local/hadoop/conf/core-site.xml

This is the file as shipped; note that the configuration element is empty:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

</configuration>

Add the following properties inside the configuration element:

<configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://172.21.15.184:9000</value>
   </property>

   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/local/hadoop/tmp</value>
   </property>
</configuration>
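When scripting the install, the same file can be generated non-interactively with a heredoc. This is a sketch using a stand-in filename, with the IP and paths from above; the trailing grep just counts the two properties as a quick sanity check:

```shell
# Generate the config (stand-in filename; write to conf/core-site.xml for real use).
cat > core-site-demo.xml <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://172.21.15.184:9000</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/local/hadoop/tmp</value>
   </property>
</configuration>
EOF

# Sanity check: exactly two <property> entries.
grep -c '<property>' core-site-demo.xml
```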

3). Next, edit hdfs-site.xml

vim /usr/local/hadoop/conf/hdfs-site.xml

You will find that the configuration element in this file is empty too; add the following:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

4). Next, edit mapred-site.xml

vim /usr/local/hadoop/conf/mapred-site.xml

Same idea as before; add the following (note that in a single-node setup this JobTracker address should point at the same machine as fs.default.name in core-site.xml):

<configuration>
   <property>
     <name>mapred.job.tracker</name>
     <value>172.21.15.189:9001</value>
   </property>
</configuration>

Now format the NameNode. Output like the following indicates success:

[root@localhost conf]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

15/05/31 03:14:56 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
15/05/31 03:14:56 INFO util.GSet: Computing capacity for map BlocksMap
15/05/31 03:14:56 INFO util.GSet: VM type       = 64-bit
15/05/31 03:14:56 INFO util.GSet: 2.0% max memory = 1013645312
15/05/31 03:14:56 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/05/31 03:14:56 INFO util.GSet: recommended=2097152, actual=2097152
15/05/31 03:14:56 INFO namenode.FSNamesystem: fsOwner=root
15/05/31 03:14:56 INFO namenode.FSNamesystem: supergroup=supergroup
15/05/31 03:14:56 INFO namenode.FSNamesystem: isPermissionEnabled=false
15/05/31 03:14:56 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/05/31 03:14:56 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/05/31 03:14:56 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
15/05/31 03:14:56 INFO namenode.NameNode: Caching file names occuring more than 10 times 
15/05/31 03:14:57 INFO common.Storage: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage of size 110 bytes saved in 0 seconds.
15/05/31 03:14:57 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/hadoop/tmp/dfs/name/current/edits
15/05/31 03:14:57 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/hadoop/tmp/dfs/name/current/edits
15/05/31 03:14:57 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
15/05/31 03:14:57 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Before starting Hadoop below, set up passwordless SSH login; otherwise the start scripts will prompt for your password over and over, which is a pain:

[root@localhost hadoop]# vim /etc/selinux/config

Here is the original file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Set SELINUX=disabled.

Save and exit.
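The same change can be made non-interactively with sed; the block below demonstrates it on a scratch copy of the file rather than the real /etc/selinux/config. (Also worth knowing: the file edit only takes effect after a reboot.)

```shell
# Scratch copy standing in for /etc/selinux/config.
cat > selinux-config-demo <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Flip enforcing -> disabled in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' selinux-config-demo
grep -x 'SELINUX=disabled' selinux-config-demo
```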

Then run [root@localhost hadoop]# ssh-keygen -t dsa and just press Enter at every prompt (accepting the defaults and an empty passphrase).

Then: [root@localhost /]# cd ~/.ssh

You will see a file named id_dsa.pub in there.

Append the public key to the authorized keys file: [root@localhost .ssh]# cat id_dsa.pub >> authorized_keys
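One thing that commonly trips people up here: sshd ignores authorized_keys when the file or the ~/.ssh directory is group- or world-writable, so ssh keeps prompting for a password even though the key is in place. Tightening the modes avoids that, demonstrated below on a scratch directory standing in for ~/.ssh:

```shell
# Scratch directory standing in for ~/.ssh.
mkdir -p ssh-demo
touch ssh-demo/authorized_keys

# sshd wants the directory private and the key file owner-only.
chmod 700 ssh-demo
chmod 600 ssh-demo/authorized_keys

stat -c '%a' ssh-demo/authorized_keys
```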

Now test it; if you see output like the following without being asked for a password, it worked:

[root@localhost ~]# ssh localhost
Last login: Sun May 31 03:32:36 2015 from 172.21.15.189

Now Hadoop can be started.

Change to Hadoop's bin directory and run:

[root@localhost bin]# start-all.sh
Warning: $HADOOP_HOME is deprecated.

namenode running as process 24585. Stop it first.
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-localhost.localdomain.out
localhost: secondarynamenode running as process 24746. Stop it first.
jobtracker running as process 24820. Stop it first.
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-localhost.localdomain.out

Now check the processes with jps (the "running as process ... Stop it first" messages above simply mean those daemons were already running from an earlier start):

[root@localhost bin]# jps
26582 TaskTracker
24746 SecondaryNameNode
26669 Jps
24820 JobTracker
26373 DataNode
24585 NameNode

If jps shows all of the processes above, congratulations: the installation succeeded.
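The five daemons above (everything except Jps itself) are exactly what a Hadoop 1.x single-node setup should be running, and a small loop makes the check mechanical. The block below runs against the sample jps output from this page so it stays self-contained; for real use, substitute the output of jps on your machine:

```shell
# Sample jps output copied from the run above; use jps_output="$(jps)" for real.
jps_output='26582 TaskTracker
24746 SecondaryNameNode
26669 Jps
24820 JobTracker
26373 DataNode
24585 NameNode'

# Check each expected Hadoop 1.x daemon is listed.
missing=0
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$jps_output" | grep -q " $d\$" || { echo "$d MISSING"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons up"
```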
