The Beauty of Java [From Novice to Master]: Fully Distributed Installation of Hadoop on Linux

Originally I only planned to set up a single-node environment, but once that install was done it didn't feel like enough, so today I kept going and put together a fully distributed cluster installation. The software is the same as in the previous single-node Hadoop installation article:

  • Ubuntu 14.10 64 Bit Server Edition
  • Hadoop2.6.0
  • JDK 1.7.0_71
  • ssh
  • rsync

Preparing the Environment

The setup is again VirtualBox + Ubuntu 14.10 64-bit, only this time with three nodes. Without further ado, here is the preparation. I won't repeat the basic environment setup (installing the JDK, ssh, rsync, and so on); see the previous article for that.

master 192.168.1.118 nameNode
slave1 192.168.1.189 dataNode1
slave2 192.168.1.116 dataNode2

Edit the hosts file on every machine by appending the following lines to the end of /etc/hosts:

192.168.1.118   master
192.168.1.189   slave1
192.168.1.116   slave2

Addendum (January 15, 2015):

1. Change each machine's hostname in /etc/hostname: set the master machine's hostname to master, and do the same for the other nodes.

2. Grant the current user administrator (sudo) privileges.

3. Switch to a static IP address by editing /etc/network/interfaces and adding the following configuration (the example below is for the master; use each node's own address):

auto eth0
iface eth0 inet static
address 192.168.1.118
netmask 255.255.255.0
gateway 192.168.1.1

A reboot is required for these changes to take effect.

Configuring passwordless SSH from the namenode to the datanodes

Run the following two commands directly in a shell on the namenode:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Go into the namenode user's home directory, then into the .ssh directory, and check the generated files: authorized_keys, id_dsa, id_dsa.pub.

Distribute the authorized_keys file to each datanode, for example as shown below.
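A minimal sketch of the distribution step, assuming the same user (adam) on every node and an existing ~/.ssh directory on the datanodes; adjust the user name and paths to your setup. You will still be prompted for a password here, since passwordless login is not in place yet.

$ scp ~/.ssh/authorized_keys adam@slave1:~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys adam@slave2:~/.ssh/authorized_keys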

Verify:

ssh 192.168.1.189

ssh 192.168.1.116

ssh slave1

ssh slave2

If you can log in without being asked for a password, you are done; otherwise, redo the configuration.

Installing Hadoop

1. Download the hadoop-2.6.0 tar.gz file from the official website, then extract it into the user's home directory: tar -zxvf hadoop-2.6.0.tar.gz

2. Create a tmp folder inside the extracted hadoop-2.6.0 directory. (Both steps are shown as commands below.)
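A sketch of these two steps as shell commands, assuming the tarball is fetched from the Apache archive and the home directory is /home/adam (use whichever mirror and paths fit your setup):

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
$ tar -zxvf hadoop-2.6.0.tar.gz -C /home/adam
$ mkdir /home/adam/hadoop-2.6.0/tmp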

3. Configure environment variables

Append the following to the end of /etc/profile (this must be done on every machine):

# set hadoop path
export HADOOP_HOME=/home/adam/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin


Run . /etc/profile or source /etc/profile to make the configuration take effect, then run hadoop version to print the Hadoop version and confirm that the environment variables are set correctly.
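For example:

$ source /etc/profile
$ hadoop version    # should report Hadoop 2.6.0 among other details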

4. Configure Hadoop. Go to the directory /home/adam/hadoop-2.6.0/etc/hadoop.

a>. Edit core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/adam/hadoop-2.6.0/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>


b>. Set the JAVA_HOME environment variable in hadoop-env.sh and yarn-env.sh, as in the sketch below.
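A minimal example of the line to set; the JDK path below is an assumption (a typical location for JDK 1.7.0_71), so point it at wherever your JDK is actually installed:

# in both hadoop-env.sh and yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_71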

c>. Edit hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/adam/hadoop-2.6.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/adam/hadoop-2.6.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>


d>. Edit mapred-site.xml
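In a fresh Hadoop 2.6.0 distribution this file typically exists only as mapred-site.xml.template, so if mapred-site.xml is missing, copy the template first (paths assume the layout used above):

$ cd /home/adam/hadoop-2.6.0/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml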

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>


e>. Edit yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>


f>. Edit the slaves file and add the following two lines:

slave1

slave2

g>. Copy the hadoop folder to the other slave nodes, for example with scp as sketched below.
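A minimal sketch of the copy, assuming the same user and home-directory layout on every node (adjust the user and paths as needed):

$ scp -r /home/adam/hadoop-2.6.0 adam@slave1:/home/adam/
$ scp -r /home/adam/hadoop-2.6.0 adam@slave2:/home/adam/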

Starting Hadoop

1. Format the namenode

adam@ubuntu:~/hadoop-2.6.0/bin$ ./hdfs namenode -format
15/01/14 19:29:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/60.191.124.254
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /home/adam/hadoop-2.6.0/etc/hadoop:/home/adam/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/adam/hadoop-2.6.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/adam/h ...
jar:/home/adam/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/home/adam/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:/home/adam/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/home/adam/hadoop-2.6.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
15/01/14 19:29:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/01/14 19:29:58 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-3f81e813-604e-4d60-93b1-9794d7c7c079
15/01/14 19:30:10 INFO namenode.FSNamesystem: No KeyProvider found.
15/01/14 19:30:10 INFO namenode.FSNamesystem: fsLock is fair:true
15/01/14 19:30:10 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/01/14 19:30:10 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/01/14 19:30:10 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/01/14 19:30:10 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jan 14 19:30:10
15/01/14 19:30:10 INFO util.GSet: Computing capacity for map BlocksMap
15/01/14 19:30:10 INFO util.GSet: VM type       = 64-bit
15/01/14 19:30:10 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/01/14 19:30:10 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/01/14 19:30:10 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/01/14 19:30:10 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/01/14 19:30:10 INFO blockmanagement.BlockManager: maxReplication             = 512
15/01/14 19:30:10 INFO blockmanagement.BlockManager: minReplication             = 1
15/01/14 19:30:10 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/01/14 19:30:10 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/01/14 19:30:10 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/01/14 19:30:10 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/01/14 19:30:10 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/01/14 19:30:10 INFO namenode.FSNamesystem: fsOwner             = adam (auth:SIMPLE)
15/01/14 19:30:10 INFO namenode.FSNamesystem: supergroup          = supergroup
15/01/14 19:30:10 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/01/14 19:30:10 INFO namenode.FSNamesystem: Determined nameservice ID: hadoop-cluster
15/01/14 19:30:10 INFO namenode.FSNamesystem: HA Enabled: false
15/01/14 19:30:10 INFO namenode.FSNamesystem: Append Enabled: true
15/01/14 19:30:16 INFO util.GSet: Computing capacity for map INodeMap
15/01/14 19:30:16 INFO util.GSet: VM type       = 64-bit
15/01/14 19:30:16 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/01/14 19:30:16 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/01/14 19:30:16 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/01/14 19:30:16 INFO util.GSet: Computing capacity for map cachedBlocks
15/01/14 19:30:16 INFO util.GSet: VM type       = 64-bit
15/01/14 19:30:16 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/01/14 19:30:16 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/01/14 19:30:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/01/14 19:30:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/01/14 19:30:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/01/14 19:30:16 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/01/14 19:30:16 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/01/14 19:30:16 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/01/14 19:30:16 INFO util.GSet: VM type       = 64-bit
15/01/14 19:30:16 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/01/14 19:30:16 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/01/14 19:30:16 INFO namenode.NNConf: ACLs enabled? false
15/01/14 19:30:16 INFO namenode.NNConf: XAttrs enabled? true
15/01/14 19:30:16 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/01/14 19:30:16 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1507698623-60.191.124.254-1421235016468
15/01/14 19:30:16 INFO common.Storage: Storage directory /home/adam/hadoop-2.6.0/dfs/name has been successfully formatted.
15/01/14 19:30:17 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/01/14 19:30:17 INFO util.ExitUtil: Exiting with status 0
15/01/14 19:30:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/60.191.124.254
************************************************************/


2. Start Hadoop

In hadoop/sbin, run start-all.sh, or run start-dfs.sh followed by start-yarn.sh.
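For example, on the master (the path assumes the installation directory used above):

$ cd ~/hadoop-2.6.0/sbin
$ ./start-dfs.sh
$ ./start-yarn.sh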

3. Verify the installation

Start the Job History Server:

root@master:~/hadoop/sbin# ./mr-jobhistory-daemon.sh start historyserver
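To confirm everything is running, here is a quick sketch of the checks I would do. It assumes the host names and the resourcemanager web port (8088) configured above; the NameNode web UI listens on port 50070 by default in Hadoop 2.6.

# list the running Hadoop JVMs on each node
$ jps
# expected on master: NameNode, SecondaryNameNode, ResourceManager (plus JobHistoryServer)
# expected on slave1/slave2: DataNode, NodeManager

You can also open http://master:50070 (HDFS) and http://master:8088 (YARN) in a browser.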

That's it: a fully distributed Hadoop cluster is now installed. There aren't many steps, so if you're interested, give it a try, and feel free to contact me with any questions.
