Hadoop 2.6 Fully Distributed Installation

192.168.0.110 master

192.168.0.111 slave1

192.168.0.112 slave2
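
These hostnames must resolve on every node; append the same mappings to /etc/hosts on all three machines:

# /etc/hosts on master, slave1, and slave2
192.168.0.110 master
192.168.0.111 slave1
192.168.0.112 slave2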

1. Configure the JDK

The JDK setup is covered in another post; it must be done on all three machines.
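
A minimal sketch of the environment setup, assuming the JDK is unpacked under /usr/java/jdk1.7 (the path is an assumption; match whatever the other post installs):

# append to /etc/profile on all three machines, then reload with: source /etc/profile
export JAVA_HOME=/usr/java/jdk1.7    # assumed install path
export PATH=$JAVA_HOME/bin:$PATH

Verify with java -version on each node.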

2. Add the hadoop user

groupadd hadoop              # create the hadoop group
useradd -g hadoop hadoop     # create the hadoop user in that group
passwd hadoop                # set a password for it
vi /etc/sudoers              # add the line below (editing via visudo is safer)
hadoop  ALL=(ALL)        NOPASSWD: ALL
3. Passwordless SSH login

ssh localhost                                  # creates ~/.ssh on first use
cd ~
ssh-keygen -t rsa -P '' -f .ssh/id_rsa         # generate a key pair with an empty passphrase
cat .ssh/id_rsa.pub >> .ssh/authorized_keys    # authorize the key locally
chmod 600 .ssh/authorized_keys
Copy id_rsa.pub from master to each slave, then append it to the slave's authorized_keys:

scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh    # on master; repeat for slave2
cat id_rsa.pub >> ~/.ssh/authorized_keys                 # on each slave, run inside ~/.ssh
On the slaves, .ssh must have permissions 700 and authorized_keys 600.
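
To verify, each of the following should print the slave's hostname without asking for a password:

# run on master as the hadoop user
ssh hadoop@slave1 hostname
ssh hadoop@slave2 hostname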

4. Configure the Hadoop cluster

Edit the slaves file (all configuration files below live in $HADOOP_HOME/etc/hadoop):

slave1
slave2

core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/hadoop/tmp</value>
        </property>
</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>master:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/hadoop/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/hadoop/dfs/data</value>
        </property>
</configuration>

mapred-site.xml     (if it does not exist, rename mapred-site.xml.template to mapred-site.xml)

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master:19888</value>
        </property>
</configuration>

yarn-site.xml

<configuration>
        <!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>

Pack the Hadoop directory and copy it to slave1 and slave2.
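
A minimal sketch, assuming the installation lives at /usr/hadoop (matching the data directories configured above):

# on master: pack the configured installation
cd /usr
tar -zcf /home/hadoop/hadoop.tar.gz hadoop
scp /home/hadoop/hadoop.tar.gz hadoop@slave1:/home/hadoop/
scp /home/hadoop/hadoop.tar.gz hadoop@slave2:/home/hadoop/
# on each slave: unpack to the same path and hand it to the hadoop user
sudo tar -zxf /home/hadoop/hadoop.tar.gz -C /usr
sudo chown -R hadoop:hadoop /usr/hadoop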


5. Start the cluster

hdfs namenode -format     # format HDFS; run this once, on master only

Start the daemons on master:

start-dfs.sh                                   # NameNode, SecondaryNameNode, DataNodes
start-yarn.sh                                  # ResourceManager, NodeManagers
mr-jobhistory-daemon.sh start historyserver    # MapReduce job history server
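
The web UIs offer another quick health check; these are the default ports in Hadoop 2.6 (19888 matches mapred-site.xml above):

curl -s -o /dev/null http://master:50070 && echo "NameNode UI up"
curl -s -o /dev/null http://master:8088  && echo "ResourceManager UI up"
curl -s -o /dev/null http://master:19888 && echo "JobHistory UI up"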


Check with jps:

[hadoop@master sbin]$ jps
6208 JobHistoryServer
5617 NameNode
6242 Jps
5930 ResourceManager
5791 SecondaryNameNode
[hadoop@slave1 hadoop]$ jps
3616 DataNode
3715 NodeManager
3811 Jps
[hadoop@slave2 hadoop]$ jps
5216 DataNode
5315 NodeManager
5412 Jps


Check with hdfs dfsadmin -report:

[hadoop@master sbin]$ hdfs dfsadmin -report
Configured Capacity: 37492883456 (34.92 GB)
Present Capacity: 32955469824 (30.69 GB)
DFS Remaining: 32955461632 (30.69 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.0.111:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2447355904 (2.28 GB)
DFS Remaining: 16299081728 (15.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 86.94%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 16 21:58:20 CST 2017


Name: 192.168.0.112:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2090057728 (1.95 GB)
DFS Remaining: 16656379904 (15.51 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 16 21:58:22 CST 2017
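
As a final smoke test, run one of the bundled MapReduce examples; a minimal sketch, where the jar path is an assumption based on a standard 2.6.0 tarball layout:

hadoop jar /usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10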

With that, the fully distributed cluster is up and running.
