Setting Up a Fully Distributed Hadoop Cluster

1. Install VMware Workstation 12 on Ubuntu and create two virtual machines to serve as the slave nodes of the fully distributed cluster.

VMware Workstation 12 download: http://www.vmware.com/cn/products/workstation/workstation-evaluation

After the download finishes, make the installer executable:

sudo chmod a+x VMware-Workstation-Full-12.1.1-3770994.x86_64.bundle

then run it:

sudo ./VMware-Workstation-Full-12.1.1-3770994.x86_64.bundle

After the installation completes, create two virtual machines.

Install the JDK and SSH on each of them.
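
A minimal sketch of those installs on Ubuntu, assuming the standard apt repositories (the OpenJDK package name differs by release, e.g. openjdk-7-jdk or openjdk-8-jdk; Hadoop 2.7.1 runs on Java 7 or 8):

sudo apt-get update
# SSH server, so the master can reach every node
sudo apt-get install openssh-server
# JDK
sudo apt-get install openjdk-7-jdk
java -version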

2. Set the IP addresses

Set the slave hostnames to hadoop-slave1 and hadoop-slave2:

sudo gedit /etc/hostname
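
The file should contain nothing but the new hostname, one line and nothing else; for example, on the first slave:

hadoop-slave1

A reboot (or sudo hostname hadoop-slave1 for the current session) makes it take effect.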

Check the IP address on each slave with ifconfig:

kun@hadoop-slave1:~$ ifconfig
eno16777736 Link encap:Ethernet  HWaddr 00:0c:29:e3:d1:5e  
          inet addr:172.16.239.128  Bcast:172.16.239.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee3:d15e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24337 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20258 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11897832 (11.8 MB)  TX bytes:8641314 (8.6 MB)

Ping the slave's IP from the master:

kun@hadoop-master:~$ ping 172.16.239.128
PING 172.16.239.128 (172.16.239.128) 56(84) bytes of data.
64 bytes from 172.16.239.128: icmp_seq=1 ttl=64 time=0.399 ms
64 bytes from 172.16.239.128: icmp_seq=2 ttl=64 time=0.160 ms

Replies like these mean the networking is fine. Next, give the slave a static IP.

[Figure: network settings screenshot showing the static IP configuration for hadoop-slave1]

Set hadoop-slave2's static IP to 172.16.239.129 in the same way.

On the master and both slaves, run

sudo gedit /etc/hosts

Edit it so that every node maps the cluster hostnames to their IPs (the original shows this in a screenshot of the hosts file).
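
A sketch of what those entries might look like; the master's IP is never shown in this post, so the .130 address below is purely hypothetical:

# /etc/hosts (identical on all three machines)
172.16.239.130   hadoop-master    # hypothetical; use the master's real IP
172.16.239.128   hadoop-slave1
172.16.239.129   hadoop-slave2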


Then reboot the master and both slaves.

3. Passwordless SSH between master and slaves

(1) Passwordless login from hadoop-master to hadoop-slave1
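
If hadoop-master does not already have an RSA key pair in ~/.ssh, generate one first (press Enter through the prompts; an empty passphrase keeps the login passwordless):

ssh-keygen -t rsa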

# on hadoop-master: copy the public key over to the slave
cd ~/.ssh
scp ./id_rsa.pub hadoop-slave1:~/.ssh/id_master.pub
ssh hadoop-slave1

# on hadoop-slave1: append the master's key, then return to the master
cd ~/.ssh
cat id_master.pub >> authorized_keys
exit

# back on hadoop-master: this login should now succeed without a password
ssh hadoop-slave1
(2) Set up passwordless login from hadoop-master to hadoop-slave2 in the same way.
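
For reference, the same sequence with the second slave substituted:

cd ~/.ssh
scp ./id_rsa.pub hadoop-slave2:~/.ssh/id_master.pub
ssh hadoop-slave2
cd ~/.ssh
cat id_master.pub >> authorized_keys
exit
ssh hadoop-slave2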

4. Fully distributed Hadoop installation

(Configure on the hadoop-master host.)

Go to /home/kun/soft/hadoop-2.7.1/etc/hadoop.

Configure core-site.xml (fs.default.name is the older, deprecated name for fs.defaultFS, but it still works in 2.7.1):

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/kun/soft/hadoop-2.7.1/tmp</value>
    </property>
</configuration>
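
Hadoop will normally create the hadoop.tmp.dir path configured above when it first needs it, but creating it up front on every node avoids permission surprises; a one-line sketch:

mkdir -p /home/kun/soft/hadoop-2.7.1/tmp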

Configure hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

Copy mapred-site.xml.template and rename it to mapred-site.xml.
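
For example, from the etc/hadoop directory:

cp mapred-site.xml.template mapred-site.xml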

Configure mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Edit the slaves file and add the following lines (listing hadoop-master here means the master also runs a DataNode):
hadoop-master
hadoop-slave1
hadoop-slave2
Configure yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master:8030</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master:8031</value>
    </property>

</configuration>

Go to the soft directory and copy the configured hadoop-2.7.1 folder to hadoop-slave1 and hadoop-slave2:

scp -r hadoop-2.7.1 hadoop-slave1:/home/kun/soft/

scp -r hadoop-2.7.1 hadoop-slave2:/home/kun/soft/

(Do not forget to set the Hadoop environment variables on the slaves.)
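
A minimal sketch of those variables, assuming they go into each user's ~/.bashrc; the JAVA_HOME path below is a guess for an OpenJDK install and should be adjusted to the actual JDK location:

# ~/.bashrc on each slave (matching the master)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # adjust to the real JDK path
export HADOOP_HOME=/home/kun/soft/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source ~/.bashrc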

Go to the Hadoop directory (/home/kun/soft/hadoop-2.7.1).

Format HDFS (only needed once, on the master):

bin/hdfs namenode -format

Start the HDFS daemons (NameNode, SecondaryNameNode, and the DataNodes):

sbin/start-dfs.sh

Running jps on a slave should show:
DataNode
Jps
Running jps on hadoop-master should show:
Jps
SecondaryNameNode
DataNode
NameNode
Start YARN:
sbin/start-yarn.sh
If it started successfully, jps on a slave shows:
DataNode
Jps
NodeManager
and jps on hadoop-master shows:
Jps
SecondaryNameNode
DataNode
NameNode
NodeManager
ResourceManager
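
To double-check that all three DataNodes registered, the HDFS report and the web UIs can be used (50070 and 8088 are the Hadoop 2.x default ports):

bin/hdfs dfsadmin -report          # should list 3 live datanodes
# NameNode web UI:        http://hadoop-master:50070
# ResourceManager web UI: http://hadoop-master:8088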

Running WordCount in fully distributed mode
Create the data directory:

hdfs dfs -mkdir /kun
Create the input directory:
hdfs dfs -mkdir /kun/input
Create the output directory:
hdfs dfs -mkdir /kun/output/
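
The local file input/1 uploaded in the next step can be any small text file; a sketch that recreates the sample shown at the end of this post:

mkdir -p input
cat > input/1 << 'EOF'
你好!
再见!
啊 啦 啦 啦!
你好!
你好 !
你好!
你好 !
你好!
你好 !
你好!
EOF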
Upload the input file:
hdfs dfs -put input/1 /kun/input
Run the example:

kun@hadoop-master:~/soft/hadoop-2.7.1$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /kun/input /kun/output/1
16/04/28 10:45:03 INFO input.FileInputFormat: Total input paths to process : 1
16/04/28 10:45:03 INFO mapreduce.JobSubmitter: number of splits:1
16/04/28 10:45:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1461811316643_0001
16/04/28 10:45:05 INFO impl.YarnClientImpl: Submitted application application_1461811316643_0001
16/04/28 10:45:05 INFO mapreduce.Job: The url to track the job: http://hadoop-slave1:8088/proxy/application_1461811316643_0001/
16/04/28 10:45:05 INFO mapreduce.Job: Running job: job_1461811316643_0001
16/04/28 10:45:16 INFO mapreduce.Job: Job job_1461811316643_0001 running in uber mode : false
16/04/28 10:45:16 INFO mapreduce.Job:  map 0% reduce 0%
16/04/28 10:45:35 INFO mapreduce.Job:  map 100% reduce 0%
16/04/28 10:45:45 INFO mapreduce.Job:  map 100% reduce 100%
16/04/28 10:45:46 INFO mapreduce.Job: Job job_1461811316643_0001 completed successfully
16/04/28 10:45:47 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=94
		FILE: Number of bytes written=236011
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=199
		HDFS: Number of bytes written=60
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=16426
		Total time spent by all reduces in occupied slots (ms)=6785
		Total time spent by all map tasks (ms)=16426
		Total time spent by all reduce tasks (ms)=6785
		Total vcore-seconds taken by all map tasks=16426
		Total vcore-seconds taken by all reduce tasks=6785
		Total megabyte-seconds taken by all map tasks=16820224
		Total megabyte-seconds taken by all reduce tasks=6947840
	Map-Reduce Framework
		Map input records=10
		Map output records=16
		Map output bytes=176
		Map output materialized bytes=94
		Input split bytes=87
		Combine input records=16
		Combine output records=7
		Reduce input groups=7
		Reduce shuffle bytes=94
		Reduce input records=7
		Reduce output records=7
		Spilled Records=14
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=312
		CPU time spent (ms)=2480
		Physical memory (bytes) snapshot=291917824
		Virtual memory (bytes) snapshot=3790905344
		Total committed heap usage (bytes)=139288576
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=112
	File Output Format Counters 
		Bytes Written=60

The counters above show that the job completed successfully.

Check the results.

The original file (input/1):

你好!
再见!
啊 啦 啦 啦!
你好!
你好 !
你好!
你好 !
你好!
你好 !
你好!


The result of the word count:

kun@hadoop-master:~/soft/hadoop-2.7.1$ hdfs dfs -cat /kun/output/1/*
你好	3
你好!	5
再见!	1
啊	1
啦	2
啦!	1
!	3


Note: the bundled wordcount example splits tokens on whitespace, so "你好 !" is counted as two separate tokens, "你好" and "!".


At this point, the fully distributed Hadoop cluster is complete.
