Setting Up a Hadoop/Spark Cluster on Ubuntu 16.04

1. Clone the single-node cluster

Clone the single-node VM and rename the copy.

2. Configure data1's network adapter

3. Add a second network adapter

Set the second adapter to host-only mode.

4. Configure the data1 server

4.1 Edit the interfaces file

sudo gedit /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# nat
auto ens33
iface ens33 inet dhcp

# host
auto ens38
iface ens38 inet static
address    192.168.56.101
netmask    255.255.255.0
network    192.168.56.0
broadcast  192.168.56.255

To also give the node a default gateway and a DNS server, the final version of the file is:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# nat
auto ens33
iface ens33 inet dhcp

# host
auto ens38
iface ens38 inet static
address    192.168.56.101
netmask    255.255.255.0
network    192.168.56.0
#broadcast  192.168.56.255
gateway    192.168.8.2
dns-nameserver 119.29.29.29

Restart the networking service:
sudo /etc/init.d/networking restart
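The static stanza above is identical on every node except for the last octet of the address, so it can be generated rather than retyped. A minimal sketch, assuming the interface name (ens38) and the gateway/DNS values used in this guide:

```shell
#!/bin/sh
# Print the host-only static stanza for one node; the only per-node
# difference is the last octet (master=100, data1=101, data2=102, ...).
octet="${1:-101}"
cat <<EOF
auto ens38
iface ens38 inet static
address    192.168.56.$octet
netmask    255.255.255.0
network    192.168.56.0
gateway    192.168.8.2
dns-nameserver 119.29.29.29
EOF
```

Run it with the octet for the node you are on, e.g. `sh gen_iface.sh 102` for data2, and paste the output into /etc/network/interfaces.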

4.2 Set the hostname

sudo gedit /etc/hostname

For this node, set the file's contents to data1.

4.3 Edit the hosts file

sudo gedit /etc/hosts
127.0.0.1	localhost
127.0.1.1	ubuntu


192.168.56.100 master 
192.168.56.101 data1 
192.168.56.102 data2 
#192.168.56.103 data3



# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
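The cluster entries follow a fixed pattern (master at .100, each dataN at the next address), so a short loop using the hostnames from this guide can print them for pasting into /etc/hosts on every node:

```shell
#!/bin/sh
# Print the cluster /etc/hosts entries: master is 192.168.56.100,
# each worker follows at the next address.
i=100
for host in master data1 data2; do
    echo "192.168.56.$i $host"
    i=$((i + 1))
done
```

Add data3 (or more workers) to the list and the addresses continue automatically.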

5. Edit the Hadoop configuration files

5.1 Edit core-site.xml

(On Hadoop 2.x, fs.default.name is the deprecated alias of fs.defaultFS; both are accepted.)

sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

5.2 Edit yarn-site.xml

sudo gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>master:8025</value>
	</property>

	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>master:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>master:8050</value>
	</property>
</configuration>

5.3 Edit mapred-site.xml

sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:54311</value>
    </property>
</configuration>

5.4 Edit hdfs-site.xml

sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
    </property>
</configuration>
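The dfs.datanode.data.dir path must exist before the DataNode starts. A sketch that creates it; the hduser:hadoop owner shown in the comment is a common convention from single-node guides, not something this document specifies, so substitute your own Hadoop user:

```shell
#!/bin/sh
# Create the DataNode storage directory referenced in hdfs-site.xml.
# BASE is overridable so the commands can be tried outside the VM.
BASE="${BASE:-/usr/local/hadoop}"
mkdir -p "$BASE/hadoop_data/hdfs/datanode"
# On the real node, also hand the tree to your Hadoop user, e.g.:
#   sudo chown -R hduser:hadoop "$BASE/hadoop_data/hdfs"
```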

6. Clone data1

Clone data1 to create data2, data3 (and data4, if desired), and master.

The steps are shown in the figure.

7. Configure master

Repeat the steps from section 4, but with this node's own settings:

sudo gedit /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# nat
auto ens33 
iface ens33 inet dhcp

# host
auto ens38
iface ens38 inet static
address    192.168.56.100
netmask    255.255.255.0 
network    192.168.56.0 
broadcast  192.168.56.255


sudo gedit /etc/hostname
Set the contents to master.

Edit the slaves file so it lists the worker nodes:
sudo gedit /usr/local/hadoop/etc/hadoop/slaves
data1
data2
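The slaves file can also be written in one command instead of through an editor; a sketch using the path from this guide, with FILE overridable for a dry run:

```shell
#!/bin/sh
# Write the worker list that start-dfs.sh / start-yarn.sh read.
FILE="${FILE:-/usr/local/hadoop/etc/hadoop/slaves}"
printf '%s\n' data1 data2 > "$FILE"
cat "$FILE"
```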


Check that the settings have taken effect (for example, ping each node by name and confirm that ssh to each worker succeeds).

8. Configure data2

 sudo gedit /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# nat
auto ens33 
iface ens33 inet dhcp

# host
auto ens38
iface ens38 inet static
address    192.168.56.102
netmask    255.255.255.0 
network    192.168.56.0 
broadcast  192.168.56.255


sudo gedit /etc/hostname
Set the contents to data2.

9. Configure data3

 sudo gedit /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# nat
auto ens33 
iface ens33 inet dhcp

# host
auto ens38
iface ens38 inet static
address    192.168.56.103
netmask    255.255.255.0 
network    192.168.56.0 
broadcast  192.168.56.255


sudo gedit /etc/hostname
Set the contents to data3.

10. From master, connect to data1, data2, and data3 and create the HDFS directories
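The original post leaves this step to screenshots. A hedged sketch of the usual sequence, assuming the paths used in this guide; the NameNode directory and the format/start commands are standard Hadoop 2.x practice rather than something shown earlier in this document:

```shell
#!/bin/sh
# Final step, run from master. BASE is overridable for a dry run.
BASE="${BASE:-/usr/local/hadoop}"

# On each worker (connect with: ssh data1, ssh data2, ssh data3),
# make sure the DataNode directory from hdfs-site.xml exists:
mkdir -p "$BASE/hadoop_data/hdfs/datanode"

# On master itself, the NameNode needs its own metadata directory
# (point dfs.namenode.name.dir in master's hdfs-site.xml at it):
mkdir -p "$BASE/hadoop_data/hdfs/namenode"

# Then, on master only:
#   hadoop namenode -format        # erases HDFS metadata; first run only
#   start-dfs.sh && start-yarn.sh
#   jps   # expect NameNode/ResourceManager here,
#         # DataNode/NodeManager on the workers
```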

 

 
