Hadoop Cluster Setup

Install the Virtual Machine

Configure the network interface:

[root@hadoop01 hadoop]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.199.141
NETMASK=255.255.255.0
GATEWAY=192.168.199.2
DNS1=114.114.114.114
DNS2=192.168.199.2
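
Before moving on, a quick sanity check is worthwhile (just a verification sketch using the addresses configured above): restart the network service and ping the gateway and the DNS server.

[root@hadoop01 hadoop]# service network restart
[root@hadoop01 hadoop]# ping -c 3 192.168.199.2
[root@hadoop01 hadoop]# ping -c 3 114.114.114.114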

Configure the hostname

[root@hadoop01 hadoop]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hadoop01

Disable the firewall

[root@hadoop01 hadoop]# service iptables stop
[root@hadoop01 hadoop]# chkconfig iptables off
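
To confirm that the firewall is really stopped and will stay off after a reboot (a quick check, not part of the setup itself):

[root@hadoop01 hadoop]# service iptables status
[root@hadoop01 hadoop]# chkconfig --list iptables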

Install the SSH client

[root@hadoop01 hadoop]# yum install -y openssh-clients

Clone the virtual machine. On the clone, fix the NIC naming recorded by udev:

[root@hadoop01 hadoop]# vi /etc/udev/rules.d/70-persistent-net.rules

# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:ab:a6:61", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:ab:a6:61", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

Delete the eth0 entry (it records the original VM's MAC address) and rename the remaining eth1 entry, which carries the clone's newly generated MAC address, to eth0.
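
If you prefer not to edit the file by hand, the same change can be scripted; this is only a sketch and assumes the file contains exactly one eth0 rule and one eth1 rule, as shown above:

[root@hadoop01 hadoop]# sed -i '/NAME="eth0"/d' /etc/udev/rules.d/70-persistent-net.rules
[root@hadoop01 hadoop]# sed -i 's/NAME="eth1"/NAME="eth0"/' /etc/udev/rules.d/70-persistent-net.rules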

Configure the network interface on the clone:

[root@hadoop01 hadoop]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.199.142              # <-- change this on each clone
NETMASK=255.255.255.0
GATEWAY=192.168.199.2
DNS1=114.114.114.114
DNS2=192.168.199.2

Change the hostname

[root@hadoop01 hadoop]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hadoop02

Reboot so that the new network settings take effect

[root@hadoop01 hadoop]# reboot

Repeat the steps above on the remaining machines (machines 3-4).

Hosts mapping

Edit the hosts file.
On Windows it is located at C:\Windows\System32\drivers\etc; on Linux:
[root@hadoop01 hadoop]# vi /etc/hosts
Add:

192.168.199.141 hadoop01
192.168.199.142 hadoop02
192.168.199.143 hadoop03
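
After saving, each hostname should resolve from every node (a quick check):

[root@hadoop01 hadoop]# ping -c 1 hadoop01
[root@hadoop01 hadoop]# ping -c 1 hadoop02
[root@hadoop01 hadoop]# ping -c 1 hadoop03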

Passwordless SSH login

This can be done with a script:

#!/bin/bash
# install expect via yum
yum -y install expect
# PWD_1 is the root login password; change it to your own
PWD_1=123456
# every non-localhost entry in /etc/hosts (the loop below visits both the IP and the hostname column)
ips=$(cat /etc/hosts |grep -v "::" | grep -v "127.0.0.1")
key_generate() {
    expect -c "set timeout -1;
        spawn ssh-keygen -t rsa;
        expect {
            {Enter file in which to save the key*} {send -- \r;exp_continue}
            {Enter passphrase*} {send -- \r;exp_continue}
            {Enter same passphrase again:} {send -- \r;exp_continue}
            {Overwrite (y/n)*} {send -- n\r;exp_continue}
            eof             {exit 0;}
    };"
}
auto_ssh_copy_id () {
    expect -c "set timeout -1;
        spawn ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@$1;
            expect {
                {Are you sure you want to continue connecting *} {send -- yes\r;exp_continue;}
                {*password:} {send -- $2\r;exp_continue;}
                eof {exit 0;}
            };"
}
rm -rf ~/.ssh

key_generate

for ip in $ips
do
    auto_ssh_copy_id $ip  $PWD_1
done
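
Run the script on hadoop01 (and on any other node that needs to log in everywhere). To verify that passwordless login works, an ssh to each node should print its hostname without asking for a password, roughly like this:

[root@hadoop01 hadoop]# ssh hadoop02 hostname
hadoop02
[root@hadoop01 hadoop]# ssh hadoop03 hostname
hadoop03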

Install the JDK

1. Upload the JDK archive to Linux (e.g. to /root).
2. Extract it into the installation directory:
[root@hadoop01 hadoop]# tar -zxvf /root/jdk-8u102-linux-x64.tar.gz -C /usr/local/
3. Configure the environment variables:
[root@hadoop01 hadoop]# vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_102
export PATH=$PATH:$JAVA_HOME/bin

[root@hadoop01 hadoop]# source /etc/profile
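
Verify that the JDK is picked up from the PATH (the output should report version 1.8.0_102):

[root@hadoop01 hadoop]# java -version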

Install Hadoop

1. Upload the Hadoop package to /root.
2. Plan the installation directory: /usr/local/hadoop-2.7.3.
3. Extract the package:
[root@hadoop01 hadoop]# tar -zxvf /root/hadoop-2.7.3.tar.gz -C /usr/local/
4. Edit the configuration files under $HADOOP_HOME/etc/hadoop/ (cd into that directory first).
The minimal configuration is as follows:
(1)
[root@hadoop01 hadoop]# vi hadoop-env.sh

#The java implementation to use.
export JAVA_HOME=/usr/local/jdk1.8.0_102

(2)
[root@hadoop01 hadoop]# vi core-site.xml
This specifies where the NameNode runs and where temporary files are stored:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-2.7.3/tmp</value>
</property>
</configuration>

(3)
[root@hadoop01 hadoop]# vi hdfs-site.xml

<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop-2.7.3/data/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop-2.7.3/data/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop01:50090</value>
</property>
</configuration>

(4)
[root@hadoop01 hadoop]# cp mapred-site.xml.template mapred-site.xml
(5)
[root@hadoop01 hadoop]# vi mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

(6)
[root@hadoop01 hadoop]# vi yarn-site.xml

<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

(7)
[root@hadoop01 hadoop]# vi slaves
Delete the existing localhost line and add the slave hostnames:

    hadoop02
    hadoop03

Configure the environment variables:
[root@hadoop01 hadoop]# vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_102
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the changes globally: source /etc/profile
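
A quick check that the Hadoop binaries are now on the PATH (the output should report Hadoop 2.7.3):

[root@hadoop01 hadoop]# hadoop version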

Send the JDK, Hadoop, and configuration files installed on the first machine to the other two:

the hosts file
the extracted JDK directory
the extracted Hadoop directory
the /etc/profile file

[root@hadoop01 hadoop]#  scp -r /usr/local/jdk1.8.0_102 hadoop02:/usr/local/
[root@hadoop01 hadoop]#  scp -r /usr/local/hadoop-2.7.3/ hadoop02:/usr/local/
[root@hadoop01 hadoop]#  scp -r /etc/hosts hadoop02:/etc/
[root@hadoop01 hadoop]#  scp -r /etc/profile hadoop02:/etc/
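
Repeat the same four commands for hadoop03, or loop over the remaining nodes (a small sketch assuming the hostnames above); afterwards log in to hadoop02 and hadoop03 and run source /etc/profile there as well:

[root@hadoop01 hadoop]# for host in hadoop02 hadoop03; do
>   scp -r /usr/local/jdk1.8.0_102 /usr/local/hadoop-2.7.3 $host:/usr/local/
>   scp /etc/hosts /etc/profile $host:/etc/
> done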

Start the cluster

Format HDFS (run this on hadoop01, and only once; the command is under bin/):
[root@hadoop01 hadoop]# hadoop namenode -format
Start HDFS (the script is under sbin/):
[root@hadoop01 hadoop]# start-dfs.sh
Check the running processes with jps:
[root@hadoop01 hadoop]# jps

2820 Jps
2028 NameNode
2205 SecondaryNameNode

Start YARN (the script is under sbin/):
[root@hadoop01 hadoop]# start-yarn.sh
[root@hadoop01 hadoop]# jps
Processes on hadoop01:

    2820 Jps
    2028 NameNode
    2205 SecondaryNameNode
    2350 ResourceManager
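
With HDFS and YARN up, a small smoke test (a sketch using the example jar that ships with Hadoop 2.7.3) confirms that the cluster actually accepts work; hdfs dfsadmin -report should list the two DataNodes (hadoop02 and hadoop03):

[root@hadoop01 hadoop]# hdfs dfsadmin -report
[root@hadoop01 hadoop]# hdfs dfs -mkdir -p /input
[root@hadoop01 hadoop]# hdfs dfs -put /etc/hosts /input/
[root@hadoop01 hadoop]# hadoop jar /usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
[root@hadoop01 hadoop]# hdfs dfs -cat /output/part-r-00000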

Check via the web UI

hadoop01:50070 (HDFS NameNode web UI)

hadoop01:8088 (YARN ResourceManager web UI)

To open these by hostname from a Windows machine, the same entries must also be added to C:\Windows\System32\drivers\etc\hosts, as mentioned above.
