Hadoop 3 Fully Distributed Cluster

Prerequisites

Three Linux machines with yum and the network already configured (see the separate post on yum and network configuration).
Upload the JDK and Hadoop installation tarballs to master.

1. Set the hostnames

On master, run:

hostnamectl set-hostname master
bash

On slave1, run:

hostnamectl set-hostname slave1
bash

On slave2, run:

hostnamectl set-hostname slave2
bash

2. Host mapping

Run on all three nodes:

vi /etc/hosts

Append at the end:

192.168.26.148 master
192.168.26.149 slave1
192.168.26.150 slave2
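
A quick check that the mapping works (assuming the IPs above match your machines): ping each hostname from any of the nodes.

ping -c 2 master
ping -c 2 slave1
ping -c 2 slave2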

3. Passwordless SSH

On master only, run:

ssh-keygen -t rsa

Press Enter three times; a randomart image like the following appears (yours will differ):

+--[ RSA 2048]----+
|  .o.+.   ..     |
|. . o..   .E.    |
|.o.++  . * .     |
|..+o..  + =      |
|   +.   S+       |
|    .. .  .      |
|      .          |
|                 |
|                 |
+-----------------+

Then copy the public key to all three nodes:

ssh-copy-id -i /root/.ssh/id_rsa.pub master
ssh-copy-id -i /root/.ssh/id_rsa.pub slave1
ssh-copy-id -i /root/.ssh/id_rsa.pub slave2

For each command, type yes and then the root password when prompted.
Then verify each login in turn:

ssh master
ssh slave1
ssh slave2

Log out (exit) after each login before testing the next.
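
As a one-line sanity check (run on master), the loop below prints each node's hostname over SSH; if any iteration prompts for a password, the key copy for that node did not take:

for h in master slave1 slave2; do ssh $h hostname; done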

4. Disable the firewall

Run on all three nodes:

systemctl stop firewalld
systemctl disable firewalld

Check the firewall status:

systemctl status firewalld
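
Since passwordless SSH is already set up, you can also confirm the firewall is off on all three nodes from master (an optional convenience check; each line should print inactive):

for h in master slave1 slave2; do ssh $h systemctl is-active firewalld; done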

5. Install the JDK

On master only, extract the tarball:

tar -zxvf jdk-8u152-linux-x64.tar.gz -C /usr/

Rename the extracted directory (jdk1.8.0_152, matching the tarball version) to something simpler, jdk:

mv /usr/jdk1.8.0_152/ /usr/jdk
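
Verify the JDK from its new path (the PATH entry is added later, in step 7⑨):

/usr/jdk/bin/java -version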

6. Install Hadoop

On master only, extract the tarball:

tar -zxvf hadoop-3.1.3.tar.gz -C /usr/

Rename the extracted directory to hadoop:

mv /usr/hadoop-3.1.3/ /usr/hadoop
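
The hadoop command needs JAVA_HOME; since the environment variables are not configured yet, pass it inline as a quick check of the unpacked distribution:

JAVA_HOME=/usr/jdk /usr/hadoop/bin/hadoop version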

7. Hadoop configuration

On master only.
① Enter the configuration directory

cd /usr/hadoop/etc/hadoop

② core-site.xml

vi core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
                <description>URI of the HDFS NameNode</description>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/hadoop/tmp</value>
                <description>Local temporary directory for Hadoop on each node</description>
        </property>
</configuration>

③ hadoop-env.sh

vi hadoop-env.sh
export JAVA_HOME=/usr/jdk

④ hdfs-site.xml

vi hdfs-site.xml
<configuration>
        <property>
                <name>dfs.namenode.http-address</name>
                <value>master:9870</value>
        </property>
        <!--
        <property>
                <name>dfs.datanode.http.address</name>
                <value>master:9864</value>
                <description>
                The datanode http server address and port.
                If the port is 0 then the server will start on a free port.
                </description>
        </property>
        -->
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/hadoop/hdfs/name</value>
                <description>Where the NameNode stores the HDFS namespace metadata</description>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/usr/hadoop/hdfs/data</value>
                <description>Physical storage location of data blocks on each DataNode</description>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
                <description>Replication factor (default 3); should not exceed the number of DataNodes, which is 2 here (slave1 and slave2)</description>
        </property>
</configuration>

⑤ mapred-site.xml

vi mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
                <description>Run MapReduce on the YARN framework</description>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>
        <property>
                <name>mapreduce.map.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>
        <property>
                <name>mapreduce.reduce.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>
</configuration>

⑥ yarn-site.xml

vi yarn-site.xml

<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master</value>
                <description>Hostname of the ResourceManager</description>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
                <description>Auxiliary service on the NodeManager; must be mapreduce_shuffle for MapReduce jobs to run</description>
        </property>
</configuration>

⑦ yarn-env.sh

vi yarn-env.sh
export JAVA_HOME=/usr/jdk

⑧ workers

vi workers

Delete the existing content and add:

slave1
slave2

⑨ Configure environment variables

vi /etc/profile

Append:

#jdk
export JAVA_HOME=/usr/jdk
export PATH=$PATH:$JAVA_HOME/bin

#hadoop
export HADOOP_HOME=/usr/hadoop
export PATH=$HADOOP_HOME/bin:$PATH:$HADOOP_HOME/sbin

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Refresh the environment variables:

source /etc/profile
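
Both tools should now be on the PATH; verify with:

java -version
hadoop version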

8. Distribute the files

JDK:

scp -r /usr/jdk slave1:/usr/
scp -r /usr/jdk slave2:/usr/

Hadoop
First create the data directories referenced in the configuration files:

mkdir /usr/hadoop/tmp
mkdir -p /usr/hadoop/hdfs/name
mkdir /usr/hadoop/hdfs/data
scp -r /usr/hadoop slave1:/usr/
scp -r /usr/hadoop slave2:/usr/

Environment variables:

scp /etc/profile slave1:/etc/profile
scp /etc/profile slave2:/etc/profile
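
To confirm the copies landed, list the directories on each slave from master (a quick check):

ssh slave1 ls /usr/jdk /usr/hadoop
ssh slave2 ls /usr/jdk /usr/hadoop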

Refresh the environment variables on slave1 and slave2 as well (log in to each and run):

source /etc/profile

9. Format the NameNode

On master, run:

hdfs namenode -format
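
Format only once. If you ever need to reformat (for example after a failed first attempt), stop the cluster and clear the data directories on every node first, otherwise the DataNodes will refuse to join because of mismatched cluster IDs. A sketch of that cleanup, assuming the directory layout above:

stop-all.sh
for h in master slave1 slave2; do ssh $h "rm -rf /usr/hadoop/tmp/* /usr/hadoop/hdfs/name/* /usr/hadoop/hdfs/data/*"; done
hdfs namenode -format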

10. Start the cluster

start-all.sh
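
start-all.sh launches both HDFS and YARN and is marked deprecated in Hadoop 3; starting the two layers separately is equivalent:

start-dfs.sh
start-yarn.sh

To stop everything later, run stop-all.sh.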

11. Check the cluster status

① jps
On master, jps should show:

NameNode
SecondaryNameNode
Jps
ResourceManager

On each slave:

Jps
DataNode
NodeManager
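
Beyond jps, HDFS and YARN can each report their view of the cluster; both commands below should list two live nodes (slave1 and slave2):

hdfs dfsadmin -report
yarn node -list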

② Web UIs

NameNode: http://192.168.26.148:9870
ResourceManager: http://192.168.26.148:8088

12. Run the MapReduce word-count example

Create a local file:

cd /root
vi 1.txt

Write the following content:

Give me the strength lightly to bear my joys and sorrows.Give me the strength to make my love fruitful in service.Give me the strength never to disown the poor or bend my knees before insolent might.Give me the strength to raise my mind high above daily trifles.And give me the strength to surrender my strength to thy will with love.

Create an HDFS directory and upload the file:

hadoop fs -mkdir /input
hadoop fs -put /root/1.txt /input
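
Confirm the upload before running the job:

hadoop fs -ls /input
hadoop fs -cat /input/1.txt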

Run the job:

cd /usr/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-3.1.3.jar wordcount /input/1.txt /output

Adjust the hadoop-mapreduce-examples-3.1.3.jar filename to match your actual Hadoop version.

View the results:

hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000
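
Note that the job fails if /output already exists; to re-run the word count, delete the output directory first:

hadoop fs -rm -r /output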

Expected output:

Give	1
above	1
and	1
bear	1
before	1
bend	1
daily	1
disown	1
fruitful	1
give	1
high	1
in	1
insolent	1
joys	1
knees	1
lightly	1
love	1
love.	1
make	1
me	5
might.Give	1
mind	1
my	5
never	1
or	1
poor	1
raise	1
service.Give	1
sorrows.Give	1
strength	6
surrender	1
the	6
thy	1
to	6
trifles.And	1
will	1
with	1