Hadoop 2.9.1 Pseudo-Distributed Environment Setup

1. Preparation

1.1 Install a CentOS 7 virtual machine in VMware


1.2 System configuration

Configure the network:

# vi /etc/sysconfig/network-scripts/ifcfg-ens33

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.168.120.131

GATEWAY=192.168.120.2

NETMASK=255.255.255.0

DNS1=8.8.8.8

DNS2=4.4.4.4
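
After saving the file, restart the network service so the static address takes effect, and verify it (a quick check, assuming the interface is ens33 as above):

# systemctl restart network

# ip addr show ens33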


1.3 Set the hostname

# hostnamectl set-hostname master1

# hostname master1
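
Optionally, map the hostname to the static IP in /etc/hosts. The configuration files below use the IP address directly, so this is not strictly required, but it makes the hostname resolvable:

# echo "192.168.120.131 master1" >> /etc/hosts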


1.4 Set the time zone (if it is not Asia/Shanghai)

# ll /etc/localtime

lrwxrwxrwx. 1 root root 35 Jun  4 19:25 /etc/localtime -> ../usr/share/zoneinfo/Asia/Shanghai


If the time zone is wrong, fix it like this:

# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
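
On CentOS 7 the same thing can also be done with timedatectl:

# timedatectl set-timezone Asia/Shanghai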


1.5 Upload the packages

hadoop-2.9.1.tar

jdk-8u171-linux-x64.tar
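
Any SFTP tool works for the upload; scp from the machine that holds the files is one option (placeholder paths, targeting the hadoop user's home directory created in section 2.1):

$ scp hadoop-2.9.1.tar jdk-8u171-linux-x64.tar hadoop@192.168.120.131:~/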


2. Environment setup

2.1 Create the user and group

[root@master1 ~]# groupadd hadoop

[root@master1 ~]# useradd -g hadoop hadoop

[root@master1 ~]# passwd hadoop


2.2 Extract the packages

Switch to the hadoop user:

[root@master1 ~]# su hadoop


Create a directory to hold the packages:

[hadoop@master1 root]$ cd

[hadoop@master1 ~]$ mkdir src

[hadoop@master1 ~]$ mv *.tar src


Extract the packages:

[hadoop@master1 ~]$ cd src

[hadoop@master1 src]$ tar -xf jdk-8u171-linux-x64.tar -C ../

[hadoop@master1 src]$ tar xf hadoop-2.9.1.tar -C ../

[hadoop@master1 src]$ cd

[hadoop@master1 ~]$ mv jdk1.8.0_171 jdk

[hadoop@master1 ~]$ mv hadoop-2.9.1 hadoop


2.3 Configure environment variables

[hadoop@master1 ~]$ vi .bashrc

export JAVA_HOME=/home/hadoop/jdk

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin

export HADOOP_HOME=/home/hadoop/hadoop

export PATH=$PATH:$HADOOP_HOME/bin


Apply the changes:

[hadoop@master1 ~]$ source .bashrc


Verify:

[hadoop@master1 ~]$ java -version

java version "1.8.0_171"

Java(TM) SE Runtime Environment (build 1.8.0_171-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)


[hadoop@master1 ~]$ hadoop version

Hadoop 2.9.1

Subversion https://github.com/apache/hadoop.git -r e30710aea4e6e55e69372929106cf119af06fd0e

Compiled by root on 2018-04-16T09:33Z

Compiled with protoc 2.5.0

From source with checksum 7d6d2b655115c6cc336d662cc2b919bd

This command was run using /home/hadoop/hadoop/share/hadoop/common/hadoop-common-2.9.1.jar


2.4 Edit the Hadoop configuration files

[hadoop@master1 ~]$ cd hadoop/etc/hadoop/

[hadoop@master1 hadoop]$ vi hadoop-env.sh

export JAVA_HOME=/home/hadoop/jdk


[hadoop@master1 hadoop]$ vi core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.120.131:9000</value>
    </property>
</configuration>
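
If master1 was mapped in /etc/hosts as suggested in section 1.3, the value could equally be written as hdfs://master1:9000; for a single-node setup the IP address works just as well.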


[hadoop@master1 hadoop]$ vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/hdfs/dn</value>
    </property>
</configuration>
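
For a pseudo-distributed setup with only one DataNode it is also common to lower the block replication factor from its default of 3 by adding a property like the following to the same file (optional, not part of the minimal setup):

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>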


[hadoop@master1 hadoop]$ cp mapred-site.xml.template mapred-site.xml

[hadoop@master1 hadoop]$ vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>


[hadoop@master1 hadoop]$ vi yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.120.131</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///data/hadoop/yarn/nm</value>
    </property>
</configuration>


2.5 Create the data directories and set ownership

[hadoop@master1 hadoop]$ exit

[root@master1 ~]# mkdir -p /data/hadoop/hdfs/{nn,dn,snn}

[root@master1 ~]# mkdir -p /data/hadoop/yarn/nm

[root@master1 ~]# chown -R hadoop:hadoop /data
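
A quick check that the directories exist and are owned by the hadoop user:

[root@master1 ~]# ls -lR /data/hadoop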


2.6 Format the filesystem and start the services

[root@master1 ~]# su hadoop

[hadoop@master1 ~]$ cd hadoop/bin

[hadoop@master1 bin]$ ./hdfs namenode -format
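
If formatting succeeds, the output should contain a line similar to "Storage directory /data/hadoop/hdfs/nn has been successfully formatted." before the NameNode exits.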


[hadoop@master1 bin]$ cd ../sbin

[hadoop@master1 sbin]$ ./hadoop-daemon.sh start namenode

[hadoop@master1 sbin]$ ./hadoop-daemon.sh start datanode

[hadoop@master1 sbin]$ ./yarn-daemon.sh start resourcemanager

[hadoop@master1 sbin]$ ./yarn-daemon.sh start nodemanager

[hadoop@master1 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
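
With all five daemons started, jps run as the hadoop user should list NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer. The default Hadoop 2.x web UIs are the NameNode on port 50070 and the ResourceManager on port 8088. A minimal HDFS smoke test:

[hadoop@master1 sbin]$ jps

[hadoop@master1 sbin]$ hdfs dfs -mkdir -p /user/hadoop

[hadoop@master1 sbin]$ hdfs dfs -ls /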

