CentOS 7.5 + Hadoop 3.1.2 hands-on illustrated guide -- continuously updated through 2019

I. Single-node Hadoop deployment (standalone, non-distributed)

1. Prepare the environment

(1) Swap space

dd if=/dev/zero of=swap bs=1M count=2048
chmod 0600 swap
mkswap swap
swapon swap

(Restrict the file's permissions before enabling it; swapon complains about swap files that other users can read.)
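The swap file above only stays active until the next reboot. To make it permanent, an entry can be added to /etc/fstab (a sketch -- the first field must be the absolute path of the file the dd command actually created):

```
/path/to/swap swap swap defaults 0 0
```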

(2) Local name resolution

vim /etc/hosts

192.168.100.1 server

2. Install Hadoop and set up the Java environment

yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

tar zxvf hadoop-3.1.2.tar.gz -C /usr/local/

cd /usr/local
ln -s hadoop-3.1.2/ hadoop

vim /etc/profile

PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64

(The exact JRE directory depends on the installed package version; check with ls /usr/lib/jvm/.)

source /etc/profile

3. Test

hadoop version

cd /usr/local/hadoop/share/hadoop/mapreduce

hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10000000000

(The two arguments are the number of map tasks and the number of samples per map; 10000000000 samples makes for a very long run -- a smaller value such as 1000 is enough for a smoke test.)
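The pi example estimates π by random sampling: it scatters points in the unit square and counts the fraction that land inside the quarter circle. The same idea can be sketched locally in a few lines of awk (illustration only, not part of Hadoop):

```shell
# Monte Carlo estimate of pi: the fraction of random points in the unit
# square that fall inside the quarter circle approximates pi/4.
awk 'BEGIN {
    srand(42)                      # fixed seed for repeatability
    n = 200000                     # number of sample points
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x*x + y*y <= 1) hits++
    }
    printf "%.4f\n", 4 * hits / n  # approaches 3.14... as n grows
}'
```

More samples shrink the error, which is exactly why the Hadoop example splits the sampling across map tasks.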

II. Single-node Hadoop deployment (pseudo-distributed)

1. Passwordless SSH login

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.100.1

(Verify with ssh 192.168.100.1 -- it should log in without prompting for a password.)

2. Configure HDFS

All of the configuration files below live in /usr/local/hadoop/etc/hadoop/.

vim hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

vim core-site.xml

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>

<property>
<name>fs.defaultFS</name>
<value>hdfs://server:9000</value>
</property>

(fs.default.name is the deprecated Hadoop 1 name for this property.)
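Hadoop only picks these settings up if they sit inside the file's <configuration> root element -- the same applies to every *-site.xml below. The complete core-site.xml for this setup would look roughly like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <!-- fs.defaultFS is the current name for the deprecated fs.default.name -->
    <name>fs.defaultFS</name>
    <value>hdfs://server:9000</value>
  </property>
</configuration>
```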

vim hdfs-site.xml

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>

(In Hadoop 3 the property is dfs.permissions.enabled; disabling permission checks is convenient for a test setup but not something to do in production.)

hdfs namenode -format

start-dfs.sh (stop again with stop-dfs.sh)

hdfs dfsadmin -report

(hadoop namenode and hadoop dfsadmin still work in Hadoop 3 but are deprecated in favor of the hdfs command.)

3. Configure MapReduce

vim mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

(mapreduce.job.tracker is a Hadoop 1 JobTracker setting; YARN replaced the JobTracker, so it has no effect in Hadoop 3 and can be left out.)
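Hadoop 3 MapReduce containers also need to know where the Hadoop installation lives; without this, example jobs often fail with an error like "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster". A commonly used addition to mapred-site.xml (a sketch, assuming the /usr/local/hadoop symlink created earlier):

```xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
```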

4. Configure YARN

hadoop classpath

(Run hadoop classpath and use its output as the value of yarn.application.classpath below.)


vim yarn-site.xml

<property>
<name>yarn.resourcemanager.hostname</name>
<value>server</value>
</property>

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.application.classpath</name>
<value>/usr/local/hadoop-3.1.2/etc/hadoop:/usr/local/hadoop-3.1.2/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/common/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn:/usr/local/hadoop-3.1.2/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn/*</value>
</property>
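On a small VM, the example job's containers can be killed for exceeding YARN's virtual-memory limits. If that happens, the check can be relaxed in yarn-site.xml (optional, for test setups only):

```xml
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```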

5. Start and test

start-all.sh (stop again with stop-all.sh)

cd /usr/local/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10

HDFS web UI: http://192.168.100.1:9870

YARN (MapReduce jobs) web UI: http://192.168.100.1:8088

Reposted from: https://blog.51cto.com/72932/2357076
