Deploying Hadoop 2 on Linux

  1. After the download completes, the home directory looks like this:

[hadoop@node0 ~]$ ls ~

hadoop-2.7.7.tar.gz jdk-8u191-linux-x64.tar.gz

JDK setup (do this on all three machines)

  1. Extract the jdk-8u191-linux-x64.tar.gz archive:

tar -zxvf ~/jdk-8u191-linux-x64.tar.gz

  2. Open ~/.bash_profile and append the following lines at the end (a one-shot heredoc variant is sketched at the end of this section):

export JAVA_HOME=/home/hadoop/jdk1.8.0_191

export JRE_HOME=${JAVA_HOME}/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:$PATH

  3. Run source ~/.bash_profile to apply the JDK settings;

  4. Run java -version to confirm the setup succeeded:

[hadoop@node0 ~]$ java -version

java version "1.8.0_191"

Java(TM) SE Runtime Environment (build 1.8.0_191-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
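As a side note, the .bash_profile additions from step 2 can also be appended in one shot with a heredoc; this is only a convenience sketch, equivalent to editing the file by hand:

cat >> ~/.bash_profile <<'EOF'
export JAVA_HOME=/home/hadoop/jdk1.8.0_191
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
EOF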

Create the folders Hadoop will use (do this on all three machines)

Create the folders that Hadoop will need later:

mkdir -p ~/work/tmp/dfs/name && mkdir -p ~/work/tmp/dfs/data
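To confirm the layout, you can list the directories that were just created (a sanity check only; listing order may vary):

find ~/work -type d
# /home/hadoop/work
# /home/hadoop/work/tmp
# /home/hadoop/work/tmp/dfs
# /home/hadoop/work/tmp/dfs/name
# /home/hadoop/work/tmp/dfs/data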

Hadoop setup

  1. Log in to node0 as the hadoop user;

  2. Extract the Hadoop archive:

tar -zxvf hadoop-2.7.7.tar.gz

  3. Change into the directory ~/hadoop-2.7.7/etc/hadoop;

  4. Edit hadoop-env.sh, mapred-env.sh, and yarn-env.sh in turn, making sure each of them sets JAVA_HOME correctly, as follows:

export JAVA_HOME=/home/hadoop/jdk1.8.0_191

  5. Edit core-site.xml, find the configuration node, and change it to the following:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node0:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/work/tmp</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
</configuration>

  6. Edit hdfs-site.xml, find the configuration node, and change it to the following, which makes node2 the secondary namenode:

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
</configuration>

  7. Edit the slaves file, delete the existing "localhost", and add the following two lines:

node1

node2

  8. Edit yarn-site.xml, find the configuration node, and change it to the following:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node0</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>
</configuration>

  9. Rename mapred-site.xml.template to mapred-site.xml:

mv mapred-site.xml.template mapred-site.xml

  10. Edit mapred-site.xml, find the configuration node, and change it to the following:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node0:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node0:19888</value>
    </property>
</configuration>

  11. Copy the entire hadoop-2.7.7 directory to the home directory on node1:

scp -r ~/hadoop-2.7.7 hadoop@node1:~/

  12. Copy the entire hadoop-2.7.7 directory to the home directory on node2:

scp -r ~/hadoop-2.7.7 hadoop@node2:~/
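Before moving on, it is worth a quick sanity check that the copies landed and that the configuration is being picked up. A minimal sketch (fs.defaultFS is just one example key; hdfs getconf can read back any of the properties set above):

# confirm the tree exists on both workers
ssh hadoop@node1 'ls ~/hadoop-2.7.7/etc/hadoop/core-site.xml'
ssh hadoop@node2 'ls ~/hadoop-2.7.7/etc/hadoop/core-site.xml'

# read back an effective configuration value on node0; should print hdfs://node0:8020
~/hadoop-2.7.7/bin/hdfs getconf -confKey fs.defaultFS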

Format HDFS

Run the following command on node0 to format HDFS:

~/hadoop-2.7.7/bin/hdfs namenode -format
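If formatting succeeds, the name directory created earlier should now contain namenode metadata. A quick check (the exact file names are typical for Hadoop 2.x and are an assumption here):

ls ~/work/tmp/dfs/name/current
# typically shows VERSION, seen_txid and an initial fsimage_... file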

Start Hadoop

  1. Run the following command on node0 to start HDFS:

~/hadoop-2.7.7/sbin/start-dfs.sh

  2. Run the following command on node0 to start YARN:

~/hadoop-2.7.7/sbin/start-yarn.sh

  3. Run the following command on node0 to start the ResourceManager:

~/hadoop-2.7.7/sbin/yarn-daemon.sh start resourcemanager

  4. Run the following command on node0 to start the job history service:

~/hadoop-2.7.7/sbin/mr-jobhistory-daemon.sh start historyserver

  5. After everything has started, run jps on node0 to check the Java processes:

[hadoop@node0 ~]$ jps

3253 JobHistoryServer

2647 NameNode

3449 Jps

2941 ResourceManager

  6. Run jps on node1 to check the Java processes:

[hadoop@node1 ~]$ jps

2176 DataNode

2292 NodeManager

2516 Jps

  7. Run jps on node2 to check the Java processes:

[hadoop@node2 ~]$ jps

1991 DataNode

2439 Jps

2090 SecondaryNameNode

2174 NodeManager

At this point, Hadoop has started successfully.
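You can also hit the web UIs at this point. The sketch below is just a reachability check with curl, run on node0: ports 8088 and 19888 come from the configuration above, while 50070 is assumed to be the Hadoop 2.x NameNode UI default.

curl -s -o /dev/null -w '%{http_code}\n' http://node0:50070    # NameNode UI (assumed default port)
curl -s -o /dev/null -w '%{http_code}\n' http://node0:8088     # ResourceManager UI
curl -s -o /dev/null -w '%{http_code}\n' http://node0:19888    # JobHistory UI
# a 200 (or a redirect code) from each suggests the corresponding daemon is serving its UI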

Verify Hadoop

Now run the classic WordCount program to check that Hadoop is working properly:

  1. Log in to node0 as the hadoop user and create a file named test.txt in the home directory with the following content:

hadoop mapreduce hive

hbase spark storm

sqoop hadoop hive

spark hadoop

  2. Create a directory on HDFS:

~/hadoop-2.7.7/bin/hdfs dfs -mkdir /input

  3. Upload test.txt to the /input directory on HDFS:

~/hadoop-2.7.7/bin/hdfs dfs -put ~/test.txt /input

  4. Run the wordcount example that ships with the Hadoop distribution:

~/hadoop-2.7.7/bin/yarn \

jar ~/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar \

wordcount \

/input/test.txt \

/output

The console output looks like this:

[hadoop@node0 ~]$ ~/hadoop-2.7.7/bin/yarn \

jar ~/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar \

wordcount \

/input/test.txt \

/output

19/02/08 14:34:28 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.119.164:8032

19/02/08 14:34:29 INFO input.FileInputFormat: Total input paths to process : 1

19/02/08 14:34:29 INFO mapreduce.JobSubmitter: number of splits:1

19/02/08 14:34:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1549606965916_0001

19/02/08 14:34:30 INFO impl.YarnClientImpl: Submitted application application_1549606965916_0001

19/02/08 14:34:30 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1549606965916_0001/

19/02/08 14:34:30 INFO mapreduce.Job: Running job: job_1549606965916_0001

19/02/08 14:34:36 INFO mapreduce.Job: Job job_1549606965916_0001 running in uber mode : false

19/02/08 14:34:36 INFO mapreduce.Job: map 0% reduce 0%

19/02/08 14:34:41 INFO mapreduce.Job: map 100% reduce 0%

19/02/08 14:34:46 INFO mapreduce.Job: map 100% reduce 100%

19/02/08 14:34:46 INFO mapreduce.Job: Job job_1549606965916_0001 completed successfully

19/02/08 14:34:46 INFO mapreduce.Job: Counters: 49

File System Counters

FILE: Number of bytes read=94

FILE: Number of bytes written=245525

FILE: Number of read operations=0

FILE: Number of large read operations=0

FILE: Number of write operations=0

HDFS: Number of bytes read=168

HDFS: Number of bytes written=60

HDFS: Number of read operations=6

HDFS: Number of large read operations=0

HDFS: Number of write operations=2

Job Counters

Launched map tasks=1

Launched reduce tasks=1

Data-local map tasks=1

Total time spent by all maps in occupied slots (ms)=2958

Total time spent by all reduces in occupied slots (ms)=1953

Total time spent by all map tasks (ms)=2958

Total time spent by all reduce tasks (ms)=1953

Total vcore-milliseconds taken by all map tasks=2958

Total vcore-milliseconds taken by all reduce tasks=1953

Total megabyte-milliseconds taken by all map tasks=3028992

Total megabyte-milliseconds taken by all reduce tasks=1999872

Map-Reduce Framework

Map input records=4

Map output records=11
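When the job completes, the word counts are written to /output on HDFS. A quick way to read them back is sketched below; the reducer output file is conventionally named part-r-00000, so /output/* is used to be safe, and the counts in the comments are simply what the test.txt above should produce:

~/hadoop-2.7.7/bin/hdfs dfs -ls /output
~/hadoop-2.7.7/bin/hdfs dfs -cat /output/*
# expected counts (tab-separated), derived from test.txt:
# hadoop    3
# hbase     1
# hive      2
# mapreduce 1
# spark     2
# sqoop     1
# storm     1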
