Setting up a Hadoop cluster on Linux and using MapReduce

1 Introduction to Hadoop distributed storage

2 Setting up the Hadoop cluster

This experiment builds a three-node Hadoop cluster.

Experiment environment:
Host operating system: Windows 10
Virtualization software: VMware Workstation
VM 1 operating system: Ubuntu 20.04 LTS
VM 2 operating system: Ubuntu 20.04 LTS
VM 3 operating system: Ubuntu 20.04 LTS

2.1 Create the hadoop user on each node and configure passwordless SSH between nodes

Perform the following steps on each node:

# Create a new user named hadoop
root@hadoop1:~$ adduser hadoop

# Grant the hadoop user sudo privileges
root@hadoop1:~$ chmod -v u+w /etc/sudoers
root@hadoop1:~$ vi /etc/sudoers
Below the line root    ALL=(ALL)       ALL, add:
hadoop    ALL=(ALL)       ALL
root@hadoop1:~$ chmod -v u-w /etc/sudoers

# Switch to the hadoop user to configure passwordless SSH
root@hadoop1:~$ su - hadoop

# Copy root's .ssh directory into the hadoop user's home and distribute it to every node
# (this assumes /root/.ssh already holds a key pair plus an authorized_keys file containing that public key; otherwise generate one with ssh-keygen first)
hadoop@hadoop1:~$ sudo cp -r /root/.ssh ./
hadoop@hadoop1:~$ sudo chown -R hadoop:hadoop ./.ssh
hadoop@hadoop1:~$ scp -r ./.ssh hadoop2:~/
hadoop@hadoop1:~$ scp -r ./.ssh hadoop3:~/

# Fix the permissions (on every node)
hadoop@hadoop1:~$ chmod 600 ~/.ssh/authorized_keys
hadoop@hadoop1:~$ chmod 600 ~/.ssh/config

If you can ssh between the hadoop users on the nodes without being prompted for a password, the configuration is complete.
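If the hostnames hadoop1/hadoop2/hadoop3 do not resolve yet, add them to /etc/hosts on every node (the addresses below are placeholders; use your VMs' actual IP addresses):

192.168.1.101 hadoop1
192.168.1.102 hadoop2
192.168.1.103 hadoop3

A quick way to verify the passwordless setup from hadoop1 is to run a remote command against each node; every line should print the remote hostname without asking for a password:

hadoop@hadoop1:~$ ssh hadoop1 hostname
hadoop@hadoop1:~$ ssh hadoop2 hostname
hadoop@hadoop1:~$ ssh hadoop3 hostname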

2.2 Install Java and Hadoop on each node

Perform the following steps as the hadoop user on each node:

# Install Java
hadoop@hadoop1:~$ wget http://bigdata.cg.lzu.edu.cn/bigdata_software/jdk-8u321-linux-x64.tar.gz
hadoop@hadoop1:~$ tar -zxvf jdk-8u321-linux-x64.tar.gz
hadoop@hadoop1:~$ vi ~/.bashrc
Add the following lines:
export JAVA_HOME=~/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar

hadoop@hadoop1:~$ source ~/.bashrc
hadoop@hadoop1:~$ java -version

# Install Hadoop
hadoop@hadoop1:~$ wget http://bigdata.cg.lzu.edu.cn/bigdata_software/hadoop-3.2.3.tar.gz
hadoop@hadoop1:~$ tar -zxvf hadoop-3.2.3.tar.gz
hadoop@hadoop1:~$ vi ~/.bashrc
Add the following lines:
export HADOOP_HOME=~/hadoop-3.2.3
export HADOOP_MAPRED_HOME=~/hadoop-3.2.3
export HADOOP_YARN_HOME=~/hadoop-3.2.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HDFS_NAMENODE_USER=hadoop
export HDFS_DATANODE_USER=hadoop
export HDFS_SECONDARYNAMENODE_USER=hadoop
export YARN_RESOURCEMANAGER_USER=hadoop
export YARN_NODEMANAGER_USER=hadoop

hadoop@hadoop1:~$ source ~/.bashrc
hadoop@hadoop1:~$ hadoop version

2.3 Configure Hadoop

Configure Hadoop on the hadoop1 node:

hadoop@hadoop1:~$ cd ~/hadoop-3.2.3/etc/hadoop/

hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi hadoop-env.sh
Add:
export JAVA_HOME=/home/hadoop/jdk1.8.0_321
export HADOOP_HOME=/home/hadoop/hadoop-3.2.3

hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi core-site.xml
Add the content given below
hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi hdfs-site.xml
Add the content given below
hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi mapred-site.xml
Add the content given below
hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi yarn-site.xml
Add the content given below

# List every node of the cluster by hostname or IP address
hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ vi workers
Change the contents to:
hadoop1
hadoop2
hadoop3

# Copy the hadoop directory to the other nodes
hadoop@hadoop1:~/hadoop-3.2.3/etc/hadoop/$ cd ~
hadoop@hadoop1:~$ scp -r ~/hadoop-3.2.3 hadoop2:~/
hadoop@hadoop1:~$ scp -r ~/hadoop-3.2.3 hadoop3:~/

# Format the HDFS filesystem
hadoop@hadoop1:~$ hdfs namenode -format

core-site.xml mainly specifies the default filesystem address and port, as well as the base directory of the Hadoop filesystem. Contents:
Note: hadoop1 here should be an IP address (or a hostname that every node can resolve).

<configuration>
<!-- HDFS service address and port -->
<property>
        <name>fs.defaultFS</name>
         <value>hdfs://hadoop1:9000</value>
</property>
<!-- Base directory for the Hadoop filesystem; the namenode, secondarynamenode and datanode directories live under it by default (default: /tmp/hadoop-${user.name}) -->
<property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-3.2.3</value>
</property>
</configuration>

hdfs-site.xml mainly specifies the namenode and secondarynamenode nodes and their service ports, as well as the replication factor and permission checking:

<configuration>
<!-- HTTP address of the HDFS namenode, default 0.0.0.0:9870 -->
<property>
        <name>dfs.namenode.http-address</name>
         <value>hadoop1:9870</value>
</property>
<!-- HTTP address of the HDFS secondarynamenode, default 0.0.0.0:9868; to place the secondarynamenode on another node, just change the hostname -->
<property>
        <name>dfs.namenode.secondary.http-address</name>
         <value>hadoop1:9868</value>
</property>
<!-- HDFS replication factor, default 3; set to 1 for a single-node pseudo-cluster -->
<property>
        <name>dfs.replication</name>
         <value>3</value>
</property>
<!-- Whether HDFS permission checking is enabled; the default is true (enabled), set to false to disable it -->
<property>
        <name>dfs.permissions.enabled</name>
         <value>false</value>
</property>
<!-- Block size, default 128 MB -->
<property>
	<name>dfs.blocksize</name>
	<value>128m</value>
</property>
</configuration>

mapred-site.xml mainly specifies the MapReduce framework (YARN) along with some environment variables and classpaths:

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
<property>
        <name>mapreduce.admin.user.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.2.3</value>
</property>
<property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
<property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.2.3</value>
</property>
</configuration>

yarn-site.xml mainly specifies the YARN resourcemanager node and a few other settings:

<configuration>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
        <name>yarn.resourcemanager.hostname</name>
                <value>hadoop1</value>
</property>
<property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
</configuration>

2.4 Start Hadoop

Start the cluster services (run these on the hadoop1 node only):

# Start HDFS
hadoop@hadoop1:~$ start-dfs.sh
# Start YARN: the ResourceManager and NodeManager daemons
hadoop@hadoop1:~$ start-yarn.sh
# Start the MapReduce JobHistory Server and the YARN timeline server
hadoop@hadoop1:~$ mapred --daemon start historyserver
hadoop@hadoop1:~$ yarn --daemon start timelineserver

# Check which daemons are running
hadoop@hadoop1:~$ jps

You can also access the web UIs in a browser:
NameNode: http://hadoop1:9870/
YARN ResourceManager: http://hadoop1:8088/
MapReduce JobHistory Server: http://hadoop1:19888/

Some common operations on Hadoop:

# Check the health of the filesystem
hadoop@hadoop1:~$ hdfs fsck /

# Create the HDFS user directory
hadoop@hadoop1:~$ hdfs dfs -mkdir /user
hadoop@hadoop1:~$ hdfs dfs -mkdir /user/hadoop/
hadoop@hadoop1:~$ hdfs dfs -ls /

# Create an empty file
hadoop@hadoop1:~$ hdfs dfs -touchz /directory/filename

# Check a file's size
hadoop@hadoop1:~$ hdfs dfs -du -s /directory/filename

# View a file's contents
hadoop@hadoop1:~$ hdfs dfs -cat /path/to/file_in_hdfs

# Upload a file from the local filesystem
hadoop@hadoop1:~$ hdfs dfs -copyFromLocal <localsrc> <hdfs destination>
hadoop@hadoop1:~$ hdfs dfs -put <localsrc> <destination>

# Download a file to the local filesystem
hadoop@hadoop1:~$ hdfs dfs -copyToLocal <hdfs source> <localdst>
hadoop@hadoop1:~$ hdfs dfs -get <src> <localdst>

# Count the directories, files and bytes under a path
hadoop@hadoop1:~$ hdfs dfs -count <path>

# Delete a file
hadoop@hadoop1:~$ hdfs dfs -rm <path>

# Copy a file
hadoop@hadoop1:~$ hdfs dfs -cp <src> <dest>

# Move a file
hadoop@hadoop1:~$ hdfs dfs -mv <src> <dest>

# Empty the trash (files/directories removed with -rm)
hadoop@hadoop1:~$ hdfs dfs -expunge

# Remove an empty directory
hadoop@hadoop1:~$ hdfs dfs -rmdir <path>

# Show usage for a command
hadoop@hadoop1:~$ hdfs dfs -usage <command>
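
For example, as a quick end-to-end check (the file name hello.txt is just an illustration):

# Create a small local file, upload it to HDFS, read it back, then delete it
hadoop@hadoop1:~$ echo "hello hdfs" > ~/hello.txt
hadoop@hadoop1:~$ hdfs dfs -put ~/hello.txt /user/hadoop/
hadoop@hadoop1:~$ hdfs dfs -cat /user/hadoop/hello.txt
hadoop@hadoop1:~$ hdfs dfs -rm /user/hadoop/hello.txt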

Shut down the cluster services:

hadoop@hadoop1:~$ stop-yarn.sh
hadoop@hadoop1:~$ stop-dfs.sh 
hadoop@hadoop1:~$ mapred --daemon stop historyserver
hadoop@hadoop1:~$ yarn --daemon stop timelineserver

Official documentation link: guide

3 Using MapReduce on Hadoop

The following implements the classic word count (WordCount) with MapReduce.
First start the virtual machines and the cluster services, then create the source file in the hadoop user's home directory:

hadoop@hadoop1:~$ vi WordCount.java

Paste in the source code:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

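  // Mapper: splits each input line into whitespace-separated tokens and emits (word, 1) for every token.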
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

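  // Reducer (also used as the combiner): sums the counts received for each word and emits (word, total).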
  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

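  // Driver: configures and submits the job; args[0] is the HDFS input directory, args[1] is the output directory.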
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Then run:

# Compile WordCount.java and package it into a jar file (saved in the ~/ directory)
hadoop@hadoop1:~$ hadoop com.sun.tools.javac.Main WordCount.java
hadoop@hadoop1:~$ jar cf wc.jar WordCount*.class
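
If you do not yet have any input files, you can first create two small sample files (the names file01 and file02 match the commands below; their contents follow the official WordCount tutorial's example):

hadoop@hadoop1:~$ echo "Hello World Bye World" > ~/file01
hadoop@hadoop1:~$ echo "Hello Hadoop Goodbye Hadoop" > ~/file02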

# Create the wordcount and input directories; the output directory is created automatically when the MapReduce job runs
hadoop@hadoop1:~$ hadoop fs -mkdir -p /user
hadoop@hadoop1:~$ hadoop fs -mkdir -p /user/hadoop
hadoop@hadoop1:~$ hadoop fs -mkdir -p /user/hadoop/wordcount
hadoop@hadoop1:~$ hadoop fs -mkdir -p /user/hadoop/wordcount/input

# Copy all the input files into the input directory (assuming the files are in the user's home directory)
hadoop@hadoop1:~$ hadoop fs -copyFromLocal ~/file01 /user/hadoop/wordcount/input/
hadoop@hadoop1:~$ hadoop fs -copyFromLocal ~/file02 /user/hadoop/wordcount/input/
# Check the input directory
hadoop@hadoop1:~$ hadoop fs -ls /user/hadoop/wordcount/input/

# Run the MapReduce job
hadoop@hadoop1:~$ hadoop jar wc.jar WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output

# View the results
hadoop@hadoop1:~$ hadoop fs -cat /user/hadoop/wordcount/output/part-r-00000
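
For reference, if the two sample files suggested above are used as input, the output should contain:

Bye	1
Goodbye	1
Hadoop	2
Hello	2
World	2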

Official documentation link: guide
