Installing and Configuring Hadoop 2.9.1, and Running the Bundled WordCount Example in IDEA

I. Basic Linux Setup

1. Create a hadoop User

If the system was not installed with a hadoop user, it is best to create one now. Set the password to hadoop (any password will do) and create the new user with the commands below. The hadoop user is created here as root; unless stated otherwise, every command in this article is run as root. If you are logged in as a different user, prefix the commands with sudo where necessary.

useradd -m hadoop -s /bin/bash   # create the hadoop user with a home directory and bash as its login shell
passwd hadoop                    # set the hadoop user's password
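If you intend to run the remaining steps as the hadoop user rather than root, you can optionally give it administrative rights (a sketch; the admin group is sudo on Ubuntu/Debian and wheel on CentOS/RHEL):

usermod -aG sudo hadoop    # Ubuntu / Debian
usermod -aG wheel hadoop   # CentOS / RHEL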

 

2. Install an SSH Server

If the SSH service was not selected when Linux (CentOS or Ubuntu) was installed, it needs to be installed now; if it is already present, skip this step. Run ps -ef | grep ssh to check: if an sshd process shows up in the output, the SSH service is already installed.

If it is not installed, install it with one of the following commands:

apt-get install openssh-server   # Ubuntu
yum install openssh-server       # CentOS

After the installation finishes, run ps -ef | grep ssh again to confirm the sshd process is running.
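On distributions using systemd you can also check (and if necessary start) the service directly; a quick sketch, noting that the unit is named ssh on Ubuntu and sshd on CentOS:

systemctl status sshd    # CentOS (use "ssh" on Ubuntu)
systemctl start sshd     # start it if it is not running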

 

3. Configure Password-less SSH Login

cd ~/.ssh/                         # if this directory does not exist, run ssh localhost once first
ssh-keygen -t rsa                  # press Enter at every prompt
cat id_rsa.pub >> authorized_keys  # authorize the key
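You can then verify that password-less login works; if you are still asked for a password, the permissions on the key files are the usual culprit (a quick check):

chmod 600 ~/.ssh/authorized_keys   # authorized_keys must not be group/world writable
ssh localhost                      # should log in without prompting for a password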

 

II. Software Installation and Configuration

1. Install the JDK

① Download the JDK archive jdk-8u77-linux-x64.gz.

② Extract jdk-8u77-linux-x64.gz into /usr/local/java/ (create the directory first if it does not exist):

tar -xzvf jdk-8u77-linux-x64.gz -C /usr/local/java/

③ Configure the environment variables by appending the following lines to the end of /etc/profile:

JAVA_HOME=/usr/local/java/jdk1.8.0_77
JRE_HOME=$JAVA_HOME/jre
CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME JRE_HOME CLASSPATH PATH
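Reload the profile and confirm the JDK is visible (a quick check, assuming the install path above matches your machine):

source /etc/profile
java -version     # should report java version "1.8.0_77"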

 

2. Install Hadoop

① Download hadoop-2.9.1.tar.gz from http://archive.apache.org/dist/hadoop/common/ ; look for the 2.9.1 release in the stable2 or stable directory.

② Extract hadoop-2.9.1.tar.gz into /usr/local/hadoop/ (create the directory first if it does not exist):

tar -xzvf hadoop-2.9.1.tar.gz -C /usr/local/hadoop/

 

3. Set the Hadoop Environment Variables (I am not yet sure how much each of these is actually used, but set them as follows for now; append them to /etc/profile)

export HADOOP_HOME=/usr/local/hadoop/hadoop-2.9.1
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
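As with the JDK variables, reload the profile and make sure the hadoop command is now on the PATH:

source /etc/profile
hadoop version    # should report Hadoop 2.9.1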

 

4. Configure Hadoop

① Go into the etc/hadoop/ folder under the Hadoop installation directory and edit hadoop-env.sh to set the JDK used by the Hadoop daemons:

export JAVA_HOME=/usr/local/java/jdk1.8.0_77

② Edit etc/hadoop/core-site.xml under the Hadoop directory to configure the HDFS address and port, i.e. the NameNode's IP address and port:

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/data/</value>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://192.168.1.103:9000</value>
        </property>
</configuration>
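The fs.defaultFS value above uses this machine's LAN address (192.168.1.103 in this article); check your own address before copying the value (a quick check):

ip addr show      # or: hostname -I
# use the address shown here in place of 192.168.1.103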

③ Edit etc/hadoop/hdfs-site.xml under the Hadoop directory. The HDFS replication factor defaults to 3 and must be changed to 1 for a single-node setup:

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
                <description>Number of replicas stored for each HDFS block; the default is 3</description>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/data/name</value>
                <description>Where the NameNode stores its metadata; usually several different directories are configured to keep the metadata safe</description>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/data/data</value>
                <description>The DataNode's data storage directory</description>
        </property>
</configuration>
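The paths above sit under the hadoop.tmp.dir set in core-site.xml. Creating the directories up front is not strictly required (the format step creates the NameNode directory), but it makes ownership explicit; a sketch, assuming the daemons run as root as in this article:

mkdir -p /home/hadoop/data/name /home/hadoop/data/data
# chown -R hadoop:hadoop /home/hadoop/data   # only if you run the daemons as the hadoop user instead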

 

5. Format the HDFS Filesystem

Before starting Hadoop, the HDFS filesystem must be formatted:

hdfs namenode -format

If the output contains the "Storage directory /home/hadoop/data/name has been successfully formatted" line shown in the log below, the format succeeded; if it fails, carefully recheck the settings from steps 3 and 4.

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu-16.04.2-LTS/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.9.1
STARTUP_MSG:   classpath = .......
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e30710aea4e6e55e69372929106cf119af06fd0e; compiled by 'root' on 2018-04-16T09:33Z
STARTUP_MSG:   java = 1.8.0_77
************************************************************/
18/11/19 13:52:20 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/11/19 13:52:20 INFO namenode.NameNode: createNameNode [-format]
18/11/19 13:52:20 WARN common.Util: Path /home/hadoop/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/11/19 13:52:20 WARN common.Util: Path /home/hadoop/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-8e8821ce-ec6f-463b-8e4f-ee0fde086438
18/11/19 13:52:20 INFO namenode.FSEditLog: Edit logging is async:true
18/11/19 13:52:20 INFO namenode.FSNamesystem: KeyProvider: null
18/11/19 13:52:20 INFO namenode.FSNamesystem: fsLock is fair: true
18/11/19 13:52:20 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/11/19 13:52:20 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
18/11/19 13:52:20 INFO namenode.FSNamesystem: supergroup          = supergroup
18/11/19 13:52:20 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/11/19 13:52:20 INFO namenode.FSNamesystem: HA Enabled: false
18/11/19 13:52:21 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18/11/19 13:52:21 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
18/11/19 13:52:21 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/11/19 13:52:21 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/11/19 13:52:21 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Nov 19 13:52:21
18/11/19 13:52:21 INFO util.GSet: Computing capacity for map BlocksMap
18/11/19 13:52:21 INFO util.GSet: VM type       = 64-bit
18/11/19 13:52:21 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/11/19 13:52:21 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/11/19 13:52:21 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/11/19 13:52:21 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
18/11/19 13:52:21 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
18/11/19 13:52:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/11/19 13:52:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
18/11/19 13:52:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
18/11/19 13:52:21 INFO blockmanagement.BlockManager: defaultReplication         = 1
18/11/19 13:52:21 INFO blockmanagement.BlockManager: maxReplication             = 512
18/11/19 13:52:21 INFO blockmanagement.BlockManager: minReplication             = 1
18/11/19 13:52:21 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/11/19 13:52:21 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/11/19 13:52:21 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/11/19 13:52:21 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/11/19 13:52:21 INFO namenode.FSNamesystem: Append Enabled: true
18/11/19 13:52:21 INFO util.GSet: Computing capacity for map INodeMap
18/11/19 13:52:21 INFO util.GSet: VM type       = 64-bit
18/11/19 13:52:21 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/11/19 13:52:21 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/11/19 13:52:21 INFO namenode.FSDirectory: ACLs enabled? false
18/11/19 13:52:21 INFO namenode.FSDirectory: XAttrs enabled? true
18/11/19 13:52:21 INFO namenode.NameNode: Caching file names occurring more than 10 times
18/11/19 13:52:21 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
18/11/19 13:52:21 INFO util.GSet: Computing capacity for map cachedBlocks
18/11/19 13:52:21 INFO util.GSet: VM type       = 64-bit
18/11/19 13:52:21 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/11/19 13:52:21 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/11/19 13:52:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/11/19 13:52:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/11/19 13:52:21 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/11/19 13:52:21 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/11/19 13:52:21 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/11/19 13:52:21 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/11/19 13:52:21 INFO util.GSet: VM type       = 64-bit
18/11/19 13:52:21 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/11/19 13:52:21 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/11/19 13:52:21 INFO namenode.FSImage: Allocated new BlockPoolId: BP-2005443052-127.0.1.1-1542606741546
18/11/19 13:52:21 INFO common.Storage: Storage directory /home/hadoop/data/name has been successfully formatted.
18/11/19 13:52:21 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/11/19 13:52:21 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds .
18/11/19 13:52:21 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/11/19 13:52:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu-16.04.2-LTS/127.0.1.1
************************************************************/

To test whether Hadoop can actually run a job, use one of the bundled example programs (the command below runs the grep example). First copy the files from Hadoop's configuration directory (etc/hadoop under the install directory) into /home/hadoop/tmp/input, then run the example and write its results to /home/hadoop/tmp/output. The hadoop-mapreduce-examples-2.9.1.jar file is located under hadoop-2.9.1/share/hadoop/mapreduce.

hadoop jar hadoop-mapreduce-examples-2.9.1.jar grep /home/hadoop/tmp/input /home/hadoop/tmp/output 'dfs[a-z.]+'

When the job completes, check the results in the /home/hadoop/tmp/output directory.
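For example, the matches can be inspected like this (a quick check; if the output was written to HDFS instead, because fs.defaultFS is already set, use hdfs dfs -cat on the same path):

ls /home/hadoop/tmp/output
cat /home/hadoop/tmp/output/*    # print the matched dfs.* entries and their counts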

 

6. Start HDFS

Go into the Hadoop directory and run the following command:

./sbin/start-dfs.sh # start HDFS

Once the script finishes, use the jps command to check whether everything started successfully.
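On a single-node installation, a successful start typically shows the NameNode, DataNode and SecondaryNameNode processes in the jps output, roughly like this (the process IDs will differ):

jps
# 4327 NameNode
# 4489 DataNode
# 4703 SecondaryNameNode
# 4820 Jps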

 

Then test access from a browser:

HDFS web UI: http://192.168.1.103:50070. If the NameNode overview page loads, HDFS has started successfully.

 

7. Run the WordCount Example from IDEA

① Create a Maven project and add the following dependencies to pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.9.1</version>
    </dependency>
 
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.9.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.9.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.9.1</version>
    </dependency>
</dependencies>

② Copy the WordCount.java code from hadoop-2.9.1-src\hadoop-mapreduce-project\hadoop-mapreduce-examples\src\main\java\org\apache\hadoop\examples into your own project. The WordCount code is as follows:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable>{

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                       ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                          ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                                       new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}


③ Configure the WordCount run parameters: Run -> Edit Configurations.

Note that the values in Program arguments point at the HDFS filesystem running in the virtual machine. There are two arguments separated by a space: the first is the input path and the second is the output path. On the Linux side, create the file tmp/input/test.txt under /home/hadoop with the content hello world hello hi. The output directory does not need to be created; HDFS creates it automatically.
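As a concrete illustration (paths taken from the error message quoted further down; substitute your own NameNode IP), the two Program arguments could look like:

hdfs://192.168.1.103:9000/user/hadoop/tmp/input/test.txt hdfs://192.168.1.103:9000/user/hadoop/tmp/output

and the input file can be created on the Linux machine with:

mkdir -p /home/hadoop/tmp/input
echo "hello world hello hi" > /home/hadoop/tmp/input/test.txt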

If you run the code right after setting the run parameters, it fails with errors about "can not find winutils.exe" and hadoop.home.dir not being found. To fix this, download winutils.exe and hadoop.dll.

This happens because the Hadoop environment variables are not configured on Windows, so the hadoop.home.dir path cannot be resolved. Configure a HADOOP_HOME environment variable on Windows: extract the hadoop-2.9.1 package downloaded above into a directory on the Windows machine, point HADOOP_HOME at it, then copy the downloaded winutils.exe and hadoop.dll into HADOOP_HOME/bin and also into C:\Windows\System32. The machine needs to be restarted for this to take effect.

After that, running the code again still produces an error, shown below:

Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://192.168.1.103:9000/user/hadoop/tmp/input/test.txt

It says the file hdfs://192.168.1.103:9000/user/hadoop/tmp/input/test.txt cannot be found. We already created that file above, so why can't it be found? Because the file was created on the local Linux filesystem and has not yet been uploaded into HDFS. Run

hdfs dfs -ls /

to check whether the file exists in HDFS. The command prints nothing at all, meaning none of the directories exist yet, so we first have to create the folders and then upload test.txt into HDFS. Use

hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/hadoop
hdfs dfs -mkdir /user/hadoop/tmp
hdfs dfs -mkdir /user/hadoop/tmp/input

to create the user folder under the HDFS root and then each level beneath it. Once the input folder exists, run

hdfs dfs -put /home/hadoop/tmp/input/test.txt /user/hadoop/tmp/input

to upload test.txt into HDFS.
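If you prefer a single command, hdfs dfs -mkdir -p /user/hadoop/tmp/input creates the whole path at once. Either way, it is worth confirming the upload before re-running the job:

hdfs dfs -ls /user/hadoop/tmp/input    # should now list /user/hadoop/tmp/input/test.txt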

With the upload done, run the WordCount code again, eagerly expecting the correct result. It fails yet again, with the following error:

org.apache.hadoop.security.AccessControlException: Permission denied: user=administrator, access=WRITE, inode="/user/hadoop/tmp":root:supergroup:drwxr-xr-x

The permission error is raised while creating the output directory. The cause: on Windows the job is submitted to HDFS as administrator (or whatever local user you are logged in as), which maps to the /user/xxx directory in HDFS, /user/hadoop in my case. Since the administrator user has no write permission on /user/hadoop, the job fails with this permission error.

Fix: give every user read and write access to the /user/hadoop folder with the following commands:

hadoop fs -chmod 777 /user/hadoop
# in my environment the subdirectories under /user/hadoop still had no write permission after the line above, so the following was also needed
hadoop fs -chmod 777 /user/hadoop/*
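Alternatively, the -R flag applies the change recursively in a single command, which for this layout has the same effect as the two lines above:

hadoop fs -chmod -R 777 /user/hadoop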

Run the WordCount code once more; output like the following means the job succeeded.

(the earlier part of the output is omitted)

18/11/19 17:34:04 INFO mapreduce.Job: Counters: 35
    File System Counters
        FILE: Number of bytes read=3156
        FILE: Number of bytes written=943529
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1860
        HDFS: Number of bytes written=936
        HDFS: Number of read operations=13
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=11
        Map output records=159
        Map output bytes=1547
        Map output materialized bytes=1389
        Input split bytes=108
        Combine input records=159
        Combine output records=112
        Reduce input groups=112
        Reduce shuffle bytes=1389
        Reduce input records=112
        Reduce output records=112
        Spilled Records=224
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=6
        Total committed heap usage (bytes)=473956352
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=930
    File Output Format Counters
        Bytes Written=936

Check the results under /user/hadoop/tmp/output with hdfs dfs -ls /user/hadoop/tmp/output. The listing looks like the following; the word counts are in output/part-00000:

-rw-r--r--   3 root supergroup          0 2018-11-19 17:34 /user/hadoop/tmp/output/_SUCCESS
-rw-r--r--   3 root supergroup        936 2018-11-19 17:34 /user/hadoop/tmp/output/part-00000

Run

hdfs dfs -cat /user/hadoop/tmp/output/part-00000

to print the word counts:

hello 2
world 1
hi 1

Those are all the problems I have run into so far; if you hit something not covered here, you will have to search for the solution yourself.

 

 

 
