15 - MapReduce Compression/Decompression Examples and an Introduction to YARN

I: Compressing and Decompressing Data Streams

          CompressionCodec provides two methods that make it easy to compress or decompress data. To compress
   data being written to an output stream, use the createOutputStream(OutputStream out) method to create a
  CompressionOutputStream, which writes the data to the underlying stream in compressed form. Conversely, to
 decompress data read from an input stream, call createInputStream(InputStream in) to obtain a
 CompressionInputStream, which reads uncompressed data from the underlying stream.

II: Example: Compressing a Data Stream

  1. The available compression codecs are shown in the table below:
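     These are the standard Hadoop codec classes with their default file extensions; the first
     three are the ones exercised in the code below (Snappy additionally requires the native
     Hadoop libraries):

       Format    Codec class                                   Default extension
       DEFLATE   org.apache.hadoop.io.compress.DefaultCodec    .deflate
       Gzip      org.apache.hadoop.io.compress.GzipCodec       .gz
       bzip2     org.apache.hadoop.io.compress.BZip2Codec      .bz2
       Snappy    org.apache.hadoop.io.compress.SnappyCodec     .snappy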
      
  2. The code is as follows:
     
    package com.kgf.mapreduce.compress;
    
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.util.ReflectionUtils;
    
    public class TestCompress {
    
    	public static void main(String[] args) throws Exception {
    //		compress("E:\\input\\a.txt","org.apache.hadoop.io.compress.BZip2Codec");
    //		compress("E:\\input\\a.txt","org.apache.hadoop.io.compress.GzipCodec");
    		compress("E:\\input\\a.txt","org.apache.hadoop.io.compress.DefaultCodec");
    	}
    	/**
    	 * Compress a file with the given codec.
    	 * @param filePath path of the file to compress
    	 * @param pressMethod fully qualified class name of the compression codec
    	 * @throws FileNotFoundException if the input file does not exist
    	 */
    	private static void compress(String filePath, String pressMethod) throws Exception {
    		// 1: open an input stream on the source file
    		FileInputStream fis = new FileInputStream(new File(filePath));
    		// 2: instantiate the codec via reflection
    		Class<?> codecClass = Class.forName(pressMethod);
    		CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, new Configuration());
    		// 3: open the output stream, appending the codec's default extension (e.g. ".deflate")
    		FileOutputStream fos = new FileOutputStream(new File(filePath + codec.getDefaultExtension()));
    		// 4: wrap it in a compressing output stream
    		CompressionOutputStream cos = codec.createOutputStream(fos);
    		// 5: copy the bytes across with a 5 MB buffer
    		IOUtils.copyBytes(fis, cos, 1024 * 1024 * 5, false);
    		// 6: close the streams
    		fis.close();
    		cos.close();
    		fos.close();
    	}
    }
    
    

  3. Result: running the program produces a compressed file next to the input, e.g.
     E:\input\a.txt.deflate when DefaultCodec is used.

III: Example: Decompressing a Data Stream

  1. Code (sketched below):
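     The idea mirrors compress() above; a minimal sketch, assuming CompressionCodecFactory is used
     to infer the codec from the file extension (the class name TestDecompress and the ".decoded"
     output suffix are illustrative):

    package com.kgf.mapreduce.compress;
    
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;
    import org.apache.hadoop.io.compress.CompressionInputStream;
    
    public class TestDecompress {
    
    	public static void main(String[] args) throws Exception {
    		decompress("E:\\input\\a.txt.deflate");
    	}
    
    	private static void decompress(String filePath) throws Exception {
    		// 1: let the factory pick a codec based on the file extension
    		CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
    		CompressionCodec codec = factory.getCodec(new Path(filePath));
    		if (codec == null) {
    			System.out.println("cannot find codec for file " + filePath);
    			return;
    		}
    		// 2: wrap the input in a decompressing stream
    		CompressionInputStream cis = codec.createInputStream(new FileInputStream(new File(filePath)));
    		// 3: write the decompressed bytes to a new file (".decoded" suffix is arbitrary)
    		FileOutputStream fos = new FileOutputStream(new File(filePath + ".decoded"));
    		// 4: copy with a 5 MB buffer, then close the streams
    		IOUtils.copyBytes(cis, fos, 1024 * 1024 * 5, false);
    		cis.close();
    		fos.close();
    	}
    }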
             

IV: Compressing Map/Reduce Output

  1. Introduction
         Even if your MapReduce job's input and output files are uncompressed, you can still compress
     the intermediate output of the map tasks. Because that output is written to disk and shipped over
     the network to the reduce nodes, compressing it can improve performance considerably. It only
     takes setting two properties; the sketch below shows how.
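     A minimal Driver sketch, assuming the standard Hadoop 2.x property names; the class name
     CompressDriver is a placeholder, and the usual mapper/reducer wiring is omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.BZip2Codec;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.mapreduce.Job;
    
    public class CompressDriver {
    	public static void main(String[] args) throws Exception {
    		Configuration conf = new Configuration();
    		// property 1: enable compression of the map (intermediate) output
    		conf.setBoolean("mapreduce.map.output.compress", true);
    		// property 2: choose the codec for the intermediate output, e.g. bzip2
    		conf.setClass("mapreduce.map.output.compress.codec", BZip2Codec.class, CompressionCodec.class);
    		Job job = Job.getInstance(conf);
    		// ...mapper, reducer and input/output paths as usual...
    	}
    }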
     
  2. Compressing the Reduce output (as above, only the Driver needs to change; see the sketch below):
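     For the final output, the FileOutputFormat helpers set the equivalent properties on the Job; a
     sketch (the bzip2 codec choice is illustrative):

    import org.apache.hadoop.io.compress.BZip2Codec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    
    public class OutputCompressDriver {
    	public static void main(String[] args) throws Exception {
    		Job job = Job.getInstance();
    		// enable compression of the final job output
    		FileOutputFormat.setCompressOutput(job, true);
    		// choose the codec used for the output files, e.g. bzip2
    		FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
    		// ...mapper, reducer and input/output paths as usual...
    	}
    }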
             

V: YARN

  1. Architectural differences between Hadoop 1.x and Hadoop 2.x
        In the Hadoop 1.x era, MapReduce handled both the business-logic computation and resource
     scheduling, so the two concerns were tightly coupled. Hadoop 2.x introduced YARN: YARN is
     responsible only for resource scheduling, while MapReduce is responsible only for computation.
  2. YARN overview
         YARN is a resource-scheduling platform that supplies server compute resources to programs.
     It is comparable to a distributed operating system, with computation frameworks such as
     MapReduce acting as the applications that run on top of it.
  3. Basic YARN architecture
     YARN is built from four main components: the ResourceManager (cluster-wide resource
     arbitration and application admission), the NodeManager (per-node resource management and
     container supervision), the per-application ApplicationMaster (requests resources and tracks
     its job's tasks), and the Container (the unit of allocation, bundling CPU, memory, disk and
     network resources on a node).
  4. How YARN runs a job
     In outline: the client submits an application to the ResourceManager; the ResourceManager
     allocates a container on a NodeManager and starts the ApplicationMaster in it; the
     ApplicationMaster registers with the ResourceManager, requests containers for its map and
     reduce tasks, and asks the NodeManagers to launch them; when all tasks finish, the
     ApplicationMaster unregisters and its resources are released.
  5. The working mechanism in detail
     The same flow, step by step: the MR client asks the ResourceManager for an application id and
     a staging path, uploads the job jar, the split information and the job configuration to HDFS,
     and then submits the application; the ResourceManager turns the submission into a task for its
     scheduler; a NodeManager picks the task up, creates a Container and launches the MRAppMaster;
     the MRAppMaster pulls the job resources from HDFS, requests one container per input split and
     launches the MapTasks; once the maps have finished, it requests containers for the
     ReduceTasks, which fetch the map output for their partitions; after the job completes, the
     MRAppMaster deregisters itself from the ResourceManager.
  6. Resource schedulers
     ⑴ Introduction
              Hadoop currently ships three job schedulers: FIFO, the Capacity Scheduler and the
         Fair Scheduler. In Hadoop 2.7.2 the default is the Capacity Scheduler; the snippet below
         shows where this is configured.
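     The scheduler is chosen cluster-wide in yarn-site.xml via the
     yarn.resourcemanager.scheduler.class property; a sketch (the value shown is the Capacity
     Scheduler class, the Hadoop 2.7.2 default):

    <!-- yarn-site.xml: select the scheduler implementation -->
    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>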
     ⑵ First-in, first-out scheduler (FIFO)
          A single queue: jobs are served strictly in the order they were submitted, so a large job
          can block everything behind it.
     ⑶ Capacity Scheduler
          Multiple queues, each guaranteed a share of the cluster's capacity; within a queue, jobs
          run FIFO, and idle capacity can be borrowed by busier queues.
     ⑷ Fair Scheduler
          Multiple queues in which running jobs share resources fairly over time; each job's share
          grows or shrinks so that, on average, every job gets an equal slice of its queue.
  7. Speculative execution of tasks
     When one task runs markedly slower than its siblings (a straggler), the framework can launch a
     backup copy of that task on another node; whichever attempt finishes first is kept and the
     other is killed.

     Principle of the algorithm (see the sketch below): for each running task the framework
     extrapolates a finish time from its observed progress, estimates when a freshly started backup
     would finish, and speculates on the task for which a backup gains the most.
