Important classes and interfaces in Hadoop

The following are descriptions of commonly used Hadoop classes (taken from the Hadoop API docs). They are listed simply in the order they came up during study; the ordering implies no other comparison:

1. Configuration

public class Configuration extends Object implements Iterable<Map.Entry<String,String>>, Writable

Provides access to configuration parameters.

Resources

Configurations are specified by resources. A resource contains a set of name/value pairs as XML data. Each resource is named by either a String or by a Path. If named by a String, then the classpath is examined for a file with that name. If named by a Path, then the local filesystem is examined directly, without referring to the classpath.

Unless explicitly turned off, Hadoop by default specifies two resources, loaded in-order from the classpath:

  1. core-default.xml : Read-only defaults for hadoop.
  2. core-site.xml: Site-specific configuration for a given hadoop installation.

Applications may add additional resources, which are loaded subsequent to these resources in the order they are added.

Final Parameters

Configuration parameters may be declared final. Once a resource declares a value final, no subsequently-loaded resource can alter that value. For example, one might define a final parameter with:

  <property>
    <name>dfs.client.buffer.dir</name>
    <value>/tmp/hadoop/dfs/client</value>
    <final>true</final>
  </property>

Administrators typically define parameters as final in core-site.xml for values that user applications may not alter.
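A minimal usage sketch (the extra resource name my-site.xml and the fallback defaults here are purely illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class ConfExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();    // loads core-default.xml, then core-site.xml
            conf.addResource("my-site.xml");             // hypothetical extra resource, found on the classpath
            // get(name, defaultValue) falls back to the default when the property is absent
            String bufferDir = conf.get("dfs.client.buffer.dir", "/tmp/hadoop/dfs/client");
            int sortMb = conf.getInt("io.sort.mb", 100); // typed accessor with a default
            System.out.println(bufferDir + ", io.sort.mb=" + sortMb);
        }
    }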

2. Serialization

public interface Serialization<T>
All Known Implementing Classes:
JavaSerialization, WritableSerialization

Encapsulates a Serializer/Deserializer pair.
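The framework picks a Serialization for each key/value class from the io.serializations configuration property; a brief sketch of enabling both implementing classes listed above (both live in the org.apache.hadoop.io.serializer package):

    import org.apache.hadoop.conf.Configuration;

    public class SerializationSetup {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // WritableSerialization handles Writable types (the default);
            // JavaSerialization additionally accepts plain java.io.Serializable types.
            conf.set("io.serializations",
                    "org.apache.hadoop.io.serializer.WritableSerialization,"
                  + "org.apache.hadoop.io.serializer.JavaSerialization");
        }
    }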

3. Tool

public interface Tool extends Configurable

A tool interface that supports handling of generic command-line options.

Tool is the standard interface for any Map-Reduce tool/application. The tool/application should delegate the handling of standard command-line options to ToolRunner.run(Tool, String[]) and handle only its custom arguments.

4. ToolRunner

public class ToolRunner extends Object

A utility to help run Tools.

ToolRunner can be used to run classes implementing the Tool interface. It works in conjunction with GenericOptionsParser to parse the generic Hadoop command-line arguments and modifies the Configuration of the Tool. The application-specific options are passed along without being modified.
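Putting sections 3 and 4 together, a minimal sketch of a Tool run through ToolRunner (the class name MyTool is just for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyTool extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            // by the time run() is called, the generic options (-D, -conf, -fs, ...)
            // have already been applied to the Configuration
            Configuration conf = getConf();
            System.out.println("custom args left over: " + args.length);
            return 0;                              // becomes the process exit code
        }

        public static void main(String[] args) throws Exception {
            // ToolRunner strips the generic options and passes only the rest to run()
            System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
        }
    }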

5.1 Job (newer than JobConf)

public class Job extends JobContext

The job submitter's view of the Job. It allows the user to configure the job, submit it, control its execution, and query the state. The set methods only work until the job is submitted, afterwards they will throw an IllegalStateException.
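A minimal driver sketch using Job (a word-count style setup; TokenizerMapper and IntSumReducer are illustrative classes of the kind sketched under sections 7 and 8 below, and newer releases prefer the Job.getInstance(conf, name) factory over this constructor):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(TokenizerMapper.class);   // see the Mapper sketch in section 7
            job.setReducerClass(IntSumReducer.class);    // see the Reducer sketch in section 8
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // all set* calls must happen before submission; afterwards they throw IllegalStateException
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }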

5.2 JobConf (has more methods than Job)

A map/reduce job configuration.

JobConf is the primary interface for a user to describe a map-reduce job to the Hadoop framework for execution. The framework tries to faithfully execute the job as-is described by JobConf, however:

  1. Some configuration parameters might have been marked as final by administrators and hence cannot be altered.
  2. While some job parameters are straightforward to set (e.g. setNumReduceTasks(int)), some parameters interact subtly with the rest of the framework and/or job configuration and are relatively more complex for the user to control finely (e.g. setNumMapTasks(int)).

JobConf typically specifies the Mapper, combiner (if any), Partitioner, Reducer, InputFormat and OutputFormat implementations to be used, etc.

Optionally JobConf is used to specify other advanced facets of the job such as the Comparators to be used, files to be put in the DistributedCache, whether or not intermediate and/or job outputs are to be compressed (and how), and debuggability via user-provided scripts (setMapDebugScript(String)/setReduceDebugScript(String)) for doing post-processing on task logs, the tasks' stdout, stderr, syslog, etc.
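For contrast with section 5.1, a hedged sketch of the same kind of driver against the older JobConf API (the job name and reduce-task count are illustrative):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class OldApiDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(OldApiDriver.class);
            conf.setJobName("word count (old API)");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setNumReduceTasks(2);        // straightforward to set, as noted above
            // Mapper/Reducer implementations of the old org.apache.hadoop.mapred
            // interfaces would be wired in via conf.setMapperClass(...) etc.
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);           // submits the job and waits for completion
        }
    }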

6. GenericOptionsParser

public class GenericOptionsParser extends Object

GenericOptionsParser is a utility to parse command-line arguments generic to the Hadoop framework. GenericOptionsParser recognizes several standard command-line arguments, enabling applications to easily specify a namenode, a jobtracker, additional configuration resources, etc.

Generic Options

The supported generic options are: 

     -conf <configuration file>     specify a configuration file
     -D <property=value>            use value for given property
     -fs <local|namenode:port>      specify a namenode
     -jt <local|jobtracker:port>    specify a job tracker
     -files <comma separated list of files>    specify comma separated
                            files to be copied to the map reduce cluster
     -libjars <comma separated list of jars>   specify comma separated
                            jar files to include in the classpath.
     -archives <comma separated list of archives>    specify comma
             separated archives to be unarchived on the compute machines.
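When not going through ToolRunner, the parser can also be used directly; a minimal sketch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class ParserExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // consumes -conf, -D, -fs, -jt, -files, -libjars, -archives and applies them to conf
            GenericOptionsParser parser = new GenericOptionsParser(conf, args);
            String[] remaining = parser.getRemainingArgs();  // application-specific arguments only
            System.out.println("remaining args: " + remaining.length);
        }
    }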

7. Mapper

public class Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> extends Object

Maps input key/value pairs to a set of intermediate key/value pairs.

Maps are the individual tasks which transform input records into intermediate records. The transformed intermediate records need not be of the same type as the input records. A given input pair may map to zero or many output pairs.

The Hadoop Map-Reduce framework spawns one map task for each InputSplit generated by the InputFormat for the job. Mapper implementations can access the Configuration for the job via the JobContext.getConfiguration().

The framework first calls setup(org.apache.hadoop.mapreduce.Mapper.Context), followed by map(Object, Object, Context) for each key/value pair in the InputSplit. Finally cleanup(Context) is called.

All intermediate values associated with a given output key are subsequently grouped by the framework, and passed to a Reducer to determine the final output. Users can control the sorting and grouping by specifying two key RawComparator classes.

The Mapper outputs are partitioned per Reducer. Users can control which keys (and hence records) go to which Reducer by implementing a custom Partitioner.

Users can optionally specify a combiner, via Job.setCombinerClass(Class), to perform local aggregation of the intermediate outputs, which helps to cut down the amount of data transferred from the Mapper to the Reducer.

Applications can specify if and how the intermediate outputs are to be compressed and which CompressionCodecs are to be used via the Configuration.

If the job has zero reduces then the output of the Mapper is directly written to the OutputFormat without sorting by keys.
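A minimal word-count style Mapper sketch (the class name TokenizerMapper is illustrative and matches the driver sketch in section 5.1):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // called once per input record: emit (token, 1) for every token in the line
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }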

8. Reducer

public class Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> extends Object

Reduces a set of intermediate values which share a key to a smaller set of values.

Reducer implementations can access the Configuration for the job via the JobContext.getConfiguration() method.

Reducer has 3 primary phases:

  1. Shuffle

    The Reducer copies the sorted output from each Mapper using HTTP across the network.

  2. Sort

    The framework merge sorts Reducer inputs by keys (since different Mappers may have output the same key).

    The shuffle and sort phases occur simultaneously i.e. while outputs are being fetched they are merged.

    SecondarySort

    To achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. The keys will be sorted using the entire key, but will be grouped using the grouping comparator to decide which keys and values are sent in the same call to reduce. The grouping comparator is specified via Job.setGroupingComparatorClass(Class). The sort order is controlled by Job.setSortComparatorClass(Class).

    For example, say that you want to find duplicate web pages and tag them all with the url of the "best" known example. You would set up the job like:
    • Map Input Key: url
    • Map Input Value: document
    • Map Output Key: document checksum, url pagerank
    • Map Output Value: url
    • Partitioner: by checksum
    • OutputKeyComparator: by checksum and then decreasing pagerank
    • OutputValueGroupingComparator: by checksum
  3. Reduce

    In this phase the reduce(Object, Iterable, Context) method is called for each <key, (collection of values)> in the sorted inputs.

    The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object).

The output of the Reducer is not re-sorted.
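To make the reduce phase concrete, a minimal summing Reducer sketch (the class name IntSumReducer is illustrative and matches the driver sketch in section 5.1):

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // called once per key, with all values for that key grouped by the framework
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }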

9. RawComparator

public interface RawComparator<T> extends Comparator<T>

All Known Implementing Classes:
BooleanWritable.Comparator, BytesWritable.Comparator, ByteWritable.Comparator, DeserializerComparator, DoubleWritable.Comparator, FloatWritable.Comparator, IntWritable.Comparator, JavaSerializationComparator, KeyFieldBasedComparator, LongWritable.Comparator, LongWritable.DecreasingComparator, MD5Hash.Comparator, NullWritable.Comparator, RecordComparator, SecondarySort.FirstGroupingComparator, SecondarySort.IntPair.Comparator, Text.Comparator, UTF8.Comparator, WritableComparator

Comparator that operates directly on byte representations of objects.
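Custom raw comparators are usually written by extending WritableComparator and overriding its byte-level compare; the sketch below assumes IntWritable keys and compares their 4-byte big-endian serialized form without deserializing (it would be plugged in via Job.setSortComparatorClass):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class RawIntComparator extends WritableComparator {
        public RawIntComparator() {
            super(IntWritable.class);
        }

        @Override
        public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
            // read both keys straight from the serialized bytes, avoiding object creation
            int a = readInt(b1, s1);
            int b = readInt(b2, s2);
            return (a < b) ? -1 : (a == b ? 0 : 1);
        }
    }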

 

Reblogged from: https://www.cnblogs.com/beanmoon/archive/2012/11/25/2787964.html
