Hadoop I/O

Chapter 4 Hadoop I/O
1) Integrity
HDFS transparently checksums all data written to it and by default verifies checksums when reading data. A separate checksum is created for every chunk of data; the chunk size defaults to 512 bytes, and because a CRC-32 checksum is 4 bytes long, the storage overhead is less than 1%. Datanodes are responsible for verifying the data they receive before storing the data and its checksum. Each datanode keeps a persistent log of checksum verifications. Each datanode also runs a DataBlockScanner in a background thread that periodically verifies all the blocks stored on it.

FileSystem rawFs = new RawLocalFileSystem();           // raw filesystem: don't checksum
FileSystem checksummedFs = new LocalFileSystem(rawFs); // LocalFileSystem is a ChecksumFileSystem: do checksum
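
Checksum verification can also be switched off when reading from HDFS by calling setVerifyChecksum(false) on the FileSystem before open(). A minimal sketch, assuming the usual java.net.URI, org.apache.hadoop.conf and org.apache.hadoop.fs imports; the URI and path are placeholders:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create("hdfs://namenode/"), conf); // placeholder URI
fs.setVerifyChecksum(false);                                          // skip checksum verification on read
FSDataInputStream in = fs.open(new Path("/data/part-00000"));         // placeholder path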

2) Compression
File compression brings two major benefits: it reduces the space needed to store files, and it speeds up data transfer across the network or to or from disk. A codec is the implementation of a compression-decompression algorithm.
To compress data being written to an output stream, use the codec's createOutputStream(OutputStream out) method to create a CompressionOutputStream, to which you write your uncompressed data to have it written in compressed form to the underlying stream. Conversely, to decompress data being read from an input stream, call createInputStream(InputStream in) to obtain a CompressionInputStream, which allows you to read uncompressed data from the underlying stream.
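
As a minimal sketch of this API (GzipCodec is used purely as an example codec), compressing a small buffer and reading it back looks like this:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.*;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

        // Compress: write uncompressed bytes to the CompressionOutputStream
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        CompressionOutputStream out = codec.createOutputStream(buffer);
        out.write("Hello, Hadoop I/O".getBytes("UTF-8"));
        out.finish(); // flushes compressed data to the underlying stream

        // Decompress: read uncompressed bytes back from the CompressionInputStream
        CompressionInputStream in = codec.createInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        IOUtils.copyBytes(in, System.out, 4096, false);
    }
}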
For performance, it is preferable to use a native library for compression and decompression.

If you are using a native library and you are doing a lot of compression or decompression in your application, consider using CodecPool, which allows you to reuse compressors and decompressors, thereby amortizing the cost of creating these objects.
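
A sketch of that pattern (again with GzipCodec as the example): borrow a Compressor from the pool, pass it to createOutputStream, and return it in a finally block so it can be reused.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.*;
import org.apache.hadoop.util.ReflectionUtils;

public class PooledCompressorExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        Compressor compressor = null;
        try {
            // Borrow a (possibly native) compressor instead of creating a new one
            compressor = CodecPool.getCompressor(codec);
            CompressionOutputStream out = codec.createOutputStream(System.out, compressor);
            IOUtils.copyBytes(System.in, out, 4096, false);
            out.finish();
        } finally {
            if (compressor != null) {
                CodecPool.returnCompressor(compressor); // hand it back for reuse
            }
        }
    }
}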

When considering how to compress data that will be processed by MapReduce, it is important to understand whether the compression format supports splitting.
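
For example, compressed job output can be enabled with two calls on FileOutputFormat (new MapReduce API; assuming the usual org.apache.hadoop.mapreduce and codec imports). Note that gzip output is not splittable while bzip2 output is, so the codec chosen here determines whether downstream jobs can split the files:

Job job = Job.getInstance(new Configuration(), "compressed-output");
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);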

3) Serialization
Serialization is the process of turning structured objects into a byte stream for transmission over a network or for writing to persistent storage. Deserialization is the reverse process of turning a byte stream back into a series of structured objects.
Serialization appears in two quite distinct areas of distributed data processing: for interprocess communication and for persistent storage.

Hadoop uses its own serialization format, Writables, which is certainly compact and fast, but not so easy to extend or use from languages other than Java.
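
A short sketch of the Writable contract: an object serializes itself through write(DataOutput) and repopulates itself through readFields(DataInput). Round-tripping an IntWritable through a byte array:

import java.io.*;
import org.apache.hadoop.io.IntWritable;

public class WritableRoundTrip {
    public static void main(String[] args) throws Exception {
        // Serialize: the Writable writes its own binary representation
        IntWritable writable = new IntWritable(163);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(out);
        writable.write(dataOut);
        dataOut.close();
        byte[] bytes = out.toByteArray(); // 4 bytes for an int

        // Deserialize: a reusable instance reads the fields back in
        IntWritable copy = new IntWritable();
        DataInputStream dataIn = new DataInputStream(new ByteArrayInputStream(bytes));
        copy.readFields(dataIn);
        dataIn.close();
        System.out.println(copy.get()); // prints 163
    }
}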

4) Serialization Frameworks
Although most MapReduce programs use Writable key and value types, this isn't mandated by the MapReduce API.

Apache Avro is a language-neutral data serialization system. The project was created to address the major downside of Hadoop Writables: lack of language portability. Having a data format that can be processed by many languages (currently C, C++, C#, Java, PHP, Python, and Ruby) makes it easier to share datasets with a wider audience than one tied to a single language.
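
A minimal sketch with Avro's Java library (the StringPair record schema below is purely illustrative): the schema is plain JSON, and a record written with Avro's binary encoding in Java can be read back in any of the other supported languages using the same schema.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.*;
import org.apache.avro.io.*;

public class AvroBinaryEncodingExample {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"StringPair\",\"fields\":["
          + "{\"name\":\"left\",\"type\":\"string\"},"
          + "{\"name\":\"right\",\"type\":\"string\"}]}");

        // Build a generic record conforming to the schema
        GenericRecord datum = new GenericData.Record(schema);
        datum.put("left", "L");
        datum.put("right", "R");

        // Serialize it with the language-neutral binary encoding
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
        Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(datum, encoder);
        encoder.flush();
        System.out.println(out.toByteArray().length + " bytes");
    }
}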

 

 
