Distributed Cache in Hadoop
Distributed Cache is a facility provided by the Hadoop MapReduce framework. It distributes cached data to our workers (map/reduce tasks); the data can be plain text files, archives, jar files, and so on. Once we have configured the cache files, Hadoop makes sure they are copied to every node before our workers start executing there.
That way, our workers can use the cache files directly in their own code.
Below is the example code copied from the official Hadoop documentation:
1. Copy the requisite files to the FileSystem:
$ bin/hadoop fs -copyFromLocal lookup.dat /myapp/lookup.dat
$ bin/hadoop fs -copyFromLocal map.zip /myapp/map.zip
$ bin/hadoop fs -copyFromLocal mylib.jar /myapp/mylib.jar
$ bin/hadoop fs -copyFromLocal mytar.tar /myapp/mytar.tar
$ bin/hadoop fs -copyFromLocal mytgz.tgz /myapp/mytgz.tgz
$ bin/hadoop fs -copyFromLocal mytargz.tar.gz /myapp/mytargz.tar.gz
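Before wiring up the job, you can sanity-check the uploads by listing the target directory with the standard fs shell:
$ bin/hadoop fs -ls /myapp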
2. Setup the application's JobConf:
JobConf job = new JobConf();
DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"), job);
DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);
3. Use the cached files in the Mapper or Reducer:
public static class MapClass extends MapReduceBase
    implements Mapper<K, V, K, V> {

  private Path[] localArchives;
  private Path[] localFiles;

  public void configure(JobConf job) {
    // Get the cached archives/files
    File f = new File("./map.zip/some/file/in/zip.txt");
  }

  public void map(K key, V value,
                  OutputCollector<K, V> output, Reporter reporter)
      throws IOException {
    // Use data from the cached archives/files here
    // ...
    // ...
    output.collect(k, v);
  }
}
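The localArchives and localFiles fields above are declared but never filled in the official snippet. A minimal sketch of populating them in configure(), using the getters that the org.apache.hadoop.filecache.DistributedCache API provides, might look like this:

public void configure(JobConf job) {
  try {
    // Paths to the localized copies on this node's local disk
    localArchives = DistributedCache.getLocalCacheArchives(job);
    localFiles = DistributedCache.getLocalCacheFiles(job);
  } catch (IOException e) {
    // configure() cannot throw a checked exception, so wrap it
    throw new RuntimeException("Failed to read distributed cache paths", e);
  }
}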
The cache size defaults to 10 GB, but you can override it in mapred-site.xml.
Be aware, though, that caching too much data can slow your workers down, since every node has to localize all of it before the tasks run.
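If I remember correctly, the property that controls this limit in MRv1 is local.cache.size, measured in bytes. A sketch of the override in mapred-site.xml might look like this (the 20 GB value is just an example):

<property>
  <name>local.cache.size</name>
  <!-- size in bytes; 21474836480 = 20 GB (example value) -->
  <value>21474836480</value>
</property>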
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/filecache/DistributedCache.html
https://data-flair.training/blogs/hadoop-distributed-cache