
Hadoop out-of-memory problem

Tags: hadoop, import, exception, input, parsing, java
1. Running the map/reduce job fails with an out-of-memory error; Hadoop appears to be quite memory-hungry.

package hbase;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in the input line
    public static class WordMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final Text word = new Text();
        private final static IntWritable one = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts for each word.
    // Note: the new-API signature takes Iterable<IntWritable>, not Iterator,
    // and throws IOException/InterruptedException; with any other signature
    // the method silently fails to override reduce() and is never called.
    public static class InsumReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        String input = "in";
        String output = "testout2";
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordMap.class);
        job.setCombinerClass(InsumReduce.class);
        job.setReducerClass(InsumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Run output
12/01/16 11:09:31 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/01/16 11:09:31 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/01/16 11:09:31 INFO input.FileInputFormat: Total input paths to process : 2
12/01/16 11:09:32 INFO mapred.JobClient: Running job: job_local_0001
12/01/16 11:09:32 INFO input.FileInputFormat: Total input paths to process : 2
12/01/16 11:09:32 INFO mapred.MapTask: io.sort.mb = 100
12/01/16 11:09:32 WARN mapred.LocalJobRunner: job_local_0001
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/01/16 11:09:33 INFO mapred.JobClient: map 0% reduce 0%
12/01/16 11:09:33 INFO mapred.JobClient: Job complete: job_local_0001
12/01/16 11:09:33 INFO mapred.JobClient: Counters: 0

Analysis
1. The stack trace points at the constructor of MapTask$MapOutputBuffer, which eagerly allocates the map-side sort buffer (io.sort.mb = 100, i.e. 100 MB, as the log shows). If the JVM heap is smaller than that, the allocation itself fails, so the first guess is simply that the heap is too small.
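The arithmetic behind the failure can be sketched with plain JDK code (the 64 MB figure is an assumption for illustration, a common small default heap; the 100 MB comes from the io.sort.mb line in the log above):

```java
public class SortBufferMath {
    public static void main(String[] args) {
        int ioSortMb = 100;                        // from the log: io.sort.mb = 100
        long bufferBytes = (long) ioSortMb << 20;  // the byte[] MapOutputBuffer allocates up front
        long smallHeap = 64L << 20;                // a hypothetical small heap (-Xmx64m)
        System.out.println("buffer needs " + bufferBytes + " bytes, heap offers " + smallHeap);
        // The up-front allocation alone exceeds the whole heap, so the
        // constructor cannot complete and the task dies before doing any work.
        System.out.println(bufferBytes > smallHeap ? "OutOfMemoryError expected" : "fits");
    }
}
```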

Fix
1. Edit the bin/hadoop launcher script:
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1024m

2. Edit conf/hadoop-env.sh:
# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=2000

3. Set the JAVA_OPTS environment variable:
-Xms64m -Xmx1024m
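The steps above grow the client-side JVM, which is what matters here because LocalJobRunner runs the task in-process. On a real cluster, the task heap is instead governed by the per-child JVM options; a hedged sketch of the equivalent conf/mapred-site.xml entries, using the classic property names of this Hadoop generation:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>100</value>
</property>
```

Shrinking io.sort.mb below the child heap is an alternative to growing the heap, since the sort buffer is the allocation that failed.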



After making these changes, restart Hadoop and the error disappears.
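To confirm the larger limit actually reached the JVM, a plain-JDK check (no Hadoop required) can print the heap ceiling the process was started with:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the -Xmx ceiling in bytes
        // (Long.MAX_VALUE if the JVM considers the heap unbounded)
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

Run it with the same options the Hadoop scripts pass (e.g. java -Xmx1024m HeapCheck) and verify the printed figure matches what you configured.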