Hadoop out-of-memory problem

1. Running the map/reduce job reports an out-of-memory error; Hadoop is apparently quite memory-hungry.

package hbase;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: tokenize each input line and emit a (word, 1) pair per token.
    // Note: do not import org.apache.hadoop.mapreduce.Reducer.Context here,
    // or Context in this signature resolves to the wrong type and map() no
    // longer overrides anything.
    public static class WordMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final Text word = new Text();
        private final static IntWritable one = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts for each word. The new-API reduce() takes an
    // Iterable (not an Iterator) and declares IOException/InterruptedException;
    // with any other signature the method silently fails to override the
    // default identity reduce.
    public static class IntSumReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        String input = "in";
        String output = "testout2";
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordMap.class);
        job.setCombinerClass(IntSumReduce.class);
        job.setReducerClass(IntSumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Run output
12/01/16 11:09:31 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/01/16 11:09:31 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/01/16 11:09:31 INFO input.FileInputFormat: Total input paths to process : 2
12/01/16 11:09:32 INFO mapred.JobClient: Running job: job_local_0001
12/01/16 11:09:32 INFO input.FileInputFormat: Total input paths to process : 2
12/01/16 11:09:32 INFO mapred.MapTask: io.sort.mb = 100
12/01/16 11:09:32 WARN mapred.LocalJobRunner: job_local_0001
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/01/16 11:09:33 INFO mapred.JobClient: map 0% reduce 0%
12/01/16 11:09:33 INFO mapred.JobClient: Job complete: job_local_0001
12/01/16 11:09:33 INFO mapred.JobClient: Counters: 0

Analysis

1. The initial guess is that the error is caused by insufficient heap memory. The stack trace supports this: the OutOfMemoryError is thrown while allocating MapTask's MapOutputBuffer, whose size follows io.sort.mb (100 MB, per the log), so the JVM running the job does not have enough heap for the map-side sort buffer.
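Before touching any scripts, the same mitigation can be tried in the job itself. A minimal sketch, assuming the 0.20/1.x-era property keys (io.sort.mb appears in the log above; mapred.child.java.opts is the standard knob for task child JVMs); the class name and values here are illustrative, not from the original job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LowMemoryJobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Shrink the map-side sort buffer (default 100 MB, see "io.sort.mb = 100"
        // in the log) so the MapOutputBuffer fits in a small heap.
        conf.setInt("io.sort.mb", 10);
        // For distributed (non-local) runs, raise the heap of spawned task JVMs
        // instead; this has no effect under LocalJobRunner, which runs in-process.
        conf.set("mapred.child.java.opts", "-Xmx1024m");
        Job job = new Job(conf, "word count");
        // ... set mapper/reducer/paths as in WordCount above ...
    }
}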

Fix

1. Modify the bin/hadoop launch script (the JAVA_HEAP_MAX setting):

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1024m

2. Modify hadoop-env.sh:

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=2000

3. Set the JAVA_OPTS environment variable (a quick verification sketch follows):

-Xms64m -Xmx1024m
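To confirm the larger -Xmx actually reached the JVM, one can print the maximum heap with the standard Runtime API. A hypothetical helper class, not from the original post:

public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() returns the -Xmx limit in bytes; with -Xmx1024m this
        // prints roughly 1024 MB (slightly less after JVM overhead).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + maxBytes / (1024 * 1024) + " MB");
    }
}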



After making these changes and restarting Hadoop, the error disappears.