I'm using Lucene 3.0.1. When indexing one particular large text file (about 10 MB), I get an out-of-memory error; the string is simply too large. Raising the JVM's maximum heap size (-Xmx) does make the problem go away, but I'd like to solve it in code instead. Has anyone run into a similar problem, and is there a good way around it? Here is the relevant code:

String content = FileUtils.readFileToString(file, "UTF-8");
Document document = new Document();
document.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
document.add(new Field("path", file.getAbsolutePath(), Field.Store.YES, Field.Index.NOT_ANALYZED));
indexWriter.addDocument(document);

The exception thrown:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2882)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:515)
2010-04-02 22:06
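For reference, Lucene 3.x also has a Reader-based Field constructor, Field(String name, Reader reader), which streams the content to the analyzer instead of requiring the whole file in memory as one String; the trade-off is that a Reader field is always tokenized and cannot be stored. A minimal sketch of that variant, assuming the same file and indexWriter variables as in the question:

import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

// Stream the file into the index instead of building one huge String.
// A Reader-based field is analyzed but not stored, so Field.Store.YES
// is not available for the "content" field in this variant.
Reader contentReader = new InputStreamReader(new FileInputStream(file), "UTF-8");
Document document = new Document();
document.add(new Field("content", contentReader));
document.add(new Field("path", file.getAbsolutePath(), Field.Store.YES, Field.Index.NOT_ANALYZED));
indexWriter.addDocument(document); // the reader is consumed here, during analysis
contentReader.close();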
Accepted answer
Judging from the stack trace, the failure happens before Lucene is even involved: the JVM runs out of heap on your very first line, while FileUtils.readFileToString is still building the string. You could split the source file as a preprocessing step, for example cutting the 10 MB text into 1 MB pieces, and index the pieces instead. Give that a try. Here is a routine that splits a file into smaller ones, which you can use as part of the preprocessing:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public static void splitToSmallFiles(File file, String outputpath) throws IOException {
    int filePointer = 0;
    final int MAX_SIZE = 1024 * 1024; // roughly 1 MB per output chunk
    BufferedWriter writer = null;
    // note: FileReader/FileWriter use the platform default encoding;
    // wrap explicit streams with "UTF-8" if the files are UTF-8
    BufferedReader reader = new BufferedReader(new FileReader(file));
    try {
        StringBuffer buffer = new StringBuffer();
        String line = reader.readLine();
        while (line != null) {
            buffer.append(line).append("\r\n");
            if (buffer.toString().getBytes().length >= MAX_SIZE) {
                writer = new BufferedWriter(new FileWriter(outputpath + "output" + filePointer + ".txt"));
                writer.write(buffer.toString());
                writer.close();
                filePointer++;
                buffer = new StringBuffer();
            }
            line = reader.readLine();
        }
        // write out whatever is left as the final chunk
        writer = new BufferedWriter(new FileWriter(outputpath + "output" + filePointer + ".txt"));
        writer.write(buffer.toString());
        writer.close();
    } finally {
        reader.close();
    }
}
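To tie the two pieces together, the indexing loop over the generated chunks might look like this (a sketch only: the input path and the "chunks/" output directory are hypothetical, and the fields mirror the ones in the question's code):

// Hypothetical glue: split the big file first, then index each chunk
// as its own Document using the same fields as in the question.
File bigFile = new File("big.txt");   // hypothetical input file
String outputDir = "chunks/";         // hypothetical output directory (must already exist)
splitToSmallFiles(bigFile, outputDir);
for (File chunk : new File(outputDir).listFiles()) {
    String content = FileUtils.readFileToString(chunk, "UTF-8");
    Document document = new Document();
    document.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
    // store the original file's path so search hits point back to the source file
    document.add(new Field("path", bigFile.getAbsolutePath(), Field.Store.YES, Field.Index.NOT_ANALYZED));
    indexWriter.addDocument(document);
}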