Is there a way to limit the size of an index?
This question is sometimes brought up because of the 2GB file size limit of some 32-bit operating systems.
This is a slightly modified answer from Doug Cutting:
The easiest thing is to use IndexWriter.setMaxMergeDocs().
If, for instance, you hit the 2GB limit at 8M documents, set maxMergeDocs to 7M. That will keep Lucene from trying to merge an index that won't fit in your filesystem. In practice Lucene effectively rounds this down to the next lower power of the mergeFactor.
So with the default mergeFactor of 10 and maxMergeDocs set to 7M, Lucene will generate a series of 1M-document indexes, since merging ten of these would exceed the maximum.
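A minimal sketch of this setting, assuming an older Lucene release (1.9/2.x era, matching this answer) where setMaxMergeDocs() is available; the index path and analyzer choice here are placeholders:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class CappedIndex {
    public static void main(String[] args) throws Exception {
        // Path and analyzer are assumptions for illustration.
        IndexWriter writer = new IndexWriter("/path/to/index",
                new StandardAnalyzer(), true);
        // Never merge into a segment holding more than 7M documents,
        // so no single merged segment can grow past the ~2GB mark.
        writer.setMaxMergeDocs(7000000);
        // ... writer.addDocument(...) calls go here ...
        writer.close();
    }
}
```

With this cap in place, smaller segments still merge normally; Lucene simply skips any merge whose result would exceed the document limit.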
A slightly more complex solution:
You could further minimize the number of segments: once you've added 7M documents, optimize the index and start a new one. Then use a MultiSearcher to search across all of the indexes.
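Searching the resulting set of indexes might look like the following sketch, again assuming an older Lucene API where MultiSearcher and Hits exist; the index paths, field name, and query string are all placeholders:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Searchable;

public class SplitSearch {
    public static void main(String[] args) throws Exception {
        // One IndexSearcher per optimized sub-index (paths are assumptions).
        Searchable[] shards = {
            new IndexSearcher("/path/to/index-a"),
            new IndexSearcher("/path/to/index-b")
        };
        // MultiSearcher merges results from all sub-indexes transparently.
        MultiSearcher searcher = new MultiSearcher(shards);
        Query q = new QueryParser("contents",
                new StandardAnalyzer()).parse("lucene");
        Hits hits = searcher.search(q);
        System.out.println(hits.length() + " matching documents");
        searcher.close();
    }
}
```

To the caller this behaves like a single index; document scores and result ordering are merged across the shards.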
An even more complex and optimal solution:
Write a version of FSDirectory that, when a file exceeds 2GB, creates a subdirectory and represents the file as a series of files.
On Linux, the maximum file size a process may create can be changed with ulimit -f. Multiple indexes can also be searched at the same time.
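For example, in bash the limit is given in 1024-byte blocks, so a 2GB cap looks like this (lowering the limit affects the current shell and its children; it cannot be raised again in the same shell without privileges):

```shell
# Cap files created by this shell and its children at ~2GB
# (bash's ulimit -f counts 1024-byte blocks).
ulimit -f 2097152
ulimit -f   # prints the current limit
```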