A Simple Performance Comparison of java.io and java.nio

I started learning Java with 1.3 and later worked mostly with 1.4. Many of the new features in 1.5 and 1.6 never got past the "I know they exist" stage for me, NIO being one of them: it supposedly improves performance, but I never used or tested it in depth. I rarely work with files, so I didn't pay much attention, and even when I did, I habitually reached for java.io. The article I read today uses a very simple test methodology and its conclusions are inevitably somewhat one-sided, but it still shows that for sequential access, NIO delivers a large performance improvement over java.io.

Maybe it's time to update my knowledge, or I'll be out of date, if I'm not already.

Next time I work with files or write to a socket, I'll use NIO.

I recently needed Java I/O functionality at work. Since I knew java.io better (it has been around longer), I started with classes from the java.io package. Later, to see whether NIO could speed up file operations, I switched to java.nio. The results genuinely surprised me. Here are the details of the comparison:

1. In the java.io test code, I used RandomAccessFile to write data directly to the file, seeking to specific positions to insert, read, and delete records.

2. In the initial java.nio test code, I used a FileChannel. NIO can be more efficient than java.io because it operates on chunks of data (buffers), whereas java.io is essentially byte-oriented (see the short sketch after this list).

3. To push NIO further, I switched the test to a MappedByteBuffer, which is built on the operating system's virtual memory mechanism. According to the Java documentation, this is the best-performing option.
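
To make the chunk-versus-byte distinction concrete, here is a minimal sketch (not taken from the benchmark code; file name and values are placeholders) of the two styles: java.io's DataOutput methods push each field to the file one at a time, while NIO first assembles the fields into a ByteBuffer and hands the whole chunk to the channel in a single call.

// java.io style: each field goes straight to the file, one call per field
java.io.RandomAccessFile raf = new java.io.RandomAccessFile("demo.dat", "rw");
raf.writeInt(42);
raf.writeInt(90210);
raf.writeByte(1);
raf.close();

// java.nio style: assemble the record in a buffer, then write it as one chunk
java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(9); // 4 + 4 + 1 bytes
buf.putInt(42);
buf.putInt(90210);
buf.put((byte) 1);
buf.flip();
java.nio.channels.FileChannel ch = new java.io.RandomAccessFile("demo.dat", "rw").getChannel();
ch.write(buf);
ch.close();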

For the test I wrote a small program that simulates an employee database. An employee record looks like this:


class Employee {
    String last;   // the key
    String first;
    int id;
    int zip;
    boolean employed;
    String comments;
}


Employee records are written to a file, with the last name serving as the index key; a record can later be loaded from the file by that key. Whether you use plain IO, NIO, or a MappedByteBuffer, the first step is to open a RandomAccessFile. The following code creates a file named employees.ejb in the user's home directory, opens it for reading and writing, and initializes the corresponding Channel and MappedByteBuffer:


String userHome = System.getProperty("user.home");
StringBuffer pathname = new StringBuffer(userHome);
pathname.append(File.separator);
pathname.append("employees.ejb");
java.io.RandomAccessFile journal =
    new RandomAccessFile(pathname.toString(), "rw");

// The next line is needed for the NIO version
java.nio.channels.FileChannel channel = journal.getChannel();

// The next two lines are needed for the MappedByteBuffer version
journal.setLength(PAGE_SIZE);
MappedByteBuffer mbb =
    channel.map(FileChannel.MapMode.READ_WRITE, 0, journal.length());

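The add and read methods below also reference several fields and helpers that the article never shows. The following is a minimal sketch of what they are assumed to look like, inferred purely from how the listings use them; the names come from the code, but the declarations and sizes are my guesses, not the original author's:

// Assumed supporting state, inferred from how the methods below use it
static final int PAGE_SIZE = 1024;                       // growth increment for the mapped file (assumed)
java.io.RandomAccessFile journal;                        // the employees.ejb file
java.nio.channels.FileChannel channel;                   // journal.getChannel()
java.nio.MappedByteBuffer mbb;                           // the mapped region
int currentEnd = 0;                                      // logical end of the data inside the mapped region
int newEmptyRecordSize = -1;                             // leftover space when reusing a deleted slot
java.util.Map<String, Long> employeeIdx =
    new java.util.HashMap<String, Long>();               // last name -> file offset

// Reusable buffers for the NIO version (sizes are assumptions)
java.nio.ByteBuffer header = java.nio.ByteBuffer.allocate(5);    // 1-byte flag + 4-byte length
java.nio.ByteBuffer data   = java.nio.ByteBuffer.allocate(1024); // serialized record body

// Returns the offset of a free slot big enough for datalen bytes, or -1 to append
long getStorageLocation(int datalen) { return -1; }      // stub: always append in this sketch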

Once channel.map has been called, data appended to the file afterwards is not visible through the existing MappedByteBuffer. Because we want to test both reads and writes, the file has to be remapped after new records are appended so that the MappedByteBuffer can see them. To keep the number of remappings down, whenever space runs out we grow the file by a fixed amount (say 1 KB) instead of remapping on every single append.
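
As a minimal sketch, that grow-and-remap step could be factored out as shown below; it mirrors the logic that appears inline in addRecord_MBB later, and the helper name ensureCapacity is mine, not the article's.

// Grow the backing file by PAGE_SIZE and remap when the next write would not fit
void ensureCapacity(int bytesNeeded) throws java.io.IOException {
    long journalLen = channel.size();
    if (mbb.position() + bytesNeeded >= journalLen) {
        mbb.force();                                   // flush what has been written so far
        journal.setLength(journalLen + PAGE_SIZE);     // extend the file
        mbb = channel.map(FileChannel.MapMode.READ_WRITE, 0, journal.length());
        mbb.position(currentEnd);                      // continue where the data actually ends
    }
}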

Here is the comparison for writing an employee record.

The java.io version:


public boolean addRecord_IO(Employee emp) {
    try {
        byte[] last = emp.last.getBytes();
        byte[] first = emp.first.getBytes();
        byte[] comments = emp.comments.getBytes();

        // Just hard-code the sizes for performance
        int size = 0;
        size += emp.last.length();
        size += 4; // strlen - Integer
        size += emp.first.length();
        size += 4; // strlen - Integer
        size += 4; // emp.id - Integer
        size += 4; // emp.zip - Integer
        size += 1; // emp.employed - byte
        size += emp.comments.length();
        size += 4; // strlen - Integer

        long offset = getStorageLocation(size);

        //
        // Store the record by key and save the offset
        //
        if ( offset == -1 ) {
            // We need to add to the end of the journal. Seek there
            // now only if we're not already there
            long currentPos = journal.getFilePointer();
            long journalLen = journal.length();
            if ( journalLen != currentPos )
                journal.seek(journalLen);
            offset = journalLen;
        }
        else {
            // Seek to the returned insertion point
            journal.seek(offset);
        }

        // First write the header
        journal.writeByte(1);
        journal.writeInt(size);

        // Next write the data
        journal.writeInt(last.length);
        journal.write(last);
        journal.writeInt(first.length);
        journal.write(first);
        journal.writeInt(emp.id);
        journal.writeInt(emp.zip);
        if ( emp.employed )
            journal.writeByte(1);
        else
            journal.writeByte(0);
        journal.writeInt(comments.length);
        journal.write(comments);

        // Next, see if we need to append an empty record if we inserted
        // this new record at an empty location
        if ( newEmptyRecordSize != -1 ) {
            // Simply write a header
            journal.writeByte(0); // inactive record
            journal.writeInt(newEmptyRecordSize); // int, to match the 5-byte header used elsewhere
        }

        employeeIdx.put(emp.last, offset);
        return true;
    }
    catch ( Exception e ) {
        e.printStackTrace();
    }
    return false;
}


The java.nio version:


public boolean addRecord_NIO(Employee emp) {
    try {
        data.clear();
        byte[] last = emp.last.getBytes();
        byte[] first = emp.first.getBytes();
        byte[] comments = emp.comments.getBytes();

        data.putInt(last.length);
        data.put(last);
        data.putInt(first.length);
        data.put(first);
        data.putInt(emp.id);
        data.putInt(emp.zip);
        byte employed = 0;
        if ( emp.employed )
            employed = 1;
        data.put(employed);
        data.putInt(comments.length);
        data.put(comments);
        data.flip();
        int dataLen = data.limit();

        header.clear();
        header.put((byte)1); // 1=active record
        header.putInt(dataLen);
        header.flip();
        long headerLen = header.limit();
        int length = (int)(headerLen + dataLen);

        long offset = getStorageLocation((int)dataLen);

        //
        // Store the record by key and save the offset
        //
        if ( offset == -1 ) {
            // We need to add to the end of the journal. Seek there
            // now only if we're not already there
            long currentPos = channel.position();
            long journalLen = channel.size();
            if ( journalLen != currentPos )
                channel.position(journalLen);
            offset = journalLen;
        }
        else {
            // Seek to the returned insertion point
            channel.position(offset);
        }

        // First write the header, then the data, in one gathering write
        // (srcs is not declared in the original listing; assumed to bundle the two buffers)
        ByteBuffer[] srcs = { header, data };
        long written = channel.write(srcs);

        // Next, see if we need to append an empty record if we inserted
        // this new record at an empty location
        if ( newEmptyRecordSize != -1 ) {
            // Simply write a header
            data.clear();
            data.put((byte)0);
            data.putInt(newEmptyRecordSize);
            data.flip();
            channel.write(data);
        }

        employeeIdx.put(emp.last, offset);
        return true;
    }
    catch ( Exception e ) {
        e.printStackTrace();
    }
    return false;
}

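A note on the write call: the srcs array is never declared in the original article, so the line ByteBuffer[] srcs = { header, data }; above is my assumption about what was intended. Passing an array to channel.write(srcs) performs a gathering write, handing the header and the record body to the channel together instead of issuing two separate writes, which is consistent with the chunk-oriented style the article is advocating.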

The MappedByteBuffer version:


public boolean addRecord_MBB(Employee emp) {
    try {
        byte[] last = emp.last.getBytes();
        byte[] first = emp.first.getBytes();
        byte[] comments = emp.comments.getBytes();
        int datalen = last.length + first.length + comments.length + 12 + 9;
        int headerlen = 5;
        int length = headerlen + datalen;

        //
        // Store the record by key and save the offset
        //
        long offset = getStorageLocation(datalen);
        if ( offset == -1 ) {
            // We need to add to the end of the journal. Seek there
            // now only if we're not already there
            long currentPos = mbb.position();
            long journalLen = channel.size();
            if ( (currentPos + length) >= journalLen ) {
                //log("GROWING FILE BY ANOTHER PAGE");
                mbb.force();
                journal.setLength(journalLen + PAGE_SIZE);
                channel = journal.getChannel();
                journalLen = channel.size();
                mbb = channel.map(FileChannel.MapMode.READ_WRITE, 0, journalLen);
                currentPos = mbb.position();
            }
            if ( currentEnd != currentPos )
                mbb.position(currentEnd);
            offset = currentEnd; //journalLen;
        }
        else {
            // Seek to the returned insertion point
            mbb.position((int)offset);
        }

        // write header
        mbb.put((byte)1); // 1=active record
        mbb.putInt(datalen);

        // write data
        mbb.putInt(last.length);
        mbb.put(last);
        mbb.putInt(first.length);
        mbb.put(first);
        mbb.putInt(emp.id);
        mbb.putInt(emp.zip);
        byte employed = 0;
        if ( emp.employed )
            employed = 1;
        mbb.put(employed);
        mbb.putInt(comments.length);
        mbb.put(comments);
        currentEnd += length;

        // Next, see if we need to append an empty record if we inserted
        // this new record at an empty location
        if ( newEmptyRecordSize != -1 ) {
            // Simply write a header
            mbb.put((byte)0);
            mbb.putInt(newEmptyRecordSize);
            currentEnd += 5;
        }

        employeeIdx.put(emp.last, offset);
        return true;
    }
    catch ( Exception e ) {
        e.printStackTrace();
    }
    return false;
}

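The article reports read timings below but never shows the read path. Purely for illustration, here is a minimal sketch of how a record might be read back through the MappedByteBuffer, assuming the record layout written above; the method name readRecord_MBB is mine, not the article's.

// Illustrative only: read a record back by key through the mapped buffer
public Employee readRecord_MBB(String lastName) {
    Long offset = employeeIdx.get(lastName);
    if ( offset == null )
        return null;
    mbb.position(offset.intValue());
    byte active = mbb.get();        // header: 1 = active record
    int datalen = mbb.getInt();     // header: record body length (unused here)
    if ( active != 1 )
        return null;
    Employee emp = new Employee();
    byte[] buf = new byte[mbb.getInt()];
    mbb.get(buf);
    emp.last = new String(buf);
    buf = new byte[mbb.getInt()];
    mbb.get(buf);
    emp.first = new String(buf);
    emp.id = mbb.getInt();
    emp.zip = mbb.getInt();
    emp.employed = (mbb.get() == 1);
    buf = new byte[mbb.getInt()];
    mbb.get(buf);
    emp.comments = new String(buf);
    return emp;
}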

Next, I called each method to insert 100,000 records. The elapsed times were:

* With java.io: ~10,000 milliseconds

* With java.nio: ~2,000 milliseconds

* With MappedByteBuffer: ~970 milliseconds

The improvement from NIO is substantial, and the MappedByteBuffer numbers are even more striking.

Reading the data back, the three approaches compared as follows:

* With java.io: ~6,900 milliseconds

* With java.nio: ~1,400 milliseconds

* With MappedByteBuffer: ~355 milliseconds

As with writes, NIO gives a clear performance improvement, and the MappedByteBuffer is astonishingly fast. Moving from java.io to NIO with a MappedByteBuffer can often bring a performance gain of 10x or more.
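
For completeness, here is a minimal sketch of the kind of driver that could produce numbers like these; the record contents are placeholders and the harness is my own illustration, not the article's actual benchmark code.

// Illustrative benchmark driver: time 100,000 inserts for one of the variants
public void benchmarkInserts() {
    long start = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        Employee emp = new Employee();
        emp.last = "Last" + i;           // unique key so every record gets indexed
        emp.first = "First" + i;
        emp.id = i;
        emp.zip = 90210;
        emp.employed = true;
        emp.comments = "Employee number " + i;
        addRecord_MBB(emp);              // swap in addRecord_IO or addRecord_NIO to compare
    }
    long elapsed = System.currentTimeMillis() - start;
    System.out.println("Inserted 100,000 records in " + elapsed + " ms");
}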
