A Few Ways to Improve Java Code Efficiency and Avoid OOM

1. The core idea of the garbage collection algorithm

  Java has a built-in garbage collection mechanism that tracks objects in use and finds and reclaims objects that are no longer used (referenced). This mechanism guards against the two dangers of dynamic memory allocation: running out of memory because too much garbage accumulates, and illegal references caused by releasing memory at the wrong time.

  The core idea of the garbage collection algorithm is to examine the objects in the memory available to the virtual machine, i.e. the heap. An object that is still referenced is a live object; an object that is no longer referenced is garbage, and the space it occupies can be reclaimed and reused for new allocations. The choice of collection algorithm and sensible tuning of the collector's parameters directly affect system performance, so developers need a fairly deep understanding of them.

2. Conditions that trigger a major GC (garbage collection)

  The JVM runs minor GCs very frequently, but they take so little time that their impact on the system is small. The trigger conditions for a major GC deserve more attention, because its impact on the system is significant. In general, two conditions trigger a major GC:

  (1) When the application is idle, i.e. no application threads are running, the GC is invoked. Because the GC runs in the lowest-priority thread, the GC thread is not scheduled while the application is busy, except in the following case.

  (2) When the Java heap runs low, the GC is invoked. If an application thread creates new objects while running and there is not enough memory, the JVM forcibly runs the GC thread to reclaim memory for the new allocation. If one GC pass still cannot satisfy the allocation request, the JVM makes two further GC attempts; if the request still cannot be satisfied, the JVM reports an "out of memory" error and the Java application stops.

  Whether a major GC runs is decided by the JVM based on the system environment, and that environment changes constantly, so major GCs are nondeterministic: you cannot predict exactly when one will occur. What is certain is that for a long-running application, major GCs happen repeatedly.

3. Measures to reduce GC overhead

  Given the GC mechanism described above, the way a program runs directly changes the system environment and therefore affects when GC is triggered. If you do not design and code with the GC's characteristics in mind, negative effects such as memory retention will follow. To avoid them, the basic principle is to produce as little garbage as possible and to reduce the cost of each GC pass. Concrete measures include the following:

(1) Do not call System.gc() explicitly

  This call suggests that the JVM run a major GC. Although it is only a suggestion and not a command, in many cases it does trigger a major GC, increasing the frequency of major GCs and therefore the number of intermittent pauses.

(2) Minimize the use of temporary objects

  Temporary objects become garbage once the function call they were created in returns. Using fewer temporary variables means producing less garbage, which postpones the second trigger condition described above and reduces the number of major GCs.
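To make this concrete, here is a small sketch (all names invented for illustration) contrasting a method that allocates a fresh array on every call with one that fills a caller-supplied buffer, so a single buffer can be reused across many calls:

```java
public class ReuseBuffer {
    // Allocates a new array on every call -- each result becomes garbage
    // as soon as the caller is done with it.
    static int[] squaresAllocating(int n) {
        int[] out = new int[n];
        for (int i = 0; i < n; i++) out[i] = i * i;
        return out;
    }

    // Fills a caller-supplied array instead, so one buffer can be reused
    // across many calls and no per-call garbage is produced.
    static void squaresInto(int[] out) {
        for (int i = 0; i < out.length; i++) out[i] = i * i;
    }
}
```

The second form trades a slightly clumsier API for zero allocations inside hot loops.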

(3) Preferably set objects to null explicitly when they are no longer used

  Generally speaking, objects whose references have been cleared to null can be treated as garbage, so explicitly setting unused references to null helps the collector identify garbage and improves GC efficiency. (This mainly pays off for long-lived references such as fields or collection slots; clearing short-lived local variables is usually unnecessary.)
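A case where clearing a reference genuinely helps is a long-lived container that outlives its elements. The stack below is a minimal illustrative sketch (not from the original post): without the null assignment in pop, the popped object would stay reachable through the internal array and could never be collected.

```java
public class SimpleStack {
    private Object[] items = new Object[16];
    private int size = 0;

    public void push(Object o) { items[size++] = o; }

    public Object pop() {
        Object o = items[--size];
        items[size] = null; // clear the slot so the GC can reclaim the object
        return o;
    }

    public int size() { return size; }
}
```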

(4) Use StringBuffer rather than String to accumulate strings (see the other post on this blog, "String vs StringBuffer in Java", for details)

  A String is a fixed-length, immutable object. Accumulating Strings does not grow a single String object; it creates a new one at every step. For example, while Str5 = Str1 + Str2 + Str3 + Str4 executes, several garbage objects are produced, because every "+" operation must create a new String. These intermediate objects have no practical value to the system and only add garbage. To avoid this, accumulate strings with a StringBuffer instead: a StringBuffer is growable, extends its existing buffer in place, and produces no intermediate objects.
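A minimal sketch of the buffer-based approach, using StringBuilder (the unsynchronized equivalent of StringBuffer available since Java 5; the class and method names below are invented for illustration):

```java
public class Concat {
    // Builds the result in a single growable buffer instead of creating
    // a new String object for every "+" step.
    static String join(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }
}
```

Only two objects are created in total (the builder and the final String), regardless of how many parts are joined.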

(5) Prefer primitive types such as int and long over Integer and Long objects

  Primitive variables use far less memory than the corresponding wrapper objects; unless there is a good reason not to, use primitives.
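The difference can be sketched as follows (names invented for illustration): the boxed version allocates a new Long on almost every iteration through autoboxing, while the primitive version allocates nothing.

```java
public class SumDemo {
    // Boxed accumulator: "sum += i" unboxes, adds, and boxes a new Long
    // on each iteration, producing garbage.
    static long sumBoxed(int n) {
        Long sum = 0L;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    // Primitive accumulator: no objects are created at all.
    static long sumPrimitive(int n) {
        long sum = 0L;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }
}
```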

(6) Use static object variables sparingly

  Static variables are effectively global: they are not reclaimed by the GC and occupy memory for as long as the class stays loaded.
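A typical way this bites is a static collection that keeps everything it holds reachable. The sketch below (all names invented) shows why such a structure needs an explicit removal path; without one, every registered object is retained forever:

```java
import java.util.ArrayList;
import java.util.List;

public class Registry {
    // A static collection: everything added here stays reachable for the
    // lifetime of the class, invisible to the garbage collector.
    private static final List<Object> ENTRIES = new ArrayList<>();

    static void register(Object o) { ENTRIES.add(o); }

    // Without an explicit unregister, registered objects can never be collected.
    static void unregister(Object o) { ENTRIES.remove(o); }

    static int count() { return ENTRIES.size(); }
}
```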

(7) Spread out object creation and deletion over time

  Creating many new objects, especially large ones, in a short burst suddenly demands a lot of memory; faced with that, the JVM can only run a major GC to reclaim memory or compact fragmentation, which raises the major GC frequency. The same reasoning applies to deleting objects in a burst: it suddenly produces a large number of garbage objects, free space necessarily shrinks, and the chance that the next allocation forces a major GC rises sharply.

----------------------------------------

TEST 1: FOR LOOP OVER ARRAY

When writing a for loop over an array, the common way of implementing it is to use the length property of the array in the termination condition. But instead of asking the array for its length on every single iteration of the loop, we could read it once before entering the loop and cache it in a local variable.

What we test: is there a difference between using a local variable to cache the length of an array instead of using the length property in a termination condition of a for loop?


Standard implementation

// arr is an int[] array

for (int i = 0; i < arr.length; i++)
{
   // Do something
}


Optimized implementation

// arr is an int[] array

int arrLength = arr.length;

for (int i = 0; i < arrLength; i++)
{
   // Do something
}


Test results

The results show the average time taken to loop over an array with length = 3000000. The loop is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Standard 58 ms  
Optimized 43 ms 26% faster than standard


Conclusion

Caching the array length in a local variable is faster than reading the value on every single loop iteration. If you know that the length of an array will never change during the loop execution, caching its value is a good way to improve execution speed.
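The post does not show its benchmark harness, so the following is only a rough sketch of how such a loop could be timed (all names invented); a real measurement would also need warm-up iterations for the JIT and multiple runs to average out noise:

```java
public class LoopBenchmark {
    // Times `repetitions` complete passes over `arr` and returns the
    // elapsed time in milliseconds.
    static long timeLoop(int[] arr, int repetitions) {
        long checksum = 0; // keep the loop body observable
        long start = System.nanoTime();
        for (int r = 0; r < repetitions; r++) {
            for (int i = 0; i < arr.length; i++) {
                checksum += arr[i];
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Use the checksum so the JIT cannot eliminate the loop as dead code.
        if (checksum == Long.MIN_VALUE) System.out.println(checksum);
        return elapsedMs;
    }
}
```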


TEST 2: FOR LOOP OVER ARRAYLIST

As in TEST 1, when writing a for loop over an ArrayList, the common way of implementing it is to use the size method of the ArrayList in the termination condition. But instead of asking the ArrayList for its size on every single iteration of the loop, we could read it once before entering the loop and cache it in a local variable.

What we test: is there a difference between using a local variable to cache the size of an ArrayList instead of using the size method in a termination condition of a for loop?


Standard implementation

// arr is an ArrayList<Object>

for (int i = 0; i < arr.size(); i++)
{
   // Do something
}


Optimized implementation

// arr is an ArrayList<Object>

int arrLength = arr.size();

for (int i = 0; i < arrLength; i++)
{
   // Do something
}


Test results

The results show the average time taken to loop over an ArrayList with size = 3000000. The loop is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Standard 262 ms  
Optimized 233 ms 11% faster than standard


Conclusion

Caching the ArrayList size in a local variable is faster than reading the value on every single loop iteration. If you know that the size of an ArrayList will never change during the loop execution, caching its value is a good way to improve execution speed. An ArrayList is generally slower than a simple array because of the method invocations involved in reading and writing values from and to the collection. Different collections have different performance characteristics (think of the access time to the elements of a HashMap), so each needs its own tests.


TEST 3: VARIABLE FIELD ACCESS

If you add a variable field to a class, you can choose to make it public so other classes can read and write its value directly, or you can choose to implement getter and setter accessor methods and make the field private or protected.

What we test: is there a difference between accessing a variable field through getter and setter methods instead of accessing it directly?


Getter and setter methods

private long var;

public long getVar()
{
   return var;
}

public void setVar(long value)
{
   var = value;
}


Direct access

public long var;


Test results

The results show the average time taken to read and write the value of var in a loop executed 3000000 times. The loop is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Getter and setter 96 ms  
Direct access 60 ms 37% faster than getter and setter


Conclusion

Directly accessing a field of a class instance is always faster than getting and setting its value through accessor methods. If performance is critical and you don't need to perform any check while reading or writing the field's value, making it public and accessing it directly greatly increases speed (method invocations are slow).


TEST 4: VARIABLE VS CONSTANT

A value can be stored in a variable field or in a constant field. If a value is expected to never change, storing it in a constant field is not only a good practice, but it also improves the speed of the implementation. Here we are going to verify this and we are going to test whether the use of a static field makes a difference in terms of performance.

What we test: is there a difference between reading a value of a variable field compared to the one of a constant field and does declaring it as static make a difference in terms of performance?


Variable field

public int var;


Static variable field

public static int var;


Constant field

public final int CONST;


Static constant field

public static final int CONST;


Test results

The results show the average time taken to read the value of var or CONST in a loop executed 5000000 times. The loop is repeated 500 times to check the average time taken for each complete loop.

  Average loop time Result
Variable field 63 ms  
Static variable field 67 ms slightly slower than variable
Constant field 55 ms faster than variable or static variable; no difference between constant and static constant
Static constant field 55 ms faster than variable or static variable; no difference between constant and static constant


Conclusion

If you know that a value will never change, you should declare it as a constant with the final keyword. Using constants instead of variables is always faster because the compiler can substitute their value into the compiled code, avoiding the field lookup time. In this test we also see that a static variable is slightly slower than an instance variable, probably due to the different lookup procedure used by the virtual machine at runtime.


TEST 5: METHOD INVOCATIONS

To give a software implementation a better structure that helps reuse and maintainability, it's good to create a modular architecture with multiple classes and methods that solve specific tasks, instead of writing very long methods with repeated pieces of code that solve exactly the same problem. This, however, leads to more method invocations than having all the necessary code in a single method. We want to see the impact of multiple method invocations compared to a single invocation.

What we test: how is the performance affected by multiple method invocations instead of a single invocation to complete the same task?


Multiple method invocations

private void execMethod()
{
   execMethod1();
}

private void execMethod1()
{
   execMethod2();
}

private void execMethod2()
{
   execMethod3();
}

private void execMethod3()
{
   // Do something
}


Single method invocation

private void execMethod()
{
   // Do something
}


Test results

The results show the average time taken to execute execMethod inside a loop repeated 3000000 times. The test is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Multiple method invocations 656 ms  
Single method invocation 175 ms 73% faster than multiple method invocations


Conclusion

Multiple method invocations can really slow down execution, so if you need to write high-performance code, keep in mind that it is sometimes more important to put all the necessary code in a single method than to spread the same code across different methods or classes. The code structure will not be optimal, but the performance benefit is notable.


TEST 6: POLYMORPHIC METHODS

As we already said in TEST 5, to give a software implementation a better structure that helps reuse and maintainability, it's good to create a modular architecture with multiple classes, so we can implement polymorphic methods that execute a task in a different way depending on the specific class implementation. This is obtained through class inheritance and method overriding. In this test we want to see the impact of method overriding on performance.

What we test: how is the performance affected by method overriding?


Classes inheritance and methods overriding

public class PolyClass1
{
   public void execMethod()
   {
     // Do something
   }
}

public class PolyClass2 extends PolyClass1
{
   @Override
   public void execMethod()
   {
     super.execMethod();
   }
}

public class PolyClass3 extends PolyClass2
{
   @Override
   public void execMethod()
   {
     super.execMethod();
   }
}

public class PolyClass4 extends PolyClass3
{
   @Override
   public void execMethod()
   {
     super.execMethod();
   }
}

PolyClass4 polyClass = new PolyClass4();
polyClass.execMethod();


Direct method invocation

public class PolyClass1
{
   public void execMethod()
   {
     // Do something
   }
}

PolyClass1 polyClass = new PolyClass1();
polyClass.execMethod();


Test results

The results show the average time taken to execute execMethod inside a loop repeated 3000000 times. The test is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Classes inheritance and methods overriding 674 ms  
Direct method invocation 195 ms 71% faster than methods overriding


Conclusion

Class inheritance and polymorphism are good for improving source code structure, but the cost is reduced performance compared to direct method invocations. This is basically the same thing we saw in TEST 5, so we didn't expect anything different, because each method in a subclass calls the corresponding one in the superclass (a method invocation). Since overriding a method is done to create a different version (with a different behavior) of the same method in a specific class, if you absolutely need high performance you could implement different methods that do something in different ways and call them directly, instead of creating inherited classes with overridden methods to get the same result (of course, some code will be repeated in this case).


TEST 7: VIRTUAL VS STATIC METHODS

A method declared in Java is virtual by default. If it doesn't need to access other instance methods or fields, it should be declared static, not only as a good programming practice (it clearly states that the method will not modify anything in a class instance), but also to improve performance.

What we test: is there a difference between virtual and static methods in terms of performance?


Virtual method

public void virtualMethod()
{
   // Do something
}


Static method

public static void staticMethod()
{
   // Do something
}


Test results

The results show the average time taken to execute virtualMethod or staticMethod inside a loop repeated 3000000 times. The test is repeated 100 times to check the average time taken for each complete loop.

  Average loop time Result
Virtual method 152 ms  
Static method 137 ms 10% faster than virtual method


Conclusion

Static methods are faster than virtual methods and since there are no drawbacks in declaring a method as static if it doesn’t use fields or methods specific to a class instance, you should always do that when you can.


TEST 8: ITERATION OVER ARRAY

Starting with Java 1.5, there’s an easy syntax to implement an iteration over arrays or collections instead of hand-writing the usual for loop. This gives performance improvements compared to some hand-written versions of the iteration, while it is indistinguishable from others.

What we test: what’s the difference between using an iterator over an array instead of a hand-written loop in terms of performance and objects allocation?


Iterator

// arr is an Object[] array declared as a class field

for (Object obj : arr)
{
   // Do something
}


Manual iteration

// arr is an Object[] array declared as a class field

int arrLength = arr.length;

for (int i = 0; i < arrLength; i++)
{
   // Do something with arr
}


Manual iteration with local array

// arr is an Object[] array declared as a class field

// We declare a variable localArr that is local to the
// method that implements the iteration (we avoid
// accessing the arr class field inside the loop).
Object[] localArr = arr;

int arrLength = localArr.length;

for (int i = 0; i < arrLength; i++)
{
   // Do something with localArr
}


Test results

The results show the average time taken to loop over an array with length = 5000000. The test is repeated 500 times to check the average time taken for each complete loop. The number of objects allocated in each loop implementation is also shown.

  Average loop time Objects allocation count Result
Iterator 90 ms 0 the same as manual iteration with local array
Manual iteration 102 ms 0 13% slower than iterator or manual iteration with local array
Manual iteration with local array 90 ms 0 the same as iterator


Conclusion

To understand this, take a look at the “Designing for Performance” document in the Android developers guide (see “Use Enhanced For Loop Syntax”). The Iterator case is translated by the compiler into code identical to the Manual iteration with local array case, which is why the performance is the same. With simple arrays there is no difference between a hand-written optimized loop and the iteration syntax, so you should always prefer the latter to improve code readability. In every case there are no object allocations, so we have no problems with the garbage collector. In this test we also see a performance improvement when we store a reference to the array locally in the method that executes the loop, instead of accessing the array directly as a class instance field. This is another thing to keep in mind to improve performance (the lookup time for the array variable is reduced).


TEST 9: ITERATION OVER ARRAYLIST

We want to try the same test we did in TEST 8, but with an ArrayList instead of a simple array. When using the iteration syntax introduced in Java 1.5 with collections (like ArrayList) instead of simple arrays, there’s a difference in the compiled code. In this case an Iterator object is used to implement the iteration so we must also consider the extra objects allocation compared to a hand-written loop without the iteration syntax.

What we test: what’s the difference between using an iterator over an ArrayList instead of a hand-written loop in terms of performance and objects allocation?


Iterator

// arr is an ArrayList<Object> collection declared as a class field

for (Object obj : arr)
{
   // Do something
}


Manual iteration

// arr is an ArrayList<Object> collection declared as a class field

int arrLength = arr.size();

for (int i = 0; i < arrLength; i++)
{
   // Do something with arr
}


Manual iteration with local ArrayList

// arr is an ArrayList<Object> collection declared as a class field

// We declare a variable localArr that is local to the
// method that implements the iteration (we avoid
// accessing the arr class field inside the loop).
ArrayList<Object> localArr = arr;

int arrLength = localArr.size();

for (int i = 0; i < arrLength; i++)
{
   // Do something with localArr
}


Test results

The results show the average time taken to loop over an ArrayList with size = 3000000. The test is repeated 100 times to check the average time taken for each complete loop. The number of objects allocated in each loop implementation is also shown.

  Average loop time Objects allocation count Result
Iterator 481 ms 100 slower than both manual iterations and 100 objects allocated (an Iterator instance for each test repetition)
Manual iteration 246 ms 0 49% faster than iterator and 5% slower than manual iteration with local ArrayList
Manual iteration with local ArrayList 235 ms 0 51% faster than iterator and 5% faster than manual iteration


Conclusion

You might be surprised to see that the Iterator performance is much slower than a hand-written iteration loop, but the reason is simple: to implement the Iterator syntax, the compiler uses an Iterator object, so the loop is made by invoking its hasNext and next methods, and you know from our previous tests that method invocations are slow. A hand-written loop just uses a standard for syntax and doesn't need to invoke any method, resulting in much faster execution. There is another problem with the Iterator syntax: a new Iterator object is allocated for each complete loop execution, which means more objects for the garbage collector to collect. This can lead to poor performance in games, for example, because there could be many small pauses every time the garbage collector executes while the user is playing, making the overall experience not really great. You should choose the Iterator syntax only for applications that don't need high-performance code or continuous rendering (like games), because it helps keep the source code cleaner and easier to read; prefer hand-written iterations when developing games or other similar applications. This test was made with an ArrayList collection and might give different results with other collections, but it shows that even though the Iterator syntax is the same as the one for a standard array (as we saw in TEST 8), the compiled code is much different, and we must consider the drawbacks of this syntax when dealing with collections.


TEST 10: OBJECT POOL

To reduce the work of the garbage collector, all you can do is reduce the number of objects you create and recycle the instances you've already created. You can do this by implementing the Object Pool design pattern. There are different ways to implement an Object Pool, depending on the features you need; a possible implementation is the one you can find in my other post titled “Recycling objects in Android with an Object Pool to avoid garbage collection“. That is the implementation used in this test.

What we test: what’s the impact of an Object Pool in terms of performance and objects allocation compared to the standard object creation without recycling the instances?


Standard object creation

// Example of a standard object creation of an android.graphics.Point instance

Point point = new Point(x, y);


Object Pool

// Example of the creation of an android.graphics.Point instance through an Object Pool

// Object Pool initialization with a maximum capacity of NEEDED_POINTS_COUNT instances.
// The PointPoolObjectFactory is a factory class that creates PointPoolObject
// instances.
ObjectPool pointsPool = new ObjectPool(new PointPoolObjectFactory(), NEEDED_POINTS_COUNT);

// The PointPoolObject class extends android.graphics.Point and implements the
// interface needed to make it work with the Object Pool.
PointPoolObject point = (PointPoolObject)pointsPool.newObject();
point.x = x;
point.y = y;

// When the point instance is not needed anymore, we put it back in the Object Pool
pointsPool.freeObject(point);


Test results

The results show the average time taken to create and initialize 1000 Point instances inside a loop repeated 100 times, simulating a situation where we need 1000 Point instances ready to use at the same time, and we need all of them 100 times during our application's execution (note: with the Object Pool we also need to free the instances and store them back in the pool, while without the pool that is the garbage collector's job). The test is repeated 100 times to check the average time taken for each complete loop. The number of objects allocated in each implementation is also shown.

  Average loop time Objects allocation count Result
Standard object creation 180 ms 10000000 slower than the Object Pool implementation and 10000000 objects allocated (the total time varies depending on the speed of the garbage collector execution)
Object Pool 47 ms 1000 faster than the standard object creation (the garbage collector doesn’t need to work) and only 1000 objects allocated


Conclusion

As you can see, the Object Pool lets us allocate only the total number of Point instances needed at the same time (1000), while with standard object creation we would allocate the objects multiple times without reusing them (1000 points x 100 times x 100 repetitions of the test = 10000000 instances). Of course we could find a strategy to recycle the points even without the Object Pool, but the reason to have an Object Pool is to make this simple and standardized across the source code of our application. As you surely noticed, the test execution takes even less time with the Object Pool, and the reason is the garbage collector: with the Object Pool and only 1000 instances created, the garbage collector doesn't need to work at all, while with standard object creation it needs to collect many instances and runs many times, making everything slower (you can see this if you run the test and check the LogCat view of the DDMS perspective in Eclipse).
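For reference, a minimal generic pool exposing the same newObject/freeObject surface as the snippets above could look like the sketch below. This is NOT the implementation from the referenced post; the Factory interface and the internals are invented for illustration.

```java
import java.util.ArrayDeque;

public class ObjectPool<T> {
    // Invented factory interface: creates a fresh instance when the pool is empty.
    public interface Factory<E> { E create(); }

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Factory<T> factory;
    private final int maxSize;

    public ObjectPool(Factory<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    // Reuse a previously freed instance if one is available,
    // otherwise allocate a new one.
    public T newObject() {
        T obj = free.poll();
        return (obj != null) ? obj : factory.create();
    }

    // Return an instance to the pool for later reuse; instances beyond
    // maxSize are simply dropped and left to the garbage collector.
    public void freeObject(T obj) {
        if (free.size() < maxSize) free.push(obj);
    }
}
```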

