# Performance of Java versus C++

J.P.Lewis and Ulrich Neumann
Computer Graphics and Immersive Technology Lab
University of Southern California

www.idiom.com/~zilla

This article surveys a number of benchmarks and finds that Java performance on numerical code is comparable to that of C++, with hints that Java's relative performance is continuing to improve. We then describe clear theoretical reasons why these benchmark results should be expected.

## Benchmarks

The five sets of benchmarks listed below show that modern Java has acceptable performance, being nearly equal to (and in many cases faster than) C/C++.

1. Numerical Kernels

Benchmarking Java against C and Fortran for Scientific Applications
Mark Bull, Lorna Smith, Lindsay Pottage, Robin Freeman,
EPCC, University of Edinburgh (2001).

The authors test some real numerical codes (FFT, matrix factorization, SOR, fluid solver, N-body) on several architectures and compilers. On Intel they found that Java performance was very reasonable compared to C (e.g., 20% slower), and that Java was faster than at least one C compiler (the KAI compiler on Linux).

The authors conclude, "On Intel Pentium hardware, especially with Linux, the performance gap is small enough to be of little or no concern to programmers."

2. More numerical methods: SciMark2 scores

R. F. Boisvert, J. Moreira, M. Philippsen, R. Pozo,
Java and Numerical Computing,
Computing in Science & Engineering, 3(2):18-24, Mar.-Apr., 2001.

SciMark includes a number of numerical codes. Scores on a PIII/500, in MFlops (higher is better):

| Platform | MFlops |
| --- | --- |
| ibm jdk 1.3.0 | 84.5 |
| linux2.2 gcc (2.9x) -O6 | 87.1 |
3. Still more numerical methods

From the book Object-Oriented Implementations of Numerical Methods by Didier Besset (MorganKaufmann, 2001):

| Operation | Units | C | Smalltalk | Java |
| --- | --- | --- | --- | --- |
| Polynomial, 10th degree | msec | 1.1 | 27.7 | 9.0 |
| Neville interpolation (20 points) | msec | 0.9 | 11.0 | 0.8 |
| LUP matrix inversion (100 x 100) | sec | 3.9 | 22.9 | 1.0 |

4. Microbenchmarks (cache effects considered)

Several years ago these benchmarks showed Java performance at the time to be somewhere in the middle of the range of C compiler performance: faster than the worst C compilers, slower than the best. These are "microbenchmarks", but they have the advantage that they were run across a number of different problem sizes, so the results do not reflect a lucky cache interaction (see more details on this issue in the next section).

These benchmarks were updated with a more recent Java (1.4) and gcc (3.2), using full optimization (gcc -O3 -mcpu=pentiumpro -fexpensive-optimizations -fschedule-insns2...). This time Java is faster than C in the majority of the tests, by a factor of more than two in some cases...

... suggesting that Java performance is catching up with, or even pulling ahead of, gcc at least.

These tests were mostly integer (except for an FFT).

5. Microbenchmarks (cache effects not considered)

In January 2004 OSNews.com posted an article, Nine Language Performance Round-up: Benchmarking Math & File I/O [4]. These are simple numeric and file I/O loops, and no doubt suffer from the arbitrary cache interaction factor described below. They were, however, run under several different compilers, which helps. Again, Java is competitive with (actually slightly faster than) several C++ compilers, including Visual C++, in the majority of the benchmarks.

(One exceptional benchmark tested trigonometry library calls. Java numerical programmers are aware that these calls became slower in Java 1.4; recent benchmarks suggest this issue was fixed in Java 1.4.2.)

Note that these benchmarks are on Intel architecture machines. Java compilers on some other processors are less developed at present.

## And In Theory: Maybe Java Should be Faster

Java proponents have stated that Java will soon be faster than C. Why? Several reasons (also see reference [1]):

### 1) Pointers make optimization hard

This is a reason why C is generally a bit slower than Fortran.

In C, consider the code

    x = y + 2 * (...);
    *p = ...;
    arr[j] = ...;
    z = x + ...;
Because p could be pointing at x, a C compiler cannot keep x in a register and instead has to write it to cache and read it back -- unless it can figure out where p is pointing at compile time. And because arrays act like pointers in C/C++, the same is true for assignment to array elements: arr[j] could also modify x.

This pointer problem in C resembles the array bounds checking issue in Java: in both cases, if the compiler can determine the array (or pointer) index at compile time it can avoid the issue.

In the loop below, for example, a Java compiler can trivially avoid testing the lower array bound because the loop counter is only incremented, never decremented. A single test before starting the loop handles the upper bound, provided 'len' is not modified inside the loop (and since Java has no pointers, simply looking for an assignment to 'len' is enough to determine this):

    for (int i = 0; i < len; i++) { a[i] = ...; }

In cases where the compiler cannot determine the necessary information at compile time, the C pointer problem may actually be the bigger performance hit. In the Java case, the loop bound(s) can be kept in registers, and the index is certainly in a register, so only a register-register test is needed. In the C/C++ case a load from memory is needed.
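As a simplified sketch of the hoisting described above (class and method names are invented for illustration, and this is not actual JIT output), a single range check before the loop stands in for the per-iteration test:

```java
// Simplified sketch of bounds-check hoisting (illustrative names, not
// JIT output). One range check before the loop replaces the
// per-iteration test; real JITs use loop versioning to preserve the
// exact semantics of a mid-loop failure.
public class BoundsHoist {
    static void fill(double[] a, int len) {
        if (len > a.length) {                 // hoisted upper-bound check
            throw new ArrayIndexOutOfBoundsException(len - 1);
        }
        for (int i = 0; i < len; i++) {       // i only increases, so the
            a[i] = i;                         // lower bound needs no test
        }
    }

    public static void main(String[] args) {
        double[] a = new double[4];
        fill(a, 4);
        System.out.println(a[3]); // prints 3.0
    }
}
```

An out-of-range 'len' still raises ArrayIndexOutOfBoundsException, but an in-range loop pays no per-element cost.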

### 2) Garbage collection- is it worse...or better?

Most programmers say garbage collection is, or should be, slow, with no reason given; it's assumed but never discussed. Some computer language researchers say otherwise.

Consider what happens when you do a new/malloc: (a) the allocator wanders through some free lists looking for a slot of the right size, then returns you a pointer; (b) that pointer points to a fairly arbitrary place in memory.

With GC, (a) the allocator doesn't need to look for memory; it knows where the next allocation goes. (b) The memory it returns is adjacent to the last bit of memory you requested. The wandering-around part happens only at garbage collection time, not on every allocation. And then (depending on the GC algorithm) things get moved as well, of course.
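A minimal Java sketch of the allocation pattern in question (class and method names are invented for illustration):

```java
// Sketch: many small allocations in a row. Under a compacting or
// copying collector each 'new' is roughly a pointer increment, and
// consecutive nodes land next to each other, so the traversal below
// walks nearly-contiguous memory.
public class AllocSketch {
    static final class Node {
        final int value;
        final Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    static long buildAndSum(int n) {
        Node head = null;
        for (int i = 0; i < n; i++) {
            head = new Node(i, head);   // bump-pointer allocation
        }
        long sum = 0;
        for (Node p = head; p != null; p = p.next) {
            sum += p.value;             // good locality: nodes are adjacent
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(buildAndSum(1000)); // prints 499500
    }
}
```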

### The cost of missing the cache

The big benefit of GC is memory locality. Because newly allocated memory is adjacent to the memory recently used, it is more likely to already be in the cache.

How much of an effect is this? One rather dated (1993) example shows that missing the cache can be a big cost: changing an array size in a small C program from 1023 to 1024 results in a slowdown of 17 times (not 17%). This is like switching from C to VB! This particular program stumbled across what was probably the worst possible cache interaction for that particular processor (MIPS); the effect isn't that bad in general... but with processor speeds increasing faster than memory speeds, missing the cache is probably an even bigger cost now than it was then.
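This class of effect is easy to provoke deliberately. In the sketch below (names invented; the exact slowdown varies by machine), the two methods do identical arithmetic but touch memory in different orders; on large matrices the column-order walk is typically much slower, purely from cache behavior:

```java
// Sketch: identical arithmetic, different memory access order. The
// column-order loop strides across cache lines and is typically much
// slower on large matrices, purely from cache effects.
public class StrideSketch {
    static double sumRowOrder(double[][] m) {
        double s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];           // contiguous accesses
        return s;
    }

    static double sumColOrder(double[][] m) {
        double s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];           // large-stride accesses
        return s;
    }

    public static void main(String[] args) {
        double[][] m = new double[100][100];
        for (int i = 0; i < 100; i++)
            for (int j = 0; j < 100; j++)
                m[i][j] = 1.0;
        System.out.println(sumRowOrder(m)); // prints 10000.0
        System.out.println(sumColOrder(m)); // same result, different speed
    }
}
```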

(It's easy to find other research studies demonstrating this; here's one from Princeton: they found that (garbage-collected) ML programs translated from the SPEC92 benchmarks have lower cache miss rates than the equivalent C and Fortran programs.)

This is theory, what about practice? In a well known paper [2] several widely used programs (including perl and ghostscript) were adapted to use several different allocators including a garbage collector masquerading as malloc (with a dummy free()). The garbage collector was as fast as a typical malloc/free; perl was one of several programs that ran faster when converted to use a garbage collector. Another interesting fact is that the cost of malloc/free is significant: both perl and ghostscript spent roughly 25-30% of their time in these calls.

Besides the improved cache behavior, also note that automatic memory management allows escape analysis, which identifies local allocations that can be placed on the stack. (Stack allocations are clearly cheaper than heap allocation of either sort).
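A brief sketch of the kind of allocation escape analysis targets (Point here is an invented helper class, not a library type; whether the JVM actually performs the stack allocation depends on the implementation):

```java
// Sketch: the temporary Point never escapes distance(), so a JVM's
// escape analysis can replace the heap allocation with stack slots
// (scalar replacement). Point is a hypothetical helper class.
public class EscapeSketch {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distance(double x1, double y1, double x2, double y2) {
        Point d = new Point(x2 - x1, y2 - y1); // allocation does not escape
        return Math.sqrt(d.x * d.x + d.y * d.y);
    }

    public static void main(String[] args) {
        System.out.println(distance(0, 0, 3, 4)); // prints 5.0
    }
}
```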

### 3) Run-time compilation

The JIT compiler knows more than a conventional "pre-compiler", and it may be able to do a better job given the extra information:

• The compiler knows what processor it is running on, and can generate code specifically for that processor. It knows whether (for example) the processor is a PIII or P4, if SSE2 is present, and how big the caches are. A pre-compiler on the other hand has to target the least-common-denominator processor, at least in the case of commercial software.

• Because the compiler knows which classes are actually loaded and being called, it knows which methods can be de-virtualized and inlined. (Remarkably, modern java compilers also know how to "uncompile" inlined calls in the case where an overriding method is loaded after the JIT compilation happens.)

• A dynamic compiler may also get the branch prediction hints right more often than a static compiler.
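The devirtualization point can be sketched as follows (an illustrative example, not from the article; whether the JIT actually inlines depends on the JVM):

```java
// Sketch: if Square is the only loaded Shape implementation, the JIT
// can devirtualize and inline the s.area() call below; loading another
// implementation later forces it to "uncompile" (deoptimize) that code.
// All names here are illustrative.
public class DevirtSketch {
    interface Shape { double area(); }

    static final class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // monomorphic call site: devirtualization candidate
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2), new Square(3) };
        System.out.println(totalArea(shapes)); // prints 13.0
    }
}
```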

It might also be noted that Microsoft has some similar comments regarding C# performance [5]:
• "Myth: JITed Programs Execute Slower than Precompiled Programs"

• .NET still provides a traditional pre-compiler ngen.exe, but "since the run-time only optimizations cannot be provided... the code is usually not as good as that generated by a normal JIT."

## Speed and Benchmark Issues

Benchmarks usually lead to extensive and heated discussion in popular web forums. From our point of view there are several reasons why such discussions are mostly "hot air".

### What is slow?

The notion of "slow" in popular discussions is often poorly calibrated. If you write a number of small benchmarks in several different types of programming language, the broad view of performance might be something like this:

| Language class | Typical slowdown |
| --- | --- |
| Assembler | 1x |
| Low-level compiled (Fortran, C) | 1-2x |
| Byte-code (Python) | 25-50x |
| Interpreted strings (csh, tcl?) | 250x |

Despite this big picture, performance differences of less than a factor of two are often upheld as evidence in speed debates. As we describe next, differences of 2x-4x or more are often just noise.

### Don't characterize the speed of a language based on a single benchmark of a single program.

We often see people drawing conclusions from a single benchmark. For example, an article posted on slashdot.org [3] claims to address the question "Which programming language provides the fastest tool for number crunching under Linux?", yet it discusses only one program.

Why isn't one program good enough?

For one, it's common sense: the compiler may happen to do particularly well or particularly poorly on the inner loop of the program, and this doesn't generalize. The fourth set of benchmarks above shows Java as being faster than C by a factor of two on an FFT of an array of one particular size. Should you now proclaim that Java is always twice as fast as C? No, it's just one program.

There is a more important issue than the code quality on the particular benchmark, however:

Cache/Memory effects.

Look at the FFT microbenchmark that we referenced above. The figure is reproduced here with permission:

On this single program, depending on the input size, the relative performance of 'IBM' (IBM's Java) varies from about twice as slow to twice as fast as 'max-C' (gcc with -O3 -lm -s -static -fomit-frame-pointer -mpentiumpro -march=pentiumpro -malign-functions=4 -funroll-all-loops -fexpensive-optimizations -malign-double -fschedule-insns2 -mwide-multiply -finline-functions -fstrict-aliasing). So what do we conclude from this benchmark? Java is twice as fast as C, or twice as slow, or ...

This performance variation due to factors of data placement and size is universal. A more dramatic example of such cache effects is the link mentioned in the discussion on garbage collection above.

The person who posted [3] demonstrated the fragility of his own benchmark in a follow-up post, writing that "Java now performs as well as gcc on many tests" after changing something (note that it was not the Java language that changed).

## Conclusions: Why is "Java is Slow" so Popular?

Java is now nearly equal to (or faster than) C++ on low-level and numeric benchmarks. This should not be surprising: Java is a compiled language (albeit JIT compiled).

Nevertheless, the idea that "java is slow" is widely believed. Why this is so is perhaps the most interesting aspect of this article.

Let's look at several possible reasons:

• Java circa 1995 was slow. The first incarnations of Java did not have a JIT compiler, and hence were bytecode-interpreted (like Python, for example). JIT compilers appeared in JVMs from Microsoft and Symantec, and in Sun's Java 1.2.

This explanation is implausible. Most "computer folk" can rattle off the exact speed in GHz of the latest processors, and they have tracked this information as it changes month by month for years. Yet this explanation asks us to believe that they are unable to remember that a single, rather important change in language speed occurred back in 1996.

• Java can still be slow. For example, programs written with the thread-safe Vector class are necessarily slower (on a single processor at least) than those written with the equivalent thread-unsafe ArrayList class.

This explanation is equally unsatisfying, because C++ and other languages have similar "abstraction penalties". For example, the Kernighan and Pike book The Practice of Programming includes a table with the following entries, describing the performance of several implementations of a text-processing program:

| Version | 400 MHz PII |
| --- | --- |
| C | 0.30 sec |
| C++/STL/deque | 11.2 sec |
| C++/STL/list | 1.5 sec |

Another evidently well-known problem in C++ is the overhead of returning an object from a function (several unnecessary object create/copy/destruct cycles are involved).

• Java program startup is slow. As a Java program starts, it unzips the Java libraries and compiles parts of itself, so an interactive program can be sluggish for the first couple of seconds of use.

This comes closest to being a reasonable explanation for the speed myth. But while it might explain users' impressions, it does not explain why many programmers (who can easily understand the idea of an interpreted program being compiled) share the belief.

Two of the most interesting observations regarding this issue are that:
1. there is a similar "garbage collection is slow" myth that persists despite decades of evidence to the contrary, and
2. that in web flame wars, people are happy to discuss their speed impressions for many pages without ever referring to actual data.
Together these suggest that it is possible that no amount of data will alter people's beliefs, and that in actuality these "speed beliefs" probably have little to do with Java, garbage collection, or the otherwise stated subject. The answer probably lies somewhere in sociology or psychology. Programmers, despite their professed appreciation of logical thought, are not immune to a kind of mythology, though these particular "myths" are arbitrary and relatively harmless.

### Acknowledgements

Ian Rogers and Curt Fischer clarified some points.

### References

[1] K. Reinholtz, Java will be faster than C++, ACM Sigplan Notices, 35(2): 25-28 Feb 2000.

[2] Benjamin Zorn, The Measured Cost of Conservative Garbage Collection, Software: Practice and Experience, 23(7): 733-756, 1993.

[3] Linux Number Crunching: Languages and Tools, referenced on slashdot.org

[4] Christopher W. Cowell-Shah, Nine Language Performance Round-up: Benchmarking Math & File I/O, appeared at OSnews.com, Jan. 2004.

[5] E. Schanzer, Performance Considerations for Run-Time Technologies in the .NET Framework, Microsoft Developer Network article.
