Submitted by Suresh Nageswaran on Mon, 28/02/2005 - 13:04.
jMeter | Mercury LoadRunner | OpenSTA | performance testing
In _*The Republic*_, Plato conjectured a dual-level reality, one representation of which is known as the divided line:
!http://oregonstate.edu/instruct/phl201/images/philosophers/plato/divided_line.gif!
Above the line lie the attributes of objective reality; below the line, those of relative reality. This isn't so different from the gap between the times users actually experience and the response times measured in engineered tests. The problem, then, is to know whether a tool, even a favourite one, is telling an objective truth or a relative one! :)
So here's the real *_masala_* in our performance-testing tool shootout: how did the numbers stack up? We ran tests with *1* virtual user (vUser) over multiple iterations, then with *8* vUsers over multiple iterations. We stopped at *8* because our server couldn't handle more requests without timing out - apparently *9* is the magic figure. Still, these figures are "all things being equal", i.e. same application, same hardware. All response times below are in seconds.
**1 vUser, 20 Iterations**
|_. Transaction|_. LR|_. OpenSTA|_. JMeter|
(dark). |MITOURS_01_HOME|1.152|1.16|0.466|
(dark). |MITOURS_02_LOGIN|1.239|0.7985|0.447|
(dark). |MITOURS_03_CLICK_FLIGHTS|1.598|0.6375|0.669|
(dark). |MITOURS_04_SEARCH_FLIGHT|0.4|0.172|0.223|
(dark). |MITOURS_05_SELECT_FLIGHT|0.222|0.2015|0.22|
(dark). |MITOURS_06_PURCHASE_FLIGHT|0.221|0.21|0.222|
(dark). |MITOURS_07_LOGOUT|0.982|0.4|0.449|
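Even at a single vUser, the tools disagree in absolute terms. As a quick illustration (my own summary, not something any of the tools reports), a few lines of Python can average each tool's column from the table above:

```python
# Per-transaction response times in seconds, transcribed from the
# 1 vUser / 20 Iterations table above (7 transactions per tool).
times = {
    "LR":      [1.152, 1.239, 1.598, 0.400, 0.222, 0.221, 0.982],
    "OpenSTA": [1.160, 0.7985, 0.6375, 0.172, 0.2015, 0.210, 0.400],
    "JMeter":  [0.466, 0.447, 0.669, 0.223, 0.220, 0.222, 0.449],
}

# Mean response time per tool - a rough single-number summary.
means = {tool: sum(ts) / len(ts) for tool, ts in times.items()}
for tool, m in means.items():
    print(f"{tool}: {m:.3f} s")
# LR ~ 0.831 s, OpenSTA ~ 0.511 s, JMeter ~ 0.385 s
```

Same application, same hardware, same transactions - yet the per-tool averages differ by a factor of two. Relative truths indeed.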
**8 vUsers, 10 Minutes**
|_. Transaction|_. LR|_. OpenSTA|_. JMeter|
(dark). |MITOURS_01_HOME|1.343|1.247|0.561|
(dark). |MITOURS_02_LOGIN|1.384|0.835|0.537|
(dark). |MITOURS_03_CLICK_FLIGHTS|1.778|0.721|0.736|
(dark). |MITOURS_04_SEARCH_FLIGHT|0.488|0.071|0.266|
(dark). |MITOURS_05_SELECT_FLIGHT|0.354|0.259|0.235|
(dark). |MITOURS_06_PURCHASE_FLIGHT|0.349|0.305|0.265|
(dark). |MITOURS_07_LOGOUT|1.114|0.445|0.48|
**8 vUsers, 20 Iterations**
|_. Transaction|_. LR|_. OpenSTA|_. JMeter|
(dark). |MITOURS_01_HOME|1.323|1.266|0.5|
(dark). |MITOURS_02_LOGIN|1.427|0.827|0.484|
(dark). |MITOURS_03_CLICK_FLIGHTS|1.761|0.717|0.743|
(dark). |MITOURS_04_SEARCH_FLIGHT|0.48|0.080|0.248|
(dark). |MITOURS_05_SELECT_FLIGHT|0.312|0.244|0.253|
(dark). |MITOURS_06_PURCHASE_FLIGHT|0.364|0.318|0.255|
(dark). |MITOURS_07_LOGOUT|1.16|0.445|0.481|
To see how the figures relate, I had MS Excel produce a correlation matrix of the three tools' response times for the **8 vUsers, 20 Iterations** test. Here's what came out.
|_. |_. LR|_. OpenSTA|_. JMeter|
(dark). |*LR*|1|-|-|
(dark). |*OpenSTA*|0.747|1|-|
(dark). |*JMeter*|0.962|0.650|1|
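For anyone without Excel handy, the same matrix can be reproduced in a few lines of Python. This is a minimal sketch using the 8 vUsers, 20 Iterations figures (OpenSTA values rounded to three decimals, so the results agree with Excel's to about the third decimal):

```python
# Per-transaction response times (seconds) from the
# "8 vUsers, 20 Iterations" table above.
lr      = [1.323, 1.427, 1.761, 0.480, 0.312, 0.364, 1.160]
opensta = [1.266, 0.827, 0.717, 0.080, 0.244, 0.318, 0.445]
jmeter  = [0.500, 0.484, 0.743, 0.248, 0.253, 0.255, 0.481]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(lr, jmeter), 3))       # LR vs JMeter      -> 0.962
print(round(pearson(lr, opensta), 3))      # LR vs OpenSTA     -> 0.747
print(round(pearson(jmeter, opensta), 3))  # JMeter vs OpenSTA -> 0.650
```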
*My observation*:
JMeter and LoadRunner results track each other very closely (a correlation of about 0.96), while OpenSTA correlates with the other two at only about 0.65-0.75 - off the mark by roughly 30%. Could this be down to the replay engine each tool uses? LoadRunner and JMeter both replay through a native socket engine; OpenSTA might be using a WinInet engine instead - a look into the OpenSTA source should answer that question.