Problems and Solutions for Implementing a Millisecond-Precision Clock on Windows

Problem 1: multiple calls to GetSystemTime / GetLocalTime (the corresponding Java method is System.currentTimeMillis()) within a 15 ms window return the same value.

Solution: use GetSystemTime as a baseline, then use the high-resolution timer Windows provides, QueryPerformanceCounter (the corresponding Java method is System.nanoTime()), to measure elapsed time; the precise clock is baseline + elapsed timer time. A minimal sketch of this hybrid clock follows.
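A minimal sketch of this hybrid clock in Java (class and method names are illustrative, not from the original post): System.currentTimeMillis() supplies the low-resolution baseline, and System.nanoTime() supplies the high-resolution offset.

    import java.util.concurrent.TimeUnit;

    public final class PreciseClock {
        // Low-resolution wall-clock baseline (GetSystemTimeAsFileTime underneath).
        private final long baselineMillis = System.currentTimeMillis();
        // High-resolution anchor (QueryPerformanceCounter underneath, when available).
        private final long baselineNanos = System.nanoTime();

        // Current wall-clock time in milliseconds, with finer than 10-15 ms resolution.
        public long currentTimeMillis() {
            long elapsedNanos = System.nanoTime() - baselineNanos;
            return baselineMillis + TimeUnit.NANOSECONDS.toMillis(elapsedNanos);
        }

        public static void main(String[] args) {
            PreciseClock clock = new PreciseClock();
            System.out.println(clock.currentTimeMillis());
            System.out.println(clock.currentTimeMillis()); // successive reads can now differ by under 15 ms
        }
    }

The two time sources drift apart over long runs, so a production version would periodically re-anchor the baseline against the system clock.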


Problem 2: issues with QueryPerformanceCounter / QueryPerformanceFrequency

This problem depends mainly on how Windows implements QueryPerformanceCounter. Early implementations used the CPU-level timestamp counter (TSC), and because the tick counts on different cores can differ considerably, values computed from this timer were completely unpredictable. An early, fairly simple workaround was to use the timer from only one thread and bind that thread to a single core (though Java offers no way to set a thread's CPU affinity); a rough sketch of the single-thread approach follows.
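A rough sketch of the one-thread workaround in Java, under the stated limitation that affinity cannot be set: all System.nanoTime() reads are confined to a single dedicated thread, which publishes the latest reading through a volatile field. The scheduler may still migrate that thread between cores, so this mitigates rather than eliminates cross-core TSC skew (names are illustrative).

    public final class SingleThreadTicker {
        private volatile long lastNanos = System.nanoTime();

        public SingleThreadTicker() {
            Thread t = new Thread(() -> {
                while (true) {
                    lastNanos = System.nanoTime(); // every read happens on this one thread
                    try {
                        Thread.sleep(1);           // refresh roughly once per millisecond
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, "clock-ticker");
            t.setDaemon(true);
            t.start();
        }

        // Latest timer reading taken by the dedicated thread; callers on any
        // thread see a value sampled from a single, consistent counter source.
        public long nanos() {
            return lastNanos;
        }
    }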

On Windows XP SP2 and Windows Server 2003 SP2 and later systems, QueryPerformanceCounter can instead use the power management timer (PMTimer); adding the /usepmtimer option to boot.ini selects this implementation. Because the PMTimer is a timer on the motherboard, it has no multi-core synchronization problem. An example boot.ini entry is shown below.
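For illustration only, a boot.ini with the switch applied might look like the following; the ARC path and OS description vary from machine to machine, and the only part that matters here is the trailing /usepmtimer switch:

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /usepmtimer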

 

Summary:

1. Check the value QueryPerformanceFrequency reports (the frequency comes back through an out parameter):

   value = 3,579,545: the system is using the PMTimer  --> step 3

   value ≈ the CPU frequency: the system is using the TSC  --> step 2

2. If the operating system is Windows XP SP2 / Windows Server 2003 SP2 or later, add the /usepmtimer parameter to c:\boot.ini (see the example above) and reboot the system.

3. Implement the precise clock with GetSystemTime + QueryPerformanceCounter, as sketched under Problem 1. Java cannot call QueryPerformanceFrequency directly, so a rough timing heuristic for step 1 is sketched after this list.
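The following probe is a heuristic, not a definitive check: as the Holmes article quoted below explains, PIT/PMT-backed QueryPerformanceCounter calls cost microseconds (slow I/O port access) while TSC-backed reads cost on the order of 100 cycles, so timing System.nanoTime() itself hints at which source backs it. The class name and threshold are assumptions for illustration.

    public final class QpcSourceProbe {
        public static void main(String[] args) {
            final int samples = 1_000_000;
            long start = System.nanoTime();
            for (int i = 0; i < samples; i++) {
                System.nanoTime(); // each iteration pays one QueryPerformanceCounter call
            }
            long perCallNanos = (System.nanoTime() - start) / samples;
            System.out.println("approx. cost per nanoTime() call: " + perCallNanos + " ns");
            if (perCallNanos >= 1_000) {
                System.out.println("microsecond-level cost: likely PIT/PMT-backed (port I/O)");
            } else {
                System.out.println("sub-microsecond cost: likely TSC-backed");
            }
        }
    }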

References:

---------------------------------------------------------------------------------

 

 

http://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks

 

Windows' use of clocks and timers varies considerably from platform to platform and is plagued by problems - again this isn't necessarily Windows' fault, just as it wasn't the VM's fault: the hardware support for clocks/timers is actually not very good - the references at the end lead you to more information on the timing hardware available. The following relates to the "NT" family (Win 2K, XP, 2003) of Windows.

There are a number of different "clock" APIs available in Windows. Those used by Hotspot are as follows:

  • System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low-resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10 ms or 15 ms (this value seems tied to the default interrupt period); a probe sketch after this list makes this granularity visible.
  • System.nanoTime() is implemented using the QueryPerformanceCounter/QueryPerformanceFrequency API (if available, else it returns currentTimeMillis() * 10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable interval timer (PIT), the ACPI power management timer (PMT), or the CPU-level timestamp counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions, and as a result the execution time for QPC is on the order of microseconds. In contrast, reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking whether QueryPerformanceFrequency returns the signature value of 3,579,545 (i.e. 3.57 MHz). If you see a value around 1.19 MHz then your system is using the old 8254 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power management that might be in effect).

    The default mechanism used by QPC is determined by the Hardware Abstraction Layer (HAL), but some systems allow you to explicitly control it using options in boot.ini, such as /usepmtimer, which explicitly requests use of the power management timer. This default changes not only across hardware but also across OS versions. For example, Windows XP Service Pack 2 changed things to use the power management timer (PMTimer) rather than the processor timestamp counter (TSC), due to problems with the TSC not being synchronized across different processors in SMP systems, and due to the fact that its frequency can vary (and hence its relationship to elapsed time) based on power-management settings. (The issues with the TSC, in particular for AMD systems, and how AMD aims to provide a stable TSC in future processors, are discussed in Rich Brunner's article referenced below. You can also read how the Linux kernel folk have abandoned use of the TSC until a new stable version appears in CPUs.)
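A small probe sketch (not from the article) that makes the time-of-day granularity described in the first bullet visible: it spins on System.currentTimeMillis() and prints how far apart consecutive distinct values land. On a default Windows setup you would expect steps of roughly 10 ms or 15-16 ms.

    public final class MillisGranularityProbe {
        public static void main(String[] args) {
            long last = System.currentTimeMillis();
            for (int observed = 0; observed < 10; ) {
                long now = System.currentTimeMillis();
                if (now != last) {                   // the global time-of-day value just ticked
                    System.out.println("step: " + (now - last) + " ms");
                    last = now;
                    observed++;
                }
            }
        }
    }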

The timer-related APIs for doing timed waits all use the WaitForMultipleObjects API as previously mentioned. This API only accepts timeout values in milliseconds, and its ability to recognize the passage of time is based on the timer interrupt programmed through the hardware.

Typically a Windows machine has a default 10 ms timer interrupt period, but some systems have a 15 ms period. This timer interrupt period may be modified by application programs using the timeBeginPeriod/timeEndPeriod APIs. The period is still limited to milliseconds, and there is no guarantee that a requested period will be supported. However, usually you can request a 1 ms timer interrupt period (though its accuracy has been questioned in some reports). The Hotspot VM in fact uses this 1 ms period to allow for higher-resolution Thread.sleep calls than would otherwise be possible. The sample Sleeper.java will cause this higher interrupt rate to be used, thus allowing experimentation with a 1 ms versus 10 ms period. It simply calls Thread.sleep(Integer.MAX_VALUE), which (because it is not a multiple of 10 ms) causes the VM to switch to a 1 ms period for the duration of the sleep - which in this case is "forever", so you'll have to Ctrl-C the "java Sleeper" execution.

    public class Sleeper {
        public static void main(String[] args) throws Throwable {
            // Not a multiple of 10 ms, so Hotspot raises the interrupt rate to 1 ms
            // for the duration of this (effectively unbounded) sleep.
            Thread.sleep(Integer.MAX_VALUE);
        }
    }

You can see what interrupt period is being used in Windows by running the perfmon tool. After you bring it up you'll need to add a new item to watch (click the + icon above the graph - even if it appears grayed/disabled). Select the interrupts/sec item and add it. Then right-click on interrupts/sec under the graph and edit its properties. On the "data" tab, change the "scale" to 1, and on the graph tab set the vertical max to 1000. Let the system settle for a few seconds and you should see the graph drawing a steady line. If you have a 10 ms interrupt the value will be 100; for 1 ms it will be 1000; for 15 ms it will be 66.6, etc. Note: on a multiprocessor system, watch the interrupts/sec for each processor individually, not the total - one processor will be fielding the timer interrupts.

Note that any application can change the timer interrupt and that it affects the whole system. Windows only allows the period to be shortened, thus ensuring that the shortest requested period by all applications is the one that is used. If a process doesn't reset the period then Windows takes care of it when the process terminates. The reason why the VM doesn't just arbitrarily change the interrupt rate when it starts - it could do this - is that there is a potential performance impact to everything on the system due to the 10x increase in interrupts. However other applications do change it, typically multi-media viewers/players. Be aware that a browser running the JVM as a plug-in can also cause this change in interrupt rate if there is an applet running that uses the Thread.sleep method in a similar way to Sleeper.

Further note that after Windows suspends or hibernates, the timer interrupt is restored to the default, even if an application using a higher interrupt rate was running at the time of suspension/hibernation.

 


 

 

Reposted from: https://www.cnblogs.com/slime/archive/2011/06/14/2081071.html
