Personal Notes on the Paper "Anytime Dynamic A*: An Anytime, Replanning Algorithm"

This paper mainly discusses three kinds of algorithms:

1. Dynamic Replanning Algorithms: the dynamic replanning methods we often hear about, which efficiently repair an existing plan when the environment (i.e., edge costs) changes. The paper gives a brief introduction using D* Lite as the main example.

2. Anytime Algorithms: this class of algorithms does not insist on planning a minimum-cost path; instead, the requirement is to produce an epsilon-suboptimal solution within a limited time, which can be understood as a suboptimal solution whose cost is bounded by a factor epsilon of the optimal cost.

3. Anytime Dynamic A* (abbreviated AD*): a new algorithm proposed by combining the D* Lite and ARA* algorithms above.


The D* Lite Algorithm

D* Lite aims to plan a minimum-cost path from the start to the goal. To this end, the algorithm maintains two estimates for each state:

g(s) ---> the estimated cost from state s to the goal

rhs(s) ---> a one-step lookahead cost based on the g-values of the successors of s:

rhs(s) = 0                                        if s = s_goal
rhs(s) = min over s' in Succ(s) of ( c(s,s') + g(s') )   otherwise
where Succ(s) denotes the set of successor states of s, and c(s,s') denotes the cost of traversing from s to s', i.e., what we usually call the edge weight.

See the paper for a detailed treatment.

The full D* Lite pseudocode appears in the paper; the original figure is not reproduced here. A minimal sketch of the core bookkeeping is given below.
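As a hedged illustration only: the following Python sketch implements the g/rhs bookkeeping described above. The graph interface (succ, cost), the heuristic h, and the lazy re-insertion into the OPEN heap are my own simplifications, and the km term D* Lite uses to handle a moving start is omitted.

```python
# Minimal sketch of D* Lite's core state bookkeeping (not the paper's
# full pseudocode). g and rhs are dicts; missing entries default to INF.
import heapq

INF = float("inf")

def calc_rhs(s, s_goal, succ, cost, g):
    # One-step lookahead: rhs(s) = 0 at the goal, otherwise
    # min over successors s' of c(s, s') + g(s').
    if s == s_goal:
        return 0.0
    return min((cost[(s, sp)] + g.get(sp, INF) for sp in succ[s]),
               default=INF)

def calc_key(s, s_start, g, rhs, h):
    # Lexicographic priority of s in the OPEN queue
    # (km term for a moving robot omitted).
    v = min(g.get(s, INF), rhs.get(s, INF))
    return (v + h(s_start, s), v)

def update_vertex(s, s_goal, s_start, succ, cost, g, rhs, h, open_heap):
    # Recompute rhs(s); if s is now inconsistent (g(s) != rhs(s)),
    # (re)insert it into OPEN with its new key. Lazy re-insertion is a
    # simplification of the paper's priority-queue update operation.
    if s != s_goal:
        rhs[s] = calc_rhs(s, s_goal, succ, cost, g)
    if g.get(s, INF) != rhs.get(s, INF):
        heapq.heappush(open_heap, (calc_key(s, s_start, g, rhs, h), s))
```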

The ARA* Algorithm


This section introduces an A*-based algorithm that takes planning time into account. In many situations the shortest path is not necessarily what we want; we also need to consider time and other factors. What we really want is a good, feasible result within a limited amount of time, and ARA* addresses exactly this kind of problem.

Recall from A* that in f(n) = g(n) + h(n), the choice of h(n) matters:

Let d(n) denote the true distance from state n to the goal. The choice of h(n) then falls roughly into three cases:

1. If h(n) < d(n), the actual distance to the goal: the search expands many nodes over a large area and is inefficient, but an optimal solution is guaranteed.
2. If h(n) = d(n), i.e., the heuristic equals the true shortest distance: the search proceeds strictly along the shortest path, and efficiency is maximal.
3. If h(n) > d(n): the search expands few nodes over a small area and is fast, but an optimal solution is no longer guaranteed.

The third case says that when h(n) > d(n), the search expands fewer nodes and produces a solution quickly. ARA* exploits exactly this point (at least that is my understanding; corrections are welcome).

In the computation of key(s), ARA* uses epsilon * h(s_start, s) with epsilon > 1 (for example, 2.5 and 1.5 in the later experiments). This yields a solution quickly; epsilon is then decreased and planning continues toward newer, better solutions (optimal when epsilon = 1). If the given time limit is reached, the algorithm outputs the latest feasible solution, which is an epsilon-suboptimal solution.

The paper does not describe how epsilon should be decreased. A sketch of the overall anytime loop follows.
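As an illustration of the anytime idea only (not the paper's algorithm: real ARA* reuses search effort between iterations instead of searching from scratch, and the epsilon schedule below is my own assumption), a naive anytime wrapper around weighted A* might look like this:

```python
# Illustrative anytime loop around weighted A*, where weighted_astar
# runs a search with f(n) = g(n) + eps * h(n). The schedule that lowers
# eps (start at 2.5, subtract 0.5 per round) is an assumption; the
# decrease strategy is not specified in the paper.
import time

def anytime_plan(weighted_astar, start, goal, time_budget_s,
                 eps0=2.5, eps_step=0.5):
    # Return the best solution found before the deadline and its bound:
    # the returned path costs at most best_eps times the optimal cost.
    deadline = time.monotonic() + time_budget_s
    eps, best_path, best_eps = eps0, None, None
    while time.monotonic() < deadline:
        path = weighted_astar(start, goal, eps)
        if path is not None:
            best_path, best_eps = path, eps
        if eps <= 1.0:
            break  # solution is provably optimal; nothing left to improve
        eps = max(1.0, eps - eps_step)
    return best_path, best_eps
```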



The AD* Algorithm

As shown in the previous sections, there exist efficient algorithms for coping with dynamic environments (e.g. D* and D* Lite), and complex planning problems (ARA*). However, what about when we are facing both complex planning problems and dynamic environments at the same time?

AD* is the new algorithm proposed to handle both of the situations above at once. Its pseudocode is given in the paper (the figure is not reproduced here); a sketch of its key computation follows.
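The paper's key(s) function combines the two ideas: over-consistent states (g(s) > rhs(s)) get the ARA*-style inflated heuristic, while under-consistent states do not, so cost increases propagate with an uninflated priority. Below is a Python sketch of that definition; the dictionary-based g/rhs storage is my own convention.

```python
# Sketch of AD*'s state key, following the paper's definition.
# g and rhs are dicts mapping states to values; missing entries are INF.
INF = float("inf")

def ad_star_key(s, s_start, g, rhs, h, eps):
    # Lexicographically compared priority pair for state s in OPEN.
    gs, rs = g.get(s, INF), rhs.get(s, INF)
    if gs > rs:
        # Over-consistent (a path got cheaper): inflate h, as in ARA*.
        return (rs + eps * h(s_start, s), rs)
    # Under-consistent (a cost increased, g < rhs): uninflated heuristic,
    # as in D* Lite, so the change propagates without breaking the bound.
    return (gs + h(s_start, s), gs)
```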




PassMark BurnInTest V5.3 Copyright (C) 1999-2008 PassMark Software All Rights Reserved http://www.passmark.com Overview ======== Passmark's BurnInTest is a software tool that allows all the major sub-systems of a computer to be simultaneously tested for reliability and stability. Status ====== This is a shareware program. This means that you need to buy it if you would like to continue using it after the evaluation period. Installation ============ 1) Uninstall any previous version of BurnInTest 2) Double click (or Open) the downloaded ".exe" file 3) Follow the prompts UnInstallation ============== Use the Windows control panel, Add / Remove Programs Requirements ============ - Operating System: Windows 2000, XP, 2003 server, Vista (*) - RAM: 32 Meg - Disk space: 6 Meg of free hard disk space (plus an additional 10Meg to run the Disk test) - DirectX 9.0c or above software for 3D graphics and video tests (plus working DirectX drivers for your video card) - SSE compatible CPU for SSE tests - A printer to run the printer test, set-up as the default printer in Windows. - A CD ROM + 1 Music CD or Data CD to run the CD test. - A CD-RW to run the CD burn test. - A network connection and the TCP/IP networking software installed for the Network Tests Pro version only: - A serial port loop back plug for the serial port test. - A parallel port loop back plug for the parallel port test. - A USB port loop back plug for the USB port test. - A USB 2.0 port loop back plug for the USB 2.0 port test. - PassMark ModemTest V1.3 1010 (or higher) for Plugin Modem testing. - PassMark KeyboardTest V2.2 1011 (or higher) for Plugin Keyboard testing. - PassMark Firewire Plugin V1.0 1000 (or higher) and a 揔anguru FireFlash?drive for Plugin Firewire testing. (*) Windows 2000 does not support the CD-RW burn test. The advanced RAM test is only available under Windows 2000 and Windows XP professional (the other RAM tests are supported under the other OS's). Users must have administrator privileges. Windows 98 and Windows ME ========================= Windows 98 and ME are not supported in BurnInTest version 5.3 and above. Use a version of BurnInTest prior to 5.2 for compatibility with W98 and ME. Windows 95 and Windows NT ========================= Windows 95 and NT are not supported in BurnInTest version 4.0 and above. Use a version of BurnInTest prior to 3.1 for compatibility with W95 and NT. Version History =============== Here is a summary of all changes that have been made in each version of BurnInTest. Release 5.3 build 1035 revision 4 WIN32 release 10 November 2008 - Lenovo China specific build. Lenovo system detection changes. Release 5.3 build 1035 revision 3 WIN32 release 7 November 2008 - Lenovo China specific build. Lenovo system detection changes. Release 5.3 build 1035 revision 2 WIN32 release 6 November 2008 - Lenovo China specific build. Lenovo logo and Lenovo system detection changes. Release 5.3 build 1035 WIN32 release 5 November 2008 - Lenovo China specific build. Changes include: Lenovo logo added, Lenovo system support only, 32-bit BurnInTest restricted to 32-bit Windows and BurnInTest run as administrator. Release 5.3 build 1034 WIN32 release 3 October 2008 - Correction to setting the CD burn test drive in preferences. - Changed the mechanism to check for the required DirectX Direct3D as the previous method did not work on some system (some W2003 servers). - Enhanced the mechanism to report memory hardware errors in the Memory torture test. 
Release 5.3 build 1033 WIN32 release 1 October 2008 - Changes to correct a BurnInTest crash problem on some systems. When the disk and standard RAM tests are run for many hours, BurnInTest may have disappeared with no error message. Release 5.3 build 1030 WIN32 release 25 September 2008 - Changes to investigate a BurnInTest crash problem on XP SP3. Release 5.3 build 1028 WIN32 release 11 September 2008 - Two 2D Video memory test crash bug workarounds implemented. Crashes in (i) DirectX DirectShow and (ii) ATI atiumdag.dll library. - A hang on startup has been corrected. A 2 minute timeout has been added to the collection of system information. - Video playback, Hard disk and CD/DVD test 'no operations' error reporting changed. - When BurnInTest crashes, it will not generate a "minidump" file. Minidumps will need to be sent to Microsoft as per the normal process. However, a log entry will be added to the normal BurnInTest log. - Changes to trace logging to reduce activity when trace logging is not turned on. - Note: We have seen a report of the Video Playback failing (crash) due to a faulty video codec, ffdshow.ax. If you are using this we suggest you try a different Video file and codec. Release 5.3 build 1027 revision 0003 WIN32 release 19 August 2008 - Changed the 2D test to wait for the Video Playback test in order to allow memory allocation for the Video playback test. - Changed the Memory test to wait for the Video Playback test and 3D test to allow memory allocation for these tests. - Minor changes to the No operation error watchdog timer for the CD and Hard disk tests. - Minor correction to the Butterfly seek test. - Video playback trace logging increased. Release 5.3 build 1027 revision 0002 WIN32 release 19 August 2008 - Video playback trace logging increased. Release 5.3 build 1027 WIN32 release 31 July 2008 - Corrected a bug where BurnInTest would fail to start if Activity trace level 2 logging (debug level logging) was turned on and the Logging Summarize option was also selected. - Minor change to the serial port test where, if "Disable RTS/CTS and DSR/DTR test phase" was selected the DTR and RTS lines would be explicitly disabled to prevent any toggling of these lines. Previously these where enabled, but not explicitly toggled. Release 5.3 build 1026 WIN32 release 17 July 2008 - Updated Level 2 and Level 3 CPU cache information for newer Intel CPU's. - Updated the detection of Hyperthreading and the number of logical CPUs for a new Intel CPU. Release 5.3 build 1025 WIN32 release 11 July 2008 - Corrected a Disk test bug where on rare occasions a verification error is incorrectly displayed. This is during the random seeking phase of the "Random data with random seeking" test mode and only occurs with some specific test settings. Release 5.3 build 1024 WIN32 release 10 July 2008 - Workaround for the rare crash bug in Vista in atklumdisp.dll at address 0x730676ae. - Added trace debug information for BurnInTest startup and the 3D test. Release 5.3 build 1022 WIN32 release 12 June 2008 - Corrected a bug where the 2D video memory test in BurnInTest v5.3.1020 and v5.3.1021 would report a "Not enough video memory available for test" error if the test was run a couple of times (without closing BurnInTest). Release 5.3 build 1021 WIN32 release 5 June 2008 - 32-bit BurnInTest PRO 5.3.1020 would not start on Windows 2000. This has been corrected. Release 5.3 build 1020 WIN32 release 29 May 2008 - BurnInTest could have crashed on accessing bad video memory hardware in the 2D test. 
This problem is now just reported as an error (and BurnInTest) continues. - When BurnInTest crashes, it should now generate a "minidump" file to help debug which system component caused the failure (32-bit Pro version only). - Other minor changes. Release 5.3 build 1019 WIN32 release 16 May 2008 - Corrected rare crash bugs in the 2D and Video tests. - Added a hot Key, F4, to set the auto run flag and run the tests (i.e. set "-r" and then run the tests). - Other minor changes. Release 5.3 build 1018 WIN32 release 16 April 2008 - Added an operation watchdog timer for all tests. In rare cases, a single test can stop in the operating system - i.e. there is a problem in the operating system/ device driver that prevents control being returned to the BurnInTest for that test. This was added for specialized serial port hardware that could lockup after several hours of testing. Release 5.3 build 1017 WIN32 release 3 April 2008 - Corrected the Advanced Network test to run on non-English Operating Systems. Release 5.3 build 1016 WIN32 release 17 March 2008 - Added additional USB 2.0 Loopback plug test initialization to ensure plugs are in a 'clean' state when starting the USB tests. This was added due to reported USB data verification errors after scripted USB testing across multiple reboots. Release 5.3 build 1015 WIN32 release 27 February 2008 - Increased error reporting detail for the standard RAM test, when the -v command line option is used. Release 5.3 build 1014 WIN32 release 30 January 2008 - Corrected a problem where the loopback sound test could run out of memory if run for several days. Release 5.3 build 1013 WIN32 release 31 December 2007 - Improved the reporting of COM port errors such that in the rare case a COM port locks up in the Operating System, the error is still reported. - Corrected a bug, where in rare cases, the result summary could be duplicated in a log file. - Updated license management, in an attempt to remove a rare crash on startup. Release 5.3 build 1012.0002 WIN32 release 31 October 2007 - New build of Rebooter (64-bit Windows correction). - Clarifications in the help file. Release 5.3 build 1012 WIN32 release 17 October 2007 - Changed the Standard Network Test, "Test all available NICs" such that the number of Network Addresses specified in Preferences->Network will be the number of NICs tested. This will error faulty NICs that are not detected by the BurnInTest auto NIC detection mechanism. - Minor change to the 2D memory test when run with the 3D test (multiple large windows) and the RAM test. Aimed at correcting sympton: Access Violation 0x00404CF9. - Corrections to the mapping of paths with ".\". Release 5.3 build 1011 rev 2 WIN32 release 17 September 2007 - Modified the Multi-Process torture test to better describe a new error message introduced in V5.3.1010. Release 5.3 build 1011 - Public release WIN32 release 11 September 2007 - Corrected a bug where "Limited Evaluation Version" could be displayed even after BUrnInTest is licensed (problem introduced in 32-bit BITPRO V5.3.1010). - Changed the Sound test to allow any of the tests (Wave, Midi or MP3) to be excluded from testing by blanking the filename. - The Command line parameter "-j" (cycle disk test patterns after each test file) could fail during the Random data test due to the mechanism used in BurnInTest. The Random data test is now excluded from the test when (and only when) the "-j" command line parameter is specified. 
- In rare circumstances, the 2D test number of operations could potentially overflow and become negative. This has been corrected. - In rare circumstances, BurnInTest could hang if there was a system problem in rebooting the system (ie. it failed to shutdown) using PassMark Rebooter. This has been corrected. Release 5.3 build 1010 - Public release WIN32 release 28 August 2007 WIN64 release 28 August 2007 - As BurnInTest exercises system components, it is possible for faulty hardware or device drivers to cause software exceptions. These are normally seen as Windows reporting an "Access Violation". Changes have been made to handle these errors for the memory tests (for faulty RAM) and direct device driver access (for some device driver errors), as well as overarching more generic handling of these types of errors. - Corrected a software failure bug on startup (particularly Vista) where a DirectX function was causing software failures in "dsetup.dll". - Updated the "Activity Event" generated with the periodic results summary report to be numbered (from 1 upwards) such that when "Logging->Summarize", these events are not summarized. - Corrected a bug where the HTML log name could include a duplicate of the filename prefix. - Updated to the Common Errors section of help. Release 5.3 build 1009 - Public release WIN32 release 16 August 2007 - Corrected a 'zip' version cleanup problem. Release 5.3 build 1008 - Komputer Swiat Expert magazine version WIN32 STD release 14 August 2007 Release 5.3 build 1007 - Public release WIN32 release 7 August 2007 - Corrected a disk test startup problem for some large RAID systems when SMART testing is selected. - Added additional logging for the disk test when an error occurs. - Changed the 3D test when run with the 2D EMC test to be 'behind' the EMC scrolling H's test. Allowed the test to be easily exited when running the 3D test in Fullscreen mode. - Minor corrections to the Advanced Network test. - Changed the log file reference of "Network Name" to "Computer Name". WIN64 specific: - MMX and 3DNow! are obsolete for native 64-bit applications. BurnInTest has been changed to show "NA" (Not applicable) in the test window for these tests. Release 5.3 build 1006 - Limited release WIN32 release 17 July 2007 - Standard Network Test changes: - Increased the number of destination IP addresses from 4 to 6. - Added an option (default) "Test all available NICs", which will force traffic down every system NIC with a basic algorithm of NIC1 to IP Address 1, NIC2 to IP Address 2 etc. - Advanced Network test changes: - Simplified the test. - Removed the UDP and FTP options. The Standard Network test can be used as a UDP test. - Removed the Advanced Network test specific logging, and included all relevant logging in the standard BurnInTest logging mechanism. - Replaced the complicated dynamic balancing of any system NIC to any Endpoint NIC with a simpler static allocation on test startup. - Changed the error detection mechanism to detect errors much more quickly. - Re-worked the errors reported. - Changed the CPU throttling mechanism to reduce the CPU load. - Updated endpoint.exe. - Removed checkend.exe (now obsolete). - Changed the logging rollover to work with the output of interim results (e.g. per 1 minute). Previously rollover only occurred on error events written to the log. This also corrected an issue where interim results summary logging could be written to the physical disk with some delay (based on Windows disk caching). 
- Corrected the "Unknown" reporting of some operating systems. - Added the skipping of the Butterfly seek disk test when run on Vista and insufficient privileges. A notification of this is logged. - Intel Quad core L2 cache size reporting has been added. - Added new SMART threshold descriptions. - Added new disk test options, accessed via command line parameters: /ka: keep disk test files in all cases (c.f. /k keep disk test files on error). /j: cycle patterns between test files. Note: Random seeking will be skipped in this case. This option has been added to allow multiple test patterns to be used across very large disks. - Added an option to make some test settings unavailable to the user. An example configuration file available on request. Release 5.3 build 1005 0001 (STD only) - Public release WIN32 release 29 June 2007 - Corrected a bug introduced in v5.3.1005.0000 STD (only) where the disk test would use up more and more system resources, thus causing test failures. Release 5.3 build 1005 rev 0003 (PRO only) - Limited public release WIN32 release 21 June 2007 - Correction to the behavior of a static RAM test pattern (rather than the default Cyclic pattern). Release 5.3 build 1005 rev 0002 (PRO only) - Limited public release WIN32 release 15 June 2007 - The "Select all CD/DVD drives" preferences option has been made user configurable, rather than using pre-defined test settings. Release 5.3 build 1005 rev 0001 (PRO only) - Limited public release WIN32 release 13 June 2007 - Bug correction for the CD auto selection feature. Release 5.3 build 1005 - Public release WIN32 release 18 May 2007 WIN64 release 18 May 2007 - In a number of cases, such as when specifying the post test application, uppercase application names were not accepted. This has been corrected. - The default font height in the 2D scrolling H's test should have been Arial 9. This has been changed. - The BurnInTest Video playback test incompatibility with Nero 6 and Nero 7 has been resolved. - The BurnInTest disk test throughput for dual core systems has been improved. Release 5.3 build 1004 rev2 - Limited release WIN32 release 8 May 2007 - Changed the Standard Network Test to better report packet error ratios. In addition, a new warning has been added to indicate that errors have been detected but not enough packets have been attempted to be sent to determine accurately whether the configured error ratio has been exceeded. - Corrected a bug where the "append to existing" logging option did not work across scripted reboots, and a new log file was created instead of appending to the existing log file. - If the 3D test was running, then BurnInTest blocked a forced close of BurnInTest, this blocking has been removed. - Changed the PASS and FAIL windows so they can now also be closed by selecting the Windows Close "X" button. Release 5.3 build 1004 - Public release WIN32 release 10 April 2007 WIN64 release 10 April 2007 - Corrected a problem introduced in BurnInTest v5.2 where BurnInTest could run out of memory (the main symptom) when tests where run for long periods (> 12hours). WIN64 specific: - Corrected a bug where the number of cores reported on a Quad core system was incorrectly reported as CPU packages. Release 5.3 build 1003 - Limited release WIN32 release 3 April 2007 - A new 2D GUI (Graphical User Interface) test has been added to the standard 2D graphics test. - Resolved an issue where BurnInTest would fail to start on Vista systems with DEP enabled for all programs. 
- On some systems, the Disk test could pause momentarily even when a duty cycle of 100% was specified. This pause has been removed. - When running the CD test under BartPE (Pre-install environment) 4 additional specific files are skipped as they are unavailable for testing. - Minor bug corrections. Release 5.3 build 1002 rev 0001 - Limited release WIN32 release 16 March 2007 - Changes to the new 3D test: - Added a Full screen non-windowed test for the primary monitor, where the resolution can be selected from those supported by the Graphics card. - Added the user option of changes the vertical sync in the full screen non-windowed test to be either the Maximum rate of the graphics card, or to be the rate of the monitor (this may prevent some flicker). - Added a more complex water texture using DirectX Vertex Shader 2.0 and Pixel Shader 2.0 effects (if supported by the graphics card). This applies to 3D test windows that are 800x600 or larger. - Changed some error messages from window displays (that require user intervention) to standard error reporting. Added new 3D error messages and more detail in the error reporting. - Changed the definition of an operation to be a successfully displayed frame. - Changed the definition of a cycle to be 2000 frames. - Changed 2D video memory test to wait until the 3D test starts (as per V5.2 and earlier). - A new version of rebooter has been included. - If BurnInTest is started with the -p command line parameter (to use the bit.exe directory for files such as the configuration file), then BurnInTest will start rebooter with the -p option. This can be useful when running BurnInTest and Rebooter from a USB drive. Release 5.3 build 1002 - Limited release WIN32 release 19 March 2007 - Corrected a bug introduced in V5.2 where selecting accumulated logging could lead to rebooter failing to launch. Release 5.3 build 1001 - Limited release WIN32 release 16 March 2007 - The 3D test has been improved. The 3D ball test has been replaced with a more complex 3D terrain test. This will more thoroughly exercise modern graphics cards. Further, the 3D test has been changed to support multi- monitor testing (up to 4 monitors). Accordingly, a new preferences section has been added for the 3D test. The multi-monitor test options are only available in BurnInTest Professional. Release 5.3 build 1001 - Limited release WIN32 release 16 March 2007 - The 3D test has been improved. The 3D ball test has been replaced with a more complex 3D terrain test. This will more thoroughly exercise modern graphics cards. Further, the 3D test has been changed to support multi- monitor testing (up to 4 monitors). Accordingly, a new preferences section has been added for the 3D test. The multi-monitor test options are only available in BurnInTest Professional. - BurnInTest uses DirectX 9.0c. This version of BurnInTest uses a more recent version of the Microsoft DirectX Direct3D component, October 2006. BurnInTest has been modified to detect and install this component (file) if it does not exist. - A command line parameter -X has been added to skip the DirectX version checking on BurnInTest start-up. - With the recent introduction of multi-monitor support for the Video Playback test, it is now more likely that the system will run out of memory when running multiple video tests simultaneously, particularly when more memory intensive codecs are used. 
A specific Insufficient resources to complete test message has been added in this case, rather than the previous more generic unrecoverable error message. The video test have been changed to attempt recovery from this and the more generic unrecoverable error, by closing the current video and opening the next. The logging detail has been increased. - Note: The BurnIntest sample video pack has been altered with the DivX Compressed Video file being removed due to the DivX codec failing with this Video file when used with multiple simultaneous Video playbacks. Access Violation: 0x69756e65. See: http://www.passmark.com/download/bit_download.htm - The video description is now collected for a larger range of Vista systems. - Windows 98 and ME are no longer supported. Please see www.passmark.com for a link to an older version of BurnInTest that will support W98/ME. Release 5.3 build 1000 rev2 - Limited release WIN32 release 9 March 2007 - A command line parameter -P has been added to allow the BurnInTest directory to be used rather than the User's personal directory. This may be useful when running BurnInTest from a USB drive for example. - When running the CD test under BartPE (Pre-install environment) 4 additional specific files are skipped as they are unavailable for testing. - A change has been made to support Hmonitor temperature monitoring on Vista. - A number of undocumented command line parameters have been documented: -B: BurnInTest will generate additional Serial port test information when activity trace level 2 logging is set. -E [data]: Specifies the test data to use in the serial port test. -M: Automatically display the Machine ID Window when BurnInTest is started. -U: Force BurnInTest to set logging on at startup. Release 5.3 build 1000 - Limited release WIN32 release 8 March 2007 - Changed the 2D and Video playback tests to support multi-monitor testing. - When running the CD test under BartPE (Pre-install environment) 4 specific files are skipped as they are unavailable for testing. Release 5.2 build 1006 - Limited release WIN32 release 1 March 2007 - Corrected a bug where BurnInTest would fail to start on certain Vista systems. - Corrected a bug where some files where the full path was not specified would be incorrectly referenced in the Program Files directory, rather than the user personal directory. Release 5.2 build 1005 - Public release WIN32 release 21 February 2007 WIN64 release 21 February 2007 - Updated the Graphics card description for Windows Vista systems. - Updated the Advanced Network test to indicate that elevated administrator privileges are required when running on Vista. - Moved files from the Program files directory for the Advanced Network Test (BurnInTest, EndPoint and CheckEnd). Specifically, the User Application directory is now used for the temporary test FTP files and the User Personal directory is now used for the log and configuration files. - Updated the cleanup process for when running the "zip" version of BurnInTest Professional from a CD or flash drive. - Updated the help link from the Windows Start, All Programs, BurnInTest menu for the browser based help. - Corrected a bug where Disk preferences displayed in the Preferences window would be incorrect when the system had no Floppy drive. - Corrected a bug where the Advanced Network test might not have been displayed until after entering the Duty Cycle selection (ie. just chaning from the standard network test to the advanced test). 
- Corrected a USB bug in Beta 5.2.1003 where the test would not run if there where there insufficient USB loopback plugs attached to the system. - Included a new version of PassMark Rebooter that supports Windows Vista. Release 5.2 build 1004 - Public Pre-release WIN32 release 13 February 2007 - Updated the reported Operating system for the various Vista product editions. - Disk test settings can be configured for "Automatically Select all Hard Disks", rather than using defaults. - When running the CD test under BartPE (Pre-install environment) 4 specific files are skipped as they are unavailable for testing. - Corrected a bug where temperature information could be duplicated in the HTML report. - Corrected a bug certain 'save report' warning messages could be truncated. - Help file updated. Release 5.2 build 1003 - BETA RELEASE ONLY WIN32 release 23 January 2007 - Changed the USB preferences and test to more completely check for the PassMark USB Loopback plugs and ignore any device that is not a PassMark USB Loopback plug (due to reported incorrect detection with another hardware device). - Increased Trace level debugging for Intel temperature monitoring. - Corrected a bug with the disk test introduced in 5.2.1001 Release 5.2 build 1002 - BETA RELEASE ONLY WIN32 release 22 January 2007 - Increased the number of disks that can be tested from 20 to 26. - Updated BurnInTest to reflect that Temperature monitoring with Intel Desktop utilities is supported. Intel Desktop utilities essentially is a replacement for Intel Active Monitor for newer Intel motherboards. - Increased Trace level debugging for Intel temperature monitoring. Release 5.2 build 1001 - BETA RELEASE ONLY WIN32 release 19 January 2007 - Windows Vista support. - The Block size used in the disk test is now configurable per disk. The default block size has been increased from 16KB to 32KB. - An option has been added to automatically detect all of the CD and DVD drives for the CD test (as per the disk test). This may be useful when testing across many systems with different optical drive configurations. - Increased Trace level debugging for Intel temperature monitoring. - Bugs corrected: - Disk preferences - in rare cases invalid default values could be set for a disk, an invalid value error would occur and the values would need to be manually corrected. Release 5.2 build 1000 - limited release WIN32 release 8 January 2007 - Windows Vista support. - Reduced the need for elevated administrator privileges: - Changed the location of the disk test files from the root directory of the test volume to a BurnInTest data files subdirectory (e.g from "C:\" to "C:\BurnInTest test files\") - Moved many of the files from the Program Files directory to the User directory for Windows 2000, XP and Vista. When running BurnInTest on Windows 98, ME or from a key.dat file (e.g. from a USB drive with a licensed key.dat) BurnInTest will store these files in the BurnInTest program directory. Specifically, the following files have been moved from the Program Files directory to the User Personal directory, e.g. 
Vista - "C:\Users\\Documents\PassMark\BurnInTest\" XP - "My Documents\PassMark\BurnInTest\" Files: Configuration file, Configuration load/save default directory, Save log file and image default directory, parallel port override "ioports.dat" directory, default command line script directory, log file directory, video file directory, Plugin directory, machine id file directory, Run as script default directory, CD burn image, Advanced network FTP temp files. - Replaced the Help system with Browser based help. - Changed the Disk test block size from 16KB to 256KB. It is planned to make this user configurable in the next build. Release 5.1 build 1014 WIN32 release 2 November 2006 WIN64 release 2 November 2006 - Corrected a bug when running on Vista, where the Standard network test would report a checksum error when the transmitted data was correct. - Corrected a bug where BurnInTest would not stop the tests based on the number of test cycles for the Plugin test or the Advanced Network test. - Made the "Could not set USB2Test mode" USB error message more specific by adding an error for insufficient system resources. - Changed the preferences Window to fit on an 800x600 resolution screen. - Corrected a minor bug in Activity level 2 trace logging with the 'hide duplicate' preference setting. - Corrected a minor memory leak if the 2D test failed to initialize (such as due to a DirectX problem). - The Parallel port test may now be used on Windows Vista. Specifically, the PassMark device driver used for the parallel port test could not be loaded on 64-bit Windows Vista as it was not digitally signed. It is now digitally signed. Release 5.1 build 1013 revision 0002 WIN32 release 19 September 2006 WIN64 release 19 September 2006 - Corrected an Access Violation problem reported by a customer on a particular MB. Release 5.1 build 1013 WIN32 release 7 September 2006 WIN64 release 7 September 2006 - The "Notes" section has been added to the Customer results certificate. - Some additional configuration range validation has been added. Release 5.1 build 1012 WIN32 release 15 August 2006 - Corrected a false report of a "Unable to get disk volume extent information" for the disk butterfly seek test. - Advanced Network test changes for errors: "Corrupt header - packet discarded" and "Advanced Network test timed out" - Advanced Network test Endpoint changes for problems on non-English Operating Systems and systems with the Windows "Network Interface" performance statistics disabled. - SMART parameters on a Samsung Hard Disk caused BurnInTest to fail when running the disk test with SMART thresholds enabled. This has been corrected. - The 2D scrolling H's test could display corrupt characters on the second and subsequent test run. This has been corrected. - A problem with the Integer maths test where the results could display a negative number of operations has been resolved. - Minor improvements to the help file. - HTML help file added for Windows Vista and Longhorn Server. - Minor improvements to the Error Classification file (error descriptions). - Some CD Trace level 1 logging has been moved to trace level 2. - Trace level 1 logging has been added to the test closing software. - New build of Endpoint.exe (1.0 1010). Release 5.1 build 1011 WIN32 release 6 July 2006 - New Advanced Network test error reporting added in the previous build V5.1 1010 has been removed. - A broader range of USB 2.0 Loopback plugs can now be used with BurnInTest. 
Release 5.1 build 1010 WIN32 release 4 July 2006 - Corrected the HTML report description of the L2/L3 CPU cache when the L3 cache size could not be determined. Advanced network changes: - Endpoints ran at 100% CPU load as they contained no throttling. This impacted their ability to effectively handle multiple threads handling TCP/UDP messaging. Throttling has been added to the EndPoint side to reduce CPU load. This does not greatly impact Network load. - Throttling on the BurnInTest side contained a sleep that was not insignificant. This could have impacted the BurnInTest data test thread to to handle incoming TCP and particularly UDP messages. This sleep has been reduced and other throttling parameters changed to suit. (ie. smaller sleeps more often). - EndPoint systems with x NICs (where x > 1), reported themselves as an Endpoint with x NICs, x times. Effectively registering with BurnInTest as x * x EndPoint NICS. This impacted the effectiveness of the load distribution to EndPoint NICs. An Endpoint system now only registers the once with BurnInTest. - The BurnInTest side did not report data verification Checksum errors for full duplex testing. This error determination has been corrected and reporting added. - The Test statistics sent from the Endpoint to BurnInTest could fail if the statistics block is split across 2 lower level TCP send packets. This could lead to problems like incorrect reporting of Endpoint determined checksum errors, Endpoint load and load balancing. Further it would lead to an Endpoint testthread being put into an endless TCP send loop. This would eventually bring the Endpoint system to its knees as more and more of these test threads go into this state. This has been corrected. - The Data Received reported by BurnInTest was double counted. This has been corrected. Release 5.1 build 1009 WIN32 release 23 June 2006 - Plugin test error classifications were incorrect in the log file detailed description. - Corrections to the advanced network test (BurnInTest and EndPoint). Release 5.1 build 1008 - limited release WIN32 release 20 June 2006 - Advanced network changes corrections. Most notably, a bug where part of the payload data could be lost if the payload block (eg. 1000 bytes) was split across 2 (or more) lower level TCP packets. - Added version reporting for Endpoints. Release 5.1 build 1007 - limited release WIN32 release 16 June 2006 Advanced network changes: - Corrected a BurnInTest access Violation introduced in V5.1 1006. - The Endpoint now reports its version and build to BurnInTest and BurnInTest reports this in the log file if it is an earlier version than expected. This is to help avoid the situation where old Endpoints are run on the Network, that may not be compatible with the version of BurnInTest being run by the user. - Removed a timeout report in a specific instance where a timeout is not an error. - Changed the Endpoint rebalancing and polling to occur less often after the test has been running 3 minutes. This is to help allowing the handling of polling from a larger number of multiple copies of BurnInTest on the Network. - Added a connection retries on failure for the Endpoint. - Corrected a memory leak in the Endpoint. - Increased the number of sockets supported. - Corrected some Advanced Network error classifications. Release 5.1 build 1006 - limited release WIN32 release 14 June 2006 - Improvements to the Advanced Network test (both BurnInTest V5.1 1006 and EndPoint V1.0 1004) to remove corrupted false packet corruption errors. 
Improved the timeout recovery mechanism. Added some validation to the Windows Network performance data used for NIC utilization. - Changes to the collection of Disk drive information on startup to try to resolve a startup issue on Systems with a large number of physical drives and 'unusual' WMI namings. Release 5.1 build 1005 WIN32 release 2 June 2006 - Corrected a bug in the Advanced network test where the test would not recover from timeout errors. The test appears to be running, but the results are 0 and the number of connected End Points are 0. Also improved the retry on timeout mechanism. - Removed some duplication in error reporting in the Advanced Network test. - Changed the Advanced Network display of Utilization to ensure a maximum of 100% displayed. - Corrected an Advanced Network test bug where the number of Errors reported in the test window would not take into account the corrupt packet threshold, and an error would be added for each occurrence of the corrupt packet (rather than when the user set threshold was reached). Release 5.1 build 1004b WIN32 release 25 May 2006 (not publicly released) - Corrected the default Advanced network corrupt packet threshold value. - Updated the data entry fields in the CD preferences when a different CD drive is selected. - The Advanced Network specific log files should be concatenated for a script run. This was only occurring for the first NIC under test. The concatenation will now occur for each NIC under test, when run from a script. - Corrected a bug where a log file name specified with no directory path could be incorrect. - Corrected a bug where the customer "Test Certificate" report incorrectly translated the "%" character from a customer specific HTML template. eg would be translated to . - The "Advanced Network test error" (215) has been removed and replaced with other existing error messages 214, 219, 220, 221 or 222. - Added the Customer name and Technician name to the text and HTMl reports. Previously, this information was only included in the "Test Certificate" report. - We have added a commandline option to specify the Serial port test data as a constant value. To specify specific data for the Serial port test you should specify e.g. "bit.exe /E 23" from the command line where 23 is in decimal and will be used for all test data (instead of random data). The vales should be between 0 and 255. Release 5.1 build 1004 WIN32 release 19 April 2006 (not publicly released) - Added the COM port speed of 921600 Kbits/s for RS 422/RS485 testing. - Changed the CD test to ensure that the entire test CD data is not cached on systems with a large amount of RAM. - Added a -M command line option to display the Machine ID window automatically when BurninTest starts. - Changed the 2D EMC scrolling H's test to work on multiple monitors were the resolution on each is different. - Changed log files such the syntax "..\" could be used for files in the directory up a level. - Minor correction to the advanced network test. Release 5.1 build 1003 WIN32 release 18 April 2006 WIN64 release 18 April 2006 - Changed the Advanced network test to allow a corrupt packet threshold value up to 1 million. - Bundled a new version of rebooter. Release 5.1 build 1002 WIN32 release 11 April 2006 WIN64 release 11 April 2006 - Corrections to the translation of V4.0 to V5 configuration files. Note: Configuration files in V5.x builds prior to V5.1 1002 could become corrupted if a V4.0 configuration file is loaded. 
- Corrected a bug where the main Window size and location were not restored on restarting BurnInTest. - Changes to the SMART attribute logging to support a greater range of Disk drive device drivers. Added additional Activity Level 2 trace logging. - Added an option to use CTS (Clear To Send) flow control in the loop back stage of the COM port test. - Corrected a bug where the CPU L3 cache could be reported as -1. - Help file updates. Release 5.1 build 1001 WIN32 release 30/March/2006 - Digitally signed the BurnInTest application to allow it to run under Windows Server "Longhorn". Note, previously only the installation package was digitally signed. - Updated the reported Operating system descriptions, including: - Windows Vista - Windows Server "Longhorn" - Corrected a bug where the Advanced network information was not displayed on the main window when it was run from a script. - The Advanced Network Corrupt threshold packet has been changed to produce an error every time the error is received after the threshold is reached. - Corrected the reporting of "Network, Packet discarded due to corrupt header" as a Network test error. - Corrected a bug where a new log file was not created if (only) the log prefix changed during the running of a script file. - Split the "Network, Advanced Network test error" error into 6 errors: "Network, Advanced Network test error" "Advanced Network Socket error" "Advanced Network Send error" "Advanced Network Send error - no data sent" "Advanced Network Receive error" "Advanced Network Receive error - no data received" Added either activity trace 1 or trace 2 logging for each of the errors, with additional information where available. - Added additional Serial port activity trace 2 logging. Including the logging of all transmit buffer data when the /B command line is used. Release 5.1 build 1000 WIN32 release 27/March/2006 (not a public release) Added the following features: - Create the log file directory specified in the Logging Options if it does not exist. - Condense the Advanced Network Test log files to one log file per IP address per script run, when run from a script. - Added an option to summarize duplicate errors in the log file. - Color coded errors based on severity in the Detailed event log Window and the HTML log file. - Added an option to only create a log file when BurnIn actually runs a test as opposed to every time BurnIn is executed. - Added a warning if a test thread completes with 0 cycles and 0 operations. - In the results summary html file, inserted more spacing between the 揘otes? and 揇etailed Event Log? - Changed the Activity Trace file format to be the same as the log file, ie. text or HTML, rather than always text. - The 2D 揝crolling H抯?test will now display across multiple screens/displays ?i.e. all active displays. - A threshold has been added for the 揷orrupt header ?packet discarded?event in the advanced network options so that a 揊ail?is not produced when that is the only thing that produces errors. - Added looping capability in scripting. LOOP n { ? } where n is the number of times to repeat the commands in the brackets. - Corrected a bug where PASS could be displayed if the Advanced Network test was the only test running, but it failed. Release 5.0 build 1001 WIN32 release 9/March/2006 - Corrected a bug where Network directory paths were not accepted, eg. for the log file name and post test application file name. - The CPU maths test has been improved to better load up all CPU's. 
Previously BurnInTest started a maths test thread per physical CPU package. BurnInTest has been changed to start a maths test thread per CPU (= num. physical CPU packages x num. CPU cores x num. logical CPUs). - The CPU preferences have been changed to allow the CPU maths test to be locked to any CPU (ie. select a CPU from a list of CPU's where the number of CPU's = num. physical CPU packages x num. CPU cores x num. logical CPUs). - The Parallel and Serial port error message have been modified in the case where a test plug may not have been connected to indicate that the user should check this. - Corrected a bug where a licenced version could display the message "[limited evaluation version]" Release 5.0 build 1000 WIN32 release 24/February/2006 WIN64 release 24/February/2006 NEW TESTS & IMPROVEMENTS TO EXISTING TESTS BurnInTest Standard and Professional versions. - Added a customer style results certificate. This will save the log file in HTML format but from the perspective of a end customer. This report style can be tailored by the user (through changing an HTML template). - An MP3 playback test has been added to the Sound test. - A color printer test has been added. - A new post test option to allow the results to be printed automatically at the end of a test has been added. - Added new Post-test action options of: - Optionally allow the user to "run an external program & exit" after BIT has been manually stopped. Modify the $RESULT variable to "PASS (manual abort)" or "FAIL (manual abort)" for this case. - Allow the results window to be displayed for all post test options (except Reboot). - Added new Pre-test actions to allow an external application to be run and have BIT wait for the application to exit. On continuing, BIT will run the subscript file (of scripting commands) if it has been created. - Changed the manual Stop buttons, to abort the running of a script (rather than just the current test). BurnInTest Professional specific. - Added a "Plugin" test that allows users to develop their own BurnInTest test modules for specialized hardware. Three external plugins may be specified at once. - A Modem test has been added to BurnInTest as a Plugin. PassMark's ModemTest Version V1.3 (latest build) is required. - A KeyBoard Test has been added to BurnInTest as a Plugin. PassMark's KeyboardTest Version V2.2 (latest build) is required. - A Firewire Test has been added to BurnInTest as a Plugin. PassMark's free Firewire plugin is required and a "Kanguru FireFlash" drive is required. - A new advanced network test has been added. BurnInTest Professional only. - The Memory test now allows the user to specify the type of test pattern to be used. - Testing with the USB 2.0 Loopback plug has been improved. 
When used with USB 2.0 Loopback device driver V2.0.1002, error details will now be reported for: CRC error reported by USB Host controller BIT STUFF error reported by USB Host controller DATA TOGGLE MISMATCH error reported by USB Host controller STALL PID error reported by USB Host controller DEVICE NOT RESPONDING error reported by USB Host controller PID CHECK FAILURE error reported by USB Host controller UNEXPECTED PID error reported by USB Host controller DATA OVERRUN error reported by USB Host controller DATA UNDERRUN error reported by USB Host controller BUFFER OVERRUN error reported by USB Host controller BUFFER UNDERRUN error reported by USB Host controller NOT ACCESSED error reported by USB Host controller FIFO error reported by USB Host controller TRANSACTION (XACT) ERROR reported by USB Host controller BABBLE DETECTED error reported by USB Host controller DATA BUFFER ERROR reported by USB Host controller In the case of these errors, BurnInTest will re-attempt the operation. The user can set the Error reporting to be skipped for the initial recovery attempt. IMPROVEMENTS TO TESTING FACILITIES - Added a disk autoconfig, such that when tests are started, the disk drives and settings will be defaults to all disks (exc. CD/DVD). This may be useful when testing multiple systems with different hard disk drive letters. - Store the position of the Main window on exiting BurnInTest. On starting BurnInTest, position the main window as saved; on starting tests, position the test windows as saved. - Allow a "drag & drop" of the Configuration file directly on the BurnInTest program icon. - Allow testing 99.5% to 100% of disk, instead of 94%, for disks that do not contain the Windows directory and do not contain a swap file. - Added the ability to log interim results, which may be useful for unstable systems. - AMD and Intel Dual core reporting added. - New L2 CPU cache sizes added to reports. - CPU support for SSE3, DEP and PAE added to reports. - Shortcut of "F1" for contextual help added to all Windows. - Improve the flexibility in specifying the EXECUTEWAIT scripting command for sleeper. - Updated logging header information with the hard and optical drive model. - The 2D and 3D tests have been updated to use DirectX 9.0c. - User interface updated. - The HTML report format has been improved. - The BurnInTest configuration file extension has been renamed from .cfg to use .bitcfg, to ensure the configuration file is associated with BurnInTest. - An error message indicating that accumulated log files are not supported when run from CD or DVD has been added. - To allow smaller test files with very large disks, the minimum disk test file size has been reduced from 0.1% to 0.01% of the disk space. - Log events were previously shown as "INFORMATION" if they were low level errors, or simply additional information (not errors). "INFORMATION" now refers to a low level error, and "LOG NOTE" now refers to additional information (that is not in the error count). - Improved the specific detail of the Serial Port errors detected. BurnInTest now reports framing errors, buffer overrun errors, input buffer overflow errors, parity errors and Transmit buffer full errors as specific error messages (rather than a broader error description). - Added the /k command line so the user can specify not to delete HDD test files if an error occurs. - Increased Activity trace level 1 error logging for Serial port testing. - Increased Activity trace level 1 error logging for Hyper threading detection. 
- Bundled a new version of the Rebooter program. - Improved the Serial port error logging (displaying baud rate) and increased Activity trace level 1 error logging (displaying erroneous data). - Modified the Window sizes to help improve navigation on smaller displays (i.e. 640x480). - The CPU load for the Standard and Torture RAM tests has been made more linear with the duty cycle setting. Note: This means that compared to the previous build of BurnInTest, less RAM test operations will be run per second (when the duty cycle is less than 100). - Additional debug code and very minor changes in the Loopback sound test. - The Post test option of "Run external application and exit" has been modified such that if no external file is specified, this Post test option will just exit BurnInTest. - Allowed the full range of PassMark USB1 loopback plugs to be used with BurnInTest Professional. - Added additional Activity Trace level 2 logging. - The delay inserted between packets in the USB2 test, when the duty cycle is less than 50, has been changed from at least 1ms to at least 1ms to 50ms (for a Duty Cycle of 49 down to 0). - The subscript commands to configure BurnInTest from an external application (i.e. specified in the bit-script-input.txt file and run by specifying either a pre-test or EXECUTEWAIT application) has been changed to allow "LOAD" commands (in addition to "SET" scripting commands). - Renamed the "Error" log to "Event" log. - Changed the order of the items in an Event log line, such that the Severity is the first item. - The EXECUTEWAIT script command has been modified such that the external application may provide an input script file (of SET... commands) to be run after the EXECUTEWAIT application closes. This allows external applications to define test environment parameters (such as the serial number and machine type). - Added scripting commands: SETSERIAL "1234-shdfgdhs-GHGHG" SETMACHINETYPE "HP XPS800" SETNOTES "Test notes defined by the external application." SETLOG "\Program Files\Plugin\plugin_log" SETPLUGIN "\Program Files\Plugin\plugin.exe" - Added POST TEST application parameter substitution to allow values to be passed to an external application at the end of a test. These are: $RESULT - "PASS" or "FAIL" will be substituted. $SERIAL - The serial number will be substituted. $MACHINETYPE - The machine type will be substituted. $NOTES - The notes will be substituted. - Added extra logging for memory allocation errors in the disk test - Added "log bad sector increase" and "bad sector threshold" options to disk test. This resulted in a change to the configuration file format and required additional code to automatically convert from old formats. - Modified the user interface in the preferences window for the disk test and the CD test - Improved the handling of USB 2.0 loopback plugs recovery from sleep states. BUG CORRECTIONS - Corrected a bug where the System and Application events logged in the BurnInTest Trace logs were wrong if the event log had reached its maximum size. - Checks that the Sound test files (WAV and MIDI) exist have been added. - The continuous auto updating of the USB image (USB Loopback plug vs. USB 2.0 Loopback plug) on the main window has been removed. This is now updated on BIT startup, selecting Refresh in USB preferences or on starting a test. If there is a serious USB problem, this (together with the USB 2.0 Loopback device driver, V2.0.1002) will avoid the possibility of BurnInTest locking up. 
- Corrected a bug with the Butterfly seek mode of the Disk test. This was found to occur with FAT32 disks where the Cylinder size was relatively small and the Sector size relatively large. - Reset Defaults on the Configuration Page now resets the Auto Stop Value. - Reset Defaults on the Configuration Page now resets the color indicators. - The CD test has been modified to skip invalid files either with "?"'s , to avoid reporting errors that are due to the CD test media filenames. - The Network test results window scroll bar has been corrected. - The Memory torture test could fail on some systems with a small amount of RAM and relatively high memory fragmentation. This has been corrected. - Scripting correction for .cmd files. - Corrected a bug that caused problems when running the disk test with SMART monitoring turned on. This problem only occurs on a small number of HDD's. - Corrected memory leaks - On occasion, the measured waveform from the loopback sound test may have been slightly altered on starting or stopping all tests, possibly enough to trigger an error. This has been resolved. - If an error occurred in the final second of a test, the error may have been logged but not included in the big PASS/FAIL results window. This has been corrected. - After running a script file that loaded a configuration file, that had a full path specified, the Save and Load configuration menu options no longer worked. This has been corrected. - Previously, the Version of BurnInTest was only written in the First log file after starting BurnInTest. This log line is now written in all log files. - For USB2 tests that have read or write failures, the Windows error codes are now included in the level 2 Activity trace log. - Command line parameters may now be passed to a PreTest application. - Log files may now use a single static filename. This may be useful when the log file is to be parsed by an external program. - Corrected a bug where the Plugin test would stop prematurely. - Corrected the specification of the Scripting EXECUTEWAIT filename. - Changed Script processing such that a script is aborted if a scripting error is encountered and Stop on error is selected. - Added an indication on the main window that a script is currently running ("Script currently running"). - Corrected the serial port test to identify non-existing plugs when the Disable RTS/CTS and DSR/DTR testing has been selected. - Corrected the display of strange results (666666) reported by a user, related to copy protection. - Fixed a memory leak bug in the MBM interface which caused memory allocation errors. - Added BIT version number to the ASCII log file. - Fixed a bug with the 3D Test that was causing it to stop before the autostop timer period - Changed an error in the tape drive test to a warning if tape drive doesn't support setting drive parameters. History of earlier releases: Please see http://passmark.com/products/bit_history.htm Documentation ============= All the documentation is included in the help file. It can be accessed from the help menu. There is also a PDF format Users guide available for download from the PassMark web site. 
Support ======= For technical support, questions, suggestions, please check the help file for our email address or visit our web page at http://www.passmark.com Ordering / Registration ======================= All the details are in the help file documentation or you can visit our sales information page http://www.passmark.com/sales Compatibility issues with the Network & Parallel Port Tests =========================================================== If you are running Windows 2000 or XP, you need to have administrator privileges to run this test. Enjoy.. The PassMark Development team
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels

An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, the REPEATABLE READ and SERIALIZABLE isolation levels were synonymous; there was no way to prevent non-repeatable reads without also preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To use either stricter or looser isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
DBCC USEROPTIONS
GO

Multigranular Locking

If one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked exclusively. In a multigranular locking environment, there must be a way to effectively prevent this scenario. The intent lock is the answer to this problem.

Intent Lock

"Intent lock" is the term used for placing a marker in a higher-level lock queue; the type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.

Lock Mode

The lock mode is stored internally as a numeric code. You can see these codes by querying the master.dbo.spt_values table:

SELECT * FROM master.dbo.spt_values WHERE type = N'L'

However, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here. For example, a req_mode value of 3 represents the Shared lock mode rather than the Schema Modification lock mode.

Lock Compatibility

Intent locks can apply at any coarser level of granularity: if a row is locked, SQL Server will apply intent locks at both the page and the table level; if a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row-level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX.
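The intent-lock hierarchy is easy to observe directly. Below is a minimal sketch (assuming the pubs sample database and a SQL Server 2000-era instance, matching the module's other examples): updating one row takes a KEY X lock plus IX intent locks on the enclosing page and table, which sp_lock makes visible.

USE pubs
GO
BEGIN TRAN
UPDATE titles SET price = price * 1.10 WHERE title_id = 'BU1032'
EXEC sp_lock @@SPID  -- expect a KEY X lock on the row, plus IX on its page (PAG) and table (TAB)
ROLLBACK TRAN
GO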
Another type of table lock is the schema stability lock (Sch-S), which is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.

Locking Resources

Delivery Tip: Note the differences between Key and Key-Range locks. Key-Range locks are covered a couple of slides later.

SQL Server can lock these resources:

- DB: A database.
- File: A database file.
- Index: An entire index of a table.
- Table: An entire table, including all data and indexes.
- Extent: A contiguous group of data pages or index pages.
- Page: An 8-KB data page or index page.
- Key: A row lock within an index.
- Key-range: A key range. Used to lock ranges between records in a table to prevent phantom insertions into or deletions from a set of records; ensures serializable transactions.
- RID: A row identifier. Used to individually lock a single row within a table.
- Application: A lock resource defined by an application. The lock manager knows nothing about the resource format; it simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that the resource is already locked.

Some of the resources have "sub-resources." The following are the sub-resources displayed in sp_lock output:

Database lock sub-resources:
- Full Database Lock (default)
- [BULK-OP-DB] – Bulk Operation Lock for Database
- [BULK-OP-LOG] – Bulk Operation Lock for Log

Table lock sub-resources:
- Full Table Lock (default)
- [UPD-STATS] – Update Statistics Lock
- [COMPILE] – Compile Lock

Index lock sub-resources:
- Full Index Lock (default)
- [INDEX_ID] – Index ID Lock
- [INDEX_NAME] – Index Name Lock
- [BULK_ALLOC] – Bulk Allocation Lock
- [DEFRAG] – Defragmentation Lock

For more information, see also SOX000821700049 – SQL 7.0: How to interpret lock resource Ids.

Lock Resource Block

Each resource type has the following resource block format:

- DB (2): Data 1: sub-resource; Data 2: 0; Data 3: 0
- File (3): Data 1: File ID; Data 2: 0; Data 3: 0
- Index (4): Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
- Table (5): Data 1: Object ID; Data 2: sub-resource; Data 3: 0
- Page (6): Data 1: Page Number; Data 3: 0
- Key (7): Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
- Extent (8): Data 1: Extent ID; Data 3: 0
- RID (9): Data 1: RID; Data 3: 0
- Application (10): Data 1: Application resource name

The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. As an example of how to decode a value from this column using the information above, assume we have the following value:

0x000705001F83D775010002014F0BEC4E

With byte swapping within each field, this can be decoded as:

- Byte 0: Flag – 0x00
- Byte 1: Resource Type – 0x07 (Key)
- Bytes 2-3: DBID – 0x0005
- Bytes 4-7: ObjectID – 0x75D7831F (1977058079)
- Bytes 8-9: IndexID – 0x0001
- Bytes 10-16: Hash Key value – 0x02014F0BEC4E

For more information about how to decode this value, see Inside SQL Server 2000, pages 803 and 806.
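Rather than decoding rsc_bin by hand every time, most of the interesting fields are also exposed as columns of master..syslockinfo. A hedged sketch using the SQL Server 2000 column names:

SELECT rsc_type,   -- resource type code (7 = KEY, 5 = TAB, ...)
       rsc_dbid,   -- database ID
       rsc_objid,  -- object ID
       rsc_indid,  -- index ID
       rsc_bin,    -- raw resource block, as decoded above
       req_mode,   -- lock mode code (one less than the spt_values type 'L' codes)
       req_spid
FROM master..syslockinfo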
Key Range Locking

To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as:

WHERE salary BETWEEN 30000 AND 50000

SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap where columns c1 and c2 were indexed, the hash would contain contributions from c1, c2, and the RID.

A key range lock applied to a particular key means that all keys between the value locked and the next value are locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following SELECT were executed in a transaction with isolation level SERIALIZABLE:

SELECT * FROM members WHERE first_name BETWEEN 'Al' AND 'Carl'

If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause.

Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key range locking lets SQL Server lock only a much more restrictive area of the table.

Impact

Key-range locking ensures that these scenarios are SERIALIZABLE:
- Range scan query
- Singleton fetch of a nonexistent row
- Delete operation
- Insert operation

However, the following conditions must be satisfied before key-range locking can occur:
- The transaction isolation level must be set to SERIALIZABLE.
- The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

Key Range Lock Mode

Again, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here.
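A minimal sketch of key-range locking in action (assuming the pubs sample database, where an index on au_lname lets the optimizer choose the index range access that the conditions above require):

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM authors WHERE au_lname BETWEEN 'Bennet' AND 'Greene'
EXEC sp_lock @@SPID  -- expect RangeS-S KEY locks covering the scanned range
ROLLBACK TRAN
GO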
Dynamic Locking

When modifying individual rows, SQL Server typically takes row locks to maximize concurrency (for example, an OLTP order-entry application). When scanning larger volumes of data, it is more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, and reporting workloads).

Locking Decision

The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a coarser lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.

Lock Escalation

When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the Lock Manager will attempt escalation again.

The Lock Manager also looks at the lock memory it is using: if it is more than 40 percent of SQL Server's allocated buffer pool memory, it tries to find a scan (SDES) on which no escalation has already been performed. It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce the lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE = 60). Lock escalation may also be done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables.

Trace Flag 1211 disables lock escalation. Important: Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global, and disabling lock escalation could lead to out-of-memory situations, extremely poorly performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag -T1200. However, Trace Flag 1200 shows all lock activity, so it should not be used on a production system. For more information, see also SOX000925700237 "TITLE: SQL 7.0 Lock escalation in SQL 7.0".

Lock Timeout

Application Lock Timeout
An application can set a lock timeout for a session with the SET option:

SET LOCK_TIMEOUT N

where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.

Note: After a lock timeout, any transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed in hotfix build 8.00.266, whereby a lock timeout will only cancel the individual statement rather than roll back the entire transaction.

Internal Lock Timeout
At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with "no waiting." For example, ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait, so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock:Timeout events, internal lock timeouts typically have a duration value of zero.
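A short sketch of the timeout behavior described above (the values are illustrative):

SET LOCK_TIMEOUT 2000   -- wait at most 2000 ms for any lock
SELECT @@lock_timeout   -- returns 2000
-- If a needed resource is still locked after 2 seconds, the statement
-- fails with error 1222 ('Lock request time out period exceeded').
SET LOCK_TIMEOUT -1     -- back to indefinite waits (the 6.5-style behavior)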
Lock Duration

Lock Mode and Transaction Isolation Level
For the REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matches the search criteria; if not, the update lock is released; otherwise, we get an exclusive lock and make the modification. Consider the following query:

use northwind
go
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing and disable escalation
go
set transaction isolation level repeatable read
begin tran
update dbo.[order details]
set discount = convert(real, discount)
where discount = 0.0
exec sp_lock

Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that an exclusive table lock is not held at the end.

Warning: Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done an extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only.

Lock Ownership

Most of the locking discussion in this lesson relates to locks owned by "transactions." In addition to transactions, cursors and sessions can be owners of locks, and both affect how long locks are held. When the SCROLL_LOCKS option is used, a cursor lock is held on every fetched row, regardless of the state of a transaction, until the next row is fetched or the cursor is closed. Locks owned by a session are outside the scope of a transaction; their duration is bounded by the connection, and the process will continue to hold these locks until it disconnects. A typical session-owned lock is the database (DB) lock.

Locking – Read Committed Scan

Under the read committed isolation level, when database pages are scanned, shared locks are held while each page is read and processed. The shared locks are released "behind" the scan and allow other transactions to update rows. It is important to note that the currently held shared lock is not released until the shared lock for the next page is successfully acquired (this is commonly known as "crabbing"). If the same pages are scanned again, rows may have been modified or deleted by other transactions.

Locking – Repeatable Read Scan

Under the repeatable read isolation level, when database pages are scanned, shared locks are held while each page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating the rows. If the same pages are scanned again, previously scanned rows will not change, but new rows may be added by other transactions.

Locking – Serializable Read Scan

Under the serializable isolation level, when database pages are scanned, shared locks are held not only on rows but also on the scanned key range. SQL Server continues to hold these shared locks until the end of the transaction. Because key range locks are held, not only are other transactions prevented from modifying the rows, but no new rows can be inserted either.
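A hedged sketch contrasting the scan behaviors above (assuming the pubs sample database): the same scan releases its shared locks behind itself under READ COMMITTED, but holds them to the end of the transaction under REPEATABLE READ.

set transaction isolation level read committed
begin tran
select * from authors
exec sp_lock @@SPID  -- shared locks already released behind the scan
rollback tran

set transaction isolation level repeatable read
begin tran
select * from authors
exec sp_lock @@SPID  -- shared KEY/PAG locks still held until the end of the transaction
rollback tran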
Prefetch and Isolation Level

Prefetch and Locking Behavior
The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets.

The prefetch feature takes a different approach. It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later, to avoid waiting for each individual page read to complete in a serial fashion.

This data access path requires that a lock be held between the prefetch request and the row lookup, to stabilize the row on the page so that it is not moved by a page split or clustered key update. To achieve this, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support for releasing the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes.

If prefetch is occurring, you may see a single SPID that holds hundreds of shared KEY or PAG locks even though the connection's isolation level is READ COMMITTED. The isolation level can be determined from DBCC PSS output. For details about this behavior, see SOX001109700040 "INF: Queries with PREFETCH in the plan hold lock until the end of transaction".

Other Locking Mechanisms

The lock manager does not manage latches and spinlocks.

Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activity via the DBCC SQLPERF('WAITSTATS') command.
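The command the module references can be run directly; a small sketch (the second form, for resetting the counters, is an assumption worth verifying against your build):

DBCC SQLPERF('WAITSTATS')          -- snapshot accumulated wait statistics; look at the LATCH rows
-- DBCC SQLPERF('WAITSTATS', CLEAR)  -- assumed variant to reset the counters before a test run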
Spinlocks
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that checks to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. If not, the loop starts again; it terminates only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to "spin" for a short time than to yield the processor. Yielding the processor forces an expensive context switch, where:
- The old thread's state must be saved
- The new thread's state must be reloaded
- The data stored in the L1 and L2 caches becomes useless to the processor

On a single-processor computer, the loop is not useful because no other thread can be running, and thus no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction

This lesson reviews batches and transactions and how errors affect each of them.

What You Will Learn
After completing this lesson, you will be able to:
- Review batch processing and error checking.
- Review explicit, implicit, and autocommit transactions and the transaction nesting level.
- Discuss how COMMIT and ROLLBACK TRANSACTION issued in stored procedures and triggers affect the transaction nesting level.
- Discuss the various transaction isolation levels and their impact on locking.
- Discuss the difference between aborting a statement, a transaction, and a batch.
- Describe how @@ERROR, @@TRANCOUNT, and @@ROWCOUNT can be used for error checking and handling.

Recommended Reading
- Chapter 12, "Transactions and Triggers", Inside SQL Server 2000 by Kalen Delaney

Batch Definition

SQL Profiler Statements and Batches
To further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.

Try This: Using SQL Profiler to Analyze Batches
1. Log on to a server with Query Analyzer.
2. Start SQL Profiler against the same server.
3. Start a trace using the "StandardSQLProfiler" template.
4. Execute the following using Query Analyzer:

SELECT @@VERSION
SELECT @@SPID

The 'SQL:BatchCompleted' event is captured by the trace; it shows both statements as a single batch.

5. Now execute the following using Query Analyzer:

{call sp_who()}

What shows up? The 'RPC:Completed' event with the sp_who information. RPC is simply another entry point into SQL Server for calling stored procedures with native data types; this allows one to avoid parsing. The 'RPC:Completed' event should be considered the same as a batch for the purposes of this discussion.

6. Stop the current trace and start a new trace using the "SQLProfilerTSQL_SPs" template. Issue the same command as outlined in step 5 above. Looking at the output, you can see not only the batch markers but also each statement as executed within the batch.

Autocommit, Explicit, and Implicit Transactions

Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on.
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.

Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run, or if the implicit transaction option is turned on globally by running sp_configure 'user options' 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an OR operation with the existing user option value.) See SQL Server 2000 Books Online for how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting
Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed.

Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction.

If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all of the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level: each BEGIN TRANSACTION statement increments @@TRANCOUNT by one, and each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all of the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure whether you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more; if @@TRANCOUNT is 0, you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column. See the SQL Server 2000 Books Online topic "Nesting Transactions" (acdata.chm::/ac_8_md_06_66nq.htm) for more information.
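A minimal sketch of these nesting rules:

BEGIN TRAN          -- @@TRANCOUNT = 1
BEGIN TRAN          -- @@TRANCOUNT = 2
COMMIT TRAN         -- inner commit only decrements: @@TRANCOUNT = 1
SELECT @@TRANCOUNT  -- returns 1; nothing is permanent yet
ROLLBACK TRAN       -- rolls back all of the work; @@TRANCOUNT = 0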
Statement, Transaction, and Batch Abort

One batch can have many statements, and one transaction can have multiple statements as well. One transaction can span multiple batches, and one batch can have multiple transactions.

Statement Abort
The currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134
AFTER INSERT
AS
BEGIN
    SELECT 1/0
    SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort
On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for 'SELECT 1/0' is promoted to an abort of the entire batch: no further statements in the trigger or batch are executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort, but the rest of the statements within the trigger are executed, so this trigger could return 'Next command in trigger' as a result set; once the trigger completes, the batch abort promotion takes effect.

Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Setting aside the SET option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided, and the subsequent SELECT statement can execute. The programmer should add an "IF @@ERROR" check immediately after the 'SELECT 1/0' to control the flow of T-SQL execution correctly.

Aborting and SET Options

ARITHABORT
If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation with SET ARITHABORT OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When it is OFF, only the Transact-SQL statement that raised the error is rolled back, and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT shows only keys 1 and 3 added. The key 2 insert failed and was
   rolled back, but XACT_ABORT was OFF and the rest of the first transaction
   succeeded. The key 5 insert error with XACT_ABORT ON caused all of the
   second transaction to roll back. Also note that 'Continue running batch 2...'
   is not returned, which indicates that the second batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO

Compile and Run-time Errors

Compile Errors
Compile errors are encountered during syntax checks, security checks, and other general operations that prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to an immediate abort: the statement is not run, and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors
Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: the "too many tables referenced in the query" error is reported because a "work table" was added to the plan.

Runtime Errors
Runtime errors are those that are encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred name resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a "placeholder" in the query's execution plan. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement "SELECT * FROM foo" triggers another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or compile error depending on whether the deferred name resolution feature is available: in SQL Server 6.5 it is a compile error, and on SQL Server 2000 (and SQL Server 7.0) a runtime error, due to deferred name resolution.

In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. (Under SQL Server 6.5, by contrast, the CREATE TRIGGER statement itself fails because authors2 does not exist at compile time.) When errors are encountered in a trigger, generally the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
    select * from authors
    select * from authors2
    select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When run in a batch, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in a compile versus runtime error is implicit data type conversion. Consider running the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go

On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted:

Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000's implicit data type conversion treats this as integer division: the integer division of 12/13/99 is 0, so the default date and time value is inserted and no error is returned. The way to correct the problem on either version is to wrap the date string with quotes. See Bug #56118 (sqlbug_70) for more details about this situation.
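To make the fix concrete, a small sketch reusing the tblData table from the script above:

-- 12/13/99 is arithmetic: integer division 12/13 = 0, then 0/99 = 0, which
-- converts to the base date 1900-01-01 on SQL Server 2000
insert into tblData values(12/13/99)
-- Quoting the literal gives the intended date
insert into tblData values('12/13/99')
select * from tblData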
Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, depending on the transaction isolation level established by the SPID (e.g. when the NOLOCK lock hint is used), the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error; errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine the proper handling, based on whether it is an informational or fatal error level.

Transaction Error Checking

Not all errors cause transactions to roll back automatically. Although it is difficult to determine exactly which errors will roll back transactions and which will not, the main idea is that programmers must perform error checking and handle errors appropriately.

Error Handling

RAISERROR Details
RAISERROR seems to be a source of confusion but is really rather simple. RAISERROR with a severity level of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction is immediately instantiated by SQL Server (except for distributed transactions with DTC involved). Severity levels lower than 20 simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With the severity set at 19, the 'select @@trancount' will be executed after the RAISERROR statement and will return a value of 1. If the severity is changed to 20, then the SELECT statement will not run and the connection is broken.

Important: Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.
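Since the module stresses that programmers must check for errors themselves, here is a minimal sketch of the classic @@ERROR pattern (assuming the pubs sample database; note that @@ERROR must be tested immediately, because the next statement resets it):

use pubs
begin tran
update authors set au_lname = au_lname
if @@error <> 0
begin
    rollback tran
    raiserror('Update failed; transaction rolled back', 16, 1)
    return  -- exits the batch
end
commit tran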
Transactions and Triggers (1 of 2)

Basic behavior assumes the implicit transactions setting is OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert into the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into the error handler of every application that violated the business rule of the trigger. For more information, see the SQL Server 2000 Books Online topic "Rollbacks in stored procedures and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm).

IMPLICIT_TRANSACTIONS ON Behavior

The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field: read-only versions of the rows contain the code 'RO' and read/write versions use 'RW'. Whenever someone tries to delete a row with code 'RO', the trigger issues the rollback and logs an audit table entry. However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on: only the current trigger would execute, and then the batch would terminate. However, a second client with implicit transactions on could issue the same delete, and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should have been allowed, but because the implicit transactions setting restarted the transaction scope, they were.

Transactions and Triggers (2 of 2)

It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is by using a RAISERROR as the last statement executed in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
    ROLLBACK
    INSERT INTO tblAudit VALUES (1)
    RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger on the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, @@ERROR remains 0.

Carrying Forward an Active/Open Transaction

It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN, or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.

Warning: It is never recommended that a trigger call BEGIN TRANSACTION. Doing so increments the transaction count, and invalid code logic (not calling COMMIT TRANSACTION) can lead to a situation where the transaction count remains elevated upon exit of the trigger.

Transaction Count

The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction: when a modification takes place, the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.

Thus, when we enter a trigger, the transaction count is 1, and at the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (the insert). In our example, if the connection was already in a transaction and called the second INSERT, then since implicit transactions are ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1; however, the value does not return to 0, so the transaction remains open/active. Subsequent triggers are fired only if the transaction count at the end of the trigger remains greater than or equal to 1. The key to the continuation of secondary triggers and the batch is the transaction count at the end of a trigger's execution.
If the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.

Warning: Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between connections with and without the implicit transactions setting. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run 'SET IMPLICIT_TRANSACTIONS OFF' as the first statement of any trigger. Other methods may entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples

The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note: If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence. This command is omitted in these examples to reduce the complexity of the statements.

First Example
In the first example, the second trigger is never fired and the batch, starting with the insert statement, is aborted; thus, the print statement is never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example
The next example shows that, since a new transaction is started in the first trigger, the second trigger is fired and the print statement in the batch is executed. Note that the insert is still rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Third Example
In the third example, the RAISERROR statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.
print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran  -- Increase @@trancount to allow the batch to continue
    select @@trancount as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example
In the fourth example, a second trigger is added to illustrate that the @@ERROR value set in the first trigger is not seen in the second trigger, nor does it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback
    begin tran  -- Increase @@trancount to allow the batch to continue
    select @@TRANCOUNT as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications

This lesson covers how application design choices, such as lock hints and middle-tier transaction management, interact with SQL Server locking.

What You Will Learn
After completing this lesson, you will be able to:
- Explain how lock hints are used and their impact.
- Discuss the effect on locking when an application uses Microsoft Transaction Server.
- Identify the different kinds of deadlocks, including distributed deadlocks.

Recommended Reading
- Chapter 14, "Locking", Inside SQL Server 2000 by Kalen Delaney
- Chapter 16, "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
- Q239753 – Deadlock Situation Not Detected by SQL Server
- Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as Victim

Locking Hints

UPDLOCK
If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.

READPAST
READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.

Tip: READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being inserted into a table, and they should be serviced in approximate insertion order but do not have to be strictly FIFO. If you have 4 worker threads consuming work items from the queue, each could pick up a record using READPAST locking, delete the entry from the queue, and commit when done. If a worker fails, it can roll back, leaving the entry on the queue for the next worker thread to pick up.
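A hedged sketch of the work-queue pattern described in the tip. The WorkQueue table and its columns are hypothetical placeholders, and combining UPDLOCK with READPAST is one common way to keep two workers from picking up the same row:

declare @id int
begin tran
select top 1 @id = id
from WorkQueue with (updlock, readpast)  -- skip rows other workers currently hold
order by id                              -- approximate insertion order
if @id is not null
    delete from WorkQueue where id = @id
-- ...do the work for @id here, then:
commit tran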
Caution: The READPAST hint is not compatible with HOLDLOCK.

Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (the -- Conn 1 comment is optional, to help you keep track of each connection):

BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'

3. Open a second connection and execute the following statements:

SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors

4. Open a third connection and execute the following statements:

SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors

5. Open a fourth connection and execute the following statements:

SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors

How many records were returned? 3.

6. Open a fifth connection and execute the following statements:

SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID < 'C'

Deadlock Detection

Deadlock detection is normally performed by the lock monitor every 5 seconds. While the number of recently detected deadlocks is greater than 0, the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in those 20 seconds, the lock manager no longer checks for deadlocks at each block, and detection again happens only every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.

Note: Please note the distinction between deadlock detection for application locks and for other locks. For an application lock, we do not roll back the transaction of the deadlock victim but simply return -3 to sp_getapplock, which the application needs to handle itself.
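A hedged sketch of the application-lock behavior noted above ('MyResource' is a hypothetical resource name): when the application lock is chosen as the deadlock victim, sp_getapplock returns -3 rather than SQL Server rolling the transaction back, so the caller must check the return code.

declare @result int
begin tran
exec @result = sp_getapplock @Resource = 'MyResource',
                             @LockMode = 'Exclusive',
                             @LockOwner = 'Transaction'
if @result = -3
begin
    print 'Chosen as deadlock victim on the application lock'
    rollback tran
end
else
begin
    -- ...work with the protected resource...
    commit tran  -- a transaction-owned application lock is released at commit/rollback
end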
Deadlock Resolution

How is a deadlock resolved? SQL Server picks one of the connections as the deadlock victim. The victim is chosen based either on which transaction is the least expensive to roll back (calculated using the number and size of its log records) or on which process has "SET DEADLOCK_PRIORITY LOW" specified. The victim's transaction is rolled back, its held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of Deadlocking

Error 1205 usually is not written to the SQL Server errorlog, and unfortunately you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are:
- Clients complain of mysteriously canceled queries when using certain features of an application.
- The deadlocks may be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.

Triggers and Deadlock

Triggers promote the deadlock priority of the SPID for the life of the trigger execution when DEADLOCK_PRIORITY is not set to LOW. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning: Bug 235794 is filed against SQL Server 2000, where a blocked SPID that is not a participant in a deadlock may incorrectly be chosen as the deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. See KB article Q288752, "Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim", for more information.

Distributed Deadlock – Scenario 1

The term "distributed deadlock" is ambiguous; there are many types of distributed deadlocks.

Scenario 1: A client application opens connection A, begins a transaction, acquires some locks, and then opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes. Note: SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed as scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2

Scenario 2: A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, which is blocked by 60. This scenario is described in the article "Q239753 – Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock.

The diagram on the slide illustrates this situation: resources locked by a spid are shown below that spid (in a box); arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires; a circle represents a transaction, and spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3

Scenario 3: A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60. Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis

This lesson covers how to collect and analyze the information needed to troubleshoot locking and blocking issues.

What You Will Learn
After completing this lesson, you will be able to:
- Identify the specific information needed for troubleshooting issues.
- Locate and collect the information needed for troubleshooting issues.
- Analyze the output of the DBCC INPUTBUFFER, DBCC PSS, and DBCC PAGE commands.
- Review information collected from the master.dbo.sysprocesses table.
- Review information collected from the master.dbo.syslockinfo table.
- Review the output of sp_who, sp_who2, and sp_lock.
- Analyze a Profiler log for query usage patterns.
- Review the output of trace flags to help troubleshoot deadlocks.

Recommended Reading
- Q244455 – INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
- Q244456 – INF: Description of DBCC PSS Command for SQL Server 7.0
- Q271509 – INF: How to Monitor SQL Server 2000 Blocking
- Q251004 – How to Monitor SQL Server 7.0 Blocking
- Q224453 – Understanding and Resolving SQL Server 7.0 Blocking Problems
- Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking

Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:

BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1

2. Open another connection and execute the following statement:

SELECT * FROM titles -- connection 2

3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)

4. In the same connection, execute the following:

SELECT spid, cmd, waittype
FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3

5. Do not close any of the connections! What was the wait type of the blocked process?

Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
1. Execute the following:

sp_lock -- Connection 3

What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in connection 1 to clean up your transaction.

Collecting Information

See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems

How to recognize blocking problems:
- Users complain about poor performance at a certain time of day, or after a certain number of users connect.
- SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column (see the sketch after this list).
- More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
- Possibl
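A hedged sketch that builds on the sysprocesses queries from the exercise: list blocked spids together with the spids at the head of the blocking chain (column names as they exist in SQL Server 2000's master..sysprocesses).

-- Blocked spids and who blocks them
SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0

-- Heads of blocking chains: blockers that are not themselves blocked
SELECT spid, status, cmd, open_tran
FROM master..sysprocesses
WHERE blocked = 0
  AND spid IN (SELECT blocked FROM master..sysprocesses WHERE blocked <> 0)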